Does Size Matter? When Small is Good Enough

Notes

  • Comment on why the blt sub is different for each person. Stress that the lightweight ontologies emerging are personal ontologies, i.e. the way in which a user refers to or characterises a set of entities. Go more slowly in the evaluation. Say what you
  • In this work we were interested in exploring the influence of the size of documents on the accuracy of a given text processing task. Our hypothesis was that results obtained using longer texts could be comparable to those obtained from micropost-size texts, i.e. texts containing at most 140 characters. To test this hypothesis we generated comparable corpora consisting of truncated emails, starting from 140 characters and successive multiples thereof. The text processing tasks we evaluated were topic extraction and topic classification.
  • We were particularly interested in studying this hypothesis within email-based corpora because previous research has shown that, although micropost services are also used in formal environments such as the workplace, there is also a tendency to perceive these services negatively, as they seem to reduce productivity. In environments where restrictions are placed on these types of services, there is a tendency to use email as an alternative to micropost services, presenting the same patterns of communication, which favour brevity.
  • So within the email-based corpus we were interested in determining whether email is indeed used as a short messaging service, and we wanted to evaluate to what extent the knowledge content provided by a truncated message is comparable to that contained in a full message, and to determine whether the knowledge content of short emails may be used to obtain useful information about, e.g., topics of interest or expertise within an organisation, as a basis for carrying out tasks such as expert finding or content-based social network analysis (SNA).
  • Our data set was the Oak mailing list, consisting of 659 emails taken from July to January 2010. As we can see in the graph, the corpus is already characterised by a high frequency of short messages, ranging from zero to 280 characters.
  • To compare the degree to which the knowledge content of a shorter message approximates that of a full message, we chose a text classification task on non-predefined topics. We started by partitioning the emails to obtain several fixed-size corpora. We then performed corpus-driven topic extraction, in which each topic consists of a weighted vector of keywords. Each document was labelled with the topic it is most similar to.
  • Based on the definition of Naaman, Wagner and Strohmaier, introduce a notation for characterising these streams. They refer to this notation as a Tweetonomy; we'll talk about it later on. They also introduce the term Personal Awareness Stream for referring to the c

Transcript

  • 1. Does Size Matter? When Small is Good Enough
    A.L. Gentile, A.E. Cano, A.-S. Dadzie, V. Lanfranchi and N. Ireson
    The Oak Group, Department of Computer Science, The University of Sheffield
  • 2. Outline
    Introduction
    Email Corpus
    Dynamic Topic Classification Of Short Texts
    Experiments
    Conclusions
    Outline
  • 3. Introduction
    Settings
    • Main Goal: observation of the influence of the size of documents on the accuracy of a defined text processing task
    • 4. Hypothesis:
    Results obtained using longer texts may be approximated by short texts, of micropost size, i.e., maximum length 140 characters
    • Dataset: artificially generated comparable corpora, consisting of truncated emails, from micropost size (140 characters), and successive multiples thereof
    • 5. Methodology: corpus-driven topic extraction/ document topic classification
  • Introduction
    Micropost Services
    • Mainly used for social information exchange, but also used in more formal (working) environments [Herbsleb et al., 2002, Isaacs et al., 2002]
    • 6. Sometimes perceived negatively in the workplace, as they may be seen to reduce productivity [TNS US Group, 2009], and/or pose threats to security and privacy
    • 7. Where restrictions on use are in place, alternatives are sought that provide the same benefits
    - Same communication patterns in alternative media: email as a short message service for communication via, e.g., mailing lists
  • 8. Introduction
    Research Questions
    • Statistical analysis of emails exchanged via Oak mailing list (over a period of six months) to determine if email is indeed used as a short messaging service.
    • 9. Content analysis of emails as microposts, to evaluate to what degree the knowledge content of truncated or abbreviated messages can be compared to the complete message.
  • Email Corpus
    Oak Mailing List
    Six month period (July - January 2011), 659 emails
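
    The slide above only summarises the corpus; a minimal sketch (not the authors' code) of how such a length distribution could be computed is given below, bucketing email bodies into 140-character micropost blocks. The load_bodies() helper and the mbox filename are hypothetical placeholders.

    ```python
    # Sketch: bucket email body lengths into 140-character "micropost" blocks.
    # load_bodies() and the mbox path are hypothetical placeholders.
    from collections import Counter

    MICROPOST = 140  # characters per micropost block

    def length_histogram(bodies):
        """Count emails per micropost block (block 1 = up to 140 chars, etc.)."""
        blocks = Counter()
        for body in bodies:
            blocks[max(1, -(-len(body) // MICROPOST))] += 1  # ceiling division
        return blocks

    def cumulative_share(blocks, upto):
        """Fraction of emails no longer than `upto` micropost blocks."""
        total = sum(blocks.values())
        return sum(n for b, n in blocks.items() if b <= upto) / total

    # Hypothetical usage with the 659 Oak mailing-list emails:
    # bodies = load_bodies("oak-list.mbox")
    # hist = length_histogram(bodies)
    # print(cumulative_share(hist, 1), cumulative_share(hist, 2))
    ```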
  • 10. Topic Classification
    Topic Classification
    • Goal: evaluate to what degree the knowledge content of a shorter message can be compared to that of a full message.
    • 11. Chosen task: text classification on non-predefined topics.
    • 12. Test bed: generated by preprocessing the Oak email corpus to obtain several fixed-size corpora.
    • 13. Method:
    - Corpus-driven topic extraction: a number of topics are automatically extracted from a document collection; each topic is represented as a weighted vector of terms;
    - Document topic classification: each document is labelled with the topic it is most similar to, and classified into the corresponding cluster.
  • 14. Topic Classification
    Topic Extraction: Proximity-based Clustering
    • Document corpus D = {d1,...,dk}
    • 15. each document di = {t1,...,tv} is a vector of weighted terms
    • 16. Term clusters C = {c1,...,ck} (clustering performed by using as feature space the inverted index of D)
    • 17. each cluster ck = {t1,...,tn} is a vector of weighted terms
    • 18. each cluster ideally represents a topic in the document collection
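
    A minimal sketch of the proximity-based term clustering described above: terms are represented over the inverted index of D (term-by-document weights) and grouped into clusters, each read off as a weighted vector of terms. The TF-IDF weighting, the use of KMeans, and the number of clusters are assumptions for illustration, not details taken from the slides.

    ```python
    # Sketch: proximity-based term clustering over the inverted index of D.
    # TF-IDF weighting and KMeans are assumptions, not the authors' stated choices.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    def extract_topics(documents, n_topics=10):
        vectorizer = TfidfVectorizer(stop_words="english")
        doc_term = vectorizer.fit_transform(documents)   # documents x terms
        term_doc = doc_term.T.tocsr()                    # inverted index: terms x documents
        terms = vectorizer.get_feature_names_out()

        labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(term_doc)

        # Each cluster is a weighted vector of terms; here a term's weight is its
        # total TF-IDF mass in the corpus, restricted to the terms in its cluster.
        mass = np.asarray(doc_term.sum(axis=0)).ravel()
        topics = [{terms[i]: float(mass[i]) for i in np.where(labels == c)[0]}
                  for c in range(n_topics)]
        return vectorizer, topics
    ```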
  • Topic Classification
    Email Topic Classification
    • sim(d,Ci): similarity between documents and clusters (by cosine similarity)
    • 19. labelDoc(D): each document d is mapped to the topic Ci , which maximises the similarity sim(d,Ci)
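
    A sketch of sim(d, Ci) and labelDoc under the same assumptions, treating both documents and clusters as dictionaries of term weights; helper names other than sim and labelDoc are illustrative.

    ```python
    # Sketch: sim(d, Ci) as cosine similarity over sparse term-weight dictionaries,
    # and labelDoc mapping a document to its most similar topic cluster.
    import math

    def sim(doc_vec, cluster_vec):
        dot = sum(doc_vec[t] * cluster_vec[t] for t in doc_vec.keys() & cluster_vec.keys())
        norm_d = math.sqrt(sum(w * w for w in doc_vec.values()))
        norm_c = math.sqrt(sum(w * w for w in cluster_vec.values()))
        return dot / (norm_d * norm_c) if norm_d and norm_c else 0.0

    def label_doc(doc_vec, topics):
        """Return the index of the topic cluster that maximises sim(d, Ci)."""
        return max(range(len(topics)), key=lambda i: sim(doc_vec, topics[i]))
    ```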
  • Experiments
    Dataset Preparation:
    Comparable Corpora
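
    The slides do not spell out how the comparable corpora are built; a minimal sketch, assuming simple character-level truncation of each email body at 140 characters and successive multiples thereof (the cap of five blocks is an arbitrary illustration):

    ```python
    # Sketch: build comparable corpora by truncating each email body to successive
    # multiples of 140 characters; the cap of five blocks is illustrative only.
    MICROPOST = 140

    def build_comparable_corpora(bodies, max_blocks=5):
        corpora = {"mainCorpus": list(bodies)}
        for k in range(1, max_blocks + 1):
            cutoff = k * MICROPOST
            corpora[f"corpus_{cutoff}"] = [body[:cutoff] for body in bodies]
        return corpora
    ```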
  • 20. Experiments
    Experimental Approach
    • Corpus-driven topic extraction: using mainCorpus
    • 21. Email classification: apply the labelDoc procedure to all different comparable corpora, including mainCorpus.
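
    A sketch of this experimental loop, reusing the extract_topics and label_doc helpers sketched earlier; scoring each truncation level by agreement with the mainCorpus labels is an assumption about the evaluation, not a detail stated on the slides.

    ```python
    # Sketch: extract topics from mainCorpus once, label every comparable corpus,
    # and score each truncation level by agreement with the mainCorpus labels.
    def doc_to_vector(vectorizer, text):
        # Reuse the mainCorpus vocabulary so document and topic vectors are comparable.
        row = vectorizer.transform([text])
        terms = vectorizer.get_feature_names_out()
        return {terms[j]: float(row[0, j]) for j in row.nonzero()[1]}

    def run_experiment(corpora):
        vectorizer, topics = extract_topics(corpora["mainCorpus"])
        reference = [label_doc(doc_to_vector(vectorizer, d), topics)
                     for d in corpora["mainCorpus"]]
        accuracy = {}
        for name, docs in corpora.items():
            labels = [label_doc(doc_to_vector(vectorizer, d), topics) for d in docs]
            accuracy[name] = sum(a == b for a, b in zip(labels, reference)) / len(reference)
        return accuracy
    ```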
  • Experiments
    Corpus Topics
  • 22. Experiments
    Results
  • 23. Conclusions
    A fair portion of the emails exchanged, for the corpus generated from a mailing list, are very short, with approximately 40% falling within the single micropost size, and 65% up to two microposts;
    For the text classification task described, the accuracy of classification for micropost size texts is an acceptable approximation of classification performed on longer texts, with a decrease of only ∼ 5% for up to the second micropost block within a long e-mail.
    Conclusions
  • 24. Conclusions
    Enriching the micro-emails with semantic information (e.g., concepts extracted from domain and standard ontologies) would improve the results obtained using unannotated text.
    Investigate the influence of other similarity measures.
    Application to expert finding tasks, exploiting dynamic topic extraction as a means to determine authors’ and recipients’ areas of expertise.
    Formal evaluation of topic validity will be required, including the human (expert) annotator in the loop.
    Future Work
  • 25. References
    References
    [1] Herbsleb, J. D., Atkins, D. L., Boyer, D. G., Handel, M., and Finholt, T. A. (2002). Introducing instant messaging and chat in the workplace. In Proc., SIGCHI conference on Human factors in computing systems: Changing our world, changing ourselves, pages 171–178.
    [2] Isaacs, E., Walendowski, A., Whittaker, S., Schiano, D. J., and Kamm, C. (2002). The character, functions, and styles of instant messaging in the workplace. In Proc., ACM conference on Computer supported cooperative work, pages 11–20.
    [3] TNS US Group (2009). Social media exploding: More than 40% use online social networks.
    http://www.tns-us.com/news/social_media_exploding_more_than.php.