Panel: Social Tagging and Folksonomies: Indexing, Retrieving... and Beyond? - Searching and browsing via tag clouds

Panel presentation from ASIST'2011 panel: Social Tagging and Folksonomies: Indexing, Retrieving…and Beyond?
Jacek Gwizdka's presentation on cognitive load during search and browsing via tag clouds, and on the role of tags in information search and navigation between documents.

Notes

  • Socially constructed tags are often presented in the form of a “cloud”.

Transcript

  • 1. Panel: Social Tagging and Folksonomies: Indexing, Retrieving…and Beyond? Searching and browsing via tag clouds Jacek Gwizdka Department of Library and Information Science Rutgers University Sunday, Oct 09, 2011 CONTACT: www.jsg.tel
  • 2. Process of Tagging
    • Users associate tags with web resources
    • Tags serve social, structural, and semantic roles
      • structural role: starting points for navigation; helping users to orient themselves
      • semantic role: description of a set of associated resources
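As a concrete illustration of the tagging process just described, here is a minimal Python sketch of a folksonomy index; the class and method names are hypothetical and are not part of any system discussed in the talk.

```python
from collections import defaultdict

class Folksonomy:
    """Minimal tag <-> resource index (illustrative sketch only)."""

    def __init__(self):
        self.resources_by_tag = defaultdict(set)  # structural role: entry points for navigation
        self.tags_by_resource = defaultdict(set)  # semantic role: describes a resource
        self.users_by_tag = defaultdict(set)      # social role: who uses which tags

    def add(self, user, resource, tag):
        """A user associates a tag with a web resource."""
        self.resources_by_tag[tag].add(resource)
        self.tags_by_resource[resource].add(tag)
        self.users_by_tag[tag].add(user)

    def cloud_weights(self):
        """Tag-cloud weights: number of resources carrying each tag."""
        return {t: len(r) for t, r in self.resources_by_tag.items()}
```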
  • 3. Tag Clouds
  • 4. My Claims
    • Tag Clouds help in information search
      • by saving searchers’ effort
    • Tag Clouds do not support browsing tasks
      • do not show relationships and do not show history
    • Not just claims…
  • 5. Research Question
    • Do tag clouds benefit users in search tasks?
  • 6. User Interface with Overview Tag Cloud
    • Our retrieval system was populated with data from Delicious
    [Screenshots: List UI and Overview Tag Cloud UI, showing the search result list and tag cloud regions]
  • 7. User Actions in Two Interfaces [diagram: click actions in 1. List and 2. Overview Tag Cloud]
  • 8. Experiment Design
    • 37 participants
      • Working memory assessed using a memory span task (Francis & Neath, 2003)
    • Within-subjects design with two factors: task and user interface
    • Tasks
      • everyday information search (e.g., travel, shopping) at two levels of task complexity
      • Four task rotations for each of two user interfaces
  • 9. Measures
    • Task completion time
    • Cognitive effort:
      • from mouse clicks: user decisions, expressed as selection of search terms (number of queries) and opening documents to view (a rough computational sketch follows this slide)
      • from eye tracking, reading-effort measures (based on an intermediate reading model): scanning vs. reading; length of reading sequences; reading fixation duration; number of regression fixations in a reading sequence; spacing of fixations in a reading sequence
    • Task outcome = relevance * completeness
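A rough Python sketch of how the click-based measures and the task-outcome score above could be computed from an interaction log; the log format and field names are assumptions, not the study's actual instrumentation.

```python
def click_measures(events):
    """Count queries (search-term selections) and document views in a click log.

    `events` is an assumed list of dicts such as {"type": "query"} or {"type": "open_doc"}.
    """
    n_queries = sum(1 for e in events if e["type"] == "query")
    n_doc_views = sum(1 for e in events if e["type"] == "open_doc")
    return n_queries, n_doc_views

def task_outcome(relevance, completeness):
    """Task outcome as defined on the slide: relevance * completeness."""
    return relevance * completeness

# Example with made-up numbers:
log = [{"type": "query"}, {"type": "open_doc"}, {"type": "query"}]
print(click_measures(log))     # -> (2, 1)
print(task_outcome(0.8, 0.5))  # -> 0.4
```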
  • 10. Results
  • 11. Results : Time and User Behavior
    • Overview Tag Cloud + List made users faster and more efficient
      • less time on task: 191s in Overview+List vs. 261s in List UI
      • fewer queries: 7 in Overview+List vs. 8.3 in List UI
      • no significant differences in task outcomes
    • Overview Tag Cloud facilitated formulation of more effective queries
  • 12. Results : Cognitive Effort
    • Overview Tag Cloud + List required less effort and showed higher efficiency
      • fewer fixations (total and mean reading sequence length): more efficient
      • fewer regressions: less difficulty in reading
    [Charts: eye-tracking measures, List vs. Overview Tag Cloud + List]
  • 13. Results : Cognitive Effort
    • Overview Tag Cloud + List required less effort and showed higher efficiency
      • fewer fixations (total and mean reading sequence length): more efficient
      • fewer regressions: less difficulty in reading
    • Comparing only the results-list region in the two UI conditions
      • less effort invested in the results list in Overview Tag Cloud + List
    • Overview Tag Cloud helped to lower cognitive demands
    [Charts: eye-tracking measures, List vs. Overview Tag Cloud + List]
  • 14. Did Tag Cloud Help All Users?
    • No – there are individual differences
    • Two users, same UI and same task
  • 15. Is Tag Cloud Helpful?
    • Yes!
    • Overview Tag Cloud + List UI
    • made people faster and required less effort
      • also reflected in a number of eye-tracking measures
  • 16. Browsing large sets of tagged documents
  • 17. An Example of Browsing (CiteULike)
    • A typical model of browsing with tag clouds:
    • Pivot browsing : a lightweight navigation mechanism
    1. information → 2. retrieval → 3. algorithms → 4. phylogeny
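A minimal Python sketch of a pivot-browsing step as described above: each tag click re-filters the whole collection and shows that subset's tag cloud, with no state carried over between steps. The document data below are made up.

```python
from collections import Counter

def pivot(documents, tag):
    """One pivot-browsing step: documents carrying `tag`, plus their tag cloud.

    `documents` maps document id -> set of tags (an assumed representation).
    Each call is independent of the previous step, which is the "context
    switch" discussed on the later slides.
    """
    subset = {doc: tags for doc, tags in documents.items() if tag in tags}
    cloud = Counter(t for tags in subset.values() for t in tags)
    return subset, cloud

docs = {
    "d1": {"information", "retrieval", "algorithms"},
    "d2": {"retrieval", "phylogeny"},
    "d3": {"information", "retrieval"},
}
for step in ["information", "retrieval", "algorithms", "phylogeny"]:
    subset, cloud = pivot(docs, step)
    print(step, sorted(subset), dict(cloud))
```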
  • 18. Is There a Problem?
  • 19. Users’ Conceptualizations
    • The labyrinth … being lost
    • The journey … switching direction and being stuck
    • The space … increasing distance, and continuity
    (18 participants)
  • 20. What’s the Problem?
    • Users
      • feel lost
      • experience “switching”, yet expect some continuity
    • In Pivot Browsing each step is treated as a separate move
      • View is “re-oriented”: a new list of documents along with their tags
      • At each step context is switched
      • Relationships between steps are not shown
        • e.g., overlap between tag clouds not indicated
    • Pivot browsing seems not to be lightweight
      • conceptualizing multiple tags assigned in different quantities to different documents is difficult
  • 21. Research Questions
    • How can we support continuity in “tag-space” browsing?
    • How can we promote better understanding of tag-document relationships (sensemaking)?
  • 22. Recall: Example of Navigation (CiteULike) 1. information → 2. retrieval → 3. algorithms → 4. phylogeny
  • 23. User Interface with “History tag clouds” (Tag Trails)
    • Supporting continuity in tag-space navigation by providing history
    [Screenshot: history tag clouds for the trail information → retrieval → algorithms → phylogeny]
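A hedged sketch of the "history tag clouds" idea: the interface keeps the cloud from each previous step alongside the current one, and the overlap between consecutive clouds can be made explicit. This is only an illustration of the concept, not the Tag Trails implementation.

```python
from collections import Counter

def trail_clouds(documents, trail):
    """Tag clouds for each pivoted tag in `trail` (the navigation history).

    `documents` maps document id -> set of tags (an assumed representation).
    The last cloud is the current view; earlier ones show where the user came from.
    """
    clouds = []
    for tag in trail:
        matching = [tags for tags in documents.values() if tag in tags]
        clouds.append((tag, Counter(t for tags in matching for t in tags)))
    return clouds

def overlaps(clouds):
    """Shared tags between consecutive clouds, one way to make continuity visible."""
    return [
        (a_tag, b_tag, set(a) & set(b))
        for (a_tag, a), (b_tag, b) in zip(clouds, clouds[1:])
    ]
```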
  • 24. User Interface with Heat map (Tag Trails 2)
    • Supporting continuity in tag-space navigation by providing history and making (some) relationships (more) explicit
    [Screenshot: tag cloud, results list, and heat map. Column tags: most recently visited tags, left to right; row tags: a selection of the most frequent tags; cells color-coded according to the tag’s document frequency (df)]
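Reading the heat-map description above literally, the matrix has the most recently visited tags as columns, a selection of the most frequent tags as rows, and each cell colour-coded by a document frequency. Here is a hedged Python sketch under the assumption that a cell counts documents carrying both the row tag and the column tag; the exact cell definition used in Tag Trails 2 is not spelled out on the slide.

```python
from collections import Counter

def heat_map(documents, visited_tags, n_rows=5):
    """Build the heat-map matrix sketched on the slide (cell definition is an assumption).

    documents:    dict mapping document id -> set of tags
    visited_tags: most recently visited tags, left to right (the columns)
    Rows are the n_rows most frequent tags overall; each cell counts documents
    carrying both the row tag and the column tag (a co-occurrence df).
    """
    tag_counts = Counter(t for tags in documents.values() for t in tags)
    row_tags = [t for t, _ in tag_counts.most_common(n_rows)]
    return {
        row: {
            col: sum(1 for tags in documents.values() if row in tags and col in tags)
            for col in visited_tags
        }
        for row in row_tags
    }
```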
  • 25. Summary & Conclusions
    • Tagging – “metadata for free”: does the effort pay off?
    • Yes, but not for all tasks
    • Tag clouds
      • helpful in search tasks
      • but to support browsing, new presentations of tags are needed
  • 26. Thank you! Questions?
    • Jacek Gwizdka | contact: http://jsg.tel
    • Related publications:
    • Gwizdka, J. (2009a). What a difference a tag cloud makes: Effects of tasks and cognitive abilities on search results interface use. Information Research, 14(4), paper 414. Available online at <http://informationr.net/ir/14-4/paper414.html>
    • Gwizdka, J. (2010c). Of kings, traffic signs and flowers: Exploring navigation of tagged documents. In Proceedings of Hypertext’2010 (pp. 167-172). ACM Press.
    • Gwizdka, J. & Bakelaar, P. (2009a). Tag trails: Navigating with context and history. CHI ’09 extended abstracts (pp. 4579-4584). ACM Press.
    • Gwizdka, J. & Bakelaar, P. (2009b). Navigating one million tags. Short paper and poster presented at ASIS&T’2009, Vancouver, BC, Canada.
    • Cole, M.J. & Gwizdka, J. (2008). Tagging semantics: Investigations with WordNet. Proceedings of JCDL’2008. ACM Press.
    • Gwizdka, J. & Cole, M.J. (2007). Finding it on Google, finding it on del.icio.us. In L. Kovács, N. Fuhr, & C. Meghini (Eds.), Lecture notes in computer science (LNCS): Vol. 4765. Research and advanced technology for digital libraries, ECDL’2007 (pp. 559-562). Springer-Verlag.
  • 27. Extra Slides
    • Intro to Reading model
    • Tag cloud examples
  • 28. Introducing Reading Model
    • Scanning fixations provide some semantic information
      • limited to foveal visual field (1° visual acuity) (Rayner & Fischer, 1996)
    • Reading fixation sequences provide more information than isolated “scanning” fixations
      • information is gained from the larger parafoveal region (5° beyond foveal focus; asymmetrical, in the direction of reading) (Rayner et al., 2003)
      • some types of semantic information are available only through reading sequences
    • We implemented the E-Z Reader reading model (Reichle et al., 2006)
      • Lexical fixation duration > 113 ms (Reingold & Rayner, 2006)
      • Each lexical fixation is classified as Scanning or Reading (S, R)
      • These sequences are used to create a two-state model
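A hedged Python sketch of the fixation classification and the two-state model just described: fixations longer than 113 ms count as lexical, lexical fixations that continue a forward-moving sequence are labelled Reading, isolated lexical fixations are labelled Scanning, and transition probabilities are estimated from the resulting label sequence. The grouping rule and the data format are assumptions; the study itself used an E-Z Reader based model (Reichle et al., 2006).

```python
from collections import Counter

LEXICAL_MS = 113  # lexical fixation threshold (Reingold & Rayner, 2006)

def classify_fixations(fixations):
    """Label each lexical fixation as Reading ('R') or Scanning ('S').

    `fixations` is an assumed list of (duration_ms, x_position) tuples in time
    order. A lexical fixation that follows another lexical fixation with a small
    forward shift in x is treated as part of a reading sequence; this grouping
    rule is a simplification, not the study's exact algorithm.
    """
    labels, prev_x = [], None
    for duration, x in fixations:
        if duration <= LEXICAL_MS:
            prev_x = None                                   # sub-lexical fixation breaks a sequence
            continue
        if prev_x is not None and 0 <= x - prev_x <= 100:   # assumed pixel window
            labels.append("R")
            if len(labels) >= 2 and labels[-2] == "S":
                labels[-2] = "R"                            # first fixation of the sequence is reading too
        else:
            labels.append("S")
        prev_x = x
    return labels

def transition_probabilities(labels):
    """Two-state (S, R) model: P(next label | current label)."""
    pairs = Counter(zip(labels, labels[1:]))
    totals = Counter(labels[:-1])
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}
```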
  • 29. Reading Model – States and Characteristics
    • Two states: transition probabilities
    • Number of lexical fixations and duration
  • 30. Example Reading Sequence
  • 31. Tag Clouds Everywhere!