Panel: Social Tagging and Folksonomies: Indexing, Retrieving... and Beyond? - Searching and browsing via tag clouds


Panel presentation from ASIST'2011 panel: Social Tagging and Folksonomies: Indexing, Retrieving…and Beyond?
Jacek Gwizdka's presentation on cognitive load during search and browsing via tag clouds, and on the role of tags in information search and navigation between documents.


    1. Panel: Social Tagging and Folksonomies: Indexing, Retrieving… and Beyond? Searching and browsing via tag clouds. Jacek Gwizdka, Department of Library and Information Science, Rutgers University. Sunday, Oct 9, 2011. Contact: www.jsg.tel
    2. Process of Tagging
       • Users associate tags with web resources
       • Tags serve social, structural, and semantic roles
         • structural role: starting points for navigation, helping users orient themselves
         • semantic role: a description of a set of associated resources
    3. Tag Clouds
    4. My Claims
       • Tag clouds help in information search by saving searchers' effort
       • Tag clouds do not support browsing tasks: they show neither relationships nor history
       • Not just claims…
    5. Research Question
       • Do tag clouds benefit users in search tasks?
    6. User Interface with Overview Tag Cloud
       • Our retrieval system was populated with data from Delicious
       • (Screenshots: the List UI, and the Overview Tag Cloud UI with its search result list and tag cloud.)
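As a concrete illustration of how a tag cloud is typically rendered, here is a minimal sketch that scales font sizes by tag frequency on a log scale. This is not the study's actual system; the tags, counts, and point sizes are made up for illustration.

```python
from collections import Counter
from math import log

def tag_cloud_sizes(tag_counts, min_pt=10, max_pt=32):
    """Map tag frequencies to font sizes on a log scale.

    Log scaling keeps a few very popular tags from dwarfing the rest.
    """
    lo = min(log(c) for c in tag_counts.values())
    hi = max(log(c) for c in tag_counts.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all counts are equal
    return {
        tag: round(min_pt + (log(c) - lo) / span * (max_pt - min_pt))
        for tag, c in tag_counts.items()
    }

counts = Counter({"travel": 120, "shopping": 45, "python": 8, "recipes": 3})
sizes = tag_cloud_sizes(counts)
# The most frequent tag ("travel") gets the largest font,
# the rarest ("recipes") the smallest.
```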
    7. User Actions in Two Interfaces
       • (Diagrams of click actions in the two UIs: 1. List; 2. Overview Tag Cloud.)
    8. Experiment Design
       • 37 participants; working memory assessed with a memory span task (Francis & Neath, 2003)
       • Within-subject design with two factors: task and user interface
       • Tasks: everyday information search (e.g., travel, shopping) at two levels of task complexity
       • Four task rotations for each of the two user interfaces
    9. Measures
       • Task completion time
       • Cognitive effort
         • from mouse clicks: user decisions expressed as selection of search terms (number of queries) and as opening documents to view
         • from eye tracking: reading-effort measures (based on an intermediate reading model): scanning vs. reading; length of reading sequences; reading fixation duration; number of regression fixations in a reading sequence; spacing of fixations in a reading sequence
       • Task outcome = relevance × completeness
    10. Results
    11. Results: Time and User Behavior
       • Overview Tag Cloud + List made users faster and more efficient
         • less time on task: 191 s in Overview + List vs. 261 s in the List UI
         • fewer queries: 7 in Overview + List vs. 8.3 in the List UI
         • no significant differences in task outcomes
       • The Overview Tag Cloud facilitated the formulation of more effective queries
    12. Results: Cognitive Effort
       • Overview Tag Cloud + List required less effort and yielded higher efficiency
         • fewer fixations (total, and mean reading-sequence length): more efficient
         • fewer regressions: less difficulty in reading
       • (Fixation plots: List vs. Overview Tag Cloud + List.)
    13. Results: Cognitive Effort (continued)
       • Comparing only the results-list region across the two UI conditions: less effort was invested in the results list in Overview Tag Cloud + List
       • The Overview Tag Cloud helped lower cognitive demands
       • (Fixation plots: List vs. Overview Tag Cloud + List.)
    14. Did the Tag Cloud Help All Users?
       • No: there are individual differences
       • (Gaze plots: two users, same UI and same task.)
    15. Is the Tag Cloud Helpful?
       • Yes! The Overview Tag Cloud + List UI made people faster and required less effort
         • also reflected in a number of eye-tracking measures
    16. Browsing large sets of tagged documents
    17. An Example of Browsing (CiteULike)
       • A typical model of browsing with tag clouds: pivot browsing, a lightweight navigation mechanism
       • Example tag path: information → retrieval → algorithms → phylogeny
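Pivot browsing as described above can be sketched in a few lines: clicking a tag re-orients the view to the documents carrying that tag, and the tag cloud is rebuilt from those documents' tags. The index and document IDs below are made up for illustration; this is not the CiteULike implementation.

```python
# Hypothetical tag -> documents index.
INDEX = {
    "information": {"d1", "d2", "d3"},
    "retrieval": {"d2", "d3", "d4"},
    "algorithms": {"d3", "d4", "d5"},
    "phylogeny": {"d5", "d6"},
}

# Inverted view: document -> its tags.
DOC_TAGS = {
    doc: {t for t, tagged in INDEX.items() if doc in tagged}
    for tagged in INDEX.values()
    for doc in tagged
}

def pivot(tag):
    """One pivot step: re-orient the view to documents carrying `tag`.

    The returned cloud is rebuilt from scratch, so nothing links this
    step to the previous one -- the context switch criticized above.
    """
    docs = INDEX[tag]
    cloud = set().union(*(DOC_TAGS[d] for d in docs))
    return docs, cloud

docs, cloud = pivot("retrieval")
# docs == {"d2", "d3", "d4"}; the new cloud contains
# "information" and "algorithms" but carries no history.
```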
    18. Is There a Problem?
    19. Users' Conceptualizations (18 participants)
       • The labyrinth: … being lost
       • The journey: … switching direction and being stuck
       • The space: … increasing distance, and continuity
    20. What's the Problem?
       • Users feel lost; they experience "switching", yet expect some continuity
       • In pivot browsing, each step is treated as a separate move
         • the view is "re-oriented": a new list of documents appears along with their tags
         • at each step the context is switched
         • relationships between steps are not shown (e.g., overlap between tag clouds is not indicated)
       • Pivot browsing seems not to be lightweight: conceptualizing multiple tags assigned in different quantities to different documents is difficult
    21. Research Questions
       • How can we support continuity in "tag-space" browsing?
       • How can we promote better understanding of tag-document relationships (sensemaking)?
    22. Recall: Example of Navigation (CiteULike)
       • Tag path: information → retrieval → algorithms → phylogeny
    23. User Interface with "History Tag Clouds" (Tag Trails)
       • Supporting continuity in tag-space navigation by providing history
       • (Screenshot: the tag path information → retrieval → algorithms → phylogeny, with a history tag cloud shown for each step.)
    24. User Interface with Heat Map (Tag Trails 2)
       • Supporting continuity in tag-space navigation by providing history and making (some) relationships (more) explicit
       • Layout: tag cloud, results list, and a heat map
         • column tags: the most recently visited tags, left to right
         • row tags: a selection of the most frequent tags
         • cells color-coded according to each tag's document frequency (df)
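A minimal sketch of the heat-map idea, assuming each cell is shaded by the co-occurrence document frequency of its row tag and column tag. The index, the shading characters, and that particular reading of "tag's df" are all hypothetical; this is not the Tag Trails 2 implementation.

```python
# Hypothetical tag -> documents index.
INDEX = {
    "information": {"d1", "d2", "d3"},
    "retrieval": {"d2", "d3", "d4"},
    "algorithms": {"d3", "d4", "d5"},
}

SHADES = [" ", ".", "o", "#"]  # light to dark

def heat_map(row_tags, col_tags):
    """Build a grid of shades: rows are frequent tags, columns are
    recently visited tags, shade depth grows with co-occurrence df."""
    grid = []
    for r in row_tags:
        row = []
        for c in col_tags:
            df = len(INDEX[r] & INDEX[c])  # docs carrying both tags
            row.append(SHADES[min(df, len(SHADES) - 1)])
        grid.append(row)
    return grid

grid = heat_map(["information", "algorithms"], ["retrieval"])
# Both row tags share 2 documents with "retrieval",
# so both cells get the mid-dark shade "o".
```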
    25. Summary & Conclusions
       • Tagging, "metadata for free": does the effort pay off? Yes, but not for all tasks
       • Tag clouds
         • are helpful in search tasks
         • but to support browsing, new presentations of tags are needed
    26. Thank you! Questions?
       Jacek Gwizdka | contact: http://jsg.tel
       Related publications:
       • Gwizdka, J. (2009a). What a difference a tag cloud makes: Effects of tasks and cognitive abilities on search results interface use. Information Research, 14(4), paper 414. Available at http://informationr.net/ir/14-4/paper414.html
       • Gwizdka, J. (2010c). Of kings, traffic signs and flowers: Exploring navigation of tagged documents. In Proceedings of Hypertext '2010 (pp. 167-172). ACM Press.
       • Gwizdka, J., & Bakelaar, P. (2009a). Tag trails: Navigating with context and history. In CHI '09 extended abstracts (pp. 4579-4584). ACM Press.
       • Gwizdka, J., & Bakelaar, P. (2009b). Navigating one million tags. Short paper and poster presented at ASIS&T '2009, Vancouver, BC, Canada.
       • Cole, M. J., & Gwizdka, J. (2008). Tagging semantics: Investigations with WordNet. In Proceedings of JCDL '2008. ACM Press.
       • Gwizdka, J., & Cole, M. J. (2007). Finding it on Google, finding it on del.icio.us. In L. Kovács, N. Fuhr, & C. Meghini (Eds.), Research and advanced technology for digital libraries, ECDL '2007 (LNCS 4765, pp. 559-562). Springer-Verlag.
    27. Extra Slides
       • Intro to the reading model
       • Tag cloud examples
    28. Introducing the Reading Model
       • Scanning fixations provide some semantic information, limited to the foveal visual field (1° visual acuity) (Rayner & Fischer, 1996)
       • Reading fixation sequences provide more information than isolated "scanning" fixations
         • information is gained from the larger parafoveal region (5° beyond the foveal focus; asymmetrical, in the direction of reading) (Rayner et al., 2003)
         • some types of semantic information are available only through reading sequences
       • We implemented the E-Z Reader reading model (Reichle et al., 2006)
         • lexical fixations have durations > 113 ms (Reingold & Rayner, 2006)
         • each lexical fixation is classified as Scanning or Reading (S, R)
         • these sequences are used to build a two-state model
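The classification step described above can be sketched as follows. The 113 ms lexical threshold comes from the slide; the forward-movement rule used here to separate Reading from Scanning is a deliberate simplification of the E-Z Reader-based model, not its actual decision procedure.

```python
from collections import Counter

LEXICAL_MS = 113  # minimum duration of a lexical fixation (per the slide)

def label_fixations(fixations):
    """fixations: list of (duration_ms, moved_forward_in_text: bool)."""
    labels = []
    for dur, forward in fixations:
        if dur <= LEXICAL_MS:
            continue  # too short to be lexical; ignored
        # Simplified rule: a lexical fixation continuing forward along
        # the text line counts as Reading, otherwise Scanning.
        labels.append("R" if forward else "S")
    return labels

def transition_probs(labels):
    """Estimate two-state (S, R) transition probabilities from a label sequence."""
    pairs = Counter(zip(labels, labels[1:]))
    totals = Counter(labels[:-1])
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

labels = label_fixations([(180, True), (220, True), (90, True),
                          (150, False), (200, True)])
# The 90 ms fixation is dropped as sub-lexical; labels == ["R", "R", "S", "R"]
probs = transition_probs(labels)
```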
    29. Reading Model: States and Characteristics
       • Two states, with transition probabilities
       • Number of lexical fixations and their durations
    30. Example Reading Sequence
    31. Tag Clouds Everywhere!
