
Active & Passive Utility of Search Interface Features in different Information Seeking Task Stages



  1. Active & Passive Utility of Search Interface Features in different Information Seeking Task Stages Hugo C. Huurdeman, Max L. Wilson, Jaap Kamps University of Amsterdam, University of Nottingham huurdeman @ uva.nl, max.wilson @ nottingham.ac.uk, kamps @ uva.nl ACM CHIIR conference, March 14, 2016, Chapel Hill, NC, USA dl.acm.org/citation.cfm?id=2854957
  2. 1. Introduction • Information seeking theory: • stages within complex tasks • involving learning / knowledge construction • Research into Search User Interfaces (SUIs) • proposed many interactive features • usefulness proven in micro-level studies, but not widely adopted • Our study: investigating the utility of various SUI features at different macro-level stages
  3. Related Work: task-based inf. seeking, SUIs & feature utility
  4. 2. Related Work: Task-Based Inf. Seeking • Context: task-based information seeking & searching [Wilson09] • IR systems: “deliver task-specific information leading to problem resolution” [Toms11] • Information seeking models discussing inf. seeking stages: Kuhlthau’s Initiation, Selection, Exploration, Formulation, Collection, Presentation, with uncertainty decreasing over time [Kuhlthau91]; Vakkari’s Prefocus, Formulation, Postfocus [Vakkari01]
  5. 2. Related Work: Search User Interfaces • SUIs may aid users to: • express needs, formulate queries, provide understanding & track progress [Hearst09] • Complexity of designing effective SUIs [Shneiderman05] • Many proposed interactive features: • search suggestions [Niu14], facets [Tunkelang09], item trays [Donato10], … (Example SUIs pictured: [Ahlberg&Shneiderman94], Google Wonder Wheel, ClusterMap, Epicurious, [Donato10], [Hearst&Degler13], [Proulx06])
  7. Few features have made it into general search engines, however
  8. 2. Related Work: SUI features over time • Most common: phases of a singular search session • Facet use in ‘decision making stages’ [Kules12] • Query suggestions for difficult topics & during later phases in a task [Niu14] • Search stage sensitive and agnostic features [Diriye10] • Conceptually bridging macro-level inf. seeking models & micro-level systems [Huurdeman&Kamps14] • Few cases also cover multiple search sessions • e.g. [LiuBelkin15, Wilson&schraefel08]
  9. Setup: multistage task design, protocol, system & logging
  10. 3. Setup • User study (26 participants; 24 analyzed) • Undergrads, Univ. of Nottingham (6 F, 12 M, 18–25y) • Experimental SUI resembling a common search engine • Within-participants design • Task stage as independent variable • Task design: explicit multistage approach
  11. 3. Setup: Multistage Task Design • Simulated work task: writing an essay • Three 15-minute subtasks: 1) prepare a list of 3 topics (initiation, topic selection, exploration); 2) choose a topic & formulate a specific question (focus formulation); 3) find and select additional pages to cite (collecting, presenting) • Topics (based on discussions with teaching staff): • Autonomous Vehicles (AV) • Virtual Reality (VR)
  12. 3. Setup: Protocol • Training task • Pre-questionnaire • Topic assignment • Introduction to system • 3× (task + post-task questionnaire) • Post-experiment questionnaire • Debriefing interview
  13. • Experimental system: SearchAssist • Results, Query Corrections, Query Suggestions: Bing Web API • Category Filters: DMOZ • Categorization and analysis: Max Wilson’s framework of SUI features [Wilson11]
  14. SUI feature categories [Wilson11]: Input, Control, Informational, Personalizable
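The four feature categories above can be sketched as a simple lookup used when aggregating interaction logs — a hypothetical mapping (the feature names and their grouping follow the slides, but this is not the study's actual log schema):

```python
# Hypothetical sketch: grouping SearchAssist's SUI features by Wilson's
# four feature categories (input, control, informational, personalizable).
# Feature names are illustrative identifiers, not the real logging keys.

FEATURE_CATEGORY = {
    "query_box": "input",
    "search_button": "input",
    "category_filters": "control",
    "tag_cloud": "control",
    "results_list": "informational",
    "query_suggestions": "informational",
    "recent_queries": "personalizable",
    "saved_results": "personalizable",
}

def clicks_per_category(click_events):
    """Aggregate raw click events (feature-name strings) by category."""
    totals = {}
    for feature in click_events:
        category = FEATURE_CATEGORY.get(feature, "unknown")
        totals[category] = totals.get(category, 0) + 1
    return totals
```

With such a mapping, per-feature click counts roll up directly into the category-level trends reported in the findings.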
  15. 3. Setup: Logging • Eye tracking: eyetribe.com
  16. 3. Setup: Data / Task details • AV & VR topics invoked comparable behaviours: • analysed as one topic set • Total task time: 32:56 (Stage 1: 11:32, Stage 2: 8:24, Stage 3: 12:59) • 36.8% SUI, 33% task screen, 30.2% webpages
  17. Findings: Active Behaviour — behaviour directly and indirectly derivable from logs
  21. 4.1 Active Behaviour: Clicks • Significant changes in clicks on interface features over time (chart, Stages 1–3): Category Filters ➡, Tag Cloud ➡, Search Button ➡, Saved Results
  22. 4.2 Active Behaviour: Queries • Mean number of queries** (unique): • Stage 1: 9.5 (8.1) ➡ Stage 2: 5.5 (5.1) ➡ Stage 3: 5.9 (5.3) (chart: Search Box, Query Suggestions, Recent Queries per stage)
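As a rough illustration of how such per-stage query counts can be derived from an interaction log — a hypothetical sketch, where the `(participant, stage, query)` tuple format and the lowercasing normalisation are assumptions, not the study's actual pipeline:

```python
from collections import defaultdict

def mean_queries_per_stage(log):
    """log: iterable of (participant_id, stage, query_string) tuples.
    Returns {stage: (mean_queries, mean_unique_queries)}, averaged over
    the participants who issued queries in that stage."""
    by_stage = defaultdict(lambda: defaultdict(list))
    for pid, stage, query in log:
        # Normalise lightly so trivial repeats count as one unique query.
        by_stage[stage][pid].append(query.strip().lower())
    stats = {}
    for stage, per_user in by_stage.items():
        totals = [len(qs) for qs in per_user.values()]
        uniques = [len(set(qs)) for qs in per_user.values()]
        stats[stage] = (sum(totals) / len(totals),
                        sum(uniques) / len(uniques))
    return stats
```

The same shape of aggregation applies to the visited-pages counts reported later.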
  23. 4.3 Active Behaviour: Query words • Mean number of query words** increases per stage (chart, Stages 1–3) • Examples: “virtual reality” (P.02) → “impact of virtual reality on society art and culture”; “autonomous vehicles” (P.06) → “autonomous vehicles costs insurance industry”
  24. 4.4 Active Behaviour: Visited pages • Visited pages (unique)**: • Stage 1: 8.0 (7.3) • Stage 2: 6.4 (5.9), dwell time highest • Stage 3: 14.2 (10.8) • Mean rank of visited pages: from 3.1 to 6.4 (chart: Results List, Saved Results per stage)
  25. 4.5 Active Behaviour: Wrapup • Clicks: • decreasing for Query Box (input), Category Filters & Tag Cloud (control) • increasing for Saved Results (personalizable) • Queries: • decreasing over time, but more complex • Popularity of certain features and unpopularity of others: • some features used in passive instead of active ways?
  26. Findings: Passive Behaviour — behaviour not typically caught in interaction logs (eye tracking: eyetribe.com)
  30. Passive Behaviour: Mouse hovers • Mouse movements: • made to reach a feature, but also to aid processing of contents [Rodden08] • Focus here on mouse movements not leading to a click • Tendencies mostly overlap with the active interaction measure • Significant changes over stages: Category Filters** ➡, Tag Cloud* ➡, Query Box** ➡, Results List* ⤻
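A minimal sketch of how “hovers not leading to a click” might be extracted from an event log — the event format and the 2-second follow-up window are assumptions for illustration; the study's logging is not specified at this level:

```python
from collections import defaultdict

def hovers_without_click(events, max_gap=2.0):
    """events: time-ordered (timestamp_seconds, feature, kind) tuples,
    with kind in {'hover', 'click'}. Counts hovers over a feature that
    are NOT followed by a click on the same feature within max_gap
    seconds — a rough proxy for 'passive' mouse attention."""
    counts = defaultdict(int)
    for i, (t, feature, kind) in enumerate(events):
        if kind != "hover":
            continue
        followed_by_click = any(
            k == "click" and f == feature and 0 <= t2 - t <= max_gap
            for (t2, f, k) in events[i + 1:]
        )
        if not followed_by_click:
            counts[feature] += 1
    return dict(counts)
```

Tuning `max_gap` trades off missed slow clicks against counting unrelated later clicks.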
  31. 5.2 Passive Behaviour: Eye fixations • Overview of eye movement via heatmaps: Stage 1 (exploration), Stage 2 (focus formulation), Stage 3 (postfocus, collection)
  36. Passive Behaviour: Eye tracking • Further insights via eye-tracking fixation counts • fixations > 80 ms, similar to e.g. [Buscher08] • Significant changes over stages (chart, Stages 1–3): Query Suggestions* ➡, Tag Cloud* ➡, Category Filters** ➡, Query Box** ➡, Results List* ⤻
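For illustration, a toy dispersion-threshold (I-DT-style) fixation counter using the slide's 80 ms minimum duration — the dispersion threshold and sample format are assumptions, and this is not the EyeTribe pipeline actually used in the study:

```python
def count_fixations(samples, min_duration=0.08, max_dispersion=30.0):
    """samples: time-ordered (t_seconds, x_px, y_px) gaze points.
    A fixation is a run of samples whose x- plus y-spread stays within
    max_dispersion pixels and that lasts at least min_duration
    (80 ms, matching the threshold on the slide)."""
    fixations = 0
    start = 0
    for end in range(1, len(samples) + 1):
        window = samples[start:end]
        xs = [x for _, x, _ in window]
        ys = [y for _, _, y in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            # Dispersion exceeded: close the window before this sample.
            if window[-2][0] - window[0][0] >= min_duration:
                fixations += 1
            start = end - 1
    # A trailing window can still be a fixation.
    if samples and samples[-1][0] - samples[start][0] >= min_duration:
        fixations += 1
    return fixations
```

Fixation counts per on-screen feature region then give the percentages compared against clicks on the next slides.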
  40. 5.3 Passive Behaviour: Active vs. Passive • Subtle differences between passive and active use (chart, Stages 1–3): Tag Cloud [5.8% fixations ⬌ 3.1% clicks], Query Suggestions [3.6% fix. ⬌ 1.9% clicks], Recent Queries [3% fix. ⬌ 2% clicks] • Opposite for Category Filters [5% ⬌ 3.8%]
  41. 5.4 Passive Behaviour: Wrapup • Fixations & mouse moves: • validating active behaviour • subtle differences between active and passive use • Could subjective ratings and qualitative feedback provide more insights?
  42. Findings: Perceived Feature Utility — perceived usefulness (post-stage & post-experiment)
  43. 6.2 Perceived Usefulness: post-experiment • Post-experiment questionnaire: • in which stage or stages were SUI features most useful? • Pronounced differences • significant differences for all features (chart: Query Box / Results List, Category Filters, Tag Cloud, Query Suggestions, Recent Queries, Saved Results)
  44. 6.3 Perceived Usefulness: Category Filters • “good at the start (…) but later I wanted something more specific” (P.11) • Common remarks in 2nd and 3rd stage: • “… could be more specific in its categories” • “… hard to find the category I want” (P.27)
  46. 6.3 Perceived Usefulness: Tag Cloud • At the start: • “…aids exploring the topic” (P.06) • “came up with words that I hadn’t thought of” • Later stages: • “doesn’t help to narrow the search much” (P.18) • “in the end seemed to be too general” (P.07) • Post-experiment comments: • “…was good at the beginning, because when you are not exactly sure what you are looking for, it can give inspiration” (P.12) • “… nice to look at what other kinds of ideas [exist] that maybe you didn’t think of. Then one word may spark your interest” (P.15)
  47. 6.3 Perceived Utility: Query Suggestions • “…was good at the start but as soon as I got more specific into my topic, that went down” (P.11) • “clicked [it] .. a couple of times .. it gave me sort of serendipitous results, which are useful” (P.24)
  48. 6.3 Perceived Utility: Recent Queries • Naturally: “…most useful in the end because I had more searches from before” (P.26) • “The previous searches became more useful ‘as I made them’ because they were there and I could see what I searched before. I was sucking myself in and could work by looking at those.” (P.23) • May aid searchers in their information journey…
  49. 6.3 Perceived Utility: Saved Results • “most useful in the end” (P.12) • “At the start [I was] saving a lot of general things about different topics. Later on I went back to the saved ones for the topic I chose and then sort of went on from that and see what else I should search” (P.26) • “I just felt I was organizing my research a little bit” (P.18) • It “helps me to lay out the plans of my research”.
  50. Conclusion: towards more dynamic support
  51. Conclusion: Findings Summary (chart: SUI features perceived as most useful per stage — percentage of participants; input / informational, control, personalisable) • Informational features highly useful in most stages • Decreasing use of input features • Control features decreasingly useful • likely caused by a user’s evolving domain knowledge • Personalizable features increasingly useful • ‘growing’ with a user’s understanding, task management support
  54. 7. Conclusion: theoretical roundup, using [Kuhlthau04, Vakkari&Hakkala00, Vakkari01] • Complex information seeking task, pre-focus stage: • vague understanding • limited domain knowledge • trouble expressing information need • large amount of new information • explaining the prominent role of control features: explore information, filter result set • Focus formulation stage: • more directed search • better understanding • seeking more relevant information, using differentiated criteria • control features become less essential (“not specific enough”) • personalizable features more important: may “grow” with emerging understanding • Postfocus stage: • specific searches • re-checks additional information • precise expression • low uniqueness, high redundancy of info • long, precise queries • further decline of control features • frequent use of personalizable features (“see what else to search”)
  55. 7. Conclusion: Future Work • Our study: essay-writing simulated work task • extension to other types of complex tasks, user populations • Further research into task-aware search systems • additional features may be useful at different stages • e.g. user hints, assistance • improvement of current features
  56. 7. Conclusion: towards dynamic SUIs • Most Web search systems have converged on static and familiar designs • trialled features often struggled to provide value for searchers • perhaps impeding search [Diriye10] if introduced in simple tasks, or at the wrong moment • Our work provides insights into when SUI features are useful during search episodes • potential for responsive and adaptive SUIs
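One way such a stage-adaptive SUI could be sketched: per-stage prominence weights for features, loosely following the findings (control features prominent early, personalizable features late). The weight values and feature names here are illustrative assumptions, not from the paper:

```python
# Hypothetical per-stage prominence weights for SUI features.
# Stage 1 = pre-focus, 2 = focus formulation, 3 = postfocus.
STAGE_WEIGHTS = {
    1: {"category_filters": 1.0, "tag_cloud": 1.0, "saved_results": 0.3},
    2: {"category_filters": 0.6, "tag_cloud": 0.6, "saved_results": 0.7},
    3: {"category_filters": 0.2, "tag_cloud": 0.3, "saved_results": 1.0},
}

def visible_features(stage, threshold=0.5):
    """Show only the features whose weight for this stage passes the
    threshold; a real system might instead resize or reposition them."""
    return sorted(f for f, w in STAGE_WEIGHTS[stage].items() if w >= threshold)
```

A responsive SUI would additionally need to infer the current stage, e.g. from query length and result-saving behaviour, which is itself an open problem.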
  57. References (1/2)
[Ahlberg&Shneiderman94] C. Ahlberg and B. Shneiderman. Visual information seeking: Tight coupling of dynamic query filters with starfield displays. In CHI, pages 313–317. ACM, 1994.
[Buscher08] G. Buscher, A. Dengel, and L. van Elst. Eye movements as implicit relevance feedback. In CHI ’08 Extended Abstracts, pages 2991–2996. ACM, 2008.
[Diriye10] A. Diriye, A. Blandford, and A. Tombros. When is system support effective? In Proc. IIiX, pages 55–64. ACM, 2010.
[Diriye13] A. Diriye, A. Blandford, A. Tombros, and P. Vakkari. The role of search interface features during information seeking. In TPDL, volume 8092 of LNCS, pages 235–240. Springer, 2013.
[Donato10] D. Donato, F. Bonchi, T. Chi, and Y. Maarek. Do you want to take notes? Identifying research missions in Yahoo! Search Pad. In Proc. WWW ’10, pages 321–330. ACM, 2010.
[Hearst09] M. A. Hearst. Search User Interfaces. Cambridge University Press, 2009.
[Hearst13] M. A. Hearst and D. Degler. Sewing the seams of sensemaking: A practical interface for tagging and organizing saved search results. In HCIR. ACM, 2013.
[Huurdeman&Kamps14] H. C. Huurdeman and J. Kamps. From multistage information-seeking models to multistage search systems. In Proc. IIiX ’14, pages 145–154. ACM, 2014.
[Kuhlthau91] C. C. Kuhlthau. Inside the search process: Information seeking from the user’s perspective. JASIS, 42:361–371, 1991.
[Kuhlthau04] C. C. Kuhlthau. Seeking Meaning: A Process Approach to Library and Information Services. Libraries Unlimited, 2004.
[Kules12] B. Kules and R. Capra. Influence of training and stage of search on gaze behavior in a library catalog faceted search interface. JASIST, 63:114–138, 2012.
[LiuBelkin15] J. Liu and N. J. Belkin. Personalizing information retrieval for multi-session tasks. JASIST, 66(1):58–81, 2015.
[Marchionini06] G. Marchionini. Exploratory search: From finding to understanding. CACM, 49(4):41–46, 2006.
[Niu14] X. Niu and D. Kelly. The use of query suggestions during information search. IPM, 50:218–234, 2014.
[Proulx06] P. Proulx, S. Tandon, A. Bodnar, D. Schroh, W. Wright, and R. Harper. Avian flu case study with nSpace and GeoTime. In Proc. IEEE VAST ’06. IEEE, 2006.
  58. References (2/2)
[Toms11] E. G. Toms. Task-based information searching and retrieval. In Interactive Information Seeking, Behaviour and Retrieval. Facet, 2011.
[Rodden08] K. Rodden, X. Fu, A. Aula, and I. Spiro. Eye-mouse coordination patterns on web search results pages. In CHI ’08 Extended Abstracts, pages 2997–3002. ACM, 2008.
[Shneiderman05] B. Shneiderman and C. Plaisant. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Pearson Education, 2005.
[Tunkelang09] D. Tunkelang. Faceted search. Synthesis Lectures on Information Concepts, Retrieval, and Services, 1(1):1–80, 2009.
[Vakkari01] P. Vakkari. A theory of the task-based information retrieval process: A summary and generalisation of a longitudinal study. Journal of Documentation, 57:44–60, 2001.
[White05] R. W. White, I. Ruthven, and J. M. Jose. A study of factors affecting the utility of implicit relevance feedback. In SIGIR, pages 35–42. ACM, 2005.
[White09] R. W. White and R. A. Roth. Exploratory Search: Beyond the Query-Response Paradigm. Synthesis Lectures on Information Concepts, Retrieval, and Services, 1:1–98, 2009.
[Wilson&schraefel08] M. L. Wilson and m. c. schraefel. A longitudinal study of exploratory and keyword search. In Proc. JCDL ’08, pages 52–56. ACM, 2008.
[Wilson99] T. D. Wilson. Models in information behaviour research. Journal of Documentation, 55:249–270, 1999.
[Wilson11] M. L. Wilson. Interfaces for information retrieval. In I. Ruthven and D. Kelly, editors, Interactive Information Seeking, Behaviour and Retrieval. Facet, 2011.
  59. Acknowledgements • This research was supported by: • EPSRC Platform Grant EP/M000877/1 and • NWO Grant 640.005.001 (WebART) • Thanks to participants & reviewers, and Sanna Kumpulainen • Possibility to present this work here: SIGIR Student Travel Grant
  60. Active & Passive Utility of Search Interface Features in different Information Seeking Task Stages Hugo C. Huurdeman, Max L. Wilson, Jaap Kamps University of Amsterdam, University of Nottingham huurdeman @ uva.nl, max.wilson @ nottingham.ac.uk, kamps @ uva.nl ACM CHIIR conference, March 14, 2016, Chapel Hill, NC, USA dl.acm.org/citation.cfm?id=2854957
