Active & Passive Utility of Search Interface Features in different Information Seeking Task Stages
Hugo C. Huurdeman, Max L. Wilson, Jaap Kamps
University of Amsterdam, University of Nottingham
huurdeman @ uva.nl, max.wilson @ nottingham.ac.uk, kamps @ uva.nl
ACM CHIIR Conference, March 14, 2016
Chapel Hill, NC, USA
dl.acm.org/citation.cfm?id=2854957
1. Introduction
• Information seeking theory:
• stages within complex tasks
• involving learning / knowledge construction
• Research into Search User Interfaces (SUIs)
• proposed many interactive features
• usefulness shown in micro-level studies, but not widely adopted
• Our study: investigating the utility of various SUI features at different macro-level stages
2. Related Work: SUI features over time
• Most common: phases within a single search session
  • Facet use in ‘decision making stages’ [Kules&Capra12]
  • Query suggestions for difficult topics & during later phases of a task [Niu&Kelly14]
  • Search-stage-sensitive and stage-agnostic features [DiriyeEA10]
  • Conceptually bridging macro-level information seeking models & micro-level systems [Huurdeman&Kamps14]
• A few studies also cover multiple search sessions
  • e.g. [Liu&Belkin15, Wilson&schraefel08]
3. Setup: Protocol
• Protocol: training task, pre-questionnaire; topic assignment, introduction to the system, task, post-task questionnaire (×3); post-experiment questionnaire, debriefing interview
• Experimental system: SearchAssist
• Results, Query Corrections, Query
Suggestions: Bing Web API
• Category Filters: DMOZ
• Categorization and analysis:
• Max Wilson’s framework of SUI features
[Wilson11]
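To make the categorization step concrete, here is a minimal Python sketch (hypothetical log format and feature names) that groups per-stage click counts by Wilson's SUI feature categories; only the category assignments stated in these slides are encoded, the remaining features would be added analogously.

from collections import Counter

# Category assignments as stated in the slides (query box = input, results list =
# informational, category filters & tag cloud = control, saved results = personalisable).
WILSON_CATEGORY = {
    "query_box": "input",
    "results_list": "informational",
    "category_filters": "control",
    "tag_cloud": "control",
    "saved_results": "personalisable",
}

def clicks_per_stage_and_category(click_log):
    """click_log: iterable of (stage, feature) tuples, e.g. (1, 'tag_cloud')."""
    counts = Counter()
    for stage, feature in click_log:
        counts[(stage, WILSON_CATEGORY.get(feature, "other"))] += 1
    return counts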
4.1 Active Behaviour: Clicks
• Significant changes in clicks on interface features over time
• [Chart: clicks per stage (Stages 1–3) for Category Filters, Tag Cloud, Search Button, and Saved Results]
4.3 Active Behaviour: Query words
• Mean number of query words**:
  • “virtual reality” (P.02) → “impact of virtual reality on society art and culture“
  • “autonomous vehicles” (P.06) → “autonomous vehicles costs insurance industry”
• [Chart: mean number of query words per stage (Stages 1–3)]
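As a small illustration (assuming a hypothetical query log of (stage, query) pairs), the mean number of query words per stage could be computed along these lines:

from collections import defaultdict

def mean_query_words(query_log):
    """query_log: iterable of (stage, query_string) tuples."""
    lengths = defaultdict(list)
    for stage, query in query_log:
        lengths[stage].append(len(query.split()))
    return {stage: sum(words) / len(words) for stage, words in lengths.items()}

# e.g. mean_query_words([(1, "virtual reality"),
#                        (3, "autonomous vehicles costs insurance industry")])
# -> {1: 2.0, 3: 5.0}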
4.4 Active Behaviour: Visited pages
• Visited pages (unique)**:
• Stage 1: 8.0 (7.3)
• Stage 2: 6.4 (5.9)
• dwell time highest
• Stage 3: 14.2 (10.8)
• Mean rank of visited pages: from 3.1 to 6.4
• [Chart: visited pages per stage (Stages 1–3), via the Results List and via Saved Results]
4.5 Active Behaviour: Wrapup
• Clicks:
• decreasing for Query Box (input), Category Filters & Tag
Cloud (control)
• increasing for Saved Results (personalizable)
• Queries:
• decreasing over time, but more complex
• Popularity of certain features and unpopularity of others:
• Some features used in passive rather than active ways?
Passive behaviour: mouse hovers
• Mouse movements:
  • movements to reach a feature, but also to aid processing of contents [Rodden08]
  • focus here on mouse movements not leading to a click
  • tendencies mostly overlap with the active interaction measures
• [Chart: share of mouse hovers per stage (Stages 1–3) for Category Filters**, Tag Cloud*, Query Box**, and Results List*]
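A minimal sketch of how hovers that did not lead to a click on the same feature might be isolated from an interaction log; the event format and the pairing window are assumptions, not the study's actual instrumentation.

HOVER_CLICK_WINDOW = 2.0  # seconds; assumed pairing window, purely illustrative

def hovers_without_click(events):
    """events: time-sorted list of (timestamp, kind, feature), kind in {'hover', 'click'}."""
    counts = {}
    for i, (t, kind, feature) in enumerate(events):
        if kind != "hover":
            continue
        followed_by_click = any(
            k == "click" and f == feature and 0 <= t2 - t <= HOVER_CLICK_WINDOW
            for t2, k, f in events[i + 1:]
        )
        if not followed_by_click:
            counts[feature] = counts.get(feature, 0) + 1
    return counts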
5.2 Passive Behaviour: eye fixations
• Overview of eye movements via heatmaps
• [Heatmaps: Stage 1 (exploration), Stage 2 (focus formulation), Stage 3 (postfocus, collection)]
Passive behaviour: eye tracking
• Further insights via eye tracking fixation counts
  • fixations > 80 ms, similar to e.g. [Buscher08]
• [Chart: eye tracking fixations per stage (Stages 1–3) for Query Suggestions*, Tag Cloud*, Category Filters**, Query Box**, and Results List*]
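For concreteness, a sketch of the fixation-count measure under an assumed data format: only fixations longer than 80 ms (the threshold mentioned above) are counted, per stage and area of interest.

from collections import Counter

MIN_FIXATION_MS = 80  # threshold from the slide, similar to [Buscher08]

def fixation_counts(fixations):
    """fixations: iterable of (stage, aoi, duration_ms); aoi is a SUI feature name."""
    counts = Counter()
    for stage, aoi, duration_ms in fixations:
        if duration_ms > MIN_FIXATION_MS:
            counts[(stage, aoi)] += 1
    return counts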
5.3 Passive Behaviour: Active vs. Passive
• Subtle differences between passive and active use:
  • Tag Cloud [5.8% fixations ⬌ 3.1% clicks]
  • Query Suggestions [3.6% fixations ⬌ 1.9% clicks]
  • Recent Queries [3% fixations ⬌ 2% clicks]
  • Opposite for Category Filters [5% ⬌ 3.8%]
• [Chart: shares per stage (Stages 1–3)]
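The comparison itself can be expressed as each feature's share of all fixations versus its share of all clicks; the percentages above are the study's results, and the sketch below only shows the computation under an assumed log format.

from collections import Counter

def feature_shares(events):
    """events: iterable of (kind, feature) with kind in {'fixation', 'click'}.
    Returns {feature: (fixation_share, click_share)} as fractions of all events of that kind."""
    events = list(events)
    totals = Counter(kind for kind, _ in events)
    per_pair = Counter(events)
    features = {feature for _, feature in events}
    return {
        f: (per_pair[("fixation", f)] / max(totals["fixation"], 1),
            per_pair[("click", f)] / max(totals["click"], 1))
        for f in features
    }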
5.4 Passive Behaviour: Wrapup
• Fixations & mouse moves:
  • validating active behaviour
  • subtle differences between active and passive use
• Could subjective ratings and qualitative feedback provide more insights?
6.2 Perceived Usefulness: post-experiment
• Post-experiment questionnaire:
• In which stage or stages were SUI features most useful?
• Pronounced differences
• significant differences for all features
• [Chart: per-stage usefulness ratings for Query Box / Results List, Category Filters, Tag Cloud, Query Suggestions, Recent Queries, and Saved Results]
6.3 Perceived Usefulness: Category Filters
• “good at the start (…) but later I
wanted something more specific” (P.11)
• common remarks in 2nd and 3rd stage:
• “… could be more specific in its
categories”
• “…hard to find the category I want” (P.27)
6.3 Perceived Usefulness: Tag Cloud
• at the start:
• “…aids exploring the topic” (P.06);
• “came up with words that I hadn’t thought of”
• later stages:
• “doesn’t help to narrow the search much” (P.18)
• “in the end seemed to be too general” (P.07)
• Post-experiment comments:
• “…was good at the beginning, because when you
are not exactly sure what you are looking for, it can
give inspiration” (P.12)
• “… nice to look at what other kinds of ideas [exist]
that maybe you didn’t think of. Then one word may
spark your interest” (P.15)
6.3 Perceived Utility: Query Suggestions
• “…was good at the start but as soon
as I got more specific into my topic,
that went down” (P.11)
• “clicked [it] .. a couple of times .. it
gave me sort of serendipitous
results, which are useful” (P.24)
6.3 Perceived Utility: Recent Queries
• Naturally: “…most useful in the end
because I had more searches from
before” (P.26)
• “The previous searches became more
useful ‘as I made them’ because they
were there and I could see what I
searched before. I was sucking myself
in and could work by looking at
those.” (P.23)
• May aid searchers in their information journey
6.3 Perceived Utility: Saved Results
• “most useful in the end” (P.12)
• “At the start [I was] saving a lot of
general things about different topics.
Later on I went back to the saved
ones for the topic I chose and then
sort of went on from that and see what
else I should search” (P.26)
• “I just felt I was organizing my
research a little bit” (P.18)
• It “helps me to lay out the plans of my
research”.
Conclusion: Findings Summary
• Informational features highly
useful in most stages
• Decreasing use of input features
• Control features decreasingly
useful
• likely caused by a user’s evolving
domain knowledge
• Personalizable features
increasingly useful
• ‘growing’ with a user’s
understanding, task
management support
• [Chart: SUI features perceived as most useful per stage, as percentage of participants]
7. Conclusion: theoretical roundup
complex information seeking task
pre-focus stage:
• vague understanding
• limited domain knowledge
• trouble expressing
information need
• large amount of new
information
• explaining prominent role of
control features
• explore information
• filter result set
focus formulation stage:
• more directed search
• better understanding
• seeking more relevant
information, using
differentiated criteria
• control features become less
essential
• “not specific enough”
• personalizable features more important: may “grow” with emerging understanding
postfocus stage:
• specific searches
• re-checks additional
information
• precise expression
• low uniqueness, high
redundancy of info
• long, precise, queries
• further decline of control
features
• frequent use of
personalizable features
• “see what else to search”
using [Kuhlthau04,Vakkari&Hakkala00,Vakkari01]
7. Conclusion: Future Work
• Our study: an essay-writing simulated work task
• Extension to other types of complex tasks, user
populations
•Further research into task-aware search systems
• additional features may be useful at different stages
• e.g. user hints, assistance
• improvement of current features
7. Conclusion: towards dynamic SUIs
• Most Web search systems have converged on static and familiar designs
• trialled features often struggled to provide value for
searchers
• perhaps impeding search [Diriye10] if introduced in simple
tasks, or at the wrong moment
•Our work provides insights into when SUI
features are useful during search episodes
• potential responsive and adaptive SUIs
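As a deliberately simplified, speculative sketch of what a stage-aware SUI could do (not a system built in this work), the findings could be translated into a rule that chooses which feature groups to emphasize for a detected stage:

# Emphasis per stage, loosely following this talk's findings; how the stage is
# detected is left open, and the rule itself is purely illustrative.
STAGE_EMPHASIS = {
    "pre-focus": {"input", "informational", "control"},
    "focus formulation": {"informational", "personalisable"},
    "postfocus": {"informational", "personalisable"},
}

def features_to_emphasize(stage, feature_categories):
    """feature_categories: {feature_name: wilson_category}."""
    emphasized = STAGE_EMPHASIS.get(stage, set())
    return [f for f, category in feature_categories.items() if category in emphasized]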
References (1/2)
[Ahlberg&Shneiderman94] C. Ahlberg and B. Shneiderman. Visual information seeking: Tight coupling of dynamic query filters
with starfield displays. In CHI, pages 313–317. ACM, 1994.
[Buscher08] G. Buscher, A. Dengel, and L. van Elst. Eye movements as implicit relevance feedback. In CHI’08 extended
abstracts on Human factors in computing systems, pages 2991–2996. ACM, 2008.
[Diriye10] A. Diriye, A. Blandford, and A. Tombros. When is system support effective? In Proc. IIiX, pages 55–64. ACM, 2010.
[Diriye13] A. Diriye, A. Blandford, A. Tombros, and P. Vakkari. The role of search interface features during information seeking.
In TPDL, volume 8092 of LNCS, pages 235–240. Springer, 2013.
[Donato10] D. Donato, F. Bonchi, T. Chi, and Y. Maarek. Do You Want to Take Notes?: Identifying Research Missions in Yahoo!
Search Pad. In Proc. WWW’10, pages 321–330, 2010. ACM.
[Hearst09] M. A. Hearst. Search user interfaces. Cambridge University Press, 2009.
[Hearst13] M. A. Hearst and D. Degler. Sewing the seams of sensemaking: A practical interface for tagging and organizing
saved search results. In HCIR. ACM, 2013.
[Huurdeman&Kamps14] H. C. Huurdeman and J. Kamps. From Multistage Information-seeking Models to Multistage Search
Systems. In Proc. IIiX’14, pages 145–154, 2014. ACM
[Kuhlthau91] C. C. Kuhlthau. Inside the search process: Information seeking from the user’s perspective. JASIS, 42:361–371, 1991.
[Kuhlthau04] C. C. Kuhlthau. Seeking Meaning: A Process Approach to Library and Information Services. Libraries Unlimited,
2004.
[Kules12] B. Kules and R. Capra. Influence of training and stage of search on gaze behavior in a library catalog faceted search
interface. JASIST, 63:114–138, 2012.
[LiuBelkin15] J. Liu and N. J. Belkin. Personalizing information retrieval for multi-session tasks. JASIST, 66(1):58–81, Jan. 2015.
[Marchionini06] G. Marchionini. Exploratory search: from finding to understanding. CACM, 49(4):41–46, 2006.
[Niu14] X. Niu and D. Kelly. The use of query suggestions during information search. IPM, 50:218–234, 2014.
[Proulx06] P. Proulx, S. Tandon, A. Bodnar, D. Schroh, R. Harper, and W. Wright. Avian Flu Case Study with nSpace and GeoTime. In Proceedings of the IEEE Symposium on Visual Analytics Science and Technology (VAST'06). IEEE, 2006.
References (2/2)
[Toms11] E. G. Toms. Task-based information searching and retrieval. In Interactive Information
Seeking, Behaviour and Retrieval. Facet, 2011.
[Rodden08] K. Rodden, X. Fu, A. Aula, and I. Spiro. Eye-mouse coordination patterns on web
search results pages. In CHI’08 Extended Abstracts, pages 2997–3002. ACM, 2008.
[Shneiderman05] B. Shneiderman and C. Plaisant. Designing the user interface: strategies for effective human-computer interaction. Pearson Education, 2005.
[Tunkelang09] D. Tunkelang. Faceted search. Synthesis lectures on information
concepts, retrieval, and services, 1(1):1–80, 2009.
[Vakkari01] P. Vakkari. A theory of the task-based information retrieval process: a summary
and generalisation of a longitudinal study. Journal of Documentation, 57:44–60, 2001.
[White05] R. W. White, I. Ruthven, and J. M. Jose. A study of factors affecting the utility of
implicit relevance feedback. In SIGIR, pages 35–42. ACM, 2005.
[White09] R. W. White and R. A. Roth. Exploratory search: Beyond the query-response
paradigm. Synthesis Lectures on Information Concepts, Retrieval, and Services, 1:1–98, 2009.
[Wilson&schraefel08] M. L. Wilson and m. c. schraefel. A longitudinal study of exploratory and keyword search. In Proc. JCDL’08, pages 52–56. ACM, 2008.
[Wilson99] T. D. Wilson. Models in information behaviour research. Journal of Documentation,
55:249–270, 1999.
[Wilson11] M. L. Wilson. Interfaces for information retrieval. In I. Ruthven and D. Kelly, editors.
Interactive Information Seeking, Behaviour and Retrieval. Facet, 2011.
Acknowledgements
• This research was supported by:
• EPSRC Platform Grant EP/M000877/1
and
• NWO Grant 640.005.001, WebART
• Thanks to participants & reviewers, and
Sanna Kumpulainen
• Presentation of this work here was made possible by a SIGIR Student Travel Grant