Cognitive Ability Effects on Effort in Web Search & Navigation by Gwizdka


Presentation from the Web Search and Navigation symposium at the 21st Annual Meeting of the Society for Text and Discourse, Poitiers, July 11-13, 2011

  • The decision incorporates an assessment of the effort needed to continue searching to obtain better (or more) information vis-à-vis the expected utility of that information
  • Some insight is offered by examining differences in reading models for high- vs. low-WM people
  • Eye-tracking work on reading behavior in information search has mostly analyzed eye-gaze position aggregates ('hot spots'). This does not address the fixation sub-sequences that constitute true reading behavior.

    1. Cognitive Ability Effects on Effort in Web Search and Navigation
       Jacek Gwizdka, Department of Library and Information Science, Rutgers University, New Brunswick, New Jersey, USA
       Text & Discourse Annual Meeting, University of Poitiers
    2. Background
       • People are assumed to strive to minimize effort – the principle of least effort (Zipf, 1949)
       • In more demanding situations people are expected to make decisions based on satisficing (Simon, 1956; Rational Analysis framework: Anderson, 1990)
       • Bounded rationality and satisficing were found to explain behaviour on low-level and high-level information search tasks (Fu & Gray, 2006; Gray & Fu, 2001; Mansourian & Ford, 2007)
       • With increased difficulty one could therefore expect a user to perform fewer actions and stop sooner
    3. Experiment
       • 37 participants – working memory assessed using a memory span task (Francis & Neath, 2003)
       • Within-subjects design with 2 factors: task and user interface
       • Tasks – everyday information search (e.g., travel, shopping) at two levels of task complexity
         – four task rotations for each of the two user interfaces
         [Rotation table: counterbalanced orderings of fact-finding and information-gathering tasks across the two user interfaces]
    4. User Interfaces: Result List vs. Overview Tag-Cloud
       [Flow diagrams of the two interfaces. UI 1 (List): Start → view new search results → click a result URL → view one result page → click the "Back" button → click "Done" & enter answer → End. UI 2 (List + Overview Tag Cloud): the same flow plus view-tag, search-tag, and delete-tag actions.]
    5. Research Questions
       • How do performance and effort change in more demanding situations?
         – Task and user interface effects;
         – Individual differences – cognitive ability effects.
    6. Measures
       • Task completion time
       • Cognitive effort:
         – search and navigation decisions expressed as user actions: selection of search terms (number of queries), selection of documents to view
         – reading effort: scanning vs. reading; length of reading sequences; length of reading fixations (based on the reading model)
       • Performance: task outcome = relevance × completeness
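    A concrete illustration of how these measures might be computed from a
    task log: a minimal Python sketch. The TaskLog fields and the helper name
    are hypothetical (not from the talk); only the outcome formula,
    relevance × completeness, comes from the slide.

        # Hypothetical per-task log; field names are illustrative only.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class TaskLog:
            completion_time_s: float      # task completion time
            queries: List[str]            # queries issued during the task
            docs_viewed: List[str]        # result documents opened
            relevance: float              # graded answer relevance, 0..1
            completeness: float           # answer completeness, 0..1

        def effort_and_performance(log: TaskLog) -> dict:
            return {
                "time_s": log.completion_time_s,
                "n_queries": len(log.queries),          # search decisions
                "n_docs_viewed": len(log.docs_viewed),  # navigation decisions
                # the slide defines task outcome as relevance * completeness
                "task_outcome": log.relevance * log.completeness,
            }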
    7. Introducing the Reading Model
       • Scanning fixations provide some semantic information – limited to the foveal visual field (1° visual acuity) (Rayner & Fischer, 1996)
       • Reading fixation sequences provide more information than isolated "scanning" fixations
         – information is gained from the larger parafoveal region (5° beyond foveal focus; asymmetrical, in the direction of reading) (Rayner et al., 2003)
         – some types of semantic information are available only through reading sequences
       • We implemented the E-Z Reader reading model (Reichle et al., 2006)
         – lexical fixation duration > 113 ms (Reingold & Rayner, 2006)
         – each lexical fixation is classified as Scanning or Reading (S, R)
         – these sequences are used to create a two-state model
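    The slide gives the key ingredients: a 113 ms lexical threshold and a
    two-way Scan/Read labelling of fixations. The sketch below is an
    illustrative simplification in that spirit, not the authors' E-Z Reader
    implementation; the spatial parameters (MAX_SACCADE_PX, LINE_TOL_PX,
    MIN_CHAIN) are assumptions chosen for illustration.

        LEXICAL_MS = 113       # lexical-processing threshold from the slide
        MAX_SACCADE_PX = 150   # assumed: max forward hop within one reading chain
        LINE_TOL_PX = 30       # assumed: vertical tolerance for staying on a line
        MIN_CHAIN = 2          # assumed: minimum chained fixations to count as reading

        def label_fixations(fixations):
            """fixations: list of (x, y, duration_ms).
            Returns 'S'/'R' labels for the lexical fixations (shorter ones dropped).
            Simplification: only rightward progressions extend a reading chain."""
            lexical = [f for f in fixations if f[2] > LEXICAL_MS]
            labels = ["S"] * len(lexical)
            chain = [0]  # indices of the current candidate reading chain

            def flush(chain):
                if len(chain) >= MIN_CHAIN:
                    for j in chain:
                        labels[j] = "R"

            for i in range(1, len(lexical)):
                (x0, y0, _), (x1, y1, _) = lexical[i - 1], lexical[i]
                if 0 < (x1 - x0) < MAX_SACCADE_PX and abs(y1 - y0) < LINE_TOL_PX:
                    chain.append(i)   # short, roughly rightward hop: keep chaining
                else:
                    flush(chain)      # chain broken: commit it if long enough
                    chain = [i]
            flush(chain)
            return labels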
    8. Example Reading Sequence
    9. Results
    10. Task → User Behaviour & Reading Model Differences
        • Task outcome: no significant differences between conditions
        • Task: more complex tasks required more effort
          – more actions (7.8 vs. 4.5) and longer time (255 vs. 195 s)
          – longer maximum reading fixation length and more reading fixation regressions
    11. UI → User Behaviour & Reading Model Differences
        • The Overview+List user interface required less effort
          – users were faster (191 s vs. 261 s in the List UI)
          – less reading effort:
            • scanning more likely (transitions S→S and R→S higher; S→R lower)
            • scan-path length of reading sequences shorter
            • fewer and shorter mean fixations per page visited
        [Screenshots: List UI vs. Overview + List UI]
    12. Individual Differences
        • Two users, same UI and task
    13. Individual Differences – Least Effort?
        • Higher-cognitive-ability searchers were faster in the Overview UI and on simple tasks (while entering the same number of queries)
        • Higher-ability searchers did more in more demanding situations – but the higher search effort did not seem to improve task outcomes
        • For the task complexity factor and working memory (WM): F(1,144) = 4.2, p = .042; F(1,144) = 3.1, p = .08
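    For readers who want to run this kind of test themselves, here is a
    hedged sketch of one plausible analysis: a linear mixed model with a
    per-participant random intercept, testing the task-complexity × WM
    interaction on task time. The CSV file and column names are
    hypothetical, and this is not necessarily the exact analysis behind the
    F-values reported above.

        import pandas as pd
        import statsmodels.formula.api as smf

        # assumed long format: one row per participant x task
        df = pd.read_csv("search_tasks.csv")
        model = smf.mixedlm("time_s ~ C(complexity) * wm_span",
                            data=df, groups=df["participant"]).fit()
        print(model.summary())  # inspect the C(complexity):wm_span interaction term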
    14. Task and Working Memory – Eye-tracking Data
        • Number and duration of reading sequences differ between task complexity levels (borderline: 0.05 < p < 0.1)
        • For high-WM searchers:
          – more reading for complex tasks
          – less reading for simple tasks
        • For low-WM searchers, no such difference!
    15. Summary & Conclusions
        • UI effect on effort: Overview+List UI
        • Task complexity effect reflected in user actions and in some eye-tracking measures
        • Effects of cognitive abilities (WM) on effort:
          – low WM – in more complex tasks fewer documents read → satisficing
          – high WM – more effort on complex tasks than needed → opportunistic discovery of information? (Erdelez, 1997)
          – "violation" of the least-effort principle not fully explained yet
    16. Thank you! Questions?
        Jacek Gwizdka – contact: http://jsg.tel
        PoODLE Project: Personalization of the Digital Library Experience, supported by US Institute of Museum and Library Services (IMLS) grant LG-06-07-0105-07
    17. Extra Slides
        • Eye movements – reading model details
        • Current research project
    18. Eye-gaze Patterns
        • Eye-tracking research has frequently analyzed eye-gaze position aggregates ('hot spots')
          – spatiotemporal intensity – heat maps
          – also sequential – scan paths
        • Higher-order patterns:
          – reading models
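    The 'hot spot' aggregation described here is essentially a
    duration-weighted 2D histogram of fixation positions. A minimal sketch;
    the screen resolution and bin size are assumptions:

        import numpy as np

        def fixation_heatmap(fixations, screen=(1280, 1024), bin_px=32):
            """fixations: iterable of (x, y, duration_ms).
            Returns a 2D array of duration-weighted fixation intensity."""
            xs, ys, durs = zip(*fixations)
            bins = (screen[0] // bin_px, screen[1] // bin_px)
            heat, _, _ = np.histogram2d(xs, ys, bins=bins,
                                        range=[[0, screen[0]], [0, screen[1]]],
                                        weights=durs)
            return heat  # smooth (e.g. scipy.ndimage.gaussian_filter) before plotting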
    19. Reading Eye Patterns
        • Reading and scanning have easily distinguished patterns of fixations and saccades (Rayner & Fischer, 1996)
        • Lexical processing of words – reading research has established that word availability is a function of fixation duration:
          – orthographic recognition: 40–50 ms (time to move data from eyes to mind)
          – phonological recognition: 55–70 ms
          – lexical availability (typical): 113–150 ms (Rayner, 1998); unfamiliar or complex meanings require longer processing
          – eyes do not saccade until the word has been processed
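    Read as a lookup table, these thresholds map a fixation's duration to
    the deepest word-processing stage it can support. An illustrative
    helper; the boundary values are the slide's cited figures, while the
    function name and the treatment of the gaps between the cited ranges
    are assumptions:

        def processing_stage(duration_ms: float) -> str:
            """Deepest processing stage supported by a fixation of this length."""
            if duration_ms < 40:
                return "pre-orthographic"  # too short to move data from eyes to mind
            if duration_ms < 55:
                return "orthographic"      # ~40-50 ms: visual word form recognized
            if duration_ms < 113:
                return "phonological"      # ~55-70 ms: sound form recognized
            return "lexical"               # >= 113 ms: meaning available (Rayner, 1998)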
    20. Scan Fixations vs. Reading Fixations
        • Scanning fixations provide some semantic information, limited to the foveal visual field (1° visual acuity) (Rayner & Fischer, 1996)
        • Fixations in a reading sequence provide more information than isolated "scanning" fixations:
          – information is gained from the larger parafoveal region (5° beyond foveal focus; asymmetrical, in the direction of reading) (Rayner et al., 2003)
          – richer semantic structure is available from text compositions (sentences, paragraphs, etc.)
        • Some types of semantic information available only through reading sequences may be crucial to satisfying task requirements
    21. Reading Models
        • We implemented the E-Z Reader reading model (Reichle et al., 2006)
          – inputs: (eye fixation location, duration)
          – fixation duration > 113 ms – threshold for lexical processing (Reingold & Rayner, 2006)
          – the algorithm distinguishes reading fixation sequences from isolated fixations, called scanning fixations
          – each lexical fixation is classified as Scanning or Reading (S, R)
          – these sequences are used to create a state model
    22. Reading Model – States and Characteristics
        • Two states: transition probabilities
        • Number of lexical fixations and duration
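    The two-state model's transition probabilities can be estimated directly
    from a labelled (S, R) fixation sequence, e.g. the output of the
    label_fixations() sketch shown earlier. A minimal sketch:

        from collections import Counter

        def transition_probabilities(labels):
            """labels: sequence of 'S'/'R'. Returns P(next state | current state)."""
            pairs = Counter(zip(labels, labels[1:]))
            probs = {}
            for cur in "SR":
                total = sum(pairs[(cur, nxt)] for nxt in "SR")
                for nxt in "SR":
                    probs[f"{cur}->{nxt}"] = pairs[(cur, nxt)] / total if total else 0.0
            return probs

        # e.g. transition_probabilities(list("SSRRRSSR"));
        # slide 11's pattern would show higher S->S and R->S in the Overview+List UI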
    23. Example Reading Sequence
    24. Current Project: Can We Implicitly Detect Relevance Decisions?
        • Implicit characterization of the information search process using physiological devices
        • Can we detect when searchers make information relevance decisions?
        • Start with pupillometry
          – information relevance (Oliveira, Russell & Aula, 2009)
          – low-level decision timing (Einhäuser et al., 2010)
        • Also look at EEG, GSR
        [Device photos: Emotiv EPOC wireless EEG headset; Tobii T-60 eye-tracker; GSR sensor; pupil animation]
        Funded by a Google Research Award and by an IMLS Career Award
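    As one example of the pupillometry starting point, a common first step
    (not necessarily this project's method) is to baseline-correct pupil
    diameter around a candidate decision event and look for task-evoked
    dilation; the sampling rate and window lengths below are assumptions:

        import numpy as np

        def evoked_pupil_response(diameter, event_idx, fs=60,
                                  baseline_s=0.5, window_s=2.0):
            """diameter: 1D array of pupil-diameter samples.
            Returns % change vs. the pre-event baseline over the post-event window."""
            b0 = max(event_idx - int(baseline_s * fs), 0)
            baseline = np.nanmean(diameter[b0:event_idx])
            window = diameter[event_idx:event_idx + int(window_s * fs)]
            return 100.0 * (window - baseline) / baseline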