Jacek Gwizdka, Department of Library and Information Science, School of Communication and Information, Rutgers University. Monday, April 4, 2011. Learning about Information Searchers from Eye-Tracking. CONTACT: www.jsg.tel
Outline Overall research goals Eye-tracking – fundamentals Eye-fixation patterns: reading models (Exp 1; Exp 3) Search results presentation and cognitive abilities (Exp 2) Summary and Challenges 2
Overall Research Goals Characterization and enhancement of human information interaction mediated by computing technology. Characterization: cognitive and affective user states – traditionally, researchers have had little access to the mental/emotional states of users while they are engaged in the search process; implicit data collection about searchers’ cognitive and affective states in relation to information search phases. Enhancement: personalization and adaptation. 3
Example: Implicit Characterization of Cognitive Load on Web Search 4 Higher average cognitive load on Q (formulate query) and B (bookmark page): 35% and 27%; higher peak cognitive load on C (view content page). [Figure: search-process state diagram with states START, Q (formulate query), L (view search results list), B (bookmark page), C (view content page), END, annotated with transition percentages.] (Gwizdka, JASIST, 2010)
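A minimal sketch of how per-state cognitive-load summaries like these could be computed from logged search events; the log format and load values below are illustrative assumptions, not the measurement pipeline from the JASIST 2010 study.

from collections import defaultdict

# Hypothetical event log: (search state, cognitive-load estimate per event),
# where states follow the diagram: Q = formulate query, L = view results list,
# B = bookmark page, C = view content page.
log = [("Q", 0.72), ("L", 0.41), ("C", 0.55), ("B", 0.68), ("C", 0.93), ("Q", 0.75)]

by_state = defaultdict(list)
for state, load in log:
    by_state[state].append(load)

for state, values in sorted(by_state.items()):
    print(state, "mean load:", round(sum(values) / len(values), 2), "peak load:", max(values))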
Eye-Tracking? Early attempts in the late 19th century; in the early 1950s, using a movie camera and hand-coding (Fitts, Jones & Milton, 1950). Now computerized and “easy to use”: infrared light sources and cameras; stationary and mobile. 5 Current Tobii eye-trackers
Eye-tracking – fundamental assumptions Top-down vs. bottom-up control; in between: language processing (higher-level) controls when eyes move, visual processing (lower-level) controls where eyes move (Reichle et al., 1998). Eye-mind link hypothesis: attention is where the eyes are focused (Just & Carpenter, 1980; 1987). Overt and covert attention: attention can move with no eye movement, BUT eyes cannot move without attention. 6
Data from Eye-tracking Devices: eye gaze points; eye gaze points in screen coordinates + distance; eye fixations in screen coordinates + validity; pupil diameter; [head position in 3D, distance from monitor]. Eye-trackers sample at 50/60 Hz, 300 Hz, or 1000-2000 Hz; a common rate is 60 Hz, i.e., one data record every 16.67 ms. 7 Tobii T-60 eye-tracker
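A sketch of what one such data record could look like and how the 60 Hz rate translates into the sample interval; the field names are hypothetical, not the Tobii export format.

from dataclasses import dataclass

@dataclass
class GazeSample:
    # One eye-tracker record (hypothetical layout, not the Tobii data schema)
    timestamp_ms: float       # recording time
    x: float                  # gaze point in screen coordinates (pixels)
    y: float
    pupil_diameter_mm: float
    validity: int             # 0 = reliable, larger values = less reliable
    head_distance_mm: float   # distance from the monitor

SAMPLING_RATE_HZ = 60
sample_interval_ms = 1000 / SAMPLING_RATE_HZ
print(f"one record every {sample_interval_ms:.2f} ms")  # 16.67 ms at 60 Hz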
Eye-Tracking Can … Eye tracking can allow identification of the specific content acquired by the person from Web pages. Eye tracking enables high-resolution analysis of a searcher’s activity during interactions with information systems. And more… 8 Example: composing an answer from information on a Web page (video)
Related Work in Information Science Interaction with search results: interaction with SERPs (Granka et al., 2004; Lorigo et al., 2007; 2008); effects of results presentation (Cutrell et al., 2007; Kammerer et al., 2010); relevance detection (Buscher et al., 2009); implicit feedback (Fu, 2009); query expansion (Buscher et al., 2009). Relevance detection: pupillometry (Oliveira, Aula, Russell, 2009). Detection of task differences from eye-gaze patterns: reading/reasoning/search/object manipulation (Iqbal & Bailey, 2004); informational vs. transactional tasks (Terai et al., 2008). Task detection is also one of our research interests. 9
Experiment 1: Journalism tasks – Open Web Search 32 journalism students; 4 journalistic tasks (realistic, created by journalism faculty and journalists). Tasks: advanced obituary (OBI), interview preparation (INT), copy editing (CPE), background information (BIC). 10 Task facets: product – factual vs. intellectual (e.g., fact checking vs. producing a document)
level: whole document vs. segment
nature of task goal
complexity – number of steps needed. Note: OBI vs. CPE are most dissimilar
Experiment 1 – Research Questions Can we detect task type (differences in task facets) from implicit interaction data (e.g., eye-tracking) ? How do we aggregate information from eye-tracking data? 11
Eye-gaze patterns Eye-tracking research has frequently analyzed eye-gaze position aggregates (‘hot spots’): spatiotemporal intensity – heat maps; also sequential – scan paths. 12
These aggregates do not address the fixation sub-sequences that constitute true reading behavior; our approach instead uses reading models.
Scan Fixations vs. Reading Fixations Scanning fixations provide some semantic information, limited to the foveal (1° visual acuity) visual field (Rayner & Fischer, 1996). Fixations in a reading sequence provide more information than isolated “scanning” fixations: information is gained from the larger parafoveal region (5° beyond foveal focus; asymmetrical, in the direction of reading) (Rayner et al., 2003), and a richer semantic structure is available from text compositions (sentences, paragraphs, etc.). Some of the types of semantic information available only through reading sequences may be crucial to satisfy task requirements. 14
Reading Models We implemented the E-Z Reader reading model (Reichle et al., 2006). Inputs: (eye fixation location, duration). Fixation duration > 113 ms – threshold for lexical processing (Reingold & Rayner, 2006). The algorithm distinguishes reading fixation sequences from isolated fixations, called ‘scanning’ fixations; each lexical fixation is classified as S or R (Scan, Reading). These sequences are used to create a state model. 15
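A simplified sketch of the classification step: this is not the E-Z Reader implementation used in the study, only a proximity-based approximation. The 113 ms lexical threshold comes from the slide; the spatial tolerances are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Fixation:
    x: float            # screen coordinates (pixels)
    y: float
    duration_ms: float

LEXICAL_MS = 113        # lexical-processing threshold (Reingold & Rayner, 2006)
MAX_FORWARD_PX = 150    # illustrative: largest rightward jump still treated as reading
MAX_REGRESSION_PX = 60  # illustrative: small leftward (regressive) jump allowed
MAX_LINE_DRIFT_PX = 30  # illustrative: vertical tolerance for "same line of text"

def label_fixations(fixations: List[Fixation]) -> List[str]:
    # Keep only lexical fixations, then mark those that chain into reading
    # sequences as 'R'; isolated lexical fixations stay 'S' (scanning).
    lexical = [f for f in fixations if f.duration_ms > LEXICAL_MS]
    labels = ["S"] * len(lexical)
    for i in range(1, len(lexical)):
        dx = lexical[i].x - lexical[i - 1].x
        dy = abs(lexical[i].y - lexical[i - 1].y)
        if dy <= MAX_LINE_DRIFT_PX and -MAX_REGRESSION_PX <= dx <= MAX_FORWARD_PX:
            labels[i - 1] = labels[i] = "R"
    return labels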
Reading Model – States and Characteristics Two states, with transition probabilities between them; characteristics include the number of lexical fixations and their durations. 16
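Given the resulting S/R label sequence for a page, the two-state model reduces to a table of transition counts; a minimal sketch:

from collections import Counter

def transition_probabilities(labels):
    # Estimate P(next state | current state) from a sequence like "SSRRRSRR".
    pairs = Counter(zip(labels, labels[1:]))
    probs = {}
    for current in ("S", "R"):
        total = sum(c for (a, _), c in pairs.items() if a == current)
        for nxt in ("S", "R"):
            probs[(current, nxt)] = pairs[(current, nxt)] / total if total else 0.0
    return probs

print(transition_probabilities(list("SSRRRSRRSS")))
# probs[("S", "R")] is the scan-to-read transition probability for that page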
Example Reading Sequence 17
Results: Search Task Effect on Reading/Scanning Task effects on transition probabilities S→R and R→S (all subjects & pages). 18
For CPE, searchers were more likely to continue scanning. Searchers adopt different reading strategies for different task types. (Cole, Gwizdka, Liu, Bierig, Belkin & Zhang, 2010)
Results: Search Task Facets and Text Acquisition For highly attended pages. 19 [Charts: total text acquisition on SERPs and content pages, per page; total text acquired on SERPs and content pages.]
Results: Search Task Facets and State Transitions For highly attended pages. 20 [Charts: Read→Scan and Scan→Read state transitions on content pages and on SERPs, per page.]
Task Facets Effects - Summary For highly attended pages 21 (Cole, Gwizdka, Liu, Bierig, Belkin & Zhang, submitted, 2011)
Scan↔Read Transition Probabilities in 2 Experiments Is a person’s tendency to transition read→scan related to their tendency to transition scan→read (i.e., is p related to q)? p ~ 1 − q. Genomics tasks (N=40); journalistic tasks (N=32). Correlation (Spearman ρ): 0.914 and 0.830
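A sketch of how the p ~ 1 − q relation could be checked across participants with a Spearman rank correlation; the per-participant probabilities below are made-up placeholders, not the genomics or journalism data.

import numpy as np
from scipy.stats import spearmanr

p_scan_to_read = np.array([0.42, 0.35, 0.51, 0.29, 0.47])   # one value per participant
q_read_to_scan = np.array([0.61, 0.66, 0.50, 0.72, 0.55])

rho, pval = spearmanr(p_scan_to_read, 1 - q_read_to_scan)
print(f"Spearman rho = {rho:.3f}, p = {pval:.3f}")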
Experiment 1: Conclusions Searchers’ reading/scanning behavior is affected by task. Task facets can be “detected” from eye-tracking data (from reading model properties). Reading models can be built on the fly (during search), so real-time observations of eye movements can be used by adaptive search systems (see the sketch below). Challenge: lack of baseline data about reading models of individuals. 23
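One way a reading model could be maintained on the fly, as the conclusion suggests (and as the editor's note points out, only the recent eye-movement sequence is needed): keep a sliding window of fixation labels and re-estimate transition probabilities as new fixations arrive. The window size below is an illustrative choice, not a value from the study.

from collections import Counter, deque

class OnlineReadingModel:
    # Two-state (S/R) model over a sliding window of recent fixation labels.
    def __init__(self, window=200):
        self.labels = deque(maxlen=window)

    def add(self, label):
        # label is "S" or "R", produced by the fixation classifier
        self.labels.append(label)

    def p(self, current, nxt):
        # P(next == nxt | current) estimated from the current window
        seq = list(self.labels)
        pairs = Counter(zip(seq, seq[1:]))
        total = sum(c for (a, _), c in pairs.items() if a == current)
        return pairs[(current, nxt)] / total if total else 0.0

model = OnlineReadingModel()
for label in "SSRRSRRRSS":
    model.add(label)
print(model.p("S", "R"), model.p("R", "S"))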
Experiment 2: Result List vs. Overview Tag-Cloud 37 participants Everyday information seeking tasks (travel, shopping…) 	- two levels of task complexity Two user interfaces 24 2. Overview UI  (Tag Cloud) 1. List UI
Experiment 2: User Actions in Two Interfaces 25 1. List 2. Overview  Tag Cloud
Experiment 2: Research Questions Does the search results overview benefit users? Task effects? Individual differences -  cognitive ability effects? 26
General Results Search results overview (“tag cloud”) benefited users  made them faster facilitated formulation of more effective queries More complex tasks were indeed more demanding – required more search effort  27 (Gwizdka, Information Research, 2009)
Task and UI and Reading Model differences Complex tasks required more reading effort: longer maximum reading fixation length and more reading fixation regressions. The Overview UI required less effort: scanning more likely (S-S higher; S-R lower; R-S higher); total reading scan path length was shorter, but the total scan path (including scanning) was longer; fewer and shorter fixations, on average, per page visited. 28 [Screenshots: List UI vs. Overview UI]
Task and UI Interaction and Reading model data For complex tasks, UI effect: higher probability of short reading sequences in the Overview UI. For simple tasks, UI effect: shorter reading scan paths per page and fewer fixations per page. Task & UI interaction on speed of reading: for complex tasks, faster reading in the Overview than in the List UI; for simple tasks, faster in the List than in the Overview UI. 29
User Interface Features – Individual Differences Two users, same UI and task 30
Individual Differences – Least Effort? Higher cognitive ability searchers were faster in the Overview UI and on simple tasks (same number of queries). Higher-ability searchers did more in more demanding situations; the higher search effort did not seem to improve task outcomes. 31 For the task complexity factor and working memory (WM): F(144,1)=4.2, p=.042; F(144,1)=3.1, p=.08
Task and Working Memory – Eye-tracking Data High-WM searchers were less likely to keep scanning and had higher reading speed (scan path / total fixation duration). The number and duration of reading sequences differed (borderline: 0.05 < p < 0.1). For high-WM searchers: for complex tasks, more reading; for simple tasks, less reading. For low-WM searchers, no such difference! 32
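A small sketch of the reading-speed measure mentioned above (reading scan-path length divided by total fixation duration); the exact operationalization in the study may differ, and the sample fixations are made up.

import math

def reading_speed(reading_fixations):
    # reading_fixations: list of (x, y, duration_ms) for fixations labelled 'R';
    # returns scan-path length in pixels per second of fixation time.
    path_px = sum(math.dist(reading_fixations[i][:2], reading_fixations[i + 1][:2])
                  for i in range(len(reading_fixations) - 1))
    total_s = sum(f[2] for f in reading_fixations) / 1000.0
    return path_px / total_s if total_s else 0.0

print(reading_speed([(100, 200, 180), (160, 200, 150), (230, 205, 210)]))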
Experiment 2: Conclusions The Overview UI was faster – reflected in some eye-tracking measures. Task complexity differences were reflected in some eye-tracking measures. Some effects of cognitive abilities on interaction, e.g., task & high WM – more effort than needed; opportunistic discovery of information? This “violation” of the least-effort principle is not fully explained yet. 33
Current Project: Can We Implicitly Detect Relevance Decisions? Can we detect when searchers make information relevance decisions? Start with pupillometry: information relevance (Oliveira, Aula, Russell, 2009); low-level decision timing (Einhäuser et al., 2010). Also look at EEG (Emotiv EPOC wireless EEG headset) and GSR. Eye tracking: Tobii T-60 eye-tracker. Funded by a Google Research Award.
Summary & Conclusions Eye tracking enables high-resolution analysis of a searcher’s activity during interactions with information systems. There is more beyond eye-gaze locations with timestamps. Eye-tracking data can support identification of search task types, reflects differences in searcher performance on user interfaces, and reflects individual differences between searchers. High potential for implicit detection of a searcher’s states. 36
Some Challenges High-resolution (low-level) data: how do we create higher-level patterns? How do we detect them computationally? How do we deal with individual differences (baseline data)? 37 (Iqbal & Bailey, 2004) (Terai et al., 2008) (Lorigo et al., 2008)
High-resolution Eye-tracking is Coming Soon to You Eye-tracking technology is declining in price and in 2-3 years could be part of standard displays. Already in luxury cars and semi-trucks (sleep detection); computers with built-in eye-tracking. 38 Tobii / Lenovo proof-of-concept eye-tracking laptop, March 2011
Thank you! Questions? Jacek Gwizdka, contact: http://jsg.tel PoODLE Project: Personalization of the Digital Library Experience, IMLS grant LG-06-07-0105-07, http://comminfo.rutgers.edu/research/poodle or for short: http://bit.ly/poodle_project PoODLE PIs: Nicholas J. Belkin, Jacek Gwizdka, Xiangmin Zhang; Post-Doc: Ralf Bierig; PhD Students: Michael Cole (Reading Models + E-Z Reader algorithm), Jingjing Liu (now Asst. Prof.), Chang Liu

Editor's Notes

1. Tasks varied in several dimensions: complexity, defined as the number of necessary steps needed to achieve the task goal (e.g., identifying an expert and then finding their contact information); the task product (factual vs. intellectual, e.g., fact checking vs. production of a document); the information object (a complete document vs. a document segment); and the nature of the task goal (specific vs. amorphous).
2. Eye-tracking work on reading behavior in information search has mostly analyzed eye-gaze position aggregates ('hot spots'). This does not address the fixation sub-sequences that are true reading behavior.
3. Reading models can be built on the fly: they only require analysis of the recent eye-movement sequence to classify the observed fixations.