Look Before You Link: Eye Tracking in Multiple Coordinated View Visualization.


Presenter: Chris Weaver
BELIV 2010 Workshop


  1. (title slide)
  2. Look Before You Link: Eye Tracking in Multiple Coordinated View Visualization. Chris Weaver, School of Computer Science and the Center for Spatial Analysis, University of Oklahoma. weaver@cs.ou.edu
  3. Pre-filtering, grouping, and cross-filtering. [Diagram: relational-algebra pipelines coordinating multiple views of a movie database (Movies, Genres, Oscars, People) and a node/edge/pack graph pipeline with layout and grouping stages.] Coordinated multiple views are common in visual analysis tools. Elemental forms of coordination (cliquing, drilling, slicing, collecting, forming, encoding, filtering, brushing) are established; compound forms of coordination, such as cross-filtering, are emerging.
  4. Cross-filtered views. Analytic utility arises from navigation and selection in individual views, and in compositions of views, by chaining together sequences of interactions. [Example: Jigsaw's list view.]
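The chaining of selections across views can be sketched abstractly. The model below is a hypothetical illustration (names and structure are assumptions, not Improvise's actual API): each view installs a selection predicate over a shared record set, and the records visible in any one view are those passing the selections of every *other* view, so chained selections compose by predicate intersection.

```java
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustrative record type for the movie-data examples in this talk.
record Movie(String title, String genre, int year) {}

// Hypothetical sketch of cross-filtering. Each view installs a selection
// predicate over a shared record set; the records visible in one view are
// those passing the selections of every *other* view, so chained selections
// compose by predicate intersection.
final class CrossFilter<T> {
    private final List<T> records;
    private final Map<String, Predicate<T>> selections = new LinkedHashMap<>();

    CrossFilter(List<T> records) { this.records = records; }

    // Selection in one view installs (or, with null, clears) its predicate.
    void select(String view, Predicate<T> predicate) {
        if (predicate == null) selections.remove(view);
        else selections.put(view, predicate);
    }

    // Visible records in a view: filtered by all the other views' selections.
    List<T> visibleIn(String view) {
        Predicate<T> combined = selections.entrySet().stream()
                .filter(e -> !e.getKey().equals(view))
                .map(Map.Entry::getValue)
                .reduce(t -> true, Predicate::and);
        return records.stream().filter(combined).collect(Collectors.toList());
    }
}
```

Selecting a genre in one view narrows what a year view shows; selecting a year range on top of that narrows a third view further, which is the chained-sequence behavior described above.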
  5. We're still looking at tool designs mostly in terms of representation, interaction, and process. How does representation shape interaction? How does interaction reflect analytic process?
  6. Coordination is a special kind of interaction: we act here while looking there... on purpose! "Here" and "there" can be pixels/points, shapes/regions, or entire views. (Is coordinated interaction like juggling? Or more like a sobriety test?)
  7. Supplement (not replace) input tracking with eye tracking. Exploit the dual spatial modalities of gaze and motion to analyze interaction patterns. Are entire views more suitable targets for current hardware capabilities? SMI RED 250: temporal rate 250 Hz, latency 10 ms, spatial resolution 0.5° (~10 pixels).
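The "~10 pixels" figure follows from converting the tracker's angular resolution to an on-screen span given a viewing distance and the display's pixel pitch. The sketch below shows the arithmetic; the distance and pitch values in the comments are illustrative assumptions (the slide does not state them), so the resulting pixel count varies with the actual setup.

```java
// Convert a tracker's angular resolution to an on-screen span. The viewing
// distance and pixel pitch are assumptions for illustration; the slide's
// "~10 pixels" depends on the actual monitor and seating distance.
final class GazeResolution {
    // Chord subtended on the screen by `degrees` at `distanceMm` viewing distance.
    static double spanMm(double distanceMm, double degrees) {
        return 2.0 * distanceMm * Math.tan(Math.toRadians(degrees / 2.0));
    }

    // The same span expressed in pixels for a display with the given pixel pitch.
    static double spanPixels(double distanceMm, double degrees, double pixelPitchMm) {
        return spanMm(distanceMm, degrees) / pixelPitchMm;
    }
}
```

For example, 0.5° at an assumed 600 mm viewing distance covers about 5.2 mm of screen, roughly 19 pixels on an assumed 0.28 mm pixel pitch; a coarser display or shorter distance brings that closer to the slide's ~10 pixels.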
  8. Cinegraph (visualization). High-dimensional drill-down into people, genres, awards, release dates, and box-office characteristics of mainstream movies. Data sources: www.imdb.com and the InfoVis 2007 Contest co-chairs.
  9. Cinegraph (metavisualization).
  10. (image-only slide)
  11. So what are we planning to do?
      • Beat the hardware into submission (sigh...)
      • Implement a Java API for calibration and data collection
      • Splice gaze data into the input event stream consumed by views
      • Expose gaze data to the Improvise transformation pipeline/query language
      • Metavisualize aggregated gazes in the multiview context
      • Precompute query ensembles for likely future paths of interaction across coordinations?
      • Think about head-to-head collaborative coordination (we have two trackers)
      How far can we go looking at the view level?
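The "splice gaze data into the input event stream" step amounts to a timestamp-ordered merge of two event sequences, so views consume one chronological stream. The event type and field names below are hypothetical, not Improvise's actual event model:

```java
import java.util.*;

// Hypothetical event type; field names are assumptions for illustration.
record InputEvent(long timestampMs, String kind, int x, int y) {}

final class GazeSplicer {
    // Merge a gaze sample stream into the ordinary input event stream by
    // timestamp, so views consume one chronologically ordered sequence.
    // Both input lists are assumed to be sorted by timestamp already.
    static List<InputEvent> splice(List<InputEvent> input, List<InputEvent> gaze) {
        List<InputEvent> merged = new ArrayList<>(input.size() + gaze.size());
        int i = 0, g = 0;
        while (i < input.size() && g < gaze.size()) {
            if (input.get(i).timestampMs() <= gaze.get(g).timestampMs())
                merged.add(input.get(i++));
            else
                merged.add(gaze.get(g++));
        }
        merged.addAll(input.subList(i, input.size()));
        merged.addAll(gaze.subList(g, gaze.size()));
        return merged;
    }
}
```

With the gaze samples in the same stream as mouse events, downstream view code (and the transformation pipeline) can treat gaze as just another input modality.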
  12. Thanks!