Psych590 Presentation Interest Point Sujoy

  1. Interest points: Cost-effective methods for capturing overt visual attention
     Sujoy Kumar Chowdhury
  2. Research area covered
     • Selective visual attention
     • Saliency map
     • Interest point
     • Eye tracking: interpretation, comparison
     • Low-cost usability techniques
     • Cost: time, money, deployment overhead
  3. Selective visual attention
     • Stimulus-driven: automatic processing, subconscious, bottom-up, fast, exogenous
     • Goal-driven: conscious, top-down, slow, endogenous
     • Overt attention: co-located with fixation, observable
     • Covert attention: not co-located with fixation
     • Saccade
     • Fixation
  4. IM, SM, FM, (IP)
     • IM = interest point map (performance test)
     • SM = saliency map (predictive)
     • FM = fixation map (performance test)
     • IP = interest point plot (performance test)
     • Eye-tracking deliverables: heat-map, gaze-plot
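An interest point map (IM) can be built the same way a fixation map (FM) is: smooth each participant-reported point with a Gaussian kernel and accumulate. The following is a minimal pure-Python sketch; the function name, grid size, and click coordinates are hypothetical, not from the presentation.

```python
import math

def interest_point_map(points, width, height, sigma=2.0):
    # Accumulate a Gaussian "blob" around each clicked interest point,
    # analogous to how fixation heat-maps are built from eye-tracking data.
    grid = [[0.0] * width for _ in range(height)]
    for px, py in points:
        for y in range(height):
            for x in range(width):
                d2 = (x - px) ** 2 + (y - py) ** 2
                grid[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    return grid

# Five clicks from one hypothetical participant on a 16x16 grid
im = interest_point_map([(3, 3), (3, 4), (10, 2), (12, 12), (6, 8)], 16, 16)
```

Two nearby clicks reinforce each other (their Gaussians overlap), which is exactly the aggregation behaviour a heat-map deliverable relies on.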
  5. Qualitative, quantitative, formative, summative
     • Number of users: (30-39, 5-6)
     • Monetary cost: cheap
     • When it is done
     • Time requirement: quick
  6. How many users
     Kara Pernice & Jakob Nielsen (2009)
  7. Think-aloud: concurrent vs. retrospective
     • Concurrent: may slow the user down; data not representative; not usually done with eye tracking (Johansen & Hansen, 2006)
     • Retrospective: user omits or forgets data
  8. Key references
     • Masciocchi, C. M., Mihalas, S., Parkhurst, D., & Niebur, E. (2009). Everyone knows what is interesting: Salient locations which should be fixated. Journal of Vision, 9(11), 1-22.
     • Pernice, K., & Nielsen, J. (2009). Eyetracking Methodology: 65 Guidelines for How to Conduct and Evaluate Usability Studies Using Eyetracking.
     • Johansen, S. A., & Hansen, J. P. (2006). Do we need eye trackers to tell where people look? In CHI '06 extended abstracts on Human factors in computing systems (pp. 923-928). Montréal, Québec, Canada.
  9.-15. [Figure slides reproducing results from Masciocchi, Mihalas, Parkhurst, & Niebur (2009); slide 11 has no caption]
  16. Variability in heat-map due to number of users
      [Figure: Pernice & Nielsen (2009)]
  17. Variability in heat-map
      [Figure: Pernice & Nielsen (2009)]
  18. Variability in heat-map due to number of users
  19. Why we need a better deliverable than the heat-map
      [Figure: Pernice & Nielsen (2009)]
  20. Why we need a better deliverable than the heat-map
      [Figure: Pernice & Nielsen (2009)]
  21. Why we need a better deliverable than the heat-map: recommendations
      • 30 + 9 = 39 users required for a heat-map with representative data (85%)
      • 24% extra users (9 users) required to account for eye-tracking data loss
      • Better to watch live eye tracking and listen to the user thinking aloud
      • Good for slow-motion gaze replay later
      • Heat-maps can still be used, but only as illustration, not as primary data
      • Test with a small number of users (6); test more frequently
      Kara Pernice & Jakob Nielsen (2009)
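The over-recruitment arithmetic above generalizes: divide the usable sample you need by the fraction of recruits whose data survives. A one-line sketch, treating the extra recruits as covering a roughly 23% loss fraction (an assumption on my part; the slide quotes 24% extra users, and the function name is hypothetical):

```python
import math

def users_to_recruit(usable_needed, loss_rate):
    # Inflate the target sample so that, after losing `loss_rate` of
    # recruits to calibration or tracking failures, enough usable data remains.
    return math.ceil(usable_needed / (1 - loss_rate))

print(users_to_recruit(30, 0.23))  # → 39
```

With a 30-user target and ~23% loss this reproduces the 39-recruit recommendation (9 extra users).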
  22. Recommendations
      "The way to happiness (or at least a high ROI) is to conserve your budget and invest most of it in discount usability methods. Test a small number of users in each study and rely on qualitative analysis and your own insight instead of chasing overly expensive quantitative data. The money you save can be spent on running many more studies. The two most fruitful things to test are your competitors' sites and more versions of your own site. Use iterative design to try out a bigger range of design possibilities, polishing the usability as you go, instead of blowing your entire budget on one big study."
      Kara Pernice & Jakob Nielsen (2009)
  23. Where did you look? Do we need eye trackers?
      Johansen, S. A., & Hansen, J. P. (2006)
  24. Where did you look? Do we need eye trackers?
      • 10 users, 17 web designers, 8 web pages
      • Self-reported gaze pattern (by user); predicted gaze pattern (by web designer)
      • Users could reliably remember 70% of the web elements they had actually seen
      • Web designers could only predict 46% of the elements typically seen (squint test)
      • No difference between simple and complex web pages in the number of remembered items
      • Users were not good at remembering Area of Interest (AOI) sequence
      • Memory difference between the logo and other web elements
      Johansen, S. A., & Hansen, J. P. (2006)
  25. Comments
      • Users repeated the eye movements; web designers used paper
      • Users might have thought: "Better look at things I can recall"
      • N-gram analysis; Levenshtein distance: 16 (SD = 12.8)
      Johansen, S. A., & Hansen, J. P. (2006)
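The Levenshtein distance quoted above measures how many edits separate the actual AOI visit order from the self-reported one. A self-contained sketch of the standard dynamic-programming edit distance; the AOI label strings are hypothetical examples, not data from the study:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two sequences,
    # here used on AOI-label strings (one character per visited AOI).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute (free if equal)
        prev = curr
    return prev[-1]

# Hypothetical AOI sequences: actual gaze order vs. self-reported order
print(levenshtein("ABCDE", "ACBDE"))  # → 2
```

Swapping two adjacent AOIs costs 2 edits under plain Levenshtein (it has no transposition operation), so even small ordering errors accumulate quickly, which is consistent with users being poor at recalling AOI sequence.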
  26. What to ask
      • What are the most interesting points?
      • Where would you look to do this?
      • Where did you look?
      Masciocchi, Mihalas, Parkhurst, & Niebur (2009); Johansen & Hansen (2006)
  27. Scan-path: saliency model and goal dependence
      • Foulsham, T., & Underwood, G. (2008). What can saliency models predict about eye movements?
      • Noton, D., & Stark, L. (1971). Scan-path theory: top-down recapitulation
  28. Interesting objects are visually salient
      Elazary, L., & Itti, L. (2008), N = 78
  29. StomperScrutinizer: a squint test without squinting
  30. ChalkMark: first-impression testing
  31. Crazy Egg
  32. Five Second Test
  33. Interest point test: unblurred
      [Image with five numbered click markers: 1X-5X]
  34. Interest point test: blurred/squinted
      [Image with five numbered click markers: 1X-5X]
  35. Interest point plot: unblurred/unsquinted
  36. Interest point plot: blurred/squinted
  37. Interest point heat-map: unblurred/unsquinted
      (This placeholder image is computationally generated or predictive, NOT based on a performance test)
  38. Interest point heat-map: blurred/squinted
  39. Variant: task interest point test, blurred (mark five probable points where the price could be located)
      [Image with five numbered click markers: 1X-5X]
  40. Variant: task interest point test, unblurred (mark five points where you would look to get the price)
      [Image with five numbered click markers: 1X-5X]
  41. Interest-point-based usability tests: recommendations based on hypotheses
      • Sequence: blurred/squinted tests before unblurred/unsquinted tests; retrospective
      • Tasks: generic exploratory interest point test before the task-centric test
      • Questions: "Look freely" → "Where did you look?" → "Where would you look to do this?" → "What are the most interesting points?"
      • Key report: based on the interest point plot (qualitative, formative, 5 users) rather than the heat-map (quantitative, summative, 30 users)
      • Implementation: static web app → dynamic URL; provision for Areas of Interest (AOI)
  42. Statistical comparison (between subjects)
      • ET vs. interest point plot (IP)
      • ET vs. interest point map (IM)
      • IP vs. IM (number of users)
      • Squinted vs. unsquinted
      • Exploratory vs. task-centric
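The pairwise comparisons above (ET vs. IP, ET vs. IM, etc.) need a similarity score between two attention maps of the same stimulus. One common choice is the Pearson correlation of the flattened maps; this is my suggested sketch, not a method stated in the presentation, and the small 2x2 maps are toy data.

```python
import math

def map_correlation(m1, m2):
    # Pearson correlation between two equal-shape attention maps
    # (e.g. an interest point map vs. a fixation map), computed on the
    # flattened cell values. Assumes neither map is constant.
    xs = [v for row in m1 for v in row]
    ys = [v for row in m2 for v in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

a = [[0, 1], [2, 3]]
b = [[0, 2], [4, 6]]      # b is a linear rescaling of a
print(map_correlation(a, b))  # ≈ 1.0
```

A score near 1 means the two methods highlight the same regions; running this between-subjects across conditions (squinted vs. unsquinted, exploratory vs. task-centric) supports exactly the comparisons slide 42 lists.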
