Behind Eyetracking: How the Brain Utilizes Its Eyes (Dixon Cleveland)


Given at UXPA-DC's User Focus Conference, Oct. 19, 2012

  • Eyes are part of the brain, not just connected to it. Nature poked holes in our skin to let photons in, and there are lots of interesting design tradeoffs to maximize the use of those photons. There are four functional parts to our vision systems: 1) the eyes themselves, wonderful instruments that collect photons and create images of the physical environment we live in; 2) the image processing done in the visual cortex, back in the occipital lobe; 3) interpretation of the environment and decisions about how to interact with it; and 4) feedback control of the eye to provide the information we want most.
  • Here is the eye: the front-end instrument, the sensor, that takes in the photons and makes usable images of our outside world. It sees short, medium, and long distances away from our bodies, giving a deep range to environmental scope. The key elements are the cornea, lens, and retina. The cornea lets photons in but keeps physical stuff out. The lens puts a fine focus on the image, accommodating a large range of distances. The retina converts photons to the electrical impulses that make up the images our brain processes.
  • For eyetracking enthusiasts, one of the most serendipitous features of our vision systems is that eyes are excellent pointers. We point our eyes right at what we want to look at, and we do it with high precision. This is what is so important for usability: we always look at what we think is most important to us at the time. And eyetrackers tell us what people are looking at, so we can infer what is most important to them.
  • So why are our eyes such good pointers? One way to look at it is from an engineering point of view. (I am an engineer, so I like to look at it this way.) The eyes have two conflicting objectives: 1) take in a wide field of view, so we keep the big picture and detect predators in our peripheral vision; and 2) see detail, so we can sweat the small stuff when we have to. Nature solves the problem by 1) creating a wide peripheral vision with low resolution, and 2) creating a small central vision with very high resolution. The very center of our central vision has 150 times the resolution of our peripheral vision, so when we want to see something clearly, we have to point our eyes right at the thing. And since nature had to punch a hole in our skin to let photons in, our eyes are visible to the outside world. Since your eyes are visible to mine, I can see what you choose to look at. We humans are pretty good eyetrackers, and we make a lot of use of our interpersonal eyetracking.
  • Here is another look at the cone density at the center of our macular regions. Remember, the foveola has 150 times the cone density of our peripheral vision. But this high-resolution foveola only spans a 1.2-degree range, so we have to point our eyes to within at least 0.6 degrees of the target if we want to use the foveola. We actually do better than that. There have not been really good studies of how precisely we point our eyes, but consider this: to make the best use of the foveola, we would generally want it to cover the target, and since targets generally have a finite width, we would want to center the foveola on the target. Also, there are about 100 cones across the foveola, and to get a good image the eye muscles have to hold the eye still enough to prevent blur, or the value of all those cones would be lost. So our ocular muscles have to hold the eye still to within about 1/50th of a degree (microsaccades). And if the muscles are this stable, why wouldn't the eye use this precision in its positioning function? We do not really know the answer, but my guess is that most eyes point repeatedly to within about a quarter to a tenth of a degree. This is a lot better than most eyetrackers can measure eye angles these days, so we engineers have a long way to go yet, but it also means that the value of eyetracking will continue to improve as the technology evolves.
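The numbers in these notes imply a simple back-of-the-envelope calculation of the foveola's angular resolution. Here is a quick sketch using only the values stated above (1.2-degree span, ~100 cones across, 1/50th-degree stillness); the variable names are illustrative:

```python
# Back-of-envelope numbers from the notes: the foveola spans ~1.2 degrees
# and holds ~100 cones across, so each cone subtends roughly 0.012 degrees.
foveola_span_deg = 1.2
cones_across = 100

cone_spacing_deg = foveola_span_deg / cones_across   # ~0.012 deg per cone
pointing_tolerance_deg = foveola_span_deg / 2        # must land within 0.6 deg
stillness_deg = 1.0 / 50.0                           # ~0.02 deg to avoid blur

print(f"cone spacing:       {cone_spacing_deg:.3f} deg")
print(f"pointing tolerance: {pointing_tolerance_deg:.1f} deg")
print(f"required stillness: {stillness_deg:.3f} deg")
```

Note that the required stillness (~0.02 deg) is on the order of the cone spacing itself, which is exactly the argument the notes make: holding the eye any less still would blur detail across neighboring cones.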
  • So here are the marvelous muscles that point the eye so well. Because of the importance of our sight, our ocular muscles are the most precise muscles we have in our bodies. They are also the strongest with respect to the size of the load they move. And in addition to being precise and strong, they are fast. When you move your gaze from one point to the next, your eyes are not much good to you; the image is all blurry. So the eyes have to move fast to minimize down time.
  • The rapid eye motions between fixations are called saccades. They take between 20 and 80 milliseconds, depending on how large the saccade is. You can see the snappy, waste-no-time motion of the saccades in this slide. Then, when your eye is still, it has to stay that way long enough to get enough photons in to develop a clear picture. These periods are called fixations. In these time traces, the eyes seem to jitter around a lot. This jitter is not real eye motion, however; it is noise in the eyetracking instrument. This is actually one of the cleanest raw tracks you can get from today's eyetracking instruments, which shows how far the …
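The fixation/saccade distinction in a gaze trace like this one can be illustrated with a simple velocity-threshold classifier (often called I-VT). This is a minimal sketch, not the method used by any particular eyetracker; the 30 deg/s threshold and 500 Hz sample rate are assumed example values:

```python
import numpy as np

def classify_ivt(gaze_deg, dt_s, vel_thresh_deg_s=30.0):
    """Label each gaze sample as fixation (False) or saccade (True)
    using a simple velocity threshold (I-VT).
    gaze_deg: (N, 2) array of gaze angles in degrees.
    dt_s: sample interval in seconds."""
    # Angular speed between consecutive samples, in deg/s.
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / dt_s
    # First sample has no preceding velocity; call it a fixation.
    return np.concatenate([[False], velocity > vel_thresh_deg_s])

# Synthetic track: a fixation, one fast 5-degree jump (saccade), another fixation.
dt = 0.002  # 500 Hz sampling
gaze = np.array([[0.0, 0.0]] * 5 + [[5.0, 0.0]] * 5)
labels = classify_ivt(gaze, dt)
print(labels)  # only the jump sample exceeds the velocity threshold
```

In practice the instrument noise mentioned above is why real classifiers smooth the velocity signal and enforce minimum fixation durations before labeling events.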
  • Doug Munoz, Queen's University; Rich Krauzlis, Salk Institute, La Jolla
  • Accurate calculation of the gazepoint depends on both eyeball position and orientation data in the “world” space. The eye's location is typically defined with respect to the eye's 1st nodal point, through which both the optic and visual vectors pass. The eye's gaze orientation is defined as its gaze vector. The gaze line is defined as a projection of the gaze vector from the eye location. The gaze point is defined as the point where the gaze line intercepts the environment, which for a 2D screen is the monitor surface plane. As you can see from this geometry, errors in the estimates of either the eye position or the gaze vector lead to errors in the estimated gazepoint.
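The gazepoint geometry described above (a gaze line projected from the eye location, intercepting the monitor plane) reduces to a standard ray-plane intersection. Here is an illustrative sketch; the eye position, gaze vector, and screen plane are all assumed example values, not parameters from any real eyetracker:

```python
import numpy as np

def gaze_point_on_screen(eye_pos, gaze_vec, plane_point, plane_normal):
    """Intersect the gaze line (eye_pos + t * gaze_vec) with the
    monitor plane defined by a point on it and its normal.
    Returns the 3D gazepoint, or None if the gaze line is
    parallel to the screen."""
    gaze_vec = gaze_vec / np.linalg.norm(gaze_vec)
    denom = np.dot(plane_normal, gaze_vec)
    if abs(denom) < 1e-9:
        return None  # gaze line never meets the plane
    t = np.dot(plane_normal, plane_point - eye_pos) / denom
    return eye_pos + t * gaze_vec

# Example: eye 0.6 m in front of a screen lying in the z = 0 plane,
# looking slightly down and to the right.
eye = np.array([0.0, 0.0, 0.6])
vec = np.array([0.1, -0.1, -1.0])
point = gaze_point_on_screen(eye, vec,
                             np.array([0.0, 0.0, 0.0]),   # point on screen
                             np.array([0.0, 0.0, 1.0]))   # screen normal
print(point)  # -> [ 0.06 -0.06  0.  ]
```

The same geometry also makes the slide's error argument concrete: perturbing either `eye` or `vec` shifts `point`, and the gaze-vector error grows with the eye-to-screen distance.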

    1. Behind Eyetracking: How the Brain Utilizes Its Eyes (Dixon Cleveland, LC Technologies, Inc.)
    2. Visual Pathways
    3. Eye Cross Section
    4. Eyes are Excellent Pointers (slide labels: Cone Density, Rod Density, Visual Axis, Optic Axis, Foveola: 1.2 deg, ~100 cones across, Eye Anatomy)
    5. Rod and Cone Density
    6. Cone Density
    7. Macular Region
    8. Ocular Muscles
    9. Fixations and Saccades
    10. Visual Pathways in the Brain
    11. Perhaps the most important reason eyetracking will play an ever more crucial role in UX is that our eye activity is driven by our brains. The old cliché that our eyes provide a window to our minds is well substantiated physiologically and cognitively. By providing insight into brain activity, monitoring eye activity has significant potential to help programs interact with people in more naturally human ways, both in general program design and in on-line human interaction. This discussion summarizes the physiological and cognitive processes underlying the brain's control of the eyes, which provides the foundation for using eyetracking in UX.
    12. Old cliché: the eyes are a window to the brain. The eyes are the main cognitive input to the brain, the main channel through which we perceive the outside world. The eyes can only look at one place at a time, so we have to move them around to get targeted information. Our brains drive our eyes in a continuous feedback loop that runs about four times per second.
    13. Gazepoint Calculation Model (slide labels: Gaze Vector, Gaze Line, Gaze Point, Eye Location)