Real-Time Simulation of Impaired Vision in Naturalistic Settings with Gaze-Contingent Display
Abstract
Effective management and treatment of glaucoma and other visual diseases depend on early diagnosis. The ability to simulate the visual consequences of disease offers potential benefits for public awareness of signs and symptoms, and for patient and clinician education. Experiments could identify the behavioural changes, for example during driving, that people use to compensate in the early stages of the disease's development. Furthermore, by understanding how visual field defects affect performance on visual tasks, we can help develop new strategies for coping with diseases such as macular degeneration.
We have developed a Gaze-Contingent Display (GCD) system that simulates an arbitrary visual field in a virtual environment, allowing investigators to simulate visual field defects as well as to educate the public and health care professionals about visual health and the warning signs of ocular and visual dysfunction and disease.
The system consists of three primary components.
Eye-Head Tracking System
A vision-based eye-head tracking system determines gaze direction and position.
Head Tracker (IS-900) – a hybrid acoustic-inertial six-degree-of-freedom (6DOF) position and orientation tracking system.
Eye Tracker Subsystem (Vision2000) – a video eye tracking system that uses the pupil/corneal reflection technique to obtain accurate measurements of eye position.
Measuring eye and head position and orientation allows estimation of the line of sight and the point of regard.
References
1. Duchowski, A. T. A breadth-first survey of eye tracking applications. Behavior Research Methods, Instruments, and Computers (BRMIC) 34(4), 455–470 (2002).
2. Geisler, W. S., Perry, J. S. Real-time simulation of arbitrary visual fields. Proceedings of the 2002 Symposium on Eye Tracking Research & Applications (ETRA), 83–87 (2002).
3. Huang, H. A Calibrated Combined Head-Eye Tracking System. York University (2004).
4. Perry, J. S., Geisler, W. S. Gaze-contingent real-time simulation of arbitrary visual fields. SPIE (2002).
5. Robinson, M., Laurence, J., Zacher, J., Hogue, A., Allison, R., Harris, L. R., Jenkin, M., Stuerzlinger, W. Growing IVY: Building the Immersive Visual environment at York. Proceedings of the 11th International Conference on Artificial Reality and Telexistence (ICAT), Tokyo, Japan, Dec. 5–7, 2001.
Virtual Environment
Virtual environments for experiments can be created with script files and imported 3D objects.
Navigation can be achieved through a variety of input devices, for example a Logitech Driving Force Pro steering wheel.
A custom-developed API, the Virtual Environment library (VE), is used to support system development. It provides a framework for screen geometry and positioning, with parallel rendering of screens on multiprocessor machines, a general input device mechanism, an event-based programming model, and a simple language for specifying the physical setup of the environment, including screens, input devices and user-specific parameters [5].
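To illustrate the flavour of an event-based input mechanism, the following minimal C++ sketch registers a handler for a steering-wheel device and dispatches input samples to it. This is not the actual VE API; the class, method and device names are hypothetical.

#include <cstdio>
#include <functional>
#include <map>
#include <string>

// Toy event dispatcher: handlers are registered per device name and invoked
// with that device's latest input value. Purely illustrative; not the VE API.
class EventLoop {
public:
    using Handler = std::function<void(double)>;

    void onInput(const std::string& device, Handler h) { handlers_[device] = h; }

    void dispatch(const std::string& device, double value) {
        auto it = handlers_.find(device);
        if (it != handlers_.end()) it->second(value);
    }

private:
    std::map<std::string, Handler> handlers_;
};

int main() {
    EventLoop loop;
    double heading = 0.0;

    // Turn the virtual camera in response to (hypothetical) steering-wheel input.
    loop.onInput("steering_wheel", [&](double angle) {
        heading += 0.5 * angle;                 // arbitrary steering gain
        std::printf("heading = %.2f degrees\n", heading);
    });

    // Simulated samples standing in for real device polling.
    loop.dispatch("steering_wheel", 2.0);
    loop.dispatch("steering_wheel", -1.0);
}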
Figure 1 – System diagram: the EL-MAR Vision2000 eye tracker and the InterSense IS-900 head tracker (with its tracker station) feed the gaze-computation algorithm, which produces the on-screen point of regard (POR).
Figure 3 – R: resolution map representing the visual field of a glaucoma patient [4]; I0: original image; B0–B4: different levels of blur based on the threshold limits of the resolution map; F: final image that participants see after all blur levels are combined.
Conclusion
The system:
• Creates an experimental environment to study the
effects of low vision on everyday tasks such as driving
and navigation.
• Provides an opportunity to study behavioural patterns
in simulated early stages of glaucoma and macular
degeneration.
Figure 4 – R: resolution map representing the visual field of a glaucoma patient [4]; (a–e): examples of different points of gaze during navigation in the virtual scene.
Arbitrary Visual Field Simulation
The simulation of an arbitrary visual field is implemented with OpenGL and OpenGL Shading Language capabilities and techniques supported by the GPU (NVIDIA), which allows fast real-time performance.
During each frame the following actions are executed:
1. A high-resolution image is rendered from the virtual camera.
2. A multiresolution pyramid of resized copies of the original image (mipmaps) is constructed.
3. The resolution map is aligned with the gaze direction (i.e. the point of regard), which is estimated by the head-eye tracking system.
4. Different blur levels are achieved by combining different levels of the multiresolution pyramid with Gaussian blur, based on the intensity of the resolution map at each pixel position (see the sketch after this list).
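For illustration, the following CPU-side C++ sketch reproduces the per-pixel blending of step 4 on a toy image: it builds a small stack of blur levels, evaluates a Gaussian resolution map centred on the point of regard (a stand-in for a measured patient field), and interpolates between the two nearest levels at each pixel. In the actual system this blending is performed on the GPU against mipmap levels; the code below is only a sketch with illustrative names and parameters.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Minimal grayscale image (row-major, values in [0,1]). Illustrative only.
struct Image {
    int w, h;
    std::vector<float> px;
    float at(int x, int y) const {
        return px[std::clamp(y, 0, h - 1) * w + std::clamp(x, 0, w - 1)];
    }
};

// Stand-in for sampling a mipmap level: a successively blurrier full-size
// copy of the source, made by averaging a (2k+1)x(2k+1) neighbourhood.
Image blurLevel(const Image& src, int k) {
    Image out{src.w, src.h, std::vector<float>(src.px.size())};
    for (int y = 0; y < src.h; ++y)
        for (int x = 0; x < src.w; ++x) {
            float sum = 0.0f; int n = 0;
            for (int dy = -k; dy <= k; ++dy)
                for (int dx = -k; dx <= k; ++dx) { sum += src.at(x + dx, y + dy); ++n; }
            out.px[y * src.w + x] = sum / n;
        }
    return out;
}

int main() {
    // Toy 64x64 source image: a vertical grating.
    Image src{64, 64, std::vector<float>(64 * 64)};
    for (int y = 0; y < 64; ++y)
        for (int x = 0; x < 64; ++x)
            src.px[y * 64 + x] = (x / 4) % 2 ? 1.0f : 0.0f;

    // Pyramid of blur levels B0..B4 (B0 = original image I0).
    std::vector<Image> levels = {src, blurLevel(src, 1), blurLevel(src, 2),
                                 blurLevel(src, 4), blurLevel(src, 8)};

    // Resolution map centred on the point of regard: full resolution at the
    // gaze point, falling off with eccentricity (a Gaussian stand-in for a
    // measured patient visual field).
    int gx = 20, gy = 32;                      // point of regard in pixels
    float sigma = 12.0f;

    Image out{64, 64, std::vector<float>(64 * 64)};
    for (int y = 0; y < 64; ++y)
        for (int x = 0; x < 64; ++x) {
            float r2 = float((x - gx) * (x - gx) + (y - gy) * (y - gy));
            float res = std::exp(-r2 / (2.0f * sigma * sigma));   // 1 = sharp, 0 = most blurred
            // Map resolution to a fractional pyramid level and blend the two
            // nearest levels, as a fragment shader would blend mipmap samples.
            float level = (1.0f - res) * (levels.size() - 1);
            int lo = int(std::floor(level));
            int hi = std::min(lo + 1, int(levels.size()) - 1);
            float t = level - lo;
            out.px[y * 64 + x] = (1 - t) * levels[lo].at(x, y) + t * levels[hi].at(x, y);
        }

    std::printf("centre pixel %.2f, periphery pixel %.2f\n",
                out.at(gx, gy), out.at(60, 5));
}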
Figure 2 – Coordinate frames of the combined system and their relationship (WCF, TSCF, ECF, DCF, DWM; IS-900 SoniStrips; line of sight; POR).