Single Gaze Gestures


Emilie Møllenbach (Loughborough University, e.mollenbach@lboro.ac.uk), Martin Lillholm (Nordic Bioscience Imaging, mli@nordicbioscience.com), Alastair Gale (Loughborough University, a.g.gale@lboro.ac.uk), John Paulin Hansen (IT University Denmark, paulin@itu.dk)

Abstract

This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were evaluated on two eye tracking devices (Tobii/QuickGlance (QG)). The main findings show that there is a significant difference in selection times between long and short SGGs, between vertical and horizontal selections, as well as between the different tracking systems.

CR Categories: H5.2 User Interfaces: Input devices and strategies (e.g., mouse, touch-screen); Interaction styles (e.g., commands, menus, forms, direct manipulation)

Keywords: Gaze Interaction, Gaze Gestures, Interaction Design

1 Introduction

Eye trackers offer access to specialized software which enables people with motor-skill impairments to interact with computers, giving them access to communication aids, games etc. Various physical conditions greatly affect a user's ability to control gaze-based interfaces: erratic eye movements in the form of large jitters can make fixations difficult, being restricted to horizontal or vertical eye movements limits control options, and spasms are also common. As a consequence, gaze-based selection methods need to be noise tolerant, easily repeatable and sustainable. Single gestures are proposed as a robust gaze selection method which can potentially accommodate the needs of motor-impaired users by being cognitively and physiologically easy to sustain. They seem particularly well suited for top-level navigation, i.e. switching between applications, returning to a default state, and exit, return and space functions. Single gestures by hand, i.e. swiping, have become increasingly popular for navigating mobile devices. Single Gaze Gestures (SGGs) have the potential to become a generic input method alongside dwell-time selection, which could increase the flexibility and usability of gaze-only applications. The benefits of SGGs as a selection method should be:

• Speed: Saccades can cover a 1° to 40° visual angle and last between 30-120 ms [Duchowski 2003]. This is substantially faster than a standard dwell-time selection of approximately 300-500 ms [Jacob 1991].

• Screen real estate: SGGs need not take up much, if any, screen space. Transparent gesture initiation and completion fields would allow more on-screen information space to be unaffected by gaze.

• Avoiding the 'Midas Touch' issue [Jacob 1991; Duchowski 2003]: the sequential nature of gestures means that the initial point of gaze is of no consequence.

However, gesture interaction comes with its own set of issues, the main one being faulty selections caused by inspection eye movements; this is the gaze-gesture version of the Midas Touch problem. Repeating very complex gestures can also be difficult. The following should be considered when implementing gaze gestures:

• The potential overlap between selection and inspection eye movements should be eliminated or minimized.

• Errors should be easy to correct and their consequences minimized.
2 Gestures in HCI

Gaze gesture research has primarily focused on translating gestures designed for manual/stylus input to gaze interaction. The main similarity between gaze and stylus input is that both make use of a single pointer. Perlin's Quikwriting [Perlin 1998] has served as inspiration for many gaze gesture based systems for two reasons: (1) the stylus does not need to be lifted from the screen, and (2) the motions used are continuous. These characteristics seem to mirror the requirements for a gaze-only input [Porta et al. 2008]. The main issue with this direct translation from manual/stylus control to a gaze-based approach is that, unlike the continuous and permanent use of a stylus, gaze does not afford separation of inspection and navigation tasks.

2.1 Gaze Gestures as Text Input

There have generally been two approaches to gaze gestures in text input: (1) gestures as deliberate character representations and (2) gestures as interface controls.

EyeWrite (Figure 1a) and EyeS (Figure 1b) are examples of gesture-based text input systems where each gesture is a unique character representation. EyeWrite was based on a PDA text input system called EdgeWrite: letters are formed by combining the four corners of a square in various ways. An experiment showed that the complexity of the alphabet caused difficulty [Wobbrock et al. 2008].
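To make the corner-sequence scheme concrete, the following Java sketch shows how such an alphabet lookup could work. The corner labels and the example sequences are hypothetical illustrations, not the actual EyeWrite/EdgeWrite alphabet.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Minimal sketch of an EdgeWrite-style gesture alphabet: a letter is
 * recognized from the sequence of square corners that gaze visits.
 * The sequences below are hypothetical, not the published alphabet.
 */
class GestureAlphabet {

    enum Corner { TOP_LEFT, TOP_RIGHT, BOTTOM_LEFT, BOTTOM_RIGHT }

    private final Map<List<Corner>, Character> alphabet = new HashMap<>();

    GestureAlphabet() {
        // Hypothetical entries; a real alphabet defines one sequence per letter.
        alphabet.put(List.of(Corner.TOP_LEFT, Corner.BOTTOM_LEFT, Corner.BOTTOM_RIGHT), 'l');
        alphabet.put(List.of(Corner.TOP_LEFT, Corner.TOP_RIGHT, Corner.BOTTOM_LEFT), 'z');
    }

    /** Returns the recognized character, or null if the sequence matches no letter. */
    Character recognize(List<Corner> visitedCorners) {
        return alphabet.get(visitedCorners);
    }
}
```

The lookup keys on the whole corner sequence, which is what makes large alphabets costly for users: every additional letter is another sequence to memorize and reproduce.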
Figure 1. a: Gaze gesture in the EyeWrite system [Wobbrock et al. 2008]; b: gaze gestures in the EyeS system [Porta et al. 2008].

EyeS was a similar system. Here, single characters were produced by fixating on hotspots in various sequences, and the gaze gestures were designed to resemble the shape of the letter being completed [Porta et al. 2008]. In general, research has shown this approach to gaze gestures to be problematic: the complexity and range of gestures required for all letters and text-editing functions cause a heavy physiological and cognitive load. SGGs are an attempt to simplify gestures, making them robust and reliable while keeping the cognitive load low.

The second approach ties gestures to information visualization. Urbina and Huckauf [2007] presented three such interface designs. StarWrite allowed the user to drag letters from a half-circle onto a text field. The pEYEdit interface implemented expanding pie menus: each slice contained a group of letters and, when selected, expanded into a new pie in which each slice held a single letter, so the appropriate character could be chosen. Finally, in IWrite the characters were placed in a frame and selection was completed by a short saccade from the intended character to the outer frame, which functioned as an on-screen button.

Another visualization-based gesture interface was presented by Bee and André [2008]: an adapted version of Quikwriting [Perlin 1998]. They argued that continuous writing is the text entry method best suited to gaze.

The main benefit of a visualization-based approach is that the cognitive load is lower than with purely memory-based gesture alphabets, because the interface guides the user through the selection process. However, these systems demand considerable precision from both the eye tracking system and the user. Dynamic visualization also quickly overloads the bi-directional channel of gaze, requiring a lot of inspection in order to make the correct selection, which further aggravates the inspection/selection problem; finally, visualizations take up screen space.

2.2 Gaze Gestures in Computer Games

Avatar-based computer games (e.g. MMORPGs, Massively Multiplayer Online Role-Playing Games) often require the user to deal with multiple tasks simultaneously, which is a challenge for a mono-modal input such as gaze. The major incentive, allowing people with severe disabilities to stand on an equal footing in virtual communities, has been addressed by Istance et al. [2008], Vickers et al. [2008] and Istance et al. [2009]. They proposed a novel approach to gaze selection called snap clutch: a modal interface that allows the user to control a character in the computer game 'Second Life' [Istance et al. 2008; Vickers et al. 2008] and, in another project, in 'World of Warcraft' [Istance et al. 2009]. The goal has been to make a complete task analysis of these types of games and apply gaze in the most appropriate way.

2.3 Anti-Saccades

Another approach to saccadic eye movement selection is the anti-saccade. Saccades follow a predetermined path that supplies the brain with specific, detailed information. Generally we perform what are known as pro-saccades [Kristjánsson et al. 2004]: the eye moves towards objects of interest or visual stimuli for further inspection. However, when pro-saccades are used for selection, there is a chance of the previously mentioned overlap between natural inspection and selection eye movements. A different approach to gaze selection is the anti-saccade [Huckauf et al. 2005], in which the user must force gaze in the opposite direction of where the visual stimulus is presented. With practice, control of anti-saccades can improve, both in latency and in precision of trajectory [Everling and Fischer 1998]. The concept is interesting because an anti-saccade is a counter-intuitive eye movement which is potentially easy to distinguish from other eye movements during ordinary scene viewing.
2.4 Single Gaze Gestures

Most gaze gesture research has been created for specific tasks and evaluated within the realm of that task, i.e. text input or controlling a specific computer game. The approach taken in this research is an attempt to determine the boundaries of a simple gesture-based selection method. These boundaries are based on assumptions about sustainable eye movement patterns rather than on a specific task such as text input or game control. As shown, much of the existing research concerns itself with complex gaze gestures (gestures of two or more strokes). Such gestures have the advantage of increasing the interaction 'vocabulary' of gaze. However, this increase brings with it both cognitive and physiological issues: cognitively it may be difficult to remember a large number of gestures, and physiologically it may be difficult to create and complete them [Porta et al. 2008].

3 SGGs on Different Eye Trackers

This experiment was designed to explore three hypotheses. Firstly, do frame rate and automatic smoothing on eye trackers have an effect on either the selection completion time or the selection error rate? Secondly, is there a difference in completing selection saccades in different directions, i.e. horizontal and vertical? And thirdly, is there a difference in the completion times of gestures depending on the length of the eye movements across the screen? Theoretically, there should be little or no difference in completing gestures limited by screen size, as they are both within a 10° visual angle [Duchowski 2003].

3.1.1 Experimental Design

In order to force a wide range of eye movement patterns, on-screen dynamics were integrated in the design, ensuring a 'noisy' setting. Colored blocks (red, blue, green, yellow) descended the screen in random order. The object of the task was to identify and select the block which had moved down the furthest before it disappeared (Figure 2). This had two consequences: time pressure, and a more complex navigation, selection and perception process compared to a static layout.
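As an illustration of the task rule, the sketch below selects the target as the block that has descended furthest; the Block type and its fields are hypothetical stand-ins, not the study's code.

```java
import java.util.Comparator;
import java.util.List;

/**
 * Sketch of the task rule described above: among the colored blocks
 * currently descending the screen, the target is the one that has moved
 * down the furthest. Block is a hypothetical stand-in type.
 */
class DescendingBlockTask {

    record Block(String color, double y) {} // larger y = further down the screen

    /** Returns the lowest block on screen, i.e. the block to select next. */
    static Block currentTarget(List<Block> onScreen) {
        return onScreen.stream()
                       .max(Comparator.comparingDouble(Block::y))
                       .orElseThrow();
    }
}
```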
Gesture selections were completed by looking from one selection area to the opposite side; for example, a green selection was made by looking at the green field and then looking at the opposite field within a 1000 ms timeframe. If the gesture was not completed within this timeframe, the system reset. Feedback was given to the user in two ways: (1) the initiation area had a thin line indicating its color, and (2) a light grey shift indicated when the user was looking at an area. The lengths of the vertical and horizontal eye movements were all equal; this caused the horizontal fields to be slightly larger than the vertical fields. Long SGGs required the user to cover 70% of the screen, short SGGs only 40% (Figure 2a and 2b). Participants alternately started with the long or the short SGG interface, in order to counter any learning effect.

Figure 2. a: Long SGG interface; b: short SGG interface.

The participants were introduced to the test environment and task framework before beginning the experiment. Nine participants took part in the study (four female), all of whom had normal or corrected-to-normal vision. None of them were color-blind. Five had previous experience with gaze interaction. The application was written in Java, and testing was completed on a QuickGlance 3 (20 frames/sec) system and a Tobii 1750 (50 frames/sec). Each participant had to complete 20 successful selections 3 times in each condition (QG-long, QG-short, Tobii-long, Tobii-short), totalling 240 SGG selections per participant. The independent variables were: input device (QuickGlance 3 and Tobii 1750), selection method (long SGG and short SGG) and direction of eye movement (Left/Right, Right/Left, Top/Bottom, Bottom/Top). The dependent variables were: Selection Time (the time from when the user exits the initiation field until they enter the opposite field); Selection Error (a fully completed selection which does not correspond to the current target); Missed Target Error (targets which descend the screen without being selected); and Task Time (the time from one successful selection to the next).
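The selection mechanic amounts to a small state machine: gaze entering a colored field arms a gesture, and reaching the field on the opposite side of the screen within the 1000 ms timeframe completes it. The Java sketch below is one reading of that logic, not the study's original code; the Field/Side modelling and the callback shape are assumptions.

```java
/**
 * Sketch of the SGG selection mechanic: gaze entering a colored field arms
 * a gesture, and reaching the field on the opposite screen side within
 * 1000 ms completes the selection; otherwise the detector resets.
 * Field/Side modelling is an assumption, not the study's original code.
 */
class SggDetector {

    enum Side {
        LEFT, RIGHT, TOP, BOTTOM;

        Side opposite() {
            switch (this) {
                case LEFT:  return RIGHT;
                case RIGHT: return LEFT;
                case TOP:   return BOTTOM;
                default:    return TOP; // BOTTOM
            }
        }
    }

    /** A selection field: the screen edge it occupies and the color it selects. */
    record Field(Side side, String color) {}

    private static final long TIMEOUT_MS = 1000;

    private Field armed;    // initiation field the gesture started on, or null
    private long armedAtMs; // timestamp at which the gesture was armed

    /**
     * Feed one gaze sample; {@code under} is the field containing the current
     * gaze point, or null when gaze is between fields. Returns the selected
     * color when a gesture completes, null otherwise. (The study measured
     * Selection Time from exiting the initiation field to entering the
     * opposite field; for simplicity this sketch starts its 1000 ms window
     * when the gesture is armed.)
     */
    String onGazeSample(Field under, long nowMs) {
        if (armed != null && nowMs - armedAtMs > TIMEOUT_MS) {
            armed = null;                    // timeframe elapsed: reset
        }
        if (under == null) {
            return null;                     // gaze travelling between fields
        }
        if (armed != null && under.side() == armed.side().opposite()) {
            String selected = armed.color(); // point-to-point gesture completed
            armed = null;
            return selected;
        }
        armed = under;                       // (re)arm on whichever field gaze is in
        armedAtMs = nowMs;
        return null;
    }
}
```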
3.1.2 Results

Data were subjected to two 3-factor (2x2x4) within-subjects ANOVAs, with input device, selection method and direction as independent variables and selection time and selection error as the dependent variables. In addition, a 2x2 within-subjects ANOVA with input device and selection method as independent variables was run, with task time as the dependent variable. All data were included in the analyses. Error bars represent the standard error of the mean.

3.1.2.1 Overall Selection Times

There was a significant difference in selection times, F(3, 24) = 140.27, p < 0.001. A Bonferroni post hoc analysis showed that selection times were significantly longer on the QuickGlance system (QG short: M = 157.5, SD = 27.36; QG long: M = 275.2, SD = 33.46) than on the Tobii (Tobii short: M = 78.81, SD = 19.34; Tobii long: M = 131.45, SD = 16.4). There was also a significant difference in selection times between long and short gestures on both eye tracking systems, with short gaze gestures having significantly lower selection times (Figure 3).

Figure 3. Selection times: long and short SGGs on both eye tracking systems. Error bars show the standard error of the mean.

3.1.2.2 Directional Selection Times

Figure 4 shows the directional selection times. Each color represents a condition (QuickGlance-Short, QuickGlance-Long, Tobii-Short, Tobii-Long); the selection times produce a clear pattern which is repeated in all conditions.

Figure 4. Selection times: direction, short and long SGGs, and eye tracking system. Error bars show the standard error of the mean.

When comparing the overall horizontal and vertical means, there was a significant difference in the directional selection times, F(1, 1040) = 72.61, p < 0.001. A Bonferroni post hoc analysis showed that horizontal selection times (M = 139.88, SD = 122.31) were significantly faster than vertical selection times (M = 185.32, SD = 149.93).

3.1.2.3 Selection Error

The effect seen in the vertical and horizontal selection times did not carry over into selection errors: there was no significant difference between errors in horizontal and vertical selections, F(1, 215) = 0.942, p > 0.05.
3.1.2.4 Task Times

There was a significant difference in task time, F(1, 53) = 20.54, p < 0.001. A Bonferroni post hoc analysis showed that task completion times were significantly faster on the Tobii (M = 2116.35 ms, SD = 214.53 ms) than on the QuickGlance (M = 3011.47 ms, SD = 1433.01 ms). There was no significant difference between the long and short SGG task times on either input device.

4 Discussion

The main intention of this research was to show that simple SGGs could be used as a form of interaction. The overall conclusion of this work is that SGGs could indeed be a useful addition to gaze interaction.

The significant difference between the systems (QuickGlance, Tobii) is no surprise, but it means that hardware developers and interface designers could look at ways of optimizing for saccadic selection depending on the precision of the equipment; low-cost eye trackers have different affordances compared to high-end precision trackers. The significant difference between horizontal and vertical eye movements requires more research: it could be a consequence of the experimental design having slightly larger horizontal selection fields, or an effect of reading direction, or a fundamental property of gaze behavior. There was a significant difference between long and short SGGs. Theoretically there should not be a difference between the two, and this difference was not expected; it might be a consequence of short SGGs being foveal and long SGGs being peripheral.

The lack of significance between long and short gesture task completion times indicates that the selection time difference is of no real consequence. This is an argument for using both types of gestures in interface design, for instance by mapping long and short gestures to appropriate tasks: in a list search, small incremental steps could be made with short SGGs while long SGGs represent larger intervals (a sketch of this mapping follows below).

The speed of these types of gestures (an average of 78.81 ms for short SGGs and 131.45 ms for long SGGs on the Tobii) and the time-pressured nature of the task indicate that Single Gaze Gestures are efficient, robust and, due to their simplicity, sustainable.
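The list-search mapping suggested above could be realized as follows; the step sizes and class names are illustrative choices, not from the paper.

```java
/**
 * Illustrative mapping of the list-search suggestion above: short SGGs make
 * small incremental steps, long SGGs jump by a larger interval. The step
 * sizes are arbitrary choices for this sketch.
 */
class ListNavigator {

    private static final int SHORT_STEP = 1;  // one item per short gesture
    private static final int LONG_STEP = 10;  // larger interval per long gesture

    private final int itemCount;
    private int index = 0;

    ListNavigator(int itemCount) { this.itemCount = itemCount; }

    /** direction is +1 (down) or -1 (up); longGesture selects the interval. */
    int move(int direction, boolean longGesture) {
        int step = longGesture ? LONG_STEP : SHORT_STEP;
        index = Math.max(0, Math.min(itemCount - 1, index + direction * step));
        return index;
    }
}
```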
5 Future Research

An extension of this work would be to examine these types of gestures in an environment where targets move in all directions, in order to gain a clearer picture of their robustness in dynamic 3D environments.

Gaze gestures have often been evaluated as a substitute for dwell time rather than as a potential addition; SGGs are thought of only as an addition. The interesting task ahead will be to implement systems which employ SGGs, complex gaze gestures, dwell and other gaze selection methods in combination, to strengthen the flexibility and usability of gaze interaction.

6 Acknowledgments

This work was supported by the COGAIN European Network of Excellence, funded under the FP6/IST program of the European Commission.

References

BEE, N. AND ANDRÉ, E. 2008. Writing with your eye: a dwell time free writing system adapted to the nature of the human eye gaze. In Perception in Multimodal Dialogue Systems. Springer, 111-122.

DUCHOWSKI, A.T. 2003. Eye Tracking Methodology: Theory and Practice. Springer-Verlag, New York.

EVERLING, S. AND FISCHER, B. 1998. The antisaccade: a review of basic research and clinical studies. Neuropsychologia 36, 9, 885-899.

HUCKAUF, A., GOETTEL, T., HEINBOCKEL, M., AND URBINA, M. 2005. What you don't look at is what you get: anti-saccades can reduce the Midas touch problem. In APGV '05. ACM, New York, NY, 170.

ISTANCE, H., BATES, R., HYRSKYKARI, A., AND VICKERS, S. 2008. Snap clutch, a moded approach to solving the Midas touch problem. In ETRA '08. ACM, New York, NY, 221-228.

ISTANCE, H., VICKERS, S., AND HYRSKYKARI, A. 2009. Gaze-based interaction with massively multiplayer on-line games. In CHI EA '09. ACM, New York, NY, 4381-4386.

JACOB, R.J.K. 1991. The use of eye movements in human-computer interaction techniques: what you look at is what you get. ACM Transactions on Information Systems 9, 2, 152-169.

KRISTJÁNSSON, Á., VANDENBROUCKE, M.W.G., AND DRIVER, J. 2004. When pros become cons for anti- versus prosaccades: factors with opposite or common effects on different saccade types. Experimental Brain Research 155, 2, 231-244.

PERLIN, K. 1998. Quikwriting: continuous stylus-based text entry. In Proceedings of UIST '98. ACM, New York, NY, 215-216.

PORTA, M. AND TURINA, M. 2008. Eye-S: a full-screen input modality for pure eye-based communication. In Proceedings of ETRA '08 (Savannah, Georgia, March 26-28, 2008). ACM, New York, NY, 27-34.

URBINA, M.H. AND HUCKAUF, A. 2007. Dwell time free eye typing approaches. In Proceedings of the 3rd Conference on Communication by Gaze Interaction (COGAIN 2007), 65-70.

VICKERS, S., ISTANCE, H.O., HYRSKYKARI, A., ALI, N., AND BATES, R. 2008. Keeping an eye on the game: eye gaze interaction with massively multiplayer online games and virtual communities for motor impaired users. In Proc. 7th ICDVRAT with ArtAbilitation, Maia, Portugal.

WARD, D.J., BLACKWELL, A.F., AND MACKAY, D.J. 2000. Dasher—a data entry interface using continuous gestures and language models. In Proceedings of UIST '00 (San Diego, California, November 6-8, 2000). ACM, New York, NY, 129-137.

WOBBROCK, J.O., RUBINSTEIN, J., SAWYER, M.W., AND DUCHOWSKI, A.T. 2008. Longitudinal evaluation of discrete consecutive gaze gestures for text entry. In ETRA '08. ACM, New York, NY, 11-18.
