Pie menus offer several features that are especially advantageous for gaze control. Although the optimal number of slices per pie
and of depth layers has already been established for manual control, these values may differ in gaze control due to differences in spatial accuracy and cognitive processing. Therefore, we investigated the layout limits of hierarchical pie menus in gaze control. Our user study indicates that providing six slices in multiple depth layers guarantees fast and accurate selections. Moreover, we compared two different methods of selecting a slice. Novices performed well with both, but selecting via selection borders produced better performance for experts than the standard dwell-time selection.
3.2 Participants

Twelve volunteers, aged between 23 and 30 (mean 26), participated in the study. All reported normal or corrected-to-normal vision and were familiar with computer usage. Two of them had prior experience with eye tracking and pie menus.
3.3 Apparatus

The study took place in a room without windows under indirect artificial lighting. The pie menus were presented on a 21-inch Sony GDM-F520 CRT display with a resolution of 1280x960 at a frame rate of 75 Hz. The eye tracking device was a head-mounted Eyelink2. The spatial resolution of this set-up, considering the nominal tracking resolution of 0.5°, was about 12 pixels.
Figure 2: Example trial and selection procedure. After selecting
the last slice (d), the next trial starts (e).
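The reported spatial resolution follows from simple visual-angle geometry: the angle subtended by the tracker's nominal resolution is converted to centimetres on the screen, then to pixels. The paper gives only the resolution (1280x960) and the ~12-pixel result, so the visible screen width (~40.6 cm for a 21-inch 4:3 CRT) and the viewing distance (~44 cm) in the sketch below are my assumptions, chosen to reproduce the stated figure:

```python
import math

def visual_angle_to_pixels(angle_deg, distance_cm, screen_width_cm, horizontal_px):
    """Return the size in pixels subtended by a visual angle at a given viewing distance."""
    # Size on screen of the visual angle, in cm.
    size_cm = 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)
    # Pixel density of the display along the horizontal axis.
    px_per_cm = horizontal_px / screen_width_cm
    return size_cm * px_per_cm

# Assumed geometry: 0.5 deg tracking resolution, ~44 cm viewing distance,
# ~40.6 cm visible width at 1280 px.
print(round(visual_angle_to_pixels(0.5, 44, 40.6, 1280)))  # prints 12
```

A shorter viewing distance or a wider screen would shrink the pixel equivalent proportionally, which is why such set-ups are usually reported in degrees rather than pixels.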
3.4 Design

Independent variables throughout the study were the number of slices per pie (width), the number of hierarchical layers per pie (depth), and the method of selection. These factors were varied blockwise. In total, 13 blocks of 32 trials each were performed, with the configurations and selection methods described in Table 1.

Table 1: Menu layout, selection method, and visualization condition for all 13 blocks.

Block #   Width   Depth   Sel. Method    Visualization
1         4       2       sel. borders   yes
2         8       2       sel. borders   yes
3         6       2       sel. borders   yes
4         12      2       sel. borders   yes
5         4       2       sel. borders   yes
6         4       3       sel. borders   yes
7         4       4       sel. borders   yes
8         4       2       sel. borders   yes
9         4       2       sel. borders   no
10        4       2       sel. borders   yes
11        4       2       dwell time     yes
12        8       3       sel. borders   yes
13        8       3       dwell time     yes

Errors and item selection times (ISTs, measured from the onset of the pie until the selection of one slice) served as dependent variables. ISTs were computed instead of the usual task completion times in order to compare performance between the different menu layouts. An error was defined as every single false selection. For example, for the task "N - O", the selection of "N - W" or "O - O" was counted as one error, the selection of "W - N" as two errors.

3.5 Procedure

The task was to select objects, depicted at the top centre of the screen, as fast and as accurately as possible through a pie menu. After the participant fixated the start button, the pie menu popped up (see Fig. 2a and 2b). Each selection was accompanied by a click sound [Majaranta et al. 2006]. With a selection, either the next pie layer popped up or the menus were closed and the start button appeared again together with a new task, until the block was finished (see Fig. 2).

4 Results

ISTs and errors were entered into repeated-measures ANOVAs. Except for the investigation of learning effects, data for the menu of four slices presented in two layers were taken from the second run.

4.1 Width

Selection time: For investigating the effects of menu width, blocks with menus of four, six, eight, and twelve slices were compared. All of these menus consisted of two depth layers. For four slices, the mean IST was 667.14 ms (standard error se=31.18). For six slices, the mean IST was 786.35 ms (se=38.60), for eight slices 907.01 ms (se=54.37), and for twelve slices 933.11 ms (se=40.31) (see Fig. 3). These differences were significant (F(3,33)=27.52, p<.001). Post hoc comparisons revealed that all numbers of slices differed significantly from each other, except eight and twelve.

Figure 3: Effect of the number of slices on item selection times.

Error rate: For four slices, 5.62% errors were produced (se=1.04). With six slices, the error rate reached 9.58% (se=1.40), with eight slices 21.51% (se=3.67), and with twelve slices 22.62% (se=3.68). Menu width also had a significant effect on the error rate (F(3,33)=16.77, p<.001). Again, this effect was due to differences between all numbers of slices except eight and twelve.

These data indicate that six slices seem to be the maximal number of slices that can be recommended for pie menus in gaze control, in terms of both fast and accurate performance.

4.2 Depth layers

Selection time: For examining the effects of the number of layers, menus of two, three, and four layers were compared, all based on pies of four slices. The mean IST was 667.14 ms (se=31.18) for two layers, 749.85 ms (se=48.02) for three layers, and 746.83 ms (se=31.76) for four layers (see Fig. 4). These differences were significant (F(2,22)=9.13, p<.001). Post hoc analysis showed that this effect was due to the faster IST with two layers relative to three and four.
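The error rates reported throughout Sec. 4 follow the definition in Sec. 3.4: every single false selection counts, so for task "N - O" the selection "W - N" contributes two errors. A hypothetical helper making this per-layer comparison explicit (not the authors' code):

```python
def count_errors(target_path, selected_path):
    """Count false selections: one error per menu layer in which the
    selected item differs from the target item (per the definition in
    Sec. 3.4). Hypothetical helper, not the authors' implementation."""
    return sum(want != got for want, got in zip(target_path, selected_path))

# Task "N - O": "N - W" and "O - O" each count as one error, "W - N" as two.
print(count_errors(["N", "O"], ["W", "N"]))  # prints 2
```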
Figure 4: Effect of the number of layers on item selection time.

Error rate: Errors amounted to 5.62% (se=1.04) for two, 6.03% (se=1.04) for three, and 6.06% (se=1.26) for four layers. The effect of menu depth on the error rate was not significant (F<1).

These results show that the depth of a pie menu is not as crucial in gaze control as its width. This is in contrast to the data provided for manual control by Kurtenbach and Buxton [1993].

4.3 Learnability

Selection time: Effects of learning were investigated by comparing performance for the menu of four slices arranged in two layers, which was repeated four times throughout the whole experiment. In the first run, users took 817.03 ms (se=61.81) per item. This was reduced to 667.14 ms (se=31.18) in the second, to 633.46 ms (se=30.36) in the third, and to 586.88 ms (se=28.19) in the fourth run (see Fig. 5). The effect of learning was statistically significant (F(3,33)=17.14, p<.001). Each run produced significantly faster selection times, except the second and third (p=.15). The decrease from the third to the fourth run was marginally significant (p=.06).

Figure 5: Effect of learning on selection times per item.

Error rate: In errors, learning led to a decrease from 16.05% (se=2.73) over 5.62% (se=1.04) and 3.30% (se=.82) to 5.72% (se=1.24). These differences were also significant (F(3,33)=18.63, p<.001). Post hoc comparisons revealed that performance in the first session was worse than in all further sessions.

4.4 Marking Ahead Selection

In order to further investigate learning, one block without visual feedback was performed. The assumption of the marking ahead strategy is that users have a complete mental conception of the whole series of actions. In order to test this assumption, performance in this marking ahead block was compared to performance in the very first run. Importantly, we included the menu layer (i.e., selection in the first versus in the second layer) as a further variable: If users have a mental conception of the whole task, then performance between the steps of both layers should not differ. If, however, users solve this task step by step, in the marking ahead condition the first selection might still succeed whereas the second may be more error-prone and/or slower.

Selection time: Performance in the very first block and in the block without visual presentation differed only marginally (F(1,11)=4.04, p=.07). In addition, the IST for the first menu layer was, with 951.09 ms (se=90.18), slower than for the second layer (824.31 ms, se=79.53; F(1,11)=11.29, p<.01, see Fig. 6). However, there was no interaction between both variables, suggesting that there were no specific differences between both blocks (F<1).

Figure 6: Item selection times for the first and second menu layer, separately for the very first block (pie menu) and the marking ahead condition (marking menu).

Error rate: In errors, performance in the very first run and in the marking block did not differ (F<1). As in selection times, the menu layer (i.e., first versus second selection) produced a significant effect (F(1,11)=14.63, p<.01). This was due to more errors in the second (9.5%, se=1.31) than in the first menu layer (5.88%, se=.83). There was no interaction between both variables (F<1).

4.5 Selection Method

Selection time: The investigation of whether selection via selection borders can actually compete with the standard selection procedure using dwell times (400 ms) was performed on two menu designs: a small menu of four slices and two depth layers, and a larger menu of eight slices and three depth layers. The statistical comparison revealed a main effect of menu size (F(1,11)=58.04, p<.001): selection took less time in the small menu (663.37 ms, se=25.29) than in the larger one (887.59 ms, se=45.42). However, there was neither a main effect of selection method (F<1) nor an interaction with it (F<1), indicating that in terms of selection speed, both selection methods can be regarded as equally usable.

Error rate: In errors, there was also an effect of menu size (F(1,11)=19.56, p<.001): with 10.55% (se=2.02), there were fewer errors per selection in the small pie menu than in the large one (21.43%, se=3.49) (see Fig. 7). Selection via selection borders was, with 11.72% (se=1.67), more effective than selection via dwell times (20.27%, se=3.91; F(1,11)=7.55, p<.02). Again, there was no interaction between both variables (F<1).

5 Discussion and Conclusion

When designing pie menus for gaze control, the number of items per layer seems to be the most crucial factor. As our data revealed, up to six slices per pie can be effectively and efficiently selected with eye trackers of about 0.5° spatial accuracy (i.e., professional eye tracking equipment). Of course, the radius (180 px in our study) may affect the optimal number of slices
and should thus be investigated in further experiments. Additionally, one should take into consideration that the tasks for the various numbers of slices varied in difficulty: for four and eight slices, tasks were given with cardinal points, and for six and twelve slices, they were given using the clock. We suppose the cardinal points to be more difficult: some subjects confused "W" with "O" and vice versa (like confusing left with right), committing on average 1.91% errors, which made up about 20% of the total errors. For the eight-slice menu, perceiving and remembering coordinates like "SW - SW - S - W" can be assumed to be more difficult than numbers like "8 - 8 - 6 - 10" used with six and twelve slices.

Figure 7: Effect of the selection method on error rates.

Performance with two depth layers was found to be significantly faster than with more layers. One explanation may be that participants were able to mark the selection path completely ahead. This strategy was harder to follow with more than two depth layers. Even so, the performance achieved with three and four depth layers was acceptable and showed no additional costs of presenting more depth layers. Therefore, to allocate more items in a pie menu, our data suggest increasing the number of depth layers.

The results show that for gaze control, slice width is more important than menu depth. This is in contrast to the data provided by Kurtenbach and Buxton [1993], who found no limitation for the number of slices per menu, but for the number of depth levels. We assume that the difference in the number of slices is mainly due to the lower accuracy of gaze tracking, as well as to the difficulty of performing selective actions with a perceptual organ [Zhai et al. 1999].

Of course, the number of layers is restricted by the screen size and can therefore not be increased indefinitely. An alternative method of presenting more layers might be to arrange forthcoming pie menus either directly overlaying the former one, or centred on the current fixation position. Both of these alternatives, however, have a severe inherent disadvantage: whereas the first solution would require additional saccades back to the starting point, destroying the navigation metaphor adopted for hierarchical menus, the second solution would reduce the capability of marking ahead, since each menu would change its position on the screen each time it appears, which may interfere with the path learning process seen in this experiment.

Subjects showed a significant learning effect using pie menus. Even after 128 selections, they continued improving their IST significantly, with a constant and relatively low error rate. Experienced users were expected to be capable of marking ahead a complete path (or gesture). This could be confirmed for our observers: after only 96 trials with a menu of four slices and two layers, the accuracy of performance without any visual cue did not differ from performance within the first 32 trials. Even if selection speed was lower in these blind trials, the hypothesis of marking ahead trajectories can thus be confirmed also for pie menus operated by gaze.

The selection methods differed in accuracy, but not in IST: selections by dwelling on an item produced more errors than selections by borders. One might thus improve accuracy by increasing the dwell time. However, dwell time was perceived as a "more natural" and "intuitive" but also "slower" selection method among participants without prior experience in gaze control. Taken together, one might suppose that selection by selection borders provides better performance for selecting items in a pie menu than dwell times. The arrangement of the pie menus might also be responsible for the superiority of selection by borders: since all new layers were centred around the outer border of the current pie, selection by borders already brings the eye towards the centre of the next pie menu. Hence, with other designs, such as centring the pie around the current fixation position, dwell time selection might compete with selection by borders. However, as already discussed above, such designs may be a disadvantage for the usability and learnability of pie menus.

To sum up, pie menus are a suitable and promising interface for gaze interaction that can allocate up to six items in width and multiple depth layers, allowing fast and accurate navigation through hierarchical levels by using or combining multiple selection methods. These qualities may give pie and marking menus the chance to establish themselves as a standard in gaze control.

References

Callahan, J., Hopkins, D., Weiser, M., and Shneiderman, B. 1988. An empirical comparison of pie vs. linear menus. In CHI '88: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, NY, USA, 95–100.

Huckauf, A., and Urbina, M. H. 2008. Gazing with pEYEs: towards a universal input for various applications. In ETRA '08: Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, ACM, New York, NY, USA, 51–54.

Huckauf, A., and Urbina, M. H. 2008. On object selection in gaze controlled environments. Journal of Eye Movement Research 2, 4, 1–7.

Istance, H., Bates, R., Hyrskykari, A., and Vickers, S. 2008. Snap clutch, a moded approach to solving the Midas touch problem. In ETRA '08: Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, ACM, New York, NY, USA, 221–228.

Kurtenbach, G., and Buxton, W. 1993. The limits of expert performance using hierarchic marking menus. In CHI '93: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM Press, New York, NY, USA, 482–487.

Majaranta, P., MacKenzie, S., Aula, A., and Räihä, K.-J. 2006. Effects of feedback and dwell time on eye typing speed and accuracy. Universal Access in the Information Society 5, 2, 199–208.

Urbina, M. H., and Huckauf, A. 2007. Dwell-time free eye typing approaches. In Proceedings of the 3rd Conference on Communication by Gaze Interaction (COGAIN 2007), 65–70.

Zhai, S., Morimoto, C., and Ihde, S. 1999. Manual and gaze input cascaded (MAGIC) pointing. In CHI '99: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM Press, New York, NY, USA, 246–253.

Zhai, S. 2008. On the ease and efficiency of human-computer interfaces. In ETRA '08: Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, ACM, New York, NY, USA, 9–10.