Urbina: Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Eye typing could provide motor-disabled people a reliable method of communication, provided that the text entry speed of current interfaces can be increased to allow for fluent communication. There are two reasons for the relatively slow text entry: dwell time selection requires waiting a certain time, and single character entry limits the maximum entry speed. We adopted a typing interface based on hierarchical pie menus, pEYEwrite [Urbina and Huckauf 2007], and included bigram text entry within a single pie iteration. To this end, we introduced three different bigram building strategies. Moreover, we combined dwell time selection with selection by borders, providing an alternative selection method and extra functionality. In a longitudinal study we compared participants' performance during character-by-character text entry with bigram entry and with text entry with bigrams derived by word prediction. Data showed large advantages of the new entry methods over single character text entry in speed and accuracy. Participants preferred selecting by borders, which allowed them faster selections than the dwell time method.
Istance: Designing Gaze Gestures For Gaming: An Investigation Of Performance
To enable people with motor impairments to use gaze control to play online games and take part in virtual communities, new interaction techniques are needed that overcome the limitations of dwell clicking on icons in the games interface. We have investigated gaze gestures as a means of achieving this. We report the results of an experiment with 24 participants that examined performance differences between different gestures. We were able to predict the effect on performance of the numbers of legs in the gesture and the primary direction of eye movement in a gesture. We also report the outcomes of user trials in which 12 experienced gamers used the gaze gesture interface to play World of Warcraft. All participants were able to move around and engage other characters in fighting episodes successfully. Gestures were good for issuing specific commands such as spell casting, and less good for continuous control of movement compared with other gaze interaction techniques we have developed.
Kandemir: Inferring Object Relevance From Gaze In Dynamic Scenes
As prototypes of data glasses having both data augmentation and gaze tracking capabilities are becoming available, it is now possible to develop proactive gaze-controlled user interfaces to display information about objects, people, and other entities in real-world setups. In order to decide which objects the augmented information should be about, and how saliently to augment, the system needs an estimate of the importance or relevance of the objects of the scene for the user at a given time. The estimates will be used to minimize distraction of the user, and to provide efficient spatial management of the augmented items. This work is a feasibility study on inferring the relevance of objects in dynamic scenes from gaze. We collected gaze data from subjects watching a video for a pre-defined task. The results show that a simple ordinal logistic regression model gives relevance rankings of scene objects with promising accuracy.
Nakayama: Estimation Of Viewers' Response For Contextual Understanding Of Tasks...
To estimate viewers' contextual understanding, features of their eye-movements while viewing question statements in response to definition statements, together with features of correct and incorrect responses, were extracted and compared. Twelve directional features of eye-movements across a two-dimensional space were created, and these features were compared between correct and incorrect responses. A procedure for estimating the response was developed with Support Vector Machines, using these features. The estimation performance and accuracy were assessed across combinations of features. The number of definition statements, which needed to be memorized to answer the question statements during the experiment, affected the estimation accuracy. These results provide evidence that features of eye-movements during the reading of statements can be used as an index of contextual understanding.
Bieg: Eye And Pointer Coordination In Search And Selection Tasks
Selecting a graphical item by pointing with a computer mouse is a ubiquitous task in many graphical user interfaces. Several techniques have been suggested to facilitate this task, for instance, by reducing the required movement distance. Here we measure the natural coordination of eye and mouse pointer control across several search and selection tasks. We find that users automatically minimize the distance to likely targets in an intelligent, task-dependent way. When target location is highly predictable, top-down knowledge can enable users to initiate pointer movements prior to target fixation. These findings question the utility of existing assistive pointing techniques and suggest that alternative approaches might be more effective.
Kollenberg: Visual Search In The (Un)Real World: How Head-Mounted Displays Affe...
Head-mounted displays (HMDs) that use a see-through display method allow for superimposing computer-generated images upon a real-world view. Such devices, however, normally restrict the user’s field of view. Furthermore, low display resolution and display curvature are suspected to make foveal as well as peripheral vision more difficult and may thus affect visual processing. In order to evaluate this assumption, we compared performance and eye-movement patterns in a visual search paradigm under different viewing conditions: participants either wore an HMD, had their field of view restricted by blinders or could avail themselves of an unrestricted field of view (normal viewing). From the head and eye-movement recordings we calculated the contribution of eye rotation to lateral shifts of attention. Results show that wearing an HMD leads to less eye rotation and requires more head movements than under blinders conditions and during normal viewing.
2. WHAT DO YOU NEED TO CREATE ONE? ANY TEXT EDITOR... AND, OF COURSE... YOUR DESIRE
3. DOCUMENT STRUCTURE
<HTML> <HEAD> <TITLE> DOCUMENT TITLE </TITLE> </HEAD> <BODY> THE FIRST TEXT ON THE PAGE </BODY> </HTML>
4. BASIC TAGS
<HTML>, </HTML> - start and end of the document
<HEAD>, </HEAD> - contains service information
<TITLE>, </TITLE> - the page title
<BODY>, </BODY> - the content shown directly on the page
5. MARGIN PARAMETERS
<BODY BGCOLOR="background color name"> - background color
leftmargin - left margin
rightmargin - right margin
topmargin - top margin
bottommargin - bottom margin
General form: <BODY PARAMETER="number in pixels">
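Combining the attributes from this slide, a minimal sketch could look as follows; the color name and pixel values are illustrative, not taken from the slides:

```html
<!-- Legacy presentational attributes; color and pixel values are illustrative -->
<BODY BGCOLOR="yellow" LEFTMARGIN="40" RIGHTMARGIN="40" TOPMARGIN="20" BOTTOMMARGIN="20">
  Page content on a yellow background with 40 px side margins and 20 px top/bottom margins.
</BODY>
```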
6. <BODY BACKGROUND="path to file"> - an image as the page background
<TABLE BGCOLOR="background color name"> - table background color
<TABLE WIDTH="N" BORDER="M"> - creates a table, where N is the table width in pixels and M is the border thickness
<TR> - creates a row
<TD> - creates a cell within a row
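Put together, the table tags above yield, for example (the width, border, color, and cell text are illustrative):

```html
<!-- A 2x2 table, 300 px wide with a 1 px border; all values are illustrative -->
<TABLE WIDTH="300" BORDER="1" BGCOLOR="silver">
  <TR> <TD>Row 1, cell 1</TD> <TD>Row 1, cell 2</TD> </TR>
  <TR> <TD>Row 2, cell 1</TD> <TD>Row 2, cell 2</TD> </TR>
</TABLE>
```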
7. <A HREF="path to file"> anchor text </A> - creates a hyperlink
HEADINGS
<Hn> Heading </Hn>, where n is a number from 1 to 6; headings have different sizes (the larger n, the smaller the heading)
<FONT SIZE="number from 1 to 7"> </FONT> - font size
<FONT COLOR="color name"> </FONT> - text color
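A short sketch combining the link, heading, and FONT tags; the file name page2.html and the size/color values are made up for illustration:

```html
<!-- Hypothetical file name and values, for illustration only -->
<H1>Largest heading</H1>
<H6>Smallest heading</H6>
<A HREF="page2.html">Go to the second page</A>
<FONT SIZE="5" COLOR="red">Large red text</FONT>
```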
8. TEXT STYLES
<B> Bold text </B>
<I> Italic </I>
<U> Underlined </U>
<STRIKE> Strikethrough </STRIKE>
<SUP> Superscript </SUP>
<SUB> Subscript </SUB>
9. <IMG SRC="path to image"> - inserts an image
NUMBERED LIST
<OL> <LI> line 1 <LI> line 2 </OL>
UNNUMBERED LIST
<UL> <LI> line 1 <LI> line 2 </UL>
10. INSERTING A HORIZONTAL LINE
<HR WIDTH="length in pixels">
<HR SIZE="thickness in pixels" WIDTH="length in pixels">
<HR COLOR="color name" WIDTH="length in pixels">
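As a capstone, a minimal page assembling only the tags covered in these slides; all file names, colors, and sizes here are illustrative, not from the slides:

```html
<!-- Minimal page using only tags from the slides; names and values are illustrative -->
<HTML>
<HEAD> <TITLE> My first page </TITLE> </HEAD>
<BODY BGCOLOR="white">
  <H1>Welcome</H1>
  <HR WIDTH="400">
  <B>Bold</B> and <I>italic</I> text.
  <UL>
    <LI> <A HREF="page2.html">Second page</A>
    <LI> <IMG SRC="photo.jpg">
  </UL>
</BODY>
</HTML>
```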