Gaze estimation systems use calibration procedures that require active subject participation to estimate the point-of-gaze accurately: subjects must fixate on specific points in space at specific time instances. This paper describes a gaze estimation system that does not require such active user participation. The system estimates the optical axes of both eyes from images captured by a stereo pair of video cameras, without a personal calibration procedure. To estimate the point-of-gaze, which lies along the visual axis, the angles between the optical and visual axes are estimated by a novel automatic procedure that minimizes the distance between the intersections of the left and right visual axes with the surface of a display while subjects look at the display naturally (e.g., watching a video clip). Experiments with four subjects demonstrate that the RMS error of this point-of-gaze estimation system is 1.3°.
Coutinho: A Depth Compensation Method for Cross-Ratio-Based Eye Tracking
Traditional cross-ratio (TCR) methods project a light pattern and use invariant properties of projective geometry to estimate the gaze position. Advantages of TCR methods include robustness to large head movements and, in general, the need for only a one-time per-user calibration. However, the accuracy of TCR methods decays significantly for head movements along the camera's optical axis, mainly due to the angular difference between the optical and visual axes of the eye. In this paper we propose a depth-compensation cross-ratio (DCR) method that improves the accuracy of TCR methods under large head-depth variations. Our solution compensates for the angular offset using a 2D on-screen vector computed from a simple calibration procedure. The length of the 2D vector, which varies with head distance, is adjusted by a scale factor estimated from relative size variations of the corneal reflection pattern. The proposed DCR solution was compared to a TCR method using synthetic and real data from two users. An average improvement of 40% was observed with synthetic data, and 8% with the real data.
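The compensation step described in the abstract can be sketched in a few lines. This is an illustrative reconstruction of the idea, not the authors' implementation: all function and variable names are ours, and we assume the calibration stores a single 2D on-screen offset vector together with the apparent size of the corneal reflection pattern at the calibration distance.

```python
import numpy as np

def depth_compensated_gaze(gaze_tcr, offset_calib, pattern_size_calib, pattern_size_now):
    """Sketch of the depth-compensation idea: the calibrated on-screen
    offset vector is scaled by the relative size of the corneal
    reflection pattern, which shrinks as the head moves away."""
    scale = pattern_size_now / pattern_size_calib  # relative pattern size
    return np.asarray(gaze_tcr, dtype=float) + scale * np.asarray(offset_calib, dtype=float)
```

At the calibration distance the scale is 1 and the full offset applies; if the reflection pattern shrinks to 80% of its calibrated size, the offset is scaled down accordingly.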
Nagamatsu: User-Calibration-Free Gaze Tracking with Estimation of the Horizont...
This paper presents a user-calibration-free method for accurately estimating the point of gaze (POG) on a display, with estimation of the horizontal angles between the visual and optical axes of both eyes. Using one pair of cameras and two light sources, the optical axis of the eye can be estimated; this estimation is carried out using a spherical model of the cornea. The point of intersection of the optical axis of the eye with the display is termed the POA. By detecting the POAs of both eyes, the POG is approximately estimated as the midpoint of the line joining the two POAs on the basis of a binocular eye model; therefore, we can estimate the horizontal angles between the visual and optical axes of both eyes without requiring user calibration. We have developed a prototype system based on this method using a 19-inch display with two pairs of stereo cameras. We evaluated the system experimentally with 20 subjects seated 600 mm from the display. The results show that the average root-mean-square error (RMSE) of POG measurement in the display screen coordinate system is 16.55 mm (equivalent to less than 1.58°).
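The binocular estimate described above reduces to a midpoint computation; a minimal sketch (the function name is illustrative, not from the paper), taking the two on-screen intersection points of the optical axes:

```python
import numpy as np

def estimate_pog(poa_left, poa_right):
    """Approximate point of gaze as the midpoint of the line joining the
    points of intersection (POAs) of the left and right optical axes
    with the display, per the binocular eye model."""
    return 0.5 * (np.asarray(poa_left, dtype=float) + np.asarray(poa_right, dtype=float))
```

The horizontal visual/optical axis offsets then follow from the displacement of each POA from this midpoint, which is what removes the need for an explicit user calibration.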
Computer Input with Human Eyes-Only Using Two Purkinje Images Which Works in ...
A method for computer input using human eyes only, based on two Purkinje images, which works in real time without calibration, is proposed. Experimental results show that corneal curvature can be estimated from the Purkinje images produced by two light sources, so that no calibration is needed to compensate for person-to-person differences in corneal curvature. The proposed system is found to tolerate user movements of 30 degrees in the roll direction and 15 degrees in the pitch direction by exploiting the detected face attitude, derived from the face plane defined by three facial feature points: the two eyes and the nose or mouth. It is also confirmed that the proposed system works in real time.
AN OPTIMAL SOLUTION FOR IMAGE EDGE DETECTION PROBLEM USING SIMPLIFIED GABOR W...
Edge detection plays a vital role in computer vision and image processing, and image edges are among the most significant features used in image analysis. This paper proposes an efficient algorithm for extracting the edge features of images using a simplified version of the Gabor wavelet. The conventional Gabor wavelet is widely used for edge detection, but due to its high computational complexity it may not be suitable for real-time applications. The simplified Gabor wavelet based approach is highly effective at detecting both the location and orientation of edges. The results show that the proposed simplified Gabor wavelet outperforms the conventional Gabor wavelet, other edge detection algorithms, and other wavelet-based approaches. The performance of the proposed method is demonstrated using FOM, PSNR, and average run time.
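To make the technique concrete, here is a minimal sketch of Gabor-based edge detection: an odd (antisymmetric) Gabor kernel correlated with the image at several orientations, keeping the maximum response magnitude per pixel. This illustrates the conventional approach the paper simplifies; the abstract does not give the details of the simplified variant, so all parameters below are illustrative.

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0):
    """Odd Gabor kernel: a Gaussian times a sine along orientation theta.
    Its antisymmetry makes it respond to edges perpendicular to theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / lam)

def edge_response(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Maximum filter-response magnitude over orientations; large values
    mark edge locations, and the argmax orientation gives edge direction."""
    h, w = image.shape
    out = np.zeros((h, w))
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        pad = k.shape[0] // 2
        padded = np.pad(image.astype(float), pad)  # 'same'-size zero padding
        resp = np.zeros((h, w))
        for i in range(h):
            for j in range(w):  # direct 2D correlation, no SciPy dependency
                resp[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
        out = np.maximum(out, np.abs(resp))
    return out
```

A vertical step edge produces a strong response along the boundary column and near-zero response in flat regions.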
The optic disc (OD) is one of the important parts of the eye for detecting diseases such as diabetic retinopathy and glaucoma. Localization of the optic disc is extremely important for identifying hard exudates and lesions, and early diagnosis can prevent vision loss. This paper analyzes various techniques proposed by different authors for the exact localization of the optic disc.
Mutual Information for Registration of Monomodal Brain Images using Modified ...
Image registration has great significance in medicine, and many techniques have been proposed for it. This work presents an approach to medical image registration that registers monomodal CT or MRI images using the Modified Adaptive Polar Transform (MAPT). The performance of the Adaptive Polar Transform (APT) is compared with the proposed technique, and the results show that MAPT performs better than the APT technique. The proposed scheme not only reduces sources of error but also reduces the elapsed time for registration of brain images. An analysis of mutual-information-based registration for medical image processing is presented.
Lec12: Shape Models and Medical Image Segmentation (Ulaş Bağcı)
• Shape Modeling
  – M-reps
  – Active Shape Models (ASM)
  – Oriented Active Shape Models (OASM)
  – Applications in anatomy recognition and segmentation
  – Comparison of ASM and OASM
• Active Contour (Snake)
• Level Set
• Applications
• Enhancement, Noise Reduction, and Signal Processing
• Medical Image Registration
• Medical Image Segmentation
• Medical Image Visualization
• Machine Learning in Medical Imaging
• Shape Modeling/Analysis of Medical Images
• Deep Learning in Radiology
• Fuzzy Connectivity (FC)
  – Affinity functions
  – Absolute FC
  – Relative FC (and Iterative Relative FC)
  – Successful example applications of FC in medical imaging
  – Segmentation of airways and airway walls using an RFC-based method
• Energy functional: data and smoothness terms
• Graph Cut: min-cut / max-flow
• Applications in Radiology Images
Retinal Macular Edema Detection Using Optical Coherence Tomography Images
Macular edema affects around 20 million people worldwide each year. Optical Coherence Tomography (OCT), a non-invasive eye-imaging modality, can detect macular edema in both its early and advanced stages. This paper presents an algorithm that detects macular edema from OCT images. The images are first filtered to remove noise. The retinal layers of interest, the Inner Limiting Membrane (ILM) and the Retinal Pigment Epithelium (RPE), are then segmented using a graph-based method. The OCT scan is split into regions, and the thickness between the two layers is determined in each region; the area enclosed between the two layers is also estimated. A Support Vector Machine, a binary classifier, is used to separate normal from abnormal OCT scans. Region-wise thickness, several Haralick features, the area between the ILM and RPE, and several wavelet features are used to train the classifier. The classifier yielded an accuracy of 95% and a sensitivity of 100%, so the algorithm can assist ophthalmologists in the early detection of macular edema.
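The feature-extraction step (region-wise thickness and enclosed area) can be sketched as follows, assuming the ILM and RPE have already been segmented as per-column row positions; the function name and the region count are illustrative, not taken from the paper:

```python
import numpy as np

def regionwise_thickness(ilm, rpe, n_regions=5):
    """Split a B-scan into n_regions column bands and return the mean
    ILM-to-RPE thickness per band, plus the total enclosed area
    (sum of per-column thickness), as classifier input features."""
    thickness = np.asarray(rpe, dtype=float) - np.asarray(ilm, dtype=float)
    bands = np.array_split(thickness, n_regions)
    feats = np.array([b.mean() for b in bands])
    area = thickness.sum()  # area in pixel units between the two layers
    return feats, area
```

These features would then be concatenated with texture (Haralick) and wavelet features before training the SVM.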
AN AUTOMATIC SCREENING METHOD TO DETECT OPTIC DISC IN THE RETINA
The location of the optic disc (OD) is of critical importance in retinal image analysis, and OD detection helps ophthalmologists determine whether a patient is affected by diabetic retinopathy. This paper presents a new automated methodology to detect the OD in digital retinal fundus images, using a line operator that gives a higher detection rate than existing methods. The method starts by converting the RGB input image to the LAB color space and smoothing it with a bilateral filter; further filtering is carried out using the line operator. Gray-orientation and binary-map-orientation steps follow, and the resulting maximum image variation is used to locate the region containing the OD. The portions outside the OD are blurred using 2D circular convolution, and the OD is finally detected by applying peak classification, concentric-circle design, and image-difference calculation. The proposed method was evaluated on a subset of the STARE project's dataset, and the success rate was found to be 96%.
ROLE OF HYBRID LEVEL SET IN FETAL CONTOUR EXTRACTION
Image processing technologies can be employed for quicker and more accurate diagnosis in the analysis and feature extraction of medical images. Here, an existing level-set algorithm is modified and employed to extract the fetal contour from an image. In the traditional approach, fetal parameters are extracted manually from ultrasound images; an automatic technique is highly desirable for obtaining fetal biometric measurements because the manual approach lacks consistency and accuracy. The proposed approach uses both global and local region information for fetal contour extraction from ultrasound images. The main goal of this research is to develop a new methodology to aid the analysis and feature extraction.
Land Boundary Detection of an Island using Improved Morphological Operation
Image analysis is an important task for obtaining information about the Earth's surface. To detect and mark a particular land area, a remotely sensed image of the area is required, and the accurate boundary of that area must be detected, which is a complex task. This paper considers remote sensing images and proposes a novel method to detect such land boundaries. Mathematical morphology is a simple and efficient tool for this type of task: the morphological analysis is performed using structuring elements (SE), the images are first enhanced, and the boundary can then be detected easily, while noise is simultaneously removed by the proposed model. The results demonstrate the performance of the proposed method. Keywords: remote sensing images; edge detection; gray-scale morphological analysis; structuring element (SE).
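The boundary-extraction idea can be illustrated with a morphological gradient (dilation minus erosion with a flat structuring element), a standard operation in this family of methods; this sketch is ours and not necessarily identical to the paper's pipeline:

```python
import numpy as np

def morph_gradient(img, k=3):
    """Morphological gradient with a k-by-k flat structuring element:
    grayscale dilation minus erosion. The result is nonzero only near
    intensity transitions, i.e. region boundaries."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')  # edge padding avoids false borders
    h, w = img.shape
    dil = np.zeros_like(img)
    ero = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            dil[i, j] = win.max()
            ero[i, j] = win.min()
    return dil - ero
```

On a binary land/sea mask, the gradient is zero inside the island and in open water, and one along the coastline.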
An Assessment of Image Matching Algorithms in Depth Estimation
Computer vision is often used with mobile robots for feature tracking, landmark sensing, and obstacle detection, and almost all high-end robotic systems are now equipped with pairs of cameras arranged to provide depth perception. In stereo vision applications, the disparity between the stereo images allows depth estimation within a scene. Detecting conjugate pairs in stereo images is a challenging problem known as the correspondence problem. The goal of this research is to assess the performance of SIFT, MSER, and SURF, three well-known matching algorithms, in solving the correspondence problem and then in estimating depth within the scene. The results of each algorithm are evaluated and presented, and the conclusions and recommendations for future work point toward improving these powerful algorithms to achieve higher efficiency within the scope of their performance.
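Once correspondences are found, depth for a rectified stereo pair follows from disparity via the standard pinhole relation Z = f·B/d (focal length in pixels, baseline in meters, disparity in pixels); this is the textbook formula, not anything specific to the paper:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo pair: depth is
    inversely proportional to the disparity between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

This inverse relation is why matching errors of a pixel or two matter much more for distant points than for nearby ones.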
TDR: Innovator and Regional Leader (Kanfanar, 17 June 2010)
Presentation from the launch of Ronhill Unlimited, with an overview of the state of the tobacco industry in the region and worldwide and of TDR's results in the region. Presented by Davor Tomašković, President of the Management Board.
Kammerer: Gaze-Based Web Search: The Impact of Interface Design on Search Resul...
This paper presents a study that examined the selection of Web search results with a gaze-based input device. A standard list interface was compared to a grid and a tabular layout with regard to task performance and subjective ratings. Furthermore, the gaze-based input device was compared to conventional mouse interaction. Test persons had to accomplish a series of search tasks by selecting search results. The study revealed that mouse users accomplished more tasks correctly than users of the gaze-based input device. However, no differences were found between input devices regarding the number of search results taken into account to accomplish a task. Regarding task completion time and ease of search-result selection, gaze-based interaction was inferior to mouse interaction only in the list interface. Moreover, with the gaze-based input device, search tasks were accomplished faster in the tabular presentation than in the standard list interface, suggesting that a tabular interface is best suited for gaze-based interaction.
The number of marketing tools has exploded. In our daily work we use more and more tools that collect marketing data, yet at the same time analyzing that data becomes harder and demands ever more resources.
How do we get started? Are painfully long projects always necessary, or are there quick wins to be found for increasing the effectiveness of marketing analytics? What matters most: the tools, the reports, the analysis, or the actions?
Park: Quantification of Aesthetic Viewing Using Eye Tracking Technology: The In...
The purpose of this study is to explore how viewers' previous training relates to their aesthetic viewing in various interactions with form and context, in relation to apparel design. Berlyne's two types of exploratory behavior, diversive and specific, provided the theoretical framework for this study. Twenty female subjects (mean age = 21, SD = 1.089) participated. Twenty model images, posed by a male and a female model, were shown on an eye-tracker screen for 10 seconds each. The findings verified Berlyne's concepts of visual exploration. One finding that departed from Berlyne's theory was that the untrained viewers' visual attention was significantly more focused on peripheral areas of visual interest than that of the trained viewers, while there was no significant difference between the two groups on the central, foremost areas of visual interest. The overall aesthetic viewing patterns were also identified.
Droege: Pupil Center Detection in Low-Resolution Images
In some situations, high-quality eye tracking systems are not affordable, generating demand for inexpensive systems built from non-specialized, off-the-shelf devices. Investigations show that algorithms developed for high-resolution systems do not perform satisfactorily on such low-cost, low-resolution systems. We investigate algorithms specifically tailored to such low-resolution input devices, based on a combination of different strategies. An approach called gradient direction consensus is introduced and compared to image-based correlation with adaptive templates as well as other known methods. The results are compared using synthetic input data with known ground truth.
Ryan: Match Moving for Area-Based Analysis of Eye Movements in Natural Tasks
Analysis of recordings made by a wearable eye tracker is complicated by video stream synchronization, pupil coordinate mapping, eye movement analysis, and tracking of dynamic Areas of Interest (AOIs) within the scene. In this paper a semi-automatic system is developed to help automate these processes. Synchronization is accomplished via side-by-side video playback control. A deformable eye template and a calibration dot marker allow reliable initialization via simple drag-and-drop, as well as a user-friendly way to correct the algorithm when it fails; specifically, drift may be corrected by nudging the detected pupil center to the appropriate coordinates. In a case study, the impact of surrogate nature views on physiological health and perceived well-being is examined via analysis of gaze over images of nature. A match-moving methodology was developed to track AOIs for this particular application but is applicable to similar future studies.
Conventional non-vision-based navigation systems relying purely on the Global Positioning System (GPS) or inertial sensors can provide the 3D position or orientation of the user, but GPS is often unavailable in forested regions and cannot be used indoors. Visual odometry provides an independent method to accurately estimate the position and orientation of the user or system from the images captured while moving, and vision-based systems also provide information about the scene being viewed (e.g., images, 3D locations of landmarks, detection of scene objects). In this project, a set of techniques is used for accurate pose and position estimation of a moving vehicle for autonomous navigation, using images obtained from two cameras placed at different locations on top of the vehicle and viewing the same area; this configuration is referred to as stereo vision. Stereo vision enables the 3D reconstruction of the environment required for pose and position estimation. First, a set of images is captured. The Harris corner detector automatically extracts a set of feature points from the images, and feature matching is performed using correlation-based matching. Triangulation is applied to the matched feature points to find their 3D coordinates. A new set of images is then captured and the same steps are repeated. Finally, using the 3D feature points obtained from the first and new image sets, the pose and position of the moving vehicle are estimated with the QUEST algorithm.
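The triangulation step above can be sketched with the classic midpoint method: find the 3D point closest to the two viewing rays defined by the matched features. This is one common formulation (the project's exact triangulation method is not stated), and the ray parameterization below is an assumption of the sketch:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: the 3D point closest to two viewing rays,
    each given by a camera center c and a direction d. Solves for the
    ray parameters t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d1 = np.asarray(d1, float); d1 = d1 / np.linalg.norm(d1)
    d2 = np.asarray(d2, float); d2 = d2 / np.linalg.norm(d2)
    # Normal equations of the least-squares problem in (t1, t2)
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))
```

When the two rays actually intersect, the midpoint coincides with the intersection; with noisy matches it returns the point halfway between the rays at their closest approach.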
Stevenson: Eye Tracking with the Adaptive Optics Scanning Laser Ophthalmoscope
Recent advances in high-magnification retinal imaging have allowed visualization of individual retinal photoreceptors, but these systems also suffer from distortions due to fixational eye motion. Algorithms developed to remove these distortions have the added benefit of providing arcsecond-level resolution of the eye movements that produce them. The system also allows visualization of targets on the retina, enabling absolute retinal position measures down to the level of individual cones. This paper describes the process used to remove the eye movement artifacts and presents an analysis of their spectral characteristics. We find a roughly 1/f amplitude spectrum, similar to that reported by Findlay (1971), with no evidence for a distinct tremor component.
Yamamoto: Development of Eye-Tracking Pen Display Based on Stereo Bright Pupil...
Intuitive user interfaces for PCs and PDAs, such as pen displays and touch panels, have become widely used in recent times. In this study, we developed an eye-tracking pen display based on the stereo bright-pupil technique. First, the bright-pupil camera was developed by examining the arrangement of cameras and LEDs for the pen display. Next, a gaze estimation method was proposed for the stereo bright-pupil camera that enables one-point calibration. A prototype of the eye-tracking pen display was then developed. The accuracy of the system was approximately 0.7° on average, which is sufficient for human interaction support. We also developed an eye-tracking tabletop as an application of the proposed stereo bright-pupil technique.
Design and implementation of video tracking system based on camera field of viewsipij
The basic idea of this paper is to design and implement of video tracking system based on Camera Field of
View (CFOV), Otsu’s method was used to detect targets such as vehicles and people. Whereas most
algorithms were spent a lot of time to execute the process, an algorithm was developed to achieve it in a
little time. The histogram projection was used in both directional to detect target from search region,
which is robust to various light conditions in Charge Couple Device (CCD) camera images and saves
computation time.
Our algorithm based on background subtraction, and normalize cross correlation operation from a series
of sequential sub images can estimate the motion vector. Camera field of view (CFOV) was determined and
calibrated to find the relation between real distance and image distance. The system was tested by
measuring the real position of object in the laboratory and compares it with the result of computed one. So
these results are promising to develop the system in future.
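The detection stage described above (Otsu thresholding followed by histogram projection) can be sketched as follows. This is a minimal illustration of the two techniques named in the abstract, not the paper's implementation; the target is assumed brighter than the background.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: pick the gray level that maximizes the
    between-class variance of the image histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, 0.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def target_bounding_box(img):
    """Threshold with Otsu, then project the binary mask onto both
    axes (histogram projection) to localize the target region."""
    mask = img > otsu_threshold(img)
    cols, rows = mask.any(axis=0), mask.any(axis=1)
    x, y = np.where(cols)[0], np.where(rows)[0]
    if x.size == 0:
        return None
    return x[0], y[0], x[-1], y[-1]  # x_min, y_min, x_max, y_max
```

Restricting the projection to the predicted search region, as the paper does, keeps the per-frame cost low.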
Eye Gaze Tracking With a Web Camera in a Desktop Environment
The Technology Research of Camera Calibration Based On LabVIEW
The technology of camera calibration is the most important part of machine vision detection and location; the accuracy of calibration directly determines the processing accuracy of machine vision systems. In this paper, we use LabVIEW and MATLAB to calibrate the internal and external parameters of the camera. We use a dot calibration board: the circle edges are detected by the Canny operator, and then, with a method of circle fitting based on subpixel edge extraction, the image coordinates of the dots are extracted. The present method reduces the difficulty of camera calibration and shortens the software development cycle; most importantly, it has a high calibration accuracy that can meet actual industrial detection requirements. The experimental results show that the method is feasible.
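The circle-fitting step mentioned above can be illustrated with an algebraic least-squares (Kasa) fit to edge points. This is a sketch of one standard way to fit a circle, not the paper's LabVIEW/MATLAB code; the Canny and subpixel edge-extraction stages are omitted and the edge coordinates are assumed given.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit.
    Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then converts
    to the center (cx, cy) and radius r."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r
```

The fitted centers of the dots are what the calibration routine actually consumes as image-plane observations.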
Mulligan Robust Optical Eye Detection During Head Movement
Finding the eye(s) in an image is a critical first step in a remote gaze-tracking system with a working volume large enough to encompass head movements occurring during normal user behavior. We briefly review an optical method which exploits the retroreflective properties of the eye, and present a novel method for combining difference images to reject motion artifacts. Best performance is obtained when a curvature operator is used to enhance punctate features, and search is restricted to a neighborhood about the last known location. Optimal setting of the size of this neighborhood is aided by a statistical model of naturally-occurring head movements; we present head-movement statistics mined from a corpus of around 800 hours of video, collected in a team-performance experiment.
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Although heat maps are commonly provided by eye-tracking and visualization tools, they have some disadvantages and caution must be taken when using them to draw conclusions on eye tracking results. It is motivated here that visual span is an essential component of visualizations of eye-tracking data and an algorithm is proposed to allow the analyst to set the visual span as a parameter prior to generation of a heat map.
Although the ideas are not novel, the algorithm also indicates how transparency of the heat map can be achieved and how the color gradient can be generated to represent the probability for an object to be observed within the defined visual span. The optional addition of contour lines provides a way to visualize separate intervals in the continuous color map.
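The idea of making the visual span an explicit parameter of heat-map generation can be sketched as follows. This is a minimal illustration, not Blignaut's algorithm: the visual span is assumed given in pixels and modeled as the width of a Gaussian kernel, and the normalized map doubles as a transparency (alpha) layer.

```python
import numpy as np

def heatmap(fixations, shape, visual_span_px, durations=None):
    """Accumulate fixations into a heat map whose kernel size is set by
    the analyst-chosen visual span (in pixels). Each fixation adds a
    Gaussian weighted by its duration; the result is normalized to
    [0, 1] so it can also serve as an alpha (transparency) map."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    sigma = visual_span_px / 2.0  # span taken as roughly two sigma
    acc = np.zeros(shape)
    if durations is None:
        durations = np.ones(len(fixations))
    for (fx, fy), d in zip(fixations, durations):
        acc += d * np.exp(-((xx - fx) ** 2 + (yy - fy) ** 2) / (2 * sigma ** 2))
    if acc.max() > 0:
        acc /= acc.max()
    return acc
```

Contour lines, as proposed above, can then be drawn at fixed levels of this normalized map to mark intervals of the continuous color gradient.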
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Relevance feedback (RF) mechanisms are widely adopted in Content-Based Image Retrieval (CBIR) systems to improve image retrieval performance. However, there exist some intrinsic problems: (1) the semantic gap between high-level concepts and low-level features and (2) the subjectivity of human perception of visual contents. The primary focus of this paper is to evaluate the possibility of inferring the relevance of images based on eye movement data. In total, 882 images from 101 categories are viewed by 10 subjects to test the usefulness of implicit RF, where the relevance of each image is known beforehand. A set of measures based on fixations are thoroughly evaluated, including fixation duration, fixation count, and the number of revisits. Finally, the paper proposes a decision tree to predict the user's input during the image searching tasks. The prediction precision of the decision tree is over 87%, which sheds light on a promising integration of natural eye movement into CBIR systems in the future.
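The fixation-based measures evaluated above can be computed from a time-ordered fixation log as sketched below. This is an illustrative feature extractor under one plausible definition of "revisit" (a return to an image after fixating elsewhere), not the paper's code; the input format is assumed.

```python
def fixation_features(fixations):
    """Per-image gaze features: total fixation duration, fixation count,
    and number of revisits (returns to an image after having fixated
    elsewhere). `fixations` is a time-ordered list of
    (image_id, duration_ms) tuples."""
    feats = {}
    prev = None
    for image_id, dur in fixations:
        f = feats.setdefault(image_id, {"duration": 0.0, "count": 0, "revisits": 0})
        f["duration"] += dur
        f["count"] += 1
        # A revisit: we were just on a different image and have seen this one before.
        if prev is not None and prev != image_id and f["count"] > 1:
            f["revisits"] += 1
        prev = image_id
    return feats
```

Feature vectors of this form are what a classifier such as the paper's decision tree would be trained on.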
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Individuals with severe multiple disabilities have little or no opportunity to express their own wishes, make choices and move independently. Because of this, the objective of this work has been to develop a prototype for a gaze-driven device to manoeuvre powered wheelchairs or other moving platforms. The prototype has the same capabilities as a normal powered wheelchair, with two exceptions. Firstly, the prototype is controlled by eye movements instead of by a normal joystick. Secondly, the prototype is equipped with a sensor that stops all motion when the machine approaches an obstacle. The prototype has been evaluated in a preliminary clinical test with two users. Both users clearly communicated that they appreciated and had mastered the ability to control a powered wheelchair with their eye movements.
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
The visual field is the area of space that can be seen when an observer fixates a given point. Many visual capabilities vary with position in the visual field and many diseases result in changes in the visual field. With current technology, it is possible to build very complex real-time visual field simulations that employ gaze-contingent displays. Nevertheless, there are still no established techniques to evaluate such systems. We have developed a method to evaluate a system’s contingency by employing visual blind spot localization as well as foveal fixation. During the experiment, gaze-contingent and static conditions were compared. There was a strong correlation between predicted results and gaze-contingent trials. This evaluation method can also be used with patient populations and for the evaluation of gaze-contingent display systems, when there is need to evaluate a visual field outside of the foveal region.
Urbina Pies With EYEs The Limits Of Hierarchical Pie Menus In Gaze Control
Pie menus offer several features which are advantageous especially for gaze control. Although the optimal number of slices per pie and of depth layers has already been established for manual control, these values may differ in gaze control due to differences in spatial accuracy and cognitive processing. Therefore, we investigated the layout limits for hierarchical pie menus in gaze control. Our user study indicates that providing six slices in multiple depth layers guarantees fast and accurate selections. Moreover, we compared two different methods of selecting a slice. Novices performed well with both, but selecting via selection borders produced better performance for experts than the standard dwell-time selection.
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Eye typing could provide motor-disabled people a reliable method of communication, given that the text entry speed of current interfaces can be increased to allow for fluent communication. There are two reasons for the relatively slow text entry: dwell-time selection requires waiting a certain time, and single-character entry limits the maximum entry speed. We adopted a typing interface based on hierarchical pie menus, pEYEwrite [Urbina and Huckauf 2007], and included bigram text entry with one single pie iteration. Therefore, we introduced three different bigram-building strategies. Moreover, we combined dwell-time selection with selection by borders, providing an alternative selection method and extra functionality. In a longitudinal study we compared participants' performance during character-by-character text entry with bigram entry and with text entry with bigrams derived by word prediction. Data showed large advantages of the new entry methods over single-character text entry in speed and accuracy. Participants preferred selecting by borders, which allowed them faster selections than the dwell-time method.
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
The study of surgeons’ eye movements is an innovative way of assessing skill and situation awareness, in that a comparison of eye movement strategies between expert surgeons and novices may show differences that can be used in training. Our preliminary study compared eye movements of 4 experts and
4 novices performing a simulated gall bladder removal task on a
dummy patient with an audible heartbeat and simulated vital signs displayed on a secondary monitor. We used a head-mounted Locarna PT-Mini eyetracker to record fixation locations during the operation. The results showed that novices concentrated so hard on the surgical
display that they were hardly able to look at the patient’s vital signs, even when heart rate audibly changed during the procedure. In comparison, experts glanced occasionally at the vitals monitor, thus being able to observe the patient condition.
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
The portability of an eye tracking system encourages us to develop a technique for estimating 3D point-of-regard. Unlike conventional methods, which estimate the position in the 2D image coordinates of the mounted camera, such a technique can represent richer gaze information of the human moving in the larger area. In this paper, we propose a method for estimating the 3D point-of-regard and a visualization technique of gaze trajectories under natural head movements for the head-mounted device. We employ visual SLAM technique to estimate head configuration and extract environmental information. Even in cases where the head moves dynamically, the proposed method could obtain 3D point-of-regard. Additionally, gaze trajectories are appropriately overlaid on the scene camera image.
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Gaze visualizations represent an effective way for gaining fast insights into eye tracking data. Current approaches do not adequately support eye tracking studies for three-dimensional (3D) virtual environments. Hence, we propose a set of advanced gaze visualization techniques for supporting gaze behavior analysis in such environments. Similar to commonly used gaze visualizations for two-dimensional stimuli (e.g., images and websites), we contribute advanced 3D scan paths and 3D attentional maps. In addition, we introduce a models-of-interest timeline depicting viewed models, which can be used for displaying scan paths in a selected time segment. A prototype toolkit is also discussed which combines an implementation of our proposed techniques. Their potential for facilitating eye tracking studies in virtual environments was supported by a user study among eye tracking and visualization experts.
Skovsgaard Small Target Selection With Gaze Alone
Accessing the smallest targets in mainstream interfaces using gaze
alone is difficult, but interface tools that effectively increase the size of selectable objects can help. In this paper, we propose a conceptual framework to organize existing tools and guide the development of new tools. We designed a discrete zoom tool and conducted a proof-of-concept experiment to test the potential of the framework and the tool. Our tool was as fast as and more accurate than the currently available two-step magnification tool. Our framework shows potential to guide the design, development, and testing of zoom tools to facilitate the accessibility of mainstream
interfaces for gaze users.
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
This paper presents a low-cost gaze tracking system that is based on a webcam mounted close to the user’s eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending on the typing system used. A pilot study to assess the usability of the system was also carried out in the home of a user with severe motor impairments. The
user successfully typed on a wall-projected interface using his eye movements.
Rosengrant Gaze Scribing In Physics Problem Solving
Eye-tracking has been widely used for research purposes in fields such as linguistics and marketing. However, there are many possibilities for how eye-trackers could be used in other disciplines like physics. A part of physics education research deals with the differences between novices and experts, specifically how each group solves problems. Though there has been a great deal of research about these differences, there has been no research that focuses on noticing exactly where experts and novices look while solving the problems. Thus, to complement the past research, I have created a new technique called gaze scribing. Subjects wear a head-mounted eye-tracker while solving electrical circuit problems on a graphics monitor. I monitor the scan patterns of the subjects and combine that with videotapes of their work while solving the problems. This new technique has yielded new information and elaborated on previous studies.
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
In certain applications such as radiology and imagery analysis, it is important to minimize errors. In this paper we evaluate a structured inspection method that uses eye tracking information as a feedback mechanism to the image inspector. Our two-phase method starts with a free viewing phase during which gaze data is collected. During the next phase, we either segment the image, mask previously seen areas of the image, or combine the two techniques, and repeat the search. We compare the different methods
proposed for the second search phase by evaluating the inspection method using true positive and false negative rates, and subjective workload. Results show that gaze-blocked configurations reduced the subjective workload, and that gaze-blocking without segmentation showed the largest increase in true positive identifications and the largest decrease in false negative identifications of previously unseen objects.
Prats Interpretation Of Geometric Shapes An Eye Movement Study
This paper describes a study that seeks to explore the correlation between eye movements and the interpretation of geometric shapes. This study is intended to inform the development of an eye tracking interface for computational tools to support and enhance the natural interaction required in creative design. A common criticism of computational design tools is that they do not enable manipulation of designed shapes according to all perceived features. Instead the manipulations afforded are limited by formal structures of shapes. This research examines the potential for eye movement data to be used to recognise and make available for manipulation the perceived features in shapes. The objective of this study was to analyse eye movement data with the intention of recognising moments in which an interpretation of shape is made. Results suggest that fixation duration and saccade amplitude prove to be consistent indicators of shape interpretation.
Porta CeCursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Eye gaze interaction for disabled people is often dealt with by designing ad-hoc interfaces, in which the big size of their elements compensates for both the inaccuracy of eye trackers and the instability of the human eye. Unless solutions for reliable eye cursor control are employed, gaze pointing in ordinary graphical operating environments is a very difficult task. In this paper we present an eye-driven cursor for MS Windows which behaves differently according to the “context”. When the user’s gaze is perceived within the desktop or a folder, the cursor can be discretely shifted from one icon to another. Within an application window or where there are no icons, on the contrary, the cursor can be continuously and precisely moved. Shifts in the four directions (up, down, left, right) occur through dedicated buttons. To increase user awareness of the currently pointed spot on the screen while continuously moving the cursor, a replica of the spot is provided within the active direction button, resulting in improved pointing performance.
Pontillo SemantiCode Using Content Similarity And Database Driven Matching T...
Laboratory eyetrackers, constrained to a fixed display and static (or accurately tracked) observer, facilitate automated analysis of fixation data. Development of wearable eyetrackers has extended environments and tasks that can be studied at the expense of automated analysis. Wearable eyetrackers provide 2D point-of-regard (POR) in scene-camera coordinates, but the researcher is typically interested in some high-level semantic property (e.g., object identity, region, or material) surrounding individual fixation points. The synthesis of POR into fixations and semantic information remains a labor-intensive manual task, limiting the application of wearable eyetracking.
We describe a system that segments POR videos into fixations and allows users to train a database-driven, object-recognition system. A correctly trained library results in a very accurate and semi-automated translation of raw POR data into a sequence of objects, regions or materials.
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
We report on the results of a study in which pairs of subjects were involved in spoken dialogues and one of the subjects also operated a simulated vehicle. We estimated the driver’s cognitive load based on pupil size measurements from a remote eye tracker. We compared the cognitive load estimates based on the physiological pupillometric data and driving performance data. The physiological and performance measures show high correspondence suggesting that remote eye tracking might provide reliable driver cognitive load estimation, especially in simulators. We also introduced a new pupillometric cognitive load measure that shows promise in tracking cognitive load changes on time scales of several seconds.
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
To estimate viewer’s contextual understanding, features of their
eye-movements while viewing question statements in response to definition statements, and features of correct and incorrect responses were extracted and compared. Twelve directional features
of eye-movements across a two-dimensional space were created, and these features were compared between correct and incorrect responses. The procedure of estimating the response was developed with Support Vector Machines, using these features. The estimation performance and accuracy were assessed across combinations of features. The number of definition statements, which needed to be memorized to answer the question statements during the experiment, affected the estimation accuracy. These results provide evidence that features of eye-movements during reading statements
can be used as an index of contextual understanding.
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
A novel gaze estimation method based on a novel aspherical model of the cornea is proposed in this paper. The model is a surface of revolution about the optical axis of the eye. The calculation method is explained on the basis of the model. A prototype system for estimating the point of gaze (POG) has been developed using this method. The proposed method has been found to be more accurate than the gaze estimation method based on a spherical model of the
cornea.
Moshnyaga The Use Of Eye Tracking For Pc Energy Management
This paper discusses a new application of eye-tracking, namely power management, and outlines its implementation in personal computer system. Unlike existing power management technology, which “senses” a PC user through keyboard and/or mouse, our technology “watches” the user through a single camera. The technology tracks the user’s eyes keeping the display active only if the user looks at the screen. Otherwise it dims the display down or even switches it off to save energy. We implemented the technology in hardware and present the results of its experimental evaluation.
Figure 1. Ray-tracing diagram (not to scale, in order to be able to show all the elements of interest), showing schematic representations of the eye, a camera and a light source.

    [(l_i − o_j) × (u_ij − o_j)] · (c − o_j) = 0,   (1)

where (l_i − o_j) × (u_ij − o_j) is normal to the plane defined by l_i, o_j and u_ij.

Notice that (1) shows that, for each camera j, all the planes defined by o_j, l_i and u_ij contain the line defined by points c and o_j. If the light sources, l_i, are positioned such that at least two of those planes are not coincident, the planes intersect at the line defined by c and o_j. If η_j is a vector in the direction of the line of intersection of the planes, then

    c − o_j = k_c,j η_j   for some scalar k_c,j.   (2)

In particular, if two light sources are considered (i = 1, 2),

    η_j = { [(l_1 − o_j) × (u_1j − o_j)] × [(l_2 − o_j) × (u_2j − o_j)] } / ‖ [(l_1 − o_j) × (u_1j − o_j)] × [(l_2 − o_j) × (u_2j − o_j)] ‖,   (3)

where [(l_1 − o_j) × (u_1j − o_j)] is the normal to the plane defined by o_j, l_1 and u_1j, and [(l_2 − o_j) × (u_2j − o_j)] is the normal to the plane defined by o_j, l_2 and u_2j.

Having two cameras, the position of the center of curvature of the cornea, c, can be found as the intersection of the lines given by (2)–(3), j = 1, 2. Since, in practice, the estimated coordinates of the images of the corneal reflection centers, u_ij, are corrupted by noise, those lines may not intersect. Therefore, c is found as the midpoint of the shortest segment defined by a point belonging to each of those lines. It can be shown that, in such a case, c is given by

    c = (1/2) [η_1 η_2] [ η_1·η_1  −η_1·η_2 ; η_2·η_1  −η_2·η_2 ]^(−1) [ η_1·(o_2 − o_1) ; η_2·(o_2 − o_1) ] + (1/2)(o_1 + o_2).   (4)

Next, consider an imaginary ray that originates at the pupil center, p, travels through the aqueous humor and cornea (effective index of refraction ≈ 1.3375) and refracts at a point r_j on the corneal surface as it travels into the air (index of refraction ≈ 1), such that the refracted ray passes through the nodal point of camera j, o_j, and intersects the camera image plane at a point u_p,j. This refraction results in the formation of a virtual image of the pupil center (virtual pupil center), p_v,j, located on the extension of the refracted ray, i.e.,

    p_v,j = o_j + k_p,j (o_j − u_p,j)   for some k_p,j.   (5)

In strict terms, the spatial location of p_v,j depends on the position of the nodal point of the camera, o_j, relative to the eye. Therefore, in general, the spatial location of p_v,j will be slightly different for each of the two cameras. Despite this, an approximate virtual image of the pupil center, p_v, can be found as the midpoint of the shortest segment defined by a point belonging to each of the lines given by (5), j = 1, 2, i.e.,
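The midpoint-of-the-shortest-segment construction used for c (and again for the virtual pupil center) can be written directly. This is a numerical sketch of that geometric step, with illustrative names; each line is given by an origin o_j and a direction η_j.

```python
import numpy as np

def midpoint_of_shortest_segment(o1, eta1, o2, eta2):
    """Midpoint of the shortest segment between two (possibly skew) lines
    x = o_j + t_j * eta_j. The closest points satisfy
    eta_j . (p1 - p2) = 0 for j = 1, 2."""
    A = np.array([[eta1 @ eta1, -(eta1 @ eta2)],
                  [eta2 @ eta1, -(eta2 @ eta2)]])
    rhs = np.array([eta1 @ (o2 - o1), eta2 @ (o2 - o1)])
    t1, t2 = np.linalg.solve(A, rhs)
    return 0.5 * ((o1 + t1 * eta1) + (o2 + t2 * eta2))
```

For noise-free data the two lines intersect and the midpoint reduces to the intersection point itself.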
Figure 2. Simplified schematics of the eye. The optical axis of the eye connects the center of the pupil with the center of curvature of the cornea. Gaze is directed along the visual axis, which connects the center of the region of highest acuity of the retina (fovea) with the center of curvature of the cornea.
    p_v = (1/2) [h_1 h_2] [ h_1·h_1  −h_1·h_2 ; h_2·h_1  −h_2·h_2 ]^(−1) [ h_1·(o_2 − o_1) ; h_2·(o_2 − o_1) ] + (1/2)(o_1 + o_2),   (6)

where h_j = o_j − u_p,j is the direction of the line given by (5).

Since c is on the optical axis of the eye and assuming that p_v is also on the optical axis (Fig. 1), (3)–(6) provide a closed-form solution for the reconstruction of the optical axis of the eye in 3-D space without the knowledge of any subject-specific eye parameter. In particular, the direction of the optical axis of the eye is given by the unit vector

    ω = (p_v − c) / ‖p_v − c‖.   (7)

The PoG, g, is defined as the intersection of the visual axis, rather than the optical axis, with the scene. The visual axis might deviate from the optical axis by as much as 5° [Young and Sheena 1975]. Therefore, if the PoG is estimated from the intersection of the optical axis with the scene, the estimation might suffer from large, subject-dependent errors. As an example, for the experiments described in Section 4 the RMS error in PoG estimation by the intersection of the optical axis with the computer monitor ranges from 1° to 4.5° (2.5° on average for all four subjects). If the angle between the optical and visual axes is estimated with a one-point user-calibration procedure [Guestrin and Eizenman 2008], during which the subject is required to fixate a known point in space for several seconds, the PoG estimation error is reduced to 0.8° RMS. In the next section, we present a novel AAE algorithm, in which the angles between the optical and visual axes are estimated automatically.

3 The AAE Algorithm

3.1 Theory

The AAE algorithm is based on the assumption that at each time instant the visual axes of both eyes intersect on the observation surface (e.g., display). The unknown angles between the optical and visual axes can be estimated by minimizing the distance between the intersections of the left and right visual axes with that observation surface.

Two coordinate systems are used to describe the relation between the optical and visual axes of the eye. The first is a stationary right-handed Cartesian World Coordinate System (WCS) with the origin at the center of the display, the Xw-axis in the horizontal direction, the Yw-axis in the vertical direction and the Zw-axis perpendicular to the display (see Figure 2). The second is a non-stationary right-handed Cartesian Eye Coordinate System (ECS), which is attached to the eye, with the origin at the center of curvature of the cornea, c, the Zeye-axis that coincides with the optical axis of the eye, and Xeye- and Yeye-axes that, in the primary gaze position, are in the horizontal and vertical directions, respectively. The Xeye-Yeye plane rotates according to Listing's law [Helmholtz 1924] around the Zeye-axis for different gaze directions.

In the ECS, the unknown 3-D angle between the optical and the visual axes of the eye can be expressed by the horizontal, α, and vertical, β, components of this angle (see Figure 2). The angle α is measured between the projection of the visual axis on the Xeye-Zeye plane and the Zeye-axis, and is equal to 90° if the visual axis is in the −Xeye direction; the angle β is measured between the visual axis and its projection on the Xeye-Zeye plane, and is equal to 90° if the visual axis is in the +Yeye direction. The unit vector in the direction of the visual axis with respect to the ECS, ν_ECS, is then expressed as
4. sin( )cos( ) g 0 g( 0 , 0 ) c k0 ν 0
ν ECS ( , )
sin( )
cos( )cos( ) where ν 0 ν ( 0 , 0 ) and k0 k ( 0 , 0 ) .
Then, using (10),
The unit vector in the direction of the visual axis with respect to
the WCS, ν , can be expressed as
g( , )
a k ν 0 k0 ν
ν ( , ) R ν ECS ( , ) 0 , 0
where R is the rotation matrix from the ECS to the WCS (inde- g ( , )
b k ν 0 k0 ν
pendent of α and β), which can be calculated from the orienta- 0 , 0
tion of the optical axis of the eye and Listing’s law [Helmholtz
1924].
where, from (8)-(9),
Because the visual axis goes through the center of curvature of
the cornea, c, and the PoG is defined by the intersection of the cos( 0 )cos( 0 )
visual axis with the display (Zw = 0), the PoG in the WCS is ν ( , )
given by ν R
0
0 , 0
sin( 0 ) cos( 0 )
g( , ) c k ( , ) ν ( , ) c k ( , ) R ν ECS ( , )
sin( 0 )sin( 0 )
ν ( , )
where k ( , ) is a line parameter defined by the intersection of ν R
cos( 0 )
the visual axis with the surface of the display:

    g(α, β) = c + k(α, β) · ν(α, β)

and, from (11),

    k(α, β) = −(c · n) / (ν(α, β) · n)

where n = [0 0 1]T is the normal to the display surface and "T" denotes transpose.

The estimation of αL, βL, αR and βR is based on the assumption that at each time instant the visual axes of both eyes intersect on the surface of the display (superscripts "L" and "R" are used to denote parameters of the left and right eyes, respectively). The unknown angles αL, βL, αR and βR can be estimated by minimizing the distance between the intersections of the left and right visual axes with that surface (left and right PoGs).

The objective function to be minimized is then

    F(αL, βL, αR, βR) = Σi ‖ giL(αL, βL) − giR(αR, βR) ‖²    (12)

where the subscript i identifies the i-th gaze sample.

The above objective function is non-linear, and thus a numerical optimization procedure is required to solve for the unknown angles αL, βL, αR and βR. However, since the deviations of the unknown angles αL, βL, αR and βR from the expected "average" values α0L, β0L, α0R and β0R are relatively small, a linear approximation of (10) can be obtained by using the first three terms of its Taylor series expansion:

    g(α, β) ≈ g(α0, β0) + ∂g(α, β)/∂α |(α0, β0) (α − α0) + ∂g(α, β)/∂β |(α0, β0) (β − β0)

Let

    a = ∂g(α, β)/∂α |(α0, β0) ,   b = ∂g(α, β)/∂β |(α0, β0) ,   g0 = g(α0, β0)

where, from (11),

    ∂g(α, β)/∂α |(α0, β0) = k0 [ ∂ν(α, β)/∂α |(α0, β0) − ( (∂ν(α, β)/∂α |(α0, β0) · n) / (ν0 · n) ) ν0 ]

(and similarly for the derivative with respect to β), with k0 = k(α0, β0) and ν0 = ν(α0, β0).

Using the above linear approximation, the sum of the squared distances between the left and right PoGs in the objective function (12) can be expressed as

    F(αL, βL, αR, βR) ≈ Σi ‖ Mi x − yi ‖²    (21)

where Mi = [ aiL  biL  −aiR  −biR ] is a 3x4 matrix, yi = g0,iR − g0,iL is a 3x1 vector and x = [ (αL − α0L)  (βL − β0L)  (αR − α0R)  (βR − β0R) ]T is a 4x1 vector of unknown angles. The subscript "i" is used to explicitly indicate the correspondence to the specific time instance "i" or i-th gaze sample.

The solution to (21) can be obtained in a closed form using least squares as

    xopt = (MT M)⁻¹ MT y

where the optimization over several time instances is achieved by stacking the matrices on top of each other:
Figure 3  Estimation results with a 40 cm x 30 cm observation surface (e.g., 20" monitor). Noise STD in the optical axis was set to 0.1° and in the center of curvature of the cornea to 0.5 mm. (a) Estimated subject-specific angles; (b) Root-Mean-Square Error (RMSE) of PoG estimation (top) and Left PoG − Right PoG distance (bottom).

    M = [ M1 ; M2 ; … ; MN ] ,   y = [ y1 ; y2 ; … ; yN ]    (stacked vertically)

Finally, the estimates of the subject-specific angles are given by

    [ α̂L  β̂L  α̂R  β̂R ]T = [ α0L  β0L  α0R  β0R ]T + xopt

Since the objective function (21) is a linear approximation of the objective function (12), several iterations of (14)-(24) might be needed to converge to the true minimum of the objective function (12). In the first iteration, α0L, β0L, α0R and β0R are set to zero. In subsequent iterations, α0L, β0L, α0R and β0R are set to the values of α̂L, β̂L, α̂R and β̂R from the preceding iteration.

The above methodology to estimate the angle between the optical and visual axes is suitable for "on-line" estimation, as a new matrix Mi is added to M and a new vector yi is added to y for each new estimate of the centers of curvature of the corneas and optical axes.

3.2 Numerical Simulations

The performance of the AAE algorithm was evaluated as a function of noise in the estimates of the direction of the optical axis and the coordinates of the center of curvature of the cornea of each eye. In the simulations, the angles between the optical and visual axes were randomly drawn from a uniform distribution with a range of (−5°, 0°) for αR, (0°, 5°) for αL and (−5°, 5°) for βR and βL. The PoGs were randomly drawn from a uniform distribution over the observation surface and the optical axes of the two eyes were calculated. Noise in the estimates of the center of curvature of the cornea and the optical axis was simulated by adding independent zero-mean white Gaussian processes to the coordinates of the centers of curvature of the cornea (X, Y and Z) and to the horizontal and vertical components of the direction of the optical axis. The AAE algorithm was implemented in MATLAB®. One thousand PoGs were used to estimate the angles between the optical and visual axes for each set of eye parameters (αR, αL, βR and βL). The initial value for the angle between the optical and visual axes was set to zero (the initial value did not affect the final results as long as it was within ±20° of the actual value). The first update of the algorithm was done after 100 PoGs to prevent large fluctuations during start-up.

Figure 3 shows an example of one simulation. The parameters for the simulations were as follows: 1) standard deviation (STD) of the noise in the components of the direction of the optical axis 0.1°; 2) STD of the noise in the coordinates of the center of curvature of the cornea 0.5 mm; 3) observation surface 40 cm x 30 cm (20" monitor); 4) the center of curvature of the cornea of the subject's right eye, cR, at [3 0 75]T (i.e., approximately 75 cm from the display's surface); 5) the center of curvature of the cornea of the subject's left eye, cL, at [−3 0 75]T (i.e., inter-pupillary distance = 6 cm). As can be seen from Figure 3(a), after approximately 500 PoGs the solution converges to within ±0.5° of the true values of αR, αL, βR and βL.

Figure 3(b) shows the root-mean-square (RMS) error in the PoG estimation (top) and the value of the objective function (bottom). The objective function was effectively minimized after the first update (i.e., after 100 iterations) while the RMS error continues to decline until iteration 500. This is due to the fact that even though after 100 iterations the distance between the points-of-gaze of the left and right eyes was minimized, the PoGs of both eyes had similar biases relative to the actual PoGs. From iteration 100 to 500 the PoGs of the two eyes drifted simultaneously towards their true values.

As shown in [Model and Eizenman 2009], the performance of the algorithm for the estimation of the angles between the optical and visual axes depends on the range of the viewing angles between the visual axes of both eyes and the vectors normal to the observation surface. Therefore, the simulations used four different observation surfaces: a) a plane that provides viewing angles in the range of ±14.9º horizontally and ±11.3º vertically (40 cm x 30 cm observation surface that is similar in size to a 20" monitor); b) a plane that provides viewing angles in the range of ±28º horizontally and ±21.8º vertically (80 cm x 60 cm observation surface that is similar in size to a 40" monitor); c) a plane that provides viewing angles in the range of ±46.8º horizontally and ±38.7º vertically (160 cm x 120 cm); and d) a pyramid (30 cm x 30 cm base, 15 cm height, 45º angle between the base and each of the facets).

Figure 4  RMS errors for the angle between the optical and visual axes, for four different surfaces, as a function of the noise in the estimation of the components of the direction of the optical axis. 27 head positions (all the combinations of Xhead = −10 cm, 0 cm, 10 cm; Yhead = −10 cm, 0 cm, 10 cm; Zhead = −10 cm, 0 cm, 10 cm). (a) αL; (b) βL.

For these simulations, the coordinates of the center of curvature of the cornea of the right eye, cR, were set at 27 different positions (all the combinations of X = −7 cm, 3 cm, 13 cm; Y = −10 cm, 0 cm, 10 cm; Z = 85 cm, 75 cm, 65 cm; e.g., cR = [−7 0 75]T, [3 0 75]T, [13 0 75]T, etc.). The simulations were repeated for different noise levels in the components of the direction of the optical axis. For all the simulations, the noise level in the coordinates of the center of curvature of the cornea was set to 1 mm. For each eye position and noise level in the optical axis the simulations were repeated 100 times. Figure 4 shows the aggregate (all head positions) RMS error in the estimations of αL and βL as a function of the noise level in the components of the direction of the optical axis.

As can be seen from Figure 4, the performance of the AAE algorithm improves when the range of angles between the subject's gaze vectors, as he/she looks at the displays, and the vectors normal to the observation surfaces increases. For a plane that provides viewing angles in the range of ±14.9º horizontally and ±11.3º vertically (40 cm x 30 cm observation surface), a noise level of 0.4° in the components of the direction of the optical axis results in an RMS error of 2.6º in the estimation of αL. For the same noise level in the optical axis, the RMS error in the estimation of αL is reduced to only 0.5º when a plane that provides viewing angles in the range of ±46.8º horizontally and ±38.7º vertically (160 cm x 120 cm) is used. This is similar to the performance of the algorithm with the pyramidal observation surface.

4 Experiments

A two-camera REGT system [Guestrin and Eizenman 2007] was used for experiments with four subjects. The STD of the noise in the estimates of the components of the direction of the optical axis in this system is 0.4° and the STD of the noise in the estimates of the coordinates of the center of curvature of the cornea is 1.0 mm. Subjects were seated at a distance of approximately 75 cm from a computer monitor (plane Z=0) and head movements were not restrained. Given the limited tracking range of the system (approximately ±15º horizontally and vertically), only relatively small observation planes (<20" monitor) or more complicated pyramid observation surfaces could be used for the experiments. Because the expected RMS errors with a 20" observation surface (>2.5º for α and >3º for β, see Figure 4) are larger than the expected errors when αR, αL, βR and βL are set a-priori to −2.5º, 2.5º, 0º, 0º, respectively, there was no point in evaluating the performance of the algorithm with such an observation surface. Therefore, the experiments were performed with a pyramid observation surface (30 cm x 30 cm base, 15 cm height, and pinnacle towards the viewer). The base of the pyramid was mounted on the plane Z=9 (cm) and the pinnacle was located at [0, −1.5, 24]T.

During the experiment, each subject was asked to look at the pyramid and one thousand estimates of the center of curvature of the cornea and the direction of the optical axis were obtained for each eye. The subject-specific angles between the optical and visual axes were estimated, off-line, using the AAE algorithm. Next, the pyramid was removed and the subject was asked to complete a standard one-point calibration procedure [Guestrin and Eizenman 2007]. The values of the subject-specific angles between the optical and visual axes estimated with the AAE procedure and with the one-point calibration procedure are shown in Table I.
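The linearized estimation loop of Sections 3.1–3.2 can be illustrated with a short numerical sketch. The script below is a simplified stand-in, not the authors' MATLAB implementation: the eye model reduces to "visual axis = optical axis rotated by (α, β)", the rotation convention, function names and synthetic angle values are our assumptions, the display is the plane Z = 0, noise is omitted, and the partial derivatives a and b are taken numerically rather than analytically from (11).

```python
import numpy as np

def rot(alpha, beta):
    """Rotation taking the optical-axis direction to the visual-axis direction
    (horizontal offset about Y, vertical offset about X; radians). Convention
    is an assumption for this sketch."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    Ry = np.array([[ca, 0.0, sa], [0.0, 1.0, 0.0], [-sa, 0.0, ca]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cb, -sb], [0.0, sb, cb]])
    return Ry @ Rx

def pog(c, omega, alpha, beta):
    """Intersection of the visual axis with the display plane Z = 0.
    c: center of corneal curvature; omega: unit optical-axis direction."""
    nu = rot(alpha, beta) @ omega            # visual axis
    n = np.array([0.0, 0.0, 1.0])            # display normal
    k = -(c @ n) / (nu @ n)                  # from (c + k*nu) . n = 0
    return c + k * nu

def estimate_angles(c_L, omegas_L, c_R, omegas_R, iters=10, h=1e-6):
    """Iterated linear least squares for (alphaL, betaL, alphaR, betaR):
    linearize both PoGs about the current estimate, stack all samples,
    and solve x_opt = (M^T M)^-1 M^T y in closed form each iteration."""
    x = np.zeros(4)                          # start at zero, as in the paper
    for _ in range(iters):
        blocks, rhs = [], []
        for wL, wR in zip(omegas_L, omegas_R):
            gL = pog(c_L, wL, x[0], x[1])
            gR = pog(c_R, wR, x[2], x[3])
            # numerical partial derivatives of each PoG wrt its two angles
            aL = (pog(c_L, wL, x[0] + h, x[1]) - gL) / h
            bL = (pog(c_L, wL, x[0], x[1] + h) - gL) / h
            aR = (pog(c_R, wR, x[2] + h, x[3]) - gR) / h
            bR = (pog(c_R, wR, x[2], x[3] + h) - gR) / h
            blocks.append(np.column_stack([aL, bL, -aR, -bR]))  # 3x4 block M_i
            rhs.append(gR - gL)                                 # y_i
        M, y = np.vstack(blocks), np.concatenate(rhs)
        x = x + np.linalg.lstsq(M, y, rcond=None)[0]            # least-squares step
    return x

# Synthetic, noise-free data: both eyes fixate random points on a 40 x 30 cm display.
rng = np.random.default_rng(0)
true = np.radians([2.5, 0.5, -2.5, -0.8])    # alphaL, betaL, alphaR, betaR (made up)
c_L, c_R = np.array([-3.0, 0.0, 75.0]), np.array([3.0, 0.0, 75.0])
targets = np.column_stack([rng.uniform(-20, 20, 500),
                           rng.uniform(-15, 15, 500),
                           np.zeros(500)])

def optical_axis(c, t, alpha, beta):
    nu = (t - c) / np.linalg.norm(t - c)     # true visual axis points at the target
    return rot(alpha, beta).T @ nu           # undo the offset to get the optical axis

omegas_L = [optical_axis(c_L, t, true[0], true[1]) for t in targets]
omegas_R = [optical_axis(c_R, t, true[2], true[3]) for t in targets]

est = estimate_angles(c_L, omegas_L, c_R, omegas_R)
print(np.degrees(est))                       # should approach (2.5, 0.5, -2.5, -0.8)
```

With noise-free data the minimization of the left-right PoG distance alone recovers the generating angles after a few iterations; adding zero-mean Gaussian noise to the optical-axis directions and cornea centers would mimic the setting of Section 3.2.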
TABLE I
HORIZONTAL (α) AND VERTICAL (β) COMPONENTS OF THE ANGLE BETWEEN THE OPTICAL AND VISUAL AXES

                    Subject-specific angles (°)
                    AAE                      1-point calib
  Subjects    αL    αR    βL    βR     αL    αR    βL    βR
  1           1.8  -2.6  -0.1  -0.7    0.9  -2.9  -0.2  -1.0
  2           1.1  -1.0  -2.8  -0.7    0.9  -1.3  -3.1  -1.5
  3           0.3  -0.7   2.7   0.3   -0.1  -0.3   2.1   0.4
  4          -0.2  -3.1   0.6   0.1    0.1  -2.3   0.5   0.7

'AAE' – α and β estimated with the AAE procedure. '1-point calib' – α and β estimated with a one-point calibration procedure.

Since the estimates of the subject-specific angles between the optical and visual axes that are obtained by the one-point calibration procedure minimize the RMS error between the estimated PoG and the actual PoG, the values obtained by the one-point calibration procedure will serve as a reference ("gold standard") for the calculations of the errors of the AAE procedure. The estimation errors of the AAE procedure are -0.08±0.59° in α (left and right combined) and -0.19±0.43° in β (left and right combined). Given the noise characteristics of the gaze estimation system, these estimation errors are in the range predicted by the numerical simulations (Figure 4).

Following the estimation of the angles between the optical and visual axes, the subjects looked at a grid of nine points (3x3, 8.5° apart) displayed on a computer monitor. Fifty PoGs were collected at each point. The RMS errors in the estimation of the PoG for all four subjects are summarized in Table II.

TABLE II
POG ESTIMATION ERROR

                       RMS error (°)
              UCF-REGT             1-point calib REGT
  Subjects    Left Eye  Right Eye    Left Eye  Right Eye
  1             1.2       0.8          0.6       0.5
  2             1.3       1.4          1.0       0.7
  3             1.4       1.0          0.8       0.8
  4             1.6       1.7          0.9       1.1

'UCF-REGT' – PoG is estimated with the user-calibration-free REGT system. '1-point calib REGT' – PoG is estimated with the REGT system that uses a one-point user-calibration procedure.

For the User-Calibration-Free REGT (UCF-REGT) system described in this paper the average RMS error in PoG estimation for all four subjects is 1.3°. For the REGT system that uses the one-point calibration procedure the average RMS error is 0.8°.

5 Discussion and Conclusions

A user-calibration-free REGT system was presented. The system uses the assumption that at each time instant both eyes look at the same point in space to eliminate the need for a user-calibration procedure that requires subjects to fixate on known targets at specific time intervals to estimate the angles between the optical and visual axes. Experiments with a REGT system that estimates the components of the direction of the optical axis with an accuracy of 0.4° [Guestrin and Eizenman 2007] showed that the AAE algorithm can estimate the angles between the optical and visual axes with an RMS error of 0.5°. These results are consistent with the numerical analysis (see Section 3.2) and were achieved with an observation surface that included four planes. Based on the numerical analysis (see Figure 4), an eye-tracking system that estimates the components of the direction of the optical axis with an accuracy of 0.1° can use a 30 cm x 40 cm observation plane (e.g., 20" computer monitor) to achieve a similar RMS error. Increasing the observation plane to 60 cm x 80 cm will allow eye trackers that can estimate the components of the direction of the optical axis with an accuracy of 0.2° to achieve similar RMS errors. As the accuracy of eye-tracking systems improves, it will become feasible to use the AAE algorithm presented in this paper to estimate accurately the angle between the optical and visual axes using a single observation plane. The use of a single observation surface will improve considerably the utility of the UCF-REGT system.

Experiments with four subjects demonstrated that when a two-camera REGT system [Guestrin and Eizenman 2007] is used with a single-point calibration procedure, the RMS error is 0.8°. With the UCF-REGT system, the RMS error of the PoG estimation increases to 1.3° (see Table II). User-calibration-free gaze estimation systems that estimate the PoG from the intersection of the optical axis of one of the eyes with the display (e.g., [Shih et al. 2000]) or from the midpoint of the intersections of the optical axes of both eyes with the display (e.g., [Nagamatsu et al. 2009]) can exhibit large, subject-dependent gaze estimation errors. For the experiments described in Section 4 (see Table II), calculation of the PoG by the intersection of the optical axis of one eye with the computer monitor results in RMS errors ranging from 1° to 4.5° (average 2.5°). By using the midpoint, the RMS errors for the four subjects are reduced to a range between 1.5° and 3.2° (average 2.0°). With the UCF-REGT system, the RMS error is in the range of 0.8° to 1.7° (1.3° on average for all four subjects). The UCF-REGT system reduces the variability of PoG estimation errors between subjects and improves significantly the accuracy of "calibration-free" REGT systems.

References

DUCHOWSKI, A. T. (2002). "A breadth-first survey of eye-tracking applications." Behavior Research Methods, Instruments & Computers 34(4): 455-470.

EIZENMAN, M., MODEL, D. and GUESTRIN, E. D. (2009). Covert monitoring of the point-of-gaze. IEEE TIC-STH, Toronto, ON, Canada, 551-556.

EIZENMAN, M., SAPIR-PICHHADZE, R., WESTALL, C. A., WONG, A. M., LEE, H. and MORAD, Y. (2006). "Eye-movement responses to disparity vergence stimuli with artificial monocular scotomas." Current Eye Research 31(6): 471-480.

GOLDBERG, J. H. and KOTVAL, X. P. (1999). "Computer interface evaluation using eye movements: methods and constructs." International Journal of Industrial Ergonomics 24(6): 631-645.

GUESTRIN, E. D. and EIZENMAN, M. (2007). Remote point-of-gaze estimation with free head movements requiring a single-point calibration. Proc. of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 4556-4560.

GUESTRIN, E. D. and EIZENMAN, M. (2006). "General theory of remote gaze estimation using the pupil center and corneal reflections." IEEE Transactions on Biomedical Engineering 53(6): 1124-1133.

GUESTRIN, E. D. and EIZENMAN, M. (2008). Remote point-of-gaze estimation requiring a single-point calibration for applications with infants. Proc. of the 2008 Symposium on Eye Tracking Research & Applications, Savannah, GA, USA, ACM: 267-274.

HARBLUK, J. L., NOY, Y. I., TRBOVICH, P. L. and EIZENMAN, M. (2007). "An on-road assessment of cognitive distraction: impacts on drivers' visual behavior and braking performance." Accident Analysis & Prevention 39(2): 372-379.

HELMHOLTZ, H. (1924). Helmholtz's Treatise on Physiological Optics. Translated from the 3rd German ed., edited by J. P. C. Southall. Rochester, NY, Optical Society of America.

HUTCHINSON, T. E., WHITE, K. P., JR., MARTIN, W. N., REICHERT, K. C. and FREY, L. A. (1989). "Human-computer interaction using eye-gaze input." IEEE Transactions on Systems, Man and Cybernetics 19(6): 1527-1534.

LOHSE, G. (1997). "Consumer eye movement patterns of Yellow Pages advertising." Journal of Advertising 26(1): 61-73.

MODEL, D. and EIZENMAN, M. (2010). "An automatic personal calibration procedure for advanced gaze estimation systems." IEEE Transactions on Biomedical Engineering, accepted.

MODEL, D., GUESTRIN, E. D. and EIZENMAN, M. (2009). An automatic calibration procedure for remote eye-gaze tracking systems. 31st Annual International Conference of the IEEE EMBS, Minneapolis, MN, USA, 4751-4754.

NAGAMATSU, T., KAMAHARA, J. and TANAKA, N. (2009). Calibration-free gaze tracking using a binocular 3D eye model. Proceedings of the 27th International Conference on Human Factors in Computing Systems, Boston, MA, USA, ACM.

RAYNER, K. (1998). "Eye movements in reading and information processing: 20 years of research." Psychological Bulletin 124(3): 372-422.

SHIH, S.-W., WU, Y.-T. and LIU, J. (2000). A calibration-free gaze tracking technique. In Proceedings of the 15th Int. Conference on Pattern Recognition, 201-204.

SHIH, S.-W. and LIU, J. (2004). "A novel approach to 3-D gaze tracking using stereo cameras." IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 34(1): 234-245.

WETZEL, P. A., KRUEGER-ANDERSON, G., POPRIK, C. and BASCOM, P. (1997). An eye tracking system for analysis of pilots' scan paths. United States Air Force Armstrong Laboratory.

YOUNG, L. R. and SHEENA, D. (1975). "Survey of eye-movement recording methods." Behavior Research Methods & Instrumentation 7(5): 397-429.