Thank you. I am Diako Mardanbegi, a PhD student at ITU, and I work on head-mounted eye tracking (HMET). This work is about a simple method for compensating the parallax error when using a monocular head-mounted eye tracker. It is more on the engineering side, but as Kenneth mentioned yesterday, it is good that engineers and researchers come closer sometimes and listen to each other. It is basically a method that we are going to implement and test on our head-mounted eye tracker in our lab, and here I am going to give a short presentation on it.
As you know, video… In contrast, systems called head-mounted gaze trackers have another camera mounted on the head that captures the user's field of view, and they allow estimating the point of regard (PoR) in the user's field of view while the user is fully mobile. Mobility is the main advantage of head-mounted gaze trackers compared to remote gaze trackers.
However, a common problem with monocular head-mounted gaze trackers is that they introduce gaze estimation errors when the distance between the point of regard and the user (a.k.a. the fixation distance) differs from the distance at which the system was calibrated. This error arises because the scene camera and the eye are not co-axial (a.k.a. parallax error). Video… Parallax error limits head-mounted gaze trackers to a certain range of distances (the effective depth).
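(A rough back-of-the-envelope sketch in Python of how large this error can get, using a simplified geometry and an assumed eye/scene-camera offset; the numbers are illustrative only, not from the talk.)

# Back-of-the-envelope estimate of the parallax error magnitude.
# Simplified geometry: the scene camera sits at a small lateral offset from
# the eye and both look roughly in the same direction. Numbers are assumed.
import math

def parallax_error_deg(offset_m, calib_depth_m, fixation_depth_m):
    """Angular offset in the scene image between where the calibrated mapping
    places the gaze and where the fixated point actually projects."""
    angle_at_calib = math.atan2(offset_m, calib_depth_m)
    angle_at_fixation = math.atan2(offset_m, fixation_depth_m)
    return math.degrees(angle_at_calib - angle_at_fixation)

# 3 cm eye/camera offset, calibrated at 1 m, fixating at 0.4 m:
print(parallax_error_deg(0.03, 1.0, 0.4))   # roughly -2.6 degrees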
Here we can see the geometrical figure of that.
One method for removing the parallax between the eye and the scene camera… This method is used in some commercial head-mounted eye trackers such as ISCAN and ASL. It is a direct way of eliminating the parallax error, and the eye tracker works quite accurately at different depths using only a one-time calibration; however, it needs special hardware, which makes the eye tracker somewhat expensive and complicates the design. But for monocular eye trackers that don't have… coaxial… we need to compensate for this error.
The standard method to compensate for parallax errors is to assume that all fixation planes are located at a finite set of distances and then perform a calibration for each of these planes for each user. The eye and scene images are recorded, and gaze estimation is then done in software. This requires that the distance for each fronto-parallel working plane be set manually before gaze estimation. The approach is therefore most appropriate for offline gaze analysis. Another assumption is that the working plane is fronto-parallel with respect to the scene camera, so errors are introduced when planes are viewed from different angles.
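(As a hedged sketch of this common approach, assuming each calibration is stored as a simple second-order polynomial mapping from pupil coordinates to scene-image coordinates; the class, model and numbers below are illustrative, not the exact software discussed here.)

# Sketch of the common multi-plane approach: one calibration per working
# distance, and the distance must be chosen (manually) before estimating gaze.
import numpy as np

class PlaneCalibration:
    def __init__(self, depth_m, coeffs_x, coeffs_y):
        self.depth_m = depth_m      # working-plane distance used at calibration
        self.coeffs_x = coeffs_x    # polynomial coefficients: pupil -> scene x
        self.coeffs_y = coeffs_y    # polynomial coefficients: pupil -> scene y

    def map_gaze(self, px, py):
        f = np.array([1.0, px, py, px * py, px * px, py * py])
        return float(f @ self.coeffs_x), float(f @ self.coeffs_y)

def gaze_for_depth(calibrations, pupil_xy, working_depth_m):
    """Pick the calibration whose plane is closest to the manually set depth."""
    best = min(calibrations, key=lambda c: abs(c.depth_m - working_depth_m))
    return best.map_gaze(*pupil_xy)

# Toy usage with dummy (all-zero) coefficients:
calibs = [PlaneCalibration(0.5, np.zeros(6), np.zeros(6)),
          PlaneCalibration(1.0, np.zeros(6), np.zeros(6))]
print(gaze_for_depth(calibs, (0.3, -0.1), working_depth_m=0.8))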
For real-time gaze estimation… Several calibrations… Now we have a mapping function for each depth, and we want to use the eye tracker. Which one should be used?
Several pose estimation algorithms exist and can be used for obtaining the distance between the camera and the objects. These methods are based on the geometrical extraction of primitives, which allows matching 2D features (points or lines) extracted from the image with known 3D features of an object. All these methods need a calibrated scene camera, and we already have such points on the Tobii Glasses as infrared markers.
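(A minimal sketch of this step using OpenCV's solvePnP, assuming four known coplanar markers; the marker layout, intrinsics and detected pixel positions below are made-up placeholders, not the actual hardware values.)

# Sketch: recover the fixation-plane pose from known markers (e.g. IR markers)
# with a calibrated scene camera. All numbers are made up for illustration.
import numpy as np
import cv2

# Known marker geometry on the plane, in metres (plane z = 0).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.15, 0.0],
                          [0.0, 0.15, 0.0]], dtype=np.float64)

# Marker positions detected in the scene image (pixels).
image_points = np.array([[250.0, 191.0],
                         [390.0, 191.0],
                         [390.0, 296.0],
                         [250.0, 296.0]], dtype=np.float64)

# Intrinsics from a one-time scene-camera calibration.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
# tvec[2] is roughly the camera-to-marker-plane distance (about 1.0 m here).
print("measured depth Z (m):", float(tvec[2]))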
One question is: when we use the one-plane calibration, what does the error pattern look like for different fixation planes? (horizontal and vertical components of the error) So the error is different for different points of the scene image.
The error is also a function of the depth. Suppose one of the components of the error at one point… Plot… Parallax is larger for close distances, which is well known in stereo camera systems. Curve fitting… We know the error for each of these points in each plane, and after interpolation we can find the error at those points for any other distance.
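(A small sketch of the curve-fitting/interpolation step for a single scene-image point, assuming a 1/Z error model; the sample depths and error values are made up for illustration.)

# Fit the measured error component at one scene point against fixation depth,
# then evaluate the fitted curve at any other depth. Numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

depths_m = np.array([0.4, 0.6, 1.0, 1.5, 2.0])    # calibration/test planes
error_px = np.array([38.0, 22.0, 9.0, 3.5, 1.0])  # measured error at one point

def error_model(z, a, b):
    # Parallax grows as the fixation distance shrinks, so 1/Z is a natural model.
    return a / z + b

(a, b), _ = curve_fit(error_model, depths_m, error_px)

# Predicted error at a depth we never calibrated at:
print("error at 0.8 m:", error_model(0.8, a, b), "px")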
How does changing the calibration depth change the curve? Changing the calibration depth only shifts the curve up or down, so the error pattern is not a function of the calibration distance.
OK, having studied the error pattern, we can ask two questions. (read) What we have observed is that the shape of the error curve does not change much between users… orientation/position of the scene camera. So, as a result, we can say that we can use the same eye tracker, with the same error behavior, for different users, and we just need a one-time calibration. With this method we can reduce the parallax error in real time when the user is looking at a plane from different distances and angles.
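(Putting the pieces together, a hedged sketch of what the real-time correction could look like; the additive correction model and the helper signature are assumptions of this sketch, not necessarily the exact implementation described here.)

# Sketch of the resulting real-time correction, combining the pieces above.
def compensate(gaze_xy, fixation_depth_m, calib_depth_m, error_curve):
    """Shift the gaze estimate by the depth-dependent error.

    error_curve(x, y, z) -> (ex, ey): fitted per-point error components at
    scene position (x, y) and fixation depth z. Because changing the
    calibration depth only shifts the curve, the correction is the curve value
    at the measured depth minus its value at the calibration depth."""
    gx, gy = gaze_xy
    ex, ey = error_curve(gx, gy, fixation_depth_m)
    ex0, ey0 = error_curve(gx, gy, calib_depth_m)
    return gx - (ex - ex0), gy - (ey - ey0)

# Toy usage with a made-up 1/Z error curve (pixels):
toy_curve = lambda x, y, z: (18.5 / z - 8.25, 6.0 / z - 3.0)
print(compensate((320.0, 240.0), fixation_depth_m=0.5, calib_depth_m=1.0,
                 error_curve=toy_curve))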
If you are looking for a low-cost… visit our webpage and download Haytham. Good, we can go for dinner earlier! I should look at that.
Real-time parallax error compensation in head-mounted eye trackers
Diako Mardanbegi, Dan Witzner Hansen
Common method...
• All working planes are located on a finite set of distances
• Performing the calibration for each of these planes
• Recording the eye/scene image
• Using the appropriate calibration data for gaze estimation at each depth
  o Fine for offline gaze estimation
  o Fixation planes should be fronto-parallel
Method for online gaze estimation
Several calibrations: c1, c2, c3, c4, c5 (parameters of the mapping function)
Measuring the depth
• Calibrated scene camera: x = [K]X
• Known-size triangle (markers P1, P2, P3)
• Depth can be obtained for every point inside the fixation plane (Z_measured)
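(A sketch of how Z_measured could be obtained for any image point on the fixation plane: back-project the pixel through the calibrated camera and intersect the viewing ray with the plane recovered from the markers; K, rvec and tvec are assumed to come from a solvePnP-style pose estimate as in the earlier sketch, and the function name is a placeholder.)

# Depth of an arbitrary image point lying on the marker plane.
import numpy as np
import cv2

def depth_at_pixel(u, v, K, rvec, tvec):
    R, _ = cv2.Rodrigues(rvec)
    n = R[:, 2]                      # plane normal (marker plane is z = 0 there)
    p0 = tvec.reshape(3)             # a point on the plane, in camera coords
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing-ray direction
    s = (n @ p0) / (n @ ray)         # ray/plane intersection scale
    X = s * ray                      # 3D point in camera coordinates
    return X[2]                      # Z_measured: depth of that point

# e.g. depth_at_pixel(320.0, 243.0, K, rvec, tvec), with K, rvec, tvec taken
# from the solvePnP sketch above.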