Towards Task-Independent Person Authentication Using Eye Movement Signals
Tomi Kinnunen, Filip Sedlak and Roman Bednarik∗
                                                                      School of Computing
                                                         University of Eastern Finland, Joensuu, Finland




Figure 1: A Gaussian mixture model (GMM) with a universal background model (UBM). User-dependent models are adapted from the UBM and the recognition score is normalized using the UBM score.


Abstract

We propose a person authentication system using eye movement signals. In security scenarios, eye tracking has earlier been used for gaze-based password entry. A few authors have also used physical features of eye movement signals for authentication in a task-dependent scenario with matched training and test samples. We propose and implement a task-independent scenario whereby the training and test samples can be arbitrary. We use short-term eye gaze direction to construct feature vectors which are modeled using Gaussian mixtures. The results suggest that there are person-specific features in the eye movements that can be modeled in a task-independent manner. The range of possible applications extends beyond the security type of authentication to proactive and user-convenience systems.

CR Categories: K.6.5 [Management of Computing and Information Systems]: Security and Protection–Authentication; I.5.2 [Pattern Recognition]: Design Methodology–Feature evaluation and selection;

Keywords: Biometrics, eye tracking, task independence

1 Introduction

The goal of biometric person authentication is to recognize persons based on their physical, behavioral and/or learned traits. Physical traits, such as fingerprints, are measured directly from one's body with the aid of a biometric sensor. Behavioral traits, such as handwriting and speech, on the other hand, involve a physical action performed by the user, and hence include a time-varying component which can be controlled by conscious action. The behavioral signal captured by the sensor therefore includes a (possibly very complex) mixture of the cognitive (behavioral) component and the physical component corresponding to the individual physiology.

Human eyes provide a rich source of information about the identity of a person. Biometric systems utilizing the commonly known physical properties of the eyes, the iris and retinal patterns, achieve high recognition accuracy. The eyes, however, have a strong behavioral component as well: the eye movements. The eye, the oculomotor plant and the human visual system develop individually for each person, and it is thus reasonable to hypothesize that some part of the resulting eye movements is individual too. In this paper we study eye movement features for recognizing persons independently of the task. Eye movements could thus provide complementary information for iris, retina, or face recognition. In addition, an eye tracker also enables liveness detection, that is, validating whether the biometric template originates from a real, living human being.

Another advantage of biometric eye movement authentication would be that it enables unnoticeable continuous authentication: the identity of the user can be re-authenticated continuously without any interaction by the user. From the viewpoint of technology and accessibility, the accuracy of eye trackers is continuously improving; at the same time, eye trackers have already been integrated with some low-cost webcams, see e.g. [Cogain]. It can be hypothesized that future notebooks and mobile devices will have integrated eye trackers as standard equipment.

∗e-mail: {tkinnu,fsedlak,bednarik}@cs.joensuu.fi

Copyright © 2010 by the Association for Computing Machinery, Inc. ETRA 2010, Austin, TX, March 22-24, 2010. 978-1-60558-994-7/10/0003 $10.00
1.1 Related Work

The idea of using eye movements for person authentication is not new. The currently adopted approaches consider eye gaze as an alternative way to input user-specific PIN numbers or passwords; see [Kumar et al. 2007] for a recent review of such methods. In these systems, the user may input the pass-phrase by, for instance, gaze-typing the password [Kumar et al. 2007], using graphical passwords such as human faces [Passfaces], or using gaze gestures similar to mouse gestures in web browsers [Luca et al. 2007]. In such systems, it is important to design the user interface (presenting the keypad or visual tokens; providing user feedback) to find an acceptable trade-off between security and usability: a too complicated cognitive task easily becomes distracting. Eye gaze input is useful in security applications where shoulder surfing is expected (e.g. someone watching over one's shoulder while a PIN code is typed at an ATM). However, it is still based on the "what-the-user-knows" type of authentication; it might also be called cognometric authentication [Passfaces].

In this paper, we focus on biometric authentication, that is, on who the person is rather than what he/she knows. Biometric authentication could be combined with the cognometric techniques, or used as the only authentication method in low-security scenarios (e.g. customized user profiles in computers, adaptive user interfaces). We are aware of two prior studies that have utilized physical features of the eye movements to recognize persons [Kasprowski 2004; Bednarik et al. 2005]. In [Kasprowski 2004], the author used a custom-made head-mounted infrared oculography eye tracker ("OBER 2") with a sampling rate of 250 Hz to collect data from N = 47 subjects. The subjects followed a jumping stimulation point presented on a normal computer screen at twelve successive point positions. Each subject had multiple recording sessions; one task consisted of less than 10 seconds of data and led to a fixed-dimensional (2048 data points) pattern. The stimulation reference signal, together with robust fixation detection, was used for normalizing and aligning the gaze coordinate data. From the normalized signals, several features were derived: average velocity direction, distance to stimulation, eye difference, discrete Fourier transform (DFT) and discrete wavelet transform (DWT). Five well-known pattern matching techniques, as well as their ensemble classifier, were then used for verifying the identity of a person. An average recognition error of around 7 % to 8 % was reported for the classifier ensemble.

In an independent study [Bednarik et al. 2005], the authors used a commercially available eye tracker (Tobii ET-1750), built into a TFT panel, to collect data from N = 12 subjects at a sampling rate of 50 Hz. The stimulus consisted of a single cross shown in the middle of the screen and displayed for 1 second. The eye-movement features consisted of the pupil diameter and its time derivative, gaze velocity, and the time-varying distance between the eyes. Discrete Fourier transform (DFT), principal component analysis (PCA) and their combination were then applied to derive low-dimensional features, followed by k-nearest neighbor template matching. The time derivative of the pupil size was found to be the best dynamic feature, yielding an identification rate of 50 % to 60 % (the best feature, the distance between the eyes, was not considered interesting since it can be obtained without an eye tracker).

1.2 What is New in This Study: Task Independence

Both of the studies [Kasprowski 2004; Bednarik et al. 2005] are inherently task-dependent: they assume that the same stimulus appears in training and testing. This approach has the advantage that the reference template (training sample) and the test sample can be accurately aligned. However, the accuracy comes at a price paid in convenience: in a security scenario, the user is forced to perform a certain task which becomes both learned and easily distracting.

Design of the authentication stimulus should also be done with great care to avoid a learning effect; if the user learns the task, such as the order of the moving object, his/her behavior changes, which may lead to increased recognition errors. From the security point of view, a repetitious task can easily be copied and an intrusion system built to imitate the expected input.

Here we take a step towards eye movement biometrics in which we have minimal or absolutely no prior knowledge of the underlying task. We call this problem task-independent eye-movement biometrics. We adopt this terminology from other behavioral biometric tasks, such as text-independent speaker [Kinnunen and Li 2010] and writer [Bulacu and Schomaker 2007] identification.

2 Data Preprocessing and Feature Extraction

The eye coordinates are denoted here by x_left(t), y_left(t), x_right(t), y_right(t) for each timestamp t. Some trackers, including the one used in this study, provide information about the pupil diameter of both eyes as well; we focus on the gaze coordinate data only. The eye movement signal is inherently noisy and includes missing data due to blinks, involuntary head movements, or corneal moisture (tear film) irregularities, for instance. Typically, the eye-tracking device reports the missing data in a validity variable. We assume continuity of the data within the missing parts and linearly interpolate the data between the valid segments. Interpolation is applied independently to each of the four coordinate time series.

The eye is never completely still; it constantly moves around the fixation center. While most eye trackers cannot measure the microscopic movements of the eye (the tremor, drift, and microsaccades) due to limited accuracy and temporal resolution, we concentrate on movements at a coarser level of detail that can be effectively measured. Even during a single fixation, the eye constantly samples the region of interest. We hypothesize that the way the eye moves during fixations and smooth pursuit bears on some individual property of the oculomotor plant.

We describe the movement as a histogram of all the directions the eye travels in during a certain period. A similar technique was used for task-dependent recognition in [Kasprowski 2004]. Figure 2 summarizes the idea. We consider a short-term data window (L samples) that extends over a temporal span of a few seconds. The local velocity direction of the gaze (from time step t to time step t+1) is computed using trigonometric identities and transformed into a normalized histogram, or discrete probability mass function, z = (p(θ_1), p(θ_2), ..., p(θ_K)), where p(θ_k) > 0 for all k and Σ_k p(θ_k) = 1. The K histogram bin midpoints are pre-located at the angles θ_k = k(2π/K), where k = 0, 1, ..., K−1. The data window is then shifted forward by S samples (here we choose S = L/2). The method produces a time sequence {z_t}, t = 1, 2, ..., T, of K-dimensional feature vectors which are considered independent of each other and used in statistical modeling as explained in the next section.

Note that in the example of Fig. 2 most of the fixations have a "north-east" or "south-west" tendency, and this shows up in the histogram as an "eigendirection" of the velocity. It is worth emphasizing that we deliberately avoid the use of any fixation detection, which is error prone, and instead use all the data within a given window. This data is likely to include several fixations and saccades. Since the amount of time spent on fixating is typically more than 90 %, the relative contribution of the saccades to the histogram is smaller.
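
A minimal Python sketch of this preprocessing and feature extraction follows. It is our own illustration, not the authors' code: the function names are ours, the velocity direction is computed with atan2 rather than the unspecified trigonometric identities, the histogram bins are edge-aligned rather than placed on the paper's midpoint grid, and a small probability floor is added to keep p(θ_k) > 0.

```python
import numpy as np

def interpolate_gaps(coord, valid):
    """Linearly interpolate samples flagged invalid (blinks, tracking loss)."""
    t = np.arange(len(coord))
    return np.interp(t, t[valid], coord[valid])

def direction_histograms(x, y, L=900, K=27):
    """Sliding-window histograms of local gaze velocity directions.

    x, y: gaze coordinates of one eye; L: window length in samples;
    K: number of histogram bins; the window shift is S = L/2 as in the text.
    Returns a (T, K) array of probability mass vectors z_t.
    """
    # Local velocity direction from step t to t+1, mapped to [0, 2*pi).
    angles = np.arctan2(np.diff(y), np.diff(x)) % (2.0 * np.pi)
    edges = np.linspace(0.0, 2.0 * np.pi, K + 1)  # K bins over the full circle
    S = L // 2
    feats = []
    for start in range(0, len(angles) - L + 1, S):
        counts, _ = np.histogram(angles[start:start + L], bins=edges)
        probs = (counts + 1e-6) / (counts + 1e-6).sum()  # floor keeps p(theta_k) > 0
        feats.append(probs)
    return np.asarray(feats)
```

At the 120 Hz sampling rate used in the experiments, the default L = 900 corresponds to the 7.5-second window and K = 27 to the bin count chosen in Section 4.2.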


Figure 2: Illustration of velocity direction features. (Panels: eye gaze data over the entire task; short-term gaze within a 1.6-second temporal window with the local velocity direction marked; the short-term x and y coordinate data; and a polar histogram of velocity directions within the window, where the magnitude of each arc is proportional to the probability of that velocity direction.)

Figure 3: Effect of feature parameters on accuracy. (Equal error rate (EER %) as a function of window length L, from 200 to 1000 samples, and number of histogram bins K, from 10 to 60.)


3 Task-Independent Modeling of Features

We adopt some machinery from modern text-independent speaker recognition [Kinnunen and Li 2010]. In speaker recognition, the speech signal is likewise transformed into a sequence of short-term feature vectors that are considered independent. The matching problem then becomes quantifying the similarity of the given "feature vector clouds". To this end, each person is modeled using a Gaussian mixture model (GMM) [Bishop 2006]. An important advance in speaker recognition was normalizing the speaker models with respect to a so-called universal background model (UBM) [Reynolds et al. 2000]. This method (Fig. 1) is now a well-established baseline in that field. The UBM, which is just another Gaussian mixture model, is first trained from a large set of data. The user-dependent GMMs are then derived by adapting the parameters of the UBM to that user's training data. The adapted model is an interpolation between the UBM (the prior model) and the observed training data. This maximum a posteriori (MAP) training of the models gives better accuracy than maximum likelihood (ML) trained models for limited amounts of data. Normalization by the UBM likelihood in the test phase also emphasizes the difference of the given person from the general population.

The GMM-UBM method [Reynolds et al. 2000] can be briefly summarized as follows. Both the UBM and the user-dependent models are mixtures of multivariate Gaussians with mean vectors \mu_m, diagonal covariance matrices \Sigma_m and mixture weights w_m. The density function of the GMM is p(z|\Theta) = \sum_{m=1}^{M} w_m N(z|\mu_m, \Sigma_m), where N(\cdot|\cdot) denotes a multivariate Gaussian and \Theta collectively denotes all the parameters. The mixture weights satisfy w_m \geq 0, \sum_m w_m = 1. The UBM parameters are found via the expectation-maximization (EM) algorithm, and the user-dependent mean vectors are derived as

    \hat{\mu}_m = \alpha_m \tilde{z}_m + (1 - \alpha_m) \mu_m,    (1)

where \tilde{z}_m is the posterior-weighted mean of the training sample. The adaptation coefficient is defined as \alpha_m = n_m / (n_m + r), where n_m is the soft count of vectors assigned to the m-th Gaussian and r > 0 is a fixed relevance factor. Variances and weights can also be adapted. For a small amount of data, the adapted vector is an interpolation between the data and the UBM, whereas for a large amount of data the effect of the UBM is reduced. Given a sequence of independent test vectors {z_1, ..., z_T}, the match score is computed as the difference between the target person's and the UBM's average log-likelihoods:

    score = (1/T) \sum_{t=1}^{T} [ \log p(z_t | \Theta_target) - \log p(z_t | \Theta_UBM) ].    (2)
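
A compact numpy sketch of Eqs. (1) and (2) follows. This is our illustration rather than the authors' code: it assumes a diagonal-covariance UBM trained beforehand, and it adapts only the means, whereas the experiments in Section 4 also adapt the variances.

```python
import numpy as np

def gmm_loglik(Z, w, mu, var):
    """Per-frame log-likelihood log p(z_t | Theta) of a diagonal-covariance GMM.
    Z: (T, K) features; w: (M,) weights; mu, var: (M, K) means and variances."""
    K = Z.shape[1]
    diff = Z[:, None, :] - mu[None, :, :]                      # (T, M, K)
    log_comp = (np.log(w)[None, :]
                - 0.5 * (K * np.log(2 * np.pi) + np.log(var).sum(axis=1))[None, :]
                - 0.5 * (diff ** 2 / var[None, :, :]).sum(axis=2))
    mx = log_comp.max(axis=1, keepdims=True)                   # logsumexp over m
    return (mx + np.log(np.exp(log_comp - mx).sum(axis=1, keepdims=True))).ravel()

def map_adapt_means(Z, w, mu, var, r=16.0):
    """MAP adaptation of the UBM means to a user's training vectors, Eq. (1)."""
    diff = Z[:, None, :] - mu[None, :, :]
    log_comp = (np.log(w)[None, :] - 0.5 * np.log(var).sum(axis=1)[None, :]
                - 0.5 * (diff ** 2 / var[None, :, :]).sum(axis=2))
    log_comp -= log_comp.max(axis=1, keepdims=True)
    gamma = np.exp(log_comp)
    gamma /= gamma.sum(axis=1, keepdims=True)                  # P(m | z_t), (T, M)
    n = gamma.sum(axis=0)                                      # soft counts n_m
    z_tilde = (gamma.T @ Z) / np.maximum(n[:, None], 1e-12)    # posterior-weighted means
    alpha = n / (n + r)                                        # alpha_m = n_m / (n_m + r)
    return alpha[:, None] * z_tilde + (1.0 - alpha[:, None]) * mu

def match_score(Z_test, target, ubm):
    """Average log-likelihood ratio of Eq. (2); models are (w, mu, var) triples."""
    return np.mean(gmm_loglik(Z_test, *target) - gmm_loglik(Z_test, *ubm))
```

In this mean-only variant, a user model differs from the UBM only in its (M x K) mean matrix, which keeps enrollment cheap.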


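Section 4.1 below trains the UBM itself by a deterministic splitting method followed by k-means and EM iterations. The exact splitting rule is not spelled out in the paper; the following sketch is one plausible LBG-style reading, given here as an assumption.

```python
import numpy as np

def train_ubm(Z, M=16, kmeans_iters=7, em_iters=2, eps=0.01):
    """UBM training sketch: deterministic binary splitting with k-means
    refinement, then a few EM iterations. The LBG-style splitting rule is
    our assumption; with pure binary splitting M should be a power of two."""
    T, K = Z.shape
    means = Z.mean(axis=0, keepdims=True)
    while means.shape[0] < M:
        means = np.vstack([means * (1 + eps), means * (1 - eps)])   # split each centroid
        for _ in range(kmeans_iters):                               # k-means refinement
            assign = ((Z[:, None, :] - means[None]) ** 2).sum(2).argmin(1)
            for m in range(means.shape[0]):
                if np.any(assign == m):
                    means[m] = Z[assign == m].mean(axis=0)
    # Initialize weights and diagonal variances from the final partition.
    assign = ((Z[:, None, :] - means[None]) ** 2).sum(2).argmin(1)
    w = np.maximum(np.bincount(assign, minlength=M) / T, 1e-10)
    var = np.vstack([Z[assign == m].var(axis=0) + 1e-6 if np.any(assign == m)
                     else np.ones(K) for m in range(M)])
    for _ in range(em_iters):                                       # EM refinement
        diff = Z[:, None, :] - means[None]
        log_c = (np.log(w)[None, :] - 0.5 * np.log(var).sum(1)[None, :]
                 - 0.5 * (diff ** 2 / var[None]).sum(2))
        log_c -= log_c.max(axis=1, keepdims=True)
        g = np.exp(log_c)
        g /= g.sum(axis=1, keepdims=True)                           # responsibilities
        n = g.sum(axis=0) + 1e-10
        w = n / n.sum()                                             # weight update
        means = (g.T @ Z) / n[:, None]                              # mean update
        var = (g.T @ Z ** 2) / n[:, None] - means ** 2 + 1e-6       # variance update
    return w, means, var
```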
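The experiments below report accuracy as the equal error rate (EER), the operating point at which the false acceptance and false rejection rates coincide. A minimal sketch of this metric over genuine and impostor score lists (our illustration; a simple threshold sweep):

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER from genuine and impostor match scores (higher score = better match)."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor >= t) for t in thresholds])  # false acceptance rate
    frr = np.array([np.mean(genuine < t) for t in thresholds])    # false rejection rate
    i = np.argmin(np.abs(far - frr))      # threshold where the two rates cross
    return 0.5 * (far[i] + frr[i])
```

With the cross-matching protocol of Section 4.1, genuine would hold 17 scores and impostor 272.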
4 Experiments

4.1 Experimental Setup

For the experiments, we collected a database of N = 17 users. The initial number of participants was higher (23), but we left out recordings with low-quality data, e.g. due to frequent loss of the eye. The participants were naive about the actual purpose of the study. The recordings were conducted in a quiet usability laboratory with constant lighting. After a calibration, the target stimuli consisted of two sub-tasks. First, the display showed an introductory text with instructions related to the content and duration of the task. Participants were also asked to keep their head as stable as possible. The following stimulus was a 25-minute-long video of a section of Absolutely Fabulous, a comedy series produced by the BBC. The video contained the original sound and subtitles in Finnish. The screen resolution of the video was 800 x 600 pixels and it was shown on a 20.1 inch LCD flat panel ViewSonic VP201b (true native resolution 1600 x 1200, anti-glare surface, contrast ratio 800:1, response time 16 ms, viewing distance approximately 1 m). A Tobii X120 eye tracker (sampling rate 120 Hz, accuracy 0.5 degree) was employed in the study and the stimuli were presented using the Tobii Studio analysis software v. 1.5.2.

We use one training file and one test file per person and do cross-matching of all file pairs, which leads to 17 genuine access trials and 272 impostor access trials. We measure the accuracy by the equal error rate (EER), which corresponds to the operating point at which the false acceptance rate (accepting an impostor) and the false rejection rate (rejecting a legitimate user) are equal. The UBM training data, the person-dependent training data and the test data are all disjoint. The UBM is trained by a deterministic splitting method followed by 7 k-means iterations and 2 EM iterations. In creating the user-dependent adapted Gaussian mixture models, we adapt both the mean and variance vectors with relevance factor r = 16.

Table 1: Effect of training and test data durations.

  Training data   Test data   Equal error rate (EER %)
  10 sec          10 sec      47.1
  1 min           10 sec      47.1
  3 min           10 sec      35.3
  6 min           10 sec      29.8
  9 min           10 sec      28.7
  1 min           1 min       41.9
  2 min           2 min       41.2
  4 min           4 min       29.4
  5 min           3 min       29.4
  6 min           2 min       29.4
  7 min           1 min       29.8

Figure 4: Effect of model order. (Equal error rate (EER %) as a function of the number of Gaussian components M, from 5 to 30; window length L = 900 samples (7.5 sec), window shift S = 450 samples, histogram size K = 27 bins.)

4.2 Results

We first optimize the feature extractor and classifier parameters. The UBM is trained from 140 minutes of data, whereas the training and test segments have approximate durations of five minutes and one minute, respectively. We fix the number of Gaussians to M = 12 and vary the length of the data window (L) and the number of histogram bins (K). The error rates presented in Fig. 3 suggest that increasing the window size improves accuracy and that the number of bins does not have to be high (20 ≤ K ≤ 30). We fix K = 27 and L = 900 and further fine-tune the number of Gaussians in Fig. 4. The accuracy improves with an increasing number of Gaussians and reaches its optimum at approximately 6 ≤ M ≤ 18. In the following we set M = 16.

Finally, we display the error rates for varying lengths of training and test data in Table 1. Here we reduced the amount of UBM training data to 70 minutes so as to allow using longer training and test segments for the target persons. Increasing the amount of data improves accuracy, saturating at around 30 % EER. Although this accuracy is not sufficient for a realistic application, it is clearly below the chance level (50 % EER), suggesting that there is individual information in the eye movements which can be modeled. The error rates are clearly higher than in [Kasprowski 2004; Bednarik et al. 2005]. However, those studies used fixed-dimensional templates that were carefully aligned, whereas our system does not use any explicit temporal alignment.

5 Conclusions

We approached the problem of task-independent eye-movement biometric authentication from the bottom up, without any prior signal model of how the resulting eye-movement signal and features are generated by the oculomotor plant. Instead, our intuition was that the way the ocular muscles operate the eye is individual and partly independent of the underlying task. Our results indicate that this conception is feasible. The error rates are too high to be useful in a realistic security system, but they are significantly lower than the chance level (50 %), which warrants further studies using a larger number of participants and a wider range of tasks and features. Several improvements could be made on the data processing side as well: using complementary features from the pupil diameter, applying feature and score normalization, and discriminative training [Kinnunen and Li 2010].

References

BEDNARIK, R., KINNUNEN, T., MIHAILA, A., AND FRÄNTI, P. 2005. Eye-movements as a biometric. In Proc. 14th Scandinavian Conference on Image Analysis (SCIA 2005), 780–789.

BISHOP, C. 2006. Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC, New York.

BULACU, M., AND SCHOMAKER, L. 2007. Text-independent writer identification and verification using textual and allographic features. IEEE Trans. on Pattern Analysis and Machine Intelligence 29, 4 (April), 701–717.

COGAIN. Open source gaze tracking, freeware and low cost eye tracking. WWW page, http://www.cogain.org/eyetrackers/low-cost-eye-trackers.

KASPROWSKI, P. 2004. Human Identification Using Eye Movements. PhD thesis, Silesian University of Technology, Institute of Computer Science, Gliwice, Poland. http://www.kasprowski.pl/phd/PhD_Kasprowski.pdf.

KINNUNEN, T., AND LI, H. 2010. An overview of text-independent speaker recognition: from features to supervectors. Speech Communication 52, 1 (January), 12–40.

KUMAR, M., GARFINKEL, T., BONEH, D., AND WINOGRAD, T. 2007. Reducing shoulder-surfing by using gaze-based password entry. In Proc. of the 3rd Symposium on Usable Privacy and Security, ACM, 13–19.

LUCA, A. D., WEISS, R., AND DREWES, H. 2007. Evaluation of eye-gaze interaction methods for security enhanced PIN-entry. In Proc. of the 19th Australasian Conference on Computer-Human Interaction: Entertaining User Interfaces, ACM, 199–202.

PASSFACES. Passfaces: Two factor authentication for the enterprise. WWW page, http://www.passfaces.com/.

REYNOLDS, D., QUATIERI, T., AND DUNN, R. 2000. Speaker verification using adapted Gaussian mixture models. Digital Signal Processing 10, 1 (January), 19–41.

SALVUCCI, D., AND GOLDBERG, J. 2000. Identifying fixations and saccades in eye-tracking protocols. In ETRA '00: Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, ACM, New York, NY, USA, 71–78.

Ā 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Ā 
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Ā 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Ā 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Ā 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze Alone
Ā 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
Ā 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem Solving
Ā 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Ā 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Ā 
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Ā 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Ā 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Ā 
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Ā 
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Ā 

Kinnunen Towards Task Independent Person Authentication Using Eye Movement Signals

and selection;

Keywords: Biometrics, eye tracking, task independence

1 Introduction

∗e-mail: {tkinnu,fsedlak,bednarik}@cs.joensuu.fi

study eye movement features to recognize persons, independently of the task. Thus, eye movements could provide complementary information for iris, retina, or face recognition. In addition, an eye tracker also enables liveness detection, that is, validating whether the biometric template originates from a real, living human being.

Another advantage of biometric eye movement authentication would be that it enables unnoticeable continuous authentication of a person: the identity of the user can be re-authenticated continuously without any interaction by the user. From the technological viewpoint and accessibility, the accuracy of eye trackers is continuously improving; at the same time, eye trackers have already been integrated with some low-cost webcams, see e.g. [Cogain]. It can be hypothesized that future notebooks and mobile devices will have integrated eye trackers as standard equipment.
1.1 Related Work

The idea of using eye movements for person authentication is not new. The currently adopted approaches consider eye gaze as an alternative way to input user-specific PIN numbers or passwords; see [Kumar et al. 2007] for a recent review of such methods. In these systems, the user may input the pass-phrase by, for instance, gaze-typing the password [Kumar et al. 2007], using graphical passwords such as human faces [Passfaces], or using gaze gestures similar to mouse gestures in web browsers [Luca et al. 2007]. In such systems, it is important to design the user interface (presenting the keypad or visual tokens; providing user feedback) to find an acceptable trade-off between security and usability: a too complicated cognitive task easily becomes distracting. Eye gaze input is useful in security applications where shoulder surfing is expected (e.g. someone watching over one's shoulder while a PIN code is typed at an ATM). However, it is still based on the "what-the-user-knows" type of authentication; it might also be called cognometric authentication [Passfaces].

In this paper, we focus on biometric authentication, that is, who the person is rather than what he/she knows. Biometric authentication could be combined with the cognometric techniques, or used as the only authentication method in low-security scenarios (e.g. customized user profiles in computers, adaptive user interfaces). We are aware of two prior studies that have utilized physical features of the eye movements to recognize persons [Kasprowski 2004; Bednarik et al. 2005]. In [Kasprowski 2004], the author used a custom-made head-mounted infrared oculography eye tracker ("OBER 2") with a sampling rate of 250 Hz to collect data from N = 47 subjects. The subjects followed a jumping stimulation point presented on a normal computer screen at twelve successive point positions. Each subject had multiple recording sessions; one task consisted of less than 10 seconds of data and led to a fixed-dimensional (2048 data points) pattern. The stimulation reference signal, together with robust fixation detection, was used for normalizing and aligning the gaze coordinate data. From the normalized signals, several features were derived: average velocity direction, distance to stimulation, eye difference, discrete Fourier transform (DFT) and discrete wavelet transform (DWT). Five well-known pattern matching techniques, as well as their ensemble classifier, were then used for verifying the identity of a person. An average recognition error of the classifier ensemble around 7% to 8% was reported.

In an independent study [Bednarik et al. 2005], the authors used a commercially available eye tracker (Tobii ET-1750), built into a TFT panel, to collect data from N = 12 subjects at a sampling rate of 50 Hz. The stimulus consisted of a single cross shown in the middle of the screen and displayed for 1 second. The eye-movement features consisted of the pupil diameter and its time derivative, gaze velocity, and the time-varying distance between the eyes. Discrete Fourier transform (DFT), principal component analysis (PCA) and their combination were then applied to derive low-dimensional features, followed by k-nearest-neighbor template matching. The time derivative of the pupil size was found to be the best dynamic feature, yielding an identification rate of 50% to 60% (the best feature, the distance between the eyes, was not considered interesting since it can be obtained without an eye tracker).

1.2 What is New in This Study: Task Independence

Both of the studies [Kasprowski 2004; Bednarik et al. 2005] are inherently task-dependent: they assume that the same stimulus appears in training and testing. This approach has the advantage that the reference template (training sample) and the test sample can be accurately aligned. However, the accuracy comes with a price paid in convenience: in a security scenario, the user is forced to perform a certain task, which becomes both learned and easily distracting. Design of the authentication stimulus should be done with great care to avoid a learning effect; if the user learns the task, such as the order of the moving object, his behavior changes, which may lead to increased recognition errors. From the security point of view, a repetitious task can easily be copied and an intrusion system built to imitate the expected input.

Here we take a step towards eye movement biometrics where we have minimal or absolutely no prior knowledge of the underlying task. We call such a problem task-independent eye-movement biometrics. We adopt this terminology from other behavioral biometric tasks, such as text-independent speaker [Kinnunen and Li 2010] and writer [Bulacu and Schomaker 2007] identification.

2 Data Preprocessing and Feature Extraction

The eye coordinates are denoted here by x_left(t), y_left(t), x_right(t), y_right(t) for each timestamp t. Some trackers, including the one used in this study, also provide information about the pupil diameter of both eyes; we focus on the gaze coordinate data only. The eye movement signal is inherently noisy and includes missing data due to, for instance, blinks, involuntary head movements, or corneal moisture (tear film) and irregularities. Typically, the eye-tracking device reports the missing data in a validity variable. We assume continuity of the data within the missing parts and linearly interpolate the data in-between the valid segments. Interpolation is applied independently to all four coordinate time series.
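As an illustration of this preprocessing step, the following minimal NumPy sketch (our own code, not the authors'; it assumes the tracker's validity variable has been converted to a boolean mask) fills the invalid samples of one coordinate series:

    import numpy as np

    def interpolate_gaps(coords, valid):
        """Linearly interpolate samples flagged as invalid by the tracker.

        coords: one gaze coordinate series, e.g. x_left(t), as a 1-D array.
        valid:  boolean mask derived from the tracker's validity variable.
        """
        t = np.arange(len(coords))
        out = np.asarray(coords, dtype=float).copy()
        # np.interp holds the boundary values constant outside the valid range,
        # matching the continuity assumption within the missing parts.
        out[~valid] = np.interp(t[~valid], t[valid], out[valid])
        return out

The same call would be applied independently to each of the four coordinate series.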
It is a known fact that the eye is never still and constantly moves around the fixation center. While most eye trackers cannot measure the microscopic movements of the eye (the tremor, drift, and microsaccades) due to limits of accuracy and temporal resolution, we concentrate on movements at a coarser level of detail that can be effectively measured. Even during a single fixation, the eye constantly samples the region of interest. We hypothesize that the way the eye moves during fixations and smooth pursuit has bearing on some individual property of the oculomotor plant.

We describe the movement as a histogram of all angles the eye travels through during a certain period. A similar technique was used for task-dependent recognition in [Kasprowski 2004]. Figure 2 summarizes the idea. We consider a short-term data window (L samples) that spans a few seconds. The local velocity direction of the gaze (from time step t to time step t + 1) is computed using trigonometric identities and transformed into a normalized histogram, or discrete probability mass function, z = (p(\theta_1), p(\theta_2), \ldots, p(\theta_K)), where p(\theta_k) > 0 for all k and \sum_k p(\theta_k) = 1. The K histogram bin midpoints are pre-located at angles \theta_k = k(2\pi/K), where k = 0, 1, \ldots, K - 1. The data window is then shifted forward by S samples (here we choose S = L/2). The method produces a time sequence \{z_t\}, t = 1, 2, \ldots, T, of K-dimensional feature vectors, which are considered independent from each other and used in statistical modeling as explained in the next section.

Note that in the example of Fig. 2 most of the fixations have a "north-east" or "south-west" tendency, and this shows up in the histogram as an "eigendirection" of the velocity. It is worth emphasizing that we deliberately avoid the use of any fixation detection, which is error prone, and instead use all the data within a given window. This data is likely to include several fixations and saccades. Since the amount of time spent fixating is typically more than 90%, the relative contribution of the saccades to the histogram is smaller.
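The feature extractor itself can be sketched as follows (again our own illustrative code; binning by K equally spaced edges is a simplification of the pre-located bin midpoints \theta_k described above):

    import numpy as np

    def direction_histograms(x, y, L=900, K=27):
        """Sequence of short-term velocity-direction histograms.

        x, y: interpolated gaze coordinate series.
        L:    window length in samples; the window shift is S = L/2.
        K:    number of angle bins; each window yields a PMF over K directions.
        """
        S = L // 2
        # Local velocity direction from time step t to t+1, mapped to [0, 2*pi).
        angles = np.mod(np.arctan2(np.diff(y), np.diff(x)), 2.0 * np.pi)
        edges = np.linspace(0.0, 2.0 * np.pi, K + 1)
        feats = []
        for start in range(0, len(angles) - L + 1, S):
            hist, _ = np.histogram(angles[start:start + L], bins=edges)
            feats.append(hist / float(L))  # normalize counts into p(theta_k)
        return np.array(feats)             # T x K sequence of feature vectors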
Figure 2: Illustration of velocity direction features. (Panels: eye gaze data over the entire task; short-term gaze within a temporal window of 1.6 seconds; the short-term x and y coordinate data; and a polar histogram of velocity directions, where the magnitude of each arc is proportional to the probability of that velocity direction within the window.)

Figure 3: Effect of feature parameters on accuracy: equal error rate (EER %) as a function of the window length (L) and the number of histogram bins (K).

3 Task-Independent Modeling of Features

We adopt some machinery from modern text-independent speaker recognition [Kinnunen and Li 2010]. In speaker recognition, the speech signal is likewise transformed into a sequence of short-term feature vectors that are considered independent. The matching problem then becomes one of quantifying the similarity of the given "feature vector clouds". To this end, each person is modeled using a Gaussian mixture model (GMM) [Bishop 2006]. An important advance in speaker recognition was normalizing the speaker models with respect to a so-called universal background model (UBM) [Reynolds et al. 2000]. This method (Fig. 1) is now a well-established baseline in that field. The UBM, which is just another Gaussian mixture model, is first trained from a large set of data. The user-dependent GMMs are then derived by adapting the parameters of the UBM to that speaker's training data. The adapted model is an interpolation between the UBM (the prior model) and the observed training data. This maximum a posteriori (MAP) training of the models gives better accuracy than maximum-likelihood (ML) trained models for a limited amount of data. Normalization by the UBM likelihood in the test phase also emphasizes the difference of the given person from the general population of persons.

The GMM-UBM method [Reynolds et al. 2000] can be shortly summarized as follows. Both the UBM and the user-dependent models are described by a mixture of multivariate Gaussians with mean vectors \mu_m, diagonal covariance matrices \Sigma_m, and mixture weights w_m. The density function of the GMM is then

    p(z \mid \Theta) = \sum_{m=1}^{M} w_m \, \mathcal{N}(z \mid \mu_m, \Sigma_m),

where \mathcal{N}(\cdot \mid \cdot) denotes a multivariate Gaussian and \Theta denotes collectively all the parameters. The mixture weights satisfy w_m \geq 0 and \sum_m w_m = 1. The UBM parameters are found via the expectation-maximization (EM) algorithm, and the user-dependent mean vectors are derived as

    \tilde{\mu}_m = \alpha_m \bar{z}_m + (1 - \alpha_m) \mu_m,        (1)

where \bar{z}_m is the posterior-weighted mean of the training sample. The adaptation coefficient is defined as \alpha_m = n_m / (n_m + r), where n_m is the soft count of vectors assigned to the m-th Gaussian and r > 0 is a fixed relevance factor. Variances and weights can also be adapted. For a small amount of data, the adapted vector is an interpolation between the data and the UBM, whereas for a large amount of data the effect of the UBM is reduced. Given a sequence of independent test vectors \{z_1, \ldots, z_T\}, the match score is computed as the difference between the target person's and the UBM's average log-likelihoods:

    \mathrm{score} = \frac{1}{T} \sum_{t=1}^{T} \left[ \log p(z_t \mid \Theta_{\mathrm{target}}) - \log p(z_t \mid \Theta_{\mathrm{UBM}}) \right].   (2)
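To make the modeling recipe concrete, here is a minimal sketch in Python with scikit-learn (our own code and library choice; the paper names no implementation). It MAP-adapts only the means, per Eq. (1), and scores with Eq. (2); the experiments below also adapt the variances, and scikit-learn's default initialization stands in for the deterministic splitting procedure described in Section 4.1:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_ubm(Z_background, M=16, seed=0):
        """EM-trained UBM on pooled background data (diagonal covariances)."""
        ubm = GaussianMixture(n_components=M, covariance_type="diag",
                              random_state=seed)
        return ubm.fit(Z_background)

    def map_adapt_means(ubm, Z_train, r=16.0):
        """MAP adaptation of the UBM means to one user's data (Eq. 1)."""
        post = ubm.predict_proba(Z_train)        # T x M responsibilities
        n = post.sum(axis=0)                     # soft counts n_m
        z_bar = post.T @ Z_train / np.maximum(n, 1e-10)[:, None]
        alpha = (n / (n + r))[:, None]           # alpha_m = n_m / (n_m + r)
        user = GaussianMixture(n_components=ubm.n_components,
                               covariance_type="diag")
        # Reuse the UBM weights and covariances; interpolate only the means.
        user.weights_ = ubm.weights_
        user.covariances_ = ubm.covariances_
        user.precisions_cholesky_ = ubm.precisions_cholesky_
        user.means_ = alpha * z_bar + (1.0 - alpha) * ubm.means_
        return user

    def match_score(user, ubm, Z_test):
        """Average log-likelihood ratio against the UBM (Eq. 2)."""
        return float(np.mean(user.score_samples(Z_test)
                             - ubm.score_samples(Z_test)))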
4 Experiments

4.1 Experimental Setup

For the experiments, we collected a database of N = 17 users. The initial number of participants was higher (23), but we left out recordings with low-quality data, e.g. due to frequent loss of the eye. Participants were naive of the actual purpose of the study. The recordings were conducted in a quiet usability laboratory with constant lighting. After a calibration, the target stimuli consisted of two sub-tasks. First, the display showed an introductory text with instructions related to the content and duration of the task. Participants were also asked to keep their head as stable as possible. The following stimulus was a 25-minute-long video of a section of Absolutely Fabulous, a comedy series produced by the BBC. The video contained the original sound and subtitles in Finnish. The screen resolution of the video was 800 x 600 pixels, and it was shown on a 20.1-inch LCD flat panel (ViewSonic VP201b; true native resolution 1600 x 1200, anti-glare surface, contrast ratio 800:1, response time 16 ms, viewing distance approximately 1 m). A Tobii X120 eye tracker (sampling rate 120 Hz, accuracy 0.5 degree) was employed in the study, and the stimuli were presented using the Tobii Studio analysis software v. 1.5.2.

We use one training file and one test file per person and do cross-matching of all file pairs, which leads to 17 genuine access trials and 272 impostor access trials.
We measure the accuracy by the equal error rate (EER), which corresponds to the operating point at which the false acceptance rate (accepting an impostor) and the false rejection rate (rejecting a legitimate user) are equal. The UBM training data, the person-dependent training data and the test data are all disjoint. The UBM is trained by a deterministic splitting method followed by 7 k-means iterations and 2 EM iterations. In creating the user-dependent adapted Gaussian mixture models, we adapt both the mean and variance vectors with relevance factor r = 16.

Table 1: Effect of training and test data durations.

    Training data   Test data   Equal error rate (EER %)
    10 sec          10 sec      47.1
    1 min           10 sec      47.1
    3 min           10 sec      35.3
    6 min           10 sec      29.8
    9 min           10 sec      28.7
    1 min           1 min       41.9
    2 min           2 min       41.2
    4 min           4 min       29.4
    5 min           3 min       29.4
    6 min           2 min       29.4
    7 min           1 min       29.8

Figure 4: Effect of model order: equal error rate (EER %) as a function of the number of Gaussian components (M), with window length L = 900 samples (7.5 sec), window shift S = 450 samples, and histogram size K = 27 bins.

4.2 Results

We first optimize the feature extractor and classifier parameters. The UBM is trained from 140 minutes of data, whereas the training and test segments have approximate durations of five minutes and one minute, respectively. We fix the number of Gaussians to M = 12 and vary the length of the data window (L) and the number of histogram bins (K). The error rates presented in Fig. 3 suggest that increasing the window size improves accuracy and that the number of bins does not have to be too high (20 \leq K \leq 30). We fix K = 27 and L = 900 and further fine-tune the number of Gaussians in Fig. 4. The accuracy improves when increasing the number of Gaussians and reaches its optimum at approximately 6 \leq M \leq 18. In the following we set M = 16.

Finally, we display the error rates for varying lengths of training and test data in Table 1. Here we reduced the amount of UBM training data to 70 minutes so as to allow longer training and test segments for the target persons. Increasing the amount of data improves accuracy, saturating at around 30% EER. Although the accuracy is not sufficient for a realistic application, it is clearly below the chance level (50% EER), suggesting that there is individual information in the eye movements which can be modeled. The error rates are clearly higher than in [Kasprowski 2004; Bednarik et al. 2005]. However, those studies used fixed-dimensional templates that were carefully aligned, whereas our system does not use any explicit temporal alignment.
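For reference, the EER can be computed from the genuine and impostor score sets by sweeping a decision threshold until the two error rates meet; a minimal sketch (ours, not the authors' evaluation code):

    import numpy as np

    def equal_error_rate(genuine, impostor):
        """EER (%): threshold where false acceptance ~= false rejection.

        genuine:  match scores of legitimate users (17 trials in this setup).
        impostor: match scores of impostors (272 trials in this setup).
        """
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        far = np.array([(impostor >= t).mean() for t in thresholds])
        frr = np.array([(genuine < t).mean() for t in thresholds])
        i = np.argmin(np.abs(far - frr))
        return 100.0 * (far[i] + frr[i]) / 2.0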
5 Conclusions

We approached the problem of task-independent eye-movement biometric authentication from the bottom up, without any prior signal model of how the resulting eye-movement signal and features are generated by the oculomotor plant. Instead, we had an intuitive guess that the way the ocular muscles operate the eye is individual and that it is to some degree independent of the underlying task. Our results indicate that this is a feasible conception. The error rates are too high to be useful in a realistic security system, but significantly lower than the chance level (50%), which warrants further studies using a larger number of participants and a wider range of tasks and features. Several improvements could be made on the data processing side as well: using complementary features from the pupil diameter, and using feature and score normalization and discriminative training [Kinnunen and Li 2010].

References

BEDNARIK, R., KINNUNEN, T., MIHAILA, A., AND FRƄNTI, P. 2005. Eye-movements as a biometric. In Proc. 14th Scandinavian Conference on Image Analysis (SCIA 2005), 780-789.

BISHOP, C. 2006. Pattern Recognition and Machine Learning. Springer Science+Business Media, LLC, New York.

BULACU, M., AND SCHOMAKER, L. 2007. Text-independent writer identification and verification using textual and allographic features. IEEE Trans. on Pattern Analysis and Machine Intelligence 29, 4 (April), 701-717.

COGAIN. Open source gaze tracking, freeware and low cost eye tracking. WWW page, http://www.cogain.org/eyetrackers/low-cost-eye-trackers.

KASPROWSKI, P. 2004. Human Identification Using Eye Movements. PhD thesis, Silesian University of Technology, Institute of Computer Science, Gliwice, Poland. http://www.kasprowski.pl/phd/PhD_Kasprowski.pdf.

KINNUNEN, T., AND LI, H. 2010. An overview of text-independent speaker recognition: from features to supervectors. Speech Communication 52, 1 (January), 12-40.

KUMAR, M., GARFINKEL, T., BONEH, D., AND WINOGRAD, T. 2007. Reducing shoulder-surfing by using gaze-based password entry. In Proc. of the 3rd Symposium on Usable Privacy and Security, ACM, 13-19.

LUCA, A. D., WEISS, R., AND DREWES, H. 2007. Evaluation of eye-gaze interaction methods for security enhanced PIN-entry. In Proc. of the 19th Australasian Conf. on Computer-Human Interaction: Entertaining User Interfaces, ACM, 199-202.

PASSFACES. Passfaces: Two factor authentication for the enterprise. WWW page, http://www.passfaces.com/.

REYNOLDS, D., QUATIERI, T., AND DUNN, R. 2000. Speaker verification using adapted Gaussian mixture models. Digital Signal Processing 10, 1 (January), 19-41.

SALVUCCI, D., AND GOLDBERG, J. 2000. Identifying fixations and saccades in eye-tracking protocols. In ETRA '00: Proceedings of the 2000 Symposium on Eye Tracking Research & Applications, ACM, New York, NY, USA, 71-78.