Learning Relevant Eye Movement Feature Spaces Across Users
Zakria Hussain∗                     Kitsuchart Pasupa†                  John Shawe-Taylor‡
University College London           University of Southampton           University College London


Abstract

In this paper we predict the relevance of images based on a low-dimensional feature space found using several users' eye movements. Each user is given an image-based search task, during which their eye movements are extracted using a Tobii eye tracker. The users also provide us with explicit feedback regarding the relevance of images. We demonstrate that by using a greedy Nyström algorithm on the eye movement features of different users, we can find a suitable low-dimensional feature space for learning. We validate the suitability of this feature space by projecting the eye movement features of a new user into this space, training an online learning algorithm using these features, and showing that the number of mistakes (regret over time) made in predicting relevant images is lower than when using the original eye movement features. We also plot Recall-Precision and ROC curves, and use a sign test to verify the statistical significance of our results.

CR Categories: G.3 [Probability and Statistics]: Nonparametric statistics, Time series analysis; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Relevance feedback

Keywords: Feature selection, Eye movement features, Online learning, Nyström method, Tobii eye tracker

1 Introduction

In this study we devise a novel strategy for generalising the eye movements of users, based on their viewing habits over images. We analyse data from a search task requiring a user to view images on a computer screen whilst their eye movements are recorded. The search task is defined as finding images most related to bicycles, horses or transport, and the user is required to indicate which images on each page are relevant to the task. Using these eye movement features we run an online learning algorithm [Cesa-Bianchi and Lugosi 2006], such as the perceptron, to see how well we can predict the relevant images. However, is it possible to improve the results of the perceptron by taking into account the behaviour of other users' eye movements? We present some preliminary results that answer this question in the affirmative.

The initial goal of this work was to use the Multi-Task Feature Learning (MTFL) algorithm of [Argyriou et al. 2008]. However, in order to use nonlinear kernels, the MTFL algorithm requires a low-dimensional mapping, via Gram-Schmidt, an eigen-decomposition, etc., in order to construct explicit features [Shawe-Taylor and Cristianini 2004]. Yet simply using these low-dimensional mappings, without applying the MTFL algorithm, also generates a common feature space. Instead of the mappings mentioned above, we chose to employ the popular Nyström approximation [Williams and Seeger 2000] using a greedy algorithm called sparse kernel principal components analysis [Hussain and Shawe-Taylor 2009]. Given this low-dimensional representation, we can project all new users into the space and run the perceptron algorithm. We show that learning in this refined feature space allows us to predict relevant images early on in the search task, indicating that our low-dimensional feature space generalises eye movements across several different users.

The paper is organised as follows. Sections 1.1 and 1.2 cover the background techniques. Section 2 describes our novel method in detail, and the experiments and conclusions are given in Sections 3 and 4, respectively.

1.1 Background I: Online learning

We are given a sequence of examples (inputs) x ∈ R^n in n-dimensional space. These examples will be the different eye movement features extracted from the eye tracker. Furthermore, each user indicates the relevance of images (i.e., outputs) y ∈ {−1, 1}, where 1 indicates relevance and −1 indicates irrelevance.¹ We therefore concentrate on the binary classification problem. We are also given T ∈ N users, each of whom produces an m-sample of input-output pairs S_t = {(x_t1, y_t1), . . . , (x_tm, y_tm)}, where t ∈ {1, . . . , T}. Finally, given any m-sample S we would like to construct a classifier h_m : x → {−1, +1} which maps inputs to outputs, where m indicates the number of examples used to construct the classifier h_m.

Given a classifier h_i(·) we can measure the regret incurred so far by counting the number of times our classifier has made an incorrect prediction. More formally, given an (x, y) pair, we say a mistake is made if the prediction and the true label disagree, i.e., h_i(x) ≠ y. Therefore, given inputs x_1, . . . , x_ℓ and the classifiers h_1, . . . , h_ℓ constructed from them, we have the regret:

    regret(ℓ) = Σ_{i=1}^{ℓ} I(h_i(x_i) ≠ y_i)        (1)

where I(a) is equal to 1 if a is true and 0 otherwise. The regret can be thought of as the cumulative number of mistakes made over time.

The protocol used for online learning (see [Cesa-Bianchi and Lugosi 2006] for an introduction) is described in Figure 1. The idea is to minimise the regret over time: we would like to make the least number of mistakes throughout the online learning protocol.

  • Initialise regret(0) = 0
  • Repeat for i = 1, 2, . . .
      1. receive input x_i and construct hypothesis h_i
      2. predict output h_i(x_i)
      3. receive the true output y_i
      4. regret(i) = regret(i − 1) + I(h_i(x_i) ≠ y_i)

              Figure 1: Online learning protocol

∗ Department of Computer Science, z.hussain@cs.ucl.ac.uk
† School of Electronics and Computer Science, kp2@ecs.soton.ac.uk
‡ Department of Computer Science, jst@cs.ucl.ac.uk
¹ In practice all images not clicked as relevant are deemed irrelevant.

Copyright © 2010 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail permissions@acm.org.
ETRA 2010, Austin, TX, March 22 – 24, 2010.
© 2010 ACM 978-1-60558-994-7/10/0003 $10.00
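The online protocol of Figure 1 can be illustrated with a perceptron as the hypothesis h_i. This is a minimal sketch on synthetic toy data, not the eye movement features used in the paper; the function and variable names are ours:

```python
import numpy as np

def perceptron_online(X, y):
    """Run the protocol of Figure 1 with a perceptron hypothesis,
    returning the cumulative regret (mistake count) after each round."""
    w = np.zeros(X.shape[1])               # current hypothesis h_i
    regret = [0]                           # regret(0) = 0
    for x_i, y_i in zip(X, y):
        pred = 1 if w @ x_i >= 0 else -1   # predict h_i(x_i)
        mistake = int(pred != y_i)         # compare with true output y_i
        regret.append(regret[-1] + mistake)
        if mistake:                        # perceptron updates only on mistakes
            w += y_i * x_i
    return np.array(regret[1:])

# toy linearly separable data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.where(X[:, 0] > 0, 1, -1)
r = perceptron_online(X, y)
```

On separable data the curve flattens once the perceptron converges, which is exactly the behaviour the regret plots in Section 3.1 compare across feature spaces.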

We will show in this paper that we can leverage useful information from other tasks to construct a low-dimensional feature space, leading to our online learning algorithm having smaller regret than when applied to individual samples. We will use the perceptron algorithm [Cesa-Bianchi and Lugosi 2006] in our experiments.

1.2 Background II: Low-dimensional mappings

The Gram-Schmidt orthogonalisation procedure finds an orthonormal basis for a matrix and is commonly used in machine learning when an explicit feature representation of inputs is required from a nonlinear kernel [Shawe-Taylor and Cristianini 2004]. However, Gram-Schmidt does not take into account the residual loss between the low-dimensional mapping and the original feature space, and may not be able to find a rich enough feature space.

A related method, known as the Nyström method, has been proposed and has recently received considerable attention in the machine learning community [Smola and Schölkopf 2000; Drineas and Mahoney 2005; Kumar et al. 2009]. As shown by [Hussain and Shawe-Taylor 2009], the greedy algorithm of [Smola and Schölkopf 2000] finds a low-dimensional space that minimises the residual error between the low-dimensional feature space and the original feature space. This corresponds to finding the directions of maximum variance in the data, and the method can hence be viewed as a sparse kernel principal components analysis. Due to these advantages, we will use the Nyström method in our experiments.

We will assume throughout that our examples x have already been mapped into a higher-dimensional feature space F using φ : R^n → F. Therefore φ(x) = x, and we will make no explicit reference to φ(x). If our input matrix X contains features as column vectors then we can construct a kernel matrix K = X'X ∈ R^{m×m} using inner products, where ' denotes the transpose. Furthermore, we define each kernel function as κ(x, x̃) = x'x̃.

Let K_i = [κ(x_1, x_i), . . . , κ(x_m, x_i)] denote the ith column vector of the kernel matrix. Let i = (1, . . . , k), k ≤ m, be an index vector and let K_ii indicate the square matrix constructed using the indices i. The Nyström approximation K̃, which approximates the kernel matrix K, can be defined like so:

    K̃ = K_i K_ii^{−1} K_i'.

Furthermore, to project into the space defined by the index vector i we follow the regime outlined by [Diethe et al. 2009]: taking the Cholesky decomposition of K_ii^{−1} to obtain an upper triangular matrix R = chol(K_ii^{−1}), where R'R = K_ii^{−1}, we can make a projection into this low-dimensional space. Therefore, given any input x_j we can create the corresponding new input z_j using the following projection:

    z_j = K_ji R'.        (2)

We have now described all of the components needed to present our algorithm. Note, however, that the projection of Equation (2) is usually only applied to a single sample S, whereas in our work we have T different samples S_1, . . . , S_T. We follow a methodology similar to that described by [Argyriou et al. 2008] (who use Gram-Schmidt), using the Nyström method instead.

2 Our method

Given T − 1 input-output samples S_1, . . . , S_{T−1} we have X_t = (x_t1, . . . , x_tm) where t = 1, . . . , T − 1. We concatenate these feature vectors into a larger matrix X̂ = (X_1, . . . , X_{T−1}) and compute the corresponding kernel matrix K̂ = X̂'X̂. This kernel matrix contains information from all T − 1 samples, and so we apply the sparse kernel principal components analysis algorithm [Hussain and Shawe-Taylor 2009] to find the index set î (where |î| = k < m) needed to compute the projection of Equation (2). This equates to finding a space with maximum variance between all of the different user behaviours. Given a new user's eye movement feature vectors X_T we can project them into this joint low-dimensional mapping and run any online learning algorithm in the resulting space:

    X_T' X̂_î R̂'

where R̂ = chol(K̂_îî^{−1}). We choose an online learning algorithm due to the speed, simplicity and practicality of the approach. For instance, when presenting users with images in a Content-Based Image Retrieval (CBIR) system [Datta et al. 2008] we would like to retrieve relevant images as quickly as possible. Although we do not explicitly model a CBIR system, we maintain that a future development applying our techniques within CBIR systems could be useful for retrieving relevant images quickly. In the current paper we are more interested in a feasibility study of whether any important information can be gained from the eye movements of other users.

For practical purposes we randomly choose a subset of inputs from each user in order to construct X̂ – in practice this should not affect the quality of the Nyström approximation [Smola and Schölkopf 2000]. However, for completeness we describe the algorithm without this randomness. The feature projection procedure is described in Algorithm 1.

Algorithm 1: Nyström feature projection for multiple users
  Input: training inputs X_1, . . . , X_{T−1}, new inputs X_T and k ≤ m ∈ N
  Output: new features z_T1, . . . , z_Tm
  1: concatenate all feature vectors to get X̂ = (X_1, . . . , X_{T−1})
  2: create a kernel matrix K̂ = X̂'X̂
  3: run the sparse KPCA algorithm of [Hussain and Shawe-Taylor 2009] to find the index set î such that k = |î|
  4: compute R̂ = chol(K̂_îî^{−1})
  5: for j = 1, . . . , m do
  6:     construct new feature vectors using the inputs in X_T:
             z_Tj = x_Tj' X̂_î R̂'        (3)
  7: end for

Furthermore, using these feature vectors we also train the Multi-Task Feature Learning algorithm of [Argyriou et al. 2008] on the T − 1 samples, viewing each sample as a task. The algorithm produces a matrix D ∈ R^{δ×δ} where δ < m; taking the Cholesky decomposition R_D = chol(D), we obtain a low-dimensional feature mapping z̃_Tj = z_Tj R_D. Unlike our Algorithm 1, this low-dimensional mapping also utilises information about the outputs.

3 Experimental Setup

The database used for images was the PASCAL VOC 2007 challenge database [Everingham et al. ], which contains over 9000 images, each of which contains at least 1 object from a possible list
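The Nyström approximation and the projection of Equation (2) can be written in a few lines of NumPy. This is a sketch under the paper's linear-kernel assumption K = X'X; the function and variable names are ours, and the data is a random toy matrix:

```python
import numpy as np

def nystroem_projection(X, idx):
    """Nystroem approximation of K = X'X using the columns in `idx`,
    plus the projection z_j = K_ji R' of Equation (2)."""
    K = X.T @ X                        # kernel matrix, m x m
    K_i = K[:, idx]                    # m x k columns indexed by i
    K_ii = K[np.ix_(idx, idx)]         # k x k submatrix
    # upper-triangular R with R'R = inv(K_ii)
    R = np.linalg.cholesky(np.linalg.inv(K_ii)).T
    Z = K_i @ R.T                      # row j is the new input z_j
    K_tilde = K_i @ np.linalg.inv(K_ii) @ K_i.T
    return Z, K_tilde

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 20))           # 20 examples as column vectors
Z, K_tilde = nystroem_projection(X, idx=[0, 3, 7])
```

The check that makes the construction concrete: inner products of the projected inputs reproduce the approximation, Z Z' = K_i (K_ii)^{-1} K_i' = K̃, so learning with the k-dimensional z_j is equivalent to learning with the Nyström kernel.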


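Algorithm 1 can be sketched end to end as follows. Since the sparse KPCA of [Hussain and Shawe-Taylor 2009] is not reproduced here, step 3 substitutes a simple greedy pivot selection on the residual kernel diagonal, which is in the same maximum-variance spirit but is our own stand-in, not the paper's algorithm:

```python
import numpy as np

def greedy_index_set(K, k):
    """Greedily pick k indices by largest residual kernel diagonal,
    deflating the kernel after each pick (stand-in for sparse KPCA)."""
    residual = K.astype(float).copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.diag(residual)))
        idx.append(j)
        col = residual[:, [j]]
        # Schur-complement deflation: remove the variance explained by j
        residual = residual - col @ col.T / residual[j, j]
    return idx

def multi_user_projection(X_train_list, X_new, k):
    """Algorithm 1: concatenate the T-1 users' inputs, select an index
    set, and project the new user's inputs into the joint space (Eq. 3)."""
    X_hat = np.hstack(X_train_list)                  # step 1
    K_hat = X_hat.T @ X_hat                          # step 2
    idx = greedy_index_set(K_hat, k)                 # step 3 (substituted)
    K_ii = K_hat[np.ix_(idx, idx)]
    R = np.linalg.cholesky(np.linalg.inv(K_ii)).T    # step 4
    return (X_new.T @ X_hat[:, idx]) @ R.T           # steps 5-7: rows z_Tj

rng = np.random.default_rng(2)
users = [rng.normal(size=(10, 30)) for _ in range(4)]  # T-1 = 4 users
X_T = rng.normal(size=(10, 30))                        # the new user
Z_T = multi_user_projection(users, X_T, k=6)
```

The rows of Z_T are the k-dimensional inputs fed to the perceptron in the leave-one-user-out experiments of Section 3.1.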
of 20 objects such as cars, trains, cows, people, etc. There were T = 23 users in the experiments, and participants were asked to view twenty pages, each of which contained fifteen images. Their eye movements were recorded by a Tobii X120 eye tracker [Tobii Studio Ltd ] connected to a PC with a 19-inch monitor (resolution 1280x1024). The eye tracker has approximately 0.5 degrees of accuracy with a sample rate of 120 Hz, and uses infrared lenses to detect pupil centres and corneal reflections. Figure 2 shows an example image presented to a user along with the eye movements (in red).

Figure 2: An example page shown to a participant. Eye movements of the participant are shown in red. The red circles mark fixations and small red dots correspond to raw measurements that belong to those fixations. The black dots mark raw measurements that were not included in any of the fixations.

Each participant was asked to carry out one of three possible search tasks: looking for a Bicycle (8 users), a Horse (7 users) or some form of Transport (8 users). Any images within a search page that a user thought matched their search task they marked as relevant.

The eye movement features extracted from the Tobii eye tracker can be found in Table 1 of [Pasupa et al. 2009] and were collected by [Saunders and Klami 2008]. Most of these are motivated by features considered in the literature for text retrieval studies; in addition, however, they also include more image-based eye movement features. These are computed for each full image, based on the eye trajectory and the locations of the images on the page. They cover the three main types of information typically considered in reading studies: fixations, regressions, and refixations. The number of features extracted was 33, computed from: (i) raw measurements from the eye tracker; (ii) fixations estimated from the raw measurements using the standard ClearView fixation filter provided with the Tobii eye tracking software, with settings of "radius 50 pixels, minimum duration 100 ms". In the next subsection, these 33 features correspond to the "Original features". Finally, in order to evaluate our methods with the perceptron we use the Regret (see Equation (1)), the Recall-Precision (RP) curve and the Receiver Operating Characteristic (ROC) curve.

3.1 Results

Given 23 users, we used the features of 22 users (i.e., leave-one-user-out) to compute our low-dimensional mapping using the Nyström method outlined in Algorithm 1 – referred to as the "Nyström features".² We also made a mapping using the Multi-Task Feature Learning algorithm of [Argyriou et al. 2008], as described at the end of Section 2, which we refer to as the "Multi-Task features". The baseline used the Original features extracted using the Tobii eye tracker described above; the Original features do not involve any low-dimensional feature space and were simply used with the perceptron, without using information from other users' eye movements.

Figure 3 plots the regret, RP and ROC curves when training the perceptron algorithm with the Original features, the Nyström features and the Multi-Task features. We average each measure over the 23 leave-one-out users, and also over ten random splits of the k feature vectors needed to construct X̂. There is a small difference between the Multi-Task features and the Original features, in favour of the Multi-Task features. However, the Nyström features yield much superior results with respect to both of these techniques. The regret of the perceptron using the Nyström features, shown in Figure 3(a), is always smaller than with the Original and Multi-Task features (smaller regret is better). It is not surprising that we beat the Original features, but our improvement over the Multi-Task features is more surprising, as the latter also utilises the relevance feedback in order to construct a suitable low-dimensional feature space, whereas the Nyström method only uses the eye movement features. Figure 3(b) plots the Recall-Precision curves and shows dramatic improvements over the Original and Multi-Task features. This plot suggests that using the Nyström features allows us to retrieve more relevant images at the start of a search, which would be important for a CBIR system (for instance). Finally, the ROC curves pictured in Figure 3(c) also suggest improved performance when using the Nyström method to construct low-dimensional features across users.

  Algorithm              Average Regret    AAP       AUROC
  Original features          0.2886       0.5687     0.6983
  Nyström features           0.2433       0.6540     0.7662
  Multi-Task features        0.2752       0.5802     0.7107

Table 1: Summary statistics for the curves in Figure 3. Bold font indicates statistical significance at the level p < 0.01 over the baseline.

In Table 1 we state summary statistics related to the curves plotted in Figure 3. As mentioned earlier, we randomly select k features from each data matrix X_t in order to construct X̂, and repeat this for ten random seeds – the results in Table 1 average over these ten splits. Therefore, for Figures 3(a), 3(b) and 3(c) we report the Average Regret, the Average Average Precision (AAP), and the Area Under the ROC curve (AUROC), respectively. Using the Nyström features is clearly better than using the Original and Multi-Task features, with a significance level of p < 0.01. Using the Multi-Task features is slightly better than using the Original features, but those results are only significant at the level p < 0.05. The significance of the direction of differences between the two measures was tested using the sign test [Siegel and Castellan 1988].

4 Conclusions

We took eye gaze data of several users viewing images for a search task, and constructed a novel method for producing a good low-dimensional space for eye movement features relevant to the search task. We employed the Nyström method and showed it to produce better results than the extracted eye movement features [Pasupa et al. 2009].

² For the plots we use "Nystroem".
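The sign test behind the significance levels in Table 1 can be illustrated with the standard library alone. This is a hedged sketch: the win/loss counts below are made up for illustration and are not the paper's data:

```python
from math import comb

def sign_test_p(wins, losses):
    """One-sided sign test: probability of observing at least `wins`
    successes out of n = wins + losses under a fair-coin null."""
    n = wins + losses
    return sum(comb(n, i) for i in range(wins, n + 1)) / 2 ** n

# e.g. if one feature space beats another for 20 of 23 leave-one-out
# users, the one-sided p-value is well below 0.01:
p = sign_test_p(20, 3)
```

Ties (users where the two methods perform identically) are conventionally dropped before counting, which is why n need not equal the number of users.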



[Figure 3: (a) cumulative regret, (b) Recall-Precision curves, and (c) ROC curves (true positive rate) for the perceptron trained with the Original, Nystroem and Multi-Task features.]
                      0                                                                                                                                              0
                       0    50   100     150     200      250      300                 0      0.2     0.4            0.6         0.8            1                     0   0.2       0.4        0.6          0.8            1
                                       Samples                                                              Recall                                                                  False Positive

                                 (a) The Regret                                            (b) The Recall-Precision curve                                                       (c) The ROC curve


                           Figure 3: Averaged over 23 users: results of the perceptron algorithm using Original, Nystr¨ m and Multi-Task features.
                                                                                                                      o


supa et al. 2009] and a state-of-the-art Multi-Task Feature learning algorithm. The results of this paper indicate that if we would like to predict the relevance of images using the eye movements of a user, then it is possible to improve these results by learning relevant information from the eye movement behaviour of other users viewing similar (not necessarily the same) images and given similar (not necessarily the same) search tasks.

The first obvious application of our work is to CBIR systems. By recording the eye movements of users we could apply Algorithm 1 to project gaze features into a common feature space, and then apply a CBIR system in this space. Based on the results reported in this paper, we would hope that relevant images would be retrieved early in the search. Another application to CBIR would be to use the method we presented as a form of verification of relevance. Some users may give incorrect relevance feedback due to tiredness, lack of interest, etc., but based on our method we could correct such actions as and when users make judgements on relevance. As the perceptron improves its classification over time, and we would expect users to make mistakes near the end of a search, such corrective action would be useful to CBIR systems requiring accurate relevance feedback mechanisms to improve search.

The machine learning view is that we are projecting the feature vectors into a common Nyström space (constructed from feature vectors of different users), and then running the perceptron with these features. Our experiments suggest better performance than simply using the perceptron together with the original features. Therefore, we would like to construct regret bounds for the perceptron when using this common low-dimensional space. Finally, for practical purposes we believe that randomly selecting columns from the kernel matrix to find î would generate a feature space [Kumar et al. 2009] with little deterioration in test accuracy, but leading to computational speed-ups.

Acknowledgements

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007–2013) under grant agreement n° 216529, Personal Information Navigator Adapting Through Viewing, PinView.

References

ARGYRIOU, A., EVGENIOU, T., AND PONTIL, M. 2008. Convex multi-task feature learning. Machine Learning 73, 3, 243–272.

CESA-BIANCHI, N., AND LUGOSI, G. 2006. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.

DATTA, R., JOSHI, D., LI, J., AND WANG, J. Z. 2008. Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys 40, 5:1–5:60.

DIETHE, T., HUSSAIN, Z., HARDOON, D. R., AND SHAWE-TAYLOR, J. 2009. Matching pursuit kernel fisher discriminant analysis. In Proceedings of 12th International Conference on Artificial Intelligence and Statistics, AISTATS, vol. 5 of JMLR: W&CP 5.

DRINEAS, P., AND MAHONEY, M. W. 2005. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research 6, 2153–2175.

EVERINGHAM, M., VAN GOOL, L., WILLIAMS, C. K. I., WINN, J., AND ZISSERMAN, A. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.

HUSSAIN, Z., AND SHAWE-TAYLOR, J. 2009. Theory of matching pursuit. In Advances in Neural Information Processing Systems 21, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, Eds. 721–728.

KUMAR, S., MOHRI, M., AND TALWALKAR, A. 2009. Sampling techniques for the Nyström method. In Proceedings of 12th International Conference on Artificial Intelligence and Statistics, AISTATS, vol. 5 of JMLR: W&CP 5.

PASUPA, K., SAUNDERS, C., SZEDMAK, S., KLAMI, A., KASKI, S., AND GUNN, S. 2009. Learning to rank images from eye movements. In HCI '09: Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV'09) Workshops on Human-Computer Interaction, 2009–2016.

SAUNDERS, C., AND KLAMI, A. 2008. Database of eye-movement recordings. Tech. Rep. Project Deliverable Report D8.3, PinView FP7-216529. Available online at http://www.pinview.eu/deliverables.php.

SHAWE-TAYLOR, J., AND CRISTIANINI, N. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, U.K.

SIEGEL, S., AND CASTELLAN, N. J. 1988. Nonparametric statistics for the behavioral sciences. McGraw-Hill, Singapore.


SMOLA, A., AND SCHÖLKOPF, B. 2000. Sparse greedy matrix approximation for machine learning. In Proceedings of the 17th International Conference on Machine Learning, 911–918.

TOBII STUDIO LTD. Tobii Studio Help. http://studiohelp.tobii.com/StudioHelp 1.2/.

WILLIAMS, C. K. I., AND SEEGER, M. 2000. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, MIT Press, vol. 13, 682–688.
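As a closing illustration of the random column sampling suggested in the conclusions, the sketch below builds Nyström features from uniformly sampled kernel columns in the style of Williams and Seeger [2000] and Kumar et al. [2009]. It is a minimal stand-in under stated assumptions: the kernel, sample size, and index names are synthetic, and the paper's own method selects columns greedily via sparse kernel PCA rather than at random.

```python
import numpy as np

def nystroem_features(K, idx):
    """Map all points into the low-dimensional Nystroem space defined by
    the kernel columns in `idx`. The returned F satisfies
    F @ F.T == C @ pinv(W) @ C.T, the Nystroem approximation of K."""
    C = K[:, idx]                    # sampled columns of the kernel matrix
    W = K[np.ix_(idx, idx)]         # intersection block W = K[idx, idx]
    s, U = np.linalg.eigh(W)        # eigendecomposition of the small block
    keep = s > 1e-10                # drop numerically zero eigenvalues
    return C @ U[:, keep] / np.sqrt(s[keep])

# Toy rank-5 linear kernel on 100 points; 10 random columns suffice here
rng = np.random.default_rng(1)
Z = rng.normal(size=(100, 5))
K = Z @ Z.T
idx = rng.choice(100, size=10, replace=False)  # uniform column sampling
F = nystroem_features(K, idx)
err = np.linalg.norm(K - F @ F.T) / np.linalg.norm(K)
```

Because the toy kernel has rank 5 and 10 random columns almost surely span its range, the reconstruction error `err` is at machine precision; on a full-rank gaze kernel the sampled subspace would only approximate K, trading accuracy for the computational speed-ups noted above.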




Kalle
 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Kalle
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Kalle
 
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Kalle
 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Kalle
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Kalle
 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze Alone
Kalle
 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
Kalle
 
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksRyan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Kalle
 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem Solving
Kalle
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Kalle
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Kalle
 
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Kalle
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Kalle
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Kalle
 

More from Kalle (20)

Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsBlignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
 
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
 
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
 
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
 
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
 
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlUrbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
 
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
Takemura Estimating 3 D Point Of Regard And Visualizing Gaze Trajectories Und...
 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze Alone
 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
 
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksRyan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem Solving
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement Study
 
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
 

Hussain Learning Relevant Eye Movement Feature Spaces Across Users

Learning Relevant Eye Movement Feature Spaces Across Users

Zakria Hussain∗ (University College London), Kitsuchart Pasupa† (University of Southampton), John Shawe-Taylor‡ (University College London)

∗ Department of Computer Science, z.hussain@cs.ucl.ac.uk
† School of Electronics and Computer Science, kp2@ecs.soton.ac.uk
‡ Department of Computer Science, jst@cs.ucl.ac.uk

Abstract

In this paper we predict the relevance of images based on a low-dimensional feature space found using several users' eye movements. Each user is given an image-based search task, during which their eye movements are extracted using a Tobii eye tracker. The users also provide us with explicit feedback regarding the relevance of images. We demonstrate that by using a greedy Nyström algorithm on the eye movement features of different users, we can find a suitable low-dimensional feature space for learning. We validate the suitability of this feature space by projecting the eye movement features of a new user into this space, training an online learning algorithm using these features, and showing that the number of mistakes (regret over time) made in predicting relevant images is lower than when using the original eye movement features. We also plot Recall-Precision and ROC curves, and use a sign test to verify the statistical significance of our results.

CR Categories: G.3 [Probability and Statistics]: Nonparametric statistics, Time series analysis; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Relevance feedback

Keywords: Feature selection, Eye movement features, Online learning, Nyström method, Tobii eye tracker

1 Introduction

In this study we devise a novel strategy for generalising the eye movements of users, based on their viewing habits over images. We analyse data from a search task requiring a user to view images on a computer screen whilst their eye movements are recorded. The search task is defined as finding images most related to bicycles, horses or transport, and the user is required to indicate which images on each page are relevant to the task. Using these eye movement features we run an online learning algorithm [Cesa-Bianchi and Lugosi 2006], such as the perceptron, to see how well we can predict the relevant images. However, is it possible to improve the results of the perceptron by taking into account the behaviour of other users' eye movements? We present some preliminary results that answer this question in the affirmative.

The initial goal of this work was to use the Multi-Task Feature Learning (MTFL) algorithm of [Argyriou et al. 2008]. However, in order to use nonlinear kernels, the MTFL algorithm requires a low-dimensional mapping using Gram-Schmidt, an eigen-decomposition, etc., in order to construct explicit features [Shawe-Taylor and Cristianini 2004]. Yet simply using these low-dimensional mappings, without applying the MTFL algorithm, also generates a common feature space. Instead of the mappings mentioned above, we chose to employ the popular Nyström approximation [Williams and Seeger 2000] using a greedy algorithm called sparse kernel principal components analysis [Hussain and Shawe-Taylor 2009]. Given this low-dimensional representation, we can project all new users into the space and run the perceptron algorithm. We show that learning in this new refined feature space allows us to predict relevant images early on in the search task, indicating that our low-dimensional feature space generalises eye movements across several different users.

The paper is organised as follows. Sub-sections 1.1 and 1.2 cover the background techniques. Section 2 describes our novel method in detail, and the experiments and conclusions are given in Sections 3 and 4, respectively.

1.1 Background I: Online learning

We are given a sequence of examples (inputs) x ∈ R^n in n-dimensional space. These examples will be the different eye movement features extracted from the eye tracker. Furthermore, each user indicates the relevance of images (i.e., outputs) y ∈ {−1, 1}, where 1 indicates relevance and −1 indicates irrelevance.¹ Therefore we concentrate on the binary classification problem. We will also be given T ∈ N users, each of whom produces an m-sample of input-output pairs S_t = {(x_t1, y_t1), ..., (x_tm, y_tm)} where t = 1, ..., T. Finally, given any m-sample S we would like to construct a classifier h_m : x → {−1, +1} which maps inputs to outputs, where m indicates the number of examples used to construct h_m.

Given a classifier h_ℓ(·) we can measure the regret incurred so far by counting the number of times our classifier has made an incorrect prediction. More formally, given an (x, y) pair, we say a mistake is made if the prediction and the true label disagree, i.e., h(x) ≠ y. Therefore, given x_1, ..., x_ℓ and the classifiers computed from these inputs, we have the regret:

    regret(ℓ) = Σ_{i=1}^{ℓ} I(h_i(x_i) ≠ y_i)    (1)

where I(a) is equal to 1 if a is true and 0 otherwise. The regret can be thought of as the cumulative number of mistakes made over time.

The protocol used for online learning (see [Cesa-Bianchi and Lugosi 2006] for an introduction) is given in Figure 1.

    • Initialise regret(0) = 0
    • Repeat for i = 1, 2, ...
        1. receive input x_i and construct hypothesis h_i
        2. predict output h_i(x_i)
        3. receive true output y_i
        4. regret(i) = regret(i − 1) + I(h_i(x_i) ≠ y_i)

    Figure 1: Online learning protocol

The idea is to minimise the regret over time: we would like to make the least number of mistakes throughout the online learning protocol.

¹ In practice all images not clicked as relevant are deemed irrelevant.

Copyright © 2010 by the Association for Computing Machinery, Inc. ETRA 2010, Austin, TX, March 22–24, 2010. © 2010 ACM 978-1-60558-994-7/10/0003 $10.00
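The protocol of Figure 1, instantiated with the perceptron used in the experiments, can be sketched as follows. This is a minimal illustration on synthetic, linearly separable data (not the paper's eye movement features); the linear score sign(w · x) stands in for the hypothesis h_i.

```python
import numpy as np

def perceptron_online(X, y):
    """Run the online protocol of Figure 1 with a linear perceptron.

    X: (m, n) array of inputs; y: (m,) array of labels in {-1, +1}.
    Returns the cumulative regret regret(1), ..., regret(m).
    """
    w = np.zeros(X.shape[1])                  # hypothesis h_i(x) = sign(w @ x)
    regret = [0]                              # initialise regret(0) = 0
    for x_i, y_i in zip(X, y):
        pred = 1.0 if x_i @ w >= 0 else -1.0  # step 2: predict h_i(x_i)
        mistake = int(pred != y_i)            # step 4: I(h_i(x_i) != y_i)
        regret.append(regret[-1] + mistake)
        if mistake:                           # perceptron update, mistakes only
            w = w + y_i * x_i
    return regret[1:]

# toy linearly separable data (for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.0]) >= 0, 1.0, -1.0)
regret = perceptron_online(X, y)
```

By construction the returned sequence is nondecreasing and increases by at most one per round, matching the definition of regret in Equation (1).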
We will show in this paper that we can leverage useful information from other tasks to construct a low-dimensional feature space, leading to our online learning algorithm having smaller regret than when applied to individual samples. We will use the perceptron algorithm [Cesa-Bianchi and Lugosi 2006] in our experiments.

1.2 Background II: Low-dimensional mappings

The Gram-Schmidt orthogonalisation procedure finds an orthonormal basis for a matrix and is commonly used in machine learning when an explicit feature representation of inputs is required from a nonlinear kernel [Shawe-Taylor and Cristianini 2004]. However, Gram-Schmidt does not take into account the residual loss incurred by the low-dimensional mapping and may not be able to find a rich enough feature space.

A related method, known as the Nyström method, has been proposed and has recently received considerable attention in the machine learning community [Smola and Schölkopf 2000; Drineas and Mahoney 2005; Kumar et al. 2009]. As shown by [Hussain and Shawe-Taylor 2009], the greedy algorithm of [Smola and Schölkopf 2000] finds a low-dimensional space that minimises the residual error between the low-dimensional feature space and the original feature space. It corresponds to finding the directions of maximum variance in the data and can hence be viewed as a sparse kernel principal components analysis. Due to these advantages, we will use the Nyström method in our experiments.

We will assume throughout that our examples x have already been mapped into a higher-dimensional feature space F using φ : R^n → F. Therefore φ(x) = x, but we will make no explicit reference to φ(x). If our input matrix X contains the feature vectors as columns, then we can construct a kernel matrix K = X^⊤X ∈ R^{m×m} using inner products, where ⊤ denotes the transpose. Furthermore, we define each kernel function as κ(x, x') = x^⊤x'.

Let K_i = [κ(x_1, x_i), ..., κ(x_m, x_i)]^⊤ denote the ith column vector of the kernel matrix. Let i = (1, ..., k), k ≤ m, be an index vector and let K_ii indicate the square matrix constructed using the indices i. The Nyström approximation K̃, which approximates the kernel matrix K, can be defined like so:

    K̃ = K_i K_ii^{−1} K_i^⊤.

Furthermore, to project into the space defined by the index vector i we follow the regime outlined by [Diethe et al. 2009]: taking the Cholesky decomposition of K_ii^{−1} to get an upper triangular matrix R = chol(K_ii^{−1}), where R^⊤R = K_ii^{−1}, we can make a projection into this low-dimensional space. Therefore, given any input x_j we can create the corresponding new input z_j using the following projection:

    z_j = K_ji R^⊤.    (2)

We have now described all of the components needed to describe our algorithm. However, it should be noted that the projection of Equation (2) is usually only applied to a single sample S, whereas in our work we have T different samples S_1, ..., S_T. We follow a methodology similar to that described by [Argyriou et al. 2008] (who use Gram-Schmidt), using the Nyström method.

2 Our method

Given T − 1 input-output samples S_1, ..., S_{T−1} we have X_t = (x_t1, ..., x_tm) where t = 1, ..., T − 1. We concatenate these feature vectors into a larger matrix X̂ = (X_1, ..., X_{T−1}) and compute the corresponding kernel matrix K̂ = X̂^⊤X̂. This kernel matrix contains information from all T − 1 samples, and so we apply the sparse kernel principal components analysis algorithm [Hussain and Shawe-Taylor 2009] to find the index set î (where |î| = k < m) needed to compute the projection of Equation (2). This equates to finding a space with maximum variance between all of the different user behaviours. Given a new user's eye movement feature vectors X_T, we can project them into this joint low-dimensional mapping and run any online learning algorithm in the space:

    X_T^⊤ X̂_î R̂^⊤,

where R̂ = chol(K̂_îî^{−1}). We choose an online learning algorithm due to the speed, simplicity and practicality of the approach. For instance, when presenting users with images in a Content-Based Image Retrieval (CBIR) system [Datta et al. 2008] we would like to retrieve relevant images as quickly as possible. Although we do not explicitly model a CBIR system, we maintain that a future development applying our techniques within CBIR systems could be useful in retrieving relevant images quickly. In the current paper we are more interested in the feasibility study of whether any important information can be gained from the eye movements of other users.

For practical purposes we will randomly choose a subset of inputs from each user in order to construct X̂ – in practice this should not affect the quality of the Nyström approximation [Smola and Schölkopf 2000]. However, for completeness we describe the algorithm without this randomness. The feature projection procedure is given in Algorithm 1.

Algorithm 1: Nyström feature projection for multiple users
    Input: training inputs X_1, ..., X_{T−1}, new inputs X_T and k ≤ m ∈ N
    Output: new features z_T1, ..., z_Tm
    1: concatenate all feature vectors to get X̂ = (X_1, ..., X_{T−1})
    2: create a kernel matrix K̂ = X̂^⊤X̂
    3: run the sparse KPCA algorithm of [Hussain and Shawe-Taylor 2009] to find the index set î such that k = |î|
    4: compute R̂ = chol(K̂_îî^{−1})
    5: for j = 1, ..., m do
    6:     construct the new feature vector for input x_Tj in X_T:
               z_Tj = x_Tj^⊤ X̂_î R̂^⊤    (3)
    7: end for

Furthermore, based on these feature vectors we also train the Multi-Task Feature Learning algorithm of [Argyriou et al. 2008] on the T − 1 samples by viewing each as a task. The algorithm produces a matrix D ∈ R^{δ×δ} where δ < m; taking the Cholesky decomposition R_D = chol(D) gives a further low-dimensional feature mapping z_Tj ↦ z_Tj R_D. Unlike our Algorithm 1, this low-dimensional mapping also utilises information about the outputs.

3 Experimental Setup

The database used for images was the PASCAL VOC 2007 challenge database [Everingham et al.], which contains over 9000 images, each of which contains at least 1 object from a possible list
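To make the projection concrete, here is a small sketch of Algorithm 1 with a linear kernel, in which a random choice of the index set î stands in for the greedy sparse KPCA step (the randomised variant suggested for practical purposes, cf. [Kumar et al. 2009]); the data shapes below are invented for illustration.

```python
import numpy as np

def nystroem_projection(X_train_list, X_new, k, rng):
    """Sketch of Algorithm 1 with random index selection standing in
    for the greedy sparse-KPCA choice of the index set i-hat.

    Each X_t holds examples as columns (n features x m examples),
    matching the paper's K = X^T X convention with a linear kernel.
    """
    X_hat = np.hstack(X_train_list)             # step 1: concatenate users
    K_hat = X_hat.T @ X_hat                     # step 2: kernel matrix
    idx = rng.choice(X_hat.shape[1], size=k,    # step 3 (randomised stand-in)
                     replace=False)
    K_ii = K_hat[np.ix_(idx, idx)]
    # step 4: upper-triangular R-hat with R^T R = inv(K_ii)
    R = np.linalg.cholesky(np.linalg.inv(K_ii)).T
    # steps 5-7: z_Tj = x_Tj^T X_hat_i R^T for every new input
    return (X_new.T @ X_hat[:, idx]) @ R.T

rng = np.random.default_rng(0)
users = [rng.normal(size=(33, 40)) for _ in range(3)]  # 3 users, 33 features
X_new = rng.normal(size=(33, 15))                      # a new user's inputs
Z = nystroem_projection(users, X_new, k=10, rng=rng)
print(Z.shape)   # (15, 10): 15 examples in a 10-dimensional space
```

Under Algorithm 1 proper, step 3 would select î greedily to maximise the variance retained; the random choice here only illustrates the plumbing of steps 1, 2 and 4–7. Note that NumPy's `cholesky` returns a lower-triangular factor, so its transpose plays the role of the upper-triangular R̂.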
of 20 objects such as cars, trains, cows, people, etc. There were T = 23 users in the experiments, and participants were asked to view twenty pages, each of which contained fifteen images. Their eye movements were recorded by a Tobii X120 eye tracker [Tobii Studio Ltd] connected to a PC with a 19-inch monitor (resolution 1280x1024). The eye tracker has approximately 0.5 degrees of accuracy with a sample rate of 120 Hz, and uses infrared lenses to detect pupil centres and corneal reflections. Figure 2 shows an example image presented to a user along with the eye movements (in red).

Figure 2: An example page shown to a participant. Eye movements of the participant are shown in red. The red circles mark fixations and the small red dots correspond to raw measurements that belong to those fixations. The black dots mark raw measurements that were not included in any of the fixations.

Each participant was asked to carry out one search task out of a possible three, which involved looking for a Bicycle (8 users), a Horse (7 users) or some form of Transport (8 users). Participants marked as relevant any images within a search page that they thought matched their search.

The eye movement features extracted from the Tobii eye tracker can be found in Table 1 of [Pasupa et al. 2009]; the data was collected by [Saunders and Klami 2008]. Most of these features are motivated by features considered in the literature for text retrieval studies; in addition, however, they also include more image-based eye movement features. These are computed for each full image and are based on the eye trajectory and the locations of the images in the page. They cover the three main types of information typically considered in reading studies: fixations, regressions, and refixations. The number of features extracted was 33, computed from: (i) raw measurements from the eye tracker; (ii) fixations estimated from the raw measurements using the standard ClearView fixation filter provided with the Tobii eye tracking software, with settings of "radius 50 pixels, minimum duration 100 ms". In the next subsection, these 33 features correspond to the "Original features". Finally, in order to evaluate our methods with the perceptron we use the regret (see Equation (1)), the Recall-Precision (RP) curve and the Receiver Operating Characteristic (ROC) curve.

3.1 Results

Given 23 users, we used the features of 22 users (i.e., leave-one-user-out) in order to compute our low-dimensional mapping using the Nyström method outlined in Algorithm 1 – referred to as the "Nyström features".² We also made a mapping using the Multi-Task Feature Learning algorithm of [Argyriou et al. 2008], as described at the end of Section 2, which we refer to as the "Multi-Task features". The baseline used the Original features extracted using the Tobii eye tracker described above; it did not compute any low-dimensional feature space and was simply used with the perceptron, without using information from other users' eye movements.

² In the plots we use the spelling "Nystroem".

Figure 3 plots the regret, RP and ROC curves when training the perceptron algorithm with the Original features, the Nyström features and the Multi-Task features. We average each measure over the 23 leave-one-out users, and also over ten random splits of the k feature vectors needed to construct X̂. It is clear that there is a small difference between the Multi-Task features and the Original features, in favour of the Multi-Task features. However, the Nyström features yield much superior results with respect to both of these techniques. The regret of the perceptron using the Nyström features is shown in Figure 3(a), and is always smaller than with the Original and Multi-Task features (smaller regret is better). It is not surprising that we beat the Original features, but our improvement over the Multi-Task features is more surprising, as the latter also utilises the relevance feedback in order to construct a suitable low-dimensional feature space, whereas the Nyström method only uses the eye movement features. Figure 3(b) plots the Recall-Precision curve and shows dramatic improvements over the Original and Multi-Task features. This plot suggests that using the Nyström features allows us to retrieve more relevant images at the start of a search, which would be important for a CBIR system (for instance). Finally, the ROC curves pictured in Figure 3(c) also suggest improved performance when using the Nyström method to construct low-dimensional features across users.

Figure 3: Averaged over 23 users: results of the perceptron algorithm using Original, Nyström and Multi-Task features. (a) The Regret (cumulative regret versus samples); (b) The Recall-Precision curve; (c) The ROC curve.

    Algorithm           | Average Regret | AAP    | AUROC
    Original features   | 0.2886         | 0.5687 | 0.6983
    Nyström features    | 0.2433         | 0.6540 | 0.7662
    Multi-Task features | 0.2752         | 0.5802 | 0.7107

Table 1: Summary statistics for the curves in Figure 3. Bold font indicates statistical significance at the level p < 0.01 over the baseline.

In Table 1 we state summary statistics related to the curves plotted in Figure 3. As mentioned earlier, we randomly select k features from each data matrix X_t in order to construct X̂, and repeat this for ten random seeds – the results in Table 1 average over these ten splits. Therefore, corresponding to Figures 3(a), 3(b) and 3(c) we have the Average Regret, the Average Average Precision (AAP), and the Area Under the ROC curve (AUROC), respectively. Using the Nyström features is clearly better than using the Original and Multi-Task features, with a significance level of p < 0.01. Using the Multi-Task features is slightly better than using the Original features, but those results are only significant at the level p < 0.05. The significance of the direction of the differences between each pair of measures was tested using the sign test [Siegel and Castellan 1988].

4 Conclusions

We took eye gaze data of several users viewing images for a search task, and constructed a novel method for producing a good low-dimensional space for eye movement features relevant to the search task. We employed the Nyström method and showed it to produce better results than the extracted eye movement features [Pasupa et al. 2009] and a state-of-the-art Multi-Task Feature Learning algorithm. The results of this paper indicate that if we would like to predict the relevance of images using the eye movements of a user, then it is possible to improve these results by learning relevant information from the eye movement behaviour of other users viewing similar (not necessarily the same) images and given similar (not necessarily the same) search tasks.

The first obvious application of our work is to CBIR systems. By recording the eye movements of users we could apply Algorithm 1 to project gaze features into a common feature space, and then apply a CBIR system in this space. Based on the results reported in this paper, we would hope that relevant images are retrieved early in the search. Another application to CBIR would be to use the method we presented as a form of verification of relevance. Some users may give incorrect relevance feedback due to tiredness, lack of interest, etc., but based on our method we could correct such actions as and when users make judgements on relevance. As the perceptron improves its classification over time, and we would expect users to make mistakes near the end of a search, such corrective action would be useful to CBIR systems requiring accurate relevance feedback mechanisms to improve search.

The machine learning view is that we are projecting the feature vectors into a common Nyström space (constructed from the feature vectors of different users), and then running the perceptron with these features. Our experiments suggest better performance than simply using the perceptron together with the original features. Therefore, we would like to construct regret bounds for the perceptron when using this common low-dimensional space. Finally, for practical purposes we believe that randomly selecting columns from the kernel matrix to find î would generate a feature space [Kumar et al. 2009] with little deterioration in test accuracy, but leading to computational speed-ups.

Acknowledgements

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007–2013) under grant agreement n° 216529, Personal Information Navigator Adapting Through Viewing, PinView.

References

ARGYRIOU, A., EVGENIOU, T., AND PONTIL, M. 2008. Convex multi-task feature learning. Machine Learning 73, 3, 243–272.

CESA-BIANCHI, N., AND LUGOSI, G. 2006. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.

DATTA, R., JOSHI, D., LI, J., AND WANG, J. Z. 2008. Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys 40, 5:1–5:60.

DIETHE, T., HUSSAIN, Z., HARDOON, D. R., AND SHAWE-TAYLOR, J. 2009. Matching pursuit kernel Fisher discriminant analysis. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, AISTATS, vol. 5 of JMLR: W&CP.

DRINEAS, P., AND MAHONEY, M. W. 2005. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research 6, 2153–2175.

EVERINGHAM, M., VAN GOOL, L., WILLIAMS, C. K. I., WINN, J., AND ZISSERMAN, A. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.

HUSSAIN, Z., AND SHAWE-TAYLOR, J. 2009. Theory of matching pursuit. In Advances in Neural Information Processing Systems 21, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, Eds., 721–728.

KUMAR, S., MOHRI, M., AND TALWALKAR, A. 2009. Sampling techniques for the Nyström method. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, AISTATS, vol. 5 of JMLR: W&CP.

PASUPA, K., SAUNDERS, C., SZEDMAK, S., KLAMI, A., KASKI, S., AND GUNN, S. 2009. Learning to rank images from eye movements. In HCI '09: Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV '09) Workshops on Human-Computer Interaction, 2009–2016.

SAUNDERS, C., AND KLAMI, A. 2008. Database of eye-movement recordings. Tech. Rep. Project Deliverable Report D8.3, PinView FP7-216529. Available online at http://www.pinview.eu/deliverables.php.

SHAWE-TAYLOR, J., AND CRISTIANINI, N. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, U.K.

SIEGEL, S., AND CASTELLAN, N. J. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, Singapore.
  • 5. ¨ S MOLA , A., AND S CH OLKOPF, B. 2000. Sparse greedy matrix approximation for machine learning. In Proceedings of the 17th International Conference on Machine Learning, 911–918. T OBII S TUDIO LTD. Tobii Studio Help. http://studiohelp.tobii.com/StudioHelp 1.2/. W ILLIAMS , C. K. I., AND S EEGER , M. 2000. Using the Nystr¨ m o method to speed up kernel machines. In Advances in Neural Information Processing Systems, MIT Press, vol. 13, 682–688. 185
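The project-then-learn pipeline discussed above — mapping eye movement feature vectors into a common Nyström space and running an online perceptron in that space — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: it uses synthetic stand-in data, an RBF kernel with an arbitrary bandwidth, and uniform random landmark (column) selection in the spirit of Kumar et al. [2009], rather than the greedy sparse kernel PCA selection of Algorithm 1.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel between the rows of A and the rows of B.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def nystroem_features(X, landmarks, gamma=0.1):
    # Nystroem map phi(x) = W^{-1/2} k(landmarks, x), where
    # W = K(landmarks, landmarks); a small ridge keeps eigh stable.
    W = rbf_kernel(landmarks, landmarks, gamma)
    vals, vecs = np.linalg.eigh(W + 1e-10 * np.eye(len(W)))
    W_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
    return rbf_kernel(X, landmarks, gamma) @ W_inv_sqrt

def perceptron_mistakes(Phi, y):
    # Online perceptron in the projected space; returns the
    # cumulative mistake count per round (a proxy for regret).
    w = np.zeros(Phi.shape[1])
    counts, total = [], 0
    for x, label in zip(Phi, y):
        if label * (w @ x) <= 0:   # mistake: update the weights
            w += label * x
            total += 1
        counts.append(total)
    return np.array(counts)

rng = np.random.default_rng(0)
# Toy stand-in for the pooled eye movement features of "other users".
X_others = rng.normal(size=(200, 10))
# Hypothetical landmark choice: uniform random columns (Kumar et al. 2009).
landmarks = X_others[rng.choice(len(X_others), size=20, replace=False)]
# Toy data and labels for the held-out user (nonlinear labelling rule).
X_new = rng.normal(size=(300, 10))
y_new = np.sign((X_new ** 2).sum(1) - 10.0)
Phi = nystroem_features(X_new, landmarks)
regret_curve = perceptron_mistakes(Phi, y_new)
print(regret_curve[-1])  # total mistakes over the run
```

Replacing the random choice of `landmarks` with the greedy sparse kernel PCA criterion would recover the column-selection strategy of Algorithm 1; the rest of the pipeline is unchanged.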