In this paper we predict the relevance of images based on a low-dimensional
feature space found using several users’ eye movements. Each user is given an image-based search task, during which their eye movements are extracted using a Tobii eye tracker. The users also provide us with explicit feedback regarding the relevance of images. We demonstrate that by using a greedy Nyström algorithm on the eye movement features of different users, we can find a suitable low-dimensional feature space for learning. We validate the suitability of this feature space by projecting the eye movement features of a new user into this space, training an online learning algorithm using these features, and showing that the number of mistakes (regret over time) made in predicting relevant images is lower than when using the original eye movement features. We also plot
Recall-Precision and ROC curves, and use a sign test to verify the statistical significance of our results.
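For readers unfamiliar with the technique, a greedy Nyström landmark selection and the resulting low-dimensional projection can be sketched as follows; the selection criterion here (largest residual of the kernel diagonal) is a common generic choice and not necessarily the paper's exact algorithm:

```python
import numpy as np

def greedy_nystrom(K, m):
    """Greedily pick m landmark columns of the kernel matrix K, at each step
    choosing the point whose kernel diagonal is worst-approximated so far,
    then return an m-dimensional embedding Z with Z @ Z.T ~ K."""
    n = K.shape[0]
    idx = [int(np.argmax(np.diag(K)))]
    for _ in range(m - 1):
        C = K[:, idx]                       # n x k landmark columns
        W = K[np.ix_(idx, idx)]             # k x k landmark kernel
        # residual of the diagonal under the current Nystrom approximation
        approx_diag = np.einsum('ij,jk,ik->i', C, np.linalg.pinv(W), C)
        resid = np.diag(K) - approx_diag
        resid[idx] = -np.inf                # never re-pick a landmark
        idx.append(int(np.argmax(resid)))
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    eigval, eigvec = np.linalg.eigh(W)
    eigval = np.clip(eigval, 1e-12, None)
    # rows of Z are the m-dimensional features; a new user's features are
    # projected by applying the same map to their kernel values against idx
    return C @ eigvec / np.sqrt(eigval), idx
```

With m = n the approximation is exact for an invertible kernel matrix, which is a convenient sanity check.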
Estimating Human Pose from Occluded Images (ACCV 2009) - Jia-Bin Huang
We address the problem of recovering 3D human pose from single 2D images, in which the pose estimation problem is formulated as a direct nonlinear regression from image observation to 3D joint positions. One key issue that has not been addressed in the literature is how to estimate 3D pose when humans in the scenes are partially or heavily occluded. When occlusions occur, features extracted from image observations (e.g., silhouette-based shape features, histograms of oriented gradients, etc.) are seriously corrupted, and consequently the regressor (trained on un-occluded images) is unable to estimate pose states correctly. In this paper, we present a method that is capable of handling occlusions using sparse signal representations, in which each test sample is represented as a compact linear combination of training samples. The sparsest solution can then be efficiently obtained by solving a convex optimization problem with certain norms (such as the l1-norm). The corrupted test image can be recovered with a sparse linear combination of un-occluded training images, which can then be used for estimating human pose correctly (as if no occlusions exist). We also show that the proposed approach implicitly performs relevant feature selection with un-occluded test images. Experimental results on synthetic and real data sets bear out our theory that, with sparse representation, 3D human pose can be robustly estimated when humans are partially or heavily occluded in the scenes.
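The convex l1-minimization step the abstract refers to can be solved by, for example, iterative soft-thresholding (ISTA); this is a generic sketch of that class of solver, not the authors' implementation:

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    A holds training samples as columns; x is the sparse coefficient vector."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)               # gradient of the quadratic term
        z = x - g / L
        # soft-thresholding = proximal operator of the l1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

With a well-conditioned A and a small penalty, the recovered coefficients are close to the true sparse combination.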
C. Guyon, T. Bouwmans, E. Zahzah, “Foreground Detection via Robust Low Rank Matrix Factorization including Spatial Constraint with Iterative Reweighted Regression”, International Conference on Pattern Recognition, ICPR 2012, Tsukuba, Japan, November 2012.
Image Splicing Detection involving Moment-based Feature Extraction and Classi... - IDES Editor
In the modern age, the digital image has taken the place of the original analog photograph, and the forgery of digital images has become increasingly easy to carry out and harder to detect. Image splicing is the process of making a composite picture by cutting and joining two or more photographs. An approach to efficient image splicing detection is proposed here. A spliced image often introduces a number of sharp transitions such as lines, edges, and corners. Phase congruency is a sensitive measure of these sharp transitions and is hence proposed as a feature for splicing detection. Statistical moments of characteristic functions of wavelet sub-bands are examined to detect the differences between authentic and spliced images. Image splicing detection can be treated as a two-class pattern recognition problem, which builds the model using moment features and other parameters extracted from the given test image. An artificial neural network (ANN) is chosen as the classifier to train and test on the given images.
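The moment features mentioned above can be illustrated on a single array; this sketch takes the statistical moments of the characteristic function (the DFT magnitude of the histogram), leaving out the wavelet decomposition the paper applies first:

```python
import numpy as np

def cf_moments(subband, n_bins=64, orders=(1, 2, 3)):
    """Statistical moments of the characteristic function (the DFT of the
    sub-band histogram), a standard splicing-detection feature family.
    The n-th moment is sum(f^n * |CF(f)|) / sum(|CF(f)|) over positive
    frequencies, with the DC component excluded."""
    hist, _ = np.histogram(subband, bins=n_bins)
    cf = np.abs(np.fft.fft(hist))[1: n_bins // 2]   # magnitude, DC excluded
    freqs = np.arange(1, n_bins // 2)
    total = cf.sum()
    return [float((freqs ** n * cf).sum() / total) for n in orders]
```

Because all weighting frequencies are at least 1, the moments are non-decreasing in the order n, which is a quick sanity check on the implementation.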
C. Guyon, T. Bouwmans, E. Zahzah, “Foreground Detection via Robust Low Rank Matrix Decomposition including Spatio-Temporal Constraint”, International Workshop on Background Model Challenges, ACCV 2012, Daejeon, Korea, November 2012.
Visual Attention Convergence Index for Virtual Reality Experiences - Pawel Kobylinski
The paper introduces a novel quantitative method in the domain of eye tracking (ET) for virtual reality (VR). The method may be of interest to researchers of human factors in VR, behavioral psychologists, and designers of VR experiences. Several mathematical formulas describing a novel index quantifying the convergence of visual attention are introduced. The index is based on the recently developed distance variance, a function of distances between observations in metric spaces. An aggregated version of the visual attention convergence index makes it possible to measure the effectiveness of any system of attentional cues employed by a designer to guide the attention of VR experience participants along an intended narrative line. An individual version of the index captures individual differences in the convergence of visual attention across participants. Possibilities for real-life and academic use of the index are discussed, and example results of its application to real VR ET data are summarized.
Cite as:
Kobylinski P., Pochwatko G. (2020) Visual Attention Convergence Index for Virtual Reality Experiences. In: Ahram T., Taiar R., Colson S., Choplin A. (eds) Human Interaction and Emerging Technologies. IHIET 2019. Advances in Intelligent Systems and Computing, vol 1018. Springer, Cham
https://link.springer.com/chapter/10.1007/978-3-030-25629-6_48
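The distance variance underlying the index is a standard statistic; here is a minimal sketch, paired with a purely illustrative convergence score (the paper's actual index formulas differ):

```python
import numpy as np

def distance_variance(X):
    """Sample distance variance of observations X (n x d): the square root of
    the mean of squared double-centered pairwise Euclidean distances."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()
    return np.sqrt((A ** 2).sum() / n ** 2)

def convergence_index(gaze_points):
    """Illustrative stand-in for a convergence score, NOT the paper's formula:
    tightly clustered gaze points give low dispersion and a score near 1."""
    return 1.0 / (1.0 + distance_variance(np.asarray(gaze_points)))
```

Identical gaze points yield a distance variance of zero and hence the maximal score, while widely spread points score lower.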
Uniform and non-uniform single image deblurring based on sparse representatio... - ijma
Considering the sparseness property of images, a sparse-representation-based iterative deblurring method is presented for single-image deblurring under uniform and non-uniform motion blur. The approach is based on sparse and redundant representations over dictionaries trained adaptively from the single blurred, noisy image itself. Further, the K-SVD algorithm is used to obtain a dictionary that describes the image contents effectively. Comprehensive experimental evaluation demonstrates that the proposed framework, integrating the sparseness property of images, adaptive dictionary training, and an iterative deblurring scheme, significantly improves deblurring performance, is comparable with state-of-the-art deblurring algorithms, and offers a powerful solution to an ill-conditioned inverse problem.
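The sparse-coding step used inside K-SVD is typically Orthogonal Matching Pursuit; a minimal numpy sketch (assuming unit-norm dictionary columns), not the paper's full deblurring pipeline:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: approximate signal y with k atoms of
    dictionary D (columns assumed unit-norm). At each step, pick the atom
    most correlated with the residual, then refit all chosen coefficients
    by least squares."""
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```

For an orthonormal dictionary and a signal that truly is a k-term combination, OMP recovers the coefficients exactly.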
Note on Coupled Line Cameras for Rectangle Reconstruction (ACDDE 2012) - Joo-Haeng Lee
The presentation file for the talk at ACDDE 2012.
http://www.acdde2012.org/
It covers the research result published at ICPR 2012 under the title "Camera Calibration from a Single Image based on Coupled Line Cameras and Rectangle Constraint".
https://iapr.papercept.net/conferences/scripts/abstract.pl?ConfID=7&Number=70
Hardoon - Image Ranking With Implicit Feedback From Eye Movements (Kalle)
In order to help users navigate an image search system, one could
provide explicit information on a small set of images as to which
of them are relevant or not to their task. These rankings are learned
in order to present a user with a new set of images that are relevant
to their task. Requiring such explicit information may not
be feasible in a number of cases, so we consider the setting where
the user provides implicit feedback, in the form of eye movements, to assist when
performing such a task. This paper explores the idea of implicitly
incorporating eye movement features in an image ranking task
where only images are available during testing. Previous work had
demonstrated that combining eye movement and image features improved
retrieval accuracy when compared to using each of
the sources independently. Despite these encouraging results, the
proposed approach is unrealistic, as no eye movements are available
a priori for new images (i.e., only after the ranked images are
presented would one be able to measure a user’s eye movements
on them). We propose a novel search methodology which combines
image features together with implicit feedback from users’
eye movements in a tensor ranking Support Vector Machine and
show that it is possible to extract the individual source-specific
weight vectors. Furthermore, we demonstrate that the decomposed
image weight vector is able to construct a new image-based semantic
space that achieves higher retrieval accuracy than when solely
using the image features.
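Extracting source-specific weight vectors from a joint weight matrix can be illustrated with a rank-1 SVD; this is a sketch of the idea, not the paper's exact tensor ranking SVM decomposition:

```python
import numpy as np

def decompose_joint_weights(W):
    """Split a joint (image x gaze) weight matrix into per-source weight
    vectors via its best rank-1 approximation: W ~ u v^T, with u acting on
    image features and v on gaze features."""
    U, s, Vt = np.linalg.svd(W)
    u = U[:, 0] * np.sqrt(s[0])    # image-specific weight vector
    v = Vt[0] * np.sqrt(s[0])      # gaze-specific weight vector
    return u, v
```

When W is genuinely rank-1, the outer product of the two recovered vectors reproduces W exactly (the SVD's sign ambiguity cancels in the product).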
Nakayama - Estimation Of Viewers' Response For Contextual Understanding Of Tasks...
To estimate viewers' contextual understanding, features of their
eye movements while viewing question statements in response to definition statements were extracted, for both correct and incorrect responses, and compared. Twelve directional features
of eye-movements across a two-dimensional space were created, and these features were compared between correct and incorrect responses. The procedure of estimating the response was developed with Support Vector Machines, using these features. The estimation performance and accuracy were assessed across combinations of features. The number of definition statements, which needed to be memorized to answer the question statements during the experiment, affected the estimation accuracy. These results provide evidence that features of eye-movements during reading statements
can be used as an index of contextual understanding.
Grindinger - Group-Wise Similarity And Classification Of Aggregate Scanpaths
We present a novel method for the measurement of the similarity between aggregates of scanpaths. This may be thought of as a solution to the “average scanpath” problem. As a by-product of this method, we derive a classifier for groups of scanpaths drawn from various classes. This capability is empirically demonstrated using data gathered from an experiment in an attempt to automatically determine expert/novice classification for a set of visual tasks.
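A classic pairwise building block for scanpath similarity is string edit distance over AOI (area-of-interest) sequences; the paper's aggregate method goes beyond this, but the per-pair baseline looks like:

```python
def levenshtein(a, b):
    """Edit distance between two scanpaths coded as AOI strings,
    computed with a single rolling row of the dynamic-programming table."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete from a
                                     dp[j - 1] + 1,      # insert into a
                                     prev + (ca != cb))  # substitute/match
    return dp[len(b)]

def scanpath_similarity(a, b):
    """Normalized similarity in [0, 1]: 1 means identical scanpaths."""
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)
```

Averaging such pairwise scores within and between groups is one simple way to compare aggregates of scanpaths.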
Human Action Recognition In Videos Using Stable Features - sipij
Human action recognition is still a challenging problem, and researchers are investigating it using different techniques. We propose a robust approach for human action recognition. This is achieved by extracting stable spatio-temporal features in terms of the pairwise local binary pattern (P-LBP) and the scale-invariant feature transform (SIFT). These features are used to train an MLP neural network during the training stage, and the action classes are inferred from the test videos during the testing stage. The proposed features match the motion of individuals well, and their consistency and accuracy are high on a challenging dataset. The experimental evaluation is conducted on a benchmark dataset commonly used for human action recognition. In addition, we show that our approach outperforms the individual features, i.e., using only spatial or only temporal features.
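The LBP component of those features can be sketched as the plain 8-neighbour local binary pattern; the paper's P-LBP pairs such patterns across frames to add the temporal dimension, which this sketch omits:

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour local binary pattern for each interior pixel:
    threshold the neighbourhood at the centre value and read the eight
    comparison bits as one byte."""
    c = img[1:-1, 1:-1]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

A histogram of these codes over a patch is the usual texture descriptor fed to a classifier such as an MLP.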
Integrated Hidden Markov Model and Kalman Filter for Online Object Tracking - ijsrd.com
A visual prior learned from generic real-world images can represent the objects in a scene. Existing work presented an online tracking algorithm that transfers a visual prior learned offline to online object tracking: a complete dictionary representing the visual prior is learned from a collection of real-world images. This prior knowledge of objects is generic, and the training image set does not contain any observation of the target object. The learned visual prior is transferred to construct the object representation using sparse coding and multiscale max pooling. A linear classifier is learned online to distinguish the target from the background and to track target and background appearance variations over time. Tracking is carried out within a Bayesian inference framework, with the learned classifier used to construct the observation model. A particle filter is used to estimate the tracking result sequentially; however, it is unable to work efficiently in noisy scenes, and prior information about the object structure alone is not sufficient to track the target reliably. We propose an HMM-based Kalman filter to improve online target tracking in noisy sequential image frames. The covariance is measured to identify noisy scenes, and discrete time steps are evaluated to separate the target object from the background. Experiments are conducted on challenging scene sequences to evaluate the performance of the object tracking algorithm in terms of tracking success rate, centre location error, number of scenes, learned object sizes, and tracking latency.
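The Kalman filter at the core of the proposal is standard; one predict/update cycle, as a generic sketch rather than the paper's HMM-integrated variant:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P: state mean and covariance; z: new measurement;
    F, H: transition and measurement matrices; Q, R: noise covariances."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

For a constant-velocity state [position, velocity] with position-only measurements, the filter locks onto a linear trajectory within a few steps.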
Kandemir - Inferring Object Relevance From Gaze In Dynamic Scenes
As prototypes of data glasses having both data augmentation and gaze tracking capabilities are becoming available, it is now possible to develop proactive gaze-controlled user interfaces to display information about objects, people, and other entities in real-world setups. In order to decide which objects the augmented information should be about, and how saliently to augment, the system needs an estimate of the importance or relevance of the objects of the scene for the user at a given time. The estimates will be used to minimize distraction of the user, and for providing efficient spatial management of the augmented items. This work is a feasibility study on inferring the relevance of objects in dynamic scenes from gaze. We collected gaze data from subjects watching a video for a pre-defined task. The results show that a simple ordinal logistic regression model gives relevance rankings of scene objects with a promising accuracy.
Object Tracking with SURF: ARM-Based Platform Implementation - Editor IJCATR
Several algorithms for object tracking have been developed; our approach is slightly different, in that it concerns how to adapt and implement such algorithms on a mobile platform.
We started our work by studying and analyzing feature-matching algorithms to identify the most appropriate implementation technique for our case.
In this paper, we propose an implementation technique for the SURF (Speeded Up Robust Features) algorithm for the purposes of recognition and object tracking in real time. This is achieved through an application on a mobile platform such as a Raspberry Pi, in which we can select an image containing the object to be tracked from the scene captured by the live Pi camera. Our algorithm computes the SURF descriptors for the two images to detect the similarity between them, and then matches the similar objects. At the second level, we extend our algorithm to achieve tracking in real time within the performance limits of the Raspberry Pi. The first step is therefore to set up all the libraries the Raspberry Pi needs, and then to adapt the algorithm to the board's capabilities. This paper presents experimental results on a set of evaluation images as well as on images obtained in real time.
Action Genome: Actions As Composition of Spatio-Temporal Scene Graphs - Sangmin Woo
Jingwei Ji, Ranjay Krishna, Li Fei-Fei, and Juan Carlos Niebles. Action genome: Actions as composition of spatio-temporal scene graphs. arXiv preprint arXiv:1912.06992, 2019.
Enhancing the Design Pattern Framework of Robots' Object Selection Mechanism... - INFOGAIN PUBLICATION
In order to enable a computer to construct and display a three-dimensional array of solid objects from a single two-dimensional photograph, the rules and assumptions of depth perception have been carefully analyzed and mechanized. It is assumed that a photograph is a perspective projection of a set of objects which can be constructed from transformations of known three-dimensional models, and that the objects are supported by other visible objects or by a ground plane. These assumptions enable a computer to obtain a reasonable three-dimensional description from the edge information in a photograph by means of a topological, mathematical process. A computer program has been written which can process a photograph into a line drawing, transform the line drawing into a three-dimensional representation and, finally, display the three-dimensional structure with all the hidden lines removed, from any point of view. The 2-D to 3-D construction and 3-D to 2-D display processes are sufficiently general to handle most collections of planar-surfaced objects and provide a valuable starting point for future investigation of computer-aided three-dimensional systems.
Image Denoising Based On Sparse Representation In A Probabilistic Framework - CSCJournals
Image denoising is an interesting inverse problem: by denoising we mean finding a clean image, given a noisy one. In this paper, we propose a novel image denoising technique based on the generalized k-density model, as an extension of the probabilistic framework for solving the image denoising problem. The approach is based on using an overcomplete basis dictionary for sparsely representing the image of interest. To learn the overcomplete basis, we use ICA based on the generalized k-density model. The learned dictionary is then used for denoising speech signals as well as images. Experimental results confirm the effectiveness of the proposed method for image denoising. A comparison with other denoising methods is also made, and it is shown that the proposed method produces the best denoising effect.
Blignaut - Visual Span And Other Parameters For The Generation Of Heatmaps
Although heat maps are commonly provided by eye-tracking and visualization tools, they have some disadvantages and caution must be taken when using them to draw conclusions on eye tracking results. It is motivated here that visual span is an essential component of visualizations of eye-tracking data and an algorithm is proposed to allow the analyst to set the visual span as a parameter prior to generation of a heat map.
Although the ideas are not novel, the algorithm also indicates how transparency of the heat map can be achieved and how the color gradient can be generated to represent the probability for an object to be observed within the defined visual span. The optional addition of contour lines provides a way to visualize separate intervals in the continuous color map.
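A minimal heat-map generator with the visual span exposed as a parameter (used here as the sigma of a Gaussian kernel) might look like this; the transparency and contour-line steps described above are omitted, and the kernel choice is an illustrative assumption:

```python
import numpy as np

def heatmap(fixations, shape, span):
    """Accumulate a Gaussian kernel at each fixation, weighted by its
    duration, then normalize to [0, 1]. `span` is the visual span in
    pixels that the analyst sets before generation."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    m = np.zeros(shape)
    for (x, y, dur) in fixations:           # fixation = (x, y, duration)
        m += dur * np.exp(-((xx - x) ** 2 + (yy - y) ** 2)
                          / (2.0 * span ** 2))
    return m / m.max() if m.max() > 0 else m
```

Widening the span spreads each fixation's contribution over a larger area, which is exactly the effect the paper argues the analyst should control explicitly.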
Zhang - Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Relevance feedback (RF) mechanisms are widely adopted in Content-Based Image Retrieval (CBIR) systems to improve image retrieval performance. However, there exist some intrinsic problems: (1) the semantic gap between high-level concepts and low-level features and (2) the subjectivity of human perception of visual contents. The primary focus of this paper is to evaluate the possibility of inferring the relevance of images based on eye movement data. In total, 882 images from 101 categories are viewed by 10 subjects to test the usefulness of implicit RF, where the relevance of each image is known beforehand. A set of fixation-based measures is thoroughly evaluated, including fixation duration, fixation count, and the number of revisits. Finally, the paper proposes a decision tree to predict the user's input during image searching tasks. The prediction precision of the decision tree is over 87%, which sheds light on a promising integration of natural eye movement into CBIR systems in the future.
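The kind of fixation-based decision rules such a tree learns can be illustrated with hand-written thresholds; the feature set matches the measures named above, but the threshold values below are purely illustrative, not the learned ones:

```python
def predict_relevance(fix_duration_ms, fix_count, revisits):
    """Toy decision rules in the spirit of a learned relevance tree.
    Inputs: total fixation duration on the image (ms), number of
    fixations, and number of revisits. Thresholds are hypothetical."""
    if fix_duration_ms > 400:               # long dwell -> likely relevant
        return True
    if fix_count >= 3 and revisits >= 1:    # repeated attention -> relevant
        return True
    return False
```

A real system would fit these splits with a decision-tree learner on labeled gaze data rather than hand-pick them.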
Yamamoto - Development Of Eye-Tracking Pen Display Based On Stereo Bright Pupil...
The intuitive user interfaces of PCs and PDAs, such as pen displays and touch panels, have become widely used in recent times. In this study, we have developed an eye-tracking pen display based on the stereo bright pupil technique. First, the bright pupil camera was developed by examining the arrangement of cameras and LEDs for the pen display. Next, a gaze estimation method was proposed for the stereo bright pupil camera, which enables one-point calibration. Then, the prototype of the eye-tracking pen display was developed. The accuracy of the system was approximately 0.7° on average, which is sufficient for human interaction support. We also developed an eye-tracking tabletop as an application of the proposed stereo bright pupil technique.
Wastlund - What You See Is Where You Go: Testing A Gaze-Driven Power Wheelchair ...
Individuals with severe multiple disabilities have little or no opportunity to express their own wishes, make choices and move independently. Because of this, the objective of this work has been to develop a prototype for a gaze-driven device to manoeuvre powered wheelchairs or other moving platforms. The prototype has the same capabilities as a normal powered wheelchair, with two exceptions. Firstly, the prototype is controlled by eye movements instead of by a normal joystick. Secondly, the prototype is equipped with a sensor that stops all motion when the machine approaches an obstacle. The prototype has been evaluated in a preliminary clinical test with two users. Both users clearly communicated that they appreciated and had mastered the ability to control a powered wheelchair with their eye movements.
Vinnikov - Contingency Evaluation Of Gaze-Contingent Displays For Real-Time Vis...
The visual field is the area of space that can be seen when an observer fixates a given point. Many visual capabilities vary with position in the visual field and many diseases result in changes in the visual field. With current technology, it is possible to build very complex real-time visual field simulations that employ gaze-contingent displays. Nevertheless, there are still no established techniques to evaluate such systems. We have developed a method to evaluate a system’s contingency by employing visual blind spot localization as well as foveal fixation. During the experiment, gaze-contingent and static conditions were compared. There was a strong correlation between predicted results and gaze-contingent trials. This evaluation method can also be used with patient populations and for the evaluation of gaze-contingent display systems, when there is need to evaluate a visual field outside of the foveal region.
Urbina - Pies With EYEs: The Limits Of Hierarchical Pie Menus In Gaze Control
Pie menus offer several features which are advantageous especially for gaze control. Although the optimal number of slices per pie
and of depth layers has already been established for manual control, these values may differ in gaze control due to differences in spatial accuracy and cognitive processing. Therefore, we investigated the layout limits for hierarchical pie menus in gaze control. Our user study indicates that providing six slices in multiple depth layers guarantees fast and accurate selections. Moreover, we compared two different methods of selecting a slice. Novices performed well with both, but selecting via selection borders produced better performance for experts than the standard dwell-time selection.
Urbina - Alternatives To Single-Character Entry And Dwell-Time Selection On Eye...
Eye typing could provide motor disabled people a reliable method of communication given that the text entry speed of current interfaces can be increased to allow for fluent communication. There are two reasons for the relatively slow text entry: dwell time selection requires waiting a certain time, and single character entry limits the maximum entry speed. We adopted a typing interface based on hierarchical pie menus, pEYEwrite [Urbina and Huckauf 2007] and included bigram text entry with one single pie iteration. Therefore, we introduced three different bigram building strategies.
Moreover, we combined dwell-time selection with selection by borders, providing an alternative selection method and extra functionality. In a longitudinal study we compared participants' performance during character-by-character text entry with bigram entry and with
text entry with bigrams derived by word prediction. Data showed large advantages of the new entry methods over single character text entry in speed and accuracy. Participants preferred selecting by
borders, which allowed them faster selections than the dwell time method.
Tien - Measuring Situation Awareness Of Surgeons In Laparoscopic Training
The study of surgeons’ eye movements is an innovative way of assessing skill and situation awareness, in that a comparison of eye movement strategies between expert surgeons and novices may show differences that can be used in training. Our preliminary study compared eye movements of 4 experts and
4 novices performing a simulated gall bladder removal task on a
dummy patient with an audible heartbeat and simulated vital signs displayed on a secondary monitor. We used a head-mounted Locarna PT-Mini eyetracker to record fixation locations during the operation. The results showed that novices concentrated so hard on the surgical
display that they were hardly able to look at the patient’s vital signs, even when heart rate audibly changed during the procedure. In comparison, experts glanced occasionally at the vitals monitor, thus being able to observe the patient condition.
Takemura – Estimating 3D Point of Regard and Visualizing Gaze Trajectories Und...
The portability of an eye tracking system encourages us to develop a technique for estimating the 3D point-of-regard. Unlike conventional methods, which estimate the position in the 2D image coordinates of the mounted camera, such a technique can represent richer gaze information of a human moving in a larger area. In this paper, we propose a method for estimating the 3D point-of-regard, and a technique for visualizing gaze trajectories under natural head movements, for a head-mounted device. We employ a visual SLAM technique to estimate head configuration and extract environmental information. Even in cases where the head moves dynamically, the proposed method can obtain the 3D point-of-regard. Additionally, gaze trajectories are appropriately overlaid on the scene camera image.
Stevenson – Eye Tracking with the Adaptive Optics Scanning Laser Ophthalmoscope
Recent advances in high-magnification retinal imaging have allowed for visualization of individual retinal photoreceptors, but these systems also suffer from distortions due to fixational eye motion. Algorithms developed to remove these distortions have the added benefit of providing arc-second-level resolution of the eye movements that produce them. The system also allows for visualization of targets on the retina, allowing for absolute retinal position measures down to the level of individual cones. This paper describes the process used to remove the eye movement artifacts and presents an analysis of their spectral characteristics. We find a roughly 1/f amplitude spectrum similar to that reported by Findlay (1971), with no evidence for a distinct tremor component.
Stellmach – Advanced Gaze Visualizations for Three-Dimensional Virtual Environm...
Gaze visualizations represent an effective way of gaining fast insights into eye tracking data. Current approaches do not adequately support eye tracking studies for three-dimensional (3D) virtual environments. Hence, we propose a set of advanced gaze visualization techniques for supporting gaze behavior analysis in such environments. Similar to commonly used gaze visualizations for two-dimensional stimuli (e.g., images and websites), we contribute advanced 3D scan paths and 3D attentional maps. In addition, we introduce a models-of-interest timeline depicting viewed models, which can be used for displaying scan paths in a selected time segment. A prototype toolkit combining an implementation of our proposed techniques is also discussed. Their potential for facilitating eye tracking studies in virtual environments was supported by a user study among eye tracking and visualization experts.
Skovsgaard – Small Target Selection with Gaze Alone
Accessing the smallest targets in mainstream interfaces using gaze
alone is difficult, but interface tools that effectively increase the size of selectable objects can help. In this paper, we propose a conceptual framework to organize existing tools and guide the development of new tools. We designed a discrete zoom tool and conducted a proof-of-concept experiment to test the potential of the framework and the tool. Our tool was as fast as and more accurate than the currently available two-step magnification tool. Our framework shows potential to guide the design, development, and testing of zoom tools to facilitate the accessibility of mainstream
interfaces for gaze users.
San Agustin – Evaluation of a Low-Cost Open-Source Gaze Tracker
This paper presents a low-cost gaze tracking system that is based on a webcam mounted close to the user’s eye. The performance of the gaze tracker was evaluated in an eye-typing task using two different typing applications. Participants could type between 3.56 and 6.78 words per minute, depending on the typing system used. A pilot study to assess the usability of the system was also carried out in the home of a user with severe motor impairments. The
user successfully typed on a wall-projected interface using his eye movements.
Ryan – Match Moving for Area-Based Analysis of Eye Movements in Natural Tasks
Analysis of recordings made by a wearable eye tracker is complicated by video stream synchronization, pupil coordinate mapping, eye movement analysis, and tracking of dynamic Areas Of Interest (AOIs) within the scene. In this paper a semi-automatic system is developed to help automate these processes. Synchronization is accomplished via side-by-side video playback control. A deformable eye template and calibration dot marker allow reliable initialization via simple drag and drop, as well as a user-friendly way to correct the algorithm when it fails. Specifically, drift may be corrected by nudging the detected pupil center to the appropriate coordinates. In a case study, the impact of surrogate nature views on physiological health and perceived well-being is examined via analysis of gaze over images of nature. A match-moving methodology was developed to track AOIs for this particular application but is applicable toward similar future studies.
Rosengrant – Gaze Scribing in Physics Problem Solving
Eye-tracking has been widely used for research purposes in fields such as linguistics and marketing. However, there are many possibilities for how eye-trackers could be used in other disciplines like physics. A part of physics education research deals with the differences between novices and experts, specifically how each group solves problems. Though there has been a great deal of research about these differences, there has been no research that focuses on noticing exactly where experts and novices look while solving the problems. Thus, to complement the past research, I have created a new technique called gaze scribing. Subjects wear a head-mounted eye-tracker while solving electrical circuit problems on a graphics monitor. I monitor the scan patterns of the subjects and combine them with videotapes of their work while solving the problems. This new technique has yielded new information and elaborated on previous studies.
Qvarfordt – Understanding the Benefits of Gaze-Enhanced Visual Search
In certain applications such as radiology and imagery analysis, it is important to minimize errors. In this paper we evaluate a structured inspection method that uses eye tracking information as a feedback mechanism to the image inspector. Our two-phase method starts with a free viewing phase during which gaze data is collected. During the next phase, we either segment the image, mask previously seen areas of the image, or combine the two techniques, and repeat the search. We compare the different methods
proposed for the second search phase by evaluating the inspection method using true positive and false negative rates, and subjective workload. Results show that gaze-blocked configurations reduced the subjective workload, and that gaze-blocking without segmentation showed the largest increase in true positive identifications and the largest decrease in false negative identifications of previously unseen objects.
Prats – Interpretation of Geometric Shapes: An Eye Movement Study
This paper describes a study that seeks to explore the correlation between eye movements and the interpretation of geometric shapes. This study is intended to inform the development of an eye tracking interface for computational tools to support and enhance the natural interaction required in creative design. A common criticism of computational design tools is that they do not enable manipulation of designed shapes according to all perceived features. Instead the manipulations afforded are limited by formal structures of shapes. This research examines the potential for eye movement data to be used to recognise and make available for manipulation the perceived features in shapes. The objective of this study was to analyse eye movement data with the intention of recognising moments in which an interpretation of shape is made. Results suggest that fixation duration and saccade amplitude prove to be consistent indicators of shape interpretation.
Porta – ceCursor: A Contextual Eye Cursor for General Pointing in Windows Envir...
Eye gaze interaction for disabled people is often dealt with by designing ad-hoc interfaces, in which the big size of their elements compensates for both the inaccuracy of eye trackers and the instability of the human eye. Unless solutions for reliable eye cursor control are employed, gaze pointing in ordinary graphical operating environments is a very difficult task. In this paper we present an eye-driven cursor for MS Windows which behaves differently according to the “context”. When the user’s gaze is perceived within the desktop or a folder, the cursor can be discretely shifted from one icon to another. Within an application window or where there are no icons, on the contrary, the cursor can be continuously and precisely moved. Shifts in the four directions (up, down, left, right) occur through dedicated buttons. To increase user awareness of the currently pointed spot on the screen while continuously moving the cursor, a replica of the spot is provided within the active direction button, resulting in improved pointing performance.
Pontillo – SemantiCode: Using Content Similarity and Database-Driven Matching t...
Laboratory eyetrackers, constrained to a fixed display and static (or accurately tracked) observer, facilitate automated analysis of fixation data. Development of wearable eyetrackers has extended environments and tasks that can be studied at the expense of automated analysis. Wearable eyetrackers provide 2D point-of-regard (POR) in scene-camera coordinates, but the researcher is typically interested in some high-level semantic property (e.g., object identity, region, or material) surrounding individual fixation points. The synthesis of POR into fixations and semantic information remains a labor-intensive manual task, limiting the application of wearable eyetracking.
We describe a system that segments POR videos into fixations and allows users to train a database-driven, object-recognition system. A correctly trained library results in a very accurate and semi-automated translation of raw POR data into a sequence of objects, regions or materials.
Park – Quantification of Aesthetic Viewing Using Eye Tracking Technology: The In...
The purpose of this study is to explore how the viewers’ previous training is related to their aesthetic viewing in various interactions with the form and the context, in relation to apparel design. Berlyne’s two types of exploratory behavior, diversive and specific, provided a theoretical framework to this study. Twenty female subjects (mean age=21, SD=1.089) participated. Twenty model images, posed by a male and a female model, were shown on an eye-tracker screen for 10 seconds each. The findings of this study verified Berlyne’s concepts of visual exploration. One of the different findings from Berlyne’s theory was that the untrained viewers’ visual attention tended to be more significantly focused on peripheral areas of visual interest, compared to the trained viewers, while there was no significant difference on the central, foremost areas of visual interest between the two groups. The overall aesthetic viewing patterns were also identified.
We will show in this paper that we can leverage useful information from other tasks to construct a low-dimensional feature space, leading to our online learning algorithm having smaller regret than when applied using individual samples. We will use the Perceptron algorithm [Cesa-Bianchi and Lugosi 2006] in our experiments.

1.2 Background II: Low-dimensional mappings

The Gram-Schmidt orthogonalisation procedure finds an orthonormal basis for a matrix and is commonly used in machine learning when an explicit feature representation of inputs is required from a nonlinear kernel [Shawe-Taylor and Cristianini 2004]. However, Gram-Schmidt does not take into account the residual loss between the low-dimensional mapping and the original feature space, and may not be able to find a rich enough feature space.

A related method, known as the Nyström method, has been proposed and has recently received considerable attention in the machine learning community [Smola and Schölkopf 2000; Drineas and Mahoney 2005; Kumar et al. 2009]. As shown by [Hussain and Shawe-Taylor 2009], the greedy algorithm of [Smola and Schölkopf 2000] finds a low-dimensional space that minimises the residual error between the low-dimensional feature space and the original feature space. This corresponds to finding the maximum variance in the data, and can hence be viewed as a sparse kernel principal components analysis (KPCA). Due to these advantages, we will use the Nyström method in our experiments.

We will assume throughout that our examples x have already been mapped into a higher-dimensional feature space F using φ : R^n → F. Therefore φ(x) = x, but we will make no explicit reference to φ(x). Hence, if our input matrix X contains features as column vectors, then we can construct a kernel matrix K = X'X ∈ R^{m×m} using inner products, where ' denotes the transpose. Furthermore, we define each kernel function as κ(x, x') = x'x.

Let K_i = [κ(x_1, x_i), ..., κ(x_m, x_i)]' denote the ith column vector of the kernel matrix. Let i = (1, ..., k), k ≤ m, be an index vector, let K_i denote the columns of K indexed by i, and let K_ii indicate the square matrix constructed using the indices i. The Nyström approximation K̃, which approximates the kernel matrix K, can be defined like so:

    K̃ = K_i K_ii^{-1} K_i'.

Furthermore, to project into the space defined by the index vector i we follow the regime outlined by [Diethe et al. 2009]: by taking the Cholesky decomposition of K_ii^{-1} to get an upper triangular matrix R = chol(K_ii^{-1}), where R'R = K_ii^{-1}, we can make a projection into this low-dimensional space. Therefore, given any input x_j we can create the corresponding new input z_j using the following projection:

    z_j = K_ji R'.    (2)

We have now described all of the components needed to describe our algorithm. However, it should be noted that the projection of Equation (2) is usually only applied to a single sample S, whereas in our work we have T different samples S_1, ..., S_T. We follow a methodology similar to that described by [Argyriou et al. 2008] (who use Gram-Schmidt), using the Nyström method instead.

2 Our method

Given T−1 input-output samples S_1, ..., S_{T−1} we have X_t = (x_{t1}, ..., x_{tm}) where t = 1, ..., T−1. We concatenate these feature vectors into a larger matrix X̂ = (X_1, ..., X_{T−1}) and compute the corresponding kernel matrix K̂ = X̂'X̂. This kernel matrix contains information from all T−1 samples, and so we apply the sparse kernel principal components analysis algorithm [Hussain and Shawe-Taylor 2009] to find the index set î (where |î| = k < m) needed to compute the projection of Equation (2). This equates to finding a space with maximum variance between all of the different user behaviours. Given a new user's eye movement feature vectors X_T, we can project them into this joint low-dimensional mapping and run any online learning algorithm in this space:

    Ẑ_T = X_T' X̂_î R̂',

where R̂ = chol(K̂_îî^{-1}). We choose an online learning algorithm due to the speed, simplicity and practicality of the approach. For instance, when presenting users with images in a Content-Based Image Retrieval (CBIR) system [Datta et al. 2008] we would like to retrieve relevant images as quickly as possible. Although we do not explicitly model a CBIR system, we maintain that a future development towards applying our techniques within CBIR systems could be useful in retrieving relevant images quickly. In the current paper we are more interested in the feasibility study of whether or not any important information can be gained from the eye movements of other users.

For practical purposes we will randomly choose a subset of inputs from each user in order to construct X̂ – in practice this should not affect the quality of the Nyström approximation [Smola and Schölkopf 2000]. However, for completeness we describe the algorithm without this randomness. The feature projection procedure is described below in Algorithm 1.

Algorithm 1: Nyström feature projection for multiple users
  Input: training inputs X_1, ..., X_{T−1}, new inputs X_T, and k ≤ m ∈ N
  Output: new features z_{T1}, ..., z_{Tm}
  1: Concatenate all feature vectors to get X̂ = (X_1, ..., X_{T−1})
  2: Create a kernel matrix K̂ = X̂'X̂
  3: Run the sparse KPCA algorithm of [Hussain and Shawe-Taylor 2009] to find the index set î such that k = |î|
  4: Compute R̂ = chol(K̂_îî^{-1})
  5: for j = 1, ..., m do
  6:   Construct new feature vectors using the inputs in X_T: z_{Tj} = x_{Tj}' X̂_î R̂'    (3)
  7: end for

Furthermore, based on these feature vectors we also train the Multi-Task Feature Learning algorithm of [Argyriou et al. 2008] on the T−1 samples by viewing each as a task. The algorithm produces a matrix D ∈ R^{δ×δ} where δ < m; after taking the Cholesky decomposition of D we have R_D = chol(D), and hence a low-dimensional feature mapping z̃_{Tj} = z_{Tj} R_D. Unlike our Algorithm 1, this low-dimensional mapping also utilises information about the outputs.
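Algorithm 1 is straightforward to prototype. The sketch below is an illustrative reconstruction in plain Python, not the authors' code: it uses the paper's linear kernel, selects the index set î by random subsampling (the practical shortcut noted above) rather than the sparse KPCA of [Hussain and Shawe-Taylor 2009], and realises the projection via forward-solves with a Cholesky factor of K̂_îî, which spans the same feature space as Equation (3) up to an orthogonal transform. All function and variable names are our own.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cholesky(A):
    """Lower-triangular L with L L' = A, for symmetric positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][p] * L[j][p] for p in range(j))
            L[i][j] = (A[i][i] - s) ** 0.5 if i == j else (A[i][j] - s) / L[j][j]
    return L

def forward_solve(L, b):
    """Solve L y = b by forward substitution."""
    y = [0.0] * len(b)
    for i in range(len(b)):
        y[i] = (b[i] - sum(L[i][p] * y[p] for p in range(i))) / L[i][i]
    return y

def nystroem_project(X_train, X_new, k, seed=0):
    """Project X_new into a k-dimensional Nystroem feature space built on X_train.

    The index set is chosen by random subsampling instead of sparse KPCA,
    and the kernel is the linear kernel kappa(x, x') = x . x'.
    """
    idx = random.Random(seed).sample(range(len(X_train)), k)
    basis = [X_train[i] for i in idx]
    K_ii = [[dot(u, v) for v in basis] for u in basis]
    L = cholesky(K_ii)
    # Solving L z = K_i(x) yields features whose inner products reproduce the
    # Nystroem approximate kernel K_i(x)' K_ii^{-1} K_i(x'), i.e. the same
    # feature space as z = K_i(x) R' up to an orthogonal transform.
    return [forward_solve(L, [dot(b, x) for b in basis]) for x in X_new]
```

When k equals the number of linearly independent training inputs the approximation becomes exact: projecting new inputs against the full basis yields features whose inner products recover the original linear kernel.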
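The online learner run in the projected space is the classical perceptron. As a minimal, self-contained sketch (our own simplified code, assuming the regret of Equation (1), which is not reproduced in this excerpt, is measured as cumulative prediction mistakes over the image stream):

```python
def perceptron_regret(stream):
    """Run the classical perceptron over (features, label) pairs, label in
    {-1, +1}, returning the final weights and a cumulative-mistake curve."""
    w, mistakes, curve = None, 0, []
    for x, y in stream:
        if w is None:
            w = [0.0] * len(x)
        y_hat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
        if y_hat != y:
            mistakes += 1
            w = [wi + y * xi for wi, xi in zip(w, x)]  # update only on mistakes
        curve.append(mistakes)  # "regret over time" as cumulative mistakes
    return w, curve
```

On a linearly separable stream the curve flattens once the weight vector separates the classes; the flatter the curve in a given feature space, the better that space, which is how Figure 3(a) compares the three feature sets.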
3 Experimental Setup

The database used for images was the PASCAL VOC 2007 challenge database [Everingham et al.], which contains over 9000 images, each of which contains at least 1 object from a possible list of 20 objects such as cars, trains, cows, people, etc. There were T = 23 users in the experiments, and participants were asked to view twenty pages, each of which contained fifteen images. Their eye movements were recorded by a Tobii X120 eye tracker [Tobii Studio Ltd] connected to a PC with a 19-inch monitor (resolution 1280x1024). The eye tracker has approximately 0.5 degrees of accuracy with a sample rate of 120 Hz and uses an infrared lens to detect pupil centres and corneal reflections. Figure 2 shows an example image presented to a user along with the eye movements (in red).

Figure 2: An example page shown to a participant. Eye movements of the participant are shown in red. The red circles mark fixations and small red dots correspond to raw measurements that belong to those fixations. The black dots mark raw measurements that were not included in any of the fixations.

Each participant was asked to carry out one search task out of a possible three, which included looking for a Bicycle (8 users), a Horse (7 users) or some form of Transport (8 users). Any images within a search page that a user thought belonged to their search, they marked as relevant.

The eye movement features extracted from the Tobii eye tracker can be found in Table 1 of [Pasupa et al. 2009] and were collected by [Saunders and Klami 2008]. Most of these are motivated by features considered in the literature for text retrieval studies – however, in addition, they also consider more image-based eye movement features. These are computed for each full image and are based on the eye trajectory and the locations of the images on the page. They cover the three main types of information typically considered in reading studies: fixations, regressions, and refixations. The number of features extracted was 33, computed from: (i) raw measurements from the eye tracker; (ii) fixations estimated from the raw measurements using the standard ClearView fixation filter provided with the Tobii eye tracking software, with settings of "radius 50 pixels, minimum duration 100 ms". In the next subsection, these 33 features correspond to the "Original features". Finally, in order to evaluate our methods with the perceptron we use the Regret (see Equation (1)), the Recall-Precision (RP) curve and the Receiver Operating Characteristic (ROC) curve.

3.1 Results

Given 23 users, we used the features of 22 users (i.e., leave-one-user-out) in order to compute our low-dimensional mapping using the Nyström method outlined in Algorithm 1 – referred to as the "Nyström features" (labelled "Nystroem" in the plots). We also made a mapping using the Multi-Task Feature Learning algorithm of [Argyriou et al. 2008], as described at the end of Section 2, which we refer to as the "Multi-Task features". The baseline used the Original features extracted using the Tobii eye tracker described above: the Original features did not involve any low-dimensional feature space and were simply used with the perceptron, without using information from other users' eye movements.

Figure 3 plots the regret, RP and ROC curves when training the perceptron algorithm with the Original features, the Nyström features and the Multi-Task features. We average each measure over the 23 leave-one-out users, and also over ten random splits of the k feature vectors needed to construct X̂. It is clear that there is a small difference between the Multi-Task features and the Original features, in favour of the Multi-Task features. However, the Nyström features yield much superior results with respect to both of these techniques. The regret of the perceptron using the Nyström features is shown in Figure 3(a) and is always smaller than with the Original and Multi-Task features (smaller regret is better). It is not surprising that we beat the Original features method, but our improvement over the Multi-Task features is more surprising, as the latter also utilises the relevance feedback in order to construct a suitable low-dimensional feature space, whereas the Nyström method only uses the eye movement features. Figure 3(b) plots the Recall-Precision curve and shows dramatic improvements over the Original and Multi-Task features. This plot suggests that using the Nyström features allows us to retrieve more relevant images at the start of a search, which would be important for a CBIR system (for instance). Finally, the ROC curves pictured in Figure 3(c) also suggest improved performance when using the Nyström method to construct low-dimensional features across users.

Algorithm            Average Regret   AAP      AUROC
Original features    0.2886           0.5687   0.6983
Nyström features     0.2433           0.6540   0.7662
Multi-Task features  0.2752           0.5802   0.7107

Table 1: Summary statistics for the curves in Figure 3. The bold font indicates statistical significance at the level p < 0.01 over the baseline.

In Table 1 we state summary statistics related to the curves plotted in Figure 3. As mentioned earlier, we randomly select k features from each data matrix X_t in order to construct X̂, and repeat this for ten random seeds – the results in Table 1 average over these ten splits. Therefore, for Figures 3(a), 3(b) and 3(c) we have the Average Regret, the Average Average Precision (AAP), and the Area Under the ROC curve (AUROC), respectively. Using the Nyström features is clearly better than using the Original and Multi-Task features, with a significance level of p < 0.01. Using the Multi-Task features is slightly better than using the Original features, but the results are only significant at the level p < 0.05. The significance of the direction of the differences between the two measures was tested using the sign test [Siegel and Castellan 1988].

Figure 3: Averaged over 23 users: results of the perceptron algorithm using Original, Nyström and Multi-Task features. (a) The Regret; (b) The Recall-Precision curve; (c) The ROC curve.

4 Conclusions

We took eye gaze data of several users viewing images for a search task, and constructed a novel method for producing a good low-dimensional space for eye movement features relevant to the search task. We employed the Nyström method and showed it to produce better results than the extracted eye movement features [Pasupa et al. 2009] and a state-of-the-art Multi-Task Feature Learning algorithm. The results of this paper indicate that if we would like to predict the relevance of images using the eye movements of a user, then it is possible to improve these results by learning relevant information from the eye movement behaviour of other users viewing similar (not necessarily the same) images and given similar (not necessarily the same) search tasks.

The first obvious application of our work is to CBIR systems. By recording the eye movements of users we could apply Algorithm 1 to project gaze features into a common feature space, and then apply a CBIR system in this space. Based on the results we reported within this paper, we would hope that relevant images are retrieved early on in the search. Another application to CBIR would be to use the method we presented as a form of verification of relevance. Some users may give incorrect relevance feedback due to tiredness, lack of interest, etc., but based on our method we could correct such actions as and when users make judgements on relevance. As the perceptron improves classification over time and we would expect users to make mistakes near the end of a search, such corrective action would be useful to CBIR systems requiring accurate relevance feedback mechanisms to improve search.

The machine learning view is that we are projecting the feature vectors into a common Nyström space (constructed from feature vectors of different users), and then running the perceptron with these features. Our experiments suggest better performance than simply using the perceptron together with the original features. Therefore, we would like to construct regret bounds for the perceptron when using this common low-dimensional space. Finally, for practical purposes we believe that randomly selecting columns from the kernel matrix to find î would generate a feature space [Kumar et al. 2009] with little deterioration in test accuracy, but leading to computational speed-ups.

Acknowledgements

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 216529, Personal Information Navigator Adapting Through Viewing, PinView.

References

ARGYRIOU, A., EVGENIOU, T., AND PONTIL, M. 2008. Convex multi-task feature learning. Machine Learning 73, 3, 243–272.

CESA-BIANCHI, N., AND LUGOSI, G. 2006. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.

DATTA, R., JOSHI, D., LI, J., AND WANG, J. Z. 2008. Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys 40, 5:1–5:60.

DIETHE, T., HUSSAIN, Z., HARDOON, D. R., AND SHAWE-TAYLOR, J. 2009. Matching pursuit kernel fisher discriminant analysis. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, AISTATS, vol. 5 of JMLR: W&CP 5.

DRINEAS, P., AND MAHONEY, M. W. 2005. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. Journal of Machine Learning Research 6, 2153–2175.

EVERINGHAM, M., VAN GOOL, L., WILLIAMS, C. K. I., WINN, J., AND ZISSERMAN, A. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.

HUSSAIN, Z., AND SHAWE-TAYLOR, J. 2009. Theory of matching pursuit. In Advances in Neural Information Processing Systems 21, D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, Eds., 721–728.

KUMAR, S., MOHRI, M., AND TALWALKAR, A. 2009. Sampling techniques for the Nyström method. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, AISTATS, vol. 5 of JMLR: W&CP 5.

PASUPA, K., SAUNDERS, C., SZEDMAK, S., KLAMI, A., KASKI, S., AND GUNN, S. 2009. Learning to rank images from eye movements. In HCI '09: Proceedings of the IEEE 12th International Conference on Computer Vision (ICCV'09) Workshops on Human-Computer Interaction, 2009–2016.

SAUNDERS, C., AND KLAMI, A. 2008. Database of eye-movement recordings. Tech. Rep. Project Deliverable Report D8.3, PinView FP7-216529. Available online at http://www.pinview.eu/deliverables.php.

SHAWE-TAYLOR, J., AND CRISTIANINI, N. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, U.K.

SIEGEL, S., AND CASTELLAN, N. J. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, Singapore.
SMOLA, A., AND SCHÖLKOPF, B. 2000. Sparse greedy matrix approximation for machine learning. In Proceedings of the 17th International Conference on Machine Learning, 911–918.

TOBII STUDIO LTD. Tobii Studio Help. http://studiohelp.tobii.com/StudioHelp 1.2/.

WILLIAMS, C. K. I., AND SEEGER, M. 2000. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 13, MIT Press, 682–688.