EyeGrip proposes a novel yet simple technique for analysing eye movements to automatically detect a user's objects of interest in a sequence of visual stimuli moving horizontally or vertically in front of the user's view. We assess the viability of this technique in a scenario where the user looks at a sequence of images moving horizontally on the display while the user's eye movements are tracked by an eye tracker. We conducted an experiment that shows the performance of the proposed approach. We also investigated the influence of the speed and the maximum number of visible images on the screen on the accuracy of EyeGrip. Based on the experimental results, we propose guidelines for designing EyeGrip-based interfaces. EyeGrip can be considered an implicit gaze interaction technique with potential use in a broad range of applications such as large screens, mobile devices and eyewear computers. In this paper, we demonstrate the rich capabilities of EyeGrip with two example applications: 1) a mind reading game, and 2) a picture selection system. Our study shows that by selecting an appropriate speed and maximum number of visible images on the screen, the proposed method can be used in a fast scrolling task where the system accurately (87%) detects the moving images that are visually appealing to the user, stops the scrolling, and brings the item(s) of interest back to the screen.
EyeGrip: Detecting Targets in a Series of Uni-directional Moving Objects Using Optokinetic Nystagmus Eye Movements
1. IT UNIVERSITY OF COPENHAGEN
EyeGrip: Detecting Targets in a Series of Uni-directional Moving Objects Using Optokinetic Nystagmus Eye Movements
Shahram Jalaliniya - Diako Mardanbegi
IT University of Copenhagen
Pervasive Interaction Technology Lab
2. MOTIVATION
• The information age overwhelms users with data
• Data is getting more visual (e.g. the Web, Facebook)
• Scrolling through visual data is becoming more popular
3. A scrolling task includes:
- scrolling
- stopping the page
- bringing the desired content back
(not always an easy task)
4. EYEGRIP
EyeGrip automatically detects the moving images that interest a user among other scrolling images by monitoring and analyzing the user's eye movements.
5. EYEGRIP WORKS BASED ON OKN EYE MOVEMENTS
• OKN is an eye movement that tends to track the motion of one element at a time in a set of unidirectional moving stimuli
• OKN (optokinetic nystagmus) is a combination of saccadic and smooth pursuit eye movements
6. HOW DOES EYEGRIP WORK?
When one of the images grabs our attention, we follow that image for a longer time, which creates a peak in the OKN signal.
[Figure: two plots of the original horizontal gaze data over time; the sawtooth OKN signal shows a pronounced peak where the user follows an image of interest.]
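The peak idea on this slide can be sketched in code. The following is an illustrative toy, not the authors' implementation (the `detect_okn_peaks` helper, sample values, and thresholds are all assumptions): during normal OKN, short pursuit sweeps alternate with corrective saccades; an image that grabs attention produces an unusually long sweep, which a simple run-length threshold can flag — the kind of threshold-based method the conclusions mention.

```python
# Sketch: detect an OKN "peak" (an unusually long smooth-pursuit sweep) in a
# horizontal pupil-position signal. Synthetic data; thresholds are illustrative.

def detect_okn_peaks(x, min_run=8):
    """Return start indices of pursuit runs longer than `min_run` samples.

    x: list of horizontal pupil coordinates, sampled at a fixed rate.
    A 'run' is a maximal stretch where x moves monotonically in the
    scrolling direction (here: decreasing, for left-moving stimuli).
    """
    peaks, run_start = [], 0
    for i in range(1, len(x) + 1):
        # a run ends when the eye saccades back (x jumps up) or the signal ends
        if i == len(x) or x[i] >= x[i - 1]:
            if i - run_start >= min_run:
                peaks.append(run_start)
            run_start = i
    return peaks

# Normal OKN: short 4-sample pursuits, each reset by a saccade; then one long
# 12-sample pursuit where an image grabbed the user's attention.
signal = []
for _ in range(3):
    signal += [10, 8, 6, 4]                 # short pursuit, saccade resets it
signal += [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, -1]  # prolonged pursuit (target)

print(detect_okn_peaks(signal))  # → [12]: the long run starts at frame 12
```

The same run-length idea generalizes to vertical scrolling by thresholding the vertical pupil coordinate instead.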
7. EXPERIMENT GOAL
Testing the feasibility of EyeGrip under different scrolling conditions:
- different speeds
- different maximum numbers of visible images on the screen (manipulated by changing the image width)
8. EXPERIMENTAL DESIGN (3 × 2)
• 20 participants
• 3 speeds: 26.5, 37.5, and 49 °/sec
• 2 image widths: 18° (W_image / W_display = 0.6) and 9° (W_image / W_display = 0.3)
9. APPARATUS
• Head-mounted eye tracker with the Haytham open source gaze tracker (20 Hz sampling rate)
• Laptop to display the scrolling images & collect eye data
[Figure: 34.5 cm × 19.5 cm display; (a) small-width conditions 1, 3, and 5; (b) big-width conditions 2, 4, and 6.]
10. EXPERIMENT TASK
• Visual search among faces: participants should press the space key as soon as they see Bill Clinton's picture among other faces
• Participants repeated the task for all 6 conditions
• 40 random images of famous people were displayed in each condition, 7 of which were Bill Clinton's photos
11. USER ERROR
The error rate: the total number of target images missed by participants, divided by the total number of targets
[Figure (b): user error rate (%) across the six conditions, 0–10%.]
Cond.        1      2      3      4      5      6
Speed        slow   slow   med    med    fast   fast
Image width  small  big    small  big    small  big
12. DATA ANALYSIS
• Cleaning data: removing 5 participants with less than 75% of data
• Normalization: finding the left & right eye reference coordinates by displaying 2 red circles at the beginning of each task; we used these coordinates to bring all the data into the same range (min-max normalization)
• Aggregation: we aggregated the data from all 15 participants
• Labeling data: we used the space key presses to label the collected data
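The normalization step could look like the following sketch. The `normalize` helper and the pixel values are hypothetical; in the actual setup the per-participant reference coordinates come from gazing at the two red calibration circles.

```python
# Sketch of the min-max normalization step (illustrative values): the two
# calibration circles give reference pupil x-coordinates x_left and x_right
# for each participant; all samples are mapped into [0, 1] relative to them.

def normalize(samples, x_left, x_right):
    """Min-max normalize pupil x-coordinates using per-task reference points."""
    span = x_right - x_left
    return [(x - x_left) / span for x in samples]

# Participant A's pupil ranged over [120, 320] pixels; participant B's over
# [40, 140]. After normalization the two signals are directly comparable,
# which makes aggregating data across participants possible.
a = normalize([120, 220, 320], x_left=120, x_right=320)
b = normalize([40, 90, 140], x_left=40, x_right=140)
print(a)  # [0.0, 0.5, 1.0]
print(b)  # [0.0, 0.5, 1.0]
```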
13. EVENT DETECTION ALGORITHM
• We used the default settings of the multilayer perceptron algorithm in WEKA, with a single hidden layer
• The horizontal coordinate of the pupil center was the only feature
• The sliding window size was selected based on maximum performance:
- 30 frames for conditions 1, 2, and 4
- 20 frames for conditions 3 and 6
- 16 frames for condition 5
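The sliding-window feature extraction feeding the classifier can be sketched as follows. The `make_windows` helper and the dummy signal are illustrative assumptions, not the authors' code; each resulting window of horizontal pupil coordinates would be one training example for a classifier such as WEKA's multilayer perceptron.

```python
# Sketch: slice the gaze signal into overlapping sliding windows. The only
# feature is the horizontal pupil coordinate, so each example is simply the
# last `window` frames of that signal, labeled 1 if a target event
# (space-key press) fell inside the window.

def make_windows(x, events, window=30):
    """Build (features, labels) from signal x and labeled event frames.

    x:      horizontal pupil coordinates, one per frame
    events: set of frame indices where the user pressed the space key
    """
    feats, labels = [], []
    for end in range(window, len(x) + 1):
        feats.append(x[end - window:end])
        labels.append(1 if any(e in events for e in range(end - window, end)) else 0)
    return feats, labels

x = list(range(100))                  # dummy gaze signal, 100 frames
feats, labels = make_windows(x, events={50}, window=30)
print(len(feats), len(feats[0]))      # 71 windows of 30 frames each
print(sum(labels))                    # 30 windows overlap the event at frame 50
```

A longer window smooths over saccade noise but delays detection, which is consistent with the per-condition window sizes above: faster scrolling conditions used shorter windows.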
15. DESIGN GUIDELINES
• Images should move in one direction at a constant speed
• There should be a balance between the moving speed & the maximum number of visible images
• The visual search task should not be very complex; otherwise all images will draw equally high attention, which increases false positives
16. STUDY 1: A PICTURE SELECTION SYSTEM
• 8 participants
• Speed: 37 °/sec
• Image width: 18°
• Selecting Clinton pictures
[Figures: accuracy, precision, and recall of the selection system (0–100%); subjective ratings of mental demand, physical demand, temporal demand, performance, effort, and frustration on a 5-point Likert scale.]
17. STUDY 2: MIND READING GAME
• 10 participants
• Select 1 of 4 characters
• Participants were asked to count repetitions of the selected person
• Accuracy: 100%
18. OTHER SUGGESTED APPLICATIONS
• Interaction with scrolling menus (e.g. scrolling cards on Google Glass)
• EyeGrip for browsing a Facebook page
• Advertisement on public displays
• Text reading assistant for small displays (slowing down the text when a user has a problem with a word)
• Assistant for visual inspection in production lines (automatically detecting unqualified products)
19. RELATED WORK
• Pursuits [1] is a calibration-free technique to detect a limited number of moving objects on the screen using smooth pursuit eye movements.
• EyeGrip detects an unlimited number of unidirectional moving objects.
[1] Mélodie Vidal, Andreas Bulling, and Hans Gellersen. 2013. Pursuits: Spontaneous Interaction with Displays Based on Smooth Pursuit Eye Movement and Moving Targets. In Proceedings of UbiComp '13. ACM, 439–448.
20. CONCLUSIONS
• EyeGrip is a calibration-free and implicit eye interaction technique to select an object among other unidirectional moving objects (top-down attention)
• EyeGrip can be used in gaze-contingent user interfaces to detect what draws users' attention (bottom-up attention)
• Simpler algorithms (e.g. a threshold-based method) can also be applied to detect the event in EyeGrip