Jan Hendrik Hammer, Fraunhofer, Karlsruhe Institute of Technology, Germany: "Eyetracking, Gaze Analysis and Gaze-based Interaction". Paper presented at the 3rd workshop 'Evaluating Use and Impact' of the Scottish Network of Digital Cultural Heritage Resources Evaluation, Glasgow, 31 March 2016.
2. Outline
Applications for eye tracking
Mobile and stationary eye tracking
From eye tracking to gaze analysis
What can we get from eye tracking? An example.
Further options for eye tracking
3. Applications for Eye Tracking
Assistive technology as an interaction modality for people with disabilities
User experience and interaction (interface design)
Marketing and consumer research
Interaction with products
Perception of advertisement
Compare viewing behaviour of experts and novices
During sports
During the perception of artworks
...
4. Mobile and Stationary Devices
5. Stationary Devices
Small headbox
Only for gaze on displays (mostly only one)
Prices:
Device: 80 .. 40,000+ £
Software: free SDKs for own programming; analysis software: 800 .. 8,000+ £
Tobii Pro TX300 (www.tobiipro.com)
SMI RED250 (www.smivision.com)
Smart Eye Pro (http://smarteye.se)
SMI RED250Mobile (www.smivision.com)
Tobii Pro X3-120 (www.tobiipro.com)
Aurora (http://smarteye.se)
Gazepoint GP3 (www.gazept.com)
6. Mobile Devices
Binocular eye tracking
Below 100 g
Hardware and live viewing: 1,800-10,000 £
Gaze analysis: free SDKs or analysis software (~8,000 £)
Gaze registration incurs further costs:
Semi-automatic: manual gaze annotations
Fully automatic: pose estimation needed
Dikablis Professional (www.ergoneers.com)
Tobii Pro Glasses 2 (www.tobiipro.com)
SMI ETG 2 Professional (www.eyetracking-glasses.com)
Pupil Labs eye tracker (https://pupil-labs.com)
7. Estimation of 3D Eye Ball Position
Pose estimation of the scene camera:
Inside-out tracking: scene camera + marker tracking; markers distributed over the scene → distraction of visual attention
Outside-in tracking: external cameras + infrared light sources; infrared marker attached to the device; high accuracy; costs: 4,000 .. 150,000+ £
Eye tracker, e.g. from Pupil Labs (https://pupil-labs.com)
Marker attached in the scene
Infrared markers attached to, e.g., the SMI ETG 2 (http://www.eyetracking-glasses.com)
Camera tracking system, e.g. from ART (http://www.ar-tracking.com)
Dikablis Professional (www.ergoneers.com)
8. From Eye Tracking to Gaze Analysis
9. Gaze Analysis Overview
Eye tracking
• Line-of-sight reconstruction
• Gaze point computation
Gaze movement computation
• Fixations, saccades, smooth pursuits, ...
Gaze analysis
• Gaze metrics
• Areas of interest (AOIs)
• Visually most relevant objects
11. Eye Tracking Data for Line-of-Sight Reconstruction
Nodal point of the eye relative to the scene camera
Viewing direction
View cone
Guestrin, E. D., Eizenman, M. (2006). General theory of remote gaze estimation using the pupil center and corneal reflections. IEEE Transactions on Biomedical Engineering, 53, 1124-1133.
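To make the geometry concrete: the eye tracker delivers the nodal point and viewing direction in scene-camera coordinates, and the pose estimation from slide 7 places the scene camera in the world. A minimal sketch of how these combine (function and parameter names are illustrative, not from the talk):

```python
# Sketch only: combine the eye tracker's per-frame output (eye nodal point and
# viewing direction in scene-camera coordinates) with the scene-camera pose
# from inside-out or outside-in tracking to get a world-space line of sight.
import numpy as np

def gaze_ray_in_world(eye_pos_cam, gaze_dir_cam, R_world_cam, t_world_cam):
    """eye_pos_cam: (3,) nodal point of the eye in camera coordinates.
    gaze_dir_cam: (3,) viewing direction in camera coordinates.
    R_world_cam: (3, 3) camera-to-world rotation; t_world_cam: (3,) translation.
    Returns (origin, unit direction) of the line of sight in world coordinates."""
    origin = R_world_cam @ np.asarray(eye_pos_cam) + t_world_cam
    direction = R_world_cam @ np.asarray(gaze_dir_cam)
    return origin, direction / np.linalg.norm(direction)
```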
12. Estimation of 3D Eye Ball Position (repetition of slide 7)
13. 3D Model of the Environment
Manual creation
Very time consuming
Only static scenes
Valencian Kitchen, National Museum of Decorative Arts (Madrid, Ministry of Culture, Spain)
The Laboratory of Lavoisier, Musée des arts et métiers (Paris, France)
SmartControlRoom, Fraunhofer IOSB (Karlsruhe, Germany)
14. Line-of-Sight Reconstruction
3D gaze point computation = intersection of the line of sight with the 3D world
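A minimal sketch of this step, assuming the environment model is given as triangles; the talk does not specify the intersection routine, so the standard Möller-Trumbore test stands in here (a real system would use a spatial acceleration structure such as a BVH rather than a loop):

```python
# Sketch: intersect the world-space line of sight with the triangles of the
# 3D environment model and keep the nearest hit as the 3D gaze point.
import numpy as np

def intersect_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray-triangle test; returns distance t along the ray
    or None if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                      # ray parallel to the triangle
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if not 0.0 <= u <= 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None           # hit must lie in front of the eye

def gaze_point_3d(origin, direction, triangles):
    """triangles: iterable of (v0, v1, v2) vertex arrays."""
    hits = [t for tri in triangles
            if (t := intersect_triangle(origin, direction, *tri)) is not None]
    return origin + min(hits) * direction if hits else None
```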
15. Gaze Analysis Overview (repetition of slide 9)
16. 3D Scanpath Visualization
Gaze points (violet): 7221 gaze points in 4 min 48 s; 8820 gaze points in 5 min 52 s
17. 3D Scanpath Visualization
Fixations (≥100 ms, red): 7221 gaze points in 4 min 48 s; 8820 gaze points in 5 min 52 s
18. 3D Scanpath Visualization
Fixations and saccades:
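The slides do not say which event-detection algorithm produced these fixations and saccades; a common choice is a dispersion-threshold (I-DT) detector. A simplified sketch, with illustrative thresholds (dispersion in scene units for 3D gaze points, 100 ms minimum duration as on slide 17):

```python
# Simplified I-DT-style fixation detection: grow a window of gaze points while
# its spatial dispersion stays small; windows that last long enough become
# fixations, and the jumps in between are saccades.
import numpy as np

def idt_fixations(points, timestamps, max_dispersion=0.05, min_duration=0.100):
    """points: (N, 3) gaze points; timestamps: (N,) seconds, monotonically
    increasing. Returns a list of (centroid, start_s, end_s) per fixation."""
    points = np.asarray(points, dtype=float)
    fixations, i, n = [], 0, len(points)
    while i < n:
        j = i
        # extend the window while its per-axis range sum stays below threshold
        while j + 1 < n and np.ptp(points[i:j + 2], axis=0).sum() <= max_dispersion:
            j += 1
        if timestamps[j] - timestamps[i] >= min_duration:
            fixations.append((points[i:j + 1].mean(axis=0),
                              timestamps[i], timestamps[j]))
            i = j + 1   # continue after the detected fixation
        else:
            i += 1      # too short: part of a saccade, slide the window on
    return fixations
```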
19. Gaze Analysis Overview (repetition of slide 9)
20. Areas of Interest (AOIs)
Definition of AOIs; visualization of a hit AOI
21. Areas of Interest (AOIs) (repetition of slide 20)
22. AOI-based Metrics
Entry time + number of the 1st fixation
Duration of the 1st fixation
Number of fixations on the AOI
Cumulative fixation time
Dwell time = cumulative fixation time + cumulative duration of saccades on the AOI
Revisits = number of saccades entering the AOI minus 1
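These metrics follow directly from a time-ordered fixation sequence labelled with the AOI each fixation hits. A sketch under that assumption (data layout and field names are illustrative, not from the talk):

```python
# Sketch: compute the AOI metrics above from a fixation sequence where each
# entry is (aoi_label, start_s, end_s) and consecutive fixations are
# separated by saccades.
def aoi_metrics(fixations, aoi):
    idx = [i for i, (label, _, _) in enumerate(fixations) if label == aoi]
    if not idx:
        return None
    cft = sum(fixations[i][2] - fixations[i][1] for i in idx)
    # a saccade is "on the AOI" when it connects two consecutive on-AOI fixations
    intra_saccades = sum(fixations[i + 1][1] - fixations[i][2]
                         for i in idx
                         if i + 1 < len(fixations) and fixations[i + 1][0] == aoi)
    # each run of consecutive on-AOI fixations is one entry into the AOI
    entries = sum(1 for k, i in enumerate(idx) if k == 0 or idx[k - 1] != i - 1)
    first = idx[0]
    return {
        "entry_time_s": fixations[first][1],
        "first_fixation_number": first + 1,
        "first_fixation_duration_s": fixations[first][2] - fixations[first][1],
        "fixations_on_aoi": len(idx),
        "cumulative_fixation_time_s": cft,
        "dwell_time_s": cft + intra_saccades,
        "revisits": entries - 1,
    }
```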
23. What can we get from eye tracking? An example.
24. Two Types of Experiments
T1: Freely viewing artworks
How do subjects look at the scene?
What are the most attractive areas?
T2: Freely viewing artworks while listening to an audio guide
Do people follow the stories being told?
Experiments in the EU-Project ARtSENSE
25. Different Scanning Behavior
Normalized heat map (green to red): freely viewing (5 minutes) vs. listening to audio guide (6.5 minutes)
26. Different Scanning Behavior
Food and kitchen tools
Important persons and animals (part of audio guide stories)
27. Cumulative Fixation Time (CFT)
Freely viewing: food and kitchen tools CFT = 45 seconds (41 %); important persons and animals CFT = 64 seconds (59 %)
Listening to audio guide: food and kitchen tools CFT = 30 seconds (14 %); important persons and animals (part of the audio guide stories) CFT = 181 seconds (86 %)
28. Scanpath Comparison
Scanpath 1: Freely viewing
Scanpath 2: Listening to audio guide
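The slides do not state how the two scanpaths were compared; one classic approach encodes each scanpath as the sequence of AOIs hit by successive fixations and compares the strings by edit distance. A minimal sketch of that idea:

```python
# Sketch: scanpath comparison via Levenshtein distance over AOI sequences,
# e.g. "FFPPF..." for fixations on food (F) and persons (P). This stands in
# for whatever comparison the talk actually used.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def scanpath_similarity(a, b):
    """Similarity in [0, 1]; 1.0 means identical AOI sequences."""
    return 1.0 - edit_distance(a, b) / max(len(a), len(b), 1)
```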
29. Live Computation of Visually Most Relevant Objects in Small Time Windows
CFT per object in a 4 s sliding time window (values in ms):
Object: window [t - 4 s, t] / window [t - 3 s, t + 1 s]
tablet3: 0 / 766
tablet2: 625 / 469
servant2: 532 / 469
tablet1: 751 / 548
servant1: 328 / 250
Basis for recommendations in AR glasses
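A sketch of the underlying computation: clip every fixation to the sliding window, sum the overlap per object, and rank. The data layout is an assumption, not taken from the talk:

```python
# Sketch: cumulative fixation time per object within a sliding window
# [t - window, t]; recomputing once per second yields rankings like the
# two snapshots above.
def cft_in_window(fixations, t, window=4.0):
    """fixations: list of (object_name, start_s, end_s). Returns ms per object."""
    totals = {}
    for obj, start, end in fixations:
        overlap = min(end, t) - max(start, t - window)
        if overlap > 0:
            totals[obj] = totals.get(obj, 0.0) + overlap * 1000.0
    return totals

def most_relevant(fixations, t):
    """The object with the highest windowed CFT is the recommendation candidate."""
    totals = cft_in_window(fixations, t)
    return max(totals, key=totals.get) if totals else None
```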
30. Gaze-Based Interaction
31. Stationary Gaze Key Press
Video stream selection using gaze key press
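The exact trigger behind the "gaze key press" is not spelled out in the slides; a dwell-time trigger is one plausible realization: a key counts as pressed once gaze rests on it continuously for a threshold duration. A sketch under that assumption:

```python
# Sketch of a dwell-based "gaze key press": emit a press event when gaze has
# stayed on the same AOI (key) for dwell_threshold seconds; fire once per dwell.
def detect_gaze_press(samples, dwell_threshold=0.8):
    """samples: time-ordered (timestamp_s, aoi_or_None) gaze hits.
    Yields (aoi, timestamp_s) for each triggered press."""
    current, since = None, None
    for t, aoi in samples:
        if aoi != current:
            current, since = aoi, t          # gaze moved to a new key (or away)
        elif aoi is not None and t - since >= dwell_threshold:
            yield aoi, t
            since = float("inf")             # suppress refiring within this dwell
```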
32. Mobile Gaze Key Press – First Tests
Selection by pointing
33. Mobile Gaze Key Press – First Tests
Pointing: 26.5 s; mobile gaze key press: 16 s (≈ 3/5 of the time)
34. Further Interesting Stuff
Stationary eye tracking via webcam (www.eyezag.de)
No dedicated eye-tracking hardware needed
Subjects can participate from all over the world
35. Further Interesting Stuff
Eye tracking in virtual reality
You can walk around and the immersion is great!
Cheap pose estimation for 3D eye tracking
Modeling of the 3D world needed
Eye tracking hardware and analysis software not available for all VR devices – but probably will be in the future
HTC Vive (www.htcvive.com)
36. Summary
What do you want people to look at?
Images on a display → stationary eye tracking
Large volumes → mobile eye tracking
Workflow: eye tracking → gaze movement computation → gaze analysis
Gaze-based interaction (gaze key press)
Alternatives to usual eye tracking, and eye tracking in VR
37. Thank you for your attention!
Questions?
38. Contribution
First fully automated, real-time capable 3D gaze analysis for mobile applications
Publication: Jan Hendrik Hammer, Michael Maurus, Jürgen Beyerer. Real-time 3D gaze analysis in mobile applications. In: Proceedings of the 2013 Conference on Eye Tracking South Africa, ETSA '13, pp. 75-78, ACM, August 2013.
39. Realistic Heatmap Generation in 3D Environments
Visual acuity decreases with deviation from the visual axis
Projected Gaussian
Projected Gaussian + occlusion test
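A sketch of the projected-Gaussian weighting: each gaze sample contributes to a visible surface point with a weight that falls off with the angular deviation from the visual axis, mirroring the acuity drop-off. The σ value below is illustrative, not from the paper:

```python
# Sketch: angular Gaussian weight of one gaze sample for one surface point.
# Accumulating these weights over all samples and visible points yields a
# heatmap; an occlusion test must first reject points not visible from the eye.
import numpy as np

def gaussian_gaze_weight(eye_pos, gaze_dir, surface_point, sigma_deg=2.0):
    """eye_pos: (3,) eye position; gaze_dir: (3,) unit viewing direction;
    surface_point: (3,) point on the 3D model. Returns a weight in (0, 1]."""
    to_point = surface_point - eye_pos
    to_point = to_point / np.linalg.norm(to_point)
    cos_a = np.clip(gaze_dir @ to_point, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_a))       # deviation from visual axis
    return np.exp(-0.5 * (angle_deg / sigma_deg) ** 2)
```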
40. Contribution
First method for realistic heatmap visualization of gaze data in 3D environments
Publication: Michael Maurus, Jan Hendrik Hammer, Jürgen Beyerer. Realistic Heatmap Visualization for Interactive Analysis of 3D Gaze Data. In: Proceedings of the Symposium on Eye Tracking Research and Applications, pp. 295-298, ACM, March 2014.
44. Motivation: Adaptive AR Museum Guide
1. Freely viewing artwork
2. Detection of visually most relevant objects
3. Recommendation
4. Gesture interaction
Editor's Notes
Eyetrackers.net/en/buy
SMI:
iView X READ: 18,900 €
ETG 2: 9,900 €
RED m: 14,900 €
Experiment Suite 360: 4,900 €
Gazepoint (http://www.gazept.com/shop/):
GP3 Eye Tracker: $495
Gazepoint Analysis Professional Edition software: $995
Tobii:
Tobii Pro Analytics SDK: free, for all their screen-based eye trackers
Tobii Pro Studio: Basic $4,700, Professional $8,900, Enterprise $12,900
Tobii X60: $29,900
Tobii Pro X3-120: ca. 100 €
Smart Eye Pro:
Smart Eye Aurora:
Pupil: 2,300 €
Tobii Pro Glasses 2, live viewing: $15,000
Tobii Pro Glasses 2 incl. analysis software: $25,000
ETG Analysis Pro (60 Hz glasses, data recorder, analysis software): ~$30,000
Outside-in tracking: costs 5,000 .. 200,000+ $
Eye tracking answers the question of where your eyes are directed in a given scene. This is NOT the same as what you are looking at, because "looking" involves some kind of processing in our brains. Eye tracking is the processing of the raw data from the device used, to reconstruct the line of sight for gaze point computation.
What you are looking at is more a matter of the gaze movement computation. Our gaze path consists of different gaze movements. The two most important ones to remember are fixations and saccades. Fixations are periods during which our gaze remains still at some point and we process what we see; they range between 60 ms and somewhere below 2 seconds. Between fixations, our gaze jumps between the points we look at; these ballistic movements are called saccades.
Using these gaze paths we can then analyse the gaze using different gaze metrics and AOIs, so-called areas of interest, to e.g. infer the visually most relevant objects of a scene.
Geometrical model for computation of the viewing direction
Outside-in tracking: costs 5,000 .. 200,000+ €
Definition of AOIs in Valencian Kitchen
Hit AOIs are highlighted in real time
Metrics for attractiveness of an AOI
Duke of Wellington
audio guides – tablets -> further information
HMD -> figure out where people look
Scanpath -> tartlets
System determines tartlets as visually most relevant objects -> recommendation system
Question -> Explicit interaction by hand gestures
Today - implicit gaze interaction