Modeling the Dynamics of Gaze-Contingent Social Behaviors in Human-Agent Interaction
1. Modeling the Dynamics of Gaze-Contingent Social Behaviors in Human-Agent Interaction
University of Augsburg, Germany
Human Centered Multimedia
Elisabeth André
2. My Background
Social Robotics and Virtual Agents
European and BMBF Projects on Affective Computing
4. Explicit versus Implicit Interaction with Eye Gaze
Explicit Interaction: Open interaction with a system in which humans intentionally input discrete commands to explicitly express their needs.
Implicit Interaction: Information that people convey indirectly in a conversation, but which may be derived from dialogue and context information.
Unconscious Interaction: Continuous (often nonverbal) behavior that people do not voluntarily control, but which may be (though is not necessarily expected to be) interpreted as the implicit expression of a particular need or intention.
http://www.vision-systems.com/
5. Eye Gaze to Initiate Contact with a Human User
Breaking the Ice in Human-Agent Communication: Eye-Gaze Based Initiation of Contact with an Embodied Conversational Agent. Tober et al., IVA 2009.
6. Five Phases of Flirting [Givens, 1978]
Attention Phase: Men and women arouse each other's attention; ambivalent non-verbal behavior.
Recognition Phase: One interactant recognizes the interest of the other. He or she may then signal readiness to continue the interaction, e.g., by a friendly smile.
Interaction Phase: After mutual interest has been established, the man or woman may initiate the interaction and engage in a conversation.
The Sexual-Arousal and Resolution Phases have little relevance to human-agent communication.
8. Interaction Modes
Interactive version.
Non-interactive version with ideal flirt behavior: The virtual agent behaves as in the interactive version, except that it does not respond to the user's eye gaze behavior; instead it assumes perfect eye gaze behavior from the user and thus follows a fixed sequence.
Non-interactive version with anti-flirt behavior: The duration of mutual gaze is increased from 3 s to 7 s; the facial expression remains neutral (which can be interpreted as a bored attitude towards the user); the virtual agent looks away upwards after gazing at the user instead of downwards.
9. Results
1. In the interactive and the ideal mode, the agent was able to show the users that Alfred had an interest in them, and the users also had the feeling that he was flirting with them.
2. We found that the effect was increased when moving from the ideal to the interactive mode.
3. The interactive version contributed to the users' enjoyment and increased their interest to continue the interaction or even to engage in a conversation with Alfred.
10. Conclusions
Alfred lacked attractiveness, but the gaze-enabled agent improved the flirting interaction.
Flirting tactics as implemented in this work benefit a much broader range of situations with agents than just dating, e.g., initiating human-agent interaction or regulating turn-taking in dialogues.
11. Setting
Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application. ICMI-MLMI 2010.
12. Gaze Model
Parameters set on the basis of data found in the literature:

                                Non-interactive     Interactive
Looks around                    4.0 s (2-6 s)       4.0 s (2-6 s)
Gazes at user (wait for gaze)   2.0 s (1-3 s)       2.0 s (1-3 s)
Mutual gaze                     n/a                 1.0 s (0.75-1.25 s)
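
Read as pseudocode, the interactive gaze model above amounts to a small timed state machine. The following Python sketch is our own illustration of that reading; the state names, the uniform sampling of durations, and the transition logic are assumptions, not taken from the slides.

import random

# Timing parameters from the gaze model table: state -> (min_s, max_s).
TIMINGS = {
    "look_around": (2.0, 6.0),     # mean 4.0 s
    "gaze_at_user": (1.0, 3.0),    # mean 2.0 s, waiting for the user's gaze
    "mutual_gaze": (0.75, 1.25),   # mean 1.0 s, interactive mode only
}

def sample_duration(state):
    # Draw a duration within the range reported for the state.
    lo, hi = TIMINGS[state]
    return random.uniform(lo, hi)

def next_state(state, user_gazes_at_agent, interactive=True):
    # Advance the agent's gaze state once the current duration elapses.
    if state == "look_around":
        return "gaze_at_user"
    if state == "gaze_at_user":
        # Mutual gaze is only entered in the interactive mode, and only
        # when the user actually looks back at the agent.
        if interactive and user_gazes_at_agent:
            return "mutual_gaze"
        return "look_around"
    return "look_around"  # after mutual gaze, the agent averts its gaze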
13. Evaluation
Compared the two gaze behavior models: non-interactive vs. interactive.
Study with 19 subjects.
How do people respond to different gaze models? Does the gaze model affect their sense of social presence?
Order of the two gaze models was randomized for each subject to avoid any bias due to ordering effects.
17. Results
In total, users looked at Emma much more than is typical of human-human interaction:

                                          Argyle & Cook   Kendon          Our Study
Looking at interlocutor                   58%             50% (28%-70%)   76% (46%-98%)
Looking at interlocutor while listening   75%             -               81%
Looking at interlocutor while speaking    41%             -               71%
18. Conclusions
The interactive gaze mode led to a better user experience compared to the non-interactive gaze mode.
Users adhere to patterns of gaze behavior for speaker and addressee that are also characteristic of dyadic human-human interactions.
They looked at the virtual interlocutor more often than is typical of human-human interactions.
20. Empathetic Artificial Listener
Attention: pay attention to the signals produced by a speaker.
Perception of signals.
Comprehension: understand the meaning attached to the signals.
Internal reaction: the comprehension of the meaning may create a cognitive and emotional reaction.
Decision: whether or not to communicate the internal reaction.
Generation: display behaviors.
21. Generation of Facial Expressions
FACS (Facial Action Coding System) can be used to generate and recognize facial expressions. Action Units are used to describe emotional expressions.
Seven Action Units were identified for the robotic face (out of 40 Action Units for the human face):
Upper face: inner brow raiser (AU 1), brow lowerer (AU 4), upper lid raiser (AU 5) and eye closure (AU 43).
Lower face: lip corner puller (AU 12), lip corner depressor (AU 15) and lip opening (AU 25).
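
For illustration, the seven available Action Units could be combined into prototypical expressions roughly as below. This mapping is a sketch based on common FACS-style combinations; the slides do not specify which combinations the robotic face actually uses.

# The seven Action Units available on the robotic face (from the slide).
AU = {
    1: "inner brow raiser",
    4: "brow lowerer",
    5: "upper lid raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
    25: "lip opening",
    43: "eye closure",
}

# Illustrative emotion-to-AU combinations, loosely following common
# FACS conventions; the actual mapping is not given on the slide.
EXPRESSIONS = {
    "joy": [12, 25],
    "sadness": [1, 4, 15],
    "surprise": [1, 5, 25],
}

def describe(emotion):
    # List the Action Units that realize the given expression.
    return ["AU%d: %s" % (u, AU[u]) for u in EXPRESSIONS[emotion]]

print(describe("sadness"))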
22. Social Signal Interpretation: SSI by Augsburg University
Multiple sensor input: ECG, skin conductance, blood glucose level, speech, acceleration, ...
Preprocessing and feature analysis: filtering, frequency analysis, ...
Pattern recognition, then fusion and final decision.
Output: physiological and affective state, context information.
SSI is freely available at http://www.openssi.net
Johannes Wagner, Florian Lingenfelser, Tobias Baur, Ionut Damian, Felix Kistler, Elisabeth André: The social signal interpretation (SSI) framework: multimodal signal processing and recognition in real-time. ACM Multimedia 2013: 831-834.
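
SSI itself is a C++ framework, so the sketch below is not its API; it only illustrates, in Python, the pipeline shape shown on the slide (per-channel preprocessing and feature analysis, per-channel recognition, then fusion into a final decision). All signal values and thresholds are fabricated for the example.

from statistics import mean

def extract_features(window):
    # Toy feature analysis: mean and range of one signal window.
    return {"mean": mean(window), "range": max(window) - min(window)}

def classify(features, threshold):
    # Toy per-channel recognizer: "aroused" if the signal varies strongly.
    return "aroused" if features["range"] > threshold else "calm"

def fuse(decisions):
    # Majority vote across channels as a stand-in for the fusion step.
    return max(set(decisions), key=decisions.count)

channels = {
    "ecg": [72, 75, 88, 91],                  # heart-rate samples
    "skin_conductance": [2.1, 2.3, 3.9, 4.2],
    "speech_energy": [0.2, 0.6, 0.7, 0.9],
}
thresholds = {"ecg": 10, "skin_conductance": 1.0, "speech_energy": 0.4}

decisions = [classify(extract_features(w), thresholds[name])
             for name, w in channels.items()]
print(fuse(decisions))  # fused affective-state estimate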
29. Multimodal Dialogue with a Robot
G. Mehlmann, M. Häring, K. Janowski, T. Baur, P. Gebhard, E. André: Exploring a Model of Gaze for Grounding in Multimodal HRI. ICMI 2014: 247-254.
30. Research Strategy
A cycle: statistics over a corpus of human social behaviors are used to build a model of human social behaviors; the model drives a multimodal behavior simulation, which is simulated, evaluated, and refined.
33. Gaze Recognition
The glasses provide the video image and the gaze coordinates, e.g. a stream of (x, y) samples such as (-, -), (-, -), ..., (156, 543), (189, 527), (145, 567), (211, 542), where (-, -) marks frames without a valid gaze sample.
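
A minimal sketch of consuming such a stream, under the assumption that (-, -) marks frames with no detected gaze; the smoothing step is a generic moving average, not necessarily what the system on the slide does.

raw = ["-,-", "-,-", "156,543", "189,527", "145,567", "211,542"]

def parse_gaze(samples):
    # Convert raw "x,y" strings to points, skipping undetected frames.
    points = []
    for s in samples:
        if "-" in s:
            continue  # no gaze detected in this frame
        x, y = (int(v) for v in s.split(","))
        points.append((x, y))
    return points

def smooth(points, k=3):
    # Moving average over the last k samples to damp tracker jitter.
    out = []
    for i in range(len(points)):
        window = points[max(0, i - k + 1):i + 1]
        out.append((sum(p[0] for p in window) / len(window),
                    sum(p[1] for p in window) / len(window)))
    return out

print(smooth(parse_gaze(raw)))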
34. Gaze-Based Disambiguation
The scene contains several objects, labeled 1-4. While the user utters "Do you mean this red object there?", the eye tracker delivers a stream of fixated object IDs:
Speech: "Do you mean this red object there?"
Gaze: 1 1 1 2 2 3 3 3 2 1 1 1 2 3
Aligning the gaze stream with the speech, this information can be used for disambiguation of the referring expression.
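
One simple resolution strategy, sketched below under our own assumptions (each gaze sample tagged with the fixated object's ID, and a known start/end index for the deictic phrase), is to pick the object fixated most often while the phrase was uttered; the slides do not spell out the actual algorithm.

from collections import Counter

# Gaze stream from the slide: one fixated object ID per sample.
gaze_stream = [1, 1, 1, 2, 2, 3, 3, 3, 2, 1, 1, 1, 2, 3]

def resolve_reference(gaze, start, end):
    # Return the object fixated most often while the deictic phrase
    # ("this red object there") was being uttered.
    return Counter(gaze[start:end]).most_common(1)[0][0]

# Hypothetical alignment: samples 4-8 overlap the phrase.
print(resolve_reference(gaze_stream, 4, 9))  # -> 3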
36. Robot Behavior
The robot's behavior depends on its role.
In the speaker role, the robot awaits the dialog manager's decision to play a behavior.
In the addressee role, the robot shows some idle gaze behavior, occasionally reacting to the user's gaze movements, emotional expressions and other cues.
37. Gaze-based Interaction
Object grounding: The robot follows the user's hand movements. The robot follows the user's gaze.
Social grounding: The robot seeks and recognizes mutual gaze.
Turn management: The robot recognizes when the user yields the turn.
39. Results of a Study
Object grounding was more effective than social grounding: people were able to interact more efficiently with object grounding.
Social grounding did not improve the perception of the interaction.
Assumption: People were concentrating on the task rather than on the social interaction with the robot.
42. Conclusions
Effect of gaze-aware agents: Gaze-aware agents have a positive effect on user perception. Gaze-aware agents improve grounding.
Side effects:
Midas Touch Problem: The agent should not respond to each detected gaze behavior.
Unnatural user behavior: Use of gaze as a pointing device.
Timing is the key.
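
A standard mitigation for the Midas Touch problem is a dwell-time threshold: a fixation only counts as intentional once it has lasted long enough. A minimal sketch, with an illustrative threshold value:

def dwell_filter(samples, dt=0.1, threshold=0.6):
    # Yield an object ID once it has been fixated continuously for
    # `threshold` seconds, so stray glances do not trigger actions.
    current, held, fired = None, 0.0, False
    for obj in samples:
        if obj == current:
            held += dt
        else:
            current, held, fired = obj, dt, False
        if held >= threshold and not fired:
            yield obj
            fired = True

# Sustained fixations on objects 1 and 3 fire once each; the brief
# glances at 2 and back at 1 do not.
print(list(dwell_filter([1, 1, 1, 1, 1, 1, 1, 2, 1, 3, 3, 3, 3, 3, 3, 3])))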