BLUE EYES TECHNOLOGY
College: Vasireddy Venkatadri Institute Of Technology
Dept: Computer Science Engineering
S. Jaya Sindhura (12BQ1A0595)
M. Sujana (12BQ1A0566)

ABSTRACT
Is it possible to create a computer that can interact with us the way we interact with each other? Imagine that one fine morning you walk into your computer room and switch on your computer, and it tells you, "Hey friend, good morning. You seem to be in a bad mood today," then opens your mailbox, shows you some of your mail, and tries to cheer you up. It sounds like fiction, but it will be the life led with BLUE EYES in the very near future. The basic idea behind this technology is to give computers human perceptual abilities. We all have perceptual abilities: we can understand each other's feelings, for example by reading a person's emotional state from his facial expression. Adding these human perceptual abilities to computers would enable them to work together with human beings as intimate partners. The BLUE EYES technology aims at creating computational machines that have perceptual and sensory abilities like those of human beings. How can we make computers "see" and "feel"? BLUE EYES uses sensing technology to identify a user's actions and to extract key information. For example, a future BLUE EYES-enabled television could become active when the user makes eye contact, at which point the user could tell the television to "turn on". This paper covers the software, benefits, and interconnection of the various parts involved in the BLUE EYES technology.
INTRODUCTION
Imagine yourself in a world where humans interact naturally with computers. You are sitting in front of your personal computer, which can listen, talk, or even scream aloud. It can gather information about you and interact with you through techniques like facial recognition and speech recognition. It can even understand your emotions at the touch of the mouse. It verifies your identity, feels your presence, and starts interacting with you.

The BLUE EYES technology aims at creating computational machines that have perceptual and sensory abilities like those of human beings. It uses non-obtrusive sensing methods, employing modern video cameras and microphones, to identify the user's actions through these imparted sensory abilities. The machine can understand what a user wants, where he is looking, and even his physical or emotional state.

The primary objective of the research is to give a computer the human ability to assess a situation by using the senses of sight, hearing, and touch. The BLUE EYES project aims at creating computational devices with the sort of perceptual abilities that people take for granted. Thus, BLUE EYES is a technology for making computers sense and understand human behavior and feelings and react in appropriate ways.
AIMS:
1) To design smarter devices
2) To create devices with emotional intelligence
3) To create computational devices with perceptual abilities
TRACKS USED
Our emotional changes are mostly reflected in our heart pulse rate, breathing rate, facial expressions, eye movements, voice, and so on. Hence, these are the parameters on which the Blue Eyes technology is being developed. To make computers see and feel, Blue Eyes uses sensing technology to identify a user's actions and to extract key information. This information is then analyzed to determine the user's physical, emotional, or informational state, which in turn can be used to make the user more productive by performing expected actions or by providing expected information.
TECHNOLOGIES USED
The process of making emotional computers with sensing abilities is known as affective computing. The steps involved are:
1) Giving sensing abilities
2) Detecting human emotions
3) Responding properly

The first step is to give machines the equivalent of the eyes, ears, and other sensory organs that humans use to recognize and express emotion. Computer scientists are exploring a variety of mechanisms, including voice-recognition software that can discern not only what is being said but the tone in which it is said; cameras that can track subtle facial expressions, eye movements, and hand gestures; and biometric sensors that can measure body temperature, blood pressure, muscle tension, and other physiological signals associated with emotion.

In the second step, the computer has to detect even minor variations in our mood. For example, a person may hit the keyboard very fast whether in a happy mood or in an angry mood, so the same signal can correspond to different emotions.

In the third step, the computer has to react in accordance with the detected emotional state.
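As a concrete illustration of these three steps, the following minimal Python sketch senses a set of physiological readings, detects an emotional state by comparing them against per-emotion feature centroids, and responds accordingly. All readings, centroids, and responses are illustrative assumptions, not values from the BLUE EYES system.

```python
# Hypothetical sketch of the three affective-computing steps:
# (1) sense, (2) detect the emotional state, (3) respond.

# Step 1: sensed features (heart rate in bpm, skin temperature in deg C,
# galvanic skin response in microsiemens); assumed example readings.
reading = {"heart_rate": 96.0, "skin_temp": 33.1, "gsr": 7.8}

# Step 2: detect the emotion by distance to per-emotion feature centroids
# (stand-ins for a model trained on labelled physiological data).
CENTROIDS = {
    "calm":  {"heart_rate": 70.0, "skin_temp": 34.0, "gsr": 3.0},
    "happy": {"heart_rate": 85.0, "skin_temp": 34.5, "gsr": 5.0},
    "angry": {"heart_rate": 100.0, "skin_temp": 32.5, "gsr": 9.0},
}

def detect_emotion(sample):
    def dist(centroid):
        return sum((sample[k] - centroid[k]) ** 2 for k in sample)
    return min(CENTROIDS, key=lambda e: dist(CENTROIDS[e]))

# Step 3: respond appropriately to the detected state (assumed responses).
RESPONSES = {
    "calm":  "continue normally",
    "happy": "offer more content",
    "angry": "simplify the interface and slow down notifications",
}

emotion = detect_emotion(reading)
print(emotion, "->", RESPONSES[emotion])
```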
Various methods of accomplishing affective computing:
1) MAGIC POINTING
2) SUITOR
3) EMOTION MOUSE
4) ARTIFICIAL INTELLIGENCE AND SPEECH RECOGNITION
MAGIC POINTING:
MAGIC stands for Manual And Gaze Input Cascaded pointing. A computer with this technology can move the cursor by following the direction of the user's eyes. This enables the computer to transmit information related to the part of the screen the user is gazing at. It may also enable the computer to determine, from the user's expression, whether he or she understood the information on the screen before automatically deciding to proceed to the next program. The pointing is still done by hand, but the cursor always appears at the right position, as if by MAGIC. By combining manual input with eye tracking, we get MAGIC pointing. Gaze tracking has long been considered an alternative, or potentially superior, pointing method for computer input.

Two specific MAGIC pointing techniques, one conservative and one liberal, were designed, analyzed, and implemented with an eye tracker. With pure gaze pointing, one has to be conscious of where one looks and how long one looks at an object: if one does not look at a target continuously for a set threshold (e.g., 200 ms), the target will not be successfully selected.
Once the cursor position has been redefined, the user needs only to make a small movement to, and click on, the target with a regular manual input device. Two MAGIC pointing techniques were designed, one liberal and the other conservative in terms of target identification and cursor placement.

The liberal MAGIC pointing technique: the cursor is placed in the vicinity of whatever target the user fixates on. The user actuates the input device, observes the cursor position, and decides in which direction to steer the cursor. The cost of this method is the increased manual movement amplitude.
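The liberal technique can be summarized in a few lines of code. The sketch below is a hypothetical simulation, assuming the 200 ms dwell threshold mentioned above and an assumed fixation radius; it warps the cursor to the fixation point once gaze has dwelt there long enough, leaving fine positioning and clicking to the hand.

```python
# A minimal simulation of the liberal MAGIC pointing rule. The gaze samples,
# fixation radius, and timestamps are illustrative assumptions.

DWELL_MS = 200          # fixation threshold from the text
FIXATION_RADIUS = 30    # px; how tightly gaze must cluster to count as a fixation

def warp_cursor_if_fixated(gaze_samples):
    """gaze_samples: list of (t_ms, x, y). Returns a warp target or None."""
    if not gaze_samples:
        return None
    t_end, x_end, y_end = gaze_samples[-1]
    # Walk backwards in time while gaze stays inside the fixation radius.
    t_start = t_end
    for t, x, y in reversed(gaze_samples):
        if (x - x_end) ** 2 + (y - y_end) ** 2 > FIXATION_RADIUS ** 2:
            break
        t_start = t
    if t_end - t_start >= DWELL_MS:
        return (x_end, y_end)   # liberal: place cursor at/near the fixation
    return None

# Example: 240 ms of gaze clustered around (400, 300) -> cursor warps there.
samples = [(t, 400 + t % 5, 300 - t % 3) for t in range(0, 260, 20)]
print(warp_cursor_if_fixated(samples))
```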
The conservative MAGIC pointing technique uses an "intelligent offset". To initiate a pointing trial, two strategies are available to the user. One is to follow "virtual inertia": move from the cursor's current position towards the new target the user is looking at. This is likely the strategy the user will employ, given the way users interact with today's interfaces. The alternative strategy, which may be more advantageous but takes time to learn, is to ignore the previous cursor position and make whatever motion is most convenient and least effortful for the given input device.
The goal of the conservative MAGIC pointing method is the following: once the user looks at a target and moves the input device, the cursor appears "out of the blue" in motion towards the target, on the side of the target opposite to the initial actuation vector. In comparison to the liberal approach, this conservative approach has both pros and cons. While the cursor is never overactive and never jumps to a place the user does not intend to acquire, the technique may require more hand-eye coordination effort.
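A minimal sketch of this "intelligent offset" placement, assuming an illustrative offset distance: the cursor first appears on the side of the target opposite the initial actuation vector, so that the user's ongoing motion carries it across the target rather than away from it.

```python
# A hypothetical sketch of the conservative placement rule; OFFSET_PX is an
# assumed initial distance from the target centre, not a value from the paper.
import math

OFFSET_PX = 120

def conservative_cursor_start(target, actuation_vector):
    """target: (x, y) of the gaze fixation; actuation_vector: (dx, dy) of the
    first manual movement. Returns where the cursor should first appear."""
    tx, ty = target
    dx, dy = actuation_vector
    norm = math.hypot(dx, dy) or 1.0
    # Opposite the actuation direction: continuing the same hand motion
    # sweeps the cursor onto the target.
    return (tx - OFFSET_PX * dx / norm, ty - OFFSET_PX * dy / norm)

# Example: gaze on (500, 400); the user starts moving the mouse right and up.
print(conservative_cursor_start((500, 400), (1.0, -0.5)))
```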
MAGIC pointing techniques offer the following potential advantages:
1) Reduction of manual stress and fatigue, since cross-screen, long-distance cursor movement is eliminated from manual control.
2) Practical accuracy. In comparison to traditional pure gaze pointing, whose accuracy is fundamentally limited by the nature of eye movement, the MAGIC pointing techniques let the hand complete the pointing task, so they can be as accurate as any other manual input technique.
3) A more natural mental model for the user. The user does not have to be aware of the role of the eye gaze; to the user, pointing continues to be a manual task, with the cursor conveniently appearing where it needs to be.
4) Speed. Since the need for large-magnitude pointing operations is smaller than with pure manual cursor control, MAGIC pointing may be faster than pure manual pointing.
5) Improved subjective speed and ease of use. Since the manual pointing amplitude is smaller, the user may perceive the MAGIC pointing system to operate faster and more pleasantly than pure manual control, even if it operates at the same speed or more slowly.
ADVANTAGES of the liberal and conservative approaches:
1) Reduction of manual stress and fatigue
2) Practical accuracy level
3) A more natural mental model for the user
4) Faster than pure manual pointing
5) Improved subjective speed and ease of use
DISADVANTAGES of the liberal and conservative approaches:
1) The liberal approach is distracting when the user is trying to read.
2) The motor action computation cannot start until the cursor appears.
3) In the conservative approach, uncertainty about the exact cursor location
prolongs the target acquisition time.
EYE TRACKER
[Figure: The liberal MAGIC pointing technique: the cursor is placed in the
vicinity of the target the user fixates on.]
[Figure: The conservative MAGIC pointing technique with "intelligent offset".]
Figure 4.3. Bright (left) and dark (right) pupil images resulting from on-axis
and off-axis illumination. The glints, or corneal reflections, from the on- and
off-axis light sources can be easily identified as the bright points in the
iris.
Eye tracking data can be acquired
simultaneously with MRI scanning using a
system that illuminates the left eye of a subject
with an infrared (IR) source, acquires a video
image of that eye, locates the corneal reflection
(CR) of the IR source, and in real time
calculates/displays/records the gaze direction
and pupil diameter.
Once the pupil has been detected, the corneal reflection is determined from the
dark pupil image. The reflection is then used to estimate the user's point of
gaze in terms of the screen coordinates the user is looking at. An initial
calibration procedure, similar to that required by commercial eye trackers,
maps the pupil-CR measurements to screen coordinates.
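
To make the mapping step concrete, here is a minimal sketch, assuming the pupil
and corneal-reflection centres have already been extracted from the bright- and
dark-pupil images; the function names and the affine calibration model are
illustrative assumptions, not the Blue Eyes implementation.

import numpy as np

def fit_calibration(pupil_cr_vectors, screen_points):
    # Least-squares fit mapping pupil-CR difference vectors to screen
    # coordinates, using known calibration points the user fixated on.
    V = np.asarray(pupil_cr_vectors, dtype=float)   # shape (n, 2)
    X = np.hstack([V, np.ones((len(V), 1))])        # affine terms [vx, vy, 1]
    S = np.asarray(screen_points, dtype=float)      # shape (n, 2)
    coeffs, *_ = np.linalg.lstsq(X, S, rcond=None)
    return coeffs                                   # shape (3, 2)

def gaze_point(pupil_centre, cr_centre, coeffs):
    v = np.subtract(pupil_centre, cr_centre)        # pupil-CR vector
    return np.hstack([v, 1.0]) @ coeffs             # estimated screen (x, y)

# calibration: user fixates known screen points while vectors are recorded
coeffs = fit_calibration([(2, 1), (14, 1), (2, 9), (14, 9)],
                         [(0, 0), (1280, 0), (0, 800), (1280, 800)])
print(gaze_point((110, 95), (100, 90), coeffs))
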
2) SUITOR
SUITOR stands for Simple User Interest Tracker. It implements a method for
putting computational devices in touch with their users' changing moods. By
watching what Web page the user is currently browsing, SUITOR can find
additional information on that topic. The key is that the user simply interacts
with the computer as usual, and the computer infers user interest based on what
it sees the user do.
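
As a rough illustration of this inference, the sketch below assumes a
hypothetical stream of (timestamp, item) gaze samples over headlines in a
scrolling ticker; fetch_related and the dwell threshold are placeholders for
whatever back end and tuning a real system would use.

DWELL_THRESHOLD_S = 1.5  # illustrative dwell time taken to signal interest

def infer_interest(gaze_samples, fetch_related):
    current, dwell_start, fetched = None, 0.0, set()
    for t, item in gaze_samples:
        if item != current:
            current, dwell_start = item, t       # gaze moved to a new item
        elif (item is not None and item not in fetched
              and t - dwell_start >= DWELL_THRESHOLD_S):
            fetch_related(item)                  # sustained gaze: fetch more
            fetched.add(item)

samples = [(0.0, "markets"), (0.5, "markets"), (2.0, "markets"), (2.5, None)]
infer_interest(samples, lambda topic: print("fetching more on", topic))
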
3) EMOTION MOUSE
This is a mouse embedded with sensors that can sense physiological attributes
such as temperature, body pressure, pulse rate, and touching style. The
computer can determine the user's emotional state from a single touch. IBM is
still performing research on this mouse, which is expected to be available in
the market within the next two or three years. The expected accuracy is 75%.
One goal of human computer interaction (HCI)
is to make an adaptive, smart computer system.
In order to start creating smart computers, the
computer must start gaining information about
the user. One proposed method for gaining user
information through touch is via a computer
input device, the mouse. From the physiological
data obtained from the user, an emotional state
may be determined which would then be related
to the task the user is currently doing on the
computer. Over a period of time, a user model
will be built in order to gain a sense of the
user's personality.
By matching a person's emotional state with the context of the expressed
emotion over a period of time, the person's personality is exhibited.
Therefore, by giving the computer a longitudinal understanding of its user's
emotional state, the computer could adopt a working style that fits the user's
personality. The
result of this collaboration could increase
productivity for the user. One way of gaining
information from a user non-intrusively is by
video. Cameras have been used to detect
a person’s emotional state. We have explored
gaining information through touch. One obvious
place to put sensors is on the mouse.
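
As a sketch of what such touch sensing might yield, the fragment below computes
simple windowed features from hypothetical mouse-sensor samples; the sensor
tuple and the feature set are assumptions made for illustration, not IBM's
design.

import statistics

def window_features(samples):
    # Each sample: (skin temperature C, grip pressure, seconds between beats)
    temp, pressure, pulse_iv = zip(*samples)
    return {
        "mean_temp": statistics.fmean(temp),
        "mean_pressure": statistics.fmean(pressure),
        "heart_rate_bpm": 60.0 / statistics.fmean(pulse_iv),
        "pressure_var": statistics.pvariance(pressure),  # touching-style proxy
    }

# e.g. ten samples collected over one window of mouse use
samples = [(33.9 + i * 0.01, 0.4 + 0.02 * (i % 3), 0.8) for i in range(10)]
print(window_features(samples))
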
EXPERIMENT
Based on Paul Ekman's facial expression work, we see a correlation between a
person's emotional state and a person's physiological measurements. Selected
works from Ekman and others on measuring facial behaviors describe Ekman's
Facial Action Coding System (Ekman and Rosenberg, 1997).
One of his experiments involved participants
attached to devices to record certain
measurements including pulse, galvanic skin
response (GSR), temperature, somatic
movement and blood pressure. He then recorded
the measurements as the participants were
instructed to mimic facial expressions which
corresponded to the six basic emotions. He
defined the six basic emotions as anger, fear,
sadness, disgust, joy and surprise. From this
work, Dryer (1993) determined how
physiological measures could be used to
distinguish various emotional states. The
measures taken were GSR, heart rate, skin
temperature and general somatic activity (GSA).
These data were then subjected to two analyses. For the first analysis, a
multidimensional scaling (MDS) procedure was used to determine the
dimensionality of the data.
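
A minimal sketch of that first analysis step is shown below: MDS applied to
per-emotion physiological feature vectors (GSR, heart rate, skin temperature,
GSA). The numbers here are placeholders, not Dryer's measurements.

import numpy as np
from sklearn.manifold import MDS

emotions = ["anger", "fear", "sadness", "disgust", "joy", "surprise"]
# Rows: mean (GSR, heart rate, skin temp, GSA) per emotion (illustrative).
features = np.array([
    [0.82, 95.0, 34.1, 0.61],
    [0.88, 99.0, 33.2, 0.74],
    [0.55, 72.0, 33.8, 0.30],
    [0.60, 80.0, 33.5, 0.42],
    [0.48, 78.0, 34.6, 0.52],
    [0.70, 90.0, 34.0, 0.66],
])

# Standardise each measure, then embed in 2-D to inspect dimensionality.
z = (features - features.mean(axis=0)) / features.std(axis=0)
embedding = MDS(n_components=2, random_state=0).fit_transform(z)
for name, (x, y) in zip(emotions, embedding):
    print(f"{name:>8}: ({x:+.2f}, {y:+.2f})")
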
RESULT
The data for each subject consisted of scores for four physiological
assessments (GSA, GSR, pulse, and skin temperature) for each of the six
emotions (anger, disgust, fear, happiness, sadness, and surprise) across the
five-minute baseline and test sessions. GSA data were sampled 80 times per
second, GSR and temperature were reported approximately 3-4 times per second,
and pulse was recorded as each beat was detected, approximately once per
second.
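
Since the streams arrive at different rates, any joint analysis first needs
them on a common time base. A minimal sketch, assuming each stream is a list of
(time_s, value) pairs and that linear interpolation is acceptable:

import numpy as np

def resample(stream, grid):
    t, v = np.asarray(stream, dtype=float).T
    return np.interp(grid, t, v)    # linear interpolation onto the grid

grid = np.arange(0.0, 2.0, 1.0)     # a 1 Hz grid (sessions ran five minutes)
gsr = [(0.0, 0.52), (0.3, 0.53), (0.6, 0.55), (1.2, 0.58), (1.8, 0.57)]
print(resample(gsr, grid))          # GSR values aligned to the 1 s grid
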
4) ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) involves two basic ideas. First, it involves
studying the thought processes of human beings. Second, it deals with
representing those processes via machines (computers, robots, etc.). AI is the
behavior of a machine which, if performed by a human being, would be called
intelligent. It makes machines smarter and more useful, and is less expensive
than natural intelligence.
Natural language processing (NLP) refers to artificial intelligence methods of
communicating with a computer in a natural language like English. The main
objective of an NLP program is to understand input and initiate action. The
input words are scanned and matched against internally stored known words.
Identification of a key word causes some action to be taken. In this way, one
can communicate with the computer in one's own language; no special commands or
computer language are required, and there is no need to enter programs in a
special language for creating software.
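
The keyword-spotting scheme just described can be sketched in a few lines; the
vocabulary and the actions below are illustrative stand-ins, not a real command
set.

ACTIONS = {
    "open":  lambda arg: print(f"opening {arg}"),
    "close": lambda arg: print(f"closing {arg}"),
    "slow":  lambda arg: print("slowing down"),
}

def interpret(sentence):
    words = sentence.lower().split()
    for i, word in enumerate(words):
        if word in ACTIONS:                    # identification of a key word
            arg = words[i + 1] if i + 1 < len(words) else ""
            ACTIONS[word](arg)                 # causes some action to be taken
            return
    print("no known command word found")

interpret("please open mail")   # -> opening mail
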
SPEECH RECOGNITION:
The user speaks to the computer through a microphone, and the signal is passed
through a bank of filters; a simple system may contain a minimum of three
filters. The more filters used, the higher the probability of accurate
recognition. The filter output is then fed to an ADC to translate the analogue
signal into a digital word. The ADC samples the filter outputs many times a
second, and each sample represents a different amplitude of the signal. The
spoken words are processed by the filters and ADCs. The binary representation
of each of these words becomes a template, or standard, against which future
words are compared. These templates are stored in memory. Once the storing
process is completed, the system can go into its active mode and is capable of
identifying spoken words. As each word is spoken, it is converted into its
binary equivalent and stored in RAM. The computer then starts searching and
compares the binary input pattern with the templates. It is to be noted that
even if the same speaker speaks the same text, there are always slight
variations in the amplitude or loudness of the signal, pitch, frequency
difference, time gap, etc. For this reason, there is never a perfect match
between the template and the binary input word. The pattern matching process
therefore uses statistical techniques and is designed to look for the best fit.
The values of the binary input word are subtracted from the corresponding
values in the templates. If both values are the same, the difference is zero
and there is a perfect match. If not, the subtraction produces some difference,
or error. The smaller the error, the better the match. When the best match
occurs, the word is identified and displayed on the screen.
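
The best-fit search can be sketched directly from that description: each stored
template and each incoming word is treated as a vector of digitised filter-bank
samples, and the smallest accumulated difference wins. The templates and the
input here are illustrative numbers, not real speech data.

import numpy as np

templates = {
    "start": np.array([12, 40, 35, 18, 6]),
    "stop":  np.array([30, 22, 15, 44, 9]),
}

def recognise(word_samples):
    # Subtract the input from each template; the smaller the total error,
    # the better the match (a perfect match gives zero).
    errors = {w: np.abs(t - word_samples).sum() for w, t in templates.items()}
    return min(errors, key=errors.get)

print(recognise(np.array([11, 41, 33, 19, 7])))   # -> start
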
BLUE EYES HARDWARE
DATA ACQUISITION UNIT (DAU):
1) Its main task is to fetch the physiological data.
2) The sensors send data to the central system to be processed.
3) ID cards and PIN codes provide operator authorization.
JAZZ MULTISENSOR:
1) It supplies raw digital data regarding eye position and the level of blood
oxygenation.
2) Eye movement is measured using direct infrared transducers.
CENTRAL SYSTEM UNIT:
1) The box contains a Bluetooth module and a PCM codec for voice data
transmission.
2) The unit maintains the other side of the Bluetooth connection, buffers
incoming sensor data, and performs on-line data analysis.
CONNECTION MANAGER:
The Connection Manager handles:
1) Communication with the CSU hardware
2) Searching for new devices in the covered range
3) Establishing Bluetooth connections
4) Connection authentication
5) Incoming data buffering
6) Sending alerts
DATA ANALYSIS MODULE:
Performs the analysis of the raw sensor data in order to obtain information
about the operator's physiological condition.
DATA LOGGER MODULE:
1) The raw or processed physiological data, alerts, and the operator's voice
are stored.
2) A Voice Data Acquisition module delivers the voice data.
VISUALIZATION MODULE:
1) Enables supervisors to watch each working operator's physiological condition
along with a preview of the selected video source and the related sound stream.
2) The Visualization module can be set to an off-line mode, where all the data
is fetched from the database.
BLUETOOTH technology provides the means for creating a personal area network
linking the DAU and the central system unit.
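
The DAU-to-CSU data path can be illustrated with a small sketch: the
acquisition side pushes sensor frames into a buffer and the analysis side
consumes them, mirroring the "buffers incoming sensor data, performs on-line
data analysis" description. A queue stands in for the Bluetooth link, and the
frame contents and alert rule are invented for illustration.

import queue, threading, time, random

link = queue.Queue(maxsize=256)        # stand-in for the Bluetooth channel

def dau():
    for _ in range(5):                 # fetch physiological data (simulated)
        frame = {"eye_x": random.random(), "spo2": 96 + random.random()}
        link.put(frame)                # send to the central system
        time.sleep(0.05)
    link.put(None)                     # end-of-stream marker

def csu():
    while (frame := link.get()) is not None:
        if frame["spo2"] < 96.2:       # trivial on-line analysis: raise alert
            print("alert: low blood oxygenation", frame)

threading.Thread(target=dau).start()
csu()
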
BLUE EYES enabled devices:
POD:
The first Blue Eyes enabled mass-production device was POD, a car manufactured
by Toyota. It could keep the driver alert and active: it could tell the driver
to slow down if he was driving too fast, advise him to pull over when he felt
drowsy, and even play the driver some interesting music when he was getting
bored.
PONG:
IBM released a robot designed for demonstrating the new technology. The robot,
called PONG, is equipped with a computer capable of analyzing a person's
glances and other expressions of feeling before automatically determining the
next type of action. PONG is capable of perceiving the person standing in front
of it, smiles when the person calls its name, and expresses loneliness when it
loses sight of the person.
APPLICATIONS
GENERIC CONTROL ROOMS:
Power stations
Flight control centers
AUTOMOBILE INDUSTRY:
The user can concentrate on observation and
manual operations, and still control the
machinery by voice input commands.
FLIGHT CONTROL CENTERS:
With reliable speech recognition equipment, pilots can give commands and
information to the computers by simply speaking into their microphones; they
don't have to use their hands for this purpose.
AIRFORCE AND MILITARY:
To control weapons by voice commands
ADVANTAGES
1) Faster than pure manual pointing
2) Improved subjective speed and ease of use
3) Practical accuracy level
4) SUITOR:
Computers would have been much more powerful had they gained the perceptual and
sensory abilities of living beings. What needs to be developed is an intimate
relationship between the computer and the human, and the Simple User Interest
Tracker (SUITOR) is a revolutionary approach in this direction.
A car equipped with an affective computing
system could recognize when a driver is feeling
drowsy and advise her to pull over, or it might
sense when a stressed-out motorist is about to
explode and warn him to slow down and cool
off.
A computer endowed with emotional
intelligence, on the other hand, could recognize
when its operator is feeling angry or frustrated
and try to respond in an appropriate fashion.
Such a computer might slow down or replay a
tutorial program for a confused student, or
recognize when a designer is frustrated or vexed
and suggest he take a break.
DISADVANTAGES
1) The liberal approach is distracting when the user is trying to read.
2) The motor action computation cannot start until the cursor appears.
FUTURE ENHANCEMENTS
1) In the future, ordinary household devices, such as televisions and ovens,
may be able to do their jobs when we look at them and speak to them.
2) The future applications of Blue Eyes technology are limitless.
CONCLUSION
1) Blue Eyes provides more delicate and user-friendly facilities in computing
devices.
2) The gap between the electronic and physical worlds is reduced.
3) Computers can be run using implicit commands instead of explicit commands.