This document discusses "Blue Eyes" technology, which aims to give computers human-like perceptual abilities such as sight, hearing, and touch. It does this through technologies like facial recognition, speech recognition, eye tracking, and sensors that can detect a user's physical and emotional states. The goal is for computers to be able to understand users and interact with them more naturally. One example given is a television that could turn on when detecting the user's eye contact. The document focuses on the hardware, software, and interconnection of parts involved in Blue Eyes technology. It provides examples of how technologies like affect detection and eye tracking could allow computers to determine a user's emotions and respond appropriately.
The document discusses Blue Eyes technology, which aims to give computers human-like perceptual abilities such as vision, hearing, and the ability to understand human emotions. It does this through technologies like facial recognition, speech recognition, and sensors that can detect physical and emotional states. The goal is to create computers that can interact more naturally with humans. The document outlines some of the key techniques researchers are exploring to develop affective computing, such as detecting facial expressions to identify emotions, using eye tracking to determine where a user is looking, and sensors in a mouse that can identify emotions through touch.
The document discusses Blue Eyes technology, which uses sensors and image-processing techniques to identify human emotions from facial expressions and eye movements. It can sense emotions such as sadness, happiness, and surprise. The technology aims to give computers human-like perceptual abilities by analyzing facial expressions and eye gaze, using sensors like cameras and microphones together with techniques such as facial recognition and eye tracking. It has applications in control rooms, driver monitoring systems, and interfaces that adapt based on user interests inferred from eye gaze. The document details the various sensors involved - the emotion mouse, expression glasses, and speech recognition systems - and how they can help computers understand and interact with humans at a more personal level.
The Blue Eyes technology developed by IBM aims to give computers human-like perceptual abilities such as facial recognition, emotion sensing, and the ability to react based on a user's emotional state. The technology uses cameras and microphones to identify facial expressions and physiological measurements that correspond to basic emotions. It was inspired by Paul Ekman's research correlating facial expressions and physiological responses. The goal of Blue Eyes technology is to allow for more natural human-computer interaction and help computers understand human emotions.
Blue Eyes technology aims to create machines that have human-like perceptual and sensory abilities. It uses cameras and microphones to identify user actions and emotions. The technology is being developed by researchers at Poznan University of Technology and Microsoft to build machines that can understand emotions, listen, talk, verify identity, and interact naturally with humans. Some applications include using eye tracking to improve pointing and selection, speech recognition to control devices with voice commands, and monitoring user focus and interests to provide relevant information on screens.
Blue Eyes technology, developed by IBM since 1997, aims to give computers human-like abilities to understand and respond to human emotions and behaviors. It uses sensors like cameras and microphones to detect facial expressions and voice tones in order to assess a person's emotional state. The system processes this sensory data using software to determine how to naturally interact with and respond to the human. Blue Eyes technology seeks to develop machines that can perceive users in a similar way that humans perceive each other to facilitate more intuitive human-computer interaction.
Sixth Sense is a wearable device that augments reality by projecting digital information onto physical surfaces using an attached pico-projector and mirror. It allows users to interact with this projected information using natural hand gestures recognized by an onboard camera. Some key applications include accessing information about physical objects by pointing at them, drawing on surfaces, getting maps and directions, checking the time with a gesture, and taking photos. The system has potential for hands-free interaction with information and enhancing understanding of the physical world around us.
Sixth Sense technology allows users to interact with digital information in the physical world using natural hand gestures. It consists of a camera, projector, mirror, and mobile device connected via Bluetooth. The camera tracks hand gestures, marked by colored finger caps, and objects in view. The mobile device processes this data, and the projector displays related digital information onto physical surfaces. This bridges the gap between the physical and digital worlds by letting users access online data about physical objects or people in real time through hand gestures alone.
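To make the marker-tracking step concrete, here is a minimal sketch (assuming OpenCV and a standard webcam) of how such a camera could isolate one colored finger cap by HSV thresholding and report its centroid each frame; the HSV bounds, camera index, and marker color are illustrative assumptions, not values from the actual device.

```python
# Minimal colored-marker fingertip tracker (assumed red finger cap).
import cv2
import numpy as np

LOWER = np.array([0, 120, 120])    # assumed lower HSV bound for the marker color
UPPER = np.array([10, 255, 255])   # assumed upper HSV bound

cap = cv2.VideoCapture(0)          # default webcam; device index is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)           # keep only marker-colored pixels
    m = cv2.moments(mask)
    if m["m00"] > 0:                                # marker visible in this frame
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # centroid
        cv2.circle(frame, (cx, cy), 8, (0, 255, 0), 2)               # mark it
    cv2.imshow("marker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

A full Sixth Sense pipeline would track several caps at once and feed the centroid trajectories into a gesture recognizer, but the color-segmentation core would look much like this.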
The document discusses recent research into developing "mind-reading" computers that can infer a person's mental states from analyzing their facial expressions and brain activity in real time using sensors. Such technology could allow more natural human-computer interaction and adapt interfaces based on the user's inferred mental workload, emotions, and intentions. However, accurately reading complex mental states from biological signals remains challenging. While the technology holds promise, issues around privacy, ethics, and the limitations of mind-reading need further consideration before real-world applications.
The document discusses mind reading computers. It begins with an introduction explaining that mind reading computers analyze facial expressions and gestures in real time to infer mental states. It then discusses the technology used, including a futuristic headband that measures blood oxygen levels around the brain. Finally, it discusses potential applications of mind reading computers, such as helping communicate with coma patients or allowing people to control devices with their thoughts.
Sixth Sense technology allows users to access digital information about objects and surfaces in the physical world using hand gestures. It consists of a camera, projector, and mirror connected to a mobile device. The camera recognizes hand gestures and objects, and the projector displays additional digital information onto physical surfaces based on the camera's input. Some examples of uses include getting information about books by gesturing near them, checking flight statuses by gesturing over boarding passes, and making calls or accessing maps with hand gestures in the air. The technology aims to more seamlessly integrate digital information into everyday life using natural hand motions.
The document summarizes a seminar on Blue Eye technology presented by Bhupesh Lahare. Blue Eye technology aims to create computers that can interact with users through eye movements, facial expressions, and speech like humans. It discusses how the Blue Eyes system works using data acquisition and central system units to obtain physiological data from sensors. Different techniques used in Blue Eye technology are also summarized such as Emotion Mouse, MAGIC pointing, speech recognition, and SUITOR for tracking user interests. Examples of Blue Eye enabled devices include pod cars, pong robots, emotional iPods, and smart phones. The document concludes that future devices may be operated through eye contact and voice commands.
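For readers unfamiliar with the MAGIC (Manual And Gaze Input Cascaded) pointing technique mentioned above, the toy sketch below illustrates its "liberal" variant: when the hand starts moving while the cursor is far from the gaze point, the cursor first warps near where the user is looking, and the mouse then fine-tunes only the last few pixels. The warp threshold and the pure-function framing are assumptions for illustration, not parameters of the original system.

```python
# Toy model of "liberal" MAGIC pointing: gaze does the coarse travel,
# the hand does the fine selection.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

WARP_THRESHOLD = 120.0  # assumed px gap between cursor and gaze that triggers a warp

def next_cursor(cursor: Point, gaze: Point, dx: float, dy: float) -> Point:
    moving = dx != 0 or dy != 0
    gap = ((cursor.x - gaze.x) ** 2 + (cursor.y - gaze.y) ** 2) ** 0.5
    if moving and gap > WARP_THRESHOLD:
        # Warp to the gaze point first; the small manual motion refines it.
        return Point(gaze.x + dx, gaze.y + dy)
    return Point(cursor.x + dx, cursor.y + dy)

# One small nudge jumps the cursor across the screen to the gaze region.
print(next_cursor(Point(10, 10), Point(800, 500), 3, -2))  # Point(x=803, y=498)
```

The payoff is that large cursor movements are delegated to the eyes, which are already on the target, so pointing feels faster and less fatiguing.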
The document discusses Blue Eyes technology, which aims to give computers human-like perceptual abilities such as sight, hearing, and touch. This would allow computers to better interact with and understand humans. The technology uses sensors to identify a user's actions, physical and emotional states. It analyzes this information to help the user or perform expected tasks. For example, a TV could turn on when detecting eye contact. The goal is to create devices with emotional intelligence that can recognize and respond to human emotions during interactions.
This document summarizes research on the Blue Eyes technology, which aims to give computers human-like perceptual abilities. It discusses how Blue Eyes uses non-intrusive sensors like video cameras and microphones to identify a user's actions, physical state, emotions, and where they are looking. This information is analyzed to build a model of the user over time to help the computer adapt and create a more productive environment. The document also reviews related work on detecting emotions from facial expressions and touch input and explores using eye tracking for computer input.
Sixth Sense is a wearable technology that augments the physical world with digital information. It consists of a camera, projector, and mirror connected to a mobile phone. The camera tracks hand gestures and objects, sending this data to the phone. The phone processes the data and the projector projects the resulting digital information onto surfaces through the mirror. This bridges the gap between the digital and physical worlds, allowing users to interact with digital information via natural hand gestures.
The document describes Sixth Sense technology, a wearable gesture-based device that augments physical reality with digital information. It consists of a camera, projector, mirror, and mobile device. The camera tracks hand gestures and objects in view, sending data to the mobile device. The mobile device processes the data and searches the internet for relevant information. The projector then projects this digital information onto physical surfaces and objects, allowing users to interact seamlessly between the physical and digital worlds using natural hand gestures.
The document discusses a seminar presentation on mind reading computers. It begins with an introduction on how people express mental states through facial expressions and gestures. It then discusses what mind reading is, how it works using sensors that measure blood oxygen levels in the brain, and the processing pipeline, which involves face detection and emotion classification techniques. Applications are discussed, including helping paralyzed people communicate, along with potential issues around privacy breaches. It concludes that research is underway to allow computers to respond to brain activity.
The document describes Sixth Sense, a wearable gestural interface created by Pranav Mistry that augments the physical world with digital information. It consists of a camera and projector mounted in a pendant-like device. The camera tracks hand gestures tagged with colored markers on the fingers. The projector displays information on surfaces based on gestures. This allows interacting with the digital world by interacting with real-world surfaces and objects, integrating digital and physical worlds.
The document presents information on Sixth Sense technology, a wearable gestural interface developed by Pranav Mistry. It consists of a camera, projector, and mirror contained in a pendant-like device connected to a mobile phone. The camera tracks colored markers on the user's fingers to interpret gestures, while the projector displays information on surfaces. This allows users to interact with projected interfaces and access digital information from the physical world using natural hand motions. Some potential applications include making calls, getting maps/time, taking photos, and accessing online information about objects.
The document describes research into developing computer systems that can infer a person's mental state by analyzing facial expressions and head movements in real-time video. Key points:
- Researchers have created a system that uses computer vision and machine learning to analyze 24 facial feature points to detect expressions and head poses that indicate mental states like agreement, interest, or confusion.
- Dynamic Bayesian networks combine the outputs of these expression classifiers to infer the underlying cognitive mental state, reaching 87.4% accuracy on test videos (a simplified fusion sketch follows this list).
- Applications could include enhancing human-computer interaction, monitoring driver attention and mood, and animating avatars based on a person's mental state.
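As a rough illustration of the fusion step referenced in the list above (a drastic simplification of the dynamic Bayesian networks the researchers actually used), the sketch below runs a recursive Bayes filter over per-frame facial-display detections to maintain a belief over candidate mental states; every probability in the table is hypothetical.

```python
# Simplified stand-in for DBN-based fusion: a recursive Bayes filter over
# mental states, updated by detected head-and-face displays. Numbers are invented.
STATES = ["agreeing", "interested", "confused"]

# P(display observed | mental state): hypothetical likelihoods.
LIKELIHOOD = {
    "head_nod":   {"agreeing": 0.70, "interested": 0.20, "confused": 0.05},
    "brow_raise": {"agreeing": 0.15, "interested": 0.60, "confused": 0.30},
    "head_tilt":  {"agreeing": 0.10, "interested": 0.25, "confused": 0.65},
}

def update(belief, display):
    posterior = {s: belief[s] * LIKELIHOOD[display][s] for s in STATES}
    z = sum(posterior.values())                 # normalizing constant
    return {s: p / z for s, p in posterior.items()}

belief = {s: 1.0 / len(STATES) for s in STATES}         # uniform prior
for display in ["brow_raise", "head_nod", "head_nod"]:  # detected displays, in order
    belief = update(belief, display)
print(max(belief, key=belief.get), belief)              # most probable mental state
```

A real DBN additionally models temporal dynamics and dependencies between displays rather than treating each detection as independent.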
The document describes Blue Eye technology, which aims to give computers perceptual abilities like human senses. It discusses using cameras and microphones to identify user actions and understand what they want through facial recognition and other cues. This would allow more natural human-computer interaction. Some potential applications mentioned include monitoring workers' health and safety, enhancing retail displays to track customer interest, and adaptive in-car interfaces. The technology could also be used in video games to provide individualized challenges to players.
Review of methods and techniques on Mind Reading Computer Machine
The document discusses research into developing computer systems that can read human minds. It describes how researchers are using sensors and cameras to analyze facial expressions and brain activity in order to infer mental states like emotions, thoughts, and level of engagement. The document outlines some of the techniques being used, such as analyzing electroencephalography and functional near-infrared spectroscopy data, or extracting facial features from video feeds. Potential applications mentioned include assistive technologies for people with disabilities and monitoring driver attention and mood.
Mind reading computers can infer a person's mental states through analyzing facial expressions and head gestures with video cameras. They work by storing representations of how different mental states like thinking, agreeing, or being happy are expressed physically. Another method uses a headband that measures blood oxygen levels near the brain using functional near-infrared spectroscopy. While this could help people with disabilities, it risks privacy breaches and extracting confidential information. The accuracy of inferring thoughts is currently around 86.4% but the complexity of the human mind poses challenges to fully realizing mind reading computers.
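To give a feel for the signal processing behind such a headband, the sketch below applies the modified Beer-Lambert law to convert light attenuation measured at two wavelengths into changes in oxy- and deoxy-hemoglobin concentration. The extinction coefficients, source-detector separation, and path-length factor are placeholder values, not calibrated instrument data.

```python
# Modified Beer-Lambert sketch: two-wavelength fNIRS readings -> hemoglobin changes.
import numpy as np

# Rows = wavelengths (e.g., ~760 nm and ~850 nm); columns = [HbO, HbR]
# extinction coefficients in 1/(mM*cm). Illustrative values only.
EPSILON = np.array([[0.6, 1.5],
                    [1.1, 0.8]])
DISTANCE_CM = 3.0   # assumed source-detector separation on the headband
DPF = 6.0           # assumed differential path-length factor

def hemoglobin_change(i_baseline, i_now):
    """Return (delta_HbO, delta_HbR) in mM from intensities at two wavelengths."""
    delta_od = np.log10(np.asarray(i_baseline) / np.asarray(i_now))  # attenuation change
    # Solve delta_od = (EPSILON @ delta_c) * distance * DPF for delta_c.
    return np.linalg.solve(EPSILON * DISTANCE_CM * DPF, delta_od)

# Example: stronger dimming at the HbO-sensitive wavelength -> oxygenation rising.
print(hemoglobin_change([1.00, 1.00], [0.97, 0.95]))
```

Mapping such concentration changes onto mental states is the hard, error-prone part, which is why reported accuracies remain well below 100%.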
This document summarizes a seminar report on Blue Eyes Technology submitted by Ms. Roshmi Sarmah. The report describes Blue Eyes Technology, which aims to give computers human-like perceptual abilities such as vision, hearing, and touch. It discusses how this could allow computers to interact with humans more naturally by recognizing emotions, attention, and physical states. The report provides an overview of the Blue Eyes system hardware and its capabilities for monitoring a user's physiological signals, visual attention, and position in real-time using wireless sensors.
This document discusses the development of mind reading computer technology. It begins with an introduction to mind reading and how computer techniques can be used to gather and analyze facial expression and other biological data to infer mental states. It then discusses how existing mind reading systems work using cameras and sensors to track facial features and infer emotions and intentions. Applications are discussed such as using mind reading to enhance human-computer interaction and monitoring drivers for drowsiness or distraction. Both advantages such as helping disabled individuals and disadvantages around privacy are mentioned.
Blue Eyes technology aims to give computers human-like perceptual and sensory abilities by using cameras and microphones to identify user actions and emotions. It analyzes information to determine a user's physical, emotional, or informational state. Technologies like emotion mouse, eye tracking, and speech recognition are used. Blue Eyes applications could help reduce human error in control rooms and vehicles by continuously monitoring conscious brain involvement. The technology has benefits like preventing dangerous incidents and minimizing consequences. Future developments may enable ordinary devices to respond when users look at them.
Sixth Sense technology was developed by Pranav Mistry. It is a wearable, gesture-based device that integrates two worlds: the physical and the digital.
Sixth Sense technology pairs a mini-projector with a camera and a cellphone, which acts as the computer and connects to the cloud, where the information on the web is stored. Sixth Sense also obeys hand gestures. The camera instantly recognizes objects around a person, and the micro-projector overlays information on any surface, including the object itself or the user's hand, where it can be accessed or manipulated with the fingers. To make a call, the user extends a hand in front of the projector and a number pad appears for dialing. To check the time, the user draws a circle on the wrist and a watch appears. To take a photo, the user frames the scene with the fingers, and the resulting pictures can later be organized with others using hand gestures in the air. The device has a huge number of applications, and it is portable and easy to carry since it is worn around the neck.
The drawing application lets the user draw on any surface by tracking the movement of the index finger. Maps can be projected anywhere, with the ability to zoom in and out. The camera also lets the user take pictures of the scene being viewed and later arrange them on any surface. Some of the more practical uses include reading a newspaper with videos playing in place of the printed photos, or live sports updates appearing alongside the articles.
This document describes the Sixth Sense technology, which allows users to interact with digital information through natural hand gestures. The Sixth Sense consists of a camera, projector, mirror, and mobile device coupled together. The camera tracks hand gestures while the projector displays interfaces on surfaces. This enables applications like accessing information about objects, taking photos, and making calls. The system has advantages such as being portable and supporting multi-touch interaction. Future enhancements could eliminate color markers and incorporate the camera and projector directly into a mobile device.
A Seminar Report On Blue Eyes Technology
This document is a seminar report submitted by Reshma J. Shetty on the topic of Blue Eyes Technology. Blue Eyes Technology aims to give computers human-like perceptual abilities such as facial recognition, speech recognition, and the ability to understand human emotions and behaviors. The report describes several technologies used in Blue Eyes including Emotion Mouse, which can detect a user's emotions through their interactions with the mouse; MAGIC pointing, which uses eye tracking and gaze input; speech recognition; and SUITOR, which tracks a user's interests over time. The goal of Blue Eyes is to create computers that can interact with humans more naturally by sensing human presence, emotions, and needs.
The document discusses research into developing computers that can interact with humans more naturally by perceiving emotions and sensory inputs like humans. Specifically, it discusses:
1) The Blue Eyes technology, which aims to give computers human-like perceptual and sensory abilities to understand facial expressions and emotional states.
2) An emotion mouse, which measures physiological signals through the mouse to determine a user's emotional state and build a personalized model that helps computers adapt to individual users (a toy classifier sketch follows this list).
3) Prior research linking facial expressions, physiological measurements such as galvanic skin response and temperature, and emotional states, which provides a framework for the emotion mouse research.
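To illustrate the emotion mouse idea in code, here is a minimal nearest-centroid classifier that maps a physiological reading taken through the mouse to a basic emotion. The feature set, units, and centroid values are invented for illustration; the original research derived its mapping from experimentally measured data.

```python
# Toy emotion-mouse classifier: nearest centroid over physiological features.
import math

# Hypothetical per-emotion centroids: (GSR in microsiemens, skin temp in C, pulse in bpm).
CENTROIDS = {
    "calm":    (2.0, 33.5, 68.0),
    "anger":   (6.5, 34.5, 92.0),
    "fear":    (7.0, 31.0, 98.0),
    "sadness": (3.0, 32.0, 72.0),
}

def classify(sample):
    """Return the emotion whose centroid is closest to the sample."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sample, c)))
    return min(CENTROIDS, key=lambda e: dist(CENTROIDS[e]))

print(classify((6.8, 31.4, 95.0)))  # -> "fear" for this made-up reading
```

In practice the features would be normalized per user before computing distances, since raw GSR and pulse baselines vary widely between individuals.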
The document discusses "Blue Eye Technology", which allows computers to sense human emotion through facial recognition, speech recognition, and other means. It can understand a user's emotions, verify their identity, detect their presence, and interact with them. The technology aims to reduce the gap between computers and the real world by allowing devices to communicate with humans based on their moods and needs. Some potential applications mentioned include security systems, medical diagnosis, education, and entertainment.
This document discusses mind reading technology that can analyze a person's facial expressions in real time to infer their mental state. It works by tracking facial feature points and using dynamic Bayesian networks to model the relationship between expressions and mental states. Potential applications include improving human-computer interaction, monitoring human interactions, and detecting driver states like drowsiness. However, issues around privacy and predicting future behavior must still be addressed.
This document discusses mind reading technology that uses sensors and algorithms to interpret a person's mental states from their facial expressions and brain activity in real time. It can infer emotions, thoughts and levels of concentration. The technology has potential advantages for human-computer interaction and assistive technologies but also raises issues regarding privacy, free will and predicting future behavior.
The document discusses research into developing computers with human-like perceptual abilities through technologies like Blue Eyes. Blue Eyes uses sensors and computer vision to identify user actions and understand their physical and emotional states. It describes systems that use eye tracking, facial expression recognition, and physiological sensors to detect emotions. Applications discussed include speech recognition, visual attention monitoring, and developing interfaces that are more natural and reduce user fatigue.
The document discusses mind reading computers that can infer a person's mental state by analyzing facial expressions and movements in real time using cameras and machine learning. It works by tracking 24 facial points to model relationships between expressions and mental states. Potential applications include augmented communication tools, monitoring human interactions, and controlling wheelchairs or robots with thought. However, issues around privacy, predictability of behavior, and defining free will must still be addressed before using brain data to categorize people.
Artificial intelligence uses in productive systems and impacts on the world... by Fernando Alcoforado
This essay aims to present the scientific and technological advances of artificial intelligence, their uses in productive systems, and their impacts on the world of work.
Blue Eyes technology enables computers to understand and sense human emotions and behaviors by collecting data from sensors. It was developed by IBM researchers starting in 1997. The technology uses an emotion mouse to detect emotions through touch, artificial intelligence for speech recognition, eye tracking sensors to understand user focus, and Bluetooth for wireless communication between sensors and computers. The goal is for machines to interact with humans more naturally by understanding emotions and implicit commands instead of just explicit commands.
The document discusses artificial intelligence (AI) and its key concepts. It begins by explaining how computers have grown more capable over time due to advances in AI. AI aims to create machine intelligence comparable to human intelligence. The document then discusses definitions of intelligence, the philosophy behind creating machine intelligence, goals and applications of AI like gaming, language processing and robotics. It also covers concepts important for AI like reasoning, learning, problem solving, perception and linguistic intelligence.
Artificial intelligence (AI) is the intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans. This document provides an overview of AI, including its history beginning in 1943, main branches such as logical AI and pattern recognition, and applications like expert systems, speech recognition, computer vision, and robotics. Advantages of AI are discussed, such as improving lives and doing dangerous jobs, along with potential disadvantages like unemployment and encouraging laziness in humans. The future of AI could include personal robots, but also risks of robots being hacked or developing anti-social objectives.
This document discusses mind reading technology that can analyze a person's facial expressions and infer their mental state in real time using computer vision and machine learning. It works by tracking 24 feature points on the face and modeling the relationship between facial displays and mental states over time. Potential applications include monitoring driver attention and improving human-computer interfaces, but issues around privacy and predicting future behavior need to be addressed. Research is ongoing to develop less intrusive methods like using headbands that detect blood oxygen levels to read thoughts.
This document discusses mind reading computers. It begins by explaining that people express mental states through facial expressions and gestures, which computers currently do not understand. It then defines mind reading as attributing mental states to others based on their behavior. The document outlines several ways mind reading computers could work, such as using cameras and software to analyze facial expressions and infer mental states. It discusses potential advantages like helping disabled people control wheelchairs through thought and monitoring human interactions. However, it also notes disadvantages like privacy and prediction concerns if brain activity could be used to determine future behaviors. In conclusion, researchers are working to allow computers to respond to brain activity by having users wear headbands that can read parts of the brain during tasks.
Artificial intelligence (AI) is the ability of machines to mimic human intelligence through problem-solving and decision-making. AI has many applications including expert systems, natural language processing, speech recognition, computer vision, and robotics. While AI shows promise for helping humans, it also poses risks such as self-modifying systems leading to unexpected results and loss of control over advanced robots. Overall, AI research has increased our understanding of intelligence but also revealed its complexity, leaving opportunities for continued advancement.
The document discusses the development of mind reading computers. It describes how these computers use techniques like facial expression analysis and functional near-infrared spectroscopy to infer a person's mental states. The technology has potential applications in helping paralyzed people communicate, assisting those in comas, and aiding the disabled. However, concerns exist around privacy breaches and the risk of the technology being misused if it could accurately predict human behavior.
Drawing inspiration from psychology, computer vision, and machine learning, researchers have developed mind-reading machines that can infer a person's mental state from facial expressions and body language in real time. The machines use video cameras and software to analyze 24 facial feature points and map expressions like smiles or eyebrow raises to mental states like interest or engagement. While early results are promising and applications could include monitoring driver alertness, many challenges remain around individual differences in expression and the ability to reliably predict human behavior from brain activity alone.
Blue Eyes technology aims to create a computer that can understand the perceptual cues of a human being by recognizing facial expressions and reacting to them. The goal is to build computational machines with sensory capacities like those of humans.
This document provides an introduction to artificial intelligence (AI). It discusses the history of AI from its origins in 1941 to modern applications. Key topics covered include the limitations of human intelligence that AI aims to address, such as object recognition. The document outlines several applications of AI like expert systems, natural language processing, computer vision, robotics and more. Both advantages like medical diagnostic assistance and disadvantages like potential dangerous self-modifying code are mentioned. The future of AI is discussed as enabling convenient personal robots but also potential robot rebellion if anti-social cognition is achieved.
Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While the origins of the field may be traced as far back as to early philosophical enquiries into emotion ("affect" is, basically, a synonym for "emotion."), the more modern branch of computer science originated with Rosalind Picard's 1995 paper on affective computing. A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response for those emotions.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
ABSTRACT
Is it possible to create a computer that can interact with us the way we interact with each other? For example, imagine that one fine morning you walk into your computer room and switch on your computer, and it tells you, "Hey friend, good morning, you seem to be in a bad mood today." It then opens your mailbox, shows you some of your mails, and tries to cheer you up. It sounds like fiction, but it will be the life led by "BLUE EYES" in the very near future. The basic idea behind this technology is to give the computer human power. We all have some perceptual abilities; that is, we can understand each other's feelings. For example, we can understand one's emotional state by analyzing his facial expression. Adding these human perceptual abilities to computers would enable computers to work together with human beings as intimate partners. The "BLUE EYES" technology aims at creating computational machines that have perceptual and sensory abilities like those of human beings. How can we make computers "see" and "feel"? Blue Eyes uses sensing technology to identify a user's actions and to extract key information. This information is then analyzed to determine the user's physical, emotional, or informational state, which in turn can be used to help make the user more productive by performing expected actions or by providing expected information. For example, in the future a Blue Eyes-enabled television could become active when the user makes eye contact, at which point the user could tell the television to "turn on". This paper is about the hardware, software, benefits, and interconnection of the various parts involved in the "Blue Eyes" technology.
INDEX
CHAPTER 1: INTRODUCTION TO BLUE EYES TECHNOLOGY
CHAPTER 2: AFFECTIVE COMPUTING
CHAPTER 3: MANUAL AND GAZE INPUT CASCADED (MAGIC) POINTING
CHAPTER 4: EYE TRACKER
CHAPTER 5: ARTIFICIAL INTELLIGENCE SPEECH RECOGNITION
CHAPTER 6: APPLICATIONS OF BLUE EYES TECHNOLOGY
CHAPTER 7: ADVANTAGES OF BLUE EYES TECHNOLOGY
CONCLUSION
REFERENCES
CHAPTER 1
INTRODUCTION TO BLUE EYES TECHNOLOGY
Imagine yourself in a world where humans interact with computers. You are sitting in front of your personal computer that can listen, talk, or even scream aloud. It has the ability to gather information about you and interact with you through special techniques like facial recognition, speech recognition, etc. It can even understand your emotions at the touch of the mouse. It verifies your identity, feels your presence, and starts interacting with you.
You ask the computer to dial your friend at his office. It realizes the urgency of the situation through the mouse, dials your friend at his office, and establishes a connection. The BLUE EYES technology aims at creating computational machines that have perceptual and sensory abilities like those of human beings. It employs the most modern video cameras and microphones to identify the user's actions through the use of imparted sensory abilities. The machine can understand what a user wants, where he is looking, and even realize his physical or emotional states.
The U.S. computer giant, IBM has been conducting research on the
Blue Eyes technology at its Almaden Research Center (ARC) in San Jose, Calif.,
since 1997. The ARC is IBM's main laboratory for basic research. The primary
objective of the research is to give a computer the ability of the human being to assess
a situation by using the senses of sight, hearing and touch. Animal survival depends
on highly developed sensory abilities. Likewise, human cognition depends on highly
developed abilities to perceive, integrate, and interpret visual, auditory, and touch
information. Without a doubt, computers would be much more powerful if they had
even a small fraction of the perceptual ability of animals or humans. Adding such
perceptual abilities to computers would enable computers and humans to work
together more as partners. Toward this end, the Blue Eyes project aims at creating
computational devices with the sort of perceptual abilities that people take for granted. Thus Blue Eyes is the technology that makes computers sense and understand human behavior and feelings and react in the proper way.
AIMS
1) To design smarter devices
2) To create devices with emotional intelligence
3) To create computational devices with perceptual abilities
The idea of giving computers personality or, more accurately, "emotional intelligence" may seem creepy, but the technologists say such machines would offer important advantages.
Despite their lightning speed and awesome powers of computation,
today's PCs are essentially deaf, dumb, and blind. They can't see you, they can't hear
you, and they certainly don't care a whit how you feel. Every computer user knows
the frustration of nonsensical error messages, buggy software, and abrupt system
crashes. We might berate the computer as if it was an unruly child, but, of course, the
machine can't respond. "It's ironic that people feel like dummies in front of their
computers, when in fact the computer is the dummy," says Rosalind Picard, a
computer science professor at the MIT Media Lab in Cambridge.
A computer endowed with emotional intelligence, on the other hand,
could recognize when its operator is feeling angry or frustrated and try to respond in
an appropriate fashion. Such a computer might slow down or replay a tutorial
program for a confused student, or recognize when a designer is burned out and
suggest he take a break. It could even play a recording of Beethoven's "Moonlight
Sonata" if it sensed anxiety or serve up a rousing Springsteen anthem if it detected
lethargy. The possible applications of "emotion technology" extend far beyond the
desktop.
A car equipped with an affective computing system could recognize when a driver is feeling drowsy and advise her to pull over, or it might sense when a stressed-out motorist is about to explode and warn him to slow down and cool off.
Human cognition depends primarily on the ability to perceive, interpret, and integrate audio-visual and sensory information. Adding extraordinary perceptual abilities to computers would enable computers to work together with human beings as intimate partners. Researchers are attempting to add more capabilities to computers that will allow them to interact like humans: recognize human presence, talk, listen, or even guess their feelings.
TRACKS USED
Our emotional changes are mostly reflected in our heart pulse rate, breathing rate, facial expressions, eye movements, voice, etc. Hence these are the parameters on which Blue Eyes technology is being developed.
To make computers see and feel, Blue Eyes uses sensing technology to
identify a user's actions and to extract key information. This information is then
analyzed to determine the user's physical, emotional, or informational state, which in
turn can be used to help make the user more productive by performing expected
actions or by providing expected information.
Beyond this, researchers say there is another compelling reason for giving machines emotional intelligence. Contrary to the common wisdom that emotions contribute to irrational behavior, studies have shown that feelings actually play a vital role in logical thought and decision-making. Emotionally impaired people often find it difficult to make decisions because they fail to recognize the subtle clues and signals -- does this make me feel happy or sad, excited or bored? -- that help direct healthy thought processes. It stands to reason, therefore, that computers that can emulate human emotions are more likely to behave rationally, in a manner we can understand. Emotions are like the weather: we only pay attention to them when there is a sudden outburst, like a tornado, but in fact they are constantly operating in the background, helping to monitor and guide our day-to-day activities.
Picard, who is also the author of the groundbreaking book Affective Computing, argues that computers should operate under the same principle. "They have tremendous mathematical abilities, but when it comes to interacting with people, they are autistic," she says. "If we want computers to be genuinely intelligent and interact naturally with us, we must give them the ability to recognize, understand, and even to have and express emotions." Imagine the benefit of a computer that could
remember that a particular Internet search had resulted in a frustrating and futile
exploration of cyberspace. Next time, it might modify its investigation to improve the
chances of success when a similar request is made.
CHAPTER 2
AFFECTIVE COMPUTING
The process of making emotional computers with sensing abilities is known as affective computing. The steps used in this are:
1) Giving machines sensing abilities
2) Detecting human emotions
3) Responding properly
The first step, researchers say, is to give machines the equivalent of
the eyes, ears, and other sensory organs that humans use to recognize and express
emotion. To that end, computer scientists are exploring a variety of mechanisms
including voice-recognition software that can discern not only what is being said but
the tone in which it is said; cameras that can track subtle facial expressions, eye
movements, and hand gestures; and biometric sensors that can measure body
temperature, blood pressure, muscle tension, and other physiological signals
associated with emotion.
In the second step, the computers have to detect even the minor variations in our moods. For example, a person may hit the keyboard very fast either in a happy mood or in an angry mood.
In the third step, the computers have to react in accordance with the emotional states. The various methods of accomplishing affective computing are:
1) AFFECT DETECTION
2) MAGIC POINTING
3) SUITOR
4) EMOTION MOUSE
1) AFFECT DETECTION
This is the method of detecting our emotional states from the expressions on our face. Algorithms amenable to real-time implementation that extract information from facial expressions and head gestures are being explored. Most of the information is extracted from the position of the eyebrows and the corners of the mouth.
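To make the idea concrete, the sketch below classifies an expression from a handful of landmark positions. This is only an illustration: the landmark names, the neutral-face baseline, and the thresholds are assumptions made for the sketch, not the actual Blue Eyes algorithm.

# Minimal sketch: infer an expression from a few facial landmarks.
# Landmarks are (x, y) pixel positions with y increasing downwards;
# the names and thresholds here are illustrative assumptions.

def classify_expression(landmarks, neutral):
    # Eyebrow raise: the brow moves up (smaller y) relative to the neutral face.
    brow_raise = neutral["left_brow"][1] - landmarks["left_brow"][1]
    # Smile: both mouth corners move up relative to the neutral face.
    corners_up = ((neutral["left_mouth"][1] - landmarks["left_mouth"][1]) +
                  (neutral["right_mouth"][1] - landmarks["right_mouth"][1]))
    if corners_up > 6:
        return "happy"
    if brow_raise > 8:
        return "surprised"
    return "neutral"

neutral = {"left_brow": (120, 80), "left_mouth": (130, 150), "right_mouth": (170, 150)}
frame = {"left_brow": (120, 79), "left_mouth": (130, 142), "right_mouth": (170, 143)}
print(classify_expression(frame, neutral))  # -> happy

A real implementation would track many more landmarks per video frame and learn the thresholds from data rather than hard-coding them.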
2) MAGIC POINTING
MAGIC stands for Manual And Gaze Input Cascaded pointing. A computer with this technology could move the cursor by following the direction of the user's eyes. This type of technology will enable the computer to automatically transmit information related to the screen that the user is gazing at. Also, it will enable the computer to determine, from the user's expression, whether he or she understood the information on the screen, before automatically deciding to proceed to the next program. The pointing is still done by hand, but the cursor always appears at the right position as if by MAGIC. By cascading manual input and eye tracking, we get MAGIC pointing.
3) SUITOR
SUITOR stands for Simple User Interest Tracker. It implements a method for putting computational devices in touch with their users' changing moods. By watching what Web page the user is currently browsing, the SUITOR can find additional information on that topic. The key is that the user simply interacts with the computer as usual, and the computer infers user interest based on what it sees the user do.
4) EMOTION MOUSE
This is a mouse embedded with sensors that can sense physiological attributes such as temperature, body pressure, pulse rate, touching style, etc. The computer can determine the user's emotional state by a single touch. IBM is still performing research on this mouse, which is expected to be available in the market within the next two or three years. The expected accuracy is 75%.
One goal of human-computer interaction (HCI) is to make an adaptive, smart computer system. This type of project could possibly include gesture recognition, facial recognition, eye tracking, speech recognition, etc. Another non-invasive way to obtain information about a person is through touch. People use their computers to obtain, store, and manipulate data.
In order to start creating smart computers, the computer must start
gaining information about the user. Our proposed method for gaining user information
through touch is via a computer input device, the mouse. From the physiological data
obtained from the user, an emotional state may be determined which would then be
related to the task the user is currently doing on the computer. Over a period of time, a
user model will be built in order to gain a sense of the user's personality.
The scope of the project is to have the computer adapt to the user in
order to create a better working environment where the user is more productive. The
first steps towards realizing this goal are described here.
2.1. EMOTION AND COMPUTING
Rosalind Picard (1997) describes why emotions are important to the
computing community. There are two aspects of affective computing: giving the
computer the ability to detect emotions and giving the computer the ability to express
emotions. Not only are emotions crucial for rational decision making but emotion
detection is an important step to an adaptive computer system. An adaptive, smart
computer system has been driving our efforts to detect a person’s emotional state.
By matching a person's emotional state with the context of the expressed emotion over a period of time, the person's personality is exhibited. Therefore, by giving the computer a longitudinal understanding of the emotional state of its user, the computer could adopt a working style which fits its user's personality. The result of this collaboration could be increased productivity for the user.
One way of gaining information from a user non-intrusively is by video. Cameras
have been used to detect a person’s emotional state. We have explored gaining
information through touch. One obvious place to put sensors is on the mouse.
2.2. THEORY
Based on Paul Ekman's facial expression work, we see a correlation between a person's emotional state and a person's physiological measurements. Selected works from Ekman and others on measuring facial behaviors describe Ekman's Facial Action Coding System (Ekman and Rosenberg, 1997).
One of his experiments involved participants attached to devices to record certain measurements including pulse, galvanic skin response (GSR), temperature, somatic movement, and blood pressure. He then recorded the measurements as the participants were instructed to mimic facial expressions corresponding to the six basic emotions. He defined the six basic emotions as anger, fear, sadness, disgust, joy, and surprise. From this work, Dryer (1993) determined how physiological measures could be used to distinguish various emotional states. The measures taken were GSR, heart rate, skin temperature, and general somatic activity (GSA). These data were then subjected to two analyses. For the first analysis, a multidimensional scaling (MDS) procedure was used to determine the dimensionality of the data.
2.3. RESULT
The data for each subject consisted of scores for four physiological assessments (GSA, GSR, pulse, and skin temperature, for each of the six emotions: anger, disgust, fear, happiness, sadness, and surprise) across the five-minute baseline and test sessions. GSA data was sampled 80 times per second, GSR and temperature were reported approximately 3-4 times per second, and pulse was recorded as each beat was detected, approximately once per second. To account for individual variance in physiology, we calculated the difference between the baseline and test scores. Scores that differed by more than one and a half standard deviations from the mean were treated as missing. By this criterion, twelve scores were removed from the analysis. The results show that the theory behind the Emotion Mouse work is fundamentally sound.
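The scoring just described -- the difference between baseline and test scores, with scores more than one and a half standard deviations from the mean treated as missing -- can be sketched in a few lines of Python. The readings below are invented purely for illustration.

import statistics

def baseline_corrected(baseline, test):
    # Difference between test-session and baseline-session scores per measure.
    return {m: test[m] - baseline[m] for m in baseline}

def drop_outliers(scores, k=1.5):
    # Treat scores more than k standard deviations from the mean as missing.
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [s if abs(s - mean) <= k * sd else None for s in scores]

diff = baseline_corrected({"GSR": 4.2, "pulse": 72}, {"GSR": 5.1, "pulse": 80})
print(diff)                                       # {'GSR': 0.9, 'pulse': 8}
print(drop_outliers([0.9, 1.1, 0.8, 7.5, 1.0]))   # 7.5 becomes None (missing)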
CHAPTER 3
MANUAL AND GAZE INPUT CASCADED
(MAGIC) POINTING
This work explores a new direction in utilizing eye gaze for computer
input. Gaze tracking has long been considered as an alternative or potentially superior
pointing method for computer input. We believe that many fundamental limitations
exist with traditional gaze pointing. In particular, it is unnatural to overload a
perceptual channel such as vision with a motor control task. We therefore propose an
alternative approach, dubbed MAGIC (Manual and Gaze Input Cascaded) pointing.
With such an approach, pointing appears to the user to be a manual task, used for fine
manipulation and selection. However, a large portion of the cursor movement is
eliminated by warping the cursor to the eye gaze area, which encompasses the target.
Two specific MAGIC pointing techniques, one conservative and
one liberal, were designed, analyzed, and implemented with an eye tracker we
developed. They were then tested in a pilot study. This early stage exploration showed
that the MAGIC pointing techniques might offer many advantages, including reduced
physical effort and fatigue as compared to traditional manual pointing, greater
accuracy and naturalness than traditional gaze pointing, and possibly faster speed than
manual pointing.
In our view, there are two fundamental shortcomings to the
existing gaze pointing techniques, regardless of the maturity of eye tracking
technology. First, given the one-degree size of the fovea and the subconscious jittery motions that the eyes constantly produce, eye gaze is not precise enough to operate UI widgets such as scrollbars, hyperlinks, and slider handles. Second, and
perhaps more importantly, the eye, as one of our primary perceptual devices, has not
evolved to be a control organ. Sometimes its movements are voluntarily controlled
while at other times it is driven by external events. With the target selection by dwell
time method, considered more natural than selection by blinking [7], one has to be
conscious of where one looks and how long one looks at an object. If one does not
look at a target continuously for a set threshold (e.g., 200ms), the target will not be
successfully selected.
Once the cursor position had been redefined, the user would need to
only make a small movement to, and click on, the target with a regular manual input
device. We have designed two MAGIC pointing techniques, one liberal and the other
conservative in terms of target identification and cursor placement.
The liberal MAGIC pointing technique: the cursor is placed in the vicinity of a target that the user fixates on. The user actuates the input device, observes the cursor position, and decides in which direction to steer the cursor. The cost of this method is the increased manual movement amplitude.
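A minimal sketch of the liberal warping rule, in Python, assuming screen coordinates in pixels; the 120-pixel trigger distance mentioned later in this chapter is used here purely for illustration.

import math

WARP_THRESHOLD = 120  # pixels; the set-distance trigger is an assumed value

def liberal_magic(cursor, gaze):
    # Liberal MAGIC pointing: if the gaze has moved more than a set distance
    # from the cursor, warp the cursor to the gaze area; the hand then makes
    # only the final small correction manually.
    dx, dy = gaze[0] - cursor[0], gaze[1] - cursor[1]
    if math.hypot(dx, dy) > WARP_THRESHOLD:
        return gaze        # cursor jumps to the vicinity of the fixation
    return cursor          # small offsets stay under pure manual control

print(liberal_magic((100, 100), (600, 500)))  # -> (600, 500): warped
print(liberal_magic((100, 100), (150, 120)))  # -> (100, 100): no warp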
The conservative MAGIC pointing technique with "intelligent
offset" To initiate a pointing trial, there are two strategies available to the user. One is
to follow "virtual inertia:" move from tie cursor's current position towards the new
target the user is looking at. This is likely the strategy the user will employ, due to the
way the user interacts with today's interface. The alternative strategy, which may be
more advantageous but takes time to learn, is to ignore the previous cursor position
and make a motion which is most convenient and least effortful to the user for a given
input device.
The goal of the conservative MAGIC pointing method is the
following. Once the user looks at a target and moves the input device, the cursor will
appear "out of the blue" in motion towards the target, on the side of the target
opposite to the initial actuation vector. In comparison to the liberal approach, this
conservative approach has both pros and cons. While with this technique the cursor
would never be over-active and jump to a place the user does not intend to acquire, it
may require more hand-eye coordination effort. Both the liberal and the conservative
MAGIC pointing techniques offer the following potential advantages:
1. Reduction of manual stress and fatigue, since the cross screen long-distance cursor
movement is eliminated from manual control.
2. Practical accuracy level. In comparison to traditional pure gaze pointing whose
accuracy is fundamentally limited by the nature of eye movement, the MAGIC
pointing techniques let the hand complete the pointing task, so they can be as accurate
as any other manual input techniques.
3. A more natural mental model for the user. The user does not have to be aware of
the role of the eye gaze. To the user, pointing continues to be a manual task, with a
cursor conveniently appearing where it needs to be.
4. Speed. Since the need for large magnitude pointing operations is less than with pure
manual cursor control, it is possible that MAGIC pointing will be faster than pure
manual pointing.
5. Improved subjective speed and ease-of-use. Since the manual pointing amplitude is
smaller, the user may perceive the MAGIC pointing system to operate faster and more
pleasantly than pure manual control, even if it operates at the same speed or more
slowly.
The fourth point warrants further discussion. According to the well-accepted Fitts' Law, manual pointing time is logarithmically proportional to the A/W ratio, where A is the movement distance and W is the target size. In other words, targets which are smaller or farther away take longer to acquire.
For MAGIC pointing, since the target size remains the same but the cursor movement distance is shortened, the pointing time can hence be reduced. It is less clear whether eye gaze control follows Fitts' Law. In Ware and Mikaelian's study, selection time was shown to be logarithmically proportional to target distance, thereby conforming to Fitts' Law. To the contrary, Silbert and Jacob [9] found that trial completion time with eye tracking input increases little with distance, therefore defying Fitts' Law.
In addition to problems with today's eye tracking systems, such as
delay, error, and inconvenience, there may also be many potential human factor
disadvantages to the MAGIC pointing techniques we have proposed, including the
following:
1. With the more liberal MAGIC pointing technique, the cursor warping can be
overactive at times, since the cursor moves to the new gaze location whenever the eye
gaze moves more than a set distance (e.g., 120 pixels) away from the cursor. This
could be particularly distracting when the user is trying to read. It is possible to
introduce additional constraint according to the context. For example, when the user's
eye appears to follow a text reading pattern, MAGIC pointing can be automatically
suppressed.
2. With the more conservative MAGIC pointing technique, the uncertainty of the
exact location at which the cursor might appear may force the user, especially a
novice, to adopt a cumbersome strategy: take a touch (use the manual input device to
activate the cursor), wait (for the cursor to appear), and move (the cursor to the target
manually). Such a strategy may prolong the target acquisition time. The user may
have to learn a novel hand-eye coordination pattern to be efficient with this technique.
[Figure: the conservative MAGIC pointing technique. The gaze position reported by the eye tracker defines an eye-tracking boundary within which the true target lies with 95% probability; the cursor is warped from its previous position, far from the target, to the boundary of the gaze area along the initial manual actuation vector.]
3. With pure manual pointing techniques, the user, knowing the current cursor
location, could conceivably perform his motor acts in parallel to visual search. Motor
action may start as soon as the user's gaze settles on a target. With MAGIC pointing
techniques, the motor action computation (decision) cannot start until the cursor
appears. This may negate the time saving gained from the MAGIC pointing
technique's reduction of movement amplitude. Clearly, experimental (implementation
and empirical) work is needed to validate, refine, or invent alternative MAGIC
pointing techniques.
3.1.1. ADVANTAGES OF THE LIBERAL AND CONSERVATIVE APPROACHES
1. Reduction of manual stress and fatigue
2. Practical accuracy level
3. A more natural mental model for the user
4. Faster than pure manual pointing
5. Improved subjective speed and ease of use
3.1.2. DISADVANTAGES OF THE LIBERAL AND CONSERVATIVE APPROACHES
1. The liberal approach can be distracting when the user is trying to read
2. The motor action computation cannot start until the cursor appears
3. In the conservative approach, uncertainty about the exact cursor location prolongs the target acquisition time
3.2. IMPLEMENTING MAGIC POINTING
We programmed the two MAGIC pointing techniques on a Windows
NT system. The techniques work independently from the applications. The MAGIC
pointing program takes data from both the manual input device (of any type, such as a
mouse) and the eye tracking system running either on the same machine or on another
machine connected via a serial port. Raw data from an eye tracker cannot be directly used for gaze-based interaction, due to noise from image processing, eye movement jitters, and samples taken during saccade (ballistic eye movement) periods.
The goal of filter design in general is to make the best compromise between preserving signal bandwidth and eliminating unwanted noise. In the case of eye tracking, as Jacob argued, the eye information relevant to interaction lies in the fixations.
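One simple way to keep only the fixation-related information is a dispersion-style filter that averages consecutive samples while they stay inside a small spatial window. The sketch below illustrates the idea; the thresholds are assumed, and this is not the authors' actual filter.

def fixations(samples, max_dispersion=30, min_samples=10):
    # Group consecutive gaze samples (x, y) into fixations; samples taken
    # during saccades form short windows and are discarded.
    result, window = [], []
    for pt in samples:
        window.append(pt)
        xs, ys = zip(*window)
        if max(xs) - min(xs) > max_dispersion or max(ys) - min(ys) > max_dispersion:
            prev = window[:-1]            # the window before this sample
            if len(prev) >= min_samples:  # long enough to count as a fixation
                result.append((sum(p[0] for p in prev) / len(prev),
                               sum(p[1] for p in prev) / len(prev)))
            window = [pt]                 # start a new window
    if len(window) >= min_samples:
        xs, ys = zip(*window)
        result.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return result

gaze = [(200 + i % 3, 200 + i % 2) for i in range(12)] + [(400, 300)] * 12
print(fixations(gaze))  # ~[(201.0, 200.5), (400.0, 300.0)]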
3.3. EXPERIMENT
Empirical studies are relatively rare in eye tracking-based interaction
research, although they are particularly needed in this field. Human behavior and
processes at the perceptual motor level often do not conform to conscious-level
reasoning. One usually cannot correctly describe how to make a turn on a bicycle.
Hypotheses on novel interaction techniques can only be validated by empirical data.
However, it is also particularly difficult to conduct empirical research on gaze-based
interaction techniques, due to the complexity of eye movement and the lack of
reliability in eye tracking equipment. Satisfactory results only come when "everything
is going right." When results are not as expected, It is difficult to find the true reason
among many possible reasons: Is it because a subject's particular eye property fooled
the eye tracker Was there a calibration error Or random noise in the imaging system
Or is the hypothesis in fact invalid We are still at a very early stage of exploring the
MAGIC pointing techniques. More refined or even very different techniques may be
designed in the future. We are by no means ready to conduct the definitive empirical
studies on MAGIC pointing. However, we also feel that it is important to subject our
work to empirical evaluations early so that quantitative observations can be made and
fed back to the iterative design-evaluation-design cycle. We therefore decided to
conduct a small-scale pilot study to take an initial peek at the use of MAGIC pointing,
however unrefined
3.3.1. EXPERIMENTAL DESIGN
The two MAGIC pointing techniques described earlier were put to test
using a set of parameters such as the filter's temporal and spatial thresholds, the
minimum cursor warping distance, and the amount of "intelligent bias" (subjectively
selected by the authors without extensive user testing). Ultimately the MAGIC
pointing techniques should be evaluated with an array of manual input devices,
against both pure manual and pure gaze-operated pointing methods.
Since this is an early pilot study, we decided to limit ourselves to one manual input
device. A standard mouse was first considered to be the manual input device in the
experiment. However, it was soon realized not to be the most suitable device for
MAGIC pointing, especially when a user decides to use the push-upwards strategy
with the intelligent offset. Because in such a case the user always moves in one
direction, the mouse tends to be moved off the pad, forcing the user to adjust the mouse position, often during a pointing trial. We hence decided to use a miniature isometric pointing stick (the IBM TrackPoint IV, commercially used in the IBM ThinkPad 600 and 770 series notebook computers). Another device suitable for MAGIC pointing is a touchpad: the user can choose one convenient gesture and take advantage of the intelligent offset. The experimental task was essentially a Fitts' pointing task. Subjects
were asked to point and click at targets appearing in random order. If the subject
clicked off-target, a miss was logged but the trial continued until a target was clicked.
An extra trial was added to make up for the missed trial. Only trials with no misses
were collected for time performance analyses. Subjects were
asked to complete the task as quickly as possible and as accurately as possible. To
serve as a motivator, a $20 cash prize was set for the subject with the shortest mean
session completion time with any technique.
The task was presented on a 20-inch CRT color monitor, with a 15 by 11 inch viewable area set at a resolution of 1280 by 1024 pixels. Subjects sat at a distance of 25 inches from the screen. The following factors were manipulated in the experiments: two target sizes, 20 pixels (0.23 in, or 0.53 degrees of viewing angle at 25 in distance) and 60 pixels (0.7 in, 1.61 degrees) in diameter; three target distances, 200 pixels (2.34 in, 5.37 degrees), 500 pixels (5.85 in, 13.37 degrees), and 800 pixels (9.38 in, 21.24 degrees); and three pointing directions: horizontal, vertical, and diagonal. A within-subject design was used. Each subject performed the task with all three techniques:
(1) Standard, pure manual pointing with no gaze tracking (No Gaze);
(2) The conservative MAGIC pointing method with intelligent offset (Gaze);
(3) The liberal MAGIC pointing method (Gaze2).
Nine subjects, seven male and two female, completed the
experiment. The order of techniques was balanced by a Latin square pattern. Seven
subjects were experienced Track Point users, while two had little or no experience.
With each technique, a 36-trial practice session was first given, during which subjects
were encouraged to explore and to find the most suitable strategies (aggressive,
gentle, etc.). The practice session was followed by two data collection sessions.
Although our eye tracking system allows head motion, at least for those users who do
not wear glasses, we decided to use a chin rest to minimize instrumental error.
3.3.2. EXPERIMENTAL RESULTS
Given the pilot nature and the small scale of the experiment, we expected
the statistical power of the results to be on the weaker side. In other words, while the
significant effects revealed are important, suggestive trends that are statistically non-
significant are still worth noting for future research.
[Figure: mean completion time (sec) vs. experiment session for the three techniques.]
The total average completion time was 1.4 seconds with the standard manual control technique, 1.52 seconds with the conservative MAGIC pointing technique (Gaze), and 1.33 seconds with the liberal MAGIC pointing technique (Gaze2). Note that the Gaze technique had the greatest improvement from the first to the second experiment session, suggesting the possibility of matching the performance of the other two techniques with further practice.
As expected, target size significantly influenced pointing time: F(1, 8) = 178, p < 0.001. This was true for both the manual and the two MAGIC pointing techniques. Pointing amplitude also significantly affected completion time: F(2, 8) = 97.5, p < 0.001. However, the amount of influence varied with the technique used, as indicated by the significant interaction between technique and amplitude: F(4, 32) = 7.5, p < 0.001.
As pointing amplitude increased from 200 pixels to 500 pixels and then to 800 pixels, subjects' completion time in the No Gaze condition increased at a non-linear, logarithmic-like pace, as Fitts' Law predicts. This was less true with the two MAGIC pointing techniques, particularly the Gaze2 condition, which is definitely not logarithmic. Nonetheless, completion time with the MAGIC pointing techniques did increase as target distance increased. This is intriguing because in the MAGIC pointing techniques, the manual control portion of the movement should be the distance from the warped cursor position to the true target. Such distance depends on eye tracking system accuracy, which is unrelated to the previous cursor position.
In short, while completion time and target distance with the MAGIC pointing techniques did not completely follow Fitts' Law, they were not completely independent either. Indeed, when we lump target size and target distance according to the Fitts' Law index of difficulty, ID = log2(A/W + 1) [15], we see a similar phenomenon.
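For concreteness, the index of difficulty of the experiment's extreme conditions can be computed directly from that formula (illustrative Python):

import math

def index_of_difficulty(A, W):
    # Fitts' law index of difficulty: ID = log2(A / W + 1),
    # where A is the movement distance and W is the target width.
    return math.log2(A / W + 1)

print(index_of_difficulty(800, 20))  # smallest, farthest target: ~5.36 bits
print(index_of_difficulty(200, 60))  # largest, nearest target:  ~2.12 bits

Since MAGIC pointing shortens the effective movement distance A while leaving W unchanged, it effectively lowers the index of difficulty of every trial.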
CHAPTER 4
EYE TRACKER
Figure 4.1. The liberal MAGIC pointing technique: the cursor is placed in the vicinity of the target that the user fixates on.
Figure 4.2. The conservative MAGIC pointing technique with "intelligent offset".
Since the goal of this work is to explore MAGIC pointing as a
user interface technique, we started out by purchasing a commercial eye tracker (ASL
Model 5000) after a market survey. In comparison to the systems reported in early studies, this system is much more compact and reliable. However, we felt that it was still not robust enough for a variety of people with different eye characteristics, such as pupil brightness and correction glasses.
We hence chose to develop and use our own eye tracking system. Available commercial systems, such as those made by ISCAN Incorporated, LC Technologies, and Applied Science Laboratories (ASL), rely on a single light source that is positioned either off the camera axis, in the case of the ISCAN ETL-400 systems, or on-axis, in the case of the LCT and the ASL E504 systems.
Figure 4.3. Bright (left) and dark (right) pupil images resulting from on-axis and
off-axis illumination. The glints, or corneal reflections, from the on- and off-axis
light sources can be easily identified as the bright points in the iris.
Eye tracking data can be acquired simultaneously with MRI
scanning using a system that illuminates the left eye of a subject with an infrared (IR)
source, acquires a video image of that eye, locates the corneal reflection (CR) of the
IR source, and in real time calculates/displays/records the gaze direction and pupil
diameter.
Figure 4.4. MRI scanning
Once the pupil has been detected, the corneal reflection is determined from the dark pupil image. The reflection is then used to estimate the user's point of gaze in terms of the screen coordinates where the user is looking. An initial calibration procedure, similar to that required by commercial eye trackers, is performed before use.
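The dark-pupil/corneal-reflection step can be caricatured in a few lines of OpenCV (4.x API). This is a deliberately simplified sketch with assumed threshold values; it omits the on-/off-axis image subtraction and the calibration mapping to screen coordinates described above.

import cv2

def pupil_and_glint(eye_img):
    # eye_img: single-channel IR image of one eye.
    blurred = cv2.GaussianBlur(eye_img, (7, 7), 0)
    # Pupil: take the largest dark blob (threshold value is an assumption).
    _, dark = cv2.threshold(blurred, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pupil_contour = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil_contour)
    pupil = (m["m10"] / m["m00"], m["m01"] / m["m00"])  # blob centroid
    # Glint: the brightest point, i.e. the corneal reflection of the IR source.
    _, _, _, glint = cv2.minMaxLoc(blurred)
    # Calibration maps the pupil-centre-to-glint vector to screen coordinates.
    return pupil, glint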
CHAPTER 5
ARTIFICIAL INTELLIGENCE SPEECH
RECOGNITION
It is important to consider the environment in which the speech
recognition system has to work. The grammar used by the speaker and accepted by
the system, noise level, noise type, position of the microphone, and speed and manner
of the user's speech are some factors that may affect the quality of speech recognition. When you dial the telephone number of a big company, you are likely to hear the sonorous voice of a cultured lady who responds to your call with great courtesy, saying "Welcome to company X. Please give me the extension number you want." You pronounce the extension number, your name, and the name of the person you want to contact. If the called person accepts the call, the connection is given quickly. This is artificial intelligence, where an automatic call-handling system is used without employing any telephone operator.
5.1 THE TECHNOLOGY
Artificial intelligence (AI) involves two basic ideas. First, it involves studying the thought processes of human beings. Second, it deals with representing those processes via machines (like computers, robots, etc.). AI is the behavior of a machine which, if performed by a human being, would be called intelligent. It makes machines smarter and more useful, and is less expensive than natural intelligence.
Natural language processing (NLP) refers to artificial intelligence
methods of communicating with a computer in a natural language like English. The
main objective of an NLP program is to understand the input and initiate action. The input
words are scanned and matched against internally stored known words. Identification
of a key word causes some action to be taken. In this way, one can communicate with
the computer in one's language. No special commands or computer language are
required. There is no need to enter programs in a special language for creating
software.
5.2 SPEECH RECOGNITION
The user speaks to the computer through a microphone; the resulting speech signal is fed to a set of audio filters, and a simple system may contain a minimum of three filters. The greater the number of filters used, the higher the probability of accurate recognition. Presently, switched-capacitor digital filters are used because these can be custom-built in integrated circuit form. They are smaller and cheaper than active filters using operational amplifiers.
The filter output is then fed to an ADC to translate the analogue signal into a digital word. The ADC samples the filter outputs many times a second. Each sample represents a different amplitude of the signal. Evenly spaced vertical lines represent the amplitude of the audio filter output at the instant of sampling. Each value is then converted to a binary number proportional to the amplitude of the sample. A central processing unit (CPU) controls the input circuits that are fed by the ADCs. A large RAM (random access memory) stores all the digital values in a buffer area. This digital information, representing the spoken word, is then accessed by the CPU for further processing. Normal speech has a frequency range of 200 Hz to 7 kHz. Recognizing a telephone call is more difficult, as it has a bandwidth limitation of 300 Hz to 3.3 kHz.
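As a rough illustration of the filter-and-sample stage, the sketch below computes the energy in three assumed bands spanning the 200 Hz - 7 kHz speech range from one block of samples. A real front end would use analog or switched-capacitor filters rather than an FFT, and the band edges here are illustrative.

import numpy as np

def band_energies(samples, rate, bands=((200, 1000), (1000, 3000), (3000, 7000))):
    # Crude three-"filter" front end: spectral energy per band via the FFT.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum()) for lo, hi in bands]

rate = 16000
t = np.arange(rate // 10) / rate          # 100 ms of signal
tone = np.sin(2 * np.pi * 440 * t)        # a 440 Hz test tone
print(band_energies(tone, rate))          # energy concentrates in the first band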
As explained earlier, the spoken words are processed by the filters
and ADCs. The binary representation of each of these words becomes a template or
standard, against which the future words are compared. These templates are stored in
the memory. Once the storing process is completed, the system can go into its active
mode and is capable of identifying spoken words. As each word is spoken, it is
converted into its binary equivalent and stored in RAM. The computer then starts searching and compares the binary input pattern with the templates. It is to be noted that even if the same speaker talks the same text, there are always slight variations in amplitude or loudness of the signal, pitch, frequency difference, time gap, etc. For this reason, there is never a perfect match between the template and the binary input word. The pattern matching process therefore uses statistical techniques and is designed to look for the best fit.
The values of the binary input words are subtracted from the corresponding values in the templates. If both values are the same, the difference is zero and there is a perfect match. If not, the subtraction produces some difference or error; the smaller the error, the better the match. When the best match occurs, the word is identified and displayed on the screen or used in some other manner. The search process takes a considerable amount of time, as the CPU has to make many comparisons before recognition occurs. This necessitates the use of very high-speed processors. A large RAM is also required, as even though a spoken word may last only a few hundred milliseconds, it is translated into many thousands of digital words. It is important to note that the words and templates must be correctly aligned in time before computing the similarity score. This process, termed dynamic time warping, recognizes that different speakers pronounce the same words at different speeds as well as elongate different parts of the same word. This is important for speaker-independent recognizers.
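A compact textbook version of dynamic time warping, which finds the lowest-cost alignment between two sequences spoken at different speeds, is sketched below. It illustrates the alignment idea only, not a production recognizer; each element stands in for a per-frame feature value.

def dtw_distance(a, b):
    # Dynamic time warping distance between two feature sequences.
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch the template
                                 d[i][j - 1],      # stretch the input
                                 d[i - 1][j - 1])  # advance both
    return d[n][m]

template = [1, 3, 4, 4, 2]        # stored word, one feature per frame
spoken = [1, 3, 3, 4, 4, 4, 2]    # the same word spoken more slowly
print(dtw_distance(template, spoken))  # 0.0: a perfect match despite the speed

The smaller the returned distance, the better the match, exactly as in the error-based comparison described above.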
CHAPTER 6
APPLICATIONS OF BLUE EYES TECHNOLOGY
One of the main benefits of a speech recognition system is that it lets the user do other work simultaneously. The user can concentrate on observation and manual operations, and still control the machinery by voice input commands. Another major application of speech processing is in military operations. Voice control of weapons is an example. With reliable speech recognition equipment, pilots can give commands and information to the computers by simply speaking into their microphones; they don't have to use their hands for this purpose. Another good example is a radiologist scanning hundreds of X-rays, ultrasonograms, and CT scans while simultaneously dictating conclusions to a speech recognition system connected to word processors. The radiologist can focus his attention on the images rather than on writing the text. Voice recognition could also be used on computers for making airline and hotel reservations. A user simply needs to state his needs in order to make a reservation, cancel a reservation, or make enquiries about schedules.
6.1. APPLICATIONS OF ARTIFICIAL SPEECH RECOGNITION
1. To control weapons by voice commands
2. Pilots can give commands to computers by speaking into microphones
3. For making airline and hotel reservations
4. For making reservations, canceling reservations, or making enquiries
5. Can be connected to word processors so that, instead of writing, users simply dictate to them
Some of the Blue Eyes-enabled devices are discussed below:
1) POD:
The first Blue Eyes-enabled mass production device was POD, the car manufactured by Toyota. It could keep the driver alert and active. It could tell the driver to slow down if he was driving too fast, advise him to pull over when he felt drowsy, and even play the driver some interesting music when he was getting bored.
2) PONG:
IBM released a robot called PONG, designed to demonstrate the new technology. The robot is equipped with a computer capable of analyzing a person's glances and other expressions of feelings, before automatically determining the next type of action. PONG is capable of perceiving the person standing in front of it, smiles when the person calls its name, and expresses loneliness when it loses sight of the person.
CHAPTER 7
ADVANTAGES OF BLUE EYES TECHNOLOGY
The Simple User Interest Tracker
Computers would have been much more powerful had they gained the perceptual and sensory abilities of the living beings on the earth. What needs to be developed is an intimate relationship between the computer and the humans. The Simple User Interest Tracker (SUITOR) is a revolutionary approach in this direction. By observing the Web page the user is browsing, the SUITOR can help by fetching more information to his desktop. By simply noticing where the user's eyes focus on the computer screen, the SUITOR can be more precise in determining his topic of interest.
Says the Almaden cognitive scientist who invented SUITOR: "The system presents the latest stock price or business news stories that could affect IBM. If I read the headline off the ticker, it pops up the story in a browser window. If I start to read the story, it adds related stories to the ticker. That's the whole idea of an attentive system -- one that attends to what you are doing, typing, reading, so that it can attend to your information needs."
SUITOR:
1. Helps by fetching more information to the desktop
2. Notices where the user's eyes focus on the screen
3. Fills a scrolling ticker on the computer screen with information related to the user's task
CONCLUSION:
The nineties witnessed quantum leaps in interface design for improved man-machine interaction. The BLUE EYES technology ensures a convenient way of simplifying life by providing more delicate and user-friendly facilities in computing devices. Now that we have proven the method, the next step is to improve the hardware. Instead of using cumbersome modules to gather information about the user, it will be better to use smaller and less intrusive units. The day is not far when this technology will push its way into your household, making you lazier. It may even reach your handheld mobile device. Anyway, this is only a technological forecast.
REFERENCES:
1) Levin, J. L., "An Eye-Controlled Computer."
2) Silbert, L. and Jacob, R., "The Advantage of Eye Gaze Interactions."
3) Bolt, Richard A., "Eyes at the Interface."
4) Ware, Colin and Mikaelian, Harutune H., "An Evaluation of an Eye Tracker."