
Affective computing



A brief introduction to what Affective Computing is, followed by updates on the current scenario.



  1. Affective Computing. Saumya Srivastava, M.Tech (HCI)
  2. Introduction. "Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects." - Rosalind Picard. The field originated with Rosalind Picard's 1995 paper "Affective Computing". Motivation for research: the ability to simulate empathy, i.e. the machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions. (Video: Technology to Measure Emotions) It is empirical research motivated by the theoretical foundations of psychology and neuroscience. (Video: Emotional Technology) - Eva Hudlicka (author, "Affective Computing: Theory, Methods & Applications")
  3. Objective. To develop computing devices with the capacity to gather cues to user emotion from a variety of sources; in simple words, to produce "emotion-aware machines". Facial expression, posture, gesture, speech, the force or rhythm of keystrokes, and the temperature change of the hand on a mouse can all signify changes in the user's emotional state, to be detected and interpreted by a computer. A limitless range of applications exists:  E-Learning: a tutor expands an explanation when the user is found to be in a state of confusion, adds information when the user is in a state of curiosity, etc.  E-Therapy: psychological health services (i.e. online counseling) that reveal the emotional state as in a real-world session; through affective computing, the patient's posture, facial expression and gesture lead to a more accurate evaluation of psychological state.
  4. PSYCHOLOGICAL THEORIES OF EMOTION [3] Ekman, Friesen and Ellsworth (1972) categorized emotions into 6 groups, namely fear, surprise, disgust, anger, happiness and sadness, all of which can be facially expressed. Ekman and Friesen (1978) developed the Facial Action Coding System (FACS), which uses muscle movements to quantify emotions. According to Ekman, every primary emotion has an adaptive value irrespective of individual and culture. An automated version of FACS was given by Bartlett et al. (1999). Later, Plutchik (1980) argued for 8 basic pairs of emotions which can be combined to produce secondary emotions. Drawback: one was forced to choose among the 8 emotion pairs. Russell instead proposed variation in 2 dimensions, Valence (X-axis) and Arousal (Y-axis), e.g. combining Happy and Content:

     Happy | Content | Result
     true  | true    | Pleasure
     true  | false   | Displeasure
     false | true    | Displeasure
     false | false   | Displeasure
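Russell's two-dimensional model and the Happy/Content combination can be sketched in a few lines of code. The quadrant labels below are illustrative examples, not Russell's exact terms; the value range [-1, 1] is an assumption.

```python
def pleasure(happy: bool, content: bool) -> str:
    """Encode the Happy/Content combination table: only the
    conjunction of both yields Pleasure."""
    return "Pleasure" if happy and content else "Displeasure"

def affect_quadrant(valence: float, arousal: float) -> str:
    """Map a point in the valence (X) / arousal (Y) plane, assumed
    in [-1, 1], to a coarse, illustrative affect label."""
    if valence >= 0 and arousal >= 0:
        return "excited/happy"
    if valence >= 0:
        return "calm/content"
    if arousal >= 0:
        return "angry/afraid"
    return "sad/bored"

print(pleasure(True, True))        # Pleasure
print(affect_quadrant(0.7, 0.6))   # excited/happy
```

The dimensional view avoids the forced choice among Plutchik's discrete emotion pairs: any affective state is just a point in the plane.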
  6. COMPONENTS OF EMOTIONS [3] Subjective experience (the feeling of fear, and so on). Physiological changes in the Autonomic Nervous System (ANS) and the Endocrine System (glands and the hormones released from them), e.g. trembling with fear precedes conscious control of it. Behavior evoked (such as running away, or fainting due to fear).
  7. SOME THEORIES [3] JAMES-LANGE THEORY  Introduced in 1890 by James and Lange.  Argues that action precedes emotion (the brain interprets action as emotion), e.g. something scary moves towards us → pulse starts rising → we interpret our state of body → we are afraid (fear). Flow: emotion-arousing stimulus → visceral and skeletal changes → perception and interpretation of those changes → emotion, with a feedback loop.
  8. SOME THEORIES [3] (Continued) CANNON-BARD THEORY  Introduced in 1920 by Cannon and Bard.  Argues that the emotion-arousing stimulus precedes action, which follows from cognitive appraisal. Flow: perception of the emotion-arousing stimulus → signals sent to the cortex (experience of emotion) and to the hypothalamus (physiological, bodily changes).
  9. SOME THEORIES [3] (Continued) SCHACHTER-SINGER THEORY  Introduced in 1960.  Adrenaline experiment: participants were told they would be injected with a vitamin and tested on whether their vision would be affected. Group A: accurate information was given about the side effects (sweating, tremor and a jittery feeling). Group B: false information was provided. Group C: told nothing. Group D: injected with saline (not the "vitamin"; no side effects).  Results: 1. A & D: didn't feel involved. 2. B & C: took on the emotional state of those around them.  Criticism: 1. There was less ambiguity about what was happening than the theory assumes. 2. Emotion is not just based on behavior but also depends on past experience and the source of information. 3. Unexplained arousal of an emotional state leads to a negative experience.
  10. SOME THEORIES [3] (Continued) Flow: physiological (bodily) changes → the thalamus sends impulses to the cortex → awareness of physiological arousal → interpreting the perception of arousal as a particular emotion, given the stimulus context. Further, Lazarus (1982) performed experiments on "cognitive labeling" and proposed the notion of "cognitive appraisal"; according to this theory, evaluation of the situation precedes the affective reaction. Zajonc (1984) argued instead that the emotional response precedes cognitive processing.
  11. Areas of Affective Computing [3] Detecting Emotional Information (basic capabilities in a computer to discriminate emotions)  Input: gathering a large variety of input signals, e.g. face, hand gesture, posture, gait, respiration, electrodermal response, ECG, temperature, blood pressure, blood volume, electromyogram*.  Pattern Recognition: feature extraction and classification of the signals, e.g. analysis of video motion features (to discriminate a frown from a smile).  Reasoning: predicts the underlying emotion based on how emotions are generated and expressed.  Learning: the factors that tend to accompany an individual's emotion, which help the system recognize that person's emotion better.  Bias: if a system has emotions, then recognizing ambiguous emotion becomes easier.  Output: recognize the expression and the likely underlying emotion. * A test that measures the activity of the muscles.
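The input → pattern recognition → output chain above can be sketched minimally. The signal values, feature set, and threshold rule below are hypothetical placeholders standing in for a trained classifier (SVM, HMM, neural network, ...), not any particular published system.

```python
from statistics import mean, stdev

def extract_features(signal):
    """Pattern-recognition step 1: summarize a raw 1-D signal
    (e.g. skin conductance samples) into a small feature vector."""
    return {"mean": mean(signal), "std": stdev(signal)}

def classify(features):
    """Pattern-recognition step 2: a toy threshold rule in place
    of a trained classifier. Thresholds are made up."""
    if features["mean"] > 5.0 and features["std"] > 1.0:
        return "aroused"
    return "neutral"

calm = [4.1, 4.0, 4.2, 4.1]         # low, stable conductance
excited = [6.5, 8.0, 5.5, 9.0]      # high, varying conductance
print(classify(extract_features(calm)))     # neutral
print(classify(extract_features(excited)))  # aroused
```

A real system would fuse several such channels (face, posture, voice, physiology) before reasoning about the underlying emotion.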
  12. Areas of Affective Computing [3] (Continued) Recognizing Emotional Information  We need to develop systems which moderate their responses to user frustration/stress/anxiety based on computer recognition of emotion.  Exception: this concept can't be implemented directly in tele-healthcare.  Lisetti et al. (2003) designed an application in this regard to resolve this issue, using wearable sensors and other embodied-avatar devices.  These help communication between patient and clinician.  Result: 90% success recognizing SADNESS, 80% success recognizing FEAR, 80% success recognizing ANGER, 70% success recognizing FRUSTRATION.
  13. Areas of Affective Computing [3] (Continued) AFFECTIVE WEARABLES Sensors and tools can be used to recognize affective patterns, but these tools require a lot of attention and maintenance. Figure: Wearer's blood volume pulse measured using photoplethysmography. Figure: Sampling and transmitting biometric data to a larger computer for analysis.
  14. Areas of Affective Computing [3] (Continued) Expressing Emotion  Why computers need to express emotions: 1. Computers expressing emotions can improve the quality and effectiveness of communication between people and technologies. 2. How can people communicate with a computer in a way that lets them express their emotions? 3. How can technology stimulate and support new modes of affective communication between people?  Efforts made: 1. Schiano and her colleagues (2000) tested an early prototype of a simple robot. Drawback: it had no emotions. 2. An experimental application at MIT, the 'Relational Agent' (Bickmore, 2003), was designed to sustain long-term relationships. The agent expressed emotions. Drawback: it didn't convince users of the reality of its 'feelings'. 3. By contrast, 'Kismet', an expressive robot at MIT, is equipped with auditory and proprioceptive (touch) sensory inputs. Kismet can express emotion through vocalization, facial expression, and adjustment of gaze direction and head orientation.
  15. Areas of Affective Computing [3] (Continued) Expressing Emotion: evolution over the years. Figure: MS Office Assistant. Figure: Kismet robot.
  16. What has been done? [5] Emotion recognition and synthesis have been the focus of many FP5, FP6 and FP7 projects, starting from ERMIS (emotion-aware agents) → HUMAINE (network of excellence) → CALLAS (emotion in art and entertainment). What can be done? [5] Add observable manifestations which provide cues about the user's subjective experience.  A smile may indicate successful completion of a task, or retrieval of what the user was looking for, instead of a cryptic "retry" button or asking the user to verify results.  People may frown to indicate displeasure or difficulty reading, nod to agree, shrug their shoulders when indifferent, etc.
  17. How can this be done? [5] We can recognize:  Facial features and cues  Head pose / eye gaze (to estimate attention)  Hand gestures (usually a fixed vocabulary of signs)  Directions and commands (usually a fixed vocabulary)  Anger in speech (useful in call centers) Affective Interactions [5] When computers can sense affective cues:  Users cannot read text off the screen and frown or approach the screen?  Redraw the text with a larger font!  Call-center user is angry?  Redirect to a human operator!  Users not familiar with, or cannot use, the mouse/keyboard?  Spoken commands and hand gestures are another option!  Users not comfortable with on-screen text?  Use virtual characters and speech synthesis!
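The cue → adaptation pairs listed above amount to a small rule table. The cue names and response strings below are illustrative, not from any deployed system.

```python
# Hypothetical mapping from a recognized affective cue to a
# system adaptation, mirroring the slide's examples.
RESPONSES = {
    "frown_near_screen": "redraw text with larger font",
    "anger_in_speech": "redirect to human operator",
    "cannot_use_mouse": "offer spoken commands / hand gestures",
    "dislikes_onscreen_text": "use virtual character + speech synthesis",
}

def adapt(cue: str) -> str:
    """Return the adaptation for a recognized cue, or do nothing."""
    return RESPONSES.get(cue, "no adaptation")

print(adapt("anger_in_speech"))  # redirect to human operator
```

Real systems would attach confidences to each recognized cue and only adapt above a threshold, since misreading affect is worse than ignoring it.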
  18. Current State of the Art [1] Rosalind Picard, in her book "Affective Computing & Intelligent Interaction", includes a research paper, "Expressive Face Animation Synthesis Based on Dynamic Mapping Method" [1], which describes a SPEECH-DRIVEN FACE ANIMATION SYSTEM WITH EXPRESSIONS.  Up till now: an audio stream is fed to a speech-driven face animation system, which outputs the corresponding sequence of face movements (used in, e.g., multimodal HCI, virtual reality, video phones).  Work had been done on lip movement, resulting in inaccuracy and discontinuity.  In speech recognition systems, Yamamoto E. built a phoneme recognition model using a Hidden Markov Model, directly mapping phonemes to lip shapes.  Drawback: the phonemes had to be linked to a language, and since phonemes vary from person to person and region to region, efficiency degraded.
  19. Current State of the Art [1] (Continued) Progress  To overcome the drawback, neural networks were then used for audio-visual mapping together with the Gaussian Mixture Model (GMM). (Demonstration 1) (Demonstration 2) Non-verbal information and verbal information together form a set of messages carrying the speaker's emotional state.  The method explains the relation between a neutral facial deformation and an expressive facial deformation via a GMM together with a joint probability distribution.  Result: an encouraging quantitative evaluation, with the synthesized face showing realistic quality.
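The idea of mapping audio features to visual features through a joint probability distribution can be illustrated in its simplest form: a single joint Gaussian (the K=1 case of a GMM), whose conditional mean gives the predicted visual feature. The paired data and feature names below are synthetic placeholders, not the paper's actual features or model.

```python
from statistics import mean

# Hypothetical paired training data: one audio feature (e.g. speech
# energy) against one visual feature (e.g. mouth opening).
audio = [0.0, 1.0, 2.0, 3.0, 4.0]
lips = [0.1, 0.9, 2.1, 2.9, 4.1]

# Fit the joint Gaussian: means, covariance, and audio variance.
mu_a, mu_l = mean(audio), mean(lips)
cov = mean((a - mu_a) * (l - mu_l) for a, l in zip(audio, lips))
var = mean((a - mu_a) ** 2 for a in audio)

def predict_lip(a: float) -> float:
    """Conditional mean E[lip | audio = a] of the joint Gaussian."""
    return mu_l + (cov / var) * (a - mu_a)

print(round(predict_lip(2.5), 2))
```

A full GMM generalizes this by mixing several such Gaussians weighted by their responsibilities, which lets the audio-to-visual mapping be non-linear.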
  20. Released Applications [6] Spatio-Temporal Emotional Mapper for Social Systems [Demonstration]  Developed by the Dept. of Informatics Engineering, Faculty of Science & Technology, University of Coimbra.  This tool gathers from a society of agents their emotional arousal and self-rated motivation, as well as their location, in order to plot a map of a city or geographical region with information about the motivation and emotional state of the agents that inhabit it.  It is an open-source application (source code).
  21. Research Groups & their Work AffQuake [2]  AffQuake is an attempt to incorporate signals that relate to the player's emotions while playing.  Quake II is altered so that the behavior of play is modified w.r.t. the average skin conductance level.  For example, excitement increases the size of the avatar, giving the benefit of seeing farther, but at the same time making it an easier target.  Performed at the MIT Media Lab, MIT.  Group members: a) Carson J. Reynolds b) Rosalind W. Picard
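AffQuake's core mechanic, growing the avatar with arousal, can be sketched as a function of skin conductance relative to a per-player baseline. The scaling rule and constants are assumptions for illustration, not the actual game code.

```python
def avatar_scale(scl: float, baseline: float, gain: float = 0.5) -> float:
    """Return an avatar size multiplier: 1.0 at or below the player's
    baseline skin conductance, growing as arousal exceeds it."""
    excitement = max(0.0, (scl - baseline) / baseline)
    return 1.0 + gain * excitement

print(avatar_scale(6.0, 4.0))  # excited player -> larger avatar
```

The trade-off in the slide falls out naturally: the same multiplier that extends the player's view also enlarges their hitbox.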
  22. Research Groups & their Work Affective Tangibles [2]  Objective: to develop physical objects that can be grasped, squeezed, thrown or otherwise manipulated as a natural display of affect.  People generally express their frustration through the use of motor skills; in simple words, people often increase the intensity of their muscle movements when experiencing frustrating interactions.  Constructed tangibles include a pressure mouse, affective pinwheels mapped to skin conductance, and a voodoo doll that can be shaken to express frustration.  Performed at the MIT Media Lab, MIT.  Group members: a) Carson J. Reynolds b) Rosalind W. Picard Figure: Pressure mouse
  23. Research Groups & their Work Affective Learning Companion [2]  A powerful research tool exploring a variety of social-emotional skills in HCI.  The platform enables a computational agent to sense and respond, in real time, to a user's non-verbal emotional cues, using video, postural movements, mouse pressure, physiology, and other behaviors communicated by the user.  A recently developed animated agent allows the study of factors that help learners develop the ability to persevere during frustrating learning episodes.  Performed at the MIT Media Lab, MIT.  Group members: a) Selene Atenea Mota b) Rosalind W. Picard c) Ashish Kapoor d) Barry Kort e) Hyungil Ahn f) Ken Perlin g) Winslow Burleson
  24. Research Groups & their Work The Galvactivator [2]  A glove-like wearable device that senses the wearer's skin conductivity and maps its values to a bright LED display.  Increases in skin conductivity across the palm tend to be good indicators of physiological arousal, causing the Galvactivator display to glow brightly.  Applications: self-feedback for stress management, facilitation of conversation between two people, and new ways of visualizing mass excitement levels in performance situations, or visualizing aspects of arousal and attention in learning situations.  Performed at the MIT Media Lab, MIT.  Group members: a) Rosalind W. Picard b) Jonny Farringdon c) Nancy Tilbury (Philips Research Laboratories) d) Jocelyn Scheirer
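The Galvactivator's conductance-to-brightness mapping can be sketched as a clamped linear scale. The conductance range and PWM output convention are hypothetical constants, not the device's actual calibration.

```python
def led_brightness(conductance_us: float,
                   lo: float = 1.0, hi: float = 20.0) -> int:
    """Linearly map skin conductance (microsiemens) onto a 0-255
    LED duty cycle, clamping values outside the assumed range."""
    frac = (conductance_us - lo) / (hi - lo)
    frac = min(1.0, max(0.0, frac))  # clamp to [0, 1]
    return round(255 * frac)

print(led_brightness(20.0))  # high arousal -> full brightness
```

In practice a wearable would also high-pass or baseline-correct the signal, since absolute conductance varies widely between wearers.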
  25. Research Groups & their Work Learning & Pattern Recognition [2]  This project developed efficient versions of Bayesian techniques for a variety of inference problems, including curve fitting, mixture-density estimation, principal-components analysis (PCA), automatic relevance determination, and spectral analysis.  Performed at the MIT Media Lab, MIT.  Group members: a) Rosalind W. Picard b) Thomas Minka c) Yuan Qi
  26. Research Groups & their Work Robotic Computer [2]  A robotic computer that moves its monitor "head" and "neck," but has no explicit face, is being designed to interact with users in a natural way for applications such as learning, rapport-building, interactive teaching, and posture improvement.  In all these applications, the robot will need to move in subtle ways that express its state and promote appropriate movements in the user, but that don't distract or annoy.  Goal: giving the system the ability to recognize states of the user and also to have subtle expressions.  Performed at the MIT Media Lab, MIT.  Group members: a) Carson J. Reynolds b) Rosalind W. Picard. Note: other publications associated with affective computing are available @
  27. Research Groups & their Work Agent-Dysl Project [5]  Problem: children with dyslexia experience problems in reading off a computer screen.  Common errors: skipping words, changing word or syllable sequence, becoming easily distracted or frustrated.  Solution: screen-reading software which  Helps them read in the correct order by highlighting words and syllables.  Checks and monitors their progress.  Looks for signs of distraction or frustration. Figure: User leans towards the screen? Font size is increased. Figure: User looks away? Highlighting stops.
  28. Key Issues for Further Research [3] The critical issues that interactive systems designers are facing:  In which domains does affective capability make a positive difference to HCI, and where is it irrelevant or even obstructive?  How precise do we need to be in identifying human emotions? Perhaps it is enough to identify a general positive or negative feeling. What techniques best detect emotional states for this purpose?  How do we evaluate the contribution of affect to the overall success of a design?
  29. References
     [1] Panrong Yin, Linye Zhao, Lexing Huang and Jianhua Tao, "Expressive Face Animation Synthesis Based on Dynamic Mapping Method", National Laboratory of Pattern Recognition, Springer-Verlag Berlin Heidelberg, 2011.
     [2] Site: http://www.
     [3] David Benyon (2010), Designing Interactive Systems: A Comprehensive Guide to HCI and Interaction Design, Addison Wesley, Second Edition.
     [4] Site:
     [5] Dr. Kostas Karpouzis, "Technology Potential: Affective Computing", Image, Video and Multimedia Systems Lab, National Technical University of Athens.
     [6] Site:
     [7] Zhihong Zeng, Maja Pantic, Glenn I. Roisman and Thomas S. Huang, "A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions", IEEE.
  30. Projects Problem Definition 1: Designing an interface integrating emotion detection for video surveillance. Problem Definition 2: A 3-D avatar reflecting emotion as per the scenario in a gaming environment.