Chasing the future Modern & cultural challenges of designing emotional robots Exploring Living with Interactive & Digital Companions
10 partners
5 robots
What is a Companion?  A friend or acquaintance you associate yourself with.
Companions (compañeros)
Your first companion?
What makes a good companion? Understand users & their emotions · Understand context · Grow old gracefully · Learn to forget · Adapt
What makes a good companion? Understand users & their emotions: understand, and mimic to show they've understood
What makes a good companion? Understand users & their emotions · Understand context
"If the sum of positive emotions is bigger than the sum of the negative emotions over the last n time periods then the agent is in a good mood; otherwise it is in a bad mood."
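The mood rule quoted above is easy to express in code. A minimal sketch in Python, assuming emotions arrive as signed intensities (positive values for positive emotions, negative for negative ones); the encoding and the tie-break towards "bad mood" are assumptions, since the slide specifies neither:

```python
def mood(emotion_history, n=5):
    """Classify mood from the last n emotion samples, following the rule:
    if the sum of positive emotions exceeds the sum of negative ones over
    the last n time periods, the agent is in a good mood."""
    recent = list(emotion_history)[-n:]
    positive = sum(e for e in recent if e > 0)
    negative = -sum(e for e in recent if e < 0)  # magnitude of negative emotions
    return "good mood" if positive > negative else "bad mood"
```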
What makes a good companion? Understand users & their emotions · Understand context · Grow old gracefully
Floppy Neck Syndrome
What makes a good companion? Understand users & their emotions · Understand context · Grow old gracefully · Learn to forget
What makes a good companion? Understand users & their emotions · Understand context · Grow old gracefully · Learn to forget · Adapt
So what?
Ceci n’est  pas un robot
Are we designing ourselves out of the equation?
Do we need new  services for these  companions?
Being human now.
Connecting the dots.
Gentle robotisation?
Who is learning?
“ You’re vulnerable, you’re vulnerable, You are not a robot, You’re loveable, so loveable.” - Marina & The Diamonds
Do we make good companions?
Thank  you @iotwatch
Chasing the Future: challenges and opportunities in the design of emotional robots



Talk I gave about work on emotional robots for RoboLIFT 2011 in Lyon.

Published in: Design, Technology, Business

  • My name is Alexandra Deschamps-Sonsino and I am a product and interaction designer. This talk will try to show how exploring the design landscape of emotional robots is really a way for us to understand ourselves better, and I'll touch on what that might mean for industrial applications. Throughout this talk, I'll cover the reasons why, as designers, scientists and people, it's important that we explore how to design digital and interactive companions that can develop and read emotions.
  • Most of what I will talk about will be through the lens of Lirec's work, a European-funded research project under the FP7 framework which focuses on exploring the role, design and challenges of developing emotive and sensitive companions. As a designer, I've been asked to act as an evangelist, helping their work live outside of the numerous publications they have produced since the project began in 2008.
  • Lirec brings together 10 partners exploring how we live with digital and interactive companions. It is one of the rare projects where academics and industrial partners from the worlds of ethology, social science, design & computer science come together to design real-world robots today and ask the hard questions.
  • Since 2008, Lirec has developed about 5-6 different types of robots, and run many more experiments with existing robots, some of which I will share with you.
  • I used the word companion here, and it is a theme that recurs across the tradeshow outside; as a designer I'm interested in the impact of that word. According to Wikipedia, we are talking about a friend or acquaintance we associate with. Acquaintances and friends are very different and play very different social roles. This should be considered further down the line, as so far, in robots like the Aibo and Pleo, we seem to have wanted to design friends more than acquaintances.
  • In terms of imagery, a companion that comes to mind could be Sancho Panza from the tale of Don Quixote de la Mancha. Loyal, doomed.
  • Or Robin to Batman: over-enthusiastic, youthful, with much to learn.
  • And of course dogs, man’s best friend. We’ll come back to why looking at dogs is more useful.
  • And of course the favourite acquaintance that knows the most about you is probably your smartphone.
  • The word robot was introduced to the public by the Czech interwar writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots), published in 1920. But think about your first digital companion…
  • Realistically, the first digital companion you will all have interacted with is Clippy. Disappointing, and hated in the West to extremes, its behaviour was simple and mostly useless. Part of the Microsoft Office suite from 1997 to 2003, it wanted to be loved, to be lovable even, but it came too early and lacked empathy, awareness of context, or enough intelligence for that matter. A little dumbness goes a long way, unfortunately, in creating what is now a cult of hate around what was the work of print designer and creator Kevan Atteberry.
  • If we know what doesn't work, we can possibly break down what would make a good companion. Here are 5 points I'd like to elaborate on.
  • As opposed to robots, all of our behaviour, verbal or non-verbal, is communicative; it is impossible not to communicate. Emotions are one indisputable ingredient of the human-product relationship that is difficult to fake, even if we're not very good at faking them. Emotion influences the reasoning process and also the capability to learn: emotions like anxiety, depression or hunger may reduce or even block learning. Emotion is a key element in making a robotic or virtual companion more engaging and effective in its interactions, since it provides it with the tools to relate to the user, process the user's feedback and actions, and, more importantly, develop an emotional intelligence. By recognising and expressing emotion, the virtual companion captures the attention of the user. A companion's affect sensitivity goes beyond the ability to recognise prototypical emotions, and allows for more varied affective signals conveying subtler states such as boredom, interest, frustration or agreement.
  • The work of Paul Ekman in creating the industry-standard Facial Action Coding System becomes useful in designing the instantiation of our emotions into physical facial reactions. It is a common standard for systematically categorising the physical expression of emotions, and it has proven useful to psychologists and animators alike. Ekman showed that facial expressions of emotion are not culturally determined but universal across human cultures, and thus biological in origin. Expressions he found to be universal included those indicating anger, disgust, fear, joy, sadness and surprise.
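As an illustration of how FACS can be operationalised, here is a sketch that matches observed Action Units (AUs) against simplified prototype expressions for Ekman's six universal emotions. The AU sets below are common textbook approximations rather than Lirec's or Ekman's exact coding, and the nearest-prototype matching is an assumption for illustration only:

```python
# Illustrative (simplified) prototype expressions; real FACS coding is far
# more nuanced and these AU sets are indicative approximations only.
PROTOTYPES = {
    "joy":      {6, 12},            # cheek raiser + lip corner puller
    "sadness":  {1, 4, 15},         # inner brow raiser + brow lowerer + lip corner depressor
    "surprise": {1, 2, 5, 26},      # brow raisers + upper lid raiser + jaw drop
    "anger":    {4, 5, 7, 23},      # brow lowerer + lid tighteners + lip tightener
    "disgust":  {9, 15},            # nose wrinkler + lip corner depressor
    "fear":     {1, 2, 4, 5, 20, 26},
}

def classify(active_aus):
    """Return the prototype emotion whose AU set best overlaps the
    observed action units (a naive nearest-prototype match)."""
    active = set(active_aus)
    def score(emotion):
        proto = PROTOTYPES[emotion]
        return len(active & proto) / len(active | proto)  # Jaccard overlap
    return max(PROTOTYPES, key=score)
```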
  • This is how phonemes (the smallest segmental units of sound employed to form meaningful contrasts between utterances) are divided up by animators. It's hard to think about speech without thinking of the whole face, though, and the emotions that are communicated at the same time. Only text-to-speech bots really manage to sever the two, which is also why they feel so robotic.
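The phoneme groupings animators use collapse many phonemes onto a few mouth shapes (visemes). A toy sketch of that idea; the table below is an assumed placeholder, since groupings vary between studios:

```python
# A minimal, assumed phoneme-to-viseme table of the kind animators use;
# the symbols and groupings here are illustrative placeholders.
VISEMES = {
    "p": "closed", "b": "closed", "m": "closed",
    "f": "lip-teeth", "v": "lip-teeth",
    "aa": "open", "ae": "open",
    "oo": "rounded", "w": "rounded",
}

def visemes_for(phonemes):
    """Map a phoneme sequence to mouth shapes, defaulting to 'rest'."""
    return [VISEMES.get(p, "rest") for p in phonemes]
```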
  • This set of expressions can then be interpreted according to the different platforms, as Lirec researchers did using the iCat robot to start developing assistive behaviours for children learning how to play chess.
  • Little Mozart, a product of the industrial partnership within Lirec, is what is called an Affective Tutoring System. Sometimes emotions play a positive role in the reasoning process, sometimes they don't, but one thing is sure: emotion influences reasoning and learning capabilities. Emotions like anxiety, depression or hunger may reduce or even block learning, so, as teachers already know, emotional upsets interfere with learning. Some teachers adapt their behaviour, within their possibilities, to improve students' learning; Little Mozart tries to do the same. The robotic scenario offered a more immersive user experience, improved feedback and a more believable social interaction. Little Mozart helps children compose music and improve their knowledge of melodic composition and the basics of musical language. It's intended for children aged between 4 and 10.
  • Understanding the user's affective or mental states from their verbal and subtler non-verbal behaviour is of vital importance for a companion to be able to act socially. A socially intelligent companion, for example, would try to ensure that the user is interested in maintaining the interaction, or act empathically towards them if they are sad or unwilling to engage; it would not disturb them by trying to engage them in some activity they do not approach. The companion's affect sensitivity goes beyond the ability to recognise prototypical emotions, allowing for more varied affective signals conveying subtler states such as boredom, interest, frustration or agreement. Previous studies comparing the interaction of robots with that of dogs raise the question of moral development. For example, a dog will constantly challenge its family as part of its inherent sense of group or pack dynamics. When these challenges are omitted, the result in most cases is that the dog takes the dominant position, leading to an overindulged pet. In the case of Pleo, these dynamics are apparently lacking. Strangely, Pleo and Aibo sit somewhere between a pet and a gadget, enabling us to build relationships with them where we start to care about their wellbeing, if we get to that point at all and invest enough time.
  • Bio-mimicry and behavioural studies allow us to break down the emotional landscape into exploitable gestures, not just in a mathematical sense.
  • We could also think of a companion as someone who grows old and ages in an interesting way. Pleo was again used to examine what quality of relationship evolves over time between a user and the pet robot.
  • Is it possible to care too much? That said, many owners overcome their fears and worries and return it, even though they know they will not get the same Pleo back. On the other hand, we see that braver users would rather void their warranties and perform their own "medical" procedures to make sure they keep their Pleo. Consumers are thus in many ways encouraged or compelled to treat it like a pet, but when play becomes a bit rough, would a "broken leg" fall under the return policy or be part of its life cycle? Would new players come into the aftermarket, e.g. robotic doctors and therapists?
  • Part of the design activity means that a companion needs to know what to remember and what to forget. To be believable, it is necessary that its emotions, actions and motivations are coherent over long time periods, and an agent's inability to retrieve past personal experiences may hinder that coherence. Memories that refer to personal experiences, linked with a specific time and place, are named episodic memories in the psychology literature. A companion's memory can be designed so that emotional episodes are remembered more, and information about the user's affective state during specific sessions of interaction can be used to retrieve relevant information when new interactions take place.
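One way to sketch "learning to forget" is an episodic store where salience combines emotional intensity with recency, so emotionally charged episodes persist while mundane ones fade. The class below is an illustrative assumption (the field names and decay rule are invented), not Lirec's actual memory architecture:

```python
import time

class EpisodicMemory:
    """Episodic store where emotionally intense episodes decay more slowly,
    so the companion effectively forgets the rest."""

    def __init__(self, decay=0.1):
        self.decay = decay
        self.episodes = []  # (timestamp, description, emotional_intensity)

    def remember(self, description, intensity, timestamp=None):
        self.episodes.append((timestamp or time.time(), description, intensity))

    def salience(self, episode, now=None):
        now = now or time.time()
        ts, _, intensity = episode
        age = now - ts
        return intensity / (1.0 + self.decay * age)  # intense + recent wins

    def recall(self, k=3, now=None):
        """Retrieve the k most salient episodes; the rest stay forgotten."""
        ranked = sorted(self.episodes,
                        key=lambda e: self.salience(e, now), reverse=True)
        return [desc for _, desc, _ in ranked[:k]]
```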
  • Migration & energy management. The latest World Robotics report, from 2009 [11], suggests that of the 7.3 million service robots sold in 2008, 4.4 million units were sold for personal use in home applications (vacuuming and lawn-mowing bots) and about 2.8 million for entertainment and leisure (toy robots, hobby systems and educational bots). In 2008 alone about 940,000 vacuum-cleaning robots (like the iRobot Roomba 562 Pet Series above) were sold, almost 50 percent more than in 2007. The report estimates that 49,000 professional service robots and 11.6 million personal service robots will be sold between 2009 and 2012. This poses several challenges, as a robot should be able to sustain itself and operate over long periods of time. One bio-inspired approach was presented by David McFarland and Luc Steels at the AI Lab, VUB, in Brussels, involving an artificial ecosystem in which robots cooperated in maintaining both their short-term and long-term energy supply. The approach focused on mutualism, which requires co-operation between robots, whereby one robot aids another out of self-interest. We have already performed some tests using a mobile phone to retrieve data from the robot's sensors and control it (e.g. performing certain movements or emitting sounds). The robot can sense certain stimuli, such as the user petting it, and provide appropriate reactions, like wagging its tail. Another feature Bluetooth makes possible is detecting when a user gets near Pleo (one of the important behaviours that demonstrate attachment), based on the connection established with the mobile phone. In such a situation, Pleo reacts like a pet, becoming more active and trying to engage the user in an interaction. The pet will use the robot to interact with the user whenever it is in range, disappearing from the mobile phone's screen; when the robot goes out of range, the pet returns to the user's mobile phone. The pet responds to the presence or absence of the user much as a dog would, becoming more excited or sad according to the situation. The values of its needs are updated on the mobile phone so that the result of the interaction with the physical version is reflected in the mobile version. Despite the two different embodiments, the "identity" of the pet is the same, which means that the state of its internal needs is transferred from one embodiment to another. For instance, if the Pleo robot is running out of battery, should the pet autonomously migrate to the mobile version?
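The migration behaviour described above can be sketched as a tiny state machine that moves the pet's shared identity between embodiments based on battery level and phone proximity. The needs dictionary and thresholds are invented for illustration; they are not Lirec's implementation:

```python
class CompanionState:
    """Shared 'identity' that migrates between embodiments. The needs
    values persist across migration, so the state of the pet's internal
    needs transfers from one embodiment to the other."""

    def __init__(self):
        self.needs = {"energy": 1.0, "play": 0.5}  # illustrative needs model
        self.embodiment = "robot"

    def tick(self, battery_level, phone_in_range):
        # Migrate to the phone when the robot is about to run out of power,
        # and back to the robot when it is charged and the phone is nearby.
        if self.embodiment == "robot" and battery_level < 0.1:
            self.embodiment = "phone"
        elif self.embodiment == "phone" and phone_in_range and battery_level > 0.5:
            self.embodiment = "robot"
        return self.embodiment
```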
  • So what? Now that we more or less know how we should aim to build our next-generation companions, does that actually equip us with tools for the future, or does it create more questions than answers?
  • Cargo-cult design is what happens when you start creating a representation without sufficient knowledge of how it would actually work, or when you present the representation without acknowledging that lack of knowledge. This is ultimately a massive challenge in robotics research, where the challenges are many and the depictions of robots people have developed in culture, movies and comics have directly influenced how we perceive what robots are able to do. Robots exist in popular culture as a form of mechanical character with imagined functionality based on unknown underlying mechanisms. This ultimately affects the design and evaluation of real robots. Trusting the future too much can be a bad thing: when HAL was designed as part of the plot of 2001: A Space Odyssey, its creators really believed that, at the rate of scientific discovery of the time, 2001 was a realistic timeframe for the development of such a robotic companion. As the future trajectory of research is by nature largely unpredictable, such visions may run the risk of reinforcing unrealistic ideas of what robots can do. Given the cultural foundations and historical concepts of robots, there is a risk that the general idea of what a robot is, and what it will be able to do in the near future, is flavoured not so much by current research and existing products as by popular culture. This may be true not only for the general public, but also for researchers, who may work with unrealistic robot designs.
  • Chip (Not quite human) 1972
  • Data (Star Trek), WALL-E, Johnny 5, ABC Warriors, SID 6.7, KITT (Knight Rider)
  • However, for many laymen, if a machine appears to be able to control its arms or limbs, and especially if it appears anthropomorphic or zoomorphic (e.g. ASIMO or Aibo), it will be called a robot. If you're of a particular disposition, you will see robots everywhere in nature and in the objects around you. These are pictures taken, without permission, from the photo-sharing site Flickr's "Hello Little Fella" group, where people have been aggregating photos of everyday objects that look like faces for years now.
  • Do we start to feel bad because we're unable to read an emotion on a robot? Are we made to feel inferior or superior?
  • In a way we often lack services around these robots. Perhaps, if these robots were available on a rental basis, we might start to try them out more easily. Robot servicing and rental, not unlike cars: we build incredibly rich relationships with our cars, and there's no reason why the robotics industry shouldn't be thinking about itself in more marketable terms.
  • Living in 2011 means acknowledging that most of us who even have the time to care about robots know a great deal about technology and have, to a certain extent, adapted quite well to it. Relationships with other people were always complicated, and we have used technology not only to get closer to each other but also to avoid each other more easily. In the context of research, this means there might not be much of a point in trying to develop speech recognition, as we're very good at pressing buttons and texting, for example. The uncanny valley of voice might not be what we need to aim for.
  • A useful way to frame this research is also as a way to reach out to the connected masses, constantly communicating across platforms. For them, the migration I showed earlier will seem natural, and they will be more forgiving of idiosyncrasies in the system than an adult would be.
  • There's also something about how we are slowly starting to categorise the world of our social relationships in ways that robots (both physical and, more importantly, virtual) understand and can compute. The machine intelligence developed by platforms like Facebook is built on top of our slow acceptance of social behaviours that are no longer social but about ticking boxes. For those reasons, and on that forward trajectory, we have in many ways made ourselves more user-friendly to robots, and perhaps no longer need them to become very smart or to read our emotions through complex real-time face-tracking. A simple text or emoticon will do, will be cheaper, and will be acceptable to the user.
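To make the point concrete, reading affect from emoticons really can be this cheap compared to real-time face tracking. A deliberately naive sketch, with assumed emoticon lists:

```python
# Trivial emoticon-based affect reading, a stand-in for face tracking.
# The emoticon sets are illustrative assumptions, not an exhaustive list.
POSITIVE = {":)", ":-)", ":D", "<3"}
NEGATIVE = {":(", ":-(", ":'(", ">:("}

def read_affect(message):
    """Infer a coarse affective state from emoticons in a text message."""
    tokens = message.split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```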
  • Technologies are often defined by what people find themselves learning about and evolving with. No one had dreams of the iPhone, or anxieties about how it could "take over", in the way we developed apocalyptic scenarios around robots.
  • Us vs them. In our dreams of robots, they rarely have emotions (with Marvin being an important exception), to the point where we are now fascinated by the idea that they might develop them, because that was a clear way to define ourselves and our humanity. We could learn about technology, develop ways to understand computers, learn how they can understand us better, learn how to program and how to type emoticons to express emotions through them; and now they are learning from us, so that they may interact better with us, or so that we may use them to understand each other better.
  • Friendship and companionship are long-term things. "Please, tame me!" he said. "I want to, very much," the little prince replied. "But I have not much time. I have friends to discover, and a great many things to understand." "One only understands the things that one tames," said the fox. "Men have no more time to understand anything. They buy things all ready-made at the shops. But there is no shop anywhere where one can buy friendship, and so men have no friends any more. If you want a friend, tame me…" But you can think of a million times when, even as a human, you didn't have that intelligence: contexts where those clues are ignored, a million exceptions to prove the rule. Surround yourself with enough technologists and you'll find it rarely emerges as a behaviour; walk into a café with a group of women chatting and the collective emotional intelligence rises tenfold. Perhaps, after all this, it's more relevant to ask whether we make good companions rather than to try to create the perfect mates for our socially complex world.
  • Please get in touch with me or Dorothee Loziak if you have any questions about Lirec. Thanks!

    1. Chasing the future: Modern & cultural challenges of designing emotional robots
    2. Exploring Living with Interactive & Digital Companions
    3. 10 partners
    4. 5 robots
    5. What is a Companion? A friend or acquaintance you associate yourself with.
    6. Companions (compañeros)
    7. Companions
    8. Companions
    10. Your first companion?
    12. What makes a good companion? Understand users & their emotions · Understand context · Grow old gracefully · Learn to forget · Adapt
    13. What makes a good companion? Understand users & their emotions: understand, and mimic to show they've understood
    18. What makes a good companion? Understand users & their emotions · Understand context
    19. "If the sum of positive emotions is bigger than the sum of the negative emotions over the last n time periods then the agent is in a good mood; otherwise it is in a bad mood."
    20. What makes a good companion? Understand users & their emotions · Understand context · Grow old gracefully
    21. Floppy Neck Syndrome
    22. What makes a good companion? Understand users & their emotions · Understand context · Grow old gracefully · Learn to forget
    23. What makes a good companion? Understand users & their emotions · Understand context · Grow old gracefully · Learn to forget · Adapt
    26. So what?
    27. Ceci n'est pas un robot
    30. Hi!
    31. Are we designing ourselves out of the equation?
    32. Do we need new services for these companions?
    33. Being human now.
    34. Connecting the dots.
    35. Gentle robotisation?
    36. Who is learning?
    37. "You're vulnerable, you're vulnerable, You are not a robot, You're loveable, so loveable." - Marina & The Diamonds
    38. Do we make good companions?
    39. Thank you @iotwatch