A Robot's Experience of Another Robot: Simulation

Published on: Wai presentation, May 2008



1. A Robot's Experience of Another Robot: Simulation
   Tibor Bosse, Johan F. Hoorn, Matthijs Pontier, Ghazanfar F. Siddiqui
2. Overview of this presentation
   - Introduction to the I-PEFiC model
   - Implementation
   - Results of the simulation experiments
   - Discussion
   - Future steps
3. Goal of this study
   - Let a virtual agent or robot perceive another agent, or a human
   - By determining its engagement with its interaction partner, a virtual agent can show an appropriate affective response
   ⇒ Formalise the well-validated I-PEFiC model of how humans perceive virtual characters, and use it to let virtual agents perceive each other, or their users
   - Interesting for any application that requires human-like emotional behaviour / communication, e.g. (serious) games, e-therapy, agents that serve as practice subjects for therapists or police officers, etc.
4. Variables in the I-PEFiC model
   - Ethics: is the agent to meet good or bad?
   - Aesthetics: is the agent to meet beautiful or ugly?
   - Epistemics: is the agent to meet realistic or unrealistic?
   - Affordances: is the agent an aid or an obstacle (regarding my goals)?
   - Similarity: is the agent like me?
   - Relevance: is the agent relevant to me (regarding my goals)?
   - Valence: is the expected outcome of interacting with the agent good or bad?
   - Involvement: the subjective tendency to approach
   - Distance: the subjective tendency to avoid
   - All variables are unipolar
5. Introduction to the I-PEFiC model
6. Domain
   - The soccer agents you might remember from earlier presentations
   - Virtual soccer fans communicate with each other in a virtual world
   - They can decide to meet each other and talk about certain conversation subjects
7. Implementation
   - Agents perceive each other's features (good, bad, realistic, unrealistic, etc.) using a designed value in [0, 1] (what the average agent would think) and a personal bias in [0, 2]; e.g.:
     Perceived(Beautiful, A1, A2) = Bias(A1, A2, Beautiful) * Designed(Beautiful, A2)
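As an illustration, the sketch below implements this product of bias and designed value in Python; the Agent class and all attribute names are hypothetical, not taken from the paper.

```python
# A minimal sketch of feature perception; Agent, designed, bias and
# perceive are invented names for illustration only.

class Agent:
    def __init__(self, name, designed, bias):
        self.name = name
        self.designed = designed  # designed values in [0, 1], per feature
        self.bias = bias          # personal biases in [0, 2], keyed by (other agent, feature)

    def perceive(self, other, feature):
        # Perceived(Feature, self, other) = Bias(self, other, Feature) * Designed(Feature, other)
        return self.bias.get((other.name, feature), 1.0) * other.designed[feature]

a2 = Agent("A2", designed={"beautiful": 0.8}, bias={})
a1 = Agent("A1", designed={"beautiful": 0.5},
           bias={("A2", "beautiful"): 1.2})
print(a1.perceive(a2, "beautiful"))  # 1.2 * 0.8 ≈ 0.96
```

A missing bias defaults to 1.0 here, i.e. the agent perceives the designed value unchanged; that neutral default is a design choice of this sketch, not something stated in the slides.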
8. Implementation: Ethics
   - A1 perceives A2's ethics according to the club clothes A2 is wearing; e.g.:
     Perceived(Good, A1, A2) = Satisfaction(A1, Club(A2))
   ⇒ Teammates are good guys
   ⇒ Supporters of rival teams are bad guys
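A minimal sketch of this rule; the satisfaction table and the club names are invented placeholders, not values from the paper.

```python
# Sketch of the ethics rule: perceived goodness equals the perceiver's
# satisfaction with the club whose clothes the other agent wears.

satisfaction = {"A1": {"Ajax": 0.9, "Feyenoord": 0.1}}  # A1's satisfaction per club
club = {"A1": "Ajax", "A2": "Feyenoord"}                # club clothes each agent wears

def perceived_good(a1, a2):
    # Perceived(Good, A1, A2) = Satisfaction(A1, Club(A2))
    return satisfaction[a1][club[a2]]

print(perceived_good("A1", "A2"))  # 0.1: a rival supporter is a "bad guy"
```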
9. Implementation: Affordances
   - A1 perceives A2's affordances according to the expected communication possibilities, based on presuppositions about A2's language skills (in Dutch, English and Urdu) derived from A2's outer appearance; e.g.:
     Perceived(Aid, A1, A2) = Σ_language ( ExpectedSkill(A1, A2, language) * Skill(A1, language) )
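The sketch below spells out this sum over the three languages; the skill tables are invented, and how the paper scales the resulting sum back into [0, 1] is not shown here.

```python
# Sketch of the affordance estimate: sum, over languages, of the skill
# A1 expects A2 to have times A1's own skill in that language.

LANGUAGES = ("Dutch", "English", "Urdu")

skill = {"A1": {"Dutch": 1.0, "English": 0.6, "Urdu": 0.0}}    # A1's own skills
expected_skill = {                                             # A1's guess of A2's skills,
    ("A1", "A2"): {"Dutch": 0.8, "English": 0.5, "Urdu": 0.0}  # based on A2's appearance
}

def perceived_aid(a1, a2):
    # Perceived(Aid, A1, A2) = sum over languages of ExpectedSkill * own Skill
    return sum(expected_skill[(a1, a2)][lang] * skill[a1][lang] for lang in LANGUAGES)

print(perceived_aid("A1", "A2"))  # 0.8*1.0 + 0.5*0.6 + 0.0*0.0 ≈ 1.1
```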
10. Implementation: Similarity
    - Agents perceive their own features:
      Perceived(Feature, A1, A1) = Bias(A1, A1, Feature) * Designed(Feature, A1)
    - Agents compare this self-perception with their perception of the other agent to determine similarity:
      Similarity(A1, A2) = 1 − Σ_feature ( β_sim,feature * |Perceived(Feature, A1, A2) − Perceived(Feature, A1, A1)| )
    - β_sim,feature is the (regression) weight of each feature for similarity
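A sketch of this computation follows; the feature set and the weights are hypothetical stand-ins for the paper's regression weights.

```python
# Sketch of similarity: one minus the weighted sum of absolute
# differences between self-perception and perception of the other.

FEATURES = ("good", "beautiful", "realistic")
beta_sim = {"good": 0.4, "beautiful": 0.3, "realistic": 0.3}  # invented weights, summing to 1

def similarity(perceived_other, perceived_self):
    # Similarity(A1, A2) =
    #   1 - sum over features of beta_sim * |Perceived(F, A1, A2) - Perceived(F, A1, A1)|
    return 1 - sum(beta_sim[f] * abs(perceived_other[f] - perceived_self[f])
                   for f in FEATURES)

view_of_a2 = {"good": 0.2, "beautiful": 0.9, "realistic": 0.7}    # A1's perception of A2
view_of_self = {"good": 0.8, "beautiful": 0.6, "realistic": 0.7}  # A1's self-perception
print(similarity(view_of_a2, view_of_self))  # 1 - (0.4*0.6 + 0.3*0.3 + 0.3*0.0) ≈ 0.67
```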
11. Relevance, Valence and Engagement
    - All formulas have the form:
      A = β_B*B + β_C*C + β_D*D + β_CD*C*D
    - β_B = (regression) weight of the main effect of B on A
    - β_CD = weight of the interaction effect of C and D on A
12. Relevance, Valence and Engagement
    Effects on Relevance:
      Main effects: Ethics, Aesthetics, Epistemics, Affordances
      Interaction effects: Ethics × Affordances; Ethics × Aesthetics × Epistemics
    Effects on Valence:
      Main effects: Ethics, Aesthetics, Epistemics, Affordances
      Interaction effects: Ethics × Affordances; Ethics × Aesthetics × Epistemics
    Effects on Engagement:
      Main effects: Similarity, Relevance, Valence
      Interaction effects: Relevance × Valence
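As an illustration, the sketch below instantiates the general regression form from slide 11 for Engagement, following the table above; the weight values are invented, not the paper's fitted weights.

```python
# Sketch of Engagement as a regression: main effects of Similarity,
# Relevance and Valence plus the Relevance x Valence interaction.

def engagement(similarity, relevance, valence,
               b_sim=0.3, b_rel=0.3, b_val=0.2, b_rel_val=0.2):
    return (b_sim * similarity + b_rel * relevance +
            b_val * valence + b_rel_val * relevance * valence)

print(engagement(similarity=0.67, relevance=0.8, valence=0.6))
# 0.3*0.67 + 0.3*0.8 + 0.2*0.6 + 0.2*0.8*0.6 ≈ 0.657
```

Relevance and Valence would themselves be computed with the same form, using the Ethics, Aesthetics, Epistemics and Affordances effects listed in the table.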
13. Conclusions from experiments
    - Higher values for attributes result in higher Involvement / Distance
    - Involvement / Distance from A1 to A2:
                                                               Inv.   Dist.
      A2 is an aid for A1 (speaks the same language)            ++     --
      A2 is an obstacle for A1 (speaks a different language)    --     ++
      A2 is a good guy (wears the same club clothes)            ++     +
      A2 is a bad guy (wears rival club clothes)                +      ++
      A2 is beautiful                                           ++     +
      A2 is ugly                                                +      ++
      A2 is realistic                                           ++     +
      A2 is unrealistic                                         +      ++
14. Conclusions from experiments
    - The relative differences between realistic / unrealistic are smaller than those between beautiful / ugly
    - In other words, realistic / unrealistic have less influence than beautiful / ugly
15. Discussion
    - The model is adequate for simulating engagement
    - Positive features do not exclusively increase involvement, and negative features do not exclusively increase distance; this corresponds to empirical evidence
16. Future Steps
    - Combine I-PEFiC (emotion elicitation) with Gross's model (emotion regulation), so that virtual agents can perform affective response selection
    - Validate the model against empirical data on human affective trade-off processes
17. Questions?
