Journée Inter-GDR ISIS et Robotique: Interaction Homme-Robot

  1. Model of expressive gestures for humanoid robot NAO
     Quoc Anh Le, Telecom ParisTech
     Catherine Pelachaud, CNRS, Telecom ParisTech
  2. NAO robot
     • An autonomous, programmable, medium-size humanoid robot (57 cm)
     • Made by a French company (Aldebaran Robotics, Paris)
     • 25 degrees of freedom
     • 2 speakers to speak or play sound files
     • Controllable eye colors (LEDs)
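As a rough illustration of these capabilities, the sketch below drives speech and eye color through the NAOqi Python SDK; the IP address is a placeholder, and only documented NAOqi calls (ALTextToSpeech.say, ALLeds.fadeRGB) are used.

    # Minimal sketch of NAO's speech and eye-LED control via the NAOqi SDK.
    from naoqi import ALProxy

    ROBOT_IP, ROBOT_PORT = "192.168.1.10", 9559   # placeholder address

    tts = ALProxy("ALTextToSpeech", ROBOT_IP, ROBOT_PORT)
    tts.say("Hello, I am Nao.")                   # spoken through the 2 speakers

    leds = ALProxy("ALLeds", ROBOT_IP, ROBOT_PORT)
    leds.fadeRGB("FaceLeds", 0x0000FF, 1.0)       # fade the eye LEDs to blue over 1 s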
  3. Introduction
     • This work is part of the French national project ANR GVLEX
       – Partners: Aldebaran (coordinator), Acapela, LIMSI and Telecom-ParisTech
     • Objective: have the humanoid robot Nao read a story agreeably, through expressive verbal and non-verbal behaviors
     • Methodology: select and compute behaviors based on information extracted from the story: its structure, its various semantic and pragmatic elements, as well as its emotional content
     • Ongoing work at Telecom-ParisTech: control Nao's gestures using the platform of an existing virtual agent system, Greta
  4. Greta platform
     • Follows the SAIBA framework
     • Two representation languages:
       – FML: Function Markup Language
       – BML: Behavior Markup Language
  5. SAIBA framework
     • Goal: define interfaces at separate levels of abstraction for multimodal behavior generation
     • Structure: consists of 3 separate modules
       – Intent planning: planning of a communicative intent
       – Behavior planning: planning of the multimodal behaviors that carry this intent
       – Behavior realization: realization of the planned behaviors
  6. FML
     • Objective: encode the communicative and expressive intent that the agent aims to transmit; this may include emotional states, beliefs, goals, etc.
  7. BML
     • Objective: specify multimodal behaviors, with constraints, to be realized by the agent
     • Description covers:
       1. Occurrence of behaviors
       2. Relative timings of behaviors
       3. Form of behaviors / references to predefined animations
       4. Conditions / events / feedback
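To make this structure concrete, here is a hedged sketch that assembles a small BML fragment in Python. The element and attribute names (speech, gesture, lexeme, the "s1:start" sync reference) follow common BML conventions; the exact schema consumed by Greta may differ.

    # Sketch: composing a BML request that ties a gesture stroke to speech.
    import xml.etree.ElementTree as ET

    bml = ET.Element("bml", id="bml1")

    speech = ET.SubElement(bml, "speech", id="s1")
    ET.SubElement(speech, "text").text = "Three little pieces of night fell to Earth."

    # Occurrence + relative timing: the stroke is synchronized with speech onset.
    ET.SubElement(bml, "gesture", id="g1", lexeme="beat", stroke="s1:start")

    print(ET.tostring(bml, encoding="unicode"))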
  8. Steps
     1. Build a repertoire of gestures based on a video corpus
     2. Use the Greta platform to compute gestures for the robot:
        Text → Intent Planning → FML → Behavior Planning → BML → Behavior Realizer
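Read as code, the pipeline is a straightforward composition; the function names below are illustrative, not the actual Greta API.

    # Hedged sketch of the SAIBA-style pipeline used for the robot.
    def intent_planning(text):
        """Extract communicative intents from the story text (returns FML)."""
        return "<fml>...</fml>"        # placeholder FML document

    def behavior_planning(fml):
        """Choose multimodal behaviors carrying those intents (returns BML)."""
        return "<bml>...</bml>"        # placeholder BML document

    def behavior_realizer(bml):
        """Turn BML into joint trajectories on Nao (or animation for Greta)."""
        pass

    behavior_realizer(behavior_planning(intent_planning("Voilà bien longtemps...")))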
  9. Build a repertoire of gestures
     • Goal: collect expressive gestures of individuals in a specified context (storytellers)
     • Stages:
       1. Video collection
       2. Coding scheme and annotations
       3. Elaboration of symbolic gesture annotations
     • Pipeline: Videos → Gesture corpus → Gesture Editor → Repertoire
  10. Video collection
      • 6 actors from an amateur troupe were videotaped
      • Actors had received the script of the story beforehand
      • The text was displayed during the session so that they could read it from time to time
      • 2 digital cameras were used (front and side view)
      • Each actor was videotaped twice
        – 1st session as a training / warm-up session
        – the most expressive session was kept for analysis
  11. Video corpus
      • Total duration: 80 min
      • Average: 7 min per story
  12. Coding scheme and annotation
      • Coding scheme
        – Goal: enable the specification of gesture lexicons for Greta and Nao
        – Segmentation based on gesture phrases
        – Attributes:
          • Handedness: right hand / left hand / 2 hands
          • Category: deictic, iconic, metaphoric, beat, emblem (McNeill 05, Kendon 04)
          • Lexicon: 47 different entries
      • Annotations made with the Anvil tool (Kipp 01)
        – Current state: 125 gestures segmented for 1 actor
        – Rich in gestures: 23 gestures per minute for this subject
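A hedged sketch of what one annotation entry from this coding scheme might look like as a data structure (field names and values are illustrative, not the Anvil file format):

    # One segmented gesture phrase with the attributes listed above.
    from dataclasses import dataclass

    @dataclass
    class GestureAnnotation:
        start: float        # seconds: onset of the gesture phrase
        end: float          # seconds: offset of the gesture phrase
        handedness: str     # "right" | "left" | "both"
        category: str       # "deictic" | "iconic" | "metaphoric" | "beat" | "emblem"
        lexeme: str         # one of the 47 lexicon entries

    # Invented example values, for illustration only:
    example = GestureAnnotation(12.4, 13.1, "right", "beat", "emphasis")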
  13. Annotation (figure)
  14. Gesture Editor
      • Gestures are described symbolically:
        – Gesture phases: preparation, stroke, hold, relaxation
        – Wrist position
        – Palm orientation
        – Finger orientation
        – Finger shape
        – Movement trajectory
        – Symmetry (one hand, two hands, ...)
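The symbolic description above maps naturally onto a small data model; this is a sketch with illustrative attribute names, not the editor's actual format:

    # Phase-by-phase symbolic description of a gesture.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class GesturePhase:
        phase_type: str          # "preparation" | "stroke" | "hold" | "relaxation"
        wrist_position: str      # symbolic position code (see the next slide)
        palm_orientation: str    # e.g. "up", "toward_other"
        finger_orientation: str  # e.g. "forward"
        finger_shape: str        # e.g. "open", "fist"
        trajectory: str          # e.g. "linear", "curved"

    @dataclass
    class SymbolicGesture:
        name: str                # lexicon entry
        symmetry: str            # "one_hand", "two_hands", ...
        phases: List[GesturePhase] = field(default_factory=list)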
  15. (figure only)
  16. Predefined positions
      • Pre-calculate joint values for all combinations of hand positions in 3D space (vertical, horizontal, distance) using Choregraphe
      • Current state: 105 positions, corresponding to 7 vertical values, 5 horizontal values and 3 distance values (codes 000, 001, …, 462)
      • Replace symbolic positions with real joint values when compiling
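A hedged sketch of how such a table could be used at compile time; the joint names are NAO's left-arm chain, but every angle value below is a placeholder, not the precomputed Choregraphe output:

    # Lookup table keyed by the 3-digit position code (one digit per axis:
    # vertical, horizontal, distance), resolved to joint angles in radians.
    PREDEFINED_POSITIONS = {
        "000": [1.40, 0.20, -1.20, -0.50],   # [LShoulderPitch, LShoulderRoll,
        "462": [0.60, 0.45, -0.80, -1.00],   #  LElbowYaw, LElbowRoll]
        # ... 103 further entries precomputed with Choregraphe
    }

    def resolve_position(code):
        """Replace a symbolic position code with real joint values at compile time."""
        return PREDEFINED_POSITIONS[code]

    # Sending a resolved position to the robot could then use NAOqi's ALMotion:
    # motion.angleInterpolation(
    #     ["LShoulderPitch", "LShoulderRoll", "LElbowYaw", "LElbowRoll"],
    #     resolve_position("462"), [0.6, 0.6, 0.6, 0.6], True)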
  17. Predefined animations
      • Gestures or behavioral scripts pre-programmed with Choregraphe
      • Referenced from BML
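A minimal sketch of resolving such a reference at runtime, assuming the scripts are installed on the robot as behaviors; the behavior name is hypothetical, while isBehaviorInstalled and runBehavior are documented ALBehaviorManager calls:

    # Map a BML animation reference onto an installed Choregraphe behavior.
    from naoqi import ALProxy

    behaviors = ALProxy("ALBehaviorManager", "192.168.1.10", 9559)  # placeholder IP

    def play_predefined(bml_reference):
        if behaviors.isBehaviorInstalled(bml_reference):
            behaviors.runBehavior(bml_reference)   # blocks until the behavior ends

    play_predefined("gvlex/three_pieces_of_night")  # hypothetical behavior name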
  18. Behavioral scripts using Choregraphe
      Story excerpt: « Voilà bien longtemps, un soir de printemps, trois petits morceaux de nuit se détachèrent du ciel et tombèrent sur Terre… » ("A long time ago, one spring evening, three little pieces of night broke away from the sky and fell to Earth…")
  19. Gesture lexicon
      • Different degrees of freedom between agents
      • A variant of a gesture belongs to a family of gestures that shares
        – the same meaning (e.g. to stop someone)
        – a core signal (e.g. vertical flat hand toward the other)
      • Gestures within a family may differ in the non-core signals they use
      • Construction of a common lexicon with
        – Greta-Gestuary
        – Nao-Gestuary
      • Within each specific lexicon, variants share the same meaning and core signal (a sketch of one entry follows)
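As a sketch, one entry of such a common lexicon might separate the shared core from agent-specific variants; all values below are illustrative:

    # A gesture family: one meaning, one core signal, per-agent variants.
    LEXICON = {
        "stop": {
            "meaning": "to stop someone",
            "core": {"hand_shape": "flat", "orientation": "vertical",
                     "palm": "toward_other"},
            "variants": {
                "greta": {"fingers": "spread", "wrist_dynamics": True},   # Greta-Gestuary
                "nao":   {"fingers": "open",   "wrist_dynamics": False},  # Nao-Gestuary
            },
        },
    }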
  20. Robot vs. Greta: the robot's constraints
      • Fewer degrees of freedom
      • No dynamic wrists
      • Three fingers that open or close together
      • Limited movement speed (> 0.5 seconds per movement)
      • Singular positions
      ⇒ Gestures may not be identical, but should convey a similar meaning
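A hedged sketch of what enforcing these constraints could look like when retargeting a gesture phase to NAO; the thresholds and field names are illustrative:

    # Adapt a planned gesture phase to NAO's physical limits.
    MIN_MOVE_DURATION = 0.5   # seconds, per the speed constraint above

    def adapt_for_nao(phase):
        # Slow the phase down if it is faster than the robot can execute.
        phase["duration"] = max(phase.get("duration", 0.0), MIN_MOVE_DURATION)
        # NAO's three fingers open or close together: collapse detailed
        # hand shapes to the two realizable ones.
        if phase.get("finger_shape") not in ("open", "closed"):
            phase["finger_shape"] = "open"
        return phase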
  21. Gesture: Stop
  22. Gesture: Stop
  23. Demo: Three pieces of night
  24. Demo: Choregraphe script
  25. Future work
      • Synchronize the robot's gestures with its speech
      • Define the invariant signification of gestures
      • Define different levels of BML description for gestures
      • Define evaluation methods
  26. Thank you for your attention. Any questions?