Ai

  1. KISMET: AN APPLICATION OF HUMANOID ROBOTICS
  2. FLOW OF PRESENTATION <ul><li>INTRODUCTION </li></ul><ul><li>OBJECTIVE </li></ul><ul><li>KISMET - THE ROBOT </li></ul><ul><li>OVERVIEW OF HARDWARE COMPONENTS </li></ul><ul><li>FRAMEWORK </li></ul><ul><li>WORKING OF PERCEPTS </li></ul><ul><li>IMPLICATIONS OF NEURAL NETWORKS </li></ul><ul><li>NEURAL MODEL </li></ul>
  3. INTRODUCTION <ul><li>A humanoid robot is an autonomous robot: it can adapt to changes in its environment or itself and continue to pursue its goal. </li></ul><ul><li>Because humanoid robots share a similar morphology with humans, they can communicate in a manner that supports humans' natural communication modalities. </li></ul><ul><li>Examples include facial expression, body posture, gesture, gaze direction, and voice. </li></ul>
  4. OBJECTIVE <ul><li>A GENERAL FRAMEWORK THAT INTEGRATES PERCEPTION, ATTENTION, DRIVES, EMOTIONS, BEHAVIOUR SELECTION, AND MOTOR ACTS. </li></ul><ul><li>OUTCOME </li></ul><ul><li>THE ROBOT RESPONDS WITH EXPRESSIVE DISPLAYS THAT REFLECT AN EVER-CHANGING INTERNAL STATE. </li></ul>
  5. KISMET - THE ROBOT <ul><li>Kismet is an expressive robotic creature with perceptual and motor modalities that imitate natural human behaviour. </li></ul><ul><li>Input: video and auditory sensors. </li></ul><ul><li>Output: vocalizations, facial expressions, and motor capabilities to adjust the gaze direction of the eyes and the orientation of the head. </li></ul>
  6. KISMET'S EXPRESSIONS: Calm, Interest, Angry, Happy, Sad, Surprise
  7. OVERVIEW OF HARDWARE COMPONENTS <ul><li>VISION SYSTEM </li></ul><ul><li>AUDITORY SYSTEM </li></ul><ul><li>EXPRESSION MOTOR SYSTEM </li></ul><ul><li>VOCALIZATION SYSTEM </li></ul>
  8. FRAMEWORK <ul><li>Kismet is an autonomous robot designed for social interactions with humans. </li></ul><ul><li>APPROACH </li></ul><ul><li>Modeled on the way infants learn to communicate with adults. </li></ul>
  9. FRAMEWORK VIEW
  10. COMPONENTS <ul><li>The system architecture consists of six subsystems: </li></ul><ul><li>Low-level feature extraction system. </li></ul><ul><li>High-level perception system. </li></ul><ul><li>Attention system. </li></ul><ul><li>Motivation system. </li></ul><ul><li>Behavior system. </li></ul><ul><li>Motor system. </li></ul>
  11. The Low-Level Feature Extraction System <ul><li>Responsible for processing raw sensory information into quantities that have behavioral significance for the robot. </li></ul><ul><li>For instance, visual and auditory cues such as detecting eyes and recognizing vocal affect are important to infants. </li></ul>
  12. WORKING OF THE PERCEPTUAL SYSTEM <ul><li>THE MOTION DETECTION PROCESS RECEIVES A DIGITIZED 128 x 128 IMAGE FROM THE CAMERA. </li></ul><ul><li>INCOMING IMAGES ARE STORED IN A RING OF 3 BUFFERS. </li></ul><ul><li>ONE BUFFER HOLDS THE CURRENT IMAGE I_0, ONE HOLDS THE PREVIOUS IMAGE I_1, AND THE THIRD RECEIVES NEW INPUT. </li></ul><ul><li>THE ABSOLUTE VALUE OF THE DIFFERENCE IS THRESHOLDED TO GIVE THE RAW MOTION IMAGE: I_RAW = T( | I_0 - I_1 | ). </li></ul><ul><li>USES THE RATIO TEMPLATE ALGORITHM. </li></ul>
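The thresholded frame-differencing step above can be sketched in a few lines. This is a minimal illustration assuming 128 x 128 grayscale frames; the `THRESHOLD` value and buffer names are hypothetical choices, not Kismet's actual code.

```python
import numpy as np

THRESHOLD = 30  # assumed per-pixel difference threshold T

def detect_motion(current: np.ndarray, previous: np.ndarray) -> np.ndarray:
    """Return a binary motion mask: T(|I_0 - I_1|)."""
    # Widen to int16 so the subtraction of uint8 pixels cannot wrap around.
    diff = np.abs(current.astype(np.int16) - previous.astype(np.int16))
    return (diff > THRESHOLD).astype(np.uint8)

# A ring of three buffers: current image, previous image, and one being filled.
buffers = [np.zeros((128, 128), dtype=np.uint8) for _ in range(3)]

# Example: a bright 10 x 10 patch appears between two frames.
prev_frame = np.zeros((128, 128), dtype=np.uint8)
curr_frame = prev_frame.copy()
curr_frame[60:70, 60:70] = 255
mask = detect_motion(curr_frame, prev_frame)
print(mask.sum())  # 100 moving pixels
```

The cast to a wider integer type before subtracting is the one non-obvious detail: differencing raw `uint8` frames directly would silently wrap negative values.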
  13. FACE DETECTION <ul><li>DETERMINE POTENTIAL FACE LOCATIONS WITH THE RATIO TEMPLATE ALGORITHM. </li></ul>
  14. The Attention System <ul><li>The low-level visual percepts are sent to the attention system. </li></ul><ul><li>This system picks out salient low-level perceptual stimuli and directs the robot's attention and gaze toward them. </li></ul>
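One common way to realize "picking out" a stimulus is a winner-take-all over weighted saliency maps; the toy sketch below assumes that scheme. The feature names and gain values are hypothetical illustrations, not Kismet's actual weights.

```python
import numpy as np

def attend(feature_maps: dict, gains: dict) -> tuple:
    """Combine per-feature saliency maps and return the most salient pixel."""
    combined = sum(gains[name] * fmap for name, fmap in feature_maps.items())
    y, x = np.unravel_index(np.argmax(combined), combined.shape)
    return (int(y), int(x))

# Two toy feature maps: a strong motion stimulus and a weaker color stimulus.
motion = np.zeros((8, 8)); motion[2, 3] = 1.0
color  = np.zeros((8, 8)); color[5, 5] = 0.4
target = attend({"motion": motion, "color": color},
                {"motion": 1.0, "color": 1.0})
print(target)  # (2, 3): the strong motion stimulus wins
```

Adjusting the gains shifts which kind of stimulus wins, which is how an attention system can be biased by the robot's current motivational state.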
  15. The Perceptual System <ul><li>Here the low-level features corresponding to the target stimuli of the attention system are encapsulated into behaviorally relevant percepts. </li></ul><ul><li>Each behavior and emotive response has an associated releaser. </li></ul><ul><li>A releaser can be viewed as a collection of feature detectors that are minimally necessary to identify a particular object or event of behavioral significance. </li></ul>
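A releaser, read literally as "the minimal set of feature detectors that must all fire," can be sketched as a simple conjunction. The feature names below are hypothetical examples.

```python
def make_releaser(required_features: set):
    """Build a releaser that fires only when all required features are detected."""
    def releaser(detected_features: set) -> bool:
        return required_features.issubset(detected_features)
    return releaser

# Hypothetical releaser for a "toy present" percept.
toy_present = make_releaser({"small", "colorful", "moving"})
print(toy_present({"small", "colorful", "moving", "near"}))  # True
print(toy_present({"small", "moving"}))                      # False
```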
  16. The Motivation System <ul><li>It consists of the robot's basic “drives” and “emotions”. </li></ul><ul><li>The “drives” represent the basic “needs” of the robot and are modeled as simple homeostatic regulation mechanisms. </li></ul><ul><li>The “emotions” are modeled from a functional perspective. </li></ul><ul><li>Currently, six basic emotions are modeled, giving the robot synthetic analogs of anger, disgust, fear, joy, sorrow, and surprise. </li></ul>
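Homeostatic regulation of a drive can be sketched as a value that drifts away from a set point over time and is pushed back by a satiating stimulus. The ranges and rates below are illustrative assumptions, not Kismet's parameters.

```python
class Drive:
    """A toy homeostatic drive: 0.0 is the set point, +/-1.0 the extremes."""
    def __init__(self, drift: float = 0.1, lo: float = -1.0, hi: float = 1.0):
        self.value = 0.0          # start at the homeostatic set point
        self.drift = drift        # how fast the need grows per time step
        self.lo, self.hi = lo, hi

    def step(self, satiation: float = 0.0) -> float:
        # The drive intensifies each step unless a satiating stimulus offsets it.
        self.value = min(self.hi, max(self.lo, self.value + self.drift - satiation))
        return self.value

social = Drive()
for _ in range(5):              # no interaction: the social drive intensifies
    social.step()
print(round(social.value, 2))   # 0.5
social.step(satiation=0.6)      # a social stimulus restores the balance
print(round(social.value, 2))   # 0.0
```

Behaviors can then read the drive's value to decide how urgently it needs tending.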
  17. The Behavior System <ul><li>Organizes the robot's task-based behaviors into a coherent structure. </li></ul><ul><li>Each behavior is viewed as a self-interested, goal-directed entity that competes with other behaviors to establish the current task. </li></ul><ul><li>Given that the robot has several motivations to tend to and different behaviors it can use to achieve them, an arbitration mechanism is required to determine which behavior(s) to activate and for how long. </li></ul><ul><li>The main responsibility of the behavior system is to carry out this arbitration. </li></ul>
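The competition between self-interested behaviors can be sketched as a winner-take-all arbitration: each behavior scores itself from its relevant drives and percepts, and the highest-scoring one takes control. The behavior names and activation values below are hypothetical.

```python
def arbitrate(behaviors: dict) -> str:
    """behaviors maps a behavior name to its activation; return the winner."""
    return max(behaviors, key=behaviors.get)

# Hypothetical activations: the social drive is high and no face is in view,
# so seeking a person out-competes the alternatives.
activations = {
    "seek-person":   0.8,
    "play-with-toy": 0.3,
    "sleep":         0.1,
}
print(arbitrate(activations))  # seek-person
```

A real arbiter would also add persistence (hysteresis) so the winner is not re-decided every tick, which is the "for how long" part of the slide's question.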
  18. The Motor System <ul><li>It arbitrates the robot's motor skills and expressions. </li></ul><ul><li>It determines how to move the robot so as to carry out the chosen course of action. </li></ul><ul><li>Coordinates body posture, gaze direction, vocalizations, and facial expressions. </li></ul>
  19. IMPLICATIONS OF NEURAL NETWORKS <ul><li>They work more like the human brain. </li></ul><ul><li>They can handle ambiguity better than rule-based systems. </li></ul><ul><li>Their ability to &quot;learn&quot; is a helpful tool for adding more human-like behaviors. </li></ul>
  20. NEURAL NETWORK AS ROBOT BRAIN <ul><li>INPUT: SENSORS </li></ul><ul><li>OUTPUT: MOTORS </li></ul>
  21. MODEL WORKING OF THE NEURAL BRAIN <ul><li>The relationship between the sensors and the motors can be described by the following table (where +1 means on and -1 means off): </li></ul><ul><li>sensor1 sensor2 motor1 motor2 </li></ul><ul><li>+1 +1 -1 -1 </li></ul><ul><li>+1 -1 +1 -1 </li></ul><ul><li>-1 +1 -1 +1 </li></ul><ul><li>-1 -1 +1 +1 </li></ul>
  22. CONTD... <ul><li>SENSORS </li></ul><ul><li>PRE-PROCESSING: Sensor data is modified to fit into an input vector. </li></ul><ul><li>INPUT VECTOR: This is the list of inputs for the neural network. From a mathematical point of view, a neural network's connections form a matrix, so its inputs are treated as vectors. </li></ul>
  23. CONTD... <ul><li>NEURAL BRAIN: This is where the real knowledge is stored. It consists of interconnected neural cells; a (mathematical) matrix is used to describe these connections. </li></ul><ul><li>OUTPUT VECTOR: This is the list of outputs from the neural network, the result of multiplying the input vector by the brain matrix. </li></ul><ul><li>ACTION FUNCTION: Usually the neural network cannot yield directly usable results, so a separate function is required to translate the output vector into a specific reaction. </li></ul><ul><li>MOTORS: The translated reaction drives the motors. </li></ul>
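The whole sensor-to-motor pipeline above can be sketched as a matrix multiplication followed by a sign-threshold action function. The weight matrix below is one hypothetical choice that happens to reproduce the table on slide 21 (motor1 = -sensor2, motor2 = -sensor1); it is not taken from Kismet itself.

```python
import numpy as np

# The "neural brain": a connection matrix from sensors to motors.
brain = np.array([[ 0, -1],
                  [-1,  0]])          # motor1 = -sensor2, motor2 = -sensor1

def act(sensors) -> list:
    x = np.array(sensors)             # input vector (pre-processed sensor data)
    out = x @ brain                   # output vector: input times brain matrix
    return np.sign(out).tolist()      # action function: threshold to +1 / -1

# Reproduce all four rows of the sensor/motor table.
for s in [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]:
    print(s, "->", act(s))
```

Because this particular mapping is linear, a single weight matrix suffices; a mapping like XOR would need a hidden layer, which is exactly where "learning" the weights becomes useful.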
  24. CHALLENGES <ul><li>This research aims not only to build an open-ended learning system, but also to build a system that humans can interact with and train in a natural, instinctive manner. Humans are highly social creatures and use a variety of cues and modalities to communicate with each other. Building systems that can exploit and understand similar social cues could make machines easier for people to use and enable humans to communicate with machines in richer ways. </li></ul>
  25. THANK YOU
