The document presents a method for automatic demonstration and feature selection for robot learning. It describes an experiment where a robot learns a task from user demonstrations of placing a green object on a red object. The method, called Dissimilarity Mapping Filtering (DMF), is able to correctly select the relevant demonstrations and features for the task. It identifies the last two demonstrations as incorrect and discards features related to the initial position of the green object as irrelevant. The results validate that DMF can accurately perform demonstration and feature selection to support robot learning.
1. Automatic Demonstration and Feature Selection
for Robot Learning
Santiago Morante, Juan G. Victores, and Carlos Balaguer
smorante@ing.uc3m.es
IEEE RAS Humanoids Conference 2015
UC3M/Robotics Lab
2. Introduction
● Robot learning frameworks, such as Programming by Demonstration, are based on learning tasks from sets of user demonstrations.
● Motivation: in their naïve implementation, these frameworks assume that all the data from the user demonstrations has been correctly sensed and is relevant to the task.
● Proposed solution: demonstration selection + feature selection.
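The slides name the method (Dissimilarity Mapping Filtering, DMF) but do not detail its stages here. As a rough illustration only, the pipeline can be sketched as pairwise dissimilarity between demonstrations, a mapping that reduces each demonstration's row to a scalar score, and a filter that discards statistical outliers. The `resample` helper, the Euclidean dissimilarity, and the z-score filter below are stand-in choices for this sketch, not necessarily the paper's interchangeable algorithms:

```python
import numpy as np

def resample(demo, length=50):
    """Linearly resample a (T, F) trajectory to a common length.
    Stand-in for time alignment; the paper may align trajectories differently."""
    t_old = np.linspace(0.0, 1.0, len(demo))
    t_new = np.linspace(0.0, 1.0, length)
    return np.column_stack(
        [np.interp(t_new, t_old, demo[:, f]) for f in range(demo.shape[1])]
    )

def dmf_select(demos, z_thresh=1.5):
    """Minimal Dissimilarity Mapping Filtering sketch:
    1) Dissimilarity: pairwise distances between demonstrations.
    2) Mapping: reduce each row of the matrix to a scalar (here, a sum).
    3) Filtering: keep demonstrations whose score is not an outlier."""
    n = len(demos)
    aligned = [resample(d) for d in demos]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.linalg.norm(aligned[i] - aligned[j])
    scores = D.sum(axis=1)                       # mapping step
    z = (scores - scores.mean()) / scores.std()  # filtering step (z-score)
    return [i for i in range(n) if abs(z[i]) < z_thresh]
```

With several consistent demonstrations and a couple of erratic ones, the erratic pair accumulates large distances to everything else and is filtered out.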
4. Experiment: Overview
● Task: putting the green object on top of the red object.
● First goal: the robot has to distinguish correct from incorrect demonstrations.
● Second goal: distinguish relevant from irrelevant features for the task.
5. Experiment: CGDA
● Continuous Goal-Directed Actions (CGDA): focused on changes in the environment caused by an action.
● Motivation: answers the question of what to imitate in robot imitation.
● Procedure: tracking object features (color, area, spatial position, etc.) continuously in time. Only spatial features are analyzed in this paper.
6. Experiment: Setup
● Humanoid robot TEO equipped with an ASUS Xtion PRO LIVE set to provide 640×480 RGB and depth streams at 30 fps. The red and the green objects are color-segmented.
● 13 scalar features are extracted in a periodic 40 ms loop:
– centroid absolute positions of the red (x1, y1, z1) and green (x2, y2, z2) objects,
– centroid relative position (the difference between the centroid absolute positions: x1−x2, y1−y2, z1−z2),
– absolute values of the previous values (|x1−x2|, |y1−y2|, |z1−z2|),
– Euclidean distance between the red and the green object (dist(X1, X2)).
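The 13-feature vector can be assembled directly from the two centroids. A small sketch (the function name is mine, not from the paper):

```python
import math

def feature_vector(red, green):
    """Build the 13 scalar features from the red and green object centroids.
    red = (x1, y1, z1), green = (x2, y2, z2), as in the slides."""
    x1, y1, z1 = red
    x2, y2, z2 = green
    rel = (x1 - x2, y1 - y2, z1 - z2)   # relative position
    return [
        x1, y1, z1,                     # absolute position, red object
        x2, y2, z2,                     # absolute position, green object
        *rel,                           # x1-x2, y1-y2, z1-z2
        *(abs(c) for c in rel),         # |x1-x2|, |y1-y2|, |z1-z2|
        math.dist(red, green),          # Euclidean distance dist(X1, X2)
    ]
```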
7. Experiment: Setup II
● We recorded 10 demonstrations of different durations: 8 performed correctly and the last 2 performed incorrectly.
● The red object is not moved in any of the correct demonstrations, but it is moved in the incorrect ones.
● The green object approaches the red object from different angles in the correct demonstrations, and is moved randomly in the incorrect ones.
8. Experiment: Hypothesis
● Given this context, as humans we consider the irrelevant demonstrations to be the last two.
● Regarding the features, we consider that those to be discarded are x2, y2, x1−x2 and y1−y2, which are the ones dependent on the initial position of the green object.
14. Results: Demonstration Selection
The last two demonstrations are discarded, which agrees with our hypothesis.
15. Results: Feature Selection
The discarded features are x2, y2, x1−x2 and y1−y2, which agrees with our hypothesis.
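These slides do not spell out the feature-selection criterion. One plausible stand-in, consistent with the result reported above, is to score each feature by how much its trajectories spread across demonstrations and discard the statistical outliers: features tied to the green object's varying start position disagree between demonstrations, while goal-related features do not. A hedged sketch under that assumption:

```python
import numpy as np

def inconsistent_features(demos, z_thresh=1.5):
    """demos: list of (T, F) arrays already resampled to a common length T.
    Returns indices of features whose spread across demonstrations is an
    outlier, i.e. features that disagree between demonstrations."""
    stacked = np.stack(demos)                  # (N, T, F)
    spread = stacked.std(axis=0).mean(axis=0)  # per-feature spread across demos
    z = (spread - spread.mean()) / spread.std()
    return [f for f in range(stacked.shape[2]) if z[f] > z_thresh]
```

Here three features share the same trajectory in every demonstration, while a fourth takes a different constant value per demonstration and is flagged.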
16. Conclusions
● We have applied DMF to demonstration and feature selection in the context of a humanoid robot goal-directed learning experiment.
● The results show the accuracy of DMF, which also offers great flexibility through its interchangeable algorithms.
17. For More Information:
Morante, S., Victores, J. G., & Balaguer, C. (2015). Automatic Demonstration and Feature Selection for Robot Learning. In IEEE-RAS International Conference on Humanoid Robots (Humanoids). Seoul: IEEE.
Morante, S., Victores, J. G., Jardón, A., & Balaguer, C. (2015). Humanoid robot imitation through continuous goal-directed actions: an evolutionary approach. Advanced Robotics, 29(5), 303–314.
Morante, S., Victores, J. G., Jardón, A., & Balaguer, C. (2014). On using guided motor primitives to execute Continuous Goal-Directed Actions. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 613–618). Edinburgh: IEEE.
Morante, S., Victores, J. G., Jardón, A., & Balaguer, C. (2014). Action effect generalization, recognition and execution through Continuous Goal-Directed Actions. In 2014 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1822–1827). Hong Kong: IEEE.