Features-Based Affordance Detection of Tool Parts
Student: Raghad Al-abboodi · Supervisor: Walterio Mayol-Cuevas
Department of Computer Science, University of Bristol
Conclusions
In this work, we present an affordance detection framework for 3D point clouds, using robust 3D histogram features that characterize the local geometry of each point. Model-scene matching is computed with the Euclidean distance between feature vectors; the shortest returned distance identifies the most relevant features and, accordingly, the queried affordance. The features-based classification gave promising results during testing.
Introduction
Robots are increasingly being used to perform daily tasks usually carried out by humans, as well as tasks requiring human-robot collaboration. It is therefore important that they can detect and interact with a variety of human tools and objects.
Gibson [1] refers to affordances as "properties of an object that determine what actions a human can perform on them." In this sense, man-made tools usually offer several affordances (multi-affordance), such as contain, grasp, or cut.
If a robot can detect these affordances, it can interact with objects even when seeing them for the first time.
Moreover, learning the functional part of a tool (e.g., a mug's handle or a knife's blade) rather than the whole tool helps generalize to novel tools that share the same functional part. For example, once the robot learns that a knife's blade is used for cutting, a variety of tools with a sharp edge can be used.
In this work, we address the problem of learning affordances for parts of an object based on local features.
References
[1] J. J. Gibson. The Theory of Affordances. In Perceiving, Acting, and Knowing. Lawrence Erlbaum Associates, Hillsdale, NJ, 1977.
[2] R. B. Rusu, N. Blodow, and M. Beetz. Fast Point Feature Histograms (FPFH) for 3D Registration. In Proceedings of the International Conference on Robotics and Automation (ICRA), 2009.
Results
The proposed approach shows promising results once the optimal parameters for the calculation have been determined. The figures below compare histogram features for different objects, illustrating their similarities and differences.
Training
Point cloud extraction: a 3D point cloud is acquired with an RGB-D Kinect sensor.
Normal estimation: surface normals are approximated directly from the point cloud dataset, as in the sketch below.
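A minimal sketch of this step, assuming the Point Cloud Library (PCL), which also provides the FPFH descriptor of [2]; the function name and the interpretation of the Table 1 radii as PCL units are assumptions:

```cpp
// Minimal sketch: surface-normal estimation with PCL's NormalEstimation.
// The radius mirrors the best value from Table 1; mapping the poster's
// units onto PCL's coordinate units is an assumption.
#include <pcl/point_types.h>
#include <pcl/features/normal_estimation.h>
#include <pcl/search/kdtree.h>

pcl::PointCloud<pcl::Normal>::Ptr
estimateNormals(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud(cloud);

  // kd-tree used to gather each point's neighbourhood.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
      new pcl::search::KdTree<pcl::PointXYZ>());
  ne.setSearchMethod(tree);

  // Neighbourhood scale: the radius chosen by cross-validation (Table 1).
  ne.setRadiusSearch(0.005);

  pcl::PointCloud<pcl::Normal>::Ptr normals(
      new pcl::PointCloud<pcl::Normal>());
  ne.compute(*normals);
  return normals;
}
```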
Choosing the right scale: computing the normals with a kd-tree requires an appropriate radius for the neighbourhood around each point, so a series of cross-validations was performed; Table 1 shows an example.
Table 1: Cross-validation results for the jar models with different normal-estimation radii. Rows are training models; columns are testing radii. Each value is the number of test samples that were wrongly classified; the radius with the lowest average error is chosen.

Model/Radius  0.005 cm  0.004 cm  0.006 cm  0.05 cm  0.2 cm
Jar1          0         4         0         0        0
Jar2          0         0         1         2        5
Jar3          0         3         0         0        2
Jar4          3         0         4         5        5
Jar5          0         0         0         0        1
Jar6          0         0         0         0        1
Avg.          0.5       1.1667    0.8333    1.1667   2.3333
Keypoint computation: the point cloud is down-sampled to regulate the point density of the resulting cloud (see the sketch below).
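A minimal down-sampling sketch. The poster does not name the down-sampling method, so the voxel-grid filter and its leaf size here are assumptions:

```cpp
// Minimal sketch: down-sampling with a PCL VoxelGrid filter to regulate
// point density before descriptor computation. The 5 mm leaf size is an
// illustrative assumption, not a value from the poster.
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
downsample(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setInputCloud(cloud);
  grid.setLeafSize(0.005f, 0.005f, 0.005f);  // one point kept per 5 mm voxel

  pcl::PointCloud<pcl::PointXYZ>::Ptr keypoints(
      new pcl::PointCloud<pcl::PointXYZ>());
  grid.filter(*keypoints);
  return keypoints;
}
```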
Histogram feature computation: a set of 3D local point features, Fast Point Feature Histograms (FPFH) [2], is used as the local descriptor (see the sketch below).
For training, the resulting features are manually labeled with the ground-truth model.
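A minimal FPFH sketch, again assuming PCL; the descriptor radius is an illustrative assumption (it must exceed the normal-estimation radius):

```cpp
// Minimal sketch: computing FPFH descriptors [2] with PCL. Assumes the
// normals were estimated on the same cloud passed as input; when keypoints
// come from a down-sampled cloud, recompute the normals on it first.
#include <pcl/point_types.h>
#include <pcl/features/fpfh.h>
#include <pcl/search/kdtree.h>

pcl::PointCloud<pcl::FPFHSignature33>::Ptr
computeFPFH(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& keypoints,
            const pcl::PointCloud<pcl::Normal>::ConstPtr& normals)
{
  pcl::FPFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh;
  fpfh.setInputCloud(keypoints);
  fpfh.setInputNormals(normals);
  fpfh.setSearchMethod(pcl::search::KdTree<pcl::PointXYZ>::Ptr(
      new pcl::search::KdTree<pcl::PointXYZ>()));
  fpfh.setRadiusSearch(0.02);  // assumed value; must exceed the normal radius

  pcl::PointCloud<pcl::FPFHSignature33>::Ptr descriptors(
      new pcl::PointCloud<pcl::FPFHSignature33>());
  fpfh.compute(*descriptors);
  return descriptors;
}
```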
Testing
A new point cloud passes through the same keypoint extraction and FPFH steps. Each descriptor is then matched against the trained model features using the Euclidean distance: if a match is found, the corresponding affordance type is returned; otherwise, no affordance is reported. A minimal matching sketch follows the figures below.
Figure: mean histograms for two different objects (Jar, Can) that share the same affordance (grasp).
Figure: two different objects (Jar, Can) with different affordances (grasp, place).
Figure: mean histograms for various objects; objects with the same affordance have similar histograms, while those with different affordances do not.
[Figures: the testing pipeline; grasp detection; place detection; a multi-affordance tool (cut, grasp); a multi-affordance tool (pound, grasp).]
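A minimal sketch of the matching step, assuming PCL's FPFHSignature33 type; the structure names, the brute-force search, and the acceptance threshold are illustrative assumptions rather than the poster's exact implementation:

```cpp
// Minimal sketch: match a scene descriptor against labelled training
// descriptors by Euclidean distance over the 33-bin FPFH histogram; the
// nearest labelled feature below a threshold decides the affordance.
#include <cmath>
#include <cstddef>
#include <string>
#include <vector>
#include <pcl/point_types.h>

struct LabelledFeature {
  pcl::FPFHSignature33 descriptor;
  std::string affordance;  // e.g. "grasp", "cut", "place"
};

// Euclidean distance between two 33-bin FPFH histograms.
static float euclidean(const pcl::FPFHSignature33& a,
                       const pcl::FPFHSignature33& b)
{
  float sum = 0.0f;
  for (std::size_t i = 0; i < 33; ++i) {
    const float d = a.histogram[i] - b.histogram[i];
    sum += d * d;
  }
  return std::sqrt(sum);
}

// Returns the affordance of the nearest training feature, or "" when the
// shortest distance exceeds the threshold (match not found). The threshold
// value is an assumption, not taken from the poster.
std::string matchAffordance(const pcl::FPFHSignature33& query,
                            const std::vector<LabelledFeature>& model,
                            float threshold = 50.0f)
{
  float best = threshold;
  std::string result;  // empty = match not found
  for (const LabelledFeature& f : model) {
    const float d = euclidean(query, f.descriptor);
    if (d < best) { best = d; result = f.affordance; }
  }
  return result;
}
```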