This document summarizes an experiment on learning-based randomized bin-picking that allows finger contact with neighboring objects. The experiment uses linear SVM and random forest models to predict success or failure cases of picking objects based on the distribution of neighboring objects within the swept volume of the robot finger motion. The models were trained on datasets of successful and failed pick attempts. The random forest model achieved over 90% success prediction accuracy, significantly higher than conventional bin-picking methods. The experiment demonstrates that allowing finger contact and using machine learning to predict outcomes can enable automated randomized bin-picking.
Initial Experiments on Learning Based Randomized Bin-Picking Allowing Finger Contact with Neighboring Objects
1. Initial Experiments on Learning Based Randomized Bin-Picking Allowing Finger Contact with Neighboring Objects
Kensuke Harada*,**, Weiwei Wan*, Tokuo Tsuji***
Kohei Kikuchi****, Kazuyuki Nagata*, and Hiromu Onda*
IEEE International Conference on Automation Science and Engineering, 2016
* National Institute of Advanced Industrial Science and Technology
** Osaka University
*** Kanazawa University
**** Toyota Motor Corporation
2. Why Randomized Bin-Picking?
•Parts can be Automatically Supplied to an Assembly Cell
•Needed to Automate the Assembly Process
Parts Production Company → Randomized Bin-Picking → Assembly Cell
3. Why Randomized Bin-Picking Fails
•Fingers Contact Neighboring Objects
•Both Success and Failure Cases Occur Depending on the Configuration of Neighboring Objects
•Without allowing contact with neighboring objects, the motion planner often finds no feasible solution
4. Related Works
• 2D Grasp
– Morales et al. ('01), Frydendal et al. ('98)
• Grasp Planning
– Dupuis et al. ('08), Domae et al. ('14), Harada et al. ('14)
• Deep Learning Based Method
– Levine et al. (RSS ’16)
6. What is Proposed in this Research
• Randomized Bin-Picking Allowing Finger Contact
with Neighboring Objects
• Predict Success/Failure Cases of Pick Based on
Learning
• Using Linear SVM and Random Forest (RF)
7. Swept Volume of Finger Motion
Approach Motion
Finger Closing Motion
The swept volume is computed before the finger actually moves.
It is used to predict contact between the finger and neighboring objects.
8. Swept Volume of Finger Motion
If the swept volume includes points of the neighboring objects' point cloud, the finger will collide with a neighboring object.
Success/failure cases are classified based on the distribution of the point cloud contained in the swept volume.
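The contact check above can be sketched in a few lines. This is a hypothetical simplification: the finger's swept volume is approximated by an axis-aligned box, and the box bounds and point cloud are illustrative values, not data from the paper.

```python
import numpy as np

def points_in_swept_volume(cloud, box_min, box_max):
    """Return the subset of `cloud` (N x 3) lying inside [box_min, box_max]."""
    inside = np.all((cloud >= box_min) & (cloud <= box_max), axis=1)
    return cloud[inside]

# Toy neighboring-object point cloud (metres)
cloud = np.array([[0.01, 0.02, 0.00],    # inside the box
                  [0.20, 0.00, 0.00],    # outside in x
                  [0.03, -0.01, 0.04]])  # inside the box
box_min = np.array([-0.05, -0.05, -0.05])
box_max = np.array([0.05, 0.05, 0.05])

contact_points = points_in_swept_volume(cloud, box_min, box_max)
print(len(contact_points))  # -> 2 points the finger may touch
```

In practice the swept volume is a more complex shape produced by the approach and closing motions, so the box test here stands in for a general point-in-volume query.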
9. Discriminator Construction
• Linear SVM
– Small Number of Samples
– Intuitive and Heuristic Method
• Random Forest
– Feature Vector Includes More Concrete Information
10. Linear SVM
When the point cloud is distributed near the edge of the swept volume, the grasp tends to be successful.
Feature Vector
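A minimal linear SVM over such a feature can be trained by sub-gradient descent on the hinge loss. This is a numpy-only sketch under stated assumptions: the single feature (fraction of points near the swept-volume edge) and the toy labels are illustrative, not the paper's dataset or feature definition.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=3000):
    """X: (n, d) features, y: labels in {-1, +1}. Returns weights and bias."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                       # samples violating the margin
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data: a larger edge fraction tends to mean success (+1)
X = np.array([[0.9], [0.8], [0.7], [0.2], [0.1], [0.3]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())  # training accuracy
```

With a handful of well-separated samples the learned boundary lands inside the gap between the two classes, which is the regime the slide describes: few samples, an intuitive hand-crafted feature.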
11. Random Forest (RF)
Feature Vector: Discretized Point Cloud Distribution in the Swept Volume
The swept volume is divided into bins that store the point cloud; the per-bin counts form the feature vector.
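The binned feature vector can be sketched with a 3-D histogram. The grid resolution (3×3×3) and the synthetic point cloud are illustrative assumptions; the paper's actual bin layout may differ.

```python
import numpy as np

def swept_volume_histogram(cloud, box_min, box_max, bins=3):
    """Count points per bin inside [box_min, box_max]; flatten to a feature vector."""
    hist, _ = np.histogramdd(cloud, bins=(bins, bins, bins),
                             range=list(zip(box_min, box_max)))
    return hist.ravel()

rng = np.random.default_rng(0)
cloud = rng.uniform(-0.05, 0.05, size=(100, 3))  # fake neighboring-object points
feature = swept_volume_histogram(cloud, [-0.05] * 3, [0.05] * 3)
print(feature.size, feature.sum())  # 27 bins, all 100 points counted
```

Compared with the scalar edge feature used for the linear SVM, this vector preserves where in the swept volume the points lie, which is the "more concrete information" the random forest exploits.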
12. Motion Generation Method
•Prepare a Grasping Posture Database
•Identify the Poses of Multiple Objects
•Generate IK-Solvable Grasping Pose Candidates
•Apply the Discriminator
•Select Grasping Poses Predicted to Succeed
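The pipeline on this slide can be sketched as a sequence of filters over grasp candidates. The `ik_solvable` and `predict_success` functions here are hypothetical placeholders for the IK solver and the learned discriminator.

```python
def select_grasp(candidates, ik_solvable, predict_success):
    """Return the first candidate that is reachable and predicted to succeed."""
    for pose in candidates:
        if ik_solvable(pose) and predict_success(pose):
            return pose
    return None  # no candidate survived; replan or re-shake the bin

# Toy candidates: (pose_id, reachable, predicted_success)
candidates = [("g1", False, True), ("g2", True, False), ("g3", True, True)]
chosen = select_grasp(candidates,
                      ik_solvable=lambda p: p[1],
                      predict_success=lambda p: p[2])
print(chosen[0])  # -> g3
```

Ordering the cheap reachability check before the discriminator keeps the expensive prediction off candidates that could never be executed anyway.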
15. Discrimination Results using Random Forest (RF)
• Trained using 71 Success and 27 Failure Cases
• More accurate results than Linear SVM
• More than 90% Success Rate (Significantly Higher than Conventional Results)
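An illustrative sketch of training such a discriminator, assuming scikit-learn is available. The dataset here is synthetic (the paper's 71 success / 27 failure samples are not reproduced), with failure cases given denser bin counts by construction.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_bins = 27
# Fake bin-count features: failure cases have many more points in the swept volume.
X_success = rng.poisson(1.0, size=(71, n_bins))
X_failure = rng.poisson(8.0, size=(27, n_bins))
X = np.vstack([X_success, X_failure]).astype(float)
y = np.array([1] * 71 + [0] * 27)  # 1 = success, 0 = failure

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy (optimistic by construction)
```

On real data the accuracy would be measured on held-out picks, e.g. with cross-validation, rather than on the training set as in this toy sketch.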
16. An Example of Selected Grasping Posture
Collision will occur
18. Conclusions
• Randomized Bin-Picking Allowing Finger
Contact with Neighboring Objects
• Predict Success/Failure Cases of Pick Based
on Learning
• Using Linear SVM and Random Forest (RF)
20. Base Position Planning for Dual-Arm Mobile Manipulators Performing a Sequence of Pick-and-Place Tasks
•Collect the required parts from a storage area
•Plan the base position and the selection of hands
•Selective use of the hands reduces the number of pick-and-place sequences
IEEE-RAS Int. Conf. on Humanoid Robots, 2015
21. View Planning
[Figure: 1st, 2nd, and 3rd trials with a 3D vision sensor mounted at the wrist]
•Plan the pose of the sensor
•Reuse the previously captured image; capture anew only for the part of the scene that differs
•Identify the poses of the objects
Int. Symp. on Experimental Robotics, 2016