Semantic Perception for Telemanipulation at SPME Workshop at ICRA 2013

Workshop about semi-autonomous manipulation for teleoperation tasks.



  1. Darius Burschka, Machine Vision and Perception Group, Department of Computer Science, Technische Universität München: "Semantic Perception for Semi-Autonomous Teleoperation Tasks" (SPME Workshop, May 5, 2013, http://mvp.visual-navigation.com)
  2. Shared-Control for Telemanipulation
  3. ASCENT – Augmented Shared-Control for Efficient Natural Telemanipulation (ICRA 2013, J. Bohren et al., Teleoperation session WeF6, 5:45pm, Clubraum)
     Fig. 1: The experiments were conducted with a human operator at The Johns Hopkins University (JHU) Homewood Campus in Baltimore, MD, USA, utilizing a da Vinci Master Console (left) commanding a DLR LWR as part of the SAPHARI platform at the German Aerospace Center (DLR) in Oberpfaffenhofen, Germany (right).
     • Many remote telerobotic applications have limitations on bandwidth, creating a situation where the fidelity of the imaging is compromised. The availability of stereoscopic imaging, image resolution and frame rates may be limited, leading to a limited ability to resolve necessary detail for manipulation. This is particularly challenging given that the absence of haptic cues increases the reliance on visual perception.
     • Some environments impose additional communication latency (time delay) on telemetry as well. For example, telemanipulation from Earth to low-earth orbit typically imposes delays that exceed half a second for direct line-of-sight communications and 2-7 seconds when using larger-coverage on-orbit communication networks. The limitations of human performance in telemanipulation are well studied, and the threshold at which human performance begins to suffer is far below that [12].
     ASCENT takes a collaborative systems approach that transcends the limitations of either purely autonomous or purely teleoperated control modes by combining task-specific sensor-based feedback with input from an operator. As a result, the operator is able to provide gross motion guidance to the system, and the remote manipulator is able to adapt that motion based on environmental information. We have implemented this approach with a DLR lightweight arm driven by a da Vinci master console separated by over 4000 miles. We demonstrate that ASCENT greatly improves manipulation performance, particularly when subtle motions are necessary in order to correctly perform the task.
     Problems:
     • Depth perception is essential for grasping
     • Limited bandwidth does not always allow remote image transmission
     • Significant latency in transmission deteriorates dexterity of the control
     • Moving objects in the scene limit the allowed latency in the control for robust direct manipulation in remote environments
  4. What do we try to extract from the environment? Labeling and motion parameters.
  5. What is in the scene? (labeling step)
  6. Algorithm Description (Model Preprocessing Phase)
     • For all pairs of surflets at distance d, insert the feature triple plus a pointer to its model into a hash table.
     • Do this for all models using the same hash table.
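The preprocessing step can be pictured in code. This is an illustrative reconstruction, not the authors' implementation: the angle-triple descriptor, bin count, and distance tolerance are assumptions, and the actual detector of Papazov et al. defines its own feature and table layout.

```python
import numpy as np
from collections import defaultdict

def surflet_pair_feature(p1, n1, p2, n2):
    """Rotation-invariant triple for an oriented point (surflet) pair:
    the angle of each normal to the connecting line, and the angle
    between the two normals."""
    d = p2 - p1
    d /= np.linalg.norm(d)
    return (np.arccos(np.clip(np.dot(n1, d), -1.0, 1.0)),
            np.arccos(np.clip(np.dot(n2, d), -1.0, 1.0)),
            np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))

def build_hash_table(models, d, tol, n_bins=20):
    """Insert every surflet pair at distance ~d from every model into
    one shared hash table, keyed by the quantized angle triple."""
    table = defaultdict(list)
    for model_id, (points, normals) in models.items():
        for i in range(len(points)):
            for j in range(len(points)):
                if i == j:
                    continue
                if abs(np.linalg.norm(points[j] - points[i]) - d) > tol:
                    continue
                f = surflet_pair_feature(points[i], normals[i],
                                         points[j], normals[j])
                key = tuple(int(a / np.pi * n_bins) for a in f)
                table[key].append((model_id, i, j))
    return table
```

Because all models share one table, a scene pair hashed at recognition time retrieves candidate (model, pair) entries from every model at once.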
  7. Online Recognition Phase
     • For each model surflet pair in the hash-table cell: compute the rigid transform T that best aligns the model pair to the scene pair.
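The rigid transform T that best aligns corresponding point sets has a closed-form least-squares solution; a minimal SVD-based (Kabsch) sketch, standing in for whatever two-pair alignment the recognizer actually uses:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rigid transform (R, t) with R @ a + t ~ b for
    corresponding 3D point sets A, B (rows are points)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # repair a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```

For surflet pairs, the two points plus their normals give enough constraints to fix all six degrees of freedom of T.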
  8. IJRR 2012 Special Issue, Papazov et al.
  9. What happens if an object is similar to one in the database? Indexing into the atlas database needs to be extended to object classes → deformable shape registration needed. (Atlas information vs. observed object)
  10. Deformable Registration from generic models (special issue SGP 11, Papazov et al.)
      Matching of a detailed shape to a primitive prior; the manipulation "heat map" from the generic model gets propagated.
  11. Deformable Registration (special issue SGP 11, Papazov et al.) – input data
  12. Deformable 3D Shape Registration Based on Local Similarity Transforms
  13. What do we try to extract from the environment? Labeling and motion parameters.
  14. Hybrid Model of the Environment (J. C. Ramirez)
      The input data stream (sensor blobs, 3D data) feeds two parallel paths – blob detection into an object layer (object containers), and 3D reconstruction & plane detection into a geometric layer – whose outputs are fused into a map update, yielding objects and 3D structure in the output data stream.
  15. World model saves additional info, like texture, motion, etc. – 3D Mapping for 3D Structures in Dynamic Environments (VISAPP 2013, Juan Carlos Ramirez and Darius Burschka, Faculty for Informatics, Technische Universitaet Muenchen, Boltzmannstr. 3, Garching bei Muenchen, Germany)
      An approach to consistently model and characterize potential object candidates presented in non-static scenes (scene → tentative object candidates → encapsulated 3D blobs → motion estimation). Three principal procedures support the method:
      i) segmentation of the captured range images into 3D clusters or blobs, by which we obtain a first gross idea of the spatial structure of the scene;
      ii) maintenance and reliability of the map, obtained through the fusion of the captured and mapped blobs, to which we assign a degree of existence (confidence value);
      iii) visual motion estimation of potential object candidates through the combination of texture and 3D information, which allows not only updating the state of the actors and perceiving their changes in a scene, but also refining their individual 3D structures over time.
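Procedure ii) assigns each mapped blob a degree of existence. A toy version of such a confidence fusion step, where the linear update rule and the rates are illustrative assumptions, not taken from the VISAPP paper:

```python
def update_confidence(conf, observed, gain=0.3, decay=0.15):
    """One fusion step for a blob's degree of existence in [0, 1]:
    pull toward 1 when the blob is re-observed, toward 0 when the
    sensor covered its location and saw nothing."""
    target = 1.0 if observed else 0.0
    rate = gain if observed else decay
    return conf + rate * (target - conf)
```

Repeated confirmations drive a blob toward certain existence, while repeated misses let it fade from the map instead of being deleted outright.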
  16. Robust Feature Tracking through Fusion of Camera and IMU Data (IROS 2009, E. Mair et al.)
  17. Local Feature Tracking Algorithms
      • Image-gradient based → Extended KLT (ExtKLT): patch-based implementation, feature propagation, corner binding; + sub-pixel accuracy; – scales badly with the number of features.
      • Tracking-by-matching → AGAST tracker: AGAST corner detector, efficient descriptor, high frame rates (hundreds of features in a few milliseconds); + scales well with the number of features; – pixel accuracy only.
  18. Adaptive and Generic Accelerated Segment Test (AGAST)
      Improvements compared to FAST (E. Rosten):
      • full exploration of the configuration space by backward induction (no learning)
      • binary decision tree (not ternary)
      • computation of the actual probability and processing costs (no greedy algorithm)
      • automatic scene adaptation by tree switching (at no cost)
      • various corner pattern sizes (not just one)
      No drawbacks! (Mair, Hager, Burschka, Suppa, Hirzinger, ECCV, Springer, 2010)
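For reference, the underlying segment test that both FAST and AGAST evaluate can be written down directly. This exhaustive sketch checks every circle pixel; AGAST's contribution is precisely the optimal decision tree that avoids doing so:

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST/AGAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def segment_test(img, y, x, t=20, n=9):
    """(y, x) passes the segment test if at least n contiguous circle
    pixels are all brighter than center + t or all darker than center - t."""
    c = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for ok in (lambda v: v > c + t,      # all-brighter arc
               lambda v: v < c - t):     # all-darker arc
        run = best = 0
        for v in ring * 2:               # doubled ring handles wrap-around
            run = run + 1 if ok(v) else 0
            best = max(best, run)
        if best >= n:
            return True
    return False
```

On a white square against black, the square's corner passes the test while uniform regions do not.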
  19. Real Time Pose Tracking (IROS 2003, Burschka & Hager)
  20. Learning from Human – Mapping of Knowledge
  21. Physical and Geometric Properties of an Object (Object Container) (ICRA 2012, Petsch et al.)
  22. Functional Properties of an Object stored in a Functionality Map
  23. Knowledge Representation
      Each tool used in the procedure has its own container describing its shape, handling properties, etc. A functionality map for a specific procedure describes the way the tool was used during the procedure while moved between points in the world (Petsch/Burschka, IROS 2011).
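One way to picture the container/functionality-map split in code; all field and method names here are hypothetical, chosen only to illustrate the idea, not taken from Petsch/Burschka:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectContainer:
    """Per-tool record bundling geometric and handling knowledge."""
    name: str
    shape_model: str                 # e.g. path to a registered 3D mesh
    grasp_points: list = field(default_factory=list)
    mass_kg: float = 0.0

@dataclass
class FunctionalityMap:
    """How a tool was used in one procedure: observed motion segments
    between anchor points in the world, each labeled with an action."""
    tool: ObjectContainer
    segments: list = field(default_factory=list)   # (from_pt, to_pt, action)

    def record(self, from_pt, to_pt, action):
        self.segments.append((from_pt, to_pt, action))

    def actions_between(self, from_pt, to_pt):
        return [a for f, t, a in self.segments if f == from_pt and t == to_pt]
```

The container answers "what is this tool?", while the functionality map answers "how was it used between these world points?".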
  24. Basic Experiments: Functionality Maps (Tracking Data)
  25. Functionality Maps
  26. Knowledge Representation
      • Atlas: long-term memory; the experience of the system.
      • Working memory: short-term memory; experience grounded in a given environment; temporal handling information.
  27. Conclusions
      Why is perception necessary?
      • Allows data reduction over slow links – in the worst case, just symbolic information about the objects in the scene
      • Together with motion estimation, allows a transparent switch between direct control and autonomous handling
      • Allows dealing with high latencies and fast motions in the scene
      Questions?
  28. Research of the MVP Group (http://mvp.visual-navigation.com)
      The Machine Vision and Perception Group @TUM works on the aspects of visual perception and control in medical, mobile, and HCI applications: visual navigation, biologically motivated perception, perception for manipulation, visual action analysis, photogrammetric monocular reconstruction, rigid and deformable registration.
  29. Research of the MVP Group (http://mvp.visual-navigation.com)
      Exploration of physical object properties, sensor substitution, multimodal sensor fusion, development of new optical sensors.
