Tony Pratkanis (Stanford Univ) on Robotics

Tony Pratkanis at a LASER http://www.scaruffi.com/leonardo/aug2013.html

Transcript

  • 1. Replacing the Office Intern: An Autonomous Coffee Run with a Mobile Robot Tony Pratkanis
  • 2. Outline of the Talk ● General Background ● Coffee Grasping Steps – Navigation – Doors – Elevators – Object Passing ● Lessons Learned
  • 3. About the Salisbury Robotics Lab ● sr.stanford.edu
  • 4. Personal Robotics: The PR2 ● Based on the PR1 at Salisbury Lab ● Spun out to Willow Garage to become PR2
  • 5. Personal Robotics: The PR2 ● Costs $400,000+, weighs 400 pounds – More battery capacity than a Prius – Two laser scanners, many color cameras, Kinect-like depth cameras, two arms, etc. – Despite this, it is still human-safe ● The PR2 is a “Kitchen Sink” robot – Designed exclusively for research purposes – It has a vast (likely excessive) number of sensors and features ● Ships integrated with ROS – An open-source robotics middleware developed by Willow Garage – Vast amounts of useful software including motion planning, navigation, SLAM, computer vision, 3D object recognition, linear algebra, etc.
  • 6. The Task
  • 7. Personal Robotics ● Personal robotics is the creation of robots that live and play safely and effectively in human-centric environments – The ideal is “Rosie” from The Jetsons ● Faces many challenges not present in other forms of robotics – Extremely diverse obstacles and objectives – Highly unpredictable and unstructured environment – Safety issues ● My solution to these challenges: – Analyze the nature of the task and the available information – Develop simple procedures that exploit that information ● The “coffee bot” allows us to demonstrate this approach
  • 8. Important Qualifications ● The robot must be fully autonomous – No human intervention except for interaction with coffee shop employees ● The environment must be unmodified – Modification of human environments is often socially and politically intractable – Defeats the purpose of building such a robot
  • 9. Navigation Video ● Navigation basic demo ● Note how the robot intelligently avoids both static obstacles and people
  • 10. How Mapping and Navigation Work ● Two sensors are used – Wheel odometry ● Very accurate over short distances ● Error builds up – Laser scanners ● Accurately (to approximately 1cm) measure distance to objects ● Integrated by software to create a detailed map of the environment – “SLAM” - Simultaneous localization and mapping ● Then the map is used for navigation
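
A minimal sketch of why wheel-odometry error builds up: each update integrates the measured wheel motion, so any per-step noise or bias compounds with distance. The noise figures below are illustrative assumptions, not PR2 specifications.

```python
import math, random

def integrate_odometry(steps, step_dist=0.05, noise_std=0.002, yaw_noise_std=0.001):
    """Dead-reckon a nominally straight path from noisy per-step wheel measurements."""
    x = y = theta = 0.0
    for _ in range(steps):
        d = step_dist + random.gauss(0.0, noise_std)   # noisy distance travelled this step
        theta += random.gauss(0.0, yaw_noise_std)      # small heading drift per step
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    return x, y

# After 20 m of travel the estimate has drifted visibly from (20, 0), which is
# why the laser scanners and SLAM are needed to correct the robot's pose.
print(integrate_odometry(steps=400))
```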
  • 11. Navigation and Obstacle Avoidance ● Laser data, the map, and odometry are fused for localization – Particle-filter based approach – Obtains the position of the robot ● A cost-map grid is built of all obstacles – Real-time updates of the obstacle grid – Fed to path-finding algorithms ● The navigation software was modified to handle multiple floors – Leading to “multi_map_navigation”
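
The talk does not show code, but on a ROS robot the usual way to hand a goal pose to the navigation stack (the localization, cost map, and planner described above) is the move_base action interface. A minimal sketch, with the goal coordinates as placeholder assumptions:

```python
#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('coffee_run_goal')

# Connect to the move_base action server provided by the navigation stack.
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'       # goal expressed in the SLAM map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 12.0        # placeholder coordinates of the next waypoint
goal.target_pose.pose.position.y = 3.5
goal.target_pose.pose.orientation.w = 1.0      # face along the map x-axis

client.send_goal(goal)
client.wait_for_result()                       # blocks until the planner reports success or failure
```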
  • 12. Door Pushing Video ● Open the door by pushing ● Note how the robot lines up in all three axes then rotates to open the door
  • 13. Pushing Doors ● These specific doors are challenging from two perspectives – Transparent and thus hard to detect – Heavy and thus physically hard to open ● The PR2 uses mechanical approaches to detection instead of vision ● Uses the entire body and strength of the robot to overcome the doors
  • 14. Pushing Doors ● The robot uses the mapping and navigation software to locate the door approximately (<30 cm) ● Next, it uses the tilting laser scanner to line up with the door: – Travels to the correct distance from the door – Aims the laser at the base of the door ● Lines up rotationally using the base of the door – Aims the laser at the middle of the door ● The central window of the door leaves a gap in the laser data compared to the metal sides – the robot centers on this gap to align horizontally with the door ● Then, it spins around and backs through the door – Backing through the door is important so the robot hits the door's metal bar
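
A rough illustration of the gap-centering step: with the laser aimed at the middle of the door, the glass window returns few or no valid ranges, so the robot can center itself on that gap. A simplified sketch over a single range array in the style of a ROS LaserScan (the range threshold and assumed door distance are assumptions):

```python
import math

def window_offset(ranges, angle_min, angle_increment, max_valid=4.0, door_dist=1.0):
    """Return the lateral offset (m) from the robot's centreline to the middle
    of the gap the door's glass window leaves in a laser scan."""
    gap_indices = [i for i, r in enumerate(ranges)
                   if not math.isfinite(r) or r > max_valid]   # beams that passed through the glass
    if not gap_indices:
        return None                                            # no window-like gap detected
    mid = gap_indices[len(gap_indices) // 2]
    angle = angle_min + mid * angle_increment                  # bearing to the centre of the gap
    return door_dist * math.tan(angle)                         # offset at the assumed door distance
```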
  • 15. Driving and Door Pulling Video ● Drive to the next door ● The lasers were negatively impacted by the sun, requiring adjustments to software filters ● Note how the door is pulled open by the robot
  • 16. Pulling Doors ● Detecting the handles of transparent doors is difficult – The background is unpredictable because of the window – The window reflects the handle, leading to multiple images – The handle itself is shiny, leading to unpredictable coloration and edge structure
  • 17. Pulling Doors ● Solution: Purely mechanical approach to handle detection and door opening – Once again, the robot uses the map to know the approximate location of the door – Drives up to the door and does a “waggle dance” to align with it mechanically – Backs up and slides the hand across the door to find the handle – Grabs the handle and moves so that the handle is at a fixed position with respect to the robot – Dances the door opening dance to open the door
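
The slide describes the handle search only at a high level; a schematic sketch of the search loop is below. The callables slide_gripper_sideways(), gripper_in_contact(), and grab_handle() are hypothetical stand-ins for the PR2 arm and gripper interfaces, not real API calls.

```python
def find_and_grab_handle(slide_gripper_sideways, gripper_in_contact, grab_handle,
                         step=0.02, max_travel=0.6):
    """Slide the open gripper across the door face until it bumps the handle, then grasp it.
    All three callables are hypothetical hardware interfaces."""
    travelled = 0.0
    while travelled < max_travel:
        slide_gripper_sideways(step)      # move the hand a small step along the door face
        travelled += step
        if gripper_in_contact():          # contact force indicates the handle has been found
            grab_handle()
            return True
    return False                          # handle not found within the search window
```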
  • 18. Elevator Video ● Note: – Avoidance of people on the way to and inside the elevator – Elevator operation by finding call and control panel buttons – Exiting the elevator on the correct floor
  • 19. Elevator Overview ● Call the elevator by pushing the call button – Find the call button and press it ● Wait for the elevator to arrive – Identify the correct elevator (up or down) ● Enter the elevator – Avoid humans and obstacles in the elevator ● Push the buttons in the elevator control panel – Elevator interior poses challenging computer vision problem ● Exit the elevator – Check to ensure the correct floor
  • 20. Pushing the Elevator Call Buttons ● PR2 knows the approximate location of elevator buttons due to navigation and map ● Lines up with the wall using the laser scanner ● Finds the button using vision ● Repeats this process if the elevator does not arrive
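
The slides do not say which vision method is used; one plausible way to locate a call button in a camera image is OpenCV template matching. A minimal sketch, where the template image and match threshold are assumptions:

```python
import cv2

def find_call_button(frame_bgr, template_path='call_button_template.png', threshold=0.8):
    """Return the (x, y) pixel centre of the call button, or None if no good match."""
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None
    h, w = template.shape
    return (max_loc[0] + w // 2, max_loc[1] + h // 2)   # centre of the best match
```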
  • 21. Waiting for the Elevator to Arrive ● Scans the indicator lights for elevator arrival – Identifies correct direction (up or down) and elevator (left or right) ● Moves quickly to the elevator before door closes – Avoids humans using the laser – Rule: Only rides in empty elevators
  • 22. Pushing the Elevator Control Buttons ● Uses mechanical procedure to align with buttons – Similar to door pulling ● Once alignment is achieved, the buttons are at a fixed position with respect to the robot – With the known position and height, it is easy to press the correct floor ● If the elevator door fails to open, then the robot will repeat this process
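
Once the mechanical alignment fixes the panel's pose relative to the robot, pressing a floor button reduces to a lookup. A schematic sketch with made-up panel coordinates and a hypothetical press_at() arm helper:

```python
# Button offsets (metres) in the robot's frame once aligned with the panel.
# These numbers are illustrative, not measurements from the talk.
BUTTON_OFFSETS = {
    1: (0.45, 0.10, 0.95),   # (forward, left, height) of the "1" button
    2: (0.45, 0.10, 1.02),
    3: (0.45, 0.10, 1.09),
}

def press_floor_button(floor, press_at):
    """press_at(x, y, z) is a hypothetical helper that pokes the gripper at a point."""
    x, y, z = BUTTON_OFFSETS[floor]
    press_at(x, y, z)
```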
  • 23. Exiting the Elevator ● Waits and uses laser to detect when the door opens ● Checks for correct floor at exit – Important because humans may have ordered the elevator to stop at additional floors
  • 24. Exiting the Elevator ● Determining the correct floor was very challenging – The first approach, using vision to read the floor number sign at the exit, was too prone to failure ● Not robust to lighting changes – A second approach using the robot's accelerometer was much more successful ● Measuring the time interval between elevator start and stop accurately predicts the number of floors traveled ● Much more robust
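
A simplified sketch of the timing idea: detect the vertical-acceleration spikes when the elevator starts and brakes, and convert the interval between them into floors travelled. The acceleration threshold and seconds-per-floor constant are illustrative assumptions:

```python
def floors_travelled(samples, dt=0.01, accel_threshold=0.3, seconds_per_floor=2.5):
    """samples: vertical acceleration minus gravity (m/s^2), sampled every dt seconds.
    Returns the estimated number of floors travelled during one elevator ride."""
    start = stop = None
    for i, a in enumerate(samples):
        if start is None and abs(a) > accel_threshold:
            start = i                        # elevator begins accelerating
        elif start is not None and abs(a) > accel_threshold:
            stop = i                         # last strong acceleration = braking at arrival
    if start is None or stop is None or stop <= start:
        return 0
    travel_time = (stop - start) * dt
    return round(travel_time / seconds_per_floor)
```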
  • 25. Driving to the Coffee Shop ● More door pulling and pushing ● Note the avoidance of the tables and people
  • 26. Waiting in the Coffee Shop Line ● The robot drives along a predefined course using its map – If the laser detects a person in line, the robot stops and only advances as the person moves forward ● While this approach works well for many stores, it would not work in larger stores that have multiple cashiers
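
A minimal sketch of the queueing rule: advance along the predefined course only while the laser reports nobody within a personal-space buffer directly ahead. The buffer distance and cone width are assumptions:

```python
import math

def clear_to_advance(ranges, angle_min, angle_increment,
                     buffer_dist=1.0, cone_half_angle=math.radians(20)):
    """True if no laser return inside the forward cone is closer than buffer_dist."""
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_increment
        if abs(angle) <= cone_half_angle and math.isfinite(r) and r < buffer_dist:
            return False                     # person (or obstacle) ahead: hold position
    return True                              # line has moved forward: creep ahead
```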
  • 27. Waiting in Line Video ● Note how the robot drives this course
  • 28. Ordering and Obtaining the Coffee ● First, give the barista the written coffee order and cash payment ● Second, take the coffee and place it in the cup holder ● This requires an intuitive approach to enabling humans and robots to pass objects to each other
  • 29. How Do Humans Pass Objects? ● Humans pass objects by two main approaches: 1. Receiver holds out his/her hand, giver places object into hand 2. Giver holds out object, receiver grabs object ● Thus, by using Case 1 to receive objects and Case 2 to give objects, the robot never needs to find the object or the human's hand: it can just hold out its hand or the object ● Humans are also very good at knowing when to let go of objects – Humans hold onto an object until they feel the other person pull the object back from the hand, ensuring a good grip
  • 30. Giving an Object to a Human ● Object giving sequence used by PR2: 1. Holds out the robot's hand with the object 2. Uses text-to-speech to tell the person to take the object 3. Releases the object when either of two conditions is met: ● There is significant hand acceleration ● The human has forced the robot to move its hand 4. Folds back the arm
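
A schematic sketch of the release condition: let go when either the hand accelerometer reports a significant jolt or the arm has been dragged away from its commanded pose. The thresholds and the sensor/gripper callables are hypothetical, not PR2 API calls:

```python
import time

def should_release(hand_accel, hand_pose_error,
                   accel_threshold=2.0, pose_error_threshold=0.03):
    """hand_accel: gripper acceleration magnitude minus gravity (m/s^2).
    hand_pose_error: distance (m) between commanded and measured hand position."""
    return hand_accel > accel_threshold or hand_pose_error > pose_error_threshold

def give_object(say, get_hand_accel, get_pose_error, open_gripper, tuck_arm):
    """All callables are hypothetical robot interfaces; the structure mirrors
    the four-step sequence on the slide."""
    say("Please take the object.")
    while not should_release(get_hand_accel(), get_pose_error()):
        time.sleep(0.05)                     # poll the sensors at ~20 Hz
    open_gripper()
    tuck_arm()
```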
  • 31. Receiving an Object from a Human ● Initial simple process is similar to giving: 1. Holds out the robot's empty hand with an open gripper 2. Uses text-to-speech to ask for the object 3. Grasps the object when the previous conditions are met (accelerometer or hand being forced back) ● This worked relatively well, but failed in some common cases
  • 32. Receiving an Object from a Human ● Most common failure: people did not actually push the object into the hand; they just placed it in the gripper – The solution was to use the forearm camera to detect when significant motion occurred in front of the hand ● Despite this, sometimes the gripper closed when no object was present, or the gripper slipped off the object – The solution was to check whether the gripper had fully closed (indicating no object) and, if it had, try the whole process again
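
A schematic sketch of the repaired receiving loop: wait for motion in front of the hand (forearm camera) or a tug/jolt, close the gripper, and retry from the top if the gripper closes on nothing. The thresholds and helper callables are hypothetical stand-ins for the camera, accelerometer, and gripper interfaces:

```python
import time

def receive_object(say, motion_in_front_of_hand, tug_or_jolt_detected,
                   close_gripper, open_gripper, gripper_aperture,
                   min_aperture=0.01, max_attempts=5):
    """Returns True once something is actually held; all callables are hypothetical."""
    for _ in range(max_attempts):
        say("Please hand me the object.")
        open_gripper()
        while not (motion_in_front_of_hand() or tug_or_jolt_detected()):
            time.sleep(0.05)                  # wait until a hand-off is actually happening
        close_gripper()
        if gripper_aperture() > min_aperture:  # gripper stopped on something: object acquired
            return True
        # Gripper closed fully, so nothing was grasped: try the whole process again.
    return False
```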
  • 33. Coffee Grasping and Return Video ● Observe money passing and coffee grasping process ● Returning to our lab with the coffee by doing all the previous navigation steps backwards ● Giving the coffee to the faculty advisor
  • 34. Lessons Learned ● It is extremely fun and exciting to do this sort of work ● It is impractical in the current state – the robot is too slow, requires too much attention and is too expensive – However, it could still be helpful for disabled people or for artistic purposes ● Computer vision can be useful but is often unreliable ● Simple heuristic approaches often exceed the performance and especially the reliability of complex math-heavy algorithms – In addition, such approaches are more predictable and easier to understand and thus far easier to maintain ● Don't be afraid to fail, don't be afraid to retry – Your robot is not going to work every time – it gets exponentially harder to increase reliability as you go up from 90% to 99%, 99.9% etc. ● A better approach is to detect when the robot fails and simply try again
  • 35. Future Work ● Robotics software keeps being reinvented to perform the same tasks with different robots – Need robust development model that allows sharing of code and building off each coder's work – Critical because of the large amount of software needed for personal robotics ● I want to develop a next-generation approach to handling this problem – Develop an easy-to-use framework for specifying what each robotic software application does so multiple applications can be automatically integrated
  • 36. Questions Tony Pratkanis sr.stanford.edu
