
Robot Localisation: An Introduction - Luis Contreras 2020.06.09 | RoboCup@Home Education



RoboCup@Home Education
Online Classroom: Invited Lecture Series

= Robot Localisation: An Introduction =
Speaker: Luis Contreras | Tamagawa University
Date and Time:
- June 09, 2020 (Tue) 09:00~11:00 (GMT+8 China/Malaysia)
- June 08, 2020 (Mon) 21:00~23:00 (EDT New York)
- June 08, 2020 (Mon) 03:00~05:00 (CEST Italy/France)

https://www.robocupathomeedu.org/learn/online-classroom/invited-lecture-series



  1. Robot Localisation: An Introduction Speaker: Luis Contreras | Tamagawa University Time: June 09, 2020 (Tue) 09:00~11:00 (GMT+8) https://www.robocupathomeedu.org/learn/online-classroom/invited-lecture-series RoboCup@Home Education ONLINE CLASSROOM Invited Lecture Series Highlights ● Probabilistic methods in robot localisation ● Probabilistic models for robot motion and particle filters Luis Contreras received his Ph.D. in Computer Science at the Visual Information Laboratory, in the Department of Computer Vision, University of Bristol, UK. Currently, he is a research fellow at the Advanced Intelligence & Robotics Research Center, Tamagawa University, Japan. He has also been an active member of the Bio-robotics Laboratory at the Faculty of Engineering, National Autonomous University of Mexico, Mexico. He has been working on service robots and has tested his latest results at the RoboCup and similar robot competitions for the last ten years.
  2. RoboCup@Home Education | www.RoboCupatHomeEDU.org Robot Localisation: An Introduction ● Speaker: Luis Contreras | Tamagawa University ● Host: Jeffrey Tan | @HomeEDU ● Date and Time: ○ June 09, 2020 (Tue) 09:00~11:00 (GMT+8 China/Malaysia) ○ June 08, 2020 (Mon) 21:00~23:00 (EDT New York) ○ June 08, 2020 (Mon) 03:00~05:00 (CEST Italy/France) ○ Web: https://www.robocupathomeedu.org/learn/online-classroom/invited-lecture-series ** Privacy reminder: Video will be recorded and published online ** RoboCup@Home Education Online Classroom 2
  3. RoboCup@Home Education | www.RoboCupatHomeEDU.org RoboCup@Home Education is an educational initiative in RoboCup@Home that promotes educational efforts to boost RoboCup@Home participation and artificial intelligence (AI)-focused service robot development. Under this initiative, there are currently four efforts in active operation: 1. RoboCup@Home Education Challenge events (national, regional, international) 2. Open Source Educational Robot Platforms for RoboCup@Home (service robotics) 3. OpenCourseWare for the learning of AI-focused service robot development 4. Outreach Programs (local workshops, international academic exchanges, etc.) Web: https://www.robocupathomeedu.org/ FB: https://www.facebook.com/robocupathomeedu/ RoboCup@Home Education 3
  4. RoboCup@Home Education | www.RoboCupatHomeEDU.org Special Online Challenge Tracks ● Open Platform Online Classroom [EN] ● Open Platform Online Classroom [CN] ● Standard Platform Pepper 2.9 Online Classroom [EN] ● Standard Platform Pepper 2.5 Online Classroom [CN] More details: https://www.robocupathomeedu.org/learn/online-classroom Invited Lecture Series ● Robotics Development with MATLAB [EN] ● Robot Localisation: An Introduction [EN] ● World Representation Through Artificial Neural Networks: An Introduction [EN] ● ROS with AI [TH] Regular Online Classroom Tracks ● Introduction to Service Robotics [EN] ○ 6 weeks ○ ROS, Python ○ Speech, Vision, Navigation, Arm RoboCup@Home Education Online Classroom 4
  5. RoboCup@Home Education | www.RoboCupatHomeEDU.org Luis Contreras | Tamagawa University 5 Luis Contreras received his Ph.D. in Computer Science at the Visual Information Laboratory, in the Department of Computer Vision, University of Bristol, UK. Currently, he is a research fellow at the Advanced Intelligence & Robotics Research Center, Tamagawa University, Japan. He has also been an active member of the Bio-robotics Laboratory at the Faculty of Engineering, National Autonomous University of Mexico, Mexico. He has been working on service robots and has tested his latest results at the RoboCup and similar robot competitions for the last ten years.
  6. tamagawa.jp Robot Localisation: An Introduction Luis Angel Contreras-Toledo, PhD Advanced Intelligence and Robotics Research Center Tamagawa University https://aibot.jp/ 2020
  7. tamagawa.jp Content • Robot Localisation: An Introduction • An introduction to Robot Vision
  8. tamagawa.jp Motivation: metric map, topological map, probabilistic map, symbolic map. [Figure: example maps, with wall and floor regions labelled.]
  9. tamagawa.jp Motivation: global path and local path. [Figure.]
  10. tamagawa.jp Localisation: initial state $s_0$ in map $m$.
  11. tamagawa.jp Localisation: state $s_0$, map $m$, control $u_1$.
  12. tamagawa.jp Localisation: $s_1 = f(s_0, u_1)$, map $m$, control $u_1$.
  13. tamagawa.jp Localisation: $p(s_1 \mid s_0, u_1)$, map $m$, $u_1 \sim N(\mu, \sigma)$ ?
  14. tamagawa.jp Localisation: $p(s_1 \mid s_0, u_1)$, map $m$, $u_1 \sim N(\mu, \sigma)$, measurement $z_1$.
  15. tamagawa.jp Localisation: $p(s_1 \mid u_1, z_1)$, map $m$, $u_1 \sim N(\mu, \sigma)$, $z_1 \sim N(\mu, \sigma)$.
  16. tamagawa.jp Localisation: map $m$, control $u_2$, starting from $p(s_1 \mid u_1, z_1)$.
  17. tamagawa.jp Localisation: map $m$, $u_2$, $z_2$, posterior $p(s_2 \mid u_1, u_2, z_1, z_2)$.
  18. tamagawa.jp Localisation: Given a map $m$, with $u_i \sim N(\mu, \sigma)$ and $z_i \sim N(\mu, \sigma)$, at time $T$ we have $S_T = \{s_0, s_1, s_2, \dots, s_T\}$, $U_T = \{u_1, u_2, u_3, \dots, u_T\}$, $Z_T = \{z_1, z_2, z_3, \dots, z_T\}$. The localisation problem is then defined as $p(S_T \mid U_T, Z_T, m)$.
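(A note not on the slides, for context: this posterior is usually computed recursively, one motion and one measurement at a time; this is the standard Bayes filter from Thrun et al.'s "Probabilistic Robotics", cited later in the deck, which the particle filter below approximates with weighted samples:
$p(s_t \mid U_t, Z_t, m) \;\propto\; p(z_t \mid s_t, m) \int p(s_t \mid s_{t-1}, u_t)\, p(s_{t-1} \mid U_{t-1}, Z_{t-1}, m)\, \mathrm{d}s_{t-1}$ )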
  19. tamagawa.jp Error model: the robot pose is $s_t = (x_t, y_t, \theta_t)^T$, in a global frame with origin $o$ and axes $x$, $y$.
  20. tamagawa.jp Error model: given the control $u_{t+1} = (d, \alpha)$, the ideal motion model is $s_{t+1} = (x_{t+1}, y_{t+1}, \theta_{t+1})^T = (x_t + d\cos\theta_{t+1},\; y_t + d\sin\theta_{t+1},\; \theta_t + \alpha)^T$ (here highlighting the rotation $\alpha$).
  21. tamagawa.jp Error model: the same motion model, highlighting the travelled distance $d$.
  22. tamagawa.jp Error model: with noisy actuators the control becomes $u_{t+1} = (d + \varepsilon, \alpha + \varphi)$, so $s_{t+1} \approx (x_t + (d + \varepsilon)\cos\theta_{t+1},\; y_t + (d + \varepsilon)\sin\theta_{t+1},\; \theta_t + \alpha + \varphi)^T$.
  23. tamagawa.jp Error model [Figure: the original position of the robot, and the distribution of positions after several trials.]
  24. tamagawa.jp Error model [Figure: the distribution of positions after 10 steps.]
  25. tamagawa.jp Error model: position error and orientation error. [Figure: runs from START to GOAL under each error type, with zero-mean Gaussians of standard deviation $\sigma_\varepsilon$ and $\sigma_\varphi$.]
  26. tamagawa.jp Error model: pose (i.e. position and orientation) error. With $u_{t+1} = (d + \varepsilon, \alpha + \varphi)$, where $\varepsilon = 0 + \sigma_\varepsilon \cdot \mathrm{randn}(1,1)$ is a random Gaussian number with $\mu = 0$ and $\sigma = \sigma_\varepsilon$, and $\varphi = 0 + \sigma_\varphi \cdot \mathrm{randn}(1,1)$ is a random Gaussian number with $\mu = 0$ and $\sigma = \sigma_\varphi$, the update is $s_{t+1} = (x_{t+1}, y_{t+1}, \theta_{t+1})^T = (x_t + (d + \varepsilon)\cos\theta_{t+1},\; y_t + (d + \varepsilon)\sin\theta_{t+1},\; \theta_t + \alpha + \varphi)^T$.
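A minimal sketch of this noisy motion step in C++ (my own illustration, not from the slides; std::normal_distribution plays the role of randn):

#include <cmath>
#include <random>

struct Pose { double x, y, theta; };

// One noisy motion step: the control (d, alpha) is corrupted by zero-mean
// Gaussian noise with standard deviations sigma_eps (distance) and
// sigma_phi (rotation), exactly as on slide 26.
Pose sampleMotion(const Pose& s, double d, double alpha,
                  double sigma_eps, double sigma_phi, std::mt19937& rng) {
    std::normal_distribution<double> noise_d(0.0, sigma_eps);
    std::normal_distribution<double> noise_a(0.0, sigma_phi);
    const double dn = d + noise_d(rng);      // d + epsilon
    const double an = alpha + noise_a(rng);  // alpha + phi
    Pose s1;
    s1.theta = s.theta + an;                 // theta_{t+1} = theta_t + alpha + phi
    s1.x = s.x + dn * std::cos(s1.theta);
    s1.y = s.y + dn * std::sin(s1.theta);
    return s1;
}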
  27. tamagawa.jp Error model: sensor error. The error in a reported distance can be modelled as a probability function, e.g. a Gaussian distribution. Given a reading $z = r$ and a distance to the obstacle $d$, let $x = |r - d|$; then $P(x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\, e^{-\frac{x^2}{2\sigma_z^2}}$
  28. tamagawa.jp Error model: sensor error, where, for a number of readings $z = \{r_1, r_2, \dots, r_n\}$, $\sigma_z = \sqrt{\frac{\sum_{i=1}^{n} (r_i - \bar r)^2}{n - 1}}$
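The same sensor model in code, as a sketch under the slide's assumptions (a single Gaussian, with the sample standard deviation computed from repeated readings):

#include <cmath>
#include <numeric>
#include <vector>

// Sample standard deviation of n repeated readings of the same obstacle
// (note the n - 1 denominator, as on slide 28; assumes n >= 2).
double sigmaZ(const std::vector<double>& r) {
    const double mean = std::accumulate(r.begin(), r.end(), 0.0) / r.size();
    double ss = 0.0;
    for (double ri : r) ss += (ri - mean) * (ri - mean);
    return std::sqrt(ss / (r.size() - 1));
}

// Gaussian probability of the error x = |reading - true distance|.
double sensorLikelihood(double z, double d, double sigma_z) {
    const double pi = 3.14159265358979323846;
    const double x = z - d;
    return std::exp(-x * x / (2.0 * sigma_z * sigma_z))
         / std::sqrt(2.0 * pi * sigma_z * sigma_z);
}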
  29. tamagawa.jp A probabilistic robot: the belief starts as a uniform distribution; after one measurement, the uncertainty is centred around the possible locations. Images from S. Thrun et al., "Probabilistic Robotics", MIT Press, 2005.
  30. tamagawa.jp After moving to the right, the uncertainty is propagated along with the motion; after a further measurement the uncertainty reduces, and so on...
  31. tamagawa.jp The weighted particle representation: each particle is a pair $(s_t, w_t)$, where $s_t = (x_t, y_t, \theta_t)^T$. [Figure: a particle in map $m$ with heading $\theta_t$ and reading $z_t$ towards a wall with corners $c_1$, $c_2$, at distance $d_1$.]
  32. tamagawa.jp Key concepts: Bayes formula $P(s_i \mid z) = \frac{P(z \mid s_i)\, P(s_i)}{P(z)} = \frac{\text{likelihood} \cdot \text{prior}}{\text{evidence}}$
  33. tamagawa.jp Key concepts: the probability $P(S = s_i) = P(s_i)$ that the random variable $S$ takes on the value $s_i$. The prior (probability distribution) $P(s_i)$ models uncertainty before new data is collected. The likelihood $P(z \mid s_i)$ is the probability that the sensor measurement takes on the value $z$ given that the robot is at pose $s_i$. The posterior (probability distribution) $P(s_i \mid z)$ expresses uncertainty after the measurement.
  34. tamagawa.jp Key concepts: Bayes formula. Suppose a robot can detect whether a door is open. If it gets a measurement $z = d \pm \sigma_z$, what is $P(\text{open} \mid z)$? [Figure: the error model around $d$ with width $\sigma_z$.]
  35. tamagawa.jp Key concepts: Bayes formula. $P(\text{open} \mid z)$ is diagnostic; $P(z \mid \text{open})$ is causal (it counts frequency). Often, causal knowledge is easier to obtain. $P(\text{open} \mid z) = \frac{P(z \mid \text{open})\, P(\text{open})}{P(z)}$
  36. tamagawa.jp Key concepts: Example. $P(z \mid \text{open}) = 0.6$, $P(z \mid \neg\text{open}) = 0.3$, $P(\text{open}) = P(\neg\text{open}) = 0.5$. Then $P(\text{open} \mid z) = \frac{P(z \mid \text{open})\, P(\text{open})}{P(z)} = \frac{P(z \mid \text{open})\, P(\text{open})}{P(z \mid \text{open}) P(\text{open}) + P(z \mid \neg\text{open}) P(\neg\text{open})}$
  37. tamagawa.jp Key concepts: Example. $P(\text{open} \mid z) = \frac{0.6 \cdot 0.5}{0.6 \cdot 0.5 + 0.3 \cdot 0.5} = 0.67$
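A quick check of this arithmetic (illustration only):

#include <cstdio>

int main() {
    const double pz_open = 0.6, pz_closed = 0.3, p_open = 0.5;
    // Bayes formula with the total-probability expansion of P(z).
    const double posterior = pz_open * p_open
                           / (pz_open * p_open + pz_closed * (1.0 - p_open));
    std::printf("P(open | z) = %.2f\n", posterior);  // prints 0.67
    return 0;
}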
  38. tamagawa.jp The weighted particle representation: for a particle $(s_t, w_t)$ in map $m$, with $s_t = (x_t, y_t, \theta_t)^T$, the expected range to the wall with corners $c_1$, $c_2$ is $r_t = \frac{(c_{1,y} - c_{2,y})(c_{2,x} - x_t) - (c_{1,x} - c_{2,x})(c_{2,y} - y_t)}{(c_{1,y} - c_{2,y})\cos\theta_t - (c_{1,x} - c_{2,x})\sin\theta_t}$
  39. tamagawa.jp The weighted particle representation: get the likelihood of $z$ given the ground-truth range $r_t$ at state $s_t$: $P(z \mid s_t) \propto \frac{1}{\sqrt{2\pi\sigma_z^2}}\, e^{-\frac{(z - r_t)^2}{2\sigma_z^2}}$. The weight of a particle might be calculated as $w \propto P(z \mid r)$; to avoid some particles disappearing too quickly, we can add a damping factor $k$: $w \propto P(z \mid r) + k$
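A sketch of slides 38-39 in code (my own illustration; note the formula gives the range to the infinite line through c1 and c2, so a full implementation would also check that the hit point lies between the two corners and that the denominator is not near zero):

#include <cmath>

struct Pose { double x, y, theta; };
struct Point { double x, y; };

// Expected (virtual) range r_t from pose s along its heading to the wall
// through corners c1, c2 (slide 38's formula).
double expectedRange(const Pose& s, Point c1, Point c2) {
    const double num = (c1.y - c2.y) * (c2.x - s.x)
                     - (c1.x - c2.x) * (c2.y - s.y);
    const double den = (c1.y - c2.y) * std::cos(s.theta)
                     - (c1.x - c2.x) * std::sin(s.theta);
    return num / den;  // caller must handle den ~ 0 (ray parallel to wall)
}

// Particle weight: Gaussian likelihood of reading z given r_t, plus a
// small damping term k so no particle's weight collapses to zero.
double particleWeight(double z, double rt, double sigma_z, double k) {
    const double e = z - rt;
    return std::exp(-e * e / (2.0 * sigma_z * sigma_z)) + k;
}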
  40. tamagawa.jp Particle filter localisation: use a particle distribution to represent the uncertainty of the robot's position and orientation (its state). Each particle is a hypothesis of the state of the robot, and the particle's weight indicates the credibility of that hypothesis. Particle propagation after robot motion accounts for the uncertainty in the actuators, while the particles' weights account for the sensor's uncertainty.
  41. tamagawa.jp Particle filter localisation: also known as Monte Carlo filters, Condensation, or factored sampling, this method probabilistically estimates where the robot is. It is a Bayesian estimator, and can also be considered an evolutionary algorithm, since the fittest individuals (particles) survive.
  42.–45. tamagawa.jp Particle filter localisation [Figure sequence: particles in map $m$ across successive steps.]
  46. tamagawa.jp Particle filter localisation: remember Bayes formula, $P(s_i \mid z) = \frac{P(z \mid s_i)\, P(s_i)}{P(z)}$. Considering $P(s_i)$ and $P(z)$ constant for every particle, $P(s_i \mid z) \propto w_i$. Normalising over all particles: $w_i = \frac{w_i}{\sum_{j=1}^{N} w_j}$
  47.–52. tamagawa.jp Particle filter localisation [Figure sequence continues: particles in map $m$ over further motion and measurement steps.]
  53. tamagawa.jp Particle filter localisation: given particles $(s_i, w_i)$, where $s_i = (x_i, y_i, \theta_i)^T$, in general the state estimate $\hat s_t$ can be given by $\hat s_t = \sum_{i=1}^{N} s_i w_i$
  54. tamagawa.jp Particle filter localisation: 0. Spread particles uniformly in the virtual map. 1. Motion prediction: move the real robot and each particle inside the map. 2. Particle update: take a measurement with the real robot and weight the particles according to the virtual readings from each particle inside the virtual world. 3. Re-sampling: particles with a better match between the real and virtual measurements get a higher weight. 4. Go to Step 1 unless the robot is lost; in that case, go to Step 0.
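One possible shape of this loop in C++ (a sketch, not the lecture's implementation; the virtual reading is abstracted as a map-dependent callback, and the noise and weight code follow the earlier sketches):

#include <cmath>
#include <functional>
#include <random>
#include <utility>
#include <vector>

struct Pose { double x, y, theta; };
struct Particle { Pose s; double w; };

// One iteration of Steps 1-3 above. virtualRange() returns what the sensor
// *would* read from a given pose in the virtual map (e.g. slide 38's formula).
void particleFilterStep(std::vector<Particle>& particles,
                        double d, double alpha,             // control u_t
                        double z,                           // real reading z_t
                        double sigma_eps, double sigma_phi,
                        double sigma_z, double k,
                        const std::function<double(const Pose&)>& virtualRange,
                        std::mt19937& rng) {
    std::normal_distribution<double> ne(0.0, sigma_eps), na(0.0, sigma_phi);

    // Step 1 - motion prediction with sampled actuator noise (slide 26).
    for (auto& p : particles) {
        const double dn = d + ne(rng), an = alpha + na(rng);
        p.s.theta += an;
        p.s.x += dn * std::cos(p.s.theta);
        p.s.y += dn * std::sin(p.s.theta);
    }

    // Step 2 - weight by likelihood of z given each virtual reading, with
    // damping factor k (slide 39), then normalise (slide 46).
    std::vector<double> w(particles.size());
    double wsum = 0.0;
    for (size_t i = 0; i < particles.size(); ++i) {
        const double e = z - virtualRange(particles[i].s);
        w[i] = std::exp(-e * e / (2.0 * sigma_z * sigma_z)) + k;
        wsum += w[i];
    }
    for (size_t i = 0; i < particles.size(); ++i)
        particles[i].w = w[i] / wsum;

    // Step 3 - re-sampling: survivors are drawn in proportion to weight.
    std::discrete_distribution<size_t> pick(w.begin(), w.end());
    std::vector<Particle> next(particles.size());
    for (auto& n : next) n = particles[pick(rng)];
    particles = std::move(next);
}

// Pose estimate as the weighted mean over particles (slide 53; naive for
// angles near +/- pi, where a circular mean would be needed).
Pose estimate(const std::vector<Particle>& particles) {
    Pose e{0.0, 0.0, 0.0};
    for (const auto& p : particles) {
        e.x += p.w * p.s.x;
        e.y += p.w * p.s.y;
        e.theta += p.w * p.s.theta;
    }
    return e;
}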
  55. tamagawa.jp Particle filter localisation
  56. tamagawa.jp Content • Robot Localisation: An Introduction • An introduction to Robot Vision
  57. tamagawa.jp An introduction to Robot Vision: We consider robot vision a crucial skill for a service robot to meet its expectations, and therefore in this work we present a tutorial on computer vision for robotic applications, so that new students have a clear idea of where and how to start. We first present the basic concepts of image publishers and subscribers in ROS, then apply some basic commands to introduce the students to digital image processing theory; finally, we present some RGBD and point cloud notions and applications.
  58. tamagawa.jp Install: Go to https://gitlab.com/trcp/introvision and follow the instructions there. Basically, create a ROS workspace:
$ cd ~
$ mkdir -p erasers_ws/src
$ cd erasers_ws
$ catkin_make
and clone the repository:
$ cd ~/erasers_ws/src
$ git clone https://gitlab.com/trcp/introvision.git
$ cd ..
$ catkin_make
  59.–65. tamagawa.jp Installation [Screenshots: follow the instructions at https://gitlab.com/trcp/introvision]
  66. tamagawa.jp Image Publishers and Subscribers in ROS: We present a series of steps so that learners can start programming in the ROS environment while they learn the ROS concepts. The templates provided here can serve as a basic platform for the more complex lessons or projects they develop after finishing all the lessons.
Publisher:
ros::NodeHandle nh;
image_transport::ImageTransport it(nh);
image_transport::Publisher pub = it.advertise("camera/image", 1);
Subscriber:
ros::NodeHandle nh;
image_transport::ImageTransport it(nh);
image_transport::Subscriber sub = it.subscribe("camera/image", 1, callback_image);
void callback_image(const sensor_msgs::ImageConstPtr& msg){ … }
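Assembled into a complete minimal node, the subscriber fragment might look like this (my own sketch; the topic name comes from the fragment above, while the node name image_listener is an illustrative choice):

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <sensor_msgs/Image.h>

// Called once per incoming image on the subscribed topic.
void callback_image(const sensor_msgs::ImageConstPtr& msg) {
    ROS_INFO("Received %ux%u image, encoding %s",
             msg->width, msg->height, msg->encoding.c_str());
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "image_listener");
    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    image_transport::Subscriber sub =
        it.subscribe("camera/image", 1, callback_image);
    ros::spin();  // process callbacks until shutdown
    return 0;
}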
  67. tamagawa.jp RGB Image Processing with OpenCV and ROS: We understand an image as a 2D array, or matrix, where each element in the array (also known as a pixel) has a color value. We use three color channels per element: Red, Green, and Blue. The origin of this image matrix is at the top-left corner; column values increase from left to right, while row values increase from top to bottom.
  68. tamagawa.jp RGB Image Processing with OpenCV and ROS: We introduce the students to the basic elements in an image and how to apply some built-in OpenCV functions. Finally, we show them how to perform their own operations by accessing the pixel elements of the image.
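As a small illustration of this pixel-level access (my own example, not the repository's; note that OpenCV loads color images in B, G, R channel order):

#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("image.png");  // BGR by default in OpenCV
    if (img.empty()) return 1;
    for (int row = 0; row < img.rows; ++row) {
        for (int col = 0; col < img.cols; ++col) {
            // Row index first, column index second, matching the
            // top-left-origin convention described above.
            cv::Vec3b& px = img.at<cv::Vec3b>(row, col);
            px[2] = 255 - px[2];            // invert the red channel
        }
    }
    cv::imwrite("out.png", img);
    return 0;
}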
  69. tamagawa.jp Point Cloud Processing with ROS: We present an introduction to point cloud data in ROS and propose a simple task where the students track a person moving in front of an RGBD camera mounted on a mobile robot. We start by introducing what a depth image is and how to interpret it.
  70. tamagawa.jp Point Cloud Processing with ROS: Then, we introduce some concepts on point clouds of 3D points and how to use them to perform the target task, where we divide the 3D space into a series of 2D planes so that the student can interpret and select the appropriate information to perform the task at hand.
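A sketch of the underlying geometry (not from the slides): each depth pixel back-projects to a 3D point through the standard pinhole model; the intrinsics fx, fy, cx, cy below are illustrative values for a 640x480 RGBD camera:

#include <cstdio>

struct Point3 { double x, y, z; };

// Back-project pixel (u, v) with depth in metres to a 3D point in the
// camera frame, using pinhole intrinsics fx, fy (focal lengths) and
// cx, cy (principal point).
Point3 depthToPoint(int u, int v, double depth,
                    double fx, double fy, double cx, double cy) {
    Point3 p;
    p.z = depth;                   // distance along the optical axis
    p.x = (u - cx) * depth / fx;   // back-project column index
    p.y = (v - cy) * depth / fy;   // back-project row index
    return p;
}

int main() {
    Point3 p = depthToPoint(320, 240, 1.5, 525.0, 525.0, 319.5, 239.5);
    std::printf("(%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
    return 0;
}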
  71. tamagawa.jp Summary: In this work we have given newcomers to computer vision and robotics a short guide, with a number of examples and exercises that they can use to solve the proposed tasks and extend to their own applications. Moreover, thanks to the provided series of rosbags, they do not need a real robot to start thinking about robot vision. We hope this work motivates them to continue in this field.
  72. tamagawa.jp Robot Localisation: An Introduction Luis Angel Contreras-Toledo, PhD Advanced Intelligence and Robotics Research Center Tamagawa University https://aibot.jp/ 2020
  73. Web: https://www.robocupathomeedu.org/ FB: https://www.facebook.com/robocupathomeedu/ GitHub: https://github.com/robocupathomeedu/ Online Classroom: https://www.robocupathomeedu.org/learn/online-classroom Contact: oc@robocupathomeedu.org RoboCup@Home Education ONLINE CLASSROOM Invited Lecture Series
  74. RoboCup@Home Education ONLINE CLASSROOM Invited Lecture Series Luis Contreras received his Ph.D. in Computer Science at the Visual Information Laboratory, in the Department of Computer Vision, University of Bristol, UK. Currently, he is a research fellow at the Advanced Intelligence & Robotics Research Center, Tamagawa University, Japan. He has also been an active member of the Bio-robotics Laboratory at the Faculty of Engineering, National Autonomous University of Mexico, Mexico. He has been working on service robots and has tested his latest results at the RoboCup and similar robot competitions for the last ten years. World Representation Through Artificial Neural Networks Speaker: Luis Contreras | Tamagawa University Time: June 16, 2020 (Tue) 09:00~11:00 (GMT+8) https://www.robocupathomeedu.org/learn/online-classroom/invited-lecture-series Highlights ● Artificial neural networks and their application to object recognition ● Convolutional neural networks
