The Joy of SLAM

Talk from /dev/summer
Brief overview of Simultaneous Localisation and Mapping (SLAM), including a short introduction to localisation methods. Relates these methods to autonomous vehicles and touches on ethical concerns.


1. The Joy of SLAM. Samantha Ahern (@2standandstare), Centre for Computational Intelligence, De Montfort University
2. SLAM Presentation Plan. Simultaneous Localisation and Mapping (SLAM) is the core element of navigation systems for mobile robots and vehicles. In this talk I will discuss:
• how SLAM works,
• the main implementation methods, and
• examples of their applications.
I will also present my own work implementing a SLAM system on a small autonomous robot and the parallels with autonomous vehicles.
3. Where’s Johnny? An autonomous agent needs to know:
• about its environment, either from a pre-existing map or by creating a map as it explores, and
• where it is in relation to its environment: its pose (x, y, θ).
4. Types of SLAM
• Feature-based SLAM
• Pose-based SLAM
• Appearance-based SLAM
• Variants, including Active SLAM and Multi-robot SLAM.
E. Zamora and W. Yu, ‘Recent advances on simultaneous localization and mapping for mobile robots’, IETE Tech. Rev., vol. 30, no. 6, pp. 490–496, 2013.
5. Localisation techniques
• Histogram Filter
• Kalman Filter / Extended Kalman Filter
• Particle Filter
(Diagram: the belief → sense → move cycle.)
6. Histogram Filters
• Estimates discrete states
• Multi-modal distribution
• Formally: 0 ≤ P(x) ≤ 1 and Σᵢ P(xᵢ) = 1, where xᵢ is a grid cell and z a measurement
• Bayes’ rule: P(xᵢ | z) = P(z | xᵢ) P(xᵢ) / P(z)
• Total probability: P(z) = Σᵢ P(z | xᵢ) P(xᵢ)
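A minimal sketch of the sense (measurement) step above in Python. The world layout and the sensor-model values p_hit and p_miss are illustrative assumptions, not figures from the talk:

```python
# Discrete Bayes measurement update for a 1-D histogram filter.
# Each grid cell has a colour; a sensor reading z boosts cells
# whose colour matches and damps the rest, then we normalise.

def sense(prior, world, z, p_hit=0.6, p_miss=0.2):
    # Unnormalised posterior: P(x_i | z) <- P(z | x_i) P(x_i)
    posterior = [p * (p_hit if cell == z else p_miss)
                 for p, cell in zip(prior, world)]
    # Normalise so the distribution sums to 1 (divide by P(z))
    alpha = sum(posterior)
    return [p / alpha for p in posterior]

world = ['green', 'red', 'red', 'green', 'green']
prior = [0.2] * 5                     # uniform belief over 5 cells
belief = sense(prior, world, 'red')
print(belief)                         # red cells now hold most of the mass
```

After one reading of ‘red’, each red cell carries probability 1/3 and each green cell 1/9.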
7. Histogram Filters cont’d
• Position, raw probability: P(xᵢ | z) ← P(z | xᵢ) P(xᵢ), with α = Σᵢ P(xᵢ | z)
• Position, normalised probability: P(xᵢ | z) = (1/α) P(z | xᵢ) P(xᵢ)
• Motion, from the theorem of total probability: P(xᵢᵗ) = Σⱼ P(xⱼᵗ⁻¹) P(xᵢ | xⱼ), where t is time and i, j index grid cells
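The motion step can be sketched as a shift of the belief histogram. The exact, noise-free motion model below (probability 1 of moving u cells on a cyclic world) is an assumed simplification; a real model would spread mass over neighbouring cells:

```python
# Motion update by total probability on a cyclic 1-D grid:
# P(x_i, t) = sum_j P(x_j, t-1) P(x_i | x_j).
# With an exact motion model this sum collapses to a shift.

def move(belief, u):
    n = len(belief)
    # Cell i receives the mass that was u cells behind it
    return [belief[(i - u) % n] for i in range(n)]

belief = [0.0, 1.0, 0.0, 0.0, 0.0]
belief = move(belief, 2)              # move right by two cells
print(belief)                         # mass now at index 3
```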
8. Kalman Filters
• Estimates continuous states
• Uni-modal distribution (Gaussian)
• μ = mean, σ² = variance; a 1-D Gaussian is the pair (μ, σ²)
• f(x) = (1 / √(2πσ²)) exp(−(x − μ)² / (2σ²))
• Used for measurement and motion updates
9. KF: Update on measurement
Prior: μ, σ². Measurement: γ, r².
Update:
μ′ = (r²μ + σ²γ) / (r² + σ²)
σ²′ = 1 / (1/r² + 1/σ²)
10. KF: Update on motion
Prior: μ, σ². Motion: u, with noise r².
Update:
μ′ = μ + u
σ²′ = σ² + r²
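The two 1-D update rules above, sketched in Python. The numeric prior, measurement and motion values are made up for illustration:

```python
# 1-D Kalman filter updates.
# Measurement: the new mean is a precision-weighted average of the
# prior (mu, sigma2) and the measurement (gamma, r2); the variances
# combine like resistors in parallel, so uncertainty shrinks.
# Motion: means add, variances add, so uncertainty grows.

def kf_measure(mu, sigma2, gamma, r2):
    mu_new = (r2 * mu + sigma2 * gamma) / (r2 + sigma2)
    sigma2_new = 1.0 / (1.0 / r2 + 1.0 / sigma2)
    return mu_new, sigma2_new

def kf_move(mu, sigma2, u, r2):
    return mu + u, sigma2 + r2

mu, s2 = kf_measure(mu=10.0, sigma2=8.0, gamma=13.0, r2=2.0)
print(mu, s2)        # 12.4 1.6  (pulled towards the sharper measurement)
mu, s2 = kf_move(mu, s2, u=1.0, r2=2.0)
print(mu, s2)        # 13.4 3.6  (motion adds uncertainty)
```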
11. Kalman Filter Updates
Notation:
• X = estimate, P = uncertainty covariance
• F = state transition matrix, U = motion vector
• Z = measurement, H = measurement function
• R = measurement noise, I = identity matrix
Position update:
X′ = FX + U
P′ = F·P·Fᵀ
Measurement update:
Y = Z − H·X
S = H·P·Hᵀ + R
K = P·Hᵀ·S⁻¹
X′ = X + K·Y
P′ = (I − K·H)·P
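A sketch of the matrix-form updates for a tiny position-velocity model, written with plain Python lists so it stays self-contained. The choices of F, H, R, the initial P and the measurement sequence are illustrative assumptions in the style of the Udacity course listed in the references, not values from the talk:

```python
# Matrix Kalman filter on a 2-state model: x = [position, velocity].
# Small helpers stand in for a linear algebra library.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def inv1(S):                          # inverse of a 1x1 matrix
    return [[1.0 / S[0][0]]]

F = [[1.0, 1.0], [0.0, 1.0]]          # transition: position += velocity
H = [[1.0, 0.0]]                      # we measure position only
R = [[1.0]]                           # measurement noise
I = [[1.0, 0.0], [0.0, 1.0]]

X = [[0.0], [0.0]]                    # initial estimate
P = [[1000.0, 0.0], [0.0, 1000.0]]    # large initial uncertainty

for Z in ([[1.0]], [[2.0]], [[3.0]]):
    # Measurement update: Y = Z - HX, S = HPH' + R, K = PH'S^-1
    Y = sub(Z, matmul(H, X))
    S = add(matmul(matmul(H, P), transpose(H)), R)
    K = matmul(matmul(P, transpose(H)), inv1(S))
    X = add(X, matmul(K, Y))
    P = matmul(sub(I, matmul(K, H)), P)
    # Position update (U = 0 here): X' = FX, P' = FPF'
    X = matmul(F, X)
    P = matmul(matmul(F, P), transpose(F))

# The filter infers velocity ~1 from position readings 1, 2, 3
print(round(X[0][0], 2), round(X[1][0], 2))
```

After three readings the estimate converges towards position 4 and velocity 1, even though velocity is never measured directly.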
12. Particle Filters
• Easiest of the three filters to program
• Estimates continuous states
• Multi-modal distribution
• All calculations are approximate
• Level of efficiency is unclear
13. PF: Core concepts
• Particles consist of: x position, y position, direction
• N is the number of particles
(Diagram: each particle is a possible position of the robot. For each particle, the expected distance to each landmark is calculated; the mismatch between expected and actual measurements determines the particle’s weight. Higher-weighted particles are more likely to survive resampling.)
14. PF: Re-sampling – weights
Particle          Weight   Normalised weight
(x₁, y₁, d₁)      w₁       α₁ = w₁ / W
(x₂, y₂, d₂)      w₂       α₂ = w₂ / W
…                 …        …
(x_N, y_N, d_N)   w_N      α_N = w_N / W
where W = Σᵢ wᵢ, so that Σᵢ αᵢ = 1.
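The normalisation in the table can be written directly; the raw weight values here are illustrative:

```python
# Normalising importance weights: alpha_i = w_i / W with W = sum(w_i),
# which guarantees the normalised weights sum to 1.

weights = [0.4, 1.2, 0.1, 0.8, 0.5]   # raw (unnormalised) weights
W = sum(weights)
alphas = [w / W for w in weights]
print(alphas)                         # each particle's share of the total
```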
15. PF: Re-sampling – replacements
• Higher-weighted particles are more likely to survive resampling
• New particles are drawn with replacement, in proportion to the normalised weights α₁ … α_N (e.g. a draw might yield p₁, p₂, p₃, p₂, p₁).
16. PF: Re-sampling wheel
index = U[1 … N]
β = 0
for i = 1 … N:
    β ← β + U[0 … 2·w_max]
    while w_index < β:
        β ← β − w_index
        index ← (index + 1) mod N
    pick p_index
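The resampling wheel above, sketched in Python: spin around the weight “wheel” in random steps of up to 2·w_max; heavier particles occupy bigger arcs and so are picked proportionally more often. The particle labels and weights are illustrative:

```python
import random

def resample(particles, weights):
    n = len(particles)
    index = random.randrange(n)       # start at a random spoke
    beta = 0.0
    w_max = max(weights)
    out = []
    for _ in range(n):
        beta += random.uniform(0, 2 * w_max)
        # Walk forward until beta lands inside a particle's weight arc
        while weights[index] < beta:
            beta -= weights[index]
            index = (index + 1) % n
        out.append(particles[index])
    return out

random.seed(0)
particles = ['p1', 'p2', 'p3', 'p4']
weights = [0.1, 0.1, 0.7, 0.1]        # p3 should dominate the survivors
survivors = resample(particles, weights)
print(survivors)
```

Over many draws, each particle survives with frequency proportional to its weight, which is exactly the draw-with-replacement scheme of the previous slide.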
17. PF: Equations
• Measurement updates: P(x | z) ∝ P(z | x) P(x)
• Motion updates: P(x′) = Σ P(x′ | x) P(x)
(Diagram: particles → importance weights → re-sampling → new samples.)
18. Mapping
• Occupancy Grids
• Topological graphs
• Feature Map (vision data)
http://www.cs.cmu.edu/~motionplanning/papers/sbp_papers/integrated4/elfes_occup_grids.pdf
https://www.udacity.com/course/viewer#!/c-cs373/l-48696626/m-48701349
19. Occupancy Grids
• Occupancy grids use a random-field representation.
• Each cell in the grid stores a probabilistic estimate of the cell’s state.
• The estimate is obtained by integrating and interpreting data from multiple sensors of the same type, or from different, complementary sensor types.
• Occupancy grids can incorporate positional uncertainty into the mapping process.
http://www.cs.cmu.edu/~motionplanning/papers/sbp_papers/integrated4/elfes_occup_grids.pdf
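One standard way to maintain a cell’s probabilistic estimate under repeated readings is a log-odds update, common in the occupancy-grid literature; the inverse-sensor-model probabilities below are assumed for illustration and are not from the talk:

```python
import math

# Log-odds fusion for a single grid cell: each sensor reading adds
# its log-odds evidence, and the result converts back to a probability.

def logit(p):
    return math.log(p / (1.0 - p))

def fuse(prior_p, readings, p_occ_if_hit=0.7, p_occ_if_miss=0.3):
    l = logit(prior_p)
    for hit in readings:
        # Add evidence for (hit) or against (miss) occupancy
        l += logit(p_occ_if_hit if hit else p_occ_if_miss)
    # Convert log-odds back to a probability (logistic function)
    return math.exp(l) / (1.0 + math.exp(l))

# Three "occupied" returns and one "free" return against a 0.5 prior
p = fuse(0.5, [True, True, True, False])
print(round(p, 3))                    # 0.845
```

Additions in log-odds space make the update order-independent and cheap, which is why this form is favoured for grid maps.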
20. Open RatSLAM
• Inspired by the rodent hippocampal complex
• Hybrid method combining characteristics of feature-based, grid-based and topological SLAM techniques
• Consists of four nodes: Pose Cell Network, Local View Cells, Experience Map, and Visual Odometry (for image-only datasets)
• Developed by Queensland University of Technology
21. Open RatSLAM http://link.springer.com.libproxy.ucl.ac.uk/article/10.1007/s10514-012-9317-9/fulltext.html
22. RatSLAM – video
23. DELPHI Drive http://www.delphi.com/delphi-drive
24. DELPHI Car technology. The Delphi car is fitted with:
• Radar: 6 long-range radars, 4 360° radars
• 6 four-layer LiDARs
• Cameras: forward camera, HD camera, infra-red camera
• Plus: GPS antennae, wheel odometers
Where am I? What do I need to ‘see’?
25. Semi-Autonomous Vehicles: Platoons http://www.sartre-project.eu/en/Sidor/default.aspx
26. Autonomous vehicles – main difficulties
• Noisy data
• Incompleteness
• Dynamicity
• Discrete measurements in real-time
Key blocks: Perception, Decision, Control
27. Will / can it do the right thing? A hybrid agent architecture:
• Control System (low level): what it does
• Rational Agent (high level): why it does it
Together they form the autonomous system.
28. Verifying the Rational Agent
• External interactions: stochastic analysis
• Feedback control: differential equations
• Decision making: discrete logic
(Diagram: probabilistic, deterministic and non-deterministic behaviours over infinite and finite state spaces, linked via a finite abstraction.)
29. Essential elements: autonomous / semi-autonomous vehicles
• Sensors and perception
• Computing platforms & control systems
• Electrical architecture & network management
• Vehicle connectivity
• User experience
• Off-board (cloud) support & services
• Functional safety & cyber security
30. Conclusions
• 93% of road accidents are caused by human error
• Perception and decision making take place under uncertainty
• Bayesian estimators are used for localisation and mapping
• Interaction between driver and autonomous / semi-autonomous vehicle needs to be managed
• Interaction between autonomous, semi-autonomous and manual vehicles needs to be managed
• The same concepts are used by autonomous drones
31. References
• https://www.udacity.com/course/progress#!/c-cs373
• http://www.sartre-project.eu/en/Sidor/default.aspx
• http://www.delphi.com/delphi-drive
• J. Borenstein, H. R. Everett, L. Feng, and D. Wehe, ‘Mobile robot positioning: sensors and techniques’, J. Robot. Syst., vol. 14, no. 4, pp. 231–249, 1997.
• A. Elfes, ‘Using occupancy grids for mobile robot perception and navigation’, Computer, vol. 22, no. 6, pp. 46–57, Jun. 1989.
• E. Zamora and W. Yu, ‘Recent advances on simultaneous localization and mapping for mobile robots’, IETE Tech. Rev., vol. 30, no. 6, pp. 490–496, 2013.
• D. Ball, S. Heath, J. Wiles, G. Wyeth, P. Corke, and M. Milford, ‘OpenRatSLAM: an open source brain-based SLAM system’, Auton. Robots, vol. 34, no. 3, pp. 149–176, 2013.
• R. Smith, M. Self, and P. Cheeseman, ‘Estimating uncertain spatial relationships in robotics’, in Proc. 1987 IEEE International Conference on Robotics and Automation, 1987, vol. 4, p. 850.
32. Questions

Editor's Notes

• Feature-based SLAM:
This is the most popular approach to solving the SLAM problem. It uses predefined landmarks and an environment model to estimate the robot's current state (or the robot's path) and the map [1].

Pose-Based:
Only the robot's state trajectory is estimated, without landmark positions. The robot path is estimated using constraints imposed by the landmark positions or the raw laser (or visual) data.

Appearance-based:
It uses neither metric information nor landmark positions, and the robot path is not tracked in a metric sense. Visual images or spatial information are used to recognise the place. These appearance-based techniques are very commonly used to complement a metric SLAM method by detecting loop closures [7].

Active SLAM derives a control law for robot navigation in order to efficiently achieve a desired accuracy of the robot's location and the map [10]. Multi-robot SLAM uses many robots for large environments [11].
• Belief: Probability
Sense: Product followed by normalization

Histogram Filter:
Discrete state estimation; very rarely used

Kalman Filter / Extended Kalman Filter:
Used in feature-based SLAM

Particle filter:
Used in pose-based SLAM
• Multivariate Gaussians can be used to infer velocity from measurement updates
• For my dissertation project I am implementing a version in NXC using sonar sensors, translated from RobotC
• Uses a detailed map for driving in urban areas, but builds grids on highways
• Where am I? -> Localisation
What do I need to ‘see’? -> Vision, perception and mapping
• The project aims to encourage a step change in personal transport usage by developing environmental road trains called platoons.
Systems will be developed to facilitate the safe adoption of road trains on unmodified public highways, interacting with other traffic.
A scheme will be developed whereby a lead vehicle with a professional driver takes responsibility for a platoon. Following vehicles enter a semi-autonomous control mode that allows their drivers to do things that would normally be prohibited for reasons of safety: for example, operating a phone, reading a book or watching a movie.

Other research projects are working on fully autonomous versions: the first vehicle implements full SLAM, following vehicles perform localisation, with comms between vehicles?
• Who is in control? When should control be handed back? Should return to human control be refused?
Ethics? System can order options based on ethical priorities
Save humans >> save animals >> save property
• External Interactions: sensors / actuators
Feedback Control: control system etc.
Decision Making: rational agent
• From DELPHI