# Lecture 08: Localization and Mapping II

1. Introduction to Robotics: Localization and Mapping II
   - March 8, 2010
2. Last week's exercise: path planning
   - So far: signal processing, feedback control
   - New: algorithms
     - Dijkstra
     - A*
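The slide only names the two search algorithms. A minimal sketch of A* on a 4-connected occupancy grid might look like the following; the grid encoding (0 = free, 1 = obstacle), unit step costs, and Manhattan-distance heuristic are assumptions for illustration, not details from the lecture.

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected grid; returns a list of cells or None.
    Manhattan distance is admissible here because every step costs 1."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    visited = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nx, ny)), g + 1, (nx, ny), path + [(nx, ny)]))
    return None  # goal unreachable
```

With `h` replaced by a function that always returns 0, the same loop is exactly Dijkstra's algorithm, which is the relationship the two bullet points hint at.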
3. Localization
   - Odometry
   - GPS
   - Control input
   - Landmarks
   - Gyroscope
4. Localization
   - Local (relative): odometry, control input, gyroscope
   - Global (absolute): GPS, landmarks
5. Landmarks (figure; credit: R. Siegwart)
6. Landmarks (figure; credit: R. Siegwart)
7. Probabilistic Localization
8. Markov Localization
   - Discrete, finite number of possible poses (grid, topological map)
   - p(A): probability that A is true
   - p(A|B): probability that A is true given B
   - p(A ∧ B) = p(A|B) p(B)
9. Bayes' rule
   - p(A|B) = p(B|A) p(A) / p(B)
   - p(loc|sensing) = p(sensing|loc) p(loc) / p(sensing)
   - Example: I believe I am at location X and think I see a door. What is the likelihood that I am at X? The higher the likelihood of seeing a door at X, the higher the likelihood that I am at X.
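The door example can be made concrete as a two-hypothesis Bayes update. The sensor probabilities and prior below are illustrative assumptions, not values from the lecture.

```python
def bayes_update(prior, p_obs_given_loc, p_obs_given_not_loc):
    """p(loc|obs) = p(obs|loc) p(loc) / p(obs), where p(obs) is expanded
    by total probability over the two hypotheses 'at X' and 'not at X'."""
    p_obs = p_obs_given_loc * prior + p_obs_given_not_loc * (1.0 - prior)
    return p_obs_given_loc * prior / p_obs

# Assumed numbers: a door is seen at X with probability 0.8, and at
# other locations with probability 0.2; the prior belief in X is 0.5.
posterior = bayes_update(0.5, 0.8, 0.2)  # the belief in X rises to 0.8
```

As the slide says: the more likely a door sighting is at X, the more a door sighting raises the belief in X.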
10. Markov Localization
    - But I know more than that: I have an estimate of how much I moved and where I was before!
    - (Figure: a uniform prior belief p(l_{t-1}) = 0.33 / 0.33 / 0.33 over three cells becomes 0.1 / 0.7 / 0.2 after the action o_t; the exact numbers depend on the error model for o_t.)
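The figure's action update amounts to convolving the belief with a noisy motion model. A minimal 1D sketch: the 0.1 / 0.7 / 0.2 displacement probabilities echo the slide's example, but the cell layout and their interpretation as stay/undershoot/overshoot are assumptions.

```python
def action_update(belief, move_probs):
    """Shift probability mass across 1D grid cells according to a noisy
    odometry model; move_probs maps displacement (in cells) -> probability."""
    new_belief = [0.0] * len(belief)
    for i, b in enumerate(belief):
        for d, p in move_probs.items():
            j = i + d
            if 0 <= j < len(belief):
                new_belief[j] += b * p
    total = sum(new_belief)
    return [v / total for v in new_belief]  # renormalize mass clipped at borders

# Robot known to be in cell 0, commanded to move one cell; the odometry
# may undershoot (stay) or overshoot (two cells) -- assumed error model.
belief = action_update([1.0, 0.0, 0.0, 0.0], {0: 0.1, 1: 0.7, 2: 0.2})
```

Note that the action update spreads the belief out: motion uncertainty always makes the pose estimate less certain, and only sensing can sharpen it again.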
11. Markov Localization
    - Two-step process:
      - Action update, based on proprioception
      - Perception update, based on exteroception
    - p(loc|sensing) = p(sensing|loc) p(loc) / p(sensing)
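The perception update is the discrete Bayes rule applied cell by cell: multiply the belief by p(sensing|loc) and renormalize, the normalizer playing the role of p(sensing). A sketch with a hypothetical door sensor; the 0.9 / 0.1 likelihoods and the door positions are assumptions.

```python
def perception_update(belief, likelihood):
    """Weight each cell's belief by p(sensing | loc) and renormalize;
    dividing by the total implements the p(sensing) denominator."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Hypothetical: doors at cells 0 and 2; the sonar reports "door".
belief = perception_update([0.25, 0.25, 0.25, 0.25], [0.9, 0.1, 0.9, 0.1])
```

Alternating the action update and this perception update is the whole Markov localization loop: prediction spreads the belief, correction concentrates it.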
12. Example 1: topological map
    - Detect open/closed doors using sonar
    - p(n|i) ∝ p(i|n) p(n)
13. Example 1: topological map (continued; figure)
14. Example 2: grid map
    - The 3D pose (x, y, θ) leads to a 3D grid
    - Same approach for updating the belief using perception and action
15. Reducing the complexity of Markov Localization
    - Instead of maintaining a high-granularity belief state, perform random sampling
    - Problem: completeness
    - "Particle filter"
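A particle filter replaces the dense belief grid with a set of random samples that are moved, weighted, and resampled. A minimal 1D predict/weight/resample cycle; the stationary robot and the Gaussian sensor model peaked at x = 5 are an assumed illustration, not the lecture's setup.

```python
import math
import random

def particle_filter_step(particles, move, noise_std, likelihood):
    """One predict / weight / resample cycle for 1D localization."""
    # Action update: move every particle with sampled odometry noise.
    predicted = [x + move + random.gauss(0.0, noise_std) for x in particles]
    # Perception update: weight each particle by the measurement likelihood.
    weights = [likelihood(x) for x in predicted]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample with replacement, proportionally to the weights.
    return random.choices(predicted, weights=weights, k=len(particles))

# Assumed setup: robot holds still, sensor likelihood peaked at x = 5.
random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
sensor = lambda x: math.exp(-(x - 5.0) ** 2)
for _ in range(5):
    particles = particle_filter_step(particles, 0.0, 0.1, sensor)
# The particle cloud concentrates near x = 5.
```

The completeness problem from the slide is visible here: once no particle survives near the true pose, resampling can never recover it, which is why practical filters keep many particles or inject random ones.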
16. to 21. Example 3: grid map / particle filter (sequence of figures over successive scans, including scans 13 and 21; figures by W. Burgard)
22. Exercise: mapping and localization in RobotStadium
    - Topological map vs. grid map
    - Complete representation vs. particle filter
    - Localization sensor? Features?
    - Odometry (action update)?
23. Homework
    - Sections 5.7 and 5.8 (pages 244-256)
    - Next week: DESIGN REVIEW, 15 min per group