# What can software learn from robots

In almost every aspect of life there is a notion of state (for example, where are you right now?) and a notion of uncertainty (for example, how soon can you reach home?). And whenever there is uncertainty, there is a tool that can help you get a better estimate of that state: the Kalman filter, named after the Hungarian-American mathematician Rudolf Kálmán.


### What can software learn from robots

1. What can Software learn from Robots? Naman Kumar (© TartanSense)
2. Examples:
   - Autonomous Driving
   - Robot Localization
   - Project Management
3. Autonomous Driving. Source: https://www.economist.com/the-economist-explains/2015/05/12/how-does-a-self-driving-car-work
4. Robot Localization. Source: Probabilistic Robotics, Sebastian Thrun et al.
5. Project Management
   - Input: Jira tasks completed, unit tests passed, integration testing status, code coverage
   - Output: a precise estimate of the project's state
   - Also, the ability to predict accurately if and when you will meet the milestone
6. TartanSense (Agriculture Robotics)
7. TartanSense
8. Bayes Filter: terminology
   - X: state
   - Z: measurement
   - u: control/action
   - t: timestep
   (Diagram: the state chain X_0 → X'_1 → X_1 → ... → X'_t → X_t, driven by controls u_1 ... u_t and measurements Z_1 ... Z_t. Source: Probabilistic Robotics, Sebastian Thrun et al.)
9. Bayes Filter
   - Estimates a probability density function recursively over time, using controls, measurements, and some maths.
   - Algorithm bayes_filter(bel(x_{t-1}), u_t, z_t):
     - for all x_t do
       - bel'(x_t) = ∫ p(x_t | u_t, x_{t-1}) bel(x_{t-1}) dx_{t-1}
       - bel(x_t) = η p(z_t | x_t) bel'(x_t)
     - endfor
     - return bel(x_t)
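For a finite state space, the integral above becomes a sum and the whole algorithm fits in a few lines. The sketch below assumes a hypothetical three-cell corridor robot; the transition matrix and sensor model are invented illustrations, not from the slides:

```python
import numpy as np

def bayes_filter(belief, transition, likelihood):
    """One step of a discrete Bayes filter.

    belief:     prior bel(x_{t-1}), shape (n,)
    transition: p(x_t | u_t, x_{t-1}), shape (n, n), columns indexed by x_{t-1}
    likelihood: p(z_t | x_t), shape (n,)
    """
    predicted = transition @ belief   # bel'(x_t): sum over x_{t-1}
    updated = likelihood * predicted  # unnormalized posterior
    return updated / updated.sum()    # η normalizes back to a distribution

# Hypothetical 3-cell corridor: the robot moves right with prob 0.8, stays with 0.2,
# and cell 2 is a wall it cannot pass.
move_right = np.array([[0.2, 0.0, 0.0],
                       [0.8, 0.2, 0.0],
                       [0.0, 0.8, 1.0]])
belief = np.array([1.0, 0.0, 0.0])    # robot starts in cell 0
sensor = np.array([0.1, 0.8, 0.1])    # sensor says "probably cell 1"

belief = bayes_filter(belief, move_right, sensor)
print(belief)
```

After one predict/update cycle the belief concentrates on cell 1, because the motion model and the sensor agree.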
10. But what the heck is a Kalman Filter?
    - It is just a Bayes filter with two extra conditions: normally distributed (Gaussian) variables and linear state transitions.
    - To reiterate, it has two main steps:
      - PREDICT: generate estimates of the state variables, with some associated uncertainty.
      - UPDATE: incorporate the new measurements (or data) and update the estimates using a weighted average, with more weight given to the more certain estimates.
11. KALMAN FILTER I
12. Kalman Filter equations
    - Next-state probability: x_t = A_t x_{t-1} + B_t u_t + ε_t
    - Measurement probability: z_t = C_t x_t + δ_t
    - Algorithm kalman_filter(μ_{t-1}, Σ_{t-1}, u_t, z_t):
      - μ'_t = A_t μ_{t-1} + B_t u_t
      - Σ'_t = A_t Σ_{t-1} A_t^T + R_t
      - K_t = Σ'_t C_t^T (C_t Σ'_t C_t^T + Q_t)^{-1}
      - μ_t = μ'_t + K_t (z_t - C_t μ'_t)
      - Σ_t = (I - K_t C_t) Σ'_t
      - return μ_t, Σ_t
    - Intuitively, K = error in prediction / (error in prediction + error in measurement)
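The algorithm on this slide translates almost line for line into NumPy. This is a minimal sketch, not the author's implementation; the constant-velocity tracking example at the bottom is an invented illustration:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    # PREDICT: push the belief through the linear motion model
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # UPDATE: the Kalman gain K weights prediction against measurement
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Invented example: track a 1-D position/velocity state, measuring position only.
A = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity model, dt = 1
B = np.zeros((2, 1))                    # no control input in this example
C = np.array([[1.0, 0.0]])              # sensor observes position only
R = 0.01 * np.eye(2)                    # process noise covariance
Q = np.array([[1.0]])                   # measurement noise covariance

mu, Sigma = np.array([0.0, 1.0]), np.eye(2)
mu, Sigma = kalman_filter(mu, Sigma, np.array([0.0]), np.array([1.2]), A, B, C, R, Q)
# mu[0] ends up between the prediction (1.0) and the measurement (1.2),
# and the position variance shrinks below its predicted value.
```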
13. Shortcomings of a vanilla Kalman Filter
    - Works only for linear systems
    - Works only for Gaussian noise
    - Can compute beliefs only over continuous states
    - Real-life example: driving home
14. Extended Kalman Filter (EKF)
    - Overcomes the linearity assumption: the state and measurement probabilities are computed using nonlinear functions g and h:
      - x_t = g(u_t, x_{t-1}) + ε_t
      - z_t = h(x_t) + δ_t
    - Comparing EKF and KF, what will be the repercussions of this?
      - The belief is no longer Gaussian.
      - The EKF can compute only an approximation of the true belief, unlike the exact belief computed by the KF.
15. Extended Kalman Filter
    - The key here is the linearization: the EKF uses a first-order Taylor expansion.
    - Algorithm: see Probabilistic Robotics, Sebastian Thrun et al.
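One way to sketch the EKF's first-order Taylor linearization is with numerical Jacobians of g and h. The `jacobian` and `ekf_step` helpers and the range-to-origin measurement model below are assumed names and an invented example, not taken from the slides:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    # First-order Taylor linearization: numerical Jacobian of f at x.
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx)) - fx) / eps
    return J

def ekf_step(mu, Sigma, u, z, g, h, R, Q):
    # PREDICT through the nonlinear motion model g, linearized by G
    mu_bar = g(mu, u)
    G = jacobian(lambda x: g(x, u), mu)
    Sigma_bar = G @ Sigma @ G.T + R
    # UPDATE with the nonlinear measurement model h, linearized by H
    H = jacobian(h, mu_bar)
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(mu.size) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new

# Invented example: planar robot, control = (distance, heading), sensor = range to origin.
g = lambda x, u: x + u[0] * np.array([np.cos(u[1]), np.sin(u[1])])
h = lambda x: np.array([np.hypot(x[0], x[1])])

mu, Sigma = np.array([1.0, 0.0]), 0.1 * np.eye(2)
mu, Sigma = ekf_step(mu, Sigma, np.array([1.0, np.pi / 2]),
                     np.array([1.5]), g, h, 0.01 * np.eye(2), np.array([[0.04]]))
```

Analytic Jacobians are the usual choice in practice; the numerical version just makes the linearization step explicit.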
16. Where will the EKF fail?
    - The lost-robot problem (use a sum of Gaussians: the multi-hypothesis EKF, MHEKF). Source: Probabilistic Robotics, Sebastian Thrun et al.
17. Where will the EKF fail?
    - With a high degree of nonlinearity
    - With a high degree of uncertainty
    - Is there a better way to linearize? The Unscented Kalman Filter.
18. Unscented Kalman Filter
    - How can we linearize better?
    - In the EKF, we have one point (the mean) and we linearize the function around it.
    - In the UKF, we pick a set of points (the sigma points) and linearize around them.
    - When to use what?
      - If you have a linear system, use the Kalman Filter.
      - If you have a mildly nonlinear system, use the Extended Kalman Filter.
      - If you have a highly nonlinear system, use the Unscented Kalman Filter.
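The "set of points" the UKF propagates are the sigma points of the unscented transform. A minimal sketch follows; the `kappa` weighting scheme is a standard choice but assumed here, not taken from the slides:

```python
import numpy as np

def sigma_points(mu, Sigma, kappa=1.0):
    # 2n + 1 deterministically chosen points capturing the mean and covariance.
    n = mu.size
    L = np.linalg.cholesky((n + kappa) * Sigma)
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_transform(f, mu, Sigma, kappa=1.0):
    # Push the sigma points through f, then recover the mean and covariance of f(x).
    pts, w = sigma_points(mu, Sigma, kappa)
    ys = np.array([f(p) for p in pts])
    mu_y = w @ ys
    diff = ys - mu_y
    return mu_y, (w[:, None] * diff).T @ diff

# Sanity check on a linear function, where the transform is exact:
mu_y, Sigma_y = unscented_transform(lambda x: 2.0 * x, np.array([1.0]), np.array([[1.0]]))
```

For f(x) = 2x with mean 1 and variance 1, the transform recovers mean 2 and variance 4 exactly; for nonlinear f it gives a better approximation than a single Taylor expansion at the mean.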
19. Alternatives to Gaussian filters: non-parametric filters
    - Histogram Filters
    - Particle Filters
    (Source: Probabilistic Robotics, Sebastian Thrun et al.)
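A particle filter runs the same predict/update cycle non-parametrically: the belief is a cloud of samples rather than a Gaussian. A minimal 1-D sketch with an invented corridor example (the noise levels and sensor model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter_step(particles, u, z, motion_noise, likelihood):
    # PREDICT: move every particle through a noisy motion model
    particles = particles + u + rng.normal(0.0, motion_noise, size=particles.shape)
    # UPDATE: weight particles by how well they explain the measurement, then resample
    w = likelihood(z, particles)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Hypothetical 1-D robot: 1000 particles spread over a 10 m corridor,
# then a range sensor reads 5.0 m with sigma = 0.5 m.
gaussian = lambda z, p: np.exp(-0.5 * ((p - z) / 0.5) ** 2)
particles = rng.uniform(0.0, 10.0, size=1000)
particles = particle_filter_step(particles, u=0.0, z=5.0,
                                 motion_noise=0.1, likelihood=gaussian)
# The resampled cloud concentrates around the 5 m reading.
```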
20. Questions/Thoughts?
