AI for Robotics
Recursive State Estimation
DR. ABHISHEK SARKAR
MECHANICAL ENGG.
BITS-PILANI, HYDERABAD
Linear Odometry
• When a relation between motor power and velocity v has been
determined, the robot can compute the distance moved by s = vt.
• If the robot starts at the origin (0, 0)
and moves in a straight line at angle
θ with velocity v for time t, its new
position (x, y) is
x = vt cos θ
y = vt sin θ
[Figure: robot pose (x, y) at heading θ in the global frame XI–YI]
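The straight-line update can be checked directly in a few lines; the velocity, time, and heading below are illustrative values, not from the slides:

```python
import math

# Linear odometry: straight-line motion at heading theta for time t.
v = 0.2                      # velocity in m/s (illustrative value)
t = 5.0                      # elapsed time in s
theta = math.radians(30.0)   # heading angle

s = v * t                    # distance travelled: s = v t
x = s * math.cos(theta)      # x = v t cos(theta)
y = s * math.sin(theta)      # y = v t sin(theta)

print(x, y)  # roughly (0.866, 0.5) for s = 1 m
```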
Odometry with Turns
• Using the wheel encoders and the known wheel diameter d, we can
measure the distances moved by the left and right wheels.
• After t seconds, wheel i has moved
si = π d ωi t, where i = 1, 2
and ωi is the rotational speed of wheel i in revolutions per second.
• The distances s1 and s2 are obtained from the rotations of the wheels.
If the robot turns through an angle θ about a point, each wheel traces
an arc of that angle:
θ r1 = s1, θ r2 = s2
Subtracting and solving for θ, with l = r2 − r1 the distance between
the wheels:
θ = (s2 − s1)/(r2 − r1) = (s2 − s1)/l
[Figure: wheel arcs s1 and s2 about the turn center, with radii r1, r2 and turn angle θ]
Odometry with Turns
• The center is halfway between the wheels
rc = (r1 + r2)/2
• So the distance travelled is
sc = θrc
= θ(r1 + r2)/2
= (s1 + s2)/2
[Figure: center arc sc of radius rc through turn angle θ]
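A minimal sketch of these two formulas; the wheel arc lengths and the axle length l are assumed sample values:

```python
s1, s2 = 0.9, 1.1   # arc lengths of the two wheels in m (illustrative)
l = 0.5             # distance between the wheels in m (assumed)

theta = (s2 - s1) / l     # turn angle: theta = (s2 - s1)/l
sc = (s1 + s2) / 2.0      # distance travelled by the center: sc = (s1 + s2)/2

print(theta, sc)  # roughly 0.4 rad and 1.0 m
```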
An error model for
Odometric position estimation
• For a differential-drive robot the position can be estimated
starting from a known position by integrating the movement
(summing the incremental travel distances).
• For a discrete system with a fixed sampling interval ∆t the
incremental travel distances are
∆x = ∆s cos(θ + ∆θ/2)
∆y = ∆s sin(θ + ∆θ/2)
∆θ = (∆s2 - ∆s1)/l
∆s = (∆s1 + ∆s2)/2
An error model for
Odometric position estimation
• Thus we get the updated position p’:
• By using the equations of ∆θ and ∆s we further obtain the basic
equation for odometric position update.
• Owing to integration errors of the uncertainties of p and the
motion errors during the incremental motion (∆s2, ∆s1) the
position error based on odometry integration grows with time.
p′ = [x′, y′, θ′]ᵀ = p + [∆s cos(θ + ∆θ/2), ∆s sin(θ + ∆θ/2), ∆θ]ᵀ
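The odometric position update can be sketched as a small step function; the wheel increments and wheelbase below are illustrative:

```python
import math

def update_pose(x, y, theta, ds1, ds2, l):
    """One odometry step for a differential-drive robot with wheelbase l."""
    dtheta = (ds2 - ds1) / l          # change in heading
    ds = (ds1 + ds2) / 2.0            # distance moved by the center
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Straight-line step: both wheels move equally, so the heading is unchanged.
x, y, theta = update_pose(0.0, 0.0, 0.0, 0.1, 0.1, 0.5)
print(x, y, theta)  # (0.1, 0.0, 0.0)
```

Because each step compounds the previous estimate, any noise in ∆s1 and ∆s2 accumulates, which is exactly why the odometric position error grows with time.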
Recursive State Estimation
● Introduction
● Probability
● Environment Interaction
● Probabilistic Generative Laws
● Belief Distributions
● The Bayes Filter Algorithm
● Markov Assumption
Introduction
• State estimation addresses the problem of estimating quantities
from sensor data that are not directly observable, but that can be
inferred.
• Determining what to do would be relatively easy if certain
quantities were known.
• Sensors carry only partial information about those quantities, and
their measurements are corrupted by noise.
Concepts in Probability
• Let X denote a random variable and x denote a specific value that
X might assume.
• A standard example of a random variable is a coin flip, where X
can take on the values heads or tails.
• If the space of all values that X can take on is discrete, we write
p(X = x)
to denote the probability that the random variable X has value x.
• Discrete probabilities sum to one
Σx p(X = x) = 1
Probability Density Function
• Continuous spaces are characterized by random variables that can
take on a continuum of values.
• A common density function is that of the one-dimensional normal
distribution with mean μ and variance σ².
● The PDF of a normal distribution is given by the following
Gaussian function:
● We will frequently abbreviate it as N(x; μ, σ²), which specifies
the random variable, its mean, and its variance.
p(x) = (2πσ²)^(−1/2) exp{ −(x − μ)² / (2σ²) }
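The density can be evaluated directly from the formula; a sketch using only the standard library:

```python
import math

def normal_pdf(x, mu, sigma2):
    """N(x; mu, sigma2): one-dimensional Gaussian density."""
    return (2.0 * math.pi * sigma2) ** -0.5 * math.exp(-0.5 * (x - mu) ** 2 / sigma2)

peak = normal_pdf(0.0, 0.0, 1.0)   # density of the standard normal at its mean
print(peak)  # about 0.3989
```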
Multivariate distributions
• Normal distributions over vectors are called multivariate.
• Multivariate normal distributions are characterized by density
functions of the following form:
• Here μ is the mean vector and Σ is a positive semidefinite,
symmetric matrix called the covariance matrix.
• A PDF always integrates to 1:
p(x) = det(2πΣ)^(−1/2) exp{ −½ (x − μ)ᵀ Σ⁻¹ (x − μ) }
∫ p(x) dx = 1
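For the two-dimensional case, the determinant and inverse of Σ can be written out by hand; this sketch assumes a 2 × 2 covariance matrix:

```python
import math

def mvn_pdf_2d(x, mu, Sigma):
    """Two-dimensional multivariate normal density N(x; mu, Sigma)."""
    (a, b), (_, c) = Sigma                 # Sigma is symmetric, so Sigma[1][0] == b
    det = a * c - b * b                    # det(Sigma)
    dx, dy = x[0] - mu[0], x[1] - mu[1]
    # Quadratic form (x - mu)^T Sigma^{-1} (x - mu), using the 2x2 inverse.
    quad = (c * dx * dx - 2.0 * b * dx * dy + a * dy * dy) / det
    # det(2*pi*Sigma)^(-1/2) = 1 / (2*pi*sqrt(det(Sigma))) for the 2x2 case.
    return math.exp(-0.5 * quad) / (2.0 * math.pi * math.sqrt(det))

p = mvn_pdf_2d((0.0, 0.0), (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0)))
print(p)  # 1/(2*pi), about 0.1592
```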
Joint Distribution
• The joint distribution of two random variables X and Y is given by
p(x, y) = p(X = x and Y = y)
• If X and Y are independent, we have
p(x, y) = p(x) p(y)
Conditional Probability
• Suppose we already know that Y ’s value is y, and we would like to
know the probability that X’s value is x conditioned on that fact.
• Such a probability will be denoted
p(x | y) = p(X = x | Y = y)
• If p(y) > 0, then the conditional probability is defined as
p(x | y) = p(x, y) / p(y)
• If X and Y are independent, we have
p(x | y) = p(x) p(y) / p(y) = p(x)
Theorem of Total Probability
• The theorem of total probability relates marginal and conditional
distributions:
p(x) = Σy p(x | y) p(y)    (discrete case)
p(x) = ∫ p(x | y) p(y) dy    (continuous case)
• If p(x | y) or p(y) is zero, we define the product p(x | y) p(y) to be
zero, regardless of the value of the remaining factor.
Bayes Rule
• Bayes rule relates a conditional of the type p(x | y) to its “inverse,”
p(y | x):
p(x | y) = p(y | x) p(x) / p(y)
• Useful for assessing diagnostic probability from causal probability:
P(cause | effect) = P(effect | cause) P(cause) / P(effect)
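A small numeric sketch of Bayes rule; the prior and sensor numbers are made-up illustrative values (they happen to match the door example later in the deck):

```python
# Bayes rule: p(x | y) = p(y | x) p(x) / p(y),
# with p(y) obtained from the theorem of total probability.
p_open = 0.5                 # prior p(x = open) (illustrative)
p_z_given_open = 0.6         # causal/sensor model p(y | x) (illustrative)
p_z_given_closed = 0.2

p_z = p_z_given_open * p_open + p_z_given_closed * (1.0 - p_open)   # total probability
p_open_given_z = p_z_given_open * p_open / p_z                      # Bayes rule

print(p_open_given_z)  # 0.75
```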
Sample Space
• In probability theory, the set of all possible worlds is called the
sample space.
• If we are about to roll two (distinguishable) dice, there are 36
possible worlds to consider: (1,1), (1,2), . . ., (6,6).
• A fully specified probability model associates a numerical
probability P(ω) with each possible world.
• The total probability of the set of possible worlds is 1
Events / Propositions
• We might be interested in the cases where the two dice add up to
11, the cases where doubles are rolled, and so on.
• In probability theory, these sets are called events and in AI, the sets
are always described by propositions.
• The probability associated with a proposition φ is the sum over the
possible worlds in which it holds:
P(φ) = Σω∈φ P(ω)
• What is the probability that the two dice add up to 11? The event
contains two worlds, (5,6) and (6,5), so P(Total = 11) = 2/36 = 1/18.
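Enumerating the 36 possible worlds answers such questions directly; a short sketch:

```python
from fractions import Fraction

# Sample space for two distinguishable dice: 36 equally likely worlds.
worlds = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

# The proposition "Total = 11" is the event {(5,6), (6,5)}.
event = [w for w in worlds if sum(w) == 11]
p_total_11 = Fraction(len(event), len(worlds))

print(p_total_11)  # 1/18
```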
Unconditional or
Prior Probabilities
• We refer to degrees of belief in propositions in the absence of any
other information.
• Most of the time, however, we have some information, usually
called evidence, that has already been revealed.
• For example, the first die may already be showing a 5 …
… and we are waiting for the other one!
• Now, we are interested in the conditional or posterior probability
of rolling doubles given that the first die is a 5.
Conditional or
Posterior Probabilities
• This probability is written as P(doubles | Die1 = 5).
• Mathematically, conditional probabilities are defined in terms of
unconditional probabilities as follows: for any propositions a and b,
we have
P(a | b) = P(a ∧ b) / P(b)
which holds whenever P(b) > 0.
• For example,
P(doubles | Die1 = 5) = P(doubles ∧ Die1 = 5) / P(Die1 = 5)
• The product rule: P(a ∧ b) = P(a | b) P(b)
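The same enumeration of worlds gives the conditional probability straight from the definition P(a | b) = P(a ∧ b)/P(b):

```python
from fractions import Fraction

worlds = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

# P(Die1 = 5): six worlds out of 36.
p_b = Fraction(sum(1 for w in worlds if w[0] == 5), len(worlds))
# P(doubles AND Die1 = 5): only the world (5, 5).
p_a_and_b = Fraction(sum(1 for w in worlds if w[0] == 5 and w[0] == w[1]), len(worlds))

p_doubles_given_5 = p_a_and_b / p_b   # definition of conditional probability
print(p_doubles_given_5)  # 1/6
```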
Domain
• Every random variable has a domain—the set of possible values it
can take on.
• A Boolean random variable has the domain {true, false}
• The domain of Total for two dice is the set {2, . . . , 12}
• Variables can have infinite domains, too—either discrete (like the
integers) or continuous (like the reals).
Environment Interaction
• The robot can influence the state of its environment through its
actuators, and it can gather information about the state through its
sensors.
• Perception is the process by which the robot uses its sensors to
obtain information about the state of its environment.
• Typically, sensor measurements arrive with some delay. Hence they
provide information about the state a few moments ago.
• Control actions change the state of the world by actively asserting
forces on the robot’s environment.
Environment Interaction
• Environment measurement data provide information about a
momentary state of the environment.
zt1:t2 = zt1, zt1+1, zt1+2, . . . , zt2
denotes the set of all measurements acquired from time t1 to time t2,
for t1 ≤ t2.
• Control data carry information about the change of state in the
environment.
• The variable ut will always correspond to the change of state in the
time interval (t − 1, t].
Environment Interaction
• Sequences of control data are denoted by
ut1:t2 = ut1, ut1+1, ut1+2, . . . , ut2
• Since the environment may change even if a robot does not execute
a specific control action, the fact that time passed by constitutes
control information.
• Therefore we assume that there is exactly one control data item per
time step t, and “do-nothing” counts as a legal action.
Probabilistic Generative Laws
• The evolution of state and measurements is governed by
probabilistic laws.
• The state xt is generated stochastically from the state xt−1.
• The emergence of state xt might be conditioned on all past states,
measurements, and controls:
p(xt | x0:t−1, z1:t−1, u1:t)
• We assume here that the robot executes a control action u1 first,
and then takes a measurement z1.
Probabilistic Generative Laws
• If the state x is complete then it is a sufficient summary of all that
happened in previous time steps.
• In particular, xt−1 is a sufficient statistic of all previous controls and
measurements up to this point in time, that is, u1:t−1 and z1:t−1.
• Of all the variables in the expression above, only the control ut
matters if we know the state xt−1:
p(xt | x0:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)
[State transition probability]
Probabilistic Generative Laws
• Again, if xt is complete, we have an important conditional
independence:
p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)
[Measurement probability]
• The state xt is sufficient to predict the (potentially noisy)
measurement zt.
• Knowledge of any other variable, such as past measurements,
controls, or even past states, is irrelevant if xt is complete.
Probabilistic Generative Laws
• The state transition probability and the measurement probability
together describe the dynamical stochastic system of the robot and
its environment.
• The state at time t is stochastically dependent on the state at
time t − 1 and the control ut.
• The measurement zt depends stochastically on the state at time t.
[Figure: dynamic Bayes network; each state xt depends on xt−1 and ut, and each measurement zt depends on xt]
Belief Distributions
• A belief reflects the robot’s internal knowledge about the state of
the environment.
• The robot’s state cannot be measured directly.
• For example, a robot’s pose might be xt = <14.12, 12.7, 45°> in some
global coordinate system, but it usually cannot know its pose, since
poses are not measurable directly (not even with GPS!).
• We therefore distinguish the true state from the robot’s internal
belief with regard to that state.
Belief Distributions
• Belief distributions are posterior probabilities over state variables
conditioned on the available data.
• We denote the belief over a state variable xt by
bel(xt) = p(xt | z1:t, u1:t)
• This posterior is the probability distribution over the state xt at time
t, conditioned on all past measurements z1:t and all past controls u1:t.
• Occasionally, it will prove useful to calculate a posterior before
incorporating zt, just after executing the control ut.
Belief Distributions
• Such a posterior will be denoted with an overline:
bel‾(xt) = p(xt | z1:t−1, u1:t)
• This probability distribution is often referred to as prediction in the
context of probabilistic filtering.
• bel‾(xt) predicts the state at time t based on the previous state
posterior, before incorporating the measurement at time t.
• Calculating bel(xt) from bel‾(xt) is called correction or the
measurement update.
Bayes Filter Algorithm
1: Algorithm Bayes_filter(bel(xt−1), ut, zt):
2:   for all xt do
3:     bel‾(xt) = ∫ p(xt | ut, xt−1) bel(xt−1) dxt−1
4:     bel(xt) = η p(zt | xt) bel‾(xt)
5:   endfor
6:   return bel(xt)
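For a finite state space, the integral in line 3 becomes a sum, and the algorithm transcribes almost line by line. The dictionary-based model format here is an assumption of this sketch, not from the slides; the sensor numbers in the usage come from the door example that follows:

```python
def bayes_filter(bel, u, z, p_trans, p_meas, states):
    """One Bayes filter step over a finite state space.

    p_trans[(x, u, x_prev)] = p(x_t | u_t, x_{t-1})   (assumed key layout)
    p_meas[(z, x)]          = p(z_t | x_t)
    """
    # Prediction (line 3): bel_bar(x) = sum over x_prev of p(x | u, x_prev) bel(x_prev)
    bel_bar = {x: sum(p_trans[(x, u, xp)] * bel[xp] for xp in states) for x in states}
    # Correction (line 4): bel(x) = eta * p(z | x) * bel_bar(x)
    unnorm = {x: p_meas[(z, x)] * bel_bar[x] for x in states}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * v for x, v in unnorm.items()}

# Two door states, a "do_nothing" action (identity transition), and the
# sensor model p(sense_open | x) from the door example.
states = ("is_open", "is_closed")
p_trans = {(x, "do_nothing", xp): 1.0 if x == xp else 0.0
           for x in states for xp in states}
p_meas = {("sense_open", "is_open"): 0.6, ("sense_open", "is_closed"): 0.2}

bel1 = bayes_filter({"is_open": 0.5, "is_closed": 0.5},
                    "do_nothing", "sense_open", p_trans, p_meas, states)
print(bel1)  # is_open ≈ 0.75, is_closed ≈ 0.25
```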
Example
• Let us assume that the door
can be in one of two possible
states, open or closed, and
that only the robot can change
the state of the door.
• Also, the robot does not know
the state of the door initially.
Example
• The robot assigns equal prior probability to the two possible door
states:
bel(X0 = is_open) = 0.5
bel(X0 = is_closed) = 0.5
• Sensor noise is characterized by the conditional probabilities:
p(Zt = sense_open | Xt = is_open) = 0.6
p(Zt = sense_closed | Xt = is_open) = 0.4
p(Zt = sense_open | Xt = is_closed) = 0.2
p(Zt = sense_closed | Xt = is_closed) = 0.8
https://people.eecs.berkeley.edu/~pabbeel/cs287-fa13/slides/bayes-filters.pdf
Example
• Robot uses its manipulator to push the door open.
• If the door is already open, it will remain open.
• If it is closed, the robot has a 0.8 chance that it will be open
afterwards:
p(Xt = is_open | Ut = push, Xt−1 = is_open) = 1
p(Xt = is_closed | Ut = push, Xt−1 = is_open) = 0
p(Xt = is_open | Ut = push, Xt−1 = is_closed) = 0.8
p(Xt = is_closed | Ut = push, Xt−1 = is_closed) = 0.2
http://stefanosnikolaidis.net/course-files/CS545/Lecture6.pdf
Example
• It can also choose not to use its manipulator, in which case the state
of the world does not change.
p(Xt = is_open | Ut = do_nothing, Xt−1 = is_open) = 1
p(Xt = is_closed | Ut = do_nothing, Xt−1 = is_open) = 0
p(Xt = is_open | Ut = do_nothing, Xt−1 = is_closed) = 0
p(Xt = is_closed | Ut = do_nothing, Xt−1 = is_closed) = 1
Example
• Suppose at time t = 1, the robot takes no control action but it senses
an open door.
• Since the state space is finite, the integral in the prediction step
reduces to a finite sum:
bel‾(x1) = ∫ p(x1 | u1, x0) bel(x0) dx0
= Σx0 p(x1 | u1, x0) bel(x0)
= p(x1 | U1 = do_nothing, X0 = is_open) bel(X0 = is_open)
+ p(x1 | U1 = do_nothing, X0 = is_closed) bel(X0 = is_closed)
Example
• For the hypothesis X1 = is_open, we obtain
bel‾(X1 = is_open) = 1 · 0.5 + 0 · 0.5 = 0.5
• Likewise, for X1 = is_closed we get
bel‾(X1 = is_closed) = 0 · 0.5 + 1 · 0.5 = 0.5
• The fact that the prediction bel‾(x1) equals our prior belief bel(x0)
should not surprise us, as the action do_nothing does not affect the
state of the world; nor does the world change over time by itself in
our example.
Example
• Incorporating the measurement, however, changes the belief.
bel(x1) = η p(Z1 = sense_open | x1) bel‾(x1)
• So,
bel(X1 = is_open)
= η p(Z1 = sense_open | X1 = is_open) bel‾(X1 = is_open)
= η · 0.6 · 0.5 = η · 0.3
• and
bel(X1 = is_closed)
= η p(Z1 = sense_open | X1 = is_closed) bel‾(X1 = is_closed)
= η · 0.2 · 0.5 = η · 0.1
Example
• The normalizer η is now easily calculated:
η = (0.3 + 0.1)⁻¹ = 2.5
• Hence, we have
bel(X1 = is_open) = 0.75
bel(X1 = is_closed) = 0.25
• This calculation is now easily iterated for the next time step.
• With u2 = push and z2 = sense_open we get ...
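The whole calculation, including the second step with u2 = push and z2 = sense_open, can be reproduced in a few lines. This is a sketch; the state and action names are chosen to mirror the slides, and the transition/measurement tables hold exactly the probabilities given above:

```python
OPEN, CLOSED = "is_open", "is_closed"
STATES = (OPEN, CLOSED)

# Measurement model p(z | x) and transition model p(x | u, x_prev) from the slides.
p_meas = {("sense_open", OPEN): 0.6, ("sense_closed", OPEN): 0.4,
          ("sense_open", CLOSED): 0.2, ("sense_closed", CLOSED): 0.8}
# Key convention (assumed): p_trans[(u, x, x_prev)] = p(x | u, x_prev)
p_trans = {("push", OPEN, OPEN): 1.0, ("push", CLOSED, OPEN): 0.0,
           ("push", OPEN, CLOSED): 0.8, ("push", CLOSED, CLOSED): 0.2,
           ("do_nothing", OPEN, OPEN): 1.0, ("do_nothing", CLOSED, OPEN): 0.0,
           ("do_nothing", OPEN, CLOSED): 0.0, ("do_nothing", CLOSED, CLOSED): 1.0}

def step(bel, u, z):
    """One Bayes filter step: prediction with u, then correction with z."""
    bel_bar = {x: sum(p_trans[(u, x, xp)] * bel[xp] for xp in STATES) for x in STATES}
    unnorm = {x: p_meas[(z, x)] * bel_bar[x] for x in STATES}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * v for x, v in unnorm.items()}

bel = {OPEN: 0.5, CLOSED: 0.5}
bel1 = step(bel, "do_nothing", "sense_open")
bel2 = step(bel1, "push", "sense_open")
print(bel1[OPEN])  # 0.75, matching the slides
print(bel2[OPEN])  # about 0.983: pushing and sensing open again makes "open" near-certain
```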
More Related Content

Similar to Recursive State Estimation AI for Robotics.pdf

Univariate Financial Time Series Analysis
Univariate Financial Time Series AnalysisUnivariate Financial Time Series Analysis
Univariate Financial Time Series AnalysisAnissa ATMANI
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?Christian Robert
 
Foundation of KL Divergence
Foundation of KL DivergenceFoundation of KL Divergence
Foundation of KL DivergenceNatan Katz
 
Problem_Session_Notes
Problem_Session_NotesProblem_Session_Notes
Problem_Session_NotesLu Mao
 
Ph 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICSPh 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICSChandan Singh
 
Unbiased MCMC with couplings
Unbiased MCMC with couplingsUnbiased MCMC with couplings
Unbiased MCMC with couplingsPierre Jacob
 
Econometrics 2.pptx
Econometrics 2.pptxEconometrics 2.pptx
Econometrics 2.pptxfuad80
 
Gibbs flow transport for Bayesian inference
Gibbs flow transport for Bayesian inferenceGibbs flow transport for Bayesian inference
Gibbs flow transport for Bayesian inferenceJeremyHeng10
 
Looking Inside Mechanistic Models of Carcinogenesis
Looking Inside Mechanistic Models of CarcinogenesisLooking Inside Mechanistic Models of Carcinogenesis
Looking Inside Mechanistic Models of CarcinogenesisSascha Zöllner
 
Bernard schutz gr
Bernard schutz grBernard schutz gr
Bernard schutz grjcklp1
 
Schrodinger equation in QM Reminders.ppt
Schrodinger equation in QM Reminders.pptSchrodinger equation in QM Reminders.ppt
Schrodinger equation in QM Reminders.pptRakeshPatil2528
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Fabian Pedregosa
 
lec-7_phase_plane_analysis.pptx
lec-7_phase_plane_analysis.pptxlec-7_phase_plane_analysis.pptx
lec-7_phase_plane_analysis.pptxdatamboli
 

Similar to Recursive State Estimation AI for Robotics.pdf (20)

Queueing theory
Queueing theoryQueueing theory
Queueing theory
 
Ols
OlsOls
Ols
 
Univariate Financial Time Series Analysis
Univariate Financial Time Series AnalysisUnivariate Financial Time Series Analysis
Univariate Financial Time Series Analysis
 
PTSP PPT.pdf
PTSP PPT.pdfPTSP PPT.pdf
PTSP PPT.pdf
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
 
Foundation of KL Divergence
Foundation of KL DivergenceFoundation of KL Divergence
Foundation of KL Divergence
 
Problem_Session_Notes
Problem_Session_NotesProblem_Session_Notes
Problem_Session_Notes
 
Ph 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICSPh 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICS
 
Unbiased MCMC with couplings
Unbiased MCMC with couplingsUnbiased MCMC with couplings
Unbiased MCMC with couplings
 
Econometrics 2.pptx
Econometrics 2.pptxEconometrics 2.pptx
Econometrics 2.pptx
 
Gibbs flow transport for Bayesian inference
Gibbs flow transport for Bayesian inferenceGibbs flow transport for Bayesian inference
Gibbs flow transport for Bayesian inference
 
Looking Inside Mechanistic Models of Carcinogenesis
Looking Inside Mechanistic Models of CarcinogenesisLooking Inside Mechanistic Models of Carcinogenesis
Looking Inside Mechanistic Models of Carcinogenesis
 
Bernard schutz gr
Bernard schutz grBernard schutz gr
Bernard schutz gr
 
Cdc18 dg lee
Cdc18 dg leeCdc18 dg lee
Cdc18 dg lee
 
Schrodinger equation in QM Reminders.ppt
Schrodinger equation in QM Reminders.pptSchrodinger equation in QM Reminders.ppt
Schrodinger equation in QM Reminders.ppt
 
Unit II PPT.pptx
Unit II PPT.pptxUnit II PPT.pptx
Unit II PPT.pptx
 
Unit 5: All
Unit 5: AllUnit 5: All
Unit 5: All
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
 
lec-7_phase_plane_analysis.pptx
lec-7_phase_plane_analysis.pptxlec-7_phase_plane_analysis.pptx
lec-7_phase_plane_analysis.pptx
 

Recently uploaded

IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).pptssuser5c9d4b1
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxupamatechverse
 
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...Call Girls in Nagpur High Profile
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
Analog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAnalog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAbhinavSharma374939
 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZTE
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130Suhani Kapoor
 

Recently uploaded (20)

Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptx
 
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
High Profile Call Girls Nashik Megha 7001305949 Independent Escort Service Na...
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
Analog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAnalog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog Converter
 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 

Recursive State Estimation AI for Robotics.pdf

  • 1. AI for Robotics Recursive State Estimation AI for Robotics Recursive State Estimation DR. ABHISHEK SARKAR MECHANICAL ENGG. BITS-PILANI, HYDERABAD
  • 2. Linear Odometry • When a relation between motor power and velocity v has been determined, the robot can compute the distance moved by s = vt. • If the robot starts at the origin (0, 0) and moves in a straight line at angle θ with velocity v for time t, its new position (x, y) is x = vt cos θ y = vt sin θ 2 YI XI θ x y
  • 3. Odometry with Turns • We can measure dl and dr, the distances moved by the two wheels with the encoder and wheel diameter. • After t seconds the wheel has moved si = π d ωit, where i = 1,2 • The distances s1 and s2 are obtained from the rotations of the wheels θr1 = s1, θr2 = s2 subtracting and finding the value of θ θ = (s2 − s1)/(r2 − r1) = (s2 − s1)/l 3 θ s1 s2 ri
  • 4. Odometry with Turns • The center is halfway between the wheels rc = (r1 + r2)/2 • So the distance travelled is sc = θrc = θ(r1 + r2)/2 = (s1 + s2)/2 4 θ rc sc
  • 5. An error model for Odometric position estimation • For a differential-drive robot the position can be estimated starting from a known position by integrating the movement (summing the incremental travel distances). • For a discrete system with a fixed sampling interval ∆t the incremental travel distances are ∆x = ∆s cos(θ + ∆θ/2) ∆y = ∆s sin(θ + ∆θ/2) ∆θ = (∆s2 - ∆s1)/l ∆s = (∆s1 + ∆s2)/2 5
  • 6. An error model for Odometric position estimation • Thus we get the updated position p’: • By using the equations of ∆θ and ∆s we further obtain the basic equation for odometric position update. • Owing to integration errors of the uncertainties of p and the motion errors during the incremental motion (∆s2, ∆s1) the position error based on odometry integration grows with time. 6 p'= [ x ' y' θ' ]=p+ [ Δ scos(θ+∆θ/2) Δ ssin(θ+∆θ/2) ∆θ ]
  • 7. Recursive State Estimation • as ● Introduction ● Probability ● Environment Interaction ● Probabilistic Generative Laws ● Belief Distributions ● The Bayes Filter Algorithm ● Markov Assumption 7
  • 8. Introduction • State estimation addresses the problem of estimating quantities from sensor data that are not directly observable, but that can be inferred. • Determining what to do is relatively easy if one only knew certain quantities. • Sensors carry only partial information about those quantities, and their measurements are corrupted by noise. 8
  • 9. Concepts in Probability • Let X denote a random variable and x denote a specific value that X might assume. • A standard example of a random variable is a coin flip, where X can take on the values heads or tails. • If the space of all values that X can take on is discrete p(X = x) to denote the probability that the random variable X has value x. • Discrete probabilities sum to one 9 ∑ x p( X=x)=1
  • 10. Probability Density Function • Continuous spaces are characterized by random variables that can take on a continuum of values. • A common density function is that of the one-dimensional normal distribution with mean μ and variance σ2. ● The PDF of a normal distribution is given by the following Gaussian function: ● We will frequently abbreviate them as N (x; μ, σ2 ), which specifies the random variable, its mean, and its variance. 10 p(x)=(2πσ2 )−1/2 exp{− 1 2 (x−μ)2 σ2 }
  • 11. Multivariate distributions • Normal distributions over vectors are called multivariate. • Multivariate normal distributions are characterized by density functions of the following form: • Here μ is the mean vector, Σ a positive semidefinite and symmetric matrix called the covariance matrix. • A PDF always integrates to 1 11 p(x)=det(2π Σ)−1/2 exp{− 1 2 (x−μ)T Σ−1 (x−μ)} ∫p(x)dx=1
  • 12. Joint Distribution • The joint distribution of two random variables X and Y is given by • If X and Y are independent, we have p(x, y) = p(x) p(y) 12
  • 13. Conditional Probability • Suppose we already know that Y ’s value is y, and we would like to know the probability that X’s value is x conditioned on that fact. • Such a probability will be denoted • If p(y) > 0, then the conditional probability is defined as • If X and Y are independent, 13
  • 14. Theorem of Total Probability • In the discrete case, p(x) = Σy p(x | y) p(y); in the continuous case, p(x) = ∫ p(x | y) p(y) dy • If p(x | y) or p(y) is zero, we define the product p(x | y) p(y) to be zero, regardless of the value of the remaining factor. 14
  • 15. Bayes Rule • Bayes rule relates a conditional of the type p(x | y) to its “inverse,” p(y | x): p(x | y) = p(y | x) p(x) / p(y) • Useful for assessing diagnostic probability from causal probability: p(cause | effect) = p(effect | cause) p(cause) / p(effect) 15
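To illustrate the diagnostic-from-causal use, here is a sketch with made-up numbers (a rare condition and an imperfect test; none of these figures come from the slides):

```python
# Hypothetical numbers, for illustration only.
p_cause = 0.01               # prior P(cause)
p_eff_given_cause = 0.9      # causal direction: P(effect | cause)
p_eff_given_no_cause = 0.1   # false-positive rate: P(effect | not cause)

# Total probability supplies the evidence term P(effect).
p_effect = p_eff_given_cause * p_cause + p_eff_given_no_cause * (1 - p_cause)

# Bayes rule: P(cause | effect) = P(effect | cause) P(cause) / P(effect)
p_cause_given_effect = p_eff_given_cause * p_cause / p_effect
print(p_cause_given_effect)  # small despite the strong test, because the prior is small
```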
  • 16. Sample Space • In probability theory, the set of all possible worlds is called the sample space. • If we are about to roll two (distinguishable) dice, there are 36 possible worlds to consider: (1,1), (1,2), . . ., (6,6). • A fully specified probability model associates a numerical probability P(ω) with each possible world. • The total probability of the set of possible worlds is 1 16
  • 17. Events / Propositions • We might be interested in the cases where the two dice add up to 11, the cases where doubles are rolled, and so on. • In probability theory, these sets are called events; in AI, the sets are always described by propositions. • The probability associated with a proposition φ is the sum of the probabilities of the worlds in which it holds: P(φ) = Σω∈φ P(ω) • What is the probability that the two dice add up to 11? 17
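The dice question can be answered by enumerating the 36 equally likely worlds and summing over those where the proposition holds, a direct sketch of the definition of P(φ):

```python
from itertools import product

# All 36 equally likely possible worlds for two distinguishable dice.
worlds = list(product(range(1, 7), repeat=2))

# P(phi) = sum of P(omega) over the worlds omega where phi holds.
p_sum_11 = sum(1 / 36 for d1, d2 in worlds if d1 + d2 == 11)
print(p_sum_11)  # 2/36: only the worlds (5,6) and (6,5) qualify
```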
  • 18. Unconditional or Prior Probabilities • We refer to degrees of belief in propositions in the absence of any other information. • Most of the time, however, we have some information, usually called evidence, that has already been revealed. • For example, the first die may already be showing a 5, and we are waiting for the other one! • Now, we are interested in the conditional or posterior probability of rolling doubles given that the first die is a 5. 18
  • 19. Conditional or Posterior Probabilities • This probability is written as P(doubles | Die1 = 5) • Mathematically, conditional probabilities are defined in terms of unconditional probabilities as follows: for any propositions a and b, we have P(a | b) = P(a ∧ b) / P(b) which holds whenever P(b) > 0. • For example, P(doubles | Die1 = 5) = P(doubles ∧ Die1 = 5) / P(Die1 = 5) • The product rule: P(a ∧ b) = P(a | b) P(b) 19
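Applying the definition to the dice example can be sketched by counting worlds, dividing the probability of the conjunction by the probability of the evidence:

```python
from itertools import product

worlds = list(product(range(1, 7), repeat=2))  # 36 equally likely worlds

p_b = sum(1 / 36 for d1, d2 in worlds if d1 == 5)                 # P(Die1 = 5)
p_ab = sum(1 / 36 for d1, d2 in worlds if d1 == 5 and d1 == d2)   # P(doubles AND Die1 = 5)

# P(doubles | Die1 = 5) = P(doubles AND Die1 = 5) / P(Die1 = 5)
p_doubles_given_5 = p_ab / p_b
print(p_doubles_given_5)  # 1/6: only the world (5,5) among the six with Die1 = 5
```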
  • 20. Domain • Every random variable has a domain—the set of possible values it can take on. • A Boolean random variable has the domain {true, false} • The domain of Total for two dice is the set {2, . . . , 12} • Variables can have infinite domains, too—either discrete (like the integers) or continuous (like the reals). 20
  • 21. Environment Interaction • The robot can influence the state of its environment through its actuators, and it can gather information about the state through its sensors. • Perception is the process by which the robot uses its sensors to obtain information about the state of its environment. • Typically, sensor measurements arrive with some delay. Hence they provide information about the state a few moments ago. • Control actions change the state of the world by actively asserting forces on the robot’s environment. 26
  • 22. Environment Interaction • Environment measurement data provides information about a momentary state of the environment. zt1:t2 = zt1 , zt1+1 , zt1+2 , . . . , zt2 denotes the set of all measurements acquired from time t1 to time t2 , for t1 ≤ t2 . • Control data carry information about the change of state in the environment. • The variable ut will always correspond to the change of state in the time interval (t − 1; t]. 27
  • 23. Environment Interaction • Sequences of control data are denoted by ut1:t2 = ut1 , ut1+1 , ut1+2 , . . . , ut2 • Since the environment may change even if a robot does not execute a specific control action, the mere fact that time passed constitutes control information. • Therefore we assume that there is exactly one control data item per time step t, and “do-nothing” counts as a legal action. 28
  • 24. Probabilistic Generative Laws • The evolution of state and measurements is governed by probabilistic laws. • The state xt is generated stochastically from the state xt−1 . • The emergence of state xt might be conditioned on all past states, measurements, and controls p(xt | x0:t−1 , z1:t−1 , u1:t ). • We assume here that the robot executes a control action u1 first, and then takes a measurement z1 . 29
  • 25. Probabilistic Generative Laws • If the state x is complete then it is a sufficient summary of all that happened in previous time steps. • In particular, xt−1 is a sufficient statistic of all previous controls and measurements up to this point in time, that is, u1:t−1 and z1:t−1 . • From all the variables in the expression above, only the control ut matters if we know the state xt−1 p(xt | x0:t−1 , z1:t−1 , u1:t ) = p(xt | xt−1 , ut ) • [State transition probability] 30
  • 26. Probabilistic Generative Laws • Again, if xt is complete, we have an important conditional independence: p(zt | x0:t , z1:t−1 , u1:t ) = p(zt | xt ) [Measurement probability] • The state xt is sufficient to predict the (potentially noisy) measurement zt . • Knowledge of any other variable, such as past measurements, controls, or even past states, is irrelevant if xt is complete. 31
  • 27. Probabilistic Generative Laws • The state transition probability and the measurement probability together describe the dynamical stochastic system of the robot and its environment. • The state at time t is stochastically dependent on the state at time t − 1 and the control ut . • The measurement zt depends stochastically on the state at time t. 32 [Figure: dynamic Bayes network linking states xt−1, xt, xt+1 via controls ut−1, ut, ut+1, with each state xt emitting a measurement zt]
  • 28. Belief Distributions • A belief reflects the robot’s internal knowledge about the state of the environment. • A robot’s state cannot be measured directly. • For example, a robot’s pose might be xt = <14.12, 12.7, 45°> in some global coordinate system, but it usually cannot know its pose, since poses are not measurable directly (not even with GPS!). • We therefore distinguish the true state from the robot’s internal belief with regard to that state. 33
  • 29. Belief Distributions • Belief distributions are posterior probabilities over state variables conditioned on the available data. • We denote belief over a state variable xt by bel(xt ) = p(xt | z1:t , u1:t ) • This posterior is the probability distribution over the state xt at time t, conditioned on all past measurements z1:t and all past controls u1:t . • Occasionally, it will prove useful to calculate a posterior before incorporating zt , just after executing the control ut . 34
  • 30. Belief Distributions • Such a posterior will be denoted as follows: bel‾(xt ) = p(xt | z1:t−1 , u1:t ) • This probability distribution is often referred to as the prediction in the context of probabilistic filtering. • bel‾(xt ) predicts the state at time t based on the previous state posterior, before incorporating the measurement at time t. • Calculating bel(xt ) from bel‾(xt ) is called correction or the measurement update. 35
  • 31. Bayes Filter Algorithm 1: Algorithm Bayes_filter(bel(xt−1 ), ut , zt ): 2: for all xt do 3: bel‾(xt ) = ∫ p(xt | ut , xt−1 ) bel(xt−1 ) dxt−1 4: bel(xt ) = η p(zt | xt ) bel‾(xt ) 5: endfor 6: return bel(xt ) 36
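For a finite state space the integral on line 3 becomes a sum, and the algorithm can be sketched in a few lines of Python. The function and variable names are mine; the usage below plugs in the door-example numbers from the following slides (a sketch, not a definitive implementation):

```python
def bayes_filter(bel, u, z, p_trans, p_meas, states):
    """One step of the Bayes filter over a finite state space."""
    # Prediction (line 3): bel_bar(x) = sum over x' of p(x | u, x') bel(x')
    bel_bar = {x: sum(p_trans[(x, u, xp)] * bel[xp] for xp in states)
               for x in states}
    # Correction (line 4): bel(x) = eta * p(z | x) * bel_bar(x)
    unnorm = {x: p_meas[(z, x)] * bel_bar[x] for x in states}
    eta = 1.0 / sum(unnorm.values())
    return {x: eta * p for x, p in unnorm.items()}

# Door example: uniform prior, action do_nothing, measurement sense_open.
states = ["is_open", "is_closed"]
p_trans = {("is_open", "do_nothing", "is_open"): 1.0,
           ("is_closed", "do_nothing", "is_open"): 0.0,
           ("is_open", "do_nothing", "is_closed"): 0.0,
           ("is_closed", "do_nothing", "is_closed"): 1.0}
p_meas = {("sense_open", "is_open"): 0.6,
          ("sense_open", "is_closed"): 0.2}
bel1 = bayes_filter({"is_open": 0.5, "is_closed": 0.5},
                    "do_nothing", "sense_open", p_trans, p_meas, states)
print(bel1)  # is_open rises to 0.75, is_closed drops to 0.25
```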
  • 32. Example • Let us assume that the door can be in one of two possible states, open or closed, and that only the robot can change the state of the door. • Also, the robot does not know the state of the door initially. 37
  • 33. Example • Robot assigns equal prior probability to the two possible door states: bel(X0 = open) = 0.5 bel(X0 = closed) = 0.5 • Sensor noise is characterized by the conditional probabilities: p(Zt = sense_open | Xt = is_open) = 0.6 p(Zt = sense_closed | Xt = is_open) = 0.4 p(Zt = sense_open | Xt = is_closed) = 0.2 p(Zt = sense_closed | Xt = is_closed) = 0.8 38 https://people.eecs.berkeley.edu/~pabbeel/cs287-fa13/slides/bayes-filters.pdf
  • 34. Example • Robot uses its manipulator to push the door open. • If the door is already open, it will remain open. • If it is closed, there is a 0.8 chance that it will be open afterwards: p(Xt = is_open | Ut = push, Xt−1 = is_open) = 1 p(Xt = is_closed | Ut = push, Xt−1 = is_open) = 0 p(Xt = is_open | Ut = push, Xt−1 = is_closed) = 0.8 p(Xt = is_closed | Ut = push, Xt−1 = is_closed) = 0.2 39 http://stefanosnikolaidis.net/course-files/CS545/Lecture6.pdf
  • 35. Example • It can also choose not to use its manipulator, in which case the state of the world does not change. p(Xt = is_open | Ut = do_nothing, Xt−1 = is_open) = 1 p(Xt = is_closed | Ut = do_nothing, Xt−1 = is_open) = 0 p(Xt = is_open | Ut = do_nothing, Xt−1 = is_closed) = 0 p(Xt = is_closed | Ut = do_nothing, Xt−1 = is_closed) = 1 40
  • 36. Example • Suppose at time t = 1, the robot takes no control action but senses an open door. • Since the state space is finite, the integral becomes a sum: bel‾(x1 ) = ∫ p(x1 | u1 , x0 ) bel(x0 ) dx0 = Σx0 p(x1 | u1 , x0 ) bel(x0 ) = p(x1 | U1 = do_nothing, X0 = is_open) bel(X0 = is_open) + p(x1 | U1 = do_nothing, X0 = is_closed) bel(X0 = is_closed) 41
  • 37. Example • For the hypothesis X1 = is_open, we obtain bel‾(X1 = is_open) = 1 × 0.5 + 0 × 0.5 = 0.5 • Likewise, for X1 = is_closed we get bel‾(X1 = is_closed) = 0 × 0.5 + 1 × 0.5 = 0.5 • The fact that the prediction bel‾(x1 ) equals our prior belief bel(x0 ) should not be surprising, as the action do_nothing does not affect the state of the world; neither does the world change over time by itself in our example. 42
  • 38. Example • Incorporating the measurement, however, changes the belief: bel(x1 ) = η p(Z1 = sense_open | x1 ) bel‾(x1 ) • So, bel(X1 = is_open) = η p(Z1 = sense_open | X1 = is_open) bel‾(X1 = is_open) = η · 0.6 · 0.5 = η · 0.3 • and bel(X1 = is_closed) = η p(Z1 = sense_open | X1 = is_closed) bel‾(X1 = is_closed) = η · 0.2 · 0.5 = η · 0.1 43
  • 39. Example • The normalizer η is now easily calculated: η = (0.3 + 0.1)⁻¹ = 2.5 • Hence, we have bel(X1 = is_open) = 0.75 bel(X1 = is_closed) = 0.25 • This calculation is now easily iterated for the next time step. • With u2 = push and z2 = sense_open we get ... 44
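The t = 2 iteration can be sketched with a few lines of arithmetic, plugging the push-action and sensor models from the preceding slides into the prediction and correction equations:

```python
# Posterior after t = 1 (from the slides above).
bel1_open, bel1_closed = 0.75, 0.25

# Prediction with u2 = push: p(open|push,open)=1, p(open|push,closed)=0.8.
bel_bar_open = 1.0 * bel1_open + 0.8 * bel1_closed    # 0.95
bel_bar_closed = 0.0 * bel1_open + 0.2 * bel1_closed  # 0.05

# Correction with z2 = sense_open: p(sense_open|open)=0.6, p(sense_open|closed)=0.2.
unnorm_open = 0.6 * bel_bar_open      # 0.57
unnorm_closed = 0.2 * bel_bar_closed  # 0.01
eta = 1.0 / (unnorm_open + unnorm_closed)

bel2_open = eta * unnorm_open
print(bel2_open)  # roughly 0.983: the robot is now quite confident the door is open
```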