Robots in Uncertain & Noisy
World
Ghulam Mustafa
5/13/2015
Deterministic
Mechanics
Rigid Body
Kinematics
Newton’s Law
Control
Robots in Uncertain & Noisy World
Stochastic
Dynamics
Estimation
Filtering
Prediction
Inference
dx/dt = f(x,u,t) + n(t)
Forward – Backward Algorithm
Featherstone | Kalman
Tonight's Recurring Themes
TranslationRotation
[Algorithms for computing NewtonianEulerian
kinematics]
ForwardBackward
[Algorithms for computing kinematicsdynamics
and state predictionestimation]
F=maAx=b
[Algorithms for predicting dynamics and estimating
states from measurements]
Part 1
Deterministic Dynamics & Control
(Articulated Rigid Bodies)
Some Definitions
Spatial Motion of Rigid Bodies
Denavit – Hartenberg Representation
Sheth - Uicker Representation
Computations and Algorithms
Kinematics of Some Wheeled Robots
Time Derivatives in ICC
The Wheel Jacobian
Computations and Algorithms
Velocity – Torque Duality
Newton—Euler Recursive Algorithm
Robot Control
Content of this Talk : Part 1 – Articulated Rigid Bodies
“In theory, there is no difference between
theory and practice. In practice there is.”
Part 2
Stochastic Dynamics
(Filtering, Estimation and Prediction)
In the Beginning - Geometry of Ax=b
Minimizing Error - (Vanilla LS)
Weighted - (Vanilla w/ Cream LS)
Recursive - (Vanilla w /Cream & Nuts on top LS)
… and Beyond (The Kalman Filter)
Rules of Probability (Bayes’ POV)
Graphical Models - Deconstructing Bayes
Noisy Measurements and Estimation
Repeated Noisy Measurements - Recursive Bayes
Noisy Measurements and Estimation
Hidden Markov Model (HMM)
Forward—Backward Algorithm
Content of this Talk – Part 2 : Filtering, Estimation and Prediction
It's tough to make predictions, especially
about the future.
Yogi Berra
Gauss-Markov-Kalman [and sometimes Bayes]
Carl Friedrich Gauss, 1777-1855
Andrey Markov, 1856-1922
Rudolf Kalman, b. 1930
John Flaig
Thomas Bayes, 1701-1761
Motivation
Deterministic models provide an adequate description of
dynamics and control – why complicate things with
stochasticity?
Incomplete Deterministic Models: Models are based on
assumptions and hence are approximations – ignoring
higher modes does not make them go away.
Extraneous Disturbances: Systems are driven not just by
deterministic control inputs but also by uncontrollable
environmental factors – wind gusts, treacherous terrain.
Incomplete/Noisy Measurements: Not everything is
amenable to measurement (it is easier to measure position
than velocity), and measurement errors are unavoidable.
Estimate the state xₖ from noisy measurements zₖ.
How to …
Model Development: Develop models that account for
uncertainties and are practical to implement.
Optimal Estimation: How to estimate model behavior from
incomplete and noisy sensor data – fusing data from multiple
sources, recursively, in real time.
Optimal Control: Given an uncertain system description and
incomplete, noisy, corrupted data, how to optimally control a
system for desirable performance.
Performance Evaluation: How to evaluate the performance
capabilities of estimation and control schemes, both before and
after they are built.
Problem Formulation
System Dynamics:
Robot EOM:  M(q)·q̈ + V(q, q̇) + G(q) = τ
State-space form:  ẋ(t) = f(x(t), u(t), w(t), t)
Observation:  z(t) = h(x(t), u(t), v(t), t)
Linear Model:
ẋ(t) = A·x(t) + B·u(t) + w(t)
z(t) = H·x(t) + v(t)
Discrete Time:
xₖ₊₁ = A·xₖ + B·uₖ + wₖ   (xₖ: state @ tₖ)
zₖ = H·xₖ + vₖ            (zₖ: observation @ tₖ)
Estimate the state xₖ from noisy measurements zₖ.
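The discrete-time linear model above is the workhorse for everything that follows. Here is a minimal simulation sketch (not from the slides; the 1-D position/velocity matrices are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
B = np.array([[0.0], [dt]])             # acceleration enters through B
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 0.01 * np.eye(2)                    # system noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

x = np.array([0.0, 1.0])                # initial state [position, velocity]
for k in range(50):
    u = np.array([0.1])                            # control input u_k
    w = rng.multivariate_normal(np.zeros(2), Q)    # system noise w_k
    v = rng.multivariate_normal(np.zeros(1), R)    # measurement noise v_k
    x = A @ x + B @ u + w                          # x_{k+1} = A x_k + B u_k + w_k
    z = H @ x + v                                  # z_k = H x_k + v_k
```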
Intro to Least Squares – Geometry of Ax=b
Ax = b, written out for the 2×2 case:
[a₁₁ a₁₂; a₂₁ a₂₂]·[x₁; x₂] = [b₁; b₂],  i.e.  x₁·[a₁₁; a₂₁] + x₂·[a₁₂; a₂₂] = [b₁; b₂]
— Ax is a linear combination of the columns of A.
A solution exists if b lies in S(A), the column space of A
[the space spanned by the columns of A].
What if b is NOT in S(A)? Project b onto S(A) and call that
the best estimate:
A·x̂ = P·b,  with error e = b − A·x̂ → minimize ‖e‖.
Minimizing the Error – Vanilla LS
Consider Ax = b (assume b to be measurements and x the state).
The best solution is the one that minimizes the norm square of the error:
e = b − A·x̂
‖e‖² = (b − A·x̂)ᵀ·(b − A·x̂)
d‖e‖²/dx̂ = −2·Aᵀ·b + 2·AᵀA·x̂ = 0
x̂ = (AᵀA)⁻¹·Aᵀ·b
Recall, for a non-square A, the normal equations AᵀA·x̂ = Aᵀ·b give
x̂ = (AᵀA)⁻¹·Aᵀ·b, and A·x̂ = A·(AᵀA)⁻¹·Aᵀ·b = P·b defines the
projection P (on S(A)).
(Assume b to be measurements and x the state)
WbWAWAWAx
WbWAxWAWA
ewewewWe
TTTT
TT
1
2
33
2
22
2
11
2
)(ˆ
)(ˆ)(
...




bAAAxbAAxA TTTT 1
)( 

Recall when equally reliable W=I
 dxxxpeE )(][
Mean Error
 dxxpxeE )(][ 222

Variance
 dxeepeeeeEcv jijiji ),(][
C0-Variance
1
V 1
V
][
)(ˆ
1
111
T
T
P
T
eeEP
bVAAVAx





Weighted Residual – Vanilla w/ Cream LS
If the measurements are not equally reliable
WbWAx  (W is diagonal
of weights wi)
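A weighted least-squares sketch along the same lines, with V a diagonal measurement covariance (values again illustrative):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.1, 1.9, 3.2])
V = np.diag([0.1, 0.1, 1.0])    # third measurement is far less reliable

Vinv = np.linalg.inv(V)
x_hat = np.linalg.solve(A.T @ Vinv @ A, A.T @ Vinv @ b)   # (A^T V^-1 A)^-1 A^T V^-1 b
P = np.linalg.inv(A.T @ Vinv @ A)                         # estimate covariance E[e e^T]
```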
Recursive LS – Vanilla w/ Cream & Nuts on-the-top LS
Now imagine the data coming in a stream:
b₀ arrives → solve A₀·x = b₀ … then b₁ arrives → A₁·x = b₁ …
If more data arrives, can the best estimate for the combined data be
computed from x₀ and b₁, without restarting the calculation from b₀?
Digression – Running Average
Let's compute the average of n numbers:  A(n) = (x₁ + x₂ + … + xₙ)/n.
One additional data point arrives:  A(n+1) = (x₁ + x₂ + … + xₙ + xₙ₊₁)/(n+1).
(n+1)·A(n+1) = x₁ + … + xₙ + xₙ₊₁ = n·A(n) + xₙ₊₁
Re-arrange and simplify:
A(n+1) = A(n) + (1/(n+1))·(xₙ₊₁ − A(n))
A(n+1) = A(n) + K·(xₙ₊₁ − A(n))   — Running Average
The new estimate A(n+1) is the old estimate A(n), corrected by a gain K
times the innovation xₙ₊₁ − A(n).
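The running average in code — a sketch of the same update applied one sample at a time:

```python
def running_average(stream):
    avg = 0.0
    for n, x in enumerate(stream):       # n = 0, 1, 2, ...
        avg += (x - avg) / (n + 1)       # A(n+1) = A(n) + K (x_{n+1} - A(n))
    return avg

print(running_average([2.0, 4.0, 6.0]))  # 4.0, identical to the batch mean
```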
Recursive LS – Vanilla w/ Cream & Nuts on-the-top LS …
The stream continues: b₂ arrives → A₂·x = b₂ …
Appended data:
[A₀; A₁]·x = [b₀; b₁]
[A₀; A₁]ᵀ·[A₀; A₁]·x₁ = [A₀; A₁]ᵀ·[b₀; b₁]
x₁ = P₁·(A₀ᵀ·b₀ + A₁ᵀ·b₁),  where P₁⁻¹ = A₀ᵀ·A₀ + A₁ᵀ·A₁
Original data:  (A₀ᵀ·A₀)·x₀ = A₀ᵀ·b₀
Re-arrange and simplify:
x₁ = x₀ + P₁·A₁ᵀ·(b₁ − A₁·x₀)
x₁ = x₀ + K₁·(b₁ − A₁·x₀)   — Recursive LS
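A recursive least-squares sketch: fold in a new data block (A₁, b₁) without re-solving from b₀, accumulating AᵀA as an information matrix (data values illustrative):

```python
import numpy as np

def rls_update(x0, info0, A1, b1):
    info1 = info0 + A1.T @ A1                 # P1^{-1} = A0^T A0 + A1^T A1
    P1 = np.linalg.inv(info1)
    x1 = x0 + P1 @ A1.T @ (b1 - A1 @ x0)      # x1 = x0 + K1 (b1 - A1 x0)
    return x1, info1

A0 = np.array([[1.0, 1.0], [1.0, 2.0]]); b0 = np.array([1.1, 1.9])
info0 = A0.T @ A0
x0 = np.linalg.solve(info0, A0.T @ b0)        # batch solution for the first block
A1 = np.array([[1.0, 3.0]]); b1 = np.array([3.2])
x1, info1 = rls_update(x0, info0, A1, b1)     # equals the batch solution on all rows
```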
Recursive LS – What’s Goin’ On?
Projecting b on S(A) gives the best estimate of x:
A·x̂ = P·b,  error: e = b − A·x̂.
Appending data expands the column space of A: S(A) grows, b gets
closer to S(A), and the error shrinks. In the limit, b collapses onto
S(A) and we recover the exact Ax = b:
error: b − A·x̂ → 0.
Recursive LS – and Beyond (Take One)
xᵢ₊₁ = Fᵢ·xᵢ   ← System Dynamics
bᵢ = Aᵢ·xᵢ     ← Measurement
Estimate the current state; predict the future state:
F₀·x₀ = x₁,  F₁·x₁ = x₂,  F₂·x₂ = x₃,  F₃·x₃ = x₄, …
with measurements b₀ = A₀·x₀, b₁ = A₁·x₁, b₂ = A₂·x₂, …
and covariances P₀, P₁, P₂, …
Stacking all the dynamics and measurement equations gives one large,
sparse Ax = b, schematically:
[A₀ 0 0; F₀ −I 0; 0 A₁ 0; 0 F₁ −I; 0 0 A₂]·[x₀; x₁; x₂] = [b₀; 0; b₁; 0; b₂]
Solving it recursively gives the gain-times-innovation form:
xᵢ₊₁ = xᵢ + Kᵢ·(bᵢ − Aᵢ·xᵢ)   — Kalman
Recursive LS – and Beyond (Take Two)
x̂ₖ = x̂ₖ⁻ + Kₖ·(zₖ − H·x̂ₖ⁻)   — Kalman
xₖ₊₁ = A·xₖ + wₖ   ← System dynamics with noise
zₖ = H·xₖ + vₖ     ← Measurement with error
System & measurement noise:
p(w) ~ N(0, Q),  Q: system noise covariance
p(v) ~ N(0, R),  R: measurement noise covariance
A priori error:  eₖ⁻ = xₖ − x̂ₖ⁻,  Pₖ⁻ = E[eₖ⁻·eₖ⁻ᵀ]
A posteriori error:  eₖ = xₖ − x̂ₖ,  Pₖ = E[eₖ·eₖᵀ]
The gain Kₖ is chosen so the posterior covariance is minimized:  dPₖ/dx̂ₖ = 0.
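A minimal predict/update sketch of the filter just described (matrices as in the earlier simulation sketch; all values illustrative):

```python
import numpy as np

def kalman_step(x_hat, P, z, A, H, Q, R):
    # Predict: propagate the estimate and its covariance through the dynamics.
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the measurement innovation.
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain K_k
    x_new = x_pred + K @ (z - H @ x_pred)       # x_hat_k = x_hat_k^- + K (z_k - H x_hat_k^-)
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred
    return x_new, P_new
```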
Rules of Probability – The Bayesian POV
Sum Rule:  p(x) = Σ_y p(x, y);  p(y) = Σ_x p(x, y)
Product Rule:  p(x, y) = p(y|x)·p(x) = p(x|y)·p(y)   (joint and conditional probability)
Bayes' Rule:
p(y|x) = p(x|y)·p(y) / p(x)
(Posterior) = (Likelihood)·(Prior) / (Marginal)
(Prior) – belief before making an observation or collecting data.
(Posterior) – belief after making an observation or collecting
data. It forms the Prior for the next iteration.
(Likelihood) – a function of y, not a probability distribution over y.
(Marginal) – the probability of the observed data; it normalizes the posterior.
[Figure: joint p(x, y) with its conditionals p(y|x) and p(x|y)]
Rules of Probability – The Bayesian POV
Bayes' Rule:  p(μ|x) = p(x|μ)·p(μ) / p(x)
(Posterior) = (Likelihood)·(Prior) / (Marginal)
Likelihood:  p(x|μ) = N(μ, σ²)   — μ unknown, σ² known
Prior (known):  p(μ) = N(μ₀, σ₀²)
Posterior:  p(μ|x) ∝ p(x|μ)·p(μ) = N(μ, σ²)·N(μ₀, σ₀²)
Gaussian Prior → Gaussian Posterior  [Conjugate Pair]
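The conjugate update has a closed form; a sketch of the standard Gaussian-Gaussian equations (not copied from the slide):

```python
def gaussian_posterior(x, sigma2, mu0, sigma0_2):
    """Posterior N(mu_post, var_post) for likelihood N(mu, sigma2), prior N(mu0, sigma0_2)."""
    var_post = 1.0 / (1.0 / sigma0_2 + 1.0 / sigma2)     # precisions add
    mu_post = var_post * (mu0 / sigma0_2 + x / sigma2)   # precision-weighted mean
    return mu_post, var_post
```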
Conditional Probability - De-Constructing Bayes’
p(a, b, c) = p(a)·p(b)·p(c)   — a, b, c independent
p(a, b, c) = p(a)·p(b|a)·p(c|b)   — chain a → b → c
p(a, b, c) = p(a|b, c)·p(b|c)·p(c)   — general factorization
A larger example over seven variables:
p(x₁, …, x₇) = p(x₁)·p(x₂)·p(x₃)·p(x₄|x₁, x₂, x₃)·p(x₅|x₁, x₂)·p(x₆|x₄)·p(x₇|x₄, x₅)
Factor Graphs – De-Constructing Bayes’
Chain x₁ – x₂ – x₃ with factors f₁, f₂, f₃, f₄:
p(x₁, x₂, x₃) = f₁(x₁, x₂)·f₂(x₁, x₂)·f₃(x₂, x₃)·f₄(x₃)
In general:  p(x₁, …, x_N) = Πᵢ fᵢ
Example:
f₁ = p(x₁)
f₂ = p(x₂)
f₃ = p(x₃|x₁, x₂)
p(x₁, x₂, x₃) = f₁·f₂·f₃
Making Noisy Measurements on a Stationary Robot
Setup: a camera observes a robot and a target; images go through
recognition/data extraction to display/storage.
Problem Statement: We want to estimate the mean position (μ) of the
robot from the noisy image measurements (x).
Measurement Model: Assume (μ) to be a normally distributed random
variable with probability p(μ); (x) is the noisy measurement. The sensor
noise, centered around μ, gives the likelihood p(x|μ). Choose the (μ)
that best explains the measurements.
Pattern vs Noise – One More Digression …
[Figures: wafer-placement data for Extension / Pre-aligner / Theta —
a principal-component scatter (Prin 1 vs Prin 2) and a mean-of-r chart
over samples; "Potential" of the System]
Patterns in data indicate the presence of assignable cause(s).
(Gaussian = purely random.)
Making Repeated Noisy Measurements
[Diagram: a sequence of samples x, each with prior p(μ) and likelihood
p(x|μ) — measurement after measurement …]
Successive Noisy Measurements –The Recursive Bayes’
The posterior p(μ|x) from each sample becomes the prior p(μ) for the next.
[Diagram: sample x drawn from p(x), likelihood p(x|μ), posterior p(μ|x)
feeding forward as the new prior]
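A recursive-Bayes sketch with Gaussians, where the posterior after each sample becomes the next prior (all numbers illustrative):

```python
mu, var = 0.0, 4.0                      # initial prior p(mu) = N(0, 4)
sigma2 = 0.5                            # known measurement noise variance
for x in [2.1, 1.8, 2.3, 2.0]:          # stream of noisy measurements
    var_post = 1.0 / (1.0 / var + 1.0 / sigma2)   # precisions add
    mu = var_post * (mu / var + x / sigma2)       # precision-weighted mean
    var = var_post                                # posterior becomes the next prior
print(mu, var)                          # the variance shrinks with every sample
```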
Making Noisy Measurements on a Moving Robot (The Double Whammy)
A wheeled mobile robot (WMR) whose distance from an obstacle is
measured by an ultrasonic sensor. Estimate the position of the WMR
at any time t.
Measurement: @ t=0, the initial position has a Gaussian distribution
based on sensor accuracy; the initial position is estimated.
Prediction: @ t=1, the position is predicted from the estimate @ t=0.
Measurement: @ t=1, the position is measured again.
Correction: @ t=1, the position is corrected using the measurement
@ t=1, and a prediction is made for t=2.
Making Noisy Measurements on a Moving Robot
System Noise – Low / Measurement Noise – Low
System Noise – Low / Measurement Noise – High
System Noise – High / Measurement Noise – Low
System Noise – High / Measurement Noise – High
Making Noisy Measurements on a Moving Robot (Take Three)
Kalman:
xᵢ₊₁ = Fᵢ·xᵢ + εᵢ   ← dynamics with prediction error
bᵢ = Aᵢ·xᵢ + eᵢ     ← measurement with noise
[Diagram: prior p(μ), likelihood p(x|μ) and posterior p(μ|x) propagated
over t₁, t₂, t₃, t₄, annotated with measurement noise and prediction error]
The HMM – Single Page Review
Hidden Markov Model
Hidden states:  zₖ ∈ {1, …, n};  Observations:  xₖ ∈ X
Transition:  T(i, j) = p(zₖ₊₁ = j | zₖ = i),  i, j ∈ {1…n}
Emission:  εᵢ(x) = p(xₖ = x | zₖ = i),  x ∈ X
Initial:  p(z₁)
Chain over t₁, t₂, t₃, …, tₖ, tₖ₊₁: hidden states z₁, z₂, z₃, …, zₖ, zₖ₊₁
emit observations x₁, x₂, x₃, …, xₖ, xₖ₊₁.
Joint:  p(z, x) = p(z₁)·p(x₁|z₁)·Πₖ p(zₖ₊₁|zₖ)·p(xₖ₊₁|zₖ₊₁)
The HMM – The Forward-Backward Algo
1t 2t 3t kt  1 kt1 kt nt 
)(),|(),|( 11 zpzzpzxp kkkk Given :
Emission Transition Initial
Goal: Compute )|( :1nk xzp
Forward : nkxzp kk ...1),,( :1 
Backward: nkzxp knk ...1),|( :1 
)|().,(),()|( :1:1:1:1 knkkknknk zxpxzpxzpxzp 
1kz
1kx
nz
nx
1z 2z 3z
kz
1x 2x 3x
1kz
1kx kx
kz
1x nx
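A forward-backward sketch for a discrete HMM; the 2-state, 10-symbol model at the bottom is an illustrative assumption, not the slide's example numbers:

```python
import numpy as np

def forward_backward(obs, T, E, pi):
    """alpha_k = p(z_k, x_{1:k}); beta_k = p(x_{k+1:n} | z_k); returns p(z_k | x_{1:n})."""
    n, S = len(obs), len(pi)
    alpha = np.zeros((n, S))
    beta = np.ones((n, S))
    alpha[0] = pi * E[:, obs[0]]                      # initialize with p(z_1) p(x_1|z_1)
    for k in range(1, n):                             # forward pass
        alpha[k] = (alpha[k - 1] @ T) * E[:, obs[k]]
    for k in range(n - 2, -1, -1):                    # backward pass
        beta[k] = T @ (E[:, obs[k + 1]] * beta[k + 1])
    gamma = alpha * beta                              # proportional to p(z_k, x_{1:n})
    return gamma / gamma.sum(axis=1, keepdims=True)   # normalize to p(z_k | x_{1:n})

T = np.array([[0.9, 0.1], [0.2, 0.8]])                # transition matrix T(i, j)
E = np.vstack([np.full(10, 0.1),                      # state 1 emits uniformly
               np.linspace(0.05, 0.15, 10)])          # state 2 favors high symbols
posterior = forward_backward([0, 3, 9, 9], T, E, np.array([0.5, 0.5]))
```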
The HMM – Example – 2 State Model
zₖ ∈ {1, 2};  xₖ ∈ X = {1 … 10} (uniformly distributed)
[Figure: two-state transition/emission diagram; its probability labels
read 2/7, 1/9, 2/6, 1/7, 1/6]
Forward-Backward – Two Ways
Robot Dynamics & Control (Featherstone) — a recursion over space
(joints j₀, j₁, j₂, j₃, …, jₘ):
fₖ = mₖ·aₖ
Forward:  velocities, accelerations →
Backward: ← forces, torques
State Prediction & Estimation (Kalman) — a recursion over time (x → z):
p(x), p(z|x), p(x|z)
Forward:  prediction, estimation →
Backward: ← smoothing
The Definition - Reliability of a Robot
It is the probability (R) that the robot will successfully complete
the assigned task (T) under the specified conditions (C).
Specified Conditions (C):
Martian Terrain / Contact with Human Body / Assembly Line
Assigned Task (T):
Move from A to B / Perform Surgery / Spot Weld.
Probability (R):
On Mars, move from A to B 50 times without failure.
On a human, perform surgery without failure.
On an assembly line, do 1 million spot welds before failure.
The basic problem is to quantify R during design.
Re-Visiting Kalman
“And what we don’t know yet believe to be true is religion.”
References
1. Craig, J. J., Introduction to Robotics: Mechanics and Control,
Prentice-Hall, 2003.
2. Muir, P. F. and Neuman, C. P. (1987), Kinematic Modeling of
Wheeled Mobile Robots, J. Robotic Systems, 4(2).
3. Bishop, C., Pattern Recognition and Machine Learning, Springer, 2006.
4. Ghahramani, Z. (2001), An Introduction to Hidden Markov Models
and Bayesian Networks, Int. J. Pattern Recognition and Artificial
Intelligence.
5. Kalman, R. E. and Bucy, R. S. (1961), New Results in Linear
Filtering and Prediction Theory, J. Basic Engineering.
Painting Title: Many a Moon Ago, G. Mustafa © 2011
Fin