# Kalman filter for object tracking


1. Kalman Filter for Object Tracking
2. Kalman Filter for Object Tracking
   • Kalman filter as a linear discrete-time dynamical system
   • The Kalman filtering problem
   • Optimal estimation of parameters
   • Extended Kalman filter: modelling non-linear systems
   • Object tracking using the Kalman filter: (a) linear KF model, (b) extended KF
   • Mean shift optimal prediction and Kalman filter for object tracking
   • Object tracking using an adaptive Kalman filter combined with mean shift
3. Kalman Filter as a Linear Discrete-Time Dynamical System
   • Dynamic system: the state of the system is time-variant.
   • The system is described by a "state vector": the minimal set of data sufficient to uniquely describe the dynamical behaviour of the system.
   • We keep updating this state vector as observable data arrive.
4. KF as a Linear Discrete-Time System: System Description
   • Process equation: xk+1 = F(k+1,k) * xk + wk, where F(k+1,k) is the transition matrix taking the state xk from time k to time k+1.
   • The process noise wk is assumed to be additive, white, and Gaussian with zero mean and covariance matrix defined by E[wn wkT] = Qk for n = k, and zero otherwise.
5. KF as a Linear Discrete-Time System: Measurement
   • Measurement equation: yk = Hk * xk + vk, where yk is the observable data at time k and Hk is the measurement matrix.
   • The measurement noise vk is assumed to be additive, white, and Gaussian with zero mean and covariance matrix defined by E[vn vkT] = Rk for n = k, and zero otherwise.
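The process and measurement equations above can be simulated directly. A minimal NumPy sketch, where the constant-velocity F, H, Q, R matrices are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D constant-velocity model: state xk = [position, velocity]
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])   # transition matrix F(k+1,k)
H = np.array([[1.0, 0.0]])   # we observe position only
Q = 0.01 * np.eye(2)         # process-noise covariance Qk
R = np.array([[0.25]])       # measurement-noise covariance Rk

x = np.array([0.0, 1.0])     # true initial state
states, measurements = [], []
for k in range(50):
    w = rng.multivariate_normal(np.zeros(2), Q)    # process noise wk
    x = F @ x + w                                  # xk+1 = F(k+1,k) xk + wk
    v = rng.multivariate_normal(np.zeros(1), R)    # measurement noise vk
    y = H @ x + v                                  # yk = Hk xk + vk
    states.append(x)
    measurements.append(y)

states = np.array(states)
measurements = np.array(measurements)
```

The filter in the following slides sees only `measurements` and must recover `states`.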
6. KF as a Linear Time-Variant Dynamical System
7. KF: The Filtering Problem
   Use the entire observed data, consisting of the vectors y1, y2, …, yk, to find for each k ≥ 1 the minimum mean-square-error estimate of the state xi. The problem is called filtering if i = k, prediction if i > k, and smoothing if 1 ≤ i < k.
8. Kalman Filter: Optimal Estimates
   • yk = xk + vk, where xk is an unknown signal and vk is an additive noise component.
   • Let ^xk denote the a posteriori estimate of the signal xk given the observations y1, y2, …, yk. In general, the estimate ^xk differs from the unknown signal xk.
   • The cost function penalizing incorrect estimates is the mean-square error Jk = E[(xk - ^xk)2].
   • Optimal estimate: ^xk = E[xk | y1, y2, …, yk].
9. KF: Using Observed Data to Improve the State Estimate
   • Use the information contained in the new measurement yk to update the estimate of the unknown state xk.
   • Let ^xk_ denote the a priori estimate of the state, already available at time k.
   • The posterior estimate after using the information in the new measurement is ^xk = G(1,k) * ^xk_ + G(2,k) * yk, where G(1,k) and G(2,k) are to be determined so that this is the best linear estimate.
   • For that we need a cost function for the estimation error, which we then minimize over these parameters.
10. KF: Cost Function and Error Calculation
   • State error vector: xk~ = xk - ^xk.
   • By the principle of orthogonality, E[xk~ yiT] = 0 for i = 1 to k-1.
   • Using the error-vector definition, the system equations, and this orthogonality, the condition reduces to E[(xk - G(1,k)*^xk_ - G(2,k)*Hk*xk - G(2,k)*vk) yiT] = 0 for i = 1 to k-1.
   • Since vk is independent of yi, the last term can be dropped.
11. KF: Cost Function and Error Calculation (continued)
   • Adding and subtracting the term G(1,k)*xk, the remaining expression can be written as E[(I - G(2,k)*Hk - G(1,k))*xk*yiT + G(1,k)*(xk - ^xk_)*yiT] = 0 for i = 1 to k-1.
   • The second term can again be dropped by the principle of orthogonality.
   • The expression reduces to (I - G(2,k)*Hk - G(1,k)) * E[xk*yiT] = 0.
   • This must hold for arbitrary xk and yi, hence the equation I - G(2,k)*Hk - G(1,k) = 0 must hold.
   • Substituting G(1,k) = I - G(2,k)*Hk back gives ^xk = ^xk_ + G(2,k)*(yk - Hk*^xk_). G(2,k) is the Kalman gain; we write Gk for it from here onwards.
12. KF: Kalman Gain Calculation (continued)
   • From the principle of orthogonality, E[(xk - ^xk) yiT] = 0, and it follows that E[(xk - ^xk) ^yiT] = 0, where ^yi is the estimate of yi given the previous measurements y1, y2, …, yi-1.
   • Define the innovations process yk~ = yk - ^yk. The innovation represents the "new" information contained in yk.
   • It may also be expressed as yk~ = yk - Hk*^xk_, which reduces to yk~ = Hk*~xk_ + vk.
   • From the two relations above we deduce E[(xk - ^xk)*yk~T] = 0.
13. KF: Kalman Gain Calculation (continued)
   • The a posteriori state error can be written as xk - ^xk = ~xk_ - Gk*(Hk*~xk_ + vk). Substituting this into the orthogonality relation above:
   • E[{(I - Gk*Hk)*~xk_ - Gk*vk}*(Hk*~xk_ + vk)T] = 0
   • (I - Gk*Hk)*E[~xk_*~xk_T]*HkT - Gk*E[vk*vkT] = 0
   • The Kalman gain can be calculated once we know E[~xk_*~xk_T].
   • Define the a priori covariance Pk_ = E[~xk_*~xk_T] and the a posteriori covariance Pk = E[~xk*~xkT].
   • Given the "old" a posteriori covariance matrix Pk-1, compute the "updated" a priori covariance matrix Pk_. The remaining task is then to calculate the a posteriori covariance from the a priori estimated covariance matrix.
14. KF: Kalman Gain Calculation (continued)
   • From the definition of the a posteriori covariance matrix we get Pk = (I - Gk*Hk)*E[~xk_*~xk_T]*(I - Gk*Hk)T + Gk*E[vk*vkT]*GkT, i.e. Pk = (I - Gk*Hk)*Pk_*(I - Gk*Hk)T + Gk*Rk*GkT, which relates the a priori and a posteriori covariance matrices.
   • On simplification (using the optimal gain Gk), this boils down to Pk = (I - Gk*Hk)*Pk_.
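Solving the orthogonality condition of slide 13 gives the gain Gk = Pk_*HkT*(Hk*Pk_*HkT + Rk)^-1. A small NumPy sketch checking that the simplified update above agrees with the full "Joseph" form for this optimal gain; the Pk_, Hk, Rk values are illustrative:

```python
import numpy as np

P_prior = np.array([[2.0, 0.5],
                    [0.5, 1.0]])   # a priori covariance Pk_ (illustrative)
H = np.array([[1.0, 0.0]])         # measurement matrix Hk
R = np.array([[0.25]])             # measurement-noise covariance Rk

# Kalman gain: Gk = Pk_ HkT (Hk Pk_ HkT + Rk)^-1
S = H @ P_prior @ H.T + R          # innovation covariance
G = P_prior @ H.T @ np.linalg.inv(S)

I = np.eye(2)
# Full ("Joseph") form: Pk = (I - Gk Hk) Pk_ (I - Gk Hk)T + Gk Rk GkT
P_joseph = (I - G @ H) @ P_prior @ (I - G @ H).T + G @ R @ G.T
# Simplified form, valid only for the optimal gain: Pk = (I - Gk Hk) Pk_
P_post = (I - G @ H) @ P_prior
```

The Joseph form is often preferred in implementations because it stays symmetric and positive semi-definite even with rounding errors.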
15. KF: Calculation of the Error Covariance Matrix (continued)
   • For the prediction stage, the a priori estimate is defined in terms of the "old" a posteriori estimate: ^xk_ = F(k,k-1)*^xk-1, and the a priori error is ~xk_ = xk - ^xk_.
   • ~xk_ = (F(k,k-1)*xk-1 + wk-1) - F(k,k-1)*^xk-1 = F(k,k-1)*~xk-1 + wk-1.
   • Hence Pk_ = F(k,k-1)*Pk-1*F(k,k-1)T + Qk-1, which defines the a priori covariance Pk_ in terms of the old covariance matrix Pk-1.
16. KF: Calculation of the Error Covariance Matrix (continued)
   • We now know how to estimate the a posteriori covariance from the a priori covariance matrix, and how to predict the a priori matrix from the "old" a posteriori covariance matrix.
   • Once we have the error covariance matrix, we can compute the Kalman gain and hence the new state estimate.
   • The only thing left is initialization of the error covariance matrix.
17. KF: Initialization
   • ^x0 = E[x0]
   • P0 = E[(x0 - E[x0])*(x0 - E[x0])T]
   • This initial condition has the advantage of yielding an unbiased estimate of the state xk.
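With initialization in place, the complete prediction/correction cycle of slides 11-17 can be put together. A sketch, where the constant-velocity model and noise levels are illustrative assumptions:

```python
import numpy as np

def kalman_step(x_post, P_post, y, F, H, Q, R):
    """One prediction/correction cycle of the linear Kalman filter."""
    # Prediction: xk_ = F xk-1,  Pk_ = F Pk-1 FT + Q
    x_prior = F @ x_post
    P_prior = F @ P_post @ F.T + Q
    # Correction: Gk = Pk_ HT (H Pk_ HT + R)^-1
    S = H @ P_prior @ H.T + R
    G = P_prior @ H.T @ np.linalg.inv(S)
    x_new = x_prior + G @ (y - H @ x_prior)        # innovation update
    P_new = (np.eye(len(x_post)) - G @ H) @ P_prior
    return x_new, P_new

# Illustrative 1-D constant-velocity model
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[0.25]])

# Initialization: ^x0 = E[x0], P0 = cov(x0)
x, P = np.zeros(2), np.eye(2)

rng = np.random.default_rng(1)
truth = np.array([0.0, 1.0])
for k in range(30):
    truth = F @ truth                              # noiseless ground truth
    y = H @ truth + rng.normal(0.0, 0.5, size=1)   # noisy position measurement
    x, P = kalman_step(x, P, y, F, H, Q, R)
```

After a few steps the estimate tracks the true position and the covariance P settles to a small steady-state value.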
18. KF: Review of the Complete Process
19. Kalman Filter as a Gaussian Density Propagation Process
   • The random component wk leads to spreading (diffusion) of the density function.
   • The deterministic component F(k+1,k)*xk causes the density to drift bodily.
   • The effect of an external observation y is to superimpose a reactive effect on this diffusion, in which the density tends to peak in the vicinity of observations.
20. Extended Kalman Filter: Modelling Non-Linear Systems
   • Process equation: xk+1 = f(k, xk) + wk.
   • Measurement equation: yk = h(k, xk) + vk, where, as before, wk and vk are independent zero-mean white Gaussian noise processes with covariance matrices Qk and Rk respectively.
   • f(k, xk) denotes a nonlinear, possibly time-variant transition function.
   • h(k, xk) denotes a nonlinear measurement function that may be time-variant too.
21. Extended Kalman Filter: Modelling Non-Linear Systems
   • Linearize the state-space model at each time instant around the most recent state estimate, taken to be either ^xk or ^xk_ depending on which particular function is being considered.
   • F(k+1,k) = df(k,x)/dx evaluated at x = ^xk.
   • Hk = dh(k,x)/dx evaluated at x = ^xk_.
   • This is a first-order Taylor approximation of the nonlinear functions f(k,x) and h(k,x) around ^xk and ^xk_ respectively.
22. Extended Kalman Filter: Modelling Non-Linear Systems
   • f(k, xk) ≈ f(k, ^xk) + F(k+1,k)*(xk - ^xk)
   • h(k, xk) ≈ h(k, ^xk_) + Hk*(xk - ^xk_)
   • With these quantities in hand, we can approximate the system as xk+1 ≈ F(k+1,k)*xk + wk + dk and _yk ≈ Hk*xk + vk.
   • The two newly introduced quantities are _yk = yk - {h(k, ^xk_) - Hk*^xk_} and dk = f(k, ^xk) - F(k+1,k)*^xk.
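The linearization above can be sketched with finite-difference Jacobians. The nonlinear f (a weakly nonlinear oscillator) and h (a range measurement) below are illustrative choices, not from the slides:

```python
import numpy as np

def jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x."""
    fx = func(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def f(x):
    # illustrative nonlinear transition: oscillator-like dynamics
    return np.array([x[0] + 0.1 * x[1], x[1] - 0.1 * np.sin(x[0])])

def h(x):
    # illustrative nonlinear measurement: range to the origin
    return np.array([np.hypot(x[0], x[1])])

Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

def ekf_step(x_post, P_post, y):
    F = jacobian(f, x_post)       # F(k+1,k) = df/dx at x = ^xk
    x_prior = f(x_post)           # nonlinear prediction
    P_prior = F @ P_post @ F.T + Q
    H = jacobian(h, x_prior)      # Hk = dh/dx at x = ^xk_
    S = H @ P_prior @ H.T + R
    G = P_prior @ H.T @ np.linalg.inv(S)
    x_new = x_prior + G @ (y - h(x_prior))   # innovation uses nonlinear h
    P_new = (np.eye(2) - G @ H) @ P_prior
    return x_new, P_new

x, P = np.array([1.0, 0.0]), np.eye(2)
x, P = ekf_step(x, P, np.array([1.05]))
```

In practice the Jacobians are usually derived analytically; finite differences are used here only to keep the sketch short.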
23. Object Tracking using the Kalman Filter
   • The object is segmented using image-processing techniques.
   • The Kalman filter is used to make the object-localization method more efficient.
   • Steps involved in vision tracking:
     • Step 1, initialization (k = 0): find the object position and start with a large error tolerance (e.g. P0 = 1).
     • Step 2, prediction (k > 0): predict the relative position of the object ^xk_, which is used as the search centre to find the object.
     • Step 3, correction (k > 0): use the measurement to carry out the state correction with the Kalman filter, obtaining ^xk.
24. Object Tracking using the Kalman Filter
   • An advantage of this scheme is that it tolerates small occlusions.
   • Whenever the object is occluded, we skip the measurement correction and keep predicting until the object is localized again.
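This occlusion-tolerant loop can be sketched as follows. The 2-D constant-velocity model and the use of None to mark occluded frames are illustrative conventions, not from the slides:

```python
import numpy as np

# Illustrative 2-D constant-velocity model: state = [x, y, vx, vy]
F = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])    # we measure the (x, y) centroid only
Q = 0.01 * np.eye(4)
R = 1.0 * np.eye(2)

x = np.array([0., 0., 1., 1.])      # Step 1: initial state estimate ...
P = np.eye(4)                       # ... with a large error tolerance

# None marks frames where the object is occluded
frames = [np.array([1., 1.]), np.array([2., 2.]), None, None,
          np.array([5., 5.])]

for y in frames:
    # Step 2: prediction; ^xk_ is the search centre for segmentation
    x = F @ x
    P = F @ P @ F.T + Q
    if y is None:
        continue                    # occluded: skip correction, keep predicting
    # Step 3: correction with the measured centroid
    S = H @ P @ H.T + R
    G = P @ H.T @ np.linalg.inv(S)
    x = x + G @ (y - H @ x)
    P = (np.eye(4) - G @ H) @ P
```

During the occluded frames the state coasts on the velocity estimate, so the track is reacquired as soon as a measurement reappears.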
25. Object Tracking using the Kalman Filter for Non-Linear Trajectories
   • Extended Kalman filter: modelling a more dynamic system using unconstrained Brownian motion.
26. Object Tracking using the Kalman Filter
27. Mean Shift Optimal Prediction and Kalman Filter for Object Tracking
28. Mean Shift Optimal Prediction and Kalman Filter for Object Tracking
   • Colour-based similarity measurement: Bhattacharyya distance
   • Target localization
   • Distance maximization
   • Kalman prediction
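The colour-based similarity measurement can be sketched with the Bhattacharyya coefficient between two normalized colour histograms; the 16-bin histograms below are illustrative:

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """rho(p, q) = sum_i sqrt(p_i * q_i) for normalized histograms p, q."""
    return float(np.sum(np.sqrt(p * q)))

def bhattacharyya_distance(p, q):
    """d = sqrt(1 - rho); small d means the candidate matches the target."""
    return float(np.sqrt(max(0.0, 1.0 - bhattacharyya_coefficient(p, q))))

# Illustrative 16-bin colour histograms, normalized to sum to 1
rng = np.random.default_rng(2)
target = rng.random(16)
target /= target.sum()             # target colour model
candidate = rng.random(16)
candidate /= candidate.sum()       # candidate region's colour model

d_same = bhattacharyya_distance(target, target)     # ~0: perfect match
d_diff = bhattacharyya_distance(target, candidate)  # larger: poorer match
```

A sudden drop of the coefficient below a threshold is what the adaptive scheme on the next slide uses to detect occlusion.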
29. Object Tracking using an Adaptive Kalman Filter Combined with Mean Shift
   • Occlusion detection based on the value of the Bhattacharyya coefficient.
   • The Kalman filter parameters are estimated based on a trade-off between the weight given to the measurement and the residual error matrix.
30. Thank You