Discrete-time Kalman Filter
NUNO GUERREIRO DE SOUSA*
University of Macau
nunoguerreirodesousa@gmail.com
July 2016
Abstract
A discrete-time linear time-varying Kalman filter was implemented in Matlab in order to estimate the position
and velocity of a moving underwater target given its velocity relative to the fluid in addition to its direction
with respect to the origin of the inertial frame. All the steps taken while approaching the final system
design and important considerations are analyzed in this report. Simulation results are also presented and
discussed.
I. INTRODUCTION
Consider the generic discrete-time linear time-varying (LTV) system

\[
\begin{cases}
x[k+1] = A[k]\,x[k] + B[k]\,u[k] + w[k]\\
y[k+1] = C[k+1]\,x[k+1] + n[k+1]
\end{cases}
\tag{1}
\]

where x[k] ∈ R^n is the state vector, y[k] ∈ R^m is the output vector and u[k] ∈ R^p is the input vector. The dynamics matrix A[k] ∈ R^{n×n} describes how the current state affects the state evolution. The matrix B[k] ∈ R^{n×p} is called the control matrix and determines how the system input affects the state evolution. C[k] ∈ R^{m×n} is the output matrix and determines the relationship between the system state and the system output. Finally, w[k] ∈ R^n and n[k] ∈ R^m denote additive, uncorrelated, zero-mean Gaussian noise sequences. The structure of all these vectors and matrices will be discussed and defined in the following sections.
II. SYSTEM MODELING
From (1), the first step is to define x[k], y[k], A[k], B[k], C[k] and u[k]. At first, since both the position and
the velocity in three dimensions are being estimated, the states were assumed to be the following:

\[
\begin{bmatrix} s[k] \\ v_R[k] \end{bmatrix}
=
\begin{bmatrix} s_x[k] \\ s_y[k] \\ s_z[k] \\ v_{Rx}[k] \\ v_{Ry}[k] \\ v_{Rz}[k] \end{bmatrix},
\tag{2}
\]

where s[k] ∈ R^3 and vR[k] ∈ R^3 represent the position and velocity in each of the coordinates at time k, respectively, according to the inertial frame.
*A big thank you to Prof. Carlos Silvestre, Prof. Rui Martins and Eng. Joel Reis for the opportunity of working in the Sensor-based Cooperative Robotics Laboratory under their guidance. Also to Dr. David Cabecinhas, Eng. Daniel Viegas and Eng. Wei Xie for all the knowledge and help provided.
The term vR[k] stands for the real velocity, which may differ from the velocity the body perceives.
For example, suppose the body is traveling at a certain speed and there is a current with opposite direction
and same speed that makes the body stay in the same position, although it is still using its engines to move.
What happens is that the real velocity vR[k] = 0, while the velocity perceived by the body is vB[k] ≠ 0. We assume that the difference between vR[k] and vB[k] is constant and we define it as

\[
b[k] := v_R[k] - v_B[k] \in \mathbb{R}^{3}.
\tag{3}
\]
The body velocity vB[k] and the direction of the body d[k] are measured at all instants. The direction d[k] is a unit vector, and it is given by

\[
d[k] = \frac{s[k]}{\lVert s[k]\rVert} = \frac{s[k]}{r[k]},
\tag{4}
\]

where ||s[k]|| is the Euclidean norm of the position, which we will denote by r[k]. Basically, d[k] is a normalization of s[k].
The input u[k] is a 3×1 vector containing vB[k]. The vector vB[k] could instead be included in the output y[k], since the velocities are known at all instants and could therefore be treated as measurements from the target. However, it turns out to be more convenient to define u[k] this way, as will become clear later in this section when defining the elements of the first equation of (1).
To compute the position s[k + 1] we note that, assuming the velocity is approximately constant over one sampling interval,

\[
s[k+1] = s[k] + v_R[k]\,T,
\tag{5}
\]

where T is the sampling time between two consecutive iterations and is assumed constant.
At this point, looking at (3) and knowing the body velocity vB[k], we should replace vR[k] by b[k] in
(2). Although estimating either vR[k] or b[k] yields the same result, it is easier for the filter to estimate b[k]
because it is constant. So, for now our states become

\[
\begin{bmatrix} s[k] \\ b[k] \end{bmatrix}
=
\begin{bmatrix} s_x[k] \\ s_y[k] \\ s_z[k] \\ b_x[k] \\ b_y[k] \\ b_z[k] \end{bmatrix}.
\tag{6}
\]
Recalling the measurements equation in (1), we should use d[k] as y[k], but by doing so a problem arises. Given

\[
d[k] = \frac{s[k]}{r[k]} = \frac{s[k]}{\sqrt{s_x^2[k] + s_y^2[k] + s_z^2[k]}},
\tag{7}
\]

an implicit non-linearity prevents the measurement expression in (1) from being written as C[k]x[k]. We can work around the non-linearity by noticing that (4) is equivalent to

\[
0 = s[k] - d[k]\,r[k].
\tag{8}
\]
Therefore we include r[k] as a state (the system remains observable). We can finally define our x[k] as
\[
x[k] :=
\begin{bmatrix} s[k] \\ b[k] \\ r[k] \end{bmatrix}
=
\begin{bmatrix} s_x[k] \\ s_y[k] \\ s_z[k] \\ b_x[k] \\ b_y[k] \\ b_z[k] \\ r[k] \end{bmatrix}
\in \mathbb{R}^{7}.
\tag{9}
\]
Moreover, if we consider that y[k] is a 3×1 vector of zeros, we can write:

\[
C[k] = \begin{bmatrix} I_{3} & 0_{3\times 3} & -d[k] \end{bmatrix}.
\tag{10}
\]
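For illustration, a minimal Matlab sketch of how (10) could be assembled at each instant, assuming the measured direction is stored in a 3×1 column vector d (the variable name is illustrative, not taken from the original implementation):

    % Output matrix C[k] = [ I3  0  -d[k] ] of (10); d is the 3x1 measured direction at instant k.
    C = [eye(3), zeros(3,3), -d];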
The measurements equation in (1) is settled. Now that we have defined x[k] and u[k], we can move
on to defining the matrices in the first equation of (1). We start by writing r[k + 1] in terms of the states
expressed in (9). First, we write (4) as
\[
d[k+1] = \frac{s[k+1]}{r[k+1]},
\tag{11}
\]

and since d[k + 1] is a unit vector and r[k + 1] is a scalar we can re-write (11) as

\[
r[k+1] = d^{T}[k+1]\,s[k+1],
\tag{12}
\]

which, together with (3) and (5), yields

\[
r[k+1] = d^{T}[k+1]\left(s[k] + v_B[k]\,T + b[k]\,T\right)
       = d^{T}[k+1]\left(r[k]\,d[k] + v_B[k]\,T + b[k]\,T\right).
\tag{13}
\]
Now, from (13) and (5) we easily write

\[
A[k] =
\begin{bmatrix}
1 & 0 & 0 & T & 0 & 0 & 0\\
0 & 1 & 0 & 0 & T & 0 & 0\\
0 & 0 & 1 & 0 & 0 & T & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & d_x[k+1]T & d_y[k+1]T & d_z[k+1]T & d^{T}[k+1]\,d[k]
\end{bmatrix}
\in \mathbb{R}^{7\times 7}
\tag{14}
\]
and

\[
B[k] =
\begin{bmatrix}
T & 0 & 0\\
0 & T & 0\\
0 & 0 & T\\
0 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0\\
d_x[k+1]T & d_y[k+1]T & d_z[k+1]T
\end{bmatrix}
\in \mathbb{R}^{7\times 3}.
\tag{15}
\]
Hence we have defined all the variables in (1), except for w[k] and n[k], which will be discussed in Section
IV.
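For reference, a minimal Matlab sketch of how (14) and (15) could be assembled at each instant is given below. Here d_k and d_kp1 are assumed to be 3×1 column vectors holding the measured directions d[k] and d[k+1], and T is the sampling period; these variable names are illustrative, not taken from the original implementation.

    % Dynamics matrix A[k] of (14): the position rows integrate the bias,
    % the bias rows are constant, and the last row propagates the range r[k].
    A = [eye(3),     T*eye(3),   zeros(3,1);
         zeros(3),   eye(3),     zeros(3,1);
         zeros(1,3), T*d_kp1',   d_kp1'*d_k];

    % Control matrix B[k] of (15): the body velocity v_B[k] enters the
    % position rows and the range row.
    B = [T*eye(3);
         zeros(3);
         T*d_kp1'];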
III. TRAJECTORY
A sufficiently rich trajectory for the body is presented in this section. Initially, a combination of three sinusoidal functions was used (one for each of the x, y and z coordinates). This was later changed because, despite being a combination of three sines, the resulting trajectory still exhibited a strongly periodic behavior, so the different directions were not excited enough for the filter to achieve good performance (the filter needs sufficiently varied information to estimate accurately).

A more elaborate approach could have been to move the observer over time and force these variations, but Eng. Joel Reis provided a trajectory of his own design that is rich enough in terms of variations to be suitable for testing. This trajectory is shown in Figure 1.
Figure 1: Trajectory of the body's movement. [3-D plot with axes X, Y and Z.]
IV. KALMAN FILTER IMPLEMENTATION
i. Kalman filter equations
The Kalman filter implementation is as follows:

\[
\bar{x}[k] = A[k-1]\,\hat{x}[k-1] + B[k-1]\,u[k-1]
\tag{16}
\]
\[
\bar{P}[k] = A[k-1]\,P[k-1]\,A^{T}[k-1] + Q
\tag{17}
\]
\[
K[k] = \bar{P}[k]\,C^{T}[k]\left(C[k]\,\bar{P}[k]\,C^{T}[k] + R\right)^{-1}
\tag{18}
\]
\[
\hat{x}[k] = \bar{x}[k] + K[k]\left(y[k] - C[k]\,\bar{x}[k]\right)
\tag{19}
\]
\[
P[k] = \left(I - K[k]\,C[k]\right)\bar{P}[k],
\tag{20}
\]

where the first two equations, (16) and (17), refer to the model forecast step, and the last three, (18), (19) and (20), refer to the data assimilation step. These equations run in a loop for 1000 iterations in this particular case.
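A minimal Matlab sketch of one run of (16)–(20) is given below. It assumes the (noisy) directions are stored column-wise in d, the body velocities in vB, the number of iterations in N, and that Q, R and the initial values x0 and P0 of (23)–(26) are already defined; buildA and buildB are hypothetical helpers that assemble (14) and (15) as in the earlier snippet. All names are illustrative.

    x_hat = x0;  P = P0;            % initial estimate (23) and initial covariance (24)
    y = zeros(3,1);                 % pseudo-measurement: 0 = s[k] - d[k] r[k], see (8)
    for k = 2:N
        A = buildA(d(:,k-1), d(:,k), T);    % hypothetical helper assembling (14)
        B = buildB(d(:,k), T);              % hypothetical helper assembling (15)
        C = [eye(3), zeros(3,3), -d(:,k)];  % output matrix (10)
        % model forecast step, (16)-(17)
        x_bar = A*x_hat + B*vB(:,k-1);
        P_bar = A*P*A' + Q;
        % data assimilation step, (18)-(20)
        K = (P_bar*C') / (C*P_bar*C' + R);
        x_hat = x_bar + K*(y - C*x_bar);
        P = (eye(7) - K*C)*P_bar;
    end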
ii. Variables and simulation parameters
The data from the trajectory described in Section III was stored in a variable. From then on it was possible to compute vB[k], d[k] and r[k] for all instants, based on equation (7) and on the definition of the derivative, approximated as

\[
v_B[k] = \frac{s[k+1] - s[k]}{T}.
\tag{21}
\]
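A minimal Matlab sketch of this preprocessing, assuming the trajectory positions are stored column-wise in a 3×N matrix s (variable names are illustrative):

    r  = sqrt(sum(s.^2, 1));        % range r[k] = ||s[k]||, 1xN
    d  = s ./ r;                    % unit direction d[k] = s[k]/r[k] from (4), 3xN (implicit expansion)
    vB = diff(s, 1, 2) / T;         % forward-difference velocity from (21), 3x(N-1)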
The initial position was set to:

\[
s[0] = \begin{bmatrix} 100 \\ 50 \\ 0 \end{bmatrix}
\tag{22}
\]
and the initial estimate was set to:

\[
\hat{x}[0] = \begin{bmatrix} 100 \\ 100 \\ 0 \\ 0 \\ 0 \\ 0 \\ 100 \end{bmatrix}.
\tag{23}
\]
The number of iterations was set to 1000 and T was set to 1 s. All the other variables have been discussed earlier.
iii. Error and Noise
The only topic left to discuss concerns the matrices P, Q and R. P̄ is the a priori (predicted) estimate error covariance and P is the a posteriori estimate error covariance. P is initially set as a diagonal matrix expressing the uncertainty of the initial state estimate. From then on, P̄ is computed through equation (17), and both covariances are updated inside the loop.
Q is the process noise covariance matrix and represents the uncertainty in the process model. A relatively poor model can sometimes still be used simply by "injecting" enough uncertainty through the entries of this matrix.

R is the measurement noise covariance matrix which, as the name says, represents the noise over the measurements. Just like with Q, if the measurements are known to be poor, the solution is to use larger values in R.

The white noise sequences w and n that appear in the system equations in (1), and that so far have not been discussed, are accounted for through the matrices Q and R.
All these matrices were tuned manually so that the filter produced better results, that is, so that the estimated values were as close as possible to the real values. In the end they were initialized, or set, as:
\[
P[0] = \mathrm{diag}\left(10^{4},\ 10^{4},\ 10^{4},\ 10,\ 10,\ 10,\ 10^{4}\right) \in \mathbb{R}^{7\times 7},
\tag{24}
\]
\[
Q = \mathrm{diag}\left(10^{-4},\ 10^{-4},\ 10^{-4},\ 10^{-6},\ 10^{-6},\ 10^{-6},\ 9\right) \in \mathbb{R}^{7\times 7},
\tag{25}
\]
and

\[
R = \mathrm{diag}\left(10,\ 10,\ 10\right) = 10\,I_{3} \in \mathbb{R}^{3\times 3}.
\tag{26}
\]
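In Matlab, this tuning can be written down directly; a minimal sketch matching (23)–(26), with illustrative variable names:

    P  = diag([1e4, 1e4, 1e4, 10, 10, 10, 1e4]);         % initial covariance P[0], (24)
    Q  = diag([1e-4, 1e-4, 1e-4, 1e-6, 1e-6, 1e-6, 9]);  % process noise covariance, (25)
    R  = 10 * eye(3);                                    % measurement noise covariance, (26)
    x0 = [100; 100; 0; 0; 0; 0; 100];                    % initial state estimate, (23)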
Another important issue to address is that the directions computed with equation (4) are "clean", meaning they were obtained directly from the trajectory positions. In reality this would not happen, since the directions are measurements of the target, so noise was introduced into the directions. Since this kind of noise cannot be considered additive, we cannot simply add it to the Cartesian components; instead, the directions were converted into spherical coordinates (azimuth and elevation), a zero-mean Gaussian random perturbation with values between −1° and 1° was added to each angle, and the new direction values were then converted back to Cartesian coordinates and fed to the filter. Since matrix R states that the direction measurements carry a degree of uncertainty, the data was adjusted so as to be consistent with that information. This way, the filter received coherent information and produced better results.
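A minimal Matlab sketch of this perturbation for a single 3×1 clean direction d is given below; the standard deviation sigma_deg is an assumed tuning value chosen so that most samples fall within ±1°, since the report does not state the exact value used.

    sigma_deg = 0.5;                               % assumed; keeps most perturbations within +/-1 degree
    [az, el, rho] = cart2sph(d(1), d(2), d(3));    % direction in spherical coordinates
    az = az + deg2rad(sigma_deg*randn);            % perturb the azimuth
    el = el + deg2rad(sigma_deg*randn);            % perturb the elevation
    [dx, dy, dz] = sph2cart(az, el, rho);          % back to Cartesian coordinates
    d_noisy = [dx; dy; dz];                        % noisy direction fed to the filter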
V. RESULTS AND DISCUSSION
The plots resulting from one simulation of the Matlab implementation of the filter are now presented. Figures 2 and 3 show the position error (the difference between the actual position of the body and the position estimated by the filter). The difference between them is that Figure 2 shows the evolution of the error throughout all the iterations, while Figure 3 focuses only on the last 600 iterations, so that the reader has a better idea of the values achieved by the end of the simulation and of how fast they were achieved. The same idea is repeated for the velocity bias error in Figures 4 and 5 and, finally, for the range error in Figures 6 and 7:
Figure 2: Position error - Initial convergence. [Plot of actual position − estimated position (m) vs. time (s); components X, Y and Z.]
Figure 3: Detail of position error in the last 600 iterations. [Plot of actual position − estimated position (m) vs. time (s); components X, Y and Z.]
Figure 4: Velocity bias error - Initial convergence. [Plot of actual bias − estimated bias (m/s) vs. time (s); components vX, vY and vZ.]
Figure 5: Detail of velocity bias error in the last 600 iterations. [Plot of actual bias − estimated bias (×10⁻³ m/s) vs. time (s); components vX, vY and vZ.]
Figure 6: Range error - Initial convergence. [Plot of actual range − estimated range (m) vs. time (s).]
Figure 7: Detail of range error in the last 600 iterations. [Plot of actual range − estimated range (m) vs. time (s).]
Looking at all the figures presented above, we can see that the output values of the Kalman filter do converge towards a good estimate of the real values. To check the filter's performance more thoroughly, a Monte Carlo experiment with a total of 1000 simulations was run, each with a different realization of the noise, which made it possible to compute the average and standard deviation of each state over the last 600 iterations (where the values have already largely converged). A table containing the average of these statistics over the Monte Carlo runs is shown below:
Table 1: Average (µ) and standard deviation (σ) of each state, obtained through the Monte Carlo runs.

State | µ      | σ      | Units
sx    | 0.2289 | 0.7004 | m
sy    | 0.4204 | 0.7626 | m
sz    | 0.0958 | 0.2109 | m
bx    | 0.725  | 1.9    | mm/s
by    | 2.3    | 5.9    | mm/s
bz    | 0.105  | 1.4    | mm/s
r     | 0.6047 | 0.9759 | m
From this table we can see that the means and standard deviations of all the states are close to zero. Only the range r is not as close as the other states, but the filter is still suitable for many applications. One possible explanation for r being further off than expected is that the expression used to compute it derives from prior approximations. Moreover, r depends on other states that are themselves being estimated.
To obtain more exact predictions, one could try to find even better initial values for the matrices P, Q and R. As stated before, the initial values used for these matrices in all the simulations were adjusted manually, so, in a way, there is room for improvement here.

Besides this, one could implement the Extended Kalman filter instead of the linear one when facing the non-linearity in Section II. Instead of working around the non-linearity the way we did, the Extended Kalman filter, which was designed precisely for non-linear systems, could produce even better results, although it is not optimal and does not guarantee global stability.
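As a reference, a minimal Matlab sketch of how such a Monte Carlo evaluation could be organized; run_filter is a hypothetical helper (not part of the original implementation) that runs one noisy simulation and returns the 7×N matrix of state estimation errors:

    M = 1000;                        % number of Monte Carlo runs
    stats = zeros(7, 2, M);          % per-run mean and standard deviation of each state error
    for m = 1:M
        e = run_filter();                        % hypothetical helper: one simulation, 7xN error matrix
        tail = e(:, end-599:end);                % last 600 iterations
        stats(:, :, m) = [mean(tail, 2), std(tail, 0, 2)];
    end
    avg_stats = mean(stats, 3);      % averages of the kind reported in Table 1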
VI. CONCLUSIONS
We can state that the linear discrete-time Kalman filter was successfully implemented for this particular case, since the estimated results were very close to the real values. Still, even better results might be achievable, for instance by using the Extended Kalman filter as an alternative. On the other hand, our approach is optimal for the linear model used here and guarantees global stability and convergence, which may or may not hold with the Extended version. This implementation is good enough to filter noisy signals a posteriori, after the data has been collected, and it would also be useful for real-time estimation, with some additional limitations in underwater environments.