Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
9.0 Intro
In a post mortem (after the fact) analysis, it is possible to wait for more observations to accumulate. In that case, the estimate can be improved by smoothing.
— Andrew Jazwinski [Jaz70], p. 143
In previous chapters, we discussed how to obtain the optimal a priori and a posteriori state estimates.

The a priori state estimate at time $k$, $\hat{x}_k^-$, is the state estimate at time $k$ based on all the measurements up to (but not including) time $k$. The a posteriori state estimate at time $k$, $\hat{x}_k^+$, is the state estimate at time $k$ based on all the measurements up to and including time $k$:

$$\begin{aligned}
\hat{x}_k^- &= E\left[x_k \mid y_1,\cdots,y_{k-1}\right] \\
\hat{x}_k^+ &= E\left[x_k \mid y_1,\cdots,y_k\right]
\end{aligned} \tag{9.1}$$

There are often situations in which we want to obtain other types of state estimates. We will define $\hat{x}_{k,j}$ as the estimate of $x_k$ given all measurements up to and including time $j$. With this notation, we see that

$$\begin{aligned}
\hat{x}_{k,k-1} &= \hat{x}_k^- \\
\hat{x}_{k,k} &= \hat{x}_k^+
\end{aligned} \tag{9.2}$$

Now suppose, for example, that we have recorded measurements up to time index 54 and we want to obtain an estimate of the state at time index 33. Our theory in the previous chapters tells us how to obtain $\hat{x}_{33}^-$ or $\hat{x}_{33}^+$, but those estimates only use the measurements up to and including times 32 and 33, respectively. If we have more measurements (e.g., measurements up to time 54), it stands to reason that we should be able to get an even better estimate of $x_{33}$. This chapter discusses some ways of obtaining better estimates.
Fixed-point Smoothing
In another scenario, it may be that we are interested in obtaining an estimate of the state at a fixed time $j$. As measurements keep rolling in, we want to keep updating our estimate $\hat{x}_j$. In other words, we want to obtain $\hat{x}_{j,j+1}, \hat{x}_{j,j+2}, \cdots$.

This could be the case, for example, if a satellite takes a picture at time $j$. In order to more accurately process the photograph, we need an estimate of the satellite state (position and velocity) at time $j$. As the satellite continues to orbit, we may obtain additional range measurements of the satellite, so we can continue to update the estimate of $x_j$ and thus improve the quality of the processed photograph.

This situation is called fixed-point smoothing because the time point for which we want to obtain a state estimate (time $j$ in this example) is fixed, but the number of measurements that are available to improve that estimate continually changes. Fixed-point smoothing is depicted in Figure 9.1 and is discussed in Section 9.2.
Fixed-lag Smoothing
Another type of smoothing is fixed-lag smoothing. In this situation, we want to obtain an estimate of the state at time $(k-N)$ given measurements up to and including time $k$, where the time index $k$ continually changes as we obtain new measurements, but the lag $N$ is a constant. In other words, at each time point we have $N$ future measurements available for our state estimate. We therefore want to obtain $\hat{x}_{k-N,k}$ for $k = N, N+1, \cdots$, where $N$ is a fixed positive integer.

This could be the case, for example, if a satellite is continually taking photographs that are to be displayed or transmitted $N$ time steps after each photograph is taken. In this case, since the photograph is processed $N$ time steps after it is taken, we have $N$ additional measurements after each photograph that are available to update the estimate of the satellite state and hence improve the quality of the photograph. Fixed-lag smoothing is depicted in Figure 9.2 and is discussed in Section 9.3.
Fixed-interval Smoothing
The final type of smoothing is fixed-interval smoothing. In this situation, we have a fixed interval of measurements $\left(y_1, y_2, \cdots, y_M\right)$ that are available, and we want to obtain the optimal state estimates at all the times in that interval. For each state estimate we want to use all of the measurements in the time interval. That is, we want to obtain $\hat{x}_{0,M}, \hat{x}_{1,M}, \cdots, \hat{x}_{M,M}$.
This is the case when we have recorded some data that are available for post-processing.
For example, if a manufacturing process has run over the weekend and we
have recorded all of the data, and now we want to plot a time history of the
best estimate of the process state, we can use all of the recorded data to
estimate the states at each of the time points.
Fixed-interval smoothing is depicted in Figure 9.3 and is discussed in Section 9.4.

Our derivation of these optimal smoothers will be based on a form of the Kalman filter different from the one we have seen in previous chapters. Therefore, before we can discuss the optimal smoothers, we will first present an alternate Kalman filter form in Section 9.1.
9.1 An Alternate Form for the Kalman Filter
In order to put ourselves in a position to derive optimal smoothers, we first need to derive yet another form for the Kalman filter. This is the form presented in [And79].
The equations describing the system and the Kalman filter were derived in Section 5.1 as follows:

$$\begin{aligned}
x_k &= F_{k-1} x_{k-1} + w_{k-1} \\
y_k &= H_k x_k + v_k \\
P_{k+1}^- &= F_k P_k^+ F_k^T + Q_k \\
K_k &= P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} \\
P_k^+ &= \left(I - K_k H_k\right) P_k^- \left(I - K_k H_k\right)^T + K_k R_k K_k^T \\
\hat{x}_k^- &= F_{k-1}\hat{x}_{k-1}^+ \\
\hat{x}_k^+ &= \hat{x}_k^- + K_k\left(y_k - H_k \hat{x}_k^-\right)
\end{aligned} \tag{9.3}$$

Now if we define $L_k$ as

$$L_k = F_k K_k = F_k P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} \tag{9.4}$$

and substitute the expression for $\hat{x}_k^+$ into the expression for $\hat{x}_{k+1}^-$, then we obtain

$$\begin{aligned}
\hat{x}_{k+1}^- &= F_k \hat{x}_k^- + F_k K_k\left(y_k - H_k \hat{x}_k^-\right) \\
&= F_k \hat{x}_k^- + L_k\left(y_k - H_k \hat{x}_k^-\right)
\end{aligned} \tag{9.5}$$

Expanding the expression for $P_k^+$ gives

$$P_k^+ = P_k^- - K_k H_k P_k^- - P_k^- H_k^T K_k^T + K_k H_k P_k^- H_k^T K_k^T + K_k R_k K_k^T \tag{9.6}$$

Substituting for $K_k$ gives
$$\begin{aligned}
P_k^+ ={}& P_k^- - P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^- - P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^- \\
& + P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^- \\
& + P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1} R_k\left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^-
\end{aligned} \tag{9.7}$$

Performing some factoring and collection of like terms on this equation gives

$$\begin{aligned}
P_k^+ &= P_k^- + P_k^- H_k^T\Big[-\left(H_k P_k^- H_k^T + R_k\right)^{-1} - \left(H_k P_k^- H_k^T + R_k\right)^{-1} \\
&\qquad + \left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1} \\
&\qquad + \left(H_k P_k^- H_k^T + R_k\right)^{-1} R_k\left(H_k P_k^- H_k^T + R_k\right)^{-1}\Big] H_k P_k^- \\
&= P_k^- + P_k^- H_k^T\Big[-\left(H_k P_k^- H_k^T + R_k\right)^{-1} - \left(H_k P_k^- H_k^T + R_k\right)^{-1} \\
&\qquad + \left(H_k P_k^- H_k^T + R_k\right)^{-1}\left(H_k P_k^- H_k^T + R_k\right)\left(H_k P_k^- H_k^T + R_k\right)^{-1}\Big] H_k P_k^- \\
&= P_k^- - P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^-
\end{aligned} \tag{9.8}$$

Substituting this expression for $P_k^+$ into the expression for $P_{k+1}^-$ gives

$$\begin{aligned}
P_{k+1}^- &= F_k P_k^+ F_k^T + Q_k \\
&= F_k\left[P_k^- - P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1} H_k P_k^-\right] F_k^T + Q_k \\
&= F_k P_k^-\left(F_k - L_k H_k\right)^T + Q_k
\end{aligned} \tag{9.9}$$

Combining Equations 9.4, 9.5, and 9.9 gives the alternate form for the one-step a priori Kalman filter, which can be summarized as follows:

$$\begin{aligned}
L_k &= F_k P_k^- H_k^T\left(H_k P_k^- H_k^T + R_k\right)^{-1} \\
P_{k+1}^- &= F_k P_k^-\left(F_k - L_k H_k\right)^T + Q_k \\
\hat{x}_{k+1}^- &= F_k \hat{x}_k^- + L_k\left(y_k - H_k \hat{x}_k^-\right)
\end{aligned} \tag{9.10}$$

where $L_k$ is the redefined Kalman gain. This form of the filter obtains only a priori state estimates and covariances.
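As a concrete sketch of Equation 9.10, the following function (our own illustration; the function name and the numbers in the usage note are assumptions, not from the text) propagates the a priori estimate and covariance directly from one time step to the next:

```python
import numpy as np

def a_priori_kalman_step(x_prior, P_prior, y, F, H, Q, R):
    """One step of the alternate, one-step a priori Kalman filter (Eq. 9.10).

    Maps the a priori quantities (x_k^-, P_k^-) directly to (x_{k+1}^-, P_{k+1}^-)
    using the redefined gain L_k = F_k K_k, never forming a posteriori quantities.
    """
    S = H @ P_prior @ H.T + R                  # innovation covariance H P H^T + R
    L = F @ P_prior @ H.T @ np.linalg.inv(S)   # L_k (Eq. 9.4)
    P_next = F @ P_prior @ (F - L @ H).T + Q   # P_{k+1}^- (Eq. 9.9)
    x_next = F @ x_prior + L @ (y - H @ x_prior)
    return x_next, P_next, L
```

Because $L_k = F_k K_k$, one step of this form reproduces exactly $F_k \hat{x}_k^+$ and $F_k P_k^+ F_k^T + Q_k$ from the standard two-step filter of Section 5.1, which is an easy numerical sanity check.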
Note that the Kalman gain, $L_k$, for this form of the filter is not the same as the Kalman gain, $K_k$, for the form of the filter that we derived in Section 5.1. However, the two forms result in identical state estimates and estimation-error covariances.
9.2 Fixed-Point Smoothing
The objective in fixed-point smoothing is to obtain a priori state estimates of $x_j$ at times $j+1, j+2, \cdots, k, k+1, \cdots$. We will use the notation $\hat{x}_{j,k}$ to refer to the estimate of $x_j$ that is obtained by using all of the measurements up to and including time $(k-1)$. That is, $\hat{x}_{j,k}$ can be thought of as the a priori estimate of $x_j$ at time $k$:

$$\hat{x}_{j,k} = E\left[x_j \mid y_1,\cdots,y_{k-1}\right], \quad k \ge j \tag{9.11}$$

With this definition we see that

$$\hat{x}_{j,j} = E\left[x_j \mid y_1,\cdots,y_{j-1}\right] = \hat{x}_j^- \tag{9.12}$$

In other words, $\hat{x}_{j,j}$ is just the normal a priori state estimate at time $j$ that we derived in Section 5.1. We also see that

$$\hat{x}_{j,j+1} = E\left[x_j \mid y_1,\cdots,y_j\right] = \hat{x}_j^+ \tag{9.13}$$

In other words, $\hat{x}_{j,j+1}$ is just the normal a posteriori state estimate at time $j$ that we derived in Section 5.1.

The question addressed by fixed-point smoothing is as follows: When we get the next measurement at time $(j+1)$, how can we incorporate that information to obtain an improved estimate (along with its covariance) for the state at time $j$? Furthermore, when we get additional measurements at times $(j+2), (j+3)$, etc., how can we incorporate that information to obtain an improved estimate (along with its covariance) for the state at time $j$?

In order to derive the fixed-point smoother, we will define a new state variable $x'$. This new state variable will be initialized as $x_j' = x_j$, and will have the dynamics $x_{k+1}' = x_k'$ $(k = j, j+1, \cdots)$. With this definition, we see that $x_k' = x_j$ for all $k \ge j$. So if we can use the standard Kalman filter to find the a priori estimate of $x_k'$, then we will, by definition, have a smoothed estimate of $x_j$, given measurements up to and including time $(k-1)$. In other words, the a priori estimate of $x_k'$ will be equal to $\hat{x}_{j,k}$. This idea is depicted in Figure 9.4.
Our original system is given as

$$\begin{aligned}
x_k &= F_{k-1} x_{k-1} + w_{k-1} \\
y_k &= H_k x_k + v_k
\end{aligned} \tag{9.14}$$

Augmenting the dynamics of our newly defined state $x'$ to the original system results in the following:

$$\begin{aligned}
\begin{bmatrix} x_k \\ x_k' \end{bmatrix} &= \begin{bmatrix} F_{k-1} & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} x_{k-1} \\ x_{k-1}' \end{bmatrix} + \begin{bmatrix} I \\ 0 \end{bmatrix} w_{k-1} \\
y_k &= \begin{bmatrix} H_k & 0 \end{bmatrix}\begin{bmatrix} x_k \\ x_k' \end{bmatrix} + v_k
\end{aligned} \tag{9.15}$$

If we use a standard Kalman filter to obtain an a priori estimate of the augmented state, the covariance of the estimation error can be written as

$$E\left[\begin{pmatrix} x_k - \hat{x}_k^- \\ x_j - \hat{x}_{j,k} \end{pmatrix}\begin{pmatrix} \left(x_k - \hat{x}_k^-\right)^T & \left(x_j - \hat{x}_{j,k}\right)^T \end{pmatrix}\right] = \begin{bmatrix} P_k & \Sigma_k^T \\ \Sigma_k & \Pi_k \end{bmatrix} \tag{9.16}$$

The covariance $P_k$ above is the normal a priori covariance of the estimate of $x_k$. We have dropped the minus superscript for ease of notation, and we will also feel free to drop the minus superscript on all other quantities in this section with the understanding that all estimates and covariances are a priori. The $\Sigma_k$ and $\Pi_k$ matrices are defined by the above equation. Note that at time $k = j$, $\Sigma_k$ and $\Pi_k$ are given as

$$\begin{aligned}
\Sigma_j &= E\left[\left(x_j - \hat{x}_{j,j}\right)\left(x_j - \hat{x}_j^-\right)^T\right] = E\left[\left(x_j - \hat{x}_j^-\right)\left(x_j - \hat{x}_j^-\right)^T\right] = P_j \\
\Pi_j &= E\left[\left(x_j - \hat{x}_{j,j}\right)\left(x_j - \hat{x}_{j,j}\right)^T\right] = E\left[\left(x_j - \hat{x}_j^-\right)\left(x_j - \hat{x}_j^-\right)^T\right] = P_j
\end{aligned} \tag{9.17}$$

The Kalman filter summarized in Equation 9.10 can be written for the augmented system as follows:

$$\begin{bmatrix} \hat{x}_{k+1}^- \\ \hat{x}_{j,k+1} \end{bmatrix} = \begin{bmatrix} F_k & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} \hat{x}_k^- \\ \hat{x}_{j,k} \end{bmatrix} + \begin{bmatrix} L_k \\ \lambda_k \end{bmatrix}\left(y_k - \begin{bmatrix} H_k & 0 \end{bmatrix}\begin{bmatrix} \hat{x}_k^- \\ \hat{x}_{j,k} \end{bmatrix}\right) \tag{9.18}$$

where $L_k$ is the normal Kalman filter gain given in Equation 9.10, and $\lambda_k$ is the additional part of the Kalman gain, which will be determined later in this section. Writing Equation 9.18 as two separate equations gives

$$\begin{aligned}
\hat{x}_{k+1}^- &= F_k \hat{x}_k^- + L_k\left(y_k - H_k \hat{x}_k^-\right) \\
\hat{x}_{j,k+1} &= \hat{x}_{j,k} + \lambda_k\left(y_k - H_k \hat{x}_k^-\right)
\end{aligned} \tag{9.19}$$

The Kalman gain can be written from Equation 9.10 as follows:

$$\begin{aligned}
\begin{bmatrix} L_k \\ \lambda_k \end{bmatrix} &= \begin{bmatrix} F_k & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} P_k & \Sigma_k^T \\ \Sigma_k & \Pi_k \end{bmatrix}\begin{bmatrix} H_k^T \\ 0 \end{bmatrix}\left(\begin{bmatrix} H_k & 0 \end{bmatrix}\begin{bmatrix} P_k & \Sigma_k^T \\ \Sigma_k & \Pi_k \end{bmatrix}\begin{bmatrix} H_k^T \\ 0 \end{bmatrix} + R_k\right)^{-1} \\
&= \begin{bmatrix} F_k & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} P_k & \Sigma_k^T \\ \Sigma_k & \Pi_k \end{bmatrix}\begin{bmatrix} H_k^T \\ 0 \end{bmatrix}\left(H_k P_k H_k^T + R_k\right)^{-1}
\end{aligned} \tag{9.20}$$

Writing this equation as two separate equations gives

$$\begin{aligned}
L_k &= F_k P_k H_k^T\left(H_k P_k H_k^T + R_k\right)^{-1} \\
\lambda_k &= \Sigma_k H_k^T\left(H_k P_k H_k^T + R_k\right)^{-1}
\end{aligned} \tag{9.21}$$
The Kalman filter estimation-error covariance-update equation can be written from Equation 9.10 as follows:

$$\begin{aligned}
\begin{bmatrix} P_{k+1} & \Sigma_{k+1}^T \\ \Sigma_{k+1} & \Pi_{k+1} \end{bmatrix} &= \begin{bmatrix} F_k & 0 \\ 0 & I \end{bmatrix}\begin{bmatrix} P_k & \Sigma_k^T \\ \Sigma_k & \Pi_k \end{bmatrix}\left(\begin{bmatrix} F_k^T & 0 \\ 0 & I \end{bmatrix} - \begin{bmatrix} H_k^T \\ 0 \end{bmatrix}\begin{bmatrix} L_k^T & \lambda_k^T \end{bmatrix}\right) + \begin{bmatrix} Q_k & 0 \\ 0 & 0 \end{bmatrix} \\
&= \begin{bmatrix} F_k P_k F_k^T - F_k P_k H_k^T L_k^T & -F_k P_k H_k^T \lambda_k^T + F_k \Sigma_k^T \\ \Sigma_k F_k^T - \Sigma_k H_k^T L_k^T & -\Sigma_k H_k^T \lambda_k^T + \Pi_k \end{bmatrix} + \begin{bmatrix} Q_k & 0 \\ 0 & 0 \end{bmatrix}
\end{aligned} \tag{9.22}$$

Writing this equation as three separate equations gives

$$\begin{aligned}
P_{k+1} &= F_k P_k\left(F_k - L_k H_k\right)^T + Q_k \\
\Pi_{k+1} &= \Pi_k - \Sigma_k H_k^T \lambda_k^T \\
\Sigma_{k+1}^T &= -F_k P_k H_k^T \lambda_k^T + F_k \Sigma_k^T \\
\Sigma_{k+1} &= \Sigma_k\left(F_k - L_k H_k\right)^T
\end{aligned} \tag{9.23}$$

It is not immediately apparent from the above expressions that $\Sigma_{k+1}^T$ is really the transpose of $\Sigma_{k+1}$, but the equality can be established by substituting for $P_k$ and $L_k$. Equations 9.19–9.23 completely define the fixed-point smoother. The fixed-point smoother, which is used for obtaining

$$\hat{x}_{j,k} = E\left[x_j \mid y_1,\cdots,y_{k-1}\right]$$

for $k \ge j$, can be summarized as follows.

The fixed-point smoother

Run the standard Kalman filter up until time $j$, at which point we have $\hat{x}_j^-$ and $P_j^-$. In the algorithm below, we omit the minus superscript on $P_j^-$ for ease of notation. Initialize the filter as follows:

$$\begin{aligned}
\Sigma_j &= P_j \\
\Pi_j &= P_j \\
\hat{x}_{j,j} &= \hat{x}_j^-
\end{aligned} \tag{9.24}$$
For $k = j, j+1, \cdots$, perform the following:

$$\begin{aligned}
L_k &= F_k P_k H_k^T\left(H_k P_k H_k^T + R_k\right)^{-1} \\
\lambda_k &= \Sigma_k H_k^T\left(H_k P_k H_k^T + R_k\right)^{-1} \\
\hat{x}_{j,k+1} &= \hat{x}_{j,k} + \lambda_k\left(y_k - H_k \hat{x}_k^-\right) \\
\hat{x}_{k+1}^- &= F_k \hat{x}_k^- + L_k\left(y_k - H_k \hat{x}_k^-\right) \\
P_{k+1} &= F_k P_k\left(F_k - L_k H_k\right)^T + Q_k \\
\Pi_{k+1} &= \Pi_k - \Sigma_k H_k^T \lambda_k^T \\
\Sigma_{k+1} &= \Sigma_k\left(F_k - L_k H_k\right)^T
\end{aligned} \tag{9.25}$$

As we recall from Equation 9.16, $P_k$ is the a priori covariance of the standard Kalman filter estimate, $\Pi_k$ is the covariance of the smoothed estimate of $x_j$ at time $k$, and $\Sigma_k$ is the cross covariance between the two.

9.2.1 Estimation improvement due to smoothing

Now we will look at the improvement in the estimate of $x_j$ due to smoothing. The estimate $\hat{x}_j^-$ is the standard a priori Kalman filter estimate of $x_j$, and the estimate $\hat{x}_{j,k+1}$ is the smoothed estimate after measurements up to and including time $k$ have been processed. In other words, $\hat{x}_{j,k+1}$ uses $(k+1-j)$ more measurements to obtain the estimate of $x_j$ than $\hat{x}_j^-$ uses. How much more accurate can we expect our estimate to be with the use of these $(k+1-j)$ additional measurements?

The estimation accuracy can be measured by the covariance. The improvement in estimation accuracy due to smoothing is equal to the standard estimation covariance $P_j$ minus the smoothed estimation covariance $\Pi_{k+1}$. We can use Equations 9.24 and 9.25 to write this improvement as

$$\begin{aligned}
P_j - \Pi_{k+1} &= \Pi_j - \left(\Pi_j - \sum_{i=j}^{k}\Sigma_i H_i^T \lambda_i^T\right) \\
&= \sum_{i=j}^{k}\Sigma_i H_i^T \lambda_i^T
\end{aligned} \tag{9.26}$$
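The fixed-point smoother steps above can be sketched in code as follows. This is a minimal illustration of Equations 9.24–9.25 for a time-invariant system; the function name and the numbers used in the check are our own assumptions:

```python
import numpy as np

def fixed_point_smoother(x_prior_j, P_j, measurements, F, H, Q, R):
    """Fixed-point smoother sketch (Eqs. 9.24-9.25, time-invariant system).

    Starting from the standard a priori estimate x_j^- and covariance P_j^-,
    each new measurement y_k (k = j, j+1, ...) refines the estimate of x_j.
    Returns the smoothed estimate and its covariance Pi_{k+1}.
    """
    x = x_prior_j.copy()      # running a priori filter estimate x_k^-
    xs = x_prior_j.copy()     # smoothed estimate of x_j
    P, Sig, Pi = P_j.copy(), P_j.copy(), P_j.copy()   # initialization (Eq. 9.24)
    for y in measurements:
        Sinv = np.linalg.inv(H @ P @ H.T + R)
        L = F @ P @ H.T @ Sinv          # standard gain L_k
        lam = Sig @ H.T @ Sinv          # smoother gain lambda_k
        innov = y - H @ x
        xs = xs + lam @ innov           # smoothed estimate update
        x = F @ x + L @ innov           # standard filter update
        P = F @ P @ (F - L @ H).T + Q
        Pi = Pi - Sig @ H.T @ lam.T
        Sig = Sig @ (F - L @ H).T
    return xs, Pi
```

Per Equation 9.26, the returned $\Pi_{k+1}$ should always be smaller than the starting covariance $P_j$, which gives a quick consistency check on the implementation.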
Now assume for purposes of additional analysis that the system is time-invariant and the covariance of the standard filter has reached steady state at time $j$. Then we have

$$P = \lim_{k\to\infty} P_k^- \tag{9.27}$$

From Equation 9.25 we see that

$$\Sigma_{k+1} = \Sigma_k\left(F - LH\right)^T \tag{9.28}$$

where $\Sigma_k$ is initialized as $\Sigma_j = P$. Combining this expression for $\Sigma_{k+1}$ with its initial value, we see that

$$\Sigma_{k+1} = P\left[\left(F - LH\right)^T\right]^{k+1-j} = P\left(\tilde{F}^T\right)^{k+1-j} \tag{9.29}$$

where $\tilde{F}$ is defined by the above equation. Now substitute this expression, and the expression for $\lambda_k$ from Equation 9.25, into Equation 9.26 to obtain

$$\begin{aligned}
P_j - \Pi_{k+1} &= \sum_{i=j}^{k}\Sigma_i H^T \lambda_i^T \\
&= P\left[\sum_{i=j}^{k}\left(\tilde{F}^T\right)^{i-j} H^T\left(HPH^T + R\right)^{-1} H \tilde{F}^{i-j}\right] P
\end{aligned} \tag{9.30}$$

The quantity on the right side of this equation is positive definite, which shows that the smoothed estimate of $x_j$ is always better than the standard Kalman filter estimate. In other words, $\left(P_j - \Pi_{k+1}\right) > 0$, which implies that $\Pi_{k+1} < P_j$. Furthermore, the quantity on the right side is a sum of positive definite matrices, which shows that the larger the value of $k$ (i.e., the more measurements that we use to obtain our smoothed estimate), the greater the improvement in the estimation accuracy. Also note from the above that the quantity $\left(HPH^T + R\right)$ inside the summation is inverted. This shows that as $R$ increases, the quantity on the right side decreases.
In the limit we see from Equation 9.30 that

$$\lim_{R\to\infty}\left(P_j - \Pi_{k+1}\right) = 0 \tag{9.31}$$

This illustrates the general principle that the larger the measurement noise, the smaller the improvement in estimation accuracy that we can obtain by smoothing. This is intuitive because large measurement noise means that additional measurements will not provide much improvement to our estimation accuracy.

EXAMPLE 9.1

In this example, we will see the improvement due to smoothing that can be obtained for a vehicle navigation problem. This is a second-order Newtonian system where $x(1)$ is position and $x(2)$ is velocity. The input is comprised of a commanded acceleration $u$ plus acceleration noise $\tilde{u}$. The measurement $y$ is a noisy measurement of position. After discretizing with a step size of $T$, the system equations can be written as

$$\begin{aligned}
x_{k+1} &= \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} T^2/2 \\ T \end{bmatrix}\left(u_k + \tilde{u}_k\right) \\
&= \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} x_k + \begin{bmatrix} T^2/2 \\ T \end{bmatrix} u_k + w_k \\
y_k &= \begin{bmatrix} 1 & 0 \end{bmatrix} x_k + v_k
\end{aligned} \tag{9.32}$$

Note that the process noise $w_k$ is given as

$$w_k = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}\tilde{u}_k \tag{9.33}$$

Now suppose the acceleration noise $\tilde{u}_k$ has a standard deviation of $a$. We obtain the process noise covariance as follows:

$$Q_k = E\left(w_k w_k^T\right) = \begin{bmatrix} T^4/4 & T^3/2 \\ T^3/2 & T^2 \end{bmatrix} E\left(\tilde{u}_k^2\right) = a^2\begin{bmatrix} T^4/4 & T^3/2 \\ T^3/2 & T^2 \end{bmatrix} \tag{9.34}$$
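The discretized model of Equations 9.32–9.34 can be assembled mechanically; the helper function below (the name is ours) is a sketch of that construction:

```python
import numpy as np

def newtonian_model(T, a):
    """Discretized Newtonian model of Example 9.1 (position and velocity).

    T is the discretization step size; a is the standard deviation of the
    acceleration noise.
    """
    F = np.array([[1.0, T],
                  [0.0, 1.0]])
    G = np.array([[T**2 / 2],
                  [T]])                 # noise distribution vector (Eq. 9.33)
    H = np.array([[1.0, 0.0]])          # position-only measurement
    Q = a**2 * (G @ G.T)                # process noise covariance (Eq. 9.34)
    return F, G, H, Q
```

The outer product $G G^T$ reproduces exactly the $\left[T^4/4,\ T^3/2;\ T^3/2,\ T^2\right]$ structure of Equation 9.34.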
The percent improvement due to smoothing can be defined as

$$\text{Percent Improvement} = \frac{100\,\mathrm{Tr}\left(P_j - \Pi_{k+1}\right)}{\mathrm{Tr}\left(P_j\right)} \tag{9.35}$$

where $j$ is the point which is being smoothed, and $k$ is the number of measurements that are processed by the smoother. We can run the fixed-point smoother given by Equation 9.25 in order to smooth the position and velocity estimate at any desired time. Suppose we use the smoother equations to smooth the estimate at the second time step $(k = 1)$. If we use measurements at times up to and including 10 seconds to estimate $x_1$, then our estimate is denoted as $\hat{x}_{1,101}$. In this case, Table 9.1 shows the percent improvement due to smoothing after 10 seconds when the time step $T = 0.1$ and the acceleration noise standard deviation $a = 0.2$. As expected from the results of the previous subsection, we see that the improvement due to smoothing is more dramatic for small measurement noise.
Figure 9.5 shows the trace of $\Pi_k$, which is the covariance of the estimation error of the state at the first time step. As time progresses, our estimate of the state at the first time step improves. After 10 seconds of additional measurements, the estimate of the state at the first time step has improved by 96.6% relative to the standard Kalman filter estimate.
Figure 9.6 shows the smoothed estimation error $\left(x_1 - \hat{x}_{1,101}\right)$ of the position and velocity at the first time step. We see that processing more measurements decreases the estimation-error covariance. In general, the smoothed estimation errors shown in Figure 9.6 will converge to nonzero values. The estimation errors are zero-mean, but not for any particular simulation; the estimation errors are zero-mean when averaged over many simulations. The system discussed here was simulated 1000 times, and the variances of the estimation errors were computed numerically to be equal to 0.054 and 0.012 for the two states. The diagonal elements of $\Pi_{101}$ were equal to 0.057 and 0.012.

9.2.2 Smoothing constant states

Now we will think about the improvement (due to smoothing) in the estimation accuracy of constant states. If the system states are constant, then $F_k = I$ and $Q_k = 0$. Equation 9.25 shows that
$$\begin{aligned}
P_{k+1} &= F_k P_k\left(F_k - L_k H_k\right)^T + Q_k = P_k\left(I - L_k H_k\right)^T \\
\Sigma_{k+1} &= \Sigma_k\left(F_k - L_k H_k\right)^T = \Sigma_k\left(I - L_k H_k\right)^T
\end{aligned} \tag{9.36}$$

Comparing these expressions for $P_{k+1}$ and $\Sigma_{k+1}$, and realizing from Equation 9.24 that the initial value $\Sigma_j = P_j$, we see that $\Sigma_k = P_k$ for $k \ge j$. This means that the expression for $L_k$ from Equation 9.25 can be written as

$$L_k = F_k P_k H_k^T\left(H_k P_k H_k^T + R_k\right)^{-1} = \Sigma_k H_k^T\left(H_k P_k H_k^T + R_k\right)^{-1} = \lambda_k \tag{9.37}$$

Substituting these results into the expression for $\Pi_{k+1}$ from Equation 9.25, we see that

$$\Pi_{k+1} = \Pi_k - \Sigma_k H_k^T \lambda_k^T = \Pi_k - P_k H_k^T L_k^T \tag{9.38}$$

Realizing that the initial value $\Pi_j = P_j$, and comparing this expression for $\Pi_{k+1}$ with Equation 9.36 for $P_{k+1}$, we see that $\Pi_k = P_k$ for $k \ge j$. Recall that $P_k$ is the covariance of the estimate of $x_k$ from the standard Kalman filter, and $\Pi_k$ is the covariance of the estimate of $x_j$ given measurements up to and including time $(k-1)$.

This result shows that constant states are not smoothable. Additional measurements are still helpful for refining an estimate of a constant state. However, there is no point in using smoothing for the estimation of a constant state. If we want to estimate a constant state at time $j$ using measurements up to time $k > j$, then we may as well simply run the standard Kalman filter up to time $k$. Implementing the smoothing equations will not gain any improvement in estimation accuracy.
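This non-smoothability is easy to check numerically: running the Equation 9.25 recursion with $F = I$ and $Q = 0$ keeps $\Pi_k$ identical to $P_k$ at every step. The scalar sketch below uses our own choice of numbers:

```python
import numpy as np

def constant_state_covariances(P0, R, steps):
    """Run the fixed-point smoother recursion (Eq. 9.25) for a constant
    scalar state (F = I, Q = 0) and record (P_k, Pi_k) at each step.
    """
    H = np.array([[1.0]])
    F = np.eye(1)
    P = np.array([[P0]])
    Sig = P.copy()
    Pi = P.copy()                        # initialization per Eq. 9.24
    history = []
    for _ in range(steps):
        Sinv = np.linalg.inv(H @ P @ H.T + np.array([[R]]))
        L = F @ P @ H.T @ Sinv
        lam = Sig @ H.T @ Sinv
        Pi = Pi - Sig @ H.T @ lam.T      # smoothed covariance update
        Sig = Sig @ (F - L @ H).T
        P = F @ P @ (F - L @ H).T        # standard covariance update (Q = 0)
        history.append((P[0, 0], Pi[0, 0]))
    return history
```

Both covariances shrink as more measurements arrive (the filter keeps refining the constant), but they shrink identically, so smoothing gains nothing.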
9.3 Fixed-Lag Smoothing
In fixed-lag smoothing we want to obtain an estimate of the state at time $(k-N)$ given measurements up to and including time $k$, where the time index $k$ continually changes as we obtain new measurements, but the lag $N$ is a constant. In other words, at each time point we have $N$ future measurements available for our state estimate. We therefore want to obtain $\hat{x}_{k-N,k}$ for $k = N, N+1, \cdots$, where $N$ is a fixed positive integer. This could be the case, for example, if a satellite is continually taking photographs that are to be displayed or transmitted $N$ time steps after each photograph is taken. In this case, since the photograph is processed $N$ time steps after it is taken, we have $N$ additional measurements after each photograph that are available to update the estimate of the satellite state and hence improve the quality of the photograph.

In this section we use the notation

$$\begin{aligned}
\hat{x}_{k-N,k} &= E\left[x_{k-N} \mid y_1,\cdots,y_k\right] \\
\Pi_{k-N} &= E\left[\left(x_{k-N} - \hat{x}_{k-N,k}\right)\left(x_{k-N} - \hat{x}_{k-N,k}\right)^T\right]
\end{aligned} \tag{9.39}$$

Note that the notation has changed slightly from the previous section. In the previous section we used the notation $\hat{x}_{k,m}$ to refer to the estimate of $x_k$ given measurements up to and including time $(m-1)$. In this section (and in the remainder of this chapter) we use $\hat{x}_{k,m}$ to refer to the estimate of $x_k$ given measurements up to and including time $m$.

Let us define $x_{k,m}$ as the state $x_{k-m}$ propagated with an identity transition matrix and zero process noise to time $k$. With this definition we see that

$$\begin{aligned}
x_{k+1,1} &= x_k \\
x_{k+1,2} &= x_{k-1} = x_{k,1} \\
x_{k+1,3} &= x_{k-2} = x_{k,2} \\
&\;\;\text{etc.}
\end{aligned} \tag{9.40}$$

We can therefore define the augmented system
$$\begin{aligned}
\begin{bmatrix} x_{k+1} \\ x_{k+1,1} \\ \vdots \\ x_{k+1,N+1} \end{bmatrix} &= \begin{bmatrix} F_k & 0 & \cdots & 0 \\ I & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & I & 0 \end{bmatrix}\begin{bmatrix} x_k \\ x_{k,1} \\ \vdots \\ x_{k,N+1} \end{bmatrix} + \begin{bmatrix} I \\ 0 \\ \vdots \\ 0 \end{bmatrix} w_k \\
y_k &= \begin{bmatrix} H_k & 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} x_k \\ x_{k,1} \\ \vdots \\ x_{k,N+1} \end{bmatrix} + v_k
\end{aligned} \tag{9.41}$$

The Kalman filter estimates of the components of this augmented state vector are given as

$$\begin{aligned}
E\left(x_{k+1} \mid y_1\cdots y_k\right) &= \hat{x}_{k+1}^- = \hat{x}_{k+1,k} \\
E\left(x_{k+1,1} \mid y_1\cdots y_k\right) &= E\left(x_k \mid y_1\cdots y_k\right) = \hat{x}_k^+ = \hat{x}_{k,k} \\
E\left(x_{k+1,2} \mid y_1\cdots y_k\right) &= E\left(x_{k-1} \mid y_1\cdots y_k\right) = \hat{x}_{k-1,k} \\
&\;\;\vdots \\
E\left(x_{k+1,N+1} \mid y_1\cdots y_k\right) &= \hat{x}_{k-N,k}
\end{aligned} \tag{9.42}$$

We see that if we can use a Kalman filter to estimate the states of the augmented system (using measurements up to and including time $k$), then the estimate of the last element of the augmented state vector, $x_{k+1,N+1}$, will be equal to the estimate of $x_{k-N}$ given measurements up to and including time $k$. This is the estimate that we are looking for in fixed-lag smoothing. This idea is illustrated in Figure 9.7.

From Equation 9.10 we can write the Kalman filter for the augmented system of Equation 9.41 as follows:
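The augmented matrices of Equation 9.41 can be built mechanically. The sketch below (the helper name is ours) stacks the current state with $N+1$ delayed copies:

```python
import numpy as np

def augment_for_fixed_lag(F, H, N):
    """Build the augmented transition and measurement matrices of Eq. 9.41.

    The augmented state is [x_k; x_{k,1}; ...; x_{k,N+1}], i.e. the current
    state followed by N+1 delayed copies of it.
    """
    n = F.shape[0]
    blocks = N + 2                                     # x_k plus N+1 delayed copies
    Fa = np.zeros((blocks * n, blocks * n))
    Fa[:n, :n] = F                                     # x_{k+1} = F_k x_k + w_k
    for i in range(1, blocks):
        Fa[i * n:(i + 1) * n, (i - 1) * n:i * n] = np.eye(n)   # x_{k+1,i} = x_{k,i-1}
    Ha = np.zeros((H.shape[0], blocks * n))
    Ha[:, :n] = H                                      # y_k measures x_k only
    return Fa, Ha
```

Running a standard Kalman filter on this augmented model is one (computationally heavy) way to obtain the fixed-lag estimates; the recursions derived next avoid forming the full augmented covariance explicitly.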
$$\begin{bmatrix} \hat{x}_{k+1}^- \\ \hat{x}_{k,k} \\ \vdots \\ \hat{x}_{k-N,k} \end{bmatrix} = \begin{bmatrix} F_k & 0 & \cdots & 0 \\ I & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & I & 0 \end{bmatrix}\begin{bmatrix} \hat{x}_k^- \\ \hat{x}_{k-1,k-1} \\ \vdots \\ \hat{x}_{k-(N+1),k-1} \end{bmatrix} + \begin{bmatrix} L_{k,0} \\ L_{k,1} \\ \vdots \\ L_{k,N+1} \end{bmatrix}\left(y_k - \begin{bmatrix} H_k & 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} \hat{x}_k^- \\ \hat{x}_{k-1,k-1} \\ \vdots \\ \hat{x}_{k-(N+1),k-1} \end{bmatrix}\right) \tag{9.43}$$

where the $L_{k,i}$ matrices are components of the smoother gain that will be determined in this section. Note that $L_{k,0}$ is the standard Kalman gain. The smoother gain is defined as

$$L_k = \begin{bmatrix} L_{k,0} \\ L_{k,1} \\ \vdots \\ L_{k,N+1} \end{bmatrix} \tag{9.44}$$

From Equation 9.10 we see that the $L_k$ gain matrix is given by

$$\begin{aligned}
L_k ={}& \begin{bmatrix} F_k & 0 & \cdots & 0 \\ I & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & I & 0 \end{bmatrix}\begin{bmatrix} P_k^{0,0} & \cdots & \left(P_k^{0,N+1}\right)^T \\ \vdots & \ddots & \vdots \\ P_k^{0,N+1} & \cdots & P_k^{N+1,N+1} \end{bmatrix}\begin{bmatrix} H_k^T \\ 0 \\ \vdots \\ 0 \end{bmatrix} \times \\
& \left(\begin{bmatrix} H_k & 0 & \cdots & 0 \end{bmatrix}\begin{bmatrix} P_k^{0,0} & \cdots & \left(P_k^{0,N+1}\right)^T \\ \vdots & \ddots & \vdots \\ P_k^{0,N+1} & \cdots & P_k^{N+1,N+1} \end{bmatrix}\begin{bmatrix} H_k^T \\ 0 \\ \vdots \\ 0 \end{bmatrix} + R_k\right)^{-1}
\end{aligned} \tag{9.45}$$

where the covariance matrices $P_k^{i,j}$ are defined as

$$P_k^{i,j} = E\left[\left(x_{k-j} - \hat{x}_{k-j,k-1}\right)\left(x_{k-i} - \hat{x}_{k-i,k-1}\right)^T\right] \tag{9.46}$$

The expression above can be simplified to

$$L_k = \begin{bmatrix} F_k P_k^{0,0} H_k^T \\ P_k^{0,0} H_k^T \\ \vdots \\ P_k^{0,N} H_k^T \end{bmatrix}\left(H_k P_k^{0,0} H_k^T + R_k\right)^{-1} \tag{9.47}$$

From Equation 9.10 we see that the covariance-update equation for the Kalman filter for our augmented system can be written as
$$\begin{aligned}
\begin{bmatrix} P_{k+1}^{0,0} & \cdots & \left(P_{k+1}^{0,N+1}\right)^T \\ \vdots & \ddots & \vdots \\ P_{k+1}^{0,N+1} & \cdots & P_{k+1}^{N+1,N+1} \end{bmatrix} ={}& \begin{bmatrix} F_k & 0 & \cdots & 0 \\ I & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ 0 & \cdots & I & 0 \end{bmatrix}\begin{bmatrix} P_k^{0,0} & \cdots & \left(P_k^{0,N+1}\right)^T \\ \vdots & \ddots & \vdots \\ P_k^{0,N+1} & \cdots & P_k^{N+1,N+1} \end{bmatrix} \times \\
& \left(\begin{bmatrix} F_k^T & I & \cdots & 0 \\ 0 & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & I \\ 0 & \cdots & \cdots & 0 \end{bmatrix} - \begin{bmatrix} H_k^T \\ 0 \\ \vdots \\ 0 \end{bmatrix} L_k^T\right) + \begin{bmatrix} Q_k & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}
\end{aligned} \tag{9.48}$$

Substituting for $L_k$ from Equation 9.47 and multiplying out gives

$$\begin{aligned}
\begin{bmatrix} P_{k+1}^{0,0} & \cdots & \left(P_{k+1}^{0,N+1}\right)^T \\ \vdots & \ddots & \vdots \\ P_{k+1}^{0,N+1} & \cdots & P_{k+1}^{N+1,N+1} \end{bmatrix} ={}& \begin{bmatrix} F_k P_k^{0,0} & F_k\left(P_k^{0,1}\right)^T & \cdots & F_k\left(P_k^{0,N+1}\right)^T \\ P_k^{0,0} & \left(P_k^{0,1}\right)^T & \cdots & \left(P_k^{0,N+1}\right)^T \\ \vdots & \ddots & & \vdots \\ P_k^{0,N} & P_k^{1,N} & \cdots & \left(P_k^{N,N+1}\right)^T \end{bmatrix} \times \\
& \left(\begin{bmatrix} F_k^T & I & \cdots & 0 \\ 0 & 0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & I \\ 0 & \cdots & \cdots & 0 \end{bmatrix} - \begin{bmatrix} H_k^T \\ 0 \\ \vdots \\ 0 \end{bmatrix}\left(H_k P_k^{0,0} H_k^T + R_k\right)^{-1}\begin{bmatrix} H_k P_k^{0,0} F_k^T & H_k P_k^{0,0} & \cdots & H_k P_k^{0,N} \end{bmatrix}\right) \\
& + \begin{bmatrix} Q_k & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}
\end{aligned} \tag{9.49}$$

This gives us the update equations for the $P_k^{i,j}$ matrices. The equations for the first column of the $P$ matrix are as follows:

$$\begin{aligned}
P_{k+1}^{0,0} &= F_k P_k^{0,0}\left[F_k^T - H_k^T\left(H_k P_k^{0,0} H_k^T + R_k\right)^{-1} H_k P_k^{0,0} F_k^T\right] + Q_k \\
&= F_k P_k^{0,0}\left(F_k - L_{k,0} H_k\right)^T + Q_k \\
P_{k+1}^{0,1} &= P_k^{0,0}\left(F_k - L_{k,0} H_k\right)^T \\
&\;\;\vdots \\
P_{k+1}^{0,N+1} &= P_k^{0,N}\left(F_k - L_{k,0} H_k\right)^T
\end{aligned} \tag{9.50}$$

The equations for the diagonal elements of the $P$ matrix are as follows:
$$\begin{aligned}
P_{k+1}^{1,1} &= P_k^{0,0}\left[I - H_k^T\left(H_k P_k^{0,0} H_k^T + R_k\right)^{-1} H_k P_k^{0,0}\right] \\
&= P_k^{0,0} - P_k^{0,0} H_k^T L_{k,1}^T \\
P_{k+1}^{2,2} &= P_k^{1,1} - P_k^{0,1} H_k^T L_{k,2}^T \\
&\;\;\vdots \\
P_{k+1}^{i,i} &= P_k^{i-1,i-1} - P_k^{0,i-1} H_k^T L_{k,i}^T
\end{aligned} \tag{9.51}$$

These equations give us the formulas that we can use for fixed-lag smoothing. This gives us the estimate $E\left[x_{k-N} \mid y_1,\cdots,y_k\right]$ for a fixed $N$ as $k$ continually increments. The fixed-lag smoother is summarized as follows.

The fixed-lag smoother

1. Run the standard Kalman filter of Equation 9.10 to obtain $\hat{x}_{k+1}^-$, $L_k$, and $P_k^-$.
2. Initialize the fixed-lag smoother as follows:

$$\begin{aligned}
\hat{x}_{k+1,k} &= \hat{x}_{k+1}^- \\
L_{k,0} &= L_k \\
P_k^{0,0} &= P_k^-
\end{aligned} \tag{9.52}$$

3. For $i = 1,\cdots,N+1$, perform the following:

$$\begin{aligned}
L_{k,i} &= P_k^{0,i-1} H_k^T\left(H_k P_k^{0,0} H_k^T + R_k\right)^{-1} \\
P_{k+1}^{i,i} &= P_k^{i-1,i-1} - P_k^{0,i-1} H_k^T L_{k,i}^T \\
P_{k+1}^{0,i} &= P_k^{0,i-1}\left(F_k - L_{k,0} H_k\right)^T \\
\hat{x}_{k+1-i,k} &= \hat{x}_{k+1-i,k-1} + L_{k,i}\left(y_k - H_k \hat{x}_k^-\right)
\end{aligned} \tag{9.53}$$

Note that the first time through this loop is the measurement update of the standard Kalman filter. At the end of this loop we have the smoothed estimates of each state with delays between $0$ and $N$, given measurements up to and including time $k$. These estimates are denoted $\hat{x}_{k,k},\cdots,\hat{x}_{k-N,k}$. We also have the estimation-error covariances, denoted $P_{k+1}^{1,1},\cdots,P_{k+1}^{N+1,N+1}$.

The percent improvement due to smoothing can be computed as

$$\text{Percent Improvement} = \frac{100\,\mathrm{Tr}\left(P_k^{0,0} - P_k^{N+1,N+1}\right)}{\mathrm{Tr}\left(P_k^{0,0}\right)} \tag{9.54}$$

EXAMPLE 9.2

Consider the same two-state system as described in Example 9.1.
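The fixed-lag smoother of Equations 9.52–9.53 can be sketched as follows. This is a simplified implementation for a time-invariant system; the function name and, in particular, the initialization of the delayed estimates to the initial state are our own assumptions:

```python
import numpy as np

def fixed_lag_smoother(x0, P0, measurements, F, H, Q, R, N):
    """Sketch of the fixed-lag smoother (Eqs. 9.52-9.53), time-invariant system.

    Returns the smoothed estimates of the state N steps in the past, one per
    measurement from time k = N onward.
    """
    x = x0.copy()                                  # running a priori estimate
    P = P0.copy()                                  # P_k^{0,0} = P_k^-
    P0i = [P0.copy() for _ in range(N + 1)]        # P_k^{0,i}, i = 0..N
    xhat = [x0.copy() for _ in range(N + 2)]       # delayed estimates, i = 0..N+1
    out = []
    for k, y in enumerate(measurements):
        Sinv = np.linalg.inv(H @ P @ H.T + R)
        innov = y - H @ x
        L0 = F @ P @ H.T @ Sinv                    # standard gain L_{k,0}
        new_xhat = [F @ x + L0 @ innov]            # a priori estimate for k+1
        new_P0i = [F @ P @ (F - L0 @ H).T + Q]     # P_{k+1}^{0,0}
        for i in range(1, N + 2):
            Li = P0i[i - 1] @ H.T @ Sinv           # L_{k,i} (Eq. 9.53)
            new_xhat.append(xhat[i - 1] + Li @ innov)
            if i <= N:
                new_P0i.append(P0i[i - 1] @ (F - L0 @ H).T)   # P_{k+1}^{0,i}
        xhat, P0i = new_xhat, new_P0i
        x, P = xhat[0], P0i[0]
        if k >= N:
            out.append(xhat[N + 1])                # smoothed estimate, lag N
    return out
```

A convenient check is the note above about the first pass through the loop: with $N = 0$, the smoother output is exactly the a posteriori estimate of the standard Kalman filter.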
Suppose we are trying to estimate the state of the system with a fixed time lag. The discretization time step $T = 0.1$ and the standard deviation of the acceleration noise is 10. Figure 9.8 shows the percent improvement in state estimation that is available with fixed-lag smoothing. The figure shows percent improvement as a function of lag size, and for two different values of measurement noise. The values on the plot are based on the theoretical estimation-error covariance. As expected, the improvement in estimation accuracy is more dramatic as the measurement noise decreases. This was discussed at the end of Section 9.2.
9.4 Fixed-Interval Smoothing
Suppose we have measurements for a fixed time interval.
In fixed-interval smoothing we seek an estimate of the state at some of the interior points of the
time interval.
During the smoothing process we do not obtain any new measurements.
Section 9.4.1 discusses the forward-backward approach to smoothing, which is perhaps the most straightforward smoothing algorithm. Section 9.4.2 discusses the RTS smoother, which is conceptually more difficult but is computationally cheaper than forward-backward smoothing.
9.4.1 Forward-backward smoothing
Suppose we want to estimate the state $x_m$ based on measurements from $k = 1$ to $k = N$, where $N > m$. The forward-backward approach to smoothing obtains two estimates of $x_m$. The first estimate, $\hat{x}_f$, is based on the standard Kalman filter that operates from $k = 1$ to $k = m$. The second estimate, $\hat{x}_b$, is based on a Kalman filter that runs backward in time from $k = N$ back to $k = m$. The forward-backward approach to smoothing combines the two estimates to form an optimal smoothed estimate. This approach was first suggested in [Fra69].

Suppose that we combine a forward estimate $\hat{x}_f$ of the state and a backward estimate $\hat{x}_b$ of the state to get a smoothed estimate $\hat{x}$ of $x$ as follows:

$$\hat{x} = K_f \hat{x}_f + K_b \hat{x}_b \tag{9.55}$$

where $K_f$ and $K_b$ are constant matrix coefficients to be determined. Note that $\hat{x}_f$ and $\hat{x}_b$ are both unbiased since they are both outputs from Kalman filters. Therefore, if $\hat{x}$ is to be unbiased, we require $K_f + K_b = I$ (see Problem 9.9). This gives

$$\hat{x} = K_f \hat{x}_f + \left(I - K_f\right)\hat{x}_b \tag{9.56}$$

The covariance of the estimate can then be found as

$$\begin{aligned}
P &= E\left[\left(x - \hat{x}\right)\left(x - \hat{x}\right)^T\right] \\
&= E\left\{\left[x - K_f \hat{x}_f - \left(I - K_f\right)\hat{x}_b\right]\left[\cdots\right]^T\right\} \\
&= E\left\{\left[K_f\left(e_f - e_b\right) + e_b\right]\left[\cdots\right]^T\right\} \\
&= E\left\{K_f\left(e_f e_f^T + e_b e_b^T\right)K_f^T + e_b e_b^T - K_f e_b e_b^T - e_b e_b^T K_f^T\right\}
\end{aligned} \tag{9.57}$$

where $e_f = x - \hat{x}_f$, $e_b = x - \hat{x}_b$, and we have used the fact that $E\left(e_f e_b^T\right) = 0$.
The estimates $\hat{x}_f$ and $\hat{x}_b$ are both unbiased, and $e_f$ and $e_b$ are independent (since they depend on separate sets of measurements). We can minimize the trace of $P$ with respect to $K_f$ using results from Equation 1.66 and Problem 1.4:

$$\begin{aligned}
\frac{\partial\,\mathrm{Tr}(P)}{\partial K_f} &= 2E\left\{K_f\left(e_f e_f^T + e_b e_b^T\right) - e_b e_b^T\right\} \\
&= 2\left[K_f\left(P_f + P_b\right) - P_b\right]
\end{aligned} \tag{9.58}$$

where $P_f = E\left(e_f e_f^T\right)$ is the covariance of the forward estimate, and $P_b = E\left(e_b e_b^T\right)$ is the covariance of the backward estimate. Setting this equal to zero to find the optimal value of $K_f$ gives

$$\begin{aligned}
K_f &= P_b\left(P_f + P_b\right)^{-1} \\
K_b &= P_f\left(P_f + P_b\right)^{-1}
\end{aligned} \tag{9.59}$$

The inverse of $\left(P_f + P_b\right)$ always exists since both covariance matrices are positive definite. We can substitute this result into Equation 9.57 to find the covariance of the fixed-interval smoother as follows:

$$P = P_b\left(P_f + P_b\right)^{-1}\left(P_f + P_b\right)\left(P_f + P_b\right)^{-1}P_b + P_b - P_b\left(P_f + P_b\right)^{-1}P_b - P_b\left(P_f + P_b\right)^{-1}P_b \tag{9.60}$$

Using the identity $(A + B)^{-1} = B^{-1}\left(AB^{-1} + I\right)^{-1}$ (see Problem 9.2), we can write the above equation as

$$P = \left(P_f P_b^{-1} + I\right)^{-1}\left(P_f + P_b\right)\left(P_b^{-1}P_f + I\right)^{-1} + P_b - \left(P_f P_b^{-1} + I\right)^{-1}P_b - \left(P_f P_b^{-1} + I\right)^{-1}P_b \tag{9.61}$$

Multiplying out the first term, and again using the identity $(A + B)^{-1} = B^{-1}\left(AB^{-1} + I\right)^{-1}$ on the last two terms, results in

$$P = \left[\left(P_b^{-1} + P_f^{-1}\right)^{-1} + \left(P_b^{-1}P_f P_b^{-1} + P_b^{-1}\right)^{-1}\right]\left(P_b^{-1}P_f + I\right)^{-1} + P_b - 2\left(P_b^{-1}P_f P_b^{-1} + P_b^{-1}\right)^{-1} \tag{9.62}$$

From the matrix inversion lemma of Equation 1.39 we see that

$$\left(P_b^{-1}P_f P_b^{-1} + P_b^{-1}\right)^{-1} = P_b - \left(P_f^{-1} + P_b^{-1}\right)^{-1} \tag{9.63}$$

Substituting this into Equation 9.62 gives
$$\begin{aligned} P &= P_b(P_b^{-1}P_f + I)^{-1} + P_b - 2P_b + 2(P_f^{-1} + P_b^{-1})^{-1} \\ &= (P_b^{-1}P_f P_b^{-1} + P_b^{-1})^{-1} - P_b + 2(P_f^{-1} + P_b^{-1})^{-1} \\ &= P_b - (P_f^{-1} + P_b^{-1})^{-1} - P_b + 2(P_f^{-1} + P_b^{-1})^{-1} \\ &= (P_f^{-1} + P_b^{-1})^{-1} \end{aligned} \tag{9.64}$$

These results form the basis for the fixed-interval smoothing problem. The system model is given as

$$\begin{aligned} x_k &= F_{k-1}x_{k-1} + w_{k-1} \\ y_k &= H_k x_k + v_k \\ w_k &\sim (0, Q_k) \\ v_k &\sim (0, R_k) \end{aligned} \tag{9.65}$$

Suppose we want a smoothed estimate of the state at time index $m$.
First we run the forward Kalman filter normally, using measurements up to and including time $m$.
Initialize the forward filter as follows:

$$\begin{aligned} \hat{x}_{f0}^+ &= E(x_0) \\ P_{f0}^+ &= E[(x_0 - \hat{x}_{f0}^+)(x_0 - \hat{x}_{f0}^+)^T] \end{aligned} \tag{9.66}$$

For $k = 1, \cdots, m$, perform the following:

$$\begin{aligned} P_{fk}^- &= F_{k-1}P_{f,k-1}^+ F_{k-1}^T + Q_{k-1} \\ K_{fk} &= P_{fk}^- H_k^T (H_k P_{fk}^- H_k^T + R_k)^{-1} = P_{fk}^+ H_k^T R_k^{-1} \\ \hat{x}_{fk}^- &= F_{k-1}\hat{x}_{f,k-1}^+ \\ \hat{x}_{fk}^+ &= \hat{x}_{fk}^- + K_{fk}(y_k - H_k \hat{x}_{fk}^-) \\ P_{fk}^+ &= (I - K_{fk}H_k)P_{fk}^- \end{aligned} \tag{9.67}$$

At this point we have a forward estimate $\hat{x}_{fm}^+$ for $x_m$, along with its covariance.
These quantities are obtained using measurements up to and including time $m$.
The backward filter needs to run backward in time, starting at the final time index $N$.
Since the forward and backward estimates must be independent, none of the information that was used in the forward filter is allowed to be used in the backward filter.
Therefore, $P_{bN}^-$ must be infinite:

$$P_{bN}^- = \infty \tag{9.68}$$
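The forward pass in Equation 9.67 is the standard Kalman filter. A minimal sketch of one forward step (the function name and the numbers are my own, not from the text):

```python
import numpy as np

def forward_step(x_prev, P_prev, F, Q, H, R, y):
    """One step of the forward Kalman filter (Equation 9.67)."""
    # Time update (a priori).
    x_prior = F @ x_prev
    P_prior = F @ P_prev @ F.T + Q
    # Measurement update (a posteriori).
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)
    x_post = x_prior + K @ (y - H @ x_prior)
    P_post = (np.eye(len(x_prev)) - K @ H) @ P_prior
    return x_post, P_post

# Tiny two-state / one-measurement example with made-up numbers.
F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
x, P = forward_step(np.zeros(2), 10.0 * np.eye(2), F, Q, H, R, np.array([1.0]))
```

Running this from $k = 1$ to $m$ yields $\hat{x}_{fm}^+$ and $P_{fm}^+$, the forward half of the smoother.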
We are using the minus superscript on $P_{bN}^-$ to indicate the backward covariance at time $N$ before the measurement at time $N$ is processed. (Recall that the filtering is performed backward in time.)
So $P_{bN}^-$ will be updated to obtain $P_{bN}^+$ after the measurement at time $N$ is processed.
Then it will be extrapolated backward in time to obtain $P_{b,N-1}^-$, and so on.
Now the question arises how to initialize the backward state estimate $\hat{x}_{bk}^-$ at the final time $k = N$.
We can solve this problem by introducing the new variable

$$s_k = P_{bk}^{-1}\hat{x}_{bk} \tag{9.69}$$

A minus or plus superscript can be added on all the quantities in the above equation to indicate values before or after the measurement at time $k$ is taken into account.
Since $P_{bN}^- = \infty$, it follows that

$$s_N^- = 0 \tag{9.70}$$

The infinite boundary condition on $P_{bN}^-$ means that we cannot run the standard Kalman filter backward in time, because we would have to begin with an infinite covariance.
Instead we run the information filter from Section 6.2 backward in time.
This can be done by writing the system of Equation 9.65 as

$$\begin{aligned} x_{k-1} &= F_{k-1}^{-1} x_k - F_{k-1}^{-1} w_{k-1} = F_{k-1}^{-1} x_k + w_{b,k-1} \\ y_k &= H_k x_k + v_k \\ w_{bk} &\sim (0, F_k^{-1} Q_k F_k^{-T}) \\ v_k &\sim (0, R_k) \end{aligned} \tag{9.71}$$

Note that $F_k^{-1}$ should always exist if it comes from a real system, because $F_k$ comes from a matrix exponential that is always invertible (see Sections 1.2 and 1.4).
The backward information filter can be written as follows.
Initialize the filter with $I_{bN}^- = 0$.
For $k = N, N-1, \cdots$, perform the following:

$$\begin{aligned} I_{bk}^+ &= I_{bk}^- + H_k^T R_k^{-1} H_k \\ K_{bk} &= (I_{bk}^+)^{-1} H_k^T R_k^{-1} \\ \hat{x}_{bk}^+ &= \hat{x}_{bk}^- + K_{bk}(y_k - H_k \hat{x}_{bk}^-) \\ I_{b,k-1}^- &= [F_{k-1}^{-1}(I_{bk}^+)^{-1}F_{k-1}^{-T} + F_{k-1}^{-1}Q_{k-1}F_{k-1}^{-T}]^{-1} \\ &= F_{k-1}^T[(I_{bk}^+)^{-1} + Q_{k-1}]^{-1}F_{k-1} \\ &= F_{k-1}^T[Q_{k-1}^{-1} - Q_{k-1}^{-1}(I_{bk}^+ + Q_{k-1}^{-1})^{-1}Q_{k-1}^{-1}]F_{k-1} \\ \hat{x}_{b,k-1}^- &= F_{k-1}^{-1}\hat{x}_{bk}^+ \end{aligned} \tag{9.72}$$
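One backward step can be sketched as follows (my own names and numbers). It uses the second form of the time update in Equation 9.72 together with the $s_k$ recursion of Equation 9.76, and it initializes the information matrix to a small nonzero multiple of the identity rather than exactly zero — the practical workaround for the singularity at the first step that the text discusses:

```python
import numpy as np

def backward_info_step(I_min, s_min, F, Q, H, R, y):
    """One backward information-filter step (Equations 9.72, 9.74, 9.76)."""
    Ri = np.linalg.inv(R)
    # Measurement update at time k.
    I_plus = I_min + H.T @ Ri @ H
    s_plus = s_min + H.T @ Ri @ y
    # Time update from k back to k-1 (second form of Equation 9.72).
    P_plus = np.linalg.inv(I_plus)          # requires I_plus nonsingular
    I_prev = F.T @ np.linalg.inv(P_plus + Q) @ F
    # s_{k-1}^- = I_{b,k-1}^- F^{-1} (I_bk^+)^{-1} s_k^+  (Equation 9.76).
    s_prev = I_prev @ np.linalg.inv(F) @ P_plus @ s_plus
    return I_prev, s_prev

F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])   # fewer measurements than states
R = np.array([[1.0]])
# Boundary condition of Equation 9.75, nudged away from exactly zero so that
# I_bk^+ is invertible on the first step.
I_b = 1e-8 * np.eye(2)
s = np.zeros(2)
I_b, s = backward_info_step(I_b, s, F, Q, H, R, np.array([1.0]))
```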
The first form for $I_{b,k-1}^-$ above requires the inversion of $I_{bk}^+$.
Consider the first time step for the backward filter (i.e., at $k = N$).
The information matrix $I_{bN}^-$ is initialized to zero, and then the first time through the above loop we set $I_{bk}^+ = I_{bk}^- + H_k^T R_k^{-1} H_k$.
If there are fewer measurements than states, $H_k^T R_k^{-1} H_k$ will always be singular and, therefore, $I_{bk}^+$ will be singular at $k = N$.
Therefore, the first form given above for $I_{b,k-1}^-$ will not be computable.
In practice we can get around this by initializing $I_{bN}^-$ to a small nonzero matrix instead of zero.
The third form for $I_{b,k-1}^-$ above has its own problems.
It does not require the inversion of $I_{bk}^+$, but it does require the inversion of $Q_{k-1}$.
So the third form of $I_{b,k-1}^-$ is not computable unless $Q_{k-1}$ is nonsingular.
Again, in practice we can get around this by making a small modification to $Q_{k-1}$ so that it is numerically nonsingular.
Since we need to update $s_k$ instead of $\hat{x}_{bk}$ (because of initialization issues), with $s_k = I_{bk}\hat{x}_{bk}$ as defined in Equation 9.69, we rewrite the update equations for the state estimate as follows:

$$\begin{aligned} \hat{x}_{bk}^+ &= \hat{x}_{bk}^- + K_{bk}(y_k - H_k\hat{x}_{bk}^-) \\ s_k^+ &= I_{bk}^+\hat{x}_{bk}^+ \\ &= I_{bk}^+\hat{x}_{bk}^- + I_{bk}^+K_{bk}(y_k - H_k\hat{x}_{bk}^-) \end{aligned} \tag{9.73}$$

Now note from Equation 6.33 that we can write $I_{bk}^+ = I_{bk}^- + H_k^T R_k^{-1} H_k$, and $K_{bk} = P_{bk}^+ H_k^T R_k^{-1}$.
Substituting these expressions into the above equation for $s_k^+$ gives

$$\begin{aligned} s_k^+ &= (I_{bk}^- + H_k^T R_k^{-1} H_k)\hat{x}_{bk}^- + H_k^T R_k^{-1}(y_k - H_k\hat{x}_{bk}^-) \\ &= s_k^- + H_k^T R_k^{-1} y_k \end{aligned} \tag{9.74}$$

We combine this with Equation 9.72 to write the backward information filter as follows.
1. Initialize the filter as follows:

$$s_N^- = 0, \qquad I_{bN}^- = 0 \tag{9.75}$$

2. For $k = N, N-1, \cdots, m+1$, perform the following:
$$\begin{aligned} I_{bk}^+ &= I_{bk}^- + H_k^T R_k^{-1} H_k \\ s_k^+ &= s_k^- + H_k^T R_k^{-1} y_k \\ I_{b,k-1}^- &= [F_{k-1}^{-1}(I_{bk}^+)^{-1}F_{k-1}^{-T} + F_{k-1}^{-1}Q_{k-1}F_{k-1}^{-T}]^{-1} \\ &= F_{k-1}^T[(I_{bk}^+)^{-1} + Q_{k-1}]^{-1}F_{k-1} \\ &= F_{k-1}^T[Q_{k-1}^{-1} - Q_{k-1}^{-1}(I_{bk}^+ + Q_{k-1}^{-1})^{-1}Q_{k-1}^{-1}]F_{k-1} \\ s_{k-1}^- &= I_{b,k-1}^- F_{k-1}^{-1}(I_{bk}^+)^{-1} s_k^+ \end{aligned} \tag{9.76}$$

3. Perform one final time update to obtain the backward estimate of $x_m$:

$$\begin{aligned} I_{bm}^- &= Q_m^{-1} - Q_m^{-1}F_m^{-1}(I_{b,m+1}^+ + F_m^{-T}Q_m^{-1}F_m^{-1})^{-1}F_m^{-T}Q_m^{-1} \\ P_{bm}^- &= (I_{bm}^-)^{-1} \\ s_m^- &= I_{bm}^- F_m^{-1}(I_{b,m+1}^+)^{-1} s_{m+1}^+ \\ \hat{x}_{bm}^- &= (I_{bm}^-)^{-1} s_m^- \end{aligned} \tag{9.77}$$

Now we have the backward estimate $\hat{x}_{bm}^-$ and its covariance $P_{bm}^-$.
These quantities are obtained from measurements $m+1, m+2, \cdots, N$.
After we obtain the backward quantities as outlined above, we combine them with the forward quantities from Equation 9.67 to obtain the final state estimate and covariance:

$$\begin{aligned} K_f &= P_{bm}^-(P_{fm}^+ + P_{bm}^-)^{-1} \\ \hat{x}_m &= K_f \hat{x}_{fm}^+ + (I - K_f)\hat{x}_{bm}^- \\ P_m &= [(P_{fm}^+)^{-1} + (P_{bm}^-)^{-1}]^{-1} \end{aligned} \tag{9.78}$$

We can obtain an alternative equation for $\hat{x}_m$ by manipulating the above equations.
If we substitute for $K_f$ in the above expression for $\hat{x}_m$ then we obtain

$$\begin{aligned} \hat{x}_m &= P_{bm}^-(P_{fm}^+ + P_{bm}^-)^{-1}\hat{x}_{fm}^+ + [I - P_{bm}^-(P_{fm}^+ + P_{bm}^-)^{-1}]\hat{x}_{bm}^- \\ &= P_{bm}^-(P_{fm}^+ + P_{bm}^-)^{-1}\hat{x}_{fm}^+ + [(P_{fm}^+ + P_{bm}^-) - P_{bm}^-](P_{fm}^+ + P_{bm}^-)^{-1}\hat{x}_{bm}^- \\ &= P_{bm}^-(P_{fm}^+ + P_{bm}^-)^{-1}\hat{x}_{fm}^+ + P_{fm}^+(P_{fm}^+ + P_{bm}^-)^{-1}\hat{x}_{bm}^- \end{aligned} \tag{9.79}$$

Using the matrix inversion lemma on the rightmost inverse in the above equation and performing some other manipulations gives
$$\begin{aligned} \hat{x}_m &= P_{bm}^-(P_{fm}^+ + P_{bm}^-)^{-1}\hat{x}_{fm}^+ + P_{fm}^+(P_{fm}^+ + P_{bm}^-)^{-1}\hat{x}_{bm}^- \\ &= (P_{fm}^+ I_{bm}^- + I)^{-1}\hat{x}_{fm}^+ + (P_{bm}^- I_{fm}^+ + I)^{-1}\hat{x}_{bm}^- \end{aligned} \tag{9.80}$$

where we have relied on the identity $(A + B)^{-1} = B^{-1}(AB^{-1} + I)^{-1}$ (see Problem 9.2).
The coefficients of $\hat{x}_{fm}^+$ and $\hat{x}_{bm}^-$ in the above equation both have a common factor, which can be written as follows:

$$\begin{aligned} (P_{fm}^+ I_{bm}^- + I)^{-1} &= [P_{fm}^+(I_{fm}^+ + I_{bm}^-)]^{-1} = (I_{fm}^+ + I_{bm}^-)^{-1} I_{fm}^+ \\ (P_{bm}^- I_{fm}^+ + I)^{-1} &= [P_{bm}^-(I_{fm}^+ + I_{bm}^-)]^{-1} = (I_{fm}^+ + I_{bm}^-)^{-1} I_{bm}^- \end{aligned} \tag{9.81}$$

Therefore, using Equation 9.78, we can write Equation 9.80 as

$$\hat{x}_m = P_m I_{fm}^+ \hat{x}_{fm}^+ + P_m I_{bm}^- \hat{x}_{bm}^- = P_m(I_{fm}^+ \hat{x}_{fm}^+ + I_{bm}^- \hat{x}_{bm}^-) \tag{9.82}$$

Figure 9.9 illustrates how the forward-backward smoother works.
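The gain form of Equation 9.78 and the information form of Equation 9.82 are algebraically identical, which is easy to confirm numerically (a sketch with made-up forward and backward estimates):

```python
import numpy as np

inv = np.linalg.inv
xf = np.array([1.0, 0.0]); Pf = np.diag([4.0, 1.0])   # forward estimate (made up)
xb = np.array([3.0, 0.0]); Pb = np.diag([1.0, 4.0])   # backward estimate (made up)

# Gain form (Equation 9.78).
Kf = Pb @ inv(Pf + Pb)
x_gain = Kf @ xf + (np.eye(2) - Kf) @ xb
P = inv(inv(Pf) + inv(Pb))

# Information form (Equation 9.82).
x_info = P @ (inv(Pf) @ xf + inv(Pb) @ xb)

assert np.allclose(x_gain, x_info)
```

Each state component leans toward whichever filter is more confident about it; here the first component fuses to $(1 \cdot 1 + 4 \cdot 3)/5 = 2.6$.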
EXAMPLE 9.3
In this example we consider the same problem given in Example 9.1.
Suppose that we want to estimate the position and velocity of the vehicle at $t = 5$ seconds.
We have measurements every 0.1 seconds for a total of 10 seconds.
The standard deviation of the measurement noise is 10, and the standard deviation of the acceleration noise is 10.
Figure 9.10 shows the trace of the covariance of the estimation error of the forward filter as it runs from $t = 0$ to $t = 5$, the backward filter as it runs from $t = 10$ back to $t = 5$, and the smoothed estimate at $t = 5$.
The forward and backward filters both converge to the same steady-state value, even though the forward filter was initialized to a covariance of 20 for both the position and velocity estimation errors, and the backward filter was initialized to an infinite covariance.
The smoothed filter has a covariance of about 7.6, which shows the dramatic improvement that can be obtained in estimation accuracy when smoothing is used.
Figure 9.10 This shows the trace of the estimation-error covariance for Example 9.3. The forward filter runs from $t = 0$ to $t = 5$, the backward filter runs from $t = 10$ to $t = 5$, and the trace of the covariance of the smoothed estimate is shown at $t = 5$.
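The roughly twofold improvement is what the scalar version of Equation 9.78 predicts: if the forward and backward filters reach a common steady-state variance $p$, the smoothed variance is $p/2$. The value of $p$ below is hypothetical, chosen only to illustrate the arithmetic:

```python
p_f = p_b = 15.2                       # hypothetical common steady-state variance
p_smooth = 1.0 / (1.0 / p_f + 1.0 / p_b)
# Fusing two independent, equally confident estimates halves the variance.
assert abs(p_smooth - p_f / 2) < 1e-9
```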
9.4.2. RTS smoothing
Several other forms of the fixed-interval smoother have been obtained.
One of the most common is the smoother that was presented by Rauch, Tung, and Striebel, usually
called the RTS smoother Rau65.
The RTS smoother is more computationally efficient than the smoother presented in the previous section because we do not need to directly compute the backward estimate $\hat{x}_{bm}$ or covariance $P_{bm}$ in order to get the smoothed estimate and covariance.
In order to obtain the RTS smoother, we will first look at the smoothed covariance given in Equation 9.78 and obtain an equivalent expression that does not use $P_{bm}$.
Then we will look at the smoothed estimate given in Equation 9.78, which uses the gain $K_f$, which depends on $P_{bm}$, and obtain an equivalent expression that does not use $P_{bm}$ or $\hat{x}_{bm}$.
9.4.2.1 RTS covariance update
First consider the smoothed covariance given in Equation 9.78. This can be written as

$$\begin{aligned} P_m &= [(P_{fm}^+)^{-1} + (P_{bm}^-)^{-1}]^{-1} \\ &= P_{fm}^+ - P_{fm}^+(P_{fm}^+ + P_{bm}^-)^{-1}P_{fm}^+ \end{aligned} \tag{9.83}$$

where the second expression comes from an application of the matrix inversion lemma to the first expression (see Problem 9.3).
From Equation 9.72 we see that
$$P_{bm}^- = F_m^{-1}[P_{b,m+1}^+ + Q_m]F_m^{-T} \tag{9.84}$$

Substituting this into the expression $(P_{fm}^+ + P_{bm}^-)^{-1}$ gives the following:

$$\begin{aligned} (P_{fm}^+ + P_{bm}^-)^{-1} &= [P_{fm}^+ + F_m^{-1}(P_{b,m+1}^+ + Q_m)F_m^{-T}]^{-1} \\ &= [F_m^{-1}F_m P_{fm}^+ F_m^T F_m^{-T} + F_m^{-1}(P_{b,m+1}^+ + Q_m)F_m^{-T}]^{-1} \\ &= [F_m^{-1}(F_m P_{fm}^+ F_m^T + P_{b,m+1}^+ + Q_m)F_m^{-T}]^{-1} \\ &= F_m^T(F_m P_{fm}^+ F_m^T + Q_m + P_{b,m+1}^+)^{-1}F_m \\ &= F_m^T(P_{f,m+1}^- + P_{b,m+1}^+)^{-1}F_m \end{aligned} \tag{9.85}$$

From Equations 6.26 and 9.76 recall that

$$\begin{aligned} I_{fm}^+ &= I_{fm}^- + H_m^T R_m^{-1} H_m \\ I_{bm}^+ &= I_{bm}^- + H_m^T R_m^{-1} H_m \end{aligned} \tag{9.86}$$

We can combine these two equations to obtain

$$I_{b,m+1}^+ = I_{b,m+1}^- + I_{f,m+1}^+ - I_{f,m+1}^- \tag{9.87}$$

Substituting this into Equation 9.78 gives

$$\begin{aligned} P_{m+1} &= [I_{f,m+1}^+ + I_{b,m+1}^-]^{-1} = [I_{b,m+1}^+ + I_{f,m+1}^-]^{-1} \\ P_{m+1}^{-1} &= I_{b,m+1}^+ + I_{f,m+1}^- \\ P_{b,m+1}^+ &= [P_{m+1}^{-1} - I_{f,m+1}^-]^{-1} \end{aligned} \tag{9.88}$$

Substituting this into Equation 9.85 gives

$$\begin{aligned} (P_{fm}^+ + P_{bm}^-)^{-1} &= F_m^T[P_{f,m+1}^- + (P_{m+1}^{-1} - I_{f,m+1}^-)^{-1}]^{-1}F_m \\ &= F_m^T I_{f,m+1}^-[I_{f,m+1}^- + I_{f,m+1}^-(P_{m+1}^{-1} - I_{f,m+1}^-)^{-1}I_{f,m+1}^-]^{-1}I_{f,m+1}^-F_m \\ &= F_m^T I_{f,m+1}^-(P_{f,m+1}^- - P_{m+1})I_{f,m+1}^-F_m \end{aligned} \tag{9.89}$$

where the last equality comes from an application of the matrix inversion lemma.
Substituting this expression into Equation 9.83 gives

$$P_m = P_{fm}^+ - K_m(P_{f,m+1}^- - P_{m+1})K_m^T \tag{9.90}$$

where the smoother gain is given as
$$K_m = P_{fm}^+ F_m^T I_{f,m+1}^- \tag{9.91}$$

The covariance update equation for $P_m$ is not a function of the backward covariance.
The smoother covariance $P_m$ can be computed using only the forward covariance $P_{fm}^+$, which reduces the computational effort (compared to the algorithm presented in Section 9.4.1).
9.4.2.2 RTS state estimate update
Next we consider the smoothed estimate $\hat{x}_m$ given in Equation 9.78.
We will find an equivalent expression that does not use $P_{bm}$ or $\hat{x}_{bm}$.
In order to do this we will first need to establish a few lemmas.
Lemma 1

$$F_{k-1}^{-1}Q_{k-1}F_{k-1}^{-T} = F_{k-1}^{-1}P_{fk}^- F_{k-1}^{-T} - P_{f,k-1}^+ \tag{9.92}$$

Proof: From Equation 9.67 we see that

$$P_{fk}^- = F_{k-1}P_{f,k-1}^+ F_{k-1}^T + Q_{k-1} \tag{9.93}$$

Rearranging this equation gives

$$Q_{k-1} = P_{fk}^- - F_{k-1}P_{f,k-1}^+ F_{k-1}^T \tag{9.94}$$

Premultiplying both sides by $F_{k-1}^{-1}$ and postmultiplying both sides by $F_{k-1}^{-T}$ gives the desired result.
QED
Lemma 2 The a posteriori covariance of the backward filter satisfies the equation

$$P_{bk}^+ = (P_{fk}^- + P_{bk}^+)I_{fk}^- P_k \tag{9.95}$$

Proof: From Equation 9.78 (in the form $P_k^{-1} = I_{fk}^- + I_{bk}^+$, per Equation 9.88) we obtain

$$\begin{aligned} P_{bk}^+ &= (I + P_{bk}^+ I_{fk}^-)P_k \\ &= P_k + P_{bk}^+ I_{fk}^- P_k \\ &= P_{fk}^- I_{fk}^- P_k + P_{bk}^+ I_{fk}^- P_k \\ &= (P_{fk}^- + P_{bk}^+)I_{fk}^- P_k \end{aligned} \tag{9.96}$$

QED
Lemma 3 The covariances of the forward and backward filters satisfy the equation
$$P_{fk}^- + P_{bk}^+ = F_{k-1}(P_{f,k-1}^+ + P_{b,k-1}^-)F_{k-1}^T \tag{9.97}$$

Proof: From Equations 9.67 and 9.72 we see that

$$\begin{aligned} P_{f,k-1}^+ &= F_{k-1}^{-1}P_{fk}^- F_{k-1}^{-T} - F_{k-1}^{-1}Q_{k-1}F_{k-1}^{-T} \\ P_{b,k-1}^- &= F_{k-1}^{-1}P_{bk}^+ F_{k-1}^{-T} + F_{k-1}^{-1}Q_{k-1}F_{k-1}^{-T} \end{aligned} \tag{9.98}$$

Adding these two equations and rearranging gives

$$\begin{aligned} P_{f,k-1}^+ + P_{b,k-1}^- &= F_{k-1}^{-1}(P_{fk}^- + P_{bk}^+)F_{k-1}^{-T} \\ P_{fk}^- + P_{bk}^+ &= F_{k-1}(P_{f,k-1}^+ + P_{b,k-1}^-)F_{k-1}^T \end{aligned} \tag{9.99}$$

QED
Lemma 4 The smoothed estimate $\hat{x}_k$ can be written as

$$\hat{x}_k = P_k I_{fk}^+ \hat{x}_{fk}^- - P_k H_k^T R_k^{-1} H_k \hat{x}_{fk}^- + P_k s_k^+ \tag{9.100}$$

Proof: From Equations 9.69 and 9.82 we have

$$\hat{x}_k = P_k I_{fk}^+ \hat{x}_{fk}^+ + P_k I_{bk}^- \hat{x}_{bk}^- = P_k I_{fk}^+ \hat{x}_{fk}^+ + P_k s_k^- \tag{9.101}$$

From Equation 9.76 we see that

$$s_k^- = s_k^+ - H_k^T R_k^{-1} y_k \tag{9.102}$$

Substitute this expression for $s_k^-$, and the expression for $\hat{x}_{fk}^+$ from Equation 9.67, into Equation 9.101 to obtain

$$\hat{x}_k = P_k I_{fk}^+ \hat{x}_{fk}^- + P_k I_{fk}^+ K_{fk}(y_k - H_k \hat{x}_{fk}^-) + P_k s_k^+ - P_k H_k^T R_k^{-1} y_k \tag{9.103}$$

Now substitute $K_{fk} = P_{fk}^+ H_k^T R_k^{-1}$ [from Equation 9.67] in the above equation to obtain

$$\begin{aligned} \hat{x}_k &= P_k I_{fk}^+ \hat{x}_{fk}^- + P_k I_{fk}^+ P_{fk}^+ H_k^T R_k^{-1}(y_k - H_k \hat{x}_{fk}^-) + P_k s_k^+ - P_k H_k^T R_k^{-1} y_k \\ &= P_k I_{fk}^+ \hat{x}_{fk}^- + P_k H_k^T R_k^{-1}(y_k - H_k \hat{x}_{fk}^-) + P_k s_k^+ - P_k H_k^T R_k^{-1} y_k \\ &= P_k I_{fk}^+ \hat{x}_{fk}^- - P_k H_k^T R_k^{-1} H_k \hat{x}_{fk}^- + P_k s_k^+ \end{aligned} \tag{9.104}$$

QED
Lemma 5

$$(P_{f,k-1}^+ + P_{b,k-1}^-)^{-1} = F_{k-1}^T I_{fk}^-(P_{fk}^- - P_k)I_{fk}^- F_{k-1} \tag{9.105}$$

Proof: Recall from Equations 6.26 and 9.72 that
$$\begin{aligned} I_{fk}^+ &= I_{fk}^- + H_k^T R_k^{-1} H_k \\ I_{bk}^+ &= I_{bk}^- + H_k^T R_k^{-1} H_k \end{aligned} \tag{9.106}$$

Combining these two equations gives

$$\begin{aligned} I_{bk}^+ &= I_{bk}^- + I_{fk}^+ - I_{fk}^- \\ &= P_k^{-1} - I_{fk}^- \\ P_{bk}^+ &= (P_k^{-1} - I_{fk}^-)^{-1} \end{aligned} \tag{9.107}$$

where we have used Equation 9.78 (i.e., $P_k^{-1} = I_{fk}^+ + I_{bk}^-$) in the above derivation.
Substitute this expression for $P_{bk}^+$ into Equation 9.97 to obtain

$$F_{k-1}(P_{f,k-1}^+ + P_{b,k-1}^-)F_{k-1}^T = P_{fk}^- + P_{bk}^+ = P_{fk}^- + (P_k^{-1} - I_{fk}^-)^{-1} \tag{9.108}$$

Invert both sides to obtain

$$\begin{aligned} (P_{f,k-1}^+ + P_{b,k-1}^-)^{-1} &= F_{k-1}^T[P_{fk}^- + (P_k^{-1} - I_{fk}^-)^{-1}]^{-1}F_{k-1} \\ &= F_{k-1}^T[P_{fk}^- I_{fk}^- P_{fk}^- + P_{fk}^- I_{fk}^-(P_k^{-1} - I_{fk}^-)^{-1}I_{fk}^- P_{fk}^-]^{-1}F_{k-1} \\ &= F_{k-1}^T I_{fk}^-[I_{fk}^- + I_{fk}^-(P_k^{-1} - I_{fk}^-)^{-1}I_{fk}^-]^{-1}I_{fk}^- F_{k-1} \end{aligned} \tag{9.109}$$

Now apply the matrix inversion lemma to the term $[I_{fk}^- + I_{fk}^-(P_k^{-1} - I_{fk}^-)^{-1}I_{fk}^-]^{-1}$ in the above equation. This results in

$$\begin{aligned} (P_{f,k-1}^+ + P_{b,k-1}^-)^{-1} &= F_{k-1}^T I_{fk}^-[P_{fk}^- - (P_k^{-1} - I_{fk}^- + I_{fk}^-)^{-1}]I_{fk}^- F_{k-1} \\ &= F_{k-1}^T I_{fk}^-(P_{fk}^- - P_k)I_{fk}^- F_{k-1} \end{aligned} \tag{9.110}$$

QED
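Lemma 5 can be spot-checked numerically. The sketch below builds a consistent set of covariances from made-up values, using the forward time update, the backward relation in Equation 9.98, and $P_k = (I_{fk}^- + I_{bk}^+)^{-1}$ from Equation 9.88:

```python
import numpy as np

inv = np.linalg.inv
F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.1 * np.eye(2)
Pf_prev = np.diag([2.0, 1.0])      # P_{f,k-1}^+  (made-up value)
Pb_prev = np.diag([5.0, 4.0])      # P_{b,k-1}^-  (made-up value)

Pf = F @ Pf_prev @ F.T + Q         # P_{fk}^-  (forward time update, Eq. 9.93)
Pb = F @ Pb_prev @ F.T - Q         # P_{bk}^+  (backward relation, Eq. 9.98)
Pk = inv(inv(Pf) + inv(Pb))        # smoothed covariance at time k (Eq. 9.88)

lhs = inv(Pf_prev + Pb_prev)
rhs = F.T @ inv(Pf) @ (Pf - Pk) @ inv(Pf) @ F   # Lemma 5 (Eq. 9.105)
assert np.allclose(lhs, rhs)
```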
With the above lemmas we now have the tools that we need to obtain an alternate expression for the smoothed estimate.
Starting with the expression for $s_{k-1}^-$ in Equation 9.76, and substituting the expression for $I_{b,k-1}^-$ from Equation 9.72, gives

$$\begin{aligned} s_{k-1}^- &= I_{b,k-1}^- F_{k-1}^{-1} P_{bk}^+ s_k^+ \\ &= F_{k-1}^T[Q_{k-1}^{-1} - Q_{k-1}^{-1}(I_{bk}^+ + Q_{k-1}^{-1})^{-1}Q_{k-1}^{-1}]F_{k-1}F_{k-1}^{-1}P_{bk}^+ s_k^+ \\ &= F_{k-1}^T Q_{k-1}^{-1}[I - (I_{bk}^+ + Q_{k-1}^{-1})^{-1}Q_{k-1}^{-1}]P_{bk}^+ s_k^+ \\ &= F_{k-1}^T Q_{k-1}^{-1}(I_{bk}^+ + Q_{k-1}^{-1})^{-1}(I_{bk}^+ + Q_{k-1}^{-1} - Q_{k-1}^{-1})P_{bk}^+ s_k^+ \\ &= F_{k-1}^T Q_{k-1}^{-1}(I_{bk}^+ + Q_{k-1}^{-1})^{-1} s_k^+ \\ &= F_{k-1}^T(I + I_{bk}^+ Q_{k-1})^{-1} s_k^+ \end{aligned} \tag{9.111}$$

Rearranging this equation gives

$$(I + I_{bk}^+ Q_{k-1})F_{k-1}^{-T} s_{k-1}^- = s_k^+ \tag{9.112}$$

Multiplying out this equation, and premultiplying both sides by $F_{k-1}^{-1}P_{bk}^+$, gives

$$F_{k-1}^{-1}P_{bk}^+ F_{k-1}^{-T} s_{k-1}^- + F_{k-1}^{-1}Q_{k-1}F_{k-1}^{-T} s_{k-1}^- = F_{k-1}^{-1}P_{bk}^+ s_k^+ \tag{9.113}$$

Substituting for $F_{k-1}^{-1}Q_{k-1}F_{k-1}^{-T}$ from Equation 9.92 gives

$$\begin{aligned} F_{k-1}^{-1}P_{bk}^+ F_{k-1}^{-T} s_{k-1}^- + F_{k-1}^{-1}P_{fk}^- F_{k-1}^{-T} s_{k-1}^- - P_{f,k-1}^+ s_{k-1}^- &= F_{k-1}^{-1}P_{bk}^+ s_k^+ \\ [F_{k-1}^{-1}(P_{fk}^- + P_{bk}^+)F_{k-1}^{-T} - P_{f,k-1}^+] s_{k-1}^- &= F_{k-1}^{-1}P_{bk}^+ s_k^+ \\ [(P_{fk}^- + P_{bk}^+)F_{k-1}^{-T} - F_{k-1}P_{f,k-1}^+] s_{k-1}^- &= P_{bk}^+ s_k^+ \end{aligned} \tag{9.114}$$

Substituting in this expression for the $P_{bk}^+$ on the right side from Equation 9.95 gives

$$[(P_{fk}^- + P_{bk}^+)F_{k-1}^{-T} - F_{k-1}P_{f,k-1}^+] s_{k-1}^- = (P_{fk}^- + P_{bk}^+)I_{fk}^- P_k s_k^+ \tag{9.115}$$

Substituting for $(P_{fk}^- + P_{bk}^+)$ from Equation 9.97 on both sides of this expression gives

$$[F_{k-1}(P_{f,k-1}^+ + P_{b,k-1}^-) - F_{k-1}P_{f,k-1}^+] s_{k-1}^- = F_{k-1}(P_{f,k-1}^+ + P_{b,k-1}^-)F_{k-1}^T I_{fk}^- P_k s_k^+ \tag{9.116}$$

Premultiplying both sides by $(P_{f,k-1}^+ + P_{b,k-1}^-)^{-1}F_{k-1}^{-1}$ gives

$$[I - (P_{f,k-1}^+ + P_{b,k-1}^-)^{-1}P_{f,k-1}^+] s_{k-1}^- = F_{k-1}^T I_{fk}^- P_k s_k^+ \tag{9.117}$$

Substituting Equation 9.105 for $(P_{f,k-1}^+ + P_{b,k-1}^-)^{-1}$ gives

$$s_{k-1}^- - F_{k-1}^T I_{fk}^-(P_{fk}^- - P_k)I_{fk}^- F_{k-1}P_{f,k-1}^+ s_{k-1}^- = F_{k-1}^T I_{fk}^- P_k s_k^+ \tag{9.118}$$
Now from Equation 9.105 we see that

$$-(P_{f,k-1}^+ + P_{b,k-1}^-)^{-1}F_{k-1}^{-1}\hat{x}_{fk}^- = F_{k-1}^T I_{fk}^-(P_k - P_{fk}^-)I_{fk}^-\hat{x}_{fk}^- \tag{9.119}$$

So we can add the two sides of this equation to the two sides of Equation 9.118 to get

$$\begin{aligned} & s_{k-1}^- - F_{k-1}^T I_{fk}^-(P_{fk}^- - P_k)I_{fk}^- F_{k-1}P_{f,k-1}^+ s_{k-1}^- - (P_{f,k-1}^+ + P_{b,k-1}^-)^{-1}F_{k-1}^{-1}\hat{x}_{fk}^- \\ &\qquad = F_{k-1}^T I_{fk}^- P_k s_k^+ + F_{k-1}^T I_{fk}^-(P_k - P_{fk}^-)I_{fk}^-\hat{x}_{fk}^- \end{aligned} \tag{9.120}$$

Now use Equation 9.100 to substitute for $P_k s_k^+$ in the above equation and obtain

$$\begin{aligned} & s_{k-1}^- - F_{k-1}^T I_{fk}^-(P_{fk}^- - P_k)I_{fk}^- F_{k-1}P_{f,k-1}^+ s_{k-1}^- - (P_{f,k-1}^+ + P_{b,k-1}^-)^{-1}F_{k-1}^{-1}\hat{x}_{fk}^- \\ &\qquad = F_{k-1}^T I_{fk}^-\hat{x}_k - F_{k-1}^T I_{fk}^- P_k I_{fk}^+\hat{x}_{fk}^- + F_{k-1}^T I_{fk}^- P_k H_k^T R_k^{-1}H_k\hat{x}_{fk}^- + F_{k-1}^T I_{fk}^-(P_k - P_{fk}^-)I_{fk}^-\hat{x}_{fk}^- \end{aligned} \tag{9.121}$$

Rearrange this equation to obtain

$$\begin{aligned} & s_{k-1}^- - F_{k-1}^T I_{fk}^-(P_{fk}^- - P_k)I_{fk}^- F_{k-1}P_{f,k-1}^+ s_{k-1}^- \\ &\quad + [-(P_{f,k-1}^+ + P_{b,k-1}^-)^{-1}F_{k-1}^{-1} + F_{k-1}^T I_{fk}^- P_k I_{fk}^+ - F_{k-1}^T I_{fk}^- P_k H_k^T R_k^{-1}H_k - F_{k-1}^T I_{fk}^- P_k I_{fk}^-]\hat{x}_{fk}^- \\ &\qquad = F_{k-1}^T I_{fk}^-(\hat{x}_k - \hat{x}_{fk}^-) \end{aligned} \tag{9.122}$$

From Equation 9.106 we see that $I_{fk}^+ - I_{fk}^- = H_k^T R_k^{-1}H_k$, so the three $P_k$ terms in the coefficient of $\hat{x}_{fk}^-$ above cancel. Also note that the remaining part of the coefficient of $\hat{x}_{fk}^-$ on the left side of the above equation can be expressed as

$$(P_{f,k-1}^+ + P_{b,k-1}^-)^{-1}F_{k-1}^{-1} = I_{b,k-1}^-(P_{f,k-1}^+ I_{b,k-1}^- + I)^{-1}F_{k-1}^{-1} \tag{9.123}$$

From Equation 9.67 we see that $F_{k-1}^{-1}\hat{x}_{fk}^- = \hat{x}_{f,k-1}^+$. Therefore Equation 9.122 can be written as

$$s_{k-1}^- - F_{k-1}^T I_{fk}^-(P_{fk}^- - P_k)I_{fk}^- F_{k-1}P_{f,k-1}^+ s_{k-1}^- - I_{b,k-1}^-(P_{f,k-1}^+ I_{b,k-1}^- + I)^{-1}\hat{x}_{f,k-1}^+ = F_{k-1}^T I_{fk}^-(\hat{x}_k - \hat{x}_{fk}^-) \tag{9.124}$$

Now substitute for $P_k$ from Equation 9.90 and use Equation 9.91 (which gives $I_{fk}^- F_{k-1}P_{f,k-1}^+ = K_{k-1}^T$) in the above equation to obtain

$$s_{k-1}^- - F_{k-1}^T I_{fk}^-[P_{fk}^- - P_{fk}^+ + K_k(P_{f,k+1}^- - P_{k+1})K_k^T]K_{k-1}^T s_{k-1}^- - I_{b,k-1}^-(P_{f,k-1}^+ I_{b,k-1}^- + I)^{-1}\hat{x}_{f,k-1}^+ = F_{k-1}^T I_{fk}^-(\hat{x}_k - \hat{x}_{fk}^-) \tag{9.125}$$

Premultiplying both sides by $P_{f,k-1}^+$ gives

$$[P_{f,k-1}^+ - P_{f,k-1}^+ F_{k-1}^T I_{fk}^-(P_{fk}^- - P_{fk}^+ + K_k(P_{f,k+1}^- - P_{k+1})K_k^T)K_{k-1}^T] s_{k-1}^- - P_{f,k-1}^+ I_{b,k-1}^-(P_{f,k-1}^+ I_{b,k-1}^- + I)^{-1}\hat{x}_{f,k-1}^+ = K_{k-1}(\hat{x}_k - \hat{x}_{fk}^-) \tag{9.126}$$
Now use Equation 9.91 to notice that the coefficient of $s_{k-1}^-$ on the left side of the above equation can be written as

$$P_{f,k-1}^+ - K_{k-1}[P_{fk}^- - P_{fk}^+ + K_k(P_{f,k+1}^- - P_{k+1})K_k^T]K_{k-1}^T \tag{9.127}$$

Using Equation 9.90 to substitute for $K_k P_{k+1} K_k^T$ allows us to write the above expression as

$$\begin{aligned} & P_{f,k-1}^+ - K_{k-1}[P_{fk}^- - P_{fk}^+ + K_k P_{f,k+1}^- K_k^T - P_k + P_{fk}^+ - K_k P_{f,k+1}^- K_k^T]K_{k-1}^T \\ &\quad = P_{f,k-1}^+ - K_{k-1}P_{fk}^- K_{k-1}^T + K_{k-1}P_k K_{k-1}^T \\ &\quad = P_{f,k-1}^+ - K_{k-1}P_{fk}^- K_{k-1}^T + P_{k-1} - P_{f,k-1}^+ + K_{k-1}P_{fk}^- K_{k-1}^T \\ &\quad = P_{k-1} \end{aligned} \tag{9.128}$$

Since this is the coefficient of $s_{k-1}^-$ in Equation 9.126, we can write that equation as

$$P_{k-1}s_{k-1}^- - P_{f,k-1}^+ I_{b,k-1}^-(P_{f,k-1}^+ I_{b,k-1}^- + I)^{-1}\hat{x}_{f,k-1}^+ = K_{k-1}(\hat{x}_k - \hat{x}_{fk}^-) \tag{9.129}$$

Now from Equations 9.78 and 9.82 we see that

$$\hat{x}_k = (I_{fk}^+ + I_{bk}^-)^{-1}(I_{fk}^+\hat{x}_{fk}^+ + I_{bk}^-\hat{x}_{bk}^-) = P_k I_{fk}^+\hat{x}_{fk}^+ + P_k s_k^- \tag{9.130}$$

From this we see that

$$\begin{aligned} \hat{x}_k - \hat{x}_{fk}^+ &= [(I_{fk}^+ + I_{bk}^-)^{-1}I_{fk}^+ - I]\hat{x}_{fk}^+ + P_k s_k^- \\ &= -(I_{fk}^+ + I_{bk}^-)^{-1}I_{bk}^-\hat{x}_{fk}^+ + P_k s_k^- \\ &= -P_k I_{bk}^-\hat{x}_{fk}^+ + P_k s_k^- \end{aligned} \tag{9.131}$$

Rewriting the above equation with the time subscripts $(k-1)$, and noting that $P_{k-1}I_{b,k-1}^- = (I_{f,k-1}^+ + I_{b,k-1}^-)^{-1}I_{b,k-1}^- = P_{f,k-1}^+ I_{b,k-1}^-(P_{f,k-1}^+ I_{b,k-1}^- + I)^{-1}$, its left side matches the left side of Equation 9.129. Substituting gives

$$\hat{x}_{k-1} - \hat{x}_{f,k-1}^+ = K_{k-1}(\hat{x}_k - \hat{x}_{fk}^-) \tag{9.132}$$

from which we can write

$$\hat{x}_k = \hat{x}_{fk}^+ + K_k(\hat{x}_{k+1} - \hat{x}_{f,k+1}^-) \tag{9.133}$$

This gives the smoothed estimate $\hat{x}_k$ without needing to explicitly calculate the backward estimate.
The RTS smoother is implemented by first running the standard Kalman filter of Equation 9.67 forward in time to the final time, and then implementing Equations 9.90, 9.91, and 9.133 backward in time.
The RTS smoother can be summarized as follows.
The RTS smoother
1. The system model is given as follows:

$$\begin{aligned} x_k &= F_{k-1}x_{k-1} + G_{k-1}u_{k-1} + w_{k-1} \\ y_k &= H_k x_k + v_k \\ w_k &\sim (0, Q_k) \\ v_k &\sim (0, R_k) \end{aligned} \tag{9.134}$$

2. Initialize the forward filter as follows:

$$\begin{aligned} \hat{x}_{f0}^+ &= E(x_0) \\ P_{f0}^+ &= E[(x_0 - \hat{x}_{f0}^+)(x_0 - \hat{x}_{f0}^+)^T] \end{aligned} \tag{9.135}$$

3. For $k = 1, \cdots, N$ (where $N$ is the final time), execute the standard forward Kalman filter:

$$\begin{aligned} P_{fk}^- &= F_{k-1}P_{f,k-1}^+ F_{k-1}^T + Q_{k-1} \\ K_{fk} &= P_{fk}^- H_k^T (H_k P_{fk}^- H_k^T + R_k)^{-1} = P_{fk}^+ H_k^T R_k^{-1} \\ \hat{x}_{fk}^- &= F_{k-1}\hat{x}_{f,k-1}^+ + G_{k-1}u_{k-1} \\ \hat{x}_{fk}^+ &= \hat{x}_{fk}^- + K_{fk}(y_k - H_k\hat{x}_{fk}^-) \\ P_{fk}^+ &= (I - K_{fk}H_k)P_{fk}^-(I - K_{fk}H_k)^T + K_{fk}R_k K_{fk}^T \\ &= [(P_{fk}^-)^{-1} + H_k^T R_k^{-1} H_k]^{-1} \\ &= (I - K_{fk}H_k)P_{fk}^- \end{aligned} \tag{9.136}$$

4. Initialize the RTS smoother as follows:

$$\hat{x}_N = \hat{x}_{fN}^+, \qquad P_N = P_{fN}^+ \tag{9.137}$$

5. For $k = N-1, \cdots, 1, 0$, execute the following RTS smoother equations:

$$\begin{aligned} I_{f,k+1}^- &= (P_{f,k+1}^-)^{-1} \\ K_k &= P_{fk}^+ F_k^T I_{f,k+1}^- \\ P_k &= P_{fk}^+ - K_k(P_{f,k+1}^- - P_{k+1})K_k^T \\ \hat{x}_k &= \hat{x}_{fk}^+ + K_k(\hat{x}_{k+1} - \hat{x}_{f,k+1}^-) \end{aligned} \tag{9.138}$$
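Steps 3 and 5 above translate directly into code. A compact sketch (the variable names and the toy model are mine; the control input $G_{k-1}u_{k-1}$ is omitted):

```python
import numpy as np

def rts_smooth(F, H, Q, R, x0, P0, ys):
    """Forward Kalman filter followed by the RTS backward pass
    (Equations 9.136-9.138, with no control input)."""
    n = len(x0)
    xm, Pm, xp, Pp = [], [], [], []        # priors and posteriors, k = 1..N
    x, P = x0, P0
    for y in ys:                           # forward pass (Eq. 9.136)
        x_pr = F @ x
        P_pr = F @ P @ F.T + Q
        K = P_pr @ H.T @ np.linalg.inv(H @ P_pr @ H.T + R)
        x = x_pr + K @ (y - H @ x_pr)
        P = (np.eye(n) - K @ H) @ P_pr
        xm.append(x_pr); Pm.append(P_pr); xp.append(x); Pp.append(P)
    xs, Ps = [xp[-1]], [Pp[-1]]            # initialization (Eq. 9.137)
    for k in range(len(ys) - 2, -1, -1):   # backward sweep (Eq. 9.138)
        Kk = Pp[k] @ F.T @ np.linalg.inv(Pm[k + 1])
        xs.insert(0, xp[k] + Kk @ (xs[0] - xm[k + 1]))
        Ps.insert(0, Pp[k] - Kk @ (Pm[k + 1] - Ps[0]) @ Kk.T)
    return xs, Ps, xp, Pp

# Hypothetical one-state random-walk model, just to exercise the recursions.
F = np.eye(1); H = np.eye(1); Q = 0.01 * np.eye(1); R = np.eye(1)
ys = [np.array([v]) for v in (0.9, 1.1, 1.0, 0.8, 1.2)]
xs, Ps, xp, Pp = rts_smooth(F, H, Q, R, np.zeros(1), 10.0 * np.eye(1), ys)
```

The backward sweep needs only the quantities stored during the forward pass, which is the computational advantage of the RTS form: smoothing never increases the covariance relative to the forward filter at any interior time.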
9.5 Summary
In this chapter we derived the optimal smoothing filters.
These filters, sometimes called retrodiction filters Bar01, include the
following variants.
$\hat{x}_{j,k} = E(x_j \mid y_1, \cdots, y_k)$ $(k \geq j)$ is the output of the fixed-point smoother. In this filter we find the estimate of the state at the fixed time $j$ when measurements continue to arrive at the filter at times greater than $j$. The time index $j$ is fixed while $k$ continues to increase as we obtain more measurements.
$\hat{x}_{k-N,k} = E(x_{k-N} \mid y_1, \cdots, y_k)$ for a fixed $N$ is the output of the fixed-lag smoother. In this filter we find the estimate of the state at each time $k$ while using measurements up to and including time $(k+N)$. The time index $k$ varies while the lag $N$ remains fixed.
$\hat{x}_{k,N} = E(x_k \mid y_1, \cdots, y_N)$ for a fixed $N$ is the output of the fixed-interval smoother. In this filter we find the estimate of the state at each time $k$ while using measurements up to and including time $N$. The time index $k$ varies while the total number of measurements $N$ is fixed. The two formulas we derived for this type of smoothing included the forward-backward smoother and the RTS smoother.
Just as steady-state filters can be used for standard filtering, we can also derive steady-state smoothers to save computational effort Gel74. An early survey of smoothing algorithms is given in Med73.