Sensor Fusion Study - Ch12. Additional Topics in H-Infinity Filtering [Hayden]
Chapter 12: Additional Topics in H∞ Filtering

"[H∞ filters,] since [they] make no assumption about the disturbances, they have to accommodate for all conceivable disturbances, and are thus over-conservative."
- Babak Hassibi and Thomas Kailath [Has95]
References

[Ber89] D. Bernstein and W. Haddad, "Steady-state Kalman filtering with an H∞ error bound," Systems and Control Letters, 12, pp. 9-16 (1989). https://doi.org/10.1016/0167-6911(89)90089-3
[Bet94] M. Bettayeb and D. Kavranoglu, "Reduced order H∞ filtering," American Control Conference, pp. 1884-1888 (June 1994). https://ieeexplore.ieee.org/abstract/document/752400/
[Fle81] R. Fletcher, Practical Methods of Optimization - Volume 2: Constrained Optimization, John Wiley & Sons, New York, 1981.
[Gil81] P. Gill, W. Murray, and M. Wright, Practical Optimization, Academic Press, New York, 1981.
[Gri97] K. Grigoriadis and J. Watson, "Reduced-order H∞ and L2 - L∞ filtering via linear matrix inequalities," IEEE Transactions on Aerospace and Electronic Systems, 33(4), pp. 1328-1338 (October 1997). https://ieeexplore.ieee.org/abstract/document/625133/
[Gri91a] M. Grimble, "H∞ fixed lag smoothing filter for scalar systems," IEEE Transactions on Signal Processing, 39(9), pp. 1955-1963 (September 1991). https://ieeexplore.ieee.org/abstract/document/134428
[Had91] W. Haddad, D. Bernstein, and D. Mustafa, "Mixed-norm H2/H∞ regulation and estimation: The discrete-time case," Systems and Control Letters, 16, pp. 235-247 (1991). https://www.sciencedirect.com/science/article/pii/0167691191900113
[Has95] B. Hassibi and T. Kailath, "H∞ adaptive filtering," IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 949-952 (1995). https://ieeexplore.ieee.org/document/480332/
[Has96a] B. Hassibi, A. Sayed, and T. Kailath, "Linear estimation in Krein spaces - Part I: Theory," IEEE Transactions on Automatic Control, 41(1), pp. 18-33 (January 1996). https://ieeexplore.ieee.org/abstract/document/481605
[Has96b] B. Hassibi, A. Sayed, and T. Kailath, "Linear estimation in Krein spaces - Part II: Applications," IEEE Transactions on Automatic Control, 41(1), pp. 34-49 (January 1996). https://ieeexplore.ieee.org/abstract/document/481606
[Has99] B. Hassibi, A. Sayed, and T. Kailath, Indefinite-Quadratic Estimation and Control, SIAM, Philadelphia, Pennsylvania, 1999.
[Hay98] S. Hayward, "Constrained Kalman filter for least-squares estimation of time-varying beamforming weights," in Mathematics in Signal Processing IV, J. McWhirter and I. Proudler (Eds.), Oxford University Press, pp. 113-125, 1998.
[Hun03] Y. Hung and F. Yang, "Robust H∞ filtering with error variance constraints for discrete time-varying systems with uncertainty," Automatica, 39(7), pp. 1185-1194 (July 2003). https://www.sciencedirect.com/science/article/pii/S0005109803001171
[Ko06] S. Ko and R. Bitmead, "State estimation for linear systems with state equality constraints," Automatica, 2006. https://www.sciencedirect.com/science/article/pii/S0005109807001306
[Mah04b] M. Mahmoud, "Resilient linear filtering of uncertain systems," Automatica, 40(10), pp. 1797-1802 (October 2004). https://www.sciencedirect.com/science/article/pii/S000510980400161X
[Por88] J. Porrill, "Optimal combination and constraints for geometrical sensor data," International Journal of Robotics Research, 7(6), pp. 66-77 (December 1988). http://journals.sagepub.com/doi/10.1177/027836498800700606
[Sim96] D. Simon and H. El-Sherief, "Hybrid Kalman/minimax filtering in phase-locked loops," Control Engineering Practice, 4, pp. 615-623 (October 1996). https://www.sciencedirect.com/science/article/pii/0967066196000433
[Sim06c] D. Simon, "A game theory approach to constrained minimax state estimation," IEEE Transactions on Signal Processing, 54(2), pp. 405-412 (February 2006). http://ieeexplore.ieee.org/document/1576971/
[The94a] Y. Theodor, U. Shaked, and C. de Souza, "A game theory approach to robust discrete-time H∞ estimation," IEEE Transactions on Signal Processing, 42(6), pp. 1486-1495 (June 1994). https://ieeexplore.ieee.org/abstract/document/286964
[The94b] Y. Theodor and U. Shaked, "Game theory approach to H∞-optimal discrete-time fixed-point and fixed-lag smoothing," IEEE Transactions on Automatic Control, 39(9), pp. 1944-1948 (September 1994). https://ieeexplore.ieee.org/abstract/document/317131
[Wen92] W. Wen and H. Durrant-Whyte, "Model-based multi-sensor data fusion," IEEE International Conference on Robotics and Automation, pp. 1720-1726, May 1992. https://ieeexplore.ieee.org/document/220130/
[Xie04] L. Xie, L. Lu, D. Zhang, and H. Zhang, "Improved robust H2 and H∞ filtering for uncertain discrete-time systems," Automatica, 40(5), pp. 873-880 (May 2004). https://linkinghub.elsevier.com/retrieve/pii/S0005109804000172
[Xu02] S. Xu and T. Chen, "Reduced-order H∞ filtering for stochastic systems," IEEE Transactions on Signal Processing, 50(12), pp. 2998-3007 (December 2002). https://ieeexplore.ieee.org/abstract/document/1075993/
[Yae92] I. Yaesh and U. Shaked, "Game theory approach to state estimation of linear discrete-time processes and its relation to H∞-optimal estimation," International Journal of Control, 55(6), pp. 1443-1452 (1992). https://www.tandfonline.com/doi/abs/10.1080/00207179208934293
[Yae04] I. Yaesh and U. Shaked, "Min-max Kalman filtering," Systems and Control Letters, 53(3-4), pp. 217-228 (November 2004). https://linkinghub.elsevier.com/retrieve/pii/S0167691104000672
[Yoo04] M. Yoon, V. Ugrinovskii, and I. Petersen, "Robust finite horizon minimax filtering for discrete-time stochastic uncertain systems," Systems and Control Letters, 52(2), pp. 99-112 (June 2004). https://linkinghub.elsevier.com/retrieve/pii/S0167691103003001
[Zha05a] H. Zhang, L. Xie, Y. Chai Soh, and D. Zhang, "H∞ fixed-lag smoothing for discrete linear time-varying systems," Automatica, 41(5), pp. 839-846 (May 2005). https://www.sciencedirect.com/science/article/pii/S0005109805000117
[Zha05b] Y. Zhang, Y. Soh, and W. Chen, "Robust information filter for decentralized estimation," Automatica, 41(12), pp. 2141-2146 (December 2005). https://www.sciencedirect.com/science/article/pii/S0005109805002700
Introduction

Some advanced topics in H∞ filtering:
- H∞ filtering research started in the 1980s, so it is less mature than Kalman filtering.
- There is more room for development in H∞ filtering.
- This chapter covers the current research directions in H∞ filtering.

Outline

Section 12.1: The mixed Kalman/H∞ estimation problem.
- A filter that satisfies an H∞ performance bound while at the same time minimizing a Kalman performance bound.
Section 12.2: The robust mixed Kalman/H∞ estimation problem.
- Additional uncertainties in the system matrices.
Section 12.3: The constrained H∞ filter, with equality (or inequality) constraints on the states.

12.1 Mixed Kalman/H∞ Filtering

A filter that combines the best features of Kalman filtering with the best features of H∞ filtering.
This problem can be attacked a couple of different ways.
Recall from Section 5.2 (pp. 129) the cost function that is minimized by the steady-state Kalman filter: (12.1)
Recall from Section 11.3 (pp. 343-361) the cost function that is minimized by the steady-state H∞ state estimator if Sk and Lk are identity matrices: (12.2)
Loosely speaking:
- The Kalman filter minimizes the RMS estimation error.
- The H∞ filter minimizes the worst-case estimation error.
In [Had91] these two performance objectives are combined to form the following problem. Given the n-state observable LTI system
xk+1 = F xk + wk, yk = H xk + vk (12.3)
where wk and vk are uncorrelated zero-mean white noise processes with covariances Q and R respectively, find an estimator of the form
x̂k+1 = F̂ x̂k + K yk (12.4)
with the following criteria:
1. F̂ is a stable matrix (so the estimator is stable).
2. The H∞ cost function is bounded by a user-specified parameter: (12.5)
3. Among all estimators satisfying the above criteria, the filter minimizes the Kalman filter cost function J2.
The solution to this problem provides the best RMS estimation error among all estimators that bound the worst-case estimation error.
The filter that solves this problem is given as follows.

The Mixed Kalman/H∞ Filter

1. Find the positive semidefinite n × n matrix P that satisfies the Riccati equation (12.6), where Pa and V are defined as in (12.7).
2. Derive the F̂ and K matrices in Equation (12.4) as shown in (12.8).
3. The estimator of Equation (12.4) satisfies the mixed Kalman/H∞ estimation problem if and only if F̂ is stable. In this case, the state estimation error satisfies the bound (12.9).
If θ = 0, the problem reduces to the Kalman filter problem statement.
In this case we can see that Equation (12.6) reduces to the discrete-time algebraic Riccati equation that is associated with the Kalman filter (see Problem 12.2 and Section 7.3, pp. 193).
The continuous-time version of this theory is given in [Ber89].
Example 12.1

In this example, we take another look at the scalar system that is described in Example 11.2 (pp. 356):
xk+1 = xk + wk, yk = xk + vk (12.10)
where wk and vk are uncorrelated zero-mean white noise processes with covariances Q and R, respectively.
Equation (12.6), the Riccati equation for the mixed Kalman/H∞ filter, reduces to the following scalar equation: (12.11)
Suppose that (for some value of Q, R, and θ) this equation has a solution P ≥ 0 and |1 - K| < 1, where the filter gain K from Equation (12.8) is given as in (12.12).
Then J∞ from Equation (12.2) is bounded from above by 1/θ, and the variance of the state estimation error is bounded from above by P.
1. The top half of Figure 12.1 shows the Kalman filter performance bound P and the estimator gain K as a function of θ when Q = R = 1.
- Note that at θ = 0 the mixed Kalman/H∞ filter reduces to a standard Kalman filter.
- The performance bound P ≈ 1.62.
- The estimator gain K ≈ 0.62, as discussed in Example 11.2.
- However, if θ = 0 then we do not have any guarantee on the worst-case performance index J∞.
- From the top half of Figure 12.1, we see that as θ increases, the performance bound P increases, which means that our Kalman performance index gets worse.
- However, at the same time, the worst-case performance index J∞ decreases as θ increases.
- From the bottom half of Figure 12.1, we see that as θ increases, K increases, which is consistent with better H∞ performance and worse Kalman performance (see Example 11.2).
- When θ reaches about 0.91, numerical difficulties prevent a solution to the mixed filter problem.
2. The bottom half of Figure 12.1 shows that at θ = 0.5 the estimator gain K ≈ 0.76.
- Recall from Example 11.2 that the H∞ filter had an estimator gain K = 1 for the same value of θ.
- This shows that the mixed Kalman/H∞ filter has a smaller estimator gain (for the same θ) than the pure H∞ filter.
- In other words, the mixed filter uses a lower gain in order to obtain better Kalman performance, whereas the pure H∞ filter uses a higher gain because it does not take Kalman performance into account.
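The images of Equations (12.11) and (12.12) did not survive in these notes, but the θ = 0 limit quoted above can be checked directly: at θ = 0 the mixed filter reduces to the standard Kalman filter for the random-walk system of Example 11.2, and iterating the scalar Kalman Riccati equation with Q = R = 1 reproduces P ≈ 1.62 and K ≈ 0.62. A minimal sketch:

```python
# theta = 0 check for Example 12.1: the mixed Kalman/H-infinity filter
# reduces to the Kalman filter for x(k+1) = x(k) + w(k), y(k) = x(k) + v(k).
Q, R = 1.0, 1.0

# Iterate the a priori Kalman Riccati recursion P <- P - P^2/(P + R) + Q
# to steady state; the fixed point solves P^2 - P - 1 = 0.
P = 1.0
for _ in range(100):
    P = P - P ** 2 / (P + R) + Q

K = P / (P + R)  # steady-state Kalman gain

print(round(P, 2), round(K, 2))  # 1.62 0.62 (the golden ratio and its inverse)
```

This agrees with the values P ≈ 1.62 and K ≈ 0.62 read off Figure 12.1 at θ = 0.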
Although the approach presented above is a theoretically elegant method of obtaining a mixed Kalman/H∞ filter, the solution of the Riccati equation can be challenging for problems with a large number of states.
Other more straightforward approaches can be used to combine the Kalman and H∞ filters.
For example, if the steady-state Kalman filter gain for a given problem is denoted as K2 and the steady-state H∞ filter gain is denoted as K∞, then a hybrid filter gain can be constructed as (12.13), where d ∈ [0, 1].
This hybrid filter gain is a convex combination of the Kalman and H∞ filter gains, which would be expected to provide a balance between RMS and worst-case performance [Sim96].
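A minimal sketch of the hybrid gain of Equation (12.13); the equation image is missing from these notes, so the convex combination K = d K2 + (1 - d) K∞ below is an assumption consistent with the surrounding text. The numeric gains are the scalar values quoted in Examples 12.1 and 11.2:

```python
import numpy as np

def hybrid_gain(K2, Kinf, d):
    """Convex combination of the steady-state Kalman gain K2 and the
    steady-state H-infinity gain Kinf, per Equation (12.13) as assumed above.
    d = 1 recovers the Kalman gain (best RMS performance);
    d = 0 recovers the H-infinity gain (best worst-case performance)."""
    if not 0.0 <= d <= 1.0:
        raise ValueError("d must lie in [0, 1]")
    return d * np.asarray(K2) + (1.0 - d) * np.asarray(Kinf)

# Scalar gains quoted in Example 12.1 (Kalman) and Example 11.2 (H-infinity):
print(hybrid_gain(0.62, 1.0, 0.5))  # 0.81
```

The same function works unchanged for matrix gains, since the combination is elementwise.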
However, this approach is not as attractive theoretically, since stability must be determined numerically, and no a priori bounds on the Kalman or H∞ performance measures can be given.
Analytical determination of stability and performance bounds for this type of filter is an open research issue.

12.2 Robust Kalman/H∞ Filtering

The material in this section is based on [Hun03].
In most practical problems, an exact model of the system may not be available.
The performance of the system in the presence of model uncertainties becomes an important issue.

System

For example, suppose we have a system given as (12.14), where wk and vk are uncorrelated zero-mean white noise processes with covariances Qk and Rk, respectively.

Matrices

Matrices ΔFk and ΔHk represent uncertainties in the system and measurement matrices.
These uncertainties are assumed to be of the form (12.15)
where M1k, M2k, and Nk are known matrices, and Δk is an unknown matrix satisfying the bound (12.16).
[Recall that we use the general notation A ≤ B to denote that A - B is a negative semidefinite matrix.]
Assume that Fk is nonsingular.
- This assumption is not too restrictive; Fk should always be nonsingular for a real system because it comes from the matrix exponential of the system matrix of a continuous-time system, and the matrix exponential is always nonsingular (see Sections 1.2 (pp. 18, Linear Systems) and 1.4 (pp. 26, Discretization)).
The problem is to design a state estimator of the form (12.17) with the following characteristics:
1. The estimator is stable (i.e., the eigenvalues of F̂k are less than one in magnitude).
2. The estimation error x̃k satisfies the following worst-case bound: (12.18)
3. The estimation error x̃k satisfies the following RMS bound: (12.19)
The solution to the problem can be found by the following procedure [Hun03].

The Robust Mixed Kalman/H∞ Filter

1. Choose some scalar sequence αk > 0, and a small scalar ε > 0.
2. Define the following matrices: (12.20)
3. Initialize Pk and P̃k as follows: (12.21)
4. Find positive definite solutions Pk and P̃k satisfying the following Riccati equations: (12.22)
where the matrices R1k, R2k, F1k, H1k, and Tk are defined as in (12.23).
5. If the Riccati equation solutions satisfy the conditions of (12.24), then the estimator of Equation (12.17) solves the problem, with the bound given in (12.25).
The parameter ε is generally chosen as a very small positive number.
- In the example in [Hun03] the value is ε = 10⁻⁸.
The parameter αk has to be chosen large enough so that the conditions of Equation (12.24) are satisfied.
- However, as αk increases, Pk also increases, which results in a looser bound on the RMS estimation error.
A steady-state robust filter can be obtained by letting Pk+1 = Pk and P̃k+1 = P̃k in Equation (12.22) and removing all the time subscripts (assuming that the system is time-invariant).
- But the resulting coupled steady-state Riccati equations will be more difficult to solve than the discrete-time Riccati equations in Equation (12.22), which can be solved by a simple (albeit tedious) iterative process.
Similar problems have been solved in [Mah04b, Xie04, Yoo04].

Example 12.2

An angular positioning system (motor).

Definitions
- The moment of inertia of the motor and its load is J.
- The coefficient of viscous friction is B.
- The torque that is applied to the motor is cu + w, where u is the applied voltage, c is a motor constant that relates applied voltage to generated torque, and w is unmodeled torque that can be considered as noise.

Differential Equation
The differential equation for this system is given as (12.26), where θ is the motor shaft angle.
We choose the states as x(1) = θ and x(2) = θ̇.

Dynamic System Model

The dynamic system model is (12.27).
Discretizing the system with a sample time T, we use the method of Section 1.4 to obtain (12.28).
The discrete-time system matrices are given by (12.29).
If our measurement is the angular position θ corrupted by noise, then our measurement equation can be written as (12.30).
Suppose the system has
- a torque disturbance wk with a standard deviation of 2, and
- a measurement noise vk with a standard deviation of 0.2 degrees.
Run the Kalman filter and the robust mixed Kalman/H∞ filter for this problem.

Interpretation

Figure 12.2 shows the position and velocity estimation errors of the Kalman and robust mixed Kalman/H∞ filters.
The robust filter performs better at the beginning of the simulation, although the Kalman filter performs better in steady state.
- [It is not easy to see the advantage of the Kalman filter during steady state in Figure 12.2 because of the scale, but over the last half of the plot the Kalman filter estimation errors have standard deviations of 0.33 deg and 1.65 deg/s, whereas the robust filter has standard deviations of 0.36 deg and 4.83 deg/s.]
Now suppose that the moment of inertia of the motor changes by a factor of 100.
- That is, the filter assumes a value of J that differs by a factor of 100 from its true value.
- In this case, Figure 12.3 shows the position and velocity estimation errors of the Kalman and robust filters.
- It is apparent that in this case the robust filter performs better not only at the beginning of the simulation, but in steady state also.
- After the filters reach "steady state" (which we have defined somewhat arbitrarily as the time at which the position estimation error magnitude falls below 1 degree), the RMS Kalman filter estimation errors are 0.36 degrees for position and 1.33 degrees/s for velocity, whereas the robust filter RMS estimation errors are 0.28 degrees for position and 1.29 degrees/s for velocity.
- The square roots of the diagonal elements of the Riccati equation solution Pk of Equation (12.22) reach steady-state values of 0.51 degrees and 1.52 degrees/s, which shows that the estimation-error variance is indeed bounded by the Riccati equation solution Pk.
12.3 Constrained H∞ Filtering

As in Section 7.5 (pp. 212, Constrained Kalman Filtering), suppose that we know (on the basis of physical considerations) that the states satisfy some equality constraint Dk xk = dk, or some inequality constraint Dk xk ≤ dk, where Dk is a known matrix and dk is a known vector.
This section discusses how those constraints can be incorporated into the H∞ filter equations.
As discussed in Section 7.5, state equality constraints can always be handled by
- Reducing the system-model parameterization [Wen92]
- Treating state equality constraints as perfect measurements [Por88, Hay98].
However, these approaches cannot be extended to inequality constraints.
The approach summarized in this section is to incorporate the state constraints into the derivation of the H∞ filter [Sim06c].
Consider the discrete LTI system given by (12.31), where yk is the measurement, wk and vk are uncorrelated white noise sequences with respective covariances Q and I, and δk is a noise sequence generated by an adversary (i.e., nature).
Note that we are assuming that the measurement noise has a unity covariance matrix.
- In a real system, if the measurement noise covariance is not equal to the identity matrix, then we will have to normalize the measurement equation as shown in Example 12.3 below.
In general, F, H, and Q can be time-varying matrices, but we will omit the time subscript on these matrices for ease of notation.
In addition to the state equation, we know (on the basis of physical considerations or other a priori information) that the states satisfy the following constraint: Dk xk = dk (12.32).
We assume that the matrix Dk is full rank and normalized so that Dk Dkᵀ = I.
In general, Dk is an s × n matrix, where s is the number of constraints, n is the number of states, and s < n.
If s = n, then Equation (12.32) completely defines xk, which makes the estimation problem trivial (i.e., x̂k = Dk⁻¹ dk).
For s < n, which is the case in this section, there are fewer constraints than states, which makes the estimation problem nontrivial.
Assuming that Dk is full rank is the same as the assumption made in the constrained Kalman filtering problem of Section 7.5.
For notational convenience we define the matrix Vk = Dkᵀ Dk (12.33).
We assume that both the noisy system and the noise-free system satisfy the state constraint.
The problem is to find an estimate x̂k+1 of xk+1 given the measurements {y1, y2, ..., yk}. The estimate should satisfy the state constraint.
We will assume that the estimate is given by the following standard predictor/corrector form: (12.34)
The noise δk in (12.31) is introduced by an adversary that has the goal of maximizing the estimation error.
We will assume that our adversary's input to the system is given as follows: (12.35)
where Lk is a gain to be determined, Gk is a given matrix, and nk is a noise sequence with variance equal to the identity matrix.
We assume that nk is uncorrelated with wk, vk, and x0.
This form of the adversary's input is not intuitive because it is based on the state estimation error, but this form is taken because the solution of the resulting problem results in a state estimator that bounds the infinity-norm of the transfer function from the random noise terms to the state estimation error [Yae92].
Gk in Equation (12.35) is chosen by the designer as a tuning parameter or weighting matrix that can be adjusted on the basis of our a priori knowledge about the adversary's noise input.
- Suppose, for example, that we know ahead of time that the first component of the adversary's noise input to the system is twice the magnitude of the second component, the third component is zero, and so on; then that information can be reflected in the designer's choice of Gk.
We do not need to make any assumptions about the form of Gk (e.g., it does not need to be positive definite or square).
From Equation (12.35) we see that as Gk approaches the zero matrix, the adversary's input becomes a purely random process without any deterministic component.
- This causes the resulting filter to approach the Kalman filter; that is, we obtain better RMS error performance but poorer worst-case error performance.
As Gk becomes large, the filter places more emphasis on minimizing the estimation error due to the deterministic component of the adversary's input.
- That is, the filter assumes less about the adversary's input, and we obtain better worst-case error performance but worse RMS error performance.
The estimation error is defined as ek = xk - x̂k (12.36).
It can be shown from the preceding equations that the dynamic system describing the estimation error is given as (12.37).
Since Dk xk = Dk x̂k = dk, we see that Dk ek = 0.
But it can also be shown [Sim06c] that Dk+1 F ek = 0.
Therefore, we can subtract the zero term Dk+1ᵀ Dk+1 F ek = Vk+1 F ek from Equation (12.37) to obtain (12.38).
However, this is an inappropriate term for a minimax problem because the adversary can arbitrarily increase ek by arbitrarily increasing Lk.
To prevent this, we decompose ek as (12.39), where e1,k and e2,k evolve as follows: (12.40)
We define the objective function for the H∞ filtering problem as (12.41), where Wk is any positive definite weighting matrix.
The differential game is for the filter designer to find a gain sequence {Kk} that minimizes J, and for the adversary to find a gain sequence {Lk} that maximizes J.
As such, J is considered a function of {Kk} and {Lk}, which we denote in shorthand notation as K and L.
This objective function is not intuitive, but is used here because the solution of the problem results in a state estimator that bounds the infinity-norm of the transfer function from the random noise terms to the state estimation error [Yae92].
That is, suppose we can find an estimator gain K* that minimizes J(K, L) when the matrix Gk in (12.35) is equal to θI for some positive scalar θ.
Then the infinity-norm of the weighted transfer function from the noise terms wk and vk to the estimation error ek is bounded by 1/θ.
That is, (12.42)
where sup stands for supremum. The H∞ filtering solution is obtained by finding optimal gain sequences {Kk} and {Lk} that satisfy the following saddle point: (12.43)
This problem is solved subject to the constraint Dk x̂k = dk in [Sim06c], whose result is presented here.
We define Pk and Σk as the nonsingular solutions to the following set of equations: (12.44)
Nonsingular solutions to these equations are not always guaranteed to exist, in which case a solution to the H∞ filtering problem may not exist.
However, if nonsingular solutions do exist, then the following gain matrices for our estimator and adversary satisfy the constrained H∞ filtering problem: (12.45)
These matrices solve the constrained H∞ filtering problem only if (I - Gk Pk Gkᵀ) ≥ 0.
- Note that as Gk becomes larger, we will be less likely to satisfy this condition.
- From Equation (12.35) we see that a larger Gk gives the adversary more latitude in choosing a disturbance. This makes it less likely that the designer can minimize the cost function.
The mean square estimation error that results from using the optimal gain Kk* cannot be specified because it depends on the adversary's input δk.
However, we can state an upper bound for the mean square estimation error [Sim06c] as follows: (12.46)
This provides additional motivation for using the game theory approach presented in this section.
- The estimator not only bounds the worst-case estimation error, but also bounds the mean square estimation error.

The special case: no state constraints

In this case, in Equation (12.32) we can set the matrix Dk equal to a zero row vector and the vector dk equal to the zero scalar. Then Vk+1 = 0, and we obtain from Equations (12.44) and (12.45) the following estimator and adversary strategies: (12.47)
This is identical to the unconstrained H∞ estimator [Yae92].
- The unconstrained H∞ estimator for continuous-time systems is given in [Yae04].
In the case of state inequality constraints (i.e., constraints of the form Dk xk ≤ dk), a standard active-set method [Fle81, Gil81] can be used to solve the H∞ filtering problem.
An active-set method uses the fact that it is only those constraints that are active at the solution of the problem that affect the optimality conditions; the inactive constraints can be ignored.
- Therefore, an inequality-constrained problem is equivalent to an equality-constrained problem.
An active-set method determines which constraints are active at the solution of the problem and then solves the problem using the active constraints as equality constraints.
Inequality constraints will significantly increase the computational effort required for a problem solution because the active constraints need to be determined, but conceptually this poses no difficulty.
The Constrained H∞ Filter: Summary
1. We have a linear system given as (12.48)
where wk is the process noise, vk is the measurement noise, and the last equation above specifies equality constraints on the state.
We assume that the constraints are normalized so that Dk Dkᵀ = I.
The covariance of wk is equal to Qk, but wk might have a zero mean or it might have a nonzero mean (i.e., it might contain a deterministic component).
The covariance of vk is the identity matrix.
2. Initialize the filter as follows: (12.49)
3. At each time step k = 0, 1, ..., do the following.
(a) Choose the tuning parameter matrix Gk to weight the deterministic, biased component of the process noise. If Gk = 0 then we are assuming that the process noise is zero-mean and we get Kalman filter performance. As Gk increases we are assuming that there is more of a deterministic, biased component to the process noise. This gives us better worst-case error performance but worse RMS error performance.
(b) Compute the next state estimate as follows: (12.50)
(c) Verify that the condition in (12.51) holds. If not, then the filter is invalid.
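The equation images for the summary steps are missing from these notes, but the requirement that the estimate satisfy Dk x̂k = dk can be illustrated with a projection step: with the normalization Dk Dkᵀ = I assumed in step 1, projecting any estimate onto the constraint surface takes a particularly simple form. A sketch (the D, d, and x̂ values below are made up):

```python
import numpy as np

def project_onto_constraint(x_hat, D, d):
    """Project x_hat onto {x : D x = d}. With the normalization D D^T = I
    the projection reduces to x_hat - D^T (D x_hat - d)."""
    return x_hat - D.T @ (D @ x_hat - d)

# Made-up example: 4 states, one normalized constraint row.
D = np.array([[1.0, 0.0, 0.0, 0.0]])     # D D^T = I (here the 1x1 identity)
d = np.array([2.0])
x_hat = np.array([1.5, 0.3, -0.7, 2.2])

x_proj = project_onto_constraint(x_hat, D, d)
print(D @ x_proj)   # [2.]  -- the projected estimate satisfies the constraint
```

This is only an illustration of the constraint-satisfaction requirement; the filter of this section builds the constraint into the gain derivation itself rather than projecting after the fact.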
Example 12.3

Consider a land-based vehicle that is equipped to measure its latitude and longitude (e.g., through the use of a GPS receiver).
This is the same problem as that considered in Example 7.12.
The vehicle dynamics and measurements can be approximated by the following equations: (12.52)
The first two components of xk are the latitude and longitude positions, the last two components of xk are the latitude and longitude velocities, wk represents zero-mean process noise due to potholes and other disturbances, δk is additional unknown process noise, and uk is the commanded acceleration.
T is the sample period of the system, and α is the heading angle (measured counterclockwise from due east).
The measurement yk′ consists of latitude and longitude, and vk′ is the measurement noise.
Suppose the standard deviations of the measurement noises are known to be σ1 and σ2.
Then we must normalize our measurement equation to satisfy the condition that the measurement noise has a unity covariance.
We therefore define the normalized measurement yk as (12.53).
In our simulation we set the covariances of the process and measurement noise as follows: (12.54)
We can use an H∞ filter to estimate the position of the vehicle.
There may be times when the vehicle is traveling off-road, or on an unknown road, in which case the problem is unconstrained.
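The normalization of Equation (12.53) can be sketched as follows; since the equation images are missing, the σ values and the H matrix below are illustrative assumptions. Left-multiplying the measurement equation by R^(-1/2) makes the transformed noise covariance the identity, as the constrained H∞ filter of this section requires:

```python
import numpy as np

# Illustrative (assumed) measurement noise standard deviations:
sigma1, sigma2 = 3.0, 5.0
R = np.diag([sigma1 ** 2, sigma2 ** 2])        # original measurement covariance
R_inv_sqrt = np.diag([1 / sigma1, 1 / sigma2]) # R^(-1/2) for a diagonal R

# Assumed measurement matrix: measure the two position states of
# x = [lat pos, lon pos, lat vel, lon vel].
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
y_raw = np.array([100.0, 250.0])               # raw (unnormalized) measurement

# Normalized measurement and measurement matrix:
y = R_inv_sqrt @ y_raw
H_norm = R_inv_sqrt @ H

# The transformed noise covariance is R^(-1/2) R R^(-1/2) = I:
print(np.allclose(R_inv_sqrt @ R @ R_inv_sqrt, np.eye(2)))   # True
```

The filter is then run with y and H_norm in place of the raw quantities.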
At other times it may be known that the vehicle is traveling on a given road, in which case the state estimation problem is constrained.
For instance, if it is known that the vehicle is traveling on a straight road with a heading of α, then the Dk and dk of Equation (12.32) can be given as follows: (12.55)
We can enforce the condition Dk Dkᵀ = I by dividing Dk by √(1 + tan²α).
In our simulation we set the sample period T to 1 s and the heading angle α to a constant 60 degrees.
The commanded acceleration is toggled between ±10 m/s², as if the vehicle were accelerating and decelerating in traffic.
The initial conditions are set to (12.56).
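A sketch of the road constraint of Equation (12.55); the exact Dk in the notes is an image that did not survive, so the row structure below (position components constrained to the road direction in one row, velocity components in the other, with states ordered as in the text) is an assumption. The normalization by √(1 + tan²α) makes Dk Dkᵀ = I:

```python
import math
import numpy as np

alpha = math.radians(60)    # heading angle, counterclockwise from due east
t = math.tan(alpha)

# Assumed constraint rows for x = [lat pos, lon pos, lat vel, lon vel]:
# the position and the velocity are each aligned with the road heading.
# dk = 0 here, assuming the road passes through the origin.
D_raw = np.array([[1.0, -t, 0.0, 0.0],
                  [0.0, 0.0, 1.0, -t]])
d = np.zeros(2)

# Normalize so that D D^T = I, as the constrained filter requires.
D = D_raw / math.sqrt(1.0 + t ** 2)

print(np.allclose(D @ D.T, np.eye(2)))   # True
```

Each unnormalized row has squared norm 1 + tan²α and the rows are orthogonal, which is why the single scalar division is enough.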
We found via tuning that a Gk matrix of θI, with θ = 1/40, gave good filter performance.
- Smaller values of θ make the H∞ filter perform like a Kalman filter.
- Larger values of θ prevent the H∞ filter from finding a solution, as the positive definite conditions in Equations (12.44) and (12.45) are not satisfied.
This example could be solved by reducing the system-model parameterization [Wen92] or by introducing artificial perfect measurements into the problem [Hay98, Por88].
- In fact, those methods could be used for any estimation problem with equality constraints.
- However, those methods cannot be extended to inequality constraints, whereas the method discussed in this section can be extended to inequality constraints, as discussed earlier.
The unconstrained and constrained H∞ filters were simulated 100 times each, and the average RMS position and velocity estimation error magnitudes at each time step are plotted in Figure 12.4.
It can be seen that the constrained filter results in more accurate estimates.
- The unconstrained estimator results in position errors that average 35.3 m, whereas the constrained estimator gives position errors that average about 27.1 m.
- The unconstrained velocity estimation error is 12.9 m/s, whereas the constrained velocity estimation error is 10.9 m/s.
Table 12.1 shows a comparison of the unconstrained and constrained Kalman and H∞ filters when the noise statistics are nominal. Table 12.2 shows a comparison of the unconstrained and constrained Kalman and H∞ filters when the acceleration noise on the system has a bias of 1 m/s² in both the north and east directions. In both situations, the H∞ filter estimates position more accurately, but the Kalman filter estimates velocity more accurately. In the off-nominal noise case, the advantage of the H∞ filter over the Kalman filter for position estimation is more pronounced than when the noise is nominal.

12.4 Summary

Some advanced topics in the area of H∞ filtering:
- The mixed Kalman/H∞ estimation problem: minimizing a combination of the Kalman and H∞ filter performance indices.
- The goal is to balance the excessive optimism of the Kalman filter with the excessive pessimism of the H∞ filter.
- The robust mixed Kalman/H∞ estimation problem, where we took system-model uncertainties into account.
- The system model is never perfectly known.
- Constrained H∞ filtering, in which equality (or inequality) constraints are enforced on the state estimate.
- This can improve filter performance in cases in which we know that the state must satisfy certain constraints.
There is still a lot of room for additional work and development in H∞ filtering.
For example, reduced-order H∞ filtering tries to obtain good minimax estimation performance with a filter whose order is less than that of the underlying system.

Literature Review

- Reduced-order Kalman filtering is discussed in Section 10.3, and reduced-order H∞ filtering is considered in [Bet94, Gri97, Xu02].
- The approach taken in [Ko06] for constrained Kalman filtering may be applicable to constrained H∞ filtering and may give better results than the method discussed in this chapter.
- The use of Krein space approaches for solving various H∞ filtering problems is promising [Has96a, Has96b].
- H∞ smoothing is discussed in [Gri91a, The94b, Has99, Zha05a], and robust H∞ smoothing is discussed in [The94a].
- An information form for the H∞ filter (analogous to the Kalman information filter discussed in Section 6.2) is presented in [Zha05b].
- Approaches to dealing with delayed measurements and synchronization errors have been extensively explored for Kalman filters (see Section 10.5), but are notably absent in the H∞ filter literature.
- There has been a lot of work on nonlinear Kalman filtering (see Chapters 13-15), but not nearly as much on nonlinear H∞ filtering.