Chapter 12 Additional Topics in H∞ Filtering

Since H∞ filters make no assumption about the disturbances, they have to accommodate for all conceivable disturbances, and are thus over-conservative.
- Babak Hassibi and Thomas Kailath [Has95]
References
[Ber89] D. Bernstein and W. Haddad, "Steady-state Kalman filtering with an H∞ error bound," Systems and Control Letters, 12, pp. 9-16 (1989). https://doi.org/10.1016/0167-6911(89)90089-3
[Bet94] M. Bettayeb and D. Kavranoglu, "Reduced order H∞ filtering," American Control Conference, pp. 1884-1888 (June 1994). https://ieeexplore.ieee.org/abstract/document/752400/
[Fle81] R. Fletcher, Practical Methods of Optimization - Volume 2: Constrained Optimization, John Wiley & Sons, New York, 1981.
[Gil81] P. Gill, W. Murray, and M. Wright, Practical Optimization, Academic Press, New York, 1981.
[Gri97] K. Grigoriadis and J. Watson, "Reduced-order H∞ and L2 - L∞ filtering via linear matrix inequalities," IEEE Transactions on Aerospace and Electronic Systems, 33(4), pp. 1328-1338 (October 1997). https://ieeexplore.ieee.org/abstract/document/625133/
[Gri91a] M. Grimble, "H∞ fixed-lag smoothing filter for scalar systems," IEEE Transactions on Signal Processing, 39(9), pp. 1955-1963 (September 1991). https://ieeexplore.ieee.org/abstract/document/134428
[Had91] W. Haddad, D. Bernstein, and D. Mustafa, "Mixed-norm H2/H∞ regulation and estimation: The discrete-time case," Systems and Control Letters, 16, pp. 235-247 (1991). https://www.sciencedirect.com/science/article/pii/0167691191900113
[Has95] B. Hassibi and T. Kailath, "H∞ adaptive filtering," IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 949-952 (1995). ieeexplore.ieee.org/document/480332/
[Has96a] B. Hassibi, A. Sayed, and T. Kailath, "Linear estimation in Krein spaces - Part I: Theory," IEEE Transactions on Automatic Control, 41(1), pp. 18-33 (January 1996). https://ieeexplore.ieee.org/abstract/document/481605
[Has96b] B. Hassibi, A. Sayed, and T. Kailath, "Linear estimation in Krein spaces - Part II: Applications," IEEE Transactions on Automatic Control, 41(1), pp. 34-49 (January 1996). https://ieeexplore.ieee.org/abstract/document/481606
[Has99] B. Hassibi, A. Sayed, and T. Kailath, Indefinite-Quadratic Estimation and Control, SIAM, Philadelphia, Pennsylvania, 1999.
[Hay98] S. Hayward, "Constrained Kalman filter for least-squares estimation of time-varying beamforming weights," in Mathematics in Signal Processing IV, J. McWhirter and I. Proudler (Eds.), Oxford University Press, pp. 113-125, 1998.
[Hun03] Y. Hung and F. Yang, "Robust H∞ filtering with error variance constraints for discrete time-varying systems with uncertainty," Automatica, 39(7), pp. 1185-1194 (July 2003). https://www.sciencedirect.com/science/article/pii/S0005109803001171
[Ko06] S. Ko and R. Bitmead, "State estimation for linear systems with state equality constraints," Automatica, 2006. https://www.sciencedirect.com/science/article/pii/S0005109807001306
[Mah04b] M. Mahmoud, "Resilient linear filtering of uncertain systems," Automatica, 40(10), pp. 1797-1802 (October 2004). https://www.sciencedirect.com/science/article/pii/S000510980400161X
[Por88] J. Porrill, "Optimal combination and constraints for geometrical sensor data," International Journal of Robotics Research, 7(6), pp. 66-77 (December 1988). http://journals.sagepub.com/doi/10.1177/027836498800700606
[Sim96] D. Simon and H. El-Sherief, "Hybrid Kalman/minimax filtering in phase-locked loops," Control Engineering Practice, 4, pp. 615-623 (October 1996). https://www.sciencedirect.com/science/article/pii/0967066196000433
[Sim06c] D. Simon, "A game theory approach to constrained minimax state estimation," IEEE Transactions on Signal Processing, 54(2), pp. 405-412 (February 2006). http://ieeexplore.ieee.org/document/1576971/
[The94a] Y. Theodor, U. Shaked, and C. de Souza, "A game theory approach to robust discrete-time H∞ estimation," IEEE Transactions on Signal Processing, 42(6), pp. 1486-1495 (June 1994). https://ieeexplore.ieee.org/abstract/document/286964
[The94b] Y. Theodor and U. Shaked, "Game theory approach to H∞-optimal discrete-time fixed-point and fixed-lag smoothing," IEEE Transactions on Automatic Control, 39(9), pp. 1944-1948 (September 1994). https://ieeexplore.ieee.org/abstract/document/317131
[Wen92] W. Wen and H. Durrant-Whyte, "Model-based multi-sensor data fusion," IEEE International Conference on Robotics and Automation, pp. 1720-1726, May 1992. ieeexplore.ieee.org/document/220130/
[Xie04] L. Xie, L. Lu, D. Zhang, and H. Zhang, "Improved robust H2 and H∞ filtering for uncertain discrete-time systems," Automatica, 40(5), pp. 873-880 (May 2004). linkinghub.elsevier.com/retrieve/pii/S0005109804000172
[Xu02] S. Xu and T. Chen, "Reduced-order H∞ filtering for stochastic systems," IEEE Transactions on Signal Processing, 50(12), pp. 2998-3007 (December 2002). ieeexplore.ieee.org/abstract/document/1075993/
[Yae92] I. Yaesh and U. Shaked, "Game theory approach to state estimation of linear discrete-time processes and its relation to H∞-optimal estimation," International Journal of Control, 55(6), pp. 1443-1452 (1992). https://www.tandfonline.com/doi/abs/10.1080/00207179208934293
[Yae04] I. Yaesh and U. Shaked, "Min-max Kalman filtering," Systems and Control Letters, 53(3-4), pp. 217-228 (November 2004). linkinghub.elsevier.com/retrieve/pii/S0167691104000672
[Yoo04] M. Yoon, V. Ugrinovskii, and I. Petersen, "Robust finite horizon minimax filtering for discrete-time stochastic uncertain systems," Systems and Control Letters, 52(2), pp. 99-112 (June 2004). https://linkinghub.elsevier.com/retrieve/pii/S0167691103003001
[Zha05a] H. Zhang, L. Xie, Y. Chai Soh, and D. Zhang, "H∞ fixed-lag smoothing for discrete linear time-varying systems," Automatica, 41(5), pp. 839-846 (May 2005). https://www.sciencedirect.com/science/article/pii/S0005109805000117
[Zha05b] Y. Zhang, Y. Soh, and W. Chen, "Robust information filter for decentralized estimation," Automatica, 41(12), pp. 2141-2146 (December 2005). https://www.sciencedirect.com/science/article/pii/S0005109805002700
Introduction
Some advanced topics in H∞ filtering:
H∞ filtering research began in the 1980s, so it is less mature than Kalman filtering.
There is therefore more room for development in H∞ filtering.
This chapter surveys the current research directions in H∞ filtering.

Outline
Section 12.1: The mixed Kalman/H∞ estimation problem.
A filter that satisfies an H∞ performance bound while at the same time minimizing a Kalman performance bound.
Section 12.2: The robust mixed Kalman/H∞ estimation problem.
Additional uncertainties in the system matrices.
Section 12.3: The constrained H∞ filter, with equality (or inequality) constraints.

12.1 Mixed Kalman/H∞ Filtering
A filter that combines the best features of Kalman filtering with the best features of H∞ filtering.
This problem can be attacked a couple of different ways.
Recall from Section 5.2 (p. 129) the cost function J2 that is minimized by the steady-state Kalman filter (Equation (12.1)).
Recall from Section 11.3 (pp. 343-361) the cost function J∞ that is minimized by the steady-state H∞ state estimator when S_k and L_k are identity matrices (Equation (12.2)).
Loosely speaking:
The Kalman filter minimizes the RMS estimation error.
The H∞ filter minimizes the worst-case estimation error.
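As a concrete illustration of that distinction (not the exact cost functions of Equations (12.1) and (12.2), which involve expectations and a supremum over the disturbances), the empirical RMS error and worst-case error of one simulated estimate could be computed as in the following sketch:

```python
import numpy as np

def empirical_error_measures(x_true, x_hat):
    """Empirical RMS and worst-case estimation error norms over one simulation.

    Sample statistics used only to illustrate the distinction drawn in the text;
    the actual criteria J2 and J-infinity of Equations (12.1) and (12.2) are
    defined via expectations and a supremum over the disturbances.
    """
    err = np.linalg.norm(np.asarray(x_true) - np.asarray(x_hat), axis=1)
    rms_error = np.sqrt(np.mean(err ** 2))   # what the Kalman filter targets (loosely)
    worst_case = np.max(err)                 # what the H-infinity filter targets (loosely)
    return rms_error, worst_case
```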
In [Had91] these two performance objectives are combined to form the following problem: Given the n-state observable LTI system of Equation (12.3), where w_k and v_k are uncorrelated zero-mean white noise processes with covariances Q and R respectively, find an estimator of the form of Equation (12.4) with the following criteria:
1. F̂ is a stable matrix (so the estimator is stable).
2. The H∞ cost function is bounded by a user-specified parameter (Equation (12.5)).
3. Among all estimators satisfying the above criteria, the filter minimizes the Kalman filter cost function J2.
The solution to this problem provides the best RMS estimation error among all estimators that bound the worst-case estimation error.
The filter that solves this problem is given as follows.

The Mixed Kalman/H∞ Filter
1. Find the n × n positive semidefinite matrix P that satisfies the Riccati equation of Equation (12.6), where P_a and V are defined in Equation (12.7).
2. Derive the F̂ and K matrices in Equation (12.4) as given in Equation (12.8).
3. The estimator of Equation (12.4) satisfies the mixed Kalman/H∞ estimation problem if and only if F̂ is stable. In this case, the state estimation error satisfies the bound of Equation (12.9).
If θ = 0, the problem reduces to the Kalman filter problem statement.
In this case we can see that Equation (12.6) reduces to the discrete-time algebraic Riccati equation that is associated with the Kalman filter (see Problem 12.2 and Section 7.3, p. 193).
The continuous-time version of this theory is given in [Ber89].

Example 12.1
In this example, we take another look at the scalar system described in Example 11.2 (p. 356), given by Equation (12.10), where w_k and v_k are uncorrelated zero-mean white noise processes with covariances Q and R, respectively.
Equation (12.6), the Riccati equation for the mixed Kalman/H∞ filter, reduces to the scalar equation of Equation (12.11).
Suppose that (for some values of Q, R, and θ) this equation has a solution P ≥ 0 with |1 − K| < 1, where the filter gain K from Equation (12.8) is given by Equation (12.12).
Then J∞ from Equation (12.2) is bounded from above by 1/θ, and the variance of the state estimation error is bounded from above by P.
1. The top half of Figure 12.1 shows the Kalman filter performance bound P and the estimator gain K as a function of θ when Q = R = 1.
Note that at θ = 0 the mixed Kalman/H∞ filter reduces to a standard Kalman filter:
The performance bound is P ≈ 1.62.
The estimator gain is K ≈ 0.62, as discussed in Example 11.2.
However, if θ = 0 then we do not have any guarantee on the worst-case performance index J∞.
From the top half of Figure 12.1, we see that as θ increases, the performance bound P increases, which means that our Kalman performance index gets worse.
However, at the same time, the worst-case performance index J∞ decreases as θ increases.
From the bottom half of Figure 12.1, we see that as θ increases, K increases, which is consistent with better H∞ performance and worse Kalman performance (see Example 11.2).
When θ reaches about 0.91, numerical difficulties prevent a solution to the mixed filter problem.
2. The bottom half of Figure 12.1 shows that at θ = 0.5 the estimator gain is K ≈ 0.76.
Recall from Example 11.2 that the H∞ filter had an estimator gain K = 1 for the same value of θ.
This shows that the mixed Kalman/H∞ filter has a smaller estimator gain (for the same θ) than the pure H∞ filter.
In other words, the mixed filter uses a lower gain in order to obtain better Kalman performance, whereas the pure H∞ filter uses a higher gain because it does not take Kalman performance into account.
Although the approach presented above is a theoretically elegant method of obtaining a mixed Kalman/H∞ filter, the solution of the Riccati equation can be challenging for problems with a large number of states.
Other, more straightforward approaches can be used to combine the Kalman and H∞ filters.
For example, if the steady-state Kalman filter gain for a given problem is denoted as K2 and the steady-state H∞ filter gain is denoted as K∞, then a hybrid filter gain can be constructed as in Equation (12.13), where d ∈ [0, 1].
This hybrid filter gain is a convex combination of the Kalman and H∞ filter gains, which would be expected to provide a balance between RMS and worst-case performance [Sim96].
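As a rough sketch of this hybrid approach (an illustration only, not the estimator of Equation (12.4)): the helper below forms a convex combination of the two gains and drops it into a generic fixed-gain predictor/corrector filter. The convention that d weights the Kalman gain (d = 1 gives the Kalman filter, d = 0 the H∞ filter) is an assumption here; Equation (12.13) may use the opposite convention.

```python
import numpy as np

def hybrid_gain(K_kalman, K_hinf, d):
    """Convex combination of the steady-state Kalman and H-infinity gains.

    Assumes d weights the Kalman gain (d = 1 -> Kalman filter, d = 0 ->
    H-infinity filter); Equation (12.13) may use the opposite convention.
    """
    assert 0.0 <= d <= 1.0
    return d * np.asarray(K_kalman) + (1.0 - d) * np.asarray(K_hinf)

def run_fixed_gain_filter(F, H, K, measurements, x0):
    """Generic steady-state predictor/corrector filter with a fixed gain K.

    Illustrative structure only; it shows where the hybrid gain would be used,
    not the specific estimator form of Equation (12.4).
    """
    x_hat = np.asarray(x0, dtype=float)
    history = []
    for y in measurements:
        x_pred = F @ x_hat                       # time update
        x_hat = x_pred + K @ (y - H @ x_pred)    # measurement update with fixed gain
        history.append(x_hat.copy())
    return np.array(history)
```

For example, K = hybrid_gain(K2, Kinf, 0.7) weights the filter toward Kalman (RMS) performance, while a smaller d moves it toward worst-case performance.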
However, this approach is not as attractive theoretically, since stability must be determined numerically, and no a priori bounds on the Kalman or H∞ performance measures can be given.
Analytical determination of stability and performance bounds for this type of filter is an open research issue.

12.2 Robust Kalman/H∞ Filtering
The material in this section is based on [Hun03].
In most practical problems, an exact model of the system may not be available.
The performance of the filter in the presence of model uncertainties therefore becomes an important issue.
System
For example, suppose we have the system of Equation (12.14), where w_k and v_k are uncorrelated zero-mean white noise processes with covariances Q_k and R_k, respectively.
Matrices
The matrices ΔF_k and ΔH_k represent uncertainties in the system and measurement matrices.
These uncertainties are assumed to be of the form of Equation (12.15),
where M_1k, M_2k, and N_k are known matrices, and Γ_k is an unknown matrix satisfying the bound of Equation (12.16).
[Recall that we use the general notation A ≤ B to denote that (A − B) is a negative semidefinite matrix.]
Assume that F_k is nonsingular.
This assumption is not too restrictive; F_k should always be nonsingular for a real system, because it comes from the matrix exponential of the system matrix of a continuous-time system, and the matrix exponential is always nonsingular (see Sections 1.2 (p. 18, Linear Systems) and 1.4 (p. 26, Discretization)).
The problem is to design a state estimator of the form of Equation (12.17) with the following characteristics:
1. The estimator is stable (i.e., the eigenvalues of F̂_k are less than one in magnitude).
2. The estimation error x̃_k satisfies the worst-case bound of Equation (12.18).
3. The estimation error x̃_k satisfies the RMS bound of Equation (12.19).
The solution to the problem can be found by the following procedure [Hun03].

The Robust Mixed Kalman/H∞ Filter
1. Choose some scalar sequence α_k > 0 and a small scalar ϵ > 0.
2. Define the matrices of Equation (12.20).
3. Initialize P_k and P̃_k as in Equation (12.21).
4. Find positive definite solutions P_k and P̃_k satisfying the Riccati equations of Equation (12.22),
where the matrices R_1k, R_2k, F_1k, H_1k, and T_k are defined in Equation (12.23).
5. If the Riccati equation solutions satisfy the conditions of Equation (12.24), then the estimator of Equation (12.17) solves the problem, with the resulting bounds given in Equation (12.25).
The parameter ϵ is generally chosen as a very small positive number.
In the example in [Hun03] the value ϵ = 10⁻⁸ is used.
The parameter α_k has to be chosen large enough so that the conditions of Equation (12.24) are satisfied.
However, as α_k increases, P_k also increases, which results in a looser bound on the RMS estimation error.
A steady-state robust filter can be obtained by letting P_{k+1} = P_k and P̃_{k+1} = P̃_k in Equation (12.22) and removing all the time subscripts (assuming that the system is time-invariant).
But the resulting coupled steady-state Riccati equations will be more difficult to solve than the discrete-time Riccati equations of Equation (12.22), which can be solved by a simple (albeit tedious) iterative process (see the sketch below).
Similar problems have been solved in [Mah04b, Xie04, Yoo04].
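A minimal sketch of that iterative process, assuming a time-invariant system: propagate the two coupled recursions until the solutions stop changing. The callable riccati_step is a hypothetical placeholder standing in for the coupled updates of Equation (12.22); only the iteration scaffolding is shown.

```python
import numpy as np

def steady_state_solution(riccati_step, P0, P_tilde0, tol=1e-9, max_iter=100000):
    """Iterate coupled Riccati-type recursions until they stop changing.

    `riccati_step(P, P_tilde) -> (P_next, P_tilde_next)` is a hypothetical
    callable standing in for the time-invariant form of the coupled updates
    of Equation (12.22) (time subscripts removed).
    """
    P = np.asarray(P0, dtype=float)
    P_tilde = np.asarray(P_tilde0, dtype=float)
    for _ in range(max_iter):
        P_next, P_tilde_next = riccati_step(P, P_tilde)
        if (np.max(np.abs(P_next - P)) < tol
                and np.max(np.abs(P_tilde_next - P_tilde)) < tol):
            return P_next, P_tilde_next    # converged steady-state solutions
        P, P_tilde = P_next, P_tilde_next
    raise RuntimeError("coupled Riccati iteration did not converge")
```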
Example 12.2
An angular positioning system (motor).
Definitions
The moment of inertia of the motor and its load is J.
The coefficient of viscous friction is B.
The torque applied to the motor is cu + w, where u is the applied voltage, c is a motor constant that relates applied voltage to generated torque, and w is unmodeled torque that can be considered as noise.
Differential Equation
The differential equation for this system is given in Equation (12.26), where φ is the motor shaft angle.
We choose the states as x(1) = φ and x(2) = φ̇.
Dynamic System Model
The dynamic system model is given in Equation (12.27).
Discretizing the system with a sample time T, using the method of Section 1.4, we obtain the discrete-time system matrices of Equation (12.28).
If our measurement is the angular position φ corrupted by noise, then our measurement equation can be written as Equation (12.29).
Suppose the system has
a torque disturbance w_k with a standard deviation of 2, and
a measurement noise v_k with a standard deviation of 0.2 degrees (Equation (12.30)).
Run the Kalman filter and the robust mixed Kalman/H∞ filter for this problem.
Interpretation
Figure 12.2 shows the position and velocity estimation errors of the Kalman and robust mixed Kalman/H∞ filters.
The robust filter performs better at the beginning of the simulation, although the Kalman filter performs better in steady state.
(It is not easy to see the advantage of the Kalman filter during steady state in Figure 12.2 because of the scale, but over the last half of the plot the Kalman filter estimation errors have standard deviations of 0.33 deg
and 1.65 deg/s, whereas the robust filter has standard deviations of 0.36 deg and 4.83 deg/s.)
Now suppose that the moment of inertia of the motor changes by a factor of 100.
That is, the value of J that the filter assumes differs by a factor of 100 from what it really is.
In this case, Figure 12.3 shows the position and velocity estimation errors of the Kalman and robust filters.
It is apparent that in this case the robust filter performs better not only at the beginning of the simulation, but in steady state also.
After the filters reach "steady state" (which we have defined somewhat arbitrarily as the time at which the position estimation error magnitude falls below 1 degree), the Kalman filter RMS estimation errors are 0.36 degrees for position and 1.33 degrees/s for velocity, whereas the robust filter RMS estimation errors are 0.28 degrees for position and 1.29 degrees/s for velocity.
The square roots of the diagonal elements of the Riccati equation solution P_k of Equation (12.22) reach steady-state values of 0.51 degrees and 1.52 degrees/s, which shows that the estimation-error variance is indeed bounded by the Riccati equation solution P_k.
12.3 Constrained H∞ Filtering
As in Section 7.5 (p. 212, Constrained Kalman Filtering), suppose that we know (on the basis of physical considerations) that the states satisfy some equality constraint D_k x_k = d_k, or some inequality constraint D_k x_k ≤ d_k, where D_k is a known matrix and d_k is a known vector.
This section discusses how those constraints can be incorporated into the H∞ filter equations.
As discussed in Section 7.5, state equality constraints can always be handled by
Reducing the system-model parameterization [Wen92]
Treating state equality constraints as perfect measurements [Por88, Hay98].
However, these approaches cannot be extended to inequality constraints.
The approach summarized in this section is to incorporate the state constraints into the derivation of the H∞ filter [Sim06c].
Consider the discrete LTI system given by Equation (12.31), where y_k is the measurement, w_k and v_k are uncorrelated white noise sequences with respective covariances Q and I, and δ_k is a noise sequence generated by an adversary (i.e., nature).
Note that we are assuming that the measurement noise has a unity covariance matrix.
In a real system, if the measurement noise covariance is not equal to the identity matrix, then we will have to normalize the measurement equation, as shown in Example 12.3 below.
In general, F, H, and Q can be time-varying matrices, but we will omit the time subscript on these matrices for ease of notation.
In addition to the state equation, we know (on the basis of physical considerations or other a priori information) that the states satisfy the constraint D_k x_k = d_k of Equation (12.32).
We assume that the matrix D_k is full rank and normalized so that D_k D_k^T = I (a normalization sketch is given below).
In general, D_k is an s × n matrix, where s is the number of constraints, n is the number of states, and s < n.
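If a candidate constraint pair (D, d) does not already satisfy D D^T = I, it can be rescaled without changing the constraint set. The helper below is an illustrative sketch (not taken from [Sim06c]); it left-multiplies the constraint by the inverse symmetric square root of D D^T, assuming D has full row rank.

```python
import numpy as np
from scipy.linalg import sqrtm

def normalize_constraint(D, d):
    """Rescale an equality constraint D x = d so that D D^T = I.

    Illustrative helper (not from [Sim06c]): left-multiply the constraint by the
    inverse symmetric square root of D D^T. This leaves the constraint set
    {x : D x = d} unchanged and requires D to have full row rank.
    """
    D = np.atleast_2d(np.asarray(D, dtype=float))
    d = np.atleast_1d(np.asarray(d, dtype=float))
    S_inv = np.linalg.inv(sqrtm(D @ D.T).real)
    return S_inv @ D, S_inv @ d

# Example: one constraint on a two-state system, 3*x1 + 4*x2 = 5.
D_n, d_n = normalize_constraint([[3.0, 4.0]], [5.0])
# D_n @ D_n.T is now the identity matrix.
```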
If s = n, then Equation (12.32) completely defines x_k, which makes the estimation problem trivial (i.e., x̂_k = D_k⁻¹ d_k).
For s < n, which is the case in this section, there are fewer constraints than states, which makes the estimation problem nontrivial.
Assuming that D_k is full rank is the same as the assumption made in the constrained Kalman filtering problem of Section 7.5.
For notational convenience we define the matrix V_k as in Equation (12.33).
We assume that both the noisy system and the noise-free system satisfy the state constraint.
The problem is to find an estimate x̂_{k+1} of x_{k+1} given the measurements {y_1, y_2, ⋯, y_k}. The estimate should satisfy the state constraint.
We will assume that the estimate is given by the standard predictor/corrector form of Equation (12.34).
The noise δ_k in Equation (12.31) is introduced by an adversary that has the goal of maximizing the estimation error.
We will assume that our adversary's input to the system is given by Equation (12.35),
where L_k is a gain to be determined, G_k is a given matrix, and n_k is a noise sequence with variance equal to the identity matrix.
We assume that n_k is uncorrelated with w_k, v_k, and x_0.
This form of the adversary's input is not intuitive because it is based on the state estimation error, but this form is taken because the solution of the resulting problem results in a state estimator that bounds the infinity-norm of the transfer function from the random noise terms to the state estimation error [Yae92].
G_k in Equation (12.35) is chosen by the designer as a tuning parameter or weighting matrix that can be adjusted on the basis of our a priori knowledge about the adversary's noise input.
Suppose, for example, that we know ahead of time that the first component of the adversary's noise input to the system is twice the magnitude of the second component, the third component is zero, and so on; then that information can be reflected in the designer's choice of G_k.
We do not need to make any assumptions about the form of G_k (e.g., it does not need to be positive definite or square).
From Equation (12.35) we see that as G_k approaches the zero matrix, the adversary's input becomes a purely random process without any deterministic component.
This causes the resulting filter to approach the Kalman filter; that is, we obtain better RMS error performance but poorer worst-case error performance.
As G_k becomes large, the filter places more emphasis on minimizing the estimation error due to the deterministic component of the adversary's input.
That is, the filter assumes less about the adversary's input, and we obtain better worst-case error performance but worse RMS error performance.
The estimation error e_k is defined in Equation (12.36).
It can be shown from the preceding equations that the dynamic system describing the estimation error is given by Equation (12.37).
Since D_k x_k = D_k x̂_k = d_k, we see that D_k e_k = 0.
But it can also be shown [Sim06c] that D_{k+1} F e_k = 0.
Therefore, we can subtract the zero term D_{k+1}^T D_{k+1} F e_k = V_{k+1} F e_k from Equation (12.37) to obtain Equation (12.38).
However, this formulation is inappropriate for a minimax problem, because the adversary can arbitrarily increase e_k by arbitrarily increasing L_k.
To prevent this, we decompose e_k as in Equation (12.39), where e_{1,k} and e_{2,k} evolve as in Equation (12.40).
We define the objective function for the filtering problem as in Equation (12.41), where W_k is any positive definite weighting matrix.
The differential game is for the filter designer to find a gain sequence {K_k} that minimizes J, and for the adversary to find a gain sequence {L_k} that maximizes J.
As such, J is considered a function of {K_k} and {L_k}, which we denote in shorthand notation as K and L.
This objective function is not intuitive, but it is used here because the solution of the problem results in a state estimator that bounds the infinity-norm of the transfer function from the random noise terms to the state estimation error [Yae92].
That is, suppose we can find an estimator gain K* that minimizes J(K, L) when the matrix G_k in Equation (12.35) is equal to θI for some positive scalar θ.
Then the infinity-norm of the weighted transfer function from the noise terms w_k and v_k to the estimation error e_k is bounded by 1/θ.
That is, the bound of Equation (12.42) holds,
where sup stands for supremum. The H∞ filtering solution is obtained by finding optimal gain sequences {K_k} and {L_k} that satisfy the saddle point condition of Equation (12.43).
This problem is solved subject to the constraint D_k x̂_k = d_k in [Sim06c], whose result is presented here.
We define P_k and Σ_k as the nonsingular solutions to the set of equations in Equation (12.44).
Nonsingular solutions to these equations are not always guaranteed to exist, in which case a solution to the H∞ filtering problem may not exist.
However, if nonsingular solutions do exist, then the gain matrices of Equation (12.45) for our estimator and adversary satisfy the constrained H∞ filtering problem.
These matrices solve the constrained H∞ filtering problem only if (I − G_k P_k G_k^T) ≥ 0. Note that as G_k becomes larger, we will be less likely to satisfy this condition.
From Equation (12.35) we see that a larger G_k gives the adversary more latitude in choosing a disturbance. This makes it less likely that the designer can minimize the cost function.
The mean square estimation error that results from using the optimal gain K_k* cannot be specified because it depends on the adversary's input δ_k.
However, we can state an upper bound for the mean square estimation error [Sim06c], as given in Equation (12.46).
This provides additional motivation for using the game theory approach presented in this section.
The estimator not only bounds the worst-case estimation error, but also bounds the mean square estimation error.
The special case: no state constraints
Then in Equation (12.32) we can set the matrix D_k equal to a zero row vector and the vector d_k equal to the zero scalar. In this case V_{k+1} = 0, and we obtain from Equations (12.44) and (12.45) the estimator and adversary strategies of Equation (12.47).
This is identical to the unconstrained H∞ estimator [Yae92].
The unconstrained H∞ estimator for continuous-time systems is given in [Yae04].
In the case of state inequality constraints (i.e., constraints of the form D_k x_k ≤ d_k), a standard active-set method [Fle81, Gil81] can be used to solve the H∞ filtering problem.
An active-set method uses the fact that it is only those constraints that are active at the solution of the problem that affect the optimality conditions; the inactive constraints can be ignored.
Therefore, an inequality-constrained problem is equivalent to an equality-constrained problem.
An active-set method determines which constraints are active at the solution of the problem and then solves the problem using the active constraints as equality constraints.
Inequality constraints will significantly increase the computational effort required for a problem solution because the active constraints need to be determined, but conceptually this poses no difficulty.
The Constrained H∞ Filter - Summary
1. We have a linear system given by Equation (12.48),
where w_k is the process noise, v_k is the measurement noise, and the last equation above specifies the equality constraints on the state.
We assume that the constraints are normalized so that D_k D_k^T = I.
The covariance of w_k is equal to Q_k, but w_k might have a zero mean or it might have a nonzero mean (i.e., it might contain a deterministic component).
The covariance of v_k is the identity matrix.
2. Initialize the filter as in Equation (12.49).
3. At each time step k = 0, 1, ⋯, do the following.
(a) Choose the tuning parameter matrix G_k to weight the deterministic, biased component of the process noise. If G_k = 0 then we are assuming that the process noise is zero-mean, and we get Kalman filter performance. As G_k increases, we are assuming that there is more of a deterministic, biased component to the process noise. This gives us better worst-case error performance but worse RMS error performance.
(b) Compute the next state estimate as given in Equation (12.50).
(c) Verify that the condition of Equation (12.51) is satisfied. If it is not, then the filter is invalid.

Example 12.3
Consider a land-based vehicle that is equipped to measure its latitude and longitude (e.g., through the use of a GPS receiver).
This is the same problem as that considered in Example 7.12.
The vehicle dynamics and measurements can be approximated by Equation (12.52).
The first two components of x_k are the latitude and longitude positions, the last two components of x_k are the latitude and longitude velocities, w_k represents zero-mean process noise due to potholes and other disturbances, δ_k is additional unknown process noise, and u_k is the commanded acceleration.
T is the sample period of the system, and α is the heading angle (measured counterclockwise from due east).
The measurement y_k′ consists of latitude and longitude, and v_k′ is the measurement noise.
Suppose the standard deviations of the measurement noises are known to be σ1 and σ2.
Then we must normalize our measurement equation to satisfy the condition that the measurement noise has a unity covariance.
We therefore define the normalized measurement y_k as in Equation (12.53) (a normalization sketch is given below).
In our simulation we set the covariances of the process and measurement noise as in Equation (12.54).
We can use an H∞ filter to estimate the position of the vehicle.
There may be times when the vehicle is traveling off-road, or on an unknown road, in which case the problem is unconstrained.
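A minimal sketch of this normalization (whitening) step, assuming uncorrelated measurement noises with standard deviations σ1 and σ2 (the numbers below are placeholders, not the values of Equation (12.54)):

```python
import numpy as np

def normalize_measurement(H_raw, y_raw, sigmas):
    """Whiten a measurement equation so that the noise covariance becomes I.

    Assumes uncorrelated measurement noises with standard deviations `sigmas`
    (i.e., a diagonal covariance); each measurement row is divided by the
    corresponding standard deviation.
    """
    S_inv = np.diag(1.0 / np.asarray(sigmas, dtype=float))   # R^{-1/2} for diagonal R
    return S_inv @ np.asarray(H_raw, dtype=float), S_inv @ np.asarray(y_raw, dtype=float)

# Placeholder numbers: position-only measurements of a four-state model
# (positions in the first two state components, velocities in the last two).
H_raw = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
y_raw = np.array([12.3, -4.5])
sigma1, sigma2 = 0.5, 0.8                                     # placeholder values
H, y = normalize_measurement(H_raw, y_raw, [sigma1, sigma2])
```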
At other times it may be known that the vehicle is traveling on a given road, in which case the state estimation problem is constrained.
For instance, if it is known that the vehicle is traveling on a straight road with a heading of α, then D_k and d_k of Equation (12.32) can be given as in Equation (12.55).
We can enforce the condition D_k D_k^T = I by dividing D_k by √(1 + tan²α).
In our simulation we set the sample period T to 1 s and the heading angle α to a constant 60 degrees.
The commanded acceleration is toggled between ±10 m/s², as if the vehicle were accelerating and decelerating in traffic.
The initial conditions are set as in Equation (12.56).
We found via tuning that a G_k matrix of θI, with θ = 1/40, gave good filter performance.
Smaller values of θ make the H∞ filter perform like a Kalman filter.
Larger values of θ prevent the H∞ filter from finding a solution, as the positive definite conditions in Equations (12.44) and (12.45) are not satisfied.
This example could be solved by reducing the system-model parameterization [Wen92] or by introducing artificial perfect measurements into the problem [Hay98, Por88].
In fact, those methods could be used for any estimation problem with equality constraints.
However, those methods cannot be extended to inequality constraints, whereas the method discussed in this section can be extended to inequality constraints, as discussed earlier.
The unconstrained and constrained H∞ filters were simulated 100 times each, and the average RMS position estimation error magnitudes at each time step are plotted in Figure 12.4.
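A sketch of the kind of Monte Carlo comparison described above. The callables simulate_truth and run_filter are hypothetical placeholders standing in for the vehicle simulation and for the unconstrained or constrained H∞ filter of this section; only the averaging scaffolding is shown.

```python
import numpy as np

def monte_carlo_position_error(run_filter, simulate_truth, n_runs=100, n_steps=300, seed=0):
    """Average position-error magnitude at each time step over n_runs simulations.

    `simulate_truth(rng) -> (true_states, measurements)` and
    `run_filter(measurements) -> estimated_states` are hypothetical callables.
    Positions are assumed to be the first two state components.
    """
    rng = np.random.default_rng(seed)
    err_sum = np.zeros(n_steps)
    for _ in range(n_runs):
        x_true, y_meas = simulate_truth(rng)
        x_hat = run_filter(y_meas)
        err_sum += np.linalg.norm(x_true[:n_steps, :2] - x_hat[:n_steps, :2], axis=1)
    return err_sum / n_runs                  # average error magnitude at each time step
```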
It can be seen that the constrained filter results in more accurate estimates.
The unconstrained estimator results in position errors that average 35.3 m, whereas the constrained estimator gives position errors that average about 27.1 m.
The unconstrained velocity estimation error is 12.9 m/s, whereas the constrained velocity estimation error is 10.9 m/s.
Table 12.1 shows a comparison of the unconstrained and constrained Kalman and H∞ filters when the noise statistics are nominal. Table 12.2 shows a comparison of the unconstrained and constrained Kalman and H∞ filters when the acceleration noise on the system has a bias of 1 m/s² in both the north and east directions. In both situations, the H∞ filter estimates position more accurately, but the Kalman filter estimates velocity more accurately. In the off-nominal noise case, the advantage of the H∞ filter over the Kalman filter for position estimation is more pronounced than when the noise is nominal.

12.4 SUMMARY
Some advanced topics in the area of H∞ filtering:
The mixed Kalman/H∞ filter minimizes a combination of the Kalman and H∞ filter performance indices.
The goal is to balance the excessive optimism of the Kalman filter with the excessive pessimism of the H∞ filter.
The robust mixed Kalman/H∞ estimation problem, where we took system-model uncertainties into account.
The system model is never perfectly known.
Constrained H∞ filtering, in which equality (or inequality) constraints are enforced on the state estimate.
This can improve filter performance in cases in which we know that the state must satisfy certain constraints.
There is still a lot of room for additional work and development in H∞ filtering.
For example, reduced-order H∞ filtering tries to obtain good minimax estimation performance with a filter whose order is less than that of the underlying system.

Literature Review
Reduced-order Kalman filtering was discussed in Section 10.3; reduced-order H∞ filtering is considered in [Bet94, Gri97, Xu02].
The approach taken in [Ko06] for constrained Kalman filtering may be applicable to constrained H∞ filtering, and may give better results than the method discussed in this chapter.
The use of Krein space approaches for solving various H∞ filtering problems is promising [Has96a, Has96b].
H∞ smoothing is discussed in [Gri91a, The94b, Has99, Zha05a], and robust H∞ smoothing is discussed in [The94a].
An information form for the H∞ filter (analogous to the Kalman information filter discussed in Section 6.2) is presented in [Zha05b].
Approaches to dealing with delayed measurements and synchronization errors have been extensively explored for Kalman filters (see Section 10.5), but are notably absent in the H∞ filter literature.
There has been a lot of work on nonlinear Kalman filtering (see Chapters 13-15), but not nearly as much on nonlinear H∞ filtering.
More Related Content

More from AI Robotics KR

Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]
Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]
Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]AI Robotics KR
ย 
Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]
Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]
Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]AI Robotics KR
ย 
Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]
Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]
Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]AI Robotics KR
ย 
Sensor Fusion Study - Ch15. The Particle Filter [Seoyeon Stella Yang]
Sensor Fusion Study - Ch15. The Particle Filter [Seoyeon Stella Yang]Sensor Fusion Study - Ch15. The Particle Filter [Seoyeon Stella Yang]
Sensor Fusion Study - Ch15. The Particle Filter [Seoyeon Stella Yang]AI Robotics KR
ย 
Sensor Fusion Study - Ch13. Nonlinear Kalman Filtering [Ahn Min Sung]
Sensor Fusion Study - Ch13. Nonlinear Kalman Filtering [Ahn Min Sung]Sensor Fusion Study - Ch13. Nonlinear Kalman Filtering [Ahn Min Sung]
Sensor Fusion Study - Ch13. Nonlinear Kalman Filtering [Ahn Min Sung]AI Robotics KR
ย 
Sensor Fusion Study - Ch11. The H-Infinity Filter [๊น€์˜๋ฒ”]
Sensor Fusion Study - Ch11. The H-Infinity Filter [๊น€์˜๋ฒ”]Sensor Fusion Study - Ch11. The H-Infinity Filter [๊น€์˜๋ฒ”]
Sensor Fusion Study - Ch11. The H-Infinity Filter [๊น€์˜๋ฒ”]AI Robotics KR
ย 
Sensor Fusion Study - Ch10. Additional topics in kalman filter [Stella Seoyeo...
Sensor Fusion Study - Ch10. Additional topics in kalman filter [Stella Seoyeo...Sensor Fusion Study - Ch10. Additional topics in kalman filter [Stella Seoyeo...
Sensor Fusion Study - Ch10. Additional topics in kalman filter [Stella Seoyeo...AI Robotics KR
ย 
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]AI Robotics KR
ย 
Sensor Fusion Study - Ch8. The Continuous-Time Kalman Filter [์ดํ•ด๊ตฌ]
Sensor Fusion Study - Ch8. The Continuous-Time Kalman Filter [์ดํ•ด๊ตฌ]Sensor Fusion Study - Ch8. The Continuous-Time Kalman Filter [์ดํ•ด๊ตฌ]
Sensor Fusion Study - Ch8. The Continuous-Time Kalman Filter [์ดํ•ด๊ตฌ]AI Robotics KR
ย 
Sensor Fusion Study - Ch7. Kalman Filter Generalizations [๊น€์˜๋ฒ”]
Sensor Fusion Study - Ch7. Kalman Filter Generalizations [๊น€์˜๋ฒ”]Sensor Fusion Study - Ch7. Kalman Filter Generalizations [๊น€์˜๋ฒ”]
Sensor Fusion Study - Ch7. Kalman Filter Generalizations [๊น€์˜๋ฒ”]AI Robotics KR
ย 
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]AI Robotics KR
ย 
Sensor Fusion Study - Ch5. The discrete-time Kalman filter [๋ฐ•์ •์€]
Sensor Fusion Study - Ch5. The discrete-time Kalman filter  [๋ฐ•์ •์€]Sensor Fusion Study - Ch5. The discrete-time Kalman filter  [๋ฐ•์ •์€]
Sensor Fusion Study - Ch5. The discrete-time Kalman filter [๋ฐ•์ •์€]AI Robotics KR
ย 
Sensor Fusion Study - Ch3. Least Square Estimation [๊ฐ•์†Œ๋ผ, Stella, Hayden]
Sensor Fusion Study - Ch3. Least Square Estimation [๊ฐ•์†Œ๋ผ, Stella, Hayden]Sensor Fusion Study - Ch3. Least Square Estimation [๊ฐ•์†Œ๋ผ, Stella, Hayden]
Sensor Fusion Study - Ch3. Least Square Estimation [๊ฐ•์†Œ๋ผ, Stella, Hayden]AI Robotics KR
ย 
Sensor Fusion Study - Ch4. Propagation of states and covariance [๊น€๋™ํ˜„]
Sensor Fusion Study - Ch4. Propagation of states and covariance [๊น€๋™ํ˜„]Sensor Fusion Study - Ch4. Propagation of states and covariance [๊น€๋™ํ˜„]
Sensor Fusion Study - Ch4. Propagation of states and covariance [๊น€๋™ํ˜„]AI Robotics KR
ย 
Sensor Fusion Study - Ch2. Probability Theory [Stella]
Sensor Fusion Study - Ch2. Probability Theory [Stella]Sensor Fusion Study - Ch2. Probability Theory [Stella]
Sensor Fusion Study - Ch2. Probability Theory [Stella]AI Robotics KR
ย 
Sensor Fusion Study - Ch1. Linear System [Hayden]
Sensor Fusion Study - Ch1. Linear System [Hayden]Sensor Fusion Study - Ch1. Linear System [Hayden]
Sensor Fusion Study - Ch1. Linear System [Hayden]AI Robotics KR
ย 
ROS2 on WebOS - Brian Shin(LG)
ROS2 on WebOS - Brian Shin(LG)ROS2 on WebOS - Brian Shin(LG)
ROS2 on WebOS - Brian Shin(LG)AI Robotics KR
ย 
Bayesian Inference : Kalman filter ์—์„œ Optimization ๊นŒ์ง€ - ๊น€ํ™๋ฐฐ ๋ฐ•์‚ฌ๋‹˜
Bayesian Inference : Kalman filter ์—์„œ Optimization ๊นŒ์ง€ - ๊น€ํ™๋ฐฐ ๋ฐ•์‚ฌ๋‹˜Bayesian Inference : Kalman filter ์—์„œ Optimization ๊นŒ์ง€ - ๊น€ํ™๋ฐฐ ๋ฐ•์‚ฌ๋‹˜
Bayesian Inference : Kalman filter ์—์„œ Optimization ๊นŒ์ง€ - ๊น€ํ™๋ฐฐ ๋ฐ•์‚ฌ๋‹˜AI Robotics KR
ย 

More from AI Robotics KR (18)

Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]
Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]
Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]
ย 
Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]
Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]
Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]
ย 
Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]
Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]
Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]
ย 
Sensor Fusion Study - Ch15. The Particle Filter [Seoyeon Stella Yang]
Sensor Fusion Study - Ch15. The Particle Filter [Seoyeon Stella Yang]Sensor Fusion Study - Ch15. The Particle Filter [Seoyeon Stella Yang]
Sensor Fusion Study - Ch15. The Particle Filter [Seoyeon Stella Yang]
ย 
Sensor Fusion Study - Ch13. Nonlinear Kalman Filtering [Ahn Min Sung]
Sensor Fusion Study - Ch13. Nonlinear Kalman Filtering [Ahn Min Sung]Sensor Fusion Study - Ch13. Nonlinear Kalman Filtering [Ahn Min Sung]
Sensor Fusion Study - Ch13. Nonlinear Kalman Filtering [Ahn Min Sung]
ย 
Sensor Fusion Study - Ch11. The H-Infinity Filter [๊น€์˜๋ฒ”]
Sensor Fusion Study - Ch11. The H-Infinity Filter [๊น€์˜๋ฒ”]Sensor Fusion Study - Ch11. The H-Infinity Filter [๊น€์˜๋ฒ”]
Sensor Fusion Study - Ch11. The H-Infinity Filter [๊น€์˜๋ฒ”]
ย 
Sensor Fusion Study - Ch10. Additional topics in kalman filter [Stella Seoyeo...
Sensor Fusion Study - Ch10. Additional topics in kalman filter [Stella Seoyeo...Sensor Fusion Study - Ch10. Additional topics in kalman filter [Stella Seoyeo...
Sensor Fusion Study - Ch10. Additional topics in kalman filter [Stella Seoyeo...
ย 
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
ย 
Sensor Fusion Study - Ch8. The Continuous-Time Kalman Filter [์ดํ•ด๊ตฌ]
Sensor Fusion Study - Ch8. The Continuous-Time Kalman Filter [์ดํ•ด๊ตฌ]Sensor Fusion Study - Ch8. The Continuous-Time Kalman Filter [์ดํ•ด๊ตฌ]
Sensor Fusion Study - Ch8. The Continuous-Time Kalman Filter [์ดํ•ด๊ตฌ]
ย 
Sensor Fusion Study - Ch7. Kalman Filter Generalizations [๊น€์˜๋ฒ”]
Sensor Fusion Study - Ch7. Kalman Filter Generalizations [๊น€์˜๋ฒ”]Sensor Fusion Study - Ch7. Kalman Filter Generalizations [๊น€์˜๋ฒ”]
Sensor Fusion Study - Ch7. Kalman Filter Generalizations [๊น€์˜๋ฒ”]
ย 
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
ย 
Sensor Fusion Study - Ch5. The discrete-time Kalman filter [๋ฐ•์ •์€]
Sensor Fusion Study - Ch5. The discrete-time Kalman filter  [๋ฐ•์ •์€]Sensor Fusion Study - Ch5. The discrete-time Kalman filter  [๋ฐ•์ •์€]
Sensor Fusion Study - Ch5. The discrete-time Kalman filter [๋ฐ•์ •์€]
ย 
Sensor Fusion Study - Ch3. Least Square Estimation [๊ฐ•์†Œ๋ผ, Stella, Hayden]
Sensor Fusion Study - Ch3. Least Square Estimation [๊ฐ•์†Œ๋ผ, Stella, Hayden]Sensor Fusion Study - Ch3. Least Square Estimation [๊ฐ•์†Œ๋ผ, Stella, Hayden]
Sensor Fusion Study - Ch3. Least Square Estimation [๊ฐ•์†Œ๋ผ, Stella, Hayden]
ย 
Sensor Fusion Study - Ch4. Propagation of states and covariance [๊น€๋™ํ˜„]
Sensor Fusion Study - Ch4. Propagation of states and covariance [๊น€๋™ํ˜„]Sensor Fusion Study - Ch4. Propagation of states and covariance [๊น€๋™ํ˜„]
Sensor Fusion Study - Ch4. Propagation of states and covariance [๊น€๋™ํ˜„]
ย 
Sensor Fusion Study - Ch2. Probability Theory [Stella]
Sensor Fusion Study - Ch2. Probability Theory [Stella]Sensor Fusion Study - Ch2. Probability Theory [Stella]
Sensor Fusion Study - Ch2. Probability Theory [Stella]
ย 
Sensor Fusion Study - Ch1. Linear System [Hayden]
Sensor Fusion Study - Ch1. Linear System [Hayden]Sensor Fusion Study - Ch1. Linear System [Hayden]
Sensor Fusion Study - Ch1. Linear System [Hayden]
ย 
ROS2 on WebOS - Brian Shin(LG)
ROS2 on WebOS - Brian Shin(LG)ROS2 on WebOS - Brian Shin(LG)
ROS2 on WebOS - Brian Shin(LG)
ย 
Bayesian Inference : Kalman filter ์—์„œ Optimization ๊นŒ์ง€ - ๊น€ํ™๋ฐฐ ๋ฐ•์‚ฌ๋‹˜
Bayesian Inference : Kalman filter ์—์„œ Optimization ๊นŒ์ง€ - ๊น€ํ™๋ฐฐ ๋ฐ•์‚ฌ๋‹˜Bayesian Inference : Kalman filter ์—์„œ Optimization ๊นŒ์ง€ - ๊น€ํ™๋ฐฐ ๋ฐ•์‚ฌ๋‹˜
Bayesian Inference : Kalman filter ์—์„œ Optimization ๊นŒ์ง€ - ๊น€ํ™๋ฐฐ ๋ฐ•์‚ฌ๋‹˜
ย 

Recently uploaded

(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
ย 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
ย 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
ย 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
ย 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).pptssuser5c9d4b1
ย 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
ย 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
ย 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLDeelipZope
ย 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
ย 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
ย 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
ย 
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...RajaP95
ย 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
ย 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
ย 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
ย 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
ย 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingrakeshbaidya232001
ย 
Model Call Girl in Narela Delhi reach out to us at ๐Ÿ”8264348440๐Ÿ”
Model Call Girl in Narela Delhi reach out to us at ๐Ÿ”8264348440๐Ÿ”Model Call Girl in Narela Delhi reach out to us at ๐Ÿ”8264348440๐Ÿ”
Model Call Girl in Narela Delhi reach out to us at ๐Ÿ”8264348440๐Ÿ”soniya singh
ย 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
ย 

Recently uploaded (20)

(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
ย 
โ˜… CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
โ˜… CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCRโ˜… CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
โ˜… CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
ย 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
ย 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
ย 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
ย 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
ย 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
ย 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
ย 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCL
ย 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
ย 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
ย 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
ย 
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
IMPLICATIONS OF THE ABOVE HOLISTIC UNDERSTANDING OF HARMONY ON PROFESSIONAL E...
ย 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
ย 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
ย 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
ย 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
ย 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writing
ย 
Model Call Girl in Narela Delhi reach out to us at ๐Ÿ”8264348440๐Ÿ”
Model Call Girl in Narela Delhi reach out to us at ๐Ÿ”8264348440๐Ÿ”Model Call Girl in Narela Delhi reach out to us at ๐Ÿ”8264348440๐Ÿ”
Model Call Girl in Narela Delhi reach out to us at ๐Ÿ”8264348440๐Ÿ”
ย 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
ย 

Sensor Fusion Study - Ch12. Additional Topics in H-Infinity Filtering [Hayden]

  • 1. Chapter 12 Additional Topics in H_infty Filtering 1 Chapter 12 Additional Topics in Filtering since make no assumption about the disturbances, they have to accommodate for all conceivable disturbances, and are thus over-conservative. - Babak Hassibi and Thomas Kailath ๎‚ƒHas95๎‚„ References ๎‚ƒBer89๎‚„ D. Bernstein and W. Haddad, โ€œSteady-state Kalman filtering with an H , error bound,โ€ Systems and Control Letters, 12, pp. 9๎‚ˆ16 ๎‚1989๎‚‚. https://doi.org/10.1016/0167๎‚ˆ6911๎‚89๎‚‚90089๎‚ˆ3 ๎‚ƒBet94๎‚„ M. Bettayeb and D. Kavranoglu, โ€œReduced order H, filtering,โ€ American Control Conference, pp. 1884๎‚ˆ1888 ๎‚June 1994๎‚‚. https://ieeexplore.ieee.org/abstract/document/752400/ ๎‚ƒFle81๎‚„ R. Fletcher, Practical Methods of Optimization - Volume 2๎‚’ Constrained Optimization, John Wiley & Sons, New York, 1981. ๎‚ƒGil81๎‚„ P. Gill, W. Murray, and M. Wright, Practical Optimization, Academic Press, New York, 1981. ๎‚ƒGri97๎‚„ K. Grigoriadis and J. Watson, โ€œReduced-order Hm and Lz - Lm filtering via linear matrix inequalities,โ€ IEEE Tmnsactions on Aerospace and Electronic Systems, 33๎‚4๎‚‚, pp. 13281338 ๎‚October 1997๎‚‚. https://ieeexplore.ieee.org/abstract/document/625133/ ๎‚ƒGri91a] M. Grimble, โ€œH, fixed lag smoothing filter for scalar systems,โ€ IEEE Transactions on Signal Processing, 39๎‚9๎‚‚, pp. 1955๎‚ˆ1963 ๎‚September 1991๎‚‚. https://ieeexplore.ieee.org/abstract/document/134428 Hโˆž H ย filters[ โˆž ]
  • 2. [Had91] W. Haddad, D. Bernstein, and D. Mustafa, "Mixed-norm H2/H∞ regulation and estimation: The discrete-time case," Systems and Control Letters, 16, pp. 235-247 (1991). https://www.sciencedirect.com/science/article/pii/0167691191900113
[Has95] B. Hassibi and T. Kailath, "H∞ adaptive filtering," IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 949-952 (1995). https://ieeexplore.ieee.org/document/480332/
[Has96a] B. Hassibi, A. Sayed, and T. Kailath, "Linear estimation in Krein spaces - Part I: Theory," IEEE Transactions on Automatic Control, 41(1), pp. 18-33 (January 1996). https://ieeexplore.ieee.org/abstract/document/481605
[Has96b] B. Hassibi, A. Sayed, and T. Kailath, "Linear estimation in Krein spaces - Part II: Applications," IEEE Transactions on Automatic Control, 41(1), pp. 34-49 (January 1996). https://ieeexplore.ieee.org/abstract/document/481606
[Has99] B. Hassibi, A. Sayed, and T. Kailath, Indefinite-Quadratic Estimation and Control, SIAM, Philadelphia, Pennsylvania, 1999.
[Hay98] S. Hayward, "Constrained Kalman filter for least-squares estimation of time-varying beamforming weights," in Mathematics in Signal Processing IV, J. McWhirter and I. Proudler (Eds.), Oxford University Press, pp. 113-125, 1998.
[Hun03] Y. Hung and F. Yang, "Robust H∞ filtering with error variance constraints for discrete time-varying systems with uncertainty," Automatica, 39(7), pp. 1185-1194 (July 2003). https://www.sciencedirect.com/science/article/pii/S0005109803001171
[Ko06] S. Ko and R. Bitmead, "State estimation for linear systems with state equality constraints," Automatica, 2006. https://www.sciencedirect.com/science/article/pii/S0005109807001306
[Mah04b] M. Mahmoud, "Resilient linear filtering of uncertain systems," Automatica, 40(10), pp. 1797-1802 (October 2004). https://www.sciencedirect.com/science/article/pii/S000510980400161X
  • 3. [Por88] J. Porrill, "Optimal combination and constraints for geometrical sensor data," International Journal of Robotics Research, 7(6), pp. 66-77 (December 1988). http://journals.sagepub.com/doi/10.1177/027836498800700606
[Sim96] D. Simon and H. El-Sherief, "Hybrid Kalman/minimax filtering in phase-locked loops," Control Engineering Practice, 4, pp. 615-623 (October 1996). https://www.sciencedirect.com/science/article/pii/0967066196000433
[Sim06c] D. Simon, "A game theory approach to constrained minimax state estimation," IEEE Transactions on Signal Processing, 54(2), pp. 405-412 (February 2006). http://ieeexplore.ieee.org/document/1576971/
[The94a] Y. Theodor, U. Shaked, and C. de Souza, "A game theory approach to robust discrete-time H∞ estimation," IEEE Transactions on Signal Processing, 42(6), pp. 1486-1495 (June 1994). https://ieeexplore.ieee.org/abstract/document/286964
[The94b] Y. Theodor and U. Shaked, "Game theory approach to H∞-optimal discrete-time fixed-point and fixed-lag smoothing," IEEE Transactions on Automatic Control, 39(9), pp. 1944-1948 (September 1994). https://ieeexplore.ieee.org/abstract/document/317131
[Wen92] W. Wen and H. Durrant-Whyte, "Model-based multi-sensor data fusion," IEEE International Conference on Robotics and Automation, pp. 1720-1726, May 1992. https://ieeexplore.ieee.org/document/220130/
[Xie04] L. Xie, L. Lu, D. Zhang, and H. Zhang, "Improved robust H2 and H∞ filtering for uncertain discrete-time systems," Automatica, 40(5), pp. 873-880 (May 2004). https://linkinghub.elsevier.com/retrieve/pii/S0005109804000172
[Xu02] S. Xu and T. Chen, "Reduced-order H∞ filtering for stochastic systems," IEEE Transactions on Signal Processing, 50(12), pp. 2998-3007 (December 2002). https://ieeexplore.ieee.org/abstract/document/1075993/
  • 4. [Yae92] I. Yaesh and U. Shaked, "Game theory approach to state estimation of linear discrete-time processes and its relation to H∞-optimal estimation," International Journal of Control, 55(6), pp. 1443-1452 (1992). https://www.tandfonline.com/doi/abs/10.1080/00207179208934293
[Yae04] I. Yaesh and U. Shaked, "Min-max Kalman filtering," Systems and Control Letters, 53(3-4), pp. 217-228 (November 2004). https://linkinghub.elsevier.com/retrieve/pii/S0167691104000672
[Yoo04] M. Yoon, V. Ugrinovskii, and I. Petersen, "Robust finite horizon minimax filtering for discrete-time stochastic uncertain systems," Systems and Control Letters, 52(2), pp. 99-112 (June 2004). https://linkinghub.elsevier.com/retrieve/pii/S0167691103003001
[Zha05a] H. Zhang, L. Xie, Y. Chai Soh, and D. Zhang, "H∞ fixed-lag smoothing for discrete linear time-varying systems," Automatica, 41(5), pp. 839-846 (May 2005). https://www.sciencedirect.com/science/article/pii/S0005109805000117
[Zha05b] Y. Zhang, Y. Soh, and W. Chen, "Robust information filter for decentralized estimation," Automatica, 41(12), pp. 2141-2146 (December 2005). https://www.sciencedirect.com/science/article/pii/S0005109805002700

Introduction
Some advanced topics in H∞ filtering. H∞ filtering started in the 1980s and is less mature than Kalman filtering, so there is more room for development. This chapter surveys current research directions in H∞ filtering.

Outline
Section 12.1: The mixed Kalman/H∞ estimation problem - a filter that satisfies an H∞ performance bound while at the same time minimizing a Kalman performance bound.
  • 5. Section 12.2: The robust mixed Kalman/H∞ estimation problem - additional uncertainties in the system matrices.
Section 12.3: The constrained H∞ filter - equality (or inequality) constraints on the state estimate.

12.1 Mixed Kalman/H∞ Filtering
A filter that combines the best features of Kalman filtering with the best features of H∞ filtering. This problem can be attacked in a couple of different ways.
Recall from Section 5.2 (p. 129) the cost function that is minimized by the steady-state Kalman filter: (12.1).
Recall from Section 11.3 (pp. 343-361) the cost function that is minimized by the steady-state H∞ state estimator when S_k and L_k are identity matrices: (12.2).
Loosely speaking, the Kalman filter minimizes the RMS estimation error, while the H∞ filter minimizes the worst-case estimation error.
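Equations (12.1) and (12.2) were rendered as images in the original slides and are not reproduced here, but the verbal contrast above can still be illustrated numerically. The short Python sketch below simulates the scalar random-walk system used later in Example 12.1 and computes two stand-in metrics: an averaged squared error (Kalman-like) and an error-to-disturbance energy ratio (H∞-like). The system, the fixed gain, and both metric expressions are illustrative assumptions, not the exact forms of (12.1) and (12.2).

```python
import numpy as np

# Illustrative stand-ins for the two criteria (not the exact Eqs. 12.1/12.2):
# a Kalman-type criterion averages the squared estimation error, while an
# H-infinity-type criterion looks at the ratio of error energy to noise energy.
rng = np.random.default_rng(0)
N = 10_000
w = rng.normal(size=N)            # process noise, Q = 1
v = rng.normal(size=N)            # measurement noise, R = 1

x = np.zeros(N)                   # scalar random walk: x_{k+1} = x_k + w_k
xhat = np.zeros(N)                # fixed-gain estimate
K = 0.62                          # steady-state Kalman gain for this system (see Example 12.1)
for k in range(N - 1):
    y = x[k] + v[k]                            # measurement y_k = x_k + v_k
    xhat[k + 1] = xhat[k] + K * (y - xhat[k])  # predictor-form update
    x[k + 1] = x[k] + w[k]

e = x - xhat
J2_like = np.mean(e**2)                          # RMS-type (Kalman) criterion
Jinf_like = np.sum(e**2) / np.sum(w**2 + v**2)   # energy-ratio (H-infinity-type) criterion
print(J2_like, Jinf_like)
```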
  • 6. In [Had91] these two performance objectives are combined to form the following problem.
Given the n-state observable LTI system (12.3), where {w_k} and {v_k} are uncorrelated zero-mean white noise processes with covariances Q and R, respectively, find an estimator of the form (12.4) that satisfies the following criteria:
1. F̂ is a stable matrix (so the estimator is stable).
2. The H∞ cost function is bounded by a user-specified parameter: (12.5).
3. Among all estimators satisfying the above criteria, the filter minimizes the Kalman filter cost function J_2.
The solution to this problem provides the best RMS estimation error among all estimators that bound the worst-case estimation error.
  • 7. The filter that solves this problem is given as follows.

The Mixed Kalman/H∞ Filter
1. Find the positive semidefinite n × n matrix P that satisfies the Riccati equation (12.6), where P_a and V are defined in (12.7).
2. Derive the F̂ and K matrices of Equation (12.4) from (12.8).
3. The estimator of Equation (12.4) solves the mixed Kalman/H∞ estimation problem if and only if F̂ is stable. In this case, the state estimation error satisfies the bound (12.9).
If θ = 0, the problem reduces to the Kalman filter problem statement.
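Equations (12.6) through (12.9) were images in the original slides, so the full mixed-filter recursion is not reproduced here. The sketch below covers only the θ = 0 limit named above, in which the problem collapses to the ordinary Kalman filter: it iterates the standard discrete-time Riccati equation, forms the predictor-form gain, and checks that F̂ = F - KH is stable. The example matrices are assumptions for illustration.

```python
import numpy as np

def kalman_limit_gain(F, H, Q, R, iters=500):
    """theta = 0 limit of the mixed Kalman/H-infinity filter: the standard
    discrete-time Riccati iteration, predictor-form gain, and stability check."""
    P = np.eye(F.shape[0])
    for _ in range(iters):
        S = H @ P @ H.T + R
        P = F @ P @ F.T - F @ P @ H.T @ np.linalg.inv(S) @ H @ P @ F.T + Q
    K = F @ P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # estimator gain
    F_hat = F - K @ H                                   # estimator dynamics matrix
    stable = bool(np.all(np.abs(np.linalg.eigvals(F_hat)) < 1))
    return P, K, F_hat, stable

# assumed example system (not taken from the slides)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[1.0]])
P, K, F_hat, stable = kalman_limit_gain(F, H, Q, R)
print(K.ravel(), stable)
```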
  • 8. In this case we can see that Equation (12.6) reduces to the discrete-time algebraic Riccati equation associated with the Kalman filter (see Problem 12.2 and Section 7.3, p. 193). The continuous-time version of this theory is given in [Ber89].

Example 12.1
In this example we take another look at the scalar system described in Example 11.2 (p. 356): (12.10), where {w_k} and {v_k} are uncorrelated zero-mean white noise processes with covariances Q and R, respectively.
Equation (12.6), the Riccati equation for the mixed Kalman/H∞ filter, reduces to the scalar equation (12.11).
Suppose that (for some values of Q, R, and θ) this equation has a solution P ≥ 0 with |1 - K| < 1, where the filter gain K from Equation (12.8) is given by (12.12).
  • 9. Then J∞ from Equation (12.2) is bounded from above by 1/θ, and the variance of the state estimation error is bounded from above by P.
1. The top half of Figure 12.1 shows the Kalman performance bound P and the estimator gain K as functions of θ when Q = R = 1. At θ = 0 the mixed Kalman/H∞ filter reduces to a standard Kalman filter: the performance bound is P ≈ 1.62 and the estimator gain is K ≈ 0.62, as discussed in Example 11.2. However, with θ = 0 we have no guarantee on the worst-case performance index J∞. As θ increases, the performance bound P increases, which means that our Kalman performance index gets worse; at the same time, the worst-case performance index J∞ decreases. From the bottom half of Figure 12.1 we see that as θ increases, K increases, which is consistent with better H∞ performance and worse Kalman performance (see Example 11.2). When θ reaches about 0.91, numerical difficulties prevent a solution to the mixed filter problem.
2. The bottom half of Figure 12.1 shows that at θ = 0.5 the estimator gain is K ≈ 0.76. Recall from Example 11.2 that the pure H∞ filter had a gain K = 1 for the same value of θ. The mixed Kalman/H∞ filter therefore has a smaller estimator gain (for the same θ) than the pure H∞ filter: the mixed filter uses a lower gain in order to obtain better Kalman performance, whereas the pure H∞ filter uses a higher gain because it does not take Kalman performance into account.
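As a quick check on the θ = 0 endpoint of Figure 12.1, the values quoted above (P ≈ 1.62, K ≈ 0.62) can be reproduced with the ordinary scalar Kalman recursion for the random-walk system of Example 11.2 with Q = R = 1. This is a minimal sketch of that endpoint only; it does not implement the mixed-filter Riccati equation (12.11), whose full form was not reproduced in the slides.

```python
# Scalar random walk of Example 11.2: x_{k+1} = x_k + w_k, y_k = x_k + v_k, Q = R = 1.
# At theta = 0 the mixed filter is the Kalman filter, so P satisfies
# P = P - P^2/(P + R) + Q, i.e. P^2 = P + 1 (the golden ratio), and K = P/(P + R).
Q, R = 1.0, 1.0
P = 1.0
for _ in range(100):              # iterate the scalar Riccati equation to steady state
    P = P - P**2 / (P + R) + Q
K = P / (P + R)
print(round(P, 3), round(K, 3))   # ~1.618 and ~0.618, matching Figure 12.1 at theta = 0
print(abs(1 - K) < 1)             # the stability condition |1 - K| < 1 holds
```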
  • 10. Although the approach presented above is a theoretically elegant method of obtaining a mixed Kalman/H∞ filter, the solution of the Riccati equation can be challenging for problems with a large number of states. Other, more straightforward approaches can be used to combine the Kalman and H∞ filters. For example, if the steady-state Kalman filter gain for a given problem is denoted K_2 and the steady-state H∞ filter gain is denoted K∞, then a hybrid filter gain can be constructed as in (12.13), with d ∈ [0, 1]. This hybrid gain is a convex combination of the Kalman and H∞ filter gains, which would be expected to provide a balance between RMS and worst-case performance [Sim96].
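A minimal sketch of this hybrid idea follows. It assumes a predictor-form estimator x̂_{k+1} = F x̂_k + K(y_k - H x̂_k), and it arbitrarily lets d weight the Kalman gain, since the slides do not show which gain d multiplies in Equation (12.13). As the next slide notes, stability of the blended filter must be checked numerically, which is done here through the eigenvalues of F - KH.

```python
import numpy as np

def hybrid_gain(K2, Kinf, d):
    """Convex combination of the steady-state Kalman gain K2 and H-infinity gain Kinf.
    Which gain d multiplies is an assumed convention, not taken from Eq. (12.13)."""
    assert 0.0 <= d <= 1.0
    return d * K2 + (1.0 - d) * Kinf

def is_stable(F, H, K):
    """Numerical stability check for a predictor-form estimator:
    the error dynamics matrix is F - K H."""
    return bool(np.all(np.abs(np.linalg.eigvals(F - K @ H)) < 1))

# assumed example values: the scalar system of Example 12.1 written as 1x1 matrices
F = np.array([[1.0]])
H = np.array([[1.0]])
K2 = np.array([[0.62]])     # Kalman gain at theta = 0 (Example 12.1)
Kinf = np.array([[1.0]])    # H-infinity gain at theta = 0.5 (Example 11.2)
for d in (0.0, 0.5, 1.0):
    K = hybrid_gain(K2, Kinf, d)
    print(d, K.ravel(), is_stable(F, H, K))
```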
  • 11. However, this approach is not as attractive theoretically, since stability must be determined numerically and no a priori bounds on the Kalman or H∞ performance measures can be given. Analytical determination of stability and performance bounds for this type of filter is an open research issue.

12.2 Robust Kalman/H∞ Filtering
The material in this section is based on [Hun03]. In most practical problems an exact model of the system is not available, so the performance of the filter in the presence of model uncertainties becomes an important issue.

System
For example, suppose we have the system (12.14), where {w_k} and {v_k} are uncorrelated zero-mean white noise processes with covariances Q_k and R_k, respectively.

Matrices
The matrices ΔF_k and ΔH_k represent uncertainties in the system and measurement matrices. These uncertainties are assumed to be of the form (12.15),
  • 12. where M_1k, M_2k, and N_k are known matrices, and Γ_k is an unknown matrix satisfying the bound (12.16). [Recall that we use the general notation A ≤ B to denote that (A - B) is a negative semidefinite matrix.]
Assume that F_k is nonsingular. This assumption is not too restrictive; F_k should always be nonsingular for a real system because it comes from the matrix exponential of the system matrix of a continuous-time system, and the matrix exponential is always nonsingular (see Section 1.2, p. 18, Linear Systems, and Section 1.4, p. 26, Discretization).
The problem is to design a state estimator of the form (12.17) with the following characteristics:
1. The estimator is stable (i.e., the eigenvalues of F̂_k are less than one in magnitude).
2. The estimation error x̃_k satisfies the worst-case bound (12.18).
  • 13. 3. The estimation error x̃_k satisfies the RMS bound (12.19).
The solution to the problem can be found by the following procedure [Hun03].

The Robust Mixed Kalman/H∞ Filter
1. Choose some scalar sequence α_k > 0 and a small scalar ε > 0.
2. Define the matrices in (12.20).
3. Initialize P_k and P̃_k as in (12.21).
4. Find positive definite solutions P_k and P̃_k satisfying the Riccati equations (12.22),
  • 14. where the matrices R_1k, R_2k, F_1k, H_1k, and T_k are defined in (12.23).
5. If the Riccati equation solutions satisfy the conditions in (12.24), then the estimator of Equation (12.17) solves the problem with the bounds given in (12.25).
  • 15. The parameter ε is generally chosen as a very small positive number; in the example in [Hun03] the value is ε = 10^-8. The parameter α_k has to be chosen large enough that the conditions of Equation (12.24) are satisfied. However, as α_k increases, P_k also increases, which results in a looser bound on the RMS estimation error.
A steady-state robust filter can be obtained by setting P_{k+1} = P_k and P̃_{k+1} = P̃_k in Equation (12.22) and removing all the time subscripts (assuming that the system is time-invariant). But the resulting coupled steady-state Riccati equations are more difficult to solve than the discrete-time Riccati equations of Equation (12.22), which can be solved by a simple (albeit tedious) iterative process. Similar problems have been solved in [Mah04b, Xie04, Yoo04].

Example 12.2
An angular positioning system (motor).

Definitions
The moment of inertia of the motor and its load is J.
The coefficient of viscous friction is B.
The torque applied to the motor is cu + w, where u is the applied voltage, c is a motor constant that relates applied voltage to generated torque, and w is unmodeled torque that can be considered as noise.

Differential Equation
  • 16. The differential equation for this system is given by (12.26), where φ is the motor shaft angle. We choose the states as x(1) = φ and x(2) = φ̇.

Dynamic System Model
The dynamic system model is (12.27). Discretizing the system with a sample time T using the method of Section 1.4, we obtain the discrete-time system matrices (12.28).
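The specific matrices in Equations (12.27) and (12.28) were images in the original slides, but the model is fully described in words: states φ and φ̇, viscous friction B, inertia J, and applied torque cu + w. A minimal Python sketch of the zero-order-hold discretization step (the Section 1.4 method the slide refers to) is shown below; the numerical values of J, B, c, and T are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time motor model: J*phi_ddot = -B*phi_dot + c*u + w,
# with states x1 = phi, x2 = phi_dot, i.e. x_dot = A x + Bu*u (+ noise input).
J, B, c, T = 0.01, 0.1, 1.0, 0.05        # assumed values, not from the slides
A = np.array([[0.0, 1.0],
              [0.0, -B / J]])
Bu = np.array([[0.0],
               [c / J]])

# Zero-order-hold discretization via the matrix exponential of an augmented matrix:
# expm([[A, Bu], [0, 0]] * T) = [[F, G], [0, I]].
n, m = A.shape[0], Bu.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = Bu
Md = expm(M * T)
F = Md[:n, :n]                # discrete-time system matrix (nonsingular, as the text notes)
G = Md[:n, n:]                # discrete-time input matrix
H = np.array([[1.0, 0.0]])    # measurement of the shaft angle only
print(F, G, sep="\n")
```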
  • 17. If our measurement is the angular position φ corrupted by noise, then our measurement equation can be written as (12.29).
Suppose the system has a torque disturbance w_k with a standard deviation of 2 and a measurement noise v_k with a standard deviation of 0.2 degrees (12.30). Run the Kalman filter and the robust mixed Kalman/H∞ filter for this problem.

Interpretation
Figure 12.2 shows the position and velocity estimation errors of the Kalman and robust mixed Kalman/H∞ filters. The robust filter performs better at the beginning of the simulation, although the Kalman filter performs better in steady state. (It is not easy to see the advantage of the Kalman filter during steady state in Figure 12.2 because of the scale, but over the last half of the plot the Kalman filter estimation errors have standard deviations of 0.33 deg
  • 18. and 1.65 deg/s, whereas the robust filter errors have standard deviations of 0.36 deg and 4.83 deg/s.)
Now suppose that the moment of inertia of the motor changes by a factor of 100; that is, the filter assumes that J is 100 times larger than it really is. In this case, Figure 12.3 shows the position and velocity estimation errors of the Kalman and robust filters. It is apparent that the robust filter now performs better not only at the beginning of the simulation, but in steady state also. After the filters reach "steady state" (which we have defined somewhat arbitrarily as the time at which the position estimation error magnitude falls below 1 degree), the Kalman filter RMS estimation errors are 0.36 degrees for position and 1.33 degrees/s for velocity, whereas the robust filter RMS estimation errors are 0.28 degrees for position and 1.29 degrees/s for velocity. The square roots of the diagonal elements of the Riccati equation solution P_k of Equation (12.22) reach steady-state values of 0.51 degrees and 1.52 degrees/s, which shows that the estimation-error variance is indeed bounded by the Riccati equation solution P_k.
  • 20. 12.3 Constrained H∞ Filtering
As in Section 7.5 (p. 212, Constrained Kalman Filtering), suppose that we know (on the basis of physical considerations) that the states satisfy an equality constraint D_k x_k = d_k or an inequality constraint D_k x_k ≤ d_k, where D_k is a known matrix and d_k is a known vector. This section discusses how those constraints can be incorporated into the H∞ filter equations.
As discussed in Section 7.5, state equality constraints can always be handled by:
- reducing the system-model parameterization [Wen92], or
- treating the state equality constraints as perfect measurements [Por88, Hay98].
  • 21. However, these approaches cannot be extended to inequality constraints. The approach summarized in this section is to incorporate the state constraints into the derivation of the H∞ filter [Sim06c].
Consider the discrete LTI system (12.31), where y_k is the measurement, {w_k} and {v_k} are uncorrelated white noise sequences with respective covariances Q and I, and {δ_k} is a noise sequence generated by an adversary (i.e., nature). Note that we are assuming that the measurement noise has a unity covariance matrix. In a real system, if the measurement noise covariance is not equal to the identity matrix, then we have to normalize the measurement equation, as shown in Example 12.3 below. In general, F, H, and Q can be time-varying matrices, but we omit the time subscript on these matrices for ease of notation.
In addition to the state equation, we know (on the basis of physical considerations or other a priori information) that the states satisfy the constraint (12.32). We assume that the matrix D_k is full rank and normalized so that D_k D_k^T = I. In general, D_k is an s × n matrix, where s is the number of constraints, n is the number of states, and s < n.
  • 22. If s = n, then Equation (12.32) completely defines x_k, which makes the estimation problem trivial (i.e., x̂_k = D_k^{-1} d_k). For s < n, which is the case considered in this section, there are fewer constraints than states, which makes the estimation problem nontrivial. Assuming that D_k is full rank is the same assumption made in the constrained Kalman filtering problem of Section 7.5. For notational convenience we define the matrix V_k = D_k^T D_k (12.33).
We assume that both the noisy system and the noise-free system satisfy the state constraint. The problem is to find an estimate x̂_{k+1} of x_{k+1} given the measurements {y_1, y_2, ..., y_k}. The estimate should satisfy the state constraint. We assume that the estimate is given by a standard predictor/corrector form.
The noise δ_k in (12.31) is introduced by an adversary whose goal is to maximize the estimation error. We assume that the adversary's input to the system is given by (12.35),
  • 23. where L_k is a gain to be determined, G_k is a given matrix, and {n_k} is a noise sequence with variance equal to the identity matrix. We assume that {n_k} is uncorrelated with {w_k}, {v_k}, and x_0. This form of the adversary's input is not intuitive because it is based on the state estimation error, but it is used because the solution of the resulting problem yields a state estimator that bounds the infinity-norm of the transfer function from the random noise terms to the state estimation error [Yae92].
G_k in Equation (12.35) is chosen by the designer as a tuning parameter or weighting matrix that can be adjusted on the basis of our a priori knowledge about the adversary's noise input. Suppose, for example, that we know ahead of time that the first component of the adversary's noise input is twice the magnitude of the second component, the third component is zero, and so on; that information can then be reflected in the designer's choice of G_k. We do not need to make any assumptions about the form of G_k (e.g., it does not need to be positive definite or square).
From Equation (12.35) we see that as G_k approaches the zero matrix, the adversary's input becomes a purely random process without any deterministic component. This causes the resulting filter to approach the Kalman filter; that is, we obtain better RMS error performance but poorer worst-case error performance. As G_k becomes large, the filter places more emphasis on minimizing the estimation error due to the deterministic component of the adversary's input. That is, the filter assumes less about the adversary's input, and we obtain better worst-case error performance but worse RMS error performance.
The estimation error is defined in (12.36).
  • 24. It can be shown from the preceding equations that the dynamic system describing the estimation error is given by (12.37). Since D_k x_k = D_k x̂_k = d_k, we see that D_k e_k = 0. But it can also be shown [Sim06c] that D_{k+1} F e_k = 0. Therefore, we can subtract the zero term D_{k+1}^T D_{k+1} F e_k = V_{k+1} F e_k from Equation (12.37) to obtain (12.38).
However, this is an inappropriate error term for a minimax problem, because the adversary can arbitrarily increase e_k by arbitrarily increasing L_k. To prevent this, we decompose e_k as in (12.39), where e_{1,k} and e_{2,k} evolve according to (12.40).
  • 25. We define the objective function for the H∞ filtering problem as (12.41), where W_k is any positive definite weighting matrix. The differential game is for the filter designer to find a gain sequence {K_k} that minimizes J, and for the adversary to find a gain sequence {L_k} that maximizes J. As such, J is considered a function of {K_k} and {L_k}, which we denote in shorthand notation as K and L.
This objective function is not intuitive, but it is used here because the solution of the problem results in a state estimator that bounds the infinity-norm of the transfer function from the random noise terms to the state estimation error [Yae92]. That is, suppose we can find an estimator gain K* that minimizes J(K, L) when the matrix G_k in (12.35) is equal to θI for some positive scalar θ. Then the infinity-norm of the weighted transfer function from the noise terms w_k and v_k to the estimation error e_k is bounded by 1/θ. That is, (12.42),
  • 26. where sup stands for supremum. The H∞ filtering solution is obtained by finding optimal gain sequences {K_k} and {L_k} that satisfy the saddle point condition (12.43). This problem is solved subject to the constraint D_k x̂_k = d_k in [Sim06c], whose result is presented here. We define P_k and Σ_k as the nonsingular solutions to the set of equations (12.44). Nonsingular solutions to these equations are not always guaranteed to exist, in which case a solution to the H∞ filtering problem may not exist. However, if nonsingular solutions do exist, then the gain matrices (12.45) for our estimator and adversary satisfy the constrained H∞ filtering problem.
  • 27. These matrices solve the constrained H∞ filtering problem only if (I - G_k P_k G_k^T) ≥ 0. Note that as G_k becomes larger, we are less likely to satisfy this condition: from Equation (12.35), a larger G_k gives the adversary more latitude in choosing a disturbance, which makes it less likely that the designer can minimize the cost function.
The mean square estimation error that results from using the optimal gain K_k* cannot be specified because it depends on the adversary's input δ_k. However, we can state an upper bound for the mean square estimation error [Sim06c], given in (12.46). This provides additional motivation for using the game theory approach presented in this section: the estimator not only bounds the worst-case estimation error, but also bounds the mean square estimation error.
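The existence condition (I - G_k P_k G_k^T) ≥ 0 quoted above is easy to test numerically once P_k is available. The snippet below is a minimal sketch of that check with placeholder matrices (not values from the chapter); it symmetrizes the matrix before taking eigenvalues to guard against round-off, and it shows that a larger G_k is more likely to violate the condition, as the text predicts.

```python
import numpy as np

def existence_condition_ok(G, P, tol=1e-10):
    """Check that (I - G P G^T) is positive semidefinite."""
    M = np.eye(G.shape[0]) - G @ P @ G.T
    M = 0.5 * (M + M.T)                      # symmetrize against round-off
    return bool(np.min(np.linalg.eigvalsh(M)) >= -tol)

P = np.diag([0.5, 2.0])                              # placeholder Riccati solution
print(existence_condition_ok(0.3 * np.eye(2), P))    # small G_k: condition holds
print(existence_condition_ok(2.0 * np.eye(2), P))    # large G_k: condition fails
```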
  • 28. The special case: no state constraints. In this case, in Equation (12.32) we can set the matrix D_k equal to a zero row vector and the vector d_k equal to the zero scalar. Then V_{k+1} = 0, and from Equations (12.44) and (12.45) we obtain the estimator and adversary strategies of (12.47). This is identical to the unconstrained H∞ estimator of [Yae92]; the unconstrained H∞ estimator for continuous-time systems is given in [Yae04].
In the case of state inequality constraints (i.e., constraints of the form D_k x_k ≤ d_k), a standard active-set method [Fle81, Gil81] can be used to solve the H∞ filtering problem. An active-set method uses the fact that only the constraints that are active at the solution affect the optimality conditions; the inactive constraints can be ignored. Therefore, an inequality-constrained problem is equivalent to an equality-constrained problem. An active-set method determines which constraints are active at the solution and then solves the problem using the active constraints as equality constraints. Inequality constraints significantly increase the computational effort required for a solution, because the active constraints must be determined, but conceptually this poses no difficulty (a minimal sketch of the constraint-selection step is shown below).
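The following lines are a highly simplified sketch of only the constraint-selection step of an active-set approach: pick out the inequality constraints that are active (or violated) at the current estimate and treat those rows as equality constraints. The matrices and the tolerance are hypothetical; a complete active-set solver for the constrained H∞ problem would wrap this selection inside the filter recursion.

```python
import numpy as np

def active_constraints(D, d, x_hat, tol=1e-9):
    """Return the rows of D x <= d that are active (or violated) at x_hat.
    Only these rows need to be enforced as equality constraints."""
    residual = D @ x_hat - d
    active = residual >= -tol
    return D[active], d[active]

# hypothetical example: two inequality constraints on a three-state estimate
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0]])
d = np.array([2.0, 0.0])
x_hat = np.array([2.5, 1.0, 1.0])        # first row violated, second row active
D_eq, d_eq = active_constraints(D, d, x_hat)
print(D_eq, d_eq)                         # rows to be treated as equalities
```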
  • 29. The Constrained H∞ Filter - Summary
1. We have a linear system given by (12.48), where w_k is the process noise, v_k is the measurement noise, and the last equation specifies equality constraints on the state. We assume that the constraints are normalized so that D_k D_k^T = I. The covariance of w_k is Q_k, but w_k might have a zero mean or a nonzero mean (i.e., it might contain a deterministic component). The covariance of v_k is the identity matrix.
2. Initialize the filter as in (12.49).
3. At each time step k = 0, 1, ..., do the following.
(a) Choose the tuning parameter matrix G_k to weight the deterministic, biased component of the process noise. If G_k = 0, then we are assuming that the process noise is zero-mean and we get Kalman filter performance. As G_k increases, we are assuming that there is more of a deterministic, biased component to the process noise; this gives better worst-case error performance but worse RMS error performance.
(b) Compute the next state estimate from (12.50).
  • 30. (c) Verify that the required positive definiteness conditions are satisfied; if not, then the filter is invalid.

Example 12.3
Consider a land-based vehicle that is equipped to measure its latitude and longitude (e.g., through the use of a GPS receiver). This is the same problem as the one considered in Example 7.12. The vehicle dynamics and measurements can be approximated by Equations (12.51) and (12.52).
  • 31. The first two components of x_k are the latitude and longitude positions, and the last two components are the latitude and longitude velocities. w_k represents zero-mean process noise due to potholes and other disturbances, δ_k is additional unknown process noise, and u_k is the commanded acceleration. T is the sample period of the system, and α is the heading angle (measured counterclockwise from due east). The measurement y'_k consists of latitude and longitude, and v'_k is the measurement noise.
Suppose the standard deviations of the measurement noises are known to be σ1 and σ2. Then we must normalize our measurement equation to satisfy the condition that the measurement noise has a unity covariance. We therefore define the normalized measurement y_k as in (12.53). In our simulation we set the covariances of the process and measurement noise as in (12.54).
We can use an H∞ filter to estimate the position of the vehicle. There may be times when the vehicle is traveling off-road, or on an unknown road, in which case the problem is unconstrained.
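Equation (12.53) was an image, but the normalization it performs is spelled out in the text: rescale the measurement so that the measurement noise has unit covariance. A minimal sketch of that step is shown below (the σ values and the raw measurement matrix are assumptions for illustration).

```python
import numpy as np

# Raw measurement: y'_k = H' x_k + v'_k, with v'_k having std devs sigma1, sigma2.
# Normalized measurement: y_k = S^{-1} y'_k, so the new noise S^{-1} v'_k has
# identity covariance, as the constrained H-infinity filter requires.
sigma1, sigma2 = 5.0, 5.0                  # assumed GPS noise standard deviations (meters)
S = np.diag([sigma1, sigma2])
S_inv = np.linalg.inv(S)

H_raw = np.array([[1.0, 0.0, 0.0, 0.0],    # measure latitude position
                  [0.0, 1.0, 0.0, 0.0]])   # measure longitude position
H = S_inv @ H_raw                          # normalized measurement matrix

R_raw = S @ S.T                            # covariance of v'_k
R = S_inv @ R_raw @ S_inv.T                # covariance of the normalized noise
print(np.allclose(R, np.eye(2)))           # True: unity covariance achieved
```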
  • 32. At other times it may be known that the vehicle is traveling on a given road, in which case the state estimation problem is constrained. For instance, if it is known that the vehicle is traveling on a straight road with a heading of α, then the matrix D_k and vector d_k of Equation (12.32) can be given as in (12.55). We can enforce the condition D_k D_k^T = I by dividing D_k by √(1 + tan²α).
In our simulation we set the sample period T to 1 s and the heading angle α to a constant 60 degrees. The commanded acceleration is toggled between ±10 m/s², as if the vehicle were accelerating and decelerating in traffic. The initial conditions are set as in (12.56).
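Equation (12.55) itself was an image, so the exact rows of D_k are not reproduced here; the sketch below is a hedged reconstruction of the normalization step it describes, assuming the common form in which each constraint row looks like [1, -tan α] over the relevant position or velocity pair and d_k = 0 (i.e., a straight road through the origin). Only the division by √(1 + tan²α), which the text states explicitly, should be taken as given.

```python
import numpy as np

# Straight-road constraint with heading alpha (measured counterclockwise from east).
# Assumed structure (not taken verbatim from Eq. 12.55): one row constrains the
# position components and one row constrains the velocity components, with state
# order [lat pos, long pos, lat vel, long vel].
alpha = np.deg2rad(60.0)                   # heading used in the simulation
t = np.tan(alpha)

D_raw = np.array([[1.0, -t, 0.0, 0.0],
                  [0.0, 0.0, 1.0, -t]])
d = np.zeros(2)                            # road assumed to pass through the origin

D = D_raw / np.sqrt(1.0 + t**2)            # the normalization stated in the text
print(np.allclose(D @ D.T, np.eye(2)))     # True: D_k D_k^T = I as required
```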
  • 34. We found via tuning that a matrix G_k = θI with θ = 1/40 gave good filter performance. Smaller values of θ make the H∞ filter perform like a Kalman filter. Larger values of θ prevent the filter from finding a solution, because the positive definite conditions in Equations (12.44) and (12.45) are not satisfied.
This example could be solved by reducing the system-model parameterization [Wen92] or by introducing artificial perfect measurements into the problem [Hay98, Por88]. In fact, those methods could be used for any estimation problem with equality constraints. However, those methods cannot be extended to inequality constraints, whereas the method discussed in this section can, as discussed earlier.
The unconstrained and constrained H∞ filters were simulated 100 times each, and the average RMS position and velocity estimation error magnitudes at each time step are plotted in Figure 12.4.
  • 35. It can be seen that the constrained filter results in more accurate estimates. The unconstrained estimator gives position errors that average 35.3 m, whereas the constrained estimator gives position errors that average about 27.1 m. The unconstrained velocity estimation error is 12.9 m/s, whereas the constrained velocity estimation error is 10.9 m/s.
Table 12.1 compares the unconstrained and constrained Kalman and H∞ filters when the noise statistics are nominal. Table 12.2 compares the unconstrained and constrained Kalman and H∞ filters when the acceleration noise on the system has a bias of 1 m/s² in both the north and east directions. In both situations, the H∞ filter estimates position more accurately, but the Kalman filter estimates velocity more accurately. In the off-nominal noise case, the advantage of the H∞ filter over the Kalman filter for position estimation is more pronounced than when the noise is nominal.

12.4 Summary
Some advanced topics in the area of H∞ filtering:
- Minimizing a combination of the Kalman and H∞ filter performance indices, to balance the excessive optimism of the Kalman filter against the excessive pessimism of the H∞ filter.
- The robust mixed Kalman/H∞ estimation problem, which takes system-model uncertainties into account; the system model is never perfectly known.
- Constrained H∞ filtering, in which equality (or inequality) constraints are enforced on the state estimate. This can improve filter performance in cases where we know that the state must satisfy certain constraints.
There is still a lot of room for additional work and development in H∞ filtering.
  • 36. For example, reduced-order H∞ filtering tries to obtain good minimax estimation performance with a filter whose order is less than that of the underlying system.

Literature Review
- Reduced-order Kalman filtering was discussed in Section 10.3; reduced-order H∞ filtering is considered in [Bet94, Gri97, Xu02].
- The approach taken in [Ko06] for constrained Kalman filtering may be applicable to constrained H∞ filtering and may give better results than the method discussed in this chapter.
- The use of Krein space approaches for solving various H∞ filtering problems is promising [Has96a, Has96b].
- H∞ smoothing is discussed in [Gri91a, The94b, Has99, Zha05a], and robust H∞ smoothing is discussed in [The94a].
- An information form of the H∞ filter (analogous to the Kalman information filter discussed in Section 6.2) is presented in [Zha05b].
- Approaches for dealing with delayed measurements and synchronization errors have been extensively explored for Kalman filters (see Section 10.5), but are notably absent from the H∞ filter literature.
- There has been a lot of work on nonlinear Kalman filtering (see Chapters 13-15), but not nearly as much on nonlinear H∞ filtering.