Least squares estimation
Sora Kang
AI Robotics KR
Sensor fusion Study
Overview
3.1. Estimation Of A Constant
3.2. Weighted Least Squares Estimation
Recursive Least Squares Estimation
AI Robotics KR Sensor Fusion Study
Stella Seoyeon Yang
3.3 Recursive Least Squares Estimation
If we obtain measurements sequentially and want to update our estimate of x with
each new measurement, we need to augment the H matrix and completely
recompute the estimate x_hat. If the number of measurements becomes large,
the computational effort can become prohibitive.
Weighted Least Squares Estimator
3.3 Recursive Least Squares Estimation
Linear Recursive Least Squares Estimator
gain matrix, correction term
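The slide's gain matrix and correction term fit into a one-step update. A minimal numpy sketch of a linear recursive least-squares step (the Joseph-form covariance update and the simulated measurement model are illustrative assumptions, not taken from the slides):

```python
import numpy as np

def rls_update(x_hat, P, H, R, y):
    """One recursive least-squares step: gain matrix, correction term, covariance."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # gain matrix
    x_hat = x_hat + K @ (y - H @ x_hat)    # correction term scales the residual
    I = np.eye(P.shape[0])
    P = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T   # Joseph form, stays PSD
    return x_hat, P

# Illustrative run: estimate a constant 2-vector from scalar measurements.
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
x_hat, P = np.zeros(2), 100.0 * np.eye(2)  # diffuse prior
R = np.array([[0.01]])
for _ in range(200):
    H = rng.standard_normal((1, 2))
    y = H @ x_true + rng.normal(0.0, 0.1, size=1)
    x_hat, P = rls_update(x_hat, P, H, R, y)
```

With each measurement the covariance P shrinks and x_hat converges toward x_true without ever re-solving the full batch problem.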
3.3 Recursive Least Squares Estimation
If the measurement noise v_k is zero mean for all k,
and x_hat_0 = E(x), then
E(x_hat_k) = E(x) for all k, i.e., the estimator is unbiased.
3.3 Recursive Least Squares Estimation
The optimality criterion:
minimize the cost function J, the sum of the variances of the
estimation errors.
3.3 Recursive Least Squares Estimation
Estimation-error covariance:
P_k is positive definite.
3.3 Recursive Least Squares Estimation
Optimality Criterion, Minimum
3.3 Recursive Least Squares Estimation
Summary
3.3.1 Alternate Estimator Forms
Sometimes it is useful to write the equations for P_k and K_k in alternate forms. Although these alternate
forms are mathematically identical, they can be beneficial from a computational point of view. They can
also lead to new results, which we will discover in later chapters.
This is a simpler equation for P_k, but numerical computing problems (i.e.,
scaling issues) may cause this expression for P_k to lose positive
definiteness, even when P_{k-1} and R_k are positive definite.
EX. 3.4 In this example, we illustrate the computational advantages of the first
form of the covariance update equation compared with the third form. Suppose
we have a scalar parameter x and a perfect measurement of it. That is, H_1 = 1
and R_1 = 0. Further suppose that our initial estimation covariance is P_0 = 6, and our
computer provides precision of three digits to the right of the decimal point for
each quantity that it computes. The estimator gain K_1 is
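The slide's three-digit computer can be imitated in a few lines. A sketch of the arithmetic, assuming (as an illustration of the example's setup) that every computed quantity is rounded to three digits after the decimal point:

```python
def fl(v):
    """Model a machine that keeps three digits to the right of the decimal point."""
    return round(v, 3)

P0, H1, R1 = 6.0, 1.0, 0.0

# Gain K1 = P0*H1*(H1*P0*H1 + R1)^(-1); the reciprocal 1/6 rounds to 0.167
K1 = fl(P0 * H1 * fl(1.0 / (H1 * P0 * H1 + R1)))     # 1.002 instead of exactly 1

# Third (simple) form of the covariance update loses positive semidefiniteness:
P1_simple = fl(fl(1.0 - K1 * H1) * P0)               # comes out negative

# First (Joseph) form stays nonnegative despite the same rounding:
P1_joseph = fl(fl(1.0 - K1 * H1) * P0 * fl(1.0 - K1 * H1) + K1 * R1 * K1)
```

The simple form returns a negative "variance", while the Joseph form rounds back to zero; this is why the longer form is preferred numerically.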
3.3.2 Curve fitting
We measure data one sample at a time (y_1, y_2, ...) and want to find the best fit of a
curve to the data. The curve that we want to fit could be constrained to be
linear, quadratic, sinusoidal, or some other shape, depending on the underlying
problem.
EXAMPLE 3.7
Suppose that we know a priori that the underlying data is a quadratic function of
time. In this case, we have a quadratic data-fitting problem. For example, suppose
we are measuring the altitude of a free-falling object. We know from our
understanding of physics that altitude r is a function of the acceleration due to
gravity, the initial altitude and velocity of the object r_0 and v_0, and time t, as
given by the equation r = r_0 + v_0 t + (a/2)t². So if we measure r at various
time instants and fit a quadratic to the resulting r versus t curve, then we have an
estimate of the parameters r_0, v_0, and a/2.
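This altitude fit can be set up directly as a least-squares problem in the unknowns r_0, v_0, and a/2. A brief numpy sketch (the particular trajectory values and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
r0, v0, a = 100.0, 5.0, -9.8            # illustrative true parameters
t = np.linspace(0.0, 4.0, 50)
r = r0 + v0 * t + 0.5 * a * t**2 + rng.normal(0.0, 0.2, t.size)  # noisy altitude

# Each row of H is [1, t, t^2], so the unknown vector is [r0, v0, a/2].
H = np.column_stack([np.ones_like(t), t, t**2])
theta, *_ = np.linalg.lstsq(H, r, rcond=None)
r0_hat, v0_hat, half_a_hat = theta
```

The recovered coefficients estimate the initial altitude, initial velocity, and half the gravitational acceleration.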
Summary
Thank you!
Q & A
Chapter 3 Least Squares Estimation
3.4 Wiener Filtering
Sensor Fusion Study
AI Robotics KR
Sensor Fusion Study (AI Robotics KR) Chapter 3 Least Squares Estimation 1 / 44
1 Introduction
2 3.4.1 Parametric Filter Optimization
3 3.4.2 General Filter Optimization
4 3.4.3 Noncausal Filter Optimization
5 3.4.4 Causal Filter Optimization
6 3.4.5 Comparison
Section 1
Introduction
Introduction
A brief review of Wiener filtering; no prior knowledge of Wiener filtering is assumed.
Wiener filtering is historically important and is still widely used in signal
processing and communication theory.
But Wiener filtering is not used much for state estimation, so this section is optional.
History of Wiener Filtering
Design a linear, time-invariant filter to extract a signal from noise,
approaching the problem from the frequency-domain perspective.
Developed during World War II by Norbert Wiener.
First published in 1942, but not made public until 1949 [Wie64].
Andrey Kolmogorov actually solved a more general problem earlier
(1941), and Mark Krein also worked on the same problem (1945).
Kolmogorov's and Krein's work was independent of Wiener's work,
and Wiener acknowledged that Kolmogorov's work predated his own
[Wie56].
However, Kolmogorov's and Krein's work did not become well known
in the Western world until later, since it was published in Russian
[Kol41].
A nontechnical account of Wiener's work is given in his autobiography
[Wie56].
Backgrounds (pp. 94-95) I
To set up the presentation of the Wiener filter, we first need to ask
the following question:

Question
How does the power spectrum of a stochastic process x(t) change
when it goes through an LTI system with impulse response g(t)?

Output y(t)
The output y(t) of the system is given by the convolution of the
impulse response with the input:

y(t) = g(t) ∗ x(t)
Time Invariance
Because the system is time-invariant, a time shift in the input produces an
equal time shift in the output:

y(t + α) = g(t) ∗ x(t + α)

Convolution Integral
Multiplying the above two equations and writing out the convolutions
as integrals gives

y(t)y(t + α) = ∫ g(τ)x(t − τ)dτ ∫ g(γ)x(t + α − γ)dγ
Autocorrelation of y(t)
Taking the expected value of both sides of the above equation gives
the autocorrelation of y(t) as a function of the autocorrelation of x(t):

E[y(t)y(t + α)] = ∬ g(τ)g(γ)E[x(t − τ)x(t + α − γ)] dτ dγ

In shorthand notation,

R_y(α) = ∬ g(τ)g(γ)R_x(α + τ − γ) dτ dγ
Taking the Fourier Transform
Take the Fourier transform of the above equation to obtain

∫ R_y(α)e^{−jωα} dα = ∭ g(τ)g(γ)R_x(α + τ − γ)e^{−jωα} dτ dγ dα
Power Spectrum of the Output y(t)
Now we define a new variable of integration β = α + τ − γ and
replace α in the above equation to obtain

S_y(ω) = ∭ g(τ)g(γ)R_x(β) e^{−jωβ} e^{jωτ} e^{−jωγ} dτ dγ dβ
       = G(−ω)G(ω)S_x(ω)    (1)

In other words, the power spectrum of the output y(t) is determined by
the Fourier transform of the impulse response of the system, G(ω),
and the power spectrum of the input x(t).
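The discrete-time analogue of this relation is easy to check numerically: the double sum over g(τ)g(γ) collapses to the autocorrelation of g convolved with R_x. A small sketch with arbitrary illustrative sequences:

```python
import numpy as np

g = np.array([1.0, 0.5, 0.25])                 # illustrative impulse response
R_x_arr = np.array([0.2, 0.5, 1.0, 0.5, 0.2])  # illustrative R_x, lag 0 at center

def R_x(lag):
    idx = lag + len(R_x_arr) // 2
    return R_x_arr[idx] if 0 <= idx < len(R_x_arr) else 0.0

# Direct double sum: R_y(a) = sum_{tau,gamma} g(tau) g(gamma) R_x(a + tau - gamma)
lags = range(-4, 5)
R_y_direct = [sum(g[tau] * g[gam] * R_x(a + tau - gam)
                  for tau in range(len(g)) for gam in range(len(g)))
              for a in lags]

# Equivalent form: convolve R_x with the (symmetric) autocorrelation of g
R_y_conv = np.convolve(np.convolve(g, g[::-1]), R_x_arr)
```

In the frequency domain this same identity reads S_y(ω) = G(−ω)G(ω)S_x(ω).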
Problem Statement
Design a stable LTI filter to extract a signal from noise.

Quantities of Interest
x(t) = noise-free signal
v(t) = additive noise
g(t) = filter impulse response (to be designed)
x̂(t) = output of filter [estimate of x(t)]
e(t) = estimation error = x(t) − x̂(t)
Mathematical Expression
These quantities are represented in fig. 1, from which we see that

x̂(t) = g(t) ∗ [x(t) + v(t)]

Fourier Transform
X̂(ω) = G(ω)[X(ω) + V(ω)]
E(ω) = X(ω) − X̂(ω)
     = X(ω) − G(ω)[X(ω) + V(ω)]
     = [1 − G(ω)]X(ω) − G(ω)V(ω)

The error signal e(t) is the superposition of the system [1 − G(ω)]
acting on the signal x(t) and the system −G(ω) acting on the noise v(t).
Since the signal and noise are independent, eq. 1 gives

S_e(ω) = [1 − G(ω)][1 − G(−ω)]S_x(ω) + G(ω)G(−ω)S_v(ω)    (2)
Variance of Estimation Error
Recall (Equation 2.92):

S_X(ω) = ∫_{−∞}^{∞} R_X(τ) e^{−jωτ} dτ
R_X(τ) = (1/2π) ∫_{−∞}^{∞} S_X(ω) e^{jωτ} dω    (3)

The variance of the estimation error is obtained from eq. 3 (Equation
2.92) as

E[e²(t)] = (1/2π) ∫ S_e(ω) dω    (4)

To find the optimal filter G(ω) we need to minimize E[e²(t)],
which means that we need to know S_x(ω) and S_v(ω), the statistical
properties of the signal x(t) and the noise v(t).
Section 2
3.4.1 Parametric Filter Optimization
3.4.1 Parametric Filter Optimization I
To simplify the determination of the optimal filter G(ω), assume that
the optimal filter is a first-order, low-pass filter (stable and
causal) with a bandwidth 1/T to be determined by parametric
optimization:

G(ω) = 1 / (1 + Tjω)

This may not be a valid assumption, but it reduces the problem to a
parametric optimization problem.
To simplify the problem further, suppose that S_x(ω) and S_v(ω) have
the following forms:

S_x(ω) = 2σ²β / (ω² + β²)
S_v(ω) = A
3.4.1 Parametric Filter Optimization II
In other words, the noise v(t) is white. From eq. 2 (Equation 3.78)
we obtain

S_e(ω) = (Tjω / (1 + Tjω)) (−Tjω / (1 − Tjω)) (2σ²β / (ω² + β²)) + (1 / (1 + Tjω)) (1 / (1 − Tjω)) A

Now we can substitute S_e(ω) into eq. 4 (Equation 3.79) and
differentiate with respect to T to find

T_opt = √A / (σ√(2β) − β√A)
Example 3.8
If A = σ = β = 1, then the optimal time constant of the filter is

T = 1 / (√2 − 1) ≈ 2.4

and the optimal filter is

G(ω) = 1 / (1 + jωT) = (1/T) / (1/T + jω)
g(t) = (1/T) e^{−t/T},  t ≥ 0

Converting this filter to the time domain results in

dx̂/dt = (1/T)(−x̂ + y)
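With A = σ = β = 1, the optimal time constant can be cross-checked numerically by evaluating E[e²(t)] = (1/2π)∫S_e(ω)dω on a grid of T values. A sketch (the truncated frequency grid is an assumption that introduces a small integration error):

```python
import numpy as np

w = np.linspace(-300.0, 300.0, 60001)     # truncated frequency grid
dw = w[1] - w[0]
S_x = 2.0 / (w**2 + 1.0)                  # signal spectrum with sigma = beta = 1
A = 1.0                                   # white noise level

def mse(T):
    """(1/2pi) * integral of |1-G|^2 S_x + |G|^2 S_v for G = 1/(1 + T j w)."""
    G2 = 1.0 / (1.0 + (T * w)**2)
    one_minus_G2 = (T * w)**2 / (1.0 + (T * w)**2)
    return np.sum(one_minus_G2 * S_x + G2 * A) * dw / (2.0 * np.pi)

Ts = np.linspace(0.5, 5.0, 451)
Js = np.array([mse(T) for T in Ts])
T_best = Ts[np.argmin(Js)]                # close to 1/(sqrt(2) - 1) ≈ 2.414
```

The minimum mean square error found this way is about 0.914, matching the value quoted in the comparison at the end of the chapter.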
Section 3
3.4.2 General Filter Optimization
General Approach to the Optimal Filter
Take a more general approach to finding the optimal filter.

The Expected Value of the Estimation Error
The expected value of the squared estimation error can be computed as

e(t) = x(t) − x̂(t)
e²(t) = x²(t) − 2x(t)x̂(t) + x̂²(t)
      = x²(t) − 2x(t) ∫ g(u)[x(t − u) + v(t − u)] du + ∬ g(u)g(γ)[x(t − u) + v(t − u)][x(t − γ) + v(t − γ)] du dγ

E[e²(t)] = E[x²(t)] − 2 ∫ g(u)R_x(u) du + ∬ g(u)g(γ)[R_x(u − γ) + R_v(u − γ)] du dγ    (5)
Calculus of Variations Approach
Use a calculus of variations approach [Fom00, Wei74] to find the filter
g(t) that minimizes E[e²(t)].
Replace g(t) in eq. 5 (Equation 3.87) with g(t) + εη(t), where
ε is some small number, and
η(t) is an arbitrary perturbation of g(t).
By the calculus of variations, E[e²(t)] is minimized by imposing eq. 6
(Equation 3.88) and then solving for the optimal g(t):

∂E[e²(t)] / ∂ε |_{ε=0} = 0    (6)

From eq. 5 (Equation 3.87) we can write

R_e(0) = R_x(0) − 2 ∫ [g(u) + εη(u)]R_x(u) du + ∬ [g(u) + εη(u)][g(γ) + εη(γ)][R_x(u − γ) + R_v(u − γ)] du dγ
Taking the Partial Derivative
Taking the partial derivative with respect to ε gives

∂R_e(0)/∂ε = −2 ∫ η(u)R_x(u) du + ∬ [η(u)g(γ) + η(γ)g(u) + 2εη(u)η(γ)][R_x(u − γ) + R_v(u − γ)] du dγ

∂R_e(0)/∂ε |_{ε=0} = −2 ∫ η(τ)R_x(τ) dτ + ∬ η(τ)g(γ)[R_x(τ − γ) + R_v(τ − γ)] dτ dγ + ∬ η(τ)g(u)[R_x(u − τ) + R_v(u − τ)] dτ du
Necessary Condition
Now recall from eq. 7 (Equation 2.87) that

|R_X(τ)| ≤ R_X(0)    (7)

and that R_x(τ − u) = R_x(u − τ) [i.e., R_x(τ) is even] if x(t) is stationary.
In this case, the above equation can be written as

0 = −2 ∫ η(τ)R_x(τ) dτ + 2 ∬ η(τ)g(u)[R_x(u − τ) + R_v(u − τ)] dτ du

This gives the necessary condition for the optimality of the filter g(t)
as follows:

∫ η(τ) [−R_x(τ) + ∫ g(u)[R_x(u − τ) + R_v(u − τ)] du] dτ = 0    (8)

We need to solve this for g(t) to find the optimal filter.
Section 4
3.4.3 Noncausal Filter Optimization
3.4.3 Noncausal Filter Optimization
If there are no restrictions on the causality of our filter, then
g(t) can be nonzero for t < 0,
which means that our perturbation η(t) can also be nonzero for t < 0.
Hence the quantity inside the square brackets in eq. 8
(Equation 3.92) must be zero. This results in

R_x(τ) = ∫ g(u)[R_x(u − τ) + R_v(u − τ)] du
       = g(τ) ∗ [R_x(τ) + R_v(τ)]
S_x(ω) = G(ω)[S_x(ω) + S_v(ω)]
G(ω) = S_x(ω) / (S_x(ω) + S_v(ω))    (9)

The transfer function of the optimal filter is the ratio of the power
spectrum of the signal x(t) to the sum of the power spectra of
x(t) and the noise v(t).
Example 3.9
Consider the system discussed in Example 3.8 with A = β = σ = 1.
The signal and noise power spectra are given as

S_x(ω) = 2 / (ω² + 1)
S_v(ω) = 1

From this we obtain the optimal noncausal filter from eq. 9 (Equation
3.93) as

G(ω) = 2 / (ω² + 3) = (1/√3) (2√3 / (ω² + 3))
g(t) = (1/√3) e^{−√3|t|} ≈ 0.58 e^{−1.73|t|},  t ∈ (−∞, ∞)
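The claimed impulse response can be verified numerically: the Fourier transform of (1/√3)e^{−√3|t|} should reproduce G(ω) = 2/(ω² + 3). A brief sketch (the truncated time grid is an assumption; the tail it drops is negligible):

```python
import numpy as np

t = np.linspace(-20.0, 20.0, 400001)
dt = t[1] - t[0]
g = np.exp(-np.sqrt(3.0) * np.abs(t)) / np.sqrt(3.0)   # candidate g(t)

# g(t) is even, so its Fourier transform reduces to a real cosine integral.
w_test = np.array([0.0, 1.0, 2.5])
G_num = np.array([np.sum(g * np.cos(w0 * t)) * dt for w0 in w_test])
G_ref = 2.0 / (w_test**2 + 3.0)
```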
Partial Fraction Expansion of G(ω)
In order to find a time-domain representation of the filter, we perform
a partial fraction expansion of G(ω) to find the causal part and the
anticausal part of the filter:

G(ω) = 1 / (√3(jω + √3))  [causal filter]  +  1 / (√3(−jω + √3))  [anticausal filter]

From this we see that

X̂(ω) = [1 / (√3(jω + √3))] Y(ω) + [1 / (√3(−jω + √3))] Y(ω)
      = X̂_c(ω) + X̂_a(ω)

where X̂_c(ω) is the causal part of X̂(ω) and
X̂_a(ω) is the anticausal part of X̂(ω).
In the time domain, this can be written as

x̂(t) = x̂_c(t) + x̂_a(t)
dx̂_c/dt = −√3 x̂_c + y/√3
dx̂_a/dt = √3 x̂_a − y/√3

The x̂_c equation runs forward in time and is therefore causal.
The x̂_a equation runs backward in time and is therefore anticausal and
stable. (If it ran forward in time, it would be unstable.)
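A sketch of this forward/backward realization, assuming a simple Euler discretization and an illustrative sine signal in white noise (both assumptions for demonstration only):

```python
import numpy as np

rt3 = np.sqrt(3.0)
dt = 0.001
t = np.arange(0.0, 10.0, dt)
rng = np.random.default_rng(2)
x = np.sin(0.5 * t)                      # illustrative signal
y = x + rng.normal(0.0, 1.0, t.size)     # noisy measurement

# Causal part: integrate x_c' = -sqrt(3) x_c + y/sqrt(3) forward in time.
x_c = np.zeros_like(t)
for k in range(1, t.size):
    x_c[k] = x_c[k - 1] + dt * (-rt3 * x_c[k - 1] + y[k - 1] / rt3)

# Anticausal part: x_a' = sqrt(3) x_a - y/sqrt(3) is stable run backward in time.
x_a = np.zeros_like(t)
for k in range(t.size - 2, -1, -1):
    x_a[k] = x_a[k + 1] - dt * (rt3 * x_a[k + 1] - y[k + 1] / rt3)

x_hat = x_c + x_a                        # noncausal (smoothed) estimate
```

Because the anticausal half needs future measurements, this filter is a smoother: it can only be run over a recorded batch of data.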
Section 5
3.4.4 Causal Filter Optimization
Theory I
If we require a causal filter for signal estimation, then g(t) = 0 for
t < 0, and the perturbation η(t) must also be equal to 0 for t < 0. In this
case, eq. 8 gives

R_x(τ) − ∫ g(u)[R_x(u − τ) + R_v(u − τ)] du = 0,  τ ≥ 0    (10)

The initial application of this equation was in the field of astrophysics
in 1894 [Sob63].
Explicit solutions were thought to be impossible, but Norbert Wiener
and Eberhard Hopf became famous when they solved the equation in 1931.
Their solution was so impressive that it became known as
the Wiener-Hopf equation.
Solution
To solve eq. 10 (Equation 3.99), postulate some function a(t) that is
arbitrary for t < 0 but equal to 0 for t ≥ 0. Then we obtain

R_x(τ) − ∫ g(u)[R_x(u − τ) + R_v(u − τ)] du = a(τ)
S_x(ω) − G(ω)[S_x(ω) + S_v(ω)] = A(ω)    (11)

For ease of notation, make the following definition:

S_xv(ω) = S_x(ω) + S_v(ω)
Then eq. 11 (Equation 3.100) becomes

S_x(ω) − G(ω)S_xv^+(ω)S_xv^−(ω) = A(ω)    (12)

where
S_xv^+(ω) is the part of S_xv(ω) that has all its poles and zeros in the
LHP (and hence corresponds to a causal time function), and
S_xv^−(ω) is the part of S_xv(ω) that has all its poles and zeros in the
RHP (and hence corresponds to an anticausal time function).
eq. 12 (Equation 3.102) can be written as

G(ω)S_xv^+(ω) = S_x(ω) / S_xv^−(ω) − A(ω) / S_xv^−(ω)

G(ω)S_xv^+(ω) is a causal time function [assuming that g(t) is stable], and
A(ω) / S_xv^−(ω) is an anticausal time function.
Transfer Function of the Optimal Filter
Therefore,

G(ω)S_xv^+(ω) = causal part of [S_x(ω) / S_xv^−(ω)]
G(ω) = (1 / S_xv^+(ω)) × causal part of [S_x(ω) / S_xv^−(ω)]    (13)

This gives the transfer function of the optimal causal filter.
Example 3.10
Consider the system discussed in Section 3.4.1 with A = β = σ = 1.
This was also discussed in Example 3.9. For this example we have

S_x(ω) = 2 / (ω² + 1)
S_xv(ω) = (ω² + 3) / (ω² + 1)
Splitting this up into its causal and anticausal factors gives

S_xv(ω) = ((jω + √3) / (jω + 1)) ((−jω + √3) / (−jω + 1))

where the first factor is S_xv^+(ω) and the second is S_xv^−(ω). Then

S_x(ω) / S_xv^−(ω) = 2(−jω + 1) / ((ω² + 1)(−jω + √3))
                   = 2 / ((jω + 1)(−jω + √3))
                   = (√3 − 1) / (jω + 1)  [causal part]  +  (√3 − 1) / (−jω + √3)  [anticausal part]
Transfer Function and Impulse Response of the Optimal Filter
eq. 13 (Equation 3.104) gives

G(ω) = ((jω + 1) / (jω + √3)) ((√3 − 1) / (jω + 1)) = (√3 − 1) / (jω + √3)
g(t) = (√3 − 1) e^{−√3 t},  t ≥ 0

This gives the transfer function and impulse response of the optimal
filter when causality is required.
Section 6
3.4.5 Comparison
3.4.5 Comparison I
Comparing the three optimal filter designs presented in this section
(Example 3.8, Example 3.9, Example 3.10), it can be shown
that the mean square estimation errors of the filters are as follows [Bro96]:

Parameter optimization method: E[e²(t)] = 0.914
Causal Wiener filter: E[e²(t)] = 0.732
Noncausal Wiener filter: E[e²(t)] = 0.577

As expected, the estimation error decreases as we place fewer
constraints on the filter.
However, the removal of constraints makes the filter design problem
more difficult.
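These three numbers can be cross-checked by numerically integrating the error spectrum S_e(ω) = |1 − G(ω)|²S_x(ω) + |G(ω)|²S_v(ω) for each of the three filters. A sketch (the truncated frequency grid is an assumption that introduces a small error):

```python
import numpy as np

w = np.linspace(-2000.0, 2000.0, 400001)
dw = w[1] - w[0]
S_x = 2.0 / (w**2 + 1.0)                 # signal spectrum, A = beta = sigma = 1
S_v = 1.0                                # white noise spectrum

def mse(G):
    """Mean square error (1/2pi) * integral of the error spectrum for filter G."""
    S_e = np.abs(1.0 - G)**2 * S_x + np.abs(G)**2 * S_v
    return np.sum(S_e) * dw / (2.0 * np.pi)

T = 1.0 / (np.sqrt(2.0) - 1.0)           # parametric first-order filter
G_param = 1.0 / (1.0 + 1j * T * w)
G_causal = (np.sqrt(3.0) - 1.0) / (1j * w + np.sqrt(3.0))
G_noncausal = S_x / (S_x + S_v)          # = 2 / (w^2 + 3)

results = [mse(G_param), mse(G_causal), mse(G_noncausal)]
```

The three integrals reproduce 0.914, 0.732, and 0.577 to within the truncation error of the grid.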
The Wiener filter is not very amenable to state estimation
because of
difficulty in extension to MIMO problems with state variable
descriptions, and
difficulty in application to signals with time-varying statistical
properties.
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
AI Robotics KR
 
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
AI Robotics KR
 
Sensor Fusion Study - Ch4. Propagation of states and covariance [김동현]
Sensor Fusion Study - Ch4. Propagation of states and covariance [김동현]Sensor Fusion Study - Ch4. Propagation of states and covariance [김동현]
Sensor Fusion Study - Ch4. Propagation of states and covariance [김동현]
AI Robotics KR
 
Sensor Fusion Study - Ch2. Probability Theory [Stella]
Sensor Fusion Study - Ch2. Probability Theory [Stella]Sensor Fusion Study - Ch2. Probability Theory [Stella]
Sensor Fusion Study - Ch2. Probability Theory [Stella]
AI Robotics KR
 
Sensor Fusion Study - Ch1. Linear System [Hayden]
Sensor Fusion Study - Ch1. Linear System [Hayden]Sensor Fusion Study - Ch1. Linear System [Hayden]
Sensor Fusion Study - Ch1. Linear System [Hayden]
AI Robotics KR
 
ROS2 on WebOS - Brian Shin(LG)
ROS2 on WebOS - Brian Shin(LG)ROS2 on WebOS - Brian Shin(LG)
ROS2 on WebOS - Brian Shin(LG)
AI Robotics KR
 
Bayesian Inference : Kalman filter 에서 Optimization 까지 - 김홍배 박사님
Bayesian Inference : Kalman filter 에서 Optimization 까지 - 김홍배 박사님Bayesian Inference : Kalman filter 에서 Optimization 까지 - 김홍배 박사님
Bayesian Inference : Kalman filter 에서 Optimization 까지 - 김홍배 박사님
AI Robotics KR
 

More from AI Robotics KR (12)

Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]
Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]
Sensor Fusion Study - Real World 2: GPS & INS Fusion [Stella Seoyeon Yang]
 
Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]
Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]
Sensor Fusion Study - Ch14. The Unscented Kalman Filter [Sooyoung Kim]
 
Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]
Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]
Sensor Fusion Study - Real World 1: Lidar radar fusion [Kim Soo Young]
 
Sensor Fusion Study - Ch12. Additional Topics in H-Infinity Filtering [Hayden]
Sensor Fusion Study - Ch12. Additional Topics in H-Infinity Filtering [Hayden]Sensor Fusion Study - Ch12. Additional Topics in H-Infinity Filtering [Hayden]
Sensor Fusion Study - Ch12. Additional Topics in H-Infinity Filtering [Hayden]
 
Sensor Fusion Study - Ch11. The H-Infinity Filter [김영범]
Sensor Fusion Study - Ch11. The H-Infinity Filter [김영범]Sensor Fusion Study - Ch11. The H-Infinity Filter [김영범]
Sensor Fusion Study - Ch11. The H-Infinity Filter [김영범]
 
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
Sensor Fusion Study - Ch9. Optimal Smoothing [Hayden]
 
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
Sensor Fusion Study - Ch6. Alternate Kalman filter formulations [Jinhyuk Song]
 
Sensor Fusion Study - Ch4. Propagation of states and covariance [김동현]
Sensor Fusion Study - Ch4. Propagation of states and covariance [김동현]Sensor Fusion Study - Ch4. Propagation of states and covariance [김동현]
Sensor Fusion Study - Ch4. Propagation of states and covariance [김동현]
 
Sensor Fusion Study - Ch2. Probability Theory [Stella]
Sensor Fusion Study - Ch2. Probability Theory [Stella]Sensor Fusion Study - Ch2. Probability Theory [Stella]
Sensor Fusion Study - Ch2. Probability Theory [Stella]
 
Sensor Fusion Study - Ch1. Linear System [Hayden]
Sensor Fusion Study - Ch1. Linear System [Hayden]Sensor Fusion Study - Ch1. Linear System [Hayden]
Sensor Fusion Study - Ch1. Linear System [Hayden]
 
ROS2 on WebOS - Brian Shin(LG)
ROS2 on WebOS - Brian Shin(LG)ROS2 on WebOS - Brian Shin(LG)
ROS2 on WebOS - Brian Shin(LG)
 
Bayesian Inference : Kalman filter 에서 Optimization 까지 - 김홍배 박사님
Bayesian Inference : Kalman filter 에서 Optimization 까지 - 김홍배 박사님Bayesian Inference : Kalman filter 에서 Optimization 까지 - 김홍배 박사님
Bayesian Inference : Kalman filter 에서 Optimization 까지 - 김홍배 박사님
 

Recently uploaded

NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
Amil Baba Dawood bangali
 
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdfTop 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Teleport Manpower Consultant
 
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&BDesign and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Sreedhar Chowdam
 
Fundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptxFundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptx
manasideore6
 
ASME IX(9) 2007 Full Version .pdf
ASME IX(9)  2007 Full Version       .pdfASME IX(9)  2007 Full Version       .pdf
ASME IX(9) 2007 Full Version .pdf
AhmedHussein950959
 
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang,  ICLR 2024, MLILAB, KAIST AI.pdfJ.Yang,  ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
MLILAB
 
ethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.pptethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.ppt
Jayaprasanna4
 
Railway Signalling Principles Edition 3.pdf
Railway Signalling Principles Edition 3.pdfRailway Signalling Principles Edition 3.pdf
Railway Signalling Principles Edition 3.pdf
TeeVichai
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
Kamal Acharya
 
Runway Orientation Based on the Wind Rose Diagram.pptx
Runway Orientation Based on the Wind Rose Diagram.pptxRunway Orientation Based on the Wind Rose Diagram.pptx
Runway Orientation Based on the Wind Rose Diagram.pptx
SupreethSP4
 
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdfAKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
SamSarthak3
 
DESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docxDESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docx
FluxPrime1
 
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Dr.Costas Sachpazis
 
Standard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - NeometrixStandard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - Neometrix
Neometrix_Engineering_Pvt_Ltd
 
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
obonagu
 
Gen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdfGen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdf
gdsczhcet
 
Architectural Portfolio Sean Lockwood
Architectural Portfolio Sean LockwoodArchitectural Portfolio Sean Lockwood
Architectural Portfolio Sean Lockwood
seandesed
 
ML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptxML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptx
Vijay Dialani, PhD
 
Investor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptxInvestor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptx
AmarGB2
 
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdfHybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
fxintegritypublishin
 

Recently uploaded (20)

NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
 
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdfTop 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
 
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&BDesign and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
Design and Analysis of Algorithms-DP,Backtracking,Graphs,B&B
 
Fundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptxFundamentals of Electric Drives and its applications.pptx
Fundamentals of Electric Drives and its applications.pptx
 
ASME IX(9) 2007 Full Version .pdf
ASME IX(9)  2007 Full Version       .pdfASME IX(9)  2007 Full Version       .pdf
ASME IX(9) 2007 Full Version .pdf
 
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang,  ICLR 2024, MLILAB, KAIST AI.pdfJ.Yang,  ICLR 2024, MLILAB, KAIST AI.pdf
J.Yang, ICLR 2024, MLILAB, KAIST AI.pdf
 
ethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.pptethical hacking-mobile hacking methods.ppt
ethical hacking-mobile hacking methods.ppt
 
Railway Signalling Principles Edition 3.pdf
Railway Signalling Principles Edition 3.pdfRailway Signalling Principles Edition 3.pdf
Railway Signalling Principles Edition 3.pdf
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
 
Runway Orientation Based on the Wind Rose Diagram.pptx
Runway Orientation Based on the Wind Rose Diagram.pptxRunway Orientation Based on the Wind Rose Diagram.pptx
Runway Orientation Based on the Wind Rose Diagram.pptx
 
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdfAKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
AKS UNIVERSITY Satna Final Year Project By OM Hardaha.pdf
 
DESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docxDESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docx
 
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
 
Standard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - NeometrixStandard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - Neometrix
 
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
 
Gen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdfGen AI Study Jams _ For the GDSC Leads in India.pdf
Gen AI Study Jams _ For the GDSC Leads in India.pdf
 
Architectural Portfolio Sean Lockwood
Architectural Portfolio Sean LockwoodArchitectural Portfolio Sean Lockwood
Architectural Portfolio Sean Lockwood
 
ML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptxML for identifying fraud using open blockchain data.pptx
ML for identifying fraud using open blockchain data.pptx
 
Investor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptxInvestor-Presentation-Q1FY2024 investor presentation document.pptx
Investor-Presentation-Q1FY2024 investor presentation document.pptx
 
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdfHybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
 

Sensor Fusion Study - Ch3. Least Square Estimation [강소라, Stella, Hayden]

  • 1. Least squares estimation Sora Kang AI Robotics KR Sensor fusion Study
  • 2. Overview 3.1. Estimation Of A Constant 3.2. Weighted Least Squares Estimation
  • 4. 3.1
  • 5. 3.1
  • 6. 3.2
  • 7. 3.2
  • 8. 3.2
  • 9. Recursive Least Squares Estimation AI Robotics KR Sensor Fusion Study Stella Seoyeon Yang
  • 10. 3.3 Recursive Least Squares Estimation If we obtain measurements sequentially and want to update our estimate of x with each new measurement, we need to augment the H matrix and completely recompute the estimate x_hat. If the number of measurements becomes large, then the computational effort could become prohibitive. Weighted Least Squares Estimator
  • 11. 3.3 Recursive Least Squares Estimation Linear Recursive Least Squares Estimator: gain matrix, correction term
  • 12. 3.3 Recursive Least Squares Estimation If the measurement noise v_k is zero mean for all k, and x_hat_0 = E(x), then E(x_hat_k) = E(x) for all k; that is, the estimator is unbiased.
  • 13. 3.3 Recursive Least Squares Estimation The optimality criterion: minimize the cost function J, the sum of the variances of the estimation errors.
  • 14. 3.3 Recursive Least Squares Estimation Estimation-error covariance: P_k is positive definite.
  • 15. 3.3 Recursive Least Squares Estimation Optimality Criterion, Minimum
  • 16. 3.3 Recursive Least Squares Estimation Summary
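The recursive update summarized above can be sketched in a few lines of Python. This is a minimal illustration, not the book's code: the constant vector x, the random 1x2 measurement matrices H_k, and the noise covariance R below are all assumed for the example, and the Joseph form from Section 3.3.1 is used for the covariance update.

```python
import numpy as np

# Model: y_k = H_k x + v_k, measurement noise covariance R.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])          # constant vector to estimate (assumed)
R = np.array([[0.01]])                  # measurement noise covariance (assumed)

x_hat = np.zeros(2)                     # initial estimate x_hat_0
P = 100.0 * np.eye(2)                   # large initial covariance (vague prior)

for k in range(200):
    H = rng.standard_normal((1, 2))     # measurement matrix for step k (assumed)
    y = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)

    # Gain: K_k = P_{k-1} H_k^T (H_k P_{k-1} H_k^T + R_k)^{-1}
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)

    # Estimate update with the correction term K_k (y_k - H_k x_hat_{k-1})
    x_hat = x_hat + (K @ (y - H @ x_hat)).ravel()

    # Joseph-form covariance update: P_k = (I - K H) P (I - K H)^T + K R K^T
    IKH = np.eye(2) - K @ H
    P = IKH @ P @ IKH.T + K @ R @ K.T

print(np.round(x_hat, 2))               # should be close to x_true
```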
  • 17. 3.3.1 Alternate Estimator Forms Sometimes it is useful to write the equations for P_k and K_k in alternate forms. Although these alternate forms are mathematically identical, they can be beneficial from a computational point of view. They can also lead to new results, which we will discover in later chapters.
  • 19. 3.3.1 Alternate Estimator Forms This is a simpler equation for P_{k}, but numerical computing problems (i.e., scaling issues) may cause this expression for P_{k} to not be positive definite, even when P_{k-1} and R_{k} are positive definite.
  • 22. 3.3.1 Alternate Estimator Forms EX. 3.4 In this example, we illustrate the computational advantages of the first form of the covariance update equation compared with the third form. Suppose we have a scalar parameter x and a perfect measurement of it. That is, H1 = 1 and R1 = 0. Further suppose that our initial estimation covariance P0 = 6, and our computer provides precision of three digits to the right of the decimal point for each quantity that it computes. The estimator gain K1 is
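The precision issue in Example 3.4 can be imitated numerically. The sketch below is an illustration under stated assumptions rather than the book's exact numbers: every intermediate result is rounded to three digits after the decimal point, and a small nonzero R1 = 0.001 is assumed so the rounding error is visible. It shows how the simple ("third") covariance update can lose positive definiteness while the Joseph ("first") form does not.

```python
# Mimic a computer with three digits of precision to the right of the decimal.
def fl(x):
    """Round to three digits right of the decimal point."""
    return round(x, 3)

P0, R1, H1 = 6.0, 0.001, 1.0            # R1 is an assumption for this sketch

K1 = fl(P0 * H1 / fl(H1 * P0 * H1 + R1))        # 0.99983... rounds to 1.000

# Joseph ("first") form: P1 = (1 - K H) P0 (1 - K H) + K R K
P1_joseph = fl(fl(fl(1 - K1 * H1) * P0) * fl(1 - K1 * H1)) + fl(K1 * R1 * K1)

# Simple ("third") form: P1 = (1 - K H) P0
P1_simple = fl(fl(1 - K1 * H1) * P0)

# The Joseph form retains the residual uncertainty; the simple form collapses
# to exactly zero and is no longer positive definite.
print(P1_joseph, P1_simple)
```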
  • 23. 3.3.2 Curve fitting We measure data one sample at a time (y1, y2, ...) and want to find the best fit of a curve to the data. The curve that we want to fit to the data could be constrained to be linear, quadratic, sinusoidal, or some other shape, depending on the underlying problem. EXAMPLE 3.7 Suppose that we know a priori that the underlying data is a quadratic function of time. In this case, we have a quadratic data-fitting problem. For example, suppose we are measuring the altitude of a free-falling object. We know from our understanding of physics that altitude r is a function of the acceleration due to gravity, the initial altitude and velocity of the object r_{0} and v_{0}, and time t, as given by the equation r = r_{0} + v_{0} t + (a/2) t^{2}. So if we measure r at various time instants and fit a quadratic to the resulting r versus t curve, then we have an estimate of the parameters r_{0}, v_{0}, and a/2.
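The quadratic fit of Example 3.7 can be sketched with batch least squares. The true values of r0, v0, and a below are assumptions for illustration; H has one row [1, t_k, t_k^2] per measurement, and the least-squares solution estimates [r0, v0, a/2].

```python
import numpy as np

rng = np.random.default_rng(1)
r0, v0, a = 100.0, 5.0, -9.81           # assumed true initial altitude, velocity, gravity

t = np.linspace(0.0, 3.0, 50)
r = r0 + v0 * t + 0.5 * a * t**2 + rng.normal(0.0, 0.1, t.size)  # noisy altitude

# One row [1, t_k, t_k^2] per measurement; solve H x ~ r for x = [r0, v0, a/2].
H = np.column_stack([np.ones_like(t), t, t**2])
x_hat, *_ = np.linalg.lstsq(H, r, rcond=None)

print(np.round(x_hat, 2))               # estimates of r0, v0, a/2
```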
  • 26. 3.3.1 Alternate Estimator Forms Summary
  • 28. Chapter 3 Least Squares Estimation: 3.4 Wiener Filtering. Sensor Fusion Study (AI Robotics KR)
  • 29. 1 Introduction; 2 3.4.1 Parametric Filter Optimization; 3 3.4.2 General Filter Optimization; 4 3.4.3 Noncausal Filter Optimization; 5 3.4.4 Causal Filter Optimization; 6 3.4.5 Comparison
  • 30. Section 1: Introduction
  • 31. Introduction A brief review of Wiener filtering. Knowledge of Wiener filtering is not assumed. Wiener filtering is historically important and still widely used in signal processing and communication theory. But since it is not used much for state estimation, this section is optional.
  • 32. History of Wiener Filtering Design a linear, time-invariant filter to extract a signal from noise, approaching the problem from the frequency-domain perspective. Invented during World War II by Norbert Wiener. First published in 1942, but not known to the public until 1949 [Wie64]. Andrey Kolmogorov actually solved a more general problem earlier (1941), and Mark Krein also worked on the same problem (1945). Kolmogorov's and Krein's work was independent of Wiener's work, and Wiener acknowledges that Kolmogorov's work predated his own [Wie56]. However, Kolmogorov's and Krein's work did not become well known in the Western world until later, since it was published in Russian [Kol41]. A nontechnical account of Wiener's work is given in his autobiography [Wie56].
  • 33. Background (pp. 94-95) To set up the presentation of the Wiener filter, we first need to ask the following question: how does the power spectrum of a stochastic process x(t) change when it goes through an LTI system with impulse response g(t)? The output y(t) of the system is given by the convolution of the impulse response with the input: y(t) = g(t) * x(t)
  • 34. Time Invariance The system is time-invariant, so a time shift in the input results in an equal time shift in the output: y(t + α) = g(t) * x(t + α). Convolution Integral Multiplying the above two equations and writing out the convolutions as integrals gives y(t) y(t + α) = ∫ g(τ) x(t − τ) dτ ∫ g(γ) x(t + α − γ) dγ
  • 35. Autocorrelation of y(t) Taking the expected value of both sides of the above equation gives the autocorrelation of y(t) as a function of the autocorrelation of x(t): E[y(t) y(t + α)] = ∬ g(τ) g(γ) E[x(t − τ) x(t + α − γ)] dτ dγ. In shorthand notation this is written as R_y(α) = ∬ g(τ) g(γ) R_x(α + τ − γ) dτ dγ
  • 36. Taking the Fourier Transform Take the Fourier transform of the above equation to obtain ∫ R_y(α) e^{−jωα} dα = ∭ g(τ) g(γ) R_x(α + τ − γ) e^{−jωα} dτ dγ dα
  • 37. Power Spectrum of the Output y(t) Now we define a new variable of integration β = α + τ − γ and replace α in the above equation to obtain S_y(ω) = ∭ g(τ) g(γ) R_x(β) e^{−jωβ} e^{jωτ} e^{−jωγ} dτ dγ dβ = G(−ω) G(ω) S_x(ω)   (1) In other words, the power spectrum of the output y(t) is a function of the Fourier transform of the impulse response of the system, G(ω), and the power spectrum of the input x(t).
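Equation (1) can be spot-checked numerically under assumed signal and filter models: take g(t) = e^{-t} for t >= 0, so |G(ω)|^2 = 1/(1+ω^2), and R_x(τ) = e^{-|τ|}, so S_x(ω) = 2/(1+ω^2). Then S_y(ω) = 2/(1+ω^2)^2, whose inverse transform is R_y(α) = 0.5 (1+|α|) e^{-|α|}, and the double-integral expression for R_y(α) above should reproduce this.

```python
import numpy as np

tau = np.linspace(0.0, 18.0, 1201)
T, S = np.meshgrid(tau, tau, indexing="ij")      # (tau, gamma) integration grid

def trapz1(y, x):
    """Trapezoid rule along the last axis."""
    return ((y[..., 1:] + y[..., :-1]) * 0.5 * np.diff(x)).sum(axis=-1)

def R_y(alpha):
    # R_y(alpha) = double integral of g(tau) g(gamma) R_x(alpha + tau - gamma),
    # with the assumed g(t) = e^{-t} (t >= 0) and R_x(tau) = e^{-|tau|}.
    integrand = np.exp(-T - S) * np.exp(-np.abs(alpha + T - S))
    return trapz1(trapz1(integrand, tau), tau)

for alpha in (0.0, 1.0, 2.0):
    expected = 0.5 * (1 + abs(alpha)) * np.exp(-abs(alpha))
    print(alpha, round(float(R_y(alpha)), 4), round(float(expected), 4))
```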
  • 38. Problem Statement Design a stable LTI filter to extract a signal from noise. Quantities of interest: x(t) = noise-free signal; v(t) = additive noise; g(t) = filter impulse response (to be designed); x_hat(t) = output of filter [estimate of x(t)]; e(t) = estimation error = x(t) − x_hat(t)
  • 39. Mathematical Expression and Fourier Transform These quantities are represented in Fig. 1, from which we see that x_hat(t) = g(t) * [x(t) + v(t)], X_hat(ω) = G(ω)[X(ω) + V(ω)], and E(ω) = X(ω) − X_hat(ω) = X(ω) − G(ω)[X(ω) + V(ω)] = [1 − G(ω)] X(ω) − G(ω) V(ω). The error signal e(t) is the superposition of the system [1 − G(ω)] acting on the signal x(t) and the system G(ω) acting on the noise v(t). Since the signal and noise are uncorrelated, their contributions to the error spectrum add, so from eq. 1 we obtain S_e(ω) = [1 − G(ω)][1 − G(−ω)] S_x(ω) + G(ω) G(−ω) S_v(ω)   (2)
  • 40. Variance of the Estimation Error Recall (Equation 2.92) S_X(ω) = ∫_{−∞}^{∞} R_X(τ) e^{−jωτ} dτ, R_X(τ) = (1/2π) ∫_{−∞}^{∞} S_X(ω) e^{jωτ} dω   (3) The variance of the estimation error is obtained from eq. 3 as E[e²(t)] = (1/2π) ∫ S_e(ω) dω   (4) To find the optimal filter G(ω) we need to minimize E[e²(t)], which means that we need to know S_x(ω) and S_v(ω), the statistical properties of the signal x(t) and the noise v(t).
  • 42. Section 2: 3.4.1 Parametric Filter Optimization
  • 43. 3.4.1 Parametric Filter Optimization I To simplify the determination of the optimal filter G(ω), assume the optimal filter is a first-order, low-pass filter (stable and causal) with a bandwidth 1/T to be determined by parametric optimization: G(ω) = 1/(1 + Tjω). This may not be a valid assumption, but it reduces the problem to a parametric optimization problem. To simplify the problem further, suppose that S_x(ω) and S_v(ω) have the following forms: S_x(ω) = 2σ²β/(ω² + β²), S_v(ω) = A
  • 44. 3.4.1 Parametric Filter Optimization II In other words, the noise v(t) is white. From eq. 2 (Equation 3.78) we obtain S_e(ω) = (Tjω/(1 + Tjω))(−Tjω/(1 − Tjω))(2σ²β/(ω² + β²)) + (1/(1 + Tjω))(1/(1 − Tjω)) A. Now we can substitute S_e(ω) into eq. 4 (Equation 3.79) and differentiate with respect to T to find T_opt = √A / (σ√(2β) − β√A)
  • 45. Example 3.8 If A = σ = β = 1, then the optimal time constant of the filter is T = 1/(√2 − 1) ≈ 2.4, and the optimal filter is G(ω) = 1/(1 + jωT) = (1/T)/((1/T) + jω), g(t) = (1/T) e^{−t/T}, t ≥ 0. Converting this filter to the time domain results in d x_hat/dt = (1/T)(−x_hat + y)
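The optimization in Example 3.8 can be checked by brute force: evaluate E[e²] = (1/2π) ∫ S_e(ω) dω on a grid of T values using the S_e expression above (with A = σ = β = 1) and confirm that the minimizer lands near 1/(√2 − 1) ≈ 2.414. The frequency grid and truncation below are numerical assumptions for this sketch.

```python
import numpy as np

w = np.linspace(0.0, 2000.0, 200001)     # half-line frequency grid (truncated)
dw = w[1] - w[0]

def J(T):
    # S_e(w) = T^2 w^2/(1+T^2 w^2) * 2/(w^2+1) + 1/(1+T^2 w^2)  for A=sigma=beta=1
    Se = (T * w)**2 / (1 + (T * w)**2) * 2.0 / (w**2 + 1) + 1.0 / (1 + (T * w)**2)
    half = (0.5 * (Se[0] + Se[-1]) + Se[1:-1].sum()) * dw   # trapezoid on [0, 2000]
    return half / np.pi          # (1/2pi) * 2 * half-line integral (even integrand)

Ts = np.linspace(1.5, 3.5, 201)
costs = np.array([J(T) for T in Ts])
T_opt = Ts[np.argmin(costs)]
print(round(float(T_opt), 2))    # should land near 1/(sqrt(2)-1) ≈ 2.414
```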
  • 47. Section 3: 3.4.2 General Filter Optimization
  • 48. General Approach to the Optimal Filter We now take a more general approach to finding the optimal filter. The expected value of the squared estimation error can be computed as e(t) = x(t) − x_hat(t); e²(t) = x²(t) − 2 x(t) x_hat(t) + x_hat²(t) = x²(t) − 2 x(t) ∫ g(u)[x(t − u) + v(t − u)] du + ∬ g(u) g(γ) [x(t − u) + v(t − u)][x(t − γ) + v(t − γ)] du dγ; E[e²(t)] = E[x²(t)] − 2 ∫ g(u) R_x(u) du + ∬ g(u) g(γ) [R_x(u − γ) + R_v(u − γ)] du dγ   (5)
  • 49. Calculus of Variations Approach Use a calculus of variations approach [Fom00, Wei74] to find the filter g(t) that minimizes E[e²(t)]. Replace g(t) in eq. 5 (Equation 3.87) with g(t) + εη(t), where ε is some small number and η(t) is an arbitrary perturbation in g(t). By the calculus of variations, we can minimize E[e²(t)] by setting ∂E[e²(t)]/∂ε |_{ε=0} = 0   (6) and then solving for the optimal g(t). From eq. 5 (Equation 3.87) we can write R_e(0) = R_x(0) − 2 ∫ [g(u) + εη(u)] R_x(u) du + ∬ [g(u) + εη(u)][g(γ) + εη(γ)] [R_x(u − γ) + R_v(u − γ)] du dγ
  • 50. Taking the Partial Derivative Taking the partial derivative with respect to ε gives ∂R_e(0)/∂ε = −2 ∫ η(u) R_x(u) du + ∬ [η(u) g(γ) + η(γ) g(u) + 2ε η(u) η(γ)] [R_x(u − γ) + R_v(u − γ)] du dγ; ∂R_e(0)/∂ε |_{ε=0} = −2 ∫ η(τ) R_x(τ) dτ + ∬ η(τ) g(γ) [R_x(τ − γ) + R_v(τ − γ)] dτ dγ + ∬ η(τ) g(u) [R_x(u − τ) + R_v(u − τ)] dτ du
  • 51. Necessary Condition Now recall from eq. 7 (Equation 2.87) that |R_X(τ)| ≤ R_X(0)   (7) and that R_x(τ − u) = R_x(u − τ) [i.e., R_x(τ) is even] if x(t) is stationary. In this case, the above equation can be written as 0 = −2 ∫ η(τ) R_x(τ) dτ + 2 ∬ η(τ) g(u) [R_x(u − τ) + R_v(u − τ)] dτ du. This gives the necessary condition for the optimality of the filter g(t): ∫ η(τ) [−R_x(τ) + ∫ g(u) [R_x(u − τ) + R_v(u − τ)] du] dτ = 0   (8) We need to solve this for g(t) to find the optimal filter.
  • 53. Section 4: 3.4.3 Noncausal Filter Optimization
  • 54. 3.4.3 Noncausal Filter Optimization If there are no restrictions on the causality of our filter, then g(t) can be nonzero for t < 0, which means that our perturbation η(t) can also be nonzero for t < 0. This means that the quantity inside the square brackets in eq. 8 (Equation 3.92) must be zero. This results in R_x(τ) = ∫ g(u) [R_x(u − τ) + R_v(u − τ)] du = g(τ) * [R_x(τ) + R_v(τ)]; S_x(ω) = G(ω) [S_x(ω) + S_v(ω)]; G(ω) = S_x(ω) / (S_x(ω) + S_v(ω))   (9) The transfer function of the optimal filter is the ratio of the power spectrum of the signal x(t) to the sum of the power spectra of x(t) and the noise v(t).
  • 55. Example 3.9 Consider the system discussed in Example 3.8 with A = β = σ = 1. The signal and noise power spectra are S_x(ω) = 2/(ω² + 1), S_v(ω) = 1. From this we obtain the optimal noncausal filter from eq. 9 (Equation 3.93) as G(ω) = 2/(ω² + 3) = (1/√3)(2√3/(ω² + 3)), g(t) = (1/√3) e^{−√3|t|} ≈ 0.58 e^{−1.73|t|}, t ∈ (−∞, ∞)
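As a sanity check on Example 3.9, the stated impulse response g(t) = (1/√3) e^{−√3|t|} should transform back to G(ω) = 2/(ω² + 3). The numeric transform below confirms the pair at a few sample frequencies; the grid and truncation are assumptions of this sketch.

```python
import numpy as np

t = np.linspace(-30.0, 30.0, 600001)
dt = t[1] - t[0]
g = np.exp(-np.sqrt(3.0) * np.abs(t)) / np.sqrt(3.0)   # noncausal filter impulse response

for w in (0.0, 1.0, 2.5):
    # g is even, so its Fourier transform is real: G(w) = ∫ g(t) cos(w t) dt
    G_num = np.sum(g * np.cos(w * t)) * dt
    G_true = 2.0 / (w**2 + 3.0)
    print(w, round(float(G_num), 4), round(G_true, 4))
```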
  • 56. Partial Fraction Expansion of G(ω) In order to find a time-domain representation of the filter, we perform a partial fraction expansion of G(ω) into the causal part and the anticausal part of the filter: G(ω) = 1/(√3(jω + √3)) [causal filter] + 1/(√3(−jω + √3)) [anticausal filter]. From this we see that X_hat(ω) = Y(ω)/(√3(jω + √3)) − Y(ω)/(√3(jω − √3)) = X_hat_c(ω) + X_hat_a(ω), where X_hat_c(ω) is the causal part of X_hat(ω) and X_hat_a(ω) is the anticausal part of X_hat(ω).
  • 57. In the time domain, this can be written as x_hat(t) = x_hat_c(t) + x_hat_a(t), with d x_hat_c/dt = −√3 x_hat_c + y/√3 and d x_hat_a/dt = √3 x_hat_a − y/√3. The x_hat_c equation runs forward in time and is therefore causal. The x_hat_a equation runs backward in time and is therefore anticausal and stable. (If it ran forward in time, it would be unstable.)
  • 59. Section 5: 3.4.4 Causal Filter Optimization
  • 60. Theory If we require a causal filter for signal estimation, then g(t) = 0 for t < 0, and the perturbation η(t) must also be equal to 0 for t < 0. In this case, eq. 8 gives R_x(τ) − ∫ g(u) [R_x(u − τ) + R_v(u − τ)] du = 0, τ ≥ 0   (10) The initial application of this equation was in the field of astrophysics in 1894 [Sob63]. Explicit solutions were thought to be impossible, but Norbert Wiener and Eberhard Hopf became instantly famous when they solved this equation in 1931. Their solution was so impressive that the equation became known as the Wiener-Hopf equation.
  • 61. Solution To solve eq. 10 (Equation 3.99), postulate some function a(t) that is arbitrary for t < 0 but equal to 0 for t ≥ 0. Then we obtain R_x(τ) − ∫ g(u) [R_x(u − τ) + R_v(u − τ)] du = a(τ); S_x(ω) − G(ω) [S_x(ω) + S_v(ω)] = A(ω)   (11) For ease of notation, make the following definition: S_xv(ω) = S_x(ω) + S_v(ω)
Then Eq. 11 (Equation 3.100 in the book) becomes

S_x(ω) − G(ω) S⁺_xv(ω) S⁻_xv(ω) = A(ω)   (12)

where S⁺_xv(ω) is the factor of S_xv(ω) that has all its poles and zeros in the LHP (and hence corresponds to a causal time function), and S⁻_xv(ω) is the factor of S_xv(ω) that has all its poles and zeros in the RHP (and hence corresponds to an anticausal time function).
Eq. 12 (Equation 3.102 in the book) can be written as

G(ω) S⁺_xv(ω) = S_x(ω)/S⁻_xv(ω) − A(ω)/S⁻_xv(ω)

Here G(ω) S⁺_xv(ω) is a causal time function [assuming that g(t) is stable], while A(ω)/S⁻_xv(ω) is an anticausal time function.
Transfer Function of the Optimal Filter

Therefore, equating the causal parts of both sides,

G(ω) S⁺_xv(ω) = causal part of [S_x(ω)/S⁻_xv(ω)]

G(ω) = (1/S⁺_xv(ω)) · causal part of [S_x(ω)/S⁻_xv(ω)]   (13)

This gives the transfer function of the optimal causal filter.
Example 3.10

Consider the system discussed in Section 3.4.1 with A = β = σ = 1. This system was also discussed in Example 3.9. For this example we have

S_x(ω) = 2/(ω² + 1)
S_xv(ω) = (ω² + 3)/(ω² + 1)
Splitting S_xv(ω) into its causal and anticausal factors gives

S_xv(ω) = [(jω + √3)/(jω + 1)] · [(−jω + √3)/(−jω + 1)] = S⁺_xv(ω) S⁻_xv(ω)

S_x(ω)/S⁻_xv(ω) = 2(−jω + 1) / [(ω² + 1)(−jω + √3)]
                = 2 / [(−jω + √3)(jω + 1)]
                = (√3 − 1)/(jω + 1)   [causal part]
                + (√3 − 1)/(−jω + √3)   [anticausal part]
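Both the spectral factorization and the causal/anticausal split can be verified numerically on a frequency grid. A quick check, not from the slides:

```python
import numpy as np

w = np.linspace(-20.0, 20.0, 4001)
s3 = np.sqrt(3.0)
S_xv = (w**2 + 3.0) / (w**2 + 1.0)
S_plus = (1j * w + s3) / (1j * w + 1.0)        # causal factor S+_xv
S_minus = (-1j * w + s3) / (-1j * w + 1.0)     # anticausal factor S-_xv
assert np.allclose(S_plus * S_minus, S_xv)     # factorization holds
assert np.allclose(S_minus, np.conj(S_plus))   # mirror-image factors

ratio = (2.0 / (w**2 + 1.0)) / S_minus         # S_x(w) / S-_xv(w)
split = (s3 - 1.0) / (1j * w + 1.0) + (s3 - 1.0) / (-1j * w + s3)
assert np.allclose(ratio, split)               # causal + anticausal parts
```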
Transfer Function and Impulse Response of the Optimal Filter

Eq. 13 (Equation 3.104 in the book) gives

G(ω) = [(jω + 1)/(jω + √3)] · [(√3 − 1)/(jω + 1)] = (√3 − 1)/(jω + √3)
g(t) = (√3 − 1) e^(−√3 t),  t ≥ 0

This gives the transfer function and impulse response of the optimal filter when causality is required.
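The impulse response and transfer function above should be Fourier-transform pairs. A numerical check (a sketch; the discretization and frequency test points are assumptions):

```python
import numpy as np

# Approximate G(w) = integral_0^inf g(t) e^{-jwt} dt by a Riemann sum and
# compare against the closed form (sqrt(3) - 1)/(jw + sqrt(3)).
s3 = np.sqrt(3.0)
dt = 1e-3
t = np.arange(0.0, 20.0, dt)         # e^{-sqrt(3)*20} is negligible
g = (s3 - 1.0) * np.exp(-s3 * t)     # causal optimal impulse response
for w in (0.0, 1.0, 5.0):
    G_num = np.sum(g * np.exp(-1j * w * t)) * dt
    G_ana = (s3 - 1.0) / (1j * w + s3)
    assert abs(G_num - G_ana) < 1e-3
```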
Section 6: 3.4.5 Comparison
3.4.5 Comparison I

Comparing the three optimal filter designs presented in this section (Example 3.8, Example 3.9, and Example 3.10), it can be shown that the mean square errors of the filters are as follows [Bro96]:

Parameter optimization method: E[e²(t)] = 0.914
Causal Wiener filter: E[e²(t)] = 0.732
Noncausal Wiener filter: E[e²(t)] = 0.577

As expected, the estimation error decreases as constraints on the filter are removed. However, removing constraints makes the filter design problem more difficult.
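The noncausal figure can be reproduced numerically. This uses the standard noncausal Wiener-smoother MSE formula for uncorrelated signal and noise, E[e²] = (1/2π) ∫ S_x(ω) S_v(ω) / (S_x(ω) + S_v(ω)) dω, which is an assumption here since the slide does not state it. With S_x(ω) = 2/(ω² + 1) and S_v(ω) = 1 from Example 3.10, the integrand reduces to 2/(ω² + 3):

```python
import numpy as np
from scipy.integrate import quad

# Noncausal Wiener filter MSE for Example 3.10; the integrand simplifies
# to 2/(w^2 + 3), whose integral over the real line is 2*pi/sqrt(3).
integrand = lambda w: (2.0 / (w**2 + 1.0)) / (2.0 / (w**2 + 1.0) + 1.0)
val, _ = quad(integrand, -np.inf, np.inf)
mse = val / (2.0 * np.pi)
print(round(mse, 3))   # 0.577, matching the table (1/sqrt(3))
```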
The Wiener filter is not well suited to state estimation because it is difficult to extend to MIMO problems with state-variable descriptions, and difficult to apply to signals with time-varying statistical properties.