Stochastic Section #8
Blind Deconvolution
Eslam Adel
May 21, 2018
1 Introduction
• Deconvolution is the inverse of the convolution process.
• An example in image processing is image de-blurring (blind deconvolution):
Image → Blurring (convolution with a low-pass filter) → Blurred image
Blurred image → De-blurring (deconvolution) → Estimate of the original image
Figure 1: (a) Blurred image (b) Estimated image after de-blurring
Blurring occurs naturally due to motion artifacts during image capture.
Figure 2: Blurring due to motion artifact
• Blind means that the convolution kernel is unknown.
• Blind deconvolution is used to estimate the source signal.
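By contrast, when the kernel is known, deconvolution can be carried out directly; the blind case is what makes the problem hard. A minimal non-blind sketch in Python (the signal and kernel values are made up for illustration):

```python
import numpy as np
from scipy.signal import deconvolve

# A hypothetical "sharp" source signal
x = np.array([0.0, 1.0, 4.0, 2.0, 1.0, 0.0, 3.0, 0.0])

# Low-pass (moving-average-like) blur kernel -- KNOWN here,
# so this is ordinary, non-blind deconvolution
h = np.array([0.2, 0.6, 0.2])

# Forward model: blurred = x * h (convolution)
blurred = np.convolve(x, h)

# Inverse: polynomial long division recovers x when h is known exactly
x_hat, remainder = deconvolve(blurred, h)

print(np.allclose(x_hat, x))   # True
```

In the blind setting both `x` and `h` must be estimated from `blurred` alone, which is only possible up to ambiguities such as an overall scale factor.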
2 Single Source with Two Measurements
Figure 3: General Block diagram
Figure 4: Example of single source with two measurements
The objective is estimation of the aortic blood pressure.
Notes:
• We model the transfer channel as a linear time-invariant (LTI) filter. In reality it is nonlinear and time-varying.
• The order of the filter is unknown. Here we assume a first-order filter for simplicity.
• We assume the filter coefficients are constant. In reality they change over time.
• Each measured signal is assumed to depend only on the source signal; there are also noise components, which we ignore for simplicity.
2.1 Wiener filter model for blind deconvolution
The Wiener filter model assumes that we have a source signal x(n) distorted into another signal y(n), and we build a filter h(n) to get x̂(n), an estimate of the source signal x(n). The criterion is minimization of the mean square error E[e(n)²], where e(n) = x(n) − x̂(n). We can obtain the filter using the original signal or a model of it.

Similarly, we have a signal PA that is transformed into PF, and we build a Wiener filter to get PAF, an estimate of PA from PF. We can build another filter to get PAI, another estimate of PA, but from PI. Since we have neither the source signal nor a model of it, we instead define the error as e(n) = PAF − PAI and minimize the mean square error E[e²]. If both estimates have the same values, then both of them are an accurate estimate of the source signal PA.
The model will be as in figure 5.
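As a concrete illustration of the classical Wiener setup — where the source x(n) is available — the minimum-MSE FIR filter can be found by least squares. The distortion filter `g`, the noise level, and the filter length `L` below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical source and a distorted measurement y = g * x + noise
x = rng.standard_normal(2000)
g = np.array([1.0, 0.5, 0.25])            # distortion filter (assumed)
y = np.convolve(x, g)[:len(x)] + 0.01 * rng.standard_normal(len(x))

# Length-L FIR Wiener filter h minimizing E[(x(n) - sum_i h(i) y(n-i))^2],
# solved as ordinary least squares because x(n) is known in this setting
L = 8
Y = np.column_stack([np.roll(y, i) for i in range(L)])[L:]   # lagged y
h, *_ = np.linalg.lstsq(Y, x[L:], rcond=None)

x_hat = Y @ h
mse = np.mean((x[L:] - x_hat) ** 2)
print(mse)   # small (well below 1e-2)
```

In the blind setting of this section x(n) is not available, which is why the error is redefined as the difference between two independent estimates of the source.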
Derivation

P_AI(n) = (P_I ∗ H_I)(n) = Σ_{i=0}^{I} H_I(i) P_I(n − i)

and similarly P_AF(n) = (P_F ∗ H_F)(n). For a first-order filter (I = 1), setting the derivatives of E[e²(n)] with respect to the filter coefficients to zero gives:
[ R_II(0)  R_II(1)  −R_IF(0)  −R_IF(1) ] [ H_I(0) ]
[ R_II(1)  R_II(0)  −R_IF(1)  −R_IF(0) ] [ H_I(1) ]
[ R_IF(0)  R_IF(1)  −R_FF(0)  −R_FF(1) ] [ H_F(0) ]  =  0
[ R_IF(1)  R_IF(0)  −R_FF(1)  −R_FF(0) ] [ H_F(1) ]
Solution of a homogeneous system of linear equations
• Trivial (zero) solution
If the matrix determinant is not equal to zero (the matrix is non-singular, i.e. full rank), there is only the unique zero solution, i.e. H_I(0) = H_I(1) = H_F(0) = H_F(1) = 0.
• Infinite number of solutions
If the matrix determinant is equal to zero (the matrix is singular), there is an infinite number of solutions. We assume values for some of the variables and solve for the others.
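Numerically, a non-trivial solution of a singular homogeneous system is usually taken from the SVD: the right singular vector belonging to the smallest singular value spans the null space. A sketch with a synthetic singular matrix (`h_true` is an arbitrary vector planted in the null space purely for checking):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a singular 4x4 matrix A with a known vector h_true in its null
# space (a stand-in for the correlation matrix above; values are made up)
h_true = np.array([1.0, -0.5, 0.8, 0.3])
R = rng.standard_normal((4, 4))
A = R - np.outer(R @ h_true, h_true) / (h_true @ h_true)  # rows orthogonal to h_true

# Non-trivial solution: right singular vector of the smallest singular value
U, s, Vt = np.linalg.svd(A)
h = Vt[-1]                       # unit-norm null-space vector
print(s[-1])                     # ~0, confirming A is singular
print(np.allclose(A @ h, 0))     # True

# The solution is only defined up to scale; fix it by setting the
# first coefficient to 1 (the "assume values for some variables" step)
h = h / h[0]
print(np.allclose(h, h_true))    # True
```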
3 Two Sources of Signals
The same idea can be applied if we have two signal sources, for example the fetal and maternal ECG, and we need to separate the two signals. The number of measurements must exceed the number of source signals by at least one, so we take three measurements. For simplicity we assume a zero-order filter with only one coefficient. Here we apply two Wiener filters to each measurement, so the source signals can be obtained as combinations of the outputs of these Wiener filters. We minimize the mean square error and obtain a homogeneous system of linear equations, which has only the zero solution if the matrix is non-singular and an infinite number of solutions if it is singular.
Figure 6: Block Diagram of two sources of signals with three measurements
Assuming zero order for simplicity:

[ y1(n) ]   [ F11  F12 ]
[ y2(n) ] = [ F21  F22 ]  [ u1 ]
[ y3(n) ]   [ F31  F32 ]  [ u2 ]

We get these matrices:
[ y1(n) ]   [ F11  F12 ] [ u1 ]
[ y2(n) ] = [ F21  F22 ] [ u2 ]

[ y2(n) ]   [ F21  F22 ] [ u1 ]
[ y3(n) ] = [ F31  F32 ] [ u2 ]

[ y1(n) ]   [ F11  F12 ] [ u1 ]
[ y3(n) ] = [ F31  F32 ] [ u2 ]

So

[ u1 ]   [ F11  F12 ]⁻¹ [ y1(n) ]
[ u2 ] = [ F21  F22 ]    [ y2(n) ]

[ u1 ]   [ F11  F12 ]⁻¹ [ y1(n) ]
[ u2 ] = [ F31  F32 ]    [ y3(n) ]

[ u1 ]   [ F21  F22 ]⁻¹ [ y2(n) ]
[ u2 ] = [ F31  F32 ]    [ y3(n) ]
For u1(n)
u1(n) = H1y1(n) + H2y2(n)
u1(n) = H3y2(n) + H4y3(n)
u1(n) = H5y1(n) + H6y3(n)
where H1, H2, . . . , H6 are the filter coefficients.
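The consistency of the three estimates of u1(n) is easy to check numerically when the mixing is known; the matrix `F` and the sources below are hypothetical values chosen for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical zero-order mixing: three measurements of two sources
F = np.array([[1.0, 0.4],
              [0.3, 1.0],
              [0.7, 0.6]])        # F[i, j] plays the role of F_{i+1, j+1}
u = rng.standard_normal((2, 500)) # u1(n), u2(n)
y = F @ u                         # y1, y2, y3

# Any two measurements determine the sources via a 2x2 inverse,
# so the three estimates of u1(n) must agree
u_from_12 = np.linalg.inv(F[[0, 1]]) @ y[[0, 1]]
u_from_13 = np.linalg.inv(F[[0, 2]]) @ y[[0, 2]]
u_from_23 = np.linalg.inv(F[[1, 2]]) @ y[[1, 2]]

print(np.allclose(u_from_12[0], u_from_13[0]))  # True
print(np.allclose(u_from_12[0], u_from_23[0]))  # True
print(np.allclose(u_from_12[0], u[0]))          # True
```

In the blind problem F is unknown, so the agreement of the three estimates becomes the criterion to enforce rather than a property to verify.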
3.1 Wiener Model
Figure 7: Wiener filter model and error definition
e1(n) = (H1y1(n) + H2y2(n)) − (H3y2(n) + H4y3(n))
e2(n) = (H1y1(n) + H2y2(n)) − (H5y1(n) + H6y3(n))
e3(n) = (H3y2(n) + H4y3(n)) − (H5y1(n) + H6y3(n))
The objective is to minimize the mean square error (MSE) E[e(n)²]. To do so, we differentiate with respect to the filter coefficients and set each derivative to zero:

∂E[e1²(n)]/∂Hi = 0,   ∂E[e2²(n)]/∂Hi = 0,   ∂E[e3²(n)]/∂Hi = 0
We need 6 equations from these conditions. For example,

∂E[e1²(n)]/∂H1 = 2E[(H1y1 + H2y2 − H3y2 − H4y3) y1] = 0

H1 R_y1y1(0) + H2 R_y1y2(0) − H3 R_y1y2(0) − H4 R_y1y3(0) = 0

and

∂E[e1²(n)]/∂H2 = 2E[(H1y1 + H2y2 − H3y2 − H4y3) y2] = 0

H1 R_y1y2(0) + H2 R_y2y2(0) − H3 R_y2y2(0) − H4 R_y2y3(0) = 0

and so on.
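In practice each zero-lag correlation R_{yiyj}(0) = E[yi(n) yj(n)] is estimated by a sample average; a small sketch with made-up signals:

```python
import numpy as np

rng = np.random.default_rng(3)
y1 = rng.standard_normal(10_000)
y2 = 0.5 * y1 + rng.standard_normal(10_000)   # correlated with y1

# Zero-lag cross-correlation R_{y1y2}(0) = E[y1(n) y2(n)],
# estimated by a sample average; here E[y1 y2] = 0.5 by construction
R_y1y2 = np.mean(y1 * y2)
print(R_y1y2)   # close to 0.5
```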
Finally we obtain the following system (all correlations are at lag 0, so R_yiyj(0) = R_yjyi(0)):

[ R_y1y1(0)  R_y1y2(0)  −R_y1y2(0)  −R_y1y3(0)      0           0      ] [ H1 ]
[ R_y1y2(0)  R_y2y2(0)  −R_y2y2(0)  −R_y2y3(0)      0           0      ] [ H2 ]
[ R_y1y3(0)  R_y2y3(0)  −R_y2y3(0)  −R_y3y3(0)      0           0      ] [ H3 ]
[     0          0       R_y2y2(0)   R_y2y3(0)  −R_y1y2(0)  −R_y2y3(0) ] [ H4 ]  =  0
[ R_y1y3(0)  R_y2y3(0)      0           0       −R_y1y3(0)  −R_y3y3(0) ] [ H5 ]
[     0          0       R_y1y2(0)   R_y1y3(0)  −R_y1y1(0)  −R_y1y3(0) ] [ H6 ]
Solution of a homogeneous system of linear equations
• Trivial (zero) solution
If the matrix determinant is not equal to zero (the matrix is non-singular, i.e. full rank), there is only the unique zero solution, i.e. H1 = H2 = . . . = 0.
• Infinite number of solutions
If the matrix determinant is equal to zero (the matrix is singular), there is an infinite number of solutions. We assume values for some of the variables and solve for the others.
After obtaining the values of H1 . . . H6 we can compute û11(n) ≈ û12(n) ≈ û13(n) ≈ u1(n).
To get u2(n), the process can be repeated.
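As a sanity check on the derivation, the filter coefficients obtained from the true pairwise 2x2 inverses should lie in the null space of the correlation matrix built from the measurements. A sketch with a hypothetical zero-order mixing `F` (the H's are only determined up to a common scale factor):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ground truth: two sources, three zero-order mixtures
F = np.array([[1.0, 0.4],
              [0.3, 1.0],
              [0.7, 0.6]])
u = rng.standard_normal((2, 5000))
y1, y2, y3 = F @ u

# Sample estimates of the zero-lag correlations R_{yiyj}(0)
R = lambda a, b: np.mean(a * b)
R11, R22, R33 = R(y1, y1), R(y2, y2), R(y3, y3)
R12, R13, R23 = R(y1, y2), R(y1, y3), R(y2, y3)

# Correlation matrix of the homogeneous system M @ [H1..H6] = 0
M = np.array([
    [R11, R12, -R12, -R13,    0,    0],
    [R12, R22, -R22, -R23,    0,    0],
    [R13, R23, -R23, -R33,    0,    0],
    [  0,   0,  R22,  R23, -R12, -R23],
    [R13, R23,    0,    0, -R13, -R33],
    [  0,   0,  R12,  R13, -R11, -R13],
])

# The "true" filters are the u1-rows of the pairwise 2x2 inverses
H1, H2 = np.linalg.inv(F[[0, 1]])[0]
H3, H4 = np.linalg.inv(F[[1, 2]])[0]
H5, H6 = np.linalg.inv(F[[0, 2]])[0]
H_true = np.array([H1, H2, H3, H4, H5, H6])

# They lie in the null space of M, so M is singular and M @ H_true = 0
print(np.allclose(M @ H_true, 0))                 # True
print(np.linalg.svd(M, compute_uv=False)[-1])     # ~0
```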