By
Asst. Prof. Dr. Thamer M. Jamel
Department of Electrical Engineering
University of Technology
Baghdad, Iraq
Introduction
Linear filters:
the filter output is a linear function of the filter input
Design methods:
 The classical approach
frequency-selective filters such as low-pass / band-pass / notch filters, etc.
 Optimal filter design
mostly based on minimizing the mean-square value of the error signal
Wiener filter
 Based on the work of Wiener in 1942 and Kolmogorov in 1939
 It is based on a priori statistical information about the signal and noise
 When such a priori information is not available, which is usually the case, it is not possible to design a Wiener filter in the first place.
Adaptive filter
 The signal and/or noise characteristics are often nonstationary, and the statistical parameters vary with time
 An adaptive filter has an adaptation algorithm that is meant to monitor the environment and vary the filter transfer function accordingly
 Based on the actual signals received, it attempts to find the optimum filter design
Adaptive filter
 The basic operation involves two processes:
1. a filtering process, which produces an output signal in response to a given input signal.
2. an adaptation process, which aims to adjust the filter parameters (filter transfer function) to the (possibly time-varying) environment.
Often, the (average) square value of the error signal is used as the optimization criterion.
Adaptive filter
• Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing
 When processing analog signals, the adaptive filter is then preceded by an A/D converter (and followed by a D/A converter).
Adaptive filter
• The generalization to adaptive IIR filters leads to stability problems
• It is therefore common to use an FIR digital filter with adjustable coefficients
Applications of Adaptive Filters:
Identification
 Used to provide a linear model of an unknown
plant
 Applications:
 System identification
Applications of Adaptive Filters:
Inverse Modeling
 Used to provide an inverse model of an unknown
plant
 Applications:
 Equalization (communications channels)
Applications of Adaptive Filters:
Prediction
 Used to provide a prediction of the present
value of a random signal
 Applications:
 Linear predictive coding
Applications of Adaptive Filters:
Interference Cancellation
 Used to cancel unknown interference from a primary
signal
 Applications:
 Echo / Noise cancellation
(hands-free car phones, aircraft headsets, etc.)
Example:
Acoustic Echo Cancellation
LMS Algorithm
• The most popular adaptation algorithm is LMS
Define the cost function as the mean-squared error
• Based on the method of steepest descent
Move towards the minimum on the error surface
The gradient of the error surface is estimated at every iteration
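In symbols (standard definitions, stated here as an assumption consistent with the tap-weight update given later; the factor 1/2 is one common convention):

$J(\mathbf{w}) = E\{|e(n)|^2\}$, where $e(n) = d(n) - y(n)$

$\mathbf{w}(n+1) = \mathbf{w}(n) - \frac{\mu}{2}\,\nabla J\big(\mathbf{w}(n)\big)$, with $\nabla J = -2\,E\{\mathbf{u}(n)\,e^*(n)\}$

LMS simply replaces the expectation $E\{\mathbf{u}(n)\,e^*(n)\}$, which requires unknown statistics, with its instantaneous estimate $\mathbf{u}(n)\,e^*(n)$.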
LMS Adaptive Algorithm
• Introduced by Widrow & Hoff in 1959
• Simple; no matrix calculations are involved in the adaptation
• In the family of stochastic gradient algorithms
• An approximation of the steepest-descent method
• Based on the MMSE (Minimum Mean-Square Error) criterion
• The adaptive process involves two input signals:
• 1.) the filtering process, producing the output signal
• 2.) the desired signal (training sequence)
• Adaptive process: recursive adjustment of the filter tap weights
LMS Algorithm Steps
 Filter output
$y(n) = \sum_{k=0}^{M-1} w_k^*(n)\, u(n-k)$
 Estimation error
$e(n) = d(n) - y(n)$
 Tap-weight adaptation
$w_k(n+1) = w_k(n) + \mu\, u(n-k)\, e^*(n)$
(update value of tap-weight vector = old value of tap-weight vector + learning-rate parameter × tap-input vector × error signal)
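A minimal NumPy sketch of these three steps for real-valued signals (so the conjugates drop out), run as a toy system-identification experiment; the plant taps h, filter length M, and step size mu are illustrative choices, not from the slides:

```python
import numpy as np

def lms(u, d, M=4, mu=0.05):
    """LMS adaptive filter: returns tap weights and error history."""
    w = np.zeros(M)                     # tap-weight vector, initially zero
    e = np.zeros(len(u))
    for n in range(M, len(u)):
        u_n = u[n - M + 1:n + 1][::-1]  # tap-input vector [u(n), ..., u(n-M+1)]
        y = w @ u_n                     # 1) filter output y(n)
        e[n] = d[n] - y                 # 2) estimation error e(n)
        w = w + mu * u_n * e[n]         # 3) tap-weight adaptation (real signals)
    return w, e

# Toy system identification: the desired signal is an unknown FIR plant's output.
rng = np.random.default_rng(0)
u = rng.standard_normal(5000)           # white input
h = np.array([0.8, -0.4, 0.2, 0.1])     # "unknown" plant (illustrative)
d = np.convolve(u, h)[:len(u)]          # plant output serves as desired signal
w, e = lms(u, d)
print("estimated taps:", np.round(w, 3))  # should approach h
```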
Stability of LMS
 The LMS algorithm is convergent in the mean square if and only if the step-size parameter satisfies
$0 < \mu < \dfrac{2}{\lambda_{\max}}$
 Here $\lambda_{\max}$ is the largest eigenvalue of the correlation matrix of the input data
 A more practical test for stability is
$0 < \mu < \dfrac{2}{\text{tap-input power}}$
 Larger values of the step size:
 increase the adaptation rate (faster adaptation)
 increase the residual mean-squared error
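A rough illustration of the practical test, with the tap-input power estimated from data (this is a rule of thumb for choosing mu, not a convergence proof; signal and filter length are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(10_000)      # example input data
M = 32                               # filter length (illustrative)
# Tap-input power = sum over k of E[|u(n-k)|^2] ~ M * E[u^2] for this input
tap_input_power = M * np.mean(u**2)
mu_max = 2.0 / tap_input_power
print(f"practical step-size range: 0 < mu < {mu_max:.4f}")
```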
STEEPEST DESCENT EXAMPLE
• Given the following function, we need to obtain the vector that gives the absolute minimum:
$y(C_1, C_2) = C_1^2 + C_2^2$
• It is obvious that $C_1 = C_2 = 0$ gives the minimum.
(The figure shows this quadratic error function, the "quadratic bowl", with axes $C_1$, $C_2$, and $y$.)
Now let's find the solution by the steepest-descent method.
STEEPEST DESCENT EXAMPLE
• We start by assuming $(C_1 = 5,\ C_2 = 7)$.
• We select the constant $\mu$. If it is too big, we miss the minimum; if it is too small, it takes a long time to reach the minimum. Here we select $\mu = 0.1$.
• The gradient vector is:
$\nabla y = \begin{bmatrix} \partial y/\partial c_1 \\ \partial y/\partial c_2 \end{bmatrix} = \begin{bmatrix} 2C_1 \\ 2C_2 \end{bmatrix}$
• So our iterative equation is:
$\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}_{[n+1]} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}_{[n]} - \frac{\mu}{2}\,\nabla y = 0.9 \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}_{[n]}$
STEEPEST DESCENT EXAMPLE
Iteration 1: $[C_1, C_2] = [5,\ 7]$
Iteration 2: $[C_1, C_2] = [4.5,\ 6.3]$
Iteration 3: $[C_1, C_2] = [4.05,\ 5.67]$
......
Iteration 60: $[C_1, C_2] = [0.01,\ 0.013]$
$\lim_{n \to \infty} [C_1, C_2]_{[n]} = [0,\ 0]$
As we can see, the vector $[C_1, C_2]$ converges to the value that yields the function minimum, and the speed of this convergence depends on $\mu$.
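The same iteration in a few lines of NumPy, reproducing the values above up to rounding (the 0.9 contraction per step follows from $\mu = 0.1$ and the gradient $2C$):

```python
import numpy as np

mu = 0.1
C = np.array([5.0, 7.0])            # iteration 1: the initial guess (C1, C2)
print(f"iteration 1: C = {C}")
for n in range(2, 61):
    grad = 2 * C                    # gradient of y = C1^2 + C2^2
    C = C - (mu / 2) * grad         # steepest-descent step; equals 0.9 * C
    if n in (2, 3, 60):
        print(f"iteration {n}: C1 = {C[0]:.3f}, C2 = {C[1]:.3f}")
```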
(Figure: the descent trajectory on the quadratic bowl, from the initial guess to the minimum; axes $C_1$, $C_2$, and $y$.)
LMS – CONVERGENCE GRAPH
This graph illustrates the LMS algorithm. First we start by guessing the tap weights. Then we move opposite to the gradient vector to calculate the next taps, and so on, until we reach the MMSE, meaning the MSE is 0 or very close to it. (In practice we cannot get an error of exactly 0, because the noise is a random process; we can only decrease the error below a desired minimum.)
Example for an unknown channel of 2nd order:
(Figure: desired combination of taps.)
Adaptive Array Antenna
 Adaptive arrays
(Figure: adaptive linear combiner suppressing interference; SMART ANTENNAS.)
Adaptive Array Antenna
Applications are many:
Digital Communications (OFDM, MIMO, CDMA, and RFID)
Channel Equalisation
Adaptive noise cancellation
Adaptive echo cancellation
System identification
Smart antenna systems
Blind system equalisation
And many, many others
Introduction
Wireless communication is the most interesting field of communication these days because it supports mobility (mobile users). However, many applications of wireless communications now require high-speed communication (high data rates).
What is ISI?
Inter-symbol interference (ISI) takes place when a given transmitted symbol is distorted by other transmitted symbols.
Cause of ISI
ISI is imposed by the band-limiting effect of a practical channel, and also by multi-path effects (delay spread).
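A quick way to see ISI: pass a symbol stream through a hypothetical two-path channel and observe that each received sample mixes neighbouring symbols (the channel taps below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
symbols = rng.choice([-1.0, 1.0], size=10)   # BPSK symbol stream
h = np.array([1.0, 0.5])                     # direct path + delayed echo (assumed)
received = np.convolve(symbols, h)[:len(symbols)]
print("sent:    ", symbols)
print("received:", received)  # each sample = symbol + 0.5 * previous symbol -> ISI
```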
Definition of the Equalizer:
The equalizer is a digital filter that provides an approximate inverse of the channel frequency response.
Need for equalization:
To mitigate the effects of ISI and so decrease the probability of error that would occur without suppression of ISI; however, this reduction of ISI effects has to be balanced against prevention of noise power enhancement.
Types of Equalization Techniques
Linear equalization techniques
are simple to implement, but greatly enhance noise power because they work by inverting the channel frequency response.
Non-linear equalization techniques
are more complex to implement, but have much less noise enhancement than linear equalizers.
Equalization Techniques
Fig. 3: Classification of equalizers.
Linear equalizer with N taps and (N-1) delay elements.
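A hedged sketch of training such a linear equalizer with the LMS update from earlier, using a known training sequence; the channel taps, equalizer length N, decision delay, and step size are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(3)
s = rng.choice([-1.0, 1.0], size=20_000)   # known training symbols
h = np.array([1.0, 0.5, 0.2])              # dispersive channel (assumed)
x = np.convolve(s, h)[:len(s)]             # received, ISI-corrupted signal

N, mu, delay = 11, 0.01, 5                 # taps, step size, decision delay
w = np.zeros(N)
for n in range(N, len(x)):
    x_n = x[n - N + 1:n + 1][::-1]         # equalizer tap-input vector
    e = s[n - delay] - w @ x_n             # error vs. delayed training symbol
    w += mu * e * x_n                      # LMS tap-weight update

y = np.convolve(x, w)[:len(s)]             # equalized output ~ s delayed by 5
ser = np.mean(np.sign(y[delay:]) != s[:len(s) - delay])
print(f"symbol error rate after training: {ser:.4f}")
```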
Table of various algorithms and their trade-offs:

Algorithm   | Multiplying operations | Complexity | Convergence | Tracking
LMS         | 2N + 1                 | Low        | Slow        | Poor
MMSE        | N^2 to N^3             | Very high  | Fast        | Good
RLS         | 2.5N^2 + 4.5N          | High       | Fast        | Good
Fast Kalman | 20N + 5                | Fairly low | Fast        | Good
RLS-DFE     | 1.5N^2 + 6.5N          | High       | Fast        | Good
Adaptive Filter Block Diagram
(Block diagram: the filter input x(n) drives the adaptive filter, which produces the filter output y(n); this is subtracted from the desired signal d(n) to form the error output e(n), which adapts the filter.)
The LMS Equation
 The Least Mean Squares (LMS) algorithm updates each coefficient on a sample-by-sample basis, based on the error e(n).
 This equation minimises the power in the error e(n):
$w_k(n+1) = w_k(n) + \mu\, e(n)\, x(n-k)$
The Least Mean Squares Algorithm
 The value of µ (mu) is critical.
 If µ is too small, the filter reacts slowly.
 If µ is too large, the filter resolution is poor (larger residual error).
 The selected value of µ is a compromise.
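The compromise is easy to see numerically. This sketch repeats the toy identification experiment from earlier with measurement noise added, for a small and a large µ (all values illustrative; exact numbers vary from run to run):

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.standard_normal(5000)                  # filter input
h = np.array([0.8, -0.4, 0.2, 0.1])            # plant from the earlier sketch
d = np.convolve(u, h)[:len(u)] + 0.05 * rng.standard_normal(len(u))

for mu in (0.001, 0.2):                        # small vs. large step size
    w = np.zeros(4)
    e = np.zeros(len(u))
    for n in range(4, len(u)):
        u_n = u[n - 3:n + 1][::-1]             # tap-input vector
        e[n] = d[n] - w @ u_n
        w += mu * e[n] * u_n                   # LMS update
    print(f"mu={mu}: early MSE = {np.mean(e[4:1004]**2):.4f}, "
          f"late MSE = {np.mean(e[-1000:]**2):.5f}")
```

The small µ leaves a large early MSE (slow reaction), while the large µ converges quickly but settles at a higher residual MSE (poor resolution).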
LMS Convergence vs. µ
Audio Noise Reduction
 A popular application of acoustic noise reduction is in headsets for pilots. This uses two microphones.
Block Diagram of a Noise Reduction Headset
(Block diagram: the near microphone supplies d(n) = speech + noise; the far microphone supplies x(n) = noise'. The adaptive filter produces y(n), an estimate of the noise in d(n), and the error e(n) = d(n) - y(n) is the speech output.)
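A minimal simulation of this two-microphone arrangement (all signals are synthetic: a sinusoid stands in for speech, and the acoustic path from the noise source to the near microphone is an assumed FIR response):

```python
import numpy as np

rng = np.random.default_rng(5)
n_samp = 20_000
speech = np.sin(2 * np.pi * 0.01 * np.arange(n_samp))  # stand-in for speech
noise = rng.standard_normal(n_samp)                    # far microphone: x(n) = noise'
path = np.array([0.6, 0.3, 0.1])                       # acoustic path to near mic (assumed)
d = speech + np.convolve(noise, path)[:n_samp]         # near mic: speech + filtered noise

M, mu = 8, 0.01
w = np.zeros(M)
e = np.zeros(n_samp)
for n in range(M, n_samp):
    x_n = noise[n - M + 1:n + 1][::-1]
    y = w @ x_n                    # filter output: estimate of the noise in d(n)
    e[n] = d[n] - y                # error = speech estimate (the headset output)
    w += mu * e[n] * x_n           # LMS update
print("residual noise power:", np.mean((e[-2000:] - speech[-2000:])**2))
```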
The Simulink Model
Setting the Step size (mu)
 The rate of convergence of the LMS algorithm is controlled by the "Step size (mu)".
 This is the critical variable.
Trace of Input to Model
"Input" = Signal + Noise.
Trace of LMS Filter Output
"Output" starts at zero and grows.
Trace of LMS Filter Error
"Error" contains the noise.
Typical C6713 DSK Setup
(Photo: the DSK board connected by USB to the PC and to a +5 V supply, with headphones and a microphone attached.)
Acoustic Echo Canceller
New Trends in Adaptive Filtering
 Partial updating of weights.
 Sub-band adaptive filtering.
 Adaptive Kalman filtering.
 Affine projection method.
 Space-time adaptive processing.
 Non-linear adaptive filtering:
Neural networks.
The Volterra series algorithm.
Genetic & fuzzy.
 Blind adaptive filtering.