Adaptive Filtering
Mustafa Khaleel
Year 2016
Contents
1. Introduction
2. Digital Filters
2.1. Linear and Nonlinear Filters
2.2. Filter Design
3. Wiener Filters
3.1. Error Measurements
3.2. The Mean-Square Error (MSE)
3.3. Mean-Square Error Surface
4. Method of Steepest Descent
5. The Least Mean Squares (LMS) Algorithm
5.1. Convergence in the Mean Sense
5.2. Convergence in the Mean-Square Sense
6. Simulation and Results
Conclusion
References
Annex

Figure 1 FIR Filter
Figure 2 IIR Filter
Figure 3 Wiener Filters
Figure 4 Error surface with two weights
Figure 5 Adaptive Filter with LMS
Figure 6 Adaptive Filter (Noise Cancellation)
Figure 7 Step-Size small
Figure 8 Step-Size Large
Figure 9 Step-Size Acceptable
1. Introduction
Filtering is a signal processing operation whose objective is to process a signal in
order to manipulate the information contained in the signal. In other words, a
filter is a device that maps its input signal to another output signal facilitating the
extraction of the desired information contained in the input signal. A digital filter
is one that processes discrete-time signals represented in digital format. For
time-invariant filters the internal parameters and the structure of the filter are
fixed, and if the filter is linear the output signal is a linear function of the input
signal. Once prescribed specifications are given, the design of time-invariant
linear filters entails three basic steps, namely: the approximation of the
specifications by a rational transfer function, the choice of an appropriate
structure defining the algorithm, and the choice of the form of implementation
for the algorithm.
An adaptive filter is required when either the fixed specifications are unknown or
the specifications cannot be satisfied by time-invariant filters. Strictly speaking an
adaptive filter is a nonlinear filter since its characteristics are dependent on the
input signal and consequently the homogeneity and additivity conditions are not
satisfied. However, if we freeze the filter parameters at a given instant of time,
most adaptive filters considered in this text are linear in the sense that their
output signals are linear functions of their input signals.
Adaptive filters are time-varying since their parameters are continually
changing in order to meet a performance requirement. In this sense, we can
interpret an adaptive filter as a filter that performs the approximation step on-
line. Usually, the definition of the performance criterion requires the existence of
a reference signal that is usually hidden in the approximation step of fixed-filter
design.
2. Digital Filters
The term filter is commonly used to refer to any device or system that takes a
mixture of particles/elements from its input and processes them according to
some specific rules to generate a corresponding set of particles/elements at
its output. In the context of signals and systems, particles/elements are the
frequency components of the underlying signals and, traditionally, filters are
used to retain all the frequency components that belong to a particular band
of frequencies, while rejecting the rest of them, as much as possible. In a more
general sense, the term filter may be used to refer to a system that reshapes
the frequency components of the input to generate an output signal with
some desirable features.
2.1. Linear and Nonlinear Filters
Filters can be classified as either linear or nonlinear. A linear filter is one
whose output is a linear function of the input. The design of a linear filter
requires assuming stationarity (statistical time-invariance) and knowing the
relevant signal and noise statistics a priori. The linear filter design attempts to
minimize the effects of noise on the signal by meeting a suitable statistical
criterion. The classical linear Wiener filter, for example, minimizes the Mean
Square Error (MSE) between the desired signal response and the actual filter
response. The Wiener solution is said to be optimum in the mean-square sense,
and it is truly optimum for second-order stationary noise statistics (fully
described by a constant, finite mean and variance). A linear adaptive filter is one
whose output is some linear combination of the actual input at any moment in
time between adaptation operations.
A nonlinear adaptive filter does not necessarily have a linear relationship
between the input and output at any moment in time. Many different linear
adaptive filter algorithms have been published in the literature. Some of the
important features of these algorithms can be identified by the following terms:
1. Rate of convergence: how many iterations are needed to reach a near-optimum
solution.
2. Misadjustment: a measure of the amount by which the final value of the MSE,
averaged over an ensemble of adaptive filters, deviates from the MSE produced
by the Wiener solution.
3. Tracking: the ability to follow statistical variations in a non-stationary
environment.
4. Robustness: small disturbances from any source (internal or external)
produce only small estimation errors.
5. Computational requirements: the computational operations per iteration,
data storage, and programming requirements.
6. Structure: the information flow in the algorithm (e.g., serial, parallel),
which determines the possible hardware implementations.
7. Numerical properties: the type and nature of quantization errors, numerical
stability and numerical accuracy.
2.2. Filter Design
There are two common ways to implement a digital filter: non-recursive and recursive.
A non-recursive filter is also called a Finite Impulse Response (FIR) filter. It is
implemented by convolution: each sample in the output is calculated by
weighting the samples in the input and adding them together.
Recursive filters (Infinite Impulse Response, or IIR, filters) are an extension of this,
using previously calculated values from the output in addition to points from the input.
Recursive filters are defined by a set of recursion coefficients.
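As a minimal sketch of the difference, the following Python/NumPy fragment (with hypothetical coefficients, since the report specifies none) computes the output of a non-recursive (FIR) filter by convolution and of a recursive (IIR) filter that reuses past outputs:

```python
import numpy as np

# Hypothetical example coefficients, for illustration only.
b = np.array([0.25, 0.5, 0.25])   # FIR (non-recursive) weights
a = 0.9                           # single recursion (feedback) coefficient

x = np.ones(10)                   # unit-step input signal

# Non-recursive (FIR): each output sample is a weighted sum of input samples.
y_fir = np.convolve(x, b)[:len(x)]

# Recursive (IIR): the output also reuses previously computed output values.
y_iir = np.zeros_like(x)
for n in range(len(x)):
    y_iir[n] = x[n] + a * (y_iir[n - 1] if n > 0 else 0.0)
```

Here each FIR output depends on at most three input samples, while the recursive output keeps growing toward the step-response limit 1/(1 − a) because every new sample reuses the previous output.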
Figure 1 FIR Filter
Figure 2 IIR Filter
Finally, we can classify digital filters by their use and by their implementation. The
use of a digital filter can be broken into three categories: time domain, frequency
domain and custom. As previously described, time domain filters are used when
the information is encoded in the shape of the signal's waveform. Time domain
filtering is used for such actions as: smoothing, DC removal, waveform shaping,
etc. In contrast, frequency domain filters are used when the information is
contained in the amplitude, frequency, and phase of the component sinusoids. The
goal of these filters is to separate one band of frequencies from another. Custom
filters are used when a special action is required by the filter, something more
elaborate than the four basic responses (high-pass, low-pass, band-pass and band-
reject).
3. Wiener Filters
Wiener formulated the continuous-time, least mean square error estimation
problem in his classic work on interpolation, extrapolation and smoothing
of time series (Wiener 1949). The extension of the Wiener theory from
continuous time to discrete time is simple, and of more practical use for
implementation on digital signal processors. A Wiener filter can be an
infinite-duration impulse response (IIR) filter or a finite-duration impulse
response (FIR) filter.
In general, the formulation of an IIR Wiener filter results in a set of non-linear
equations, whereas the formulation of an FIR Wiener filter results in a set of
linear equations with a closed-form solution; hence FIR Wiener filters are
relatively simple to compute, inherently stable and more practical. The main
drawback of FIR filters compared with IIR filters is that they may need a large
number of coefficients to approximate a desired response.
Figure 3 Wiener Filters
where x(n) is the input vector and w is the vector of filter coefficients; that is,

x(n) = [x(n) x(n−1) … x(n−N+1)]^T    (1)

w = [w_0 w_1 … w_{N−1}]^T    (2)

and y(n) is the output signal,

y(n) = Σ_{i=0}^{N−1} w_i x(n−i)
     = w_0 x(n) + w_1 x(n−1) + ⋯ + w_{N−1} x(n−N+1)

y(n) = w^T x(n)    (3)

d(n) is the training or desired signal, and e(n) is the error signal (the difference
between the output signal y(n) and the desired signal d(n)):

e(n) = d(n) − y(n)    (4)
3.1. Error Measurements
Adaptation of the filter coefficients follows a minimization procedure of a
particular objective or cost function. This function is commonly defined as a norm
of the error signal e(n). The most commonly employed norm is the mean-square
error (MSE).
3.2. The Mean-Square Error (MSE)
From Figure 3, we define the MSE (cost function) as

ξ(n) = E[e²(n)] = E[|d(n) − y(n)|²]    (5)

Using equation (3), equation (5) can be expanded as follows:

ξ(n) = E[|d(n) − w^T x(n)|²]
     = E[d²(n)] − 2w^T E[d(n)x(n)] + w^T E[x(n)x^T(n)] w

where

R = E[x(n)x^T(n)],
p = E[d(n)x(n)],

so that

ξ(n) = E[d²(n)] − 2w^T p + w^T R w    (6)

Here R and p are the input-signal correlation matrix and the cross-correlation
vector between the reference signal and the input signal.
The gradient vector of the MSE function with respect to the adaptive filter
coefficient vector is given by

∇_w ξ(n) = −2p + 2Rw    (7)

The coefficient vector that minimizes the MSE cost function is obtained by
equating the gradient vector to zero, ∇_w ξ(n) = 0. Assuming that R is
non-singular, one gets

w_o = R^{-1} p    (8)

This system of equations is known as the Wiener-Hopf equations, and the filter
whose weights satisfy the Wiener-Hopf equations is called a Wiener filter.

Figure 4 Error surface with two weights
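The closed-form solution (Eq. 8) can be checked numerically. The Python/NumPy sketch below, with a hypothetical 2-tap system and illustrative values (none of which come from the report), forms sample estimates of R and p from data and solves the Wiener-Hopf equations:

```python
import numpy as np

# Hypothetical system-identification example; values are illustrative only.
rng = np.random.default_rng(0)
num = 50000
w_true = np.array([0.7, -0.3])          # unknown 2-tap system to identify

x = rng.standard_normal(num)            # white input signal
X = np.column_stack((x, np.concatenate(([0.0], x[:-1]))))  # rows are [x(n), x(n-1)]
d = X @ w_true                          # desired signal d(n), noise-free here

# Sample estimates of R = E[x(n)x^T(n)] and p = E[d(n)x(n)].
R = X.T @ X / num
p = X.T @ d / num

w_o = np.linalg.solve(R, p)             # Wiener solution w_o = R^{-1} p  (Eq. 8)
```

Because the desired signal here is generated noise-free from w_true, the computed w_o recovers w_true almost exactly; with additive noise, w_o would instead be the best attainable mean-square approximation.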
3.3. Mean-Square Error Surface
From Equation (6), the mean-square error of the filter is a quadratic function of
the filter coefficient vector w and has a single minimum point. For example,
consider a filter with only two coefficients (w_0, w_1): the mean-square error
function is a bowl-shaped surface with a single minimum point, as shown in
Figure 4. At this optimal operating point the mean-square error surface has zero
gradient.
4. Method of Steepest Descent
To solve the Wiener-Hopf equations (Eq. 8) for the tap weights of the optimum
filter, we basically need to compute the inverse of an N-by-N matrix made up of
the different values of the autocorrelation function. We may avoid the need for
this matrix inversion by using the method of steepest descent. Starting with an
initial guess for the optimum weight vector w_o, say w(0), a recursive search
method that may require many iterations (steps) to converge to w_o is used.
The method of steepest descent is a general scheme that uses the following steps
to search for the minimum point of any convex function of a set of parameters:
1. Start with an initial guess of the parameters whose optimum values are to
be found for minimizing the function.
2. Find the gradient of the function with respect to these parameters at the
present point.
3. Update the parameters by taking a step in the opposite direction of the
gradient vector obtained in Step 2. This corresponds to a step in the direction
of steepest descent in the cost function at the present point. Furthermore,
the size of the step taken is chosen proportional to the size of the gradient
vector.
4. Repeat Steps 2 and 3 until no further significant change is observed in the
parameters.
To implement this procedure in the case of the transversal filter shown in Figure
3, we recall (equation 7)

∇_w ξ(n) = −2p + 2Rw    (9)

where ∇ is the gradient operator defined as the column vector

∇ = [∂/∂w_0  ∂/∂w_1  …  ∂/∂w_{N−1}]^T    (10)

According to the above procedure, if w(n) is the tap-weight vector at the nth
iteration, the following recursive equation may be used to update w(n):

w(n+1) = w(n) − μ ∇_w ξ(n)    (11)

where the positive scalar μ is called the step-size, and ∇_w ξ(n) denotes the
gradient vector evaluated at the point w = w(n). Substituting (Eq. 9) into
(Eq. 11), we get

w(n+1) = w(n) − 2μ(Rw(n) − p)    (12)
As we shall soon show, the convergence of ๐’˜(๐’) to the optimum solution
๐‘ค๐‘œ and the speed at which this convergence takes place are dependent on the
size of the step-size parameter ฮผ. A large step-size may result in divergence of
this recursive equation.
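The four steps above can be sketched as follows; R, p and the iteration count are hypothetical values chosen for illustration, not quantities from the report:

```python
import numpy as np

# Hypothetical correlation matrix and cross-correlation vector.
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])
p = np.array([1.0, 0.4])
w_o = np.linalg.solve(R, p)            # the Wiener solution we hope to reach

lam_max = np.linalg.eigvalsh(R).max()
mu = 0.4 / lam_max                     # step-size inside the stable range

w = np.zeros(2)                        # initial guess w(0)
for _ in range(200):
    grad = -2 * p + 2 * R @ w          # gradient of the MSE (Eq. 9)
    w = w - mu * grad                  # steepest-descent update (Eq. 11)
```

Choosing μ well below the stability limit set by the largest eigenvalue of R keeps every mode of the recursion contracting, so w approaches w_o without needing a matrix inverse.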
To see how the recursive update w(n) converges toward w_o, we rearrange Eq.
(12) as

w(n+1) = (I − 2μR) w(n) + 2μp    (13)

where I is the N-by-N identity matrix. Next we subtract w_o from both sides of
Eq. (13) and rearrange the result to obtain

w(n+1) − w_o = (I − 2μR)(w(n) − w_o)    (14)

Define c(n) as

c(n) = w(n) − w_o

and let R = QΛQ^T, where Λ is a diagonal matrix consisting of the eigenvalues
λ_0, λ_1, …, λ_{N−1} of R and the columns of Q contain the corresponding
orthonormal eigenvectors, so that I = QQ^T. Substituting into Eq. (14), we get

c(n+1) = Q(I − 2μΛ)Q^T c(n)    (15)

Pre-multiplying Eq. (15) by Q^T, we have

Q^T c(n+1) = (I − 2μΛ) Q^T c(n)    (16)

With the notation v(n) = Q^T c(n), this becomes a set of decoupled scalar
recursions,

v_k(n+1) = (1 − 2μλ_k) v_k(n),  k = 0, 1, …, N−1    (17)

with initial conditions v(0) = Q^T c(0) = Q^T [w(0) − w_o], so that

v_k(n) = (1 − 2μλ_k)^n v_k(0),  k = 0, 1, …, N−1

Convergence (stability) requires |1 − 2μλ_k| < 1 for every k, which gives the
stability condition

0 < μ < 1/λ_max

where λ_max = max{λ_0, λ_1, …, λ_{N−1}} is the largest eigenvalue of R. The left
limit reflects the fact that the tap-weight correction must be in the opposite
direction of the gradient vector; the right limit ensures that all the scalar
tap-weight parameters in the recursive equation (17) decay exponentially as n
increases.
5. The Least Mean Squares (LMS) Algorithm
In any event, care has to be exercised in the selection of the learning-rate
parameter μ for the method of steepest descent to work. A further practical
limitation of the method of steepest descent is that it requires knowledge of the
correlation matrix R and the cross-correlation vector p. When the filter operates
in an unknown environment, these correlations are not available, and we are
forced to use estimates in their place. The least-mean-square algorithm results
from a simple and yet effective method of providing these estimates.
The least-mean-square (LMS) algorithm is based on the use of instantaneous
estimates of the autocorrelation matrix R and the cross-correlation vector p,
deduced directly from their defining equations as follows:

R = E[x(n)x^T(n)]  ⟹  R′ = x(n)x^T(n)    (18)

p = E[d(n)x(n)]  ⟹  p′ = x(n)d(n)    (19)

Substituting these estimates into Eq. (12), w(n+1) = w(n) − 2μ(Rw(n) − p),
gives

w(n+1) = w(n) − 2μ[x(n)x^T(n)w(n) − x(n)d(n)]
w(n+1) = w(n) − 2μx(n)[x^T(n)w(n) − d(n)]

With e(n) = d(n) − x^T(n)w(n) = d(n) − y(n), this becomes

w(n+1) = w(n) + 2μ x(n) e(n)    (20)

Equation (20) describes the Least-Mean-Square (LMS) algorithm.
Figure 5 Adaptive Filter with LMS
Summary of the LMS algorithm
Input: tap-weight vector w(n),
input vector x(n),
and desired output d(n).
Output: filter output y(n),
updated tap-weight vector w(n+1).
1. Filtering:
y(n) = w^T(n) x(n)
2. Error estimation:
e(n) = d(n) − y(n)
3. Tap-weight vector adaptation:
w(n+1) = w(n) + 2μ e(n) x(n)    (21)

where x(n) = [x(n) x(n−1) … x(n−N+1)]^T.
This is referred to as the LMS recursion. It suggests a simple procedure for
recursive adaptation of the filter coefficients after the arrival of every new
input sample x(n) and its corresponding desired output sample d(n). Equations
(3), (4), and (21), in this order, specify the three steps required to complete
each iteration of the LMS algorithm. Equation (3) is referred to as filtering; it
is performed to obtain the filter output. Equation (4) is used to calculate the
estimation error. Equation (21) is the tap-weight adaptation recursion.
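A minimal Python/NumPy sketch of the three steps, applied to a hypothetical noise-free system-identification problem (w_true, μ and the signal lengths are illustrative assumptions, not taken from the report):

```python
import numpy as np

# Hypothetical setup: identify an unknown 4-tap system with LMS.
rng = np.random.default_rng(1)
N = 4                                   # number of taps
mu = 0.01                               # step-size (illustrative)

w_true = rng.standard_normal(N)         # unknown system to identify
w = np.zeros(N)                         # adaptive tap weights w(0)
xbuf = np.zeros(N)                      # tap-input vector x(n)

for _ in range(20000):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = rng.standard_normal()     # newest input sample x(n)
    d = w_true @ xbuf                   # desired response d(n)
    y = w @ xbuf                        # 1. filtering          (Eq. 3)
    e = d - y                           # 2. error estimation   (Eq. 4)
    w = w + 2 * mu * e * xbuf           # 3. tap-weight update  (Eq. 21)
```

Each iteration needs only on the order of N multiplications for the filtering step and N for the update, which is the low computational complexity the LMS algorithm is known for.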
5.1. Convergence in the Mean Sense
A detailed analysis of convergence of the LMS algorithm in the mean square is
much more complicated than convergence analysis of the algorithm in the mean.
This analysis is also much more demanding in the assumptions made concerning
the behavior of the weight vector w(n) computed by the LMS algorithm (Haykin,
1991). In this subsection we present a simplified result of the analysis.
With the 2μ update convention of Eq. (21), the LMS algorithm is convergent in
the mean square if the learning-rate parameter μ satisfies the condition

0 < μ < 1/tr[R]    (22)

where tr[R] is the trace of the correlation matrix R. From matrix algebra, we
know that

tr[R] = Σ_k λ_k ≥ λ_max    (23)

so condition (22) is stricter than the condition for convergence in the mean
sense,

0 < μ < 1/λ_max    (24)
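Under the 2μ update convention used in this report, these step-size bounds can be evaluated numerically; the correlation matrix below is a hypothetical example, not one from the report:

```python
import numpy as np

# Hypothetical 3-tap input correlation matrix, for illustration only.
R = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.5],
              [0.2, 0.5, 1.0]])

lam = np.linalg.eigvalsh(R)
tr = np.trace(R)                   # the trace equals the sum of the eigenvalues

bound_mean = 1.0 / lam.max()       # eigenvalue-based (mean-sense) bound
bound_mean_square = 1.0 / tr       # trace-based (mean-square) bound
```

Since the trace is the sum of all eigenvalues, the trace-based bound is always at least as strict as the λ_max-based one.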
5.2. Convergence in the Mean-Square Sense
For an LMS algorithm convergent in the mean square, the final value ξ(∞) of the
mean-squared error ξ(n) is a positive constant, which represents the steady-state
condition of the learning curve. In fact, ξ(∞) is always in excess of the minimum
mean-squared error ξ_min realized by the corresponding Wiener filter for a
stationary environment. The difference between ξ(∞) and ξ_min is called the
excess mean-squared error:

ξ_ex = ξ(∞) − ξ_min    (25)

The ratio of ξ_ex to ξ_min is called the misadjustment:

M = ξ_ex / ξ_min    (26)

It is customary to express the misadjustment M as a percentage. Thus, for
example, a misadjustment of 10 percent means that the LMS algorithm produces
a mean-squared error (after completion of the learning process) that is 10
percent greater than the minimum mean-squared error ξ_min. Such a
performance is ordinarily considered to be satisfactory.
Another important characteristic of the LMS algorithm is the settling time.
However, there is no unique definition for the settling time. We may, for
example, approximate the learning curve by a single exponential with average
time constant τ, and use τ as a rough measure of the settling time. The smaller
the value of τ is, the faster the settling time.
To a good degree of approximation, the misadjustment M of the LMS algorithm
is directly proportional to the learning-rate parameter μ, whereas the average
time constant τ is inversely proportional to the learning-rate parameter μ.
We therefore have conflicting requirements, in the sense that if the learning-rate
parameter is reduced so as to reduce the misadjustment, then the settling time
of the LMS algorithm is increased. Conversely, if the learning-rate parameter is
increased so as to accelerate the learning process, then the misadjustment is
increased.

6. Simulation and Results
In our simulation scenario, a signal is sent through a channel and received with
noise. At the receiver we have the training (reference) signal, and we try to
extract our signal using the LMS algorithm with a specific value of μ.

Figure 6 Adaptive Filter (Noise Cancellation)

In the first case we assign a small value, μ = 0.0002. As Figure 7 shows, the
received signal is still noisy, so we try other values of μ.

Figure 7 Step-Size small

Next we choose a large value of step-size, μ = 0.4 (Figure 8), and the signal
cannot be recovered by the receiver.

Figure 8 Step-Size Large

As we see in Figure 9, our signal is recovered when the value of the step-size is
μ = 0.005.

Figure 9 Step-Size Acceptable

To recover the signal in this system we must therefore choose the value of the
step-size carefully: it should be neither too small (slow convergence) nor too
large (instability), and here it must lie within 0.0002 < μ < 0.4 for a stable
system.
Conclusion
Adaptive filtering involves the changing of filter parameters (coefficients) over
time, to adapt to changing signal characteristics. Over the past three decades,
digital signal processors have made great advances in increasing speed and
complexity, and reducing power consumption. As a result, real-time adaptive
filtering algorithms are quickly becoming practical and essential for the future of
communications, both wired and wireless. The LMS algorithm is by far the most
widely used algorithm in adaptive filtering for several reasons, the main features
that attracted the use of the LMS algorithm are low computational complexity,
proof of convergence in stationary environment, unbiased convergence in the
mean to the Wiener solution, and stable behavior when implemented with finite-
precision arithmetic.
Compared with other algorithms, such as the method of steepest descent, the
LMS algorithm needs no exact knowledge of the signal statistics: it updates the
weights (coefficients) iteratively from instantaneous estimates while continually
seeking the bottom point of the error surface of the filter.
References
(1) Adaptive Filtering: Algorithms and Practical Implementation, Third Edition, Springer, 2008.
(2) Principles of Adaptive Filters and Self-learning Systems, Springer-Verlag London Limited, 2005.
(3) Advanced Digital Signal Processing and Noise Reduction, Second Edition, Saeed V. Vaseghi, John Wiley & Sons, 2000.
(4) Adaptive Filtering: Fundamentals of Least Mean Squares with MATLAB, Alexander D. Poularikas, CRC Press, 2015.
(5) The Scientist and Engineer's Guide to Digital Signal Processing, Steven W. Smith.
Annex
Implementation of an Adaptive Filter Using LMS (MATLAB)

% Time base and signals
t = 1:0.025:5;
desired = 5*sin(2*3.*t);             % desired low-frequency signal
noise   = 5*sin(2*50*3.*t);          % high-frequency noise added in the channel
refer   = 5*sin(2*50*3.*t + 3/20);   % phase-shifted reference of the noise
primary = desired + noise;           % received (primary) signal

subplot(4,1,1); plot(t,desired); ylabel('desired');
subplot(4,1,2); plot(t,refer);   ylabel('refer');
subplot(4,1,3); plot(t,primary); ylabel('primary');

% LMS parameters
order = 2;       % filter order (number of taps)
mu    = 0.005;   % step-size
n     = length(primary);

delayed   = zeros(1,order);   % delay line of recent reference samples
adap      = zeros(1,order);   % adaptive filter weights
cancelled = zeros(1,n);       % error signal = recovered signal

for k = 1:n
    delayed(1) = refer(k);                      % shift in new reference sample
    y = delayed*adap';                          % filter output (noise estimate)
    cancelled(k) = primary(k) - y;              % error = primary - noise estimate
    adap = adap + 2*mu*cancelled(k).*delayed;   % LMS weight update (Eq. 21)
    delayed(2:order) = delayed(1:order-1);      % shift the delay line
end

subplot(4,1,4); plot(t,cancelled); ylabel('cancelled');
ย 
State of the Word 2011
State of the Word 2011State of the Word 2011
State of the Word 2011photomatt
ย 
How to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & TricksHow to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & TricksSlideShare
ย 
Getting Started With SlideShare
Getting Started With SlideShareGetting Started With SlideShare
Getting Started With SlideShareSlideShare
ย 

Viewers also liked (20)

Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...
Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...
Noice canclellation using adaptive filters with adpative algorithms(LMS,NLMS,...
ย 
M.Tech Thesis on Simulation and Hardware Implementation of NLMS algorithm on ...
M.Tech Thesis on Simulation and Hardware Implementation of NLMS algorithm on ...M.Tech Thesis on Simulation and Hardware Implementation of NLMS algorithm on ...
M.Tech Thesis on Simulation and Hardware Implementation of NLMS algorithm on ...
ย 
Internet
InternetInternet
Internet
ย 
Low power vlsi implementation adaptive noise cancellor based on least means s...
Low power vlsi implementation adaptive noise cancellor based on least means s...Low power vlsi implementation adaptive noise cancellor based on least means s...
Low power vlsi implementation adaptive noise cancellor based on least means s...
ย 
Adaptive filters
Adaptive filtersAdaptive filters
Adaptive filters
ย 
Active noise control
Active noise controlActive noise control
Active noise control
ย 
Real-Time Active Noise Cancellation with Simulink and Data Acquisition Toolbox
Real-Time Active Noise Cancellation with Simulink and Data Acquisition ToolboxReal-Time Active Noise Cancellation with Simulink and Data Acquisition Toolbox
Real-Time Active Noise Cancellation with Simulink and Data Acquisition Toolbox
ย 
Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...
Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...
Simulation and hardware implementation of Adaptive algorithms on tms320 c6713...
ย 
Nlms algorithm for adaptive filter
Nlms algorithm for adaptive filterNlms algorithm for adaptive filter
Nlms algorithm for adaptive filter
ย 
Echo Cancellation Paper
Echo Cancellation Paper Echo Cancellation Paper
Echo Cancellation Paper
ย 
Echo Cancellation Algorithms using Adaptive Filters: A Comparative Study
Echo Cancellation Algorithms using Adaptive Filters: A Comparative StudyEcho Cancellation Algorithms using Adaptive Filters: A Comparative Study
Echo Cancellation Algorithms using Adaptive Filters: A Comparative Study
ย 
Performance analysis of adaptive noise canceller for an ecg signal
Performance analysis of adaptive noise canceller for an ecg signalPerformance analysis of adaptive noise canceller for an ecg signal
Performance analysis of adaptive noise canceller for an ecg signal
ย 
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713
Hardware Implementation of Adaptive Noise Cancellation over DSP Kit TMS320C6713
ย 
5 g โ€“wireless technology
5 g โ€“wireless technology5 g โ€“wireless technology
5 g โ€“wireless technology
ย 
ppt on solar tree
ppt on solar treeppt on solar tree
ppt on solar tree
ย 
Noise cancellation and supression
Noise cancellation and supressionNoise cancellation and supression
Noise cancellation and supression
ย 
zigbee full ppt
zigbee full pptzigbee full ppt
zigbee full ppt
ย 
State of the Word 2011
State of the Word 2011State of the Word 2011
State of the Word 2011
ย 
How to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & TricksHow to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & Tricks
ย 
Getting Started With SlideShare
Getting Started With SlideShareGetting Started With SlideShare
Getting Started With SlideShare
ย 

Similar to Adaptive filters

DSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital FiltersDSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital FiltersAmr E. Mohamed
ย 
Filter (signal processing)
Filter (signal processing)Filter (signal processing)
Filter (signal processing)RSARANYADEVI
ย 
Discrete time signal processing unit-2
Discrete time signal processing unit-2Discrete time signal processing unit-2
Discrete time signal processing unit-2selvalakshmi24
ย 
ASP UNIT 1 QUESTIONBANK ANSWERS.pdf
ASP UNIT 1 QUESTIONBANK ANSWERS.pdfASP UNIT 1 QUESTIONBANK ANSWERS.pdf
ASP UNIT 1 QUESTIONBANK ANSWERS.pdfKarthikRaperthi
ย 
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdfASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdfKarthikRaperthi
ย 
Z4301132136
Z4301132136Z4301132136
Z4301132136IJERA Editor
ย 
Design of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
Design of Low Pass Digital FIR Filter Using Cuckoo Search AlgorithmDesign of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
Design of Low Pass Digital FIR Filter Using Cuckoo Search AlgorithmIJERA Editor
ย 
Design of Area Efficient Digital FIR Filter using MAC
Design of Area Efficient Digital FIR Filter using MACDesign of Area Efficient Digital FIR Filter using MAC
Design of Area Efficient Digital FIR Filter using MACIRJET Journal
ย 
Method to Measure Displacement and Velocityfrom Acceleration Signals
Method to Measure Displacement and Velocityfrom Acceleration SignalsMethod to Measure Displacement and Velocityfrom Acceleration Signals
Method to Measure Displacement and Velocityfrom Acceleration SignalsIJERA Editor
ย 
Performance Analysis of FIR Filter using FDATool
Performance Analysis of FIR Filter using FDAToolPerformance Analysis of FIR Filter using FDATool
Performance Analysis of FIR Filter using FDAToolijtsrd
ย 
digital filters on open-loop system.pptx
digital filters on open-loop system.pptxdigital filters on open-loop system.pptx
digital filters on open-loop system.pptxHtetWaiYan27
ย 
Time domain analysis and synthesis using Pth norm filter design
Time domain analysis and synthesis using Pth norm filter designTime domain analysis and synthesis using Pth norm filter design
Time domain analysis and synthesis using Pth norm filter designCSCJournals
ย 
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass FilterIRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass FilterIRJET Journal
ย 
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...iosrjce
ย 
Simulation of EMI Filters Using Matlab
Simulation of EMI Filters Using MatlabSimulation of EMI Filters Using Matlab
Simulation of EMI Filters Using Matlabinventionjournals
ย 

Similar to Adaptive filters (20)

DSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital FiltersDSP_2018_FOEHU - Lec 05 - Digital Filters
DSP_2018_FOEHU - Lec 05 - Digital Filters
ย 
File 2
File 2File 2
File 2
ย 
Filter (signal processing)
Filter (signal processing)Filter (signal processing)
Filter (signal processing)
ย 
Discrete time signal processing unit-2
Discrete time signal processing unit-2Discrete time signal processing unit-2
Discrete time signal processing unit-2
ย 
ASP UNIT 1 QUESTIONBANK ANSWERS.pdf
ASP UNIT 1 QUESTIONBANK ANSWERS.pdfASP UNIT 1 QUESTIONBANK ANSWERS.pdf
ASP UNIT 1 QUESTIONBANK ANSWERS.pdf
ย 
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdfASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
ASP UNIT 1 QUESTIONBANK ANSWERS (1).pdf
ย 
Z4301132136
Z4301132136Z4301132136
Z4301132136
ย 
Design of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
Design of Low Pass Digital FIR Filter Using Cuckoo Search AlgorithmDesign of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
Design of Low Pass Digital FIR Filter Using Cuckoo Search Algorithm
ย 
Design of Area Efficient Digital FIR Filter using MAC
Design of Area Efficient Digital FIR Filter using MACDesign of Area Efficient Digital FIR Filter using MAC
Design of Area Efficient Digital FIR Filter using MAC
ย 
Method to Measure Displacement and Velocityfrom Acceleration Signals
Method to Measure Displacement and Velocityfrom Acceleration SignalsMethod to Measure Displacement and Velocityfrom Acceleration Signals
Method to Measure Displacement and Velocityfrom Acceleration Signals
ย 
C010431520
C010431520C010431520
C010431520
ย 
Av 738 - Adaptive Filtering Lecture 1 - Introduction
Av 738 - Adaptive Filtering Lecture 1 - IntroductionAv 738 - Adaptive Filtering Lecture 1 - Introduction
Av 738 - Adaptive Filtering Lecture 1 - Introduction
ย 
Performance Analysis of FIR Filter using FDATool
Performance Analysis of FIR Filter using FDAToolPerformance Analysis of FIR Filter using FDATool
Performance Analysis of FIR Filter using FDATool
ย 
E0162736
E0162736E0162736
E0162736
ย 
digital filters on open-loop system.pptx
digital filters on open-loop system.pptxdigital filters on open-loop system.pptx
digital filters on open-loop system.pptx
ย 
Time domain analysis and synthesis using Pth norm filter design
Time domain analysis and synthesis using Pth norm filter designTime domain analysis and synthesis using Pth norm filter design
Time domain analysis and synthesis using Pth norm filter design
ย 
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass FilterIRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
IRJET-A Comparative Study of Digital FIR and IIR Band- Pass Filter
ย 
D017632228
D017632228D017632228
D017632228
ย 
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
Adaptive Digital Filter Design for Linear Noise Cancellation Using Neural Net...
ย 
Simulation of EMI Filters Using Matlab
Simulation of EMI Filters Using MatlabSimulation of EMI Filters Using Matlab
Simulation of EMI Filters Using Matlab
ย 

More from Mustafa Khaleel

IPsec vpn topology over GRE tunnels
IPsec vpn topology over GRE tunnelsIPsec vpn topology over GRE tunnels
IPsec vpn topology over GRE tunnelsMustafa Khaleel
ย 
WiMAX implementation in ns3
WiMAX implementation in ns3WiMAX implementation in ns3
WiMAX implementation in ns3Mustafa Khaleel
ย 
Ultra wideband technology (UWB)
Ultra wideband technology (UWB)Ultra wideband technology (UWB)
Ultra wideband technology (UWB)Mustafa Khaleel
ย 

More from Mustafa Khaleel (7)

LTE-U
LTE-ULTE-U
LTE-U
ย 
Massive mimo
Massive mimoMassive mimo
Massive mimo
ย 
IPsec vpn topology over GRE tunnels
IPsec vpn topology over GRE tunnelsIPsec vpn topology over GRE tunnels
IPsec vpn topology over GRE tunnels
ย 
WiMAX implementation in ns3
WiMAX implementation in ns3WiMAX implementation in ns3
WiMAX implementation in ns3
ย 
Turbocode
TurbocodeTurbocode
Turbocode
ย 
Mm wave
Mm waveMm wave
Mm wave
ย 
Ultra wideband technology (UWB)
Ultra wideband technology (UWB)Ultra wideband technology (UWB)
Ultra wideband technology (UWB)
ย 

Recently uploaded

Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...VICTOR MAESTRE RAMIREZ
ย 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfROCENODodongVILLACER
ย 
Gurgaon โœก๏ธ9711147426โœจCall In girls Gurgaon Sector 51 escort service
Gurgaon โœก๏ธ9711147426โœจCall In girls Gurgaon Sector 51 escort serviceGurgaon โœก๏ธ9711147426โœจCall In girls Gurgaon Sector 51 escort service
Gurgaon โœก๏ธ9711147426โœจCall In girls Gurgaon Sector 51 escort servicejennyeacort
ย 
computer application and construction management
computer application and construction managementcomputer application and construction management
computer application and construction managementMariconPadriquez1
ย 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx959SahilShah
ย 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examplesDr. Gudipudi Nageswara Rao
ย 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfme23b1001
ย 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxbritheesh05
ย 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
ย 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
ย 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxk795866
ย 
๐Ÿ”9953056974๐Ÿ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
๐Ÿ”9953056974๐Ÿ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...๐Ÿ”9953056974๐Ÿ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
๐Ÿ”9953056974๐Ÿ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...9953056974 Low Rate Call Girls In Saket, Delhi NCR
ย 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
ย 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfAsst.prof M.Gokilavani
ย 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...srsj9000
ย 
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxnull - The Open Security Community
ย 
Study on Air-Water & Water-Water Heat Exchange in a Finned ๏ปฟTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ๏ปฟTube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned ๏ปฟTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ๏ปฟTube ExchangerAnamika Sarkar
ย 
TechTACยฎ CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTACยฎ CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTACยฎ CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTACยฎ CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catcherssdickerson1
ย 
8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitter8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitterShivangiSharma879191
ย 

Recently uploaded (20)

Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
ย 
Risk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdfRisk Assessment For Installation of Drainage Pipes.pdf
Risk Assessment For Installation of Drainage Pipes.pdf
ย 
Gurgaon โœก๏ธ9711147426โœจCall In girls Gurgaon Sector 51 escort service
Gurgaon โœก๏ธ9711147426โœจCall In girls Gurgaon Sector 51 escort serviceGurgaon โœก๏ธ9711147426โœจCall In girls Gurgaon Sector 51 escort service
Gurgaon โœก๏ธ9711147426โœจCall In girls Gurgaon Sector 51 escort service
ย 
computer application and construction management
computer application and construction managementcomputer application and construction management
computer application and construction management
ย 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx
ย 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
ย 
Electronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdfElectronically Controlled suspensions system .pdf
Electronically Controlled suspensions system .pdf
ย 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptx
ย 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
ย 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
ย 
Introduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptxIntroduction-To-Agricultural-Surveillance-Rover.pptx
Introduction-To-Agricultural-Surveillance-Rover.pptx
ย 
๐Ÿ”9953056974๐Ÿ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
๐Ÿ”9953056974๐Ÿ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...๐Ÿ”9953056974๐Ÿ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
๐Ÿ”9953056974๐Ÿ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
ย 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
ย 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
ย 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
ย 
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
ย 
Study on Air-Water & Water-Water Heat Exchange in a Finned ๏ปฟTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ๏ปฟTube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned ๏ปฟTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ๏ปฟTube Exchanger
ย 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
ย 
TechTACยฎ CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTACยฎ CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTACยฎ CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTACยฎ CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
ย 
8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitter8251 universal synchronous asynchronous receiver transmitter
8251 universal synchronous asynchronous receiver transmitter
ย 

Adaptive filters

Annex ............................................................................ 16

Figure 1 FIR Filter ............................................................ 3
Figure 2 IIR Filter ............................................................ 4
Figure 3 Wiener Filters ........................................................ 5
Figure 4 Error surface with two weights ........................................ 7
Figure 5 Adaptive Filter with LMS ............................................. 10
Figure 6 Adaptive Filter (Noise Cancellation) ................................. 13
Figure 7 Step-Size Small ...................................................... 13
Figure 8 Step-Size Large ...................................................... 14
Figure 9 Step-Size Acceptable ................................................. 14
1. Introduction

Filtering is a signal processing operation whose objective is to process a signal in order to manipulate the information it contains. In other words, a filter is a device that maps its input signal to another output signal, facilitating the extraction of the desired information contained in the input signal. A digital filter is one that processes discrete-time signals represented in digital format.

For time-invariant filters, the internal parameters and the structure of the filter are fixed, and if the filter is linear, the output signal is a linear function of the input signal. Once prescribed specifications are given, the design of time-invariant linear filters entails three basic steps: the approximation of the specifications by a rational transfer function, the choice of an appropriate structure defining the algorithm, and the choice of the form of implementation for the algorithm.

An adaptive filter is required when either the fixed specifications are unknown or the specifications cannot be satisfied by time-invariant filters. Strictly speaking, an adaptive filter is a nonlinear filter, since its characteristics depend on the input signal and consequently the homogeneity and additivity conditions are not satisfied. However, if we freeze the filter parameters at a given instant of time, most adaptive filters considered here are linear in the sense that their output signals are linear functions of their input signals. Adaptive filters are time-varying, since their parameters are continually changing in order to meet a performance requirement. In this sense, we can interpret an adaptive filter as a filter that performs the approximation step on-line. Usually, the definition of the performance criterion requires the existence of a reference signal, which is usually hidden in the approximation step of fixed-filter design.
2. Digital Filters

The term filter is commonly used to refer to any device or system that takes a mixture of particles/elements from its input and processes them according to some specific rules to generate a corresponding set of particles/elements at its output. In the context of signals and systems, the particles/elements are the frequency components of the underlying signals; traditionally, filters are used to retain all the frequency components that belong to a particular band of frequencies while rejecting the rest as much as possible. In a more general sense, the term filter may refer to a system that reshapes the frequency components of the input to generate an output signal with some desirable features.

2.1. Linear and Nonlinear Filters

Filters can be classified as either linear or nonlinear. A linear filter is one whose output is a linear function of the input. The design of a linear filter requires assuming stationarity (statistical time-invariance) and knowing the relevant signal and noise statistics a priori. The linear filter design attempts to minimize the effect of noise on the signal by meeting a suitable statistical criterion. The classical linear Wiener filter, for example, minimizes the mean square error (MSE) between the desired signal response and the actual filter response. The Wiener solution is optimum in the mean-square sense, and it is truly optimum for second-order stationary noise statistics (fully described by a constant finite mean and variance).

A linear adaptive filter is one whose output is a linear combination of the actual input at any moment in time between adaptation operations. A nonlinear adaptive filter does not necessarily have a linear relationship between the input and output at any moment in time. Many different linear adaptive filter algorithms have been published in the literature.
Some of the important features of these algorithms can be identified by the following terms:

1. Rate of convergence - how many iterations are needed to reach a near-optimum solution.
2. Misadjustment - a measure of the amount by which the final value of the MSE, averaged over an ensemble of adaptive filters, deviates from the MSE produced by the Wiener solution.
3. Tracking - the ability to follow statistical variations in a non-stationary environment.
4. Robustness - small disturbances from any source (internal or external) produce only small estimation errors.
5. Computational requirements - the computational operations per iteration, data storage, and programming requirements.
6. Structure - the structure of information flow in the algorithm (e.g., serial, parallel), which determines the possible hardware implementations.
7. Numerical properties - the type and nature of quantization errors, numerical stability, and numerical accuracy.

2.2. Filter Design

There are two common ways to design a digital filter: non-recursive and recursive. A non-recursive filter is also called a Finite Impulse Response (FIR) filter. An FIR filter is implemented by convolution: each sample of the output is calculated by weighting samples of the input and adding them together. Recursive filters (Infinite Impulse Response, or IIR, filters) are an extension of this, using previously calculated values from the output in addition to points from the input. Recursive filters are defined by a set of recursion coefficients.

Figure 1 FIR Filter
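The two structures just described can be sketched as difference equations. The following Python fragment is a minimal illustration, not code from this report; the coefficient values are invented for demonstration (a 3-point moving average for the FIR case, a single feedback tap for the IIR case).

```python
def fir_filter(x, b):
    """Non-recursive (FIR): each output sample is a weighted sum of input samples."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i, bi in enumerate(b):
            if n - i >= 0:          # samples before the start of the signal are taken as 0
                acc += bi * x[n - i]
        y.append(acc)
    return y

def iir_filter(x, b, a):
    """Recursive (IIR): also feeds back previously computed output samples.
    The recursion coefficients a are assumed normalized so that a[0] = 1."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i, bi in enumerate(b):
            if n - i >= 0:
                acc += bi * x[n - i]
        for j, aj in enumerate(a[1:], start=1):
            if n - j >= 0:
                acc -= aj * y[n - j]   # feedback from earlier outputs
        y.append(acc)
    return y

# A 3-point moving average is an FIR filter with equal weights.
print(fir_filter([3.0, 3.0, 3.0, 3.0], [1/3, 1/3, 1/3]))  # -> [1.0, 2.0, 3.0, 3.0]
```

The only difference between the two functions is the feedback loop over past outputs, which is exactly what the recursion coefficients describe.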
Figure 2 IIR Filter

Finally, we can classify digital filters by their use and by their implementation. The use of a digital filter can be broken into three categories: time domain, frequency domain, and custom. Time domain filters are used when the information is encoded in the shape of the signal's waveform; time domain filtering is used for actions such as smoothing, DC removal, and waveform shaping. In contrast, frequency domain filters are used when the information is contained in the amplitude, frequency, and phase of the component sinusoids; the goal of these filters is to separate one band of frequencies from another. Custom filters are used when a special action is required of the filter, something more elaborate than the four basic responses (high-pass, low-pass, band-pass, and band-reject).

3. Wiener Filters

Wiener formulated the continuous-time, least mean square error estimation problem in his classic work on interpolation, extrapolation and smoothing of time series (Wiener 1949). The extension of the Wiener theory from continuous time to discrete time is simple, and of more practical use for implementation on digital signal processors. A Wiener filter can be an infinite-duration impulse response (IIR) filter or a finite-duration impulse
response (FIR) filter. In general, the formulation of an IIR Wiener filter results in a set of non-linear equations, whereas the formulation of an FIR Wiener filter results in a set of linear equations and has a closed-form solution; FIR filters are relatively simple to compute, inherently stable, and therefore more practical. The main drawback of FIR filters compared with IIR filters is that they may need a large number of coefficients to approximate a desired response.

Figure 3 Wiener Filters

Here $x(n)$ is the input signal vector and $w$ is the filter coefficient vector:

$x(n) = [x(n)\;\; x(n-1)\;\; \ldots\;\; x(n-N+1)]^T$   (1)

$w = [w_0\;\; w_1\;\; \ldots\;\; w_{N-1}]^T$   (2)

The output signal $y(n)$ is

$y(n) = \sum_{i=0}^{N-1} w_i\, x(n-i) = w_0 x(n) + w_1 x(n-1) + \cdots + w_{N-1} x(n-N+1)$

that is,

$y(n) = w^T x(n)$   (3)

$d(n)$ is the training or desired signal, and $e(n)$ is the error signal, the difference between the output $y(n)$ and the desired signal $d(n)$:

$e(n) = d(n) - y(n)$   (4)
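Equations (1)-(4) amount to a dot product and a subtraction. A minimal sketch, assuming a 3-tap filter with invented coefficient and signal values:

```python
import numpy as np

N = 3
w = np.array([0.5, 0.3, 0.2])        # coefficient vector w, eq. (2)
x_vec = np.array([1.0, 2.0, 3.0])    # tap-input vector [x(n), x(n-1), x(n-2)], eq. (1)
d = 1.6                              # desired signal d(n)

y = w @ x_vec                        # filter output y(n) = w^T x(n), eq. (3)
e = d - y                            # error signal e(n) = d(n) - y(n), eq. (4)
```

Adaptive algorithms later in the text do nothing more than repeat these two operations and then adjust `w` based on `e`.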
3.1. Error Measurements

Adaptation of the filter coefficients follows a minimization procedure of a particular objective or cost function. This function is commonly defined as a norm of the error signal $e(n)$; the most commonly employed norm is the mean square error (MSE).

3.2. The Mean-Square Error (MSE)

From Figure 3, we define the MSE (cost function) as

$\xi(n) = E[e^2(n)] = E[|d(n) - y(n)|^2]$   (5)

Substituting equation (3) into (5) and expanding:

$\xi(n) = E[|d(n) - w^T x(n)|^2] = E[d^2(n)] - 2 w^T E[d(n) x(n)] + w^T E[x(n) x^T(n)]\, w$

Defining $R = E[x(n) x^T(n)]$ and $p = E[d(n) x(n)]$, this becomes

$\xi(n) = E[d^2(n)] - 2 w^T p + w^T R w$   (6)

where $R$ is the input-signal autocorrelation matrix and $p$ is the cross-correlation vector between the desired signal and the input signal. The gradient vector of the MSE function with respect to the adaptive filter coefficient vector is

$\nabla_w \xi(n) = -2p + 2Rw$   (7)

The coefficient vector that minimizes the MSE cost function is obtained by equating the gradient vector to zero, $\nabla_w \xi(n) = 0$. Assuming that $R$ is non-singular, one gets
Figure 4 Error surface with two weights

$w_o = R^{-1} p$   (8)

This system of equations is known as the Wiener-Hopf equations, and the filter whose weights satisfy the Wiener-Hopf equations is called a Wiener filter.

3.3. Mean Square Error Surface

From equation (6), the mean square error of the filter is a quadratic function of the filter coefficient vector $w$ and has a single minimum point. For example, for a filter with only two coefficients $(w_0, w_1)$, the mean square error function is a bowl-shaped surface with a single minimum point, as shown in Figure 4. At this optimal operating point the mean square error surface has zero gradient.
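The Wiener solution (8) can be checked numerically. In the sketch below, the two-tap "unknown system" and its statistics are invented for illustration; $R$ and $p$ are estimated from sample averages and then equation (8) is solved:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(50_000)               # white input signal

# Unknown two-tap system the Wiener filter should identify (illustrative values).
w_true = np.array([0.7, -0.3])
d = w_true[0] * x + np.concatenate(([0.0], w_true[1] * x[:-1]))

# Build tap-input vectors x(n) = [x(n), x(n-1)]^T and estimate R and p.
X = np.stack([x[1:], x[:-1]], axis=1)         # each row is x(n)^T
R = X.T @ X / len(X)                          # R ~ E[x(n) x^T(n)]
p = X.T @ d[1:] / len(X)                      # p ~ E[d(n) x(n)]

w_o = np.linalg.solve(R, p)                   # Wiener solution w_o = R^{-1} p, eq. (8)
```

Because the desired signal here is exactly a linear function of the input with no added noise, the solved weights recover `w_true` almost exactly; with noise present, $w_o$ would still be the MSE-optimal compromise.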
4. Method of Steepest Descent

To solve the Wiener-Hopf equations (8) for the tap weights of the optimum filter, we basically need to compute the inverse of an N-by-N matrix made up of the different values of the autocorrelation function. We may avoid this matrix inversion by using the method of steepest descent. Starting with an initial guess for the optimum weight vector $w_o$, say $w(0)$, a recursive search method is used that may require many iterations (steps) to converge to $w_o$. The method of steepest descent is a general scheme that uses the following steps to search for the minimum point of any convex function of a set of parameters:

1. Start with an initial guess of the parameters whose optimum values are to be found for minimizing the function.
2. Find the gradient of the function with respect to these parameters at the present point.
3. Update the parameters by taking a step in the opposite direction of the gradient vector obtained in Step 2. This corresponds to a step in the direction of steepest descent of the cost function at the present point; the size of the step is chosen proportional to the size of the gradient vector.
4. Repeat Steps 2 and 3 until no further significant change is observed in the parameters.

To implement this procedure for the transversal filter shown in Figure 3, we recall equation (7):

$\nabla_w \xi(n) = -2p + 2Rw$   (9)

where $\nabla$ is the gradient operator defined as the column vector

$\nabla = \left[\dfrac{\partial}{\partial w_0}\;\; \dfrac{\partial}{\partial w_1}\;\; \ldots\;\; \dfrac{\partial}{\partial w_{N-1}}\right]^T$   (10)

According to the above procedure, if $w(n)$ is the tap-weight vector at the $n$th iteration, the following recursive equation may be used to update $w(n)$:

$w(n+1) = w(n) - \mu \nabla_w \xi(n)$   (11)

where the positive scalar $\mu$ is called the step size, and $\nabla_w \xi(n)$ denotes the gradient vector evaluated at the point $w = w(n)$. Substituting (9) into (11), we get

$w(n+1) = w(n) - 2\mu\,(R\, w(n) - p)$   (12)
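The recursion (12) can be sketched in a few lines; the second-order statistics $R$ and $p$ below are made-up illustrative values, not from the text:

```python
import numpy as np

# Illustrative statistics for a two-tap problem.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])            # autocorrelation matrix
p = np.array([0.55, 0.2])             # cross-correlation vector
w_o = np.linalg.solve(R, p)           # Wiener solution, for comparison

mu = 0.1                              # step size (small enough to be stable here)
w = np.zeros(2)                       # initial guess w(0)
for _ in range(500):
    gradient = -2 * p + 2 * R @ w     # eq. (9)
    w = w - mu * gradient             # eq. (11), equivalently eq. (12)
```

After enough iterations the weight vector agrees with the closed-form Wiener solution, without ever inverting $R$ explicitly.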
As we shall soon show, the convergence of $w(n)$ to the optimum solution $w_o$, and the speed at which this convergence takes place, depend on the size of the step-size parameter $\mu$. A large step size may result in divergence of this recursive equation. To see how the recursive update $w(n)$ converges toward $w_o$, we rearrange equation (12) as

$w(n+1) = (I - 2\mu R)\, w(n) + 2\mu p$   (13)

where $I$ is the N-by-N identity matrix. Next we subtract $w_o$ from both sides of (13) and rearrange the result to obtain

$w(n+1) - w_o = (I - 2\mu R)\,(w(n) - w_o)$   (14)

Define $c(n) = w(n) - w_o$ and the eigendecomposition $R = Q \Lambda Q^T$, where $\Lambda$ is a diagonal matrix consisting of the eigenvalues $\lambda_0, \lambda_1, \ldots, \lambda_{N-1}$ of $R$, the columns of $Q$ contain the corresponding orthonormal eigenvectors, and $I = Q Q^T$. Substituting into (14), we get

$c(n+1) = Q\,(I - 2\mu\Lambda)\, Q^T c(n)$   (15)

Pre-multiplying (15) by $Q^T$ gives

$Q^T c(n+1) = (I - 2\mu\Lambda)\, Q^T c(n)$   (16)

With the notation $v(n) = Q^T c(n)$, this decouples into

$v(n+1) = (I - 2\mu\Lambda)\, v(n)$   (17)

with initial condition $v(0) = Q^T c(0) = Q^T [w(0) - w_o]$. Since $\Lambda$ is diagonal, each component evolves independently:

$v_k(n) = (1 - 2\mu\lambda_k)^n\, v_k(0), \quad k = 0, 1, \ldots, N-1$

Convergence (stability) therefore requires $|1 - 2\mu\lambda_k| < 1$ for every $k$, which, given the update convention of (12), yields the stability condition

$0 < \mu < \dfrac{1}{\lambda_{max}}$

where $\lambda_{max} = \max\{\lambda_0, \lambda_1, \ldots, \lambda_{N-1}\}$ is the largest eigenvalue of $R$. The left limit reflects the fact that the tap-weight correction must be in the opposite direction of the gradient vector. The right limit ensures that all the scalar tap-weight parameters in the recursive equations (17) decay exponentially as $n$ increases.
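The eigenvalue bound can be demonstrated numerically. The matrix and vector below are arbitrary illustrative values; the sketch runs the recursion (12) with a step size just inside and just outside the bound:

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])                 # illustrative autocorrelation matrix
p = np.array([0.55, 0.2])
w_o = np.linalg.solve(R, p)

lam_max = np.linalg.eigvalsh(R).max()      # largest eigenvalue of R
mu_limit = 1.0 / lam_max                   # stability bound for the 2*mu update

def steepest_descent_error(mu, steps=200):
    """Distance from w_o after running eq. (12) for a given step size."""
    w = np.zeros(2)
    for _ in range(steps):
        w = w - 2 * mu * (R @ w - p)       # eq. (12)
    return np.linalg.norm(w - w_o)

err_stable = steepest_descent_error(0.9 * mu_limit)    # inside the bound: converges
err_unstable = steepest_descent_error(1.1 * mu_limit)  # outside the bound: diverges
```

The error with the smaller step size shrinks toward zero, while the step size just past the bound makes the mode associated with $\lambda_{max}$ grow geometrically.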
5. The Least Mean Squares (LMS) Algorithm

Care has to be exercised in the selection of the step-size parameter $\mu$ for the method of steepest descent to work. A further practical limitation of the method is that it requires knowledge of the second-order statistics $R$ and $p$; when the filter operates in an unknown environment, these correlation functions are not available, in which case we are forced to use estimates in their place. The least-mean-square (LMS) algorithm results from a simple and yet effective method of providing these estimates. It is based on the use of instantaneous estimates of the autocorrelation matrix $R$ and the cross-correlation vector $p$, deduced directly from their defining equations as follows:

$R = E[x(n) x^T(n)] \;\Rightarrow\; \hat{R} = x(n) x^T(n)$   (18)

$p = E[d(n) x(n)] \;\Rightarrow\; \hat{p} = x(n) d(n)$   (19)

Substituting these estimates into the steepest-descent recursion (12):

$w(n+1) = w(n) - 2\mu\,(R\, w(n) - p)$
$w(n+1) = w(n) - 2\mu\,[x(n) x^T(n)\, w(n) - x(n) d(n)]$
$w(n+1) = w(n) - 2\mu\, x(n)\,[x^T(n)\, w(n) - d(n)]$

Defining $e'(n) = x^T(n)\, w(n) - d(n) = -e(n)$, this becomes

$w(n+1) = w(n) - 2\mu\, x(n)\, e'(n)$   (20)

Equation (20) describes the least-mean-square (LMS) algorithm.

Figure 5 Adaptive Filter with LMS
Summary of the LMS algorithm

Input: tap-weight vector $w(n)$, input vector $x(n)$, and desired output $d(n)$.
Output: filter output $y(n)$ and updated tap-weight vector $w(n+1)$.

1. Filtering: $y(n) = w^T(n)\, x(n)$
2. Error estimation: $e(n) = d(n) - y(n)$
3. Tap-weight vector adaptation: $w(n+1) = w(n) + 2\mu\, e(n)\, x(n)$

where $x(n) = [x(n)\;\; x(n-1)\;\; \ldots\;\; x(n-N+1)]^T$. Writing (20) in terms of $e(n) = -e'(n)$ gives

$w(n+1) = w(n) + 2\mu\, e(n)\, x(n)$   (21)

This is referred to as the LMS recursion; it suggests a simple procedure for recursive adaptation of the filter coefficients after the arrival of every new input sample $x(n)$ and its corresponding desired output sample $d(n)$. Equations (3), (4), and (21), in this order, specify the three steps required to complete each iteration of the LMS algorithm: equation (3) is the filtering step, performed to obtain the filter output; equation (4) calculates the estimation error; and equation (21) is the tap-weight adaptation recursion.

5.1. Convergence in the Mean Sense

A detailed analysis of convergence of the LMS algorithm in the mean square is much more complicated than convergence analysis of the algorithm in the mean. It is also much more demanding in the assumptions made concerning the behavior of the weight vector $w(n)$ computed by the LMS algorithm (Haykin, 1991). In this subsection we present a simplified result of the analysis. The LMS algorithm is convergent in the mean square if the step-size parameter $\mu$ satisfies the condition

$0 < \mu < \dfrac{1}{tr[R]}$   (22)
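The three steps of the summary can be sketched as a complete LMS loop. The "unknown system" and the white input below are invented for illustration; with no measurement noise, the weights converge to the true system:

```python
import numpy as np

def lms(x, d, num_taps, mu):
    """LMS: y(n) = w^T x(n), e(n) = d(n) - y(n),
    w(n+1) = w(n) + 2*mu*e(n)*x(n)  -- eqs. (3), (4), (21)."""
    w = np.zeros(num_taps)
    e_hist = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]   # [x(n), x(n-1), ...]
        y = w @ x_vec                             # filtering, eq. (3)
        e = d[n] - y                              # error estimation, eq. (4)
        w = w + 2 * mu * e * x_vec                # adaptation, eq. (21)
        e_hist[n] = e
    return w, e_hist

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)                     # white input
w_true = np.array([0.7, -0.3])                    # unknown system (illustrative)
d = np.convolve(x, w_true)[:len(x)]               # desired signal d(n)

w, e_hist = lms(x, d, num_taps=2, mu=0.01)
```

Unlike steepest descent, no explicit $R$ or $p$ is ever formed: each sample pair $(x(n), d(n))$ supplies its own instantaneous gradient estimate.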
where $tr[R]$ is the trace of the correlation matrix $R$. From matrix algebra, we know that

$tr[R] = \sum_k \lambda_k \ge \lambda_{max}$   (23)

so this condition is stricter than the condition for convergence in the mean sense,

$0 < \mu < \dfrac{1}{\lambda_{max}}$   (24)

5.2. Convergence in the Mean Square Sense

For an LMS algorithm convergent in the mean square, the final value $\xi(\infty)$ of the mean-squared error $\xi(n)$ is a positive constant, which represents the steady-state condition of the learning curve. In fact, $\xi(\infty)$ is always in excess of the minimum mean-squared error $\xi_{min}$ realized by the corresponding Wiener filter in a stationary environment. The difference between $\xi(\infty)$ and $\xi_{min}$ is called the excess mean-squared error:

$\xi_{ex} = \xi(\infty) - \xi_{min}$   (25)

and the condition for convergence in the mean square sense is as given in (22),

$0 < \mu < \dfrac{1}{tr[R]}$   (26)

The ratio of $\xi_{ex}$ to $\xi_{min}$ is called the misadjustment:

$M = \dfrac{\xi_{ex}}{\xi_{min}}$   (27)

It is customary to express the misadjustment M as a percentage. Thus, for example, a misadjustment of 10 percent means that the LMS algorithm produces a mean-squared error (after completion of the learning process) that is 10 percent greater than the minimum mean-squared error $\xi_{min}$; such a performance is ordinarily considered satisfactory. Another important characteristic of the LMS algorithm is the settling time; however, there is no unique definition of settling time. We may, for example, approximate the learning curve by a single exponential with average time constant $\tau$, and use $\tau$ as a rough measure of the settling time: the smaller the value of $\tau$, the faster the settling time.
To a good degree of approximation, the misadjustment M of the LMS algorithm is directly proportional to the step-size parameter $\mu$, whereas the average time constant $\tau$ is inversely proportional to $\mu$.
Figure 7 Step-Size small

We therefore have conflicting requirements: if the step-size parameter is reduced so as to reduce the misadjustment, the settling time of the LMS algorithm increases; conversely, if the step-size parameter is increased so as to accelerate the learning process, the misadjustment increases.

6. Simulation and Results

In this simulation scenario, a signal is sent through a channel and received with noise; at the receiver we have the training (reference) signal, and we try to extract our signal using the LMS algorithm with specific values of $\mu$.

Figure 6 Adaptive Filter (Noise Cancellation)

In the first case we assign the small value $\mu = 0.0002$; as a result:
As we can see, the received signal is still noisy, so we try other values of $\mu$. In Figure 8 we choose a large value of step size ($\mu = 0.4$), and the signal cannot be recovered by the receiver. As Figure 9 shows, the signal is recovered when the step size is $\mu = 0.005$. To recover the signal in this system, the step size must be chosen carefully: it should be neither too small (slow convergence) nor too large (instability), and in this example it must lie between $0.0002 < \mu < 0.4$ for a stable, usable system.

Figure 8 Step-Size Large

Figure 9 Step-Size Acceptable
Conclusion

Adaptive filtering involves changing the filter parameters (coefficients) over time to adapt to changing signal characteristics. Over the past three decades, digital signal processors have made great advances in increasing speed and complexity and in reducing power consumption; as a result, real-time adaptive filtering algorithms are quickly becoming practical and essential for the future of communications, both wired and wireless. The LMS algorithm is by far the most widely used algorithm in adaptive filtering, for several reasons: the main features that attract its use are low computational complexity, proof of convergence in a stationary environment, unbiased convergence in the mean to the Wiener solution, and stable behavior when implemented with finite-precision arithmetic. This contrasts with the method of steepest descent, which requires exact second-order statistics while updating the weights (coefficients) in an iterative fashion, continually seeking the bottom point of the error surface of the filter.

References

(1) P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, Third Edition, Springer, 2008.
(2) A. Zaknich, Principles of Adaptive Filters and Self-learning Systems, Springer-Verlag London Limited, 2005.
(3) S. V. Vaseghi, Advanced Digital Signal Processing and Noise Reduction, Second Edition, John Wiley & Sons, 2000.
(4) A. D. Poularikas, Adaptive Filtering Fundamentals of Least Mean Squares with MATLAB, CRC Press, 2015.
(5) S. W. Smith, The Scientist and Engineer's Guide to Digital Signal Processing.
Annex: Implementation of the Adaptive Filter Using LMS (MATLAB)

t = 1:0.025:5;
desired = 5*sin(2*3.*t);               % desired low-frequency signal
noise = 5*sin(2*50*3.*t);              % high-frequency sinusoidal noise
refer = 5*sin(2*50*3.*t + 3/20);       % phase-shifted noise reference
primary = desired + noise;             % noisy received signal

subplot(4,1,1); plot(t,desired); ylabel('desired');
subplot(4,1,2); plot(t,refer);   ylabel('refer');
subplot(4,1,3); plot(t,primary); ylabel('primary');

order = 2;                             % number of filter taps
mu = 0.005;                            % step size
n = length(primary);
delayed = zeros(1,order);              % reference-signal tap-delay line
adap = zeros(1,order);                 % adaptive filter coefficients
cancelled = zeros(1,n);                % error signal = recovered output

for k = 1:n
    delayed(1) = refer(k);
    y = delayed*adap';                          % filter output y(n) = w^T x(n)
    cancelled(k) = primary(k) - y;              % error e(n) = d(n) - y(n)
    adap = adap + 2*mu*cancelled(k) .* delayed; % LMS update, eq. (21)
    delayed(2:order) = delayed(1:order-1);      % shift the delay line
end

subplot(4,1,4); plot(t,cancelled); ylabel('cancelled');