1. PONDICHERRY UNIVERSITY
DEPARTMENT OF ELECTRONICS
ENGINEERING
NOISE CANCELLATION USING LMS ALGORITHM
SUBMITTED TO:
PROF. DR. K. ANUSUDHA
DEPT. OF ELECTRONICS ENGINEERING
SUBMITTED BY:
AWANISH KUMAR
M.TECH(ECE)-1st Year
21304006
2. CONTENTS
• OBJECTIVE
• INTRODUCTION
• ADAPTIVE FILTER
• BLOCK DIAGRAM
• LEAST MEAN SQUARE - LMS
• ADVANTAGES AND DISADVANTAGES
• MATLAB CODE
• CONCLUSION
3. OBJECTIVE
• The main objective of this presentation is to eliminate noise from a corrupted
input signal by adapting the weights of an adaptive transversal filter (ATF) so
that the mean square of the estimation error is minimized, using the
Least Mean Square (LMS) algorithm.
• To obtain the best estimate of the desired signal from the noise-corrupted signal.
• To extract the desired signal from the noisy process, where the desired signal is a
sinusoid with unit amplitude observed in the presence of additive noise.
4. INTRODUCTION
In real-time digital signal processing applications, there are many situations
in which useful signals are corrupted by unwanted signals classified as noise.
Noise can occur randomly, as white noise with an even frequency distribution,
or as frequency-dependent noise. The term noise includes not only thermal or
flicker noise but all disturbances, whether in the stimuli, the environment, or
the components of sensors and circuits. Noisy data may arise from a variety of
internal and external sources.
5. ADAPTIVE NOISE CANCELLATION
➢ Adaptive noise cancellation is the approach used for estimating a desired
signal d(n) from a noise-corrupted observation.
x(n) = d(n) + v1(n)
➢ Usually the method uses a primary input containing the corrupted signal
and a reference input containing noise correlated in some unknown way
with the primary noise.
➢ The reference input v1(n) can be filtered and subtracted from the primary
input to obtain the signal estimate d̂(n).
➢ When the measurement system is a black box, however, no reference signal
that is correlated with the noise is directly available.
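The signal model above can be sketched in a few lines of Python/NumPy (used here only for illustration alongside the MATLAB code later in the deck; the particular filter linking the reference noise v2(n) to the primary noise v1(n) is an assumption made for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Desired signal d(n): a unit-amplitude sinusoid (see the objective slide).
n = np.arange(1000)
d = np.sin(2 * np.pi * 0.01 * n)

# v2(n): reference noise; v1(n): primary noise, correlated with v2 through
# an (assumed) short FIR filter.
v2 = rng.standard_normal(n.size)
v1 = 0.8 * v2 + 0.2 * np.roll(v2, 1)

# Primary input x(n) = d(n) + v1(n)
x = d + v1

# If the adaptive filter learns the v2 -> v1 relationship exactly,
# subtracting the filtered reference recovers the desired signal:
v1_hat = 0.8 * v2 + 0.2 * np.roll(v2, 1)   # ideal filter, for illustration
d_hat = x - v1_hat                          # signal estimate d̂(n)

print(np.allclose(d_hat, d))   # prints True: ideal cancellation recovers d(n)
```

In practice the filter is not known in advance, which is exactly why an adaptive algorithm is needed.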
6. ADAPTIVE FILTER
• An adaptive filter is composed of two parts: a digital filter with adjustable
coefficients wn, and an adaptive algorithm that adjusts or modifies those
coefficients.
• The adaptive filter can be a Finite Impulse Response (FIR) filter or an
Infinite Impulse Response (IIR) filter.
8. ➢ The filtering process involves the computation of the filter output y(n) in
response to the filter input signal x(n).
➢ The filter output is compared with the desired response d(n), generating an
estimation error e(n).
➢ The feedback error signal e(n) is then used by the adaptive algorithm to modify
the adjustable coefficients of the filter wn, generally called weights, in order to
minimize the error according to some optimization criterion.
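The loop described above can be sketched generically in Python (the function name `adaptive_filter_step` and the placeholder update rule are assumptions, standing in for any adaptive algorithm such as LMS or RLS):

```python
import numpy as np

def adaptive_filter_step(w, x_vec, d_n, update):
    """One iteration of the generic adaptive-filter loop:
    filtering, error estimation, then weight adjustment."""
    y = w @ x_vec                  # filtering: output y(n)
    e = d_n - y                    # estimation error e(n) = d(n) - y(n)
    return update(w, x_vec, e), e  # adaptive algorithm modifies the weights

# Example with a no-op "adaptive algorithm" (placeholder for LMS/RLS):
w0 = np.zeros(3)
w1, e = adaptive_filter_step(w0, np.array([1.0, 0.5, 0.25]), 2.0,
                             lambda w, x, err: w)
print(e)   # prints 2.0: with zero weights y(n) = 0, so e(n) = d(n)
```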
9. ALGORITHMS FOR ADAPTIVE EQUALIZATION
• There are three different types of adaptive filtering algorithms:
➢ Zero forcing (ZF)
➢ Least mean square (LMS)
➢ Recursive least squares (RLS)
• Recursive least squares is an adaptive filter algorithm that recursively finds the
coefficients that minimize a weighted linear least-squares cost function relating
to the input signals.
• This approach differs from the least mean-square algorithm, which aims to
reduce the mean-square error.
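As an illustration of the recursion, here is a textbook exponentially weighted RLS sketch in Python (the parameter names `lam` and `delta`, and the system-identification example, are assumptions for illustration, not taken from the slides):

```python
import numpy as np

def rls(x, d, M=4, lam=0.99, delta=100.0):
    """Exponentially weighted recursive least squares.
    lam:   forgetting factor (weights recent errors more heavily)
    delta: initialization of the inverse correlation matrix P = delta*I
    """
    w = np.zeros(M)
    P = delta * np.eye(M)
    for i in range(M, len(x)):
        u = x[i - M + 1:i + 1][::-1]        # current input vector
        k = P @ u / (lam + u @ P @ u)       # gain vector
        e = d[i] - w @ u                    # a-priori error
        w = w + k * e                       # weight update
        P = (P - np.outer(k, u @ P)) / lam  # inverse-correlation update
    return w

# Identify a known FIR system from noiseless input/output data:
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)]
w = rls(x, d)   # converges to the true coefficients h
```

Note that each iteration updates an M×M matrix, which is the source of the higher computational cost mentioned above.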
10. Least Mean Square - LMS
• The LMS algorithm, in general, consists of two basic processes:
1. Filtering process, which involves computing the output y(n) of a linear filter in
response to the input signal and generating an estimation error by comparing this
output with a desired response, as follows:
e(n) = d(n) - y(n)
where y(n) is the filter output and d(n) is the desired response at time n.
2. Adaptive process, which involves the automatic adjustment of the parameters of
the filter in accordance with the estimation error.
12. ➢ LMS algorithm equation:
w(n+1) = w(n) + μ e(n) x(n)
➢ where w(n) is the estimate of the weight vector at time n and x(n) is the input
signal vector.
➢ e(n) is the filter error and μ is the step-size, which determines the filter's
convergence rate and overall behavior.
➢ One of the difficulties in the design and implementation of the LMS adaptive
filter is the selection of the step-size μ. This parameter must lie in a specific
range so that the LMS algorithm converges.
➢ The LMS algorithm aims to reduce the mean-square error.
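The weight-update recursion can be exercised with a short Python sketch (the helper name `lms_step` and the 2-tap test system are illustrative assumptions):

```python
import numpy as np

def lms_step(w, x_vec, d_n, mu):
    """One LMS iteration: e(n) = d(n) - w·x(n); w_next = w + mu*e(n)*x(n)."""
    e = d_n - w @ x_vec
    return w + mu * e * x_vec, e

# Identify an (assumed) unknown 2-tap system from noiseless data:
rng = np.random.default_rng(2)
h = np.array([0.7, -0.2])
w = np.zeros(2)
mu = 0.05
x = rng.standard_normal(5000)
for i in range(1, len(x)):
    u = np.array([x[i], x[i - 1]])
    w, e = lms_step(w, u, h @ u, mu)
# w is now close to h
```

With noiseless data the error drives the weights toward the true system; with noise, μ trades convergence speed against steady-state misadjustment, as the next slide discusses.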
13. RATE OF CONVERGENCE
➢ Convergence of the LMS adaptive algorithm
• The convergence characteristics of the LMS adaptive algorithm depend on two
factors: the step-size μ and the eigenvalue spread of the autocorrelation matrix Rx.
• The step-size μ must lie in a specific range,
0 < μ < 2/λmax
where λmax is the largest eigenvalue of the autocorrelation matrix Rx.
• A large value of the step-size μ leads to faster convergence but may be less
stable around the minimum value. The convergence rate of the algorithm is
inversely proportional to the eigenvalue spread of the correlation matrix.
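The bound can be checked numerically with a Python sketch (estimating Rx from sample autocorrelations, and the helper name, are assumptions made for the example):

```python
import numpy as np

def lms_step_size_bound(x, M):
    """Return 2/lambda_max for the M x M autocorrelation matrix of x."""
    # Sample autocorrelation estimates r(0) ... r(M-1)
    r = np.array([np.mean(x[:len(x) - k] * x[k:]) for k in range(M)])
    # Rx is Toeplitz: entry (i, j) is r(|i - j|)
    Rx = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    lam_max = np.linalg.eigvalsh(Rx).max()   # largest eigenvalue
    return 2.0 / lam_max

rng = np.random.default_rng(3)
x = rng.standard_normal(10000)     # unit-variance white noise: Rx ≈ I
bound = lms_step_size_bound(x, 4)  # so the bound is close to 2
```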
14. ADVANTAGES AND DISADVANTAGES
➢ LMS is simple and can be easily applied, whereas RLS has increased complexity
and computational cost.
➢ LMS takes longer to converge, whereas RLS converges faster.
➢ LMS has a larger steady-state error with respect to the unknown system.
➢ The objective in both cases is to minimize the current mean-square error between
the desired signal and the output.
15. MATLAB CODE
clc;
clear;
close all;
% Generate the desired signal
t = 0.001:0.001:1;
D = 2*sin(2*pi*50*t);
% Generate the signal corrupted with noise
n = numel(D);
A = D(1:n) + 0.9*randn(1,n);
M = 25;            % filter order (number of taps)
Wi = zeros(1,M);   % adaptive filter weights
E = zeros(1,n);    % error signal
mu = 0.002;        % step-size
% LMS weight adaptation
for i = M:n
E(i) = D(i) - Wi*A(i:-1:i-M+1)';
Wi = Wi + 2*mu*E(i)*A(i:-1:i-M+1);
end
% Estimation of the signal with the adapted weights
Est = zeros(n,1);
for i = M:n
x = A(i:-1:i-M+1);
Est(i) = Wi*x';
end
Err = Est' - D; % computing the error signal
% Display of signals
figure(1)
subplot(4,1,1);
plot(D);
title('Desired signal');
subplot(4,1,2);
plot(A);
title('Signal corrupted with noise');
subplot(4,1,3);
plot(Est);
title('LMS - Estimated signal');
subplot(4,1,4);
plot(Err);
title('Error signal');
18. CONCLUSION
The LMS algorithm is simple and has less computational complexity compared with
the RLS algorithm, but the RLS algorithm has a faster convergence rate than the
LMS algorithm at the cost of higher computational complexity. The forgetting
factor plays a role in the RLS algorithm similar to that of the step-size
parameter μ in the LMS algorithm. The RLS algorithm has better performance than
the LMS algorithm at low signal-to-noise ratios.
19. REFERENCES
➢ Simon Haykin, "Adaptive Filter Theory".
➢ John G. Proakis, "Digital Communications", Fourth Edition, McGraw-Hill, 2001.
➢ Adaptive Filter Algorithm, [online] Available from:
http://zone.ni.com/reference/enXX/help/372357A01/lvaftconcepts/firfilter/adaptive algorithm
➢ Brian D. Hahn, "Matlab for Scientists and Engineers".