A MATLAB Simulation Software for Key
Adaptive Algorithms and Applications
Project 2
Written by
Group 18
Main Uddin-Al-Hasan, 8901011836
main.hasan@gmail.com
M.Sc. in Electrical Engineering with emphasis on Signal Processing
Blekinge Institute of Technology, Karlskrona, Sweden
Abstract
Adaptive signal processing algorithms are very useful in Active Noise Cancellation (ANC), Adaptive Line Enhancement (ALE) and System Identification (SI). Therefore, a MATLAB software package has been developed for the simulation of 30 adaptive signal processing algorithms pre-implemented in MATLAB, belonging to the Least-Mean-Square (LMS), Recursive-Least-Square (RLS), Affine Projection (AP), Frequency Domain (FD) and Lattice (L) families, although only the most common variants of the LMS-based adaptive algorithms are studied theoretically in this project. The developed software reduces simulation time by assembling all of the mentioned adaptive algorithms into one software interface.
The LMS-based algorithms are the main focus of the project, with particular emphasis on LMS, NLMS and LLMS. These algorithms are studied for different step sizes and filter orders. The benefit of stochastic LMS algorithms compared to least-squares adaptive algorithms is also studied. The learning curves (LC) of the adaptive algorithms are examined in relation to their step size and filter order. The learning curve parameters convergence, local convergence, global convergence and Steady State Error (SSE) behaved in accordance with adaptive filter theory. The behaviour and graphical presentation of the LC and its different parameters are studied, and the performance assessment criteria for adaptive algorithms are also discussed.
The developed MATLAB software is written programmatically and has GUI features such as a popup menu, algorithm parameter input, signal data input, loaded data display, and filtered signal and learning curve display. The software stores processed data at run-time; this data can later be re-plotted in a new figure window or played back to check the audio quality of the filtered signals. The implemented algorithms can be tested with default parameters, and a slider control is provided to update algorithm parameters easily.
Acknowledgement
I would like to thank all the scientists and professors, especially Simon Haykin, B. Farhang-Boroujeny, John G. Proakis, Dimitris G. Manolakis and Monson H. Hayes, whose books explain complex adaptive signal processing concepts in an accessible way. I would also like to thank my supervisor Irina Gertsovich at BTH for her precise guidance and supervision, which helped me to complete the project. Finally, I thank my family for their continuous support and for inspiring me to complete my education.
Contents
Abstract
Acknowledgement
List of Figures
List of Acronyms
Chapter 1: Introduction
    1.1 Project Scope
    1.2 Problem Formulation and Project Outline
Chapter 2: Research Methodology and Requirement Analysis
    2.1 Functional Requirements
    2.2 Non-functional Requirements
Chapter 3: Adaptive Signal Processing Filters and Applications
    3.1 Structure of Adaptive Filter
        3.1.1 Spatial Structure or Block Diagram
        3.1.2 Functional Structure
    3.2 Adaptive Filter Performance
        3.2.1 Learning Curve
        3.2.2 Convergence Speed
        3.2.3 Steady State Error (SSE)
    3.3 Adaptive Filter Groups
    3.4 Application Classes
    3.5 Difference between MSE and LSE
Chapter 4: Literature Review
Chapter 5: Least-Mean-Square Adaptive Filters and Applications
    5.2 Least-Mean-Square (LMS) Adaptive Filters
        5.2.1 Some Common Variants of LMS Algorithm
    5.3 Implemented Adaptive Filter Applications
        5.3.1 Adaptive Noise Cancellation (ANC)
        5.3.2 Adaptive Line Enhancement (ALE) or FIR Linear Prediction
        5.3.3 System Identification or Modelling (SI)
Chapter 6: MATLAB and Development Tools
    6.1 MATLAB GUI Design Methodology
        6.1.1 Compact Data Representation
        6.1.2 Aesthetical Data Representation
        6.1.3 GUI Development using "GUIDE"
        6.1.4 Programmatic GUI Development
    6.2 Structural GUI Design Tools
        6.2.1 Nested Panels
    6.3 Used Functions
Chapter 7: Algorithm and Software Development
    7.1 Graphical User Interface (GUI) Structure and Elements
        7.1.1 Main GUI Window or Figure
        7.1.2 Nested Panelling
        7.1.3 Popup Menu or Listing
        7.1.4 Slider Control
        7.1.5 Application and Parameter Data Input
        7.1.6 Data Storage and Retrieval
        7.1.7 Data Display Axes
        7.1.8 A Block of the Main Plotter Function
        7.1.9 An Instance of Functions for Applications
        7.1.10 Display Results in a New Figure
        7.1.11 Data Representation, Listening to Data and Default Parameter Values
    7.2 Software Execution Flow
Chapter 8: Results of Adaptive Algorithms
    8.1 Active Noise Cancellation (ANC)
    8.2 Adaptive Line Enhancement (ALE)
    8.3 System Identification (SI)
Chapter 9: Comparative Performance and Data Analysis
    9.1 Comparative Performance
        9.1.1 Adaptive Noise Cancellation (ANC)
        9.1.2 Adaptive Line Enhancement (ALE)
        9.1.3 System Identification (SI)
Chapter 10: Summary and Conclusions
    10.1 Future Work
References
List of Figures
Figure 1: Original output from the filter
Figure 2: Desired output from the filter
Figure 3: Adaptive control using adaptive filter
Figure 4: Signal approximation using adaptive filter
Figure 5: An N-tap transversal adaptive filter [3]
Figure 6: Adaptive Filter Functional Components
Figure 7: Convergence Speed and SSE
Figure 8: Local Convergence and Global Convergence
Figure 9: Learning Curve
Figure 10: An error signal with associated LC
Figure 11: System Identification with NLMS when step size µ = 0.1, order n = 20 and beta β = 1
Figure 12: System Identification with NLMS when step size µ = 0.01, order n = 20 and beta β = 1
Figure 13: ANC with filter order 30
Figure 14: ANC with filter order 80
Figure 15: Influence of step-size µ on convergence towards ξ_min [Google Search]
Figure 16: Adaptive Noise Cancellation
Figure 17: Adaptive Line Enhancement
Figure 18: System Identification using Adaptive Filter
Figure 19: Developed GUI without data
Figure 20: Main GUI window with some data
Figure 21: Internal GUI Blocks
Figure 22: Popup menu execution flow
Figure 23: Real-time slider control
Figure 24: Application data input consistency
Figure 25: Representation and Listening to Data
Figure 26: Software Execution Flow
Figure 27: ANC with LMS when µ = .01 and order 30
Figure 28: ANC with LMS when µ = .001 and order 30
Figure 29: ANC with NLMS when µ = .01 and order 30
Figure 30: ANC with NLMS when µ = .001 and order 30
Figure 31: ANC with LLMS when µ = .01, order 30 and leakage .8
Figure 32: ANC with LLMS when µ = .001, order 30 and leakage .8
Figure 33: ANC with ADJLMS when µ = .001, order 30
Figure 34: ANC with ADJLMS when µ = .00001, order 30
Figure 35: ANC with BLMS when µ = .01, order 30
Figure 36: ANC with BLMS when µ = .001, order 30
Figure 37: ANC with BLMSFFT when µ = .01, order 30
Figure 38: ANC with BLMSFFT when µ = .001, order 30
Figure 39: ANC with DLMS when µ = .01, order 30, delay = 11
Figure 40: ANC with DLMS when µ = .001, order 30, delay = 11
Figure 41: ANC with Filtered-x LMS when µ = .01, order 30
Figure 42: ANC with Filtered-x LMS when µ = .001, order 30
Figure 43: ANC with Sign-Data LMS when µ = .01, order 30
Figure 44: ANC with Sign-Data LMS when µ = .001, order 30
Figure 45: ANC with Sign-Error LMS when µ = .01, order 30
Figure 46: ANC with Sign-Error LMS when µ = .001, order 30
Figure 47: ANC with Sign-Sign LMS when µ = .01, order 30
Figure 48: ANC with Sign-Sign LMS when µ = .001, order 30
Figure 49: ALE with LMS when µ = .01, order 30
Figure 50: ALE with LMS when µ = .001, order 30
Figure 51: ALE with LMS when µ = .01, order 30
Figure 52: ALE with LLMS when µ = .001, order 30
Figure 53: ALE with ADJLMS when µ = .001, order 30
Figure 54: ALE with ADJLMS when µ = .0001, order 30
Figure 55: ALE with BLMS when µ = .001, order 30
Figure 56: ALE with BLMS when µ = .0001, order 30
Figure 57: ALE with BLMSFFT when µ = .001, order 30
Figure 58: ALE with BLMSFFT when µ = .0001, order 30
Figure 59: ALE with DLMS when µ = .001, order 30
Figure 60: ALE with DLMS when µ = .0001, order 30
Figure 61: ALE with Filtered-x LMS when µ = .0001, order 30
Figure 62: ALE with Filtered-x LMS when µ = .001, order 30
Figure 63: ALE with Sign-Data when µ = .001, order 30
Figure 64: ALE with Sign-Data when µ = .0001, order 30
Figure 65: ALE with Sign-Error when µ = .0001, order 30
Figure 66: ALE with Sign-Error when µ = .001, order 30
Figure 67: ALE with Sign-Sign when µ = .001, order 30
Figure 68: ALE with Sign-Sign when µ = .0001, order 30
Figure 69: SI with LMS when µ = .001, order 30
Figure 70: SI with LMS when µ = .0001, order 30
Figure 71: SI with NLMS when µ = .01, order 30, beta 1
Figure 72: SI with NLMS when µ = .1, order 30, beta 1
Figure 73: SI with NLMS when µ = .01, order 30, leakage 1
Figure 74: SI with NLMS when µ = .001, order 30, leakage 1
Figure 75: SI with ADJLMS when µ = .00001, order 30, leakage 1
Figure 76: SI with ADJLMS when µ = .0001, order 30, leakage 1
Figure 77: SI with BLMS when µ = .001, order 30
Figure 78: SI with BLMS when µ = .0001, order 30
Figure 79: SI with BLMSFFT when µ = .001, order 30
Figure 80: SI with BLMSFFT when µ = .0001, order 30
Figure 81: SI with DLMS when µ = .001, order 30, Delay 20
Figure 82: SI with DLMS when µ = .0001, order 30, Delay 20
Figure 83: SI with Filtered-x LMS when µ = .001, order 30
Figure 84: SI with Filtered-x LMS when µ = .0001, order 30
Figure 85: SI with Sign-Data when µ = .001, order 30
Figure 86: SI with Sign-Data when µ = .0001, order 30
Figure 87: SI with Sign-Error when µ = .001, order 30
Figure 88: SI with Sign-Error when µ = .01, order 30
Figure 89: SI with Sign-Sign when µ = .0001, order 30
Figure 90: SI with Sign-Sign when µ = .00002, order 30
Figure 91: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS, SD, SE)
Figure 92: Learning Curves ADJLMS
Figure 93: Learning Curves Filtered-xLMS
Figure 94: Learning Curves SS
Figure 95: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS, SD, SE)
Figure 96: Learning Curve ADJLMS
Figure 97: Learning Curve Filt-xLMS
Figure 98: Learning Curve SS
Figure 99: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS, SD, SE)
Figure 100: Learning Curve ADJLMS
Figure 101: Learning Curve Filt-xLMS
Figure 102: Learning Curve SS
List of Acronyms
ADJLMS Adjoint Least Mean Square
BLMS Block Least Mean Square
BLMSFFT Block Least Mean Square FFT
CS Convergence Speed
DLMS Delayed Least Mean Square
DSP Digital Signal Processing
FILTXLMS Filtered X-LMS
FD Frequency Domain
GUI Graphical User Interface
LC Learning Curve
LMS Least-Mean-Squares
LLMS Leaky Least Mean Square
NLMS Normalized Least Mean Square
SD Sign-Data
SE Sign-Error
SS Sign-Sign
SSE Steady State Error
Chapter 1
Introduction
The goal of an adaptive filter is to maintain or derive desired output signal characteristics from a FIR or IIR filter. This goal is achieved via a feedback loop that feeds a measure of the undesired signal characteristics (the error) back to the filter under consideration; the filter then updates its filter kernel with the newly computed coefficients to generate or maintain the desired output signal characteristics. The calculation of the new coefficients from the error signal, which is to be minimized, is carried out by an adaptation algorithm. The error is defined as the deviation of the output signal from the desired signal characteristics; if d(n) is the desired signal, y(n) is the output signal and e(n) is the error signal, then the following relations hold.
๐‘ฆ(๐‘›) = โˆ‘ ๐‘Š๐‘–(๐‘›) ๐‘ฅ(๐‘› โˆ’ ๐‘–)
๐‘โˆ’1
๐‘–=0
๐‘ฆ (๐‘›) ๐‘–๐‘  ๐‘กโ„Ž๐‘’ ๐‘œ๐‘ข๐‘ก๐‘๐‘ข๐‘ก ๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘ ๐‘’๐‘ž๐‘ข๐‘’๐‘›๐‘๐‘’๐‘ 
๐‘‘(๐‘›) ๐‘–๐‘  ๐‘กโ„Ž๐‘’ ๐‘‘๐‘’๐‘ ๐‘–๐‘Ÿ๐‘’๐‘‘ ๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘ ๐‘’๐‘ž๐‘ข๐‘’๐‘›๐‘๐‘’๐‘ 
๐‘กโ„Ž๐‘’๐‘›, ๐‘’(๐‘›) = โ€–๐‘‘(๐‘›)โ€– โˆ’ โ€–๐‘ฆ(๐‘›)โ€–
๐‘’(๐‘›) ๐‘–๐‘  ๐‘กโ„Ž๐‘’ ๐‘‘๐‘–๐‘“๐‘“๐‘’๐‘Ÿ๐‘’๐‘›๐‘๐‘’ ๐‘๐‘’๐‘ก๐‘ค๐‘’๐‘’๐‘› ๐‘‘๐‘’๐‘ ๐‘–๐‘Ÿ๐‘’๐‘‘ ๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘ ๐‘’๐‘ž๐‘ข๐‘’๐‘›๐‘๐‘’๐‘  ๐‘‘(๐‘›) ๐‘Ž๐‘›๐‘‘ ๐‘œ๐‘ข๐‘ก๐‘๐‘ข๐‘ก
๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘ ๐‘’๐‘ž๐‘ข๐‘’๐‘›๐‘๐‘’๐‘  ๐‘ฆ(๐‘›)
Source: [3] (Page 139 โ€“ 188)
We can see from the above relations that e(n) is the signal sequence that needs to be minimized, and an adaptive filter's ability to do this distinguishes it from other types of filters.
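As a minimal illustration of these relations, the following MATLAB sketch computes the output of an N-tap transversal filter and the corresponding error signal; the signals, filter length and coefficient values are hypothetical and chosen only for the example.

% Minimal sketch: output and error of an N-tap transversal filter
% (hypothetical signals; the coefficient vector w is kept fixed here).
N = 8;                        % number of taps
L = 1000;                     % number of samples
x = randn(L, 1);              % input signal sequence x(n)
d = randn(L, 1);              % desired signal sequence d(n)
w = randn(N, 1);              % filter coefficients w_i(n)

y = zeros(L, 1);              % output signal y(n)
for n = N:L
    xn   = x(n:-1:n-N+1);     % tap-delay-line vector [x(n) ... x(n-N+1)]'
    y(n) = w.' * xn;          % y(n) = sum_i w_i(n) x(n-i)
end
e = d - y;                    % error signal e(n) = d(n) - y(n)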
Figure 1 shows an output signal. Instead of this output, we would like the filter to produce exactly the signal shown in Figure 2. To derive the desired signal from the system, we first have to measure the error signal by finding the mathematical correlation between samples of the output signal and the desired signal. From a high-level point of view, this error signal is obtained by subtracting the former signal from the latter. The error signal is then optimally minimized by updating the operating filter's coefficients through a live feedback loop.
Figure 1: Original output from the filter
Figure 2: Desired output from the filter
The use of adaptive filters can be divided mainly into two groups: firstly, to continuously keep the output signal of a running filter unchanged, and secondly, to approximate a desired signal from the output signal of a filter. Both approaches use the same fundamental adaptive filter structure, but they differ in orientation and application. Figure 3 shows how adaptive control is implemented using an adaptive filter and how the necessary error signal is computed. Figure 4 shows how a desired signal is approximated using an adaptive filter and how the necessary error signal is computed. Figures 3 and 4 look similar in terms of execution sequence and the operating FIR or IIR filter; however, a closer look shows that there is still a difference in the orientation of the error signal computation.
Figure 3: Adaptive control using adaptive filter (flowchart: the input signal sequences pass through the FIR or IIR filter; if the output signal deviates from the desired characteristics, the deviation (error signal) is calculated, its power is reduced in the MSE sense, and new coefficients are calculated and sent to the filter to maintain the desired output signal; otherwise the loop iterates)
Figure 4: Signal approximation using adaptive filter (flowchart: the input signal sequences pass through the FIR or IIR filter; if the output signal does not yet approximate the desired signal within the required level of accuracy, the deviation (error signal) between the output signal and the desired signal is calculated, its power is reduced in the MSE sense, and new coefficients are calculated and sent to the filter to approximate the desired signal; otherwise the loop iterates)
1.1 Project Scope
The requirements of the project are to study and understand the adaptive filter structure and LMS-based adaptive filters (mainly LMS, NLMS and LLMS), and subsequently to develop a user-friendly MATLAB software package that facilitates the simulation of these algorithms. The following statement summarizes the project scope and goal.
"Development of a professional MATLAB software that will offer a concise work environment for the simulation of key adaptive signal processing algorithms and applications in real time and can be used in real life"
1.2 Problem formulation and Project Outline
The development problems that arose and were solved during the project can be summarized as the following development questions:
1. How does an adaptive filter work, and what is the functional role of the sub-systems or sub-blocks within it?
2. How are the new coefficients calculated, and which mathematical framework is used to calculate them?
3. Which adaptation algorithms are used, and how many of them are pre-implemented in MATLAB?
4. How are adaptive filters applied to ANC, ALE and SI, and how are these applications pre-implemented in MATLAB?
5. What type of software already exists that offers a concise work environment for the simulation of adaptive algorithms and applications?
6. How can a MATLAB App and a standalone MATLAB software package be developed?
7. Which methodology is best for developing a GUI in MATLAB, and what are the advantages and disadvantages of each methodology?
8. How can data be loaded and stored at run-time in a MATLAB App?
9. How should GUI blocks be organized to obtain a user-friendly, compact but coherent GUI?
10. What are the implementation alternatives for MATLAB GUI development, and which method best suits the project needs?
11. How can the aesthetic properties of the software be preserved without compromising the functional requirements?
12. How can the different components of the software be integrated into a single module?
Chapter 2 presents the requirement analysis and research methodology. Chapter 3 dissects and discusses adaptive signal processing filters. Chapter 4 reviews relevant existing work in terms of what has been done and what is lacking. Chapter 5 discusses popular LMS-based adaptive signal processing filters and applications. Chapter 6 discusses different MATLAB GUI design methodologies and development tools. Chapter 7 describes the algorithm and software development. Chapter 8 presents the results obtained from the different adaptive algorithms. Chapter 9 discusses the comparative performance of the different adaptive algorithms and the data analysis. Chapter 10 gives the project summary and probable future work.
Chapter 2
Research Methodology and Requirement
Analysis
All types of software development require a thorough requirement analysis. Requirements can be divided into two parts: functional requirements and non-functional requirements. The functional requirements form the core of the development, and all of them must be met in order to obtain a working piece of software. Non-functional requirements, on the other hand, are important but not mandatory for a working piece of software. However, some non-functional requirements are essential; without them the software product may become unusable and not user friendly.
2.1 Functional requirements
1. MATLAB implementation of Adaptive Algorithms
2. MATLAB implementation of Adaptive Applications
3. Comparative performance analysis of Adaptive Algorithms
4. Graphical User Interface (GUI)
5. Data Loading and Data Writing
6. Run-time Data Storage
7. Data Processing and Display
2.2 Non-functional requirements
1. User friendliness
2. Speed and reliability
3. Compact data representation
4. Aesthetical data representation
Chapter 3
Adaptive Signal Processing Filters and
Applications
An adaptive filter can literally be understood as a filter that is able to take feedback and, based on that feedback, adapt in order to produce or maintain the desired signal output. An adaptive filter has different parameters that provide the flexibility needed to reach optimal performance, and the selection of these parameters directly influences the calculation of the filter coefficients. That is to say, we reduce the error by optimizing a consistently designed performance function. This performance function can be designed either in a statistical framework or in a deterministic framework. In the statistical framework the performance function is the mean-square value of the error signal; in the deterministic framework the usual choice is a weighted sum of the squared error signal.
3.1 Structure of Adaptive Filter
Adaptive filters can be structurally realized mainly in two ways: spatially and functionally. The spatial structure describes the organization of the filter components without restricting the filter's desired functional output. The functional structure, on the other hand, describes the functional role of the sub-systems of the adaptive filter.
3.1.1 Spatial Structure or Block Diagram
The most commonly used structures are the direct form, the cascade form, the parallel form and the lattice. The transversal layout of adaptive filters is the most commonly used; however, the lattice layout is also used when its advantages outweigh those of the transversal layout. A minimal direct-form realization sketch is given after Figure 5.
Figure 5: An N-tap transversal adaptive filter [3]
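To make the transversal (direct-form) layout concrete, the realization mentioned above can be sketched with MATLAB's built-in filter function; the tap weights and the input signal below are arbitrary example values, not data from the project.

% Direct-form (transversal) FIR realization with a fixed coefficient vector.
w = [0.2 0.5 0.2 -0.1];       % example tap weights (arbitrary)
x = randn(500, 1);            % example input signal
y = filter(w, 1, x);          % y(n) = sum_i w(i+1) x(n-i), direct form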
3.1.2 Functional structure
Based on their functional role, adaptive filters can be dissected into the following major parts, each of which plays a major role in producing a working adaptive filter.
Figure 6: Adaptive Filter Functional Components (block diagram: the input signal x(n) feeds the FIR/IIR filter, which produces the output signal y(n); the error signal e(n) is formed from y(n) and the desired signal d(n); the adaptive control algorithm computes updated coefficients that are sent back to the filter through the feedback loop)
3.1.2.1 Input Signal
The input signal is the data feed to the adaptive filter. It is the primary signal that either needs to be maintained at a constant level or needs to be shaped towards desired signal characteristics. If the input signal needs to be maintained at a constant level, then whenever it deviates from the desired level we can determine this deviation, or error, and subsequently minimize it to maintain the constant desired signal throughput. In the other case, we have an output signal from a filter that needs to acquire the characteristics of a desired signal. Here we determine the difference between the output signal and the desired signal, and this difference is the error. Subsequently, we calculate new adaptive filter coefficients to reduce this error, and these coefficients are used to update the filter.
3.1.2.2 FIR or IIR Filter
The FIR or IIR filter is the main workhorse of the adaptive filter. Initially, the filter produces an output signal from the instantaneous input signal given to it. After receiving the feedback (i.e. the filter coefficients calculated to reduce the power of the error signal), it updates its output signal so that it approximates the desired signal or reduces its deviation from the desired signal.
3.1.2.3 Output Signal
The output signal is the initial or updated output of the FIR/IIR filter. It can be viewed in two categories: the coarse output signal and the fine output signal. The coarse output signal is the instantaneous output of the FIR/IIR filter, i.e. the output that still deviates from the desired condition. The fine output signal is obtained when the coarse output signal approximates the desired signal; in other words, the fine output signal is the end product of the coarse output signal once the error has been removed from it.
3.1.2.4 Desired Signal
The desired signal is the final expected signal from the adaptive filter. The approximated desired signal is obtained when the adaptive filter converges. We say "approximated" because an adaptive filter converges completely if and only if the error signal reduces to zero. In reality this is usually not the case: even after the adaptive filter converges, an SSE still exists, and in that case we say that we have approximated the desired signal. The desired signal can also be viewed in two categories: the external-reference desired signal and the maintained desired signal. The external-reference desired signal is a provided signal that is taken as a reference to calculate the error; through error removal the adaptive filter then approximates that signal. The maintained desired signal is the instantaneous output of the FIR/IIR filter that is kept in a stable state through error removal whenever it deviates from stability.
3.1.2.5 Error Signal
The error signal is the difference between the output signal and the desired signal. In other words, it is the signal component that the adaptive filter optimally removes when it converges, thereby arriving at the desired condition.
3.1.2.6 Adaptive Control Algorithm
The adaptive control algorithm is the algorithm that the adaptive filter uses to iteratively calculate the new coefficients that optimally reduce the power of the error signal. The choice of adaptive control algorithm depends on the data class, memory resources, computational time, energy requirements and overall cost. The L-MSE and LSE approaches are two commonly used bases for calculating the updated coefficients.
3.1.2.7 Feedback loop
The feedback loop is a conceptual element indicating that the coefficients re-computed from the error signal are fed into the FIR/IIR filter to produce the desired output. Although conceptual, it is of particular importance, as it is what turns a general FIR/IIR filter into an adaptive filter.
3.2 Adaptive Filter Performance
The performance of an adaptive filter can be evaluated using the Learning Curve (LC), the Convergence Speed (CS) and the Steady State Error (SSE). These quantities are shown in the following figure. We can see that the power of the error signal drops quickly after the initialization of the adaptive filter, and this is also reflected in the associated learning curve. We can also see that, even though the filter converged very quickly, an SSE still exists in the produced output. Whether this SSE is acceptable depends on the requirements of the application domain.
Figure 7: Convergence Speed and SSE
The goal of designing an adaptive filter is to minimize the error signal power, and hence, when provided with the right parameters, the adaptive filter ought to converge. The question, however, is how fast or slow it converges. This convergence speed can be classified as very fast, fast, above average, average, below average, slow, very slow, and so on.
Figure 8: Local Convergence and Global Convergence
Convergence can be realized in two categories, namely local convergence and global convergence. In Figure 8, the error signal power starts to converge, then suddenly rises again, repeats this a couple of times and finally converges. The convergence before a sudden rise of the error power is a local convergence, and the final convergence is the global convergence.
However, adaptive filter performance is a relative indicator and varies depending on the application and the desired filter output. For example, a minimal SSE could be the only indicator of filter performance and output quality; in another case the CS could be the only indicator; and there can be cases where a weighted measure of both CS and SSE is the indicator of filter performance and output quality. The adaptive filter performance criteria can be summarized as follows:
• Fast convergence is important, optimally low SSE is not important
• Fast convergence is important, optimally low SSE is important
• Fast convergence is not important, optimally low SSE is important
• Fast convergence is not important, standard SSE is important
• Standard convergence is enough, optimally low SSE is important
• Standard convergence is enough, standard SSE is enough
Because of such criteria, or similar ones, different adaptive filters and different algorithm parameters are chosen, each offering a different level of solution. Through a trial-and-error process, the best adaptive filter with the best parameters is chosen for a given data scenario.
3.2.1 Learning Curve
The learning curve is literally a curve generated by plotting the time-varying error power over the iterations of the adaptive filter. Over a number of iterations the error power approaches zero, and plotting this decreasing error power in the time domain creates a curve with a gradually descending gradient. This curve gives quick information on the performance of the LMS adaptive filter under consideration.
Figure 9: Learning Curve
In Figure 9 we can see a gradually descending curve that approaches zero. At the left the error power is high, but with increasing iterations of the adaptive algorithm the error power approaches zero.
Figure 10: An error signal with associated LC
In Figure 10, the first plot is a gradually converging error signal and the second plot is the associated LC. From the first plot we can see that the error signal converges quickly, and this is also reflected in the LC. This happens because the same filter coefficients produced the data used to create both plots. In other words, the LC is just a different representation of how the error signal converges, and it is visually more convenient for judging how the adaptive filter is performing.
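As a sketch of how such a learning curve can be produced in practice, the code below squares a stored error signal and smooths it with a moving average to approximate the MSE at each iteration; the decaying error sequence and the window length are placeholders, not data from the developed software.

% Minimal sketch: turning a stored error signal e(n) into a learning curve.
e   = randn(2000, 1) .* exp(-(1:2000)'/400);  % placeholder decaying error signal
win = 50;                                     % smoothing window length (arbitrary)
lc  = filter(ones(win,1)/win, 1, e.^2);       % moving-average estimate of E{e^2(n)}

subplot(2,1,1); plot(e);                   title('Error signal e(n)');
subplot(2,1,2); plot(10*log10(lc + eps));  title('Learning curve (smoothed e^2, dB)');
xlabel('Iteration n');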
3.2.2 Convergence Speed
Convergence means gradually minimizing the power of the error signal and arriving at the point that produces the desired signal. Convergence speed, or CS, literally means how fast an adaptive algorithm converges, i.e. how fast it reduces the error signal power. A slower CS means the adaptive filter takes a long time to minimize the error power; a faster CS means it takes a short time. Adaptive filters iteratively calculate new coefficients to minimize the error power, and the CS varies substantially with different algorithm parameters.
Moreover, the step size greatly influences the CS of adaptive filters. A smaller step size decreases the CS, which means the adaptive filter takes more time to converge with a smaller step size than with a larger one. This phenomenon can be seen clearly in the figures below: the convergence is fast when µ = 0.1 is used, but when µ = 0.01 is used the convergence speed drops, which is also reflected in the LC. A small simulation sketch is given after Figures 11 and 12.
Figure 11: System Identification with NLMS when step size µ = 0.1, order n = 20 and beta β = 1
Figure 12: System Identification with NLMS when step size µ = 0.01, order n = 20 and beta β = 1
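The qualitative effect of the step size can be reproduced with the short hand-coded NLMS system-identification loop below, run once with µ = 0.1 and once with µ = 0.01; the unknown system, the signals and the smoothing window are invented for illustration and do not correspond to the data behind Figures 11 and 12.

% Sketch: effect of the step size on NLMS convergence (system identification).
h = [0.5 0.4 0.3 0.2 0.1];                    % hypothetical unknown system
x = randn(5000, 1);                           % excitation signal
d = filter(h, 1, x) + 0.01*randn(5000, 1);    % noisy output of the unknown system
N = 21;                                       % adaptive filter length (order 20)
figure; hold on;
for mu = [0.1 0.01]
    w = zeros(N, 1);  e = zeros(size(x));
    for n = N:length(x)
        xn   = x(n:-1:n-N+1);                 % current input vector
        e(n) = d(n) - w.' * xn;               % error sample
        w    = w + (mu / (xn.'*xn + 1e-6)) * e(n) * xn;   % NLMS update
    end
    plot(10*log10(filter(ones(100,1)/100, 1, e.^2) + eps));   % learning curve
end
legend('\mu = 0.1', '\mu = 0.01'); xlabel('Iteration'); ylabel('Smoothed e^2 (dB)');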
The higher the filter order, the lower the convergence speed. However, this relation between filter order and convergence speed holds only up to a certain threshold, and this threshold varies for different data classes. We found the right filter order through a trial-and-error process and observed that a higher filter order does not always produce better filter performance than a lower one. Therefore, if we can achieve the desired adaptive filter performance with a lower filter order, we always gain the benefit of less computational time and lower overall cost. Hence, the empirically derived filter order is the value that ensures the best filter performance for a specific data case at the lowest cost. This phenomenon is demonstrated in Figures 13 and 14: even though a higher filter order is used, Figure 14 contains more error power than Figure 13. In this ANC case this is acceptable and even wanted, as the error signal is the desired speech signal with less noise. However, the phenomenon also exists for other applications where lower error signal power is always desired, and there decreasing performance with increasing order is never welcome.
Figure 13: ANC with filter order 30
Figure 14: ANC with filter order 80
3.2.3 Steady State Error (SSE)
In many cases the error signal power never converges to zero, even after the adaptive filter converges (i.e. the filter coefficients reach a stable state and no longer show significant change in value). This persistent error is called the SSE. In many applications this error is not significant, while for others it can be important. The threshold of SSE acceptability therefore varies depending on the application, which makes the SSE a relative performance indicator.
3.3 Adaptive Filter Groups
There is a substantial number of adaptive filters available, varying in terms of learning difficulty, applications and application data class. However, the common goal of all of these adaptive algorithms is to adapt a coarse signal to a fine signal or to maintain a desired signal output. To accomplish this task, the adaptive algorithms offer different levels of flexibility for different problem scenarios. The algorithms pre-implemented in MATLAB can be grouped as follows (a hedged usage sketch is given after the list).
• Least-Mean-Square (LMS) Based: LMS, NLMS, LLMS, ADJLMS, BLMS, BLMSFFT, DLMS, Filt-XLMS, SD, SE, SS
• Recursive-Least-Square (RLS) Based: RLS, QRDRLS, HRLS, HSWRLS, SWRLS, FTF, SWFTF
• Affine Projection (AP) Based: AP, APRU, BAP
• Frequency Domain (FD) Based: FDAF, PBFDAF, PBUFDAF, TDAFDCT, TDAFDFT, UFDAF
• Lattice (L) Based: GAL, LSL, QRDLSL
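As an example of how one of these pre-implemented algorithms can be invoked, the sketch below uses the dsp.LMSFilter System object from the DSP System Toolbox. This is an assumption about the MATLAB environment rather than a description of the developed software's interface: availability, object names and calling syntax depend on the MATLAB release and the installed toolboxes.

% Hedged sketch: calling a MATLAB pre-implemented LMS adaptive filter.
% Requires the DSP System Toolbox; on older releases use step(lmsFilt, x, d).
x = randn(4000, 1);                           % input signal
d = filter([0.4 0.3 0.2 0.1], 1, x);          % desired signal from a toy system
lmsFilt   = dsp.LMSFilter('Length', 32, 'StepSize', 0.01);
[y, e, w] = lmsFilt(x, d);                    % filtered output, error, final weights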
3.4 Application Classes
Adaptive filters are mostly used to process an input signal and, using the updated coefficients calculated from the error signal, either approximate a desired signal or maintain a signal in its original state. Based on this similarity, the applications of adaptive filters can be grouped into four categories [3]: modelling, inverse modelling, linear prediction and interference cancellation. Some applications in each category are summarized below.
• Modelling: System Identification (SI) etc.
• Inverse Modelling: Channel Equalization, Magnetic Recording etc.
• Linear Prediction: Autoregressive spectral analysis, Adaptive Line Enhancement (ALE), Speech Coding etc.
• Interference Cancellation: Echo cancellation in telephone lines, Acoustic Echo Cancellation, Active Noise Control (ANC), Beamforming etc.
3.5 Difference between MSE and LSE
Mean-Square-Error (MSE) and Least-Square-Error (LSE) may sound similar, but they are not the same. MSE is an approach that follows a statistical framework, whereas LSE follows a deterministic framework. If we define a cost or performance function J, then MSE and LSE can be written as follows.
• Total squared error (LSE): J = \sum_{n=0}^{N-1} e^2(n)
• Mean squared error (MSE): J = E\{|e(n)|^2\}
Both MSE and LSE have their own advantages and disadvantages. The choice between them depends on the filtering problem and the associated computational cost. MSE deals with a mean value: we define a statistical sample of convenient size and then calculate the mean value over this sample. Clearly, this results in processing fewer samples and, correspondingly, lower cost, while still preserving the characteristics of the processed signal to a satisfactory level. The differences between LSE and MSE can be summarized in the following table (a small numerical sketch follows it).
Property             | L-MSE                                           | L-SE
Framework            | Stochastic (i.e. statistical)                   | Deterministic
Weighting criteria   | Sample mean                                     | Total signal
Computational cost   | Lower                                           | Higher
Memory requirements  | Lower                                           | Higher
Matrix operations    | No                                              | Yes
Accuracy             | Lower than LSE but robust enough in many cases  | Optimal
Performance          | Robust, standard or poor (input data dependent) | Robust
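The two cost functions can be contrasted numerically with a trivial sketch: for a stored error sequence, the LSE cost is the total squared error, while the MSE cost is estimated by the sample mean of the squared error (the sequence below is a placeholder).

% Sketch: LSE vs. MSE cost for a stored error sequence e(n).
e     = 0.5*randn(1000, 1);    % placeholder error sequence
J_lse = sum(e.^2);             % deterministic total squared error
J_mse = mean(e.^2);            % sample estimate of E{|e(n)|^2}
fprintf('J_LSE = %.3f, J_MSE = %.5f\n', J_lse, J_mse);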
Chapter 4
Literature Review
Adaptive filters are very popular among scientists and engineers, and a rich body of literature is therefore available for study. This literature can be broadly classified into categories according to its orientation, such as general reference books, specialized reference books, general articles, project-result-based articles and so on. It is impossible to study all of these references because of their sheer size and complexity, and an in-depth literature review is therefore impractical. However, we have studied selected parts of different books and skimmed through the chapters required for this project. The literature is reviewed below from a high-level point of view and according to its orientation.
The book Adaptive Filter Theory [1] by Simon Haykin is one of the best books, covering most of the important adaptive filter concepts in a single volume. The book progresses according to a foundation-to-generalization approach: for example, one first has to understand the method of steepest descent, Wiener filters and the difference between the stochastic (i.e. statistical) and deterministic approaches in order to understand the L-MSE and LSE adaptive control algorithms. The book therefore begins with a basic introduction, then discusses stochastic processes and models and the method of steepest descent, and only then presents the LMS algorithm. The whole book follows a convenient and pedagogically friendly progression that is very useful for students and readers.
The book Adaptive Filters: Theory and Applications [3] by B. Farhang-Boroujeny is another book written in a very legible and understandable way. It focuses mainly on LMS-based algorithms but also discusses other adaptive filtering issues; moreover, its introduction is very useful and provides a lot of information in a short scope. The book Statistical Digital Signal Processing and Modeling [2] by Monson H. Hayes is also a good book for studying adaptive signal processing: it first discusses the fundamental concepts necessary to understand adaptive filtering and then devotes a dedicated chapter to adaptive filters at the end. Furthermore, the books [4, 5, 6, 7, 8, 9, 10, 11, 12, 13] are also good resources for studying adaptive filters; some of them focus on adaptive filtering fundamentals, while others focus on specific applications. The journal articles [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28] discuss specific applications of particular adaptive filters. All of these papers clearly depict the reliability, scalability and overall performance of adaptive filters from various perspectives, and the usefulness of the various adaptive filter parameters is clearly understandable from their discussions.
Chapter 5
Least-Mean-Square Adaptive Filters and
Applications
In this project we have studied the LMS, NLMS and LLMS adaptive filters and have also produced results using the other LMS-based adaptive filters (i.e. ADJLMS, BLMS, BLMSFFT, DLMS, Filt-xLMS, Sign-Data, Sign-Error, Sign-Sign). Since a good number of adaptive filters are already implemented in MATLAB, we have also included those filters in the developed software and generated results from some of them in order to understand the LMS algorithms comparatively. The results from these algorithms are given in the appendices.
5.2 Least-Mean-Square (LMS) Adaptive Filters
Least-Mean-Square (LMS) adaptive filters reduce the error signal power in a mean-square sense, which is literally why they are called LMS adaptive filters. In short, when the input and desired signals are stationary, the LMS adaptive filter becomes a practical implementation of the optimal Wiener filter in the MSE sense; put differently, we obtain the optimal Wiener filter when the cost function is the MSE. Another important foundation of the LMS filter is the steepest descent algorithm. Steepest descent is not an adaptive filter by itself, but it is the basis for calculating the updated coefficients when the signal statistics are known, and it thus serves as the fundamental basis of the LMS adaptive filter. The steepest descent algorithm is given below; a small numerical sketch follows the listed steps.
• Initialize the filter coefficients with a start value, w(0).
• Determine the gradient \nabla\xi(n), which points in the direction in which the cost function increases maximally: \nabla\xi(n) = -2p + 2Rw(n).
• Adjust the updated coefficient vector w(n+1) in the direction opposite to the gradient, with the adjustment weighted down by the step size µ: w(n+1) = w(n) + \frac{1}{2}\mu\,[-\nabla\xi(n)].
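Assuming that the autocorrelation matrix R and the cross-correlation vector p are known, the listed steps translate into the small numerical sketch below; R, p, the step size and the number of iterations are example values, and the Wiener solution is computed only for comparison.

% Sketch: steepest-descent coefficient update with known statistics R and p.
R  = [1.0 0.5; 0.5 1.0];        % assumed input autocorrelation matrix
p  = [0.7; 0.3];                % assumed cross-correlation vector
mu = 0.1;                       % step size
w  = zeros(2, 1);               % initial coefficients w(0)
for n = 1:100
    grad = -2*p + 2*R*w;        % gradient of the cost function
    w    = w + 0.5*mu*(-grad);  % w(n+1) = w(n) + (mu/2)[-grad]
end
w_opt = R \ p;                  % Wiener solution, for comparison with w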
The LMS algorithm is the stochastic, or random, realization of the steepest descent algorithm. That is to say, the LMS algorithm updates the signal statistics continuously, while the steepest descent algorithm works in a deterministic way. In short, the LMS algorithm is one of the stochastic gradient methods and steepest descent is one of the deterministic gradient methods. The steepest descent algorithm uses the deterministic cost function \xi = E[e^2(n)], while the LMS algorithm uses the stochastic, coarsely estimated cost function \hat{\xi} = e^2(n). The coarse estimate of the cost function results in faster processing and correspondingly less computational overhead, while still ensuring the ability to track the signal characteristics. The error signal reduction of the general LMS adaptive filter is thus based on the following relationships.
๐‘ค(๐‘› + 1) = ๐‘ค(๐‘›) โˆ’ ๐œ‡ โˆ‡ ๐‘’2
(๐‘›)
๐ป๐‘’๐‘Ÿ๐‘’ ๐‘ค(๐‘›) = [๐‘ค0(๐‘›), ๐‘ค1(๐‘›) โ€ฆ โ€ฆ โ€ฆ ๐‘ค ๐‘โˆ’1(๐‘›)] ๐‘‡
, ๐œ‡ ๐‘–๐‘  ๐‘กโ„Ž๐‘’ ๐‘ ๐‘ก๐‘’๐‘
โˆ’ ๐‘ ๐‘–๐‘ง๐‘’ ๐‘๐‘Ž๐‘Ÿ๐‘Ž๐‘š๐‘’๐‘ก๐‘’๐‘Ÿ ๐‘œ๐‘“ ๐‘กโ„Ž๐‘’ ๐‘Ž๐‘™๐‘”๐‘œ๐‘Ÿ๐‘–๐‘กโ„Ž๐‘š ๐‘Ž๐‘›๐‘‘ โˆ‡ ๐‘–๐‘  ๐‘กโ„Ž๐‘’ ๐‘”๐‘Ÿ๐‘Ž๐‘‘๐‘–๐‘’๐‘›๐‘ก ๐‘œ๐‘๐‘’๐‘Ÿ๐‘Ž๐‘ก๐‘œ๐‘Ÿ
โˆ‡ ๐‘’2(๐‘›) = โˆ’2๐‘’(๐‘›)๐‘ฅ(๐‘›)
๐ป๐‘’๐‘Ÿ๐‘’, ๐‘ฅ(๐‘›) = [๐‘ฅ(๐‘›) ๐‘ฅ(๐‘› โˆ’ 1) โ€ฆ ๐‘ฅ(๐‘› โˆ’ ๐‘ + 1)] ๐‘‡
๐‘‡โ„Ž๐‘’๐‘Ÿ๐‘’๐‘“๐‘œ๐‘Ÿ๐‘’, ๐‘ค๐‘’ ๐‘”๐‘’๐‘ก ๐‘Ž๐‘  ๐‘“๐‘œ๐‘™๐‘™๐‘œ๐‘ค๐‘  ๐‘๐‘ฆ ๐‘ ๐‘ข๐‘๐‘ ๐‘ก๐‘–๐‘ก๐‘ข๐‘–๐‘›๐‘” ๐‘™๐‘Ž๐‘ก๐‘ก๐‘’๐‘Ÿ ๐‘–๐‘›๐‘ก๐‘œ ๐‘กโ„Ž๐‘’ ๐‘“๐‘–๐‘Ÿ๐‘ ๐‘ก ๐‘’๐‘ž๐‘ข๐‘Ž๐‘ก๐‘–๐‘œ๐‘›
๐‘ค(๐‘› + 1) = ๐‘ค(๐‘›) โˆ’ ๐œ‡ {โˆ’2 ๐‘’(๐‘›) ๐‘ฅ(๐‘›)}
๐ป๐‘’๐‘›๐‘๐‘’, ๐‘ค๐‘’ ๐‘”๐‘’๐‘ก ๐‘กโ„Ž๐‘’ ๐ฟ๐‘€๐‘† ๐‘Ÿ๐‘’๐‘๐‘ข๐‘Ÿ๐‘ ๐‘–๐‘œ๐‘› ๐‘Ž๐‘  ๐‘“๐‘œ๐‘™๐‘™๐‘œ๐‘ค๐‘ 
๐‘ค(๐‘› + 1) = ๐‘ค(๐‘›) + 2 ๐œ‡ ๐‘’(๐‘›)๐‘ฅ(๐‘›)
The step size has a major influence on the convergence behaviour towards \hat{\xi}_{min}. In Figure 15 we can see that a smaller step size gives a smoother but slower convergence towards \hat{\xi}_{min}, while a larger step size converges faster but less smoothly.
Figure 15: Influence of step-size µ on convergence towards \hat{\xi}_{min} [Google Search]
The basic components of the LMS algorithm can be written as follows in terms of input, output and functional form.

Input:
    Initial filter coefficient vector, w(n)
    Input signal vector, x(n)
    Desired output vector, d(n)

Output:
    Filter output, y(n)
    Updated coefficient vector, w(n+1)

Functional form:
    Input-output relation: y(n) = w^T(n)\, x(n)
    Error relation: e(n) = d(n) - y(n)
    Coefficient update relation: w(n+1) = w(n) + 2\mu\, e(n) x(n), where 2\mu e(n) x(n) is the correction term
The basic reason for the popularity of the LMS adaptive filter is its computational
simplicity. The computational overhead of the LMS adaptive filter can be summarized as follows.
2N + 1 multiplications & 2N + 1 additions per iteration
For calculating the output y(n): N multiplications
For obtaining (2µ) · e(n): 1 multiplication
For the scalar-by-vector multiplication 2µe(n) · x(n): N multiplications
5.2.1 Some Common Variants of LMS Algorithm
In practice, three common LMS algorithm variants are the standard LMS (SLMS), the normalized LMS
(NLMS), also called the time-varying step-size LMS, and the leaky LMS (LLMS). All three variants share
almost the same design structure and differ mainly in their update equations. The standard
LMS algorithm has the following update equation.
Standard LMS (SLMS)
w(n + 1) = w(n) + µ e(n) u(n)
Here, w(n + 1) is the corrected coefficient vector,
µ is the step size of the algorithm,
e(n) is the error signal and u(n) is the input vector of the filter.
The basic difference between the standard LMS algorithm and the normalized LMS algorithm lies in the
characteristics of their step sizes. The unique characteristic of the NLMS step size is that it is
time-varying, in contrast to SLMS. The NLMS has the following update equation.
Normalized LMS (NLMS)
w(n + 1) = w(n) + µ e(n) u(n) / ‖u(n)‖^2
We can rewrite the above equation as follows
w(n + 1) = w(n) + (µ / ‖u(n)‖^2) e(n) u(n)
Therefore, we get w(n + 1) = w(n) + µ(n) e(n) u(n), where µ(n) = µ / ‖u(n)‖^2
The LLMS has a similar update equation except that it includes a leaky factor. The leaky factor
has a range of (0, 0.1) and has a direct relation with the steady state error (SSE): if the leaky factor is
increased, the SSE increases, and if the leaky factor is decreased, the SSE decreases. The LLMS has
the following cost function and update equation.
Leaky LMS (LLMS)
J(n) = e^2(n) + α Σ_{k=0}^{N−1} w_k^2(n)
w(n + 1) = (1 − µα) w(n) + µ e(n) u(n)
We can see that the cost function includes both error signal and filter coefficients along with a
leaky factor. Therefore, LLMS is able to reduce the coefficient overflow problem. In the update
equation, if ๐›ผ = 0, the update equation turns into the same update equation as standard LMS.
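Similarly, the LLMS update can be sketched as a one-line change to the same LMS loop; the value of the leaky factor alpha is an assumption within the (0, 0.1) range mentioned above.
alpha = 0.01;                          % assumed leaky factor
w = (1 - mu*alpha)*w + mu*e(n)*xn;     % leaky LMS update; alpha = 0 gives standard LMS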
The LMS algorithm is often implemented in digital signal processors (DSPs). As DSPs often have
limited computational resources, the computational overhead of LMS is crucially important in DSP
implementations. Computationally simpler versions of the standard LMS algorithm are therefore used:
Sign-Error LMS, Sign-Data LMS and Sign-Sign LMS, which require fewer multiplication
operations compared to the standard LMS. The simplification from standard LMS to sign LMS is
done using the following sign function.
sgn(x) = { 1, x > 0;  0, x = 0;  −1, x < 0 }
w(n + 1) = w(n) + µ · sgn(e(n)) · u(n) : Sign-Error LMS Algorithm
w(n + 1) = w(n) + µ · e(n) · sgn(u(n)) : Sign-Data LMS Algorithm
w(n + 1) = w(n) + µ · sgn(e(n)) · sgn(u(n)) : Sign-Sign LMS Algorithm
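As a sketch, each sign variant again replaces only the coefficient-update line of the LMS loop given earlier; MATLAB's built-in sign() implements sgn(x).
w = w + mu*sign(e(n))*xn;              % Sign-Error LMS update
w = w + mu*e(n)*sign(xn);              % Sign-Data LMS update
w = w + mu*sign(e(n))*sign(xn);        % Sign-Sign LMS update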
We can see from these update equations that the convergence speed of the Sign-LMS
algorithms is slower compared to the standard LMS, and the SSE using Sign-LMS will be larger
than with the standard LMS. Therefore, Sign-LMS algorithms are useful where computational
resources are more important than performance. In ANC, we often have a large input signal vector
and at the same time real-time processing of the adaptive filter is required for real-time
performance. In this case, BLMSFFT can be used, which offers lower computational overhead
through fewer multiplications than the standard LMS. In BLMSFFT, the input signal is first
transformed into the frequency domain and the filter coefficients are updated in the frequency domain.
In the standard LMS filter, the filter coefficients are updated sample by sample,
which is better for performance but increases the computational overhead and takes more time.
In the BLMSFFT adaptive filter, the block size and filter length are the same and the coefficients are
updated block by block.
5.3 Implemented Adaptive Filter Applications
We have discussed the applications of adaptive filters earlier. However, in this project,
we have implemented the following applications.
5.3.1 Adaptive Noise Cancellation (ANC)
In adaptive noise cancellation, we have a measured signal that contains primary noise from the
same signal source. In addition, we have a reference noise available that is knowingly or
unknowingly correlated with the primary noise contained within the measured signal.
The reason for using the reference noise is that we want to adaptively estimate how much undesired
noise is contained within the primary measured signal. Because of the adaptive reference noise,
the necessary noise reduction can be estimated through real-time experiment to ensure the best
quality of the desired signal.
๐‘–๐‘“ ๐‘ฅ(๐‘›) ๐‘–๐‘  ๐‘กโ„Ž๐‘’ ๐‘๐‘Ÿ๐‘–๐‘š๐‘Ž๐‘Ÿ๐‘ฆ ๐‘š๐‘’๐‘Ž๐‘ ๐‘ข๐‘Ÿ๐‘’๐‘š๐‘’๐‘›๐‘ก ๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘คโ„Ž๐‘–๐‘โ„Ž ๐‘๐‘œ๐‘›๐‘ก๐‘Ž๐‘–๐‘›๐‘  ๐‘๐‘œ๐‘กโ„Ž ๐‘‘๐‘’๐‘ ๐‘–๐‘Ÿ๐‘’๐‘‘ ๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘ (๐‘›)
๐‘Ž๐‘›๐‘‘ ๐‘›๐‘œ๐‘–๐‘ ๐‘’ ๐‘ฃ(๐‘›) ๐‘“๐‘Ÿ๐‘œ๐‘š ๐‘กโ„Ž๐‘’ ๐‘ ๐‘Ž๐‘š๐‘’ ๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘ ๐‘œ๐‘ข๐‘Ÿ๐‘๐‘’, ๐‘กโ„Ž๐‘’๐‘›,
๐‘ฅ(๐‘›) = ๐‘ (๐‘›) + ๐‘ฃ(๐‘›)
๐‘–๐‘“ ๐‘ค๐‘’ โ„Ž๐‘Ž๐‘ฃ๐‘’ ๐‘Ž ๐‘Ÿ๐‘’๐‘“๐‘’๐‘Ÿ๐‘’๐‘›๐‘๐‘’ ๐‘›๐‘œ๐‘–๐‘ ๐‘’ ๐‘”(๐‘›) ๐‘คโ„Ž๐‘–๐‘โ„Ž ๐‘–๐‘  ๐‘๐‘œ๐‘Ÿ๐‘Ÿ๐‘’๐‘™๐‘Ž๐‘ก๐‘’๐‘‘ ๐‘ค๐‘–๐‘กโ„Ž ๐‘กโ„Ž๐‘’ ๐‘›๐‘œ๐‘–๐‘ ๐‘’ ๐‘ฃ(๐‘›), ๐‘กโ„Ž๐‘’๐‘›,
๐‘’(๐‘›) = {๐‘ (๐‘›) + ๐‘ฃ(๐‘›)} โˆ’ ๐‘”(๐‘›)
๐‘’(๐‘›) โ‰ˆ ๐‘ (๐‘›)
In the following figure, the filtered reference noise is subtracted from the measured signal to obtain the
error signal, and this error signal is the approximation of the desired signal.
[Block diagram: the measurement signal x(n) = s(n) + v(n) and the correlated reference noise g(n) enter the system; the FIR filter driven by the adaptive control algorithm produces y(n), and the desired error signal e(n) = x(n) − y(n) ≈ s(n) is fed back through the updated-coefficients feedback loop.]
Figure 16: Adaptive Noise Cancellation
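A hedged MATLAB sketch of this ANC structure is given below, using the DSP System Toolbox object dsp.LMSFilter (assumed to be available; the GUI itself wraps MATLAB's pre-implemented adaptive filters). The synthetic signals, filter order and step size are assumptions standing in for the loaded speech data.
fs = 8000; t = (0:2*fs-1).'/fs;
s  = sin(2*pi*300*t);                        % stand-in desired signal s(n)
g  = randn(size(s));                         % reference noise g(n)
v  = filter([0.8 0.4 -0.2],1,g);             % noise v(n), correlated with g(n)
x  = s + v;                                  % measured signal x(n) = s(n) + v(n)
lms = dsp.LMSFilter(32,'StepSize',0.01);     % adaptive FIR; order and step size assumed
[y,e] = lms(g,x);                            % filter g(n) towards v(n); e(n) = x(n) - y(n) ~ s(n)
% soundsc(e,fs);                             % optionally listen to the enhanced signal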
5.3.2 Adaptive Line Enhancement (ALE) or FIR Linear Prediction
Adaptive Line Enhancement is used when a narrowband desired signal is mixed with wideband
undesired noise and at the same time we do not have any knowledge about the wideband noise. In
this scenario, we delay the received signal slightly, but by enough to de-correlate the
wideband noise, and then use an FIR linear predictor to estimate the desired narrowband signal.
Then we subtract this estimated narrowband signal from the primary signal to obtain the
estimated error, and reduce this error to obtain the enhanced desired narrowband signal.
Therefore, the quality of the enhanced narrowband signal depends on the performance
of the FIR linear predictor.
๐น๐‘Ÿ๐‘œ๐‘š ๐‘Ž ๐‘Ÿ๐‘’๐‘๐‘’๐‘–๐‘ฃ๐‘’๐‘‘ ๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘ฃ(๐‘›), ๐‘คโ„Ž๐‘’๐‘Ÿ๐‘’ ๐‘ค๐‘–๐‘‘๐‘’๐‘๐‘Ž๐‘›๐‘‘ ๐‘›๐‘œ๐‘–๐‘ ๐‘’ ๐‘ค(๐‘›) ๐‘š๐‘Ž๐‘ ๐‘˜๐‘  ๐‘กโ„Ž๐‘’ ๐‘‘๐‘’๐‘ ๐‘–๐‘Ÿ๐‘’๐‘‘ ๐‘›๐‘Ž๐‘Ÿ๐‘Ÿ๐‘œ๐‘ค
๐‘๐‘Ž๐‘›๐‘‘ ๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘ฅ(๐‘›), ๐‘ค๐‘’ ๐‘ค๐‘Ž๐‘›๐‘ก ๐‘ก๐‘œ ๐‘’๐‘›โ„Ž๐‘Ž๐‘›๐‘๐‘’ ๐‘กโ„Ž๐‘’ ๐‘›๐‘Ž๐‘Ÿ๐‘Ÿ๐‘œ๐‘ค๐‘๐‘Ž๐‘›๐‘‘ ๐‘‘๐‘’๐‘ ๐‘–๐‘Ÿ๐‘’๐‘‘ ๐‘ ๐‘–๐‘”๐‘›๐‘Ž๐‘™ ๐‘ฅ(๐‘›). ๐‘‡โ„Ž๐‘’๐‘›,
๐‘ฃ(๐‘›) = ๐‘ฅ(๐‘›) + ๐‘ค(๐‘›)
๐‘ฅ(๐‘›)ฬ…ฬ…ฬ…ฬ…ฬ…ฬ… = โˆ‘ โ„Ž(๐‘˜) ๐‘ฃ(๐‘› โˆ’ ๐ท โˆ’ ๐‘˜)
๐‘€โˆ’1
๐‘˜=0
๐‘’(๐‘›) = ๐‘ฃ(๐‘›) โˆ’ ๐‘ฅ(๐‘›)ฬ…ฬ…ฬ…ฬ…ฬ…ฬ… = ๐‘ค(๐‘›)ฬ…ฬ…ฬ…ฬ…ฬ…ฬ…ฬ…
๐‘‡๐‘œ ๐‘”๐‘’๐‘ก ๐‘กโ„Ž๐‘’ ๐‘œ๐‘๐‘ก๐‘–๐‘š๐‘Ž๐‘™ ๐น๐ผ๐‘… ๐‘™๐‘–๐‘›๐‘’๐‘Ž๐‘Ÿ ๐‘๐‘Ÿ๐‘’๐‘‘๐‘–๐‘๐‘ก๐‘œ๐‘Ÿ ๐‘๐‘œ๐‘’๐‘“๐‘“๐‘–๐‘๐‘–๐‘’๐‘›๐‘ก๐‘ 
โˆ‘ โ„Ž(๐‘˜) ๐‘Ÿ๐‘ฃ ๐‘ฃ(๐‘™ โˆ’ ๐‘˜) = ๐‘Ÿ๐‘ฃ ๐‘ฃ(๐‘™ + ๐ท), ๐‘™ = 0,1, โ€ฆ โ€ฆ โ€ฆ , ๐‘€ โˆ’ 1
๐‘€โˆ’1
๐‘˜=0
The expected value of the right hand side of the above equation is the statistical autocorrelation
of the narrowband signal ๐‘ฅ(๐‘›) which can be seen as follows.
๐‘Ÿ๐‘ฃ ๐‘ฃ(๐‘™ + ๐ท) = โˆ‘ ๐‘ฃ(๐‘›) ๐‘ฃ(๐‘› โˆ’ ๐‘™ โˆ’ ๐ท)
๐‘
๐‘›=0
= โˆ‘[๐‘ค(๐‘›) + ๐‘ฅ(๐‘›)][๐‘ค(๐‘› โˆ’ ๐‘™ โˆ’ ๐ท) + ๐‘ฅ (๐‘› โˆ’ ๐‘™ โˆ’ ๐ท)]
๐‘
๐‘›=0
= ๐‘Ÿ๐‘ค ๐‘ค(๐‘™ + ๐ท) + ๐‘Ÿ๐‘ฅ ๐‘ฅ(๐‘™ + ๐ท) + ๐‘Ÿ๐‘ค ๐‘ฅ(๐‘™ + ๐ท) + ๐‘Ÿ๐‘ฅ ๐‘ค(๐‘™ + ๐ท)
= 0 + ๐‘Ÿ๐‘ฅ ๐‘ฅ(๐‘™ + ๐ท) + 0 + 0 (๐ด๐‘ ๐‘ ๐‘ข๐‘š๐‘’๐‘‘)
= ๐‘Ÿ๐‘ฅ ๐‘ฅ(๐‘™ + ๐ท) = ๐›พ๐‘ฅ๐‘ฅ(๐‘™ + ๐ท)
In the following figure, the primary signal is delayed to de-correlate the wideband noise
and then fed into a linear FIR predictor to best estimate the narrowband desired signal x(n);
this estimate is then used to estimate the wideband noise error. Subsequently, the error
is reduced and the enhanced narrowband desired signal x(n) is obtained.
[Block diagram: the received signal v(n) = x(n) + w(n), in which wideband noise w(n) masks the narrowband signal x(n), passes through a decorrelation delay to give v(n − D); the FIR filter driven by the adaptive control algorithm produces the estimated narrowband signal, which forms the enhanced narrowband output, while the estimated wideband error signal e(n) is fed back through the updated-coefficients feedback loop.]
Figure 17: Adaptive Line Enhancement
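The same structure can be sketched with a hand-coded LMS predictor as follows; the delay D, predictor order M, step size and the synthetic narrowband/wideband signals are assumptions for illustration only.
fs = 8000; t = (0:3999).';
x  = sin(2*pi*500*t/fs);               % narrowband desired signal (assumed)
v  = x + 0.5*randn(size(x));           % received signal v(n) = x(n) + w(n)
D = 10; M = 32; mu = 0.002;            % delay, predictor order, step size (assumed)
h = zeros(M,1); xhat = zeros(size(v)); e = zeros(size(v));
for n = D+M:length(v)
    vd      = v(n-D:-1:n-D-M+1);       % delayed input vector v(n-D-k), k = 0..M-1
    xhat(n) = h.'*vd;                  % estimated narrowband signal
    e(n)    = v(n) - xhat(n);          % estimated wideband error
    h       = h + 2*mu*e(n)*vd;        % LMS update of the predictor coefficients
end
% xhat is the enhanced narrowband output; e approximates the wideband noise.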
5.3.3 System Identification or Modelling (SI)
System identification is the modelling or extraction of the impulse response of an unknown
system by replicating a similar impulse response in an adjacent FIR filter. The input
signal sequence x(n) is fed into both the unknown system and the adjacent FIR filter. The output
signal sequence y(n) of the FIR filter is subtracted from the unknown system's output signal
sequence d(n), and the error signal sequence e(n) is obtained. The FIR filter coefficients
are then updated from the error signal sequence, which is minimized to obtain the corrected
coefficients. The optimally minimized coefficients replicate or approximate the impulse
response of the unknown system. Thus the unknown system's impulse response is modelled
without any prior knowledge by using an adaptive FIR filter.
๐‘‡๐‘œ ๐‘š๐‘œ๐‘‘๐‘’๐‘™ ๐‘Ž ๐‘ข๐‘›๐‘˜๐‘›๐‘œ๐‘ค๐‘› ๐‘ ๐‘ฆ๐‘ ๐‘ก๐‘’๐‘š ๐‘ค๐‘–๐‘กโ„Ž ๐‘Ž๐‘› ๐‘€ ๐‘Ž๐‘‘๐‘—๐‘ข๐‘ ๐‘ก๐‘Ž๐‘๐‘™๐‘’ ๐‘๐‘œ๐‘’๐‘“๐‘“๐‘–๐‘๐‘–๐‘’๐‘›๐‘ก ๐น๐ผ๐‘… ๐‘“๐‘–๐‘™๐‘ก๐‘’๐‘Ÿ, ๐‘กโ„Ž๐‘’๐‘›,
๐น๐ผ๐‘… ๐‘“๐‘–๐‘ก๐‘™๐‘’๐‘Ÿ ๐‘ค๐‘–๐‘กโ„Ž ๐‘€ ๐‘๐‘œ๐‘’๐‘“๐‘“๐‘–๐‘๐‘’๐‘›๐‘ก, ๐‘ฆ(๐‘›) = โˆ‘ โ„Ž(๐‘˜) โˆ— ๐‘ฅ(๐‘› โˆ’ ๐‘˜)
๐‘€โˆ’1
๐‘˜=0
๐‘ˆ๐‘›๐‘˜๐‘›๐‘œ๐‘ค๐‘› ๐‘ ๐‘ฆ๐‘ ๐‘ก๐‘’๐‘šโ€ฒ
๐‘  ๐‘œ๐‘ข๐‘ก๐‘๐‘ข๐‘ก, ๐‘‘(๐‘›)
๐ธ๐‘Ÿ๐‘Ÿ๐‘œ๐‘Ÿ ๐‘ ๐‘’๐‘ž๐‘ข๐‘’๐‘›๐‘๐‘’, ๐‘’(๐‘›) = ๐‘‘(๐‘›) โˆ’ ๐‘ฆ(๐‘›)
๐‘๐‘œ๐‘ค, ๐‘ก๐‘œ ๐‘”๐‘’๐‘ก ๐‘š๐‘–๐‘›๐‘–๐‘š๐‘–๐‘ง๐‘’๐‘‘ ๐‘œ๐‘Ÿ ๐‘œ๐‘๐‘ก๐‘–๐‘š๐‘–๐‘ง๐‘’๐‘‘ ๐‘๐‘œ๐‘’๐‘“๐‘“๐‘–๐‘๐‘–๐‘’๐‘›๐‘ก๐‘  โ„Ž(๐‘˜) ๐‘ค๐‘–๐‘กโ„Ž ๐‘ + 1 ๐‘œ๐‘๐‘ ๐‘’๐‘Ÿ๐‘ฃ๐‘Ž๐‘ก๐‘–๐‘œ๐‘›๐‘ ,
แถ“ ๐‘€ = โˆ‘ [๐‘‘(๐‘›) โˆ’ โˆ‘ โ„Ž(๐‘˜) ๐‘ฅ(๐‘› โˆ’ ๐‘˜)
๐‘€โˆ’1
๐‘˜=0
]
2๐‘
๐‘›=0
แถ“ ๐‘€ = โˆ‘ [๐‘‘(๐‘›) โˆ’ โˆ‘ โ„Ž(๐‘˜) ๐‘Ÿ๐‘ฅ ๐‘ฅ(๐‘™ โˆ’ ๐‘˜) = ๐‘Ÿ๐‘ฆ ๐‘ฅ(๐‘™)
๐‘€โˆ’1
๐‘˜=0
]
2๐‘
๐‘›=0
๐‘Šโ„Ž๐‘’๐‘Ÿ๐‘’, ๐‘™ = 0,1, โ€ฆ โ€ฆ . ๐‘€ โˆ’ 1
๐‘กโ„Ž๐‘’ ๐‘Ž๐‘ข๐‘ก๐‘œ๐‘๐‘œ๐‘Ÿ๐‘Ÿ๐‘’๐‘™๐‘Ž๐‘ก๐‘–๐‘œ๐‘› ๐‘œ๐‘“ ๐‘กโ„Ž๐‘’ ๐‘ ๐‘’๐‘ž๐‘ข๐‘’๐‘›๐‘๐‘’ ๐‘ฅ(๐‘›) = ๐‘Ÿ๐‘ฅ๐‘ฅ(๐‘™)
๐‘กโ„Ž๐‘’ ๐‘๐‘Ÿ๐‘œ๐‘ ๐‘ ๐‘๐‘œ๐‘Ÿ๐‘Ÿ๐‘’๐‘™๐‘Ž๐‘ก๐‘–๐‘œ๐‘› ๐‘œ๐‘“ ๐‘กโ„Ž๐‘’ ๐‘ ๐‘ฆ๐‘ ๐‘ก๐‘’๐‘š ๐‘œ๐‘ข๐‘ก๐‘๐‘ข๐‘ก ๐‘ค๐‘–๐‘กโ„Ž ๐‘กโ„Ž๐‘’ ๐‘–๐‘›๐‘๐‘ข๐‘ก ๐‘ ๐‘’๐‘ž๐‘ข๐‘’๐‘›๐‘๐‘’, ๐‘Ÿ๐‘ฆ ๐‘ฅ(๐‘™)
In the figure, we can clearly see that the input signal is provided to both the FIR filter and the
unknown system. The FIR filter is initialized with some best-guess coefficients. Then, from
the error signal, we can measure the deviation of the current coefficients from the desired
coefficients and calculate new, corrected coefficients.
[Block diagram: the input signal x(n) drives both the unknown time-variant system, whose output is the desired signal d(n), and the FIR/IIR filter, whose output is y(n); the error signal e(n) is fed back through the adaptive control algorithm's updated-coefficients feedback loop.]
Figure 18: System Identification using Adaptive Filter
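A hedged sketch of this identification setup is shown below; the "unknown" system is represented by an arbitrary FIR vector purely for illustration, and dsp.LMSFilter from the DSP System Toolbox is assumed to be available.
b   = [0.6 -0.3 0.2 0.1 0.05];                  % stand-in "unknown" FIR system (assumed)
x   = randn(4000,1);                            % excitation input
d   = filter(b,1,x);                            % unknown system's output d(n)
lms = dsp.LMSFilter(numel(b),'StepSize',0.02);  % adaptive FIR of the same order (assumed toolbox object)
[y,e,w] = lms(x,d);                             % after convergence, w approximates b(:)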
Chapter 6
MATLAB and Development Tools
6.1 MATLAB GUI Design Methodology
MATLAB is resource rich and offers several development alternatives for developing software
in MATLAB. For example, to develop a GUI in MATLAB we can either use the pre-built GUI builder
GUIDE or write the GUI programmatically. Moreover, for run-time data storage, we
can use either the "guidata()" function or the "setappdata()/getappdata()" functions. Furthermore, for
function management we can use either a "multiple-function" or a "nested-function" approach. In
addition, for the GUI structural blocks we can use either a "single panel" or a "nested panels"
approach. Each of these alternatives has its own trade-offs and needs to be used according to
the software's needs. Some of these alternatives are discussed in more detail in the following
sections.
6.1.1 Compact data representation
The goal of compact data representation is to optimally utilize the spatial space available
within a data display and to reuse the same space to display multiple data sets. In MATLAB this
can easily be accomplished using the "Visible" property. When the "Visible" property is
"on", the corresponding GUI elements are visible, and vice versa. Therefore, a set of GUI
elements can be made invisible or visible during an execution instance using this property, and this
flexibility can be used to place multiple GUI elements at the same spatial coordinates and make
each visible only when needed, as in the sketch below.
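A minimal sketch of this visibility swap is shown below; panelA and panelB are hypothetical panel handles occupying the same coordinates, not elements of the developed GUI.
panelA = uipanel('Position',[.1 .1 .8 .8],'Title','View A');                 % visible by default
panelB = uipanel('Position',[.1 .1 .8 .8],'Title','View B','Visible','off'); % hidden, same space
set(panelA,'Visible','off');   % hide the panel currently shown
set(panelB,'Visible','on');    % reveal the panel sharing the same coordinates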
6.1.2 Aesthetical data representation
The overall aesthetics of a software workspace are as important as the aesthetics of a physical
workspace for being able to concentrate on work. This aesthetical matter always influences humans,
because the human mind drives the human brain and our mind always likes beauty. Therefore,
the most used data need to be placed at the focal point of convenient eye focus. Data need to
be represented with pleasant but eye-friendly colors. Moreover, in a GUI, data need to be spread
in a coherent manner so that there is little visual congestion even with more data.
All of these aesthetical aspects were attempted to be maintained in the developed software.
6.1.3 GUI Development using โ€œGUIDEโ€
In MATLAB, "GUIDE" is a pre-developed GUI development environment. It allows its users
to place GUI elements in the GUI using drag and drop. Besides, it also allows the user to
extend the functionality of GUI elements through further programming. However, there are both
advantages and disadvantages to this approach and they are discussed as follows:
6.1.3.1 Advantages:
• Less time-consuming
• Best for prototyping
• Best for short-term use
• Best for simpler GUIs
• An easy solution for newcomer computing professionals or engineers
6.1.3.2 Disadvantages:
• Does not offer a full understanding of GUI construction
• There are cases where it can take more time to fix GUI errors compared to a
programmatic implementation
• Requires keeping track of two files, i.e. ".m" and ".fig", for every GUI
• GUIDE-generated code is messy and large in size
• Small changes in the GUI cause substantial reordering of the corresponding GUI code,
hence it is not worthwhile to track the code through a source code control system
(e.g. CVS)
6.1.4 Programmatic GUI Development
In MATLAB, a GUI can also be developed programmatically. This approach has substantial advantages
but also some drawbacks. However, the advantages outweigh the drawbacks and,
therefore, we have developed the GUI in this project programmatically. The
advantages and disadvantages are discussed as follows.
6.1.4.1 Advantages:
• Faster overall if implemented with good experience and expertise
• Best for applications that will be used long-term
• Best for applications that will evolve with more complexity in the future
• Allows the use of nested functions
• Hand-coding the GUI results in lucid, simpler and easy-to-follow code
• Easy deployment; for example, it is easier to upgrade and update the GUI when there are
fewer files and less code
• Best solution for competent or advanced computing professionals, engineers, scientists
and researchers
• The GUI layout can be controlled programmatically and hence appropriate adaptability
to various screen sizes becomes possible
• GUI-related code can be reused
• Easy to keep track of the changes made to earlier versions of the code
through a source code control system (e.g. CVS)
6.1.4.2 Disadvantages
• Longer learning curve
• Have to start from scratch
• Takes more time to create a simple GUI compared to GUIDE
6.2 Structural GUI Design Tools
The structure of a GUI depends on the extent and type of GUI elements used to construct it.
We can formulate the GUI structure in two categories, namely "skin" structure and "code"
structure. For the skin structure, two notions are important in the development of the GUI:
1. the GUI elements, and 2. how these elements are placed within the GUI. We have used "nested panels"
in this project, which has shaped both the "skin" and "code" structure of the GUI. Moreover, we have
also used "nested functions" in this project, which has mostly shaped the "code" structure. Both
"nested panels" and "nested functions" have their own trade-offs and are discussed as follows.
6.2.1 Nested Panels
"Nested panels" means putting several panels within a single parent panel. A parent panel can
have several levels of child panels depending on the degree of nesting. In other words, a parent
panel can have child panels and grandchild panels, which in turn results in several
parent panels within a grandparent panel. There are both advantages and disadvantages to using
"nested panels" and they are discussed as follows. In this project, we have used "nested panels"
because the advantages outweigh the disadvantages.
6.2.1.1 Advantages
• Realignment only has impact within the child panel, and GUI elements within outer panels stay
intact
• Offers locked GUI elements within a certain GUI area and therefore prevents any
accidental realignment
• All components within a parent panel can easily be relocated with exactly the same alignment
ratio
• Facilitates modular GUI development
• Facilitates re-use of code in another symmetric panel with the same alignment ratio
6.2.1.2 Disadvantages
• If the parent panels need to be reorganized, then the whole GUI layout needs to be
re-implemented
6.2.2 Nested Functions
"Nested functions" means putting several, or even hundreds of, child functions within a single
parent function. There are advantages and disadvantages to this approach as well, and they are
discussed as follows.
6.2.2.1 Advantages
• It is possible to use variables that are not explicitly passed as input arguments, namely
externally scoped variables from the parent function.
• A handle created in the parent function can be used for data storage purposes from a nested
function.
6.2.2.2 Disadvantages
• When the code grows larger, a single function with several hundred nested functions within it
becomes inconvenient for the programmer.
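A minimal sketch of a nested callback that relies on an externally scoped variable of its parent function is given below; the function and variable names are illustrative and not taken from the developed software.
function demoNestedCounter
    counter = 0;                                   % variable owned by the parent function
    uicontrol('Style','pushbutton','String','Count','Callback',@onPress);
    function onPress(~,~)
        counter = counter + 1;                     % nested function reads and updates it directly
        disp(counter);
    end
end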
6.3 Used Functions
In MATLAB, there are cases which can only be solved using one unique function, with no
alternatives available. However, there are also cases which can be solved using several
alternative functions, and the user needs to make a choice based on need and convenience.
• Main GUI window: using the "figure" function.
• GUI element handling: using the "function handle" of each GUI element
• GUI element customization: using each function's associated "Property" and "Values".
• GUI elements: "uimenu", "uitoolbar", "uipushtool", "uipanel", "uicontrol", "axes",
"getappdata", "uitable", "uigetfile"
• Run-time data storage: "guidata", "setappdata"
• Callback event execution: "Callback" and the associated callback functions
• Data loading: "dlmwrite", "fileparts"
• Learning curve calculation: the "msesim" function is used (see the sketch below)
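As an illustration of the last item, a hedged sketch of obtaining a learning curve with "msesim" is given below; the signals and parameters are assumptions, and the exact calling syntax may differ between MATLAB releases.
x   = randn(4000,1);                    % input signal (assumed)
d   = filter([0.7 0.2 -0.1],1,x);       % desired signal (assumed)
lms = dsp.LMSFilter(16,'StepSize',0.01);% adaptive filter object (DSP System Toolbox, assumed)
mse = msesim(lms,x,d);                  % simulated mean-square-error sequence
plot(mse); xlabel('Iteration'); ylabel('MSE');   % the learning curve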
Chapter 7
Algorithm and Software Development
7.1 Graphical User Interface (GUI) Structure and Elements
The Graphical User Interface (GUI) is composed of several elements such as a menubar, menus,
toolbar, pushbuttons, popup menus, sliders, axes, text and edit boxes, as well as design structures such
as panels. In the previous chapter, we briefly mentioned these. All of these elements
are placed in the coordinates of the main parent figure. In other words, the whole MATLAB
GUI is an instance of the figure function which contains various sub-components to accomplish the
tasks of the software.
7.1.1 Main GUI Window or Figure
In MATLAB, the whole GUI is realized within a single function called "figure". The
function is called with the desired arguments and in turn generates a blank GUI window
in accordance with the passed-in properties. This blank GUI window has a horizontal coordinate
and a vertical coordinate. We have then placed several GUI elements into this blank GUI
window using these coordinates. The "figure" function returns a handle to the figure, i.e. to the
blank GUI window. We have used this handle for placing other GUI elements into the blank parent
GUI window. In the following code, we first declare the main parent "figure" function and then
place the menubar, menus and toolbar into the generated main GUI window.
myHandle = figure('Visible','off','HandleVisibility','callback', ...
    'NumberTitle','off','MenuBar','None','Resize','off', ...
    'Name',['A MATLAB Simulation Software for Key Adaptive Algorithms and ' ...
    'Applications, Developed By Main Uddin-Al-Hasan'], ...
    'units','normalized','outerposition',[0 0 1 1],'Visible','on');
myMenu1 = uimenu(myHandle,'Label','File');
addItem2 = uimenu(myMenu1,'Label','Load Data','Callback',@loadData);
addItem4 = uimenu(myMenu1,'Label','Close','Callback',@closeFigure);
myToolbar = uitoolbar(myHandle);
img1 = imread('new.png');
img11 = imresize(img1,[25,25]);
tool1 = uipushtool(myToolbar,'CData',img11,'Separator','on', ...
    'TooltipString','Load Data','HandleVisibility','off', ...
    'ClickedCallback',@loadData);
In figure 19, we can see the structure of the developed GUI. The main parent figure contains
all GUI elements and panels.
Figure 19: Developed GUI without data
In figure 19, from the middle to the left there are four panels of dissimilar sizes. The
top two panels are child panels within a parent panel. The bottom two panels are individual panels
that are positioned in the main parent figure coordinates. From the middle to the right, we have
four display panels, each of which is locked into another parent display panel. This parent
display panel is locked into the main parent figure coordinates.
7.1.2 Nested Panelling
Figure 20: Main GUI window with some data
In figure 20, the bottom left panel of the main GUI window is populated with several child
panels and each panel is populated with several GUI elements. In the following code, we
first declare four parent panels. All other GUI elements are placed into these four parent
panels. This nested panelling offers modular software development: if we want to swap
the left half and the right half of the above GUI, we just need to change the four coordinate
values of the corresponding four parent panels and can disregard the coordinate locations of all other
GUI elements. That is to say, when we move a parent panel, we move all of the child panels
within it and their internal location consistency stays unchanged.
% Creating Parent Panels
DataAndSelection = uipanel(myHandle,'BorderType','none', ...
    'BackgroundColor','white','Position',[.0 .70 .5 .30]);
AlgorithmParameter = uipanel(myHandle,'BorderType','none', ...
    'BackgroundColor','white','Position',[.0 .0 .3 .70]);
titleData = uicontrol(AlgorithmParameter,'Style','text', ...
    'String','Algorithm Paramters','BackgroundColor',[.5 .5 1], ...
    'Units','normalized','FontSize',12,'Position',[.0 .95 1 .05]);
LoadedDataDisplay = uipanel(myHandle,'BorderType','none', ...
    'Position',[.3 .0 .2 .70]);
SignalDisplay = uipanel(myHandle,'BorderType','none', ...
    'Position',[.5 .0 .5 1]);
In the following code, we have created two child panels. In the first child panel, we have placed
popup menus, default data load option and execution push button. In the second child panel,
we have placed GUI elements for ALE and SI application data input.
% Creating child panels for Data&Selection
AlgorithmsAndApplications = uipanel(DataAndSelection,'BorderType','line', ...
    'HighlightColor',[.5 .5 1],'ShadowColor',[.5 .5 1], ...
    'FontSize',12,'FontWeight','normal','Position',[.0 .0 .35 1]);
titleData = uicontrol(AlgorithmsAndApplications,'Style','text', ...
    'String','Algorithms & Applications','BackgroundColor',[.5 .4 1], ...
    'Units','normalized','FontSize',12,'Position',[.0 .876 1 .124]);
ApplicationData = uipanel(DataAndSelection,'Visible','off', ...
    'BorderType','line','FontSize',12,'HighlightColor',[.5 .6 1], ...
    'ShadowColor',[.5 .6 1],'Position',[.35 .0 .65 1]);
titleData = uicontrol(ApplicationData,'Style','text', ...
    'String','Application Data','BackgroundColor',[.5 .7 1], ...
    'Units','normalized','FontSize',12,'Position',[.0 .876 1 .124]);
In the following code, we have created child panels for each class of algorithms. Then, in each
child panel for each class, we have placed grand-child panels for each type of individual
algorithm.
% Creating child panels for each Algorithm Type
LMSAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
RLSAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
APAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
FDAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
LBAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
In the following code, we have created several grand-child panels for each type of LMS based
algorithms. After that, we have populated each child panel with corresponding algorithm
properties.
% Creating child panels for LMS Based Algorithms
lms      = uipanel(LMSAlgorithmParameter,'Title','LMS','Position',[.0 .66 .333 .33]);
nlms     = uipanel(LMSAlgorithmParameter,'Title','NLMS','Position',[.333 .66 .333 .33]);
llms     = uipanel(LMSAlgorithmParameter,'Title','LLMS','Position',[.666 .66 .333 .33]);
adjlms   = uipanel(LMSAlgorithmParameter,'Title','ADJLMS','Position',[.0 .33 .333 .33]);
blms     = uipanel(LMSAlgorithmParameter,'Title','BLMS','Position',[.333 .33 .333 .33]);
blms_fft = uipanel(LMSAlgorithmParameter,'Title','BLMS-FFT','Position',[.666 .33 .333 .33]);
dlms     = uipanel(LMSAlgorithmParameter,'Title','DLMS','Position',[.0 .0 .333 .33]);
filtxlms = uipanel(LMSAlgorithmParameter,'Title','FILT-XLMS','Position',[.333 .0 .333 .33]);
sDESlms  = uipanel(LMSAlgorithmParameter,'Title','SD/SE/SS','Position',[.666 .0 .333 .33]);
In figure 21, we can see the internal blocks of the resultant GUI. The position of each block
in this figure corresponds exactly to its position in the developed GUI.
[Block diagram of the GUI hierarchy: the main parent figure holds the menubar (menus and sub-menus), the toolbar, and four parent panels — "Selection, Execution and Application Data" (with child panels for selecting applications/algorithms and executing, and for entering ALE and SI data), "Algorithm Parameters" (with nine parameter child panels), "Loaded Data Display", and "Data Display" (with child panels for the original signal, all estimated signals, all error signals and all learning curves, each with a grandchild panel for axis customization and listening).]
Figure 21: Internal GUI Blocks
The benefit of modular GUI management is clearly understandable from figure 21. For
example, if we want to swap "Child Panel 1" and "Child Panel 2", we just need to exchange
their "Position" property coordinates, as sketched below. All of the GUI elements contained within
these two child panels will stay unchanged.
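A minimal sketch of such a swap, using the "lms" and "nlms" panel handles from the listing above, is as follows; exchanging their "Position" values moves all GUI elements contained in each panel with it.
posA = get(lms,'Position');                 % remember the first panel's position
set(lms,'Position',get(nlms,'Position'));   % move the first panel to the second's place
set(nlms,'Position',posA);                  % move the second panel to the first's place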
7.1.3 Popup Menu or Listing
The menubar is a common element of modern software GUIs, and the common convention is to place
it at the top of the software. However, space there is limited, and a popup
menu is a good alternative for showing a listing. Moreover, multiple popup menus can be locked
into a single place and then conveniently accessed using the "Visible" property of the GUI.
We have used this property to show several popup menus in a small space. A small block of the
code related to the popup menu is given below. Here, we first declare the list and then create
the popup menu and assign the list to the "String" property of the popup function. After that, we
fetch the currently selected value and the associated string from the second column of
the list. This fetched string value is later used to decide which configuration of the function is
called.
popupLMSClass = { ...                     % LMS Based Algorithms
    ''                        '';
    'LMS FIR'                 'LMS';
    'Normalized LMS FIR'      'NLMS';
    'Leaky LMS FIR'           'LLMS';
    'Adjoint LMS FIR'         'ADJLMS';
    'Block LMS FIR'           'BLMS';
    'FFT-based Block LMS FIR' 'BLMSFFT';
    'Delayed LMS FIR'         'DLMS';
    'Filtered-x LMS FIR'      'FILTXLMS';
    'Sign-Data LMS FIR (SD)'  'SD';
    'Sign-Error LMS FIR (SE)' 'SE';
    'Sign-Sign LMS FIR (SS)'  'SS'};
selectLMSClass = uicontrol(AlgorithmsAndApplications,'Visible','off', ...
    'Style','popupmenu','Units','normalized','String',popupLMSClass(:,1), ...
    'HandleVisibility','callback','Position',[.05 .44 .83 .1], ...
    'Callback',@AlgCustomizedVisibility);
whatLMSAlgorithm = popupLMSClass{get(selectLMSClass,'Value'), 2};
In total, three popup menus are visible at any execution instance and they
need to be selected in order, from the first to the third, to be used correctly. That is to say,
when an option is selected from the first popup menu, the second popup menu is displayed
based on the first selection, and similarly, based on the second selection, the third popup menu is
displayed. The first popup menu shows the applications, the second popup menu shows the algorithm
classes and the third popup menu shows the individual algorithms.
[Flowchart of the popup menu execution flow. Popup Menu 1 selects the application: 1. Adaptive Noise Cancellation (ANC), 2. Adaptive Line Enhancement (ALE), 3. System Identification (SI). Popup Menu 2 selects the algorithm group or comparison mode: 1. Run & Compare Algorithms, 2. LMS Based FIR Filter, 3. RLS Based FIR Filter, 4. Affine Projection Based FIR Filter, 5. Frequency Domain Based FIR Filter, 6. Lattice Based FIR Filter. Depending on this choice, one of six third-level popup menus is shown: 3(1) Run and Compare Algorithms (all LMS/RLS/AP/FD/Lattice based algorithms, or each class in group); 3(2) LMS Based Algorithms (LMS, NLMS, LLMS, ADJLMS, BLMS, BLMSFFT, DLMS, FILTXLMS, SD, SE, SS); 3(3) RLS Based Algorithms (RLS, QRDRLS, HRLS, HSWRLS, SWRLS, FTF); 3(4) AP Based Algorithms (AP, APRU, BAP); 3(5) FD Based Algorithms (PBFDAF, PBUFDAF, TDAFDCT, TDAFDFT, UFDAF); 3(6) Lattice Based Algorithms (GAL, LSL, QRDLSL).]
Figure 22: Popup menu execution flow
In figure 22, the orderly execution of the popup menus is shown along with the content
of each popup menu. The first popup menu location has a single popup menu that shows the
type of application. The second popup menu location also has a single popup menu that shows
the class of algorithms and the comparison mode. However, we have placed six popup menus in the third
popup menu location, and each of these menus is connected with the corresponding entry in the
popup menu at the second location.
7.1.4 Slider Control
We have used sliders in the developed GUI. The user input values for the variable
parameters (i.e. step size, filter order) of each algorithm can be easily and conveniently
controlled using these sliders. The sliders work in real time, that is to say,
when a slider position changes it also changes the associated value of the corresponding parameter,
and when the corresponding parameter value is changed the associated slider position is updated.
This automatic update is accomplished using the "Callback" property of both the "edit" and "slider"
GUI elements. When there is a change in an "edit" box, the associated "Callback"
function is executed; inside this callback we fetch the current "edit" box value and use it to update
the slider position. Likewise, when there is a change in a "slider", the associated "Callback"
function is executed and updates the corresponding value in the "edit" box in a similar way.
In the following code, the first function is executed when there is a change in the corresponding
"edit" box and the second function is executed when there is a change in the corresponding
"slider". Similarly, the third and fourth functions work for the order parameter of the algorithm.
function editLMSmu(hObject,eventdata)
    set(lmsMuSl1,'Value',str2double(get(lmsDF1,'string')));
end
function sliderLMSmu(hObject, eventdata)
    sliderValue = get(lmsMuSl1,'Value');
    set(lmsDF1,'string',sliderValue);
end
function editLMSorder(hObject,eventdata)
    set(lmsOrderSl1,'Value',str2double(get(lmsDF2,'string')));
end
function sliderLMSorder(hObject,eventdata)
    sliderValue = get(lmsOrderSl1,'Value');
    set(lmsDF2,'string',sliderValue);
end
In the following figure, we can see how the "edit" box and "slider" interact with each other to
update the corresponding value in real time.
[Flowchart: changing a parameter value in the edit box executes the associated callback function, which updates the slider position accordingly; changing the slider position executes its associated callback function, which updates the parameter value accordingly.]
Figure 23: Real-time slider control
7.1.5 Application and Parameter Data Input
In the developed software, there are two types of user input, namely application data
input for ALE and SI, and variable parameter data input for each algorithm. In the following
code, we first create the text labels using "text" controls for the corresponding data and then use
"edit" boxes to insert the data.
% Data Fields for Signal 1
AmplitudeS1 = uicontrol(Signal1,'Style','text','String','Amplitude', ...
    'units','normalized','Position',[.1 .80 .3 .15]);
SignalFreqS1 = uicontrol(Signal1,'Style','text','String','Frequency', ...
    'units','normalized','Position',[.09 .6 .3 .15]);
SampleTimeS1 = uicontrol(Signal1,'Style','text','String','Sample Time', ...
    'units','normalized','Position',[.07 .4 .3 .15]);
SamplingRateS1 = uicontrol(Signal1,'Style','text','String','Sampling Rate', ...
    'units','normalized','Position',[.0 .2 .4 .15]);
PhaseS1 = uicontrol(Signal1,'Style','text','String','Phase', ...
    'units','normalized','Position',[.13 .0 .3 .15]);
AmplitudeDFS1 = uicontrol(Signal1,'Style','edit','string',2, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .79 .4 .15]);
SignalFreqDFS1 = uicontrol(Signal1,'Style','edit','string',1200, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .59 .4 .15]);
SampleTimeDFS1 = uicontrol(Signal1,'Style','edit','string',3000, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .39 .4 .15], ...
    'Callback',@updateSampleTimeForOtherSignal1);
SamplingRateDFS1 = uicontrol(Signal1,'Style','edit','string',1000, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .19 .4 .15]);
PhaseDFS1 = uicontrol(Signal1,'Style','edit','string',2, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .01 .4 .15]);
In the following code, we create text labels using "text" for both the "edit" boxes and the
corresponding sliders, then use the "edit" boxes to insert data for the varying algorithm parameters
and the sliders to conveniently increase or decrease those values.
% Data Fields for LMS
lmsT1 = uicontrol(lms,'Style','text','String','mu', ...
    'units','normalized','Position',[.14 .8 .2 .15]);
lmsT2 = uicontrol(lms,'Style','text','String','order', ...
    'units','normalized','Position',[.1 .59 .21 .15]);
lmsDF1 = uicontrol(lms,'Style','edit','BackgroundColor','white', ...
    'units','normalized','Position',[.4 .8 .5 .15],'Callback',@editLMSmu);
lmsDF2 = uicontrol(lms,'Style','edit','BackgroundColor','white', ...
    'units','normalized','Position',[.4 .59 .5 .15],'Callback',@editLMSorder);
lmsT3 = uicontrol(lms,'Style','text','String','mu', ...
    'units','normalized','Position',[.14 .34 .2 .15]);
lmsT4 = uicontrol(lms,'Style','text','String','order', ...
    'units','normalized','Position',[.1 .14 .21 .15]);
lmsMuSl1 = uicontrol(lms,'Style','slider','Min',0,'Max',5, ...
    'SliderStep',[0.05 0.1],'units','normalized', ...
    'Position',[.4 .35 .5 .15],'Callback',@sliderLMSmu);
lmsOrderSl1 = uicontrol(lms,'Style','slider','Min',0,'Max',1000, ...
    'SliderStep',[.001 .005],'units','normalized', ...
    'Position',[.4 .15 .5 .15],'Callback',@sliderLMSorder);
[Two flowcharts: when the sample time of one signal is changed, its callback changes the sample time of the other signal and of the noise signal equally; when the noise signal's sample time is changed, its callback changes the sample times of signal one and signal two equally; if nothing is changed, the default sample time is fetched.]
Figure 24: Application data input consistency
For the application data input for ALE and SI, the sample times for signal 1, signal 2 and the
additive noise must be the same in order for the computation to be correct. Therefore, we have used a
method similar to the "edit-slider" mechanism to maintain automatic consistency among these data.
For example, if we change the sample time of "Signal 1", then the sample times of both "Signal 2"
and "Noise" automatically become equal to that of "Signal 1". The same holds for "Signal 2" and
"Noise": when the sample time of one of them is changed, the sample times of the other two also
change, as sketched below.
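A hedged sketch of the consistency callback named in the earlier listing (@updateSampleTimeForOtherSignal1) is given below; SampleTimeDFS1 appears in that listing, while SampleTimeDFS2 and SampleTimeDFNoise are assumed names for the other two "edit" boxes, not taken from the source.
function updateSampleTimeForOtherSignal1(hObject,eventdata)
    st = get(SampleTimeDFS1,'string');     % fetch the changed sample time of Signal 1
    set(SampleTimeDFS2,'string',st);       % mirror it to Signal 2 (assumed handle name)
    set(SampleTimeDFNoise,'string',st);    % mirror it to the noise signal (assumed handle name)
end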
7.1.6 Data storage and retrieval
In the developed software, the use of data falls into two categories: firstly,
loaded or external data; secondly, data generated by the software after processing. The external
speech data, or loaded data, is stored via the guidata() storage function of the main GUI handle for
further processing. On the other hand, the software-generated data, such as the estimated signal,
error signal and learning curve, are stored in the handle of the corresponding display axes using the
setappdata() function. The software-generated data is stored so that the processed signals can be
played whenever needed after processing, or displayed in a new figure. In the following
code, we load the speech data for ANC and save it via the guidata() function of the main
figure handle.
function loadData(hObject, eventdata)
    [filename,filepath] = uigetfile('*.*','All Files','Select your Data or Files');
    [path,name,ext] = fileparts(filename);
    if(strcmp(ext,'.mat'))
        data = matfile(filename);
        dlmwrite('inputData.dat',[data.d data.x]);
        myData = load('inputData.dat');
        guidata(myHandle,myData);
        setappdata(AncData,'SignalWithNoise',data);
        updateDataTable();
    else
        myData = load(filename);
        guidata(myHandle,myData);
        updateDataTable();
    end
end
In the following code, we fetch back the loaded and stored data and display it in the table
generated by the "uitable" function. This "uitable" GUI element is placed in the third main
parent panel.
function updateDataTable(hObject,eventdata)
    % Setting uitable in Statistical and Data Analysis
    columnFormat = {'numeric', 'numeric'};
    columnEdit = [true true];
    columnWidth = {60 60};
    inputRawData = guidata(myHandle);
    colnames = {'1','2','3'};
    inputDataTable = uitable(StatisticalAndDataAnalysis,'Units','normalized', ...
        'Position',[.0 .0 1 .95],'Data',inputRawData, ...
        'ColumnName',colnames,'ColumnFormat',columnFormat, ...
        'ColumnWidth',columnWidth,'ColumnEditable',columnEdit, ...
        'ToolTipString','Loaded Signal Data');
end
In the following code, we fetch back stored software-generated data (e.g. the estimated
signal) to be played. Similarly, the error signal and learning curve data can also be fetched and
listened to or displayed, respectively.
function playEstimatedSound(hObject,eventdata)
    % Note: the body of this listing is truncated in the source; the lines below
    % are a reconstruction sketch based on the description above, and the handle
    % and key names (EstSignalAxes, 'EstimatedSignal', fs) are assumptions.
    estData = getappdata(EstSignalAxes,'EstimatedSignal');  % fetch the stored estimated signal
    soundsc(estData,fs);                                    % play it back
end
Syllabus-FM1111-Planning in Sweden-An Introduction
ย 
Syllabus-ET1201-Mobile Communications
Syllabus-ET1201-Mobile CommunicationsSyllabus-ET1201-Mobile Communications
Syllabus-ET1201-Mobile Communications
ย 
Syllabus-EN1204-Language and Communication II
Syllabus-EN1204-Language and Communication IISyllabus-EN1204-Language and Communication II
Syllabus-EN1204-Language and Communication II
ย 
Syllabus-EN1114-Literature and Media Studies I
Syllabus-EN1114-Literature and Media Studies ISyllabus-EN1114-Literature and Media Studies I
Syllabus-EN1114-Literature and Media Studies I
ย 
Syllabus-EN1103-Culture and Media Studies I
Syllabus-EN1103-Culture and Media Studies ISyllabus-EN1103-Culture and Media Studies I
Syllabus-EN1103-Culture and Media Studies I
ย 
Syllabus-EN1105-Language and Communication I
Syllabus-EN1105-Language and Communication ISyllabus-EN1105-Language and Communication I
Syllabus-EN1105-Language and Communication I
ย 
Syllabus-ME1101-Media, Form and Design-An Introduction to Visual Communication
Syllabus-ME1101-Media, Form and Design-An Introduction to Visual CommunicationSyllabus-ME1101-Media, Form and Design-An Introduction to Visual Communication
Syllabus-ME1101-Media, Form and Design-An Introduction to Visual Communication
ย 
Syllabus-TIG021-Software Processes
Syllabus-TIG021-Software ProcessesSyllabus-TIG021-Software Processes
Syllabus-TIG021-Software Processes
ย 
Syllabus-ET2566-Masterยดs Thesis
Syllabus-ET2566-Masterยดs ThesisSyllabus-ET2566-Masterยดs Thesis
Syllabus-ET2566-Masterยดs Thesis
ย 
Syllabus-ET2546-Multidimensional Signal Processing
Syllabus-ET2546-Multidimensional Signal ProcessingSyllabus-ET2546-Multidimensional Signal Processing
Syllabus-ET2546-Multidimensional Signal Processing
ย 
Syllabus-ET1304-Digital Signal Processors
Syllabus-ET1304-Digital Signal ProcessorsSyllabus-ET1304-Digital Signal Processors
Syllabus-ET1304-Digital Signal Processors
ย 
Syllabus-ET2544-Experimental Modal Analysis
Syllabus-ET2544-Experimental Modal AnalysisSyllabus-ET2544-Experimental Modal Analysis
Syllabus-ET2544-Experimental Modal Analysis
ย 
Syllabus-ET2542-Adaptive Signal Processing
Syllabus-ET2542-Adaptive Signal ProcessingSyllabus-ET2542-Adaptive Signal Processing
Syllabus-ET2542-Adaptive Signal Processing
ย 
Syllabus-MS2502-Random Processes
Syllabus-MS2502-Random ProcessesSyllabus-MS2502-Random Processes
Syllabus-MS2502-Random Processes
ย 
Syllabus-ET2571-Advanced Applied Signal Processing
Syllabus-ET2571-Advanced Applied Signal ProcessingSyllabus-ET2571-Advanced Applied Signal Processing
Syllabus-ET2571-Advanced Applied Signal Processing
ย 
Syllabus-MA1434-Complex Analysis and Transforms
Syllabus-MA1434-Complex Analysis and TransformsSyllabus-MA1434-Complex Analysis and Transforms
Syllabus-MA1434-Complex Analysis and Transforms
ย 

Recently uploaded

CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
ย 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
ย 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
ย 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
ย 
internship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developerinternship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developerunnathinaik
ย 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application ) Sakshi Ghasle
ย 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Celine George
ย 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
ย 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
ย 
18-04-UA_REPORT_MEDIALITERAะกY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAะกY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAะกY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAะกY_INDEX-DM_23-1-final-eng.pdfssuser54595a
ย 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting DataJhengPantaleon
ย 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
ย 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxUnboundStockton
ย 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
ย 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
ย 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaVirag Sontakke
ย 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsKarinaGenton
ย 
Science lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lessonScience lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lessonJericReyAuditor
ย 

Recently uploaded (20)

CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
ย 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
ย 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
ย 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
ย 
internship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developerinternship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developer
ย 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application )
ย 
Model Call Girl in Tilak Nagar Delhi reach out to us at ๐Ÿ”9953056974๐Ÿ”
Model Call Girl in Tilak Nagar Delhi reach out to us at ๐Ÿ”9953056974๐Ÿ”Model Call Girl in Tilak Nagar Delhi reach out to us at ๐Ÿ”9953056974๐Ÿ”
Model Call Girl in Tilak Nagar Delhi reach out to us at ๐Ÿ”9953056974๐Ÿ”
ย 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
ย 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
ย 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
ย 
18-04-UA_REPORT_MEDIALITERAะกY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAะกY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAะกY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAะกY_INDEX-DM_23-1-final-eng.pdf
ย 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
ย 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
ย 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docx
ย 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
ย 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
ย 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
ย 
Painted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of IndiaPainted Grey Ware.pptx, PGW Culture of India
Painted Grey Ware.pptx, PGW Culture of India
ย 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its Characteristics
ย 
Science lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lessonScience lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lesson
ย 

Applied Adaptive Signal Processing Report

Contents

Abstract
Acknowledgement
List of Figures
List of Acronyms
Chapter 1: Introduction
  1.1 Project Scope
  1.2 Problem Formulation and Project Outline
Chapter 2: Research Methodology and Requirement Analysis
  2.1 Functional Requirements
  2.2 Non-functional Requirements
Chapter 3: Adaptive Signal Processing Filters and Applications
  3.1 Structure of Adaptive Filter
    3.1.1 Spatial Structure or Block Diagram
    3.1.2 Functional Structure
  3.2 Adaptive Filter Performance
    3.2.1 Learning Curve
    3.2.2 Convergence Speed
    3.2.3 Steady State Error (SSE)
  3.3 Adaptive Filter Groups
  3.4 Application Classes
  3.5 Difference between MSE and LSE
Chapter 4: Literature Review
Chapter 5: Least-Mean-Square Adaptive Filters and Applications
  5.2 Least-Mean-Square (LMS) Adaptive Filters
    5.2.1 Some Common Variants of LMS Algorithm
  5.3 Implemented Adaptive Filter Applications
    5.3.1 Adaptive Noise Cancellation (ANC)
    5.3.2 Adaptive Line Enhancement (ALE) or FIR Linear Prediction
    5.3.3 System Identification or Modelling (SI)
Chapter 6: MATLAB and Development Tools
  6.1 MATLAB GUI Design Methodology
    6.1.1 Compact data representation
    6.1.2 Aesthetical data representation
    6.1.3 GUI Development using "GUIDE"
    6.1.4 Programmatic GUI Development
  6.2 Structural GUI Design Tools
    6.2.1 Nested Panels
  6.3 Used Functions
Chapter 7: Algorithm and Software Development
  7.1 Graphical User Interface (GUI) Structure and Elements
    7.1.1 Main GUI Window or Figure
    7.1.2 Nested Panelling
    7.1.3 Popup Menu or Listing
    7.1.4 Slider Control
    7.1.5 Application and Parameter Data Input
    7.1.6 Data storage and retrieval
    7.1.7 Data display axes
    7.1.8 A block of main plotter function
    7.1.9 An instance of functions for applications
    7.1.10 Display results in a new figure
    7.1.11 Data representation, Listening data and Default Parameter Value
  7.2 Software Execution Flow
Chapter 8: Results of Adaptive Algorithms
  8.1 Active Noise Cancellation (ANC)
  8.2 Adaptive Line Enhancement (ALE)
  8.3 System Identification (SI)
Chapter 9: Comparative Performance and Data Analysis
  9.1 Comparative Performance
    9.1.1 Adaptive Noise Cancellation (ANC)
    9.1.2 Adaptive Line Enhancement (ALE)
    9.1.3 System Identification (SI)
Chapter 10: Summary and Conclusions
  10.1 Future Work
References
List of Figures

Figure 1: Original output from the filter
Figure 2: Desired output from the filter
Figure 3: Adaptive control using adaptive filter
Figure 4: Signal approximation using adaptive filter
Figure 5: An N-tap transversal adaptive filter [3]
Figure 6: Adaptive Filter Functional Components
Figure 7: Convergence Speed and SSE
Figure 8: Local Convergence and Global Convergence
Figure 9: Learning Curve
Figure 10: An error signal with associated LC
Figure 11: System Identification with NLMS when step size µ = 0.1, order n = 20 and beta β = 1
Figure 12: System Identification with NLMS when step size µ = 0.01, order n = 20 and beta β = 1
Figure 13: ANC with filter order 30
Figure 14: ANC with filter order 80
Figure 15: Influence of step-size µ on convergence towards εmin [Google Search]
Figure 16: Adaptive Noise Cancellation
Figure 17: Adaptive Line Enhancement
Figure 18: System Identification using Adaptive Filter
Figure 19: Developed GUI without data
Figure 20: Main GUI window with some data
Figure 21: Internal GUI Blocks
Figure 22: Popup menu execution flow
Figure 23: Real-time slider control
Figure 24: Application data input consistency
Figure 25: Representation and Listening to Data
Figure 26: Software Execution Flow
Figure 27: ANC with LMS when µ = .01 and order 30
Figure 28: ANC with LMS when µ = .001 and order 30
Figure 29: ANC with NLMS when µ = .01 and order 30
Figure 30: ANC with NLMS when µ = .001 and order 30
Figure 31: ANC with LLMS when µ = .01, order 30 and leakage .8
Figure 32: ANC with LLMS when µ = .001, order 30 and leakage .8
Figure 33: ANC with ADJLMS when µ = .001, order 30
Figure 34: ANC with ADJLMS when µ = .00001, order 30
Figure 35: ANC with BLMS when µ = .01, order 30
Figure 36: ANC with BLMS when µ = .001, order 30
Figure 37: ANC with BLMSFFT when µ = .01, order 30
Figure 38: ANC with BLMSFFT when µ = .001, order 30
Figure 39: ANC with DLMS when µ = .01, order 30, delay = 11
Figure 40: ANC with DLMS when µ = .001, order 30, delay = 11
Figure 41: ANC with Filtered-x LMS when µ = .01, order 30
Figure 42: ANC with Filtered-x LMS when µ = .001, order 30
Figure 43: ANC with Sign-Data LMS when µ = .01, order 30
Figure 44: ANC with Sign-Data LMS when µ = .001, order 30
Figure 45: ANC with Sign-Error LMS when µ = .01, order 30
Figure 46: ANC with Sign-Error LMS when µ = .001, order 30
Figure 47: ANC with Sign-Sign LMS when µ = .01, order 30
Figure 48: ANC with Sign-Sign LMS when µ = .001, order 30
Figure 49: ALE with LMS when µ = .01, order 30
Figure 50: ALE with LMS when µ = .001, order 30
Figure 51: ALE with LMS when µ = .01, order 30
Figure 52: ALE with LLMS when µ = .001, order 30
Figure 53: ALE with ADJLMS when µ = .001, order 30
Figure 54: ALE with ADJLMS when µ = .0001, order 30
Figure 55: ALE with BLMS when µ = .001, order 30
Figure 56: ALE with BLMS when µ = .0001, order 30
Figure 57: ALE with BLMSFFT when µ = .001, order 30
Figure 58: ALE with BLMSFFT when µ = .0001, order 30
Figure 59: ALE with DLMS when µ = .001, order 30
Figure 60: ALE with DLMS when µ = .0001, order 30
Figure 61: ALE with Filtered-x LMS when µ = .0001, order 30
Figure 62: ALE with Filtered-x LMS when µ = .001, order 30
Figure 63: ALE with Sign-Data when µ = .001, order 30
Figure 64: ALE with Sign-Data when µ = .0001, order 30
Figure 65: ALE with Sign-Error when µ = .0001, order 30
Figure 66: ALE with Sign-Error when µ = .001, order 30
Figure 67: ALE with Sign-Sign when µ = .001, order 30
Figure 68: ALE with Sign-Sign when µ = .0001, order 30
Figure 69: SI with LMS when µ = .001, order 30
Figure 70: SI with LMS when µ = .0001, order 30
Figure 71: SI with NLMS when µ = .01, order 30, beta 1
Figure 72: SI with NLMS when µ = .1, order 30, beta 1
Figure 73: SI with NLMS when µ = .01, order 30, leakage 1
Figure 74: SI with NLMS when µ = .001, order 30, leakage 1
Figure 75: SI with ADJLMS when µ = .00001, order 30, leakage 1
Figure 76: SI with ADJLMS when µ = .0001, order 30, leakage 1
Figure 77: SI with BLMS when µ = .001, order 30
Figure 78: SI with BLMS when µ = .0001, order 30
Figure 79: SI with BLMSFFT when µ = .001, order 30
Figure 80: SI with BLMSFFT when µ = .0001, order 30
Figure 81: SI with DLMS when µ = .001, order 30, Delay 20
Figure 82: SI with DLMS when µ = .0001, order 30, Delay 20
Figure 83: SI with Filtered-x LMS when µ = .001, order 30
Figure 84: SI with Filtered-x LMS when µ = .0001, order 30
Figure 85: SI with Sign-Data when µ = .001, order 30
Figure 86: SI with Sign-Data when µ = .0001, order 30
Figure 87: SI with Sign-Error when µ = .001, order 30
Figure 88: SI with Sign-Error when µ = .01, order 30
Figure 89: SI with Sign-Sign when µ = .0001, order 30
Figure 90: SI with Sign-Sign when µ = .00002, order 30
Figure 91: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS, SD, SE)
Figure 92: Learning Curves ADJLMS
Figure 93: Learning Curves Filtered-xLMS
Figure 94: Learning Curves SS
Figure 95: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS, SD, SE)
Figure 96: Learning Curve ADJLMS
Figure 97: Learning Curve Filt-xLMS
Figure 98: Learning Curve SS
Figure 99: Comparative Learning Curves (LMS, NLMS, LLMS, BLMS, BLMSFFT, DLMS, SD, SE)
Figure 100: Learning Curve ADJLMS
Figure 101: Learning Curve Filt-xLMS
Figure 102: Learning Curve SS
List of Acronyms

ADJLMS    Adjoint Least Mean Square
BLMS      Block Least Mean Square
BLMSFFT   Block Least Mean Square FFT
CS        Convergence Speed
DLMS      Delayed Least Mean Square
DSP       Digital Signal Processing
FILTXLMS  Filtered X-LMS
FD        Frequency Domain
GUI       Graphical User Interface
LC        Learning Curve
LMS       Least-Mean-Squares
LLMS      Leaky Least Mean Square
NLMS      Normalized Least Mean Square
SD        Sign-Data
SE        Sign-Error
SS        Sign-Sign
SSE       Steady State Error
Chapter 1

Introduction

The goal of an adaptive filter is to maintain or derive desired output signal characteristics from an FIR or IIR filter. This goal is achieved via a feedback loop that feeds a measure of the undesired signal characteristics (the error) back to the filter under consideration; the filter then updates its kernel with the newly computed coefficients so as to generate or maintain the desired output characteristics. The calculation of the new coefficients from the error signal, which is to be minimized, is driven by an adapting algorithm.

The error is defined as the deviation of the output signal from the desired signal characteristics. With d(n) the desired signal, y(n) the output signal and e(n) the error signal, the following relations hold:

y(n) = \sum_{i=0}^{N-1} W_i(n)\, x(n-i)

e(n) = d(n) - y(n)

where y(n) is the output signal sequence, d(n) is the desired signal sequence, and e(n) is the difference between the desired signal sequence d(n) and the output signal sequence y(n). Source: [3] (pages 139-188).

From the above, e(n) is the signal sequence that needs to be minimized, and an adaptive filter's ability to do so is what separates it from other types of filters.
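As a concrete illustration of this feedback structure, the following minimal MATLAB sketch computes the filter output and error sample by sample and applies a generic LMS-style coefficient correction. All signals, the filter length N, the step size mu and the toy desired-signal model are illustrative assumptions, not values taken from this report.

```matlab
% Minimal sketch of the adaptive FIR feedback loop described above.
% All signals and parameter values are illustrative assumptions.
x  = randn(2000,1);               % input signal sequence x(n)
d  = filter([1 0.5 -0.3], 1, x);  % toy desired signal sequence d(n)
N  = 8;                           % assumed number of filter taps
mu = 0.01;                        % assumed step size
w  = zeros(N,1);                  % adaptive coefficients W_i(n)
xb = zeros(N,1);                  % delay line: x(n), x(n-1), ..., x(n-N+1)
y  = zeros(size(x));
e  = zeros(size(x));
for n = 1:length(x)
    xb   = [x(n); xb(1:N-1)];     % shift the newest sample into the delay line
    y(n) = w.' * xb;              % y(n) = sum_i W_i(n) x(n-i)
    e(n) = d(n) - y(n);           % error fed back for adaptation
    w    = w + mu * e(n) * xb;    % coefficient update (LMS-type correction)
end
```

With a sufficiently small step size, the squared error e.^2 decays over the iterations, which is exactly the learning behaviour discussed later in Chapter 3.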
Figure 1 shows an output signal; instead of this output, we want the filter to produce the signal shown in Figure 2. To derive the desired signal from the system, we first measure the error signal by finding the mathematical correlation between samples of the output signal and the desired signal. In short, viewed from a high level, the error signal is obtained by subtracting one signal from the other. This error signal is then optimally minimized by updating the operating filter's coefficients through a live feedback loop.

Figure 1: Original output from the filter
Figure 2: Desired output from the filter

The use of adaptive filters can be divided into two main groups: firstly, continuously maintaining an unchanged output signal from a running filter; secondly, approximating a desired signal from the output signal of a filter. Both approaches use the same fundamental adaptive filter structure but differ in orientation and application. Figure 3 shows how adaptive control is implemented using an adaptive filter and how the necessary error signal is computed, while Figure 4 shows how a desired signal is approximated. The two figures look similar in terms of execution sequence and the operating FIR or IIR filter; a closer look, however, reveals a difference in how the error signal computation is oriented.
Figure 3: Adaptive control using adaptive filter
Figure 4: Signal approximation using adaptive filter
1.1 Project Scope

The requirement of the project is to study and understand the adaptive filter structure and LMS-based adaptive filters (mainly LMS, NLMS and LLMS), and subsequently to develop a user-friendly MATLAB software package that facilitates the simulation of these algorithms. The following statement summarizes the project scope and goal:

"Development of a professional MATLAB software that offers a concise work environment for the simulation of key adaptive signal processing algorithms and applications in real time and that can be used in real life."

1.2 Problem Formulation and Project Outline

The development problems that arose and were solved during the project are summarized as the following development questions:

1. How does an adaptive filter work, and what is the functional role of the sub-systems or sub-blocks within it?
2. How are new coefficients calculated, and which mathematical framework is used to calculate them?
3. Which adapting algorithms are used, and how many of them are pre-implemented in MATLAB?
4. How are adaptive filters applied to ANC, ALE and SI, and how are these applications pre-implemented in MATLAB?
5. What type of software already exists that offers a concise work environment for the simulation of adaptive algorithms and applications?
6. How is a MATLAB App and standalone MATLAB software developed?
7. Which methodology is best for developing a GUI in MATLAB, and what are the advantages and disadvantages of each methodology?
8. How are data loaded and stored during run time in a MATLAB App?
9. How should GUI blocks be organized to obtain a user-friendly, compact but coherent GUI?
10. What are the implementation alternatives for MATLAB GUI development, and which method best suits the project need?
11. How can the aesthetic properties of the software be preserved without compromising the functional requirements?
12. How can the different components of the software be integrated into a single module?
Chapter 2 presents the requirement analysis and research methodology. Chapter 3 dissects and discusses adaptive signal processing filters. Chapter 4 reviews relevant existing work by others, in terms of what has been done and what is lacking. Chapter 5 discusses popular LMS-based adaptive signal processing filters and applications. Chapter 6 discusses the different MATLAB GUI design methodologies and development tools. Chapter 7 covers algorithm and software development. Chapter 8 presents the results obtained from the different adaptive algorithms. Chapter 9 provides a comparative performance and data analysis of the algorithms. Chapter 10 gives the project summary, conclusions and probable future work.
Chapter 2

Research Methodology and Requirement Analysis

Every type of software development requires a thorough requirement analysis. Requirements can be divided into two parts: functional requirements and non-functional requirements. The functional requirements form the core of the development, and all of them must be met in order to obtain a working software product. Non-functional requirements, on the other hand, are important but not strictly mandatory for a working product. Nevertheless, some non-functional requirements are essential; without them the software may become unusable and not user friendly.

2.1 Functional requirements

1. MATLAB implementation of adaptive algorithms
2. MATLAB implementation of adaptive applications
3. Comparative performance analysis of adaptive algorithms
4. Graphical User Interface (GUI)
5. Data loading and data writing
6. Run-time data storage
7. Data processing and display

2.2 Non-functional requirements

1. User friendliness
2. Speed and reliability
3. Compact data representation
4. Aesthetic data representation
Chapter 3

Adaptive Signal Processing Filters and Applications

An adaptive filter can be understood literally as a filter that is able to take feedback and, based on that feedback, adapt itself to produce or maintain a desired signal output. An adaptive filter exposes different parameters that provide the flexibility needed to reach optimal performance, and the selection of these parameters directly influences the calculation of the filter coefficients. That is to say, we reduce the error by optimizing a consistently designed performance function. This performance function can be designed either in a statistical framework or in a deterministic framework: in the statistical framework it is the mean-square value of the error signal, while in the deterministic framework the frequent choice is a weighted sum of the squared error signal (both forms are written out below).

3.1 Structure of Adaptive Filter

Adaptive filters can be structurally described in two main ways: spatially and functionally. The spatial structure describes the organization of the filter components without restricting the filter's desired functional output, whereas the functional structure describes the functional role of the sub-systems of each adaptive filter.

3.1.1 Spatial Structure or Block Diagram

The most commonly used structures are the direct form, cascade form, parallel form and lattice. The transversal layout of adaptive filters is used most often; the lattice layout is used when its advantages outweigh those of the transversal layout.

Figure 5: An N-tap transversal adaptive filter [3]
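As referenced above, the two performance functions can be written explicitly as follows. These are the standard textbook forms; the exponential weighting with forgetting factor λ in the deterministic case is an assumed, commonly used choice rather than a definition given in this report.

```latex
% Statistical (stochastic) framework: mean-square error of the error signal
\[ J(n) = E\{\, e^{2}(n) \,\} \]

% Deterministic framework: weighted sum of squared errors, here with an
% assumed exponential weighting factor \lambda (not defined in this report)
\[ \varepsilon(n) = \sum_{k=1}^{n} \lambda^{\,n-k}\, e^{2}(k), \qquad 0 < \lambda \le 1 \]
```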
3.1.2 Functional structure

An adaptive filter can be dissected into the following major parts based on functional role, each of which plays a major part in producing a working adaptive filter.

Figure 6: Adaptive Filter Functional Components

3.1.2.1 Input Signal

The input signal is the data feeder or provider to the adaptive filter. It is the primary signal that needs to be updated, maintained at a constant level, or approximated to the desired signal characteristics. If the input signal has to be maintained at a constant level, then whenever it deviates from the desired level we can find this deviation (error) and subsequently minimize it to maintain a constant desired signal throughput. In the other case, we have an output signal from a filter that needs to be updated with the characteristics of a desired signal; here we find the difference between the output signal and the desired signal, and this difference is the error. We then calculate new adaptive filter coefficients to reduce this error, and these coefficients are used to update the filtering of the input signal.

3.1.2.2 FIR or IIR Filter

The FIR or IIR filter is the main worker of the adaptive filter. Initially, the filter produces its output from the instantaneous input signal given to it. Once the feedback is provided (i.e., the filter coefficients calculated to reduce the error power of the error signal), it updates its output so that it approximates the desired signal or reduces the deviation from it.
3.1.2.3 Output Signal

The output signal is the initial or updated output of the FIR/IIR filter. It can be viewed in two categories: the coarse output signal and the fine output signal. The coarse output signal is the instantaneous output of the FIR/IIR filter, or the output that still deviates from the desired condition. The fine output signal is obtained once the coarse output signal approximates the desired signal; in other words, the fine output signal is the end product of the coarse output signal after the error has been removed from it.

3.1.2.4 Desired Signal

The desired signal is the final expected signal from the adaptive filter. The approximated desired signal is obtained when the adaptive filter converges. We say "approximated" because an adaptive filter converges completely if and only if the error reduces to zero; in reality this is not always the case, and even after the filter converges a steady state error (SSE) remains. In that situation we say that we have approximated the desired signal. The desired signal can also be viewed in two categories: the external-reference desired signal and the maintained desired signal. The external-reference desired signal is a provided signal that is taken as a reference to calculate the error, and through error removal the adaptive filter approximates that signal. The maintained desired signal is the instantaneous output of the FIR/IIR filter, which is kept in a stable state through error removal whenever it deviates from that stability.

3.1.2.5 Error Signal

The error signal is the difference between the output signal and the desired signal. In other words, it is the signal component that the adaptive filter optimally removes as it converges, thereby arriving at the desired condition.

3.1.2.6 Adaptive Control Algorithm

The adaptive control algorithm is the algorithm that the adaptive filter uses to iteratively calculate the new coefficients that optimally reduce the power of the error signal. The choice of adaptive control algorithm depends on the data class, memory resources, computational time, energy requirements and overall cost. MSE-based (e.g. LMS) and LSE-based algorithms are the two commonly used families for calculating the updated coefficients.

3.1.2.7 Feedback loop

The feedback loop is a conceptual element indicating that the coefficients re-computed from the error signal are fed back into the FIR/IIR filter to produce the desired output. Even though it is conceptual, it is of particular importance, as it is what turns a general FIR/IIR filter into an adaptive filter.
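The functional components above map naturally onto MATLAB's pre-implemented adaptive filter objects. The sketch below assumes the DSP System Toolbox's dsp.LMSFilter System object is available; the signals, filter length and step size are illustrative placeholders rather than values used in this project.

```matlab
% Sketch: input x(n), desired d(n), output y(n), error e(n) and coefficients
% using one of MATLAB's pre-implemented LMS objects (assumes DSP System Toolbox).
x = randn(1000,1);                          % input signal (placeholder)
d = filter([0.7 0.2 -0.1], 1, x);           % toy desired signal (placeholder)
lmsFilt = dsp.LMSFilter('Length', 32, ...   % number of FIR taps
                        'StepSize', 0.01);  % adaptation step size mu
[y, e, w] = lmsFilt(x, d);                  % output, error and final coefficients
```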
3.2 Adaptive Filter Performance

The performance of an adaptive filter can be evaluated using the Learning Curve (LC), the Convergence Speed (CS) and the Steady State Error (SSE). The following figures show the LC, CS and SSE. The error power of the error signal drops quickly after the adaptive filter is initialized, and this behaviour is also reflected in the associated learning curve. We can also see that, even though the filter converges very quickly, an SSE still remains in the produced output. Whether this SSE is acceptable or not depends on the requirements of the application domain.

Figure 7: Convergence Speed and SSE

The goal of designing an adaptive filter is to minimize the error signal power, and hence, when provided with the right parameters, the adaptive filter ought to converge. The question, however, is how fast or slowly it converges. This convergence speed can be classified as very fast, fast, above average, average, below average, slow, very slow, and so on.

Figure 8: Local Convergence and Global Convergence
Convergence can be realized in two categories: local convergence and global convergence. In Figure 8, the error signal power starts to converge, then suddenly rises again, repeats this behaviour slightly a couple of times, and finally converges. The convergence before a sudden rise of the error power is local convergence, and the final convergence is the global convergence.

Adaptive filter performance is, however, a relative indicator and varies depending on the application and the desired filter output. For example, minimal SSE could be the only indicator of filter performance and output quality; in another case, CS could be the only indicator; and there are cases where a weighted measure of both CS and SSE serves as the quality measure. The adaptive filter performance criteria can be summarized as follows:

• Fast convergence is important, an optimally low SSE is not important
• Fast convergence is important, an optimally low SSE is important
• Fast convergence is not important, an optimally low SSE is important
• Fast convergence is not important, a standard SSE is important
• Standard convergence is enough, an optimally low SSE is important
• Standard convergence is enough, a standard SSE is enough

Because of such criteria, or similar ones, different adaptive filters and different algorithm parameters are chosen, each of which offers a different level of solution. Through a trial-and-error process, the best adaptive filter with the best parameters is chosen for a given data scenario.

3.2.1 Learning Curve

The learning curve is literally a curve generated by plotting the time-varying error power of the adaptive filter. Over a number of iterations the error power approaches zero, and plotting this decreasing error power in the time domain creates a curve with a gradually descending gradient. The curve provides quick information on the performance of the LMS adaptive filter under consideration.

Figure 9: Learning Curve
• 25. In figure 9 we can see a gradually descending curve that approaches zero. At the left the error power is high, but with increasing iterations of the adaptive algorithm the error power approaches zero.

Figure 10: An error signal with associated LC

In figure 10, the first plot is a gradually converging error signal and the second plot is the associated LC. From the first plot we can see that the error signal converges quickly, and this is also reflected in the LC. The reflection occurs because the same filter coefficients produced the data used to create both plots. In other words, the LC is just a different representation of how the error signal converges, and it is visually more convenient for judging how the adaptive filter is performing.
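The thesis software obtains its learning curves with MATLAB's msesim function; the following minimal sketch shows the same idea by hand, estimating a learning curve as the smoothed instantaneous error power. The synthetic decaying error sequence and the window length are assumptions chosen only for illustration, not data from the project.

% Estimating a learning curve as the smoothed instantaneous error power.
% The synthetic error sequence below is an assumption for illustration only;
% in practice e comes from an adaptive filter run.
e = randn(2000,1) .* exp(-(0:1999)'/400);                  % decaying error samples

instPower  = e.^2;                                         % instantaneous error power
winLen     = 50;                                           % smoothing window length
learnCurve = filter(ones(winLen,1)/winLen, 1, instPower);  % moving average

plot(10*log10(learnCurve + eps));
xlabel('Iteration n'); ylabel('Smoothed error power (dB)');
title('Estimated learning curve');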
• 26. 3.2.2 Convergence Speed
Convergence means gradually minimizing the power of the error signal and arriving at the point that produces the desired signal. Convergence speed (CS) is literally how fast an adaptive algorithm converges, i.e. reduces the error signal power. A slower CS means the adaptive filter takes a long time to minimize the error power; a faster CS means it takes a short time. Adaptive filters iteratively calculate new coefficients to minimize the error power, and the CS varies substantially with the algorithm parameters.

The step size greatly influences the CS of adaptive filters. A smaller step size decreases the CS, which means the adaptive filter takes more time to converge than with a larger step size. This can be clearly seen in the figures below: the convergence is fast when µ = 0.1 is used, but when µ = 0.01 is used the convergence speed drops, which is also reflected in the LC.
• 27. Figure 11: System Identification with NLMS when step size µ = 0.1, order n = 20 and beta β = 1
• 28. Figure 12: System Identification with NLMS when step size µ = 0.01, order n = 20 and beta β = 1

The higher the filter order, the lower the convergence speed. However, this order-versus-convergence-speed behaviour only holds up to a certain threshold, and this threshold varies for different data classes. We found the right filter order through a trial-and-error process and observed that a higher filter order does not always produce the best filter performance; a lower order can perform just as well. Therefore, if the desired adaptive filter performance can be achieved with a lower filter order, this always gives the benefit of less computation time and lower overall cost. Hence the empirically derived filter order is the value that ensures the best filter performance for the specific data case at the best cost. This is demonstrated in figures 13 and 14: even though a higher filter order is used, figure 14 contains more error power than figure 13. In this ANC case that is acceptable and even wanted, since the error signal is the desired speech signal with less noise. The same phenomenon, however, also appears in applications where lower error signal power is always desired, and there a decreasing performance with increasing order is never accepted positively.
• 29. Figure 13: ANC with filter order 30
Figure 14: ANC with filter order 80
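As a complement to figures 11 to 14, the sketch below reproduces the qualitative behaviour on a toy system-identification setup: a normalized LMS loop is run with two step sizes and two filter orders, and the smoothed error power is plotted as a learning curve. The "unknown" system, the noise level and the parameter combinations are assumptions for illustration only, not the data or settings used in the thesis experiments.

% Effect of step size and filter order on convergence (toy experiment).
% The unknown system, noise level and parameter combinations are assumed.
rng(0);
N = 4000;
x = randn(N,1);                                  % white input signal
h = [0.8; -0.5; 0.3; -0.2; 0.1];                 % assumed "unknown" FIR system
d = filter(h, 1, x) + 0.01*randn(N,1);           % desired signal with small noise

params = {0.1 20; 0.01 20; 0.1 80};              % {step size mu, filter order M}
hold on;
for p = 1:size(params,1)
    [mu, M] = params{p,:};
    w = zeros(M,1);  e = zeros(N,1);
    for n = M:N
        u    = x(n:-1:n-M+1);                    % tap-input vector
        e(n) = d(n) - w'*u;                      % a-priori error
        w    = w + (mu/(u'*u + 1e-6))*e(n)*u;    % normalized (NLMS-type) update
    end
    plot(10*log10(filter(ones(100,1)/100, 1, e.^2) + eps));
end
legend('\mu=0.1, M=20','\mu=0.01, M=20','\mu=0.1, M=80');
xlabel('Iteration n'); ylabel('Smoothed error power (dB)');

With these assumed settings, µ = 0.01 converges visibly more slowly than µ = 0.1 at the same order, and M = 80 converges more slowly than M = 20 at the same step size, which is the behaviour described above.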
• 30. 3.2.3 Steady State Error (SSE)
In many cases the error signal power never converges to zero, even after the adaptive filter has converged (i.e. the filter coefficients reach stability and no longer show significant change in value). This persistent error is called the SSE. In many applications this error is not significant, while in others it can be important. The threshold of SSE acceptability therefore varies with the application, which makes it a relative performance indicator.

3.3 Adaptive Filter Groups
A substantial number of adaptive filters are available, varying in learning difficulty, applications and application data class. The common goal of all of these adaptive algorithms is to adapt a coarse signal into a fine signal or to maintain a desired signal output. To accomplish this, the adaptive algorithms offer different levels of flexibility for different problem scenarios. Some of them are grouped [MATLAB] as follows.

• Least-Mean-Square (LMS) Based: LMS, NLMS, LLMS, ADJLMS, BLMS, BLMSFFT, DLMS, Filt-XLMS, SD, SE, SS
• Recursive-Least-Square (RLS) Based: RLS, QRDRLS, HRLS, HSWRLS, SWRLS, FTF, SWFTF
• Affine Projection (AP) Based: AP, APRU, BAP
• Frequency Domain (FD) Based: FDAF, PBFDAF, PBUFDAF, TDAFDCT, TDAFDFT, UFDAF
• Lattice (L) Based: GAL, LSL, QRDLSL

3.4 Application Classes
Adaptive filters are mostly used to process an input signal and, using the updated coefficients calculated from the error signal, to approximate a desired signal or maintain a signal in its original state. Based on this similarity, the applications of adaptive filters can be grouped into four categories [3]: modelling, inverse modelling, linear prediction and interference cancellation. Some applications in each category are:

• Modelling: System Identification (SI) etc.
• Inverse Modelling: Channel Equalization, Magnetic Recording etc.
• Linear Prediction: Autoregressive spectral analysis, Adaptive Line Enhancement (ALE), Speech Coding etc.
• Interference Cancellation: Echo cancellation in telephone lines, Acoustic Echo Cancellation, Active Noise Control (ANC), Beamforming etc.
• 31. 3.5 Difference between MSE and LSE
Mean-Square Error (MSE) and Least-Squares Error (LSE) may sound similar, but they are not the same. MSE is an approach that follows a statistical framework, whereas LSE follows a deterministic framework. If we define a cost or performance function J, then LSE and MSE can be written as follows.

• Total squared error (LSE): J = Σ_{n=0}^{N−1} e²(n)
• Mean squared error (MSE): J = E{|e(n)|²}

Both MSE and LSE have their own advantages and disadvantages, and the choice between them depends on the filtering problem and the associated computational cost. MSE deals with a mean value: we define a statistical sample of convenient size and calculate the mean over that sample. This results in processing fewer samples, and hence lower cost, while still preserving the processed signal's characteristics to a satisfactory level. The differences between L-MSE and LSE can be summarized as follows.

Property | L-MSE | LSE
Framework | Stochastic (i.e. statistical) | Deterministic
Weighting criteria | Sample mean | Total signal
Computational cost | Lower | Higher
Memory requirements | Lower | Higher
Matrix operations | No | Yes
Accuracy | Lower than LSE but robust enough in many cases | Optimal
Performance | Robust, standard or poor (input data dependent) | Robust
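The two cost definitions can be contrasted directly on the same error sequence; a minimal sketch, with an assumed error vector, is given below.

% Contrasting the deterministic LSE cost with a sample-mean MSE estimate.
% The error sequence e is an assumption for illustration only.
e = 0.5*randn(10000,1);

J_lse = sum(e.^2);      % total squared error over all N samples (LSE)
J_mse = mean(e.^2);     % sample-mean estimate of E{|e(n)|^2} (MSE)

fprintf('LSE cost J = %.2f\n', J_lse);
fprintf('MSE cost J = %.4f\n', J_mse);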
• 32. Chapter 4
Literature Review

Adaptive filters are very popular among scientists and engineers, and a rich body of literature is therefore available for study. This literature can be broadly classified by orientation: general reference books, specialized reference books, general articles, project-result-based articles, and so on. It is impossible to study all of these references because of their sheer size and complexity, so an in-depth literature review is impractical. Instead, we have studied selected parts of different books and skimmed the chapters required for this project. The literature is therefore reviewed here from a high-level point of view and according to its orientation.

The book Adaptive Filter Theory [1] by Simon Haykin is one of the best books covering the most important adaptive filtering concepts in a single volume. The book progresses in a foundation-to-generalization manner: for example, one has to first understand the method of steepest descent, Wiener filters, and the difference between the stochastic (statistical) and deterministic approaches in order to understand the L-MSE and LSE adaptive control algorithms. The book therefore begins with a basic introduction, then discusses stochastic processes and models and the method of steepest descent, and then treats the LMS algorithm. The whole book follows a convenient and pedagogically friendly progression that is very useful for students and readers.

The book Adaptive Filters: Theory and Applications [3] by B. Farhang-Boroujeny is another book written in a very legible and understandable way. It mainly focuses on LMS-based algorithms but also discusses other adaptive filtering issues. Its introduction is particularly useful, providing a lot of information in a short space.

The book Statistical Digital Signal Processing and Modeling [2] by Monson H. Hayes is also a good book for studying adaptive signal processing. It first covers the fundamental concepts needed to understand adaptive filtering and ends with a dedicated chapter on adaptive filters.

Furthermore, the books [4, 5, 6, 7, 8, 9, 10, 11, 12, 13] are also good resources for studying adaptive filters. Some focus on adaptive filtering fundamentals, while others focus on specific applications. The journal articles [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28] discuss specific applications of particular adaptive filters. These papers clearly depict the reliability, scalability and overall performance of adaptive filters from various perspectives, and the usefulness of the various adaptive filter parameters is clearly understandable from their discussions.
• 33. Chapter 5
Least-Mean-Square Adaptive Filters and Applications

In this project we have studied the LMS, NLMS and LLMS adaptive filters, and we have also produced results using the other LMS-based adaptive filters (ADJLMS, BLMS, BLMSFFT, DLMS, Filt-xLMS, Sign-Data, Sign-Error, Sign-Sign). Since a good number of adaptive filters are already implemented in MATLAB, we have also included those filters in the developed software and generated results from some of them in order to understand the LMS algorithms comparatively. The results from these algorithms are given in the appendices.

5.2 Least-Mean-Square (LMS) Adaptive Filters
Least-Mean-Square (LMS) adaptive filters reduce the error signal power in a mean-square sense, which is literally why they are called LMS adaptive filters. In short, for stationary input and desired signals, the LMS adaptive filter becomes a practical implementation of the optimal Wiener filter in the MSE sense; put differently, the optimal Wiener filter is obtained when the cost function is the MSE. Another important foundation of the LMS filter is the steepest descent algorithm. Steepest descent is not an adaptive filter by itself, but it is the basis for calculating the updated coefficients when the signal statistics are known, and it thus serves as the fundamental basis of the LMS adaptive filter. The steepest descent algorithm is given below.

• Initialize the filter coefficients with a start value, w(0) at n = 0.
• Determine the gradient ∇J(n), which points in the direction in which the cost function increases maximally: ∇J(n) = −2p + 2Rw(n), where p is the cross-correlation vector and R the input autocorrelation matrix.
• Adjust the coefficient vector w(n + 1) in the direction opposite to the gradient, with the adjustment weighted down by the step size µ: w(n + 1) = w(n) + (1/2) µ [−∇J(n)].

The LMS algorithm is the stochastic, or random, realization of the steepest descent algorithm. That is, the LMS algorithm updates the signal statistics continuously, while steepest descent works in a deterministic way; in short, the LMS algorithm is a stochastic gradient method and steepest descent a deterministic gradient method. The steepest descent algorithm uses the deterministic cost function J = E[e²(n)], while the LMS algorithm uses the stochastic, coarsely estimated cost function Ĵ = e²(n). The coarse estimate of the cost function results in faster processing, reciprocally less computational
• 34. overhead and, at the same time, preserves the ability to track the signal characteristics. The error signal reduction of the general LMS adaptive filter is thus based on the following relationships:

w(n + 1) = w(n) − µ ∇e²(n)

Here w(n) = [w₀(n), w₁(n), …, w_{N−1}(n)]^T, µ is the step-size parameter of the algorithm and ∇ is the gradient operator.

∇e²(n) = −2 e(n) x(n), where x(n) = [x(n), x(n − 1), …, x(n − N + 1)]^T

Substituting the latter into the first equation gives

w(n + 1) = w(n) − µ {−2 e(n) x(n)}

Hence we get the LMS recursion

w(n + 1) = w(n) + 2 µ e(n) x(n)

The step size has a major influence on the convergence behaviour towards Ĵ_min. As figure 15 shows, a smaller step size gives a smoother but slower convergence towards Ĵ_min, while a larger step size converges faster at the price of larger fluctuations.

Figure 15: Influence of the step size µ on the convergence towards Ĵ_min [Google Search]
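The recursion above can be written out directly as a short MATLAB loop. The sketch below is a hand-written illustration of the update w(n + 1) = w(n) + 2µe(n)x(n); it is not the code of the developed software, which relies on MATLAB's pre-implemented adaptive filter algorithms, and the function and variable names are illustrative only.

function [y, e, w] = lms_sketch(x, d, M, mu)
% LMS_SKETCH  Minimal LMS recursion w(n+1) = w(n) + 2*mu*e(n)*x(n).
% x : input signal (column vector)      d  : desired signal (column vector)
% M : number of filter taps             mu : step size
    N = length(x);
    w = zeros(M,1);                 % initial coefficients, w(0) = 0
    y = zeros(N,1);  e = zeros(N,1);
    for n = M:N
        u    = x(n:-1:n-M+1);       % current tap-input vector
        y(n) = w' * u;              % output  y(n) = w^T(n) x(n)
        e(n) = d(n) - y(n);         % error   e(n) = d(n) - y(n)
        w    = w + 2*mu*e(n)*u;     % update with correction term 2*mu*e(n)*x(n)
    end
end

A call such as [y, e, w] = lms_sketch(x, d, 20, 0.005) would return the filter output, the error signal used for the learning curve and the final coefficients; the step size must of course be chosen small enough for the given input power to keep the recursion stable.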
• 35. The basic components of the LMS algorithm can be written as follows in terms of input, output and functional form.

Input
- Initial filter coefficient vector, w(n)
- Input signal vector, x(n)
- Desired output vector, d(n)

Output
- Filter output, y(n)
- Updated coefficient vector, w(n + 1)

Functional form
- Input-output relation: y(n) = w^T(n) x(n)
- Error relation: e(n) = d(n) − y(n)
- Coefficient update relation: w(n + 1) = w(n) + 2 µ e(n) x(n), where 2 µ e(n) x(n) is the correction term

The basic reason for the popularity of the LMS adaptive filter is its computational simplicity. Its computational overhead is about 2N + 1 multiplications and 2N additions per iteration; the multiplications break down as follows:

- Calculating the output y(n): N multiplications
- Obtaining (2µ) · e(n): 1 multiplication
- The scalar-by-vector multiplication 2µ e(n) · x(n): N multiplications

5.2.1 Some Common Variants of the LMS Algorithm
In practice, three common LMS variants are the standard LMS (SLMS), the normalized LMS (NLMS), i.e. a time-varying step size LMS, and the leaky LMS (LLMS). All three variants have almost the same design structure, differing only in the update equation. The standard LMS algorithm has the following update equation.

Standard LMS (SLMS)
w(n + 1) = w(n) + µ e(n) u(n)

Here w(n + 1) is the corrected coefficient vector,
• 36. µ is the step size of the algorithm, e(n) is the error signal and u(n) is the input vector of the filter.

The basic difference between the standard LMS algorithm and the normalized algorithm lies in the characteristics of the step size: in the NLMS the step size is time-varying, in contrast to the SLMS. The NLMS has the following update equation.

Normalized LMS (NLMS)
w(n + 1) = w(n) + µ e(n) u(n) / ‖u(n)‖²

We can rewrite the above equation as

w(n + 1) = w(n) + [µ / ‖u(n)‖²] e(n) u(n)

Therefore we get

w(n + 1) = w(n) + µ(n) e(n) u(n), where µ(n) = µ / ‖u(n)‖²

The LLMS has a similar update equation, except that it includes a leakage factor. The leakage factor has the range (0, 0.1) and is directly related to the steady state error (SSE): increasing the leakage factor increases the SSE, and decreasing it reduces the SSE. The LLMS has the following cost function and update equation.

Leaky LMS (LLMS)
J(n) = e²(n) + α Σ_{k=0}^{N−1} w_k²(n)

w(n + 1) = (1 − µα) w(n) + µ e(n) u(n)

We can see that the cost function includes both the error signal and the filter coefficients, weighted by the leakage factor α, which is why the LLMS is able to reduce the coefficient overflow problem. If α = 0, the update equation reduces to the standard LMS update.

The LMS algorithm is often implemented on digital signal processors. Since DSPs often have limited computational resources, the LMS computational overhead is crucially important in a DSP implementation. Computationally simpler versions of the standard LMS algorithm are the Sign-Error LMS, Sign-Data LMS and Sign-Sign LMS, which require fewer multiplication operations than the standard LMS. The simplification from the standard LMS to the sign LMS variants is done using the following equations.
• 37. sgn(x) = 1 for x > 0, 0 for x = 0, −1 for x < 0

Sign-Error LMS: w(n + 1) = w(n) + µ · sgn(e(n)) · u(n)
Sign-Data LMS: w(n + 1) = w(n) + µ · e(n) · sgn(u(n))
Sign-Sign LMS: w(n + 1) = w(n) + µ · sgn(e(n)) · sgn(u(n))

From the above equations it is clear that the convergence speed of the sign LMS algorithms is slower than that of the standard LMS, and that the SSE obtained with a sign LMS will be larger than with the standard LMS. Sign LMS algorithms are therefore useful where computational resources matter more than performance.

In ANC we often have a large input signal vector, and at the same time real-time processing of the adaptive filter is required for real-time performance. In this case the BLMSFFT can be used, which reduces the computational overhead through fewer multiplications than the standard LMS. In the BLMSFFT the input signal is first transformed into the frequency domain and the filter coefficients are updated in the frequency domain. In the standard LMS filter the coefficients are updated sample by sample, which is better for performance but increases the computational overhead and takes more time. In the BLMSFFT adaptive filter the block size equals the filter length and the coefficients are updated block by block.

5.3 Implemented Adaptive Filter Applications
We have discussed the applications of adaptive filters earlier. In this project we have implemented the following applications.

5.3.1 Adaptive Noise Cancellation (ANC)
In adaptive noise cancellation we have a measured signal that contains primary noise from the same signal source. In addition, a reference noise is available that is, knowingly or unknowingly, correlated with the primary noise contained in the measured signal. The reason for using a reference noise is that we want to adaptively estimate how much undesired noise is contained in the primary measured signal. Because of the adaptive reference noise, the necessary noise reduction can be estimated through a real-time experiment to ensure the best quality of the desired signal.

If x(n) is the primary measurement signal, which contains both the desired signal s(n) and noise v(n) from the same signal source, then

x(n) = s(n) + v(n)
• 38. If we have a reference noise g(n) that is correlated with the noise v(n), and the adaptive filter output y(n) is the estimate of v(n) obtained from g(n), then

e(n) = {s(n) + v(n)} − y(n)
e(n) ≈ s(n)

In the following figure, the filtered reference noise is subtracted from the measured signal to obtain the error signal, and this error signal is the approximated desired signal.

[Block diagram: the measured signal x(n) = s(n) + v(n) is the desired input; the correlated noise g(n) drives the adaptive FIR filter, whose output y(n) is subtracted to give e(n) = x(n) − y(n) ≈ s(n); the error updates the coefficients through the feedback loop.]
Figure 16: Adaptive Noise Cancellation

5.3.2 Adaptive Line Enhancement (ALE) or FIR Linear Prediction
Adaptive line enhancement is used when a narrowband desired signal is mixed with wideband undesired noise and we have no knowledge of the wideband noise. In this scenario we delay the received signal slightly, but by enough to decorrelate the wideband noise, and then use an FIR linear predictor to estimate the desired narrowband signal. This estimate is subtracted from the primary signal to obtain the estimation error, and reducing this error yields the enhanced desired narrowband signal. The quality of the enhanced narrowband signal therefore depends on the performance of the FIR linear predictor.

Given a received signal v(n) in which wideband noise w(n) masks the desired narrowband signal x(n), we want to enhance the narrowband desired signal x(n). Then,
• 39. v(n) = x(n) + w(n)

The FIR linear predictor output is

x̂(n) = Σ_{k=0}^{M−1} h(k) v(n − D − k)

e(n) = v(n) − x̂(n) ≈ w(n)   (the estimated wideband noise)

To get the optimal FIR linear predictor coefficients,

Σ_{k=0}^{M−1} h(k) r_vv(l − k) = r_vv(l + D),  l = 0, 1, …, M − 1

The expected value of the right-hand side of the above equation is the statistical autocorrelation of the narrowband signal x(n), which can be seen as follows:

r_vv(l + D) = Σ_{n=0}^{N} v(n) v(n − l − D)
            = Σ_{n=0}^{N} [w(n) + x(n)] [w(n − l − D) + x(n − l − D)]
            = r_ww(l + D) + r_xx(l + D) + r_wx(l + D) + r_xw(l + D)
            = 0 + r_xx(l + D) + 0 + 0   (assumed, since the delay decorrelates the wideband noise and w(n) is uncorrelated with x(n))
            = r_xx(l + D) = γ_xx(l + D)

In the following figure, the primary signal is delayed to decorrelate the wideband noise and then fed into a linear FIR predictor that best estimates the narrowband desired signal x(n); this estimate is used to form the wideband error. Subsequently the error is reduced and the enhanced narrowband desired signal x(n) is obtained.

[Block diagram: the received signal v(n) = x(n) + w(n) passes through a decorrelation delay v(n − D) into the adaptive FIR predictor; the predictor output is the estimated narrowband (enhanced) output, and the estimated wideband error e(n) updates the coefficients through the feedback loop.]
Figure 17: Adaptive Line Enhancement
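A minimal sketch of this delay-and-predict structure is given below, using a normalized LMS update for the predictor. The tone frequency, noise level, delay D and order M are assumptions chosen only to make the example self-contained; they are not the parameters used in the thesis experiments.

% Adaptive line enhancement sketch: a narrowband tone masked by wideband noise.
% All signal and filter parameters below are illustrative assumptions.
rng(1);
N  = 5000;  n = (0:N-1)';
v  = sin(2*pi*0.05*n) + 0.8*randn(N,1);    % received signal v(n) = x(n) + w(n)

D  = 10;                                   % decorrelation delay
M  = 32;                                   % predictor order
mu = 0.5;                                  % NLMS step size

vd   = [zeros(D,1); v(1:end-D)];           % delayed input v(n - D)
h    = zeros(M,1);  xhat = zeros(N,1);  e = zeros(N,1);
for k = M:N
    u       = vd(k:-1:k-M+1);              % delayed tap-input vector
    xhat(k) = h' * u;                      % estimated narrowband signal
    e(k)    = v(k) - xhat(k);              % estimated wideband error
    h       = h + (mu/(u'*u + 1e-6)) * e(k) * u;   % normalized update
end
% xhat is the enhanced narrowband output; e approximates the wideband noise.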
• 40. 5.3.3 System Identification or Modelling (SI)
System identification is the modelling, or extraction, of the impulse response of an unknown system by replicating a similar impulse response in an adjacent FIR filter. The input sequence x(n) is fed into both the unknown system and the adjacent FIR filter. The FIR filter output is subtracted from the unknown system's output to obtain the error sequence e(n). The new FIR filter coefficients are then computed from the error sequence and the error is minimized to obtain the corrected coefficients. The optimally minimized coefficients replicate, or approximate, the impulse response of the unknown system. Thus the unknown system's impulse response is modelled, without any prior knowledge, using an adaptive FIR filter.

To model an unknown system with an FIR filter with M adjustable coefficients:

FIR filter with M coefficients: y(n) = Σ_{k=0}^{M−1} h(k) x(n − k)
Unknown system's output: d(n)
Error sequence: e(n) = d(n) − y(n)

To get the minimized, or optimized, coefficients h(k) with N + 1 observations, the cost function is

J_M = Σ_{n=0}^{N} [ d(n) − Σ_{k=0}^{M−1} h(k) x(n − k) ]²

and minimizing it leads to the normal equations

Σ_{k=0}^{M−1} h(k) r_xx(l − k) = r_yx(l),  l = 0, 1, …, M − 1

where r_xx(l) is the autocorrelation of the sequence x(n) and r_yx(l) is the cross-correlation of the system output with the input sequence.

In the figure, we can clearly see that the input signal is provided to both the FIR filter and the unknown system. The FIR filter is initialized with some best-guess coefficients. Then, from the error signal, we can measure the deviation of the default coefficients from the desired coefficients by calculating new, corrected coefficients.
• 41. [Block diagram: the input x(n) drives both the unknown time-variant system, producing the desired signal d(n), and the adaptive FIR/IIR filter, producing the output y(n); the error e(n) = d(n) − y(n) updates the coefficients through the feedback loop.]
Figure 18: System Identification using Adaptive Filter
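A minimal sketch of this modelling setup is given below: the same input drives an assumed "unknown" FIR system and an adaptive FIR filter with a normalized LMS update, and after convergence the adaptive coefficients approximate the unknown impulse response. The impulse response, noise level and parameters are assumptions for illustration only.

% System identification sketch: identify an assumed unknown FIR system.
rng(2);
N  = 8000;
x  = randn(N,1);                           % common input x(n)
h  = [0.9; -0.5; 0.3; 0.1];                % assumed unknown impulse response
d  = filter(h, 1, x) + 0.001*randn(N,1);   % unknown system output d(n)

M  = 8;  mu = 0.4;                         % adaptive filter order and NLMS step size
w  = zeros(M,1);
for n = M:N
    u = x(n:-1:n-M+1);                     % tap-input vector
    e = d(n) - w'*u;                       % error e(n) = d(n) - y(n)
    w = w + (mu/(u'*u + 1e-6)) * e * u;    % normalized coefficient update
end
disp([h; zeros(M-length(h),1)].');         % true response, zero-padded to M taps
disp(w.');                                 % identified coefficients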
• 42. Chapter 6
MATLAB and Development Tools

6.1 MATLAB GUI Design Methodology
MATLAB is resource rich and offers several alternatives for developing software. For example, to develop a GUI in MATLAB we can either use the GUI pre-form GUIDE or write the GUI programmatically. For run-time data storage we can use either the guidata() function or the setappdata()/getappdata() functions. For function management we can use either a "multiple-function" or a "nested-function" approach, and for the GUI structural blocks either a "single panel" or a "nested panels" approach. Each of these alternatives has its own trade-offs and needs to be used according to the needs of the software. Some of these alternatives are discussed in more detail in the following sections.

6.1.1 Compact data representation
The goal of compact data representation is to make optimal use of the spatial area available in a data display and to reuse the same space to display multiple data. In MATLAB this can easily be accomplished using the "Visible" property: when "Visible" is "on" the corresponding GUI elements are shown, and vice versa. A set of GUI elements can therefore be made visible or invisible within one execution instance, and this flexibility can be used to place multiple GUI elements at the same spatial coordinates and show them only when needed (a minimal sketch of this pattern is given just before section 6.1.3).

6.1.2 Aesthetical data representation
The overall aesthetics of a software workspace matter just as the aesthetics of a physical workspace matter for concentrating on work. Aesthetics always influence humans, because the human mind drives the brain and our mind likes beauty. Therefore the most used data should be placed at the focal point of convenient eye focus, and data should be presented with pleasant but eye-friendly colours. Moreover, data in a GUI should be spread in a coherent manner so that visibility does not become congested even when more data is shown. All of these aesthetic aspects were attempted to be maintained in the developed software.
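A minimal sketch of the "Visible"-based space reuse described in section 6.1.1 is given below: two panels occupy the same coordinates and a button toggles which one is shown. The handle and function names are illustrative and are not those used in the developed software.

function visibilitySketch
% Two panels share the same screen area; only the panel whose 'Visible'
% property is 'on' is shown. Illustrative sketch only.
    fig    = figure('Name','Visible-property sketch');
    panelA = uipanel(fig,'Title','Panel A','Position',[.1 .15 .8 .8],'Visible','on');
    panelB = uipanel(fig,'Title','Panel B','Position',[.1 .15 .8 .8],'Visible','off');
    uicontrol(fig,'Style','pushbutton','String','Swap panels', ...
        'Units','normalized','Position',[.4 .02 .2 .08],'Callback',@swapPanels);

    function swapPanels(~,~)
        % Toggle which of the two overlapping panels is displayed.
        if strcmp(get(panelA,'Visible'),'on')
            set(panelA,'Visible','off');  set(panelB,'Visible','on');
        else
            set(panelA,'Visible','on');   set(panelB,'Visible','off');
        end
    end
end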
• 43. 6.1.3 GUI Development using "GUIDE"
In MATLAB, "GUIDE" is a pre-developed GUI development form. It allows its user to place GUI elements in the GUI using drag and drop, and it also allows the functionality of GUI elements to be extended through further programming. There are both advantages and disadvantages to this approach, discussed as follows.

6.1.3.1 Advantages
• Less time-consuming
• Best for prototyping
• Best for short-term use
• Best for simpler GUIs
• An easy solution for novice computing professionals or engineers

6.1.3.2 Disadvantages
• Does not offer full understanding of GUI construction
• There are cases where fixing GUI errors can take more time than in a programmatic implementation
• Two files, i.e. ".m" and ".fig", must be tracked for every GUI
• GUIDE-generated code is messy and large
• Small changes in the GUI cause substantial reordering of the corresponding GUI code, so it is not worthwhile to track the code in a source code control system (e.g. CVS)

6.1.4 Programmatic GUI Development
In MATLAB a GUI can also be developed programmatically. This approach has substantial advantages but also contains some drawbacks. The advantages outweigh the drawbacks, and we have therefore developed the GUI in this project programmatically. The advantages and disadvantages are as follows.

6.1.4.1 Advantages
• Faster overall if implemented with good experience and expertise
• Best for applications that will be used long-term
• Best for applications that will evolve and grow in complexity
• Allows the use of nested functions
• Hand-coding the GUI results in lucid, simpler, easy-to-follow code
• Easier deployment; for example, it is easier to upgrade and update the GUI when there are fewer files and less code
• The best solution for competent or advanced computing professionals, engineers, scientists and researchers
• 44. • The GUI layout can be controlled programmatically, making proper adaptation to various screen sizes possible
• GUI-related code can be reused
• Easy to keep track of changes made to earlier versions of the code through a source code control system (e.g. CVS)

6.1.4.2 Disadvantages
• Longer learning curve
• One has to start from scratch
• Takes more time to create a simple GUI compared to GUIDE

6.2 Structural GUI Design Tools
The structure of a GUI depends on the extent and type of the GUI elements used to construct it. We can formulate the GUI structure in two categories: the "skin" structure and the "code" structure. For the skin structure, two notions are important in GUI development: 1. the GUI elements, and 2. how these elements are placed within the GUI. We have used "nested panels" in this project, which has shaped both the "skin" and the "code" structure of the GUI. We have also used "nested functions", which has mostly shaped the "code" structure. Both "nested panels" and "nested functions" have their own trade-offs and are discussed as follows. In this project we have used "nested panels" because the advantages outweigh the disadvantages.

6.2.1 Nested Panels
"Nested panels" means putting several panels inside a single parent panel. A parent panel can have several levels of child panels depending on the degree of nesting; in other words, a parent panel can have child panels and grand-child panels, which in turn results in several parent panels within a grandparent panel. The advantages and disadvantages of "nested panels" are as follows.

6.2.1.1 Advantages
• Realignment only has impact within the child panel; GUI elements in outer panels stay intact
• Offers GUI elements locked within a certain GUI area, preventing accidental realignment
• All components within a parent panel can easily be relocated with exactly the same alignment ratio
• Facilitates modular GUI development
• Facilitates reuse of code in another symmetric panel with the same alignment ratio
• 45. 6.2.1.2 Disadvantages
• If the parent panels need to be reorganized, the whole GUI layout has to be re-implemented

6.2.2 Nested Functions
"Nested functions" means putting several, or even hundreds of, child functions within a single parent function. The advantages and disadvantages of this approach are as follows.

6.2.2.1 Advantages
• It is possible to use variables that are not explicitly passed as input arguments, i.e. externally scoped variables from the parent function
• A handle created in the parent function can be used for data storage purposes from within a nested function

6.2.2.2 Disadvantages
• When the code becomes large, one function containing several hundred nested functions becomes inconvenient for the programmer

6.3 Used Functions
In MATLAB there are cases that can only be solved with one particular function, with no alternatives available. There are also cases that can be solved with several alternative functions, where the user has to choose based on need and convenience.

• Main GUI window: the "figure" function
• GUI element handling: the "function handle" of each GUI element
• GUI element customization: each function's associated "Property" and "Values"
• GUI elements: "uimenu", "uitoolbar", "uipushtool", "uipanel", "uicontrol", "axes", "getappdata", "uitable", "uigetfile"
• Run-time data storage: "guidata", "setappdata"
• Callback event execution: "Callback" and the associatively directed functions
• Data loading: "dlmwrite", "fileparts"
• Learning curve calculation: the "msesim" function
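The nested-function pattern from section 6.2.2, in which a callback uses a variable scoped in its parent function without it being passed as an argument, can be reduced to the following minimal sketch. The names are illustrative only and do not appear in the developed software.

function nestedFunctionSketch
% The callback onClick reads and writes clickCount, which lives in the
% parent function's workspace (an externally scoped variable).
    clickCount = 0;
    fig = figure('Name','Nested function sketch');
    uicontrol(fig,'Style','pushbutton','String','Count', ...
        'Units','normalized','Position',[.4 .45 .2 .1],'Callback',@onClick);

    function onClick(~,~)
        clickCount = clickCount + 1;                  % shared with parent scope
        set(fig,'Name',sprintf('Clicked %d times',clickCount));
    end
end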
• 46. Chapter 7
Algorithm and Software Development

7.1 Graphical User Interface (GUI) Structure and Elements
The Graphical User Interface (GUI) is composed of several elements such as the menubar, menus, toolbar, pushbuttons, popup menus, sliders, axes, text and edit boxes, as well as design structures such as panels, as briefly mentioned in the previous chapter. All of these elements are placed in the coordinate system of the main parent figure. In other words, the whole MATLAB GUI is an instance of the figure function, which contains the various subcomponents used to accomplish the tasks of the software.

7.1.1 Main GUI Window or Figure
In MATLAB the whole GUI is realized within a single function called "figure". The function is called with the desired arguments and in turn generates a blank GUI window in accordance with the passed properties. This blank GUI window has a horizontal and a vertical coordinate, and we have placed the GUI elements into it using these coordinates. After declaration, the "figure" function returns a handle to the function, and reciprocally to the blank GUI window; we have used this handle for placing the other GUI elements into the blank parent GUI window. In the following code we first declare the main parent "figure" function and then place the menubar, menus and toolbar into the generated main GUI window.

myHandle = figure('Visible','off','HandleVisibility','callback', ...
    'NumberTitle','off','MenuBar','None','Resize','off', ...
    'Name','A MATLAB Simulation Software for Key Adaptive Algorithms and Applications, Developed By Main Uddin-Al-Hasan', ...
    'units','normalized','outerposition',[0 0 1 1],'Visible','on');

myMenu1  = uimenu(myHandle,'Label','File');
addItem2 = uimenu(myMenu1,'Label','Load Data','Callback',@loadData);
addItem4 = uimenu(myMenu1,'Label','Close','Callback',@closeFigure);

myToolbar = uitoolbar(myHandle);
img1  = imread('new.png');
img11 = imresize(img1,[25,25]);
tool1 = uipushtool(myToolbar,'CData',img11,'Separator','on', ...
    'TooltipString','Load Data','HandleVisibility','off', ...
    'ClickedCallback',@loadData);

In figure 19 we can see the structure of the developed GUI. The main parent figure contains all GUI elements and panels.
• 47. Figure 19: Developed GUI without data

In figure 19, from the middle to the left there are four panels of dissimilar sizes. The top two panels are child panels within a parent panel. The bottom two panels are individual panels positioned in the main parent figure coordinates. From the middle to the right there are four display panels, each of which is locked into another display parent panel, and this parent display panel is locked into the main parent figure coordinates.

7.1.2 Nested Panelling
Figure 20: Main GUI window with some data
• 48. In figure 20, the bottom left panel of the main GUI window is populated with several child panels, and each panel is populated with several GUI elements. In the following code we first declare four parent panels; all other GUI elements are placed into these four parent panels. This nested panelling offers modular software development: if we want to swap the left and right halves of the above GUI, we only need to change the four coordinate values of the corresponding four parent panels and can disregard the coordinate locations of all other GUI elements. That is to say, when we move a parent panel we move all child panels within it, and their internal location consistency stays unchanged.

% Creating Parent Panels
DataAndSelection = uipanel(myHandle,'BorderType','none', ...
    'BackgroundColor','white','Position',[.0 .70 .5 .30]);
AlgorithmParameter = uipanel(myHandle,'BorderType','none', ...
    'BackgroundColor','white','Position',[.0 .0 .3 .70]);
titleData = uicontrol(AlgorithmParameter,'Style','text', ...
    'String','Algorithm Paramters','BackgroundColor',[.5 .5 1], ...
    'Units','normalized','FontSize',12,'Position',[.0 .95 1 .05]);
LoadedDataDisplay = uipanel(myHandle,'BorderType','none','Position',[.3 .0 .2 .70]);
SignalDisplay = uipanel(myHandle,'BorderType','none','Position',[.5 .0 .5 1]);

In the following code we create two child panels. In the first child panel we place the popup menus, the default data load option and the execution push button. In the second child panel we place the GUI elements for the ALE and SI application data input.

% Creating child panels for Data&Selection
AlgorithmsAndApplications = uipanel(DataAndSelection,'BorderType','line', ...
    'HighlightColor',[.5 .5 1],'ShadowColor',[.5 .5 1], ...
    'FontSize',12,'FontWeight','normal','Position',[.0 .0 .35 1]);
titleData = uicontrol(AlgorithmsAndApplications,'Style','text', ...
    'String','Algorithms & Applications','BackgroundColor',[.5 .4 1], ...
    'Units','normalized','FontSize',12,'Position',[.0 .876 1 .124]);
ApplicationData = uipanel(DataAndSelection,'Visible','off','BorderType','line', ...
    'FontSize',12,'HighlightColor',[.5 .6 1], ...
    'ShadowColor',[.5 .6 1],'Position',[.35 .0 .65 1]);
titleData = uicontrol(ApplicationData,'Style','text', ...
    'String','Application Data','BackgroundColor',[.5 .7 1], ...
    'Units','normalized','FontSize',12,'Position',[.0 .876 1 .124]);

In the following code we create a child panel for each class of algorithms. Then, inside the child panel of each class, we place grand-child panels for each individual algorithm.

% Creating child panels for each Algorithm Type
LMSAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
RLSAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
APAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
FDAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
LBAlgorithmParameter = uipanel(AlgorithmParameter,'Visible','off', ...
    'BorderType','none','Position',[.0 .0 1 .95]);
• 49. In the following code we create several grand-child panels, one for each type of LMS-based algorithm, and then populate each of them with the corresponding algorithm properties.

% Creating child panels for LMS Based Algorithms
lms = uipanel(LMSAlgorithmParameter,'Title','LMS','Position',[.0 .66 .333 .33]);
nlms = uipanel(LMSAlgorithmParameter,'Title','NLMS','Position',[.333 .66 .333 .33]);
llms = uipanel(LMSAlgorithmParameter,'Title','LLMS','Position',[.666 .66 .333 .33]);
adjlms = uipanel(LMSAlgorithmParameter,'Title','ADJLMS','Position',[.0 .33 .333 .33]);
blms = uipanel(LMSAlgorithmParameter,'Title','BLMS','Position',[.333 .33 .333 .33]);
blms_fft = uipanel(LMSAlgorithmParameter,'Title','BLMS-FFT','Position',[.666 .33 .333 .33]);
dlms = uipanel(LMSAlgorithmParameter,'Title','DLMS','Position',[.0 .0 .333 .33]);
filtxlms = uipanel(LMSAlgorithmParameter,'Title','FILT-XLMS','Position',[.333 .0 .333 .33]);
sDESlms = uipanel(LMSAlgorithmParameter,'Title','SD/SE/SS','Position',[.666 .0 .333 .33]);

In figure 21 we can see the internal blocks of the resulting GUI; the position of each block in this figure corresponds exactly to the developed GUI.

[Block diagram of the GUI: the main parent figure holds the menubar/toolbar; a parent panel for selection, execution and application data (with child panels for selecting applications and algorithms and for entering ALE and SI data); a parent panel for algorithm parameters (child panels 1 to 9); a parent panel for the loaded data display; and a data display parent panel with child panels for the original signal, all estimated signals, all error signals and all learning curves, each with a grand-child panel for axis customization and listening.]
Figure 21: Internal GUI Blocks

The benefit of modular GUI management is clearly understandable from figure 21. For example, if we want to swap "Child Panel 1" and "Child Panel 2", we just need to
• 50. change the "Position" property coordinates. All of the GUI elements contained within these two child panels stay unchanged.

7.1.3 Popup Menu or Listing
A menubar is a common element of a modern software GUI, and the common convention is to place it at the top of the software. However, space there is limited, and a popup menu is a good alternative for showing a listing. Moreover, multiple popup menus can be locked into a single place and then accessed conveniently using the "Visible" property of the GUI; we have used this property to show several popup menus in a small space. A small block of the code related to the popup menu is given below. Here we first declare the list, then create the popup menu and assign the list to the "String" property of the popup function. After that we fetch the currently selected value and the associated string from the second column of the list; this fetched string is later used to decide which configuration of the function is called.

popupLMSClass = { ...          % LMS Based Algorithms
    '',''; ...
    'LMS FIR'                  'LMS'; ...
    'Normalized LMS FIR'       'NLMS'; ...
    'Leaky LMS FIR'            'LLMS'; ...
    'Adjoint LMS FIR'          'ADJLMS'; ...
    'Block LMS FIR'            'BLMS'; ...
    'FFT-based Block LMS FIR'  'BLMSFFT'; ...
    'Delayed LMS FIR'          'DLMS'; ...
    'Filtered-x LMS FIR'       'FILTXLMS'; ...
    'Sign-Data LMS FIR (SD)'   'SD'; ...
    'Sign-Error LMS FIR (SE)'  'SE'; ...
    'Sign-Sign LMS FIR (SS)'   'SS'};

selectLMSClass = uicontrol(AlgorithmsAndApplications,'Visible','off', ...
    'Style','popupmenu','Units','normalized','String',popupLMSClass(:,1), ...
    'HandleVisibility','callback','Position',[.05 .44 .83 .1], ...
    'Callback',@AlgCustomizedVisibility);

whatLMSAlgorithm = popupLMSClass{get(selectLMSClass,'Value'), 2};

In total, three popup menus are visible in one execution instance, and they need to be selected in descending order to be used correctly. That is to say, when an option is selected from the first popup menu, the second popup menu is displayed based on that selection, and the third popup menu is displayed based on the second selection. The first popup menu shows the applications, the second shows the algorithm class types and the third shows the individual algorithms.
• 51. [Figure 22 flow, summarized:
Popup Menu 1 (Select Application): 1. Adaptive Noise Cancellation (ANC), 2. Adaptive Line Enhancement (ALE), 3. System Identification (SI).
Popup Menu 2 (Select Algorithm Group or Comparison): 1. Run & Compare Algorithms, 2. LMS Based FIR Filter, 3. RLS Based FIR Filter, 4. Affine Projection Based FIR Filter, 5. Frequency Domain Based FIR Filter, 6. Lattice Based FIR Filter.
Popup Menu 3 (shown according to the choice in Menu 2):
- Run & Compare: All LMS / RLS / AP / FD / Lattice Based Algorithms, or LMS / RLS / AP / FD / Lattice Based Algorithms in Group
- LMS Based: LMS, NLMS, LLMS, ADJLMS, BLMS, BLMSFFT, DLMS, FILTXLMS, SD, SE, SS FIR
- RLS Based: RLS, QRDRLS, HRLS, HSWRLS, SWRLS, FTF FIR
- AP Based: AP, APRU, BAP
- FD Based: PBFDAF, PBUFDAF, TDAFDCT, TDAFDFT, UFDAF
- Lattice Based: GAL, LSL, QRDLSL]
Figure 22: Popup menu execution flow

In figure 22 the orderly execution of the popup menus is given along with the content of each popup menu. The first popup menu location holds a single popup menu that shows the type of application. The second popup menu location also holds a single popup menu that shows the class of algorithms and the comparison mode. In the third popup menu location, however, we have placed six popup menus, each connected to the corresponding entry of the popup menu in the second location.

7.1.4 Slider Control
We have used sliders in the developed GUI. The user input values for the variable parameters of each algorithm (i.e. step size and filter order) can be controlled easily and conveniently using these sliders. The sliders work in real time: when the slider position changes it also changes the associated value of the corresponding parameter, and when the corresponding parameter value is changed the associated slider position is updated. This automatic update is accomplished using the "Callback" property of both the "edit" and "slider" GUI elements. When there is a change in an "edit" box, the associated "Callback" function is executed; inside this function we fetch the current "edit" box value and use it to update the slider position. Likewise, when there is a change in a "slider", its "Callback" function is executed and updates the corresponding value in the "edit" box in a similar way. In the following code, the first function is executed when there is a change in the corresponding "edit" box and the second function is executed when
• 52. there is a change in the corresponding "slider". Similarly, the third and fourth functions work for the order parameter of the algorithm.

function editLMSmu(hObject,evendata)
    set(lmsMuSl1,'Value',str2double(get(lmsDF1,'string')));
end

function sliderLMSmu(hObject, eventdata)
    sliderValue = get(lmsMuSl1,'Value');
    set(lmsDF1,'string',sliderValue);
end

function editLMSorder(hObject,eventdata)
    set(lmsOrderSl1,'Value',str2double(get(lmsDF2,'string')));
end

function sliderLMSorder(hObject,eventdata)
    sliderValue = get(lmsOrderSl1,'Value');
    set(lmsDF2,'string',sliderValue);
end

In the following figure we can see how the "edit" box and the "slider" interact with each other to update the corresponding value in real time.

[Flow chart: after START, a change of the parameter value in the edit box executes the associated callback, which updates the slider position accordingly; a change of the slider position executes the associated callback, which updates the parameter value accordingly.]
Figure 23: Real-time slider control
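For completeness, the two-way synchronisation can also be shown as a single self-contained sketch for one parameter; the handle names and value range below are illustrative and differ from those used in the developed software.

function sliderEditSyncSketch
% Two-way synchronisation between an edit box and a slider for one
% parameter (here the step size mu). Illustrative sketch only.
    fig = figure('Name','mu control sketch');
    edt = uicontrol(fig,'Style','edit','String','0.5', ...
        'Units','normalized','Position',[.1 .6 .3 .1],'Callback',@onEdit);
    sld = uicontrol(fig,'Style','slider','Min',0,'Max',5,'Value',0.5, ...
        'Units','normalized','Position',[.1 .4 .8 .1],'Callback',@onSlide);

    function onEdit(~,~)                 % edit box changed -> move the slider
        val = str2double(get(edt,'String'));
        if ~isnan(val), set(sld,'Value',min(max(val,0),5)); end
    end
    function onSlide(~,~)                % slider moved -> update the edit box
        set(edt,'String',num2str(get(sld,'Value')));
    end
end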
• 53. 7.1.5 Application and Parameter Data Input
In the developed software there are two types of user input: application data input for ALE and SI, and variable parameter data input for each algorithm. In the following code we first create the text labels using "text" uicontrols for the corresponding data and then use "edit" boxes to insert the data.

% Data Fields for Signal 1
AmplitudeS1 = uicontrol(Signal1,'Style','text','String','Amplitude', ...
    'units','normalized','Position',[.1 .80 .3 .15]);
SignalFreqS1 = uicontrol(Signal1,'Style','text','String','Frequency', ...
    'units','normalized','Position',[.09 .6 .3 .15]);
SampleTimeS1 = uicontrol(Signal1,'Style','text','String','Sample Time', ...
    'units','normalized','Position',[.07 .4 .3 .15]);
SamplingRateS1 = uicontrol(Signal1,'Style','text','String','Sampling Rate', ...
    'units','normalized','Position',[.0 .2 .4 .15]);
PhaseS1 = uicontrol(Signal1,'Style','text','String','Phase', ...
    'units','normalized','Position',[.13 .0 .3 .15]);
AmplitudeDFS1 = uicontrol(Signal1,'Style','edit','string',2, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .79 .4 .15]);
SignalFreqDFS1 = uicontrol(Signal1,'Style','edit','string',1200, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .59 .4 .15]);
SampleTimeDFS1 = uicontrol(Signal1,'Style','edit','string',3000, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .39 .4 .15], ...
    'Callback',@updateSampleTimeForOtherSignal1);
SamplingRateDFS1 = uicontrol(Signal1,'Style','edit','string',1000, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .19 .4 .15]);
PhaseDFS1 = uicontrol(Signal1,'Style','edit','string',2, ...
    'BackgroundColor','white','units','normalized','Position',[.45 .01 .4 .15]);

In the following code we create text labels for both the "edit" boxes and the corresponding sliders, use "edit" boxes to insert the varying algorithm parameters, and use sliders to conveniently increase or decrease those values.

% Data Fields for LMS
lmsT1 = uicontrol(lms,'Style','text','String','mu', ...
    'units','normalized','Position',[.14 .8 .2 .15]);
lmsT2 = uicontrol(lms,'Style','text','String','order', ...
    'units','normalized','Position',[.1 .59 .21 .15]);
lmsDF1 = uicontrol(lms,'Style','edit','BackgroundColor','white', ...
    'units','normalized','Position',[.4 .8 .5 .15],'Callback',@editLMSmu);
lmsDF2 = uicontrol(lms,'Style','edit','BackgroundColor','white', ...
    'units','normalized','Position',[.4 .59 .5 .15],'Callback',@editLMSorder);
lmsT3 = uicontrol(lms,'Style','text','String','mu', ...
    'units','normalized','Position',[.14 .34 .2 .15]);
lmsT4 = uicontrol(lms,'Style','text','String','order', ...
    'units','normalized','Position',[.1 .14 .21 .15]);
lmsMuSl1 = uicontrol(lms,'Style','slider','Min',0,'Max',5,'SliderStep',[0.05 0.1], ...
    'units','normalized','Position',[.4 .35 .5 .15],'Callback',@sliderLMSmu);
lmsOrderSl1 = uicontrol(lms,'Style','slider','Min',0,'Max',1000,'SliderStep',[.001 .005], ...
    'units','normalized','Position',[.4 .15 .5 .15],'Callback',@sliderLMSorder);
• 54. [Flow chart: after START, if the sample time of Signal 1 is changed, the sample times of Signal 2 and of the noise signal are changed equally; if the sample time of the noise signal (or Signal 2) is changed, the sample times of the other two signals are changed equally; otherwise the default sample time is fetched.]
Figure 24: Application data input consistency

In the application data input for ALE and SI, the sample times of signal 1, signal 2 and the additive noise must be equal in order to be computed correctly. We have therefore used the same method as in the "edit-slider" synchronisation to maintain automatic consistency among these data. For example, if we change the sample time of "Signal 1", the sample times of both "Signal 2" and "Noise" automatically become equal to that of "Signal 1". The same holds for "Signal 2" and "Noise": when the sample time of one of them is changed, the sample times of the other two change as well.

7.1.6 Data storage and retrieval
In the developed software the use of data falls into two categories: firstly, loaded or external data; secondly, data generated by the software after processing. The external speech data, i.e. the loaded data, is stored via the guidata() storage function of the main GUI handle for further processing. The software-generated data, such as the estimated signal, error signal and learning curve, are stored in the handle of the corresponding display axes using the setappdata() function. The software-generated data is stored so that the processed signals can be played whenever needed after processing, or displayed in a new figure. In the following code, we load the speech data for ANC and save it via the guidata() function of the main figure handle.
• 55. function loadData(hObject, eventdata)
    [filename,filepath] = uigetfile('*.*','All Files','Select your Data or Files');
    [path,name,ext] = fileparts(filename);
    if(strcmp(ext,'.mat'))
        data = matfile(filename);
        dlmwrite('inputData.dat',[data.d data.x]);
        myData = load('inputData.dat');
        guidata(myHandle,myData);
        setappdata(AncData,'SignalWithNoise',data);
        updateDataTable();
    else
        myData = load(filename);
        guidata(myHandle,myData);
        updateDataTable();
    end
end

In the following code, we fetch back the loaded and stored data and display it in the table generated by the "uitable" function. This "uitable" GUI element is placed into the third main parent panel.

function updateDataTable(hObject,eventdata)
    % Setting uitable in Statistical and Data Analysis
    columnFormat = {'numeric', 'numeric'};
    columnEdit   = [true true];
    columnWidth  = {60 60};
    inputRawData = guidata(myHandle);
    colnames = {'1','2','3'};
    inputDataTable = uitable(StatisticalAndDataAnalysis,'Units','normalized', ...
        'Position',[.0 .0 1 .95],'Data',inputRawData, ...
        'ColumnName',colnames,'ColumnFormat',columnFormat, ...
        'ColumnWidth',columnWidth,'ColumnEditable',columnEdit, ...
        'ToolTipString','Loaded Signal Data');
end

In the following code, we fetch back the stored software-generated data (e.g. the estimated signal) to be played. Similarly, the error signal and learning curve data can be fetched and listened to or displayed, respectively.

function playEstimatedSound(hObject,eventdata)