This document is an introduction to adaptive signal processing: the design of self-adjusting systems for signal processing applications. The course covers adaptive algorithm families including Newton's method, steepest descent, LMS, RLS, and Kalman filtering, along with applications in communications and blind equalization. Evaluation is based on assignments, a midterm, and a final exam.
The document provides an overview of adaptive filters. It discusses that adaptive filters are digital filters that have self-adjusting characteristics to changes in input signals. They have two main components: a digital filter with adjustable coefficients and an adaptive algorithm. Common adaptive algorithms are LMS and RLS. Adaptive filters are used for applications like noise cancellation, system identification, channel equalization, and signal prediction. The key aspects of adaptive filter theory and algorithms like LMS, RLS, Wiener filters are also covered.
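To make the LMS idea concrete, here is a minimal Python sketch of an LMS adaptive filter identifying an unknown 3-tap FIR system. The toy system, step size, and variable names are illustrative choices, not taken from the summarized slides:

```python
# Minimal LMS adaptive filter sketch (illustrative toy example).
# Identifies an unknown 3-tap FIR system from input/output samples.
import random

random.seed(0)
true_h = [0.5, -0.3, 0.2]          # unknown system to identify
w = [0.0, 0.0, 0.0]                # adaptive filter weights
mu = 0.05                          # step size

x_hist = [0.0, 0.0, 0.0]           # most recent inputs, newest first
for n in range(2000):
    x = random.uniform(-1, 1)
    x_hist = [x] + x_hist[:2]
    d = sum(h * xi for h, xi in zip(true_h, x_hist))     # desired output
    y = sum(wi * xi for wi, xi in zip(w, x_hist))        # filter output
    e = d - y                                            # error signal
    w = [wi + mu * e * xi for wi, xi in zip(w, x_hist)]  # LMS update

print([round(wi, 3) for wi in w])  # converges toward true_h
```

Because there is no measurement noise in this toy setup, the weights converge essentially exactly; with noise present, LMS instead hovers near the optimum with a misadjustment proportional to the step size.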
Lecture Notes on Adaptive Signal Processing-1.pdf - VishalPusadkar1
Adaptive filters are time-variant, nonlinear, and stochastic systems that perform data-driven approximation to minimize an objective function. The chapter discusses adaptive filter applications like system identification, inverse modeling, linear prediction, and noise cancellation. It also covers stochastic signal models, optimum linear filtering techniques like Wiener filtering, and solutions to the Wiener-Hopf equations. Numerical techniques like steepest descent are discussed for minimizing the mean square error function in adaptive filters. Stability and convergence analysis is presented for the steepest descent approach.
This document discusses multirate digital signal processing. It explains that multirate systems use multiple sampling rates to process digital signals. Common operations in multirate systems are decimation, which decreases the sampling rate, and interpolation, which increases it. Decimation and interpolation can be realized through filtering and downsampling/upsampling. The document also provides examples of multirate applications like digital audio conversion and discusses tools like polyphase filters used in multirate signal processing.
Windowing techniques of FIR filter design - Rohan Nagpal
Windowing techniques are used in FIR filter design to convert an infinite impulse response to a finite impulse response. The process involves choosing a desired frequency response, taking the inverse Fourier transform to get the impulse response, multiplying the impulse response by a window function, and realizing the filter. Common window functions include rectangular, Hanning, Hamming, and Blackman windows, which are selected based on the required stopband attenuation. The windowing technique allows designing FIR filters with a simple process but lacks flexibility compared to other design methods.
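The window-method steps described above (choose an ideal response, take its impulse response, multiply by a window, realize the filter) can be sketched in Python. The filter length, cutoff, and the Hamming window formula below are standard textbook choices, not values from the slides:

```python
import math

def hamming(N):
    """Hamming window of length N."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def lowpass_fir(N, fc):
    """Windowed-sinc lowpass FIR design; fc is the cutoff as a fraction
    of the sampling rate. N should be odd for a symmetric design."""
    M = (N - 1) / 2.0
    h = []
    for n in range(N):
        k = n - M
        # Ideal (shifted) lowpass impulse response sample
        ideal = 2 * fc if k == 0 else math.sin(2 * math.pi * fc * k) / (math.pi * k)
        h.append(ideal)
    w = hamming(N)
    return [hi * wi for hi, wi in zip(h, w)]

h = lowpass_fir(21, 0.2)
print(round(sum(h), 3))   # DC gain (sum of taps) ≈ 1 for a lowpass design
```

The coefficients come out symmetric, so the filter has exactly linear phase, which is the main attraction of the window method.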
This document is a thesis submitted by Mohammed Abuibaid to Kocaeli University regarding adaptive beam-forming. It discusses various beam-forming techniques including switched array antennas, DSP-based phase manipulation, and beamforming by precoding. It also covers adaptive beamforming algorithms such as LMS, NLMS, RLS, and CM. Various beam patterns generated by these algorithms are presented. The document motivates the need for adaptive beamforming and 3D beamforming to improve energy efficiency in wireless networks.
The document discusses the Fast Fourier Transform (FFT) algorithm. It begins by explaining how the Discrete Fourier Transform (DFT) and its inverse can be computed on a digital computer, but require O(N²) operations for an N-point sequence. The FFT was discovered to reduce this complexity to O(N log N) operations by exploiting redundancy in the DFT calculation. It achieves this through a recursive decomposition of the DFT into smaller DFT problems. The FFT provides a significant speedup and enables practical spectral analysis of long signals.
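The recursive decomposition can be sketched as a radix-2 Cooley–Tukey FFT in Python (an illustrative toy implementation, not code from the slides):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return [complex(x[0])]
    # Split into two half-size DFT problems (even and odd samples)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * N
    for k in range(N // 2):
        # Twiddle factor combines the two half-size results
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out

X = fft([1, 2, 3, 4, 0, 0, 0, 0])
print(round(abs(X[0]), 3))   # DC bin equals the sum of the input: 10.0
```

Each level of recursion does O(N) work across O(log N) levels, which is where the O(N log N) complexity comes from.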
The document discusses linear prediction techniques including forward and backward prediction. Forward prediction involves observing past samples and predicting future samples, while backward prediction involves observing future samples and predicting past samples. Both techniques can be solved using Wiener filter theory and result in minimum mean-square prediction errors. The Levinson-Durbin algorithm provides an efficient recursive method for computing linear prediction filter coefficients. Lattice predictors provide an efficient implementation structure for forward and backward linear prediction filters.
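A minimal Python sketch of the Levinson–Durbin recursion (variable names are my own; the test autocorrelation below corresponds to a first-order AR process with coefficient 0.5):

```python
def levinson_durbin(r, order):
    """Levinson-Durbin recursion for the prediction-error filter.
    r: autocorrelation sequence r[0..order].
    Returns (a, E): error-filter coefficients [1, a1, ..., a_order]
    (predictor coefficients are -a[1:]) and final prediction error E."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    E = r[0]
    for m in range(1, order + 1):
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / E                      # reflection coefficient
        new_a = a[:]
        for i in range(1, m):
            new_a[i] = a[i] + k * a[m - i]
        new_a[m] = k
        a = new_a
        E *= (1 - k * k)                  # prediction-error update
    return a, E

a, E = levinson_durbin([1.0, 0.5, 0.25], 2)
print([round(c, 3) for c in a], round(E, 3))  # [1.0, -0.5, 0.0] 0.75
```

The reflection coefficients computed along the way are exactly the coefficients used in the lattice predictor structure mentioned above.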
DSP_2018_FOEHU - Lec 06 - FIR Filter Design - Amr E. Mohamed
This lecture discusses the design of finite impulse response (FIR) filters. It introduces the window method for FIR filter design, which involves truncating the ideal impulse response with a window function to obtain a causal FIR filter. Common window functions are presented such as rectangular, triangular, Hanning, Hamming, and Blackman windows. These windows trade off main lobe width and side lobe levels. The document provides an example design of a low-pass FIR filter using the Hamming window to meet given passband and stopband specifications.
The document summarizes key properties of the discrete Fourier transform (DFT). It describes linearity, periodicity, time/frequency shifts, conjugation, multiplication, convolution, correlation, and Parseval's theorem. Linearity means the DFT of a linear combination of signals is the linear combination of the DFTs. Periodicity means an N-point DFT is periodic with N samples. Shifts change the time or frequency domain representation. Multiplication in the time domain is convolution in the frequency domain. Correlation relates the time and frequency domain representations. Parseval's theorem relates the energy in the time and frequency domains.
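Parseval's theorem can be checked numerically with a direct O(N²) DFT (an illustrative sketch; note the 1/N factor in the frequency-domain sum, which depends on the DFT normalization convention used here):

```python
import cmath

def dft(x):
    """Direct (unnormalized) N-point DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, -1.0, 0.5]
X = dft(x)
time_energy = sum(abs(v) ** 2 for v in x)
freq_energy = sum(abs(V) ** 2 for V in X) / len(x)   # 1/N for this convention
print(round(time_energy, 6), round(freq_energy, 6))  # both print 6.25
```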
This document discusses multirate signal processing, which involves changing the sampling rate of signals in different parts of a system. It describes how up-samplers are used to increase the sampling rate by an integer factor, while down-samplers decrease the sampling rate by an integer factor. Examples of where multirate signal processing is used include audio signal processing, transmultiplexers, and narrowband filtering for fetal ECG and EEG signals. The document also provides block diagrams to illustrate up-sampling and down-sampling operations.
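The up-sampling (zero insertion) and down-sampling (sample keeping) operations can be sketched directly; this is illustrative Python, with the anti-imaging and anti-aliasing filters that accompany these operations in practice omitted for brevity:

```python
def upsample(x, L):
    """Expander: insert L-1 zeros between consecutive samples."""
    y = []
    for v in x:
        y.append(v)
        y.extend([0] * (L - 1))
    return y

def downsample(x, M):
    """Compressor: keep every M-th sample."""
    return x[::M]

print(upsample([1, 2, 3], 2))             # [1, 0, 2, 0, 3, 0]
print(downsample([1, 0, 2, 0, 3, 0], 2))  # [1, 2, 3]
```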
This document provides an overview of equalizer design in digital communication systems. It discusses the need for equalization to address inter-symbol interference caused by channel limitations. It describes two main equalizer designs: zero-forcing equalizers that apply the inverse channel response and minimum mean square error equalizers that minimize the error between the equalized signal and desired signal. It explains how the tap coefficients of these equalizers can be calculated using linear algebra methods like solving sets of equations. The document concludes by noting that equalization is a key technique in modern communications to compensate for channel distortions.
Filters selectively attenuate certain frequency ranges in a signal. They are used widely in electronics, telecommunications, audio/video, and other applications. Filters are classified as analog or digital depending on the signal type. Ideal filters have constant gain in the passband, zero gain in the stopband, and linear phase, but practical filters exhibit gain variations, finite stopband attenuation, and non-linear phase. Digital filters are further divided into finite impulse response (FIR) filters, whose output depends only on current and past inputs, and infinite impulse response (IIR) filters, which are recursive and depend on both past inputs and past outputs. IIR filters are designed by first designing an analog filter prototype and transforming it to the digital domain using techniques like impulse invariance.
DSP_2018_FOEHU - Lec 03 - Discrete-Time Signals and Systems - Amr E. Mohamed
The document discusses discrete-time signals and systems. It defines discrete-time signals as sequences represented by x[n] and discusses important sequences like the unit sample, unit step, and periodic sequences. It then defines discrete-time systems as devices that take a discrete-time signal x(n) as input and produce another discrete-time signal y(n) as output. The document classifies systems as static vs. dynamic, time-invariant vs. time-varying, linear vs. nonlinear, and causal vs. noncausal. It provides examples to illustrate each classification.
IIR filter realization using direct form I & II - Sarang Joshi
The document discusses IIR filter realization using Direct Form I and Direct Form II structures. It presents the difference equation and transfer function for an IIR filter. It also provides examples of implementing IIR filters using Direct Form I and Direct Form II structures based on a given difference equation or transfer function.
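A Direct Form II implementation can be sketched in Python, using the single shared delay line that distinguishes it from Direct Form I (illustrative code; it assumes the leading denominator coefficient a[0] = 1):

```python
def direct_form_ii(b, a, x):
    """Filter sequence x with H(z) = B(z)/A(z) in Direct Form II.
    b, a: numerator/denominator coefficients with a[0] = 1."""
    n = max(len(a), len(b)) - 1          # number of delay elements
    b = b + [0.0] * (n + 1 - len(b))     # pad to equal length
    a = a + [0.0] * (n + 1 - len(a))
    w = [0.0] * n                        # one shared delay line
    y = []
    for xn in x:
        # All-pole section first, then all-zero section, sharing state w
        wn = xn - sum(a[i + 1] * w[i] for i in range(n))
        yn = b[0] * wn + sum(b[i + 1] * w[i] for i in range(n))
        w = [wn] + w[:-1]
        y.append(yn)
    return y

# Impulse response of y[n] = 0.5*y[n-1] + x[n]
h = direct_form_ii([1.0], [1.0, -0.5], [1, 0, 0, 0])
print(h)  # [1.0, 0.5, 0.25, 0.125]
```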
1. The document discusses different types of waveguides including parallel plate, rectangular, and circular waveguides. It provides information on their modes of propagation, field components, cutoff frequencies, and other related parameters.
2. Formulas are presented for calculating propagation constants, cutoff frequencies, wavelengths, velocities, and impedances for TE and TM waves in various waveguide structures.
3. Examples are worked out demonstrating the application of the formulas to determine parameters for given waveguide geometries and operating frequencies.
This document discusses different realization topologies for discrete-time systems, including direct form I and II, cascaded, and parallel realizations. Direct form II splits the transfer function into an all-pole section followed by an all-zero section that share a single delay line, which reduces the number of delay elements compared to direct form I. Cascaded and parallel realizations decompose the transfer function into smaller factors or terms that are each realized as a small direct form II subsystem, which are then cascaded or placed in parallel. Using second-order rather than first-order subsystems avoids complex coefficients and can reduce operations compared to cascading first-order subsystems. FIR filters with linear phase can be realized with roughly half the multiplications by exploiting the symmetry of their coefficients.
DSP_2018_FOEHU - Lec 07 - IIR Filter Design - Amr E. Mohamed
The document discusses the design of discrete-time IIR filters from continuous-time filter specifications. It covers common IIR filter design techniques including the impulse invariance method, matched z-transform method, and bilinear transformation method. An example applies the bilinear transformation to design a first-order low-pass digital filter from a continuous analog prototype. Filter design procedures and steps are provided.
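A sketch of the bilinear transformation applied to a first-order analog lowpass H(s) = ωc/(s + ωc), including the frequency prewarping step; the function name and test values are illustrative, not taken from the document's example:

```python
import math

def bilinear_first_order_lowpass(fc, fs):
    """Digital (b, a) coefficients for the analog prototype
    H(s) = wc/(s + wc) via the bilinear transform
    s = 2*fs*(1 - z^-1)/(1 + z^-1), with prewarping of fc."""
    wc = 2 * fs * math.tan(math.pi * fc / fs)   # prewarped analog cutoff
    c = 2 * fs
    b0 = wc / (c + wc)
    a1 = (wc - c) / (c + wc)
    return [b0, b0], [1.0, a1]

b, a = bilinear_first_order_lowpass(100.0, 1000.0)
# DC gain H(z=1) = (b0 + b1)/(1 + a1) is exactly 1 by construction
print(round((b[0] + b[1]) / (1 + a[1]), 6))
```

Thanks to the prewarping, the digital filter's gain at fc matches the analog prototype's gain at its cutoff (1/√2), even though the bilinear map compresses the frequency axis.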
This document provides an overview of frequency analysis techniques for signals and systems, including the Fourier series, Fourier transform, discrete-time Fourier series (DTFS), discrete-time Fourier transform (DTFT), and discrete Fourier transform (DFT). It discusses properties and applications of these techniques, such as analyzing periodic and aperiodic signals. Examples are provided to illustrate calculating the Fourier series and transform of simple signals. The document also covers sampling theory and the Nyquist criterion for proper reconstruction of signals from samples.
MicroStrip Antenna
Introduction.
Micro-Strip Antenna Types.
Micro-Strip Antenna Shapes.
Types of Substrates (Dielectric Media).
Comparison of various types of flat-profile printed antennas.
Advantages & Disadvantages of MSAs.
Applications of MSAs.
Radiation patterns of MSAs.
How to Optimize the Substrate Properties for Increased Bandwidth?
Comparing the different feed techniques.
FIR filter design (windowing technique) - Bin Biny Bino
The window design technique for FIR filters involves choosing an ideal frequency-selective filter with the desired passband and stopband characteristics, and then multiplying or "windowing" its infinite impulse response with an appropriate window function to make it causal and finite. This windowing in the time domain corresponds to convolution in the frequency domain. Common window functions are used to truncate the ideal filter response while maintaining desirable filtering properties. MATLAB code can be used to implement windowed FIR filters.
This document provides an introduction to equalization and summarizes several equalization techniques:
1) Zero forcing equalizers aim to completely eliminate intersymbol interference by inverting the channel response but can amplify noise.
2) The mean square error criterion aims to minimize the error between the received and desired signals when filtered by the equalizer. This can be solved using least squares or adaptive algorithms like LMS.
3) The least mean square algorithm approximates the steepest descent method to iteratively and adaptively update the equalizer filter taps to minimize the mean square error based only on instantaneous measurements. This makes it suitable for time-varying channels.
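Item 1's zero-forcing idea can be sketched for a simple two-tap channel H(z) = 1 + h₁z⁻¹, whose exact inverse is the recursive filter 1/(1 + h₁z⁻¹). This is illustrative Python; note that the same recursion amplifies noise badly when |h₁| approaches 1, which is exactly the weakness noted above:

```python
def zero_forcing_inverse(h1, x):
    """Invert the channel H(z) = 1 + h1*z^-1 with the IIR equalizer
    1/(1 + h1*z^-1): y[n] = x[n] - h1*y[n-1]."""
    y, prev = [], 0.0
    for xn in x:
        yn = xn - h1 * prev
        y.append(yn)
        prev = yn
    return y

# Pass an impulse through the channel, then the equalizer:
# the cascade should restore the impulse (no residual ISI).
h1 = 0.5
channel_out = [1.0, h1, 0.0, 0.0]             # channel impulse response
print(zero_forcing_inverse(h1, channel_out))  # [1.0, 0.0, 0.0, 0.0]
```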
The document discusses constructing a systematic cyclic code and generator matrix for a (7,4) Hamming code. It provides an example of encoding the data word 1010 using the generator polynomial g(x) = x³ + x² + 1; the resulting code word is 1010001. It then shows how to construct the generator matrix G from the code words obtained by encoding the rows of the 4×4 identity matrix. The complete code table is also shown.
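The systematic encoding step can be reproduced with binary polynomial division (illustrative Python; bit lists are written MSB first, so g(x) = x³ + x² + 1 is [1, 1, 0, 1]):

```python
def poly_remainder(bits, gen):
    """Remainder of binary polynomial division (bits MSB first)."""
    work = bits[:]
    for i in range(len(bits) - len(gen) + 1):
        if work[i]:                       # leading term present: subtract gen
            for j, g in enumerate(gen):
                work[i + j] ^= g          # GF(2) subtraction is XOR
    return work[-(len(gen) - 1):]         # degree < deg(gen)

def encode_systematic(data, gen):
    """Systematic cyclic encoding: append the remainder of
    data(x) * x^deg(gen) divided by gen(x) as parity bits."""
    shifted = data + [0] * (len(gen) - 1)
    return data + poly_remainder(shifted, gen)

g = [1, 1, 0, 1]                          # g(x) = x^3 + x^2 + 1
cw = encode_systematic([1, 0, 1, 0], g)
print(cw)   # [1, 0, 1, 0, 0, 0, 1], i.e. code word 1010001
```

This reproduces the document's example: data 1010 yields parity 001, so the code word is 1010001.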
A signal is a pattern of variation that carries information.
Signals are represented mathematically as functions of one or more independent variables.
basic concept of signals
types of signals
system concepts
This document discusses the design of FIR filters using window functions. It begins by explaining that windows are used to modify the impulse response of filters to reduce ripples and achieve a smooth transition from passband to stopband. It then provides examples of common window functions, including rectangular, Hanning, Hamming, and Blackman windows. It concludes by showing the design of a low-pass FIR filter using a Hamming window to meet specific specifications for cutoff frequency and transition width.
This document discusses the design of IIR and FIR filters. IIR (infinite impulse response) filters are recursive digital filters that use feedback and generally have non-linear phase responses. Common IIR design methods are impulse invariance, the bilinear transformation, and approximation of derivatives. FIR (finite impulse response) filters are digital filters with no feedback that can achieve exactly linear phase. FIR filters are designed using window methods such as rectangular, Hamming, and Kaiser windows, which trade off transition width against stopband attenuation. IIR filters require less computation, but FIR filters are required where a linear phase response is needed, such as in data transmission and speech processing.
This document discusses circuit design processes, specifically stick diagrams and design rules. It provides objectives and outcomes for understanding stick diagrams, which convey layer information through color codes. Stick diagrams show relative component placement but not exact sizes or parasitics. The document defines rules for stick diagrams and provides examples. It also discusses lambda-based design rules, which define minimum widths and spacings to prevent shorts and allow scalability. Design rules are a compromise between designers wanting smaller sizes and fabricators requiring controllability.
This lecture discusses synaptic learning rules in neural networks. It introduces the basic anatomy and physiology of synapses and different coding schemes neurons use, such as rate coding and spike timing coding. It then covers several synaptic plasticity rules, including Hebbian learning, spike-timing dependent plasticity (STDP), and the Bienenstock-Cooper-Munro (BCM) rule. It also discusses modeling synapses using the conductance-based model and implementations of STDP learning through online learning rules and weight dependence mechanisms.
This document discusses the process of backpropagation in neural networks. It begins with an example of forward propagation through a neural network with an input, hidden and output layer. It then introduces backpropagation, which uses the calculation of errors at the output to calculate gradients and update weights in order to minimize the overall error. The key steps are outlined, including calculating the error derivatives, weight updates proportional to the local gradient, and backpropagating error signals from the output through the hidden layers. Formulas for calculating each step of backpropagation are provided.
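The chain-rule steps outlined above can be sketched for a minimal 1-1-1 sigmoid network, with the analytic gradient checked against a numerical one (illustrative code, not from the document):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)          # hidden activation
    y = sigmoid(w2 * h)          # output activation
    return h, y

def grads(x, t, w1, w2):
    """Backpropagation for E = 0.5*(y - t)^2 in a 1-1-1 network."""
    h, y = forward(x, w1, w2)
    delta_out = (y - t) * y * (1 - y)         # output-layer local gradient
    dw2 = delta_out * h                       # weight update gradient for w2
    delta_hid = delta_out * w2 * h * (1 - h)  # error backpropagated to hidden
    dw1 = delta_hid * x                       # weight update gradient for w1
    return dw1, dw2

# Sanity check against a central-difference numerical gradient.
x, t, w1, w2, eps = 0.8, 1.0, 0.3, -0.5, 1e-6
dw1, dw2 = grads(x, t, w1, w2)
E = lambda a, b: 0.5 * (forward(x, a, b)[1] - t) ** 2
num1 = (E(w1 + eps, w2) - E(w1 - eps, w2)) / (2 * eps)
print(abs(dw1 - num1) < 1e-6)   # True
```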
The document discusses digital filters and adaptive signal processing. It covers:
- The two types of digital filters - FIR and IIR, with FIR having no feedback and IIR having feedback.
- How the coefficients of a digital filter determine the desired frequency response.
- Why adaptive signal processing is needed when environments are constantly changing.
- An overview of applications like channel equalization in wireless communications to mitigate multipath effects.
- How adaptive equalizers automatically find the inverse of the channel transfer function, H⁻¹(z), using algorithms like LMS.
This document discusses artificial neural networks and their learning processes. It provides an overview of biological inspiration for neural networks from the nervous system. It then describes artificial neurons and how they are modeled, including the McCulloch-Pitts model. Neural networks are composed of interconnected artificial neurons. Learning in neural networks and biological systems involves changing synaptic strengths. The document outlines learning rules and processes for artificial neural networks, including minimizing an error function through optimization techniques like backpropagation.
Slides covering the equation derivation for recurrent neural networks (RNNs), back-propagation through time, and sequence-to-sequence (Seq2Seq) models for image/video captioning tasks. Used in a group paper reading at the University of Sydney.
1. The document discusses various applications of artificial neural networks (ANNs) such as pattern classification, clustering, forecasting, association, and summarization of news articles.
2. It provides examples of how ANNs can be used to classify images and documents into different groups or events. The architecture of a multi-document news summarization system using ANNs is shown.
3. The biological mechanisms of neural networks in the human brain are compared with artificial neural networks. Examples of different activation functions in artificial neurons and learning algorithms like the perceptron are presented.
1) This document summarizes research on phase transitions and self-organized criticality in networks of stochastic spiking neurons. It presents a mean-field model of neurons that exhibits continuous and discontinuous phase transitions between silent and active states.
2) The model shows power law distributions of neuronal avalanche sizes and durations near the critical point, consistent with experimental data. Introducing dynamic neuronal gains instead of static synapses allows the system to self-organize to a slightly supercritical state.
3) Future work is proposed to better understand the effects of network topology, inhibitory neurons, and to apply the model to more realistic large-scale neuronal networks modeling different brain regions. The research contributes to understanding phase transitions and
The document discusses the dynamic response of structures with uncertain properties. It begins with an introduction discussing how stochasticity impacts dynamic response and efficient quantification of uncertainty. It then covers stochastic single degree of freedom and multiple degree of freedom damped systems. Equivalent damping factors are derived for single degree systems with random natural frequencies. The spectral function approach is also introduced for representing multiple degree of freedom stochastic systems in the frequency domain.
Avionics 738 Adaptive Filtering at Air University PAC Campus by Dr. Bilal A. Siddiqui in Spring 2018. This lecture covers background material for the course.
The document discusses backpropagation, an algorithm used to train neural networks. It begins with background on perceptron learning and the need for an algorithm that can train multilayer perceptrons to perform nonlinear classification. It then describes the development of backpropagation, from early work in the 1970s to its popularization in the 1980s. The document provides examples of using backpropagation to design networks for binary classification and multi-class problems. It also outlines the generalized mathematical expressions and steps involved in backpropagation, including calculating the error derivative with respect to weights and updating weights to minimize loss.
In this paper an Enhanced Whale Optimization Algorithm (EWO) is proposed to solve the optimal reactive power problem. The whale optimization algorithm is modeled on the bubble-net hunting tactic: whales commonly dive 10-16 meters, create a ring of bubbles that encircles the prey, and then move upward toward the surface. In the proposed algorithm an inertia weight ω ∈ [0, 1] is introduced to improve the search ability. The proposed EWO is tested on the standard IEEE 57-bus system and reduces power loss considerably.
- Artificial neural networks are inspired by biological neural networks and try to mimic their learning mechanisms by modifying synaptic strengths through an optimization process.
- Learning in neural networks can be formulated as a function approximation task where the network learns to approximate a function by minimizing an error measure through optimization of synaptic weights.
- A single hidden layer neural network is capable of learning nonlinear function approximations if general optimization methods are applied to update the synaptic weights.
Deterministic sampling methods can be used to generate ensembles that represent modeling uncertainty in a more efficient and reproducible way than traditional Monte Carlo sampling. The document discusses applications of deterministic sampling in fields like dynamic metrology, medicine, meteorology and more. It also presents some specific deterministic sampling techniques like matched moments, sigma points, and sample annealing and discusses how these can be used for both direct uncertainty quantification and inverse problems like model identification.
Optimal Multisine Probing Signal Design for Power System Electromechanical Mode Estimation — Luigi Vanfretti
This talk presents a methodology for the design of a probing signal used for power system electromechanical mode estimation. Firstly, it is shown that probing mode estimation accuracy depends solely on the probing signal’s power spectrum and not on a specific time-domain realization. A relationship between the probing power spectrum and the accuracy of the mode estimation is used to determine a multisine probing signal by solving an optimization problem. The objective function is defined as a weighting sum of the probing signal variance and the level of the system disturbance caused by the probing. A desired level of the mode estimation accuracy is set as a constraint. The proposed methodology is demonstrated through simulations using the KTH Nordic 32 power system model.
Fixed point theorems for random variables in complete metric spaces — Alexander Decker
This document presents two fixed point theorems for random variables in complete metric spaces. Theorem 1 proves that if a self-mapping E on a complete metric space satisfies certain rational inequalities involving distances between random variables, then E has a fixed point. Theorem 2 proves a similar result for a self-mapping E satisfying alternative rational inequalities, assuming E is onto. Both theorems use properties of complete metric spaces and rational inequalities to show the existence of fixed points for random variables under the given conditions.
The document outlines neural network problems and discusses single-layer perceptrons. It provides examples of using perceptrons to classify linearly and non-linearly separable data with 1 or 2 features. The perceptron learning algorithm is then described in steps, showing how the weights are updated for each training example to minimize errors and converge to a solution.
The document provides an overview of artificial neural networks (ANNs) and the perceptron learning algorithm. It discusses how biological neurons inspire ANNs and how a basic perceptron works using a simple example with inputs, weights, and outputs. The perceptron learning algorithm is then explained, which updates weights based on whether the perceptron's prediction was correct or incorrect on each training example. Finally, the document introduces multilayer perceptrons which can solve non-linearly separable problems by connecting multiple perceptron layers together through a process called backpropagation.
1) The document discusses various techniques for single-input single-output (SISO) and multiple-input multiple-output (MIMO) wireless communication systems. It begins with an overview of SISO detection using Bayesian approaches like maximum likelihood detection.
2) It then introduces MIMO techniques like diversity and spatial multiplexing. Diversity techniques like space-time coding use multiple transmission paths to improve reliability, while spatial multiplexing uses multiple antennas to increase throughput.
3) Specific diversity techniques discussed include repetition coding, time/frequency diversity, and Alamouti space-time coding. For spatial multiplexing, the document describes the MIMO channel model and mentions maximum likelihood detection for MIMO receivers.
Feasibility of EEG Super-Resolution Using Deep Convolutional Networks — Sangjun Han
This document summarizes a master's thesis presentation on using deep convolutional networks for EEG spatial super-resolution. The study used simulated EEG data to test how different noise types and upscaling ratios affect the super-resolution process. Key findings include that super-resolution recovered low-resolution signals beyond the level of high-resolution signals for white noise, but only to the level of high-resolution signals for real noise. Higher upscaling ratios yielded better quality signals for white noise. Whitening real noise helped super-resolution, especially for source analysis at low SNR. The study used simulations to isolate the effects of noise types since real EEG noise sources cannot be extracted.
Introduction to adaptive signal processing
1. EECS0712 Adaptive Signal Processing
Introduction to Adaptive Signal Processing
Assoc. Prof. Dr. Peerapol Yuvapoositanon
Dept. of Electronic Engineering
CESdSP ASP1-1
EECS0712 Adaptive Signal Processing
http://embedsigproc.wordpress.com/eecs0712
Assoc. Prof. Dr. P.Yuvapoositanon
2. Course Outline
• Introduction to Adaptive Signal Processing
• Adaptive Algorithms Families:
  • Newton's Method and Steepest Descent
  • Least Mean Squares (LMS)
  • Recursive Least Squares (RLS)
  • Kalman Filtering
• Applications of Adaptive Signal Processing in Communications and Blind Equalization
3. Evaluation
• Assignment = 20 %
• Midterm = 30 %
• Final = 50 %
6. QR code
7. Adaptive Signal Processing
• Definition: Adaptive signal processing is the design of adaptive systems for signal-processing applications.
[http://encyclopedia2.thefreedictionary.com/adaptive+signal+processing]
8. System Identification
• Let's consider a system called a "plant"
• We need to know its characteristics, i.e., the impulse response of the system
10. Error of Plant Outputs
11. Error of Estimation
• The error of estimation is represented by the signal energy of the error:
  e^2 = (d - y)^2
      = d^2 - 2dy + y^2
12. Adaptive System
• We can do it adaptively
13. One-weight
• Adjust the weight for minimum error e
14. With plant output d = w_0^I x and filter output y = w_0 x:
  e^2 = (d - y)^2
      = d^2 - 2dy + y^2
      = (w_0^I x)^2 - 2(w_0^I x)(w_0 x) + (w_0 x)^2
15. Error Curve
• Parabola equation
16. Partial diff. and set to zero
• Partial differentiation:
  ∂e^2/∂w_0 = ∂/∂w_0 [(w_0^I x)^2 - 2(w_0^I x)(w_0 x) + (w_0 x)^2]
            = -2 w_0^I x^2 + 2 w_0 x^2
• Set to zero:
  0 = -2 w_0^I x^2 + 2 w_0 x^2
• Result:
  w_0 = w_0^I
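The result w_0 = w_0^I can be checked numerically: with plant output d = w_0^I x and filter output y = w_0 x, the squared error is a parabola in w_0 whose minimum sits at the plant weight. A minimal Python sketch (the plant weight 0.8 and the input sample 2.0 are invented for illustration):

```python
# One-weight system identification: plant d = w0_plant * x, filter y = w0 * x
w0_plant = 0.8          # "ideal" plant weight w0^I (illustrative value)
x = 2.0                 # one input sample (illustrative value)

def squared_error(w0):
    d = w0_plant * x            # plant output
    y = w0 * x                  # adjustable-filter output
    return (d - y) ** 2         # e^2 = (d - y)^2

# Scan candidate weights; the parabola's minimum sits at w0 = w0^I
candidates = [i / 100 for i in range(201)]
best = min(candidates, key=squared_error)
print(best)  # -> 0.8
```

The grid scan stands in for an adaptive update here; the algorithms later in the course (steepest descent, LMS) reach the same minimum iteratively.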
17. Multiple Weight Plants
• We calculate the weight adaptively
• Questions:
  – What is the type of signal "x" to be used, e.g. sine, cosine, or random signals?
  – If there is more than one weight w_0, i.e., w_0, ..., w_{N-1}, how do we calculate the solution?
18. Plants with Multiple Weight
• If we have multiple weights:
  w: w_0 + w_1 z^{-1}
19. Two-weight
• In the case of two-weight
20. Input
• From the sequence x(3), x(2), x(1), x(0), x(-1), x(-2), ...
• We construct x as a vector whose first element is the most recent sample:
  x = [x(3) x(2) x(1) x(0) ...]^T
21. Plants with Multiple Weight (aka "Transversal Filter")
• If we have multiple weights, the terms w_0 x(n) and w_1 x(n-1) are summed:
  y(n) = w_0 x(n) + w_1 x(n-1)
22. Regression input signal vector
• If the current time is n, we have the "regression input signal vector"
  x = [x(n) x(n-1) x(n-2) x(n-3) ...]^T
23. The filter weight vector and the plant (ideal) weight vector:
  w = [w_0 w_1]^T
  ŵ^I = [w_0^I w_1^I]^T
24. Convolution
• Output of plant is a convolution:
  y(n) = Σ_{k=0}^{N-1} w_k x(n-k)
• Ex: For N = 2,
  y(n) = w_0 x(n-0) + w_1 x(n-1)
25. y(3) = w_0 x(3) + w_1 x(2)
    y(2) = w_0 x(2) + w_1 x(1)
    y(1) = w_0 x(1) + w_1 x(0)
    y(0) = w_0 x(0) + w_1 x(-1)
    y(-1) = w_0 x(-1) + w_1 x(-2)
26. • We can use a vector-matrix multiplication
• For example, for n = 3 we construct y(3) as
  y(3) = w_0 x(3) + w_1 x(2) = [w_0 w_1] [x(3) x(2)]^T = w^T x(3)
• For example, for n = 1 we construct y(1) as
  y(1) = w_0 x(1) + w_1 x(0) = [w_0 w_1] [x(1) x(0)]^T = w^T x(1)
27. y(3) = w_0 x(3) + w_1 x(2) = [w_0 w_1] [x(3) x(2)]^T = w^T x(3)
    y(2) = w_0 x(2) + w_1 x(1) = [w_0 w_1] [x(2) x(1)]^T = w^T x(2)
    y(1) = w_0 x(1) + w_1 x(0) = [w_0 w_1] [x(1) x(0)]^T = w^T x(1)
    y(0) = w_0 x(0) + w_1 x(-1) = [w_0 w_1] [x(0) x(-1)]^T = w^T x(0)
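The stacked equations above are dot products of one weight vector with successive regression vectors, which is exactly the convolution of slide 24. A NumPy sketch, with invented weight and sample values:

```python
import numpy as np

w = np.array([0.5, -0.25])          # weight vector w = [w0 w1]^T (illustrative)
x = np.array([1.0, 2.0, 3.0, 4.0])  # samples x(0)..x(3) (illustrative)

def y_at(n):
    # Regression vector x(n) = [x(n) x(n-1)]^T, with x taken as 0 before time 0
    xn1 = x[n - 1] if n >= 1 else 0.0
    return w @ np.array([x[n], xn1])  # y(n) = w^T x(n)

y = [float(y_at(n)) for n in range(4)]
print(y)  # -> [0.5, 0.75, 1.0, 1.25]

# The same outputs come from the convolution sum y(n) = sum_k w_k x(n-k)
assert np.allclose(y, np.convolve(x, w)[:4])
```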
28. • The error squared is
  e^2 = (d - y)^2
      = d^2 - 2dy + y^2
      = (ŵ^T x)^2 - 2(ŵ^T x)(w^T x) + (w^T x)^2
• Let us stop there to consider random signal theory first.
29. Review of Random Signals
30. Wireless Transmissions
• Ideal signal transmission of information:
  11 00 11 00 11 0011 11 11 000011
• Information is random
32. Random Variable
• A random variable is a function
• For a single coin toss:
  X(x) = 1 if x = H, -1 if x = T
33. Our signal x(n) is a Random Variable
• For a series of coin tosses:
  X(x_i) = 1 if x_i = H, -1 if x_i = T
  x = {x_0, x_1, x_2, x_3, x_4, ...}
34. Coin tossing and Random Variable
• If the random outcomes are
  x = {H, H, T, H, T} = {x_0, x_1, x_2, x_3, x_4}
• We have the random variable values
  X(x_i) = {X(x_0), X(x_1), X(x_2), X(x_3), X(x_4)}
         = {X(H), X(H), X(T), X(H), X(T)}
         = {1, 1, -1, 1, -1}
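The coin-tossing random variable is literally a function, and can be written as one (a minimal sketch of the slide's example):

```python
def X(outcome):
    # Random variable: maps a coin outcome to a number
    return 1 if outcome == 'H' else -1

x = ['H', 'H', 'T', 'H', 'T']     # one realization x0..x4 (the slide's example)
values = [X(xi) for xi in x]
print(values)  # -> [1, 1, -1, 1, -1]
```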
35. Random Digital Signal
• If the random variable is a function of time, it is called a stochastic process
36. Probability Mass Function
• We also need to define the probability of each random variable value
  X(x) = {X(H), X(H), X(T), X(H), X(T)} = {1, 1, -1, 1, -1}
37. Probability Mass Function
• The PMF is for discrete distribution functions
38. Time and Ensemble
39. Probability of X(2)
40. Probability Density Function
• The PDF is for continuous distribution functions
42. Probability Density Function
• PDF values can be > 1 as long as the area under the curve is 1
  (e.g. a uniform density of height 2 over an interval of width 1/2 has area 2 × 1/2 = 1)
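As a quick check of the height-2 example: a uniform density of 2 on an interval of width 1/2 integrates to 1 even though its values exceed 1. A sketch using a Riemann sum (the step size is arbitrary):

```python
# Uniform PDF on [0, 0.5] with height 2; values are > 1 on the support
def p(x):
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

dx = 1e-4                                        # integration step (arbitrary)
area = sum(p(i * dx) * dx for i in range(int(1 / dx)))
print(round(area, 3))  # -> 1.0  (area under the curve is 1)
```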
43. Cumulative Distribution Function
  P_x(x(n)) = Pr[X ≤ x(n)]
44. P_x(x(n)) = ∫_{-∞}^{x(n)} p_x(z) dz
45. Expectation Operator
  E{·}
46. Expected Value
• The expected value is known as the "mean":
  E{x} = ∫ x p_X(x) dx
47. Example of Expected Value (Discrete)
• We toss a die N times and get a set of outcomes
  {X(i)} = {X(1), X(2), X(3), ..., X(N)}
• Suppose we roll a die with N = 6; we might get
  {X(i)} = {2, 3, 6, 3, 1, 1}
48. Example of Expected Value (Discrete)
• Empirically, the Monte Carlo estimate of the expected value weights each outcome by its observed relative frequency:
$$E\{x\} = \sum_{i=1}^{6} X(i)\,\Pr(X = X(i)) = 1\cdot\tfrac{1}{3} + 2\cdot\tfrac{1}{6} + 3\cdot\tfrac{1}{3} + 6\cdot\tfrac{1}{6} \approx 2.67$$
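The empirical estimate above can be reproduced directly from the six rolls on the slide:

```python
# Empirical (Monte Carlo) expected value from the rolls on the slide:
# each outcome is weighted by its observed relative frequency.
from collections import Counter

rolls = [2, 3, 6, 3, 1, 1]
n = len(rolls)
freq = Counter(rolls)  # e.g. outcome 3 occurred twice -> Pr = 2/6 = 1/3
empirical_mean = sum(x * count / n for x, count in freq.items())
print(round(empirical_mean, 2))  # 2.67
```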
49. Theoretical Expected Value
• But in theory, every face of a fair die is equally likely:
$$\Pr(X = X(i)) = \tfrac{1}{6}$$
$$E\{X\} = \sum_{i=1}^{6} X(i)\,\Pr(X = X(i)) = 1\cdot\tfrac{1}{6} + 2\cdot\tfrac{1}{6} + 3\cdot\tfrac{1}{6} + 4\cdot\tfrac{1}{6} + 5\cdot\tfrac{1}{6} + 6\cdot\tfrac{1}{6} = 3.5$$
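The gap between the empirical estimate (2.67 from six rolls) and the theoretical value 3.5 shrinks as the number of rolls grows. A minimal simulation sketch of this law-of-large-numbers behaviour:

```python
# Law of large numbers: the empirical mean of fair-die rolls approaches
# the theoretical expected value 3.5 as N grows. Minimal sketch.
import random

random.seed(0)
theoretical = sum(range(1, 7)) / 6           # (1+2+...+6)/6 = 3.5
rolls = [random.randint(1, 6) for _ in range(100_000)]
empirical = sum(rolls) / len(rolls)
print(theoretical)            # 3.5
print(round(empirical, 1))    # ~ 3.5 for large N
```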
50. Ensemble Average
• The ensemble average of $x(1)$ is a probability-weighted average taken across the $i$ ensembles (realizations) at the same time instant:
$$\text{Ensemble average of } x(1) = x_1(1)\Pr[x_1(1)] + x_2(1)\Pr[x_2(1)] + \cdots + x_N(1)\Pr[x_N(1)]$$
51. Ensemble Average
• Equivalently, the ensemble average at time $n$ is the expected value
$$E\{x(n)\} = \int_{\mathbf{x}} x(n)\, p(x(n))\, dx(n)$$
52. • I) Linearity:
$$E\{a\,x(n) + b\,y(n)\} = a\,E\{x(n)\} + b\,E\{y(n)\}$$
53. • II) For independent $x(n)$ and $y(n)$, the expectation factors:
$$E\{x(n)\,y(n)\} = E\{x(n)\}\,E\{y(n)\}$$
54. • III) For a function of a random variable, $y(n) = g(x(n))$:
$$E\{y(n)\} = \int_{\mathbf{x}} g(x(n))\, p(x(n))\, dx(n)$$
55. Autocorrelation
$$r_{xx}(n,m) = E\{x(n)\,x(m)\} = \int_{x}\int_{x} x(n)\,x(m)\, p(x(n), x(m))\, dx(n)\, dx(m)$$
56. • Example:
$$r_{xx}(1,4) = E\{x(1)\,x(4)\}$$
57. Autocorrelation
• For $n = m$, the autocorrelation reduces to the mean-square value:
$$r_{xx}(n,m)\big|_{n=m} = r_{xx}(n,n) = E\{x^2(n)\}$$
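The ensemble definition of autocorrelation can be illustrated numerically by averaging the product $x(n)x(m)$ over many realizations. A minimal sketch, assuming unit-variance white noise (so $r_{xx}(n,n) \approx 1$ and $r_{xx}(n,m) \approx 0$ for $n \ne m$):

```python
# Ensemble estimate of r_xx(n, m) = E{x(n) x(m)}: average x(n) x(m)
# over many independent realizations of the process. Sketch with
# unit-variance white Gaussian noise.
import random

random.seed(1)
REALIZATIONS, LENGTH = 10_000, 8
ensemble = [[random.gauss(0.0, 1.0) for _ in range(LENGTH)]
            for _ in range(REALIZATIONS)]

def r_xx(n, m):
    """Ensemble-average estimate of E{x(n) x(m)}."""
    return sum(x[n] * x[m] for x in ensemble) / REALIZATIONS

print(round(r_xx(3, 3), 1))  # ~ 1.0 (mean-square value E{x^2(n)})
print(round(r_xx(1, 4), 1))  # ~ 0.0 (uncorrelated samples)
```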
58. Autocorrelation Matrix
$$\mathbf{R}_{xx} = \begin{bmatrix}
r_{xx}(0,0) & r_{xx}(0,1) & \cdots & r_{xx}(0,N-1) \\
r_{xx}(1,0) & r_{xx}(1,1) & \cdots & r_{xx}(1,N-1) \\
\vdots & \vdots & \ddots & \vdots \\
r_{xx}(N-1,0) & r_{xx}(N-1,1) & \cdots & r_{xx}(N-1,N-1)
\end{bmatrix}$$
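For a wide-sense stationary process (slides 60 and 61), $r_{xx}(n,m)$ depends only on the lag $|m-n|$, so the matrix above becomes symmetric Toeplitz. A minimal sketch building it from lag values; the numerical lags are hypothetical:

```python
# Build the N x N autocorrelation matrix of a wide-sense stationary
# process from its lag values r_xx(0), ..., r_xx(N-1): the entry at
# (n, m) is r_xx(|m - n|), giving a symmetric Toeplitz matrix.

def autocorrelation_matrix(r):
    """N x N matrix with entries R[n][m] = r(|m - n|)."""
    N = len(r)
    return [[r[abs(m - n)] for m in range(N)] for n in range(N)]

r = [1.0, 0.5, 0.25]  # hypothetical lags r_xx(0), r_xx(1), r_xx(2)
R = autocorrelation_matrix(r)
for row in R:
    print(row)
# [1.0, 0.5, 0.25]
# [0.5, 1.0, 0.5]
# [0.25, 0.5, 1.0]
```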
59. Covariance
$$c_{xx}(n,m) = E\{[x(n) - \mu_x(n)][x(m) - \mu_x(m)]\}$$
where $\mu_x(n) = E\{x(n)\}$ is the mean.
60. Stationarity (I)
• I) The mean is the same at every time instant:
$$E\{x(n)\} = E\{x(m)\}$$
(Figure: ensemble realizations compared at two time instants $n_1$ and $n_2$.)
61. Stationarity (II)
• II) The autocorrelation depends only on the lag $m$, not on the absolute time:
$$r_{xx}(n, n+m) = E\{x(n)\,x(n+m)\} = E\{x(n_1)\,x(n_1+m)\} = r_{xx}(n_1, n_1+m) \quad \text{for any } n,\, n_1$$
62. Expected Value of Error Energy
• Let's take the expected value of the error energy, where the error is $e = \mathbf{w}^T\mathbf{x} - \hat{\mathbf{w}}^T\mathbf{x}$:
$$\begin{aligned}
E\{e^2\} &= E\{(\mathbf{w}^T\mathbf{x})^2 - 2(\mathbf{w}^T\mathbf{x})(\hat{\mathbf{w}}^T\mathbf{x}) + (\hat{\mathbf{w}}^T\mathbf{x})^2\} \\
&= E\{(\mathbf{w}^T\mathbf{x})(\mathbf{x}^T\mathbf{w})\} - 2E\{(\mathbf{w}^T\mathbf{x})(\mathbf{x}^T\hat{\mathbf{w}})\} + E\{(\hat{\mathbf{w}}^T\mathbf{x})(\mathbf{x}^T\hat{\mathbf{w}})\} \\
&= \mathbf{w}^T E\{\mathbf{x}\mathbf{x}^T\}\mathbf{w} - 2\,\mathbf{w}^T E\{\mathbf{x}\mathbf{x}^T\}\hat{\mathbf{w}} + \hat{\mathbf{w}}^T E\{\mathbf{x}\mathbf{x}^T\}\hat{\mathbf{w}} \\
&= \mathbf{w}^T\mathbf{R}\mathbf{w} - 2\,\mathbf{w}^T\mathbf{R}\hat{\mathbf{w}} + \hat{\mathbf{w}}^T\mathbf{R}\hat{\mathbf{w}}
\end{aligned}$$
63. Vector-Matrix Differentiation
$$\text{I)}\quad \frac{\partial (\hat{\mathbf{w}}^T\mathbf{x})}{\partial \hat{\mathbf{w}}} = \mathbf{x}$$
$$\text{II)}\quad \frac{\partial (\hat{\mathbf{w}}^T E\{\mathbf{x}\mathbf{x}^T\}\, \hat{\mathbf{w}})}{\partial \hat{\mathbf{w}}} = \frac{\partial (\hat{\mathbf{w}}^T \mathbf{R}\, \hat{\mathbf{w}})}{\partial \hat{\mathbf{w}}} = 2\mathbf{R}\hat{\mathbf{w}}$$
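Identity II) can be spot-checked with a finite-difference gradient. A minimal sketch comparing a forward difference of $\hat{\mathbf{w}}^T\mathbf{R}\hat{\mathbf{w}}$ against $2\mathbf{R}\hat{\mathbf{w}}$; the 2x2 matrix and weight vector are hypothetical:

```python
# Check that the gradient of the quadratic form w^T R w is 2 R w for a
# symmetric R, by comparing a forward finite difference with the
# analytic expression. R and w below are hypothetical examples.

def quad_form(w, R):
    """Evaluate w^T R w for a 2-vector w and a 2x2 matrix R."""
    return sum(w[i] * R[i][j] * w[j] for i in range(2) for j in range(2))

R = [[2.0, 0.5], [0.5, 1.0]]  # symmetric, positive definite
w = [0.3, -0.8]
eps = 1e-6

numeric = []
for i in range(2):
    w_step = list(w)
    w_step[i] += eps  # perturb one coordinate at a time
    numeric.append((quad_form(w_step, R) - quad_form(w, R)) / eps)
analytic = [2 * sum(R[i][j] * w[j] for j in range(2)) for i in range(2)]

print([round(g, 3) for g in numeric])   # [0.4, -1.3]
print([round(g, 3) for g in analytic])  # [0.4, -1.3]
```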
64. Partial Diff. and Set to Zero
• Differentiation:
$$0 = \frac{\partial E\{e^2\}}{\partial \hat{\mathbf{w}}} = -2E\{(\mathbf{x}^T\mathbf{w})\,\mathbf{x}\} + 2\mathbf{R}\hat{\mathbf{w}}$$
• With the desired signal $d = \mathbf{x}^T\mathbf{w}$ and cross-correlation vector $\mathbf{r} = E\{d\,\mathbf{x}\}$:
$$2E\{d\,\mathbf{x}\} = 2\mathbf{R}\hat{\mathbf{w}} \;\Rightarrow\; 2\mathbf{r} = 2\mathbf{R}\hat{\mathbf{w}}$$
• Result:
$$\hat{\mathbf{w}} = \mathbf{R}^{-1}\mathbf{r}$$
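The closed-form solution can be verified numerically by estimating $\mathbf{R}$ and $\mathbf{r}$ from samples and solving $\mathbf{R}\hat{\mathbf{w}} = \mathbf{r}$. A minimal sketch, assuming a hypothetical 2-tap system with true weights $[0.7, -0.3]$ and a noise-free desired signal $d = \mathbf{w}^T\mathbf{x}$:

```python
# Verify w_hat = R^{-1} r numerically: estimate R = E{x x^T} and
# r = E{d x} by sample averaging, then solve R w_hat = r. Hypothetical
# 2-tap system with true weights [0.7, -0.3].
import random

random.seed(2)
w_true = [0.7, -0.3]
N = 10_000
R = [[0.0, 0.0], [0.0, 0.0]]
r = [0.0, 0.0]
for _ in range(N):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    d = w_true[0] * x[0] + w_true[1] * x[1]  # noise-free desired signal
    for i in range(2):
        r[i] += d * x[i] / N
        for j in range(2):
            R[i][j] += x[i] * x[j] / N

# Solve the 2x2 system R w_hat = r via Cramer's rule.
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
w_hat = [(r[0] * R[1][1] - r[1] * R[0][1]) / det,
         (r[1] * R[0][0] - r[0] * R[1][0]) / det]
print([round(v, 2) for v in w_hat])  # [0.7, -0.3]
```

Because $d$ is an exact linear combination of the same samples used to estimate $\mathbf{R}$ and $\mathbf{r}$, the recovered weights match the true weights up to floating-point error.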
65. 2-D Error Surface
• The mean-square error is quadratic in the weights, so the error surface is a bowl whose minimum lies at the Wiener solution:
$$\hat{\mathbf{w}} = \mathbf{R}^{-1}\mathbf{r}$$
(Figure: 2-D error surface.)
66. Four Basic Classes of Adaptive Signal Processing
• I) Identification
• II) Inverse Modelling
• III) Prediction
• IV) Interference Cancelling
67. The Four Classes of Adaptive Filtering