Channel Equalisation
Presentation Transcript

  • CHANNEL EQUALISATION. By: AJIT KUMAR PANDA, POONAN SAHOO, SAYANTAN DAS, SURAJ CHOUDHURY
  • THREATS IN DIGITAL COMMUNICATION: There are four main threats in the process of digital communication: Inter Symbol Interference (ISI), Multipath Propagation, Co-channel Interference, and the presence of noise in the channel.
  • INTER SYMBOL INTERFERENCE: Inter-symbol interference (ISI) arises in digital transmission when the channel carrying the data is dispersive: each received pulse is affected somewhat by adjacent pulses, so the transmitted symbols interfere with one another. This makes it difficult to recover the original data from a single channel sample.
  • CO-CHANNEL INTERFERENCE: Co-channel Interference (CCI) and Adjacent Channel Interference (ACI) occur in communication systems that use multiple-access techniques based on space, frequency or time. CCI occurs in cellular radio and dual-polarized microwave radio, where the allocated channel frequencies are reused in different cells for efficient utilization of the spectrum.
  • MULTI-PATH PROPAGATION: Telecommunication channels commonly contain multiple paths of propagation. In practical terms this is equivalent to transmitting the same signal through a number of separate channels, each having a different attenuation and delay. Consider an open-air radio transmission channel with three propagation paths, as illustrated in Fig. 1.2: direct, earth bound and sky bound. Fig. 1.2b shows how a receiver picks up the transmitted data: the direct signal is received first, while the earth-bound and sky-bound signals arrive delayed. All three signals are attenuated, with the sky path suffering the most. Multipath interference between consecutively transmitted signals takes place if one signal is received while the previous signal is still being detected. In Fig. 1.2 this occurs when the symbol transmission rate is greater than 1/τ, where τ represents the transmission delay. Because bandwidth efficiency pushes data rates up, multipath interference occurs commonly. A small simulation of this effect appears below.
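A minimal sketch of multipath propagation modelled as an FIR channel, in the spirit of the slide above. The three tap gains and delays are illustrative assumptions, not values from the slides: tap 0 stands for the direct path, tap 1 for the earth-bound path, and tap 2 for the most-attenuated sky-bound path.

```python
import numpy as np

rng = np.random.default_rng(0)

symbols = rng.choice([-1.0, 1.0], size=10)   # BPSK symbols
channel = np.array([1.0, 0.5, 0.2])          # direct, earth-bound, sky-bound gains

received = np.convolve(symbols, channel)     # superposition of delayed paths
print("sent:    ", symbols)
print("received:", np.round(received[:len(symbols)], 2))
# Each received sample mixes the current symbol with delayed copies of
# earlier symbols -- that overlap is exactly inter-symbol interference.
```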
  • EQUALIZER: Equalization is the process of removing ISI and noise effects from the channel. The equalizer is located at the receiver end of the channel: it is an inverse filter placed at the front end of the receiver, and its transfer function is the inverse of the transfer function of the channel. Equalization is an iterative process of reducing the mean square error, i.e. the difference between the desired response and the output of the filter used in the equalizer.
  • TYPES OF EQUALIZERS: Equalizers are of two types: LINEAR EQUALIZERS and NON-LINEAR EQUALIZERS. Linear equalizers aim at reducing ISI in linear channels using algorithms such as Least Mean Square (LMS), Recursive Least Square (RLS) and normalized LMS (NLMS). Non-linear equalizers equalize non-linear channels; they mainly use Neural Network (NN) and Multilayer Perceptron (MLP) based algorithms for equalization.
  • Linear Adaptive Filters: An adaptive filter is a computational device that attempts to model the relationship between two signals in real time, in an iterative manner. Its output is compared to the desired signal, and the parameters of the adaptive filter are varied accordingly; for this reason it is known as a self-designing filter.
  • Applications of Adaptive Filters: Identification. Used to provide a linear model of an unknown plant. Parameters: u = input of adaptive filter = input to plant; y = output of adaptive filter; d = desired response = output of plant; e = d - y = estimation error. Applications: system identification.
  • Applications of Adaptive Filters: Inverse Modeling. Used to provide an inverse model of an unknown plant. Parameters: u = input of adaptive filter = output of plant; y = output of adaptive filter; d = desired response = delayed system input; e = d - y = estimation error. Applications: channel equalization.
  • The channel equalization model (figure; see the wiring sketch below).
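A minimal sketch of the inverse-modeling wiring from the slide above: the adaptive filter's input is the channel output, and the desired response is a delayed copy of the transmitted sequence. The channel taps, noise level and decision delay are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

t = rng.choice([-1.0, 1.0], size=1000)             # transmitted BPSK symbols
h = np.array([1.0, 0.5, 0.2])                      # unknown dispersive channel
r = np.convolve(t, h)[:len(t)]                     # channel output
r += 0.05 * rng.standard_normal(len(t))            # additive channel noise

delay = 2                                          # decision delay
u = r                                              # adaptive-filter input
d = np.concatenate([np.zeros(delay), t[:-delay]])  # desired = delayed input
# An adaptive filter trained so that y ~ d then approximates the inverse
# of the channel; the LMS loop shown later does exactly this.
```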
  • Stochastic Gradient Approach: This is the most commonly used family of adaptive filters. The cost function is defined as the mean-squared error, i.e. the squared difference between the filter output and the desired response. The approach is based on the method of steepest descent: move towards the minimum on the error surface, which requires the gradient of the error surface to be known. The most popular adaptation algorithm is LMS, derived from steepest descent; it does not require the gradient to be known, as the gradient is estimated at every iteration. Least-Mean-Square (LMS) update: w(n+1) = w(n) + µ · u(n) · e*(n), i.e. the updated value of the tap-weight vector equals the old value of the tap-weight vector plus the learning-rate parameter times the tap-input signal vector times the error signal.
  • LMS algorithm: Introduced by Widrow & Hoff in 1959. It is simple: no matrix calculations are involved in the adaptation. It belongs to the family of stochastic gradient algorithms, is an approximation of the steepest-descent method, and is based on the MMSE (Minimum Mean Square Error) criterion. The adaptive process contains two important signals: 1) the filtering process, producing the output signal, and 2) the desired signal (training sequence). The adaptive process is a recursive adjustment of the filter tap weights.
  • Least-Mean-Square (LMS) Algorithm continued: The LMS algorithm consists of two basic processes that are followed in adaptive equalization: Training, which refers to adapting to the training sequence, and Tracking, which keeps track of the changing characteristics of the channel.
  • LMS Algorithm Steps: Filter output: z(n) = Σ_{k=0}^{M-1} w_k*(n) u(n-k). Estimation error: e(n) = d(n) - z(n). Tap-weight adaptation: w_k(n+1) = w_k(n) + µ u(n-k) e*(n).
  • Derivation of the LMS MSE expression: Error: e(n) = x(n) - x̂(n). Squared error: E = (x(n) - x̂(n))². Using the minimum mean square error criterion, we differentiate the expression: dE/dw_i = d/dw_i (x(n) - x̂(n))². Applying the chain rule and substituting x̂(n) = Σ_i w_i s(n-i), we get dE/dw_i = -2 (x(n) - x̂(n)) s(n-i) = -2 e(n) s(n-i). From this we can derive an update equation for every new sample n using steepest descent: w_i(n+1) = w_i(n) - µ (dE/dw_i), so w_i(n+1) = w_i(n) + 2µ e(n) s(n-i), for i = 0, 1, 2, 3, ...
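A minimal LMS equalizer sketch following the update just derived, w(n+1) = w(n) + 2µ e(n) s(n-i). The filter length and step size are illustrative assumptions, and the u/d signals refer to the wiring sketch shown earlier.

```python
import numpy as np

def lms_equalize(u, d, num_taps=8, mu=0.01):
    """Train an FIR equalizer on input u against desired response d."""
    w = np.zeros(num_taps)
    y = np.zeros(len(u))
    e = np.zeros(len(u))
    for n in range(num_taps, len(u)):
        s = u[n - num_taps + 1:n + 1][::-1]   # most recent samples first
        y[n] = w @ s                          # filter output
        e[n] = d[n] - y[n]                    # estimation error
        w = w + 2 * mu * e[n] * s             # tap-weight adaptation
    return w, y, e

# Reusing u (received signal) and d (delayed symbols) from the earlier sketch:
# w, y, e = lms_equalize(u, d)
# np.mean(e[-100:] ** 2) then estimates the steady-state mean square error.
```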
  • Stability of LMS: The LMS algorithm is convergent in the mean square if and only if the step-size parameter satisfies 0 < µ < 1/λ_max, where λ_max is the largest eigenvalue of the correlation matrix of the input data. A more practical test for stability is 0 < µ < 1/(input signal power). The value of the step size has to be a trade-off between fast convergence and low steady-state misadjustment: larger values of step size increase the adaptation rate (faster adaptation) but also increase the residual mean-squared error.
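A small sketch of the practical stability check above: bound the step size by the inverse of the estimated input power. The test signal is an assumed stand-in for the received sequence, and scaling the bound by the filter length (so that it reflects total tap-input power rather than per-sample power) is a common conservative refinement, not something stated on the slide.

```python
import numpy as np

rng = np.random.default_rng(2)
u = np.convolve(rng.choice([-1.0, 1.0], size=5000),
                np.array([1.0, 0.5, 0.2]))[:5000]

num_taps = 8
signal_power = np.mean(u ** 2)                 # estimate of E[u^2]
mu_max = 1.0 / (num_taps * signal_power)       # tap-input power ~ M * E[u^2]
print(f"signal power ~ {signal_power:.3f}, choose mu below {mu_max:.4f}")
# A common choice is a fraction of this bound, e.g. mu_max / 10, trading
# slower convergence for lower steady-state misadjustment.
```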
  • LMS Pros & Cons: Advantages: simplicity of implementation; it does not neglect the noise, unlike the zero-forcing equalizer; stable and robust performance under different signal conditions. Disadvantages: slow convergence; it demands the use of a training sequence as a reference, thus decreasing the communication bandwidth.
  • NLMS: Normalised LMS algorithm. It is mainly intended to provide better performance than LMS, whose convergence is slow. It uses a normalization technique to provide a variable step size: the step size µ is divided by the instantaneous signal power, providing more stability and faster convergence. It is equivalent to running the LMS recursion on a normalised sample of inputs every time the recursion (the NLMS operation) is carried out. The step size for the current input vector is calculated as µ(n) = 1/(xᵀ(n) x(n)), and the filter tap weights are updated in preparation for the next iteration as w(n+1) = w(n) + µ(n) e(n) x(n).
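A minimal NLMS sketch: the same loop as LMS, but with the step size normalised by the instantaneous input power xᵀ(n) x(n). The small eps term is an assumed regulariser to avoid division by zero and is not part of the slide's formula.

```python
import numpy as np

def nlms_equalize(u, d, num_taps=8, mu=0.5, eps=1e-8):
    """Train an FIR equalizer with a power-normalised step size."""
    w = np.zeros(num_taps)
    e = np.zeros(len(u))
    for n in range(num_taps, len(u)):
        x = u[n - num_taps + 1:n + 1][::-1]       # current input vector
        e[n] = d[n] - w @ x                       # estimation error
        w = w + (mu / (x @ x + eps)) * e[n] * x   # normalised update
    return w, e

# With the same u and d as before, NLMS typically converges in fewer
# samples than plain LMS and is less sensitive to the input scaling.
```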
  • Results for the LMS algorithm: Convergence is faster with increased step size. The plot is for noise = 30 dB.
  • Results for the NLMS algorithm: Convergence is faster in the case of the NLMS algorithm, and it provides a more stable output.
  • NON-LINEAR CHANNEL EQUALISATION
  • Need for a Non-Linear Equalizer: Linear equalizers do not perform well on channels having deep spectral nulls in the passband. To compensate for the distortion, a linear equalizer places too much gain in the vicinity of the spectral nulls, thereby enhancing the noise present at those frequencies. The BER is better with a non-linear channel equalizer. A linear equalizer treats equalization as an inverse problem; a non-linear equalizer treats it as a pattern classification problem.
  • Non-Linear Channel Equalizer: t_k denotes a sequence of T-spaced complex symbols of a BPSK constellation, where 1/T denotes the symbol rate and k denotes the discrete time index. A widely used model for a linear dispersive channel is an FIR filter whose output at the k-th instant is given by a_k = Σ_{i=0}^{Nh-1} h_i t_{k-i}. (Figure: schematic diagram of a non-linear wireless digital communication system with channel equalizer.)
  • Continued: here h_i denotes the FIR filter weights and Nh denotes the FIR order. Considering the channel to be a non-linear one, the NL block introduces the channel non-linearity at the filter output. The transmitted signal t_k, after being passed through the non-linear channel and added with the additive noise, arrives at the receiver; the received signal at the k-th time instant is denoted by r_k. The purpose of the equalizer attached at the receiver front end is to recover the transmitted sequence t_k or its delayed version t_{k-d}, where d is the propagation delay associated with the physical channel.
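A minimal sketch of the non-linear channel model above: an FIR filter followed by a memoryless non-linearity (the NL block) and additive noise. The tanh non-linearity and the coefficient values are illustrative assumptions standing in for the unspecified NL block.

```python
import numpy as np

rng = np.random.default_rng(3)

t = rng.choice([-1.0, 1.0], size=1000)        # BPSK symbols t_k
h = np.array([0.5, 1.0, 0.5])                 # FIR weights h_i, order Nh = 3

a = np.convolve(t, h)[:len(t)]                # a_k = sum_i h_i * t_{k-i}
nl = np.tanh(a)                               # NL block (assumed saturation)
r = nl + 0.1 * rng.standard_normal(len(t))    # received signal r_k

# A linear equalizer would have to invert both the FIR part and the
# non-linearity, which is why classifiers such as MLPs are used instead.
```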
  • Neural Network: Work on neural networks started in the 1800s as an effort to describe how the human mind performs. It was applied to computational models with Turing's B-type machines and the perceptron. A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use.
  • Continued: Today in general form a neural network is a machine that is designed by using electronic components or is simulated in software on a digital computer. To achieve good performance, neural networks employ a massive interconnection of simple computing cells referred to as “Neurons” or “processing units” The procedure is called a learning algorithm, the function of which is to modify the synaptic weights of the network in an orderly fashion to attain a desired design objective. McCulloch and Pitts have developed the neural networks for different computing machines.
  • Artificial Neural Network: Artificial Neural Networks (ANNs) have become a powerful tool for many complex applications, including function approximation, nonlinear system identification, motor control, pattern recognition, adaptive channel equalization and optimization. An ANN is capable of performing a nonlinear mapping between the input and output spaces, due to its large parallel interconnection between different layers and its nonlinear processing characteristics.
  • Continued: An artificial neuron basically consists of a computing element that performs the weighted sum of the input signal and the connecting weight. The weighted sum is added with the bias called threshold and the resultant signal is passed through a nonlinear activation function. Common types of activation functions are sigmoid and hyperbolic tangent. Each neuron is associated with three parameters whose learning can be adjusted. These are the connecting weights, the bias and the slope of the nonlinear function. For the structural point of view a NN may be single layer or it may be multilayer
  • Multi-layer Perceptron: A single-level connection of McCulloch-Pitts neurons is called a single-layer feedforward network. Such a network is capable of linearly separating the input vectors into pattern classes by a hyperplane. Similarly, many perceptrons can be connected in layers to form an MLP network, in which the input signal propagates through the network in a forward direction, on a layer-by-layer basis. This network has been applied successfully to solve diverse problems.
  • Continued… Generally MLP is trained using popular error back- propagation algorithm. The scheme of MLP using four layers is shown. Si represent the inputs s1, s2, ….. , sn to the network, and yk represents the output of the final layer of the neural network. The connecting weights between the input to the first hidden layer, first to second hidden layer and the second hidden layer to the output layers are represented by W i ,W ji ,W kj respectively. The final output layer of the MLP may be expressed as