Adaptive equalization


  1. ADAPTIVE CHANNEL EQUALIZATION. College of Technology, Pantnagar, G.B. Pant University of Agriculture and Technology, Pantnagar. Kamal Bhatt, M.Tech Electronics & Communication Engg., ID-44036
  2. NEURAL NETWORK. Neural networks are simplified models of biological neuron systems. Neural networks are typically organized in layers. Layers are made up of a number of interconnected nodes, each of which contains an activation function. Patterns are presented to the network via the input layer, which communicates with one or more hidden layers where the actual processing is done via a system of weighted connections. The hidden layers then link to an output layer where the answer is output.
  3. MODEL OF ARTIFICIAL NEURON. An appropriate model/simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems. The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution.
  4. LEARNING IN A SIMPLE NEURON. Perceptron Learning Algorithm: 1. Initialize the weights. 2. Present a pattern and its target output. 3. Compute the output: y = f[ Σ_{i=0}^{n} w_i x_i ]. 4. Update the weights: w_i(t+1) = w_i(t) + Δw_i. Repeat from step 2 until the error reaches an acceptable level; a sketch of these steps follows below.
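
A minimal Python sketch of these four steps (the bipolar step activation, learning rate, and stopping rule are illustrative assumptions, not taken from the slides):

```python
import numpy as np

def train_perceptron(patterns, targets, lr=0.1, epochs=100):
    """Perceptron learning: y = f(sum_i w_i x_i); w_i(t+1) = w_i(t) + dw_i."""
    # Prepend a constant 1 to each pattern so w[0] acts as the bias (x_0 = 1).
    X = np.hstack([np.ones((len(patterns), 1)), np.asarray(patterns, float)])
    w = np.zeros(X.shape[1])                  # 1. initialize weights
    for _ in range(epochs):
        mistakes = 0
        for x, d in zip(X, targets):          # 2. present a pattern and its target
            y = 1.0 if w @ x >= 0 else -1.0   # 3. compute output with a bipolar step
            if y != d:
                w += lr * (d - y) * x         # 4. update weights on error
                mistakes += 1
        if mistakes == 0:                     # repeat until error is acceptable
            break
    return w

# Example: learn the linearly separable AND mapping with +/-1 targets.
w = train_perceptron([[-1, -1], [-1, 1], [1, -1], [1, 1]], [-1, -1, -1, 1])
```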
  5. NEURAL NETWORK ARCHITECTURE. An artificial neural network is defined as a data processing system consisting of a large number of interconnected processing elements or artificial neurons. There are three fundamentally different classes of neural networks: single-layer feedforward networks, multilayer feedforward networks, and recurrent networks.
  6. Application. The tasks to which artificial neural networks are applied tend to fall within the following broad categories: function approximation, or regression analysis, including time series prediction and modeling; classification, including pattern and sequence recognition, novelty detection and sequential decision making; and data processing, including filtering, clustering, blind signal separation and compression.
  7. Equalization History. The LMS algorithm by Widrow and Hoff in 1960 paved the way for the development of adaptive filters used for equalisation. Lucky used this algorithm in 1965 to design adaptive channel equalisers. The Maximum Likelihood Sequence Estimator (MLSE) equaliser and its Viterbi implementation followed in the 1970s. Multilayer perceptron (MLP) based symbol-by-symbol equalisers were developed in 1990.
  8. During 1989 to 1995 some efficient nonlinear artificial neural network equalizer structures for channel equalization were proposed; these include the Chebyshev Neural Network and the Functional Link ANN. In 2002 Kevin M. Passino described optimization foraging theory in the article "Biomimicry of Bacterial Foraging". More recently, in 2008, a rank-based statistics approach known as the Wilcoxon learning method was proposed for signal processing applications, to mitigate linear and nonlinear learning problems.
  9. Digital Communication Systems
  10. 10. EqualizersAdaptive channel equalizers have played an important role indigital communication systems.Equalizer works like an inversed filter which is placed atthe front end of the receiver. Its transfer function is inverse tothe transfer function of the associated channel , is able toreduce the error causes between the desired and estimatedsignal.This is achieved through a process of training. During thisperiod the transmitter transmits a fixed data sequence and thereceiver has a copy of the same.
  11. We use equalizers to compensate the received signals, which are corrupted by the noise, interference and signal power attenuation introduced by communication channels during transmission. Linear transversal filters (LTF) are commonly used in the design of channel equalizers, but linear equalizers fail to work well when transmitted signals have encountered severe nonlinear distortion. A neural network (NN) has the capability of performing complicated mappings from input to output signals, which makes NN-based equalizers a potentially suitable solution for dealing with nonlinear channel distortion.
  12. The problem of equalization may be treated as a problem of signal classification, so neural networks (NN) are quite promising candidates because they can produce arbitrarily complex decision regions. Studies performed during the last decade have established the superiority of neural equalizers over traditional equalizers under high nonlinear distortion and rapidly varying signals. Several different neural equalizer architectures have been developed, mostly combinations of a conventional linear transversal equalizer (LTE) and a neural network. The LTE eliminates the linear distortions, such as ISI, so the NN can focus on compensating the nonlinearities. Structures that have been studied include an LTE with a multilayer perceptron (MLP), an LTE with a radial basis function network (RBF), and an LTE with a recurrent neural network.
  13. MLP networks are sometimes plagued by long training times and may be trapped at bad local minima. RBF networks often provide a faster and more robust solution to the equalization problem. In addition, the RBF neural network has a structure similar to the optimal Bayesian symbol decision, so the RBF is an ideal processing structure for implementing the optimal Bayesian equalizer. The RBF performances are better than those of the LTE and MLP equalizers. Several learning algorithms have been proposed to update the RBF parameters; the most popular consists of an unsupervised learning rule for the centers of the hidden neurons and a supervised learning rule for the weights of the output neurons.
  14. The centers are generally updated using the k-means clustering algorithm, which consists of computing the squared distance between the input vector and the centers, choosing the minimum squared distance, and moving the corresponding center closer to the input vector. The k-means algorithm has some potential problems: classification depends on the initial values of the centers of the RBF, on the type of distance chosen, and on the number of classes. If a center is inappropriately chosen it may never be updated, so it may never represent a class. Here a new competitive method to update the RBF centers is proposed, which rewards the winning neuron and penalizes the second winner, named the rival; one such update is sketched below.
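
A sketch of one competitive update of this kind, assuming Euclidean distance and illustrative learning rates (the exact rates and penalty scheme of the proposed method are not given in the slides):

```python
import numpy as np

def update_centers(centers, x, lr_win=0.05, lr_rival=0.005):
    """Move the winning center toward x and push the rival slightly away."""
    d2 = np.sum((centers - x) ** 2, axis=1)    # squared distances to each center
    win, rival = np.argsort(d2)[:2]            # closest center and second winner
    centers[win] += lr_win * (x - centers[win])        # reward the winner
    centers[rival] -= lr_rival * (x - centers[rival])  # penalize the rival
    return centers
```

With lr_rival set to zero this reduces to the plain k-means style move toward the input; the small rival penalty is what keeps badly initialized centers from being permanently captured by one class.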
  15. Gradient Based Adaptive Algorithm. An adaptive algorithm is a procedure for adjusting the parameters of an adaptive filter to minimize a cost function chosen for the task at hand.
  16. In this case, the parameters in W(t) correspond to the impulse response values of the filter at time t. We can write the output signal y(t) as y(t) = W^T(t) s(t). The general form of an adaptive FIR filtering algorithm is W(t+1) = W(t) + μ(t) G(e(t), s(t), Φ(t)), where G(·) is a particular vector-valued nonlinear function (which depends on the cost function chosen), μ(t) is a step size parameter, e(t) and s(t) are the error signal and input signal vector, respectively, and Φ(t) is a vector of states that stores pertinent information about the characteristics of the input and error signals.
  17. The Mean-Squared Error (MSE) cost function can be defined as J_MSE(t) = (1/2) E[e²(t)]. W_MSE(t) can be found from the solution to the system of equations ∂J_MSE(t)/∂W(t) = 0. The method of steepest descent is an optimization procedure for minimizing the cost function J(t) with respect to a set of adjustable parameters W(t). This procedure adjusts each parameter of the system according to the relationship W(t+1) = W(t) − μ ∂J(t)/∂W(t).
  18. Linear Equalization Algorithms
  19. LMS ALGORITHM • In the family of stochastic gradient algorithms • An approximation of the steepest-descent method • Based on the MMSE (Minimum Mean Square Error) criterion • An adaptive process containing two input signals: 1) the filtering process, producing the output signal; 2) the desired signal (training sequence) • Adaptive process: recursive adjustment of the filter tap weights
  20. LMS ALGORITHM STEPS. Filter output: y(n) = Σ_{k=0}^{M−1} w_k*(n) u(n−k). Estimation error: e(n) = d(n) − y(n). Tap-weight adaptation: w_k(n+1) = w_k(n) + μ u(n−k) e*(n); that is, the update value of the tap-weight vector equals its old value plus the learning-rate parameter times the tap-input vector times the error signal. A sketch of this loop follows below.
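
A real-valued sketch of these steps (filter length, step size, and the training setup are illustrative; for complex signals the conjugates shown in the equations apply):

```python
import numpy as np

def lms_equalizer(u, d, M=11, mu=0.01):
    """Adapt an M-tap equalizer over a training sequence with LMS."""
    u, d = np.asarray(u, float), np.asarray(d, float)
    w = np.zeros(M)
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_vec = u[n - M + 1:n + 1][::-1]   # tap-input vector [u(n), ..., u(n-M+1)]
        y = w @ u_vec                      # filter output y(n)
        e[n] = d[n] - y                    # estimation error e(n) = d(n) - y(n)
        w += mu * e[n] * u_vec             # tap-weight adaptation
    return w, e
```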
  21. Recursive Least Squares Algorithm. The recursive least squares (RLS) algorithm is another algorithm for determining the coefficients of an adaptive filter. In contrast to the LMS algorithm, the RLS algorithm uses information from all past input samples (and not only from the current tap-input samples) to estimate the (inverse of the) autocorrelation matrix of the input vector. To decrease the influence of input samples from the far past, a weighting factor for the influence of each sample is used. This cost function can be represented as J(n) = Σ_{i=1}^{n} λ^{n−i} e²(i), where λ (0 < λ ≤ 1) is the forgetting factor; one update step is sketched below.
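
A sketch of one RLS update under this exponentially weighted cost (the initialization P = δI with a large δ is the usual convention, assumed here):

```python
import numpy as np

def rls_step(w, P, u_vec, d, lam=0.99):
    """One RLS update with forgetting factor lam.

    P tracks the inverse of the exponentially weighted input
    autocorrelation matrix; initialize P = delta * np.eye(M), delta large.
    """
    k = P @ u_vec / (lam + u_vec @ P @ u_vec)  # gain vector
    e = d - w @ u_vec                          # a priori estimation error
    w = w + k * e                              # coefficient update
    P = (P - np.outer(k, u_vec @ P)) / lam     # inverse-correlation update
    return w, P, e
```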
  22. Nonlinear Equalizers
  23. Multilayer Perceptron Network. In 1958, Rosenblatt demonstrated some practical applications using the perceptron. The perceptron is a single-level connection of McCulloch-Pitts neurons, called a single-layer feedforward network. The network is capable of linearly separating the input vectors into pattern classes by a hyperplane. Similarly, many perceptrons can be connected in layers to provide an MLP network; the input signal propagates through the network in a forward direction, on a layer-by-layer basis. This network has been applied successfully to solve diverse problems.
  24. MLP Neural Network Using BP Algorithm
  25. Generally the MLP is trained using the popular error back-propagation algorithm. s_i represents the inputs s_1, s_2, ..., s_n to the network, and y_k represents the output of the final layer of the neural network. The connecting weights between the input and the first hidden layer, the first and second hidden layers, and the second hidden layer and the output layer are represented by separate weight matrices. The final output of the MLP may then be expressed as a nested composition of these weighted sums passed through the activation function of each layer.
  26. The final output y_k(t) at the output of neuron k is compared with the desired output d(t), and the resulting error signal e(t) is obtained as e(t) = d(t) − y_k(t). The instantaneous value of the total error energy is obtained by summing all error signals over all neurons in the output layer, that is ξ(t) = (1/2) Σ_k e_k²(t). This error signal is used to update the weights and thresholds of the hidden layers as well as the output layer, each weight moving against the gradient of ξ(t); one training step is sketched below.
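
A sketch of one back-propagation step for a single-hidden-layer MLP with tanh activations (layer sizes, activation, and learning rate are illustrative; biases are omitted for brevity):

```python
import numpy as np

def bp_step(x, d, W1, W2, eta=0.05):
    """One gradient step on xi = 0.5 * sum_k e_k(t)^2 for a tanh MLP."""
    h = np.tanh(W1 @ x)                              # hidden-layer outputs
    y = np.tanh(W2 @ h)                              # final outputs y_k(t)
    e = d - y                                        # error e(t) = d(t) - y(t)
    delta_out = e * (1.0 - y ** 2)                   # output-layer local gradients
    delta_hid = (W2.T @ delta_out) * (1.0 - h ** 2)  # backpropagated gradients
    W2 = W2 + eta * np.outer(delta_out, h)           # output-layer weight update
    W1 = W1 + eta * np.outer(delta_hid, x)           # hidden-layer weight update
    return W1, W2, e
```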
  27. Functional Link Artificial Neural Network. FLANN is a novel single-layer ANN in which the original input pattern is expanded to a higher-dimensional space using nonlinear functions, which provides arbitrarily complex decision regions by generating nonlinear decision boundaries. The functional expansion block is what makes this structure usable for the channel equalization process. Each element undergoes nonlinear expansion to form M elements such that the resultant matrix has the dimension N×M. The functional expansion of the element x_k by power series expansion produces the terms x_k, x_k², x_k³, and so on.
  28. At the t-th iteration the error signal e(t) can be computed as e(t) = d(t) − y(t). The weight vector can be updated by the least mean square (LMS) algorithm as w(t+1) = w(t) + μ e(t) φ(t), where φ(t) is the functionally expanded input pattern; see the sketch below.
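
A sketch combining the power-series expansion with this LMS update (the expansion order, linear output, and step size are illustrative assumptions):

```python
import numpy as np

def power_expand(x_vec, order=3):
    """Expand each element x_k into x_k, x_k^2, ..., x_k^order."""
    x_vec = np.asarray(x_vec, float)
    return np.concatenate([x_vec ** p for p in range(1, order + 1)])

def flann_step(w, x_vec, d, mu=0.01, order=3):
    """One LMS update of the single-layer FLANN on the expanded pattern."""
    phi = power_expand(x_vec, order)   # functional expansion of the input
    y = w @ phi                        # linear single-layer output
    e = d - y                          # error signal e(t) = d(t) - y(t)
    w = w + mu * e * phi               # LMS weight update
    return w, e
```

The expansion is what lets a single weight layer carve nonlinear decision boundaries in the original input space: the boundary is linear in phi but polynomial in x.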
  29. BER performance of the FLANN equalizer compared with LMS- and RLS-based equalizers
  30. Chebyshev Artificial Neural Network. The Chebyshev artificial neural network is similar to the FLANN. The difference is that in a FLANN the input signal is expanded to a higher dimension using functional expansion, while in the ChNN the input is expanded using Chebyshev polynomials. As in the FLANN network, the ChNN weights are updated by the LMS algorithm. The Chebyshev polynomials are generated using the recursive formula T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x), with T_0(x) = 1 and T_1(x) = x; a sketch of this expansion follows below.
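
A sketch of the Chebyshev expansion via this recursion (the expansion order is illustrative; inputs are assumed scaled to [-1, 1], where the polynomials are well behaved):

```python
import numpy as np

def chebyshev_expand(x_vec, order=4):
    """Expand inputs with T_0..T_order via the Chebyshev recursion."""
    x_vec = np.asarray(x_vec, float)
    terms = [np.ones_like(x_vec), x_vec]    # T_0(x) = 1, T_1(x) = x
    for _ in range(2, order + 1):
        # T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)
        terms.append(2 * x_vec * terms[-1] - terms[-2])
    return np.concatenate(terms)
```

This drops into the FLANN update above unchanged: replace power_expand with chebyshev_expand and the same LMS step applies.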
  31. BER performance of the ChNN equalizer compared with FLANN-, LMS- and RLS-based equalizers
  32. Radial Basis Function Equalizer
  33. The centres of the RBF network are updated using the k-means clustering algorithm. This RBF structure can be extended for multidimensional output as well. The Gaussian kernel is the most popular form of kernel function for equalization applications; it can be represented as φ(‖x − c_i‖) = exp(−‖x − c_i‖² / (2σ_r²)). This network can implement a mapping F_rbf : R^m → R by the function F_rbf(x) = Σ_i ω_i φ(‖x − c_i‖). Training of the RBF network involves setting the parameters for the centres c_i, the spread σ_r and the linear weights ω_i. The RBF spread parameter σ_r² is set to the channel noise variance σ_n²; this provides the optimum RBF network as an equaliser. A sketch of the resulting mapping is given below.
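
A sketch of this mapping (center placement and weight training are assumed done elsewhere, e.g. by the clustering and supervised stages described above):

```python
import numpy as np

def rbf_output(x, centers, weights, sigma_r):
    """F_rbf(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma_r**2))."""
    d2 = np.sum((np.asarray(centers) - np.asarray(x)) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * sigma_r ** 2))   # Gaussian kernel activations
    return np.asarray(weights) @ phi           # linear combination at the output
```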
  34. BER performance of the RBF equalizer compared with ChNN, FLANN, LMS and RLS equalizers
  35. 35. ConclusionWe observed that RLS provides faster convergencerate than LMS equalizer.We observed that MLP equalizer is a feed-forwardnetwork trained using BP algorithm, it performed betterthan the linear equalizer, but it has a drawback of slowconvergence rate, depending upon the number of nodes andlayers.Optimal equalizer based on maximum a-posteriorprobability (MAP) criterion can be implemented using Radialbasis function (RBF) network.RBF equalizer mitigation all the ISI, CCI and BNinterference and provide minimum BER plot. But it has onedraw back that if input is increased the number of centres ofthe network increases and makes the network morecomplicated.
  36. REFERENCES
• Haykin, S., "Adaptive Filter Theory", Prentice Hall, 2005.
• Haykin, S., "Neural Networks", PHI, 2003.
• Kavita Burse, R. N. Yadav, and S. C. Shrivastava, "Channel Equalization Using Neural Networks: A Review", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 40, No. 3, May 2010.
• Jagdish C. Patra, Ranendra N. Pal, Rameswar Baliarsingh, and Ganapati Panda, "Nonlinear Channel Equalization for QAM Constellation Using Artificial Neural Network", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 29, No. 2, April 1999.
• Amalendu Patnaik, Dimitrios E. Anagnostou, Rabindra K. Mishra, Christos G. Christodoulou, and J. C. Lyke, "Applications of Neural Networks in Wireless Communications", IEEE Antennas and Propagation Magazine, Vol. 46, No. 3, June 2004.
• R. Rojas, "Neural Networks", Springer-Verlag, Berlin, 1996.
• http://www.geocities.com/SiliconValley/Lakes/6007/Neural.htm
