
Equalization

  1. Introduction To Equalization. Presented by: Bhabendu Kumar Bhakat, 15304002, M.Tech (ECE)
  2. Table of Contents • Communication system model • Need for equalization • Zero-forcing equalizer (ZFE) • MSE criterion • Least squares (LS) • Least mean squares (LMS) • Blind equalization: concepts • Turbo equalization: concepts • MLSE: concepts
  3. Basic Digital Communication System (block diagram): Information source → Pulse generator → Transmit filter $H_T(f)$, output $X(t)$ → Channel $H_C(f)$ → + Channel noise $n(t)$ → Receiver filter $H_R(f)$ → A/D → Digital processing, received signal $Y(t)$
  4. Basic Communication System: Transmit filter $H_T(f)$ → Channel $H_C(f)$ → Receiver filter $H_R(f)$. The received signal is the transmitted signal convolved with the channel, plus AWGN (neglecting $H_{Tx}$, $H_{Rx}$): $Y(t) = \sum_k A_k h_c(t - kT_b - t_d) + n_0(t)$. Sampling at $t_m$: $Y(t_m) = A_m + \sum_{k \neq m} A_k h_c[(m-k)T_b] + n_0(t_m)$, where the middle term is ISI: Inter-Symbol Interference.
  5. Explanation of ISI (figure: a time-domain pulse and its Fourier transform, the band-limited channel response, and the resulting smearing of pulses across symbol intervals $T_b, 2T_b, \ldots, 6T_b$)
  6. Reasons for ISI • The channel is band-limited in nature. Physics: e.g. parasitic capacitance in twisted pairs; a limited frequency response implies an unlimited time response. • The Tx filter might add ISI when channel spacing is critical. • The channel has multi-path reflections.
  7. Channel Model • The channel is unknown • The channel is usually modeled as a tap-delay-line (FIR) filter: delay elements D feed taps $h(0), h(1), \ldots, h(N)$, which are summed to give $y(n) = \sum_{k=0}^{N} h(k)\,x(n-k)$
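As a quick illustration of the tap-delay-line model above, here is a minimal NumPy sketch. The tap values and noise level are invented for illustration; real taps would be measured (and vary randomly, as the next slide notes):

```python
import numpy as np

# Hypothetical channel taps h(0)..h(3), assumed for this sketch.
h = np.array([0.74, 0.26, 0.16, 0.08])

def tap_delay_line(x, h):
    """FIR channel model: y(n) = sum_k h(k) * x(n - k)."""
    return np.convolve(x, h)[:len(x)]

x = np.sign(np.random.randn(1000))        # random BPSK symbols (+/-1)
y = tap_delay_line(x, h)
y = y + 0.05 * np.random.randn(len(y))    # add channel noise, per the system model
```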
  8. Example of Measured Channels: the variation of the amplitude of the channel taps is random (changing multipath) and is usually modeled as Rayleigh-distributed in typical urban areas.
  9. Example of channel variation over time (figure)
  10. Equalizer: equalizes the channel, so that the received signal looks as if it passed through a delta response: $|G_E(f)| = \frac{1}{|G_C(f)|}$, $\arg(G_E(f)) = -\arg(G_C(f))$ $\Rightarrow$ $G_E(f) \cdot G_C(f) = 1$ $\Rightarrow$ $h_{total}(t) = \delta(t)$
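A minimal sketch of this frequency-domain inversion, reusing the illustrative taps from the channel-model sketch above (assumed values, not from the slides):

```python
import numpy as np

h = np.array([0.74, 0.26, 0.16, 0.08])     # illustrative channel taps
Gc = np.fft.fft(h, 64)                     # channel response G_C(f) on an FFT grid
Ge = 1.0 / Gc                              # |G_E| = 1/|G_C|, arg G_E = -arg G_C
cascade = np.real(np.fft.ifft(Ge * Gc))    # equalizer * channel in the time domain
print(np.round(cascade[:4], 3))            # ~ [1, 0, 0, 0]: a delta response
```

Note that the division blows up wherever $G_C(f)$ is near zero, which is exactly the noise-enhancement problem raised for the zero-forcing equalizer below.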
  11. Need For Equalization • Need for equalization: overcome ISI degradation. • Need for adaptive equalization: the channel changes in time. • Objective: find the inverse of the channel response, to present a 'delta' channel to the Rx. *Applications (or standards) specify the channel types the receiver must cope with.
  12. Zero-Forcing Equalizer (according to the peak distortion criterion): Tx → Ch → Eq. Force $q(mT) = \sum_{n=-2}^{2} C_n\, x(mT - n\tau) = 1$ for $m = 0$ and $= 0$ for $m = \pm 1, \pm 2$ (no ISI). Example, 5-tap equalizer at $2/T$ sample rate ($\tau = T/2$): the equalizer input is described by the matrix $X$ with entries $X_{mn} = x(mT - nT/2)$, $m, n = -2, \ldots, 2$.
  13. Equalizer taps as a vector $C = [c_{-2}, c_{-1}, c_0, c_1, c_2]^T$, desired signal as a vector $q = [0, 0, 1, 0, 0]^T$ $\Rightarrow$ $XC = q$ $\Rightarrow$ $C_{opt} = X^{-1} q$. Disadvantage: ignores the presence of additive noise (noise enhancement).
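A sketch of solving $XC = q$ for the 5-tap example. The pulse shape $x(t)$ below is a made-up decaying sinc, for illustration only; a real design would use measured channel samples:

```python
import numpy as np

T = 1.0
tau = T / 2                                   # 2/T sampling (fractionally spaced)

def x(t):
    # Hypothetical channel pulse, assumed for this sketch.
    return np.sinc(t / T) * 0.9 ** abs(2 * t / T)

m = n = np.arange(-2, 3)                      # 5 taps, 5 forcing constraints
X = np.array([[x(mi * T - ni * tau) for ni in n] for mi in m])
q = np.array([0.0, 0.0, 1.0, 0.0, 0.0])       # force a delta at m = 0
C_opt = np.linalg.solve(X, q)                 # C_opt = X^-1 q
```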
  14. MSE Criterion: $J(\theta) = \sum_{n=0}^{N-1} (x[n] - h[n]\theta)^2$, the mean square error between the received signal $x[n]$ and the desired signal $h[n]$ filtered by the equalizer, where $\theta$ is the unknown parameter (equalizer filter response). Solved by the LS algorithm or the LMS algorithm.
  15. LS • Least squares method: – Unbiased estimator – Exhibits minimum variance (optimal) – No probabilistic assumptions (only a signal model) – Presented by Gauss (1795) in studies of planetary motions
  16. LS - Theory. 1. Signal model: $s[n] = \sum_m h[n-m]\,\theta[m]$. 2. In matrix form: $s = H\theta$. 3. MSE: $J(\theta) = \sum_{n=0}^{N-1} (x[n] - h[n]\theta)^2$. 4. Taking the derivative with respect to $\theta$ and setting it to zero: $\hat{\theta} = \frac{\sum_{n=0}^{N-1} x[n]\,h[n]}{\sum_{n=0}^{N-1} h[n]^2}$
  17. The minimum LS error is obtained by substituting 4 into 3: $J_{min} = J(\hat{\theta}) = \sum_{n=0}^{N-1} (x[n] - h[n]\hat{\theta})^2 = \sum_n x[n](x[n] - h[n]\hat{\theta}) - \hat{\theta}\sum_n h[n](x[n] - h[n]\hat{\theta})$; the second sum vanishes by substituting $\hat{\theta}$, so $J_{min} = \sum_{n=0}^{N-1} x[n]^2 - \frac{\left(\sum_{n=0}^{N-1} x[n]h[n]\right)^2}{\sum_{n=0}^{N-1} h[n]^2}$, i.e. the energy of the original signal minus the energy of the fitted signal. If $x[n] = \text{signal} + w[n]$ and the noise is small enough (SNR large enough): $J_{min} \approx 0$.
  18. Finding the LS solution ($H$: $N \times p$ observation matrix, $s = (s[0], s[1], \ldots, s[N-1])^T = H\theta$): $J(\theta) = \sum_{n=0}^{N-1} (x[n] - h[n]\theta)^2 = (x - H\theta)^T (x - H\theta) = x^T x - 2 x^T H \theta + \theta^T H^T H \theta$; $\frac{\partial J}{\partial \theta} = -2 H^T x + 2 H^T H \theta \Rightarrow \hat{\theta} = (H^T H)^{-1} H^T x$
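A small sketch of the vector LS solution; the sizes and the "true" parameter vector are arbitrary illustration values:

```python
import numpy as np

N, p = 100, 4                                   # N observations, p parameters
H = np.random.randn(N, p)                       # observation matrix (N x p)
theta_true = np.array([1.0, -0.5, 0.25, 0.1])   # assumed for the demo
x = H @ theta_true + 0.01 * np.random.randn(N)  # x = H*theta + noise

# theta_hat = (H^T H)^-1 H^T x; lstsq solves this without forming the inverse.
theta_hat, *_ = np.linalg.lstsq(H, x, rcond=None)
```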
  19. LS: Pros & Cons • Advantages: optimal approximation of the channel; once calculated, it can feed the equalizer taps. • Disadvantages: heavy processing (due to matrix inversion, which is itself a challenge); not adaptive (calculated every once in a while, so not good for fast-varying channels). • An adaptive equalizer is required when the channel is time-variant (changes in time), in order to adjust the equalizer filter tap weights according to the instantaneous channel properties.
  20. LEAST-MEAN-SQUARE ALGORITHM. Contents: • Introduction: approximating the steepest-descent algorithm • Steepest descent method • Least-mean-square algorithm • LMS algorithm convergence and stability • Numerical example of channel equalization using LMS • Summary
  21. SYSTEM BLOCK USING THE LMS. u[n] = input signal from the channel; d[n] = desired response; H[n] = some training sequence generator; e[n] = error between A) the desired response and B) the equalizer FIR filter output; W = FIR filter tap-weights vector
  22. STEEPEST DESCENT METHOD • The steepest descent algorithm is a gradient-based method which employs a recursive solution to the problem (cost function). • The current equalizer taps vector is W(n) and the next equalizer taps vector W(n+1) is estimated by the approximation: $W[n+1] = W[n] + 0.5\,\mu\,(-\nabla J[n])$ • The gradient is a vector pointing in the direction of the change in filter coefficients that would cause the greatest increase in the error signal. Because the goal is to minimize the error, the filter coefficients are updated in the direction opposite to the gradient; that is why the gradient term is negated. • The constant μ is the step size. After repeatedly adjusting each coefficient in the direction opposite to the gradient of the error, the adaptive filter should converge.
  23. STEEPEST DESCENT EXAMPLE • Given the function $y(C_1, C_2) = C_1^2 + C_2^2$, we need to obtain the vector $[C_1, C_2]$ that gives the absolute minimum. • It is obvious that $C_1 = C_2 = 0$ gives the minimum. Now let's find the solution by the steepest descent method.
  24. STEEPEST DESCENT EXAMPLE • We start by assuming ($C_1 = 5$, $C_2 = 7$). • We select the constant μ. If it is too big, we miss the minimum; if it is too small, it takes a long time to get to the minimum. We select μ = 0.1. • The gradient vector is: $\nabla y = \begin{bmatrix} \partial y/\partial C_1 \\ \partial y/\partial C_2 \end{bmatrix} = \begin{bmatrix} 2C_1 \\ 2C_2 \end{bmatrix}$ • So our iterative equation is: $\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}^{[n+1]} = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}^{[n]} - 0.5\,\mu\,\nabla y = 0.9 \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}^{[n]}$
  25. STEEPEST DESCENT EXAMPLE. Iteration 1: $[C_1, C_2] = [5, 7]$; Iteration 2: $[4.5, 6.3]$; Iteration 3: $[4.05, 5.67]$; ...; Iteration 60: $[\approx 0.01, \approx 0.014]$; $\lim_{n \to \infty} [C_1, C_2]^{[n]} = [0, 0]$. As we can see, the vector $[C_1, C_2]$ converges to the value which yields the function minimum, and the speed of this convergence depends on μ. (Figure: the surface of $y$ with the path from the initial guess to the minimum.)
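The whole example fits in a few lines of NumPy; this is a direct transcription of the update above, with nothing assumed beyond the slide's numbers:

```python
import numpy as np

mu = 0.1
c = np.array([5.0, 7.0])          # iteration 1: initial guess (C1, C2)
for _ in range(59):               # 59 updates take us to iteration 60
    grad = 2 * c                  # gradient of y = C1^2 + C2^2
    c = c - 0.5 * mu * grad       # W[n+1] = W[n] + 0.5*mu*(-grad), i.e. c *= 0.9
print(c)                          # ~ [0.010, 0.014], heading to (0, 0)
```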
  26. MMSE CRITERION FOR THE LMS • MMSE: minimum mean square error • $\text{MSE} = E\{[d(k) - y(k)]^2\} = E\{[d(k) - \sum_{n=-N}^{N} w(n)\,u(k-n)]^2\} = E\{d^2(k)\} - 2\sum_{n=-N}^{N} w(n) P_{du}(n) + \sum_{n=-N}^{N}\sum_{m=-N}^{N} w(n)\,w(m)\,R_u(n-m)$, where $P_{du}(n) = E\{d(k)\,u(k-n)\}$ and $R_u(n-m) = E\{u(k-m)\,u(k-n)\}$ • To obtain the LMS MMSE we differentiate the MSE with respect to $w(k)$ and compare it to 0.
  27. MMSE CRITERION FOR THE LMS. $\nabla J(n) = \frac{d(\text{MSE})}{dw(k)} = -2P_{du}(k) + 2\sum_{n=-N}^{N} w(n)\,R_{uu}(n-k), \quad k = 0, \pm 1, \pm 2, \ldots$ By comparing the derivative to zero we get the MMSE solution: $w_{opt} = R^{-1} P$. This calculation is complicated for a DSP (calculating the inverse matrix), and can make the system unstable: if there are nulls in the spectrum, we can get very large values in the inverse matrix. Also, we do not always know the autocorrelation matrix of the input and the cross-correlation vector, so we would like to approximate them.
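A sketch of forming the Wiener solution from data, estimating $R$ and $P$ from a training block and solving a linear system rather than inverting explicitly. The alignment of the desired symbol with the newest tap sample is an assumption of this sketch; real designs pick the equalizer delay deliberately:

```python
import numpy as np

def wiener_taps(u, d, n_taps):
    """Estimate R and P from training data, then solve R w = P (w_opt = R^-1 P)."""
    # Rows are tap-delay-line vectors [u(k), u(k-1), ..., u(k-n_taps+1)].
    U = np.array([u[k - n_taps + 1:k + 1][::-1] for k in range(n_taps - 1, len(u))])
    dv = d[n_taps - 1:len(u)]          # desired symbols aligned with the newest sample
    R = U.T @ U / len(U)               # autocorrelation matrix estimate
    P = U.T @ dv / len(U)              # cross-correlation vector estimate
    return np.linalg.solve(R, P)       # avoids an explicit matrix inverse
```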
  28. LMS: APPROXIMATION OF THE STEEPEST DESCENT METHOD. According to the MMSE criterion: $W(n+1) = W(n) + \mu[P - R\,W(n)]$. We assume the following: • The input vectors $u(n), u(n-1), \ldots, u(1)$ are statistically independent. • The input vector $u(n)$ is statistically independent of the previous desired responses $d(n-1), \ldots, d(1)$. • The input vector $u(n)$ and desired response $d(n)$ are Gaussian-distributed random variables. • The environment is wide-sense stationary. In LMS, the following instantaneous estimates are used: $\hat{R}_{uu} = u(n)\,u^H(n)$ (autocorrelation matrix of the input signal) and $\hat{P}_{ud} = u(n)\,d^*(n)$ (cross-correlation vector between u[n] and d[n]). *** Equivalently, we calculate the gradient of $|e[n]|^2$ instead of $E\{|e[n]|^2\}$.
  29. LMS ALGORITHM. $W[n+1] \cong W[n] + \mu\{\hat{P} - \hat{R}\,W[n]\} = W[n] + \mu\,u[n]\{d^*[n] - u^H[n]\,W[n]\}$. We get the final result: $W[n+1] \cong W[n] + \mu\,u[n]\,e^*[n]$
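A minimal LMS training loop implementing exactly this update; the tap count, step size, and training on a known d[n] are the usual choices, not specified further on the slide:

```python
import numpy as np

def lms_equalizer(u, d, n_taps, mu):
    """W[n+1] = W[n] + mu * u[n] * e*[n], run over a training sequence."""
    w = np.zeros(n_taps, dtype=complex)
    e = np.zeros(len(d), dtype=complex)
    for n in range(n_taps - 1, len(d)):
        un = u[n - n_taps + 1:n + 1][::-1]   # tap-delay-line vector u[n]
        y = np.vdot(w, un)                   # equalizer output w^H u[n]
        e[n] = d[n] - y                      # error vs. training symbol d[n]
        w = w + mu * un * np.conj(e[n])      # the LMS update above
    return w, e
```

The rule-of-thumb step size on the next slide, $\mu = 1/(5(2N+1)P_R)$, is a reasonable starting value for `mu` here.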
  30. LMS STABILITY. The step size determines the algorithm's convergence rate. Too small a step size makes the algorithm take many iterations; too large a step size keeps the weight taps from converging. Rule of thumb: $\mu = \frac{1}{5(2N+1)P_R}$ where $N$ is the equalizer length and $P_R$ is the received power (signal + noise), which can be estimated in the receiver.
  31. LMS – CONVERGENCE GRAPH. This graph illustrates the LMS algorithm. First we start from a guess of the tap weights. Then we move opposite the gradient vector to calculate the next taps, and so on, until we reach the MMSE, meaning the MSE is 0 or very close to it. (In practice we cannot get an error of exactly 0 because the noise is a random process; we can only decrease the error below a desired minimum.) Example for an unknown channel of 2nd order (figure: error surface with the desired combination of taps).
  32. LMS convergence vs. μ (figure)
  33. LMS – EQUALIZER EXAMPLE. Channel equalization example: average square error as a function of the number of iterations, using different channel transfer functions (change of W).
  34. LMS: Pros & Cons. Advantages: • Simplicity of implementation • Does not neglect the noise like the zero-forcing equalizer • Bypasses the need for calculating an inverse matrix. Disadvantages: • Slow convergence • Demands the use of a training sequence as reference, thus decreasing the communication BW.
  35. Non-linear equalization. Linear equalization (reminder): • Tap-delayed equalization • The output is a linear combination of the equalizer input: $G_E = \frac{1}{G_C}$; as FIR, $G_E(z) = \frac{Y(z)}{X(z)} = C(z) = a_0 + a_1 z^{-1} + a_2 z^{-2} + \ldots$ (an all-zero response, factorable as $\prod_i (1 - a_i z^{-1})$), i.e. $y(n) = a_0 \cdot x(n) + a_1 \cdot x(n-1) + a_2 \cdot x(n-2) + \ldots$
  36. Non-linear equalization – DFE (Decision Feedback Equalization). $y(n) = \sum_i a_i\,x(n-i) - \sum_i b_i\,y(n-i)$; as IIR, $G_E(z) = \frac{Y(z)}{X(z)} = \frac{\prod_i (1 - a_i z^{-1})}{\prod_i (1 - b_i z^{-1})}$. Block diagram: In → A(z) → + → receiver detector → Output, with B(z) fed back from the output into the summer with a minus sign. The nonlinearity is due to the detector characteristic that is fed back (mapper). The decision feedback introduces poles in the z-domain. Advantages: copes with larger ISI. Disadvantages: danger of instability.
  37. Non-linear equalization – DFE (figure)
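A sketch of the DFE loop above, feeding back hard decisions through B(z) as the block diagram shows; the taps and the sign-detector are illustrative assumptions:

```python
import numpy as np

def dfe(x, a, b):
    """y(n) = sum_i a_i x(n-i) - sum_i b_i q(n-i), with q the detected symbols."""
    y = np.zeros(len(x))
    q = np.zeros(len(x))                        # past decisions (mapper output)
    for n in range(len(x)):
        ff = sum(a[i] * x[n - i] for i in range(len(a)) if n - i >= 0)
        fb = sum(b[i] * q[n - i] for i in range(1, len(b)) if n - i >= 0)
        y[n] = ff - fb                          # feedforward minus feedback
        q[n] = np.sign(y[n])                    # detector: the nonlinearity in the loop
    return q, y
```

Because past decisions re-enter the filter, a wrong decision can propagate; that feedback path is where the instability danger comes from.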
  38. Blind Equalization • ZFE and MSE equalizers assume the option of a training sequence for learning the channel. • What happens when there is none? Blind equalization: the adaptive equalizer forms its error signal $e_n$ from the decision output $\hat{I}_n$ rather than from a training reference. But it usually also employs: interleaving/deinterleaving, advanced coding, an ML criterion. Why? Blind equalization is hard and complicated enough! So if you are going to implement it, use the best blocks for decision (detection) and for equalizing with LMS.
  39. Turbo Equalization. Iterative: Estimate → Equalize → Decode → Re-encode. Block diagram: the received signal $r$ enters a MAP equalizer with channel estimator; the MAP equalizer and the MAP decoder exchange extrinsic log-likelihood values ($L_E^e(c')$, $L_E(c')$, $L_D^e(c)$, $L_D(c)$, $L_D(d)$) through the interleaver $\Pi$ and deinterleaver $\Pi^{-1}$. Usually employs: interleaving/deinterleaving, turbo coding (advanced iterative code), MAP (based on the ML criterion). Why? It is complicated enough! So if you are going to implement it, use the best blocks. Each next iteration relies on a better estimate and therefore leads to more precise equalization.
  40. Performance of turbo equalization vs. iterations (figure)
  41. ML criterion • MSE optimizes detection only up to 1st/2nd order statistics. • In Uri's class, optimum detection used: strongest survivor; correlation (matched filter), which allows optimal performance for a delta channel and additive noise. • Optimized detection maximizes the probability of detection (minimizes the error, i.e. the Euclidean distance in signal space). • Let's find the optimal detection criterion in the presence of a memory channel (ISI).
  42. ML criterion (cont.) • Maximum likelihood: maximizes the decision probability for the received trellis. Example, BPSK (NRZI): two possible transmitted signals, $s_1 = -s_0 = \sqrt{E_b}$, with $E_b$ the energy per bit. The received signal is $r_k = \pm\sqrt{E_b} + n_k$ (AWGN with variance $N_0/2$). Conditional PDFs (probability of the observation $r_k$ given that $s_1$ or $s_0$ was transmitted): $p(r_k \mid s_1) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left(-\frac{(r_k - \sqrt{E_b})^2}{2\sigma_n^2}\right)$, $p(r_k \mid s_0) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left(-\frac{(r_k + \sqrt{E_b})^2}{2\sigma_n^2}\right)$. Probability of a correct decision on a sequence of symbols: $p(r_1, r_2, \ldots, r_K \mid s^{(m)}) = \prod_{k=1}^{K} p(r_k \mid s_k^{(m)})$, maximized over the transmitted sequence $s^{(m)}$.
  43. ML – Cont. With the logarithm operation, it can be shown that this is equivalent to minimizing the Euclidean distance metric of the sequence: $D(r, s^{(m)}) = \sum_{k=1}^{K} (r_k - s_k^{(m)})^2$. How can this be used? Looks similar? While MSE minimizes the error (maximizes the probability) for a decision on a certain symbol, MLSE minimizes the error (maximizes the probability) for a decision on a certain trellis of symbols (called the metric).
  44. Viterbi Equalizer (on the tip of the tongue). Example for NRZI: transmit symbols (0 = no change in transmitted symbol, 1 = alter symbol); states S0, S1 with branch labels $0/{-\sqrt{E_b}}$, $1/\sqrt{E_b}$ etc. at $t = T, 2T, 3T, 4T$. Metric (sum of Euclidean distances): $D(0,0) = (r_1 + \sqrt{E_b})^2 + (r_2 + \sqrt{E_b})^2$; $D(1,1) = (r_1 - \sqrt{E_b})^2 + (r_2 + \sqrt{E_b})^2$; $D(0,1) = (r_1 + \sqrt{E_b})^2 + (r_2 - \sqrt{E_b})^2$; $D(1,0) = (r_1 - \sqrt{E_b})^2 + (r_2 - \sqrt{E_b})^2$. Extending by one stage: $D(0,0,0) = D(0,0) + (r_3 + \sqrt{E_b})^2$; $D(0,1,1) = D(0,1) + (r_3 + \sqrt{E_b})^2$; $D(0,0,1) = D(0,0) + (r_3 - \sqrt{E_b})^2$; $D(0,1,0) = D(0,1) + (r_3 - \sqrt{E_b})^2$
  45. At each stage we disqualify one of the two metrics entering each possible state S0 and S1. Finally we are left with 2 candidate trellis paths, and we decide on the correct trellis path using the Euclidean metric of each (or a posteriori data).
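A compact sketch of this survivor pruning for the two-state NRZI trellis. The start state S0 and the $\pm\sqrt{E_b}$ level mapping follow the example above; everything else is plain Viterbi bookkeeping:

```python
import numpy as np

def viterbi_nrzi(r, Eb=1.0):
    """MLSE over the 2-state NRZI trellis: minimize sum_k (r_k - s_k)^2."""
    amp = np.sqrt(Eb)
    metric = [0.0, np.inf]                    # start in S0; S1 initially unreachable
    paths = [[], []]                          # surviving bit sequence per state
    for rk in r:
        new_metric, new_paths = [None, None], [None, None]
        for s in (0, 1):                      # next state: level -amp for S0, +amp for S1
            level = amp if s else -amp
            # Two entering branches; keep the better, disqualify the other.
            cands = [(metric[p] + (rk - level) ** 2, paths[p] + [p ^ s]) for p in (0, 1)]
            new_metric[s], new_paths[s] = min(cands, key=lambda c: c[0])
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]      # the winning trellis path (data bits)
```

The bit on each branch is `p ^ s`: NRZI alters the state on a 1 and holds it on a 0, which reproduces the branch metrics $D(\cdot)$ on the previous slide.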
  46. References • John G. Proakis, Digital Communications. • John G. Proakis, Communication Systems Engineering. • Simon Haykin, Adaptive Filter Theory. • K. Hooli, Adaptive Filters and LMS. • S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory.
