An Affine Combination of Two LMS Adaptive Filters
    Statistical Analysis of an Error Power Ratio Scheme
        Neil Bershad(1), José C. M. Bermudez(2) and Jean-Yves Tourneret(3)


                             (1) University of California, Irvine, USA
                                      bershad@ece.uci.edu
              (2) Federal University of Santa Catarina, Florianópolis, Brazil
                                      j.bermudez@ieee.org
          (3) University of Toulouse, ENSEEIHT-IRIT-TéSA, Toulouse, France
                                            jyt@n7.fr



Asilomar, Tuesday, November 3, 2009
                                                  Asilomar Conf. on Signals, Systems, and Computers – p. 1/21
Property of most adaptive algorithms

           Large step size µ
                Fast convergence
                Large steady-state weight misadjustment
           Small step size µ
                Slow convergence
                Small steady-state weight misadjustment

     Possible solution: Variable µ algorithms
           T. Aboulnasr and K. Mayyas, “A robust variable step-size LMS type algorithm: Analysis
           and simulations,” IEEE Trans. Signal Process., vol. 45, pp. 631-639, March 1997.
           H. C. Shin, A. H. Sayed and W. J. Song, “Variable step-size NLMS and affine projection
           algorithms,” IEEE Trans. Signal Process. Lett., vol. 11, pp. 132-135, Feb. 2004.



Affine combination of two adaptive filters

           New approach recently proposed
             Use two adaptive filters with different step-sizes
             adapting on the same data
             Convex combination of the adaptive filter outputs
                J. Arenas-Garcia, A. R. Figueiras-Vidal, and A. H. Sayed, “Mean-square
                performance of a convex combination of two adaptive filters,” IEEE Trans. Signal
                Process., vol. 54, pp. 1078-1090, March 2006.

           Affine combination
              Recent study for affine combination of LMS filters
                N. J. Bershad, J. C. M. Bermudez, and J-Y Tourneret, “An Affine Combination of
                Two LMS Adaptive Filters - Transient Mean-Square Analysis,” IEEE Trans. Signal
                Process., vol. 56, pp. 1853-1864, May 2008.



      [Block diagram: the input U(n) drives two transversal adaptive filters
      W_1(n) and W_2(n), with outputs y_1(n), y_2(n) and errors
      e_1(n) = d(n) − y_1(n), e_2(n) = d(n) − y_2(n); the outputs are weighted
      by λ(n) and 1 − λ(n) and summed to form y(n).]

      Adaptive combining of two transversal adaptive filters.
                           Convex: λ(n) ∈ (0, 1)             Affine: λ(n) ∈ R

Affine Combination Schemes
           Two schemes for updating λ(n) were proposed in 2008
                Stochastic gradient approximation of the optimal sequence λ_o(n).
                Analyzed in
                R. Candido, M. T. M. Silva and V. Nascimento, “Affine combinations of adaptive
                filters,” (Asilomar 2008).

                Error power ratio
                Very good performance, but not yet analyzed.




This paper

           Analysis of the power ratio combination scheme
           Mean behavior of λ(n)
           Mean square deviation of weight vector
           Monte Carlo simulations to verify the theoretical models

    Statistical Assumptions
           Input signal u(n) is white
           Additive noise is white and uncorr. with u(n)
           Unknown system is stationary



The Affine Combiner – Brief Review
           LMS adaptation rule

                W_i(n + 1) = W_i(n) + µ_i e_i(n) U(n),   i = 1, 2            (1)

                e_i(n) = d(n) − W_i^T(n) U(n),                               (2)

                d(n) = e_o(n) + W_o^T U(n),                                  (3)

           Combination of filter outputs

                y(n) = λ(n) y_1(n) + [1 − λ(n)] y_2(n),                      (4)

                e(n) = d(n) − y(n),                                          (5)

           where y_i(n) = W_i^T(n) U(n) and λ(n) ∈ R.
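
The update rules (1)–(5) can be sketched in a few lines of Python. This is a minimal illustration with a fixed λ, not the authors' code: the unknown system W_o, the step sizes, and the signal length are arbitrary choices.

```python
import numpy as np

def affine_lms(u, d, N, mu1, mu2, lam):
    """Run two length-N LMS filters on the same data and combine their
    outputs as y(n) = lam*y1(n) + (1 - lam)*y2(n) (fixed lam here)."""
    w1 = np.zeros(N)
    w2 = np.zeros(N)
    y = np.zeros(len(d))
    for n in range(N - 1, len(d)):
        U = u[n - N + 1:n + 1][::-1]      # regressor U(n) = [u(n), ..., u(n-N+1)]
        y1, y2 = w1 @ U, w2 @ U           # filter outputs
        e1, e2 = d[n] - y1, d[n] - y2     # a-priori errors, eq. (2)
        w1 = w1 + mu1 * e1 * U            # LMS update, eq. (1), i = 1
        w2 = w2 + mu2 * e2 * U            # LMS update, eq. (1), i = 2
        y[n] = lam * y1 + (1 - lam) * y2  # combined output, eq. (4)
    return y, w1, w2

rng = np.random.default_rng(0)
N = 8
w_o = rng.standard_normal(N)              # hypothetical unknown system
u = rng.standard_normal(5000)             # white input, sigma_u^2 = 1
d = np.convolve(u, w_o)[:len(u)] + 1e-2 * rng.standard_normal(len(u))  # eq. (3)
y, w1, w2 = affine_lms(u, d, N, mu1=0.05, mu2=0.005, lam=0.5)
```

Both weight vectors converge to W_o; the fast filter (µ_1) gets there sooner, the slow one (µ_2) with less residual misadjustment, which is exactly the trade-off the combiner exploits.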




Optimal Combining Rule

           Optimal Combiner

                λ_o(n) = { [W_o − W_2(n)]^T [W_1(n) − W_2(n)] }
                         / { [W_1(n) − W_2(n)]^T [W_1(n) − W_2(n)] }

           Steady-State Behavior

                lim_{n→∞} E[λ_o(n)]
                  ≃ lim_{n→∞} { E[W_2^T(n) W_2(n)] − E[W_2^T(n) W_1(n)] }
                              / E{ [W_1(n) − W_2(n)]^T [W_1(n) − W_2(n)] }.


Optimal Combining Rule (cont.)

           Expected Values

                lim_{n→∞} E[W_2^T(n) W_1(n)]
                  = W_o^T W_o + µ_1 µ_2 N σ_o² / [(µ_1 + µ_2) − µ_1 µ_2 (N + 2) σ_u²]

           and

                lim_{n→∞} E[W_i^T(n) W_i(n)]
                  = W_o^T W_o + µ_i N σ_o² / [2 − µ_i (N + 2) σ_u²],   i = 1, 2.

           After Simplifications

                lim_{n→∞} E[λ_o(n)] ≃ δ / [2(δ − 1)],   δ = µ_2/µ_1 ≠ 1.
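
The simplified limit is a one-liner to evaluate. For the step-size ratios used in the simulation examples later (δ = 0.1 and δ = 0.3) it is negative, i.e. outside the convex range (0, 1), which is why an affine combiner with λ(n) ∈ R is needed:

```python
def lambda_o_limit(delta):
    """Steady-state E[lambda_o(n)] ~ delta / (2*(delta - 1)), delta != 1."""
    return delta / (2.0 * (delta - 1.0))

print(lambda_o_limit(0.1), lambda_o_limit(0.3))  # both negative
```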
Error Power Ratio Based Scheme

                λ(n) = 1 − κ erf( ê_1²(n) / ê_2²(n) )

     where

                ê_i²(n) = (1/K) Σ_{m=n−K+1}^{n} e_i²(m),   i = 1, 2

     and

                erf(x) = (2/√π) ∫_0^x e^{−t²} dt.
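
The scheme can be sketched directly from these definitions. In this illustrative sketch the values of K and κ are placeholders, and the K-sample average is computed causally (partial averages for the first K − 1 samples):

```python
import numpy as np
from math import erf

def power_ratio_lambda(e1, e2, K=100, kappa=1.0):
    """lambda(n) = 1 - kappa * erf(p1(n)/p2(n)), where p_i(n) is a causal
    K-sample sliding average of e_i^2(n)."""
    win = np.ones(K) / K
    p1 = np.convolve(e1**2, win)[:len(e1)]   # partial averages for n < K-1
    p2 = np.convolve(e2**2, win)[:len(e2)]
    ratio = p1 / np.maximum(p2, 1e-12)       # guard against division by zero
    return 1.0 - kappa * np.array([erf(x) for x in ratio])

# When filter 1's error power is much smaller than filter 2's, the ratio is
# near zero, erf(.) is near zero, and lambda(n) approaches 1 (trust filter 1).
lam = power_ratio_lambda(0.1 * np.ones(500), np.ones(500))
```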



The Value of κ
           Objective

                lim_{n→∞} E[λ(n)] ≃ lim_{n→∞} E[λ_o(n)].

           First order approximation

                E[λ(n)] ≃ 1 − κ erf( E[ê_1²(n)] / E[ê_2²(n)] )

           with

                E[ê_i²(n)] = σ_o² + (σ_u²/K) Σ_{m=n−K+1}^{n} MSD_i(m),   i = 1, 2.

                MSD_i(n) = E{ [W_o − W_i(n)]^T [W_o − W_i(n)] }

The Value of κ (cont.)

           Taking the limit n → ∞

                lim_{n→∞} E[λ(n)]
                  ≃ 1 − κ erf( [σ_o² + σ_u² MSD_1(∞)] / [σ_o² + σ_u² MSD_2(∞)] )

           For lim_{n→∞} E[λ(n)] ≃ lim_{n→∞} E[λ_o(n)]

                κ = [1 − δ/(2(δ − 1))]
                    · { erf( [σ_o² + σ_u² MSD_1(∞)] / [σ_o² + σ_u² MSD_2(∞)] ) }^{−1}
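
A direct transcription of this tuning rule, assuming the steady-state MSD values are known from the LMS models. The σ_o² and δ values below match Example 1 later in the deck; the MSD values are placeholders, not figures from the paper:

```python
from math import erf

def tuning_kappa(delta, sigma_o2, sigma_u2, msd1_inf, msd2_inf):
    """kappa that matches lim E[lambda(n)] to lim E[lambda_o(n)]."""
    target = 1.0 - delta / (2.0 * (delta - 1.0))   # 1 - delta/(2(delta-1))
    ratio = (sigma_o2 + sigma_u2 * msd1_inf) / (sigma_o2 + sigma_u2 * msd2_inf)
    return target / erf(ratio)

# Example 1 noise level and step-size ratio; MSD values are hypothetical.
k = tuning_kappa(delta=0.1, sigma_o2=1e-4, sigma_u2=1.0,
                 msd1_inf=1e-3, msd2_inf=1e-4)
```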




Mean Behavior of λ(n)

           Define

                ξ = ê_1²(n) / ê_2²(n),   η = E(ξ)   and   σ_ξ² = E(ξ²) − η²

           Second order approximation

                E[g(ξ)] ≃ g(η) + (σ_ξ²/2) g′′(η)

           Mean Behavior

                E[λ(n)] ≃ 1 − κ [ erf(η) − (2ησ_ξ²/√π) e^{−η²} ]

           Expressions required for η and σ_ξ².
Mean Behavior of λ(n) (cont.)

           Approximation for η
           Writing

                ê_i²(n) = E[ê_i²(n)] + ε_i = m_i + ε_i,   i = 1, 2

           the mean η is approximated as

                η = E[ (m_1 + ε_1) / (m_2 + ε_2) ] ≃ m_1/m_2 = E[ê_1²(n)] / E[ê_2²(n)]




Mean Behavior of λ(n) (cont.)
           Approximation for σ_ξ²

                σ_ξ² = E[ (ê_1²(n)/ê_2²(n))² ] − η²
                     ≃ E{ [ê_1²(n)]² } / E{ [ê_2²(n)]² } − ( E[ê_1²(n)] / E[ê_2²(n)] )²

           Thus,

                σ_ξ² ≃ [ m_2² E(ε_1²) − m_1² E(ε_2²) ] / { [m_2² + E(ε_2²)] m_2² }

           with

                E(ε_i²) = (2/K²) Σ_{m=n−K+1}^{n} [ σ_o² + σ_u² MSD_i(m) ]²,   i = 1, 2.
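
The E(ε_i²) expression can be sanity-checked by Monte Carlo in the stationary case, where the analysis models each error sample as zero-mean Gaussian with variance v = σ_o² + σ_u² MSD_i (an assumption of the analysis, not a property of real LMS errors). With constant MSD the sum collapses to E(ε_i²) = 2v²/K, the variance of a K-sample average of squared Gaussians:

```python
import numpy as np

rng = np.random.default_rng(1)
K, v, trials = 100, 0.5, 20000
samples = rng.normal(0.0, np.sqrt(v), size=(trials, K))  # e_i(m) ~ N(0, v)
avgs = samples**2 @ (np.ones(K) / K)   # K-sample averages of e^2, one per trial
empirical = np.var(avgs)               # measured E[eps_i^2]
theory = 2.0 * v**2 / K                # formula, stationary case
```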




Mean-Square Deviation (MSD)

           Error signal

                e(n) = e_o(n) + { λ(n)[W_o − W_1(n)]
                                  + [1 − λ(n)][W_o − W_2(n)] }^T U(n)

           Squaring and averaging

                MSD_c(n) = E[e²(n)] − σ_o²
                  ≃ σ_u² ( E[λ²(n)] MSD_1(n)
                           + { 1 − 2E[λ(n)] + E[λ²(n)] } MSD_2(n)
                           + 2{ E[λ(n)] − E[λ²(n)] } MSD_21(n) )

           Expression for E[λ²(n)] is necessary.
Mean-Square Deviation (MSD) (cont.)

           From the expression of λ(n)

                E[λ²(n)] = 1 − 2κ E[erf(ξ)] + κ² E[erf²(ξ)]

           Expression of E[erf(ξ)]

                E[erf(ξ)] ≃ erf(η) − (2ησ_ξ²/√π) e^{−η²}

           Second order approximation of E[erf²(ξ)]

                E[erf²(ξ)] ≃ erf²(η)
                             + (2σ_ξ²/√π) [ (2/√π) e^{−2η²} − 2η erf(η) e^{−η²} ]

           The model for MSD_c(n) is complete.
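
The two second-order approximations combine into closed-form first and second moments of λ(n). A direct transcription (the η, σ_ξ², κ values passed below are purely illustrative):

```python
from math import erf, exp, pi, sqrt

def lambda_moments(eta, sig_xi2, kappa):
    """E[lambda(n)] and E[lambda^2(n)] from the second-order approximations
    of E[erf(xi)] and E[erf^2(xi)]."""
    e_erf = erf(eta) - (2.0 * eta * sig_xi2 / sqrt(pi)) * exp(-eta**2)
    e_erf2 = (erf(eta)**2
              + (2.0 * sig_xi2 / sqrt(pi))
              * ((2.0 / sqrt(pi)) * exp(-2.0 * eta**2)
                 - 2.0 * eta * erf(eta) * exp(-eta**2)))
    m1 = 1.0 - kappa * e_erf                              # E[lambda(n)]
    m2 = 1.0 - 2.0 * kappa * e_erf + kappa**2 * e_erf2    # E[lambda^2(n)]
    return m1, m2

m1, m2 = lambda_moments(eta=1.0, sig_xi2=0.05, kappa=1.0)
```

As a consistency check, setting σ_ξ² = 0 makes ξ deterministic, so the second moment must equal the square of the first.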
Simulation Results
               Responses to be identified: W_o = [w_o1, . . . , w_oN]^T

                    w_ok = sin[2πf_o(k − ∆)] cos[2πr f_o(k − ∆)]
                           / { 2πf_o(k − ∆) [1 − [4r f_o(k − ∆)]²] },   k = 1, . . . , N

               In all simulations: N = 32, K = 100, σ_u² = 1, 50 MC runs.

               [Figure: impulse responses w_ok vs. sample k.
                Example 1: ∆ = 10, r = 0.2, α = 1.2.   Example 2: ∆ = 5, r = 0, α = 3.8.]
Simulation Results – Example 1

                              σ_o² = 10^−4,   δ = µ_2/µ_1 = 0.1

                [Figure: left panel, λ(n) vs. iteration n; right panel,
                 MSD_c(n) in dB vs. iteration n.]

                                       Optimal combination λ_o(n)
                                 λ(n) obtained from the error power ratio
                                       Theoretical model for λ(n)
Simulation Results – Example 2

                              σ_o² = 10^−3,   δ = µ_2/µ_1 = 0.3

                [Figure: left panel, λ(n) vs. iteration n; right panel,
                 MSD_c(n) in dB vs. iteration n.]

                                       Optimal combination λ_o(n)
                                 λ(n) obtained from the error power ratio
                                       Theoretical model for λ(n)
Conclusions
           Affine combination of two LMS adaptive filters studied.
           Analysis of an error power ratio scheme
                Tuning parameter κ determined for optimal
                steady-state performance
                Analytical model for E[λ(n)]
                Analytical model for MSDc (n)
           Monte Carlo Simulations show that
                Error power ratio scheme is close to optimum
                Analytical models are very accurate




More Related Content

What's hot

Lecture 15 DCT, Walsh and Hadamard Transform
Lecture 15 DCT, Walsh and Hadamard TransformLecture 15 DCT, Walsh and Hadamard Transform
Lecture 15 DCT, Walsh and Hadamard Transform
VARUN KUMAR
 
Koc2(dba)
Koc2(dba)Koc2(dba)
Koc2(dba)
Serhat Yucel
 
Dynamics of structures with uncertainties
Dynamics of structures with uncertaintiesDynamics of structures with uncertainties
Dynamics of structures with uncertainties
University of Glasgow
 
8 lti psd
8 lti psd8 lti psd
8 lti psd
bantisworld
 
Computational methods for nanoscale bio sensors
Computational methods for nanoscale bio sensorsComputational methods for nanoscale bio sensors
Computational methods for nanoscale bio sensors
University of Glasgow
 
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rulesJAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
hirokazutanaka
 
Dsp3
Dsp3Dsp3
The multilayer perceptron
The multilayer perceptronThe multilayer perceptron
The multilayer perceptron
ESCOM
 
Lecture 12 (Image transformation)
Lecture 12 (Image transformation)Lecture 12 (Image transformation)
Lecture 12 (Image transformation)
VARUN KUMAR
 
JAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
JAISTサマースクール2016「脳を知るための理論」講義03 Network DynamicsJAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
JAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
hirokazutanaka
 
WE4.L09 - POLARIMETRIC SAR ESTIMATION BASED ON NON-LOCAL MEANS
WE4.L09 - POLARIMETRIC SAR ESTIMATION BASED ON NON-LOCAL MEANSWE4.L09 - POLARIMETRIC SAR ESTIMATION BASED ON NON-LOCAL MEANS
WE4.L09 - POLARIMETRIC SAR ESTIMATION BASED ON NON-LOCAL MEANS
grssieee
 
computational stochastic phase-field
computational stochastic phase-fieldcomputational stochastic phase-field
computational stochastic phase-field
cerniagigante
 
Talk in BayesComp 2018
Talk in BayesComp 2018Talk in BayesComp 2018
Talk in BayesComp 2018
JeremyHeng10
 
Convolution
ConvolutionConvolution
Convolution
muzuf
 
Natalini nse slide_giu2013
Natalini nse slide_giu2013Natalini nse slide_giu2013
Natalini nse slide_giu2013
Madd Maths
 
Linear Machine Learning Models with L2 Regularization and Kernel Tricks
Linear Machine Learning Models with L2 Regularization and Kernel TricksLinear Machine Learning Models with L2 Regularization and Kernel Tricks
Linear Machine Learning Models with L2 Regularization and Kernel Tricks
Fengtao Wu
 
Unit 5: All
Unit 5: AllUnit 5: All
Unit 5: All
Hector Zenil
 
Sampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methodsSampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methods
Stephane Senecal
 
Rousseau
RousseauRousseau
Rousseau
eric_gautier
 
Random Matrix Theory and Machine Learning - Part 4
Random Matrix Theory and Machine Learning - Part 4Random Matrix Theory and Machine Learning - Part 4
Random Matrix Theory and Machine Learning - Part 4
Fabian Pedregosa
 

What's hot (20)

Lecture 15 DCT, Walsh and Hadamard Transform
Lecture 15 DCT, Walsh and Hadamard TransformLecture 15 DCT, Walsh and Hadamard Transform
Lecture 15 DCT, Walsh and Hadamard Transform
 
Koc2(dba)
Koc2(dba)Koc2(dba)
Koc2(dba)
 
Dynamics of structures with uncertainties
Dynamics of structures with uncertaintiesDynamics of structures with uncertainties
Dynamics of structures with uncertainties
 
8 lti psd
8 lti psd8 lti psd
8 lti psd
 
Computational methods for nanoscale bio sensors
Computational methods for nanoscale bio sensorsComputational methods for nanoscale bio sensors
Computational methods for nanoscale bio sensors
 
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rulesJAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules
 
Dsp3
Dsp3Dsp3
Dsp3
 
The multilayer perceptron
The multilayer perceptronThe multilayer perceptron
The multilayer perceptron
 
Lecture 12 (Image transformation)
Lecture 12 (Image transformation)Lecture 12 (Image transformation)
Lecture 12 (Image transformation)
 
JAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
JAISTサマースクール2016「脳を知るための理論」講義03 Network DynamicsJAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
JAISTサマースクール2016「脳を知るための理論」講義03 Network Dynamics
 
WE4.L09 - POLARIMETRIC SAR ESTIMATION BASED ON NON-LOCAL MEANS
WE4.L09 - POLARIMETRIC SAR ESTIMATION BASED ON NON-LOCAL MEANSWE4.L09 - POLARIMETRIC SAR ESTIMATION BASED ON NON-LOCAL MEANS
WE4.L09 - POLARIMETRIC SAR ESTIMATION BASED ON NON-LOCAL MEANS
 
computational stochastic phase-field
computational stochastic phase-fieldcomputational stochastic phase-field
computational stochastic phase-field
 
Talk in BayesComp 2018
Talk in BayesComp 2018Talk in BayesComp 2018
Talk in BayesComp 2018
 
Convolution
ConvolutionConvolution
Convolution
 
Natalini nse slide_giu2013
Natalini nse slide_giu2013Natalini nse slide_giu2013
Natalini nse slide_giu2013
 
Linear Machine Learning Models with L2 Regularization and Kernel Tricks
Linear Machine Learning Models with L2 Regularization and Kernel TricksLinear Machine Learning Models with L2 Regularization and Kernel Tricks
Linear Machine Learning Models with L2 Regularization and Kernel Tricks
 
Unit 5: All
Unit 5: AllUnit 5: All
Unit 5: All
 
Sampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methodsSampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methods
 
Rousseau
RousseauRousseau
Rousseau
 
Random Matrix Theory and Machine Learning - Part 4
Random Matrix Theory and Machine Learning - Part 4Random Matrix Theory and Machine Learning - Part 4
Random Matrix Theory and Machine Learning - Part 4
 

Similar to An Affine Combination Of Two Lms Adaptive Filters

IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
ijceronline
 
Many electrons atoms_2012.12.04 (PDF with links
Many electrons atoms_2012.12.04 (PDF with linksMany electrons atoms_2012.12.04 (PDF with links
Many electrons atoms_2012.12.04 (PDF with links
Ladislav Kocbach
 
Continuum Modeling and Control of Large Nonuniform Networks
An Affine Combination of Two LMS Adaptive Filters

  • 1. An Affine Combination of Two LMS Adaptive Filters
    Statistical Analysis of an Error Power Ratio Scheme
    Neil Bershad(1), José C. M. Bermudez(2) and Jean-Yves Tourneret(3)
    (1) University of California, Irvine, USA, bershad@ece.uci.edu
    (2) Federal University of Santa Catarina, Florianópolis, Brazil, j.bermudez@ieee.org
    (3) University of Toulouse, ENSEEIHT-IRIT-TéSA, Toulouse, France, jyt@n7.fr
  • 2. Property of most adaptive algorithms
    Large step size µ:
      Fast convergence
      Large steady-state weight misadjustment
    Small step size µ:
      Slow convergence
      Small steady-state weight misadjustment
    Possible solution: variable step-size algorithms
      T. Aboulnasr and K. Mayyas, "A robust variable step-size LMS-type algorithm: Analysis and simulations," IEEE Trans. Signal Process., vol. 45, pp. 631-639, March 1997.
      H. C. Shin, A. H. Sayed and W. J. Song, "Variable step-size NLMS and affine projection algorithms," IEEE Signal Process. Lett., vol. 11, pp. 132-135, Feb. 2004.
  • 3. Affine combination of two adaptive filters
    New approach recently proposed:
      Use two adaptive filters with different step sizes adapting on the same data
      Convex combination of the adaptive filter outputs
        J. Arenas-García, A. R. Figueiras-Vidal, and A. H. Sayed, "Mean-square performance of a convex combination of two adaptive filters," IEEE Trans. Signal Process., vol. 54, pp. 1078-1090, March 2006.
    Affine combination:
      Recent study of an affine combination of LMS filters
        N. J. Bershad, J. C. M. Bermudez, and J.-Y. Tourneret, "An affine combination of two LMS adaptive filters - Transient mean-square analysis," IEEE Trans. Signal Process., vol. 56, pp. 1853-1864, May 2008.
  • 4. Adaptive combining of two transversal adaptive filters
    [Block diagram: the input $U(n)$ drives two transversal filters $W_1(n)$ and $W_2(n)$; their outputs $y_1(n)$ and $y_2(n)$ are weighted by $\lambda(n)$ and $1 - \lambda(n)$ and summed to form $y(n)$; the errors $e_i(n) = d(n) - y_i(n)$ drive the adaptation.]
    Convex: $\lambda(n) \in (0, 1)$
    Affine: $\lambda(n) \in \mathbb{R}$
  • 5. Affine Combination Schemes
    Two schemes for updating $\lambda(n)$ proposed in 2008:
      Stochastic gradient approximation of the optimal sequence $\lambda_o(n)$
        Analyzed in R. Candido, M. T. M. Silva and V. Nascimento, "Affine combinations of adaptive filters," Asilomar 2008.
      Error power ratio
        Very good performance, but not yet analyzed
  • 6. This paper
    Analysis of the error power ratio combination scheme:
      Mean behavior of $\lambda(n)$
      Mean-square deviation of the weight vector
      Monte Carlo simulations to verify the theoretical models
    Statistical assumptions:
      The input signal $u(n)$ is white
      The additive noise is white and uncorrelated with $u(n)$
      The unknown system is stationary
  • 7. The Affine Combiner – Brief Review
    LMS adaptation rule:
      $W_i(n+1) = W_i(n) + \mu_i e_i(n) U(n), \quad i = 1, 2$   (1)
      $e_i(n) = d(n) - W_i^T(n) U(n)$   (2)
      $d(n) = e_o(n) + W_o^T U(n)$   (3)
    Combination of filter outputs:
      $y(n) = \lambda(n) y_1(n) + [1 - \lambda(n)] y_2(n)$   (4)
      $e(n) = d(n) - y(n)$   (5)
    where $y_i(n) = W_i^T(n) U(n)$ and $\lambda(n) \in \mathbb{R}$.
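Eqs. (1)-(5) can be sketched as follows. This is a minimal illustration, not the authors' code: the filter length, step sizes and a fixed combining factor `lam` are chosen arbitrarily (the paper adapts $\lambda(n)$; later slides give the update rule).

```python
import numpy as np

def affine_lms_combiner(u, d, N=8, mu1=0.1, mu2=0.01, lam=0.5):
    """Run two LMS filters with different step sizes on the same data
    and combine their outputs, per Eqs. (1)-(5)."""
    W1 = np.zeros(N)
    W2 = np.zeros(N)
    y = np.zeros(len(d))
    for n in range(N, len(d)):
        U = u[n - N:n][::-1]            # regressor U(n)
        y1, y2 = W1 @ U, W2 @ U         # individual filter outputs
        e1, e2 = d[n] - y1, d[n] - y2   # individual errors, Eq. (2)
        y[n] = lam * y1 + (1 - lam) * y2  # combined output, Eq. (4)
        W1 += mu1 * e1 * U              # LMS updates, Eq. (1)
        W2 += mu2 * e2 * U
    return W1, W2, y
```

With a white input and a stationary unknown system (the paper's assumptions), both weight vectors converge toward $W_o$, the fast filter sooner and the slow filter with lower misadjustment.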
  • 8. Optimal Combining Rule
    Optimal combiner:
      $\lambda_o(n) = \dfrac{[W_o - W_2(n)]^T [W_1(n) - W_2(n)]}{[W_1(n) - W_2(n)]^T [W_1(n) - W_2(n)]}$
    Steady-state behavior:
      $\lim_{n\to\infty} E[\lambda_o(n)] \simeq \lim_{n\to\infty} \dfrac{E[W_2^T(n) W_2(n)] - E[W_2^T(n) W_1(n)]}{E\{[W_1(n) - W_2(n)]^T [W_1(n) - W_2(n)]\}}$
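The optimal combiner is the projection of $W_o - W_2(n)$ onto $W_1(n) - W_2(n)$; a small helper makes this concrete (the function name and the toy vectors in the usage note are illustrative only):

```python
import numpy as np

def lambda_opt(Wo, W1, W2):
    """Optimal combining factor:
    lambda_o = (Wo - W2)^T (W1 - W2) / ||W1 - W2||^2."""
    d12 = W1 - W2
    return float((Wo - W2) @ d12 / (d12 @ d12))
```

As a sanity check, if $W_1 = W_o$ the best combination puts all weight on filter 1 ($\lambda_o = 1$), and if $W_2 = W_o$ it puts none there ($\lambda_o = 0$).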
  • 9. Optimal Combining Rule (cont.)
    Expected values:
      $\lim_{n\to\infty} E[W_2^T(n) W_1(n)] = W_o^T W_o + \dfrac{\mu_1 \mu_2 N \sigma_o^2}{2(\mu_1 + \mu_2) - \mu_1 \mu_2 (N+2) \sigma_u^2}$
    and
      $\lim_{n\to\infty} E[W_i^T(n) W_i(n)] = W_o^T W_o + \dfrac{\mu_i N \sigma_o^2}{2 - \mu_i (N+2) \sigma_u^2}, \quad i = 1, 2$
    After simplifications:
      $\lim_{n\to\infty} E[\lambda_o(n)] \simeq \dfrac{\delta}{2(\delta - 1)}, \quad \delta = \mu_2/\mu_1 \neq 1$
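A quick numeric check of the simplified limit. Note that for $\delta < 1$ (the usual case, $\mu_2 < \mu_1$) the limit is negative, i.e. $\lambda_o$ leaves $[0, 1]$; this is precisely why an affine rather than convex combination is considered (slide 4).

```python
def lambda_opt_steady_state(delta):
    """Steady-state E[lambda_o] ~ delta / (2*(delta - 1)),
    with delta = mu2/mu1 != 1 (slide 9)."""
    return delta / (2.0 * (delta - 1.0))
```

For Example 1's $\delta = 0.1$ this gives $0.1 / (2 \cdot (-0.9)) = -1/18 \approx -0.056$.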
  • 10. Error Power Ratio Based Scheme
    $\lambda(n) = 1 - \kappa\, \mathrm{erf}\!\left[\dfrac{\hat{e}_1^2(n)}{\hat{e}_2^2(n)}\right]$
    where
    $\hat{e}_i^2(n) = \dfrac{1}{K} \sum_{m=n-K+1}^{n} e_i^2(m), \quad i = 1, 2$
    and
    $\mathrm{erf}(x) = \dfrac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt$
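The scheme above is straightforward to implement with a length-$K$ sliding average of each squared error; a sketch (here `np.convolve` computes the moving averages, and $\kappa$ is left as a parameter, to be tuned as derived on the following slides):

```python
import numpy as np
from math import erf

def lambda_power_ratio(e1, e2, K=100, kappa=1.0):
    """lambda(n) = 1 - kappa * erf(ehat1^2(n) / ehat2^2(n)),
    where ehat_i^2(n) is a length-K sliding average of e_i^2 (slide 10)."""
    e1sq = np.convolve(np.asarray(e1) ** 2, np.ones(K) / K, mode="valid")
    e2sq = np.convolve(np.asarray(e2) ** 2, np.ones(K) / K, mode="valid")
    return np.array([1.0 - kappa * erf(a / b) for a, b in zip(e1sq, e2sq)])
```

Intuitively, when the fast filter's error power dominates ($\hat{e}_1^2 \gg \hat{e}_2^2$) the ratio is large, $\mathrm{erf}(\cdot) \to 1$ and $\lambda(n) \to 1 - \kappa$, shifting weight toward the slow filter.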
  • 11. The Value of κ
    Objective:
      $\lim_{n\to\infty} E[\lambda(n)] \simeq \lim_{n\to\infty} E[\lambda_o(n)]$
    First-order approximation:
      $E[\lambda(n)] \simeq 1 - \kappa\, \mathrm{erf}\!\left\{\dfrac{E[\hat{e}_1^2(n)]}{E[\hat{e}_2^2(n)]}\right\}$
    with
      $E[\hat{e}_i^2(n)] = \sigma_o^2 + \dfrac{\sigma_u^2}{K} \sum_{m=n-K+1}^{n} \mathrm{MSD}_i(m), \quad i = 1, 2$
      $\mathrm{MSD}_i(n) = E\{[W_o - W_i(n)]^T [W_o - W_i(n)]\}$
  • 12. The Value of κ (cont.)
    Taking the limit as $n \to \infty$:
      $\lim_{n\to\infty} E[\lambda(n)] \simeq 1 - \kappa\, \mathrm{erf}\!\left[\dfrac{\sigma_o^2 + \sigma_u^2\, \mathrm{MSD}_1(\infty)}{\sigma_o^2 + \sigma_u^2\, \mathrm{MSD}_2(\infty)}\right]$
    For $\lim_{n\to\infty} E[\lambda(n)] \simeq \lim_{n\to\infty} E[\lambda_o(n)]$:
      $\kappa = \left[1 - \dfrac{\delta}{2(\delta - 1)}\right] \left\{\mathrm{erf}\!\left[\dfrac{\sigma_o^2 + \sigma_u^2\, \mathrm{MSD}_1(\infty)}{\sigma_o^2 + \sigma_u^2\, \mathrm{MSD}_2(\infty)}\right]\right\}^{-1}$
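The tuning of $\kappa$ follows by equating the two limits; a sketch, assuming the noise power, input power and steady-state MSDs are available from the models on the previous slides:

```python
from math import erf

def kappa_tuning(delta, sigma_o2, sigma_u2, msd1_inf, msd2_inf):
    """Choose kappa so lim E[lambda(n)] matches the optimal
    steady-state value delta / (2*(delta - 1)) (slide 12)."""
    target = delta / (2.0 * (delta - 1.0))
    ratio = (sigma_o2 + sigma_u2 * msd1_inf) / (sigma_o2 + sigma_u2 * msd2_inf)
    return (1.0 - target) / erf(ratio)
```

By construction, plugging this $\kappa$ back into $1 - \kappa\,\mathrm{erf}(\cdot)$ recovers the target exactly.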
  • 13. Mean Behavior of λ(n)
    Define
      $\xi = \dfrac{\hat{e}_1^2(n)}{\hat{e}_2^2(n)}, \quad \eta = E(\xi), \quad \sigma_\xi^2 = E(\xi^2) - \eta^2$
    Second-order approximation:
      $E[g(\xi)] \simeq g(\eta) + \dfrac{\sigma_\xi^2}{2}\, g''(\eta)$
    Mean behavior:
      $E[\lambda(n)] \simeq 1 - \kappa \left[\mathrm{erf}(\eta) - \dfrac{2 \eta \sigma_\xi^2}{\sqrt{\pi}}\, e^{-\eta^2}\right]$
    Expressions required for $\eta$ and $\sigma_\xi^2$.
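The mean-behavior expression follows from the second-order approximation with $g = \mathrm{erf}$, since $g''(\eta) = -(4\eta/\sqrt{\pi})\,e^{-\eta^2}$. A small Monte Carlo check illustrates the accuracy; the Gaussian distribution for $\xi$ here is an assumption made only for this check, not part of the paper's analysis:

```python
from math import erf, exp, pi, sqrt

def mean_lambda_approx(eta, var_xi, kappa=1.0):
    """Second-order approximation of E[lambda(n)] (slide 13):
    1 - kappa * (erf(eta) - (2*eta*var_xi/sqrt(pi)) * exp(-eta^2))."""
    return 1.0 - kappa * (erf(eta) - 2.0 * eta * var_xi / sqrt(pi) * exp(-eta * eta))
```

For a concentrated $\xi$ (small $\sigma_\xi^2$), the approximation tracks the empirical mean of $1 - \kappa\,\mathrm{erf}(\xi)$ closely.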
  • 14. Mean Behavior of λ(n) (cont.)
    Approximation for $\eta$: writing
      $\hat{e}_i^2(n) = E[\hat{e}_i^2(n)] + \varepsilon_i = m_i + \varepsilon_i, \quad i = 1, 2$
    the mean $\eta$ is approximated as
      $\eta = E\!\left[\dfrac{m_1 + \varepsilon_1}{m_2 + \varepsilon_2}\right] \simeq \dfrac{m_1}{m_2} = \dfrac{E[\hat{e}_1^2(n)]}{E[\hat{e}_2^2(n)]}$
  • 15. Mean Behavior of λ(n) (cont.)
    Approximation for $\sigma_\xi^2$:
      $\sigma_\xi^2 = E\!\left\{\left[\dfrac{\hat{e}_1^2(n)}{\hat{e}_2^2(n)}\right]^2\right\} - \eta^2 \simeq \dfrac{E[\hat{e}_1^4(n)]}{E[\hat{e}_2^4(n)]} - \left\{\dfrac{E[\hat{e}_1^2(n)]}{E[\hat{e}_2^2(n)]}\right\}^2$
    Thus,
      $\sigma_\xi^2 \simeq \dfrac{m_2^2\, E(\varepsilon_1^2) - m_1^2\, E(\varepsilon_2^2)}{[m_2^2 + E(\varepsilon_2^2)]\, m_2^2}$
    with
      $E(\varepsilon_i^2) = \dfrac{2}{K^2} \sum_{m=n-K+1}^{n} \left[\sigma_o^2 + \sigma_u^2\, \mathrm{MSD}_i(m)\right]^2, \quad i = 1, 2$
  • 16. Mean-Square Deviation (MSD)
    Error signal:
      $e(n) = e_o(n) + \left\{\lambda(n)[W_o - W_1(n)] + [1 - \lambda(n)][W_o - W_2(n)]\right\}^T U(n)$
    Squaring and averaging:
      $E[e^2(n)] - \sigma_o^2 \simeq \sigma_u^2\, \mathrm{MSD}_c(n)$, where
      $\mathrm{MSD}_c(n) \simeq E[\lambda^2(n)]\, \mathrm{MSD}_1(n) + \{1 - 2E[\lambda(n)] + E[\lambda^2(n)]\}\, \mathrm{MSD}_2(n) + 2\{E[\lambda(n)] - E[\lambda^2(n)]\}\, \mathrm{MSD}_{21}(n)$
    An expression for $E[\lambda^2(n)]$ is necessary.
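The combined-MSD expression comes from expanding the quadratic form in the error signal, with $\mathrm{MSD}_{21}(n)$ the cross term between the two weight-error vectors. A sketch of the combination formula, with a sanity check: when both filters have the same deviation and cross term, the combined MSD reduces to that common value for any $\lambda$ statistics.

```python
def msd_combined(E_lam, E_lam2, msd1, msd2, msd21):
    """MSD_c(n) ~ E[lam^2]*MSD1 + (1 - 2*E[lam] + E[lam^2])*MSD2
                + 2*(E[lam] - E[lam^2])*MSD21   (slide 16)."""
    return (E_lam2 * msd1
            + (1.0 - 2.0 * E_lam + E_lam2) * msd2
            + 2.0 * (E_lam - E_lam2) * msd21)
```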
  • 17. Mean-Square Deviation (MSD) (cont.)
    From the expression of $\lambda(n)$:
      $E[\lambda^2(n)] = 1 - 2\kappa\, E[\mathrm{erf}(\xi)] + \kappa^2\, E[\mathrm{erf}^2(\xi)]$
    Expression of $E[\mathrm{erf}(\xi)]$:
      $E[\mathrm{erf}(\xi)] \simeq \mathrm{erf}(\eta) - \dfrac{2 \eta \sigma_\xi^2}{\sqrt{\pi}}\, e^{-\eta^2}$
    Second-order approximation of $E[\mathrm{erf}^2(\xi)]$:
      $E[\mathrm{erf}^2(\xi)] \simeq \mathrm{erf}^2(\eta) + \dfrac{2 \sigma_\xi^2}{\sqrt{\pi}} \left[\dfrac{2}{\sqrt{\pi}}\, e^{-\eta^2} - 2 \eta\, \mathrm{erf}(\eta)\right] e^{-\eta^2}$
    The model for $\mathrm{MSD}_c(n)$ is complete.
  • 18. Simulation Results
    Responses to be identified: $W_o = [w_{o1}, \ldots, w_{oN}]^T$
      $w_{ok} = \dfrac{\sin[2\pi f_o (k - \Delta)]}{2\pi f_o (k - \Delta)} \cdot \dfrac{\cos[2\pi r f_o (k - \Delta)]}{1 - [4 r f_o (k - \Delta)]^2}, \quad k = 1, \ldots, N$
    In all simulations: $N = 32$, $K = 100$, $\sigma_u^2 = 1$, 50 Monte Carlo runs.
    Example 1: $\Delta = 10$, $r = 0.2$, $\alpha = 1.2$
    Example 2: $\Delta = 5$, $r = 0$, $\alpha = 3.8$
    [Two plots of the impulse responses $W_o$ versus sample $k$, $k = 1, \ldots, 32$.]
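The unknown response is a raised-cosine-style pulse and is easy to generate. A sketch: the slides parameterize the examples by $\alpha$ and its mapping to $f_o$ is not shown, so $f_o$ is treated here as an assumed free parameter (the value below is illustrative).

```python
import numpy as np

def make_wo(N=32, delta=10, r=0.2, fo=0.1):
    """Raised-cosine-style unknown response from slide 18.
    Note np.sinc(y) = sin(pi*y)/(pi*y), so sin(2*pi*fo*x)/(2*pi*fo*x)
    equals np.sinc(2*fo*x)."""
    k = np.arange(1, N + 1)
    x = k - delta
    num = np.sinc(2 * fo * x) * np.cos(2 * np.pi * r * fo * x)
    den = 1.0 - (4 * r * fo * x) ** 2
    # guard against an exact zero denominator for unlucky parameter choices
    den = np.where(np.abs(den) < 1e-12, 1e-12, den)
    return num / den
```

The pulse peaks at $k = \Delta$ (where $x = 0$ and $w_{ok} = 1$), matching the plotted responses.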
  • 19. Simulation Results – Example 1
    $\sigma_o^2 = 10^{-4}$, $\delta = \mu_2/\mu_1 = 0.1$
    [Left plot: $\lambda(n)$ versus iteration $n$; right plot: $\mathrm{MSD}_c(n)$ in dB versus iteration $n$, over 4000 iterations.]
    Curves shown: optimal combination $\lambda_o(n)$; $\lambda(n)$ obtained from the error power ratio; theoretical model for $\lambda(n)$.
  • 20. Simulation Results – Example 2
    $\sigma_o^2 = 10^{-3}$, $\delta = \mu_2/\mu_1 = 0.3$
    [Left plot: $\lambda(n)$ versus iteration $n$; right plot: $\mathrm{MSD}_c(n)$ in dB versus iteration $n$, over 4000 iterations.]
    Curves shown: optimal combination $\lambda_o(n)$; $\lambda(n)$ obtained from the error power ratio; theoretical model for $\lambda(n)$.
  • 21. Conclusions
    The affine combination of two LMS adaptive filters was studied:
      Analysis of an error power ratio scheme
      Tuning parameter $\kappa$ determined for optimal steady-state performance
      Analytical model for $E[\lambda(n)]$
      Analytical model for $\mathrm{MSD}_c(n)$
    Monte Carlo simulations show that:
      The error power ratio scheme is close to optimum
      The analytical models are very accurate