International Journal of Electronics and Communication Engineering & Technology (IJECET)
ISSN 0976-6464 (Print), ISSN 0976-6472 (Online), Volume 4, Issue 3, May-June 2013, pp. 63-69, © IAEME

LOW COMPLEXITY ALGORITHM FOR UPDATING THE COEFFICIENTS OF ADAPTIVE FILTER

G. Prasannakumar (1), K. Indirapriyadarsini (2)
1 (ECE, Vishnu Institute of Tech, JNTUK, Bhimavaram, India)
2 (ECE, Swarnandhra Institute of Engg, JNTUK, Narsapuram, India)

ABSTRACT

This paper presents a novel algorithm that dynamically changes the update rate of the coefficients of an adaptive filter by analyzing the actual application environment. The algorithm builds a nonlinear relationship between the update rate and the minimum error. The change in update rate is based on a time-partition method, which updates the coefficients once every m samples, where m is the down-sampling rate. If the coefficients are updated once every two samples, the computation is reduced by roughly half; further increasing the down-sampling rate reduces the computation further, but the convergence time increases. To minimize the convergence time, this method adjusts the update rate dynamically using the relation between the down-sampling factor and the error. Acoustic echo cancellation (AEC) experiments indicate that the scheme proposed in this paper performs significantly better than traditional algorithms.

Keywords: AEC, AGC, FIR, LMS, MSE

1. INTRODUCTION

The last decades have witnessed a rapid increase in the use of Internet voice communication systems, which require an acoustic echo canceller (AEC) to eliminate acoustic feedback from the loudspeaker to the microphone. The AEC is an adaptive filter that estimates the acoustic transfer function and uses this estimate to remove the acoustic echo from the microphone signal.
The adaptation algorithm used in an AEC must possess good convergence and tracking properties without posing excessive computational requirements [1]. On the other hand, to be effective, an AEC typically requires filters with thousands of coefficients. When a classical adaptive filter is applied to the AEC problem, the resulting computational complexity can be prohibitively high. The most widely used adaptation algorithm is the least mean square (LMS) method [2], [3]. Although LMS is easy to realize, its large computational load causes a long output delay, which is intolerable because it severely prevents a natural, full-duplex speech conversation. A significant reduction in the computational burden can be achieved by using frequency-domain adaptive filtering [4]: the time-domain linear convolution is implemented efficiently in the frequency domain, but the data gathering may introduce an inherent delay [5], [6].

This paper proposes a low complexity algorithm to update the weights of an adaptive filter. Extensive practical experiments show that most coefficients of such systems change slowly or remain constant during a conversation, so the coefficients of the adaptive filter need not be updated for every sample. The new method introduces a nonlinear functional relationship between the update rate and the minimum error. The remaining sections of this paper are organized as follows. Section 2 gives a brief introduction to the adaptive filter and the mean square error surface. In Section 3, we derive the new updating method from the steepest descent method. In Section 4, we present experimental results, and in Section 5 we draw some conclusions.

2. ADAPTIVE FILTER

An adaptive filter is composed of a filtering subsystem and an adaptation algorithm [7]. The former can be categorized as system identification, channel equalization, or echo cancellation, depending on how the structures are chosen.
The latter can apply various criteria to adjust the parameters of the filtering subsystem. The factors that influence the choice of the adaptation algorithm are the speed of convergence to the optimal operating condition, the minimum error at convergence, and the computational complexity. Much research has focused on quickening the convergence speed and reducing the minimum error [8], [9], while little attention has been paid to decreasing the computational complexity.

2.1. Mean square error surface

Consider the adaptive filter shown in Fig. 1. The adaptation algorithm adjusts the coefficients of the filter so as to decrease the difference between the filter output driven by the input signal vector x and the reference signal d. At time n, x(n) is defined as:

X(n) = [x(n), x(n-1), x(n-2), ..., x(n-N+1)]^T    (1)

The weight vector w(n) is:

W(n) = [w0(n), w1(n), w2(n), ..., wN-1(n)]^T    (2)

Then the filter output can be expressed as:

y(n) = W^T(n) X(n)    (3)

The error e(n) is defined as:

e(n) = d(n) - W^T(n) X(n)    (4)
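Equations (1)-(4) amount to one inner product and one subtraction per sample. The following NumPy sketch is illustrative only (the paper's own simulations were done in MATLAB), with hypothetical example values:

```python
import numpy as np

def filter_step(x_buf, w, d_n):
    """Compute the filter output (3) and error (4) for one sample.

    x_buf : input vector X(n) = [x(n), x(n-1), ..., x(n-N+1)]
    w     : weight vector W(n)
    d_n   : reference (desired) sample d(n)
    """
    y_n = np.dot(w, x_buf)   # y(n) = W^T(n) X(n), equation (3)
    e_n = d_n - y_n          # e(n) = d(n) - W^T(n) X(n), equation (4)
    return y_n, e_n

# Hypothetical usage: 4 taps, taps initialized to the null vector
x_buf = np.array([1.0, 0.5, 0.25, 0.125])  # newest sample first
w = np.zeros(4)
y, e = filter_step(x_buf, w, d_n=0.8)
```

With all-zero weights the output is 0, so the error equals the desired sample.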
Fig 1: Typical structure of the adaptive filter (input, transversal filter, output, desired signal, error), using the input and error signals to update its tap weights.

The adaptive processing is based on minimization of the mean square error criterion, defined as:

ξ(n) = E[e^2(n)]    (5)

From (5) we can deduce that the mean square error ξ(n) is a quadratic function of the filter coefficient vector w(n): a paraboloid in N+1 dimensions with a single minimum point. Usually the steepest descent method is used to search for the minimum point along the parabolic surface [10]. The gradient vector ∇ is defined as:

∇ = ∂ξ/∂w = [∂ξ/∂w0, ∂ξ/∂w1, ..., ∂ξ/∂wN-1]^T    (6)

The adaptation algorithm is initialized with a random state; it then moves along the negative gradient direction at each step and finally reaches the minimum point. The coefficients are updated by:

W(n+1) = W(n) + µ(-∇(n))    (7)

where µ is the step size, usually chosen by experience.

3. A NEW METHOD FOR UPDATING THE COEFFICIENT VECTOR

In practical applications, the input data must be processed promptly. The computational complexity is therefore a very important factor in the choice of the adaptation algorithm. Traditional adaptation algorithms such as LMS update the coefficients for each sample [3], which cannot satisfy the requirement of online processing. Many stationary signals can be processed with a low tracking speed in a real environment.
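The per-sample coefficient update of equation (7) is the cost that motivates reducing the update rate. In the standard LMS form cited by the paper [2], [3], the true gradient in (7) is replaced by the instantaneous estimate ∇(n) ≈ -2 e(n) X(n); a minimal sketch of that step, not the authors' code:

```python
import numpy as np

def lms_update(w, x_buf, e_n, mu=0.01):
    """One steepest-descent step, equation (7), with the standard LMS
    instantaneous gradient estimate grad(n) ≈ -2 e(n) X(n), giving
    W(n+1) = W(n) + 2 mu e(n) X(n)."""
    return w + 2.0 * mu * e_n * x_buf
```

Each call costs on the order of N multiplications for an N-tap filter, which is the per-sample burden the time-partition method reduces.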
From the mean square error surface we can conclude that, once the output has reached the minimum error, the coefficient vector need not be adjusted as long as the system is time-invariant, because ∇ = 0. A low complexity method of updating the coefficients is therefore viable: the update rate can be decreased when the adaptation algorithm is searching around the minimum point. This is a tradeoff between computational complexity and convergence speed. To change the update rate, we use a time-partition method that updates the coefficients only once every m samples. Because most of the computation of the adaptation algorithm is consumed in updating the coefficient vector, the complexity is reduced by almost half when the coefficients are updated only once every 2 samples, which is called factor-of-2 down-sampling. As the down-sampling factor m increases, the computational complexity decreases further. From equation (7) we can conclude that changing the update rate of w(n) only decelerates the search for the minimum point; the adaptation algorithm still converges eventually. The factor-of-2 down-sampling method takes twice as long to obtain the optimal coefficients, and as the factor m increases, the convergence time increases, as illustrated in Fig. 2.

Fig 2: Convergence speed of down-sampled updating

Along with the reduction in complexity, the convergence speed slows down at the same time. To solve this problem, we propose a method to adjust the update rate dynamically. It builds a function between the down-sampling factor m and the minimum error e(n):

m = β / (1 + exp(α·e(n)^k))    (8)

where α controls the shape of the function, β controls the value range, and k controls the degree of smoothness.
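Equation (8) can be sketched directly. The parameter values below are only illustrative defaults taken from the paper's Fig. 3 setting (α = 50, β = 20, k = 3); the use of |e(n)| is an assumption so that non-integer k is well defined:

```python
import math

def update_interval(e_n, alpha=50.0, beta=20.0, k=3):
    """Down-sampling factor m from equation (8):
    m = beta / (1 + exp(alpha * e(n)^k)).

    A large error drives m toward 0 (weights refreshed often); as
    e(n) -> 0, m approaches beta/2 (weights refreshed rarely)."""
    z = min(alpha * abs(e_n) ** k, 700.0)  # clamp to avoid overflow in exp
    return beta / (1.0 + math.exp(z))
```

Note the limiting behavior: at e(n) = 0 the factor settles at β/2, so β directly bounds how sparse the updates can become in the stable stage.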
If m = 0, the filter weights are updated at each sample; if m = 1, the weights are updated once every two samples; and so on.
Fig 3: Relationship of e(n) and m for various k

Fig. 3 illustrates the curves of equation (8) with k = 1, 2, 3, 4, 5, 8, α = 50, β = 20. For k = 3, the update rate is relatively fast at the beginning of the convergence stage, and as e(n) decreases, the update rate slows down accordingly. For k = 1, 2, when e(n) nearly reaches the minimum point the update rate is still so fast that the complexity cannot be reduced in the stable stage. For k = 4, 8, the update rate is too slow when e(n) is far away from the minimum point, and the filter takes a long time to converge to the optimal state. A median value of k should therefore be selected to ensure the smoothness of the curve, trading off convergence speed against computation.

4. EXPERIMENT AND RESULTS

We use the number of multiplications per input sample as a measure of the computational complexity of the various algorithms. The complexity of each algorithm is normalized by the complexity at k = 1 (i.e., the complexity at k = 1 is 100%).

The filter is chosen to have 20 taps. This number of taps was chosen for two reasons: it was small enough to limit processing time, yet large enough to show good convergence in the MATLAB simulations. The adaptive filter taps are initialized to the null vector.

In the low complexity algorithm, the filter weights of the adaptive filter are updated according to equations (4) and (8). The step size parameter µ is set to a small value of 0.01, the scaling factor α is selected in the range 3000 to 15000, and the down-sampling range β is selected in the range 10 to 20.
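The experiment above can be approximated by a small end-to-end simulation. Everything here is an illustrative assumption rather than the authors' MATLAB code: the synthetic echo path, the white-noise input, the interpretation of m as the number of samples between weight refreshes, and the multiplication count restricted to the weight-update step.

```python
import math
import numpy as np

def low_complexity_lms(x, d, n_taps=20, mu=0.01, alpha=3000.0, beta=20.0, k=1.5):
    """LMS with the dynamic update interval of equation (8).

    Weights are refreshed only once every round(m) samples, where
    m = beta / (1 + exp(alpha * |e(n)|^k)). Only multiplications spent
    on weight updates are counted, as a rough complexity measure; the
    filtering itself still runs every sample."""
    w = np.zeros(n_taps)      # taps initialized to the null vector
    e = np.zeros(len(x))
    next_update = 0
    mults = 0
    for n in range(n_taps, len(x)):
        x_buf = x[n - n_taps + 1:n + 1][::-1]     # X(n), newest sample first
        e[n] = d[n] - np.dot(w, x_buf)            # equations (3)-(4)
        if n >= next_update:
            w = w + 2.0 * mu * e[n] * x_buf       # LMS form of (7)
            mults += n_taps
            z = min(alpha * abs(e[n]) ** k, 700.0)  # clamp to avoid overflow
            m = beta / (1.0 + math.exp(z))          # equation (8)
            next_update = n + max(1, int(round(m)))
    return w, e, mults
```

With a synthetic 20-tap echo path, the converged weights approximate the path while the update multiplications come in well below the one-update-per-sample LMS cost, which is the effect Table 1 reports.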
Fig 4: Echo cancellation using the low complexity algorithm

Table 1: Comparison of the number of computations and convergence time for LMS and the low complexity algorithm

                              Number of computations    Convergence time
LMS algorithm                 64,821                    0.05 (approximately)
Low complexity algorithm      29,110                    0.05

5. CONCLUSION

We have proposed a new method to change the update rate of the adaptation algorithm by establishing a function between the update rate and the error signal. Fixing the value of k in the range 1.4-1.6 ensures good performance of the adaptive filter. This method keeps a fast update rate during the convergence stage and has low computational complexity in the stable stage. The echo cancellation results indicate that the new method has fast convergence, low computational complexity, and the same minimum error as the traditional method. However, the parameters of the algorithm depend strictly on the application environment, which implies that a complex tuning activity may be required to find the optimal settings.

6. REFERENCES

[1] Y. Bendel, D. Burshtein, O. Shalvi, "Delayless Frequency Domain Acoustic Echo Cancellation", IEEE Transactions on Speech and Audio Processing, 2001, vol. 9, no. 5, pp. 589-597.
[2] T. J. Shan, T. Kailath, "Adaptive Algorithm with an Automatic Gain Control Feature", IEEE Transactions on Circuits and Systems, 1988, vol. 35, no. 1, pp. 122-127.
[3] S. C. Chan, Y. Zhou, "Improved Generalized Proportionate Step-size LMS Algorithms and Performance Analysis", in Proceedings of the International Symposium on Circuits and Systems, 2006, pp. 2325-2328.
[4] J. J. Shynk, "Frequency-domain and Multirate Adaptive Filtering", IEEE Signal Processing Magazine, 1992, vol. 9, pp. 14-37.
[5] G. Clark, S. Mitra, S. Parker, "Block Implementation of Adaptive Digital Filters", IEEE Transactions on Circuits and Systems, 1981, vol. CAS-28, pp. 584-592.
[6] G. A. Clark, S. R. Parker, S. K. Mitra, "A Unified Approach to Time- and Frequency-domain Realization of FIR Adaptive Digital Filters", IEEE Transactions on Acoustics, Speech and Signal Processing, 1983, vol. ASSP-31, pp. 1073-1083.
[7] B. Widrow, "Adaptive Filters", in Aspects of Network and System Theory, New York: Holt, Rinehart and Winston, 1970.
[8] M. Nekuii, M. Atarodi, "A Fast Converging Algorithm for Network Echo Cancellation", IEEE Signal Processing Letters, 2004, vol. 11, no. 4, pp. 427-430.
[9] S. Ohta, Y. Kajikawa, Y. Nomura, "Acoustic Echo Cancellation Using Sub-adaptive Filter", International Conference on Acoustics, Speech and Signal Processing, 2007, vol. 1, pp. 85-88.
[10] Yao Tian-Ren, Sun Hong, Advanced Digital Signal Processing, Wuhan: HUST Press, 1999.
[11] Prabira Kumar Sethy and Subrata Bhattacharya, "Noise Cancellation in Adaptive Filtering Through RLS Algorithm using TMS320C6713DSK", International Journal of Electronics and Communication Engineering & Technology (IJECET), Volume 3, Issue 1, 2012, pp. 154-159, ISSN Print: 0976-6464, ISSN Online: 0976-6472.
[12] Ravi Garg and Abhijeet Kumar, "Comparison of SNR and MSE for Various Noises using Bayesian Framework", International Journal of Electronics and Communication Engineering & Technology (IJECET), Volume 3, Issue 1, 2012, pp. 76-82, ISSN Print: 0976-6464, ISSN Online: 0976-6472.