International Journal of Advances in Engineering & Technology, Jan 2012. ©IJAET ISSN: 2231-1963

AUDIO DENOISING USING WAVELET TRANSFORM

B. JaiShankar1 and K. Duraiswamy2
1 Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore, India.
2 Dean, K.S. Rangasamy College of Technology, Tiruchengode, India.

ABSTRACT

Noises present in communication channels are disturbing, and recovering the original signals from the path without any noise is a very difficult task. This is achieved by denoising techniques that remove noise from a digital signal. Many denoising techniques have been proposed for the removal of noise from digital audio signals, but their effectiveness is limited. In this paper, an audio denoising technique based on the wavelet transform is proposed. Denoising is performed in the transform domain, and the improvement in denoising is achieved by a process of grouping closer blocks. The technique exposes the finest details contributed by the set of blocks while protecting the vital features of every individual block. The blocks are filtered and replaced in their original positions. The grouped blocks overlap each other, and thus a different estimate is obtained for every element. A technique based on this denoising strategy and its efficient implementation are presented in full detail. The implementation results reveal that the proposed technique achieves state-of-the-art denoising performance in terms of both signal-to-noise ratio and audible quality.

KEYWORDS: Wavelet Transformation, Block Matching, Grouping, Denoising, Reconstruction

I. INTRODUCTION

Noise is a major problem in signal processing applications: a nonessential signal gets superimposed over an undisturbed signal. As the regularity of the noise lessens, the method required for denoising [12] becomes more sophisticated.
When a signal passes through equipment and connecting wires, noise is naturally added to it, resulting in signal contamination. Once a signal is polluted, it is essentially difficult to remove the noise without altering the original signal. Hence, denoising of signals is a basic task in signal processing [15]. Humming noise from audio equipment and background environmental noise both serve as major causes of pollution in audio signals. The objective of audio denoising is to attenuate the noise while recovering the underlying signal; this is useful in many applications such as music and speech restoration. Previous methods, such as Gaussian filters and anisotropic diffusion, denoise the value of a signal based on the observed values of neighbouring points. Various authors have proposed global and multiscale denoising approaches [15] in order to overcome this locality property. Since the introduction of wavelet transforms into signal processing, wavelet thresholding has attracted attention for removing noise from signals and images. To remove the wavelet coefficients smaller than a given amplitude and to transform the data back into the original domain, the method decomposes the noisy data into an orthogonal wavelet basis. A nonlinear thresholding estimator can be computed in an orthogonal basis such as Fourier or cosine.

In audio denoising, the signal obtained after wavelet transformation is not totally free of noise: some residue of noise is left, or some other kind of noise introduced by the transformation is present in the output signal. Several techniques have
been proposed so far for the removal of noise from an audio signal; yet efficiency remains an issue, and the techniques have some drawbacks in general. In this article, we propose an audio signal denoising technique based on an improved block-matching technique in the transform domain. The transformation achieves a clear representation of the input signal, so that the noise can be removed well by reconstruction of the signal. A biorthogonal 1.5 wavelet transform is used for the transformation process. A multidimensional signal vector is generated from the transformed signal vector, and the original signal vector is reconstructed by applying the inverse transform. The signal-to-noise ratio (SNR) of the result is comparatively higher than the SNR of the noisy input signal, thus increasing the quality of the signal.

II. LITERATURE REVIEW

Michael T. Johnson et al. [6] have demonstrated the application of the Bionic Wavelet Transform (BWT), an adaptive wavelet transform derived from a non-linear auditory model of the cochlea, to the task of speech signal enhancement. Results measured objectively by signal-to-noise ratio and segmental SNR, and subjectively by Mean Opinion Score, were given for additive white Gaussian noise as well as four different types of realistic noise environments. Enhancement was accomplished through thresholding of the adapted BWT coefficients, and the results were compared to a variety of speech enhancement techniques, including Ephraim-Malah filtering, iterative Wiener filtering, and spectral subtraction, as well as to wavelet denoising based on a perceptually scaled wavelet packet transform decomposition. Overall results indicated that SNR and SSNR improvements for the proposed approach were comparable to those of the Ephraim-Malah filter, with BWT enhancement giving the best results of all methods for the noisiest conditions.
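The nonlinear thresholding estimator discussed in the introduction can be sketched compactly. The sketch below is illustrative only, not the implementation of [6] or of the proposed method: it uses a single-level 2-point Haar transform in place of an adapted or biorthogonal basis, and the threshold t is a free parameter chosen by the caller.

```python
import numpy as np

def haar_forward(x):
    # One level of the 2-point Haar transform: pairwise means and differences.
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (mean) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (difference) coefficients
    return a, d

def haar_inverse(a, d):
    # Invert the 2-point Haar transform, interleaving the reconstructed samples.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    # Shrink coefficients toward zero; those with magnitude below t become 0.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, t):
    # Decompose, threshold the detail coefficients, and transform back.
    a, d = haar_forward(x)
    return haar_inverse(a, soft_threshold(d, t))
```

A larger threshold zeroes more detail coefficients, trading stronger noise suppression against loss of fine signal detail.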
Subjective measurements using MOS surveys across a variety of 0 dB SNR noise conditions indicated that the quality enhancement was competitive with, but still lower than, the results for Ephraim-Malah filtering and iterative Wiener filtering, and higher than the perceptually scaled wavelet method.

Mohammed Bahoura and Jean Rouat [7] have proposed a new speech enhancement method based on time and scale adaptation of wavelet thresholds. The time dependency was introduced by approximating the Teager energy of the wavelet coefficients, while the scale dependency was introduced by extending the principle of level-dependent thresholds to wavelet packet thresholding. The technique does not require an explicit estimation of the noise level or a priori knowledge of the SNR, as is usually needed in most popular enhancement methods. Performance of the proposed method was evaluated on speech recorded in real conditions (plane, sawmill, tank, subway, babble, car, exhibition hall, restaurant, street, airport, and train station) with artificially added noise. MEL-scale decomposition based on wavelet packets was also compared to the common wavelet packet scale. Comparisons in terms of signal-to-noise ratio (SNR) were reported for time adaptation and time-scale adaptation of the wavelet coefficient thresholds. Visual inspection of spectrograms and listening experiments were also used to support the results. Hidden Markov Model speech recognition experiments were conducted on the AURORA-2 database and showed that the proposed method improved speech recognition rates at low SNRs.

Ching-Ta and Hsiao-Chuan Wang [8] have proposed a method based on critical-band decomposition, which converts a noisy signal into wavelet coefficients (WCs) and enhances the WCs by subtracting a threshold from the noisy WCs in each subband. The threshold of each subband is adapted according to the segmental SNR (SegSNR) and the noise masking threshold.
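The Teager energy approximation used for the time adaptation in [7] is built on the discrete Teager energy operator, psi[x](n) = x(n)^2 - x(n-1) * x(n+1). A minimal sketch of the operator itself (illustrative, not the authors' code; boundary handling by replication is an assumption here):

```python
import numpy as np

def teager_energy(x):
    # Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
    x = np.asarray(x, dtype=float)
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    psi[0], psi[-1] = psi[1], psi[-2]   # replicate values at the boundaries
    return psi
```

For a pure sinusoid A*sin(Omega*n) the operator returns the nearly constant value A^2 * sin(Omega)^2, which is why it serves as a tracker of instantaneous signal energy.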
Thus, residual noise can be efficiently suppressed in a speech-dominated frame. In a noise-dominated frame, the background noise can be almost entirely removed by adjusting the wavelet coefficient threshold (WCT) according to the SegSNR. Speech distortion can be reduced by decreasing the WCT in speech-dominated subbands. The proposed method could effectively enhance noisy speech infected by coloured noise. Its performance was better than that of other wavelet-based speech enhancement methods in their experiments.

Marián Képesi and Luis Weruaga [11] have proposed a new method for time-frequency analysis of speech signals. The analysis basis of the proposed Short-Time Fan-Chirp Transform (FChT) was defined univocally by the analysis window length and by the frequency variation rate, the latter parameter being predicted from the last computed spectral segments. Comparative results between the proposed Short-Time FChT and popular time-frequency techniques reveal an improvement in spectral and
time-frequency representation. Since the signal can be synthesized from its FChT, the proposed method is suitable for filtering purposes.

Nanshan Li et al. [13] have proposed an audio denoising algorithm based on an adaptive wavelet soft threshold, built on the gain factor of a linear filter system in the wavelet domain and the Teager energy operator of the wavelet coefficients, in order to improve a content-based song retrieval system. Their algorithm integrated the gain factor of the linear filter system and the nonlinear energy operator with a conventional wavelet soft-threshold function. Experiments demonstrated that their algorithm was effective in suppressing Gaussian white noise and pink noise in audio samples and in enhancing the accuracy of the song retrieval system.

III. PROPOSED METHOD

In this section, the proposed audio denoising technique for the removal of unwanted noise from an audio signal is explained. The audio signal is considered to be polluted by Additive White Gaussian Noise (AWGN), and the polluted signal is subjected to noise removal using the proposed denoising technique. The processes performed in the proposed technique are: (i) transformation of the noisy signal to the wavelet domain, (ii) generation of a set of closer blocks, (iii) generation of a multidimensional array, and (iv) reconstruction of the denoised audio signal.

In the proposed work, the noisy audio signal is initially subjected to wavelet transformation. The wavelet transform produces a few significant coefficients for signals with discontinuities. Audio signals are smooth with a few discontinuities; hence, the wavelet transform has a better capability for representing these signals than other transformations. Once the signal is transformed to the wavelet domain, a set of closer blocks is synthesized from it.

Figure 1. Generation of a set of closer blocks

3.1. Synthesizing Closer Blocks from the Noisy Signal

The Bior 1.5 wavelet transform is applied to the input noisy signal, and a transformed signal is obtained as the output of this process. Designing biorthogonal wavelets allows more freedom of choice than orthogonal wavelets; one additional choice is the possibility of generating symmetric wavelet functions. For the transformation process, the noisy audio signal vector is reshaped into a matrix of the same size as the transformation coefficient matrix. The noisy audio signal is transformed to the biorthogonal wavelet domain and represented as a vector signal, which eases the subsequent operations.

Figure 2. Block representation of the noisy input signal

The process is then followed by the calculation of the L2-norm distance for each block generated in the vector signal. The transformed initial block is kept as the reference block. The distance between
the reference block and every other block is calculated. The process is then repeated in the same fashion for all the blocks, with every block considered in turn as the reference block.

Figure 3. Generation of the multidimensional vector

The obtained temporary vector signal is then transformed using the Haar transform; reshaping is carried out to perform the transformation. The Haar transform consists of a 2-point mean and difference operation. Every element in the transformed vector block is compared with a threshold value: if the element's value is less than the threshold, the element is replaced with '0' and placed back in the temporary block; if the element's value is greater than the threshold, it is left unchanged. The process is repeated for all the elements in the reference block and their respective grouped sets of closer blocks. The reconstruction of the signal is shown in Figure 4.

Figure 4. Reconstruction of the audio signal

IV. IMPLEMENTATION RESULTS

To test the performance of the proposed technique, a dog-barking signal is taken as input and contaminated with AWGN. The input signal, of length n = 12000, is shown in Figure 5(a). The AWGN was generated and added to the input signal; the SNR of the noisy audio signal was 7.77 dB at a noise level σ of 0.047. A linear combination of the generated noise and the original signal is used as the primary input for the block-matching technique. Figure 5(b) shows the input signal corrupted by white noise. The denoised signal obtained with the block-matching technique is shown in Figure 5(c); its SNR was 12.85 dB.

Figure 5(a). Original audio signal
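The grouping-by-distance, hard-thresholding, and SNR computations described above can be sketched as follows. This is an illustrative sketch under assumed interfaces (blocks stored as rows of a matrix; the group size k and threshold t are free parameters), not the paper's exact implementation:

```python
import numpy as np

def group_closer_blocks(blocks, ref_index, k):
    # L2-norm distance from the reference block to every block (rows of `blocks`);
    # return the indices of the k closest blocks, the reference itself included.
    dist = np.linalg.norm(blocks - blocks[ref_index], axis=1)
    return np.argsort(dist, kind="stable")[:k]

def hard_threshold(c, t):
    # Replace every element whose magnitude is below the threshold t with 0,
    # leaving the remaining elements unchanged (hard thresholding).
    c = np.asarray(c, dtype=float)
    return np.where(np.abs(c) < t, 0.0, c)

def snr_db(clean, estimate):
    # Signal-to-noise ratio in dB of an estimate against the clean reference.
    clean = np.asarray(clean, dtype=float)
    err = clean - np.asarray(estimate, dtype=float)
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
```

Repeating group_closer_blocks with every block as the reference yields the multidimensional array of Figure 3; hard_threshold corresponds to the replace-with-zero rule applied to the Haar-transformed groups; and snr_db computes the kind of figure quoted in Section IV.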
Figure 5(b). Original audio signal with noise

Figure 5(c). Denoised audio signal

V. CONCLUSIONS

This paper presented an audio denoising technique based on block matching. The technique was based on the denoising strategy described above, and its efficient implementation was presented in full detail. The implementation results revealed that the block-matching process achieves state-of-the-art denoising performance in terms of both signal-to-noise ratio and subjective improvement in the audible quality of the audio signal. Grouping of similar blocks improved the efficiency of the technique. The blocks were filtered and replaced in the original positions from which they were detached. The grouped blocks overlapped each other, and thus for every element several different estimates were obtained, which were combined to remove noise from the input signal. The reduction in the noise level indicates that the technique preserved the vital unique features of each individual block even when the finest details were contributed by the grouped blocks. In addition, the technique can be adapted to various other audio signals as well as to other problems that can benefit from highly linear signal representations.

REFERENCES

[1] Qiang Fu and Eric A. Wan, 2003. "Perceptual Wavelet Adaptive Denoising of Speech", in Proc. European Conf. on Speech Communication and Technology, pp. 577-580.
[2] Alyson K. Fletcher, Vivek K. Goyal and Kannan Ramchandran, 2003. "Iterative Projective Wavelet Methods for Denoising", in Proc. Wavelets X, part of the 2003 SPIE Int. Symp. on Optical Science & Technology, Vol. 5207, pp. 9-15, San Diego, CA, August.
[3] Claudia Schremmer, Thomas Haenselmann and Florian Bömers, 2001. "A Wavelet Based Audio Denoiser", in Proc. IEEE International Conference on Multimedia and Expo (ICME 2001), pp. 145-148.
[4] Sylvain Durand and Jacques Froment, 2001. "Artifact Free Signal Denoising With Wavelets", in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 01), Vol. 6, pp. 3685-3688.
[5] Imola K. Fodor and Chandrika Kamath, 2003. "Denoising Through Wavelet Shrinkage: An Empirical Study", Journal of Electronic Imaging, Vol. 12, pp. 151-160.
[6] Michael T. Johnson, Xiaolong Yuan and Yao Ren, 2007. "Speech Signal Enhancement through Adaptive Wavelet Thresholding", Speech Communication, Vol. 49, No. 2, pp. 123-133.
[7] Mohammed Bahoura and Jean Rouat, 2006. "Wavelet speech enhancement based on time-scale adaptation", Speech Communication, Vol. 48, No. 12, pp. 1620-1637.
[8] Ching-Ta and Hsiao-Chuan Wang, 2003. "Enhancement of single channel speech based on masking property and wavelet transform", Speech Communication, Vol. 41, No. 2-3, pp. 409-427.
[9] Ching-Ta and Hsiao-Chuan Wang, 2007. "Speech enhancement using hybrid gain factor in critical-band-wavelet-packet transform", Digital Signal Processing, Vol. 17, No. 1, pp. 172-188.
[10] Erik Visser, Manabu Otsuka and Te-Won Lee, 2003. "A spatio-temporal speech enhancement scheme for robust speech recognition in noisy environments", Speech Communication, Vol. 41, No. 2-3, pp. 393-407.
[11] Marián Képesi and Luis Weruaga, 2006. "Adaptive chirp-based time-frequency analysis of speech signals", Speech Communication, Vol. 48, No. 5, pp. 474-492.
[12] Claudia Schremmer, Thomas Haenselmann and Florian Bömers, 2001. "A Wavelet Based Audio Denoiser", in Proc. IEEE International Conference on Multimedia and Expo, pp. 145-148, Tokyo, Japan.
[13] Nanshan Li and Mingquan Zhou, 2008. "Audio Denoising Algorithm Based on Adaptive Wavelet Soft-Threshold of Gain Factor and Teager Energy Operator", in Proc. IEEE International Conference on Computer Science and Software Engineering, Vol. 1, pp. 787-790.
[14] Enqing Dong and Xiaoxiang Pu, 2008. "Speech denoising based on perceptual weighting filter", in Proc. 9th IEEE International Conference on Signal Processing, pp. 705-708, October 26-29, Beijing.
[15] Amit Singer, Yoel Shkolnisky and Boaz Nadler, 2009. "Diffusion Interpretation of Nonlocal Neighborhood Filters for Signal Denoising", SIAM Journal on Imaging Sciences, Vol. 2, No. 1, pp. 118-139.
[16] Guoshen Yu, Stéphane Mallat and Emmanuel Bacry, 2007. "Audio Signal Denoising with Complex Wavelets and Adaptive Block Attenuation", in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Vol. 3, pp. 869-872, April 15-20, Honolulu, HI.
[17] Guoshen Yu, Stéphane Mallat and Emmanuel Bacry, 2008. "Audio Denoising by Time-Frequency Block Thresholding", IEEE Transactions on Signal Processing, Vol. 56, No. 5, pp. 1830-1839.
[18] D. L. Donoho and I. M. Johnstone, 1994. "Ideal spatial adaptation via wavelet shrinkage", Biometrika, Vol. 81, pp. 425-455.
[19] D. L. Donoho, 1995. "De-noising by soft-thresholding", IEEE Transactions on Information Theory, Vol. 41, pp. 613-627.
[20] R. R. Coifman and D. L. Donoho, 1995. "Translation-invariant de-noising", in Wavelets and Statistics, A. Antoniadis and G. Oppenheim, eds., Springer Lecture Notes in Statistics 103, pp. 125-150, Springer-Verlag, New York.
[21] Jonathan Berger and Charles Nichols, 1994. "Brahms at the piano", Leonardo Music Journal, Vol. 4, pp. 23-30.
[22] David L. Donoho, 1993. "Nonlinear Wavelet Methods for Recovering Signals, Images, and Densities from Indirect and Noisy Data", ˜donoho/Reports/.
[23] Ingrid Daubechies, 1992. Ten Lectures on Wavelets, Vol. 61, Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, ISBN 0-89871-274-2.
[24] Stéphane Mallat, 1998. A Wavelet Tour of Signal Processing, Academic Press, San Diego, CA, USA, ISBN 0-12-466605-1.
[25] M. Lang, H. Guo, J. E. Odegard and C. S. Burrus, 1995. "Nonlinear processing of a shift invariant DWT for noise reduction", SPIE, Mathematical Imaging: Wavelet Applications for Dual Use, April.
[26] Curtis Roads, 1996. The Computer Music Tutorial, The MIT Press.
[27] Claudia Schremmer, Thomas Haenselmann and Florian Bömers, 2000. "Wavelets in Real-Time Digital Audio Processing: A Software For Understanding Wavelets in Applied Computer Science", in Workshop on Signal Processing Applications (WoSPA), December, Signal Processing Research Center (SPRC) and IEEE.
[28] Claudia Schremmer, 2000. "The Wavelet Tool", ˜cschremm/wavelets/WaveletAudioTool/.
[29] Manojit Roy, V. Ravi Kumar, B. D. Kulkarni, John Sanderson, Martin Rhodes and Michel van der Stappen, 1999. "Simple denoising algorithm using wavelet transform", AIChE Journal, Vol. 45, No. 11, pp. 2461-2466.
[30] Florian Bömers, 2000. "Wavelets in Real-Time Digital Audio Processing: Analysis and Sample Implementations", M.S. thesis, Universität Mannheim.

AUTHORS PROFILE

B. Jai Shankar received the B.E. degree in Electronics and Communication Engineering from Government College of Engineering, Salem, and the M.E. degree in Applied Electronics from Kongu Engineering College, Erode. He worked at K.S.R College of Engineering, Tiruchengode for three years. He has been working as a lecturer at Kumaraguru College of Technology, Coimbatore since 2008. His research interests include digital signal processing, image processing and wavelets.

K. Duraiswamy received his B.E. degree in Electrical and Electronics Engineering from P.S.G. College of Technology, Coimbatore in 1965, the M.Sc.(Engg) degree from P.S.G. College of Technology, Coimbatore in 1968, and the Ph.D. degree from Anna University in 1986. From 1965 to 1966 he was with the Electricity Board. From 1968 to 1970 he worked at ACCET, Karaikudi. From 1970 to 1983 he worked at Government College of Engineering, Salem. From 1983 to 1995 he was with Government College of Technology, Coimbatore as Professor. From 1995 to 2005 he worked as Principal at K.S. Rangasamy College of Technology, Tiruchengode, and presently he is serving as Dean of K.S. Rangasamy College of Technology, Tiruchengode, India. Dr. K. Duraiswamy is interested in digital image processing, computer architecture and compiler design. He received a 7-year Long Service Gold Medal for NCC. He is a life member of ISTE, a senior member of IEEE and a member of CSI.