The document proposes a hybrid method called Wavelet Embedded Anisotropic Diffusion (WEAD) for image denoising. WEAD is a two-stage filter that first applies anisotropic diffusion to reduce noise, followed by wavelet-based Bayesian shrinkage. This reduces the convergence time of anisotropic diffusion, allowing the image to be denoised with less blurring compared to anisotropic diffusion or wavelet methods alone. Experimental results on various images demonstrate that WEAD achieves better denoising performance than anisotropic diffusion or Bayesian shrinkage methods, as measured by higher PSNR and SSIM scores and fewer required iterations.
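The two-stage pipeline summarised above can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: it assumes a Perona-Malik diffusivity for stage one and a single-level Haar transform with soft shrinkage for stage two, and all parameter values are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=5, kappa=20.0, dt=0.2):
    """A few iterations of Perona-Malik anisotropic diffusion
    (periodic borders via np.roll; an illustrative stand-in)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, 0) - u   # differences to the four neighbours
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

def haar_soft_shrink(img, thresh):
    """Single-level Haar transform, soft-threshold the detail bands, invert."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0    # approximation band
    LH = (a + b - c - d) / 4.0    # detail bands
    HL = (a - b + c - d) / 4.0
    HH = (a - b - c + d) / 4.0
    soft = lambda w: np.sign(w) * np.maximum(np.abs(w) - thresh, 0.0)
    LH, HL, HH = soft(LH), soft(HL), soft(HH)
    out = np.empty_like(img, dtype=float)
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def wead_like(noisy, n_iter=5, thresh=2.0):
    """Stage 1: a few diffusion steps; stage 2: wavelet shrinkage."""
    return haar_soft_shrink(perona_malik(noisy, n_iter=n_iter), thresh)
```

WEAD itself uses a Bayes-derived threshold rather than the fixed `thresh` assumed here; the sketch only shows how shrinkage after diffusion shortens the denoising chain.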
MULTIFOCUS IMAGE FUSION USING MULTIRESOLUTION APPROACH WITH BILATERAL GRADIEN... (cscpconf)
The fusion of two or more images is required when images are captured with different sensors, different modalities, or different camera settings, in order to produce an image that is more suitable for computer processing and human visual perception. The optical lenses in cameras have a limited depth of focus, so it is not possible to acquire a single image in which all objects are in focus. In this case a multifocus image fusion technique is needed to create a single all-in-focus image by combining the relevant information from two or more images. Since sharp images carry more information than blurred ones, image sharpness is taken as the relevant information when framing the fusion rule. Many existing algorithms use contrast or high local energy as the measure of local sharpness; in practice, particularly in multimodal image fusion, this assumption does not hold. In this paper we propose a method that combines a multiresolution transform with a local phase coherence measure to estimate the sharpness of the images. The performance of the fusion process was evaluated with mutual information, edge association, and spatial frequency as quality metrics, and compared with the Laplacian pyramid, DWT (Discrete Wavelet Transform), and bilateral gradient based sharpness criterion methods. The results show that the proposed algorithm outperforms the existing ones.
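The sharpness-driven fusion rule described in this abstract can be illustrated with a simpler sharpness proxy. The sketch below is not the paper's method: it substitutes a box-averaged absolute-Laplacian response for the local phase coherence measure, and the choose-max rule and window size are illustrative assumptions.

```python
import numpy as np

def local_sharpness(img, radius=1):
    """Sharpness proxy: absolute Laplacian response, box-averaged over a
    local window (wrap-around borders via np.roll)."""
    u = img.astype(float)
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
    s = np.abs(lap)
    acc = np.zeros_like(s)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            acc += np.roll(np.roll(s, dy, 0), dx, 1)
    return acc / (2 * radius + 1) ** 2

def fuse_choose_max(img_a, img_b):
    """Per-pixel choose-max rule: keep the pixel from the sharper source."""
    mask = local_sharpness(img_a) >= local_sharpness(img_b)
    return np.where(mask, img_a, img_b)
```

The abstract's point is precisely that such contrast/energy proxies can fail in multimodal settings, which is why the paper replaces them with phase coherence.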
IMAGE DENOISING BY MEDIAN FILTER IN WAVELET DOMAIN (ijma)
The details of a noisy image may be restored by removing the noise with a suitable image de-noising method. In this research, a new method of image de-noising based on applying a median filter (MF) in the wavelet domain is proposed and tested. Various wavelet transform filters are used in conjunction with the median filter in the experiments, in order to obtain better de-noising results and, consequently, to select the best-suited filter. The wavelet transform, which operates on the frequency sub-bands split from an image, is a powerful tool for image analysis. According to this experimental work, the proposed method gives better results than using either the wavelet transform or the median filter alone. MSE and PSNR values are used to measure the improvement in the de-noised images.
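The core idea of median filtering inside the wavelet domain can be shown on a 1-D signal. This is a toy sketch under stated assumptions (single-level Haar transform, 3-tap median), not the paper's 2-D pipeline or its filter-selection study.

```python
import numpy as np

def haar1d(x):
    """Single-level 1-D Haar split into approximation and detail."""
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / 2.0, (x[0::2] - x[1::2]) / 2.0

def ihaar1d(approx, detail):
    """Exact inverse of haar1d."""
    out = np.empty(approx.size * 2)
    out[0::2] = approx + detail
    out[1::2] = approx - detail
    return out

def median3(v):
    """Median of each sample with its two neighbours (edges replicated)."""
    p = np.pad(v, 1, mode='edge')
    return np.median(np.stack([p[:-2], p[1:-1], p[2:]]), axis=0)

def wavelet_median_denoise(signal):
    """Median-filter only the detail band, keep the approximation intact."""
    a, d = haar1d(signal)
    return ihaar1d(a, median3(d))
```

Impulse noise lands mostly in the detail band, so a median filter there suppresses spikes while the approximation band keeps the smooth content, which is the intuition behind combining the two tools.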
Labview with dwt for denoising the blurred biometric images (ijcsa)
In this paper, denoising of a blurred biometric image (a fingerprint) is presented and investigated using LabVIEW applications; the image is blurred and corrupted with Gaussian noise. The proposed algorithm uses a discrete wavelet transform (DWT) to divide the image into two parts, which increases the processing speed for large biometric images. The work includes two tasks: the first designs a LabVIEW system to calculate and present the approximation coefficients, by which the image's blur factor is reduced to a minimum value according to the proposed algorithm; the second removes the image's noise by calculating the regression coefficients according to a Bayesian shrinkage estimation method.
LOCAL DISTANCE AND DEMPSTER-SHAFER FOR MULTI-FOCUS IMAGE FUSION (sipij)
This work proposes a new image fusion method using Dempster-Shafer theory and local variability (DST-LV). The method takes into account the behaviour of each pixel with respect to its neighbours: it calculates the quadratic distance between the value of the pixel I(x, y) at each point and the values of all neighbouring pixels. The local variability is used to determine the mass function defined in Dempster-Shafer theory. The two classes of Dempster-Shafer theory studied are the fuzzy part and the focused part. The results of the proposed method are significantly better than those of other methods.
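The local-variability measure described above can be sketched directly. Everything below is an illustrative simplification: the quadratic-distance map follows the abstract, but the mass assignment and the winner-take-all fusion are hypothetical stand-ins for the paper's actual Dempster-Shafer combination.

```python
import numpy as np

def local_variability(img):
    """Quadratic distance between each pixel and its 8 neighbours
    (wrap-around borders), as a per-pixel variability map."""
    u = img.astype(float)
    acc = np.zeros_like(u)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            acc += (u - np.roll(np.roll(u, dy, 0), dx, 1)) ** 2
    return np.sqrt(acc / 8.0)

def mass_functions(img_a, img_b, eps=1e-12):
    """Illustrative mass assignment for the 'focused' hypothesis of each
    source: variabilities normalised so m_a + m_b = 1 per pixel."""
    va, vb = local_variability(img_a), local_variability(img_b)
    total = va + vb + eps
    return va / total, vb / total

def fuse_dst_like(img_a, img_b):
    """Keep the source with the larger mass at each pixel (a crude
    simplification of Dempster's rule of combination)."""
    ma, mb = mass_functions(img_a, img_b)
    return np.where(ma >= mb, img_a, img_b)
```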
A Novel and Robust Wavelet based Super Resolution Reconstruction of Low Resol... (CSCJournals)
High resolution images can be reconstructed from several blurred, noisy, and aliased low resolution images using a computational process known as super resolution reconstruction, which combines several low resolution images into a single higher resolution image. In this paper we concentrate on a special case of the super resolution problem where the warp consists of pure translation and rotation, the blur is space invariant, and the noise is additive white Gaussian. Super resolution reconstruction consists of registration, restoration, and interpolation phases. Once the low resolution images are registered with respect to a reference frame, wavelet-based restoration is performed to remove blur and noise, and finally the images are interpolated using adaptive interpolation. We propose an efficient wavelet-based denoising with adaptive interpolation for super resolution reconstruction. Under this framework, the low resolution images are decomposed into several levels to obtain different frequency bands, and our proposed soft thresholding technique removes the noisy coefficients by fixing an optimum threshold value. To obtain an image of higher resolution we propose an adaptive interpolation technique. The proposed scheme preserves edges and smooths the image without introducing artifacts. Experimental results show that the approach succeeds in obtaining a high-resolution image with high PSNR and ISNR and good visual quality.
Boosting CED using robust orientation estimation (ijma)
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding it an external orientation field obtained from a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose a new scheme is proposed in which the orientation is pre-calculated using local and integration scales. The experiments show that the proposed scheme performs much better in noisy environments than traditional Coherence Enhancement Diffusion.
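The "local and integration scales" idea behind such orientation estimation is the classic structure-tensor recipe: compute gradients at a small local scale, then average the tensor products at a larger integration scale. The sketch below is a generic illustration of that recipe, not the paper's robust estimator; box smoothing stands in for Gaussian smoothing.

```python
import numpy as np

def box_blur(x, radius):
    """Crude separable box smoothing as a stand-in for Gaussian smoothing
    at a given scale (wrap-around borders via np.roll)."""
    out = x.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for s in range(-radius, radius + 1):
            acc += np.roll(out, s, axis)
        out = acc / (2 * radius + 1)
    return out

def ridge_orientation(img, local_radius=1, integration_radius=2):
    """Structure-tensor orientation: gradients at the local scale, tensor
    entries averaged at the larger integration scale."""
    u = box_blur(img, local_radius)          # local (derivative) scale
    gy, gx = np.gradient(u)
    # Integration scale: smooth the tensor products, not the raw gradients.
    jxx = box_blur(gx * gx, integration_radius)
    jyy = box_blur(gy * gy, integration_radius)
    jxy = box_blur(gx * gy, integration_radius)
    # Dominant gradient orientation of the smoothed structure tensor.
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)
```

Averaging the tensor (rather than the gradient vectors, which would cancel in sign) is what makes the estimate stable on noisy ridge patterns.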
Improved nonlocal means based on pre-classification and invariant block matching (IAEME Publication)
One of the most popular self-similarity based image denoising methods is nonlocal means (NLM). Although it achieves remarkable performance, the method has a few shortcomings, e.g., the computationally expensive calculation of the similarity measure and the lack of reliable candidates for some non-repetitive patches. In this paper, we propose to improve NLM by integrating Gaussian blur, clustering, and row image weighted averaging into the NLM framework. Experimental results show that the proposed technique denoises better than the original NLM both quantitatively and visually, especially when the noise level is high.
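For reference, the baseline NLM scheme being improved here works as follows: each sample is replaced by a weighted average of all samples, with weights from patch similarity. This is a minimal 1-D sketch with illustrative parameters, not the paper's accelerated variant; its all-pairs distance matrix is exactly the expensive step the abstract criticises.

```python
import numpy as np

def nlm_1d(signal, patch=3, h=1.0):
    """Minimal non-local means on a 1-D signal."""
    x = np.asarray(signal, float)
    half = patch // 2
    p = np.pad(x, half, mode='edge')
    # One patch (row) per sample position.
    patches = np.stack([p[i:i + patch] for i in range(x.size)])
    # Mean squared patch distance between every pair of samples.
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=2)
    w = np.exp(-d2 / (h * h))           # similarity weights
    return (w @ x) / w.sum(axis=1)      # normalised weighted average
```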
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) is an open-access international journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMS (csandit)
Fingerprint image compression based on geometric transforms is an important research topic; in recent years many transforms have been proposed to give the best representation of this particular type of image, such as classical wavelets and wave atoms. In this paper we present a comparative study of these transforms for use in compression. The results show that, for fingerprint images, wave atoms offer better performance than the current transform-based compression standard. The wave atom transform makes a considerable contribution to fingerprint image compression, achieving high compression ratios and PSNR values with a reduced number of coefficients. In addition, the proposed method is verified with objective and subjective testing.
Filtering Corrupted Image and Edge Detection in Restored Grayscale Image Usin... (CSCJournals)
In this paper, different first- and second-derivative filters are investigated for finding the edge map after denoising a corrupted grayscale image. We propose a new first-order derivative filter and describe a novel edge-finding approach aimed at producing a better edge map in a restored grayscale image. A subjective method is used to visually compare the performance of the proposed derivative filter with other existing first- and second-order derivative filters, while the root mean square error and the root mean square signal-to-noise ratio are used for objective evaluation. Finally, to validate the efficiency of the filtering schemes, different algorithms are proposed and a simulation study is carried out using MATLAB 5.0.
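A standard first-order derivative filter of the kind compared in such studies is the Sobel operator. The sketch below is that classic baseline, not the paper's proposed filter, implemented with plain numpy correlation.

```python
import numpy as np

def sobel_edge_map(img):
    """Gradient-magnitude edge map from the first-order Sobel operator
    (replicated borders)."""
    u = np.pad(img.astype(float), 1, mode='edge')
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    # Correlate by accumulating shifted windows (no scipy needed).
    for dy in range(3):
        for dx in range(3):
            win = u[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * win
            gy += ky[dy, dx] * win
    return np.hypot(gx, gy)
```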
In this paper we discuss speckle reduction in images with the recently proposed Wavelet Embedded Anisotropic Diffusion (WEAD) and Wavelet Embedded Complex Diffusion (WECD). Both methods improve on anisotropic and complex diffusion by adding wavelet-based Bayes shrink as a second stage. Both WEAD and WECD produce excellent results compared with existing speckle reduction filters.
Development of an advanced technique for historical docum... (INFOGAIN PUBLICATION)
In this paper, a technique for historical document preservation is explored. A noise estimation technique is applied to determine the noise standard deviation: we first estimate the level of noise present in the noisy image by selecting weak textured patches on the basis of the gradient matrix and its statistical properties, and then eliminate that noise through non-local means (NLM) denoising, which uses the estimated noise level as its filtering parameter. The technique is based on a weighted average of similar pixels in the historical image. Non-local means on its own removes noise without taking the noise level into account, yet accounting for the noise level is essential for the best preservation of historical document images.
An Efficient Thresholding Neural Network Technique for High Noise Densities E... (CSCJournals)
Medical images corrupted by high noise densities lose their usefulness for diagnosis and early detection. Thresholding neural networks (TNN) with a new class of smooth nonlinear functions have been widely used to improve the efficiency of the denoising procedure. This paper introduces a better solution for medical images in noisy environments, serving the early detection of breast cancer tumors. The proposed algorithm consists of two consecutive phases: image denoising, where an adaptive-learning TNN with a remarkable time improvement and good image quality is introduced, and a semi-automatic segmentation that extracts suspicious regions of interest (ROIs) as an evaluation of the proposed technique. A data set is then used to show the algorithm's superior image quality and reduced complexity, especially in highly noisy environments.
This presentation covers different image restoration and reconstruction techniques used today in digital image processing. The slides are prepared from the Gonzalez and Pratt textbooks.
Non-Blind Deblurring Using Partial Differential Equation Method (IJCATR)
In this paper, a new two-dimensional image deblurring algorithm based on the basic concepts of PDEs is introduced. The usual ways to estimate the degradation function for restoration (when the PSF is known a priori, the problem is called non-blind deblurring) are observation, experimentation, and mathematical modeling. Here, PDE-based mathematical modeling is proposed to model the degradation and recovery process. Several restoration methods, such as Wiener filtering, inverse filtering [1], constrained least squares, and Lucy-Richardson iteration, remove motion blur either by using the Fourier transform in the frequency domain or by using optimization techniques. The main difficulty with these methods is estimating the deviation of the restored image from the original image at individual points, which is a consequence of their processing in the frequency domain. Another method, travelling-wave deblurring, works in the spatial domain. A PDE-type observation model describes well several physical mechanisms, such as relative motion between the camera and the subject (motion blur), bad focusing (defocus blur), or a number of other mechanisms that are well modeled by a convolution. Finally, the PDE method is compared with existing restoration techniques such as Wiener filters and median filters [2], and the results are compared on the basis of the PSNR calculated for various noises.
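The frequency-domain baseline mentioned above, Wiener filtering, fits in a few lines. This is the textbook formulation with a constant `k` standing in for the noise-to-signal power ratio (an illustrative simplification), not the paper's PDE method.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener restoration: F_hat = H* G / (|H|^2 + K),
    where K approximates the noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the blur
    G = np.fft.fft2(blurred)                # spectrum of the degraded image
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```

With `k = 0` this degenerates to inverse filtering, which blows up wherever `H` is near zero; the regularising `k` is what trades residual blur against noise amplification.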
WAVELET THRESHOLDING APPROACH FOR IMAGE DENOISING (IJNSA Journal)
Recovering an original image corrupted by Gaussian noise is a long-established problem in signal and image processing. Removing this noise by wavelet thresholding, with a focus on statistical modelling of the wavelet coefficients and the optimal choice of thresholds, is known as image denoising. In the first part, the threshold is derived in a Bayesian framework using a probabilistic model of the image wavelet coefficients based on the generalized Gaussian distribution (GGD), which is widely used in image processing applications. The proposed threshold is very simple. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark for the image, and that it outperforms Donoho and Johnstone's SureShrink. The second part of the paper argues that lossy compression can be used for image denoising, achieving image compression and denoising simultaneously. The parameter is chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compress-and-denoise method does indeed remove noise significantly, especially for large noise power.
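The BayesShrink rule summarised above is concrete enough to sketch: estimate the noise level from a detail band with the robust median rule, then set T = sigma_n^2 / sigma_x. The code below is a minimal numpy rendering of that standard recipe; the fallback for a pure-noise band is an illustrative convention.

```python
import numpy as np

def bayes_shrink_threshold(detail_coeffs):
    """BayesShrink threshold T = sigma_n^2 / sigma_x for one detail band."""
    d = np.asarray(detail_coeffs, float).ravel()
    sigma_n = np.median(np.abs(d)) / 0.6745        # robust noise estimate
    sigma_y2 = np.mean(d ** 2)                     # observed band variance
    # Signal std: observed variance minus estimated noise variance.
    sigma_x = np.sqrt(max(sigma_y2 - sigma_n ** 2, 0.0))
    if sigma_x == 0.0:
        return np.abs(d).max()                     # pure noise: kill the band
    return sigma_n ** 2 / sigma_x

def soft_threshold(w, t):
    """Soft shrinkage applied with the BayesShrink threshold."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
```

The threshold grows as the signal content `sigma_x` shrinks, so noisy, featureless bands are suppressed aggressively while structured bands are barely touched.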
Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelet... (CSCJournals)
The curvelet denoising approach has been widely used in many fields for its ability to produce high-quality result images. The curvelet transform is superior to wavelets in representing image edges, such as the geometric characteristics of curves, and has already given good results in image denoising. However, the artifacts that appear in the result images of the curvelet approach prevent its application in some fields, such as medical imaging. This paper puts forward a fusion-based method because certain regions of the image exhibit ringing and radial stripes after the curvelet transform. The experimental results indicate that the fusion method has a broad future in eliminating image noise. The results of the algorithm applied to ultrasonic medical images also indicate that it can be used efficiently in medical imaging.
Image Denoising Using Wavelet TransformIJERA Editor
In this project, we have studied the importance of wavelet theory in image denoising over other traditional methods. We studied the importance of thresholding in wavelet theory and the two basic thresholding method i.e. hard and soft thresholding experimentally. We also studied why soft thresholding is preferred over hard thresholding, three types of soft thresholding (Bayes shrink, Sure shrink, Visu shrink) as well as advantages and disadvantage of each of them
Performance Analysis of Image Enhancement Using Dual-Tree Complex Wavelet Tra...IJERD Editor
Resolution enhancement (RE) schemes which are not based on wavelets has one of the major
drawbacks of losing high frequency contents which results in blurring. The discrete wavelet- transform-based
(DWT) Resolution Enhancement scheme generates artifacts (due to a DWT shift-variant property). A wavelet-
Domain approach based on dual-tree complex wavelet transform (DT-CWT) & nonlocal means (NLM) is
proposed for RE of the satellite images. A satellite input image is decomposed by DT-CWT (which is nearly
shift invariant) to obtain high-frequency sub bands. Here the Lanczos interpolator is used to interpolate the highfrequency
sub bands & the low-resolution (LR) input image. The high frequency sub bands are passed through
an NLM filter to cater for the artifacts generated by DT-CWT (despite of it’s nearly shift invariance). The
filtered high-frequency sub bands and the LR input image are combined by using inverse DTCWT to obtain a
resolution-enhanced image. Objective and subjective analyses show superiority of the new proposed technique
over the conventional and state-of-the-art RE techniques.
Image Denoising Using Earth Mover's Distance and Local HistogramsCSCJournals
In this paper an adaptive range and domain filtering is presented. In the proposed method local histograms are computed to tune the range and domain extensions of bilateral filter. Noise histogram is estimated to measure the noise level at each pixel in the noisy image. The extensions of range and domain filters are determined based on pixel noise level. Experimental results show that the proposed method effectively removes the noise while preserves the details. The proposed method performs better than bilateral filter and restored test images have higher PSNR than those obtained by applying popular Bayesshrink wavelet denoising method.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMScsandit
The fingerprint images compression based on geometric transformed presents important
research topic, these last year’s many transforms have been proposed to give the best
representation to a particular type of image “fingerprint image”, like classics wavelets and
wave atoms. In this paper we shall present a comparative study between this transforms, in
order to use them in compression. The results show that for fingerprint images, the wave atom
offers better performance than the current transform based compression standard. The wave
atoms transformation brings a considerable contribution on the compression of fingerprints
images by achieving high values of ratios compression and PSNR, with a reduced number of
coefficients. In addition, the proposed method is verified with objective and subjective testing.
Coherence enhancement diffusion using robust orientation estimationcsandit
In this paper, a new robust orientation estimation for Coherence Enhancement Diffusion (CED)
is proposed. In CED, proper scale selection is very important as the gradient vector at that
scale reflects the orientation of local ridge. For this purpose, a new scheme is proposed in
which pre calculated orientation, by using orientation diffusion, is used to find the correct true
local scale. From the experiments it is found that the proposed scheme is working much better
in noisy environment as compared to the traditional Coherence Enhancement Diffusion.
Clustered Compressive Sensingbased Image Denoising Using Bayesian Frameworkcsandit
This paper provides a compressive sensing (CS) method of denoising images using Bayesian
framework. Some images, for example like magnetic resonance images (MRI) are usually very
weak due to the presence of noise and due to the weak nature of the signal itself. So denoising
boosts the true signal strength. Under Bayesian framework, we have used two different priors:
sparsity and clusterdness in an image data as prior information to remove noise. Therefore, it is
named as clustered compressive sensing based denoising (CCSD). After developing the
Bayesian framework, we applied our method on synthetic data, Shepp-logan phantom and
sequences of fMRI images. The results show that applying the CCSD give better results than
using only the conventional compressive sensing (CS) methods in terms of Peak Signal to Noise
Ratio (PSNR) and Mean Square Error (MSE). In addition, we showed that this algorithm could
have some advantages over the state-of-the-art methods like Block-Matching and 3D
Filtering (BM3D).
This paper provides a compressive sensing (CS) method of denoising images using Bayesian framework. Some images, for example like magnetic resonance images (MRI) are usually very
weak due to the presence of noise and due to the weak nature of the signal itself. So denoising
boosts the true signal strength. Under Bayesian framework, we have used two different priors:
sparsity and clusterdness in an image data as prior information to remove noise. Therefore, it is
named as clustered compressive sensing based denoising (CCSD). After developing the
Bayesian framework, we applied our method on synthetic data, Shepp-logan phantom and
sequences of fMRI images. The results show that applying the CCSD give better results than
using only the conventional compressive sensing (CS) methods in terms of Peak Signal to Noise
Ratio (PSNR) and Mean Square Error (MSE). In addition, we showed that this algorithm could
have some advantages over the state-of-the-art methods like Block-Matching and 3D
Filtering (BM3D).
A Comparative Study of Wavelet and Curvelet Transform for Image DenoisingIOSR Journals
Abstract : This paper describes a comparison of the discriminating power of the various multiresolution based thresholding techniques i.e., Wavelet, curve let for image denoising.Curvelet transform offer exact reconstruction, stability against perturbation, ease of implementation and low computational complexity. We propose to employ curve let for facial feature extraction and perform a thorough comparison against wavelet transform; especially, the orientation of curve let is analysed. Experiments show that for expression changes, the small scale coefficients of curve let transform are robust, though the large scale coefficients of both transform are likely influenced. The reason behind the advantages of curvelet lies in its abilities of sparse representation that are critical for compression, estimation of images which are denoised and its inverse problems, thus the experiments and theoretical analysis coincide . Keywords: Curvelet transform, Face recognition, Feature extraction, Sparse representation Thresholding rules,Wavelet transform..
Computer apparition plays the most important role in human perception, which is limited to only the visual band of the electromagnetic spectrum. The need for Radar imaging systems, to recover some sources that
are not within human visual band, is raised. This paper present new algorithm for Synthetic Aperture Radar (SAR) images segmentation based on thresholding technique. Entropy based image thresholding has
received sustainable interest in recent years. It is an important concept in the area of image processing.
Pal (1996) proposed a cross entropy thresholding method based on Gaussian distribution for bi-modal images. Our method is derived from Pal method that segment images using cross entropy thresholding based on Gamma distribution and can handle bi-modal and multimodal images. Our method is tested using
Synthetic Aperture Radar (SAR) images and it gave good results for bi-modal and multimodal images. The
results obtained are encouraging.
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
The complexity of Medical image reconstruction requires tens to hundreds of billions of computations per second. Until few years ago, special purpose processors designed especially for such applications were used. Such processors require significant design effort and are thus difficult to change as new algorithms in reconstructions evolve and have limited parallelism. Hence the demand for flexibility in medical applications motivated the use of stream processors with massively parallel architecture. Stream processing architectures offers data parallel kind of parallelism.
As data processing requirements increased with new applications, new processing technologies like Stream computing and parallel execution came into being. This write‐up briefly compares two competing performance architectures for data parallelism – Cell Broadband Engine (Cell BE in short) and the GPU (Graphics Processing Unit). The Cell BE Processor architecture was developed in collaboration between IBM, Sony and Toshiba. Development started in 2001 and first set of products based on this architecture started appearing in 2005.
A Set-top-Box (STB) is a very common name heard in the consumer electronics market. It is a device that is attached to a Television for enhancing its functions or the quality of its functions. On the other side, the STB is connected to an external source of signal, like satellite, cable, terrestrial or internet. The STB processes the signal it receives, turns it into content, which is then displayed on the television screen or other display device. There are different types of STBs based on what kind of signals it can receive and what kind of processing it can do. The most widely used STBs are DVB STBs, which receive DVB (Digital Video Broadcast) transmission.
Fast and robust tracking of multiple faces is receiving increased attention from computer vision researchers as it finds potential applications in many fields like video surveillance and computer mediated video conferencing. Real-time tracking of multiple faces in high resolution videos involve three basic tasks namely initialization, tracking and display. Among these, tracking is quite compute intensive as it involves particle filtering that won’t yield a real time performance if we use a conventional CPU based system alone.
This paper presents a study of the efficiency and performance speedup achieved by applying Graphics Processing Units for Face Recognition Solutions. We explore one of the possibilities of parallelizing and optimizing a well-known Face Recognition algorithm, Principal Component Analysis (PCA) with Eigenfaces. In recent years, the Graphics Processing Units (GPU) has been the subject of extensive research and the computation speed of GPUs has been rapidly increasing.
In today’s competitive software development scenario, the customer demands a testing coverage which not only ensures the stated requirements but also the implied ones. This situation calls for an exhaustive testing which may not be always possible due to various reasons. Testing, due to its last position in SDLC, often gets crunched due to the cumulative schedule slippages. Hence Tester is faced with a challenge to make testing as efficient as possible within a short time span due to cost constraints. With selective testing an only option, test leads usually go for the age-old approach of Random Testing. Random testing does not ensure coverage in a scientific manner.
SOM (Self-Organizing Map) is one of the most popular artificial neural network algorithms in the unsupervised learning category. For efficient construction of large maps searching the best-matching unit is usually the computationally heaviest operation in the SOM. The parallel nature of the algorithm and the huge computations involved makes it a good target for GPU based parallel implementation. This paper presents an overall idea of the optimization strategies used for the parallel implementation of Basic-SOM on GPU using CUDA programming paradigm.
This paper provides an overview of Universal Plug and Play (UPnP) and how it works to build a digital home network. UPnP network technology allows personal computer and consumer electronics devices to advertise and offer their services to network clients. UPnP can be viewed as the technological foundation of the digital home, enabling innovative usage models, higher levels of automation, and easier integration of devices from different manufacturers. UPnP technology is all about making home networking simple and affordable for users.
In this paper we present a recently developed tool named BrainAssist, which can be used for the study and analysis of brain abnormalities like Focal Cortical Dysplasia (FCD), Heterotopia and Multiple Sclerosis (MS). For the analysis of FCD and Heterotopia we used T1 weighted MR images and for the analysis of Multiple Sclerosis we used Proton Density (PD) images. 52 patients were studied. Out of 52 cases 36 were affected with FCDs, 6 with MS lesions and 10 normal cases. Preoperative MR images were acquired on a 1.5-T scanner (Siemens Medical Systems, Germany).
Identification of Focal Cortical Dysplasia (FCD) can be difficult due to the subtle MRI changes. Though sequences like FLAIR (fluid attenuated inversion recovery) can detect a large majority of these lesions, there are smaller lesions without signal changes that can easily go unnoticed by the naked eye. The aim of this study is to improve the visibility of Focal Cortical Dysplasia lesions in the T1 weighted brain MRI images. In the proposed method, we used a complex diffusion based approach for calculating the FCD affected areas.
Software Defined Networking (SDN) is an emerging trend in the networking and communication industry and promises to deliver enormous benefits, from reduced costs to more efficient network operations. It is a new approach that gives network operators and owners more control of the infrastructure, allowing optimization, customization and virtualization that enable the creation of new types of network services. This is done by decoupling the management and control planes that make decisions about where traffic is sent from (the control plane) the underlying hardware that forwards data traffic to the selected destination (the data plane).
Software Testing is the last phase in software development lifecycle which has high impact on the quality of the final product delivered to the customer. Even after being a critical phase, it was not given the importance as it actually deserves. The schedule constraints and slippage carry forwarded from the previous phase also make the testing phase more torrent. History reveals that the situation has changed with time, wherein testing is now visualized as one of the most critical, phase of software development. This makes software testing a discipline which demands for continuous and systematic growth. Software testing is a trade-off between Cost, Time and Quality.
Test Automation is an accepted technique which is adapted by the industry for increasing the effectiveness of the testing phase. The recurring tasks are being automated by the tools thus simplifying the human efforts and results in increased quality of product under test. A study of test automation programmes in the industry reveals the fact that a good percentage of them fail to find the intended results.
Complex digital and analog circuits and multiple clock signals used for design and development of modern systems usually make the job of engineers and designers a tedious one. While working with complex circuits and signals, a designer might encounter problems with circuit validation due to long simulation time. These complexities adversely affect the development time and hence increase time to market incurring higher production costs. By applying a new methodology in their Digital Phase-Locked Loop (Digital PLL) design, the engineers at QuEST reduced the simulation effort to one-by-third.
In software industry, test automation is a key solution for achieving volume verification and validation with optimal costs. Picking up the right automation tool and underlying scripting language has always been a challenge, balancing between cost factors and team’s expertise levels in various tools and scripting languages. A real solution would be one that allows full flexibility for team on these two core concern areas – test automation tool and scripting language. Flexi any Script any Tool (FaSaT) is a test automation framework which provides interoperability among multiple test automation tools and multiple scripting languages.
The improved hybrid model for molecular image denoising, proposed by NeST Software, can give a better SNR Molecular Image output. Read more on the proposed hybrid model.
Analog-to-Digital Converter (ADC) is an integral part of high-speed signal processing applications. This paper discusses about 10-bit SAR based ADC that enables very low power consumption and sampling rate as high as 165 MSPS.
Ground breaking innovations like Advanced Driver Assistance System (ADAS) makes driving easier and safer on congested roads. The whitepaper details how FPGA technology emerges as a complete solution for ADAS.
Reusable Video IP Cores give software engineering service providers flexibility and less time to market while catering to the ever increasing demands of customers. Read on to know more about the Reusable IP Cores developed by NeST Software.
More from QuEST Global (erstwhile NeST Software) (19)
IMAGE DENOISING USING WAVELET EMBEDDED
ANISOTROPIC DIFFUSION (WEAD)
Jeny Rajan*, M.R. Kaimal†
*Network Systems & Technologies Ltd (NeST), Technopark Campus, Trivandrum, INDIA, Email: jenyrajan@rediffmail.com
†Dept. of Computer Science, University of Kerala, Trivandrum, INDIA, Email: mrkaimal@yahoo.com
Keywords: Anisotropic Diffusion, Bayesian Shrinkage,
Denoising, Wavelets.
Abstract
In this paper a PDE based hybrid method for image denoising is introduced. The method is a two-stage filter: anisotropic diffusion followed by wavelet based Bayesian shrinkage. Efficient denoising is achieved by reducing the convergence time of the anisotropic diffusion; as the convergence time decreases, image blurring is restricted, producing a better denoised image than anisotropic diffusion or wavelet based methods alone. Experimental results based on PSNR, SSIM and edge analysis show the excellent performance of the proposed method.
1 Introduction
Image denoising plays a significant role in image pre-processing. As its application areas are numerous, there is a strong demand for efficient denoising algorithms. In this work, we developed a new method, Wavelet Embedded Anisotropic Diffusion (WEAD), and applied it to denoise images corrupted with additive Gaussian noise. The intention behind this method is to reduce the convergence time of anisotropic diffusion and thereby improve its performance. The proposed method produces excellent results when compared with various wavelet shrinkage [6], [7], [8] and non-linear diffusion methods [1], [2].
Second order partial differential equations have been used as efficient methods for removing noise from images. One of the most commonly used PDE based denoising techniques since its introduction is the Perona-Malik method [1]. The Perona-Malik equation for an image u is given by
$$\frac{\partial u}{\partial t} = \mathrm{div}\left[\,c(|\nabla u|)\,\nabla u\,\right], \qquad u(x,y,0) = u_0(x,y) \tag{1}$$
where $\nabla u$ is the gradient of the image u, div is the divergence operator and c is the diffusion coefficient. The diffusion coefficient c is a non-increasing function of the gradient magnitude: it diffuses more on plateaus and less on edges, and thus edges are preserved. Another objective for the selection of c(.) is to incur backward diffusion around intensity transitions so that edges are sharpened, and to assure forward diffusion in smooth areas for noise removal [2]. Some of the commonly employed diffusivity functions are given in [12]. Equation (1) has been studied as an efficient tool for noise removal and for the scale space analysis of images.
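Equation (1) is easy to discretise explicitly. The sketch below is not from the paper: the exponential diffusivity $c(g)=\exp(-(g/\kappa)^2)$ is one of the standard Perona-Malik choices, and the values of kappa, dt and n_iter are illustrative assumptions.

```python
import numpy as np

def perona_malik(u, n_iter=20, kappa=15.0, dt=0.2):
    """Explicit 4-neighbour finite-difference scheme for Eq. (1),
    using the exponential diffusivity c(g) = exp(-(g/kappa)^2).
    Borders are treated periodically (np.roll) for brevity."""
    u = u.astype(float).copy()
    for _ in range(n_iter):
        # differences towards the four neighbours
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping diffusivity per direction: small where the
        # local gradient is large, so edges diffuse less
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # explicit Euler step of u_t = div(c * grad u)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

With dt at most 0.25, the explicit scheme remains stable for this four-neighbour stencil.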
Wavelet based methods have always been a good choice for image denoising and have been discussed widely in the literature for the past two decades [6]-[8], [11], [12]. Wavelet shrinkage permits a more efficient noise removal while preserving high frequencies, based on the imbalance of the energy of such representations [4]. The technique denoises the image in the orthogonal wavelet domain, where each coefficient is compared against a threshold; if the coefficient is smaller than the threshold, it is set to zero, otherwise it is kept or modified.
Wavelet shrinkage depends heavily on the choice of a thresholding parameter, and this choice determines, to a great extent, the efficacy of denoising. The denoising process is based on the fact that the wavelet transform compresses most of the $L^2$ energy of the signal into a restricted number of large coefficients. The procedure can be summarized in three steps
$$Y = W(x), \qquad Z = T(Y,\lambda), \qquad Y' = W^{-1}(Z) \tag{2}$$
where x is the affected signal, and $W(\cdot)$ and $W^{-1}$ are the forward and inverse wavelet transform operators. $T(Y,\lambda)$ denotes the denoising operator with soft or hard threshold $\lambda$. Of the various methods based on wavelet thresholding, VisuShrink [6], SureShrink [7], BayesShrink [8] and their variants are the most popular. VisuShrink uses one of the well known thresholding rules: the universal threshold. Subband adaptive systems such as SureShrink, a data driven method, have superior performance. More recently, BayesShrink [8], which is also a data driven subband adaptive technique, was proposed and outperforms both VisuShrink and SureShrink. In the proposed method BayesShrink is used along with anisotropic diffusion to obtain better performance than stand-alone anisotropic diffusion or BayesShrink.
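The three steps of (2) can be made concrete with a toy one-level Haar transform. This is a sketch, not the decomposition used in the paper's experiments; soft() implements the soft-threshold variant of $T(Y,\lambda)$.

```python
import numpy as np

def soft(y, lam):
    """Soft threshold: shrink coefficients towards zero by lam."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def haar_denoise(x, lam):
    """The three steps of Eq. (2) with a one-level 1-D Haar transform:
    Y = W(x), Z = T(Y, lam), x' = W^{-1}(Z).  x must have even length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    d = soft(d, lam)                        # threshold the details only
    out = np.empty(len(x), dtype=float)     # inverse Haar transform
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out
```

Only the detail coefficients are thresholded, since the approximation band carries most of the signal energy.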
This work does not attempt to investigate in depth the theoretical properties of the proposed model in general settings. Our primary goal is to demonstrate how the performance of PDE based denoising methods can be improved by using the proposed hybrid method. The paper is organized as follows. In Section 2 the Bayesian denoising technique is discussed. Sections 3 and 4 explain the proposed method, the experimental results and the comparison of the proposed method with other popular models. Finally, conclusions and remarks are included in Section 5.
Appeared in the Proceedings of the IEE International Conference on Visual Information Engineering (VIE) 2006, pp. 589-593
2 Denoising Using Bayesian Shrinkage
The Bayesian Shrinkage estimates a soft-threshold that
minimizes the Bayesian risk. The Bayesian risk estimation is
subband dependent. The threshold is mathematically derived
in [8]. The generalized Gaussian distribution (GGD),
following [9], is

$$GG_{\sigma_X,\beta}(x) = C(\sigma_X,\beta)\,\exp\{-[\alpha(\sigma_X,\beta)\,|x|]^{\beta}\} \qquad (5)$$

for $-\infty < x < \infty$, $\sigma_X > 0$, $\beta > 0$, where

$$\alpha(\sigma_X,\beta) = \sigma_X^{-1}\left[\frac{\Gamma(3/\beta)}{\Gamma(1/\beta)}\right]^{1/2} \qquad (6)$$

and

$$C(\sigma_X,\beta) = \frac{\beta\cdot\alpha(\sigma_X,\beta)}{2\,\Gamma(1/\beta)} \qquad (7)$$

and

$$\Gamma(t) = \int_0^{\infty} e^{-u}\,u^{t-1}\,du \qquad (8)$$

is the gamma function. The parameter $\sigma_X$ is the standard
deviation and β is the shape parameter. Here the objective is
to find a soft threshold T that minimizes the Bayes risk,
$$r(T) = E(\hat{X}-X)^2 = E_X\,E_{Y|X}(\hat{X}-X)^2 \qquad (9)$$

where $\hat{X} = \eta_T(Y)$, $Y\,|\,X \sim N(x,\sigma^2)$ and $X \sim GG_{\sigma_X,\beta}$.
Denote the optimal threshold by $T^*$,

$$T^*(\sigma_X,\beta) = \arg\min_T\, r(T) \qquad (10)$$
which is a function of the parameters σX and β. In [8] it is
shown that for general $\sigma$, the threshold $T_B$ can be written as

$$T_B(\sigma_X) = \frac{\sigma^2}{\sigma_X} \qquad (11)$$

where $\sigma^2$ is the noise variance and $\sigma_X^2$ the signal variance.
By using the threshold in (11), the image can be restored much
better than with VisuShrink or SureShrink. VisuShrink uses the
universal threshold $\sigma\sqrt{2\log M}$, which can be
unwarrantedly large due to its dependence on the number of
samples $M$. SureShrink uses a hybrid of the universal threshold
and the SURE threshold derived from Stein's unbiased risk
estimator.
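The subband-adaptive threshold of eqn (11) is straightforward to compute. The sketch below assumes the median-based noise estimator used in [8] (noise standard deviation estimated from the diagonal detail subband); the function name and structure are ours.

```python
import numpy as np

def bayes_shrink_threshold(detail):
    """BayesShrink threshold T_B = sigma^2 / sigma_X of eqn (11),
    computed for one detail subband. The noise std is estimated with
    the median estimator sigma = median(|d|) / 0.6745 (as in [8],
    normally applied to the HH1 subband)."""
    detail = np.asarray(detail, dtype=float)
    sigma = np.median(np.abs(detail)) / 0.6745      # noise std estimate
    var_y = np.mean(detail ** 2)                    # observed subband variance
    var_x = max(var_y - sigma ** 2, 0.0)            # signal variance estimate
    if var_x == 0.0:
        # subband is indistinguishable from noise: threshold everything
        return float(np.max(np.abs(detail)))
    return sigma ** 2 / np.sqrt(var_x)
```

When the observed variance does not exceed the noise variance estimate, the subband carries no detectable signal, so the maximum coefficient magnitude is returned and the whole subband is shrunk to zero.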
3 Proposed Model
In the proposed model the Bayesian shrinkage of the non-
linearly diffused signal is taken. The equation can be written
as

$$I_n = B_s(I'_{n-1}) \qquad (12)$$

where $B_s$ is the Bayesian shrinkage operator and $I'_{n-1}$ is the anisotropic
diffusion, as shown in (1), at time $(n-1)$. Numerically, (12) can
be written as

$$I_n = B_s(I_{n-1} + d_n\,\Delta t) \qquad (13)$$

where $B_s$ is computed by finding $T_B$ as in eqn
(11) after taking the wavelet transform of $I'_{n-1}$.
The intention in developing this method is to decrease the
convergence time of the anisotropic diffusion. It is understood
that the convergence time for denoising is directly
proportional to the image noise level. In the case of
anisotropic diffusion, as the iteration continues, the noise level in the
image decreases (until it reaches the convergence point), but
slowly. Bayesian shrinkage, in contrast, simply zeroes the
wavelet coefficients that fall below the threshold, and does so in a single
step; iterating Bayesian shrinkage produces no further
change in the detail coefficients after the first pass. In
the proposed algorithm the threshold for
Bayesian shrinkage is recalculated after each anisotropic
diffusion step, and as a result of the two successive noise reduction
steps, the image approaches the convergence point much faster than with
anisotropic diffusion alone.
As the convergence time decreases, image blurring is
restricted, and as a result image quality increases. The whole
process is illustrated in Fig. 2. Fig. 2(a) shows the convergence
of the image processed by Perona-Malik anisotropic
diffusion, with the convergence point at P, i.e. at P we obtain the
best image, under the assumption that the input image is
noisy. If this convergence point P is shifted towards the
y-axis, its movement is as shown in Fig. 2(b): pulling
the point P towards the y-axis moves it up and to the
left. Here the Bayesian shrinkage is the catalyst that
pulls the convergence point P of the anisotropic diffusion
towards this better place. The method can be extended to other
PDE-based methods such as fourth-order PDEs, total-variation
minimization and complex diffusion.
Fig 1: Block diagram of the proposed denoising algorithm: the
scale-space images pass alternately through Anisotropic
Diffusion and BayesShrink. The iteration process continues
until the input signal y has converged to Y.
4 Experimental Results & Comparative Analysis
Experiments were carried out on various types of images.
Comparisons and analysis were done on the basis of MSSIM
(Mean Structural Similarity Index) and PSNR (Peak
Signal to Noise Ratio).
The MSSIM [10] is used to evaluate the overall image quality
and is defined as
$$MSSIM(X,Y) = \frac{1}{M}\sum_{j=1}^{M} SSIM(x_j, y_j) \qquad (14)$$
where X and Y are the reference and the distorted images
respectively, M is the number of local windows in the image,
SSIM is the Structural Similarity Index, and $x_j$ and $y_j$ are
the image contents at the $j$th local window. The SSIM
is defined as
$$SSIM(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \qquad (15)$$
Fig 3: Noise removal with Anisotropic Diffusion, BayesShrink
and the proposed method. (a) Original image. (b) Noisy image
(PSNR 20.24 dB, SSIM 0.3788). (c) Denoised with anisotropic
diffusion (PSNR 25.87 dB, SSIM 0.6744, 315 iterations).
(d) Denoised with BayesShrink (PSNR 26.71 dB, SSIM 0.7267,
1 iteration). (e) Denoised with the proposed method
(PSNR 27.67 dB, SSIM 0.7836, 51 iterations).
Fig 2: Working of WEAD. (a) Convergence of a noisy image (convergence at P). If P can be shifted towards the left,
image quality can be increased and time complexity reduced, as illustrated in (b). (c) The signal processed by WEAD:
the convergence point is shifted to the left and moved upwards.
where $\mu_x$ and $\mu_y$ are the estimated mean intensities of $x$ and
$y$, and $\sigma_x$ and $\sigma_y$ the corresponding standard
deviations; $\sigma_{xy}$ can be estimated as

$$\sigma_{xy} = \frac{1}{N-1}\sum_{i=1}^{N}(x_i-\mu_x)(y_i-\mu_y) \qquad (16)$$
where $K_1, K_2 \ll 1$ are small constants and $L$ is the dynamic
range of the pixel values (255 for 8-bit grayscale images).
The constants $C_1$ and $C_2$ in (15) are given as

$$C_1 = (K_1 L)^2 \qquad (17)$$

and

$$C_2 = (K_2 L)^2 \qquad (18)$$

The second parameter used for evaluation is the PSNR, which
is defined as

$$PSNR = 20\log_{10}\!\left(\frac{b}{rms}\right) \qquad (19)$$

where $b$ is the largest possible value of the signal (typically
255 or 1), and $rms$ is the root-mean-square difference between the
two images.
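Both quality metrics follow directly from eqns (15)-(19). The sketch below computes PSNR and the SSIM of a single local window (MSSIM is then the average of `ssim_window` over all windows, eqn (14)); the function names and default constants $K_1 = 0.01$, $K_2 = 0.03$ are ours, taken from the common choices in [10].

```python
import numpy as np

def psnr(ref, img, b=255.0):
    """PSNR = 20 log10(b / rms), eqn (19)."""
    rms = np.sqrt(np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2))
    return 20.0 * np.log10(b / rms)

def ssim_window(x, y, K1=0.01, K2=0.03, L=255.0):
    """SSIM of one local window pair (x, y), eqn (15), with
    C1 = (K1*L)^2 and C2 = (K2*L)^2 from eqns (17)-(18) and the
    unbiased cross term sigma_xy of eqn (16)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(ddof=1), y.var(ddof=1)          # sigma_x^2, sigma_y^2
    cxy = np.sum((x - mx) * (y - my)) / (x.size - 1)  # sigma_xy, eqn (16)
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

For identical windows the SSIM is exactly 1, and PSNR grows without bound as the rms difference approaches zero, which matches the intuition that both metrics reward similarity to the reference.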
Fig 4: Comparative analysis of Anisotropic Diffusion, BayesShrink and the proposed method. (a) and (c) are based on PSNR; (b) and (d)
on mean SSIM. For (a) and (b) the image used is Gray (shown in (e)); for (c) and (d) the image used is Lena (shown in (f)).
In both cases the proposed method performs better than the other two.
Fig 3 shows the image denoised with anisotropic diffusion,
BayesShrink and the proposed method. The
proposed method reduces the number of iterations of
anisotropic diffusion from 315 to 51 and improves the image
quality: there is a 10% improvement over
anisotropic diffusion and around 5% over BayesShrink in
preserving image structure, and in terms of PSNR the
proposed method also performs better than the other
two. The graphs in Fig. 4 show a comparative analysis of
anisotropic diffusion, BayesShrink and the proposed method. It is
clear that the performance of the methods depends on image
type and noise level. But in both cases, whether anisotropic
diffusion or BayesShrink gives the better result, the
proposed method outperforms the
other two. The proposed method preserves
image structures much better than anisotropic diffusion and
BayesShrink, and the number of iterations it requires
to produce the better image is much less
than that of anisotropic diffusion. The experiment was repeated
for various types of images with varying noise levels, and the
proposed method consistently gave better results than
anisotropic diffusion and BayesShrink.
5 Conclusion
A method to improve the performance of nonlinear
anisotropic diffusion is proposed. The method produces a
converged image in fewer iterations while preserving
image edges better than anisotropic diffusion.
Acknowledgements
The authors would like to thank the Scientists of ADRIN
(Hyderabad, India) for helpful discussions. They would also
like to thank the authorities of NeST (Trivandrum, India) for
providing the necessary facilities for doing the experiments.
References
[1] P. Perona and J. Malik, “Scale-space and edge detection
using anisotropic diffusion”, IEEE Trans. Pattern
Analysis and Machine Intelligence, vol. 12, no. 7, pp.
629-639, July 1990.
[2] Yu-Li You, Wenguan Xu, Allen Tannenbaum and
Mostafa Kaveh, “Behavioral Analysis of Anisotropic
Diffusion in Image Processing”, IEEE Trans. Image
Processing, vol. 5, no. 11, pp 1539-1553, November
1996.
[3] W. Rudin, “Real and Complex Analysis”, New York :
McGraw-Hill, 1996.
[4] Gabriel Cristobal, Monica Chagoyen, Boris Escalante-
Ramirez and Juan R. Lopez, “Wavelet-based denoising
methods: a comparative study with applications in
microscopy,” Proc. SPIE’s
International Symposium on Optical Science,
Engineering and Instrumentation, Wavelet Applications
in Signal and Image Processing IV, Vol. 2825, Denver,
CO, 1996.
[5] Raghuram Rangarajan, Ramji Venkataramanan,
Siddharth Shah, “Image Denoising Using Wavelets”, ----
---,2002.
[6] David L Donoho, “Ideal spatial adaptation by wavelet
shrinkage”, Biometrika, 81(3) : 425-455, August 1994.
[7] David L. Donoho and Iain M. Johnstone, “Adapting to
Unknown Smoothness via Wavelet Shrinkage,” Journal
of the American Statistical Association, 90(432):1200-1224,
December 1995.
[8] S. Grace Chang, Bin Yu and Martin Vetterli, “Adaptive
Wavelet Thresholding for Image Denoising and
Compression,” IEEE Trans. Image Processing, Vol 9,
No. 9, pp. 1532-1546, Sept. 2000.
[9] R.L. Joshi, V.J. Crump and T.R. Fisher, “Image subband
coding using arithmetic and trellis coded quantization”,
IEEE Trans. Circuits Syst. Video Technol., vol. 5, pp
515-523, Dec. 1995.
[10] Zhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh
and Eero P. Simoncelli, “Image Quality Assessment:
From error visibility to structural similarity”, IEEE
Trans. Image Processing, Vol. 13, No. 4, 2004.
[11] R. R. Coifman and David L. Donoho, “Translation-
invariant de-noising”, Wavelets and Statistics, Lecture
Notes in Statistics, Springer-Verlag, 1995.
[12] Pavel Mrazek, Joachim Weickert and Gabriele Steidl,
“Correspondence between Wavelet Shrinkage and
Nonlinear Diffusion”, Scale-Space 2003, LNCS 2695,
pp. 101-116, 2003.