This paper presents a technique for denoising digital radiographic images using a wavelet-based hidden Markov model. The method first applies the Anscombe transformation to adjust for Poisson noise, then uses the dual-tree complex wavelet transform for decomposition. A hidden Markov tree model captures correlations between wavelet coefficients across scales, and two correction functions are applied to shrink coefficients before inverse transformation. Evaluation on phantom and clinical images showed that the method outperforms Gaussian filtering in noise reduction, detail quality and bone sharpness, though some edges exhibited artifacts.
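The Anscombe step described above can be sketched as follows. This is the standard variance-stabilizing transform for Poisson data, shown with the simple algebraic inverse rather than the exact unbiased inverse; it is a generic illustration, not the paper's exact implementation.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson counts -> approx. unit-variance Gaussian."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased for low counts; an exact unbiased inverse exists)."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0

# Round-trip sanity check on simulated Poisson counts
counts = np.random.default_rng(0).poisson(lam=20.0, size=1000)
stabilized = anscombe(counts)
restored = inverse_anscombe(stabilized)
```

After stabilization the noise is approximately Gaussian with unit variance, so a Gaussian-noise denoiser (here, the wavelet-domain HMT shrinkage) can be applied before inverting.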
Filtering Techniques to reduce Speckle Noise and Image Quality Enhancement me... (IOSR Journals)
Abstract: Noise is a major factor that degrades image quality. Reducing noise in satellite images, medical images and similar data is an ongoing challenge in digital image processing, and several approaches to noise reduction exist. Speckle noise is commonly found in Synthetic Aperture Radar (SAR) satellite images and in medical images. This paper presents several filtering techniques for removing speckle noise from satellite images, thereby enhancing image quality. Although many filters are available for speckle reduction, those best suited to SAR images are selected, and statistical parameters are calculated for the output images obtained from each filter. The statistical measures SNR, PSNR, RMSE and CoC are compared, and the output images with the best statistical values are displayed along with the filter names and the corresponding values of the statistical measures.
Keywords: Filters, Speckle noise reduction, Image enhancement, Satellite images, Statistical measures.
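The four statistical measures named in this abstract can be computed as below. This is a generic sketch that assumes 8-bit images for PSNR, and CoC is taken to mean the Pearson correlation coefficient between the reference and filtered images.

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between reference and filtered image."""
    return float(np.sqrt(np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    e = rmse(ref, img)
    return float('inf') if e == 0 else 20.0 * np.log10(peak / e)

def snr(ref, img):
    """Signal-to-noise ratio in dB, treating ref - img as the noise."""
    ref = np.asarray(ref, float)
    noise = ref - np.asarray(img, float)
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum(noise ** 2))

def coc(ref, img):
    """Correlation coefficient between the two images."""
    return float(np.corrcoef(np.ravel(ref).astype(float),
                             np.ravel(img).astype(float))[0, 1])
```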
A NOVEL ALGORITHM FOR IMAGE DENOISING USING DT-CWT (sipij)
This paper presents an image enhancement system built around a denoising technique based on the Dual-Tree Complex Wavelet Transform (DT-CWT). The proposed algorithm first models the noisy remote sensing image (NRSI) statistically by combining its structural features and textures. This statistical model is decomposed using the DT-CWT with tap-10 (length-10) filter banks based on the Farras wavelet implementation, and the sub-band coefficients are modeled for denoising by a method that combines clustering with soft thresholding (a soft-clustering technique). The clustering stage classifies noisy and image pixels using neighborhood connected component analysis (CCA), connected pixel analysis and inter-pixel intensity variance (IPIV), and computes an appropriate threshold for noise removal. This threshold is then applied with soft thresholding to denoise the image. Experimental results show that the proposed technique outperforms conventional and state-of-the-art techniques, and that images denoised with the DT-CWT achieve a better balance between smoothness and accuracy than those denoised with the DWT. PSNR (Peak Signal-to-Noise Ratio) and RMSE are used to assess the quality of the denoised images.
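The soft-thresholding stage that this abstract combines with clustering is the standard shrinkage rule. In the sketch below the clustering-derived threshold is simply a parameter t; how the paper computes it from CCA and IPIV is not reproduced.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t; magnitudes below t map to exactly 0."""
    c = np.asarray(coeffs, dtype=float)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

Applied to the detail sub-bands of a wavelet decomposition, this suppresses small (noise-dominated) coefficients while only attenuating large (signal-dominated) ones.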
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Noise removal is a decisive step in the image reconstruction process, but image denoising remains a challenging problem in image processing research. Denoising is used to remove noise from a corrupted image while preserving edges and other fine details as far as possible. Noise is introduced during acquisition, transmission and reception, and storage and retrieval. In this paper, the denoised image is obtained with a modified denoising technique, which operates in both the wavelet and spatial domains, and with a local adaptive wavelet image denoising technique, which operates in the wavelet domain. The performance of the two techniques is evaluated and compared using the PSNR between the input image and the noisy image and the SNR between the input image and the denoised image. Simulation and experimental results show that the local adaptive wavelet denoising procedure is less effective in terms of mean square error than the modified denoising procedure, while its signal-to-noise ratio is better than that of the other approach; the denoised image therefore has a superior visual quality. Both techniques are implemented in MATLAB.
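A "local adaptive wavelet" rule of the kind compared above is commonly realized as a locally adaptive Wiener-style shrinkage, in which each wavelet coefficient is attenuated according to the signal variance estimated in its neighborhood. The sketch below shows that classical estimator, not the specific modified technique of the paper; the window size and noise variance are free parameters.

```python
import numpy as np

def local_adaptive_shrink(coeffs, noise_var, win=3):
    """Wiener-style shrinkage: c_hat = c * sig2 / (sig2 + noise_var), where
       sig2 = max(local mean of c^2 - noise_var, 0) over a win x win window."""
    c = np.asarray(coeffs, dtype=float)
    pad = win // 2
    padded = np.pad(c * c, pad, mode='reflect')
    # local mean of squared coefficients via an explicit box filter
    local_e2 = np.zeros_like(c)
    for dy in range(win):
        for dx in range(win):
            local_e2 += padded[dy:dy + c.shape[0], dx:dx + c.shape[1]]
    local_e2 /= win * win
    sig2 = np.maximum(local_e2 - noise_var, 0.0)
    return c * sig2 / (sig2 + noise_var)
```

Coefficients in flat (noise-only) regions are driven to zero, while strong coefficients pass through nearly unchanged.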
PERFORMANCE ANALYSIS OF UNSYMMETRICAL TRIMMED MEDIAN AS DETECTOR ON IMAGE NOI... (ijistjournal)
This paper analyzes the performance of the unsymmetrical trimmed median used as a detector for impulse noise, Gaussian noise and mixed noise. The proposed algorithm uses a fixed 3x3 window across increasing noise densities. The pixels in the current window are sorted using an improved snake-like sorting algorithm with a reduced comparator count. The processed pixel is checked for outliers: if the absolute difference between processed pixels exceeds a fixed threshold, the pixel is considered noisy and is replaced by the median of the current processing window. Under high noise densities the median itself may be noisy, so it is checked by the same procedure; if the median is also noisy, the corrupted pixel is replaced by the unsymmetrical trimmed median. Otherwise the pixel is deemed uncorrupted and left unaltered. The proposed algorithm (PA) is tested on images of varying detail for various noise types. It effectively removes high-density fixed-value impulse noise, low-density random-valued impulse noise, low-density Gaussian noise and lower proportions of mixed noise. The algorithm is targeted to an Xc3e5000-5fg900 FPGA using the Xilinx 7.1 compiler and requires fewer slices, achieves good speed and consumes less power than other median-finding architectures.
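The core operation, an unsymmetrical trimmed median over the 3x3 window, can be sketched as follows: salt-and-pepper extremes (0 and 255) are discarded from the window before the median is taken. The detection cascade and FPGA-oriented sorting network from the abstract are omitted.

```python
import numpy as np

def unsym_trimmed_median(window, lo=0, hi=255):
    """Median of the window after discarding extreme (impulse) values lo and hi.
       Falls back to the plain median if every pixel in the window is an extreme."""
    vals = np.sort(np.ravel(window))
    kept = vals[(vals != lo) & (vals != hi)]
    return float(np.median(kept)) if kept.size else float(np.median(vals))
```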
Survey Paper on Image Denoising Using Spatial Statistics on Pixel (IJERA Editor)
In the classical non-local means denoising approach, the value of a pixel is determined by a weighted average of other pixels, where the weights come from a fixed, isotropically weighted similarity function between local neighbourhoods. It is demonstrated that noticeably better perceptual quality can be achieved with adaptive, anisotropically weighted similarity functions between local neighbourhoods. This is accomplished by adapting the similarity weighting function anisotropically, based on perceptual characteristics of the underlying image content derived efficiently from the Mexican hat wavelet. Experimental results show that the method improves the perceptual quality of the denoised image both quantitatively and qualitatively compared with existing methods.
Novel Adaptive Filter (NAF) for Impulse Noise Suppression from Digital Images (ijbbjournal)
An adaptive filter adjusts its parameters iteratively: the size of the working window, the decision thresholds used in two-stage detection-estimation switching filters, the number of iterations, and so on. Nonlinear filters such as the median filter and its many variants are well known for their ability to cope with unknown conditions. This paper presents an efficient and simple adaptive nonlinear filtering scheme for removing impulse noise from digital images, with detection and reduction stages based on adaptive nonlinear filter techniques. The scheme employs a dynamically varying working window driven by image statistics and an adaptive threshold for noise detection, with restoration by a Noise-Exclusive Median (NEM). The NEM intensity value is derived from the processed pixels in the local neighborhood of the dynamically adaptive window. Using an adaptive threshold derived from the noisy-image statistics yields more precise noisy-pixel detection. The scheme is simple and can run as a single pass or as a multi-pass with at most three iterations and a simple stopping criterion. Its quality is evaluated with qualitative and quantitative measures obtained from MATLAB simulations on standard images corrupted with impulsive noise of varying densities. The comparative analysis shows that the proposed scheme outperforms state-of-the-art schemes, particularly for high-density impulse noise.
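A minimal single-pass sketch of the detect-then-restore idea: a pixel is flagged when it deviates from its local median by more than a threshold derived from the image statistics, and a flagged pixel is replaced by the median of its noise-free neighbors. The dynamically growing window and stopping criterion of the proposed scheme are omitted, and the MAD-based threshold rule used here is an illustrative choice, not the paper's.

```python
import numpy as np

def impulse_restore(img, k=3.0):
    """Detect impulse pixels against the 3x3 local median, then replace each
       flagged pixel with the median of its non-impulse neighbors."""
    img = np.asarray(img, dtype=float)
    pad = np.pad(img, 1, mode='reflect')
    h, w = img.shape
    # all 3x3 neighborhoods, shape (h, w, 9)
    windows = np.stack([pad[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)], axis=-1)
    med = np.median(windows, axis=-1)
    dev = np.abs(img - med)
    # robust threshold from image statistics (MAD of deviations)
    thresh = k * (np.median(np.abs(dev - np.median(dev))) + 1e-9)
    out = img.copy()
    ys, xs = np.nonzero(dev > thresh)
    for y, x in zip(ys, xs):
        win = windows[y, x]
        clean = win[np.abs(win - med[y, x]) <= thresh]  # noise-exclusive neighbors
        out[y, x] = np.median(clean) if clean.size else med[y, x]
    return out
```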
An Application of Second Generation Wavelets for Image Denoising using Dual T... (IDES Editor)
The lifting scheme of the discrete wavelet transform (DWT) is now well established as an efficient technique for image denoising. The lifting-scheme factorization of biorthogonal filter banks is carried out with linear-adaptive, delay-free and faster decomposition arithmetic. This adaptive factorization aims at a transparent, more generalized, low-complexity fast decomposition process while preserving the features that an ordinary wavelet decomposition offers. The work targets a considerable reduction in the computational complexity and power required for decomposition. The principal drawbacks of the DWT structure, namely shift sensitivity and poor directionality, have already been shown to be overcome by the dual-tree complex wavelet transform (DT-CWT). The strengths of the DT-CWT and the robust lifting scheme are combined to achieve image denoising with a marked gain in computational speed and directionality, together with a reduction in computation time, power and algorithmic complexity compared with the other techniques.
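The lifting pattern itself is easy to sketch for the simplest wavelet. Below is the lazy-split / predict / update form of the (unnormalized) Haar transform, which is the pattern that the biorthogonal filter-bank factorization in this abstract generalizes.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar transform via lifting: split, predict, update."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even            # predict: detail = odd minus prediction from even
    s = even + d / 2.0        # update: approximation becomes the pairwise mean
    return s, d

def haar_lift_inverse(s, d):
    """Invert the lifting steps in reverse order."""
    even = s - d / 2.0        # undo update
    odd = even + d            # undo predict
    out = np.empty(s.size + d.size)
    out[0::2], out[1::2] = even, odd
    return out
```

Because each lifting step is trivially invertible, the inverse runs the same steps backwards with signs flipped, which is what makes lifting cheap and exactly reversible.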
Computationally Efficient Methods for Sonar Image Denoising using Fractional ... (CSCJournals)
Sonar images, produced by the coherent nature of the scattering phenomenon, carry a multiplicative component called speckle and contain largely homogeneous as well as textured regions with relatively rare edges. Speckle removal is a pre-processing step required in applications such as the detection and classification of objects in sonar images. This paper proposes computationally efficient fractional integral mask algorithms for removing speckle noise from sonar images. The Riemann-Liouville definition of fractional calculus is used to create fractional integral masks in eight directions. A single mask incorporating the significant coefficients of the eight directional masks, together with the single convolution it requires, yields the computational efficiency. Heterogeneous patches of the sonar image are classified using a newly proposed naive homogeneity index that depends on the texture strength of each patch, and the despeckling filters are adjusted to these patches. Applying the mask convolution only to the selected patches further reduces the computational complexity. The non-homomorphic approach used in the proposed method avoids the undesired bias of the traditional homomorphic approach. Experiments show that the required mask size depends directly on the fractional order; the mask can be made smaller for lower fractional orders, which reduces the computational complexity for those orders. Experimental results substantiate the effectiveness of the despeckling method, which is evaluated with several no-reference image quality criteria.
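Directional fractional integral masks of this kind are typically built from the Grünwald-Letnikov coefficients of a fractional integral of order v, w_k = Γ(v+k)/(Γ(v)·k!). The sketch below builds one horizontal mask under that assumption; the paper's eight directional masks and the merging of their significant coefficients are not reproduced.

```python
import math
import numpy as np

def frac_integral_coeffs(v, n):
    """First n Grünwald-Letnikov coefficients for a fractional integral of order v > 0."""
    return np.array([math.gamma(v + k) / (math.gamma(v) * math.factorial(k))
                     for k in range(n)])

def horizontal_mask(v, n):
    """1 x n mask applying the order-v fractional integral along +x, normalized to sum 1
       so that convolution preserves the mean intensity."""
    w = frac_integral_coeffs(v, n)
    return w / w.sum()
```

A sanity property: for v = 1 the coefficients are all 1 (an ordinary running sum), and smaller v concentrates more weight on the nearest pixels, matching the abstract's observation that lower orders permit smaller masks.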
A Study of Total-Variation Based Noise-Reduction Algorithms For Low-Dose Cone... (CSCJournals)
In low-dose cone-beam computed tomography, the reconstructed image is contaminated with excessive quantum noise. In this work we examined the performance of two popular noise-reduction algorithms, total variation based on split Bregman (TVSB) and total variation based on Nesterov's method (TVN), on noisy imaging data from a computer-simulated Shepp-Logan phantom, a physical CATPHAN phantom and a head-and-neck patient. Up to 15% Gaussian noise was added to the Shepp-Logan phantom. The CATPHAN phantom was scanned on a Varian OBI system at 100 kVp, 4 ms and 20 mA. Images of the head-and-neck patient were acquired on the same scanner but with a 20-ms pulse time; the 4-ms low-dose image was simulated by adding Poisson noise to the 20-ms image. The two algorithms were compared quantitatively by peak signal-to-noise ratio (PSNR), contrast-to-noise ratio (CNR) and total computational time. For CATPHAN, PSNR improved by 2.3 dB and 3.1 dB over the low-dose noisy image for the TVSB- and TVN-based methods, respectively, and the maximum CNR enhancement ratio was 4.6 and 4.8. For the head-and-neck data, the PSNR improvement was 2.7 dB and 3.4 dB for TVSB and TVN, respectively. The TVSB-based method converged more slowly than the TVN method. We conclude that the TVN algorithm has more desirable properties than TVSB for image denoising.
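Both solvers minimize the same ROF objective, min_u ||u - f||²/2 + λ·TV(u). As a reference point, plain gradient descent on a smoothed TV term looks like the sketch below; it is far slower than either split Bregman or Nesterov acceleration, and the boundary handling is simplified, but the objective is the same.

```python
import numpy as np

def tv_denoise(f, lam=0.5, step=0.2, iters=100, eps=1.0):
    """Gradient descent on 0.5*||u-f||^2 + lam * sum sqrt(|grad u|^2 + eps^2).
       eps smooths the TV term so the gradient is well defined everywhere."""
    f = np.asarray(f, dtype=float)
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
        uy = np.diff(u, axis=0, append=u[-1:, :])   # replicated boundary
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        # divergence of (px, py) via backward differences (wrap at boundary)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u
```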
Speaker: 전석준 (PhD candidate, KAIST)
Date: August 2018
Super-resolution, the conversion of low-resolution images into high-resolution ones, is a long-studied topic. With the recent application of deep learning, super-resolution performance has improved dramatically. We have developed a technique that uses stereo images to obtain higher-resolution images, and this talk presents that work.
1. Multi-Frame Super-Resolution
2. Learning-Based Super-Resolution
3. Stereo Imaging
4. Deep-Learning Based Stereo Super-Resolution
Speckle Noise Reduction in Ultrasound Images using Adaptive and Anisotropic D... (Md. Shohel Rana)
Ultrasound (US) imaging is a low-cost technique. Speckle noise can be removed from US images with nonlinear and anisotropic filters, and a modified anisotropic filter that reduces speckle noise is proposed.
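The "anisotropic filter" in this context is typically the classical Perona-Malik diffusion scheme, in which smoothing is inhibited across strong gradients so edges survive. A minimal sketch of that classical scheme follows; the paper's specific modification is not reproduced.

```python
import numpy as np

def perona_malik(img, iters=20, kappa=30.0, step=0.2):
    """Perona-Malik diffusion: g(|grad|) = exp(-(|grad|/kappa)^2) gates
       smoothing so flat regions blur while strong edges are preserved."""
    u = np.asarray(img, dtype=float).copy()
    for _ in range(iters):
        # differences to the four neighbors, borders replicated
        n = np.vstack([u[:1], u[:-1]]) - u
        s = np.vstack([u[1:], u[-1:]]) - u
        e = np.hstack([u[:, 1:], u[:, -1:]]) - u
        w = np.hstack([u[:, :1], u[:, :-1]]) - u
        u += step * sum(np.exp(-(d / kappa) ** 2) * d for d in (n, s, e, w))
    return u
```

The conductance kappa sets the gradient magnitude above which smoothing is suppressed; step must stay at or below 0.25 for the explicit scheme to be stable.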
Reduced Ordering Based Approach to Impulsive Noise Suppression in Color Images (IDES Editor)
In this paper a novel filtering design for impulsive noise removal in color images is presented. The scheme utilizes rank-weighted cumulated distances between the pixels belonging to the local filtering window. Impulse detection is based on the difference between the aggregated weighted distances assigned to the central pixel of the window and their minimum value, which corresponds to the rank-weighted vector median. If the difference exceeds an adaptively determined threshold, the processed pixel is replaced by the mean of the neighboring pixels found to be uncorrupted; otherwise it is retained. An important feature of the described framework is its ability to effectively suppress impulsive noise while preserving fine image details. A comparison with state-of-the-art denoising schemes shows that the proposed filter yields better restoration results in terms of objective restoration-quality measures.
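The reference operation here is the vector median: the window pixel minimizing the aggregated distance to all other pixels in the window. A plain, unweighted version is sketched below; the paper's rank weighting of those distances and its adaptive threshold are omitted.

```python
import numpy as np

def vector_median(window):
    """window: (k, 3) array of RGB vectors; returns the vector that minimizes
       the sum of Euclidean distances to all other vectors in the window."""
    w = np.asarray(window, dtype=float)
    dists = np.linalg.norm(w[:, None, :] - w[None, :, :], axis=-1)  # (k, k) pairwise
    return w[np.argmin(dists.sum(axis=1))]
```

Because the output is always one of the input vectors, no new (false) colors are introduced, which is why vector-order statistics are preferred over per-channel filtering for color images.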
Smart Noise Cancellation Processing: New Level of Clarity in Digital Radiography (Carestream)
Smart Noise Cancellation (SNC) significantly reduces noise in diagnostic images while retaining fine spatial detail; there is no degradation of anatomical sharpness. When applied, SNC produces images significantly clearer than standard processing, and it provides a better contrast-to-noise ratio for images acquired across a broad range of exposures.
Contourlet Transform Based Method For Medical Image DenoisingCSCJournals
Noise is an important factor in medical image quality, because high noise in medical imaging obscures information that is useful for diagnosis, which rests on distinguishing normal from abnormal findings. In this paper, we propose a denoising algorithm based on the Contourlet transform for medical images. The Contourlet transform is an extension of the wavelet transform in two dimensions using multiscale and directional filter banks. It retains the multiscale and time-frequency localization properties of wavelets, but also provides a high degree of directionality. To verify the denoising performance of the Contourlet transform, two kinds of noise are added to our samples: Gaussian noise and speckle noise. A soft thresholding value for the Contourlet coefficients of the noisy image is computed. Finally, the experimental results of the proposed algorithm are compared with the results of the wavelet transform. We found that the proposed algorithm achieves acceptable results compared with those achieved by the wavelet transform.
IMAGE DENOISING BY MEDIAN FILTER IN WAVELET DOMAINijma
The details of an image with noise may be restored by removing noise through a suitable image de-noising
method. In this research, a new method of image de-noising based on using median filter (MF) in the
wavelet domain is proposed and tested. Various types of wavelet transform filters are used in conjunction
with median filter in experimenting with the proposed approach in order to obtain better results for image
de-noising process, and, consequently to select the best suited filter. Wavelet transform working on the
frequencies of sub-bands split from an image is a powerful method for analysis of images. According to this
experimental work, the proposed method presents better results than using only wavelet transform or
median filter alone. The MSE and PSNR values are used for measuring the improvement in de-noised
images.
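A minimal sketch of this idea, using a one-level 2-D Haar transform and a 3×3 median filter applied only to the detail subbands. The paper experiments with several wavelet filters; Haar and the window size here are illustrative choices:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D orthonormal Haar analysis: returns LL, LH, HL, HH."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def median3(a):
    """3x3 median filter with edge padding."""
    p = np.pad(a, 1, mode='edge')
    h, w = a.shape
    win = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(win, axis=0)

def wavelet_median_denoise(img):
    """Median-filter the detail subbands; the LL approximation is kept."""
    LL, LH, HL, HH = haar2d(img)
    return ihaar2d(LL, median3(LH), median3(HL), median3(HH))
```

Because the approximation subband is untouched, smooth regions pass through exactly while impulsive detail coefficients are suppressed.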
An efficient fusion based up sampling technique for restoration of spatially ...ijitjournal
The various up-sampling techniques available in the literature produce blurring artifacts in the upsampled,
high resolution images. In order to overcome this problem effectively, an image fusion based interpolation technique is proposed here to restore the high frequency information. The Discrete Cosine Transform interpolation technique preserves low frequency information whereas Discrete Sine Transform preserves high frequency information. Therefore, by fusing the DCT and DST based up-sampled images, more high frequency, relevant information of both the up-sampled images can be preserved in the restored,
fused image. The restoration of high frequency information lessens the degree of blurring in the fusedimage and hence improves its objective and subjective quality. Experimental result shows the proposed method achieves a Peak Signal to Noise Ratio (PSNR) improvement up to 0.9947dB than DCT interpolation and 2.8186dB than bicubic interpolation at 4:1 compression ratio.
Adaptive Wavelet Thresholding for Image Denoising Using Various Shrinkage Unde...muhammed jassim k
A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times
that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to
be possible to perfectly reconstruct a signal from a series of measurements. Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized. An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem: if the signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal. Sparse sampling (also known as compressive sensing or compressed sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than the Shannon–Nyquist sampling theorem requires. There are two conditions under which recovery is possible.[1] The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals. This opens the possibility of compressed data acquisition protocols which directly acquire just the important information. Sparse sampling (CS) is a fast-growing area of research. It avoids an extravagant acquisition process by taking fewer measurements to reconstruct the image or signal. Sparse sampling has been adopted successfully in various fields of image processing and has proved its efficiency. Some image processing applications, such as face recognition, video encoding, image encryption, and reconstruction, are presented here.
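A toy illustration of the recovery step: reconstructing a sparse vector from underdetermined measurements by l1-regularized least squares, solved with iterative soft thresholding (ISTA). The matrix size, sparsity level, and regularization weight are arbitrary choices for the sketch:

```python
import numpy as np

def ista(A, y, lam=0.01, iters=1000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# 40 random measurements of a length-100 signal with only 3 nonzeros
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -1.5, 2.0]
x_hat = ista(A, A @ x_true)
```

With far fewer measurements than unknowns, the sparsity prior still pins down the signal, which is exactly the point made above.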
Image Resolution Enhancement using DWT and Spatial Domain Interpolation Techn...IJERA Editor
Image Resolution is one of the important quality metrics of images. Images with high resolution are required in
many fields. In this paper, a new resolution enhancement technique is proposed based on the interpolation of
four sub band images generated by Discrete Wavelet Transform (DWT) and the original Low Resolution (LR)
input image. In this technique, the four sub band images generated by DWT and the input LR image are
interpolated with scaling factor, α and then performed inverse DWT to obtain the intermediate High Resolution
(HR) Image. The difference between the intermediate HR image and the interpolated LR input image is added
to the intermediate HR image to obtain final output HR Image. Lanczos interpolation is used in this technique.
The proposed technique is tested on well-known benchmark images. The quantitative and visual results show
the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement
techniques in the wavelet domain using the Haar wavelet filter.
Image compression using Hybrid wavelet Transform and their Performance Compa...IJMER
Images may be worth a thousand words, but they generally occupy much more space in hard disk, or
bandwidth in a transmission system, than their proverbial counterpart. Compressing an image is significantly
different from compressing raw binary data. Of course, general-purpose compression programs can be used to
compress images, but the result is less than optimal. This is because images have certain statistical properties
which can be exploited by encoders specifically designed for them. Also, some of the finer details in the image
can be sacrificed for the sake of saving a little more bandwidth or storage space. Compression is the process of
representing information in a compact form. Compression is a necessary and essential method for creating
image files with manageable and transmittable sizes. The data compression schemes can be divided into
lossless and lossy compression. In lossless compression, reconstructed image is exactly same as compressed
image. In lossy image compression, high compression ratio is achieved at the cost of some error in reconstructed
image. Lossy compression generally provides much higher compression than lossless compression.
Ijri ece-01-02 image enhancement aided denoising using dual tree complex wave...Ijripublishers Ijri
This paper presents a novel way to reduce noise introduced or exacerbated by image enhancement methods, in particular algorithms based on the random spray sampling technique, but not only. Owing to the nature of sprays, output images of spray-based methods tend to exhibit noise with unknown statistical distribution. To avoid inappropriate assumptions about the statistical characteristics of the noise, a different one is made: the non-enhanced image is considered to be either free of noise or affected by non-perceivable levels of noise. Taking advantage of the higher sensitivity of the human visual system to changes in brightness, the analysis can be limited to the luma channel of both the non-enhanced and enhanced images. Also, given the importance of directional content in human vision, the analysis is performed through the dual-tree complex wavelet transform (DTWCT), a Lanczos interpolator, and edge-preserving smoothing filters. Unlike the discrete wavelet transform, the DTWCT allows for distinction of data directionality in the transform space. For each level of the transform, the standard deviation of the non-enhanced image coefficients is computed across the six orientations of the DTWCT, then it is normalized.
Keywords: dual-tree complex wavelet transform (DTWCT), Lanczos interpolator, edge-preserving smoothing filters.
1. Digital Radiographic Image Denoising Via Wavelet-Based Hidden
Markov Model Estimation
Ricardo J. Ferrari1,2
and Robin Winsor2
This paper presents a technique for denoising digital
radiographic images based upon the wavelet-domain
Hidden Markov tree (HMT) model. The method uses the
Anscombe’s transformation to adjust the original image,
corrupted by Poisson noise, to a Gaussian noise model.
The image is then decomposed in different subbands of
frequency and orientation responses using the dual-tree
complex wavelet transform, and the HMT is used to
model the marginal distribution of the wavelet coeffi-
cients. Two different correction functions were used to
shrink the wavelet coefficients. Finally, the modified
wavelet coefficients are transformed back into the
original domain to get the denoised image. Fifteen
radiographic images of extremities along with images
of a hand, a line-pair, and contrast–detail phantoms
were analyzed. Quantitative and qualitative assessment
showed that the proposed algorithm outperforms the
traditional Gaussian filter in terms of noise reduction,
quality of details, and bone sharpness. In some images,
the proposed algorithm introduced some undesirable
artifacts near the edges.
KEY WORDS: Medical image denoising, digital radiogra-
phy, wavelet denoising, hidden Markov model
Digital radiographic images acquired using an
optically coupled charge-coupled device
(CCD) detector can provide very high detective
quantum efficiency (DQE) at low spatial frequen-
cies, but this falls off at higher frequencies,
requiring the use of postprocessing sharpening
algorithms, which unfortunately boost noise
that can mask some features. In order to
minimize the amount of noise introduced in the
image by sharpening algorithms, we have
developed an algorithm to reduce the noise
while keeping important details of the image.
General image denoising techniques based upon
the traditional (orthogonal, maximally decimated)
discrete wavelet transform (DWT) have proved to
provide the state-of-the-art in denoising perfor-
mance, in terms of peak signal-to-noise ratio
(PSNR), according to many papers presented in
the literature.1–3
The basic idea behind this image-
denoising approach is to decompose the noisy
image by using the wavelet transform, to shrink or
keep (by applying a soft or hard thresholding
technique) wavelet coefficients which are signif-
icant relative to a specific threshold value or the
noise variance and to eliminate or suppress
insignificant coefficients, as they are more likely
related to the noise. The modified coefficients are
then transformed back into the original domain in
order to get the denoised image.
Despite the high PSNR values, most of these
techniques have their visual performance degrad-
ed by the introduction of noticeable artifacts
which may limit their use in denoising of medical
images.4
The common cause of artifacts in the
traditional wavelet-based denoising techniques is
due to the pseudo-Gibbs phenomenon5
which is
caused by the lack of translation invariance of
the wavelet method. Shift variance results from
1 From the Department of Computing Science, University of
Alberta, 221 Athabasca Hall, Edmonton, Alberta, Canada,
T6G 2E8.
2 From the Imaging Dynamics Company Ltd., 151, 2340
Pegasus Way N.E., Calgary, AB, Canada, T2E 8M5.
Correspondence to: Ricardo J. Ferrari, Department of
Computing Science, University of Alberta, 221 Athabasca
Hall, Edmonton, Alberta, Canada, T6G 2E8; email:
ferrari@cs.ualberta.ca
Copyright © 2004 by SCAR (Society for Computer
Applications in Radiology)
Online publication 00 Month 2004
doi: 10.1007/s10278-004-1908-3
Journal of Digital Imaging, Vol 0, No 0 (December), 2004: pp 1–14
the use of critical subsampling (decimation) in
the DWT. Because of that, the wavelet coeffi-
cients are highly dependent on their location in
the subsampling lattice6
which affects directly the
discrimination of large/small wavelet coefficients,
likely related to singularities/nonsingularities, re-
spectively. Although this problem can be avoided
by using the undecimated DWT, it is too compu-
tationally expensive.
The proposed method for denoising radiograph-
ic images, shown in Figure 1, starts by prepro-
cessing the original image using the Anscombe’s
variance stabilizing transformation, which acts as
if the data arose from a Gaussian white noise
model.7
The image is then decomposed into
different subbands of frequency and orientation
responses using the overcomplete dual-tree com-
plex wavelet transform (DT-CWT). By using the
DT-CWT, the visual artifacts usually present in
the image when using the traditional DWT are
significantly minimized,8,9
with the advantage of
having a task that is still tractable in terms of
computation time. The HMT model is used to
capture the correlation among the wavelet coef-
ficients by modeling their marginal distribution
and thus improving the discrimination between
noisy and singularity pixels in an image. Finally,
the modified wavelet coefficients are transformed
back into the original domain in order to get the
denoised image. The efficacy of our method was
demonstrated on both phantom and clinical digital
radiographic images using quantitative and qual-
itative evaluation.
MATERIALS AND METHODS
Digital Radiographic System
The digital radiographic (DR) system used in our tests
(referred to as the Xploreri system10) is an optically coupled
Fig 1. Flow chart of the method proposed for denoising of digital radiographic images.
CCD-based digital radiography unit. It uses a CsI scintillator as
the primary x-ray conversion layer and couples the resulting
light output to the CCD by a mirror-and-lens system. The 4K ×
4K CCD is cooled to 263 K, resulting in a dark current rate of
less than one electron per pixel per second. Images are
digitized at 14 bits and subsequently reduced for display to
12 bits. The Nyquist resolution is 4.6 lp/mm. System DQE is
very high at low frequencies but falls off at higher frequencies,
requiring the use of sharpening algorithms. This inevitably
boosts noise which can mask some features, hence the current
work on wavelet-based denoising.
Hand Phantom and Image Dataset
The hand phantom from Nuclear Associates (Carle Place,
NY) illustrated in Figure 2(A) is composed of human skeletal
parts embedded in anatomically accurate, tissue-equivalent
material. The materials have the same absorption and second-
ary radiation-emitting characteristics as living tissue. Accord-
ing to Nuclear Associates, all bone marrow has been simulated
with tissue-equivalent material, which permits critical detail
study of bone structure and sharpness comparisons using x-rays.
Fig 2. (A) Phantom hand from Nuclear Associates. (B) Radiographic image obtained from the hand phantom with 60 kVp, 3.2 mAs,
SID = 100 cm, small focal spot. (C) Clinical radiographic image used in this paper to illustrate the results of the proposed denoising
algorithm. The selected box in (C) indicates the region that will be zoomed in for better visualization of the details of the
denoised images. (D–E) Radiographic images of the line-pair and contrast-detail phantoms, respectively, acquired with 70 kVp, 32 mAs.
In this work, the phantom was used to determine the characteristics of the image noise variance and the appropriate image set
to be used in the training stage of the HMT model.
In order to assess the improvement in sharpness after
denoising the images, a line-pair phantom from Nuclear
Associates model 07-5388-1000 with 0.1-mm-thick lead strips
and a maximum resolution of 5.0 line pairs per millimeter was
used. Improvement in contrast was assessed using the CDRAD
contrast detail digital radiography phantom with 225 target
squares arranged in a 15 × 15 grid. In each square, one or two
holes are present. Holes increase in depth logarithmically in
one direction and in diameter in the other direction ranging
from 0.3 to 8 mm. The line connecting the central spots with
the smallest visible diameter is the contrast detail curve. The
phantom images were acquired with 70 kVp and 32 mAs.
A total of 15 single-view radiographic images of lower and
upper extremities (hands, feet, wrists, and heels) were
analyzed. All images were acquired using the same type of
digital radiographic system, described in the "Digital Radio-
graphic System" subsection, with 108 μm sampling interval
and 14-bits gray-level quantization. The images used in this
work were selected to characterize the best and worst quality
images in terms of noise level.
Protocol for the Evaluation of Results
The proposed algorithm was evaluated quantitatively mea-
suring the PSNR using digital radiographic images from the
phantom illustrated in Figure 2(A) and qualitatively using a set
of 15 clinical images.
The PSNR measure is defined as
PSNR = 10 log10( (max x_{i,j})^2 / ( (1/N) Σ_{i,j} (I_{i,j} − Î_{i,j})^2 ) ),   (1)

where I_{i,j} and Î_{i,j} are the original and denoised images,
respectively, x_{i,j} is the pixel value at spatial location (i, j) of
the original image, and N is the total number of pixels in the
image. The PSNR is a scaled measure of the quality of a
reconstructed or denoised image; higher PSNR values indicate
better quality.
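The PSNR measure above translates directly into code; a minimal sketch:

```python
import numpy as np

def psnr(original, denoised):
    """PSNR of Eq. 1: squared peak intensity of the original image over
    the mean squared difference between original and denoised images."""
    orig = original.astype(float)
    mse = np.mean((orig - denoised.astype(float)) ** 2)
    return 10.0 * np.log10(orig.max() ** 2 / mse)
```

For example, a denoised image in which every pixel is off by one gray level from a 255-peak original has MSE = 1, giving PSNR = 20 log10(255) ≈ 48.13 dB.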
The qualitative analysis was assessed according to the
opinion of two expert imaging specialists (HA and CT)
using a ranking table. All 15 single-view radiographic
images were visually inspected on a 21-in. computer mon-
itor. Image intensity histogram equalization11
and image
enhancement, using a standard unsharp-mask technique,12
were used for the sake of better visualization of the
denoising results. In addition, each processed image was
visually compared to the same original image filtered using
the Gaussian filter. The radius size of the Gaussian was
changed during the analysis to provide the best tradeoff
between sharpness of the bone details and noise reduction.
Table 1 was filled out for all 15 images during the assessment
of the algorithm.
Noise Modeling and Anscombe’s Transformation
In digital radiographic systems there are a variety of
imaging noise sources, which originate from the different
stages and elements of the system, such as x-ray source,
scattered radiation, imaging screen, CCD camera, and elec-
tronic circuits among others. The dominant cause of noise,
however, is due to the quantum fluctuations in the x-ray beam.
In the present work, a preprocessing stage was applied to the
acquired images to correct for the impulse noise, CCD dark
current noise and pixel nonuniformity.
It is well known that the Poisson distribution can be used to
model the arrival of photons and their expression by electron
counts on CCD detectors.7
Unlike Gaussian noise, Poisson
noise is proportional to the underlying signal intensity, which
makes separating signal from noise a very difficult task. Be-
sides, well-established methods for image denoising, including
the HMT model,1
are based upon the additive white Gaussian
noise model. Therefore, in order to overcome this limitation,
we have applied a variance stabilization (Anscombe’s) trans-
formation,7
described by
IA(x, y) = 2 sqrt( I(x, y) + 3/8 ),   (2)
to the original noisy image. I(x, y) and IA(x, y) indicate the
original and transformed images, respectively.
Table 1. Example of the rank options and image characteristics analyzed, which were used by the two imaging specialists to assess
the results of the proposed denoising algorithm

Image #   Anatomy                Image characteristics being assessed:
          (soft tissue,          noise reduction, lack of artifacts,
          bone details)          quality of details, sharpness

The images should be rated according to the following scores:
1. Excellent
2. Good
3. Average
4. Poor
5. Not acceptable

The Anscombe's transformation acts as if the image data arose
from a Gaussian white noise model. More precisely, as the number of photon
counts increases, the noise variance in a square-root image
tends to a constant, independent of the signal intensity. The
inverse Anscombe’s transformation is easily obtained by
manipulating Equation 2. In order to have a more tractable
problem, in this work we are considering that the images are
corrupted only by additive Poisson noise. Other sources of
noise, including electronic noise normally present in digital
radiographic systems, were not taken into account.
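The forward transform of Eq. 2 and its algebraic inverse can be sketched as follows. The variance check in the usage note assumes reasonably high counts, where the stabilized noise standard deviation approaches 1:

```python
import numpy as np

def anscombe(I):
    """Eq. 2: variance-stabilizing transform for Poisson-distributed counts."""
    return 2.0 * np.sqrt(I + 3.0 / 8.0)

def inverse_anscombe(IA):
    """Algebraic inverse obtained by solving Eq. 2 for I."""
    return (IA / 2.0) ** 2 - 3.0 / 8.0
```

After the forward transform, Poisson data of any sufficiently large mean has approximately unit variance, which is what lets the Gaussian-based HMT machinery be applied.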
Dual-Tree Complex Wavelet
Compared with the DWT, the dual-tree complex wavelet
transform is a very attractive technique for medical image
denoising because it performs as well as the undecimated
DWT, in the context of shift invariance, and with significantly
lower computational cost.8
The nearly shift invariant property is obtained with a real
biorthogonal transform having double the sampling rate at each
scale and by computing parallel wavelet trees as illustrated in
Figure 3, which are differently subsampled. The DT-CWT
presents perfect shift invariance at level 1, and approximate
shift invariance beyond this level. The DT-CWT also presents
limited redundancy in the representation (4:1 for the 2D case—
independent of the number of scales), good directional
selectivity (six oriented subbands at ±15°, ±45°, and ±75°), and it
permits perfect image reconstruction.
Hidden Markov Tree Model in the
Wavelet Domain
The HMT model, applied in the wavelet context,1
is a
statistical model that can be used to capture statistical
correlations between the magnitudes of wavelet coefficients
across consecutive scales of resolution. The HMT works by
modeling the following three important properties of the
wavelet coefficients:
Non-Gaussian distribution: The marginal distribution of
the magnitude of the complex wavelet coefficients can be
well modeled by using a mixture of two-state Rayleigh
distributions. The choice for using the Rayleigh mixture
model instead of the Gaussian mixture model was based
upon the fact that the real and imaginary parts of the
complex wavelet coefficients may be slightly correlated,
and therefore only the magnitudes of the complex wavelet
coefficients will present a nearly shift-invariant property,
but not the phase.9
Persistency: Large/small wavelet coefficients related to
pixels in the image tend to propagate through scales of the
quad trees. Therefore, a state variable is defined for each
wavelet coefficient that associates the coefficient with one
of the two Rayleigh marginal distributions [one with small
(S) and the other with large (L) variance]. The HMT model
is then constructed by connecting the state variables (L and
S) across scales using the Expectation–Maximization
(EM) algorithm. Figure 4 shows the 1D structure of the
HMT model.
Clustering: Adjacent wavelet coefficients of a particular
large/small coefficient are very likely to share the same
state (large/small).
The HMT model is parameterized1 by the conditional probability
that the variable S_j is in state m given that its parent S_ρ(j) is
in state n:

ε^{m,n}_{j,ρ(j)} = p(S_j = m | S_ρ(j) = n),   m, n = 1, 2.

The state probability of the root J is indicated by p_{S_J}(m) =
p(S_J = m), and the Rayleigh mixture parameters by μ_{j,m} and σ²_{j,m}.
Fig 3. Schematic of the dual-tree complex wavelet transform. (Figure provided by Dr. Kingsbury.8)
The value of μ_{j,m} is set to zero because the real and imaginary
parts of the complex wavelet coefficients must have zero means
(wavelets have zero gain at dc); σ²_{j,m} is the corresponding
variance. The parameters, grouped into a vector θ = {p_{S_J}(m),
ε^{m,n}_{j,ρ(j)}, σ²_{j,m}}, are determined by the EM algorithm
proposed in Ref. 1.
Herein, we assume that the magnitudes of the complex wavelet
coefficients w_j follow one of the two-state Rayleigh distributions

f(w_{j,m} | σ²_{j,m}) = (w_{j,m} / σ²_{j,m}) exp( −w²_{j,m} / (2σ²_{j,m}) ),   m = 1, 2.   (3)
In order to have a more reliable and robust (not biased)
parameter estimation, the HMT model was simplified by
assuming that all the wavelet coefficients and state variables
within a particular level of a subband have identical
parent–child relationships. Therefore, each of the six image sub-
bands obtained by using the DT-CWT was trained
independently and hence presents its own set of parameters.
The magnitude of the complex wavelet coefficients for each
subband was modeled by the resulting mixture model
P(w_{j,m}) = Σ_{m=1,2} p_{S_J}(m) f(w_{j,m} | σ²_{j,m}).   (4)
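The two-state Rayleigh mixture of Eqs. 3 and 4 can be sketched as below; `p_small` plays the role of the state probability p_{S_J}(m), and the variance values in the usage check are illustrative:

```python
import numpy as np

def rayleigh_pdf(w, var):
    """Rayleigh density of Eq. 3 for coefficient magnitudes w >= 0."""
    return (w / var) * np.exp(-w ** 2 / (2.0 * var))

def mixture_pdf(w, p_small, var_small, var_large):
    """Eq. 4: mixture of a small-variance (noise-dominated) and a
    large-variance (edge-dominated) Rayleigh state."""
    return (p_small * rayleigh_pdf(w, var_small)
            + (1.0 - p_small) * rayleigh_pdf(w, var_large))
```

The mixture remains a proper density (it integrates to 1) while giving heavy tails for the edge state and a sharp peak near zero for the noise state.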
To take into account the dependencies among the wavelet
coefficients of different scales, a tree graph representing a
parentYchild relationship is used (Figure 4). The transition of a
specific wavelet coefficient j between two consecutive levels in
the tree is specified by the conditional probability m;n
j; jð Þ. The
algorithm for training the HMT model is provided in Ref. 1.
Training of the HMT Model
The main goal of the training stage is to find the correlation
among the wavelet coefficients through the scales. Based upon
experimental analysis and also in a practical laboratory
experiment using the hand phantom object, we have verified
that the best set of images to be used in the training stage of the
HMT model should have the lowest level of noise and present
enough image structure.
To validate the above statement, the hand phantom was
imaged with different radiation levels, according to the
parameters kVp and mAs as indicated in Table 2, given a set
of five images with different SNR values. The images were
used in turn to train five models. The images were then
processed and the PSNR was recorded for further evaluation.
The results of the experiment are described in the "Results"
section.
Selection of the clinical radiographic images used in the
training of the HMT model was conducted by using a set of
representative images (outside of the testing image set) of each
anatomy being studied (hand, foot, wrist, and heel). A HMT
model was estimated for each specific anatomy. The images
were visually chosen based on the level of noise and amount of
bone details. Images with lower level of noise and richer in
bone details were given preference.
Noise Variance Estimation
Estimation of the noise variance is an important step in our
image-denoising algorithm as it is used directly, along with the
Fig 4. 1D tree structure graph for the dependencies of the Hidden Markov tree model. Three levels are illustrated. The trees for the
two internal wavelet coefficients in level J + 1 are not shown for the sake of better visualization.
Table 2. Parameters of the x-ray tube used in the experiment
with the hand phantom shown in Fig. 2. In this experiment, the
SID was set to 100 cm and the small focal spot was used.
Except for the last set of parameters, the others are default
values used in clinical application
Image kVp mAs Type of patient usually applicable
1 60 2.5 Pediatric
2 60 3.2 Normal/medium
3 60 4.0 Large
4 60 20 Very high dose—NOT applicable
HMT parameters, in our wavelet-based filtering procedure. In
the present work, the noise variance was estimated as
σ²_n = sqrt( σ²_real × σ²_imaginary ),   (5)

where σ²_real and σ²_imaginary are, respectively, the noise
variances of the real and imaginary parts of the wavelet
coefficients, computed by using the median absolute deviation
(MAD13) algorithm.
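A sketch of this estimator, using the common MAD rule median(|w|)/0.6745 for each part; the exact MAD variant used in Ref. 13 may differ in detail:

```python
import numpy as np

def mad_sigma(w):
    """Robust noise std estimate: median absolute deviation scaled by 0.6745."""
    return np.median(np.abs(w)) / 0.6745

def noise_variance(w_real, w_imag):
    """Eq. 5: geometric mean of the real- and imaginary-part noise variances."""
    return np.sqrt(mad_sigma(w_real) ** 2 * mad_sigma(w_imag) ** 2)
```

The MAD is preferred over the sample variance here because the few large signal-related coefficients would otherwise inflate the noise estimate.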
Denoising Using the HMT
The denoising procedure proposed in this work is composed
of two shrinkage procedures: one is used for levels 1 and 2, and
the other for the subsequent levels. The rationale for this
strategy is related to the fact that the DT-CWT provides perfect
shift invariance only at level 1, and approximate shift
invariance for the other levels. Because of that, the capture of
the inter-scale dependencies among the wavelet coefficients
using the HMT model starts to become unreliable beyond level
2 or 3, due to the considerable image energy variation.
For the first two levels of decomposition, the conditional
mean estimation of the noise-free wavelet coefficient was
obtained using
ŵ_j = E[w_j | w_j, θ] = Σ_{m=1,2} p(S_j = m | w_j, θ) ( σ²_{j,m} / (σ²_{j,m} + σ²_n) ) w_j,   (6)

where p(S_j = m | w_j, θ) is the probability of state m given the
noisy wavelet coefficient w_j and the model parameters θ
computed by the EM algorithm, σ²_n is the variance of the
additive white Gaussian noise, and E[·] is the expectation
operator.
As the estimation of the subband variances σ²_{j,m} in the HMT
model is performed using noisy wavelet coefficients, their
values are biased and should be corrected. The corrected
estimate is obtained by

σ²_{j,m} = σ²_{j,m} − σ²_n  if σ²_{j,m} > σ²_n,  and 0 otherwise.   (7)
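Equations 6 and 7 combine into a per-coefficient Wiener-like gain weighted by the posterior state probabilities. A minimal sketch, where `state_probs` stands for the posteriors p(S_j = m | w_j, θ) that the EM algorithm would deliver:

```python
import numpy as np

def shrink_levels_1_2(w, state_probs, state_vars, noise_var):
    """Conditional-mean estimate of Eq. 6, with the bias correction of
    Eq. 7 applied to each state variance first."""
    est = np.zeros_like(w, dtype=float)
    for p_m, var_m in zip(state_probs, state_vars):
        var_c = max(var_m - noise_var, 0.0)             # Eq. 7 correction
        est += p_m * (var_c / (var_c + noise_var)) * w  # state-weighted gain
    return est
```

A coefficient whose small-state variance sits at the noise floor contributes zero gain, so the estimate is dominated by the large (signal) state.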
After level 2, a modified version of the soft-threshold
procedure proposed in Ref. 14 was used to find the shrinkage
factor

c_j = [ sigm(S(w_j − T)) − sigm(−S(w_j + T)) ] / [ sigm(S(max(w_j) − T)) − sigm(−S(max(w_j) + T)) ],   (8)

which is applied to the real and imaginary parts of the complex
wavelet coefficient w_j. In the above equation, sigm(y) = 1/(1 + e^{−y})
is the sigmoid function, S is an enhancement factor, and T = σ_n/β
is a threshold value, where β is a smoothing parameter. In the
present work, the default values of S and β were set to 1.3 and
0.9, respectively.
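The shrinkage factor of Eq. 8 can be sketched as follows, assuming the threshold is T = σ_n/β (the extraction of the source text is ambiguous on this point). The factor equals 1 at the largest coefficient magnitude and decreases smoothly toward zero below the threshold:

```python
import numpy as np

def sigm(y):
    """Sigmoid function sigm(y) = 1 / (1 + exp(-y))."""
    return 1.0 / (1.0 + np.exp(-y))

def shrink_factor(w, noise_sigma, S=1.3, beta=0.9):
    """Eq. 8 shrinkage factor, applied separately to the real and
    imaginary parts of the coefficients w."""
    T = noise_sigma / beta  # assumed form of the threshold
    num = sigm(S * (w - T)) - sigm(-S * (w + T))
    den = sigm(S * (w.max() - T)) - sigm(-S * (w.max() + T))
    return num / den
```

Unlike a hard threshold, this sigmoid-based rule avoids an abrupt keep/kill decision, which is one source of the pseudo-Gibbs artifacts discussed earlier.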
RESULTS AND DISCUSSIONS
Figure 5 shows the results of the experiment
carried out to determine the relation between the
radiation dose and the algorithm performance, in
terms of PSNR. The results were used to confirm
that a high-quality image (the one obtained with a
high x-ray dose, 60 kVp and 20 mAs) is in fact the
best option to be used in the training of the HMT
model. By analyzing the average PSNR values, we
noticed that image 3 (obtained with 60 kVp and
4.0 mAs) provides the second best average result.
The worst choice would be image 1, acquired with
60 kVp and 2.5 mAs. Despite the difference in the
average values shown in Figure 5, and except for
image 4, the PSNR values obtained by using
different training images were very similar. The x-
ray tube parameters used in the experiment are
shown in Table 2.
Figure 6 shows the results of the two-state
Rayleigh mixture model fitting the marginal
distribution of the wavelet coefficients for the
first four consecutive levels (1 to 4) of the image
in Figure 2(C). Visual inspection indicates the
good curve fitting provided by the Rayleigh
function. Due to the high concentration of image energy
around magnitude 0.25 in Figure 6(A)-(B), a simple
thresholding technique for differentiating large from small
wavelet coefficients would probably not produce good results.
Indeed, HMT-based denoising algorithms usually
outperform standard thresholding techniques be-
cause the degree of coefficient shrinkage is
determined based not only upon the value of the
coefficient but also upon its relationship with its
neighbors across scales (quad-tree relationship).
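The two-state Rayleigh mixture fit shown in Figure 6 can be reproduced with a short EM loop over the coefficient magnitudes. The sketch below is our own illustration (not the authors' code), using the closed-form Rayleigh maximum-likelihood updates and a crude moment-based initialization.

```python
import numpy as np

def rayleigh_pdf(x, sigma2):
    """Rayleigh density with scale parameter sigma^2, for x > 0."""
    return (x / sigma2) * np.exp(-x**2 / (2.0 * sigma2))

def fit_rayleigh_mixture(x, n_iter=100):
    """EM fit of a two-state Rayleigh mixture to wavelet-coefficient
    magnitudes x. Returns (mixture weights, scale parameters sigma^2)."""
    pi = np.array([0.5, 0.5])
    m2 = np.mean(x**2)
    sigma2 = np.array([m2 / 4.0, m2])   # crude initialization around the bulk
    for _ in range(n_iter):
        # E-step: posterior responsibility of each state for each sample
        resp = pi * np.stack([rayleigh_pdf(x, s) for s in sigma2], axis=1)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: closed-form Rayleigh maximum-likelihood updates
        pi = resp.mean(axis=0)
        sigma2 = (resp * x[:, None]**2).sum(axis=0) / (2.0 * resp.sum(axis=0))
    return pi, sigma2
```

In the full HMT model these per-state densities are additionally tied across scales through the quad-tree state transitions; the sketch covers only the marginal fit illustrated in Figure 6.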
Fig 5. PSNR values resulting from the processing of the four phantom images acquired using different exposure levels. Each image was used in turn to train an HMT model. Afterwards, the estimated HMT models were used in the denoising algorithm. The PSNR average values from columns 1 to 4 in the attached table are 25.59, 22.64, 22.59, and 22.65, respectively.
DIGITAL RADIOGRAPHIC IMAGE DENOISING VIA WAVELET-BASED HIDDEN MARKOV MODEL ESTIMATION 7
Figure 7 shows the line-pair phantom images denoised by
using our proposed algorithm with two levels of denoising and
by using a Gaussian filter with a radius of two pixels. In the original
image [Fig 2(D)], the sharpness of the edges can
be visually assessed up to 3.4 or 3.7 lp/mm. In
fact, visual inspection of this line-pair phantom
image in a computer monitor can provide up to
4.6 lp/mm using the system described in the
"Digital Radiographic System" subsection. However, a noticeable amount of quantum noise can
be observed through the whole image. Figure 7(A)
shows the processed image using our proposed
algorithm. The amount of noise present in the
original line-pair phantom image was reduced
significantly. A noticeable improvement in sharpness can also
be visually assessed, which is confirmed by the high PSNR
value (Table 3). In
this case, visual differences between the small
edges can be noticed only up to 3.1 lp/mm due to
the blurring effect caused by noise removal.
Visible structured artifacts can be seen close to the strong
edges of the phantom. Because of their regular pattern, we
argue that these artifacts may be acceptable in the visual
analysis of radiographic images, given the significant
reduction in quantum noise and the improvement in
sharpness. The
image resulting from the Gaussian filtering is
shown in Figure 7(C). Although the noise level
was considerably reduced without creating any
visible artifact, all the small edge details were
smoothed out. The visual differences between the
edges can only be seen up to 2.2 or 2.5 lp/mm. In
this case, the computed PSNR values were 32.23
and 31.58, respectively. Figures 7(B) and (D) show
the image differences resulting from the subtrac-
tion of the original image and the denoised image.
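The Gaussian baseline used throughout the comparison can be approximated with a separable convolution. This is our sketch only: the paper's "radius size" is read here as the kernel half-width in pixels, and the choice σ = radius/2 is our assumption.

```python
import numpy as np

def gaussian_kernel(radius, sigma=None):
    """Normalized 1-D Gaussian kernel with half-width `radius` pixels."""
    sigma = sigma or radius / 2.0
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def gaussian_filter2d(img, radius):
    """Separable isotropic Gaussian smoothing (zero-padded borders)."""
    k = gaussian_kernel(radius)
    # filter along rows, then along columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)
```

Such a filter reduces noise variance uniformly, which is exactly why it also smooths out the small edge details that the HMT-based method preserves.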
Fig 6. Example of two-state Rayleigh mixture marginal distributions used to model the wavelet coefficients. The summation of the densities and the histograms of the wavelet coefficients are also shown. Plots were obtained for the first four levels (A-D); subbands with orientation 0°.
8 FERRARI AND WINSOR
Fig 7. Images from the line-pair phantom used to assess image sharpness. The images were cropped for better visualization of the details. (A,B) Image denoised by using our proposed algorithm with two levels of denoising, and difference between the original and denoised image. (C,D) Image denoised by using a Gaussian filter with a radius equal to two pixels, and difference between the original and denoised image. Denoised images were enhanced by using the unsharp-mask technique. The image differences were histogram-equalized for better visualization.
Table 3. PSNR values computed for the line-pair and CDRAD phantom images, and for the radiographic hand image used in this work

                                              PSNR (dB)
Technique                                     Line pair   CDRAD   Hand
Gaussian (radius size = 2 pixels)               31.58     49.34   48.32
Gaussian (radius size = 3 pixels)               29.76     46.72   47.88
Gaussian (radius size = 4 pixels)               28.39     44.66   47.24
Proposed method (two levels of denoising)       32.23     52.37   48.68
Proposed method (three levels of denoising)     32.22     52.15   47.93
Proposed method (four levels of denoising)      32.22     52.12   47.87
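The PSNR figures in Table 3 follow the standard definition; a small sketch is given below. The peak convention (reference maximum by default) is our assumption, since the section does not state which peak value was used.

```python
import numpy as np

def psnr(reference, test, peak=None):
    """PSNR in dB between a reference image and a processed image.
    `peak` defaults to the reference's maximum value (an assumption)."""
    peak = peak if peak is not None else reference.max()
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

For example, an 8-bit image degraded by a uniform error of one gray level yields 20·log10(255) ≈ 48.13 dB, which is in the same range as the hand-image values reported in Table 3.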
Fig 8. Assessment of image contrast using contrast-detail curves. (A-C) Curves obtained using the proposed technique (with two, three, and four levels of denoising, respectively) and using the Gaussian filter (with kernel sizes of 2, 3, and 4 pixels, respectively).
Comparing these two images, we can easily
confirm that our proposed algorithm can keep
much more of the fine details from the original
image than the Gaussian smoothing method.
Modification in the visual contrast was also
assessed by using the contrast-detail phantom
image illustrated in Figure 2(E). The denoised
images resulting from applying our proposed
algorithm and the Gaussian smoothing to the
contrast-detail phantom were visually evaluated,
and the respective contrast curves were obtained
as illustrated in Figure 8(A)-(C). All three plots
[Fig 8(A)-(C)] show a slight improvement in the
image contrast using the proposed algorithm.
We note that the proposed technique was not
designed to improve image contrast but only to
reduce quantum noise; we believe that small
changes to the algorithm could improve the image
contrast even further. The best result in terms of
contrast improvement was obtained by using the
proposed technique with four levels of denoising
[see Figure 8(C)].
For the sake of comparison, Figures 9 and 10
show examples of the radiographic hand image in
Figure 2(C) denoised by using the proposed
technique with different levels of denoising and
the Gaussian filter with different kernel sizes. The
granular appearance of the images in Figures 9(A)
and 10(A) is typical of images corrupted by
quantum noise. In these cases, the Gaussian filter
and the proposed algorithm using two levels of denoising
were not efficient in removing the noise.
Fig 9. Radiographic hand image shown in Figure 2(C), denoised by using the proposed technique with different levels: (A) two levels, (B) three levels, and (C) four levels of denoising.
Fig 10. Radiographic hand image shown in Figure 2(C), denoised by using the isotropic Gaussian filter with different radius sizes: (A-C) radius sizes equal to 2, 3, and 4 pixels, respectively.
A considerable improvement in reducing the
quantum noise, however, is demonstrated in
Figures 9(B) and (C). The soft tissue is very clean
and smooth compared to the results of the
Gaussian filter in Figures 10(B) and (C). On the
other hand, the amount of artifacts introduced
close to the strong edges (especially in the
boundaries of the metacarpals hand long bones)
becomes more noticeable, compared to the results
of the Gaussian filter. In general, the edge details
are clearer and crisper in the images processed
using the proposed technique and an improvement
in the overall perceived image sharpness can also
be noticed [see Figures 9(B) and (C) and Figures
10(B) and (C) for comparison]. The improvement
in image sharpness is due to the fact that our
proposed method treats soft tissue regions
differently from regions presenting fine bone details.
This can be noticed by comparing Figures
11(A)-(C) and 12(A)-(C). These image differences
show that the proposed method can remove
the noise without removing the small bone details
from the image which are of great importance for
diagnostic purposes.
The results obtained from the denoising of the
15 clinical digital radiographs were analyzed
according to the protocol described in the
"Protocol for the Evaluation of Results" subsection
and are shown in Figure 13.
Fig 11. Image differences computed between the original and the denoised images using the proposed technique with different levels: (A) two levels, (B) three levels, and (C) four levels of denoising.
Fig 12. Image differences computed between the original and the denoised images using the isotropic Gaussian filter with different radius sizes: (A-C) radius sizes equal to 2, 3, and 4 pixels, respectively.
In Figure 13(A), we can confirm the excellent performance of the
algorithm, using two and four levels, in reducing
the noise of both soft tissue and bone. As pointed
out by the two specialists who analyzed the im-
ages, the algorithm was able to remove the quan-
tum noise with great success. Despite the good
performance in noise reduction, the proposed
algorithm presented a poorer performance with
regard to artifacts when using four levels of
denoising, according to Figure 13(B). Artifacts are
mostly caused by the pseudo-Gibbs phenomenon
appearing near strong edges. This undesirable
effect becomes predominant as the number of de-
noised scales increases. The proposed algorithm
also scored well on overall quality of details after
denoising, as can be seen in Figure 13(C). The
bone sharpness was also preserved when compared
to the Gaussian filter in Figure 13(D). Except for
the presence of artifacts, the proposed denoising
algorithm using four-level denoising presented
better performance than the same method using
two-level denoising or the Gaussian filter.
Finally, Table 3 shows the PSNR values
computed for the phantoms and hand images.
For all cases the proposed algorithm presented
higher PSNR values compared to the Gaussian
filter method.
CONCLUSION
In this paper, we present a method for denoising
of digital radiographic images. Although the
preliminary results are very promising,
a more extensive evaluation of the algorithm
should be carried out by a panel of radiologists.
Also, investigation of the directional response
information provided by the DT-CWT and reduction of
artifacts by using penalized reconstruction of the
wavelet coefficients are under way.
Fig 13. Average results of the qualitative assessment of the proposed denoising algorithm performed by the two imaging specialists. The plots also provide a comparison with denoising using the Gaussian filter. The assessment included analysis of noise reduction (A), analysis of artifacts (B), quality of details (C), and analysis of bone sharpness (D).
As the main idea of our proposed algorithm is to model
the wavelet coefficients associated with edges in an image,
reducing the noise level in the soft tissue while preserving
the sharpness of the edges, we expect an improvement in the detection of
small bone fractures. Musculoskeletal images, in
which the confounding factor in the conspicuity of
image features may not be the noise level but the
lack of adequate depiction of fine details, may also
benefit from the application of our proposed method.
ACKNOWLEDGMENTS
The authors are very grateful to Carolyn Tinney and Heather
Andrews for helping in the assessment of the results. They also
would like to thank Prof. Dr. Nick Kingsbury from the Signal
Processing and Communication group of the University of
Cambridge, UK, for help in clarifying details about the DT-
CWT and for kindly providing Figure 3.
REFERENCES
1. Crouse M, Nowak R, Baraniuk R: Wavelet-based statistical signal processing using hidden Markov models. IEEE Trans Signal Process 46:886-902, 1998
2. Donoho D: De-noising by soft-thresholding. IEEE Trans Inf Theory 41:613-627, 1995
3. Romberg J, Choi H, Baraniuk R: Bayesian tree-structured image modeling using wavelet-domain hidden Markov models. IEEE Trans Image Process 10:1056-1068, 2001
4. Dippel S, Stahl M, Wiemker R, Blaffert T: Multiscale contrast enhancement for radiographies: Laplacian pyramid versus fast wavelet transform. IEEE Trans Med Imag 21:343-353, 2002
5. Durand S, Froment J: Artifact free signal denoising with wavelets. In: International Conference on Acoustics, Speech and Signal Processing. Salt Lake City, Utah, USA, 2001, pp 3685-3688
6. Bradley A: Shift-invariance in the discrete wavelet transform. In: Sun C, Talbot H, Ourselin S, Adriaansen T (Eds). Proceedings of the Seventh Digital Image Computing: Techniques and Applications. CSIRO Publishing, Macquarie University, Sydney, Australia, 2003, pp 29-38
7. Starck J, Murtagh F, Bijaoui A: Image Processing and Data Analysis: The Multiscale Approach. Cambridge: Cambridge University Press, 1998
8. Kingsbury N: Image processing with complex wavelets. Philos Trans R Soc Lond 357:2543-2560, 1999
9. Lee V: Denoising of multidimensional data using complex wavelets and hidden Markov trees. Signal Processing Laboratory, Cambridge: University of Cambridge, 2000, p 64
10. Winsor R: Filmless x-ray apparatus and method of using the same. Imaging Dynamics Company Ltd, USA, 1992, p 7
11. Gonzalez R, Woods R: Digital Image Processing. Addison-Wesley, 1992
12. Jain A: Fundamentals of Digital Image Processing. Englewood Cliffs, NJ, USA: Prentice Hall, 1989, p 64
13. Donoho D, Johnstone I: Adapting to unknown smoothness via wavelet shrinkage. J Am Stat Assoc 90:1200-1224, 1995
14. Laine A, Schuler S, Fan J, Huda W: Mammographic feature enhancement by multiscale analysis. IEEE Trans Med Imag 13:725-740, 1994