In this paper we discuss speckle reduction in images with the recently proposed Wavelet Embedded Anisotropic Diffusion (WEAD) and Wavelet Embedded Complex Diffusion (WECD). Both methods improve on anisotropic and complex diffusion by adding wavelet-based Bayes shrink in their second stage. Both WEAD and WECD produce excellent results when compared with existing speckle reduction filters.
Boosting CED Using Robust Orientation Estimation (IJMA)
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding it external orientation from a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which pre-calculated orientation, obtained using local and integration scales, is used. Experiments show that the proposed scheme performs much better in noisy environments than traditional Coherence Enhancement Diffusion.
LOCAL DISTANCE AND DEMPSTER-SHAFER FOR MULTI-FOCUS IMAGE FUSION (SIPIJ)
This work proposes a new image fusion method using Dempster-Shafer theory and local variability (DST-LV). The method takes into account the behaviour of each pixel with respect to its neighbours: it calculates the quadratic distance between the value of the pixel I(x, y) at each point and the values of all neighbouring pixels. Local variability is used to determine the mass function defined in Dempster-Shafer theory. The two Dempster-Shafer classes studied are the fuzzy part and the focused part. The results of the proposed method are significantly better than those of other methods.
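The quadratic-distance local variability described above is straightforward to compute. A minimal sketch (the function name, neighbourhood radius, and reflect padding are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def local_variability(img, radius=1):
    """Sum of squared differences between each pixel and every pixel
    in its (2*radius+1)^2 neighbourhood (DST-LV-style sketch)."""
    img = img.astype(float)
    h, w = img.shape
    padded = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            out += (img - shifted) ** 2
    return out

# A perfectly flat region has zero variability:
flat = np.full((5, 5), 7.0)
print(local_variability(flat).max())  # 0.0
```

High variability marks in-focus (sharp) pixels, which is what the mass function above is built from.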
Coherence Enhancement Diffusion Using Robust Orientation Estimation (CS & IT)
In this paper, a new robust orientation estimation for Coherence Enhancement Diffusion (CED) is proposed. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which pre-calculated orientation, obtained by orientation diffusion, is used to find the true local scale. Experiments show that the proposed scheme performs much better in noisy environments than traditional Coherence Enhancement Diffusion.
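Both CED abstracts above rely on estimating local ridge orientation from gradients at appropriate scales. A minimal sketch of such an estimate using a box-smoothed structure tensor (function name, window handling, and the integration-window parameter are illustrative, not either paper's exact scheme):

```python
import numpy as np

def local_orientation(img, sigma_pixels=1):
    """Dominant gradient orientation from a smoothed structure tensor,
    a common basis for the orientation fed into CED (sketch)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    # Box-smooth the tensor entries over an integration window.
    k = 2 * sigma_pixels + 1
    def box(a):
        p = np.pad(a, sigma_pixels, mode='reflect')
        s = np.zeros_like(a)
        for dy in range(k):
            for dx in range(k):
                s += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return s / (k * k)
    jxx, jyy, jxy = box(gx * gx), box(gy * gy), box(gx * gy)
    # Orientation of the dominant eigenvector, in radians.
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)
```

The ridge direction is perpendicular to this gradient orientation; smoothing over the integration window is what gives the estimate its robustness to noise.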
In this paper a PDE-based hybrid method for image denoising is introduced. The method is a two-stage filter: anisotropic diffusion followed by wavelet-based Bayesian shrinkage. Efficient denoising is achieved by reducing the convergence time of the anisotropic diffusion.
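The first stage of such a hybrid filter is classical Perona-Malik anisotropic diffusion, sketched below (parameter values `kappa` and `lam` are illustrative; the paper's accelerated variant is not reproduced here):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion: smooth within regions, preserve edges.
    Uses periodic boundaries via np.roll for brevity (sketch)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Four nearest-neighbour differences.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance g(d) = exp(-(d/kappa)^2).
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The second stage would then apply Bayesian wavelet shrinkage to the diffused result; fewer diffusion iterations are needed because the shrinkage stage removes the residual noise.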
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure (IOSR-JECE)
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Noise Removal with Morphological Operations Opening and Closing Using Erosio... (IJMER)
Mathematical morphological operations are proposed in this paper. Using the two basic operations, erosion and dilation, pixels can be added or removed, which makes it possible to remove noise or interference in power systems. Opening and closing operations, built from erosion and dilation, are also discussed. These four morphological operations are also helpful in developing a morphological filter.
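The four operations can be sketched on binary images with plain NumPy (a square structuring element is assumed here; a real morphological filter would choose the element to match the noise):

```python
import numpy as np

def erode(img, size=3):
    """Binary erosion: pixel survives only if its whole neighbourhood
    is foreground. Expects a boolean array; borders padded with True."""
    r = size // 2
    p = np.pad(img, r, mode='constant', constant_values=True)
    out = np.ones_like(img)
    for dy in range(size):
        for dx in range(size):
            out &= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def dilate(img, size=3):
    """Binary dilation: pixel is set if any neighbour is foreground."""
    r = size // 2
    p = np.pad(img, r, mode='constant', constant_values=False)
    out = np.zeros_like(img)
    for dy in range(size):
        for dx in range(size):
            out |= p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def opening(img, size=3):   # erosion then dilation: removes bright specks
    return dilate(erode(img, size), size)

def closing(img, size=3):   # dilation then erosion: fills dark holes
    return erode(dilate(img, size), size)
```

Opening removes isolated noise pixels while larger structures survive, which is exactly the filtering behaviour the abstract describes.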
Uniform and Non-uniform Single Image Deblurring Based on Sparse Representatio... (IJMA)
Considering the sparseness property of images, a sparse-representation-based iterative deblurring method is presented for single-image deblurring under uniform and non-uniform motion blur. The approach is based on sparse and redundant representations over dictionaries adaptively trained from the single blurred, noisy image itself. The K-SVD algorithm is used to obtain a dictionary that describes the image contents effectively. Comprehensive experimental evaluation demonstrates that the proposed framework, integrating the sparseness property of images, adaptive dictionary training, and an iterative deblurring scheme, significantly improves deblurring performance, is comparable with state-of-the-art deblurring algorithms, and offers a powerful solution to an ill-conditioned inverse problem.
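K-SVD alternates a sparse-coding step with a dictionary-update step. The sparse-coding step is commonly done with Orthogonal Matching Pursuit, sketched here (this is generic OMP, not the paper's full deblurring pipeline):

```python
import numpy as np

def omp(D, y, n_nonzero=3):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom
    most correlated with the residual, then least-squares refit on the
    chosen support (the sparse-coding step K-SVD relies on; sketch)."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x
```

In the deblurring framework above, patches of the blurred image are sparse-coded this way against the adaptively trained dictionary at each iteration.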
An improved hybrid model for molecular image denoising, proposed by NeST Software, can produce molecular image output with better SNR.
Change Detection of Water-Body in Synthetic Aperture Radar Images (CSC Journals)
Change detection is the art of quantifying changes in Synthetic Aperture Radar (SAR) images that have occurred over a period of time. Remote sensing has been the parent technique for change detection analysis. This paper empirically investigates the impact of applying combinations of texture features with different classification techniques to separate water body from non-water body. First, the images are classified using unsupervised Principal Component Analysis (PCA) based K-means clustering for dimension reduction. Then texture features such as Energy, Entropy, Contrast, Inverse Differential Moment, Directional Moment, and the Median are extracted using the Gray Level Co-occurrence Matrix (GLCM), and these features are utilized in Learning Vector Quantization (LVQ) and Support Vector Machine (SVM) classifiers. This paper aims to apply a combination of texture features in order to significantly improve detection accuracy. Such detection analysis influences management and policy decision-making for long-term construction projects by predicting preventable losses.
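The GLCM features named above can be sketched as follows (quantisation to 8 gray levels and a single (1, 0) offset are illustrative choices; the paper does not specify these details here):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalised Gray Level Co-occurrence Matrix for one pixel offset."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(m):
    """Energy, entropy and contrast computed from a normalised GLCM."""
    i, j = np.indices(m.shape)
    nz = m[m > 0]
    return {
        'energy': float((m ** 2).sum()),
        'entropy': float(-(nz * np.log2(nz)).sum()),
        'contrast': float(((i - j) ** 2 * m).sum()),
    }
```

Feature vectors built this way per region are what the LVQ and SVM classifiers above would consume.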
Performance Evaluation of 2D Adaptive Bilateral Filter For Removal of Noise F... (CSC Journals)
In this paper, we present a performance analysis of the adaptive bilateral filter in terms of peak signal-to-noise ratio and mean square error. It was evaluated by changing the parameters of the adaptive filter: the half-width values and the standard deviations. In the adaptive bilateral filter, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width. The variance of the range filter can also be adaptive. The filter is applied to improve the sharpness of gray-level and color images by increasing the slope of the edges without producing overshoot or undershoot. The related graphs were plotted and the best filter parameters obtained.
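For reference, a plain (non-adaptive) bilateral filter looks like this; the adaptive variant evaluated above additionally tunes the offset and widths per pixel (parameter values here are illustrative):

```python
import numpy as np

def bilateral(img, radius=2, sigma_d=2.0, sigma_r=0.1):
    """Bilateral filter: each neighbour is weighted by a spatial
    (domain) Gaussian and an intensity (range) Gaussian, so smoothing
    stops at large intensity jumps and edges are preserved."""
    img = img.astype(float)
    h, w = img.shape
    p = np.pad(img, radius, mode='reflect')
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = p[radius + dy:radius + dy + h,
                        radius + dx:radius + dx + w]
            wgt = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_d ** 2))
                   * np.exp(-(shifted - img) ** 2 / (2 * sigma_r ** 2)))
            num += wgt * shifted
            den += wgt
    return num / den
```

With a small `sigma_r`, pixels across an edge get near-zero range weight, which is why the edge slope survives the smoothing.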
OBIA on Coastal Landform Based on Structure Tensor (CS & IT)
This paper presents an OBIA method based on the structure tensor to identify complex coastal landforms. First, a Hessian matrix is built via Gabor filtering and a multiscale structure tensor is calculated. Edge information is extracted from the trace of the structure tensor and a watershed segmentation of the image is conducted. Then textons are developed and a texton histogram is created. Finally, the results are obtained by maximum-likelihood classification with KL divergence as the similarity measure. The study findings show that the structure tensor can capture multiscale, all-direction information with little data redundancy. Moreover, the method described in this paper achieves high classification accuracy.
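The edge-extraction step can be illustrated in its simplest form: the trace of the (unsmoothed) structure tensor is just the squared gradient magnitude (no Gabor filtering or multiscale handling in this sketch):

```python
import numpy as np

def tensor_trace_edges(img):
    """Edge strength as the trace of the plain structure tensor,
    gx^2 + gy^2 -- the quantity the pipeline above feeds into
    watershed segmentation (minimal sketch)."""
    gy, gx = np.gradient(img.astype(float))
    return gx * gx + gy * gy
```

In the full method this trace would be computed from the Gabor-filtered, multiscale tensor before the watershed step.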
A novel image fusion strategy is presented for a panchromatic (PAN) high-resolution image and a multispectral (MS) image in the nonsubsampled contourlet transform (NSCT) domain. The NSCT can give an asymptotically optimal representation of edges and contours in an image by virtue of its good multiresolution, shift invariance, and high directionality. We obtain a high spatial resolution (HR) and high spectral resolution MS image using the available high-spectral- but low-spatial-resolution MS image and the PAN image. Since we need to predict the missing high-resolution pixels in each of the MS images, the problem is ill-posed and is solved using a maximum a posteriori (MAP) approach. We first obtain an initial approximation to the HR fused image by learning the edges from the PAN image using the NSCT. A Markov random field (MRF) prior term is used for regularization to obtain smoothness in the image. We then optimize the cost function formed from the data-fitting term and the prior term to obtain the fused image, in which the edges correspond to those in the initial HR approximation. The procedure is repeated for each of the MS images. The advantage of the proposed method lies in the use of simple gradient-based optimization for regularization while preserving the discontinuities and color components.
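Gradient-based optimization of a data-fitting term plus an MRF smoothness prior can be sketched as follows (a single scalar image and a quadratic prior are assumed; the paper fuses each MS band with learned edge constraints, which this sketch omits):

```python
import numpy as np

def map_restore(init, observed, lam=0.5, step=0.1, n_iter=50):
    """Gradient descent on  |u - observed|^2 + lam * |grad u|^2,
    i.e. a data term plus a quadratic MRF prior (periodic boundaries
    via np.roll; a minimal sketch of MAP estimation)."""
    u = init.astype(float).copy()
    for _ in range(n_iter):
        # Gradient of the data-fitting term.
        g = 2 * (u - observed)
        # Gradient of the smoothness prior (discrete Laplacian).
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        g -= 2 * lam * lap
        u -= step * g
    return u
```

The paper's edge-preserving behaviour comes from tying the prior to the NSCT-learned edges of the initial HR approximation rather than smoothing uniformly as this quadratic prior does.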
Texture Unit based Monocular Real-world Scene Classification using SOM and KN... (IDES Editor)
In this paper a method is proposed to discriminate real-world scenes into natural and man-made scenes of similar depth. The global roughness of a scene image varies as a function of image depth: an increase in image depth leads to an increase in roughness in man-made scenes, whereas natural scenes exhibit smooth behavior at higher image depth. This particular arrangement of pixels in the scene structure is well explained by the local texture information in a pixel and its neighborhood. Our proposed method analyses the local texture information of a scene image using the texture unit matrix. For the final classification we use both supervised and unsupervised learning, with a K-Nearest Neighbor classifier (KNN) and a Self-Organizing Map (SOM) respectively. The technique's very low computational complexity makes it useful for online classification.
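In the usual texture-unit formulation, each of the eight neighbours is coded relative to the centre pixel and the codes form a base-3 number. This sketch assumes that standard coding and a clockwise neighbour order (the paper may order or weight the neighbours differently):

```python
import numpy as np

def texture_unit(img):
    """Texture-unit number per pixel: each neighbour coded 0/1/2
    (below / equal / above the centre), combined as a base-3 number
    in 0..6560 (sketch of the texture unit matrix above)."""
    img = img.astype(int)
    h, w = img.shape
    p = np.pad(img, 1, mode='edge')
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    tu = np.zeros((h, w), dtype=int)
    for k, (dy, dx) in enumerate(offsets):
        nb = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        code = np.where(nb < img, 0, np.where(nb == img, 1, 2))
        tu += code * 3 ** k
    return tu
```

The histogram of these texture-unit numbers over an image is the local-texture descriptor the KNN and SOM classifiers above would operate on.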
NEIGHBOUR LOCAL VARIABILITY FOR MULTI-FOCUS IMAGE FUSION (SIPIJ)
The goal of multi-focus image fusion is to integrate images with different in-focus objects in order to obtain a single image with all objects in focus. In this paper, we present a new method based on neighbour local variability (NLV) to fuse multi-focus images. At each pixel, the method uses the local variability calculated from the quadratic difference between the value of the pixel and the values of all pixels in its neighbourhood; it expresses the behaviour of the pixel with respect to its neighbours. The variability preserves edges because it detects sharp intensity changes in the image. The proposed fusion weights each pixel by the exponential of its local variability. The quality of the fusion depends on the size of the neighbourhood region considered, which in turn depends on the variance and size of the blur filter. We therefore start by modelling the neighbourhood region size as a function of the variance and size of the blur filter. We compare our method to other methods in the literature and show that ours gives better results.
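The exponential-of-variability weighting can be sketched for two source images (the neighbourhood radius is fixed here, whereas the paper models it from the blur filter's variance and size):

```python
import numpy as np

def nlv_fuse(img_a, img_b, radius=2):
    """Fuse two multi-focus images: each pixel is weighted by the
    exponential of its neighbour local variability, so the sharper
    (higher-variability) source dominates (minimal NLV-style sketch)."""
    def variability(img):
        img = img.astype(float)
        h, w = img.shape
        p = np.pad(img, radius, mode='reflect')
        v = np.zeros_like(img)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                s = p[radius + dy:radius + dy + h,
                      radius + dx:radius + dx + w]
                v += (img - s) ** 2
        return v
    wa = np.exp(variability(img_a))
    wb = np.exp(variability(img_b))
    return (wa * img_a + wb * img_b) / (wa + wb)
```

Where one source is sharply focused and the other blurred, the exponential makes the weight ratio large, so the fused pixel follows the in-focus source almost exactly.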
Performance Analysis of Spatial and Frequency Domain Multiple Data Embedding ... (CSC Journals)
Data hiding is an age-old technique used to hide data in an image, and several attacks are prevalent for hacking the data hidden inside it. Considerable research is ongoing in this area to protect the hidden data from unauthorized access. The current work studies the behavior of spatial- and frequency-domain multiple data embedding techniques over noise-prone channels, enabling the user to select an optimal embedding technique. The performance of these techniques is also examined for multiple data embedded inside a single cover image. The robustness of the watermark is tested by incorporating several attacks and testing the watermark strength.
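A minimal spatial-domain example is least-significant-bit embedding (function names are illustrative, and no robustness against the attacks studied above is claimed for this sketch):

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide a bit string in the least significant bits of the first
    len(bits) pixels of an 8-bit cover image (spatial-domain sketch)."""
    stego = cover.astype(np.uint8).copy()
    flat = stego.reshape(-1)          # view into stego
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)
    return stego

def extract_lsb(stego, n_bits):
    """Read the hidden bits back out of the stego image."""
    return ''.join(str(p & 1) for p in stego.reshape(-1)[:n_bits])
```

Each embedded bit changes a pixel value by at most 1, which is why spatial LSB embedding is visually imperceptible yet fragile under the noise and attacks the paper evaluates; frequency-domain embedding trades capacity for robustness.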
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
Image Denoising Using Earth Mover's Distance and Local Histograms (CSC Journals)
In this paper an adaptive range- and domain-filtering method is presented. In the proposed method, local histograms are computed to tune the range and domain extents of a bilateral filter. A noise histogram is estimated to measure the noise level at each pixel in the noisy image, and the extents of the range and domain filters are determined from the pixel noise level. Experimental results show that the proposed method effectively removes noise while preserving detail. The proposed method performs better than the bilateral filter, and the restored test images have higher PSNR than those obtained with the popular BayesShrink wavelet denoising method.
An Ultrasound Image Despeckling Approach Based on Principal Component Analysis (CSC Journals)
An approach based on principal component analysis (PCA) to filter out multiplicative noise from ultrasound images is presented in this paper. An image with speckle noise is segmented into small dyadic-length segments, depending on the original size of the image, and the global covariance matrix is found. A projection matrix is then formed by selecting the maximum eigenvectors of the global covariance matrix. This projection matrix is used to filter speckle noise by projecting each segment onto the signal subspace. The approach is based on the assumption that signal and noise are independent and that the signal subspace is spanned by a subset of a few principal eigenvectors. When applied to simulated and real ultrasound images, the proposed approach outperformed popular nonlinear denoising techniques such as 2D wavelets, 2D total variation filtering, and 2D anisotropic diffusion filtering in terms of edge preservation and maximum cleaning of speckle noise. It also showed lower sensitivity to outliers resulting from the log transformation of the multiplicative noise.
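The projection step can be sketched as follows, treating each segment as a row vector (segment extraction and the choice of how many eigenvectors to keep are simplified here):

```python
import numpy as np

def pca_denoise(segments, n_components=2):
    """Project each segment onto the subspace spanned by the leading
    eigenvectors of the global covariance matrix, discarding the
    noise-dominated directions (PCA despeckling sketch)."""
    x = segments.astype(float)
    mean = x.mean(axis=0)
    xc = x - mean
    cov = xc.T @ xc / max(len(x) - 1, 1)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues ascending
    basis = vecs[:, -n_components:]       # leading eigenvectors
    return (xc @ basis) @ basis.T + mean
```

The signal-and-noise-independence assumption above is what justifies discarding the trailing eigendirections: under it, the small eigenvalues are dominated by the speckle component.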
Computationally Efficient Methods for Sonar Image Denoising using Fractional ... (CSC Journals)
Sonar images, produced by the coherent nature of the scattering phenomenon, inherit a multiplicative component called speckle and contain almost homogeneous as well as textured regions with relatively rare edges. Speckle removal is a pre-processing step required in applications such as the detection and classification of objects in sonar images. In this paper, computationally efficient fractional integral mask algorithms to remove speckle noise from sonar images are proposed. The Riemann-Liouville definition of fractional calculus is used to create fractional integral masks in eight directions. Computational efficiency is obtained by using a single mask incorporating the significant coefficients of the eight directional masks, so only one convolution operation is required. Heterogeneous patches of the sonar image are classified using a newly proposed naive homogeneity index that depends on the texture strength of the patches, and the despeckling filters can be adjusted to these patches. Applying the mask convolution only to the selected patches further reduces the computational complexity. The non-homomorphic approach used in the proposed method avoids the undesired bias that occurs in the traditional homomorphic approach. Experiments show that the required mask size depends directly on the fractional order: mask size can be reduced for lower fractional orders, ensuring a further reduction in computational complexity. Experimental results substantiate the effectiveness of the despeckling method, evaluated with several no-reference image performance criteria.
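One common way to generate Riemann-Liouville fractional integral mask coefficients is from the Gamma function; this is a generic sketch, not the paper's exact eight-direction construction:

```python
import math

def fractional_integral_coeffs(v, n):
    """First n coefficients of a 1-D Riemann-Liouville fractional
    integral mask of order v:  c_k = Gamma(k + v) / (Gamma(v) * k!).
    (Illustrative; the paper's directional masks may be normalised
    or truncated differently.)"""
    return [math.gamma(k + v) / (math.gamma(v) * math.factorial(k))
            for k in range(n)]
```

For orders v < 1 the coefficients decay with k, which is consistent with the abstract's observation that smaller masks suffice at lower fractional orders.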
A Comparative Study of Wavelet and Curvelet Transform for Image Denoising (IOSR Journals)
Abstract: This paper compares the discriminating power of various multiresolution-based thresholding techniques, i.e., wavelet and curvelet, for image denoising. The curvelet transform offers exact reconstruction, stability against perturbation, ease of implementation, and low computational complexity. We propose to employ curvelets for facial feature extraction and perform a thorough comparison against the wavelet transform; in particular, the orientation of the curvelet is analysed. Experiments show that under expression changes the small-scale curvelet coefficients are robust, though the large-scale coefficients of both transforms are influenced. The advantage of the curvelet lies in its sparse representation abilities, which are critical for compression, estimation of denoised images, and inverse problems; thus the experiments and theoretical analysis coincide. Keywords: curvelet transform, face recognition, feature extraction, sparse representation, thresholding rules, wavelet transform.
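The multiresolution thresholding that both transforms rely on can be illustrated with a one-level Haar transform plus soft thresholding (1-D and single-level for brevity; wavelet and curvelet pipelines apply the same idea across many subbands and orientations):

```python
import numpy as np

def haar_soft_denoise(signal, thresh):
    """One-level Haar analysis, soft-threshold the detail coefficients,
    then exact Haar synthesis (basic thresholding-denoise sketch;
    signal length must be even)."""
    x = np.asarray(signal, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)          # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)          # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft shrink
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

Small detail coefficients (mostly noise) are shrunk to zero while large ones (edges) survive; the curvelet's advantage claimed above is that curved edges concentrate into fewer large coefficients than they do under wavelets.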
Segmentation Based Multilevel Wide Band Compression for SAR Images Using Coif... (CSC Journals)
Synthetic aperture radar (SAR) data represents a significant source of information for a large variety of researchers, so there is strong interest in developing encoding and decoding algorithms that achieve higher compression ratios while keeping image quality at an acceptable level. In this work, the results of different wavelet-based image compression methods and of segmentation-based wavelet image compression are assessed through controlled experiments on synthetic SAR images. The effects of dissimilar wavelet functions and numbers of decompositions are examined in order to find the optimal family for SAR images. The optimal wavelet choice in segmentation-based wavelet image compression is the coiflet, for both the low-frequency and the high-frequency components. The results presented here are a good reference for SAR application developers choosing wavelet families, and they show that the wavelet transform is a rapid, robust, and reliable tool for SAR image compression. Numerical results confirm the potency of this approach.
Optimization of Aberrated Coherent Optical Systems (IOSR Journals)
The images of a straight edge in coherent illumination, produced by an optical system with a circular aperture apodised with amplitude filters, have been studied. Image quality assessment parameters such as edge ringing, edge gradient, and edge shift of the edge fringes have been studied as functions of the apodisation parameter for various degrees of defocus, coma, and primary spherical aberration. It is found that, at certain combinations of aberrations, the quality of the image of straight-edge objects can be improved.
Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelet... (CSC Journals)
The curvelet denoising approach has been widely used in many fields for its ability to produce high-quality result images. The curvelet transform is superior to the wavelet in expressing image edges, such as the geometric characteristics of curves, and has already achieved good results in image denoising. However, the artifacts that appear in the results of the curvelet approach prevent its application in some fields, such as medical imaging. This paper puts forward a fusion-based method because certain regions of the image exhibit ringing and radial stripes after the curvelet transform. The experimental results indicate that the fusion method has a broad future in eliminating image noise. The results of the algorithm applied to ultrasonic medical images also indicate that it can be used efficiently in medical imaging.
Removing noise from the Medical image is still a challenging problem for researchers. Noise added is not easy to remove from the images. There have been several published algorithms and each approach has its assumptions, advantages, and limitations. This paper summarizes the major techniques to denoise the medical images and finds the one is better for image denoising. We can conclude that the Multiwavelet technique with Soft threshold is the best technique for image denoising.
Accelerated Joint Image Despeckling Algorithm in the Wavelet and Spatial DomainsCSCJournals
Noise is one of the most widespread problems present in nearly all imaging applications. In spite of the sophistication of the recently proposed methods, most denoising algorithms have not yet attained a desirable level of applicability. This paper proposes a two-stage algorithm for speckle noise reduction jointly in the wavelet and spatial domains. At the first stage, the optimal parameter value of the spatial speckle reduction filter is estimated, based on edge pixel statistics and noise variance. Then the optimized filter is used at the second stage to additionally smooth the approximation image of the wavelet sub-band. A complexity reduction algorithm for wavelet decomposition is also proposed. The obtained results are highly encouraging in terms of image quality which paves the way towards the reinforcement of the proposed algorithm for the performance enhancement of the Block Matching and 3D Filtering algorithm tackling multiplicative speckle noise.
An Application of Second Generation Wavelets for Image Denoising using Dual T...IDES Editor
The lifting scheme of the discrete wavelet transform
(DWT) is now quite well established as an efficient technique
for image denoising. The lifting scheme factorization of
biorthogonal filter banks is carried out with a linear-adaptive,
delay free and faster decomposition arithmetic. This adaptive
factorization is aimed to achieve a well transparent, more
generalized, complexity free fast decomposition process in
addition to preserve the features that an ordinary wavelet
decomposition process offers. This work is targeted to get
considerable reduction in computational complexity and power
required for decomposition. The hard striking demerits of
DWT structure viz., shift sensitivity and poor directionality
had already been proven to be washed out with an emergence
of dual tree complex wavelet (DT-CWT) structure. The well
versed features of DT-CWT and robust lifting scheme are
suitably combined to achieve an image denoising with prolific
rise in computational speed and directionality, also with a
desirable drop in computation time, power and complexity of
algorithm compared to all other techniques.
WAVELET THRESHOLDING APPROACH FOR IMAGE DENOISINGIJNSA Journal
The original image corrupted by Gaussian noise is a long established problem in signal or image processing .This noise is removed by using wavelet thresholding by focused on statistical modelling of wavelet coefficients and the optimal choice of thresholds called as image denoising . For the first part, threshold is driven in a Bayesian technique to use probabilistic model of the image wavelet coefficients that are dependent on the higher order moments of generalized Gaussian distribution (GGD) in image processing applications. The proposed threshold is very simple. Experimental results show that the proposed method is called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image. It outperforms Donoho and Johnston Sure Shrink. The second part of the paper is attempt to claim on lossy compression can be used for image denoising .thus achieving the image compression & image denoising simultaneously. The parameter is choosing based on a criterion derived from Rissanen’s minimum description length (MDL) principle. Experiments show that this compression & denoise method does indeed remove noise significantly, especially for large noise power.
IMPROVEMENT OF BM3D ALGORITHM AND EMPLOYMENT TO SATELLITE AND CFA IMAGES DENO...ijistjournal
This paper proposes a new procedure in order to improve the performance of block matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that it is possible to achieve a better performance than that of BM3D algorithm in a variety of noise levels. This method changes BM3D algorithm parameter values according to noise level, removes prefiltering, which is used in high noise level; therefore Peak Signal-to-Noise Ratio (PSNR) and visual quality get improved, and BM3D complexities and processing time are reduced. This improved BM3D algorithm is extended and used to denoise satellite and color filter array (CFA) images. Output results show that the performance has upgraded in comparison with current methods of denoising satellite and CFA images. In this regard this algorithm is compared with Adaptive PCA algorithm, that has led to superior performance for denoising CFA images, on the subject of PSNR and visual quality. Also the processing time has decreased significantly.
IMPROVEMENT OF BM3D ALGORITHM AND EMPLOYMENT TO SATELLITE AND CFA IMAGES DENO...ijistjournal
This paper proposes a new procedure in order to improve the performance of block matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that it is possible to achieve a better performance than that of BM3D algorithm in a variety of noise levels. This method changes BM3D algorithm parameter values according to noise level, removes prefiltering, which is used in high noise level; therefore Peak Signal-to-Noise Ratio (PSNR) and visual quality get improved, and BM3D complexities and processing time are reduced. This improved BM3D algorithm is extended and used to denoise satellite and color filter array (CFA) images. Output results show that the performance has upgraded in comparison with current methods of denoising satellite and CFA images. In this regard this algorithm is compared with Adaptive PCA algorithm, that has led to superior performance for denoising CFA images, on the subject of PSNR and visual quality. Also the processing time has decreased significantly.
The complexity of Medical image reconstruction requires tens to hundreds of billions of computations per second. Until few years ago, special purpose processors designed especially for such applications were used. Such processors require significant design effort and are thus difficult to change as new algorithms in reconstructions evolve and have limited parallelism. Hence the demand for flexibility in medical applications motivated the use of stream processors with massively parallel architecture. Stream processing architectures offers data parallel kind of parallelism.
As data processing requirements increased with new applications, new processing technologies like Stream computing and parallel execution came into being. This write‐up briefly compares two competing performance architectures for data parallelism – Cell Broadband Engine (Cell BE in short) and the GPU (Graphics Processing Unit). The Cell BE Processor architecture was developed in collaboration between IBM, Sony and Toshiba. Development started in 2001 and first set of products based on this architecture started appearing in 2005.
A Set-top-Box (STB) is a very common name heard in the consumer electronics market. It is a device that is attached to a Television for enhancing its functions or the quality of its functions. On the other side, the STB is connected to an external source of signal, like satellite, cable, terrestrial or internet. The STB processes the signal it receives, turns it into content, which is then displayed on the television screen or other display device. There are different types of STBs based on what kind of signals it can receive and what kind of processing it can do. The most widely used STBs are DVB STBs, which receive DVB (Digital Video Broadcast) transmission.
Fast and robust tracking of multiple faces is receiving increased attention from computer vision researchers as it finds potential applications in many fields like video surveillance and computer mediated video conferencing. Real-time tracking of multiple faces in high resolution videos involve three basic tasks namely initialization, tracking and display. Among these, tracking is quite compute intensive as it involves particle filtering that won’t yield a real time performance if we use a conventional CPU based system alone.
This paper presents a study of the efficiency and performance speedup achieved by applying Graphics Processing Units for Face Recognition Solutions. We explore one of the possibilities of parallelizing and optimizing a well-known Face Recognition algorithm, Principal Component Analysis (PCA) with Eigenfaces. In recent years, the Graphics Processing Units (GPU) has been the subject of extensive research and the computation speed of GPUs has been rapidly increasing.
In today’s competitive software development scenario, the customer demands a testing coverage which not only ensures the stated requirements but also the implied ones. This situation calls for an exhaustive testing which may not be always possible due to various reasons. Testing, due to its last position in SDLC, often gets crunched due to the cumulative schedule slippages. Hence Tester is faced with a challenge to make testing as efficient as possible within a short time span due to cost constraints. With selective testing an only option, test leads usually go for the age-old approach of Random Testing. Random testing does not ensure coverage in a scientific manner.
SOM (Self-Organizing Map) is one of the most popular artificial neural network algorithms in the unsupervised learning category. For efficient construction of large maps searching the best-matching unit is usually the computationally heaviest operation in the SOM. The parallel nature of the algorithm and the huge computations involved makes it a good target for GPU based parallel implementation. This paper presents an overall idea of the optimization strategies used for the parallel implementation of Basic-SOM on GPU using CUDA programming paradigm.
This paper provides an overview of Universal Plug and Play (UPnP) and how it works to build a digital home network. UPnP network technology allows personal computer and consumer electronics devices to advertise and offer their services to network clients. UPnP can be viewed as the technological foundation of the digital home, enabling innovative usage models, higher levels of automation, and easier integration of devices from different manufacturers. UPnP technology is all about making home networking simple and affordable for users.
In this paper we present a recently developed tool named BrainAssist, which can be used for the study and analysis of brain abnormalities like Focal Cortical Dysplasia (FCD), Heterotopia and Multiple Sclerosis (MS). For the analysis of FCD and Heterotopia we used T1 weighted MR images and for the analysis of Multiple Sclerosis we used Proton Density (PD) images. 52 patients were studied. Out of 52 cases 36 were affected with FCDs, 6 with MS lesions and 10 normal cases. Preoperative MR images were acquired on a 1.5-T scanner (Siemens Medical Systems, Germany).
Identification of Focal Cortical Dysplasia (FCD) can be difficult due to the subtle MRI changes. Though sequences like FLAIR (fluid attenuated inversion recovery) can detect a large majority of these lesions, there are smaller lesions without signal changes that can easily go unnoticed by the naked eye. The aim of this study is to improve the visibility of Focal Cortical Dysplasia lesions in the T1 weighted brain MRI images. In the proposed method, we used a complex diffusion based approach for calculating the FCD affected areas.
Software Defined Networking (SDN) is an emerging trend in the networking and communication industry and promises to deliver enormous benefits, from reduced costs to more efficient network operations. It is a new approach that gives network operators and owners more control of the infrastructure, allowing optimization, customization and virtualization that enable the creation of new types of network services. This is done by decoupling the management and control planes that make decisions about where traffic is sent from (the control plane) the underlying hardware that forwards data traffic to the selected destination (the data plane).
Software Testing is the last phase in software development lifecycle which has high impact on the quality of the final product delivered to the customer. Even after being a critical phase, it was not given the importance as it actually deserves. The schedule constraints and slippage carry forwarded from the previous phase also make the testing phase more torrent. History reveals that the situation has changed with time, wherein testing is now visualized as one of the most critical, phase of software development. This makes software testing a discipline which demands for continuous and systematic growth. Software testing is a trade-off between Cost, Time and Quality.
Test Automation is an accepted technique which is adapted by the industry for increasing the effectiveness of the testing phase. The recurring tasks are being automated by the tools thus simplifying the human efforts and results in increased quality of product under test. A study of test automation programmes in the industry reveals the fact that a good percentage of them fail to find the intended results.
Complex digital and analog circuits and multiple clock signals used for design and development of modern systems usually make the job of engineers and designers a tedious one. While working with complex circuits and signals, a designer might encounter problems with circuit validation due to long simulation time. These complexities adversely affect the development time and hence increase time to market incurring higher production costs. By applying a new methodology in their Digital Phase-Locked Loop (Digital PLL) design, the engineers at QuEST reduced the simulation effort to one-by-third.
In software industry, test automation is a key solution for achieving volume verification and validation with optimal costs. Picking up the right automation tool and underlying scripting language has always been a challenge, balancing between cost factors and team’s expertise levels in various tools and scripting languages. A real solution would be one that allows full flexibility for team on these two core concern areas – test automation tool and scripting language. Flexi any Script any Tool (FaSaT) is a test automation framework which provides interoperability among multiple test automation tools and multiple scripting languages.
Analog-to-Digital Converter (ADC) is an integral part of high-speed signal processing applications. This paper discusses about 10-bit SAR based ADC that enables very low power consumption and sampling rate as high as 165 MSPS.
Ground breaking innovations like Advanced Driver Assistance System (ADAS) makes driving easier and safer on congested roads. The whitepaper details how FPGA technology emerges as a complete solution for ADAS.
Reusable Video IP Cores give software engineering service providers flexibility and less time to market while catering to the ever increasing demands of customers. Read on to know more about the Reusable IP Cores developed by NeST Software.
More from QuEST Global (erstwhile NeST Software) (18)
Explore our infographic on 'Essential Metrics for Palliative Care Management' which highlights key performance indicators crucial for enhancing the quality and efficiency of palliative care services.
This visual guide breaks down important metrics across four categories: Patient-Centered Metrics, Care Efficiency Metrics, Quality of Life Metrics, and Staff Metrics. Each section is designed to help healthcare professionals monitor and improve care delivery for patients facing serious illnesses. Understand how to implement these metrics in your palliative care practices for better outcomes and higher satisfaction levels.
One of the most developed cities of India, the city of Chennai is the capital of Tamilnadu and many people from different parts of India come here to earn their bread and butter. Being a metropolitan, the city is filled with towering building and beaches but the sad part as with almost every Indian city
Telehealth Psychology Building Trust with Clients.pptxThe Harvest Clinic
Telehealth psychology is a digital approach that offers psychological services and mental health care to clients remotely, using technologies like video conferencing, phone calls, text messaging, and mobile apps for communication.
The Importance of Community Nursing Care.pdfAD Healthcare
NDIS and Community 24/7 Nursing Care is a specific type of support that may be provided under the NDIS for individuals with complex medical needs who require ongoing nursing care in a community setting, such as their home or a supported accommodation facility.
Antibiotic Stewardship by Anushri Srivastava.pptxAnushriSrivastav
Stewardship is the act of taking good care of something.
Antimicrobial stewardship is a coordinated program that promotes the appropriate use of antimicrobials (including antibiotics), improves patient outcomes, reduces microbial resistance, and decreases the spread of infections caused by multidrug-resistant organisms.
WHO launched the Global Antimicrobial Resistance and Use Surveillance System (GLASS) in 2015 to fill knowledge gaps and inform strategies at all levels.
ACCORDING TO apic.org,
Antimicrobial stewardship is a coordinated program that promotes the appropriate use of antimicrobials (including antibiotics), improves patient outcomes, reduces microbial resistance, and decreases the spread of infections caused by multidrug-resistant organisms.
ACCORDING TO pewtrusts.org,
Antibiotic stewardship refers to efforts in doctors’ offices, hospitals, long term care facilities, and other health care settings to ensure that antibiotics are used only when necessary and appropriate
According to WHO,
Antimicrobial stewardship is a systematic approach to educate and support health care professionals to follow evidence-based guidelines for prescribing and administering antimicrobials
In 1996, John McGowan and Dale Gerding first applied the term antimicrobial stewardship, where they suggested a causal association between antimicrobial agent use and resistance. They also focused on the urgency of large-scale controlled trials of antimicrobial-use regulation employing sophisticated epidemiologic methods, molecular typing, and precise resistance mechanism analysis.
Antimicrobial Stewardship(AMS) refers to the optimal selection, dosing, and duration of antimicrobial treatment resulting in the best clinical outcome with minimal side effects to the patients and minimal impact on subsequent resistance.
According to the 2019 report, in the US, more than 2.8 million antibiotic-resistant infections occur each year, and more than 35000 people die. In addition to this, it also mentioned that 223,900 cases of Clostridoides difficile occurred in 2017, of which 12800 people died. The report did not include viruses or parasites
VISION
Being proactive
Supporting optimal animal and human health
Exploring ways to reduce overall use of antimicrobials
Using the drugs that prevent and treat disease by killing microscopic organisms in a responsible way
GOAL
to prevent the generation and spread of antimicrobial resistance (AMR). Doing so will preserve the effectiveness of these drugs in animals and humans for years to come.
being to preserve human and animal health and the effectiveness of antimicrobial medications.
to implement a multidisciplinary approach in assembling a stewardship team to include an infectious disease physician, a clinical pharmacist with infectious diseases training, infection preventionist, and a close collaboration with the staff in the clinical microbiology laboratory
to prevent antimicrobial overuse, misuse and abuse.
to minimize the developme
CHAPTER 1 SEMESTER V - ROLE OF PEADIATRIC NURSE.pdfSachin Sharma
Pediatric nurses play a vital role in the health and well-being of children. Their responsibilities are wide-ranging, and their objectives can be categorized into several key areas:
1. Direct Patient Care:
Objective: Provide comprehensive and compassionate care to infants, children, and adolescents in various healthcare settings (hospitals, clinics, etc.).
This includes tasks like:
Monitoring vital signs and physical condition.
Administering medications and treatments.
Performing procedures as directed by doctors.
Assisting with daily living activities (bathing, feeding).
Providing emotional support and pain management.
2. Health Promotion and Education:
Objective: Promote healthy behaviors and educate children, families, and communities about preventive healthcare.
This includes tasks like:
Administering vaccinations.
Providing education on nutrition, hygiene, and development.
Offering breastfeeding and childbirth support.
Counseling families on safety and injury prevention.
3. Collaboration and Advocacy:
Objective: Collaborate effectively with doctors, social workers, therapists, and other healthcare professionals to ensure coordinated care for children.
Objective: Advocate for the rights and best interests of their patients, especially when children cannot speak for themselves.
This includes tasks like:
Communicating effectively with healthcare teams.
Identifying and addressing potential risks to child welfare.
Educating families about their child's condition and treatment options.
4. Professional Development and Research:
Objective: Stay up-to-date on the latest advancements in pediatric healthcare through continuing education and research.
Objective: Contribute to improving the quality of care for children by participating in research initiatives.
This includes tasks like:
Attending workshops and conferences on pediatric nursing.
Participating in clinical trials related to child health.
Implementing evidence-based practices into their daily routines.
By fulfilling these objectives, pediatric nurses play a crucial role in ensuring the optimal health and well-being of children throughout all stages of their development.
How many patients does case series should have In comparison to case reports.pdfpubrica101
Pubrica’s team of researchers and writers create scientific and medical research articles, which may be important resources for authors and practitioners. Pubrica medical writers assist you in creating and revising the introduction by alerting the reader to gaps in the chosen study subject. Our professionals understand the order in which the hypothesis topic is followed by the broad subject, the issue, and the backdrop.
https://pubrica.com/academy/case-study-or-series/how-many-patients-does-case-series-should-have-in-comparison-to-case-reports/
India Clinical Trials Market: Industry Size and Growth Trends [2030] Analyzed...Kumar Satyam
According to TechSci Research report, "India Clinical Trials Market- By Region, Competition, Forecast & Opportunities, 2030F," the India Clinical Trials Market was valued at USD 2.05 billion in 2024 and is projected to grow at a compound annual growth rate (CAGR) of 8.64% through 2030. The market is driven by a variety of factors, making India an attractive destination for pharmaceutical companies and researchers. India's vast and diverse patient population, cost-effective operational environment, and a large pool of skilled medical professionals contribute significantly to the market's growth. Additionally, increasing government support in streamlining regulations and the growing prevalence of lifestyle diseases further propel the clinical trials market.
Growing Prevalence of Lifestyle Diseases
The rising incidence of lifestyle diseases such as diabetes, cardiovascular diseases, and cancer is a major trend driving the clinical trials market in India. These conditions necessitate the development and testing of new treatment methods, creating a robust demand for clinical trials. The increasing burden of these diseases highlights the need for innovative therapies and underscores the importance of India as a key player in global clinical research.
CHAPTER 1 SEMESTER V PREVENTIVE-PEDIATRICS.pdfSachin Sharma
This content provides an overview of preventive pediatrics. It defines preventive pediatrics as preventing disease and promoting children's physical, mental, and social well-being to achieve positive health. It discusses antenatal, postnatal, and social preventive pediatrics. It also covers various child health programs like immunization, breastfeeding, ICDS, and the roles of organizations like WHO, UNICEF, and nurses in preventive pediatrics.
Speckle Reduction in Images with WEAD and WECD 185
$$F(g) = \frac{g^{\alpha-1}\, e^{-g/a}}{(\alpha-1)!\; a^{\alpha}} \qquad (1)$$

where g is the gray level and α is the variance. Fig. 1 shows the plot of the speckle noise distribution.

Fig. 1. Plot of speckle noise distribution
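For illustration, multiplicative speckle with a Gamma-distributed noise term in the spirit of eq. (1) can be simulated as below. The shape parameter and the unit-mean scaling are assumptions of this sketch, not values from the paper:

```python
import numpy as np

def add_speckle(img, alpha=4, rng=None):
    """Illustrative multiplicative speckle model: each pixel is multiplied
    by unit-mean Gamma noise with shape `alpha` (an integer-shape Gamma
    density of the same family as eq. (1)). `alpha` and the unit-mean
    scaling are assumptions of this sketch."""
    if rng is None:
        rng = np.random.default_rng(0)
    noise = rng.gamma(shape=alpha, scale=1.0 / alpha, size=img.shape)  # mean 1
    return img * noise

# A constant image acquires speckle: the mean is preserved on average,
# while the variance grows with the image intensity.
flat = np.full((64, 64), 100.0)
speckled = add_speckle(flat, alpha=4)
```

Note that larger `alpha` corresponds to weaker speckle, since the variance of the unit-mean multiplier is 1/alpha.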
A number of methods have been proposed in the literature for removing speckle from ultrasound images. Popular methods among them are the Lee, Frost, Kuan, Gamma and SRAD filters. The Lee and Kuan filters have the same formation, although the signal model assumptions and the derivations are different. Essentially, both the Lee and Kuan filters form an output image by computing a linear combination of the center pixel intensity in a filter window with the average intensity of the window. The filter thus achieves a balance between straightforward averaging (in homogeneous regions) and the identity filter (where edges and point features exist). This balance depends on the coefficient of variation inside the moving window [2].

The Frost filter also strikes a balance between averaging and the all-pass filter. In this case, the balance is achieved by forming an exponentially shaped filter kernel that can vary from a basic average filter to an identity filter on a pointwise, adaptive basis. Again, the response of the filter varies locally with the coefficient of variation. For a low coefficient of variation the filter is more average-like, while for a high coefficient of variation the filter attempts to preserve sharp features by not averaging. The Gamma filter is a Maximum A Posteriori (MAP) filter based on a Bayesian analysis of the image statistics [1]. Speckle Reducing Anisotropic Diffusion (SRAD) is an edge-sensitive diffusion method for speckled images [2].
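The Lee/Kuan weighting idea described above can be sketched as follows. This is a minimal illustrative implementation assuming the additive-noise form of the Lee filter; the window size and the `noise_var` estimate are arbitrary choices of this sketch, not values from the paper:

```python
import numpy as np

def lee_filter(img, win=5, noise_var=0.01):
    """Sketch of the Lee filter: blend each pixel with the local window
    mean, weighted by the local variance. In flat regions the weight is
    small (output ~ window mean, i.e. averaging); near edges the local
    variance dominates and the pixel keeps its own value (identity)."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            m, v = w.mean(), w.var()
            k = v / (v + noise_var)       # ~0 in flat areas, ~1 at edges
            out[i, j] = m + k * (img[i, j] - m)
    return out

# Flat noisy regions are smoothed toward their mean; a strong step edge survives.
rng = np.random.default_rng(0)
noisy = np.ones((16, 16)) + 0.1 * rng.standard_normal((16, 16))
noisy[:, 8:] += 5.0                       # step edge between columns 7 and 8
den = lee_filter(noisy, win=5, noise_var=0.01)
```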
Wavelet Embedded Anisotropic Diffusion (WEAD) [8] and Wavelet Embedded Complex Diffusion (WECD) [9] are extensions of nonlinear anisotropic and complex diffusion, obtained by adding Bayesian shrinkage as a second stage. These methods increase the speed of processing and produce better quality images than their parent methods.
The paper is organized as follows. Section 2 deals with diffusion techniques for removing noise from images; it mainly discusses anisotropic and complex diffusion. Section 3 explains the recently proposed WEAD and WECD and their capability to remove speckle noise. Experimental results and a comparative analysis with other popular methods are given in Section 4. Finally, conclusions and remarks are presented in Section 5.
186 J. Rajan and M.R. Kaimal
2 Noise Removal with Diffusion Techniques
Diffusion is a physical process that equilibrates concentration differences without creating or destroying mass [10]. This equilibrium property can be expressed by Fick's law

$$j = -D\,\nabla u \qquad (2)$$
This equation states that a concentration gradient ∇u causes a flux j which aims to compensate for this gradient. The relation between ∇u and j is described by the diffusion tensor D, a positive definite symmetric matrix. The case where j and ∇u are parallel is called isotropic; then we may replace the diffusion tensor by a positive scalar-valued diffusivity g. In the general, anisotropic case, j and ∇u are not parallel. The observation that diffusion only transports mass without destroying it or creating new mass is expressed by the continuity equation
$$\frac{\partial u}{\partial t} = -\,\operatorname{div} j \qquad (3)$$
where t denotes the time. If we substitute Fick's law into the continuity equation, we obtain the diffusion equation, i.e.,

$$\frac{\partial u}{\partial t} = \operatorname{div}(D\,\nabla u) \qquad (4)$$
This equation appears in many physical transport processes. In the context of heat transfer, it is called the heat equation [10]. When applied to an image, linear diffusion generates scale-space images, each more smoothed than the previous one. By smoothing an image, noise can be removed to some extent; this is why linear diffusion is used for noise removal. One problem with this method, however, is its inability to preserve image structures.
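The scale-space smoothing just described can be sketched with an explicit finite-difference scheme for eq. (4) with a constant scalar diffusivity. This is an illustrative sketch (periodic boundaries, step size and iteration count chosen arbitrarily), not the paper's implementation:

```python
import numpy as np

def linear_diffusion(img, steps=20, dt=0.2):
    """Explicit Euler scheme for the heat equation u_t = laplacian(u),
    i.e. eq. (4) with constant scalar diffusivity. Each step produces a
    smoother scale-space image. Periodic boundaries for brevity; the
    scheme is stable for dt <= 0.25 with the 5-point Laplacian."""
    u = img.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u += dt * lap
    return u

# Noise variance shrinks while the mean (total mass) is conserved,
# illustrating both the smoothing and the mass-preservation property.
rng = np.random.default_rng(0)
noisy = 1.0 + 0.1 * rng.standard_normal((16, 16))
smooth = linear_diffusion(noisy, steps=20, dt=0.2)
```

The same scheme applied to an image with edges blurs them indiscriminately, which is exactly the defect the next section addresses.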
2.1 Anisotropic Diffusion

To avoid the defects of linear diffusion (especially the inability to preserve edges and the tendency to perform inter-region smoothing before intra-region smoothing), nonlinear partial differential equations can be used. In [11], Perona and Malik gave three necessary conditions for generating multiscale semantically meaningful images:

1. Causality: the scale-space representation should have the property that no spurious detail is generated when passing from finer to coarser scales.
2. Immediate localization: at each resolution, the region boundaries should be sharp and coincide with the semantically meaningful boundaries at that resolution.
3. Piecewise smoothing: at all scales, intra-region smoothing should occur preferentially over inter-region smoothing.

Linear diffusion in particular does not satisfy the third condition, which can be overcome by using a nonlinear one. Among the nonlinear diffusions, the one proposed by
Perona and Malik [11] and its variants are the most popular. They proposed a nonlinear diffusion method for avoiding the blurring and localization problems of linear diffusion filtering. There has been a great deal of interest in this anisotropic diffusion as a useful tool for multiscale description of images, image segmentation, edge detection and image enhancement [12]. The basic idea behind anisotropic diffusion is to evolve from an original image $u_0(x,y)$, defined in a convex domain Ω ⊂ R × R, a family of increasingly smooth images u(x,y,t) derived from the solution of the following partial differential equation [11]:

$$\frac{\partial u}{\partial t} = \operatorname{div}\!\big[c(\lvert\nabla u\rvert)\,\nabla u\big], \qquad u(x,y,0) = u_0(x,y) \qquad (5)$$
where ∇u is the gradient of the image u, div is the divergence operator and c is the diffusion coefficient. The desirable diffusion coefficient c(·) should be such that equation (5) diffuses more in smooth areas and less around intensity transitions, so that small variations in image intensity such as noise and unwanted texture are smoothed while edges are preserved. Another objective in the selection of c(·) is to incur backward diffusion around intensity transitions so that edges are sharpened, and to assure forward diffusion in smooth areas for noise removal [12]. Some of the previously employed diffusivity functions are [13]:
A. Linear diffusivity [14]:
$$c(s) = 1 \qquad (6)$$

B. Charbonnier diffusivity [15]:
$$c(s) = \frac{1}{\sqrt{1 + s^2/k^2}} \qquad (7)$$

C. Perona–Malik diffusivity [11]:
$$c(s) = \frac{1}{1 + \left(s/k\right)^2} \qquad (8)$$
$$c(s) = \exp\!\left(-\left(s/k\right)^2\right) \qquad (9)$$

D. Weickert diffusivity [10]:
$$c(s) = \begin{cases} 1, & s = 0 \\ 1 - \exp\!\left(\dfrac{-3.31488}{(s/k)^{8}}\right), & s > 0 \end{cases} \qquad (10)$$

E. TV diffusivity [16]:
$$c(s) = \frac{1}{s} \qquad (11)$$

F. BFB diffusivity [17]:
$$c(s) = \frac{1}{s^2} \qquad (12)$$
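A minimal sketch of Perona–Malik diffusion, eq. (5), using the diffusivity of eq. (8). The parameter values (k, step size, iteration count) and the periodic boundary handling are illustrative assumptions of this sketch:

```python
import numpy as np

def perona_malik(img, steps=20, dt=0.2, k=0.1):
    """Perona-Malik anisotropic diffusion with c(s) = 1 / (1 + (s/k)^2),
    eq. (8). Flux toward each neighbour is scaled by the diffusivity of
    the corresponding gradient magnitude, so strong edges (gradient >> k)
    block diffusion while flat regions are smoothed."""
    u = img.astype(float).copy()
    c = lambda g: 1.0 / (1.0 + (g / k) ** 2)   # eq. (8)
    for _ in range(steps):
        # one-sided differences to the four neighbours (periodic boundaries)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (c(np.abs(dn)) * dn + c(np.abs(ds)) * ds +
                   c(np.abs(de)) * de + c(np.abs(dw)) * dw)
    return u

# A noisy step edge: the flat sides get smoothed, the edge contrast survives.
rng = np.random.default_rng(0)
step = np.zeros((16, 16)); step[:, 8:] = 1.0
noisy = step + 0.02 * rng.standard_normal((16, 16))
den = perona_malik(noisy, steps=20, dt=0.2, k=0.1)
```

The contrast parameter k separates "noise gradients" (smoothed) from "edge gradients" (preserved); choosing it is the practical difficulty the diffusivity functions above address in different ways.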
2.2 Complex Diffusion
In 1931, Schrödinger explored the possibility that one might use diffusion theory as a starting point for the derivation of the equations of quantum theory. These ideas were developed by Fuerth, who indicated that the Schrödinger equation could be derived from the diffusion equation by introducing a relation between the diffusion coefficient and Planck's constant, and stipulating that the probability amplitude of quantum theory should be given by the resulting differential equation [18]. It has been the goal of a variety of subsequent approaches to derive the probabilistic equations of quantum mechanics from equations involving probabilistic or stochastic processes. The time-dependent Schrödinger equation is the fundamental equation of quantum mechanics. In the simplest case, for a particle without spin in an external field, it has the form [19]
iℏ ∂ψ/∂t = −(ℏ²/2m)Δψ + V(x)ψ   (13)
where ψ = ψ(t,x) is the wave function of a quantum particle, m is the mass of the
particle, ℏ is Planck's constant, V(x) is the external field potential, Δ is the Laplacian
and i = √−1. With the initial condition ψ|t=0 = ψ₀(x), and requiring that ψ(t,·) ∈ L² for
each fixed t, the solution is ψ(t,·) = e^(−(i/ℏ)tH) ψ₀, where the exponent is shorthand for
the corresponding power series, the higher order terms being defined recursively by
Hⁿψ = H(Hⁿ⁻¹ψ). The operator
H = −(ℏ²/2m)Δ + V(x)   (14)
called the Schrödinger operator, is interpreted as the energy operator of the particle
under consideration. The first term is the kinetic energy and the second is the potential
energy. The duality relations that exist between the Schrödinger equation and
diffusion theory have been studied in [9]. The standard linear diffusion equation is as
in (4). From (13) and (4) we can derive the following two equations:
I_R,t = C_R·I_R,xx − C_I·I_I,xx,   I_R|t=0 = I₀   (15)

I_I,t = C_I·I_R,xx + C_R·I_I,xx,   I_I|t=0 = 0   (16)

where I_R is the image obtained in the real plane and I_I is the image obtained in the
imaginary plane at time t, with C_R = cos(θ) and C_I = sin(θ). The relation
I_R,xx ≫ θ·I_I,xx holds under the small-theta approximation [8]:

I_R,t ≈ I_R,xx ;   I_I,t ≈ I_I,xx + θ·I_R,xx   (17)

In (17), I_R is controlled by a linear forward diffusion equation, whereas I_I is affected
by both the real and imaginary equations. The above method is the linear complex
diffusion equation.
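The split into (15) and (16) follows from substituting I = I_R + i·I_I and c = e^(iθ) into the complex heat equation I_t = c·I_xx and separating real and imaginary parts; in our notation:

```latex
% Substituting I = I_R + i I_I and c = e^{i\theta} into I_t = c\,I_{xx}:
I_{R,t} + i\,I_{I,t}
  = (\cos\theta + i\sin\theta)\,(I_{R,xx} + i\,I_{I,xx})
  = \underbrace{(\cos\theta\,I_{R,xx} - \sin\theta\,I_{I,xx})}_{\text{real part, eqn (15)}}
  + i\,\underbrace{(\sin\theta\,I_{R,xx} + \cos\theta\,I_{I,xx})}_{\text{imaginary part, eqn (16)}}
```

Matching real and imaginary parts, with C_R = cos θ and C_I = sin θ, gives (15) and (16); the small-θ limit cos θ ≈ 1, sin θ ≈ θ then gives (17).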
Speckle Reduction in Images with WEAD and WECD
A more efficient nonlinear complex diffusion can be written as in eqn. (18) [19]:

I_t = ∇·(c(Im(I))∇I)   (18)

where

c(Im(I)) = e^(iθ) / (1 + (Im(I)/(kθ))²)   (19)

and k is the threshold parameter. Nonlinear complex diffusion seems to be more
efficient than linear complex diffusion in terms of preserving edges.
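A single explicit step of the nonlinear scheme (18)–(19) can be sketched in Python/NumPy as follows; the discretization (forward-difference gradient, backward-difference divergence, zero-flux boundaries) and the parameter values are our own illustrative choices, not prescribed by the paper:

```python
import numpy as np

def complex_diffusion_step(I, theta=np.pi / 30, k=2.0, dt=0.1):
    """One explicit Euler step of I_t = div(c(Im(I)) * grad(I)), eqn (18),
    with c(Im(I)) = exp(i*theta) / (1 + (Im(I)/(k*theta))**2), eqn (19)."""
    I = np.asarray(I, dtype=np.complex128)
    c = np.exp(1j * theta) / (1.0 + (I.imag / (k * theta)) ** 2)
    # forward differences for the gradient (edge padding => zero gradient at the border)
    Ix = np.diff(np.pad(I, ((0, 0), (0, 1)), mode='edge'), axis=1)
    Iy = np.diff(np.pad(I, ((0, 1), (0, 0)), mode='edge'), axis=0)
    # backward differences of the flux give the divergence (zero flux outside the image)
    div = (np.diff(np.pad(c * Ix, ((0, 0), (1, 0)), mode='constant'), axis=1)
           + np.diff(np.pad(c * Iy, ((1, 0), (0, 0)), mode='constant'), axis=0))
    return I + dt * div
```

The zero-flux boundary handling makes the step mass-preserving: the sum of the divergence telescopes to the (zero) boundary flux, so the image mean is unchanged while the real part is smoothed.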
3 WEAD and WECD
Both WEAD and WECD are improvements of anisotropic and complex diffusion
obtained by adding BayesShrink [20] as a second stage. In the case of WEAD, Bayesian
shrinkage of the non-linearly diffused signal is taken. The equation can be written as

I_n = B_s(I′_{n−1})   (20)

and in the case of WECD the Bayesian shrinkage of the real part of the non-linearly
complex diffused signal is taken. The equation can be written as

I_n = B_s(R_c(I′_{n−1}))   (21)

where B_s is the Bayesian shrinkage operator, I′_{n−1} is the anisotropic diffusion as in
(5) at the (n−1)th time step, and R_c(I′_{n−1}) is the real part of the non-linearly
diffused complex image. Numerically, (20) and (21) can be written as

I_n = B_s(I_{n−1} + d_n Δt)   (22)

and

I_n = B_s(R_c(I_{n−1} + d_n Δt))   (23)

respectively.
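To make the scheme in (22) concrete, here is a minimal Python/NumPy sketch of one WEAD iteration: an explicit Perona–Malik diffusion step followed by BayesShrink soft thresholding. To keep it self-contained we use a one-level Haar transform in place of the full wavelet decomposition of [20], so the function names, parameter values, and the Haar substitution are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def diffusion_step(I, k=0.05, dt=0.1):
    # one explicit Perona-Malik step, c(s) = 1/(1+(s/k)^2), zero-flux boundaries
    Ix = np.diff(np.pad(I, ((0, 0), (0, 1)), mode='edge'), axis=1)
    Iy = np.diff(np.pad(I, ((0, 1), (0, 0)), mode='edge'), axis=0)
    c = 1.0 / (1.0 + (Ix ** 2 + Iy ** 2) / k ** 2)
    return I + dt * (np.diff(np.pad(c * Ix, ((0, 0), (1, 0)), mode='constant'), axis=1)
                     + np.diff(np.pad(c * Iy, ((1, 0), (0, 0)), mode='constant'), axis=0))

def bayes_shrink_haar(I):
    """BayesShrink on a one-level 2-D Haar transform (assumes even dimensions)."""
    A, B = I[0::2, 0::2], I[0::2, 1::2]
    C, D = I[1::2, 0::2], I[1::2, 1::2]
    a = (A + B + C + D) / 2.0          # approximation band
    h = (A - B + C - D) / 2.0          # horizontal detail
    v = (A + B - C - D) / 2.0          # vertical detail
    d = (A - B - C + D) / 2.0          # diagonal detail
    sigma_n = np.median(np.abs(d)) / 0.6745            # robust noise estimate (HH band)
    shrunk = []
    for w in (h, v, d):
        sigma_x = np.sqrt(max(w.var() - sigma_n ** 2, 1e-12))
        T = sigma_n ** 2 / sigma_x                     # BayesShrink threshold
        shrunk.append(np.sign(w) * np.maximum(np.abs(w) - T, 0.0))  # soft threshold
    h, v, d = shrunk
    out = np.empty_like(I, dtype=float)                # inverse (orthonormal) Haar
    out[0::2, 0::2] = (a + h + v + d) / 2.0
    out[0::2, 1::2] = (a - h + v - d) / 2.0
    out[1::2, 0::2] = (a + h - v - d) / 2.0
    out[1::2, 1::2] = (a - h - v + d) / 2.0
    return out

def wead_iteration(I):
    # eqn (22): I_n = B_s(I_{n-1} + d_n * dt)
    return bayes_shrink_haar(diffusion_step(I))
```

Each iteration thus applies two successive noise-reduction steps, which is exactly why the convergence point is reached faster than with diffusion alone.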
The intention behind these two methods is to decrease the convergence time of
anisotropic diffusion and complex diffusion respectively. It is understood that the
convergence time for denoising is directly proportional to the image noise level. In
the case of diffusion, as the iterations continue, the noise level in the image decreases
(till it reaches the convergence point), but in a slow manner. In the case of Bayesian
shrinkage, by contrast, the coefficients above the threshold are cut in a single step,
and an iterative Bayesian shrinkage will not incur any change in the detail coefficients
after the first one. Now consider the case of WEAD and WECD: here the threshold
for Bayesian shrinkage is recalculated each time after diffusion, and since each
iteration is the result of two successive noise-reduction steps, it approaches the
convergence point much faster than anisotropic diffusion or complex diffusion alone.

Fig. 2. Working of WEAD & WECD. (a) shows the convergence of a noisy image (convergence
at P). If this P can be shifted towards the left, image quality can be increased and time
complexity can be reduced, as illustrated in (b). (c) shows the signal processed by WEAD &
WECD; it can be seen that the convergence point is shifted to the left and moved upwards.
As the convergence time decreases, image blurring can be restricted, and as a result
image quality increases. The whole process is illustrated in Fig. 2. Fig. 2(a) shows the
convergence of the image processed by the diffusion methods. The convergence point
is at P, i.e. at P we will get the best image, under the assumption that the input image
is a noisy one. If this convergence point P can be shifted towards the y-axis, its
movement will be as shown in Fig. 2(b); i.e. if we pull the point P towards the y-axis,
it will move in a left-top fashion. Here the Bayesian shrinkage is the catalyst which
pulls the convergence point P of the anisotropic or complex diffusion towards a better
place.
4 Experimental Results and Comparative Analysis
Experiments were carried out on various types of standard images. Comparisons and
analysis were done on the basis of MSSIM (Mean Structural Similarity Index
Measure) [21] and PSNR (Peak Signal to Noise Ratio). SSIM is used to evaluate the
overall image quality and lies in the range 0 to 1. SSIM works as follows: let x and y
be two non-negative image signals, one of which has perfect quality; the similarity
measure can then serve as a quantitative measure of the quality of the second signal.
The system separates the task of similarity measurement into three comparisons:
luminance, contrast and structure. The PSNR is given in decibel units (dB), measuring
the ratio between the peak signal power and the mean squared difference between the
two images.
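For reference, PSNR for 8-bit images is 10·log10(255²/MSE); a minimal sketch (our own helper function, not from the paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """PSNR = 10*log10(peak^2 / MSE), in dB; infinite for identical images."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(test, dtype=float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```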
Fig. 3 shows the performance of various filters against speckle noise. It can be seen
that the images processed by WEAD and WECD give a better result than the other
three speckle filters. Table 1 shows a comparative analysis of popular speckle filters
with WEAD and WECD. Various levels of noise were added to the images to test
their capability. In all cases the performance of WEAD and WECD was superior to
the others.
Fig. 3. Speckle-affected image processed with various filters: (a) image with speckle noise
(PSNR: 18.85), (b) image processed with Frost filter (PSNR: 22.37), (c) image processed with
Kuan filter (PSNR: 23.12), (d) image processed with SRAD (PSNR: 23.91), (e) image
processed with WEAD (PSNR: 25.40), (f) image processed with WECD (PSNR: 24.52);
(g), (h), (i), (j), (k), (l) show the 3D plots of (a), (b), (c), (d), (e), (f).
Table 1. Comparative analysis of various speckle filters

Image: Cameraman, Noise variance: 0.04, PSNR: 18.85, MSSIM: 0.4311
  Method   PSNR     MSSIM    Time taken (s)
  Frost    22.37    0.4885    5.75
  Kuan     23.12    0.5846   10.00
  SRAD     23.91    0.5923    0.68
  WEAD     25.40    0.6835   31.00
  WECD     24.52    0.6092    7.23

Image: Lena, Noise variance: 0.02, PSNR: 24.16, MSSIM: 0.6047
  Method   PSNR     MSSIM    Time taken (s)
  Frost    26.77    0.7027    5.81
  Kuan     25.59    0.7916    9.96
  SRAD     27.42    0.7750    0.75
  WEAD     29.08    0.8208   18.56
  WECD     28.56    0.7941    3.48

Image: Bird, Noise variance: 0.04, PSNR: 18.66, MSSIM: 0.2421
  Method   PSNR     MSSIM    Time taken (s)
  Frost    24.2604  0.4221    5.84
  Kuan     25.78    0.5817   10.00
  SRAD     27.65    0.6792    0.99
  WEAD     29.98    0.8318   49.45
  WECD     28.38    0.7117   17.59

5 Conclusion

In this paper a comparative analysis of Wavelet Embedded Anisotropic Diffusion
(WEAD) and Wavelet Embedded Complex Diffusion (WECD) with other methods is
presented. The comparison shows that, although the complexity and processing time
of WEAD and WECD are slightly higher, their performance is superior. The hybrid
concept used in WEAD and WECD can be extended to other PDE based methods.

References

1. L. Gagnon, A. Jouan, Speckle Filtering of SAR Images – A Comparative Study between
Complex-Wavelet-Based and Standard Filters, Wavelet Applications in Signal and Image
Processing: Proceedings of SPIE, Vol. 3169, (1997) 80-91
2. Yongjian Yu, Scott T. Acton, Speckle Reducing Anisotropic Diffusion, IEEE Trans. on
Image Processing, Vol. 11 (2002) 1260-1270
3. Zhao Hui Zhang, Veronique Prinet, SongDe MA, A New Method for SAR Speckle
Reduction, IEEE (2002)
4. R.N. Rohling, A.H. Gee, Issues in 3-D Free-Hand Medical Ultrasound Imaging, Technical
Report, Cambridge University Engineering Department (1996)
5. G.E. Trahey, S.M. Hubbard, O.T. von Ramm, Angle Independent Ultrasonic Blood Flow
Detection by Frame-to-Frame Correlation of B-mode Images, Ultrasonics, Vol. 26, (1988)
271-276
6. Z. Sun, H. Ying, J. Lu, A Noninvasive Cross-Correlation Ultrasound Technique for
Detecting Spatial Profile of Laser-Induced Coagulation Damage – An in vitro Study, IEEE
Trans. on Biomed. Engg., Vol. 48, (2001) 223-229
7. Sarita Dangeti, Denoising Techniques – A Comparison, Thesis Report, Submitted to the
Dept. of Electrical and Computer Engineering, Louisiana State University and Agricultural
and Mechanical College, (2003)
8. Jeny Rajan, M.R. Kaimal, Image Denoising using Wavelet Embedded Anisotropic Diffu-
sion (WEAD), Proceedings of IEE International Conference on Visual Information Engi-
neering, (2006)
9. Jeny Rajan, Image Denoising using Partial Differential Equations, M.Tech. Thesis, Sub-
mitted to Department of Computer Science, University of Kerala (INDIA), (2005).
10. Joachim Weickert, Anisotropic Diffusion in Image Processing, ECMI Series, Teubner –
Verlag (1998)
11. P. Perona, J. Malik, Scale Space and Edge Detection using Anisotropic Diffusion, IEEE
Trans. on Pattern Analysis and Machine Intelligence, Vol. 12. (1990) 629-639
12. Yu-Li You, Wenguan Xu, Allen Tannenbaum, Mostafa Kaveh, Behavioral Analysis of
Anisotropic Diffusion, IEEE Trans. on Image Processing, Vol. 5, (1996)
13. Pavel Mrazek, Joachim Weickert, Gabriele Steidl, Correspondence between Wavelet
Shrinkage and Non linear Diffusion, Scale Space 2003, LNCS 2695, (2003) 101-116
14. T. Iijima, Basic Theory on Normalization of Pattern, Bulletin of the Electrotechnical
Laboratory, Vol. 26, (1962) 368-388
15. P. Charbonnier, G. Aubert, L. Blanc-Feraud and M. Barlaud, Two deterministic half-
quadratic regularization algorithms for computed imaging, In Proc. 1994 IEEE Interna-
tional Conference on Image Processing, Vol. 2, (1994) 168–172
16. F. Andreu, C. Ballester, V. Caselles, and J. M. Mazón, Minimizing total variation flow,
Differential and Integral Equations, Vol. 14, (2001) 321-360
17. S. L. Keeling and R. Stollberger, Nonlinear anisotropic diffusion filters for wide range
edge sharpening, Inverse Problems, Vol 18, (2002) 175-190
18. M.D. Kostin, Schrodinger-Fuerth quantum diffusion theory: Generalized complex diffu-
sion equation, J. Math. Phys, (1992)
19. Guy Gilboa, Nir Sochen and Yehoshua Y. Zeevi, Image Enhancement and Denoising by
Complex Diffusion Processes, IEEE Trans. on Pattern Analysis and Machine Intelligence,
Vol. 26, (2004)
20. S. Grace Chang, Bin Yu and Martin Vetterli, Adaptive Wavelet Thresholding for Image
Denoising and Compression, IEEE Trans. Image Processing, Vol 9, (2000)
21. Zhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh and Eero P. Simoncelli, Image
Quality Assessment: From Error Visibility to Structural Similarity, IEEE Trans. Image
Processing, Vol. 13, (2004)