One of the most popular image denoising methods based on self-similarity is called nonlocal
means (NLM). Though it can achieve remarkable performance, this method has a few shortcomings,
e.g., the computationally expensive calculation of the similarity measure and the lack of reliable
candidates for some non-repetitive patches. In this paper, we propose to improve NLM by integrating
Gaussian blur, clustering, and row image weighted averaging into the NLM framework.
Experimental results show that the proposed technique can perform denoising better than the original
NLM both quantitatively and visually, especially when the noise level is high.
Proceedings of the 2nd International Conference on Current Trends in Engineering and Management ICCTEM-2014, 17–19 July 2014, Mysore, Karnataka, India
The original NLM algorithm is computationally intensive, especially its full search.
Accordingly, there has been much work focusing on this issue. The most time-consuming part of
NLM is the weight calculation, so many methods concentrate on eliminating dissimilar
patches before weighted averaging. In [8], pre-selection of contributing neighborhoods based on
mean and gradient values was proposed. Similarly, local variance [9] and singular value
decomposition (SVD) [10] have been introduced to eliminate dissimilar pixels. In order to accelerate
the weight calculation, the fast Fourier transform (FFT) was proposed in [11], which is
approximately 50 times faster than the original NLM. The approach in [12] exploits the symmetry in
the weight function and computes the Euclidean distance symmetrically by a recursive moving-average
filter, which also considerably improves the efficiency. Pang et al. [13] utilized several
critical pixels in the center instead of all pixels in the neighborhood. For the improvement of
quantitative and qualitative results, tuning of the smoothing parameters was proposed in [9].
In [14], a family of non-local image smoothing algorithms was designed that approximates
the application of diffusion partial differential equations (PDEs) on a specific Euclidean space of
image patches; it can preserve the structures in the original image domain. In order to increase the
number of reliable candidates for noisy target patches, the authors in [15] proposed rotationally
invariant block matching (RIBM) for nonlocal image denoising, which involves several steps:
estimating the rotation angle, rotating the block via interpolation, and then applying standard
block matching. In our method, we focus on improving the denoising performance of NLM by means of
finding reliable candidate sets. Though previous methods [10], [15] have attempted to provide better
candidates for weighted averaging, our approach is unique in that it exploits moment invariants in
pre-selection and row image weighted averaging for performance improvement. The experimental results
show that this method outperforms the original NLM in terms of both quantitative and visual quality.
The rest of this paper is organized as follows. Related work on NLM is summarized in
Section 2. The proposed improvements on NLM are described in Section 3. In Section 4,
experiments and results are presented. Section 5 provides the conclusion and future work.
2. EXISTING METHOD
The idea of NLM is based on the fact that patches in an image always exhibit self-similarity.
Given a noisy image V = {v(i) | i ∈ I}, I ⊂ R², the restored intensity NL(v)(i) of pixel i is a
weighted average of all intensity values within the search region I. Let us denote [7]

NL(v)(i) = \sum_{j \in I} w(i, j) v(j)    (1)

where v is the intensity function, v(j) is the intensity at pixel j, and w(i, j) is the weight
assigned to v(j) in the restoration of pixel i. The weight can be calculated by [7]

w(i, j) = \frac{1}{Z(i)} e^{-\|v(N_i) - v(N_j)\|_{2,a}^{2} / h^{2}}    (2)

where N_i denotes a patch of fixed size centered at pixel i. The weight is a decreasing function
of the weighted Euclidean distance \|v(N_i) - v(N_j)\|_{2,a}^{2}, a is the standard deviation of
the Gaussian kernel, Z(i) = \sum_j w(i, j) is the normalization constant, and h acts as a filtering
parameter. This method is computationally expensive and time consuming, and the quality of the
reconstructed image is poor when the noise is high. In the proposed method, the set of reliable
candidates that are similar to the current patch is enlarged by clustering based on similarities and
row image weighted averaging.
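As a concrete sketch of Eqs. (1) and (2), the following naive Python implementation (ours, written for illustration, not the authors' code) restores each pixel as a weighted average over all pixels, with weights derived from Gaussian-weighted patch distances. Its O(N²) search over every patch pair is exactly the cost that motivates the acceleration work discussed above; the parameter values are arbitrary examples.

```python
import numpy as np

def nlm_denoise(v, patch_radius=1, h=10.0, a=1.0):
    """Naive nonlocal means (Eqs. 1-2): each output pixel is a weighted
    average of all pixels, weighted by patch similarity."""
    pad = patch_radius
    vp = np.pad(v.astype(float), pad, mode="reflect")
    rows, cols = v.shape

    # Gaussian kernel with std dev a, weighting the per-pixel patch distance
    ax = np.arange(-pad, pad + 1)
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * a ** 2))
    g /= g.sum()

    # one (2*pad+1) x (2*pad+1) patch per pixel
    patches = np.array([vp[i:i + 2 * pad + 1, j:j + 2 * pad + 1]
                        for i in range(rows) for j in range(cols)])

    out = np.zeros(v.shape)
    flat = v.astype(float).ravel()
    for idx in range(rows * cols):
        # weighted Euclidean distance |v(N_i) - v(N_j)|^2_a to every patch
        d2 = ((patches[idx] - patches) ** 2 * g).sum(axis=(1, 2))
        w = np.exp(-d2 / h ** 2)       # e^{-d^2 / h^2}
        w /= w.sum()                   # normalization constant Z(i)
        out.flat[idx] = (w * flat).sum()
    return out
```

A flat image is left unchanged (all weights are equal), while an isolated outlier is pulled toward the values of the pixels around it.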
3. PROPOSED METHOD
In the proposed algorithm, we aim to improve the denoising performance of NLM by means of
finding more reliable candidate sets based on similarities. Improved NLM can be divided into
pre-processing, feature extraction, clustering, and row image weighted averaging.
3.1 Pre-processing
In pre-processing, a Gaussian function is convolved with the noisy image to obtain a
Gaussian-blurred image. This step removes high-frequency noise and smooths the noisy image. The
Gaussian filter is a low-pass filter applied before feature extraction; it provides the
pre-processing for pre-classification. Gaussian filters are a class of linear smoothing filters
with weights chosen according to a Gaussian function, and they are very good at removing noise
drawn from a normal distribution. The 2D zero-mean discrete Gaussian function used for a mask of
size (2m+1) x (2m+1), with centre (0,0) and x, y ranging from (-m,-m) to (m,m), is denoted by
G(x, y) = \frac{1}{2\pi\sigma^{2}} e^{-(x^{2} + y^{2}) / (2\sigma^{2})}    (3)

where x, y ∈ {-m, ..., 0, ..., m} and σ is the standard deviation of the Gaussian distribution.
Normalization is necessary to preserve the brightness level of the image:

Sum = \sum_{x=-m}^{m} \sum_{y=-m}^{m} G(x, y)    (4)

G_k(x, y) = \frac{G(x, y)}{Sum}    (5)
The result of Gaussian blur for the whole image is given by
Gb = Gk * v (6)
where v is the intensity of the input noisy image and * denotes the convolution operation. In
our implementation, a large σ is not necessary, because most details of the input noisy image should
be retained, and Gaussian blur with a large σ might introduce artifacts. σ determines the width of
the filter and hence the amount of smoothing. After smoothing, the filtered image is divided into
patches of appropriate size, which serve as the input to the feature extraction block. It is
important to determine the size of the patches: if the patch size is large, the quality of the
reconstructed image will be poor, leading to a lower PSNR value; if the patch size is small, there
will be fewer reliable candidates for weighted averaging.
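The pre-processing step of Eqs. (3)-(6) can be sketched as below; the kernel radius m, the value of σ, and the patch size are illustrative choices on our part, not values fixed by the paper.

```python
import numpy as np

def gaussian_kernel(m, sigma):
    """Normalized (2m+1) x (2m+1) zero-mean Gaussian mask, Eqs. (3)-(5)."""
    ax = np.arange(-m, m + 1)
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return g / g.sum()   # divide by Sum so brightness is preserved

def blur_and_split(v, m=1, sigma=0.8, patch=5):
    """Convolve the noisy image with G_k (Eq. 6), then cut the blurred
    image into non-overlapping patch x patch blocks for feature extraction."""
    gk = gaussian_kernel(m, sigma)          # symmetric, so conv == correlation
    vp = np.pad(v.astype(float), m, mode="reflect")
    gb = np.zeros_like(v, dtype=float)
    for i in range(v.shape[0]):             # direct 2-D convolution G_k * v
        for j in range(v.shape[1]):
            gb[i, j] = (vp[i:i + 2 * m + 1, j:j + 2 * m + 1] * gk).sum()
    h = (gb.shape[0] // patch) * patch
    w = (gb.shape[1] // patch) * patch
    blocks = [gb[r:r + patch, c:c + patch]
              for r in range(0, h, patch) for c in range(0, w, patch)]
    return gb, blocks
```

Because the kernel sums to one, a constant image passes through unchanged, which is the point of the normalization in Eq. (5).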
3.2 Feature extraction
Feature extraction is a special form of dimensionality reduction, in which we transform the
input data into a set of features. The feature set extracts the relevant information from the input
data so that the desired task can be performed on this reduced representation instead of the
full-size input. Feature extraction is used in many algorithms, such as face recognition and pattern
recognition. In feature extraction, moment invariants are applied to the raw image patches to obtain
moment vectors. Higher-order moment invariants have been demonstrated to be more vulnerable to
additive white noise. Therefore, in the proposed algorithm, Hu's moment invariants of order at most
2 are applied as the feature descriptor for clustering. Given an image and a patch centered at some
location, the moment invariants of this patch can be represented by a vector; for the whole image,
such vectors serve as the input vectors of the clustering. Hu's moment invariants are widely applied
in image pattern recognition in a variety of applications due to their invariance to image
translation, scaling, and rotation. Hu derived six absolute orthogonal invariants and one skew
orthogonal invariant based upon algebraic invariants. Since Hu's moments are rotation invariant,
even if a patch is rotated by some angle or mirrored, its moment values remain the same, so such
patches are clustered into the same group in the later steps.
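Since the paper restricts Hu's invariants to order 2, a minimal sketch needs only the first two invariants φ1 and φ2, built from normalized central moments. The function below is our illustration of that computation, not the authors' implementation.

```python
import numpy as np

def hu_low_order(patch):
    """First two Hu moment invariants of a patch (central moments of
    order <= 2), invariant to translation, scaling, and rotation."""
    p = patch.astype(float)
    total = p.sum()                      # mu_00, the zeroth central moment
    if total == 0:
        return np.zeros(2)
    y, x = np.mgrid[:p.shape[0], :p.shape[1]]
    xc = (x * p).sum() / total           # centroid (translation invariance)
    yc = (y * p).sum() / total

    def mu(px, py):                      # central moment mu_{px,py}
        return (((x - xc) ** px) * ((y - yc) ** py) * p).sum()

    # normalized central moments eta_pq = mu_pq / mu_00^(1 + (p+q)/2);
    # for p+q = 2 the exponent is 2 (scale invariance)
    n20 = mu(2, 0) / total ** 2
    n02 = mu(0, 2) / total ** 2
    n11 = mu(1, 1) / total ** 2
    phi1 = n20 + n02                         # Hu's first invariant
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2   # Hu's second invariant
    return np.array([phi1, phi2])
```

Rotating or mirroring a patch leaves the returned vector unchanged, which is what makes it usable as a clustering feature here.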
3.3 Clustering
Clustering is a method of quantizing the feature vectors. In the proposed algorithm, adaptive
K-means clustering is used for vector quantization. Clustering is performed to obtain clusters of
similar patches based on the moment features; here Hu's moment features serve as the input to
adaptive K-means clustering. In standard K-means clustering, the initial cluster centers are chosen
randomly. To obtain the best number of clusters, the Davies-Bouldin index is used, which can be
defined as

DBI = \frac{1}{M} \sum_{i=1}^{M} R_i    (7)

where M is the number of clusters and R_i is the maximum ratio of within-cluster scatter to
between-cluster separation for cluster i.
The adaptive K-means clustering algorithm starts with the selection of K elements from the
input data set, decides the number of comparisons for each search in every cluster, and adaptively
classifies the acquired data by choosing appropriate centroids. Given a set of observations
(x1, x2, ..., xn), where each observation is a d-dimensional real vector, K-means clustering aims to
partition the n observations into k sets (k ≤ n), S = {S1, S2, ..., Sk}, so as to minimize the
within-cluster sum of squares (WCSS):

\arg\min_{S} \sum_{i=1}^{k} \sum_{x_j \in S_i} \|x_j - \mu_i\|^{2}    (8)

where μ_i is the mean of the points in S_i.
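A compact sketch of Eq. (8) together with the Davies-Bouldin criterion of Eq. (7) might look as follows. The plain random initialization, the fixed iteration count, and the candidate range of K values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means minimizing the within-cluster sum of squares, Eq. (8)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)              # assign each point to nearest center
        for i in range(k):
            if np.any(labels == i):
                centers[i] = X[labels == i].mean(0)   # centroid update
    return labels, centers

def davies_bouldin(X, labels, centers):
    """Davies-Bouldin index of Eq. (7): lower means better-separated clusters."""
    k = len(centers)
    s = np.array([np.linalg.norm(X[labels == i] - centers[i], axis=1).mean()
                  if np.any(labels == i) else 0.0 for i in range(k)])
    total = 0.0
    for i in range(k):
        # R_i: worst-case similarity ratio against every other cluster
        total += max((s[i] + s[j]) / np.linalg.norm(centers[i] - centers[j])
                     for j in range(k) if j != i)
    return total / k

def best_k(X, k_range):
    """Pick the cluster count with the lowest Davies-Bouldin index."""
    scores = {k: davies_bouldin(X, *kmeans(X, k)) for k in k_range}
    return min(scores, key=scores.get)
```

On data with two well-separated groups, the index is minimized at K = 2, which is the adaptive selection the section describes.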
3.4 Row image and weighted averaging
The clustered patches have similarities in terms of intensity, shape, and size, and patches in
the same cluster have more similar neighbourhoods. A row image is constructed for each cluster;
hence, for n clusters there will be n row images. Finally, NLM is applied to each row image, and the
denoised image is reconstructed by replacing each patch with its corresponding patch from the
NLM-filtered row images.
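The row-image step can be sketched as below, assuming patch positions and cluster labels are already available. The `filt` argument stands in for NLM (any 2-D denoiser can be passed), so this is a schematic of the reconstruction logic rather than the authors' pipeline.

```python
import numpy as np

def denoise_by_clusters(v, patch, labels, positions, filt):
    """For each cluster, concatenate its patches horizontally into a
    row image, run the given filter on it, and paste the filtered
    patches back into their original locations."""
    out = v.astype(float).copy()
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        # build the row image: similar patches side by side
        row = np.hstack([v[r:r + patch, s:s + patch]
                         for (r, s) in (positions[i] for i in idx)])
        row = filt(row)                       # e.g. NLM applied to the row
        for n, i in enumerate(idx):           # paste each patch back
            r, s = positions[i]
            out[r:r + patch, s:s + patch] = row[:, n * patch:(n + 1) * patch]
    return out
```

With the identity filter the image is reconstructed exactly, which confirms the patch bookkeeping; substituting an NLM filter gives each patch candidates drawn only from its own cluster.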
The differences between our approach and NLM are as follows.
1. Gaussian blur provides the pre-processing for pre-classification; the effect is illustrated in
Fig. 2. In the original NLM, there is no pre-processing step.
2. K-means clustering on moment invariants of the blurred noisy image serves as the
pre-classification for our filtering process. In the original NLM, all target patches have fixed
candidate sets, which are either the whole image or the neighbourhood centred at them. Fig. 1
shows the block diagram of the proposed algorithm.
Fig. 1: Proposed Method

4. EXPERIMENTAL RESULTS

In our experiments, the image data set is defined as '1.tif', '2.tif', '3.tif', '4.tif',
'5.tif', '6.tif'. For performance evaluation, we compare our proposed method with the original NLM
and a recent related method [15] on this dataset. The evaluation metrics we adopt in our experiments
are the mean square error (MSE) and the peak signal-to-noise ratio (PSNR), which provide
quantitative evaluations of the denoising results. MSE and PSNR are defined as

MSE = \frac{1}{m n} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} [I(i, j) - K(i, j)]^{2}    (9)

where I(i,j) is the original image, K(i,j) is the denoised image under evaluation, and m, n is the
size of the image. The PSNR value can be calculated from the MSE as

PSNR = 10 \log_{10}\left(\frac{MAX_I^{2}}{MSE}\right)    (10)

where MAX_I is the maximum possible intensity value and MSE is the mean square error.

4.1 Parameters of Clustering

We implemented our clustering method based on moment invariants. For standard K-means
clustering, there are several parameters which need to be decided: the type of distance we use, the
number of clusters we assign, and the length of the vectors we use in our NLM-based framework. Here
we exploit the Euclidean distance for measuring the distance between two feature vectors, as paper
[10] did. According to [16], we choose the patch size as 5 x 5. To test how the performance of the
method varies with different values of K, we vary K in the range of 400 to 5000. The changing trends
of PSNR are roughly the same: when K becomes larger, there are more clusters representing different
types of details. However, if K goes too high, some clusters will not have enough candidates; as a
result, the PSNR goes down after its peak. Therefore, if complexity is not a concern, we can choose
the optimal value of K depending on the size of the input noisy image. For our testing set, all the
images are 225 x 225, so we choose K=1800 (when K=2800, it takes more than twice the time that
K=1800 takes) to guarantee enough candidates for each patch, according to the variation of the
visual results when we change K.
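The MSE and PSNR metrics of Eqs. (9) and (10) are straightforward to compute; a minimal sketch (the 8-bit peak value of 255 is an assumption, adjustable via the parameter):

```python
import numpy as np

def mse(I, K):
    """Mean square error between two equally sized images, Eq. (9)."""
    I, K = np.asarray(I, dtype=float), np.asarray(K, dtype=float)
    return ((I - K) ** 2).mean()

def psnr(I, K, max_i=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (10)."""
    e = mse(I, K)
    return float("inf") if e == 0 else 10 * np.log10(max_i ** 2 / e)
```

A higher PSNR (equivalently, a lower MSE against the clean original) indicates better denoising, which is how the comparisons below are scored.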
Fig. 2: Experimental Results. (A) Original Image, (B) Noisy Image, (C) Existing NLM, (D) Proposed NLM, (E) Gaussian Blur
The difference in visual quality between the methods can be inspected in the examples
shown in Fig. 2. We observe that the proposed method can not only preserve details better but also
remove severe noise. The method in [15] employs RIBM, but it is applied to neighborhoods, which
may cause a lack of proper candidates when the variation of the textures is strong. Our algorithm
overcomes this by obtaining sufficient reliable candidates from K-means clustering. We can also see
that the original NLM is almost ineffective: when the noise level is high, the intensity-based
matching between patches is vulnerable to noise. Our scheme adopts Gaussian blur as pre-processing,
and moment invariants are robust to noise interference as well. Our algorithm preserves the main
structures much better compared to the other approaches (the original NLM). This demonstrates that
using clustering before weighted averaging can ensure that most patches get reliable candidates.
5. CONCLUSION

In this paper, we proposed an improved NLM method. It applies moment-invariant-based
K-means clustering on the Gaussian-blurred image, which provides better classification before
weighted averaging. Experimental results show that clustering on moment invariants is very effective
for pre-classification. The proposed algorithm can effectively reconstruct finer details and at the
same time introduce fewer artifacts than the other methods.

The K-means clustering used in our proposed method is a time-consuming part. In future
work, we will investigate more efficient clustering methods to speed up the pre-classification step.
6. REFERENCES
[1] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proc. 6th Int.
Conf. Computer Vision, 1998, pp. 839–846.
[2] L. Shao, H. Zhang, and G. de Haan, “An overview and performance evaluation of
classification-based least squares trained filters,” IEEE Trans. Image Process., vol. 17,
pp. 1772–1782, Oct. 2008.
[3] M. Protter and M. Elad, “Image sequence denoising via sparse and redundant
representations,” IEEE Trans. Image Process., vol. 18, no. 1, pp. 27–35, Jan. 2009.
[4] G. Varghese and W. Zhou, “Video denoising based on a spatiotemporal Gaussian scale
mixture model,” IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 7, pp. 1032–1040,
Jul. 2010.
[5] F. Luisier, T. Blu, and M. Unser, “SURE-LET for orthonormal wavelet domain video
denoising,” IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 6, pp. 913–919,
Jun. 2010.
[6] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D
transform-domain collaborative filtering,” IEEE Trans. Image Process., vol. 16,
pp. 2080–2095, Aug. 2007.
[7] A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new
one,” Multiscale Model. Simul., vol. 4, no. 2, pp. 490–530, 2005.
[8] M. Mahmoudi and G. Sapiro, “Fast image and video denoising via nonlocal means of similar
neighborhoods,” IEEE Signal Process. Lett., vol. 12, pp. 839–842, Dec. 2005.
[9] P. Coupe, P. Yger, S. Prima, P. Hellier, C. Kervrann, and C. Barillot, “An optimized
blockwise nonlocal means denoising filter for 3-D magnetic resonance images,” IEEE Trans.
Med. Imag., vol. 27, no. 4, pp. 425–441, Apr. 2008.
[10] T. Thaipanich, O. B. Tae, W. Ping-Hao, X. Daru, and C. C. J. Kuo, “Improved image
denoising with adaptive nonlocal means (ANL-means) algorithm,” IEEE Trans. Consum.
Electron., vol. 56,no. 4, pp. 2623–2630, Nov. 2010.
[11] J. Wang, Y. Guo, Y. Ying, Y. Liu, and Q. Peng, “Fast non-local algorithm for image
denoising,” in Proc. IEEE Int. Conf. Image Process, Atlanta, GA, USA, 2006,
pp. 1429–1432.
[11] B. Goossens, H. Luong, A. Pizurica, and W. Philips, “An improved non-local denoising
algorithm,” in Proc. Int. Workshop on Local and Non-Local Approximation in Image Processing
(LNLA), Tuusula, Finland, 2008, pp. 143–156.
[13] P. Chao, O. C. Au, D. Jingjing, Y. Wen, and Z. Feng, “A fast NL-means method in image
denoising based on the similarity of spatially sampled pixels,” in Proc. IEEE Int. Workshop on
Multimedia Signal Processing (MMSP), Rio de Janeiro, Brazil, 2009, pp. 1–4.
[14] D. Tschumperle and L. Brun, “Non-local image smoothing by applying anisotropic diffusion
PDE’s in the space of patches,” in Proc. IEEE Int.Conf. Image Process., Cairo, Egypt, 2009,
pp. 2957–2960.
[15] S. Grewenig, S. Zimmer, and J. Weickert, “Rotationally invariant similarity measures for
nonlocal image denoising,” J. Visual Commun. Image Represent., vol. 22, pp. 117–130,
Feb. 2011.