ISSN: 2278 – 1323, International Journal of Advanced Research in Computer Engineering & Technology, Volume 1, Issue 5, July 2012

Comparative Analysis between PCA and Fast ICA based Denoising of CFA Images for Single-Sensor Digital Cameras

Shawetangi Kala, Raj Kumar Sahu

Abstract- Denoising of natural images is a fundamental and challenging research problem in image processing. The problem appears simple, but it is not under practical conditions, where the type of noise, the amount of noise and the type of image are all variable parameters, and no single algorithm or approach can be sufficient to achieve satisfactory results. Single-sensor digital color cameras use a process called color demosaicking to produce full-color images from the data captured by a color filter array (CFA). The quality of these images is usually degraded by the camera sensor. In this paper we develop PCA- and FastICA-based denoising algorithms with K-means clustering and compare the two. Performance is evaluated with PSNR, WPSNR, SSIM and the correlation coefficient.

Keywords- CFA, K-means, ICA, PCA

I. INTRODUCTION

Digital images play an important role both in daily-life applications, such as satellite television, magnetic resonance imaging and computed tomography, and in areas of research and technology such as geographical information systems and astronomy. Data sets collected by image sensors are generally contaminated by noise. Imperfect instruments, problems with the data acquisition process, and interfering natural phenomena can all degrade the data of interest. Furthermore, noise can be introduced by transmission errors and compression. Thus, denoising is often a necessary first step before the image data are analyzed. An efficient denoising technique must be applied to compensate for such data corruption. Image denoising remains a challenge for researchers because noise removal introduces artifacts and causes blurring of the images. This paper describes different methodologies for noise reduction (denoising), giving an insight into which algorithm should be used to find the most reliable estimate of the original image data given its degraded version. Noise modeling in images is greatly affected by the capturing instrument, the data transmission medium, image quantization and discrete sources of radiation.

Most existing digital color cameras use a single sensor with a color filter array (CFA) [1] to capture visual scenes in color. Since each sensor cell can record only one color value, the other two missing color components at each position need to be interpolated from the available CFA sensor readings to reconstruct the full-color image. This color interpolation process is usually called color demosaicking (CDM). Many CDM algorithms [2]-[5] proposed in the past are based on the unrealistic assumption of noise-free CFA data. The presence of noise in CFA data not only deteriorates the visual quality of the captured images, but also often causes serious demosaicking artifacts that can be extremely difficult to remove by a subsequent denoising process. Many advanced denoising algorithms designed for monochromatic (or full-color) images are not directly applicable to CFA images because of the underlying mosaic structure of the CFA. Our task is to reduce the noise inherent in the image.

A critical concept in signal processing is signal representation: the same image can be represented in many ways, and an important problem is to determine which representation allows the signal to be denoised most effectively. The traditional solution is the Fourier representation; more modern nonlinear methods employ wavelet transforms. The common ground of the Fourier and wavelet methods is that they use signal representations which are pre-determined, i.e. they cannot be adapted to the signal structure. The goal here is to introduce a nonlinear de-noising method which adapts the representation to the statistical structure of the data to be de-noised. We concentrate on linear data-adaptive transforms, namely independent component analysis (ICA) and principal component analysis (PCA).
II. ADAPTIVE LINEAR TRANSFORM

In general, a nonlinear transformation is a function

    s = f(x)    (1)

where f is a vector-valued function of a vector argument, and x and s denote random vectors. The mapping f is a fixed function, but often we would like to adapt it to the structure of x so that we may capture the information inherent in it. A general nonlinear transform is difficult to adapt to the data, so the simplest solution is to restrict f to be linear. Linearity means that f satisfies the following two constraints:

    f(x1 + x2) = f(x1) + f(x2)    (2)
    f(αx1) = α f(x1)    (3)

where α is any scalar and x1, x2 are arbitrary vectors. Then

    s = W x    (4)

since any linear function can be written as a matrix multiplication, and multiplication by any matrix satisfies the linearity constraints. Thus, if W is an n-by-m matrix, the transform is parameterized by nm parameters, which can subsequently be adapted to the statistics of x so that s forms a representation with the relevant properties. Linear transforms have many advantages over nonlinear ones, the most important of which is the mathematical simplicity with which they can be analyzed.
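The paper gives no code; as a quick numerical check of the constraints in Eqs. (2) and (3), the following NumPy snippet (with an arbitrary matrix and vectors chosen only for the example) verifies that multiplication by a matrix W satisfies both additivity and homogeneity.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))             # an arbitrary n-by-m transform matrix
x1, x2 = rng.normal(size=5), rng.normal(size=5)
alpha = 2.5

f = lambda x: W @ x                     # the linear transform s = Wx (Eq. 4)

print(np.allclose(f(x1 + x2), f(x1) + f(x2)))     # additivity, Eq. (2)
print(np.allclose(f(alpha * x1), alpha * f(x1)))  # homogeneity, Eq. (3)
```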
III. PRINCIPAL COMPONENT ANALYSIS

PCA is a classical de-correlation technique which has been widely used for dimensionality reduction, with direct applications in pattern recognition, data compression and noise reduction. Denote by x = [x1 x2 ... xm]^T an m-component vector variable and denote by

    X = [ x11 x12 ... x1n
          x21 x22 ... x2n
          ...
          xm1 xm2 ... xmn ]    (5)

the sample matrix of x, where xij, j = 1, 2, ..., n, is the j-th discrete sample of variable xi, i = 1, 2, ..., m. The i-th row of the sample matrix X, denoted by Xi = [xi1 xi2 ... xin], is the sample vector of xi. The mean value of xi can be estimated as

    μi = (1/n) Σ_{j=1}^{n} Xi(j)    (6)

and the sample vector Xi is then centralized as

    X̄i = Xi − μi = [x̄i1 x̄i2 ... x̄in]    (7)

where x̄ij = xij − μi. Accordingly, the centralized matrix of X is

    X̄ = [X̄1^T X̄2^T ... X̄m^T]^T    (8)

Finally, the covariance matrix of the centralized dataset is calculated as

    Ω = (1/n) X̄ X̄^T    (9)

The goal of PCA is to find an orthonormal transformation matrix P that de-correlates X̄, i.e. Ȳ = P X̄, such that the covariance matrix of Ȳ is diagonal. Since the covariance matrix Ω is symmetric, it can be written as

    Ω = Φ Λ Φ^T    (10)

where Φ = [Φ1, Φ2, ..., Φm] is the m×m orthonormal eigenvector matrix and Λ = diag{λ1, λ2, ..., λm} is the diagonal eigenvalue matrix with λ1 ≥ λ2 ≥ ... ≥ λm. The terms Φ1, Φ2, ..., Φm and λ1, λ2, ..., λm are the eigenvectors and eigenvalues of Ω. By setting

    P = Φ^T    (11)

X̄ can be de-correlated, i.e. Ȳ = P X̄ and Λ = (1/n) Ȳ Ȳ^T.

An important property of PCA is that it fully de-correlates the original dataset X̄. Generally speaking, the energy of a signal concentrates on a small subset of the PCA-transformed dataset, while the energy of noise spreads evenly over the whole dataset. Therefore, signal and noise can be better distinguished in the PCA domain [1].
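The paper reports a Matlab GUI implementation but does not list any code. The following is a minimal NumPy sketch of the PCA decorrelation of Eqs. (6)-(11) combined with a simple coefficient-shrinkage step in the PCA domain; the patch layout, the Wiener-style shrinkage rule and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def pca_denoise(samples, noise_var):
    """Decorrelate an m-by-n sample matrix with PCA (Eqs. 6-11) and shrink the
    transform coefficients. The shrinkage rule is an assumed illustration."""
    X = np.asarray(samples, dtype=float)        # m x n sample matrix (Eq. 5)
    mu = X.mean(axis=1, keepdims=True)          # per-variable means (Eq. 6)
    Xc = X - mu                                 # centralized matrix (Eqs. 7-8)
    n = Xc.shape[1]
    cov = (Xc @ Xc.T) / n                       # covariance matrix Omega (Eq. 9)
    lam, phi = np.linalg.eigh(cov)              # Omega = Phi Lambda Phi^T (Eq. 10)
    order = np.argsort(lam)[::-1]               # sort so that lambda_1 >= ... >= lambda_m
    lam, phi = lam[order], phi[:, order]
    P = phi.T                                   # transformation matrix P = Phi^T (Eq. 11)
    Y = P @ Xc                                  # decorrelated data Y = P X
    # Signal energy concentrates in a few components while noise spreads evenly,
    # so weight each component by its estimated signal share (Wiener-like gain).
    signal_var = np.maximum(lam - noise_var, 0.0)
    gain = signal_var / (signal_var + noise_var + 1e-12)
    return P.T @ (gain[:, None] * Y) + mu       # back to the original domain

# Usage sketch: 100 vectorized 8x8 patches corrupted by noise of variance 25.
rng = np.random.default_rng(0)
patches = rng.uniform(0, 255, size=(64, 100)) + rng.normal(0, 5, size=(64, 100))
denoised = pca_denoise(patches, noise_var=25.0)
```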
IV. INDEPENDENT COMPONENT ANALYSIS

To rigorously define ICA (Jutten and Hérault, 1991; Comon, 1994), we can use a statistical "latent variables" model. Assume that we observe n linear mixtures x1, ..., xn of n independent components:

    xj = aj1 s1 + aj2 s2 + ... + ajn sn, for all j    (12)

We have now dropped the time index t; in the ICA model we assume that each mixture xj, as well as each independent component sk, is a random variable instead of a proper time signal. The observed values xj(t), e.g., the microphone signals in the cocktail-party problem, are then a sample of this random variable. Without loss of generality, we can assume that both the mixture variables and the independent components have zero mean: if this is not true, the observable variables xi can always be centered by subtracting the sample mean, which makes the model zero-mean.

It is convenient to use vector-matrix notation instead of the sums in the previous equation. Let us denote by x the random vector whose elements are the mixtures x1, ..., xn, and likewise by s the random vector with elements s1, ..., sn. Let us denote by A the matrix with elements aij. Generally, bold lower-case letters indicate vectors and bold upper-case letters denote matrices. All vectors are understood as column vectors; thus x^T, the transpose of x, is a row vector. Using this vector-matrix notation, the mixing model above is written as

    x = A s    (13)

Sometimes we need the columns of matrix A; denoting them by ai, the model can also be written as

    x = Σ_{i=1}^{n} ai si    (14)

This statistical model is called the independent component analysis, or ICA, model [6]-[9]. The ICA model is a generative model, which means that it describes how the observed data are generated by a process of mixing the components si. The independent components are latent variables, meaning that they cannot be directly observed. The mixing matrix is also assumed to be unknown. All we observe is the random vector x, and we must estimate both A and s from it. This must be done under assumptions that are as general as possible.

The starting point for ICA is the very simple assumption that the components si are statistically independent. We must also assume that the independent components have non-Gaussian distributions. However, in the basic model we do not assume these distributions are known (if they are known, the problem is considerably simplified). For simplicity, we also assume that the unknown mixing matrix is square, but this assumption can sometimes be relaxed. Then, after estimating the matrix A, we can compute its inverse, say W, and obtain the independent components simply by

    s = W x    (15)
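The paper uses the FastICA algorithm but does not describe its implementation. The sketch below shows one way the unmixing matrix W of Eq. (15) could be estimated with scikit-learn's FastICA on vectorized noisy patches; the data shapes, the component-suppression rule and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Assume `noisy_patches` is an (n_samples, n_features) matrix of vectorized
# noisy CFA patches; the shape below is chosen only for illustration.
rng = np.random.default_rng(0)
noisy_patches = rng.normal(size=(1000, 64))

ica = FastICA(n_components=64, max_iter=500, random_state=0)
S = ica.fit_transform(noisy_patches)   # estimated independent components, s = W x (Eq. 15)
A = ica.mixing_                        # estimated mixing matrix A (x = A s, Eq. 13)
W = ica.components_                    # estimated unmixing matrix W

# An assumed denoising step: suppress low-energy components, which mainly
# carry noise, then map the result back to the observation domain.
energy = np.var(S, axis=0)
S[:, energy < np.median(energy)] = 0.0
denoised_patches = ica.inverse_transform(S)
```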
K-means clustering

k-means clustering is a method of cluster analysis which aims to partition n observations into k clusters such that each observation belongs to the cluster with the nearest mean. This results in a partitioning of the data space into Voronoi cells. The problem is computationally difficult, but there are efficient heuristic algorithms that are commonly employed and converge quickly to a local optimum. These are usually similar to the expectation-maximization algorithm for mixtures of Gaussian distributions, in that both follow an iterative refinement approach [10]-[11]. Additionally, both use cluster centers to model the data; however, k-means clustering tends to find clusters of comparable spatial extent, while the expectation-maximization mechanism allows clusters to have different shapes.

Given a set of observations (x1, x2, ..., xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k sets (k ≤ n), S = {S1, S2, ..., Sk}, so as to minimize the within-cluster sum of squares:

    argmin_S Σ_{i=1}^{k} Σ_{xj ∈ Si} ||xj − μi||²    (16)

where μi is the mean of the points in Si.

The most common algorithm uses an iterative refinement technique. Due to its ubiquity it is often called "the k-means algorithm"; it is also referred to as Lloyd's algorithm, particularly in the computer-science community. Given an initial set of k means m1(1), ..., mk(1), the algorithm proceeds by alternating between two steps. First, assign each observation to the cluster with the closest mean:

    Si(t) = { xp : ||xp − mi(t)|| ≤ ||xp − mj(t)|| for all 1 ≤ j ≤ k }    (17)

where each xp is assigned to exactly one Si(t), even if it could be assigned to two of them. Second, update each mean to the centroid of its cluster:

    mi(t+1) = (1/|Si(t)|) Σ_{xj ∈ Si(t)} xj    (18)

The algorithm is deemed to have converged when the assignments no longer change.
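A compact NumPy sketch of Lloyd's algorithm as described by Eqs. (17)-(18); the random initialization, the iteration cap and the function signature are standard choices assumed here for illustration.

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Lloyd's algorithm. X is an (n, d) array of observations, k the number of
    clusters; returns the cluster means and the assignment of each observation."""
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=k, replace=False)].astype(float)  # m_1(1), ..., m_k(1)
    labels = None
    for _ in range(max_iter):
        # Assignment step (Eq. 17): each point goes to the closest mean.
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break                              # assignments unchanged: converged
        labels = new_labels
        # Update step (Eq. 18): each mean becomes the centroid of its cluster.
        for i in range(k):
            members = X[labels == i]
            if len(members) > 0:
                means[i] = members.mean(axis=0)
    return means, labels

# Usage sketch on random 2-D data.
data = np.random.default_rng(1).normal(size=(200, 2))
centers, assignment = kmeans(data, k=3)
```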
V. EXPERIMENTAL RESULTS

In this paper, noise is generated as a random sequence and added to a reference image taken by a single-sensor camera. Our work has been implemented in a Matlab graphical user interface. We have calculated different parameters, namely PSNR, WPSNR, SSIM and the correlation coefficient, to show the performance of our algorithm. The formulas used for the calculations are given below.

Fig 1. Original image
Fig 2. Noisy image
Fig 3. Denoised image

PSNR: The peak signal-to-noise ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed on the logarithmic decibel scale. With the mean squared error defined as

    MSE = (1/mn) Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [ I(i,j) − K(i,j) ]²    (19)

the PSNR is defined as

    PSNR = 10 · log10( MAX_I² / MSE )    (20)

Weighted PSNR: The weighted peak signal-to-noise ratio (WPSNR) takes into account the texture of the image and the fact that the human eye is less sensitive to changes in textured areas than in smooth areas. It is simply the PSNR weighted by an HVS parameter called the noise visibility function (NVF):

    WPSNR = 20 · log10( 255 / (NVF × MSE) )    (21)

The noise visibility function is calculated as

    NVF = NORM( 1 / (1 + δ²_block) )    (22)

where δ_block is the standard deviation of the luminance of a block of pixels and NORM is the normalization function applied to each block of pixels to bring the value of NVF into the range from zero to unity. A high value of WPSNR indicates that the image is less distorted.
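A NumPy sketch of how PSNR and WPSNR as defined in Eqs. (19)-(22) might be computed. The 8×8 block size, the min-max normalization used for NORM and the averaging of the per-block NVF are assumptions, since the paper does not specify them.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """PSNR per Eqs. (19)-(20)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)   # Eq. (19)
    return 10.0 * np.log10(max_val ** 2 / mse)                     # Eq. (20)

def wpsnr(ref, test, block=8):
    """WPSNR per Eqs. (21)-(22). Block size and NVF normalization are assumed."""
    ref, test = ref.astype(float), test.astype(float)
    h = ref.shape[0] - ref.shape[0] % block
    w = ref.shape[1] - ref.shape[1] % block
    blocks = ref[:h, :w].reshape(h // block, block, w // block, block)
    var = blocks.var(axis=(1, 3))                                  # per-block luminance variance
    nvf = 1.0 / (1.0 + var)                                        # Eq. (22), before NORM
    nvf = (nvf - nvf.min()) / (nvf.max() - nvf.min() + 1e-12)      # NORM: scale NVF to [0, 1]
    mse = np.mean((ref - test) ** 2)
    return 20.0 * np.log10(255.0 / (nvf.mean() * mse + 1e-12))     # Eq. (21), NVF averaged

# Usage sketch
reference = np.random.randint(0, 256, (64, 64)).astype(float)
noisy = np.clip(reference + np.random.randn(64, 64) * 10, 0, 255)
print(psnr(reference, noisy), wpsnr(reference, noisy))
```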
SSIM: Error-based metrics such as MSE and PSNR process all pixels equally, so they cannot reasonably reflect the subjective perception of different scenes based on human visual sensitivity (HVS). A more comprehensive image quality assessment can therefore be made by considering the HVS. The structural similarity index (SSIM) is based on measuring structural distortion instead of error. The idea behind this is that the human visual system is highly specialized in extracting structural information from the viewing field and is not specialized in extracting errors. Because the structural similarity index correlates with the human visual system, SSIM is used as a perceptual image quality evaluation metric. SSIM is defined as a function of the luminance (l), contrast (c) and structural (s) components [10]:

    SSIM = l(x, y) · c(x, y) · s(x, y)    (23)

where

    l(x, y) = (2 μx μy + C1) / (μx² + μy² + C1)
    c(x, y) = (2 σx σy + C2) / (σx² + σy² + C2)
    s(x, y) = (2 σxy + C3) / (σx σy + C3)

The mean and standard deviation are denoted by μ and σ. The constants C1, C2 and C3 are included to avoid numerical instabilities in the ratios. The SSIM is calculated between two N×N neighbourhoods of the original and denoised images. The overall perceptual similarity of an image is obtained by taking the mean of the SSIM (MSSIM) over several neighbourhoods.

Correlation coefficient: The correlation coefficient is used to compare two images:

    r = Σ_m Σ_n (A_mn − Ā)(B_mn − B̄) / sqrt( Σ_m Σ_n (A_mn − Ā)² · Σ_m Σ_n (B_mn − B̄)² )    (24)

where Ā = mean2(A) and B̄ = mean2(B). The correlation coefficient has the value r = 1 if the images are absolutely identical, r = 0 if they are completely uncorrelated, and r = −1 if they are completely anti-correlated.

Table 1 shows the performance comparison of the denoising algorithms based on the estimated PSNR, WPSNR, SSIM and correlation coefficient values of FastICA and PCA. The estimated values in Table 1 indicate that the independent component analysis technique is the more effective denoising method compared to PCA.

Fig 4. Comparison of PSNR values of the FastICA and PCA denoising algorithms
Fig 5. Comparison of SSIM values of the FastICA and PCA denoising algorithms

Table 1. Performance comparison of denoising algorithms

Image | Original SNR | PCA PSNR | PCA Corr. coeff. | PCA WPSNR | PCA SSIM | FastICA PSNR | FastICA Corr. coeff. | FastICA WPSNR | FastICA SSIM
1  | 26.6987 | 27.2961 | 0.9916 | 38.8532 | 0.7397 | 33.0141 | 0.99771 | 44.6329 | 0.8999
2  | 26.7002 | 27.4057 | 0.9836 | 38.7727 | 0.7883 | 33.1462 | 0.9955  | 44.7029 | 0.9239
3  | 26.6835 | 27.3481 | 0.9835 | 38.8753 | 0.5981 | 33.0279 | 0.9954  | 44.5311 | 0.8301
4  | 26.7002 | 27.1891 | 0.9928 | 38.7607 | 0.7055 | 33.0410 | 0.9980  | 44.5141 | 0.8871
5  | 26.6987 | 27.2166 | 0.9915 | 38.7856 | 0.8092 | 32.9141 | 0.9976  | 44.3741 | 0.9250
6  | 26.6899 | 27.7531 | 0.9840 | 38.7846 | 0.8907 | 32.8258 | 0.9949  | 44.6122 | 0.9633
7  | 26.6834 | 27.1938 | 0.9779 | 38.7634 | 0.9254 | 32.1453 | 0.9928  | 43.6400 | 0.9764
8  | 26.7226 | 27.0580 | 0.9831 | 38.9210 | 0.6513 | 33.1550 | 0.9957  | 44.8423 | 0.8703
9  | 26.6987 | 27.0928 | 0.9841 | 38.7111 | 0.9630 | 32.0107 | 0.9947  | 41.9592 | 0.9876
10 | 26.6899 | 27.6035 | 0.9892 | 38.8047 | 0.9004 | 32.7465 | 0.9966  | 43.9275 | 0.9664
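A sketch of how the SSIM and correlation-coefficient values reported in Table 1 might be computed; the SSIM call uses scikit-image's structural_similarity (an assumption, the paper does not name a library), and the correlation coefficient follows Eq. (24).

```python
import numpy as np
from skimage.metrics import structural_similarity

def correlation_coefficient(A, B):
    """Correlation coefficient r per Eq. (24)."""
    A = A.astype(float) - A.mean()
    B = B.astype(float) - B.mean()
    return (A * B).sum() / np.sqrt((A ** 2).sum() * (B ** 2).sum())

# Usage sketch: compare an original image with its denoised estimate.
original = np.random.randint(0, 256, (128, 128)).astype(float)
denoised = np.clip(original + np.random.randn(128, 128) * 5.0, 0, 255)

mssim = structural_similarity(original, denoised, data_range=255)  # mean SSIM over local windows
r = correlation_coefficient(original, denoised)
print(mssim, r)
```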
Fig 6. Comparison of WPSNR values of the FastICA and PCA denoising algorithms

VI. CONCLUSION

This paper presented PCA- and FastICA-based CFA image denoising schemes for single-sensor digital camera imaging applications. To fully exploit the spatial and spectral correlations of the CFA sensor readings during the denoising process, the mosaic samples from different color channels were localized by a supporting window to constitute a vector variable, whose statistics were calculated to find the PCA and ICA transform matrices. Synthetic noise was generated so that the algorithmic differences could be evaluated, and a K-means clustering approach was considered. The denoising performance was evaluated with the PSNR, WPSNR, SSIM and correlation coefficient values. In comparison with the other method, the image denoised by ICA has a good signal-to-noise ratio, and its structural similarity index is close to that of the original image. Thus the ICA method provides a better-quality image that is closer to human perception.

REFERENCES

[1] L. Zhang, R. Lukac, X. Wu, and D. Zhang, "PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras," IEEE Transactions on Image Processing, vol. 18, no. 4, Apr. 2009.
[2] L. Zhang and X. Wu, "Color demosaicking via directional linear minimum mean square-error estimation," IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2167–2178, Dec. 2005.
[3] X. Wu, D. Gao, G. Shi, and D. Liu, "Color demosaicking with sparse representations," in Proceedings of the 2010 IEEE 17th International Conference on Image Processing, Hong Kong, Sep. 26–29, 2010.
[4] K. Hirakawa and T. W. Parks, "Adaptive homogeneity-directed demosaicing algorithm," IEEE Transactions on Image Processing, vol. 14, no. 3, pp. 360–369, Mar. 2005.
[5] R. Lukac and K. N. Plataniotis, "Color filter arrays: design and performance analysis," IEEE Transactions on Consumer Electronics, vol. 51, no. 11, pp. 1260–1267, Nov. 2005.
[6] Liu Jicheng and Zhang Yi, "Image denoising based on independent component analysis," in International Conference on Intelligent Human-Machine Systems and Cybernetics, 2009.
[7] M. P. Arakeri and G. Ram Mohana Reddy, "A comparative performance evaluation of independent component analysis in medical image denoising," in IEEE International Conference on Recent Trends in Information Technology (ICRTIT 2011), MIT, Anna University, Chennai, Jun. 3–5, 2011.
[8] P. Gruber, F. J. Theis, A. M. Tomé, and E. W. Lang, "Automatic denoising using local independent component analysis," in EIS 2004, Island of Madeira, Portugal, 2004.
[9] Hong-yan Li, Guang-long Ren, and Bao-jin Xiao, "Image denoising algorithm based on independent component analysis," in World Congress on Software Engineering, 2009.
[10] R. Chitta, R. Jin, T. C. Havens, and A. K. Jain, "Approximate kernel k-means: solution to large scale kernel clustering," in KDD'11, San Diego, California, USA, Aug. 21–24, 2011.
[11] T. Hitendra Sarma, P. Viswanath, and B. Eswara Reddy, "A fast approximate kernel k-means clustering method for large data sets," IEEE, 2011.
