Divya Anand K, Mrs. Prathibha Varghese / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com Vol. 3, Issue 1, January-February 2013, pp. 1407-1412

Performance Comparison of Medical Image Fusion Methods Based on Redundant Discrete Wavelet Transform, Wavelet Packet Transform and Contourlet Transform

Divya Anand K, M.Tech VLSI (IV Sem), S.N.G.C.E., Kadayiruppu, Ernakulam, India
Mrs. Prathibha Varghese, Asst. Professor, Dept. of ECE, S.N.G.C.E., Kadayiruppu, Ernakulam, India

Abstract
Image fusion is the process of combining relevant information from two or more images into a single fused image. The resulting image is more informative than any of the input images. Fusion of medical images is necessary for efficient disease diagnosis from multimodality, multidimensional and multiparameter images. This paper describes a multimodality medical image fusion system using different fusion techniques, and the results are analysed with quantitative measures. Initially, registered images from two different modalities, such as CT (anatomical information) and MRI-T2 or FLAIR (pathological information), are taken as input, since diagnosis requires both anatomical and pathological information. Fusion techniques based on the Redundant Discrete Wavelet Transform (RDWT), the Wavelet Packet Transform and the Contourlet Transform are then applied. The fused image is analysed with quantitative metrics, namely Standard Deviation (SD), Entropy (EN) and Signal to Noise Ratio (SNR), for performance evaluation. The experimental results show that the RDWT method provides better information quality on the SD and SNR metrics, while the Contourlet Transform method provides better information quality on the EN metric.

Keywords — Contourlet Transform, Entropy, SD, SNR.

I. INTRODUCTION
Image fusion aims at synthesizing information from multiple source images to obtain a more accurate, complete and reliable fused image of the same scene or target. Compared with the original inputs, the fused image is more suitable for observation, analysis, understanding and recognition. Structural images such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Ultrasonography (USG) and Magnetic Resonance Angiography (MRA) provide high-resolution images with anatomical information. On the other hand, functional images such as Positron Emission Tomography (PET), Single-Photon Emission Computed Tomography (SPECT) and functional MRI (fMRI) provide low-spatial-resolution images with functional information. A single modality of medical image cannot provide comprehensive and accurate information. As a result, combining anatomical and functional medical images through image fusion (IF) to provide much more useful information has become a focus of imaging research and processing.

In the research on fusion techniques, many approaches have been proposed and implemented. Some of these methods are available in the Fusion Tool of Matlab 5.0, namely the Filter-Subtraction-Decimate (FSD) Pyramid, Gradient Pyramid, Laplacian Pyramid, Principal Component Analysis, Morphological Pyramid, Ratio Pyramid, Contrast Pyramid and so on [1]. Each of these approaches has its own limitations in the fusion process. With the evolution of the Discrete Wavelet Transform (DWT), many fusion techniques based on the DWT were also proposed. One of the major drawbacks of the DWT is that the transformation is not shift invariant: even minor shifts in the input image cause major changes in the wavelet coefficients. The Redundant Discrete Wavelet Transform (RDWT), another variant of the wavelet transform, is used to overcome the shift-variance problem of the DWT [2]. It has been applied in different signal processing applications but is not well researched in the field of medical image fusion. Recently, a multi-dimensional signal processing theory called Multiscale Geometric Analysis (MGA) has been developed, which includes the ridgelet, curvelet, bandelet, brushlet, etc. Since 2001, the latest MGA tool, the contourlet, proposed by M. N. Do and M. Vetterli, has received wide attention [3]. The contourlet offers a highly efficient image representation owing to its good properties: multiresolution, localization, directionality and anisotropy. In this work, these transforms are applied and quantitative analysis is carried out on the fused image.

The remaining sections of this paper are organized as follows. In Section II, the system design is briefly reviewed. Section III describes the quantitative metrics, and Section IV deals with the experimental results and evaluates the performance of the proposed methods based on the quantitative metrics.

II. SYSTEM DESIGN
In this system, two different modalities, CT (anatomical) and MRI (pathological), are initially given as input. The fusion techniques are then applied to the registered images to obtain a more informative fused image. The fused image is validated using quantitative measures.

A. Redundant Discrete Wavelet Transform and Wavelet Packet Transform
The same algorithm is used for both the RDWT-based and the wavelet packet transform based image fusion. Let I1 and I2 be the registered images of different modalities (CT and T2, or CT and T1). Three levels of RDWT decomposition are performed on both images using Daubechies filters.

In the first stage, image I1 is decomposed into the RDWT sub bands I1a, I1v, I1d and I1h, and likewise I2a, I2v, I2d and I2h are the corresponding RDWT sub bands of image I2. To extract the features from both images, the coefficients of the approximation bands of I1 and I2 are averaged:

    IFa(x, y) = ( I1a(x, y) + I2a(x, y) ) / 2    (1)

where IFa is the approximation band of the fused image.

In the next stage, each detail sub band (LH, HL, HH) is divided into blocks of size 3 x 3 and the entropy of each block is calculated, as in equation (2), where j (= v, d, h) denotes the sub band, m = 3 is the size of each block, k is the block number, and i (= 1, 2) distinguishes the two multimodal images I1 and I2. Using the entropy values, the detail sub bands of the fused image, IFv, IFd and IFh, are generated as in (3): to derive a fused block IFjk, the RDWT coefficients from I1 are selected if the entropy of that block of I1 is greater than the entropy of the corresponding block of I2; otherwise the block from I2 is selected.

    IFjk = I1jk, if e1jk > e2jk
    IFjk = I2jk, if e1jk <= e2jk    (3)

Finally, the inverse transform is applied to all four fused sub bands to obtain the resultant fused image.

B. Contourlet Transform
The Contourlet transform is a multiresolution and multidirectional transform that provides sparse expansions of images containing smooth contours, by applying a double filter bank structure called the Pyramidal Directional Filter Bank (PDFB). Implementation of the CT is achieved in two major steps: first, the Laplacian pyramid is used to capture point discontinuities; second, the Directional Filter Bank (DFB) is used to link the discontinuities into linear structures [3]. The directional filter bank is designed to capture the high-frequency components representing directionality, while the Laplacian pyramid is used to avoid leaking of low-frequency components into the directional sub bands. The directional information is therefore captured efficiently [6].

Laplacian Pyramid
The Laplacian Pyramid (LP), as introduced by Burt et al., is a multiscale decomposition filter bank which produces two output images at each decomposition level. The first output image represents the low-frequency (lowpass) components of the original image. The second is the difference between the original input image and the prediction, as illustrated in Figure 1. The input image is processed with two filters, H and G, which are the analysis and synthesis filters respectively. The LP decomposition produces two output images, a1 and a2, where a1 is the coarse approximation and a2 is the difference between the original image and the prediction. As Figure 1 also shows, the LP avoids scrambled frequencies by down-sampling only the output of the lowpass filter. This process is iterated to produce the next decomposition level.

Figure 1. Laplacian Pyramid. (a) One level of decomposition. (b) Reconstruction.
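The RDWT fusion scheme above (average the approximation bands per equation (1), entropy-based 3 x 3 block selection for the detail bands per equation (3)) can be sketched as follows. This is a minimal illustration, not the authors' code: the block-entropy definition below stands in for the paper's equation (2), which is not reproduced in the source text, and the sub-band arrays are assumed to come from a redundant (stationary) wavelet decomposition such as PyWavelets' swt2.

```python
import numpy as np

def block_entropy(block, bins=16):
    # Shannon entropy of the block's coefficient histogram.
    # NOTE: the paper's exact entropy formula (eq. 2) is not reproduced in
    # the text; a histogram-based Shannon entropy is assumed as a stand-in.
    hist, _ = np.histogram(block, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_detail_band(d1, d2, m=3):
    """Entropy-based block selection (eq. 3) on one detail sub band.

    For each m x m block, keep the block of the source whose block entropy
    is larger (ties go to the second source, matching e1 <= e2 in eq. 3)."""
    fused = d2.copy()
    H, W = d1.shape
    for r in range(0, H, m):
        for c in range(0, W, m):
            b1 = d1[r:r + m, c:c + m]
            b2 = d2[r:r + m, c:c + m]
            if block_entropy(b1) > block_entropy(b2):
                fused[r:r + m, c:c + m] = b1
    return fused

def fuse_subbands(approx1, approx2, details1, details2):
    """Fuse RDWT sub bands: average the approximation bands (eq. 1) and
    apply block-wise entropy selection to each detail band (eq. 3)."""
    fused_approx = (approx1 + approx2) / 2.0
    fused_details = [fuse_detail_band(a, b) for a, b in zip(details1, details2)]
    return fused_approx, fused_details
```

In a full pipeline the fused sub bands would then be passed to the corresponding inverse transform (e.g. pywt.iswt2) to obtain the fused image.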
Directional Filter Bank
In 1992, Bamberger and Smith constructed a 2-D DFB that can be maximally decimated while achieving perfect reconstruction. It is used in the second stage of the CT to link the edge points into linear structures, which involves modulating the input image and using quincunx filter banks (QFB) with diamond-shaped filters. An l-level tree-structured DFB is equivalent to a 2^l parallel-channel filter bank with equivalent filters and overall sampling matrices, as shown in Figure 2. Corresponding to the indexed sub bands, the equivalent analysis and synthesis filters are denoted Hk and Gk, 0 <= k < 2^l. An l-level DFB generates a perfect directional basis for discrete signals in l2(Z2), composed of the impulse responses of the 2^l directional synthesis filters and their shifts.

Figure 2. Construction of the DFB.

The main advantage of the CT is the flexibility in choosing the number of directions at each level. Also, the use of iterated filter banks increases its computational efficiency. CT-based techniques successfully deal with images containing contours and textures, as the CT exhibits very high directionality and anisotropy. These characteristics are very important when processing remote sensing images, which contain contours and lines (i.e., roads).

Here images A and B denote the input source images, CT and MRI respectively, and F is the final fused outcome after the inverse Contourlet transform. In the Contourlet transform based image fusion method, the local energy is used as the measurement, and selection and averaging modes are used to compute the final coefficients. The same rule is used for both the low-frequency and the high-frequency sub band fusion. The local energy is calculated centring the current coefficient in the sub band a:

    E(x, y) = sum over (m, n) of WL(m, n) * a(x + m, y + n)^2    (4)

where (x, y) denotes the position of the current contourlet coefficient and WL(m, n) is a template of size 3 x 3.

Then the salience factor is calculated, as in (5), to determine whether the selection mode or the averaging mode is to be used in the fusion process, where ajX(x, y), X = A, B, denotes the contourlet coefficients (low pass or high pass) of source image A or B, and MjAB(x, y) is the salience factor. The salience factor reflects the similarity of the low-pass or high-pass sub bands of the two source images. This value is compared to a predefined threshold TL (TL = 1). If MjAB(x, y) > TL, the averaging mode is selected for fusion:

    ajF(x, y) = alphaA * ajA(x, y) + alphaB * ajB(x, y)    (6)

where ajF(x, y) is the fused result at position (x, y). The weights alphaA and alphaB are selected based on the condition specified in (7), where alphaB = 1 - alphaA, alpha_min is in (0, 1) and alpha_min + alpha_max = 1. The selection mode is chosen when MjAB(x, y) <= TL, with the fusion rule keeping, at each position, the coefficient of the source with the larger local energy:

    ajF(x, y) = ajA(x, y), if EA(x, y) >= EB(x, y)
    ajF(x, y) = ajB(x, y), otherwise    (8)
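The local-energy fusion rule of equations (4)-(8) can be sketched as below. This is a hedged reconstruction: the uniform template WL, the salience-factor form and the weight formula follow the standard Burt-Kolczynski match-and-select rule, since the paper's equation images are not reproduced in the source text, and the threshold value t = 0.75 is chosen purely for illustration (the paper states TL = 1).

```python
import numpy as np

def _windowed_sum(a, b, w):
    # Weighted 3x3 neighbourhood sum of a*b around every position.
    pa = np.pad(a, 1, mode="edge")
    pb = np.pad(b, 1, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for m in range(3):
        for n in range(3):
            out += w[m, n] * pa[m:m + a.shape[0], n:n + a.shape[1]] \
                           * pb[m:m + b.shape[0], n:n + b.shape[1]]
    return out

def fuse_band(a, b, t=0.75):
    """Selection/averaging fusion of one contourlet sub band (eqs. 4-8).

    ASSUMPTIONS: uniform 3x3 template WL, Burt-Kolczynski match measure
    and weights; t plays the role of the paper's threshold TL."""
    w = np.ones((3, 3)) / 9.0                      # assumed template WL
    Ea = _windowed_sum(a, a, w)                    # eq. 4 for source A
    Eb = _windowed_sum(b, b, w)                    # eq. 4 for source B
    # Salience (match) factor, eq. 5: similarity of the two sub bands.
    M = 2.0 * _windowed_sum(a, b, w) / (Ea + Eb + 1e-12)
    # Averaging weights, eq. 7: alpha_max goes to the higher-energy source.
    alpha_min = 0.5 - 0.5 * (1.0 - M) / (1.0 - t)
    alpha_a = np.where(Ea >= Eb, 1.0 - alpha_min, alpha_min)
    averaged = alpha_a * a + (1.0 - alpha_a) * b   # eq. 6
    selected = np.where(Ea >= Eb, a, b)            # eq. 8
    # Averaging where the bands are similar (M > t), selection otherwise.
    return np.where(M > t, averaged, selected)
```

When the two sub bands are identical the match measure is 1, both weights collapse to 0.5, and the band passes through unchanged; when they are strongly dissimilar the rule falls back to picking the locally more energetic coefficient.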
The fused image is constructed from ajF(x, y), the contourlet coefficients of the sub bands of the fused image, using the inverse contourlet decomposition.

III. QUANTITATIVE ANALYSIS
Quantitative measurement is performed on the fused images using objective and subjective quality measures; it helps in assessing the information content of the images. This section explains the quantitative metrics used in the analysis of this system.

A. Entropy

    EN = - sum for i = 0 to L of PF(i) * log2 PF(i)    (9)

where PF is the normalized histogram of the fused image to be evaluated and L is the maximum gray level of a pixel in the image. In our tests, L is set to 255 [8].

B. Standard Deviation (SD)
The standard deviation reflects the contrast information of an image: an image with more contrast has a high standard deviation, while a low-contrast image has a low value.

    SD = sqrt( (1 / (M * N)) * sum over (i, j) of ( x(i, j) - xbar )^2 )    (10)

where xbar is the mean pixel value of the M x N image,

    xbar = (1 / (M * N)) * sum over (i, j) of x(i, j)    (11)

C. Signal to Noise Ratio (SNR)
It is defined as the ratio of the mean pixel value to the standard deviation of the pixel values [8]:

    SNR = Mean / Standard Deviation    (12)

IV. EXPERIMENTAL RESULTS AND DISCUSSION
The fusion algorithms based on the redundant discrete wavelet transform, the wavelet packet transform and the contourlet transform are simulated using Matlab 7.10. Simulation results of the fusion techniques are analysed with three input data sets. The input images consist of CT images and three different MRI images (T1, T2 and PD). All images have the same size of 256 x 256 pixels, with 256-level gray scale.

Figure 3. Input data set.
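The three metrics of equations (9)-(12) are straightforward to compute. Below is a minimal NumPy sketch, assuming 8-bit gray-scale images (L = 255); it is an illustration of the formulas, not the authors' Matlab code.

```python
import numpy as np

def entropy(img, L=255):
    # Eq. 9: Shannon entropy of the normalised gray-level histogram.
    hist, _ = np.histogram(img, bins=L + 1, range=(0, L + 1))
    p = hist / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def standard_deviation(img):
    # Eqs. 10-11: root mean squared deviation from the mean pixel value.
    return float(np.sqrt(np.mean((img - img.mean()) ** 2)))

def snr(img):
    # Eq. 12: ratio of the mean pixel value to the standard deviation.
    return float(img.mean() / standard_deviation(img))
```

For a two-level image taking the values 0 and 255 with equal frequency, the entropy is exactly 1 bit and the mean equals the standard deviation, so the SNR is 1.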
Table 1. Entropy values for different data sets

                              Image set 1   Image set 2   Image set 3
  RDWT level 1                4.7280        4.8399        5.0479
  RDWT level 2                4.7554        4.8945        5.1204
  RDWT level 3                4.7708        4.8972        5.1306
  Wavelet packet transform    4.7520        4.8705        5.1178
  Contourlet transform        5.1108        5.2986        5.6416

Figure 4. Fused images: (a) RDWT level 1, (b) RDWT level 2, (c) RDWT level 3, (d) wavelet packet transform, (e) contourlet transform.

Figure 5. Plot of entropy values for different data sets.

Table 2. Standard deviation values for different data sets

                              Image set 1   Image set 2   Image set 3
  RDWT level 1                15.4551       14.8417       22.5408
  RDWT level 2                15.5789       14.9842       22.7693
  RDWT level 3                15.4195       14.8318       22.5586
  Wavelet packet transform    15.7578       15.1322       23.0472
  Contourlet transform        16.4829       16.1886       26.5253
Table 3. SNR values for different data sets

                              Image set 1   Image set 2   Image set 3
  RDWT level 1                1.7242        1.7635        1.7503
  RDWT level 2                1.7105        1.7467        1.7327
  RDWT level 3                1.7281        1.7646        1.7488
  Wavelet packet transform    1.6911        1.7296        1.7118
  Contourlet transform        1.6885        1.7057        1.6798

Figure 6. Plot of standard deviation values for different data sets.

Figure 7. Plot of SNR values for different data sets.

CONCLUSION
In this paper, we have compared medical image fusion algorithms based on three transforms: the redundant discrete wavelet transform, the wavelet packet transform and the contourlet transform. From the experimental results it is observed that the contourlet transform provides the better fused image in terms of the entropy metric, and also in the visual quality of the fused image, even though it has a poorer signal to noise ratio than the other two transforms.

REFERENCES
[1] Firooz Sadjadi, "Comparative Image Fusion Analysis", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 3, June 2005.
[2] Richa Singh, Mayank Vatsa, "Multimodal Medical Image Fusion using Redundant Discrete Wavelet Transform", Proceedings of the Seventh International Conference on Advances in Pattern Recognition, pp. 232-235, February 2009.
[3] S. Rajkumar, S. Kavitha, "Redundancy Discrete Wavelet Transform and Contourlet Transform for Multimodality Medical Image Fusion with Quantitative Analysis", 3rd International Conference on Emerging Trends in Engineering and Technology, November 2010.
[4] L. Yang, B. L. Guo, W. Ni, "Multimodality Medical Image Fusion Based on Multiscale Geometric Analysis of Contourlet Transform", Elsevier Science Publishers, vol. 72, pp. 203-211, December 2008.
[5] Minh N. Do, "The Contourlet Transform: An Efficient Directional Multiresolution Image Representation", IEEE Transactions on Image Processing, vol. 14, pp. 2091-2106, November 2005.
[6] J. Fowler, "The Redundant Discrete Wavelet Transform and Additive Noise", IEEE Signal Processing Letters, vol. 12, pp. 629-632, September 2005.
[7] L. Yang, B. L. Guo, W. Ni, "Multifocus Image Fusion Algorithm Based on Contourlet Decomposition and Region Statistics", Fourth International Conference on Image and Graphics, pp. 707-712, September 2007.
[8] R. Maruthi, R. M. Suresh, "Metrics for Measuring the Quality of Fused Images", International Conference on Computational Intelligence and Multimedia Applications, December 2007.
[9] Robi Polikar, "Wavelet Transforms" (tutorial).