R.J.Sapkal, S.M.Kulkarni / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 5, September-October 2012, pp. 624-627

Image Fusion based on Wavelet Transform for Medical Application

R.J.Sapkal (1), S.M.Kulkarni (2)
1, 2 (MAEER's MIT College of Engineering, Kothrud, Pune, India)

Abstract
Image fusion is a technique that integrates complementary information from multiple images so that the fused image is more suitable for processing tasks. The paper starts with the study of initial concepts of image fusion. It then proposes the fusion of images from different sources using a multiresolution wavelet transform, with preprocessing of the source images. The fused image carries more complete information, which is useful for human or machine perception, and this richer information improves the performance of image analysis algorithms for medical applications.

Keywords – CT, DWT, fusion, MR.

I. INTRODUCTION
Information is used in many forms to solve problems and monitor conditions. When information from multiple sources is combined, it is essentially used to derive or infer more reliable information. However, there is usually a point of diminishing returns after which more information provides little improvement in the final result. Deciding which information to combine, and how, is an area of research called data fusion. In many cases the problem is ill defined when the data is collected; more information is gathered in the hope of better understanding the problem and ultimately arriving at a solution. Large amounts of information are hard to organize, evaluate and utilize, so less information giving the same or a better answer is desirable. Data fusion attempts to combine data such that more information can be derived from the combined sources than from the separate sources. Data fusion techniques combine data and related information from associated databases to achieve improved accuracy and more specific inferences.

The image fusion system processing flow is shown in Fig. 1. Image fusion is the synthesis of multi-source image information retrieved from different sensors. The images are first registered to ensure that corresponding pixels are aligned properly; afterwards they are fused using any of the transforms discussed below. The result is a fused image that is more accurate, complete and reliable, and it can lead to smaller data size and more efficient target detection, target identification and situation estimation for observers. Fusion can also make images more suitable for computer vision tasks and follow-up image processing [1].

With many new sensor types appearing, image acquisition capability is increasing rapidly, and the images are generated by sensors with different physical characteristics. Because image data retrieved by different sensors differs markedly in geometric, spectral, temporal and spatial resolution, it is hard to satisfy practical requirements with just one type of image data. To obtain a more complete and accurate understanding of the target, one naturally looks for a technique that exploits the different types of image data together; combining their advantages is both important and practical. For medical image fusion in particular, fusing images can often reveal additional clinical information not apparent in the separate images. Another advantage is reduced storage cost: only the single fused image needs to be stored instead of the multi-source images. Many techniques for image fusion have been proposed; a thorough overview of these methods can be found in reference [3].

The fusion of multiple measurements can reduce noise and thereby overcome the limitations of the individual measurements. The fused image should preserve as closely as possible all relevant information contained in the input images, and the fusion process should not introduce any artifacts or inconsistencies, which could distract or mislead the medical professional and thereby cause a wrong diagnosis.

II. PREPROCESSING OF IMAGE FUSION
Two images of a scene taken from different angles are mutually distorted: most objects are the same, but their shapes change a little. Before fusing the images, we have to make sure that each pixel in one image corresponds to the right pixel in the other; image registration fixes this distortion. Two images of the same scene can be registered by software that connects several control points. After registration, resampling is done to bring each image that is about to be fused to the same dimensions, so that after resampling all images are of the same size.
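As a rough illustration of the resampling step, the sketch below brings two differently sized arrays (stand-ins for already-registered images) to a common size. It uses nearest-neighbour interpolation only, for brevity; the paper does not specify the interpolation scheme, and real pipelines would typically use bilinear or bicubic interpolation. The function name `resample_to` and the toy arrays are illustrative, not from the paper.

```python
import numpy as np

def resample_to(img, shape):
    """Nearest-neighbour resampling of a 2-D image to a target shape.

    Maps each output row/column index back to the nearest source index,
    then gathers those pixels with advanced indexing.
    """
    rows = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[np.ix_(rows, cols)]

# Bring two differently sized images to a common size before fusion.
a = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for image I
b = np.arange(36, dtype=float).reshape(6, 6)   # stand-in for image II
common = (4, 4)
a_r, b_r = resample_to(a, common), resample_to(b, common)
print(a_r.shape == b_r.shape)  # True
```

After this step both arrays can be fed to a pixel-by-pixel fusion rule, which is why the text insists on a common size.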
Several interpolation approaches can be used to resample the images; this matters because most of the fusion approaches we use operate pixel by pixel, and images of the same size are easy to fuse. After resampling, the fusion algorithm is applied. Depending on the algorithm, the images may first have to be transferred into a different domain; an inverse transfer is then necessary if the image has been transferred into another domain. Fig. 1 summarizes these steps, called the preprocessing of image fusion.

Fig. 1. Preprocessing of image fusion.

III. WAVELET BASED IMAGE FUSION
A. Wavelet Theory
Wavelets are finite-duration oscillatory functions with zero average value. Their irregularity and good localization properties make them a better basis for the analysis of signals with discontinuities. Wavelets can be described by two functions: the scaling function φ(t), also known as the 'father wavelet', and the wavelet function or 'mother wavelet'. The mother wavelet ψ(t) undergoes translation and scaling operations to give self-similar wavelet families as in (1):

ψ_{a,b}(t) = (1/√a) ψ((t − b)/a), a > 0,  (1)

where a is the scale parameter and b the translation parameter. Practical implementation of wavelet transforms requires discretisation of the translation and scale parameters by taking

a = a₀^m, b = n·b₀·a₀^m, m, n ∈ ℤ.  (2)

Thus the wavelet family can be defined as

ψ_{m,n}(t) = a₀^{−m/2} ψ(a₀^{−m} t − n·b₀).  (3)

If the discretisation is on a dyadic grid with a₀ = 2 and b₀ = 1, it is called the standard DWT [13]. Wavelet transformation involves constant-Q filtering and subsequent Nyquist sampling, as given by Fig. 2 [9]. An orthogonal, regular filter bank, when iterated infinitely, gives orthogonal wavelet bases [14]. In a DWT implementation the scaling function is treated as a low-pass filter and the mother wavelet as a high-pass filter.

Fig. 2. Two-dimensional subband coding algorithm for the DWT: the source image is decomposed along rows and columns by low-pass (L) and high-pass (H) filtering and subsequent downsampling at each level to give approximation (LL) and detail (LH, HL and HH) coefficients. The scaling function is associated with the smooth (low-pass) filters and the wavelet function with high-pass filtering.

B. Fusion Algorithms
The advent of multiresolution wavelet transforms gave rise to wide developments in image fusion research. Several methods have been proposed for various applications, exploiting the directionality, orthogonality and compactness of wavelets [4], [5], [6]. The fusion process should conserve all important information in the images and should not introduce any artefacts or inconsistencies, while suppressing undesirable characteristics such as noise and other irrelevant details [1], [5].

Fig. 3. Wavelet based image fusion.

The source images, such as CT and MR, are decomposed along rows and columns by low-pass (L) and high-pass (H) filtering and subsequent downsampling at each level to give approximation (LL) and detail (LH, HL and HH) coefficients. Wavelet transforms thus provide a framework in which an image is decomposed, with each level corresponding to a coarser resolution band.

IV. PROPOSED ALGORITHM
The steps involved in this algorithm are as follows:

Step 1: Read the two source images, image I and image II, to be fused and apply them as input for fusion.

Step 2: Perform independent wavelet decomposition of the two images up to level L to get approximation (LL_L) and detail (LH_l, HL_l, HH_l) coefficients for l = 1, 2, ..., L.

Step 3: Apply a pixel-based rule to the approximations, fusing them by taking the maximum-valued pixels from the approximations of source images I and II:

LL_Lf(i, j) = max( LL_LI(i, j), LL_LII(i, j) ).  (4)

Here LL_Lf is the fused approximation, LL_LI and LL_LII are the input approximations, and i and j denote the pixel positions in the sub-images. LH_lf, LH_lI, LH_lII are the vertical high frequencies, HL_lf, HL_lI, HL_lII the horizontal high frequencies, and HH_lf, HH_lI, HH_lII the diagonal high frequencies of the fused and input detail sub-bands respectively.

Step 4: Based on the maximum-valued pixels between the approximations in Eq. (4), a binary decision map is formulated. Eq. (5) gives the decision rule D_f for fusion of the approximation coefficients of the two source images I and II:

D_f(i, j) = 1 if LL_LI(i, j) ≥ LL_LII(i, j), and 0 otherwise.  (5)

Step 5: Thus the final fused transform for the approximations is obtained through the maximum-selection pixel rule.

Step 6: Concatenation of the fused approximations and details gives the new coefficient matrix.

Step 7: Apply the inverse wavelet transform to reconstruct the resultant fused image and display the result.

V. EXPERIMENTAL RESULTS
In medicine, CT and MRI images are both tomographic scanning images, but they have different features. Fig. 4(a) shows a CT image, in which brightness is related to tissue density: the brightness of bone is high, and some soft tissue cannot be seen. Fig. 4(b) shows an MRI image, in which brightness is related to the amount of hydrogen atoms in the tissue: the brightness of soft tissue is high, and bone cannot be seen. The two images therefore contain complementary information. We apply the fusion method described above to these medical images and adopt the same fusion standards.

Fig. 4. Images and their fusion. (a): CT image; (b): MRI image; (c): decomposed image; (d): approximations; (e): horizontal details; (f): vertical details; (g): diagonal details; (h): fused image.
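The decomposition-fusion-reconstruction procedure of Section IV can be sketched end to end in a few lines. The sketch below is restricted to a single decomposition level and the Haar wavelet for brevity; the paper does not fix the wavelet, and the maximum-absolute-value rule used here for the detail bands is a common choice assumed for illustration. The toy arrays merely stand in for registered CT/MR slices, and all function names are the author's own.

```python
import numpy as np

def haar2d(x):
    """One level of the 2-D Haar DWT: returns LL, LH, HL, HH sub-bands."""
    # Filter and downsample along columns (row direction of the data)...
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # ...then along rows, giving the four sub-bands.
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d: upsample and apply the synthesis filters."""
    lo = np.zeros((LL.shape[0] * 2, LL.shape[1]))
    hi = np.zeros_like(lo)
    lo[0::2, :], lo[1::2, :] = (LL + LH) / np.sqrt(2), (LL - LH) / np.sqrt(2)
    hi[0::2, :], hi[1::2, :] = (HL + HH) / np.sqrt(2), (HL - HH) / np.sqrt(2)
    x = np.zeros((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return x

def fuse(img1, img2):
    """Single-level wavelet fusion: maximum-selection rule on the
    approximations (as in Eq. (4)) and a maximum-absolute rule on the
    detail sub-bands (an assumed, commonly used choice)."""
    c1, c2 = haar2d(img1), haar2d(img2)
    LL = np.maximum(c1[0], c2[0])              # max-selection on approximations
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(LL, *details)

# Toy "CT" and "MR" slices: each carries structure the other lacks.
ct = np.zeros((8, 8)); ct[:, :4] = 1.0
mr = np.zeros((8, 8)); mr[:4, :] = 1.0
fused = fuse(ct, mr)
print(fused.shape)  # (8, 8)
```

A multi-level version would recurse `haar2d` on the LL band up to level L, fuse the coefficients at each level, and invert level by level, exactly as Steps 2-7 describe.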
VI. CONCLUSION
This paper puts forward an image fusion algorithm based on the wavelet transform, exploiting the multiresolution analysis ability of the wavelet transform. Image fusion seeks to combine information from different images: it integrates complementary information to give a better visual picture of a scenario, suitable for processing, and produces a single image from a set of input images. It is widely recognized as an efficient tool for improving overall performance in image-based applications. Wavelet transforms provide a framework in which an image is decomposed, with each level corresponding to a coarser resolution band, and the wavelet-sharpened images have very good spectral quality. However, the wavelet transform suffers from noise and artifacts and has low accuracy for curved edges. In imaging applications, images exhibit edges and discontinuities across curves, and in image fusion edge preservation is important for obtaining the complementary details of the input images. Future work therefore includes curvelet-based image fusion, which is well suited to medical images.

REFERENCES
[1] Yijian Pei, Jiang Yu, Huayu Zhou, Guanghui Cai, "The Improved Wavelet Transform Based Image Fusion Algorithm and the Quality Assessment", 3rd IEEE International Congress on Image and Signal Processing, 2010.
[2] Yong Yang, "Multimodal Medical Image Fusion Through a New DWT Based Technique", IEEE Trans. on Medical Imaging, vol. 10, no. 13, 2010.
[3] F. E. Ali, I. M. El-Dokany, A. A. Saad and F. E. Abd El-Samie, "Curvelet Fusion of MR and CT Images", Progress In Electromagnetics Research C, vol. 3, pp. 215-224, 2008.
[4] M. J. Fadili, J.-L. Starck, "Curvelets and Ridgelets", GREYC CNRS UMR 6072, Image Processing Group, ENSICAEN, Caen Cedex, France, October 24, 2007.
[5] Yan Sun, Chunhui Zhao, Ling Jiang, "A New Image Fusion Algorithm Based on Wavelet Transform and the Second Generation Curvelet Transform", IEEE Trans. on Medical Imaging, vol. 18, no. 3, 2010.
[6] Yong Chai, You He, Chalong Ying, "CT and MRI Image Fusion Based on Contourlet Using a Novel Rule", IEEE Trans. on Medical Imaging, vol. 8, no. 3, 2008.
[7] Y. Kirankumar, "Three Band MRI Image Fusion: A Curvelet Transform Approach", Philips HealthCare, Philips Electronics India Ltd., Bangalore.
