
- 1. Prepared by: AMR NASR
- 2. Introduction • Developments in the field of sensing technology • Multi-sensor systems appear in many applications, such as remote sensing, medical imaging, and the military • The result is an increase in the data available • Can we reduce this increasing volume of data while simultaneously extracting all useful information?
- 3. Basics of image fusion • The aim of image fusion is to: • Reduce the amount of data • Retain important information • Create a new image that is better suited to human/machine perception or to further processing tasks
- 4. Single-sensor image fusion system • A sequence of images is taken by one sensor • The images are then fused into a single image • It has limitations due to the capability of the sensor
- 5. Multi-sensor image fusion • Images are taken by more than one sensor • They are then fused into a single image • It overcomes the limitations of a single-sensor system
- 6. Fusion Camera used in Avatar
- 7. DARPA unveils gigapixel camera • The gigapixel camera, in a manner similar to a parallel-processor supercomputer, uses between 100 and 150 micro cameras to build a wide-field panoramic image. These small cameras handle local aberration and focus correction, providing extremely high resolution combined with smaller system volume and less distortion than traditional wide-field lens systems.
- 8. 360-degree panoramic camera for the police
- 9. Types of image fusion • Multi-view fusion: images are taken from different viewpoints to build a 3D view • Multi-modal fusion • Multi-focus fusion
- 10. Multi-modal fusion
- 11. Multi-focus fusion
- 12. System-level considerations • Three key non-fusion processes: • Image registration • Image pre-processing • Image post-processing
- 13. • The post-processing stage depends on the type of display, the fusion system being used, and the personal preference of a human operator • Pre-processing makes the images best suited for the fusion algorithm • Image registration is the process of aligning images so that their details overlap accurately
- 14. Methodology • Feature detection: the algorithm should be able to detect the same features in all images • Feature matching: correspondence is established between the features detected in the sensed image and those detected in the reference image • Image resampling and transformation: the sensed image is transformed
- 15. Methods of image fusion (classification) • Spatial-domain fusion: weighted pixel averaging, Brovey method, principal component analysis, intensity-hue-saturation (IHS) • Transform-domain fusion: Laplacian pyramid, curvelet transform, discrete wavelet transform (DWT)
- 16. Weighted pixel averaging • The simplest image fusion technique • F(x,y) = Wa*A(x,y) + Wb*B(x,y) • where Wa and Wb are scalar weights • It has the advantage of suppressing noise in the source imagery • However, it also suppresses salient image features, inevitably producing a low-contrast fused image with a 'washed-out' appearance
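The weighted-averaging rule F(x,y) = Wa*A(x,y) + Wb*B(x,y) is simple enough to sketch directly. A minimal numpy version (function name and the example arrays are illustrative, not from the slides):

```python
import numpy as np

def weighted_average_fusion(a, b, wa=0.5, wb=0.5):
    """Fuse two registered source images by weighted pixel averaging:
    F(x, y) = Wa * A(x, y) + Wb * B(x, y)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    if a.shape != b.shape:
        raise ValueError("source images must be registered to the same shape")
    return wa * a + wb * b

a = np.array([[10.0, 20.0], [30.0, 40.0]])
b = np.array([[30.0, 40.0], [50.0, 60.0]])
fused = weighted_average_fusion(a, b)  # pixel-wise mean of the two inputs
```

Note how averaging pulls every pixel toward the mean of the two sources, which is exactly why the slide warns about the 'washed-out' appearance.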
- 17. Pyramidal method • Produces sharp, high-contrast images that are clearly more appealing and carry greater information content than simpler ratio-based schemes • An image pyramid is essentially a data structure consisting of a series of low-pass or band-pass copies of an image, each representing pattern information at a different scale
- 18. Flow of pyramidal method
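The pyramid flow above can be sketched in numpy. This is a deliberately simplified Laplacian-pyramid fusion: 2x2 block averaging stands in for the Gaussian low-pass and nearest-neighbour expansion for the upsampling (both assumptions; real implementations use Gaussian kernels), with the stronger detail coefficient selected at each band-pass level:

```python
import numpy as np

def down2(img):
    # 2x2 block averaging: a crude low-pass filter plus decimation
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up2(img, shape):
    # nearest-neighbour expansion back to the finer grid
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=2):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = down2(cur)
        pyr.append(cur - up2(low, cur.shape))  # band-pass (detail) layer
        cur = low
    pyr.append(cur)                            # coarsest low-pass residual
    return pyr

def fuse_laplacian(a, b, levels=2):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)  # keep the stronger detail
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))                # average the residuals
    out = fused[-1]
    for lap in reversed(fused[:-1]):                     # collapse the pyramid
        out = up2(out, lap.shape) + lap
    return out
```

Because each detail layer stores exactly the difference removed by downsampling, collapsing the pyramid of a single image reconstructs it exactly.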
- 19. Discrete wavelet transform method • It represents an arbitrary function x(t) as a superposition of basis functions (wavelets), derived from a mother wavelet by dilations/contractions (scaling) and translations (shifts)
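A common DWT fusion recipe (an illustrative sketch, not necessarily the exact rule used in the presentation) is to average the approximation subbands and take the larger-magnitude coefficient in each detail subband. Using a hand-rolled single-level 2-D Haar transform so the example stays self-contained:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: approximation + 3 detail subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # approximation
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def dwt_fuse(x, y):
    cx, cy = haar_dwt2(x.astype(float)), haar_dwt2(y.astype(float))
    ll = 0.5 * (cx[0] + cy[0])  # average the approximations
    details = [np.where(np.abs(dx) >= np.abs(dy), dx, dy)  # max-abs detail rule
               for dx, dy in zip(cx[1:], cy[1:])]
    return haar_idwt2(ll, *details)
```

Production code would typically use a library such as PyWavelets and more than one decomposition level; the selection rule per subband is the part that varies between fusion schemes.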
- 20. Medical image fusion • Helps physicians extract features from multi-modal images • Two types: structural (MRI, CT) and functional (PET, SPECT)
- 21. Objectives of image fusion in remote sensing • Improve the spatial resolution • Improve the geometric precision • Enhance the capabilities of feature display • Improve classification accuracy • Enhance the capability of change detection • Replace or repair defective image data • Enhance visual interpretation
- 22. Dual-resolution images in satellites • Several commercial earth-observation satellites carry dual-resolution sensors of this kind, which provide high-resolution panchromatic images (HRPIs) and low-resolution multispectral images (LRMIs). For example, the first commercial high-resolution satellite, IKONOS, launched on September 24, 1999, produces 1-m HRPIs and 4-m LRMIs.
- 23. Principles of several existing image fusion methods used in remote sensing • Multiresolution analysis-based intensity modulation • À trous algorithm-based wavelet transform • Principal component analysis • High-pass modulation • High-pass filtering • Brovey transform • IHS transform
- 24. Relationship between low-resolution pixel and the corresponding high-resolution pixels
- 25. Each low-resolution pixel value (or radiance) can be treated as a weighted average of the corresponding high-resolution pixel values
- 26. Brovey transform • The BT is based on the chromaticity transform • It is a simple method for combining data from different sensors, with the limitation that only three bands are involved • Its purpose is to normalize the three multispectral bands used for RGB display and to multiply the result by any other desired data to add the intensity or brightness component to the image
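The Brovey normalization described above is commonly written as fused_k = MS_k * PAN / (MS_1 + MS_2 + MS_3). A numpy sketch (the band layout and epsilon guard are my assumptions):

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-12):
    """Brovey transform pansharpening.
    ms:  (3, H, W) multispectral bands resampled to the PAN grid
    pan: (H, W)    high-resolution panchromatic band
    Each band is normalized by the band sum, then modulated by PAN."""
    ms = np.asarray(ms, dtype=float)
    pan = np.asarray(pan, dtype=float)
    intensity = ms.sum(axis=0) + eps  # eps avoids division by zero
    return ms * (pan / intensity)
```

Because only the three-band sum appears in the denominator, this is where the "only three bands" limitation on the slide comes from.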
- 27. IHS transform • The IHS technique is a standard procedure in image fusion, with the major limitation that only three bands are involved • Originally, it was based on the RGB true-color space
- 28. High-pass filtering • The principle of HPF is to add the high-frequency information from the HRPI to the LRMIs to get the HRMIs • The high-frequency information is computed by filtering the HRPI with a high-pass filter, or by taking the original HRPI and subtracting the LRPI, which is the low-pass filtered HRPI • This method preserves a high percentage of the spectral characteristics, since the spatial information is associated with the high-frequency information of the HRMIs, which comes from the HRPI, and the spectral information is associated with the low-frequency information of the HRMIs, which comes from the LRMIs • The mathematical model is
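The HPF model the slide describes is commonly written HRMI_k = LRMI_k + (HRPI - LRPI), where LRPI is the low-pass filtered HRPI. A numpy sketch, using a plain box filter as the low-pass (an assumption; the filter actually used in the source paper may differ):

```python
import numpy as np

def box_lowpass(img, k=3):
    """k x k box low-pass filter with edge padding."""
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpf_fusion(lrmi, hrpi, k=3):
    """HRMI_k = LRMI_k + (HRPI - LRPI): inject the PAN high frequencies."""
    lrpi = box_lowpass(hrpi, k)
    return np.asarray(lrmi, dtype=float) + (np.asarray(hrpi, dtype=float) - lrpi)
```

On a perfectly flat PAN image the high-pass term vanishes and the multispectral band passes through unchanged, which matches the slide's claim that spectral content comes entirely from the LRMIs.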
- 29. High-pass modulation • The principle of HPM is to transfer the high-frequency information of the HRPI to the LRMIs, with modulation coefficients that equal the ratio between the LRMIs and the LRPI • The LRPI is obtained by low-pass filtering the HRPI • The equivalent mathematical model is
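The HPM model described above is commonly written HRMI_k = LRMI_k * HRPI / LRPI, i.e. each band is modulated by its ratio to the low-pass filtered PAN. A self-contained numpy sketch (box filter and epsilon guard are my assumptions):

```python
import numpy as np

def box_lowpass(img, k=3):
    """k x k box low-pass filter with edge padding."""
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpm_fusion(lrmi, hrpi, k=3, eps=1e-12):
    """HRMI_k = LRMI_k * HRPI / LRPI: multiplicative (ratio) injection."""
    lrpi = box_lowpass(hrpi, k)
    return np.asarray(lrmi, dtype=float) * np.asarray(hrpi, dtype=float) / (lrpi + eps)
```

Unlike HPF, the injected detail is scaled per pixel by LRMI_k / LRPI, which is exactly the modulation-coefficient distinction drawn in the comparison slide later on.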
- 30. Principal component analysis • The PCA method is similar to the IHS method, with the main advantage that an arbitrary number of bands can be used • The input LRMIs are first transformed into the same number of uncorrelated principal components • Then, similar to the IHS method, the first principal component (PC1) is replaced by the HRPI, which is first stretched to have the same mean and variance as PC1 • As a last step, the HRMIs are determined by performing the inverse PCA transform
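The PC1-substitution procedure on this slide can be sketched with numpy's eigendecomposition (a minimal illustration; function name and the small epsilon guard are mine):

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA pansharpening: replace PC1 with the mean/variance-matched PAN.
    ms:  (bands, H, W) LRMIs resampled to the PAN grid
    pan: (H, W)        high-resolution panchromatic band"""
    bands, h, w = ms.shape
    x = ms.reshape(bands, -1).astype(float)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    cov = np.cov(xc)                       # band-by-band covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]  # eigenvectors, PC1 first
    pcs = vecs.T @ xc                       # forward PCA transform
    p = pan.reshape(-1).astype(float)
    # stretch PAN to PC1's mean and variance before substitution
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p
    fused = vecs @ pcs + mean               # inverse PCA transform
    return fused.reshape(bands, h, w)
```

Because the decomposition works on the full band covariance, nothing in the code restricts the number of bands, which is the advantage over IHS noted on the slide.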
- 31. (Equation slide: definition of the PCA transformation matrix)
- 32. À trous algorithm-based wavelet transform • It is based on the wavelet transform and is particularly suitable for signal processing, since it is isotropic and shift-invariant and does not create artifacts when used in image processing • Its application to image fusion has been reported in the literature • The ATW method is given by
- 33. Multiresolution analysis-based intensity modulation • MRAIM was proposed by Wang • It follows the GIF method, with the major advantage that it can be used for the fusion case in which the ratio is an arbitrary integer M, with a very simple scheme • The mathematical model is
- 34. Comparisons • 1. The IHS, BT, and PCA methods use a linear combination of the LRMIs to compute the LRPIs, with different coefficients • 2. The HPF, HPM, ATW, and MRAIM methods compute the LRPIs by low-pass filtering the original HRPI with different filters • 3. The BT, HPM, and MRAIM methods use modulation coefficients equal to the ratios between the LRMIs and the LRPI, whereas the IHS, HPF, ATW, and PCA methods simplify the modulation coefficients to constant values for all pixels of each band
- 35. It is obvious that the IHS and PCA methods belong to class 1, the BT method belongs to class 2, the HPF and ATW methods belong to class 3, and the HPM and MRAIM methods belong to class 4. The performance of each image fusion method is determined by two factors: how the LRPI is computed and how the modulation coefficients are defined.
- 36. Experiments and results • 1. The IHS and BT methods can only handle three bands • 2. In order to evaluate the NIR band as well, we selected the red-green-blue combination for true natural color and the NIR-red-green combination for false color • 3. For comparison, the NIR band can be used alongside the red, green, and blue components
- 37. Results
- 38. Original HRPI (panchromatic band); original LRMIs (RGB, resampled at 1-m pixel size)
- 39. Results of the IHS method and the BT method
- 40. Results of the PCA method and the HPF method
- 41. Results of the HPM method and the ATW method
- 42. Result of the MRAIM method
- 43. • 4. MRAIM looks better than the other methods • 5. MRAIM looks better than the HPM method in spatial quality • 6. The correlation coefficient (CC) is the most popular similarity metric in image fusion; however, CC is insensitive to a constant gain and bias between two images and does not allow subtle discrimination of possible fusion artifacts
- 44. • Recently, a universal image quality index (UIQI) has been used to measure the similarity between two images; in this experiment, we used the UIQI to measure similarity • The UIQI is designed by modeling any image distortion as a combination of three factors: loss of correlation, radiometric distortion, and contrast distortion • It is defined as follows
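The UIQI definition the slide points to is Wang and Bovik's index; in its global form, Q = 4*sigma_xy*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2)), which factors into exactly the correlation, luminance, and contrast terms listed above. A numpy sketch (the sliding-window variant used in practice averages this quantity over local windows):

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index (global version).
    Q = 4*cov(x,y)*mx*my / ((vx+vy) * (mx^2+my^2)),
    combining correlation loss, luminance distortion, and contrast distortion."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()  # covariance between the two images
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

Q ranges over [-1, 1] and reaches 1 only when the two images are identical, which is what makes it more discriminating than the plain correlation coefficient criticized on the previous slide.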
- 45. UIQI measurement of similarity between the degraded fused image and the original image at the 4-m resolution level
- 46. UIQIs for the resultant images and the original LRMIs at 4 m (fusion at the inferior level)
- 47. This may be because all the methods provide good results in the NIR band, so the difference is very small, while the spatial degradation process influences the final result differently for different fusion methods
- 48. Subscenes of the original LRMIs and the resulting fused HRMIs produced by the different methods (double zoom). (Left-to-right sequence, row by row) Original LRMIs, IHS, BT, PCA, HPF, HPM, ATW, and MRAIM.
- 49. Conclusion • The performance of each method is determined by two factors: how the LRPI is computed and how the modulation coefficients are defined • If the LRPI is approximated from the LRMIs, it usually has a weak correlation with the HRPI, leading to color distortion in the fused image • If the LRPI is a low-pass filtered HRPI, it usually shows less spectral distortion • Combining the visual inspection results with the quantitative results shows that the experiments are in conformity with the theoretical analysis, and that the MRAIM method produces the synthesized images closest to those the corresponding multispectral sensors would observe at the high-resolution level
- 50. THANK YOU
