Introduction
• Developments in the field of sensing technology
• Multi-sensor systems appear in many applications such as remote sensing, medical imaging, and military systems
• The result is an increase in the volume of available data
• Can we reduce this increasing volume of information while simultaneously extracting all the useful information?
Basics of image fusion
The aim of image fusion is to:
• Reduce the amount of data
• Retain the important information
• Create a new image that is more suitable for human/machine perception or for further processing tasks
Single-sensor image fusion system
• A sequence of images is taken by one sensor
• The images are then fused into a single image
• The approach is limited by the capabilities of that single sensor
Multi-sensor image fusion
• Images are taken by more than one sensor
• The images are then fused into a single image
• This overcomes the limitations of a single-sensor system
DARPA Unveils Gigapixel Camera
• The gigapixel camera, in a manner similar to a parallel-processor supercomputer, uses between 100 and 150 micro-cameras to build a wide-field panoramic image. These small cameras, with local aberration and focus correction, provide extremely high resolution combined with a smaller system volume and less distortion than traditional wide-field lens systems.
System-level considerations
Three key non-fusion processes:
• Image registration
• Image pre-processing
• Image post-processing
• The post-processing stage depends on the type of display, the fusion system being used, and the personal preference of the human operator
• Pre-processing makes the images best suited to the fusion algorithm
• Image registration is the process of aligning images so that their details overlap accurately
Methodology
• Feature detection: the algorithm should be able to detect the same features in all images
• Feature matching: correspondence is established between the features detected in the sensed image and those detected in the reference image
• Image resampling and transformation: the sensed image is transformed to align with the reference image
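The alignment step above can be sketched, under simplifying assumptions, with phase correlation: a pure global integer translation between two images is recovered from the normalized cross-power spectrum. The helper name `estimate_shift` is hypothetical; real registration pipelines also handle rotation, scale, and subpixel offsets.

```python
import numpy as np

def estimate_shift(reference, sensed):
    """Estimate the integer (row, col) translation aligning `sensed` to
    `reference` via phase correlation (assumes a pure translation)."""
    f_ref = np.fft.fft2(reference)
    f_sen = np.fft.fft2(sensed)
    cross_power = f_sen * np.conj(f_ref)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, drop magnitude
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap shifts beyond half the image size to negative offsets
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, correlation.shape))

# Usage: shift a toy image and recover the offset
ref = np.zeros((32, 32))
ref[8:12, 8:12] = 1.0
moved = np.roll(ref, (3, 5), axis=(0, 1))   # "sensed" image, shifted by (3, 5)
print(estimate_shift(ref, moved))           # (3, 5)
```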
Weighted pixel averaging
• Simplest image fusion technique
• F(x, y) = Wa · A(x, y) + Wb · B(x, y), where Wa and Wb are scalar weights
• Advantage: it suppresses noise in the source imagery
• Disadvantage: it also suppresses salient image features, inevitably producing a low-contrast fused image with a 'washed-out' appearance
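The formula above is a one-liner in practice; this minimal sketch assumes the two images are co-registered and the same size, and uses equal weights by default.

```python
import numpy as np

def weighted_average_fusion(a, b, w_a=0.5, w_b=0.5):
    """Fuse two co-registered images by a per-pixel weighted average:
    F(x, y) = w_a * A(x, y) + w_b * B(x, y); weights normally sum to 1."""
    return w_a * a + w_b * b

a = np.array([[10.0, 20.0], [30.0, 40.0]])
b = np.array([[20.0, 40.0], [10.0, 0.0]])
fused = weighted_average_fusion(a, b)
print(fused)  # [[15. 30.] [20. 20.]]
```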
Pyramidal method
• Produces sharp, high-contrast images that are clearly more appealing and have greater information content than simpler ratio-based schemes
• An image pyramid is essentially a data structure consisting of a series of low-pass or band-pass copies of an image, each representing pattern information at a different scale
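The pyramid data structure can be sketched as below; for brevity this uses plain 2x2 block averaging as a stand-in for proper Gaussian filtering plus downsampling, so it is an illustration of the structure, not a production decomposition.

```python
import numpy as np

def lowpass_pyramid(image, levels):
    """Build a low-pass image pyramid: each level is a coarser, 2x-downsampled
    copy of the previous one (here via simple 2x2 block averaging)."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2   # crop to even size
        img = img[:h, :w]
        coarser = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarser)
    return pyramid

pyr = lowpass_pyramid(np.ones((16, 16)), levels=3)
print([p.shape for p in pyr])  # [(16, 16), (8, 8), (4, 4)]
```

Band-pass (Laplacian) levels would then be differences between successive low-pass levels, which is where pyramid-based fusion rules (e.g. keeping the larger-magnitude detail) operate.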
Discrete wavelet transform method
• Represents any arbitrary function x(t) as a superposition of a set of wavelets (basis functions), derived from a mother wavelet by dilations/contractions (scaling) and translations (shifts)
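As an illustration of the decomposition, here is one level of a 2-D Haar wavelet transform written directly in numpy (the Haar wavelet is the simplest mother wavelet; libraries such as PyWavelets provide many others). `haar_dwt2` is a hypothetical helper name.

```python
import numpy as np

def haar_dwt2(image):
    """One level of a 2-D Haar wavelet transform: returns the approximation
    (LL) and the horizontal/vertical/diagonal detail subbands."""
    a = image[0::2, 0::2]; b = image[0::2, 1::2]
    c = image[1::2, 0::2]; d = image[1::2, 1::2]
    ll = (a + b + c + d) / 4.0          # low-pass approximation
    lh = (a + b - c - d) / 4.0          # horizontal detail
    hl = (a - b + c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print(ll)  # [[ 2.5  4.5] [10.5 12.5]]
```

In wavelet-based fusion, a typical rule is to average the approximation subbands of the source images and keep the larger-magnitude detail coefficients, then invert the transform.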
Medical image fusion
• Helps physicians extract features from multi-modal images
• Two types of modality: structural (MRI, CT) and functional (PET, SPECT)
Objectives of image fusion in remote sensing
• Improve the spatial resolution
• Improve the geometric precision
• Enhance the capabilities of feature display
• Improve classification accuracy
• Enhance the capability of change detection
• Replace or repair defective image data
• Enhance visual interpretation
Dual-resolution images in satellites
Several commercial earth observation satellites carry dual-resolution sensors of this kind, which provide high-resolution panchromatic images (HRPIs) and low-resolution multispectral images (LRMIs). For example, the first commercial high-resolution satellite, IKONOS, launched on September 24, 1999, produces 1-m HRPIs and 4-m LRMIs.
PRINCIPLES OF SEVERAL EXISTING IMAGE FUSION METHODS USED IN REMOTE SENSING
• Multiresolution analysis-based intensity modulation
• À trous algorithm-based wavelet transform
• Principal component analysis
• High-pass modulation
• High-pass filtering
• Brovey transform
• IHS transform
Relationship between a low-resolution pixel and the corresponding high-resolution pixels
• Each low-resolution pixel value (or radiance) can be treated as a weighted average of the high-resolution pixel values it covers
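The relationship above can be demonstrated numerically; this sketch assumes equal weights, so each low-resolution pixel is simply the mean of the M x M high-resolution pixels it covers (M = 4 matches the IKONOS 1-m HRPI vs 4-m LRMI case from the earlier slide).

```python
import numpy as np

# Each low-resolution pixel = average of the M x M high-resolution
# pixels it covers (equal weights assumed for this illustration).
M = 4
hi = np.random.default_rng(0).uniform(0, 255, size=(8, 8))   # high-res image
lo = hi.reshape(8 // M, M, 8 // M, M).mean(axis=(1, 3))      # degrade 8x8 -> 2x2
print(lo.shape)  # (2, 2)
```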
Brovey transform
The BT is based on the chromaticity transform. It is a simple method for combining data from different sensors, with the limitation that only three bands are involved. Its purpose is to normalize the three multispectral bands used for RGB display and to multiply the result by any other desired data to add the intensity or brightness component to the image.
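A minimal sketch of one common BT formulation, assuming the three bands have already been resampled to the panchromatic image's size (the factor 3 preserves the bands' average level; some formulations omit it):

```python
import numpy as np

def brovey_fusion(r, g, b, pan, eps=1e-12):
    """Brovey transform: normalise each band by the three-band sum and
    modulate with the high-resolution panchromatic band:
    F_k = 3 * B_k / (R + G + B) * PAN."""
    total = r + g + b + eps          # eps guards against division by zero
    return (3 * r / total * pan,
            3 * g / total * pan,
            3 * b / total * pan)

r = np.full((2, 2), 30.0); g = np.full((2, 2), 60.0); b = np.full((2, 2), 90.0)
pan = np.full((2, 2), 120.0)
fr, fg, fb = brovey_fusion(r, g, b, pan)
print(fr[0, 0], fg[0, 0], fb[0, 0])  # 60.0 120.0 180.0
```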
IHS Transform
The IHS technique is a standard procedure in image fusion, with the major limitation that only three bands are involved. Originally, it was based on the RGB true color space.
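A sketch of the "fast" IHS variant: with the intensity defined as I = (R + G + B) / 3, replacing I by the PAN image and inverting the transform reduces to adding the difference (PAN − I) to each band. This is an illustrative simplification of the full IHS color-space substitution.

```python
import numpy as np

def fast_ihs_fusion(r, g, b, pan):
    """'Fast' IHS-style fusion: substitute PAN for the intensity
    I = (R + G + B) / 3, equivalent to adding (PAN - I) to each band."""
    intensity = (r + g + b) / 3.0
    delta = pan - intensity
    return r + delta, g + delta, b + delta

r = np.full((2, 2), 30.0); g = np.full((2, 2), 60.0); b = np.full((2, 2), 90.0)
pan = np.full((2, 2), 100.0)
fr, fg, fb = fast_ihs_fusion(r, g, b, pan)
print(fr[0, 0], fg[0, 0], fb[0, 0])  # 70.0 100.0 130.0
```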
High-Pass Filtering
The principle of HPF is to add the high-frequency information from the HRPI to the LRMIs to get the HRMIs. The high-frequency information is computed by filtering the HRPI with a high-pass filter, or by taking the original HRPI and subtracting the LRPI, which is the low-pass filtered HRPI. This method preserves a high percentage of the spectral characteristics, since the spatial information is associated with the high-frequency information of the HRMIs, which comes from the HRPI, while the spectral information is associated with the low-frequency information of the HRMIs, which comes from the LRMIs. The mathematical model is

HRMI_k = LRMI_k + (HRPI − LRPI)
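The HPF scheme can be sketched as below, assuming each band has already been resampled to the HRPI's pixel grid; a simple box filter stands in for whatever low-pass filter a real implementation would use.

```python
import numpy as np

def box_lowpass(image, size=3):
    """Simple box low-pass filter (same-size output, edge padding)."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (size * size)

def hpf_fusion(lrmi, hrpi):
    """HPF fusion: add the HRPI's high-frequency part (HRPI minus its
    low-pass version, the LRPI) to the resampled band:
    HRMI_k = LRMI_k + (HRPI - LRPI)."""
    lrpi = box_lowpass(hrpi)
    return lrmi + (hrpi - lrpi)

# Sanity check: a constant PAN image carries no high frequencies,
# so the band passes through unchanged.
band = np.full((4, 4), 50.0)
print(np.allclose(hpf_fusion(band, np.full((4, 4), 200.0)), band))  # True
```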
High-Pass Modulation
The principle of HPM is to transfer the high-frequency information of the HRPI to the LRMIs, with modulation coefficients that equal the ratio between the LRMIs and the LRPI. The LRPI is obtained by low-pass filtering the HRPI. The equivalent mathematical model is

HRMI_k = LRMI_k + (LRMI_k / LRPI) · (HRPI − LRPI)
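The HPM model above algebraically simplifies to LRMI_k · HRPI / LRPI, which makes the per-pixel modulation coefficient explicit; this sketch assumes the band is already resampled to the HRPI grid and takes the LRPI (low-pass filtered HRPI) as an input.

```python
import numpy as np

def hpm_fusion(lrmi, hrpi, lrpi, eps=1e-12):
    """HPM fusion: inject the HRPI's high frequencies scaled by the
    per-pixel modulation coefficient LRMI / LRPI:
    HRMI_k = LRMI_k + (LRMI_k / LRPI) * (HRPI - LRPI)
           = LRMI_k * HRPI / LRPI."""
    return lrmi * hrpi / (lrpi + eps)

lrmi = np.full((2, 2), 40.0)
hrpi = np.array([[110.0, 90.0], [100.0, 100.0]])
lrpi = np.full((2, 2), 100.0)   # low-pass filtered HRPI (here, its mean)
print(hpm_fusion(lrmi, hrpi, lrpi))  # [[44. 36.] [40. 40.]]
```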
Principal Component Analysis
The PCA method is similar to the IHS method, with the main advantage that an arbitrary number of bands can be used. The input LRMIs are first transformed into the same number of uncorrelated principal components. Then, similar to the IHS method, the first principal component (PC1) is replaced by the HRPI, which is first stretched to have the same mean and variance as PC1. As a last step, the HRMIs are determined by performing the inverse PCA transform.
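The three steps above (forward PCA, PC1 substitution with a stretched PAN, inverse PCA) can be sketched directly in numpy; `pca_fusion` is a hypothetical helper name and the bands are assumed already resampled to the PAN grid.

```python
import numpy as np

def pca_fusion(bands, pan):
    """PCA pan-sharpening sketch: project the bands onto their principal
    components, replace PC1 with the PAN image (stretched to PC1's mean
    and std), then invert the transform."""
    h, w = pan.shape
    x = np.stack([b.ravel() for b in bands])          # (n_bands, n_pixels)
    mean = x.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(x))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]   # put PC1 first
    pcs = eigvecs.T @ (x - mean)                      # forward PCA
    p = pan.ravel()
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p                                        # substitute stretched PAN
    fused = eigvecs @ pcs + mean                      # inverse PCA
    return [fused[k].reshape(h, w) for k in range(len(bands))]

rng = np.random.default_rng(1)
bands = [rng.uniform(0, 255, (8, 8)) for _ in range(3)]
pan = rng.uniform(0, 255, (8, 8))
fused = pca_fusion(bands, pan)
# Stretching PAN to PC1's mean/std preserves each band's mean exactly.
print([np.allclose(f.mean(), b.mean()) for f, b in zip(fused, bands)])
```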
À Trous Algorithm-Based Wavelet Transform
The ATW method is based on the wavelet transform and is particularly suitable for signal processing, since it is isotropic and shift-invariant and does not create artifacts when used in image processing. Its application to image fusion has been reported in the literature. The ATW method adds the wavelet planes of the HRPI to the LRMIs:

HRMI_k = LRMI_k + Σ_j w_j, where the w_j are the wavelet planes of the HRPI
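A sketch of the à trous decomposition with the common B3 spline kernel [1, 4, 6, 4, 1]/16: at each level the kernel is dilated ("with holes"), and each wavelet plane is the difference between successive smoothed versions, so the decomposition is undecimated (shift-invariant) and reconstructs exactly by summation. Circular boundary handling via `np.roll` is an assumption for brevity.

```python
import numpy as np

def atrous_planes(image, levels=2):
    """A trous decomposition with the B3 spline kernel: returns the
    wavelet planes w_j = c_{j-1} - c_j and the final residual c_N."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = image.astype(float)
    planes = []
    for j in range(levels):
        step = 2 ** j                            # kernel dilation ("holes")
        smoothed = c
        for axis in (0, 1):                      # separable filtering
            out = np.zeros_like(smoothed)
            for t, kv in zip((-2, -1, 0, 1, 2), kernel):
                out += kv * np.roll(smoothed, t * step, axis=axis)
            smoothed = out
        planes.append(c - smoothed)              # wavelet plane w_j
        c = smoothed
    return planes, c

rng = np.random.default_rng(2)
hrpi = rng.uniform(0, 255, (16, 16))
planes, residual = atrous_planes(hrpi, levels=2)
print(np.allclose(residual + sum(planes), hrpi))  # True: exact reconstruction
```

For fusion, the planes of the HRPI would be added to each resampled multispectral band.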
Multiresolution Analysis-Based Intensity Modulation
MRAIM was proposed by Wang. It follows the GIF method, with the major advantage that it can be used for fusion cases in which the resolution ratio is an arbitrary integer M, with a very simple scheme. Its mathematical model follows the modulation form, with the LRPI computed by multiresolution analysis:

HRMI_k = LRMI_k + (LRMI_k / LRPI) · (HRPI − LRPI)
Comparisons
1. The IHS, BT, and PCA methods use a linear combination of the LRMIs to compute the LRPIs, with different coefficients.
2. The HPF, HPM, ATW, and MRAIM methods compute the LRPIs by low-pass filtering the original HRPI with different filters.
3. The BT, HPM, and MRAIM methods use modulation coefficients equal to the ratios between the LRMIs and the LRPI, whereas the IHS, HPF, ATW, and PCA methods simplify the modulation coefficients to constant values for all pixels of each band.
It is obvious that the IHS and PCA methods belong to class 1, the BT method belongs to class 2, the HPF and ATW methods belong to class 3, and the HPM and MRAIM methods belong to class 4. The performance of each image fusion method is determined by two factors: how the LRPI is computed and how the modulation coefficients are defined.
EXPERIMENTS AND RESULTS
1. The IHS and BT methods can only use three bands.
2. In order to evaluate the NIR band as well, we selected the red-green-blue combination for true natural color and the NIR-red-green combination for false color.
3. For comparison, the NIR band can also be combined with the red, green, and blue components.
4. MRAIM looks better than the other methods.
5. MRAIM looks better than the HPM method in spatial quality.
6. The correlation coefficient (CC) is the most popular similarity metric in image fusion. However, CC is insensitive to a constant gain and bias between two images and does not allow subtle discrimination of possible fusion artifacts.
Recently, a universal image quality index (UIQI) has been used to measure the similarity between two images. In this experiment, we used the UIQI to measure similarity. The UIQI is designed by modeling any image distortion as a combination of three factors: loss of correlation, radiometric distortion, and contrast distortion. It is defined as follows:

Q = (σ_xy / (σ_x σ_y)) · (2 x̄ ȳ / (x̄² + ȳ²)) · (2 σ_x σ_y / (σ_x² + σ_y²))
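The three-factor definition above (correlation x luminance x contrast) reduces to the compact expression 4 σ_xy x̄ ȳ / ((σ_x² + σ_y²)(x̄² + ȳ²)), which this sketch computes globally over whole images (the original index is usually averaged over sliding windows); it assumes non-constant images with nonzero means.

```python
import numpy as np

def uiqi(x, y):
    """Universal image quality index (global form): product of correlation,
    luminance, and contrast terms; equals 1.0 only when x == y."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

a = np.arange(16.0).reshape(4, 4)
print(uiqi(a, a))             # 1.0
print(uiqi(a, a + 10) < 1.0)  # True: a constant bias is penalised, unlike CC
```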
UIQI MEASUREMENT OF SIMILARITY BETWEEN THE DEGRADED FUSED IMAGE AND THE ORIGINAL IMAGE AT 4-m RESOLUTION LEVEL
UIQIS FOR THE RESULTANT IMAGES AND THE ORIGINAL LRMIS AT 4 m. (FUSION AT THE INFERIOR LEVEL)
This may be because all the methods provide good results in the NIR band, so the difference is very small, while the spatial degradation process influences the final result differently for different fusion methods.
Subscenes of the original LRMIs and the fused resulting HRMIs by different methods (double zoom). (Left to right sequence, row by row) Original LRMIs, IHS, BT, PCA, HPF, HPM, ATW, and MRAIM.
Conclusion
The performance of each method is determined by two factors: how the LRPI is computed and how the modulation coefficients are defined. If the LRPI is approximated from the LRMIs, it usually has a weak correlation with the HRPI, leading to color distortion in the fused image. If the LRPI is a low-pass filtered HRPI, the fused image usually shows less spectral distortion. By combining the visual inspection results and the quantitative results, it can be seen that the experimental results are in conformity with the theoretical analysis and that the MRAIM method produces the synthesized images closest to those the corresponding multispectral sensors would observe at the high-resolution level.