Comparison of Image Fusion Methods
Prepared by: AMR NASR
Introduction
• Developments in the field of sensing
  technology
• Multi-sensor systems are used in many
  applications, such as remote sensing, medical
  imaging, the military, etc.
• The result is an increase in available data
• Can we reduce this increasing volume of
  information while simultaneously extracting
  all the useful information?
Basics of image fusion
The aims of image fusion are to
• Reduce the amount of data
• Retain important information, and
• Create a new image that is more suitable for
  the purposes of human/machine perception
  or for further processing tasks.
Single-sensor image fusion system
• A sequence of images is taken by a single
  sensor
• The images are then fused into one image
• It has limitations due to the capability of the
  sensor
Multi-sensor image fusion
• Images are taken by more than one sensor
• The images are then fused into one image
• It overcomes the limitations of a single-sensor
  system
Fusion Camera used in Avatar
DARPA Unveils Gigapixel Camera
• The gigapixel camera, in a manner similar to a
  parallel-processor supercomputer, uses between
  100 and 150 micro-cameras to build a wide-field
  panoramic image. These small cameras' local
  aberration and focus provide extremely high
  resolution, combined with a smaller system
  volume and less distortion than traditional
  wide-field lens systems.
360-degree panoramic camera for the police
Multi-view fusion
• Images are taken from different viewpoints and
  fused to make a 3D view
Other fusion types (described below):
• Multi-modal fusion
• Multi-focus fusion
Multi-modal fusion
• Images of the same scene are taken with
  different modalities (e.g., visible and infrared)
  and fused
Multi-focus fusion
• Images taken with different focus settings are
  fused so that all regions appear in focus
System-level considerations
• Three key non-fusion processes:
   Image registration
   Image pre-processing
   Image post-processing

• The post-processing stage depends on the type
  of display the fusion system is being used with
  and on the personal preference of a human
  operator

• Pre-processing makes the images best suited
  to the fusion algorithm

• Image registration is the process of aligning
  images so that their details overlap accurately.
Methodology
Feature detection
• The algorithm should be able to detect the
  same features in all of the source images
Feature matching
• Correspondence is established between the
  features detected in the sensed image and
  those detected in the reference image
Image resampling and transformation
• The sensed image is transformed to align
  with the reference image
Methods of image fusion
Classification
Spatial domain fusion
 Weighted pixel averaging
 Brovey method
 Principal component analysis
 Intensity-Hue-Saturation (IHS)
Transform domain fusion
 Laplacian pyramid
 Curvelet transform
 Discrete wavelet transform (DWT)
Weighted pixel averaging
• The simplest image fusion technique
• F(x, y) = Wa·A(x, y) + Wb·B(x, y)
• where Wa and Wb are scalar weights
  (typically Wa + Wb = 1)

• It has the advantage of suppressing noise in
  the source imagery.
• However, it also suppresses salient image
  features, inevitably producing a low-contrast
  fused image with a ‘washed-out’ appearance.
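A minimal sketch of this formula in Python (assuming two pre-registered, same-size grayscale images as NumPy arrays; function and variable names are illustrative):

```python
import numpy as np

def weighted_average_fusion(a, b, wa=0.5, wb=0.5):
    """Fuse two registered grayscale images by weighted pixel averaging.

    Implements F(x, y) = Wa*A(x, y) + Wb*B(x, y) from the slide above.
    """
    fused = wa * a.astype(np.float64) + wb * b.astype(np.float64)
    # Clip back to the displayable 8-bit range.
    return np.clip(fused, 0, 255).astype(np.uint8)

# Usage with synthetic data (real use would load two registered images):
a = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
b = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
f = weighted_average_fusion(a, b)
```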
Pyramidal method
• Produces sharp, high-contrast images that are
  clearly more appealing and have greater
  information content than simpler ratio-based
  schemes.

• An image pyramid is essentially a data structure
  consisting of a series of low-pass or band-pass
  copies of an image, each representing pattern
  information at a different scale.
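One common way to realize this idea (not necessarily the exact scheme the slide refers to) is Laplacian pyramid fusion: build band-pass pyramids of both inputs, keep the stronger detail coefficient at each scale, and collapse the result. A sketch with OpenCV:

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian (band-pass) pyramid plus a final low-pass residual."""
    img = img.astype(np.float64)
    pyr = []
    for _ in range(levels):
        down = cv2.pyrDown(img)
        up = cv2.pyrUp(down, dstsize=(img.shape[1], img.shape[0]))
        pyr.append(img - up)        # band-pass detail at this scale
        img = down
    pyr.append(img)                 # coarsest low-pass copy
    return pyr

def fuse_pyramids(pa, pb):
    """Keep the stronger detail coefficient; average the low-pass residuals."""
    fused = [np.where(np.abs(da) >= np.abs(db), da, db)
             for da, db in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return fused

def reconstruct(pyr):
    """Collapse the pyramid from coarse to fine."""
    img = pyr[-1]
    for detail in reversed(pyr[:-1]):
        img = cv2.pyrUp(img, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return img

# Usage with synthetic inputs (real use: two registered source images).
a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
fused = reconstruct(fuse_pyramids(laplacian_pyramid(a), laplacian_pyramid(b)))
```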
Flow of pyramidal method
Discrete wavelet transform method
• It represents an arbitrary function x(t) as a
  superposition of a set of basis functions
  (wavelets), generated from a single mother
  wavelet by dilations/contractions (scaling) and
  translations (shifts)
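A minimal wavelet-domain fusion sketch (assuming the PyWavelets package and two registered grayscale inputs; max-absolute coefficient selection is one common fusion rule, used here for illustration):

```python
import numpy as np
import pywt

def dwt_fusion(a, b, wavelet="db2", level=2):
    """Fuse two registered grayscale images in the wavelet domain."""
    ca = pywt.wavedec2(a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(np.float64), wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]   # average the approximation coefficients
    for da, db in zip(ca[1:], cb[1:]):
        # Each entry is a (horizontal, vertical, diagonal) detail triple;
        # keep the coefficient with the larger magnitude in each subband.
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```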
Medical image fusion
• Helps physicians extract features from
  multi-modal images.
• Two types: structural (MRI, CT) and functional
  (PET, SPECT)
Objectives of image fusion in remote sensing
•   Improve the spatial resolution.
•   Improve the geometric precision.
•   Enhance the capability of feature display.
•   Improve classification accuracy.
•   Enhance the capability of change detection.
•   Replace or repair defective image data.
•   Enhance visual interpretation.
Dual-resolution images in satellites
Several commercial Earth observation satellites carry
dual-resolution sensors of this kind, which provide
high-resolution panchromatic images (HRPIs) and
low-resolution multispectral images (LRMIs).

For example, the first commercial high-resolution
satellite, IKONOS, launched on September 24, 1999,
produces 1-m HRPIs and 4-m LRMIs.
PRINCIPLES OF SEVERAL EXISTING
IMAGE FUSION METHODS USED IN REMOTE SENSING

   Multiresolution Analysis-Based Intensity
     Modulation (MRAIM)
   À Trous Algorithm-Based Wavelet Transform (ATW)
   Principal Component Analysis (PCA)
   High-Pass Modulation (HPM)
   High-Pass Filtering (HPF)
   Brovey Transform (BT)
   IHS Transform
Relationship between a low-resolution pixel and
the corresponding high-resolution pixels
Each low-resolution pixel value (or radiance) can
be treated as a weighted average of the
corresponding high-resolution pixel values.
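In symbols (a sketch assuming a resolution ratio M, e.g., M = 4 for IKONOS; the actual weights depend on the sensor's point spread function, which is not specified here):

$$L(i, j) = \sum_{m=0}^{M-1} \sum_{n=0}^{M-1} w_{mn}\, H(Mi + m,\; Mj + n), \qquad \sum_{m,n} w_{mn} = 1$$

where L is the low-resolution image, H the high-resolution image, and w_{mn} the averaging weights.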
Brovey transform
The BT is based on the chromaticity transform.
It is a simple method for combining data from different sensors,
with the limitation that only three bands are involved. Its
purpose is to normalize the three multispectral bands used for
RGB display and to multiply the result by any other desired data
to add the intensity or brightness component to the image.
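A sketch of this normalization (assuming three multispectral bands already resampled to the panchromatic grid, as float NumPy arrays; this is the commonly cited Brovey form, which may differ in scaling from the exact variant used here):

```python
import numpy as np

def brovey_fusion(r, g, b, pan, eps=1e-6):
    """Brovey transform: normalize each band by the band sum, then
    modulate by the high-resolution panchromatic (intensity) image."""
    total = r + g + b + eps            # eps avoids division by zero
    return (r / total * pan,
            g / total * pan,
            b / total * pan)
```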
IHS Transform
The IHS technique is a standard procedure in image fusion, with the
major limitation that only three bands are involved. Originally, it was
based on the RGB true-color space.
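A sketch of the widely used "fast" IHS variant (an assumption, since the slide does not specify the exact IHS formulation): take the intensity as the band mean and inject the pan detail additively, which is algebraically equivalent to substituting the intensity component with the pan image and inverting the transform.

```python
import numpy as np

def fast_ihs_fusion(r, g, b, pan):
    """'Fast' IHS fusion: replace the intensity component with the pan
    image, which reduces to adding (pan - I) to each band."""
    intensity = (r + g + b) / 3.0
    delta = pan - intensity
    return r + delta, g + delta, b + delta
```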
High-Pass Filtering
The principle of HPF is to add the high-frequency information
from the HRPI to the LRMIs to get the HRMIs.
The high-frequency information is computed either by filtering the
HRPI with a high-pass filter or by taking the original HRPI and
subtracting the LRPI, which is the low-pass filtered HRPI. This
method preserves a high percentage of the spectral characteristics,
since the spatial information is associated with the high-frequency
information of the HRMIs, which comes from the HRPI, while the
spectral information is associated with the low-frequency
information of the HRMIs, which comes from the LRMIs. The
mathematical model is as follows.
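Based on the description above, the commonly cited HPF model is (a hedged sketch, for each band k):

$$\mathrm{HRMI}_k = \mathrm{LRMI}_k + (\mathrm{HRPI} - \mathrm{LRPI})$$

where the LRPI is the low-pass filtered HRPI, so the added term carries only the high-frequency spatial detail.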
High-Pass Modulation
The principle of HPM is to transfer the high-frequency
information of the HRPI to the LRMIs, with modulation
coefficients that equal the ratio between the LRMIs and
the LRPI. The LRPI is obtained by low-pass filtering the
HRPI. The equivalent mathematical model is as follows.
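Based on the description above, the HPM model takes the form (a hedged sketch, for each band k):

$$\mathrm{HRMI}_k = \frac{\mathrm{LRMI}_k}{\mathrm{LRPI}} \cdot \mathrm{HRPI} = \mathrm{LRMI}_k + \frac{\mathrm{LRMI}_k}{\mathrm{LRPI}}\,(\mathrm{HRPI} - \mathrm{LRPI})$$

i.e., the injected high-frequency detail is scaled by the band-to-intensity ratio at each pixel.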
Principal Component Analysis
The PCA method is similar to the IHS method, with the main
advantage that an arbitrary number of bands can be used.
The input LRMIs are first transformed into the same
number of uncorrelated principal components.

Then, similar to the IHS method, the first principal
component (PC1) is replaced by the HRPI, which is first
stretched to have the same mean and variance as PC1. As a
last step, the HRMIs are determined by performing the
inverse PCA transform.
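A sketch of PCA pan-sharpening as described above (assuming a NumPy band stack already resampled to the HRPI grid; the mean/variance stretch implements the matching of the HRPI to PC1):

```python
import numpy as np

def pca_fusion(lrmis, hrpi):
    """PCA pan-sharpening. lrmis: (bands, H, W) multispectral stack,
    already resampled to the HRPI grid; hrpi: (H, W) panchromatic image."""
    n_bands, h, w = lrmis.shape
    x = lrmis.reshape(n_bands, -1).astype(np.float64)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    # Eigendecomposition of the band covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(xc))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]   # sort descending
    pcs = eigvecs.T @ xc                              # principal components
    # Stretch the pan image to PC1's mean and variance, then swap it in.
    pan = hrpi.reshape(-1).astype(np.float64)
    pcs[0] = (pan - pan.mean()) / pan.std() * pcs[0].std() + pcs[0].mean()
    # Inverse transform and restore the band means.
    return (eigvecs @ pcs + mean).reshape(n_bands, h, w)
```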
À Trous Algorithm-Based Wavelet Transform
The ATW method is based on the wavelet transform and is particularly
suitable for signal processing, since the à trous decomposition is
isotropic and shift-invariant and does not create artifacts when used
in image processing. Its application to image fusion has been reported
in the literature.

The ATW method is given by the following model.
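A hedged sketch of the additive à trous scheme (assuming J decomposition levels, with the wavelet detail planes w_j extracted from the HRPI and added to each band):

$$\mathrm{HRMI}_k = \mathrm{LRMI}_k + \sum_{j=1}^{J} w_j$$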
Multiresolution Analysis-Based Intensity Modulation
MRAIM was proposed by Wang. It follows the GIF
(general image fusion) framework, with the major
advantage that it can be used for the fusion case in
which the resolution ratio is an arbitrary integer M,
with a very simple scheme. The mathematical model
is as follows.
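Consistent with the Comparisons below (MRAIM, like HPM, uses ratio modulation coefficients with a low-pass filtered LRPI), the model can be sketched as:

$$\mathrm{HRMI}_k = \frac{\mathrm{LRMI}_k}{\mathrm{LRPI}} \cdot \mathrm{HRPI}$$

where, unlike in HPM, the LRPI is computed from the HRPI by multiresolution analysis, so that an arbitrary integer ratio M is supported.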
Comparisons

1. The IHS, BT, and PCA methods use a linear combination of
the LRMIs to compute the LRPIs, with different coefficients.
2. The HPF, HPM, ATW, and MRAIM methods compute the
LRPIs by low-pass filtering the original HRPI with different
filters.
3. The BT, HPM, and MRAIM methods use the modulation
coefficients as the ratios between the LRMIs and the LRPI,
whereas the IHS, HPF, ATW, and PCA methods simplify the
modulation coefficients to constant values for all pixels of
each band.
Crossing these two factors (how the LRPI is computed and how
the modulation coefficients are defined) gives four classes: the
IHS and PCA methods belong to class 1 (combination-based
LRPI, constant coefficients), the BT method belongs to class 2
(combination-based LRPI, ratio coefficients), the HPF and ATW
methods belong to class 3 (low-pass filtered LRPI, constant
coefficients), and the HPM and MRAIM methods belong to
class 4 (low-pass filtered LRPI, ratio coefficients). The
performance of each image fusion method is determined by
these same two factors.
EXPERIMENTS AND RESULTS
1. The IHS and BT methods can only handle 3 bands.
2. In order to evaluate the NIR band as well, we
selected the red–green–blue combination for
true natural color and the NIR–red–green
combination for false color.
3. For the other methods, the NIR band can be fused
together with the red, green, and blue components.
Results
• Original HRPI (panchromatic band) and original
  LRMIs (RGB, resampled at 1-m pixel size)
• Results of the IHS and BT methods
• Results of the PCA and HPF methods
• Results of the HPM and ATW methods
• Result of the MRAIM method
4. MRAIM looks better than the other methods.
5. MRAIM looks better than the HPM method in
spatial quality.
6. The correlation coefficient (CC) is the most
popular similarity metric in image fusion.
However, CC is insensitive to a constant gain and
bias between two images and does not allow
subtle discrimination of possible fusion artifacts.
Recently, a universal image quality index
(UIQI) has been used to measure the
similarity between two images. In this
experiment, we used the UIQI to measure
similarity.
The UIQI is designed by modeling any image
distortion as a combination of three factors:
loss of correlation, radiometric distortion, and
contrast distortion. It is defined as follows:
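For reference, the published Wang–Bovik definition for images x and y is:

$$Q = \frac{\sigma_{xy}}{\sigma_x \sigma_y} \cdot \frac{2\,\bar{x}\,\bar{y}}{\bar{x}^2 + \bar{y}^2} \cdot \frac{2\,\sigma_x \sigma_y}{\sigma_x^2 + \sigma_y^2} = \frac{4\,\sigma_{xy}\,\bar{x}\,\bar{y}}{(\sigma_x^2 + \sigma_y^2)(\bar{x}^2 + \bar{y}^2)}$$

where the three factors measure correlation, luminance similarity, and contrast similarity, respectively.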
UIQI MEASUREMENT OF SIMILARITY BETWEEN THE DEGRADED FUSED
IMAGE AND THE ORIGINAL IMAGE AT THE 4-m RESOLUTION LEVEL
UIQIs FOR THE RESULTANT IMAGES AND THE ORIGINAL
LRMIs AT 4 m (FUSION AT THE INFERIOR LEVEL)
This may be because all the methods provide
good results in the NIR band, so the difference is
very small, while the spatial degradation
process influences the final result differently
for different fusion methods.
Subscenes of the original LRMIs and the fused resulting HRMIs by different methods
           (double zoom). (Left to right sequence, row by row) Original
                 LRMIs, IHS, BT, PCA, HPF, HPM, ATW, and MRAIM.
Conclusion
The performance of each method is determined by two factors: how
the LRPI is computed and how the modulation coefficients are
defined. If the LRPI is approximated from the LRMIs, it usually has a
weak correlation with the HRPI, leading to color distortion in the
fused image. If the LRPI is a low-pass filtered HRPI, the result
usually shows less spectral distortion.

Combining the visual inspection results with the quantitative
results shows that the experimental results are in conformity
with the theoretical analysis, and that the MRAIM method
produces the synthesized images closest to those the
corresponding multisensors would observe at the high-resolution
level.
THANK YOU
