CHAPTER 1
INTRODUCTION
1.1 MOTIVATION FOR IMAGE FUSION RESEARCH
The motivation for image fusion research stems from recent advances in the remote sensing
field. As new image sensors offer high resolution at low cost, multiple sensors are now used in a
wide range of imaging applications. These sensors provide high spatial and spectral resolution
and faster scan rates, and the images they produce are more reliable, more informative, and give
a more complete picture of the scanned environment, thereby improving the performance of
dedicated imaging systems. Over the past decade, remote sensing, medical imaging, and
surveillance systems are a few of the application areas that have benefited from such
multi-sensor setups. As the number of sensors in an application grows, a proportionately larger
amount of image data is collected, so the deployment of additional sensors must be matched by a
corresponding increase in the processing power of the system. A sensor typically grabs multiple
images of a location, one of which is selected for analysis; however, the selected image may not
have good spatial and spectral resolution. To overcome this and to generate a fused image with
high spatial and spectral resolution, this work identifies the need for image fusion and develops
new methods to improve the performance of existing fusion methods.
Image fusion is the process of combining two or more different images to form a new
image that contains enhanced information from the source images; that is, the original
application-specific information should be preserved and artifacts should be minimized in the
fused image. The purpose of image fusion is to enhance the spatial and spectral resolution
obtainable from several low-resolution images. For this reason, image fusion has become an
interesting topic for many researchers [1], [2].
Today, image fusion finds its role in image classification, aerial and satellite imaging,
medical imaging, avionics, concealed weapon detection, multi-focus image fusion, digital
camera applications, battlefield monitoring, defense situation awareness, surveillance, target
tracking, intelligence gathering, person authentication, geo-informatics, etc.
1.2 SATELLITE AND REMOTE SENSING IMAGES
Satellites are objects in orbit around the Earth, other planets, or the Sun, and hundreds of
them are present. Satellites are of two types, natural and artificial. Natural satellites are objects
that orbit another object in space, e.g., the Moon orbiting the Earth, or the Earth and comets
orbiting the Sun. Artificial satellites are man-made satellites that are very important to the Earth.
There are six different types of artificial satellites.
1. Communication Satellites: They capture different radio waves and send them to different
spots in the world, helping us communicate around the world.
2. Resource Satellites: They help scientists monitor natural resources by taking pictures,
which the scientists turn into maps. These maps show things like underground oil,
foggy air, etc.
3. Navigation Satellites: They capture signals from ships and aircraft and relay them to
emergency resource stations. These signals are used by pilots and sailors to know where
they are and where they are headed.
4. Military Satellites: They help the armed forces navigate, communicate, and spy on
other countries. They take pictures and pick up radio waves sent by other countries.
5. Scientific Satellites: They study the Sun, the planets, other solar systems, and deep space.
They help scientists study the Earth and outer space and find asteroids, comets, and
black holes.
6. Weather Satellites: They help scientists study different types of weather patterns, predict
the weather, and track severe storms.
Remote sensing images provide a better way to understand the Earth's environment by collecting
huge amounts of data via man-made satellites, aircraft, Synthetic Aperture Radar (SAR), and so
on. Remote sensing images, which differ from natural images, cover a wide variety of scenes
containing a large number of natural, man-made, and military objects. The latest sensors exhibit
high resolution and can sense many objects with different kinds of shapes, edges, and contours.
These images are therefore more reliable, as they contain much information in the high-frequency
as well as the low-frequency bands. In satellite imaging, two types of images are available.
1. Panchromatic images (PAN): An image collected over a broad visual wavelength range
but rendered in black and white. In PAN mode, the image is acquired with a high spatial
resolution that depends on the satellite, for example 5.8 m per pixel (IRS), 10 m per pixel
(SPOT), and 1 m per pixel (IKONOS).
2. Multispectral images (MS): An image optically acquired in more than one spectral or
wavelength interval. In MS mode, the image is acquired with a much lower spatial
resolution that depends on the satellite, for example 23.5 m per pixel (IRS), 20 m per
pixel (SPOT), and 4 m per pixel (IKONOS).
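This PAN/MS resolution gap is what fusion must bridge. As a minimal illustrative sketch (not part of the original methodology), assuming Python with NumPy and SciPy and an MS array of shape (H, W, B), the MS image can be resampled to the PAN grid by cubic interpolation before fusion; the 4:1 ratio matches the IKONOS example above.

import numpy as np
from scipy.ndimage import zoom

def upsample_ms_to_pan(ms: np.ndarray, ratio: float = 4.0) -> np.ndarray:
    """Resample an (H, W, B) MS image to the PAN grid by cubic interpolation.

    `ratio` is the PAN/MS resolution ratio, e.g. 4 for IKONOS (1 m PAN vs. 4 m MS).
    """
    # Interpolate the two spatial axes only; keep the band axis unchanged.
    return zoom(ms.astype(np.float64), (ratio, ratio, 1), order=3)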
1.3. LEVELS OF IMAGE FUSION:
1.3.1. Pixel level image fusion:
This is fusion at the lowest possible level of abstraction, in which data from two
different sources are fused directly. In image fusion, the data are the pixels of the images from
the different sources. Fusion at this level has the advantage of using the original data, which is
closest to reality. The images are merged on a pixel-by-pixel basis after being co-registered at
exactly the same resolution. In most cases, the images are also geo-coded before fusion, since
fusion at the pixel level requires accurate registration of the images to be merged. Accurate
registration requires re-sampling and geometric correction, and several methods exist for
re-sampling and registering images. Geometric correction requires knowledge of the sensor
viewing parameters, along with software that takes into account the image acquisition geometry
and Ground Control Points (GCPs). GCPs are landscape features whose exact locations on the
ground are known; they may be naturally occurring, e.g., road intersections and coastal features,
or may be intentionally introduced for the purpose of geometric correction. In some cases, where
the surface is highly uneven, a Digital Elevation Model (DEM) is also required. This is especially
important for SAR data processing, since the SAR sensor has a side-looking (oblique) geometry:
the oblique radar waves strike a bump on rough terrain instead of the targeted location on the
surface. Image fusion at this level has the highest requirements in terms of computer memory
and processing power, and it takes the longest processing times.
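As an illustrative sketch of fusion at this level (a weighted per-pixel average, the simplest possible rule, not the method proposed in this thesis), assuming Python/NumPy and two images that have already been co-registered and resampled to the same grid:

import numpy as np

def pixel_level_fusion(img_a: np.ndarray, img_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse two co-registered, equal-sized images by per-pixel weighted averaging."""
    # Accurate registration is a precondition: the arrays must align pixel for pixel.
    assert img_a.shape == img_b.shape, "pixel-level fusion requires exact co-registration"
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    return w * a + (1.0 - w) * b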
1.3.2. Feature Level image fusion:
This approach merges the datasets, i.e., images, at an intermediate level of abstraction.
Feature-level fusion is suitable only if the features extracted from the various data sources
(images) can properly be associated with each other. For example, features such as edges and
segments can be extracted from both optical and SAR images and then merged to work out joint
features and classification. SAR images provide textural information that is complementary to
the spectral information of optical images; therefore, texture features extracted from SAR images
and spectral features extracted from MS images may be fused before a classifier processes them,
as sketched below. [3] fuses a hyperspectral image with a high-resolution image at the feature
level. Some works propose fusing different kinds of features extracted from the same image
before classifying it; for example, [4] fuses texture features for the classification of very
high-resolution RS images, and [5] fuses different texture features extracted from SAR images.
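A minimal sketch of this idea, assuming Python with NumPy/SciPy and co-registered single-band optical and SAR images: gradient-magnitude edge features are extracted from each source and stacked into a joint feature map that a classifier could then consume.

import numpy as np
from scipy import ndimage

def edge_feature(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge feature of a single-band image (Sobel operator)."""
    g = img.astype(np.float64)
    gx = ndimage.sobel(g, axis=1)  # horizontal gradient
    gy = ndimage.sobel(g, axis=0)  # vertical gradient
    return np.hypot(gx, gy)

def joint_features(optical: np.ndarray, sar: np.ndarray) -> np.ndarray:
    """Stack edge features from co-registered optical and SAR images into one feature map."""
    return np.stack([edge_feature(optical), edge_feature(sar)], axis=-1)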
1.3.3. Decision Level image fusion:
It is not necessary to perform fusion at only one of the three levels. Fusion may take
place at any two or all three levels, and there exist techniques that allow fusion of image and
non-image data at multiple levels of inference [4]. [5] applies multi-level fusion to multispectral
image sequences for target detection. [6] proposes a multilevel image fusion framework that
performs image fusion at all three levels and reports significantly better results when image
fusion is performed simultaneously at the first two levels (pixel and feature level) than when
fusion is performed at any one level alone. However, multilevel fusion may take several forms,
such as the one in the succeeding section.
1.4 GENERIC REQUIREMENTS OF IMAGE FUSION
After an in-depth and critical literature survey, the present study found that to design an
image fusion system one needs to take care of the following requirements:
1. The fused image should preserve as closely as possible all relevant information contained
in the input images.
2. The fusion process should not introduce any artefacts or inconsistencies which can
distract or mislead the human observer or any subsequent image processing steps.
3. The fused image should suppress to a maximum extent the irrelevant features and noise.
4. The fusion process should maximize the amount of relevant information in the fused
image, while minimizing the amount of irrelevant details, uncertainty and redundancy in
the fused image.
CHAPTER 2
LITERATURE SURVEY
Claire Thomas, Thierry Ranchin et al. [7] proposed a framework for synthesizing
high-resolution multispectral images from low-resolution multispectral and high-resolution
panchromatic images. They surveyed many existing fusion methods, including substitution-based
methods, relative spectral contribution methods, and ARSIS-based methods, together with the
advantages and disadvantages of each.
Henrik AanΓ¦s, J. R. Sveinsson et al. [8] proposed a method for pixel-level satellite image
fusion derived from the imaging sensor model, with pixel-neighborhood regularization used to
regularize the solution. The algorithm was tested on QuickBird, IKONOS, and Meteosat data
sets, using Root Mean Square Error (RMSE), Cross Correlation (CC), Structural Similarity
Index (SSIM), and Q4 as performance evaluation metrics. The authors showed that the proposed
method performs well compared to many existing methods.
Faming Fang, Fang Li et al. [9] proposed a new variational image fusion method based on
the assumptions that (i) the gradient of the PAN image is a linear combination of those of the
image bands used in the pansharpened image, and (ii) the fused image in the spectral direction
should approximate the low-resolution MS image. The algorithm was tested on QuickBird and
IKONOS data sets, with RMSE, CC, Spectral Angle Mapper (SAM), and Spatial Frequency (SF)
as the performance evaluation metrics.
Xinghao Ding, Yiyong Jiang et al. [10] proposed a Bayesian nonparametric dictionary
learning model for image fusion. The proposed method does not require the original MS image
for dictionary learning; rather, it directly uses the reconstructed images. The algorithm was tested
on IKONOS, Pleiades, and QuickBird data sets, with RMSE, CC, ERGAS, and Q4 as the
performance evaluation metrics.
S. Li and B. Yang [11] formulated the image fusion problem using compressed sensing theory.
First, a degradation model treating the low-resolution MS image and the high-resolution PAN
image as linear samplings of the fused image is constructed, which converts image fusion into a
restoration problem; a pursuit algorithm is then used to solve this restoration problem. QuickBird
and IKONOS satellite images were used to test the algorithm, with CC, SAM, RMSE, ERGAS,
and Q4 as the performance evaluation metrics.
F. Palsson, J. R. Sveinsson et al. [12] proposed a model-based image fusion method. The model
is built on the assumption that a linear combination of the bands of the fused image gives the
panchromatic image and that downsampling the fused image gives the multispectral image. The
algorithm was tested on QuickBird data sets, with SAM, ERGAS, CC, and Q4 as the
performance evaluation metrics.
S. Leprince, S. Barbot et al. [13] proposed a method to automatically co-register optical
satellite images for ground deformation measurement. Using the proposed method, images are
co-registered with 1/50-pixel accuracy. The algorithm was tested on SPOT satellite images both
in the absence of coseismic deformation and in the presence of large coseismic deformation.
M. L. Uss, B. Vozel et al. [14] proposed a new performance bound for objectively analyzing
image registration methods. The proposed lower bound incorporates the geometric
transformation assumed between the reference and template images. Experimental results
showed that this lower bound describes the performance of conventional estimators more
accurately than other bounds proposed in the literature.
Y. Peng, A. Ganesh et al. [15] proposed an image registration method called robust alignment
by sparse and low-rank decomposition for linearly correlated images (RASL), which efficiently
co-registers linearly correlated satellite images. The accuracy of the method is very high, and it
efficiently co-registers data sets over a wide range of realistic misalignments and corruptions.
Miloud Chikr El-Mezouar, Nasreddine Taleb et al. [16] proposed a new fusion approach that
produces images with natural colors; a high-resolution normalized difference index is also
proposed and used to delineate vegetation. The procedure is performed in two steps: MS fusion
using the IHS technique, followed by vegetation enhancement. Vegetation enhancement is a
correction step and depends on the considered application. The new approach provides very
good results in terms of objective quality measures, and visual analysis shows that the concept is
promising and improves fusion quality by enhancing the vegetated zones.
M. E. Nasr, S. M. Elkaffas et al. [17] proposed an image fusion technique based on integrating
the Intensity-Hue-Saturation (IHS) transform and the Discrete Wavelet Frame Transform
(DWFT) to boost the quality of remote sensing images. Panchromatic and multispectral images
from the Landsat-7 (ETM+) satellite were fused using this approach. Experimental results show
that the technique improves the spectral and spatial qualities of the fused images; moreover,
when applied to noisy and de-noised remote sensing images, it preserves the quality of the fused
images. Comparisons with different fusion techniques are also presented and show that the
proposed technique outperforms the others.
Xia Chun-lin, Deng Jie et al. [18] presented a new fusion method, the PWI transformation.
First, the multispectral image is transformed by IHS, and the obtained brightness component I is
transformed by PCA to extract the first principal component PC1. Using the wavelet transform,
PC1 and the panchromatic image are fused, and the result replaces the brightness component of
the multispectral image. Finally, the new multispectral image is obtained by the inverse IHS
transformation. Subjective visual analysis and objective evaluation indicate that the new method
is superior to any single one of the three fusion methods (IHS, wavelet transform, and PCA): it
greatly enhances the representation of spatial detail and preserves the spectral information of the
multispectral image well.
Hamid Reza Shahdoosti and Hassan Ghassemian [19] presented the design of an optimal filter
able to extract relevant and non-redundant information from the PAN image. The optimal filter
coefficients, extracted from the statistical properties of the images, are more consistent with the
type and texture of remotely sensed images than other kernels such as wavelets. Visual and
statistical assessments show that the algorithm clearly improves fusion quality, in terms of
correlation coefficient, relative dimensionless global error in synthesis, spectral angle mapper,
universal image quality index, and quality without reference, compared with fusion methods
including improved intensity-hue-saturation, multiscale Kalman filter, Bayesian fusion, improved
nonsubsampled contourlet transform, and sparse image fusion.
Jianwen Hu and Shutao Li [20] presented a novel method based on a multiscale dual bilateral
filter to fuse a high spatial resolution panchromatic image with a high spectral resolution
multispectral image. Compared with traditional multiresolution-based methods, the
detail-extraction process considers the characteristics of the panchromatic and multispectral
images simultaneously. The low-resolution multispectral image is resampled to the size of the
high-resolution panchromatic image and sharpened by injecting the extracted details. The fusion
method was tested on QuickBird and IKONOS images and compared with three popular methods.
Qizhi Xu, Bo Li et al. [21] proposed a data-fitting scheme to improve spectral quality in image
fusion based on the well-established component substitution (CS) approach. A generalized CS
framework capable of modeling any CS image fusion method is also presented. In this
framework, instead of injecting the detail information of the panchromatic (PAN) image into the
substituted component, the data-fitting strategy adjusts the mean information of the PAN image
in the construction of the substitution component. The scheme involves two matrix subtractions
and one matrix convolution; it is fast to implement and effectively avoids the spectral distortion
problem. Experimental results on a large number of PAN and multispectral images show that the
improved CS methods have good spatial and spectral fidelity.
Jaewan Choi, Junho Yeom et al. [22] developed a hybrid pansharpening algorithm based on
primary and secondary high-frequency information injection to efficiently improve the spatial
quality of the pansharpened image. The injected high-frequency information is composed of two
types of data: the difference between the panchromatic and intensity images, and the
Laplacian-filtered image of the high-frequency information. The extracted high frequencies are
injected into the multispectral image using a locally adaptive fusion parameter and
post-processing of the fusion parameter. In experiments using various satellite images, the results
show better spatial quality than other fusion algorithms while maintaining as much spectral
information as possible.
Qian Zhang, Zhiguo Cao et al. [23] proposed an iterative optimization approach for
panchromatic (PAN) and multispectral (MS) images that jointly considers the registration and
fusion processes. Given a registration method and a fusion method, the joint optimization is
described as finding the optimal registration parameters that yield the optimal fusion
performance; the downhill simplex algorithm is adopted to refine the registration parameters
iteratively. Experiments on a set of PAN and MS images from ZY-3 and GeoEye-1 show that the
approach outperforms several competing ones in terms of registration accuracy and fusion quality.
CHAPTER 3
METHODOLOGY
3.1 Brovey Transform:
The Brovey transform method is a ratio fusion technique that preserves the relative spectral
contributions of each pixel but replaces its overall brightness with that of the high-resolution
panchromatic image. It operates according to the formula:
$$\begin{bmatrix} R'_{BT} \\ G'_{BT} \\ B'_{BT} \end{bmatrix} = \frac{PAN}{I} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$

where $I = (R+G+B)/3$ is the intensity of the MS pixel.
From Eq. (1), it is evident that the BT is indeed a simple fusion method requiring only
arithmetic operations, without any statistical analysis or filter design. Owing to its efficiency and
ease of implementation, it can achieve fast fusion of IKONOS/QuickBird imagery. However,
color distortion problems are often produced in the fused images; hence, color distortion
originating from the image fusion process becomes an important issue for practical
applications [24].
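A minimal sketch of Eq. (1) in Python/NumPy (assuming the MS image is already resampled to the PAN grid, and that I is taken as the mean of the three bands as defined above):

import numpy as np

def brovey_fusion(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Brovey transform fusion per Eq. (1).

    ms  : (H, W, 3) multispectral image, resampled to the PAN grid
    pan : (H, W) panchromatic image
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    intensity = ms.sum(axis=2) / 3.0   # I: overall brightness of the MS pixel
    ratio = pan / (intensity + eps)    # PAN / I; eps guards against division by zero
    return ms * ratio[..., None]       # scale R, G, B by the same per-pixel ratio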
3.2 IHS Transform:
The color system with red, green, and blue channels (RGB) is usually used by computer monitors
to display a color image. Another color system widely used to describe a color is the intensity,
hue, and saturation (IHS) system. The intensity represents the total amount of light in a color, the
hue is the property of a color determined by its wavelength, and the saturation is the purity of the
color [10]. Whatever algorithm is chosen, the IHS transform is always applied to an RGB
composite, which implies that the fusion is applied to groups of three bands of the MS image. As
a result of this transformation, we obtain the new intensity, hue, and saturation components. The
PAN image then replaces the intensity image. Before doing this, in order to minimize the
modification of the spectral information of the fused MS image with respect to the original MS
image, the histogram of the PAN image is matched with that of the intensity image. Applying the
inverse transform, we obtain the fused RGB image with the spatial detail of the PAN image
incorporated into it [9-10].
$$\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\[4pt] \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{-2}{\sqrt{6}} \\[4pt] \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (2)$$

$$H = \tan^{-1}\!\left(\frac{V_2}{V_1}\right) \qquad (3)$$

$$S = \sqrt{V_1^2 + V_2^2} \qquad (4)$$
Replacing I by the PAN image and applying the inverse transform shown in equation (5) below
gives the fused MS image.
$$\begin{bmatrix} MS_1^H \\ MS_2^H \\ MS_3^H \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{2}} \\[4pt] \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & \frac{-1}{\sqrt{2}} \\[4pt] \frac{1}{\sqrt{3}} & \frac{-2}{\sqrt{6}} & 0 \end{bmatrix} \begin{bmatrix} PAN \\ V_1 \\ V_2 \end{bmatrix} \qquad (5)$$
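A minimal sketch of this substitution scheme in Python/NumPy, directly encoding Eqs. (2) and (5) (the forward matrix is orthogonal, so its transpose realizes the inverse); histogram matching of the PAN image to I is assumed to have been done beforehand:

import numpy as np

# Forward transform matrix of Eq. (2); its transpose realizes the inverse of Eq. (5).
M = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
              [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
              [1/np.sqrt(2), -1/np.sqrt(2),  0.0]])

def ihs_fusion(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """IHS component-substitution fusion: replace I with the PAN image.

    ms  : (H, W, 3) RGB multispectral image resampled to the PAN grid
    pan : (H, W) panchromatic image, histogram-matched to I beforehand
    """
    ivv = ms.astype(np.float64) @ M.T  # per-pixel [I, V1, V2], Eq. (2)
    ivv[..., 0] = pan                  # substitute the intensity component
    return ivv @ M                     # inverse transform, Eq. (5) (M^-1 = M^T)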
3.3 PCA method for image fusion: PCA, also referred to as the Karhunen-LoΓ¨ve (K-L)
transform, is a very useful and well-known technique for reducing the dimensionality of highly
correlated multispectral data. In general, the first principal component (PC1) collects the
information that is common to all the bands used as input data in the PCA, which makes PCA a
very adequate technique for merging MS and PAN images. In this case, all the bands of the
original MS image constitute the input data. As a result of this transformation, we obtain
non-correlated new bands, the principal components. PC1 is substituted by the PAN image,
whose histogram has previously been matched with that of PC1. Finally, the inverse
transformation is applied to the whole dataset formed by the modified PAN image and
PC2, ..., PCn, obtaining the fused image with the spatial detail of the PAN image incorporated
into it [11].
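A minimal sketch of PCA-based substitution in Python/NumPy (assuming the MS image is resampled to the PAN grid and the PAN image has been histogram-matched to PC1 beforehand):

import numpy as np

def pca_fusion(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """PCA component-substitution fusion: PC1 is replaced by the matched PAN image.

    ms  : (H, W, B) multispectral image resampled to the PAN grid
    pan : (H, W) panchromatic image, histogram-matched to PC1 beforehand
    """
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)          # band-to-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    eigvecs = eigvecs[:, ::-1]              # reorder so PC1 comes first
    pcs = Xc @ eigvecs                      # project onto principal components
    pcs[:, 0] = pan.reshape(-1)             # substitute PC1 with PAN
    fused = pcs @ eigvecs.T + mean          # inverse transform (orthogonal basis)
    return fused.reshape(h, w, b)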
3.4 Wavelet transform based fusion: The WT is suitable for image fusion, not only because it
enables one to fuse image features separately at different scales, but also because it produces
large coefficients near edges in the transformed image and reveals relevant spatial
information [12]. The WT decomposes the signal using elementary functions, the wavelets,
which can be described in terms of two groups of functions: wavelet functions and scaling
functions. The wavelet function is commonly called the "mother wavelet" and the scaling
function the "father wavelet"; translated and scaled versions of these parent wavelets are the
"daughter" and "son" wavelets. In the one-dimensional case, the continuous wavelet transform of
a function f(t) can be expressed as

$$WT(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(t)\, \psi\!\left(\frac{t-b}{a}\right) dt \qquad (6)$$

where $WT(a,b)$ is the wavelet coefficient of the function $f(t)$, $\psi$ is the analyzing wavelet,
and $a$ ($a > 0$) and $b$ are the scaling and translation parameters, respectively. Each basis
function is a scaled and translated version of a function $\psi(t)$ called the mother wavelet.
Currently used wavelet-based image fusion methods are mostly based on two algorithms: the
Mallat algorithm [13] and the Γ  trous algorithm [14]. The Mallat-algorithm-based dyadic wavelet
transform (WT), which uses decimation, is not shift-invariant and exhibits artifacts due to
aliasing in the fused image [15]. The WT method allows the decomposition of the image into a
set of wavelet and approximation planes, according to the theory of the multiresolution wavelet
transform given by Mallat. Each wavelet plane contains the wavelet coefficients, where the
amplitude of a coefficient defines the scale and information of the local features. Formally, the
wavelet coefficients are computed by means of the following equation:

$$w_j(k,l) = P_{j-1}(k,l) - P_j(k,l) \qquad (7)$$

for $j = 1, \ldots, N$, where $j$ is the scale index, $N$ is the number of decomposition levels,
$P_0(k,l)$ corresponds to the original image $P(k,l)$, and $P_j(k,l)$ is the filtered version of the
image produced by means of the following equation:

$$P_j(k,l) = \sum_m \sum_n h(m,n)\, P_{j-1}(k + 2^{j-1}m,\; l + 2^{j-1}n) \qquad (8)$$

where $h(m,n)$ are the coefficients of the low-pass filter. The wavelet planes are thus

$$WT_j(k,l) = P_{j-1}(k,l) - P_j(k,l), \qquad j = 1, 2, \ldots, N \qquad (9)$$

where $P_0(k,l)$ corresponds to the original ETM+ image $P(k,l)$ and $P_j(k,l)$ is its filtered
version.
Figure 1. Three level decomposition using wavelet transform
3.4.1. Wavelet based fusion scheme:
Since useful features in an image are usually larger than one pixel, rules based on a single
pixel may not be the most appropriate; rules based on the neighbourhood features of a pixel are
more suitable. This kind of rule uses the neighbourhood features of a pixel to guide the selection
of coefficients at that location. The neighbourhood window is set to 3Γ—3 in this work. Suppose A
and B are the high-frequency sub-images waiting to be fused and F is the fused sub-image; then

$$F(x,y) = A(x,y) \quad \text{if } \sigma_A(x,y) \ge \sigma_B(x,y) \qquad (10)$$

$$F(x,y) = B(x,y) \quad \text{if } \sigma_A(x,y) < \sigma_B(x,y) \qquad (11)$$

where $\sigma_A$ and $\sigma_B$ are the local standard deviations of A and B over the window.
3.5 Guided filter based fusion method [25]:
3.5.1 TWO SCALE DECOMPOSITION:
The source images are first decomposed into two-scale representations by average filtering.
The base layer of each source image is obtained as

$$M_n = I_n * Z \qquad (12)$$

where $I_n$ is the nth source image, Z is the average filter, and the size of the average filter is
conventionally set to 31Γ—31. Once the base layer is obtained, the detail layer is obtained by
subtracting the base layer from the source image:

$$N_n = I_n - M_n \qquad (13)$$

The two-scale decomposition step aims at separating each source image into a base layer
containing the large-scale variations in intensity and a detail layer containing the small-scale
details.
3.5.2 WEIGHT MAP CONSTRUCTION WITH GUIDED FILTERING
First, Laplacian filtering is applied to each source image to obtain the high-pass image $H_n$:

$$H_n = I_n * L \qquad (14)$$

where L is a 3Γ—3 Laplacian filter. Then, the local average of the absolute value of $H_n$ is used
to construct the saliency map $S_n$:

$$S_n = |H_n| * g_{r_g, \sigma_g} \qquad (15)$$

where g is a Gaussian low-pass filter of size $(2r_g + 1) \times (2r_g + 1)$, and the parameters
$r_g$ and $\sigma_g$ are set to 5. The measured saliency maps provide a good characterization of
the saliency level of the detail information. Next, the saliency maps are compared to determine
the weight maps as follows:
$$O_n^k = \begin{cases} 1, & \text{if } S_n^k = \max\left(S_1^k, S_2^k, \ldots, S_N^k\right) \\ 0, & \text{otherwise} \end{cases} \qquad (17)$$

where N is the number of source images and $S_n^k$ is the saliency value of pixel k in the nth image.
However, the weight maps obtained above are usually noisy and not aligned with object
boundaries, which may produce artifacts in the fused image. Using spatial consistency is an
effective way to solve this problem. Spatial consistency means that if two adjacent pixels have
similar brightness or color, they will tend to have similar weights. A popular
spatial-consistency-based fusion approach formulates an energy function in which the pixel
saliencies are encoded and edge-aligned weights are enforced by regularization terms, e.g., a
smoothness term. This energy function can then be minimized globally to obtain the desired
weight maps. However, optimization-based methods are often relatively inefficient.
An interesting alternative to optimization-based methods is therefore used here: guided image
filtering is performed on each weight map $O_n$ with the corresponding source image $I_n$
serving as the guidance image,
π‘Šπ‘›
𝑀
= πΊπ‘Ÿ1,πœ–1
(𝑂 𝑛, 𝐼 𝑛) (18)
π‘Šπ‘›
𝑁
= πΊπ‘Ÿ2,πœ–2
( 𝑂 𝑛, 𝐼 𝑛) (19)
where r1, _𝛆1, r2, and 𝛆2 are the parameters of the guided filter, Wn
B and WD
n are the resulting
weight maps of the base and detail layers. Finally, the values of the N weight maps are
normalized such that they sum to one at each pixel k.
3.5.3 TWO SCALE IMAGE RECONSTRUCTION
Two-scale image reconstruction consists of the following two steps. First, the base and detail
layers of the different source images are fused together by weighted averaging:

$$\bar{B} = \sum_{n=1}^{N} W_n^M M_n \qquad (20)$$

$$\bar{D} = \sum_{n=1}^{N} W_n^N N_n \qquad (21)$$

Then, the fused image is obtained by combining the fused base and detail layers:

$$R = \bar{B} + \bar{D} \qquad (22)$$
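A minimal end-to-end sketch of Sections 3.5.1-3.5.3 in Python/NumPy with SciPy, for single-band sources normalized to [0, 1]. The guided filter here is the standard box-window formulation of He et al.; the default parameter values (r1 = 45, Ξ΅1 = 0.3, r2 = 7, Ξ΅2 = 10^-6) are typical choices for this method rather than values fixed by this thesis, and the Gaussian smoothing with Οƒ = 5 approximates the (2rg + 1)-sized filter of Eq. (15):

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter, convolve

def guided_filter(guide: np.ndarray, src: np.ndarray, r: int, eps: float) -> np.ndarray:
    """Single-channel guided filter with a (2r+1) x (2r+1) box window."""
    size = 2 * r + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ip = uniform_filter(guide * src, size)
    var_i = uniform_filter(guide * guide, size) - mean_i ** 2
    a = (corr_ip - mean_i * mean_p) / (var_i + eps)  # local linear coefficient
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def gff_fuse(images, r1=45, eps1=0.3, r2=7, eps2=1e-6):
    """Two-scale guided-filtering fusion of Sections 3.5.1-3.5.3."""
    lap = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])  # 3x3 Laplacian, Eq. (14)
    imgs = [im.astype(np.float64) for im in images]
    bases = [uniform_filter(im, 31) for im in imgs]              # Eq. (12), 31x31 average
    details = [im - base for im, base in zip(imgs, bases)]       # Eq. (13)
    sal = np.stack([gaussian_filter(np.abs(convolve(im, lap)), 5) for im in imgs])  # Eq. (15)
    masks = (sal == sal.max(axis=0, keepdims=True)).astype(np.float64)              # Eq. (17)
    wb = np.stack([guided_filter(im, m, r1, eps1) for im, m in zip(imgs, masks)])   # Eq. (18)
    wd = np.stack([guided_filter(im, m, r2, eps2) for im, m in zip(imgs, masks)])   # Eq. (19)
    wb /= wb.sum(axis=0) + 1e-12                                 # normalize weights per pixel
    wd /= wd.sum(axis=0) + 1e-12
    fused_base = (wb * np.stack(bases)).sum(axis=0)              # Eq. (20)
    fused_detail = (wd * np.stack(details)).sum(axis=0)          # Eq. (21)
    return fused_base + fused_detail                             # Eq. (22)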
3.6 Proposed hybrid image fusion methods: In this thesis, we present a comparative study of
three hybrid image fusion methods, namely:
i) Brovey transform with guided filter (BTGF)
ii) IHS transform with guided filter (IHSGF)
iii) Wavelet transform with guided filter (WTGF)
3.6.1 Brovey Transform with Guided Filter (BTGF): the detailed steps of the fusion procedure
are as follows:
i) Consider the panchromatic and multispectral images.
ii) Pre-process both images (ortho-rectification and geo-rectification) using the ERDAS tool.
iii) Apply guided-filter fusion to the PAN and MS images to generate a high-resolution MS
image called GMS.
iv) Separate the R, G, and B components from the MS image.
v) Use the GMS in the Brovey transform to generate the new R', G', B' components.
vi) Generate the fused MS image from the new R', G', B' components using the ERDAS tool.
3.6.2 IHS Transform with Guided Filter (IHSGF): the detailed steps of the fusion procedure
are as follows:
i) Consider the panchromatic and multispectral images.
ii) Pre-process both images (ortho-rectification and geo-rectification) using the ERDAS tool.
iii) Apply guided-filter fusion to the PAN and MS images to generate a high-resolution MS
image called GMS.
iv) Apply the IHS transform to the MS and GMS images to extract the intensity component
from each. Let I be the intensity component of the MS image and I' the intensity component of
the GMS image.
v) Replace the I component of the MS image with the I' component of the GMS image.
vi) Inverse-transform the components I', H, S to get the fused image.
3.6.3 Wavelet Transform with Guided Filter (WTGF): the detailed steps of the fusion
procedure are as follows:
i) Consider the panchromatic and multispectral images.
ii) Pre-process both images (ortho-rectification and geo-rectification) using the ERDAS tool.
iii) Apply guided-filter fusion to the PAN and MS images to generate a high-resolution MS
image called GMS.
iv) Apply a three-level wavelet transform to both the MS and GMS images and fuse the
decomposed components using the wavelet fusion rules of Section 3.4.1.
v) Apply the inverse wavelet transform to the fused components to get the high-resolution MS
image.
3.7 Performance Measurement Parameters:
1. Spectral Angle Mapper:

$$SAM(u,v) = \cos^{-1}\left[\frac{\sum_{i=1}^{L} u_i v_i}{\sqrt{\sum_{i=1}^{L} u_i^2}\,\sqrt{\sum_{i=1}^{L} v_i^2}}\right] \qquad (23)$$

2. Cross Correlation:

$$CC = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j}-\bar{x})(y_{i,j}-\bar{y})}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j}-\bar{x})^2\,\sum_{i=1}^{M}\sum_{j=1}^{N}(y_{i,j}-\bar{y})^2}} \qquad (24)$$

3. Root Mean Square Error:

$$RMSE = \sqrt{\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j}-y_{i,j})^2}{M \times N}} \qquad (25)$$

4. Peak Signal-to-Noise Ratio:

$$PSNR = 20\log_{10}\left[\frac{L^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j)-I_f(i,j)\right)^2}\right] \qquad (26)$$

5. Standard Deviation:

$$SD = \sqrt{\frac{\sum_{m=1}^{M}\sum_{n=1}^{N}\left(B_R(m,n)-\mu\right)^2}{M \times N}} \qquad (27)$$

6. Structural Similarity Index:

$$SSIM(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \qquad (28)$$
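A minimal Python/NumPy sketch of these measures. Eq. (26) is implemented exactly as written above, i.e., 20 log10 of L²/MSE, which matches the PSNR magnitudes reported in the tables below; library implementations of SSIM normally use local windows, whereas the version here evaluates Eq. (28) globally over the whole image.

import numpy as np

def rmse(x, y):
    """Root mean square error, Eq. (25)."""
    return np.sqrt(np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2))

def cc(x, y):
    """Cross correlation, Eq. (24)."""
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

def sam(u, v, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra, Eq. (23)."""
    num = (u * v).sum(axis=-1)
    den = np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1) + eps
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio per Eq. (26): 20*log10(L^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return 20.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Eq. (28) evaluated over the whole image (single window)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))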
CHAPTER 4
RESULTS AND DISCUSSION
Figure 1. Results for data set 1: (a) original MS image; (b) original PAN image; (c) BTGF result; (d) IHSGF result; (e) WTGF result.
Figure 2. Results for data set 2: (a) original MS image; (b) original PAN image; (c) BTGF result; (d) IHSGF result; (e) WTGF result.
Figure 3. Results for data set 3: (a) original MS image; (b) original PAN image; (c) BTGF result; (d) IHSGF result; (e) WTGF result.
Figure 4. Results for data set 4: (a) original MS image; (b) original PAN image; (c) BTGF result; (d) IHSGF result; (e) WTGF result.
Table 1. Performance measurement parameters for data set 1

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             5.8590     2.7589     1.3178
CC          1             0.8191     0.9582     0.9941
MSE         0             0.0768     0.0234     0.0191
PSNR        maximum       140.8400   161.4846   165.0632
SD          0             0.0228     0.0634     0.0228
SSIM        1             0.8076     0.9234     0.9091

Table 2. Performance measurement parameters for data set 2

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             6.8660     3.8461     1.5164
CC          1             0.6099     0.9456     0.9930
MSE         0             0.0725     0.0273     0.0188
PSNR        maximum       141.8393   158.8136   165.2613
SD          0             0.0284     0.0506     0.0284
SSIM        1             0.7939     0.9034     0.9188

Table 3. Performance measurement parameters for data set 3

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             5.0028     4.2428     1.1325
CC          1             0.7920     0.9426     0.9910
MSE         0             0.0798     0.0293     0.0107
PSNR        maximum       140.1900   157.6005   175.1078
SD          0             0.0269     0.0277     0.0269
SSIM        1             0.8497     0.8819     0.9700

Table 4. Performance measurement parameters for data set 4

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             6.4060     4.3301     1.3675
CC          1             0.6989     0.9485     0.9960
MSE         0             0.0597     0.0272     0.0194
PSNR        maximum       145.2268   158.8695   164.7638
SD          0             0.0368     0.0572     0.0368
SSIM        1             0.8397     0.8996     0.9310
Figure 5. Graphs of the six performance measurement parameters: (a) Spectral Angle Mapper; (b) Cross Correlation; (c) Root Mean Square Error; (d) Peak Signal-to-Noise Ratio; (e) Standard Deviation; (f) Structural Similarity Index.
4.1 Discussion: Figures 1, 2, 3, and 4 show the fusion results of the four data sets for the three
hybrid image fusion methods under consideration. Tables 1, 2, 3, and 4 give the performance
measurement parameter values for the different datasets and methods. Figures 5(a) to 5(f) show
graphs of the six performance measurement parameters and their values.
Objective evaluation: In this thesis, a total of six performance measurement parameters are
considered to validate the results. From Tables 1, 2, 3, and 4, we can observe that, of the three
hybrid fusion methods considered, the WTGF method performs well compared to the remaining
two. From the graphs of Figures 5(a) to 5(f), it is clear that the WTGF method has the best values
for all the performance measurement parameters considered.
Subjective evaluation: Data set 1 contains both vegetated and non-vegetated areas. From
Figures 1(c)-3(c), 1(d)-3(d), and 1(e)-3(e), it is visually observed that the BTGF and IHSGF
hybrid methods are unable to retain the color information of the original MS image, i.e., they
produce color distortion in the fused image. All three hybrid methods (BTGF, IHSGF, and
WTGF) can produce a good pan-sharpened image, but while BTGF and IHSGF fail to preserve
the color information, WTGF retains it as well.
CHAPTER 5
CONCLUSION
In this thesis, we considered three hybrid image fusion methods for a comparative study and
six performance measurement parameters to test their algorithms. The experimental study shows
that two of the hybrid fusion methods, BTGF and IHSGF, are unable to preserve the color
information of the original MS image, while the WTGF method is good at retaining the color
information of the original MS image. We conclude that the hybrid image fusion method
combining the wavelet transform and the guided filter (WTGF) is good at preserving both the
spatial and the spectral properties.
REFERENCES
[1]. Mouyan Zou and Yan Liu, "Multisensory image fusion: Difficulties and key techniques," IEEE Second International Congress on Image and Signal Processing, pp. 1-5, 2009.
[2]. Vaishali Asirwal, Himanshu Yadav, and Anurag Jain, "Hybrid model for preserving brightness over the digital image processing," 4th IEEE International Conference on Computer and Communication Technology (ICCCT), pp. 48-53, 2013.
[3]. A. Ardeshir Goshtasby and Stavri Nikolov, "Image fusion: Advances in the state of the art," Information Fusion, pp. 114-118, Elsevier, 2007.
[4]. Paul Mather and Brandt Tso, Classification Methods for Remotely Sensed Data, Second Edition, CRC Press, 2016.
[5]. A. Fanelli, A. Leo, and M. Ferri, "Remote sensing image data fusion: A wavelet transform approach for urban analysis," IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, pp. 112-116, 2001.
[6]. Jinaliang Wang, Chengana Wang, and Xiaohu Wang, "An experimental research on fusion algorithms of ETM+ image," IEEE 18th International Conference on Geoinformatics, pp. 1-6, 2010.
[7]. C. Thomas, T. Ranchin, L. Wald, and J. Chanussot, "Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics," IEEE Trans. Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1301-1312, 2008.
[8]. H. AanΓ¦s, J. R. Sveinsson, A. A. Nielsen, T. Bovith, and J. A. Benediktsson, "Model-based satellite image fusion," IEEE Trans. Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1336-1346, 2008.
[9]. F. Fang, F. Li, C. Shen, and G. Zhang, "A variational approach for pan-sharpening," IEEE Trans. Image Processing, vol. 22, no. 7, pp. 2822-2834, 2013.
[10]. S. Li and B. Yang, "A new pan-sharpening method using a compressed sensing technique," IEEE Trans. Geoscience and Remote Sensing, vol. 49, no. 2, pp. 738-746, 2011.
[11]. F. Palsson, J. R. Sveinsson, and M. O. Ulfarsson, "A new pansharpening algorithm based on total variation," IEEE Geoscience and Remote Sensing Letters, vol. 11, pp. 318-322, 2014.
[12]. S. Leprince, S. Barbot, F. Ayoub, and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Trans. Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529-1558, 2007.
[13]. M. L. Uss, B. Vozel, V. A. Dushepa, V. A. Komjak, and K. Chehdi, "A precise lower bound on image subpixel registration accuracy," IEEE Trans. Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3333-3345, 2014.
[14]. Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2233-2246, 2012.
[15]. Firouz Abdullah Alwassai, N. V. Kalyankar, and Ali A. Al-Zuky, "The IHS based image fusion," Computer Vision & Pattern Recognition (cs.CV), July 19, 2011.
[16]. Mohammad R. Metwalli, Ayman H. Nasr, Osama S. Farag Allah, and S. El-Rabaie, "Image fusion based on principal component analysis and high-pass filters," IEEE International Conference on Computer Engineering & Systems, pp. 63-70, 2009.
[17]. Heng Ma, Chunying Jai, and Shuang Liu, "Multisource image fusion based on wavelet transform," International Journal of Information Technology, vol. 11, no. 7, 2005.
[18]. Maria Gonzalez-Audicana, Jose Luis Saleta, Raquel Catalan, and Rafael Garcia, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291-1299, June 2004.
[19]. Wei Liu, Jie Huang, and Yong Jun Zho, "Multisensor image fusion with undecimated discrete wavelet transform," IEEE 8th International Conference on Signal Processing, vol. 2, 2006.
[20]. Jin Wu, Jian Liu, Jinwen Tian, and Bingkun Yin, "Wavelet-based remote sensing image fusion with PCA and feature product," IEEE International Conference on Mechatronics and Automation, pp. 2053-2057, June 25-28, 2006.
[21]. Juliana G. Denipote and Maria Stela V. Paiva, "A Fourier transform-based approach to fusion of high spatial resolution remote sensing images," IEEE Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 179-186, 2008.
[22]. Jing Tao, Ma Hao, and Zhu Hongchun, "The study of remote sensing image fusion based on GIHS transform," IEEE Second International Congress on Image and Signal Processing, pp. 1-4, Oct. 17-19, 2009.
[23]. Heng Chu, De-gui Teng, and Ming-quan Wang, "Fusion of remotely sensed images based on subsampled contourlet transform and spectral response," IEEE Urban Remote Sensing Event, pp. 1-5, May 20-22, 2009.
[24]. Mengxian Song, Xinyu Chen, and Ping Guo, "A fusion method for multispectral and panchromatic images based on HSI and contourlet transformation," IEEE Workshop on Image Analysis for Multimedia Interactive Services, pp. 77-88, May 6-8, 2009.
[25]. Gong Jianzhou, Zhang Ling, and Liu Yansui, "Fusion processing and quality evaluation of remote sensing images based on the integration of different transform methods with IHS," IEEE ICMT 2010, pp. 1-4, Oct. 29-31, 2010.
More Related Content

What's hot

Wavelet based image fusion
Wavelet based image fusionWavelet based image fusion
Wavelet based image fusionUmed Paliwal
Β 
Image enhancement techniques
Image enhancement techniquesImage enhancement techniques
Image enhancement techniquesSaideep
Β 
Object tracking presentation
Object tracking  presentationObject tracking  presentation
Object tracking presentationMrsShwetaBanait1
Β 
Image segmentation
Image segmentationImage segmentation
Image segmentationDeepak Kumar
Β 
Image Registration (Digital Image Processing)
Image Registration (Digital Image Processing)Image Registration (Digital Image Processing)
Image Registration (Digital Image Processing)VARUN KUMAR
Β 
Computer Vision Structure from motion
Computer Vision Structure from motionComputer Vision Structure from motion
Computer Vision Structure from motionWael Badawy
Β 
Region filling
Region fillingRegion filling
Region fillinghetvi naik
Β 
Content based image retrieval(cbir)
Content based image retrieval(cbir)Content based image retrieval(cbir)
Content based image retrieval(cbir)paddu123
Β 
Image segmentation
Image segmentationImage segmentation
Image segmentationBulbul Agrawal
Β 
"Introduction to Feature Descriptors in Vision: From Haar to SIFT," A Present...
"Introduction to Feature Descriptors in Vision: From Haar to SIFT," A Present..."Introduction to Feature Descriptors in Vision: From Haar to SIFT," A Present...
"Introduction to Feature Descriptors in Vision: From Haar to SIFT," A Present...Edge AI and Vision Alliance
Β 
Chapter10 image segmentation
Chapter10 image segmentationChapter10 image segmentation
Chapter10 image segmentationasodariyabhavesh
Β 
Digital Image Processing: Digital Image Fundamentals
Digital Image Processing: Digital Image FundamentalsDigital Image Processing: Digital Image Fundamentals
Digital Image Processing: Digital Image FundamentalsMostafa G. M. Mostafa
Β 
Digital image processing
Digital image processingDigital image processing
Digital image processingAstha Jain
Β 

What's hot (20)

Image Fusion
Image FusionImage Fusion
Image Fusion
Β 
Wavelet based image fusion
Wavelet based image fusionWavelet based image fusion
Wavelet based image fusion
Β 
Image enhancement techniques
Image enhancement techniquesImage enhancement techniques
Image enhancement techniques
Β 
Computer Vision
Computer VisionComputer Vision
Computer Vision
Β 
Object tracking presentation
Object tracking  presentationObject tracking  presentation
Object tracking presentation
Β 
Object recognition
Object recognitionObject recognition
Object recognition
Β 
Object Recognition
Object RecognitionObject Recognition
Object Recognition
Β 
Image segmentation
Image segmentationImage segmentation
Image segmentation
Β 
Image Registration (Digital Image Processing)
Image Registration (Digital Image Processing)Image Registration (Digital Image Processing)
Image Registration (Digital Image Processing)
Β 
Computer Vision Structure from motion
Computer Vision Structure from motionComputer Vision Structure from motion
Computer Vision Structure from motion
Β 
Region filling
Region fillingRegion filling
Region filling
Β 
Content based image retrieval(cbir)
Content based image retrieval(cbir)Content based image retrieval(cbir)
Content based image retrieval(cbir)
Β 
Image segmentation
Image segmentationImage segmentation
Image segmentation
Β 
Depth Buffer Method
Depth Buffer MethodDepth Buffer Method
Depth Buffer Method
Β 
"Introduction to Feature Descriptors in Vision: From Haar to SIFT," A Present...
"Introduction to Feature Descriptors in Vision: From Haar to SIFT," A Present..."Introduction to Feature Descriptors in Vision: From Haar to SIFT," A Present...
"Introduction to Feature Descriptors in Vision: From Haar to SIFT," A Present...
Β 
Chapter10 image segmentation
Chapter10 image segmentationChapter10 image segmentation
Chapter10 image segmentation
Β 
Digital Image Processing: Digital Image Fundamentals
Digital Image Processing: Digital Image FundamentalsDigital Image Processing: Digital Image Fundamentals
Digital Image Processing: Digital Image Fundamentals
Β 
Image segmentation
Image segmentation Image segmentation
Image segmentation
Β 
Digital image processing
Digital image processingDigital image processing
Digital image processing
Β 
Sharpening spatial filters
Sharpening spatial filtersSharpening spatial filters
Sharpening spatial filters
Β 

Similar to Motivation for image fusion

RADAR Image Fusion Using Wavelet Transform
RADAR Image Fusion Using Wavelet TransformRADAR Image Fusion Using Wavelet Transform
RADAR Image Fusion Using Wavelet TransformINFOGAIN PUBLICATION
Β 
Satellite image Processing Seminar Report
Satellite image Processing Seminar ReportSatellite image Processing Seminar Report
Satellite image Processing Seminar Reportalok ray
Β 
Thesis Manuscript Final Draft
Thesis Manuscript Final DraftThesis Manuscript Final Draft
Thesis Manuscript Final DraftDerek Foster
Β 
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...DR.P.S.JAGADEESH KUMAR
Β 
Earth Science and Remote Sensing Applications [Book]
Earth Science and Remote Sensing Applications [Book]Earth Science and Remote Sensing Applications [Book]
Earth Science and Remote Sensing Applications [Book]DR.P.S.JAGADEESH KUMAR
Β 
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...DR.P.S.JAGADEESH KUMAR
Β 
Multiresolution SVD based Image Fusion
Multiresolution SVD based Image FusionMultiresolution SVD based Image Fusion
Multiresolution SVD based Image FusionIOSRJVSP
Β 
Automatic traffic light controller for emergency vehicle using peripheral int...
Automatic traffic light controller for emergency vehicle using peripheral int...Automatic traffic light controller for emergency vehicle using peripheral int...
Automatic traffic light controller for emergency vehicle using peripheral int...IJECEIAES
Β 
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne...
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne...Object Classification of Satellite Images Using Cluster Repulsion Based Kerne...
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne...IOSR Journals
Β 
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONINFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONIJCI JOURNAL
Β 
Remote Sensing
Remote SensingRemote Sensing
Remote SensingSovanBar
Β 
Lw3620362041
Lw3620362041Lw3620362041
Lw3620362041IJERA Editor
Β 
International Journal of Computational Engineering Research(IJCER)
 International Journal of Computational Engineering Research(IJCER)  International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER) ijceronline
Β 
Mn3621372142
Mn3621372142Mn3621372142
Mn3621372142IJERA Editor
Β 
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...Unsupervised Building Extraction from High Resolution Satellite Images Irresp...
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...CSCJournals
Β 

Similar to Motivation for image fusion (20)

RADAR Image Fusion Using Wavelet Transform
RADAR Image Fusion Using Wavelet TransformRADAR Image Fusion Using Wavelet Transform
RADAR Image Fusion Using Wavelet Transform
Β 
Satellite image Processing Seminar Report
Satellite image Processing Seminar ReportSatellite image Processing Seminar Report
Satellite image Processing Seminar Report
Β 
Fd36957962
Fd36957962Fd36957962
Fd36957962
Β 
A HYBRID APPROACH OF WAVELETS FOR EFFECTIVE IMAGE FUSION FOR MULTIMODAL MEDIC...
A HYBRID APPROACH OF WAVELETS FOR EFFECTIVE IMAGE FUSION FOR MULTIMODAL MEDIC...A HYBRID APPROACH OF WAVELETS FOR EFFECTIVE IMAGE FUSION FOR MULTIMODAL MEDIC...
A HYBRID APPROACH OF WAVELETS FOR EFFECTIVE IMAGE FUSION FOR MULTIMODAL MEDIC...
Β 
Thesis Manuscript Final Draft
Thesis Manuscript Final DraftThesis Manuscript Final Draft
Thesis Manuscript Final Draft
Β 
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Β 
Earth Science and Remote Sensing Applications [Book]
Earth Science and Remote Sensing Applications [Book]Earth Science and Remote Sensing Applications [Book]
Earth Science and Remote Sensing Applications [Book]
Β 
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...
Β 
Multiresolution SVD based Image Fusion
Multiresolution SVD based Image FusionMultiresolution SVD based Image Fusion
Multiresolution SVD based Image Fusion
Β 
Automatic traffic light controller for emergency vehicle using peripheral int...
Automatic traffic light controller for emergency vehicle using peripheral int...Automatic traffic light controller for emergency vehicle using peripheral int...
Automatic traffic light controller for emergency vehicle using peripheral int...
Β 
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne...
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne...Object Classification of Satellite Images Using Cluster Repulsion Based Kerne...
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne...
Β 
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONINFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
Β 
H017534552
H017534552H017534552
H017534552
Β 
Remote Sensing
Remote SensingRemote Sensing
Remote Sensing
Β 
Lw3620362041
Lw3620362041Lw3620362041
Lw3620362041
Β 
International Journal of Computational Engineering Research(IJCER)
 International Journal of Computational Engineering Research(IJCER)  International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
Β 
sibgrapi2015
sibgrapi2015sibgrapi2015
sibgrapi2015
Β 
Mn3621372142
Mn3621372142Mn3621372142
Mn3621372142
Β 
Remote+Sensing
Remote+SensingRemote+Sensing
Remote+Sensing
Β 
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...Unsupervised Building Extraction from High Resolution Satellite Images Irresp...
Unsupervised Building Extraction from High Resolution Satellite Images Irresp...
Β 

Recently uploaded

Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
Β 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppCeline George
Β 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
Β 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
They help scientists to study Earth and outer space, and to find asteroids, comets, and black holes.
6. Weather Satellites: They help scientists to study different types of weather patterns. They are used to predict the weather and to track severe storms.

Remote sensing images provide a better way to understand the Earth's environment by collecting huge amounts of data via man-made satellites, aircraft, Synthetic Aperture Radar (SAR) and so on. Remote sensing images, unlike natural images, cover a wide variety of scenes containing a large number of natural, man-made and military objects. The latest sensors exhibit high resolution and can sense many objects with different kinds of shapes, edges and contours, so these images are more reliable, as they contain much information in the high-frequency bands as well as in the low-frequency bands. In satellite imaging, two types of images are available:
1. Panchromatic images (PAN): an image collected over a broad range of visual wavelengths but rendered in black and white. In PAN mode, the image is acquired with a high spatial resolution that depends on the type of satellite: for example, 5.8 m per pixel (IRS), 10 m per pixel (SPOT) and 1 m per pixel (IKONOS).
2. Multispectral images (MS): an image optically acquired in more than one spectral or wavelength interval. In MS mode, the image is acquired with a much lower spatial resolution that again depends on the type of satellite: for example, 23.5 m per pixel (IRS), 20 m per pixel (SPOT) and 4 m per pixel (IKONOS).

1.3 LEVELS OF IMAGE FUSION
1.3.1 Pixel level image fusion:
This is fusion at the lowest possible level of abstraction, in which the data from two different sources are fused directly. In image fusion, the data are the pixels of the images from the different sources. Fusion at this level has the advantage that it uses the original data, which are as close as possible to reality. The images are merged on a pixel-by-pixel basis after they have been co-registered at exactly the same resolution level. Most of the time, the images are also geo-coded before fusion, since fusion at the pixel level requires accurate registration of the images to be merged. Accurate registration in turn requires re-sampling and geometric correction, and there are several methods for re-sampling and registering the images. Geometric correction requires knowledge of the sensor viewing parameters, along with software that takes into account the image acquisition geometry and Ground Control Points (GCPs). GCPs are landscape features whose exact locations on the ground are known; they may occur naturally, e.g. road intersections and coastal features, or may be introduced intentionally for the purpose of geometric correction. In some cases, where the surface is highly uneven, a Digital Elevation Model (DEM) is required. This is especially important for processing SAR data, whose sensor has a side-looking, i.e. oblique, viewing geometry: the oblique radar waves strike a bump on rough terrain instead of the targeted location on the surface. Image fusion at this level places the highest demands on computer memory and processing power and takes the longest processing times.
1.3.2 Feature level image fusion:
This approach merges the datasets, i.e. the images, at an intermediate level of abstraction. It is suitable to opt for feature-level fusion only if the features extracted from the various data sources can properly be associated with each other. For example, features such as edges and segments can be extracted from both optical and SAR images and then merged to work out joint features and classification. SAR images provide textural information that is complementary to the spectral information of optical images; therefore, texture features extracted from SAR images and spectral features extracted from MS images may be fused before a classifier classifies them. [3] fuses a hyperspectral image with a high-resolution image at the feature level. Some works propose fusing different kinds of features extracted from the same image before classifying it: for example, [4] fuses texture features for the classification of very high-resolution remote sensing images, and [5] fuses different texture features extracted from SAR images.

1.3.3 Decision level image fusion:
At this level, each source is processed separately and the resulting decisions (e.g. classification outputs) are combined. It is not necessary to perform fusion at only one of the three levels: fusion may take place at any two, or all three, levels, and there exist techniques that allow fusion of image and non-image data at multiple levels of inference [4]. [5] applies multi-level fusion to multispectral image sequences for target detection. [6] proposes a multilevel image fusion framework that performs image fusion at all three levels and reports significantly better results when fusion is performed simultaneously at the first two levels (i.e. pixel and feature level) than when it is performed at any one level alone. Multilevel fusion may, however, take several forms, such as the one in the succeeding section.

1.4 GENERIC REQUIREMENTS OF IMAGE FUSION
After an in-depth and critical literature survey, the present study found that the design of an image fusion system needs to take care of the following requirements:
1. The fused image should preserve, as closely as possible, all relevant information contained in the input images.
2. The fusion process should not introduce any artefacts or inconsistencies which can distract or mislead the human observer or any subsequent image processing steps.
3. The fused image should suppress irrelevant features and noise to the maximum extent.
4. The fusion process should maximize the amount of relevant information in the fused image, while minimizing the amount of irrelevant detail, uncertainty and redundancy in the fused image.
CHAPTER 2
LITERATURE SURVEY

Claire Thomas, Thierry Ranchin et al. [7] proposed a framework to synthesize a high-resolution multispectral image from a low-resolution panchromatic image. They surveyed many existing fusion methods, such as substitution-based methods, relative spectral contribution methods and ARSIS-based methods, along with the advantages and disadvantages of each.

Henrik Aanæs, J. R. Sveinsson et al. [8] proposed a method for pixel-level satellite image fusion derived from the imaging sensor model, with pixel-neighborhood regularization used to regularize the method. The algorithm was tested on QuickBird, IKONOS and Meteosat data sets, using Root Mean Square Error (RMSE), Cross Correlation (CC), Structural Similarity Index (SSIM) and Q4 as the performance evaluation metrics. The authors showed that the proposed method performs well compared with many existing methods.

Faming Fang, Fang Li et al. [9] proposed a new variational image fusion method based on three assumptions, including: (i) the gradient of the PAN image is a linear combination of the image bands used in the pan-sharpened image, and (ii) the gradient in the spectral direction of the fused image should approximate that of the low-resolution MS image. The algorithm was tested on QuickBird and IKONOS data sets, with RMSE, CC, Spectral Angle Mapper (SAM) and Spatial Frequency (SF) as the performance evaluation metrics.

Xinghao Ping, Yiyong Jiang et al. [10] proposed a Bayesian non-parametric dictionary learning model for image fusion. The proposed method does not demand the original MS image for dictionary learning; rather, it directly uses the reconstructed images. The algorithm was tested on IKONOS, Pleiades and QuickBird data sets, with RMSE, CC, ERGAS and Q4 as the performance evaluation metrics.

S. Li and B. Yang [11] posed the image fusion problem within compressed sensing theory. First, a degradation model is constructed that treats the low-resolution MS image and the high-resolution PAN image as the result of a linear sampling process, so that image fusion is converted into a restoration problem; a pursuit algorithm is then used to solve this restoration problem. QuickBird and IKONOS satellite images were used to test the algorithm, with CC, SAM, RMSE, ERGAS and Q4 as the performance evaluation metrics.
F. Palsson, J. R. Sveinsson et al. [12] proposed a model-based image fusion method. The model is built on the assumption that a linear combination of the bands of the fused image gives the panchromatic image, and that downsampling the fused image gives the multispectral image. The algorithm was tested on QuickBird data sets, with SAM, ERGAS, CC and Q4 as the performance evaluation metrics.

S. Leprince, S. Barbot et al. [13] proposed a method for automatically co-registering optical satellite images for ground deformation measurement. Using the proposed method, the images are co-registered to an accuracy of 1/50 of a pixel. The algorithm was tested on SPOT satellite images both in the case of no coseismic deformation and in the case of large coseismic deformation.

M. L. Uss, B. Vozel et al. [14] proposed a new performance bound for analyzing image registration methods objectively. The proposed lower bound involves the geometric transformation assumed between the reference and template images. The experimental results showed that this lower bound describes the performance of conventional estimators more faithfully than other bounds proposed in the literature.

Y. Peng, A. Ganesh et al. [15] proposed an image registration method called Robust Alignment by Sparse and Low-rank decomposition for linearly correlated images (RASL), which efficiently co-registers linearly correlated satellite images. The accuracy of the proposed method is very high, and it co-registers data sets efficiently over a wide range of realistic misalignments and corruptions.

Miloud Chikr El-Mezouar, Nasreddine Taleb et al. [16] proposed a new fusion approach that produces images with natural colors; in this technique, a high-resolution normalized difference is also proposed and used to delineate vegetation. The procedure is performed in two steps: MS fusion using the IHS technique, followed by vegetation enhancement. Vegetation enhancement is a correction step and depends on the considered application. The new approach provides very good results in terms of objective quality measures; in addition, visual analysis shows that the concept is promising and that it improves the fusion quality by enhancing the vegetated zones.

M. E. Nasr, S. M. Elkaffas et al. [17] proposed an image fusion technique based on integrating the Intensity-Hue-Saturation (IHS) transform and the Discrete Wavelet Frame Transform (DWFT) for boosting the quality of remote sensing images.
A panchromatic and a multispectral image from the Landsat-7 (ETM+) satellite were fused using this approach. Experimental results show that the proposed technique improves the spectral and spatial quality of the fused images; moreover, when applied to noisy and de-noised remote sensing images, it can preserve the quality of the fused images. Comparative analyses between different fusion techniques are also presented and show that the proposed technique outperforms the others.

Xia Chun-lin, Deng Jie et al. [18] presented a new fusion method, the PWI transformation. First, the multispectral image is transformed by IHS, and the resulting intensity component I is transformed by PCA to extract the first principal component, PC1. Using the wavelet transform, PC1 and the panchromatic image are fused, and the result is used to replace the intensity component of the multispectral image. Finally, the new multispectral image is obtained by the inverse IHS transformation. Subjective visual analysis and objective evaluation indicate that the new method is superior to each of the three individual fusion methods (IHS, the wavelet transform and PCA): it greatly enhances the representation of spatial detail and preserves the spectral information of the multispectral image well.

Hamid Reza Shahdoosti and Hassan Ghassemian [19] presented the design of an optimal filter that is able to extract relevant and non-redundant information from the PAN image. The optimal filter coefficients, extracted from the statistical properties of the images, are more consistent with the type and texture of remotely sensed images than other kernels such as wavelets. Visual and statistical assessments show that the proposed algorithm clearly improves the fusion quality in terms of correlation coefficient, relative dimensionless global error in synthesis, spectral angle mapper, universal image quality index and quality without reference, compared with fusion methods including improved intensity-hue-saturation, the multiscale Kalman filter, Bayesian fusion, the improved nonsubsampled contourlet transform and sparse fusion of images.

Jianwen Hu and Shutao Li [20] presented a novel method based on a multiscale dual bilateral filter to fuse a high spatial resolution panchromatic image with a high spectral resolution multispectral image. Compared with traditional multiresolution-based methods, the detail extraction process considers the characteristics of the panchromatic and multispectral images simultaneously. The low-resolution multispectral image is resampled to the size of the high-resolution panchromatic image and sharpened by injecting the extracted details. The proposed fusion method was tested on QuickBird and IKONOS images and compared with three popular methods.
Qizhi Xu, Bo Li et al. [21] adopted a data fitting scheme to improve spectral quality in image fusion based on the well-established component substitution (CS) approach, and also presented a generalized CS framework capable of modeling any CS image fusion method. In this framework, instead of injecting the detail information of the panchromatic (Pan) image into the substituted component, the data fitting strategy adjusts the mean information of the Pan image in the construction of the substitution component. The data fitting scheme involves two matrix subtractions and one matrix convolution; it is fast to implement and effectively avoids the spectral distortion problem. Experimental results on a large number of Pan and multispectral images show that the improved CS methods have good spatial and spectral fidelity.

Jaewan Choi, Junho Yeom et al. [22] developed a hybrid pansharpening algorithm based on the injection of primary and secondary high-frequency information to efficiently improve the spatial quality of the pansharpened image. The injected high-frequency information is composed of two types of data: the difference between the panchromatic and intensity images, and the Laplacian-filtered image of the high-frequency information. The extracted high frequencies are injected into the multispectral image using a locally adaptive fusion parameter and post-processing of the fusion parameter. In experiments using various satellite images, their results show better spatial quality than other fusion algorithms while maintaining as much spectral information as possible.

Qian Zhang, Zhiguo Cao et al. [23] proposed an iterative optimization approach for panchromatic (PAN) and multispectral (MS) images that jointly considers the registration and fusion processes. Given a registration method and a fusion method, the joint optimization is described as finding the optimal registration parameters that yield the optimal fusion performance; the downhill simplex algorithm is adopted to refine the registration parameters iteratively. Experiments on a set of PAN and MS images from ZY-3 and GeoEye-1 show that the proposed approach outperforms several competing ones in terms of registration accuracy and fusion quality.
CHAPTER 3
METHODOLOGY

3.1 Brovey Transform:
The Brovey transform (BT) method is a ratio fusion technique that preserves the relative spectral contributions of each pixel but replaces its overall brightness with that of the high-resolution panchromatic image. It is defined by

$$\begin{bmatrix} R'_{BT} \\ G'_{BT} \\ B'_{BT} \end{bmatrix} = \frac{PAN}{I}\begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (1)$$

where I is the intensity of the MS image. From Eq. (1) it is evident that BT is a simple fusion method requiring only arithmetic operations, without any statistical analysis or filter design. Owing to this efficiency and ease of implementation, it can achieve the goal of fast fusion for IKONOS/QuickBird imagery. However, color distortion problems are often produced in the fused images; hence, color distortion originating from the fusion process becomes an important issue for practical applications [24].
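To make the ratio in Eq. (1) concrete, the following is a minimal NumPy sketch, assuming co-registered float images in [0, 1] and taking I as the mean of the three MS bands (the usual Brovey intensity); the function name brovey_fuse is illustrative, not from the source.

```python
# Minimal sketch of Brovey-transform fusion (Eq. 1). `ms` is an (H, W, 3)
# multispectral image and `pan` an (H, W) panchromatic image, both float
# arrays in [0, 1] and already co-registered at the same size.
import numpy as np

def brovey_fuse(ms: np.ndarray, pan: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    intensity = ms.mean(axis=2)              # I = (R + G + B) / 3 (assumed)
    ratio = pan / (intensity + eps)          # guard against division by zero
    return np.clip(ms * ratio[..., None], 0.0, 1.0)
```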
3.2 IHS Transform:
The color system with red, green and blue channels (RGB) is usually used by computer monitors to display a color image. Another system widely used to describe a color is that of intensity, hue and saturation (IHS): the intensity represents the total amount of light in a color, the hue is the property of a color determined by its wavelength, and the saturation is the purity of the color [10]. Whatever algorithm is chosen, the IHS transform is always applied to an RGB composite, which implies that the fusion is applied to groups of three bands of the MS image. As a result of this transformation, we obtain the new intensity, hue and saturation components. The PAN image then replaces the intensity image; before doing so, the histogram of the PAN image is matched with that of the intensity image in order to minimize the modification of the spectral information of the fused MS image with respect to the original MS image. Applying the inverse transform, we obtain the fused RGB image with the spatial detail of the PAN image incorporated into it [9], [10].

$$\begin{bmatrix} I \\ V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & \frac{-2}{\sqrt{6}} \\ \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{2}} & 0 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix} \qquad (2)$$

$$H = \tan^{-1}\!\left(\frac{V_2}{V_1}\right) \qquad (3)$$

$$S = \sqrt{V_1^2 + V_2^2} \qquad (4)$$

Replacing I by the PAN image and applying the inverse transform of Eq. (5) gives the fused MS image:

$$\begin{bmatrix} MS_1^H \\ MS_2^H \\ MS_3^H \end{bmatrix} = \begin{bmatrix} 1 & \frac{-1}{\sqrt{6}} & \frac{3}{\sqrt{6}} \\ 1 & \frac{-1}{\sqrt{6}} & \frac{-3}{\sqrt{6}} \\ 1 & \frac{2}{\sqrt{6}} & 0 \end{bmatrix}\begin{bmatrix} PAN \\ V_1 \\ V_2 \end{bmatrix} \qquad (5)$$
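The component-substitution step of Eqs. (2)-(5) can be sketched as follows. This is an illustrative Python version, not the thesis implementation: it uses simple mean/standard-deviation matching as a stand-in for full histogram matching, and it inverts the forward matrix numerically rather than hard-coding the printed inverse.

```python
# Sketch of IHS substitution fusion using the forward matrix of Eq. (2).
# `ms` is (H, W, 3) and `pan` is (H, W); both float, co-registered.
import numpy as np

FWD = np.array([[1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
                [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
                [1/np.sqrt(2), -1/np.sqrt(2),  0.0]])

def ihs_fuse(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    h, w, _ = ms.shape
    ivv = ms.reshape(-1, 3) @ FWD.T          # per-pixel (I, V1, V2)
    i, p = ivv[:, 0], pan.ravel()
    # Moment matching as a simple stand-in for histogram matching of PAN to I.
    ivv[:, 0] = (p - p.mean()) / (p.std() + 1e-9) * i.std() + i.mean()
    fused = ivv @ np.linalg.inv(FWD).T       # inverse transform back to RGB
    return fused.reshape(h, w, 3)
```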
3.3 PCA method for image fusion:
PCA, also referred to as the K-L transform, is a very useful and well-known technique for reducing the dimensionality of highly correlated multispectral data. In general, the first principal component (PC1) collects the information that is common to all the bands used as input data, which makes PCA a very suitable technique for merging MS and PAN images. In this case, all the bands of the original MS image constitute the input data, and the transformation yields uncorrelated new bands, the principal components. PC1 is substituted by the PAN image, whose histogram has previously been matched with that of PC1. Finally, the inverse transformation is applied to the whole dataset formed by the modified PAN image and PC2, ..., PCn, yielding the fused MS image with the spatial detail of the PAN image incorporated into it [11].
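As a rough illustration of the PC1-substitution scheme just described, the sketch below works under the same assumptions as before (co-registered float arrays, moment matching in place of full histogram matching); pca_fuse is an illustrative name.

```python
# PCA (PC1-substitution) fusion sketch for an (H, W, B) MS image and an
# (H, W) PAN image of the same size.
import numpy as np

def pca_fuse(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    vecs = vecs[:, ::-1]                     # sort PCs by decreasing variance
    pcs = xc @ vecs
    pc1, p = pcs[:, 0], pan.ravel().astype(np.float64)
    # Match PAN's mean/std to PC1, then substitute it for PC1.
    pcs[:, 0] = (p - p.mean()) / (p.std() + 1e-9) * pc1.std() + pc1.mean()
    fused = pcs @ vecs.T + mean              # inverse PCA transform
    return fused.reshape(h, w, b)
```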
3.4 Wavelet transform based fusion:
The wavelet transform (WT) is suitable for image fusion not only because it enables one to fuse image features separately at different scales, but also because it produces large coefficients near edges in the transformed image and reveals relevant spatial information [12]. The WT decomposes the signal onto elementary functions, the wavelets, which can be described in terms of two groups of functions: wavelet functions and scaling functions. The wavelet function is commonly called the "mother wavelet" and the scaling function the "father wavelet"; translated and scaled versions of these parent wavelets are then "daughter" and "son" wavelets. In the one-dimensional case, the continuous wavelet transform of a function f(t) can be expressed as

$$WT(a, b) = \frac{1}{\sqrt{a}}\int_{-\infty}^{\infty} f(t)\,\psi\!\left(\frac{t - b}{a}\right)dt \qquad (6)$$

where WT(a, b) is the wavelet coefficient of f(t), ψ is the analyzing wavelet, and a (a > 0) and b are the scaling and translation parameters, respectively. Each basis function is a scaled and translated version of the function ψ(t), called the mother wavelet.

Currently used wavelet-based image fusion methods are mostly based on two algorithms: the Mallat algorithm [13] and the à trous algorithm [14]. The Mallat algorithm-based dyadic wavelet transform, which uses decimation, is not shift invariant and exhibits artifacts in the fused image due to aliasing [15]. The WT method allows the decomposition of the image into a set of wavelet and approximation planes, according to the theory of the multiresolution wavelet transform given by Mallat. Each wavelet plane contains the wavelet coefficients, where the amplitude of a coefficient defines the scale and information of the local features. Formally, the wavelet coefficients are computed by means of the following equation:

$$w_j(k, l) = P_{j-1}(k, l) - P_j(k, l) \qquad (7)$$

for j = 1, ..., N, where j is the scale index, N is the number of decomposition levels, P_0(k, l) corresponds to the original image P(k, l), and P_j(k, l) is the filtered version of the image produced by

$$P_j(k, l) = \sum_m \sum_n h(m, n)\, P_{j-1}\!\left(n + 2^{j-1}k,\; m + 2^{j-1}l\right) \qquad (8)$$

where h(m, n) are the filter coefficients. The wavelet planes are then

$$WT_j(k, l) = P_{j-1}(k, l) - P_j(k, l) \qquad (9)$$

for j = 1, 2, ..., N, where P_0(k, l) corresponds to the original ETM+ image P(k, l) and P_j(k, l) is its filtered version.

Figure 1. Three-level decomposition using the wavelet transform.

3.4.1 Wavelet based fusion scheme:
Since the useful features in an image are usually larger than one pixel, rules based on single pixels may not be the most appropriate; rules based on the neighborhood features of a pixel are more suitable. Such rules use neighborhood features of a pixel to guide the selection of coefficients at that location; the neighborhood window is set to 3 x 3 here. Suppose A and B are the high-frequency sub-images waiting to be fused and F is the fused sub-image; then

$$F(x, y) = A(x, y) \quad \text{if } \sigma_A(x, y) \ge \sigma_B(x, y) \qquad (10)$$
$$F(x, y) = B(x, y) \quad \text{if } \sigma_A(x, y) < \sigma_B(x, y) \qquad (11)$$
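A hedged sketch of the selection rule in Eqs. (10)-(11), applied to the detail sub-bands of a three-level decomposition, might look as follows. It assumes PyWavelets and SciPy are available; the choice of the 'db2' wavelet, the use of uniform_filter to estimate the 3 x 3 local standard deviation, and the simple averaging of the approximation bands are all assumptions, not prescriptions from the source.

```python
# Wavelet fusion rule of Eqs. (10)-(11): for each detail coefficient, keep the
# source whose 3x3 local standard deviation is larger.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def _local_std(x, size=3):
    m = uniform_filter(x, size)
    return np.sqrt(np.maximum(uniform_filter(x * x, size) - m * m, 0.0))

def wavelet_fuse(a, b, wavelet="db2", level=3):
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                 # average the approximations
    for da, db in zip(ca[1:], cb[1:]):              # per level: (LH, HL, HH)
        bands = []
        for sa, sb in zip(da, db):
            keep_a = _local_std(sa) >= _local_std(sb)   # Eq. (10) vs Eq. (11)
            bands.append(np.where(keep_a, sa, sb))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```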
3.5 Guided filter based fusion method [25]:
3.5.1 Two-scale decomposition:
The source images are first decomposed into two-scale representations by average filtering. The base layer of each source image is obtained as

$$M_n = I_n * Z \qquad (12)$$

where I_n is the nth source image, Z is the average filter, and the size of the average filter is conventionally set to 31 x 31. Once the base layer is obtained, the detail layer is obtained by subtracting the base layer from the source image:

$$N_n = I_n - M_n \qquad (13)$$

The two-scale decomposition step aims at separating each source image into a base layer containing the large-scale variations in intensity and a detail layer containing the small-scale details.
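Eqs. (12)-(13) amount to one smoothing and one subtraction; a minimal sketch, assuming SciPy's uniform_filter stands in for the 31 x 31 average filter Z:

```python
# Two-scale decomposition of Eqs. (12)-(13).
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale(img: np.ndarray, size: int = 31):
    img = np.asarray(img, dtype=np.float64)
    base = uniform_filter(img, size)       # M_n = I_n * Z   (Eq. 12)
    detail = img - base                    # N_n = I_n - M_n (Eq. 13)
    return base, detail
```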
3.5.2 Weight map construction with guided filtering:
First, Laplacian filtering is applied to each source image to obtain the high-pass image H_n:

$$H_n = I_n * L \qquad (14)$$

where L is the 3 x 3 Laplacian filter. The local average of the absolute value of H_n is then used to construct the saliency map S_n:

$$S_n = |H_n| * g_{r_g, \sigma_g} \qquad (15)$$

where g is a Gaussian low-pass filter of size (2r_g + 1) x (2r_g + 1), and the parameters r_g and σ_g are set to 5. The measured saliency maps provide a good characterization of the saliency level of the detail information. Next, the saliency maps are compared to determine the weight maps:

$$O_n^k = \begin{cases} 1, & \text{if } S_n^k = \max\{S_1^k, S_2^k, \ldots, S_N^k\} \\ 0, & \text{otherwise} \end{cases} \qquad (17)$$

where N is the number of source images and S_n^k is the saliency value of pixel k in the nth image. However, the weight maps obtained in this way are usually noisy and not aligned with object boundaries, which may introduce artifacts into the fused image. Using spatial consistency is an effective way to solve this problem: spatial consistency means that if two adjacent pixels have similar brightness or color, they will tend to have similar weights. A popular spatial-consistency-based fusion approach formulates an energy function in which the pixel saliencies are encoded and edge-aligned weights are enforced by regularization terms, e.g. a smoothness term; this energy function is then minimized globally to obtain the desired weight maps. However, such optimization-based methods are often relatively inefficient. Here, an interesting alternative to optimization is used: guided image filtering is performed on each weight map O_n with the corresponding source image I_n serving as the guidance image,

$$W_n^M = G_{r_1, \epsilon_1}(O_n, I_n) \qquad (18)$$
$$W_n^N = G_{r_2, \epsilon_2}(O_n, I_n) \qquad (19)$$

where r_1, ε_1, r_2 and ε_2 are the parameters of the guided filter, and W_n^M and W_n^N are the resulting weight maps for the base and detail layers, respectively. Finally, the values of the N weight maps are normalized so that they sum to one at each pixel k.

3.5.3 Two-scale image reconstruction:
Two-scale image reconstruction consists of two steps. First, the base and detail layers of the different source images are fused by weighted averaging:

$$\bar{B} = \sum_{n=1}^{N} W_n^M M_n \qquad (20)$$
$$\bar{D} = \sum_{n=1}^{N} W_n^N N_n \qquad (21)$$

The fused image is then obtained by combining the fused base and detail layers:

$$R = \bar{B} + \bar{D} \qquad (22)$$
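Putting Sections 3.5.1-3.5.3 together, the following sketch uses the standard box-filter formulation of the guided filter (He et al.) and the weighting steps above. The default parameter values (r1 = 45, eps1 = 0.3, r2 = 7, eps2 = 1e-6) and the assumption of grayscale inputs in [0, 1] are ours, not values given in the thesis.

```python
# Illustrative guided-filter fusion for N grayscale source images.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter, laplace

def guided_filter(guide, src, r, eps):
    size = 2 * r + 1
    mi, mp = uniform_filter(guide, size), uniform_filter(src, size)
    a = (uniform_filter(guide * src, size) - mi * mp) / \
        (uniform_filter(guide * guide, size) - mi * mi + eps)
    b = mp - a * mi
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def gff_fuse(imgs, r1=45, eps1=0.3, r2=7, eps2=1e-6):
    imgs = [np.asarray(i, dtype=np.float64) for i in imgs]
    bases = [uniform_filter(i, 31) for i in imgs]               # Eq. (12)
    details = [i - b for i, b in zip(imgs, bases)]              # Eq. (13)
    sal = [gaussian_filter(np.abs(laplace(i)), sigma=5) for i in imgs]  # (14)-(15)
    winner = np.argmax(np.stack(sal), axis=0)                   # Eq. (17)
    wb, wd = [], []
    for n, img in enumerate(imgs):
        o = (winner == n).astype(np.float64)                    # binary weight map
        wb.append(guided_filter(img, o, r1, eps1))              # Eq. (18)
        wd.append(guided_filter(img, o, r2, eps2))              # Eq. (19)
    wb = np.clip(np.stack(wb), 0, 1); wb /= wb.sum(axis=0) + 1e-12
    wd = np.clip(np.stack(wd), 0, 1); wd /= wd.sum(axis=0) + 1e-12
    base = sum(w * b for w, b in zip(wb, bases))                # Eq. (20)
    detail = sum(w * d for w, d in zip(wd, details))            # Eq. (21)
    return base + detail                                        # Eq. (22)
```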
3.6 Proposed hybrid image fusion methods:
This thesis presents a comparative study of three hybrid image fusion methods:
i) Brovey transform with guided filter (BTGF);
ii) IHS transform with guided filter (IHSGF);
iii) Wavelet transform with guided filter (WTGF).

3.6.1 Brovey transform with guided filter (BTGF):
The detailed steps of the fusion procedure are as follows:
i) Consider the panchromatic (PAN) and multispectral (MS) images.
ii) Pre-process both images (ortho-rectification and geo-rectification) using the ERDAS tool.
iii) Apply guided filter fusion to the PAN and MS images to generate a high-resolution MS image, called GMS.
iv) Separate the R, G and B components from the MS image.
v) Use the GMS in the Brovey transform to generate the new R', G' and B' components.
vi) Generate the fused MS image from the new R', G' and B' components using the ERDAS tool.

3.6.2 IHS transform with guided filter (IHSGF):
The detailed steps of the fusion procedure are as follows:
i) Consider the panchromatic and multispectral images.
ii) Pre-process both images (ortho-rectification and geo-rectification) using the ERDAS tool.
iii) Apply guided filter fusion to the PAN and MS images to generate a high-resolution MS image, called GMS.
iv) Apply the IHS transform to the MS and GMS images to extract the intensity component from each. Let I be the intensity component of the MS image and I' that of the GMS image.
v) Replace the I component of the MS image with the I' component of the GMS image.
vi) Inverse transform the components I', H, S to obtain the fused image.
3.6.3 Wavelet transform with guided filter (WTGF):
The detailed steps of the fusion procedure are as follows:
i) Consider the panchromatic and multispectral images.
ii) Pre-process both images (ortho-rectification and geo-rectification) using the ERDAS tool.
iii) Apply guided filter fusion to the PAN and MS images to generate a high-resolution MS image, called GMS.
iv) Apply a three-level wavelet transform to both the MS and GMS images and fuse the decomposed components using the wavelet fusion rules of Section 3.4.1.
v) Apply the inverse wavelet transform to the fused components to obtain the high-resolution MS image.

3.7 Performance Measurement Parameters:

1. Spectral Angle Mapper:
$$SAM(u, v) = \cos^{-1}\!\left[\frac{\sum_{i=1}^{L} u_i v_i}{\sqrt{\sum_{i=1}^{L} u_i^2}\,\sqrt{\sum_{i=1}^{L} v_i^2}}\right] \qquad (23)$$

2. Cross Correlation:
$$CC = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j} - \bar{x})(y_{i,j} - \bar{y})}{\sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j} - \bar{x})^2 \cdot \sum_{i=1}^{M}\sum_{j=1}^{N}(y_{i,j} - \bar{y})^2}} \qquad (24)$$

3. Root Mean Square Error:
$$RMSE = \sqrt{\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}(x_{i,j} - y_{i,j})^2}{MN}} \qquad (25)$$

4. Peak Signal-to-Noise Ratio:
$$PSNR = 20\log_{10}\!\left[\frac{L^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j) - I_f(i,j)\right)^2}\right] \qquad (26)$$

5. Standard Deviation:
$$SD = \sqrt{\frac{\sum_{m=1}^{M}\sum_{n=1}^{N}\left(B_R(m,n) - \mu\right)^2}{MN}} \qquad (27)$$

6. Structural Similarity Index:
$$SSIM(x, y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \qquad (28)$$
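For reference, hedged single-function sketches of these metrics are given below; the SSIM version uses whole-image (global) statistics rather than the usual local windows, and psnr uses the common 20·log10(peak/RMSE) form, so treat both as illustrative variants of Eqs. (26) and (28).

```python
# Illustrative implementations of the metrics of Eqs. (23)-(28); x is the
# reference image and y the fused image, both float arrays.
import numpy as np

def sam_deg(u, v, eps=1e-12):
    # Eq. (23): mean spectral angle in degrees; u, v are (H, W, L) band stacks.
    dot = (u * v).sum(axis=-1)
    denom = np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1) + eps
    return float(np.degrees(np.arccos(np.clip(dot / denom, -1.0, 1.0))).mean())

def cc(x, y):
    # Eq. (24): correlation coefficient between the two images.
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm**2).sum() * (ym**2).sum()))

def rmse(x, y):
    return float(np.sqrt(np.mean((x - y) ** 2)))               # Eq. (25)

def psnr(x, y, peak=1.0):
    return float(20 * np.log10(peak / (rmse(x, y) + 1e-12)))   # Eq. (26) variant

def sd(y):
    return float(y.std())                                      # Eq. (27)

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    mx, my = x.mean(), y.mean()                                # Eq. (28), global
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2)))
```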
CHAPTER 4
RESULTS AND DISCUSSION

Figure 1. Results for data set 1: (a) original MS image; (b) original PAN image; (c) result of the BTGF method; (d) result of the IHSGF method; (e) result of the WTGF method.
Figure 2. Results for data set 2: (a) original MS image; (b) original PAN image; (c) result of the BTGF method; (d) result of the IHSGF method; (e) result of the WTGF method.
Figure 3. Results for data set 3: (a) original MS image; (b) original PAN image; (c) result of the BTGF method; (d) result of the IHSGF method; (e) result of the WTGF method.
Figure 4. Results for data set 4: (a) original MS image; (b) original PAN image; (c) result of the BTGF method; (d) result of the IHSGF method; (e) result of the WTGF method.
Table 1. Performance measurement parameters for data set 1

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             5.8590     2.7589     1.3178
CC          1             0.8191     0.9582     0.9941
MSE         0             0.0768     0.0234     0.0191
PSNR        maximum       140.84     161.4846   165.0632
SD          0             0.0228     0.0634     0.0228
SSIM        1             0.8076     0.9234     0.9091

Table 2. Performance measurement parameters for data set 2

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             6.8660     3.8461     1.5164
CC          1             0.6099     0.9456     0.9930
MSE         0             0.0725     0.0273     0.0188
PSNR        maximum       141.8393   158.8136   165.2613
SD          0             0.0284     0.0506     0.0284
SSIM        1             0.7939     0.9034     0.9188

Table 3. Performance measurement parameters for data set 3

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             5.0028     4.2428     1.1325
CC          1             0.7920     0.9426     0.9910
MSE         0             0.0798     0.0293     0.0107
PSNR        maximum       140.1900   157.6005   175.1078
SD          0             0.0269     0.0277     0.0269
SSIM        1             0.8497     0.8819     0.9700

Table 4. Performance measurement parameters for data set 4

Parameter   Ideal value   BTGF       IHSGF      WTGF
SAM         0             6.4060     4.3301     1.3675
CC          1             0.6989     0.9485     0.9960
MSE         0             0.0597     0.0272     0.0194
PSNR        maximum       145.2268   158.8695   164.7638
SD          0             0.0368     0.0572     0.0368
SSIM        1             0.8397     0.8996     0.9310
Figure 5. Graphs of the results for the various parameters: (a) Spectral Angle Mapper; (b) Cross Correlation; (c) Root Mean Square Error; (d) Peak Signal-to-Noise Ratio; (e) Standard Deviation; (f) Structural Similarity Index.
4.1 Discussion:
Figures 1, 2, 3 and 4 show the fusion results of the three hybrid image fusion methods under consideration on the four data sets. Tables 1, 2, 3 and 4 list the values of the performance measurement parameters for the different data sets and methods, and Figures 5(a) to 5(f) plot these six parameters.

Objective evaluation: In this thesis, six performance measurement parameters are used to validate the results. From Tables 1-4 we can observe that, of the three hybrid fusion methods considered, the WTGF method performs well compared with the remaining two. From the graphs in Figures 5(a) to 5(f) it is clear that the WTGF method has the best values for all the performance measurement parameters considered.

Subjective evaluation: Data set 1 contains both vegetated and non-vegetated areas. From Figures 1(c)-3(c), 1(d)-3(d) and 1(e)-3(e), it is visually observed that the hybrid BTGF and IHSGF methods are unable to retain the color information of the original MS image, i.e. they produce color distortion in the fused image. All three hybrid methods (BTGF, IHSGF and WTGF) can produce a good pan-sharpened image, but BTGF and IHSGF fail to preserve the color information, whereas WTGF retains the color information as well.
CHAPTER 5
CONCLUSION

In this thesis, three hybrid image fusion methods were considered for a comparative study, and six performance measurement parameters were used to test their algorithms. The experimental study shows that two of the hybrid fusion methods, BTGF and IHSGF, are unable to preserve the color information of the original MS image, while the WTGF method retains the color information of the original MS image well. We conclude that the hybrid image fusion method combining the wavelet transform and the guided filter (WTGF) is good at preserving both the spatial and the spectral properties.
REFERENCES

[1] Mouyan Zou and Yan Liu, "Multisensory image fusion: Difficulties and key techniques," IEEE Second International Congress on Image and Signal Processing, pp. 1-5, 2009.
[2] Vaishali Asirwal, Himanshu Yadav and Anurag Jain, "Hybrid model for preserving brightness over the digital image processing," 4th IEEE International Conference on Computer and Communication Technology (ICCCT), pp. 48-53, 2013.
[3] A. Ardeshir Goshtasby and Stavri Nikolov, "Image fusion: Advances in the state of the art," pp. 114-118, ScienceDirect, Elsevier, 2007.
[4] Paul Mather and Brandt Tso, Classification Methods for Remotely Sensed Data, Second Edition, CRC Press, 2016, 376 pages.
[5] A. Fanelli, A. Leo and M. Ferri, "Remote sensing image data fusion: A wavelet transform approach for urban analysis," IEEE/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, pp. 112-116, 2001.
[6] Jinaliang Wang, Chengana Wang and Xiaohu Wang, "An experimental research on fusion algorithms of ETM+ image," IEEE 18th International Conference on Geoinformatics, pp. 1-6, 2010.
[7] C. Thomas, T. Ranchin, L. Wald and J. Chanussot, "Synthesis of multispectral images to high spatial resolution: A critical review of fusion methods based on remote sensing physics," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1301-1312, 2008.
[8] H. Aanæs, J. R. Sveinsson, A. A. Nielsen, T. Bovith and A. Benediktsson, "Model-based satellite image fusion," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1336-1346, 2008.
[9] F. Fang, F. Li, C. Shen and G. Zhang, "A variational approach for pan-sharpening," IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2822-2834, 2013.
[10] S. Li and B. Yang, "A new pan-sharpening method using a compressed sensing technique," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 2, pp. 738-746, 2011.
[11] F. Palsson, J. R. Sveinsson and M. O. Ulfarsson, "A new pansharpening algorithm based on total variation," IEEE Geoscience and Remote Sensing Letters, vol. 11, pp. 318-322, 2014.
[12] S. Leprince, S. Barbot, F. Ayoub and J.-P. Avouac, "Automatic and precise orthorectification, coregistration, and subpixel correlation of satellite images, application to ground deformation measurements," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 6, pp. 1529-1558, 2007.
[13] M. L. Uss, B. Vozel, V. A. Dushepa, V. A. Komjak and K. Chehdi, "A precise lower bound on image subpixel registration accuracy," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 6, pp. 3333-3345, 2014.
[14] Y. Peng, A. Ganesh, J. Wright, W. Xu and Y. Ma, "RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2233-2246, 2012.
[15] Firouz Abdullah Alwassai, N. V. Kalyankar and Ali A. Al-Zuky, "The IHS based image fusion," Computer Vision & Pattern Recognition (cs.CV), 19 July 2011.
[16] Mohammad R. Metwalli, Ayman H. Nasar, Osama S. Farag Allah and S. El-Rabaie, "Image fusion based on principal component analysis and high-pass filters," IEEE International Conference on Computer Engineering & Systems, pp. 63-70, 2009.
[17] Heng Ma, Chunying Jai and Shuang Liu, "Multisource image fusion based on wavelet transform," International Journal of Information Technology, vol. 11, no. 7, 2005.
[18] Maria Gonzalez-Audicana, Jose Luis Saleta, Raquel Catalan and Rafael Gracia, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, vol. 42, no. 6, pp. 1291-1299, June 2004.
[19] Wei Liu, Jie Huang and Yong Jun Zho, "Multisensor image fusion with undecimated discrete wavelet transform," IEEE 8th International Conference on Signal Processing, vol. 2, 2006.
[20] Jin Wu, Jian Lui, Jinwen Tian and Bingkun Yin, "Wavelet-based remote sensing image fusion with PCA and feature product," IEEE Proceedings of the International Conference on Mechatronics and Automation, pp. 2053-2057, June 25-28, 2006.
[21] Juliana G. Denipote and Maria Stela V. Paiva, "A Fourier transform-based approach to fusion high spatial resolution remote sensing images," IEEE Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 179-186, 2008.
[22] Jing Tao, Ma Hao and Zhu Hongchun, "The study of remote sensing image fusion based on GIHS transform," IEEE Second International Congress on Image and Signal Processing, pp. 1-4, Oct. 17-19, 2009.
[23] Heng Chu, De-gui Teng and Ming-quan Wang, "Fusion of remotely sensed images based on subsampled contourlet transform and spectral response," IEEE Urban Remote Sensing Event, pp. 1-5, May 20-22, 2009.
[24] Mengxian Song, Xinyu Chen and Ping Guo, "A fusion method for multispectral and panchromatic images based on HSI and contourlet transformation," IEEE Workshop on Image Analysis for Multimedia Interactive Services, pp. 77-88, May 6-8, 2009.
[25] Gong Jianzhou, Zhang Ling and Liu Yansui, "Fusion processing and quality evaluation of remote sensing images based on the integration of different transform methods with IHS," IEEE ICMT 2010, pp. 1-4, Oct. 29-31, 2010.