IDL - International Digital Library Of Technology & Research
Volume 1, Issue 5, May 2017 Available at: www.dbpublications.org
International e-Journal For Technology And Research-2017
IDL - International Digital Library 1 | P a g e Copyright@IDL-2017
Image Maximization Using Multi Spectral
Image Fusion Technique
Ms. SAVITHA, PG Scholar, VTU University Karnataka Email ID:savib87@gmail.com
ABSTRACT
This paper reports a detailed study of a set
of image fusion algorithms and their
implementation. It explains the theory and
implementation of effective image fusion
algorithms and presents experimental
results. The fusion algorithms are
evaluated using several image quality
metrics developed for this purpose.
In this study, two different image fusion
techniques have been applied to combine
hyperspectral satellite images of low
spatial resolution with images of high
spatial and low spectral resolution, to
obtain a fused image with increased spatial
resolution while preserving as much
spectral information as possible. These
techniques are principal component analysis
(PCA) and wavelet transform (WT) image
fusion. MATLAB is used to build the GUI
that applies the image fusion algorithms
and renders their results. Subjective
(visual) and objective evaluations of the
fused image have been carried out to assess
the success of each method. The objective
evaluation metrics include the correlation
coefficient (CC), root mean square error
(RMSE), and relative global dimensional
synthesis error (ERGAS).
The results show that the PCA method
performs better at preserving spectral
information but is less successful at
increasing spatial resolution. The WT
method is performed after an IHS
transformation to improve spatial
resolution, and is evaluated with respect
to the preservation of spectral information
alongside the PCA method.
I. INTRODUCTION
Image processing techniques focus
primarily on enhancing the quality of
images or a set of images and deriving
maximum information from them. Image
fusion is a technique for generating a
single high-quality image from a set of
available images.
The eye can sense across the spectrum
because it is a true multispectral sensor
(i.e., it senses in more than one region
of the spectrum). Although the actual
function of the eye is quite complex, it
has three independent types of detectors
that can effectively be considered
responsive to the red, green, and blue
wavelength regions. These are the additive
primary colors, and the eye combines their
responses to produce the sensation of
other colors.
II. IMAGE PROCESSING
Description: An image is a visual
representation of an object, scene,
person, or abstraction produced on a
surface, i.e., data representing a
two-dimensional scene. An image is an
artifact, such as a two-dimensional
picture, that resembles some subject,
usually a physical object or person.
1) Sample: take intensity readings at
evenly spaced locations in both the x and
y directions; this can be visualized by
placing a regularly spaced grid over the
analog image.
2) Quantize intensity: quantize the
sampled intensity values to arrive at a
signal that is discrete in both position
and amplitude.
3) Encoding: translate data to binary form.
The process of analog to digital signal
translation is completed by encoding the
quantized values into a binary chain.
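The sample–quantize–encode steps above can be sketched in code. The paper's implementation uses MATLAB; the following is an illustrative sketch in Python with NumPy, where the continuous image function is a hypothetical stand-in for the analog scene:

```python
import numpy as np

# Hypothetical continuous (analog) image: intensity as a function of (x, y),
# scaled into [0, 1].
def analog_image(x, y):
    return 0.5 * (1 + np.sin(x) * np.cos(y))

# 1) Sample: read the intensity at evenly spaced grid locations.
xs = np.linspace(0, np.pi, 8)
ys = np.linspace(0, np.pi, 8)
samples = analog_image(xs[:, None], ys[None, :])  # 8x8 grid of readings

# 2) Quantize: map continuous intensities to 256 discrete levels (one byte).
quantized = np.round(samples * 255).astype(np.uint8)

# 3) Encode: translate each quantized value into a binary chain.
encoded = [format(int(v), "08b") for v in quantized.ravel()]
```

The result is a signal that is discrete in both position (the 8x8 grid) and amplitude (256 levels), stored as binary strings.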
Gray Scale Image
Once a grayscale image has been obtained
and digitized, it is stored as a two-
dimensional array in computer storage.
Fig.: Gray Scale Image
Color Image
1. To digitize a grayscale image, we
measure the overall intensity level of
the sensed light and record it as a
function of position.
2. To digitize a color image, the
intensity of each of the three primary
colors in the incoming light must be
measured.
3. One way to carry this out is to filter
the light reaching the sensor so that it
lies within the wavelength range of a
specific color.
4. We can then detect the intensity of
that specific color at that location.
5. The three primary colors are red,
green, and blue. They are called primary
because any color of light consists of a
mixture of frequencies contained in these
three "primary" color ranges.
6. As an example of quantizing a color
image, consider a computer imaging system
that uses 24-bit color. For 24-bit color,
each of the three primary color
intensities is allowed one byte of
storage per pixel, for a total of three
bytes per pixel.
7. Each color has a permitted numerical
range from 0 to 255; for example, 0 = no
red and 255 = maximum red.
8. The combinations that can be made with
256 levels for each of the three primary
colors amount to over 16 million distinct
colors, ranging from white (R,G,B) =
(255,255,255) to black (R,G,B) =
(0,0,0).
9. Most computers store color digital
image information in three-dimensional
arrays. The first two indexes in such an
array specify the row and column of the
pixel, and the third index specifies the color
"plane" where 1 is red, 2 is green,
and 3 is blue.
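Most numerical environments store a 24-bit color image exactly this way. A minimal sketch in Python/NumPy follows (the paper's tool is MATLAB, where the same row x column x plane convention applies, with 1-based indices):

```python
import numpy as np

# A 4x4 24-bit color image: rows x columns x color planes (R, G, B),
# one byte per primary color per pixel.
img = np.zeros((4, 4, 3), dtype=np.uint8)

# Set the pixel at row 0, column 0 to pure red (R=255, G=0, B=0).
img[0, 0, 0] = 255

# Set the pixel at row 1, column 2 to white (all three channels at maximum).
img[1, 2, :] = 255

# The third index selects the color plane.
red_plane = img[:, :, 0]
green_plane = img[:, :, 1]
blue_plane = img[:, :, 2]

# 256 levels per channel gives 256**3 (over 16 million) distinct colors.
n_colors = 256 ** 3
```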
III. Block Diagram of System
Design
The images obtained from different
sources cover the same spatial area but,
because of their different spectral
characteristics, also differ in
information content. The information
contained in panchromatic images depends
on the spectral reflectivity of the
objects illuminated by sunlight. SAR
image intensities depend on the
characteristics of the illuminated
surface objects as well as on the signal
itself. The fusion of these dissimilar
data contributes to the understanding of
the objects observed. For many
applications, images of the same location
are obtained at different times. This
provides us with a large volume of images
with dissimilar temporal, spectral, and
spatial resolution.
IV. 3D ARRAY FOR 24-BIT
COLOR
Fig: 3D array for 24 bit color
Software Requirements
Software requirements for implementation and testing:
Operating System: Windows XP/Vista/7
Language: MATLAB programming language
Software Packages: MATLAB 7.0 and above
Hardware Requirements
Processor: Intel Core 2 Duo, 32-bit
Output device: Color monitor
Network hardware: Network Interface Card
RAM: 1 GB
Input device: Keyboard and mouse
V. Principal Component
Analysis (PCA)
PCA is a general-purpose statistical
technique that converts multivariate
data with correlated variables into
multivariate data with uncorrelated
variables. These new variables are
obtained as linear combinations of the
original variables. PCA has been widely
used in image coding, image data
compression, image enhancement, and
image fusion.
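PCA-based image fusion can be sketched as follows. The paper's implementation is in MATLAB; this is an illustrative Python/NumPy sketch of the standard component-substitution scheme assumed here: the panchromatic band, matched to the mean and variance of the first principal component of the multispectral bands, replaces that component before the inverse transform.

```python
import numpy as np

def pca_fuse(ms, pan):
    """Fuse multispectral bands (h, w, b) with a panchromatic band (h, w)."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal components: eigenvectors of the band covariance matrix.
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]   # sort by decreasing variance
    eigvecs = eigvecs[:, order]
    pcs = Xc @ eigvecs                  # project bands onto the components
    # Match the PAN band to the first PC's mean and variance, then swap it in.
    p = pan.reshape(-1).astype(float)
    pc1 = pcs[:, 0]
    p = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
    pcs[:, 0] = p
    # Inverse transform back to the original band space.
    fused = pcs @ eigvecs.T + mean
    return fused.reshape(h, w, b)
```

Because the first component carries most of the variance, substituting the high-resolution PAN band into it injects spatial detail while the remaining components retain the spectral information.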
VI. Wavelet Transform (WT)
The wavelet coefficients from the MS
approximation subband and the PAN detail
subbands are combined, and the fused
image is reconstructed by performing the
inverse wavelet transform. Since the
coefficient distribution in the detail
subbands averages to zero, the fusion
result does not change the radiance of
the original multispectral image. The
simplest combination method is based on
choosing the higher-valued coefficients,
but the literature presents various
other methods.
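The combination rule described above can be sketched with a single-level Haar transform. The paper's GUI uses MATLAB's wavelet functions; the following is a self-contained Python/NumPy sketch that keeps the MS approximation subband (radiometry) and takes the detail subbands (spatial edges) from the PAN band:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2 (perfect reconstruction)."""
    h, w = ll.shape
    a = np.zeros((h, 2 * w)); d = np.zeros((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.zeros((2 * h, 2 * w))
    x[0::2, :] = a + d; x[1::2, :] = a - d
    return x

def wt_fuse_band(ms_band, pan):
    """Keep the MS approximation subband, take the PAN detail subbands."""
    ll_ms, _, _, _ = haar2(ms_band.astype(float))
    _, lh_p, hl_p, hh_p = haar2(pan.astype(float))
    return ihaar2(ll_ms, lh_p, hl_p, hh_p)
```

In practice each (upsampled) MS band is fused with the PAN band this way, often over several decomposition levels rather than the single level shown here.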
Satellite Image Fusion Enhancement –
GUI Design
Figure: Image enhancement GUI design
Figure: Image enhancement programmed GUI
Figure: Image enhancement working GUI design
Figure: Image enhancement working environment
Figure: Image enhancement resultant output using PCA method
Flow chart for PCA-based image fusion
The structured flow chart provides an
overall strategy for structured projects.
It details the development of each module
in detailed design and coding. These
specific application modules and their
designs are shown in the figure. The
structure description covers:
- The range and complexity of the system.
- The number of readily identifiable
functions and units within each function.
- Whether each identifiable function is a
manageable entity or should be broken
down into smaller parts.
The structure diagram is also used to
associate elements that contain running
streams or threads. It is usually
developed as a hierarchical map, but
other representations are allowed. The
representation must describe the
subdivision of the configured system into
subsystems.
Flow chart for WT-based image fusion
CONCLUSION
From the above analysis and comparison,
it can be concluded that the improved IHS
algorithm can preserve the spectral
characteristics of the source
multispectral image and the high spatial
resolution of the source panchromatic
image, which makes it suitable for the
fusion of IRS P5 and P6 images.