Int. J. of Computational Science and Engineering, Vol. 15, No. 5/6, 2018
Copyright © 2018 Inderscience Enterprises Ltd.
Panchromatic and Multispectral Remote Sensing
Image Fusion Using Machine Learning for Classifying
Bucolic and Farming Region
P.S.Jagadeesh Kumar¹, Tracy Lin Huan², Xianpei Li³, Yanmin Yuan⁴
¹,² Dartmouth College, Hanover, New Hampshire, United States.
³ Stanford University, California, United States.
⁴ Harvard University, Cambridge, United States.
Abstract:
Various kinds of sensors are employed in geographical monitoring, forecasting,
and planning. To advance fusion in remote sensing applications, researchers
have proposed image fusion systems that combine panchromatic and multispectral
images. This article focuses on the fusion of high-resolution panchromatic and
low-resolution multispectral images by machine learning for the classification
of bucolic and farming regions. Qualitative and quantitative assessment methods
were used to measure the quality of the fused images. The experimental results
show that the proposed method outperformed other fusion algorithms in enhancing
the quality of the fused images and produced an effective classification of
bucolic and farming regions.
Keywords:
Bucolic and Farming Region, Multispectral Imaging, Panchromatic Imaging,
Image Fusion, Machine Learning
Reference to this paper should be made as follows: P.S.Jagadeesh Kumar,
Tracy Lin Huan, Xianpei Li, Yanmin Yuan. (2018) ‘Panchromatic and
Multispectral Remote Sensing Image Fusion Using Machine Learning for
Classifying Bucolic and Farming Region’, Int. J. Computational Science and
Engineering, Vol. 15, No. 5/6, pp.340-370.
Biographical notes: P.S.Jagadeesh Kumar is a Postdoctoral Research Associate in
the Department of Earth Science and Remote Sensing, Dartmouth College, Hanover,
New Hampshire, United States.
Tracy Lin Huan is a Professor in the Department of Earth Science and Remote
Sensing, Dartmouth College, Hanover, New Hampshire, United States.
Xianpei Li is an Assistant Professor in the Institute for Computational and
Mathematical Engineering, Stanford University, California, United States.
Yanmin Yuan is a Professor in the Department of Bioengineering, Harvard
University, Cambridge, United States.
1 Introduction
Remote sensing systems, mainly those deployed on satellites, provide a
repetitive and consistent view of the earth. To meet the needs of diverse
remote sensing applications, these systems offer a wide range of spatial,
spectral, radiometric and temporal resolutions. Satellites generally capture
images in several frequency bands across the visible and non-visible range
(Alparone L, Wald L et al., 2007). The color information carried by the
spectral bands of a remote sensing image enriches its information content and
is exploited in many remote sensing applications. On the other hand, different
targets may look alike in a single band, which makes them difficult to
distinguish. Multiple bands can be acquired by a single multispectral sensor or
by several sensors operating at different frequencies (Chen T, Zhang J, Zhang
Y, 2005). Complementary information about the same scene can be obtained in the
following cases: data recorded by different sensors, data recorded by the same
sensor operating in different spectral bands, data recorded by the same sensor
at different polarizations, and data recorded by the same sensor positioned on
platforms flying at different heights (Dai F.C, Lee C.F, 2002). Typically,
sensors with high spectral resolution, designed to capture the radiance from
diverse land covers in numerous bands of the electromagnetic spectrum, do not
have an ideal spatial resolution, which may be insufficient for precise
classification despite their good spectral resolution (Fonseca L.M.G, Prasad
G.S.S.D, Mascarenhas N.D.A, 1993). In a high spatial resolution panchromatic
(PAN) image, detailed geometric features can easily be identified, whereas
multispectral (MS) images contain richer spectral information. The usefulness
of these images can be enhanced if the advantages of both high spatial and high
spectral resolution are combined into one single image. The detailed features
of such an integrated image can then be easily recognized, benefiting many
applications, for example urban and environmental studies (Fonseca et al.,
2008). With suitable algorithms it is possible to merge MS and PAN bands and
produce a synthetic image with the best features of both. This procedure is
known as multisensor merging, fusion, or sharpening (Laporterie-Dejean F, de
Boissezon H, Flouzat G, Lefevre-Fonollosa M.J, 2005). Its objective is to
integrate the spatial detail of the high-resolution PAN image and the color
information of the low-resolution MS image to obtain a high-resolution MS image
(Ioannidou S, Karathanassi V, 2007). The outcome of image fusion is a new image
which is better suited to human and machine perception or to further
image-processing tasks such as segmentation, feature extraction and object
recognition.
The fused image must provide the highest spatial information content while
still preserving good spectral information quality (Garzelli A et al., 2005).
It is known that the spatial information of the PAN image is mostly carried by
its high-frequency components, whereas the spectral information of the MS image
is mostly carried by its low-frequency components (Jing L, Cheng Q, 2009). If
the high-frequency components of the MS image are simply replaced by the
high-frequency components of the PAN image, the spatial resolution is enhanced,
but the spectral information carried by the high-frequency components of the MS
image is lost (Guo Q, Chen S, Leung H, Liu S, 2010). The specific purposes of
image fusion are to increase the spatial resolution, improve the geometric
accuracy, enhance the presentation of features, improve classification
precision, strengthen change-detection capability, and substitute or restore
defective image data (Pohl C, Genderen J.L.V, 1998).
To yield fused images with improved quality, certain important steps must be
observed throughout the fusion process:
(1) The PAN and MS images must be acquired at nearly the same time. Many
variations may occur during the interval between acquisitions: differences
in the vegetation depending on the season of the year, different lighting
conditions, construction of buildings, or changes triggered by natural
disasters such as floods, earthquakes and volcanic eruptions.
(2) The spectral range of the PAN image must overlap the spectral range of all
multispectral bands during the fusion process to preserve the image colour.
This avoids color distortion in the fused image.
(3) The spectral band of the high-resolution image must be as similar as
possible to that of the substituted low-resolution component during the
fusion process.
(4) The high-resolution image must be carefully contrast-matched to the
substituted component to reduce residual radiometric artifacts.
(5) The PAN and MS images must be registered with an accuracy of better than
0.5 pixel to avoid artifacts in the fused image.
Some of the above aspects are less significant when the fused images come from
areas of different extent acquired with different remote sensing practices
(Temesgen B, Mohammed M.U, 2001).
2 Panchromatic and Multispectral Imaging System
Optical remote sensing makes use of visible, near-infrared and short-wave
infrared sensors to produce images of the earth's surface by detecting the
solar radiation reflected by targets on the land. Different materials reflect
and absorb light differently at different wavelengths, as shown in Fig. 1.
Hence, targets can be discriminated by their spectral reflectance signatures in
remotely sensed images (Lillo-Saavedra M, Gonzalo C et al., 2005). Optical
remote sensing systems are classified into different categories depending on
the number of spectral bands used in the imaging process, namely panchromatic,
multispectral, hyperspectral and superspectral imaging systems (Li Z, Jing Z et
al, 2005). In a panchromatic imaging system, the sensor is a single-channel
detector sensitive to radiation within a broad wavelength range. If the
wavelength range coincides with the visible range, the resulting image is
called a panchromatic image, i.e. a black-and-white image. The physical
quantity being measured is the apparent brightness of the targets (L. Dong, Q.
Yang, H. Wu et al., 2015). Examples of panchromatic imaging systems are IKONOS
Pan, QuickBird Pan, SPOT Pan, GeoEye Pan and LANDSAT ETM+ Pan. The sensor in
multispectral imaging is a multi-channel detector with a few spectral bands.
Each channel is sensitive to radiation within a narrow wavelength band. The
resulting multilayer image contains both the brightness and the spectral colour
information of the targets being observed (Marcelino E.V, Ventura F et al.,
2003). Examples of multispectral systems are Landsat TM, MSS, SPOT HRV-XS,
IKONOS MS, QuickBird MS and GeoEye MS.
Panchromatic and Multispectral Remote Sensing Image Fusion Using Machine
Learning for Classifying Bucolic and Farming Region
Fig. 1 Illustration of optical remote sensing
Optical remote sensing depends on the sun as the sole source of illumination.
The solar irradiation spectrum above the atmosphere can be modeled by a
black-body radiation spectrum with a source temperature of 5950 K and a peak
irradiation located at about 550 nm wavelength (Miao Q, Shi C, Xu P, Yang M,
Shi Y, 2011). Physical measurement of the solar irradiance has also been
carried out using ground-based and spaceborne sensors. After passing through
the atmosphere, the solar irradiation spectrum at the ground is modulated by
the atmospheric transmission windows. Significant energy remains only within
the wavelength range from about 0.24 to 3 µm. When solar radiation strikes a
target surface, it may be transmitted, absorbed or reflected (Pajares G, de la
Cruz J.M, 2004). Different materials reflect and absorb light differently at
different wavelengths. The reflectance spectrum of a material is a plot of the
fraction of radiation reflected as a function of the incident wavelength and
serves as a unique signature for the material (Tu T.M, Cheng W.C et al., 2007).
In general, a material can be identified from its spectral reflectance
signature if the sensing system has sufficient spectral resolution to
distinguish its spectrum from those of other materials. This principle provides
the basis for multispectral remote sensing (Pohl C, Genderen J.L.V, 1998). Fig.
2 shows the distinctive reflectance spectra of five materials: clear water,
turbid water, bare soil and two types of vegetation. The reflectance of clear
water is generally low. However, the reflectance is highest at the blue end of
the spectrum and decreases as wavelength increases; hence, clear water appears
dark bluish (Rahman M.M, Csaplovics E, 2007). Turbid water contains suspended
sediment which increases the reflectance at the red end of the spectrum,
accounting for its brownish appearance. The reflectance of bare soil generally
depends on its composition. The reflectance increases monotonically with
increasing wavelength, so bare soil appears yellowish-red to the eye (X. Luo,
Z. Zhang, X. Wu, 2016).
PSJ Kumar et al. 344
Fig. 2 Reflectance bands of multispectral imaging
Vegetation has a unique spectral signature which allows it to be distinguished
readily from other kinds of land cover in an optical/near-infrared image. In
both the blue and red regions of the spectrum, the reflectance is low owing to
absorption by chlorophyll for photosynthesis. It peaks in the green region,
which gives rise to the green color of vegetation. In the near-infrared (NIR)
region, the reflectance is much higher than in the visible band owing to the
cellular structure of the leaves (Silva F.C, Dutra L.V et al., 2007). Hence,
vegetation can be identified by high NIR but generally low visible reflectance.
This phenomenon has been used in reconnaissance missions during wartime for
camouflage detection. The shape of the reflectance spectrum can be used to
identify the vegetation type (Simone G, Farina A, Morabito F.C, Bruzzone L et
al., 2002). For instance, the reflectance spectra of vegetation 1 and 2 can be
distinguished even though both display the typical pattern of high NIR but low
visible reflectance: vegetation 1 has higher reflectance in the visible region
but lower reflectance in the NIR region. For the same vegetation type, the
reflectance spectrum also depends on other factors such as leaf moisture and
the health of the plants (Song H, Yu S, Yang X et al., 2007). The reflectance
of vegetation is more varied, depending on the species and the plant's water
content. Reflectance of leaves normally increases when leaf liquid water
content decreases.
3 Image Fusion Algorithms
Ideally, image fusion methods should permit the merging of images with
different spectral and spatial resolutions while preserving the radiometric
data (Tu T.M, Huang P.S, Hung C.L, Chang C.P, 2004). Considerable effort has
been invested in developing fusion approaches that preserve the spectral data
and increase the detail information in the fused image. Strategies based on the
Intensity Hue Saturation (IHS) transform and Principal Component Analysis (PCA)
are perhaps the most popular approaches used to improve the spatial resolution
of multispectral images with panchromatic images (Baatz A, 2000). However, both
approaches suffer from the problem that the radiometry of the spectral bands is
modified by the fusion process. This is because the high-resolution
panchromatic image generally has spectral features different from the intensity
and the first principal components (Ling Y, Ehlers M, Usery E.L, 2008). More
recently, new methods have been proposed, such as those that combine the
wavelet transform with the IHS model and the PCA transform, to reduce the
colour and information distortion in the fused image. Below, the basic theory
of the fusion approaches based on IHS, PCA, the Wavelet Transform and Machine
Learning, which are the most established procedures employed in remote sensing
applications, is presented.
(a) Intensity Hue Saturation (IHS) Transform
The IHS method is one of the most frequently used fusion systems for
sharpening. It has become a standard practice in image analysis for color
enhancement, feature augmentation, enhancement of spatial resolution and the
fusion of disparate datasets (Carper W, Lillesand T, Kiefer R, 1990). In the
IHS space, spectral information is mostly reflected in the hue and the
saturation. From the visual system, one can conclude that intensity variation
has little effect on the spectral information and is easy to manipulate. For
the fusion of a high-resolution PAN image and multispectral remote sensing
images, the goal is to preserve the spectral information while adding the
detail information of high spatial resolution; thus, the fusion is well suited
to being handled in IHS space (Choi M, 2006). The IHS scheme consists of
transforming the R, G and B bands of the MS image into IHS components,
substituting the intensity component with the PAN image, and performing the
inverse transformation to obtain a high spatial resolution MS image. The three
multispectral bands, R, G and B, of a low-resolution image are first
transformed into the IHS colour space as:
\[
\begin{pmatrix} I \\ V_1 \\ V_2 \end{pmatrix} =
\begin{pmatrix}
1/\sqrt{3} & 1/\sqrt{3} & 1/\sqrt{3} \\
1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} \\
1/\sqrt{2} & -1/\sqrt{2} & 0
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix},
\qquad
H = \tan^{-1}\!\left(\frac{V_2}{V_1}\right),
\qquad
S = \sqrt{V_1^{2} + V_2^{2}}
\]
where I, H and S are the intensity, hue and saturation components, and V1 and
V2 are intermediate variables. Fusion proceeds by substituting component I with
the panchromatic high-resolution image (I_new), after matching its radiometric
information with component I. The fused image, which has both rich spectral
information and high spatial resolution, is then obtained by performing the
inverse transformation from IHS back to the original RGB space as:
\[
\begin{pmatrix} R' \\ G' \\ B' \end{pmatrix} =
\begin{pmatrix}
1/\sqrt{3} & 1/\sqrt{6} & 1/\sqrt{2} \\
1/\sqrt{3} & 1/\sqrt{6} & -1/\sqrt{2} \\
1/\sqrt{3} & -2/\sqrt{6} & 0
\end{pmatrix}
\begin{pmatrix} I_{\mathrm{new}} \\ V_1 \\ V_2 \end{pmatrix}
\]
Although the IHS technique has been extensively used, it cannot decompose an
image into different frequency components, such as higher or lower frequencies
(Schetselaar E. M, 1998). Thus, the IHS technique cannot be employed to enhance
particular image features. Moreover, the color distortion of the IHS method is
often significant. To reduce color distortion, the PAN image is matched to the
intensity component before the substitution, or the hue and saturation
components are stretched before the reverse transform. Some approaches combine
a standard IHS transform with FFT filtering of both the PAN image and the
intensity component of the original MS image to reduce color distortion in the
fused image (Ling Y, Ehlers M, Usery E.L, Madden M, 2007). Studies report that
numerous IHS transformation algorithms have been developed for converting the
RGB values. Some are also named HSV (hue, saturation, value) or HLS (hue,
luminance, saturation). Although the complexity of the models differs, they
yield corresponding values for hue and saturation (Tu T.M, Su S.C, Shyu H.C,
Huang P.S, 2001). Nonetheless, the algorithms differ in the technique used to
compute the intensity component of the transformation.
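To make the substitution concrete, the sketch below implements the linear IHS
fusion described above in NumPy. It is a minimal illustration, not the authors'
implementation; it assumes the three MS bands have already been registered and
resampled to the PAN grid, and uses simple mean/standard-deviation matching for
the radiometric adjustment step.

```python
import numpy as np

def ihs_fusion(pan, ms):
    """IHS pan-sharpening sketch: transform R, G, B to (I, V1, V2),
    substitute the matched PAN band for I, and invert the transform.
    `pan` is (H, W); `ms` is (H, W, 3) with bands in R, G, B order."""
    r, g, b = ms[..., 0], ms[..., 1], ms[..., 2]

    # Forward linear IHS transform (intensity and intermediate variables).
    i = (r + g + b) / np.sqrt(3)
    v1 = (r + g - 2 * b) / np.sqrt(6)
    v2 = (r - g) / np.sqrt(2)

    # Match the PAN radiometry to the intensity component.
    pan_m = (pan - pan.mean()) * (i.std() / pan.std()) + i.mean()

    # Inverse transform with the matched PAN substituted for I.
    r_f = pan_m / np.sqrt(3) + v1 / np.sqrt(6) + v2 / np.sqrt(2)
    g_f = pan_m / np.sqrt(3) + v1 / np.sqrt(6) - v2 / np.sqrt(2)
    b_f = pan_m / np.sqrt(3) - 2 * v1 / np.sqrt(6)
    return np.stack([r_f, g_f, b_f], axis=-1)
```

Because the forward matrix is orthogonal, the inverse used here is simply its
transpose, which keeps the round trip exact when no substitution is made.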
(b) Principal Component Analysis (PCA) method
The fusion technique based on PCA is very simple. PCA is a general statistical
technique that transforms multivariate data with correlated variables into data
with uncorrelated variables. These new variables are obtained as linear
combinations of the original variables (Chavez P.S, Kwakteng A.Y, 1989). PCA
has been widely employed in image encoding, image data compression, image
enhancement and image fusion. In the fusion method, PCA produces uncorrelated
images (PC1, PC2, …, PCn, where n is the number of input multispectral bands).
The first principal component (PC1) is replaced with the PAN band, which has
higher spatial resolution than the MS images (Cao W, Li B, Zhang Y, 2003).
Then, the inverse PCA transformation is applied to recover the image in the RGB
colour model. In PCA image fusion, dominant spatial detail accompanied by weak
colour information is a frequent problem. The first principal component, which
contains the maximum variance, is substituted by the PAN image; such
replacement maximizes the influence of the PAN image in the fused product
(Gonzalez-Audicana M, Saleta J et al., 2004). One remedy is to stretch the
principal components to give a spherical distribution. The PCA method is also
sensitive to the choice of the area to be fused. A further complication is that
the first principal component can be significantly different from the PAN
image. If the grey values of the PAN image are adjusted to the grey values of
the PC1 component before the substitution, the colour distortion is
considerably reduced (Wang Q, Shen Y, Jin J, 2008).
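The component-substitution logic of PCA fusion can likewise be sketched in a
few lines. The code below is a minimal illustration under the same assumptions
as before (co-registered inputs, MS resampled to the PAN grid); the grey-value
matching of the PAN band to PC1 follows the remedy mentioned above.

```python
import numpy as np

def pca_fusion(pan, ms):
    """PCA pan-sharpening sketch: project MS bands onto principal
    components, swap PC1 for the radiometrically matched PAN band,
    and back-project. `ms` is (H, W, n_bands), `pan` is (H, W)."""
    h, w, n = ms.shape
    x = ms.reshape(-1, n).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean

    # Eigen-decomposition of the band covariance matrix.
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]  # PC1 = largest variance

    pcs = xc @ vecs                          # forward PCA transform
    pc1 = pcs[:, 0]

    # Match PAN statistics to PC1 before the substitution.
    p = pan.reshape(-1).astype(np.float64)
    pcs[:, 0] = (p - p.mean()) * (pc1.std() / p.std()) + pc1.mean()

    fused = pcs @ vecs.T + mean              # inverse PCA transform
    return fused.reshape(h, w, n)
```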
(c) Wavelet Transform (WT)
Multi-resolution and multi-scale approaches, such as pyramid transformation,
have been adopted for data fusion since the early 1980s (Chibani Y, Houacine A,
2000). Pyramid-based image fusion approaches, including the Laplacian pyramid
transform, were all derived from the Gaussian pyramid transform and have been
reviewed and extensively employed. Wavelet construction was later placed in the
framework of functional analysis, which yielded the fast wavelet transform
algorithm and a general technique for building wavelets on an orthonormal basis
(Amolins K, Zhang Y, Dare P, 2007). On this basis, wavelet transforms can be
applied to image decomposition and reconstruction. Wavelet transforms provide a
framework in which an image is decomposed, with each level corresponding to a
coarser resolution band (Chibani Y et al., 2003). When fusing an MS image with
a high-resolution PAN image by wavelet fusion, the PAN image is first
decomposed into a set of low-resolution PAN images with corresponding wavelet
coefficients (spatial details) for each level (Choi M, Kim R.Y, Nam M.R, Kim,
H.O, 2005). Individual bands of the MS image then substitute the low-resolution
PAN at the resolution level of the original MS image. The high-resolution
spatial detail is injected into each MS band by performing a reverse wavelet
transform on each MS band together with the corresponding wavelet coefficients
(Cao W, Li B, Zhang Y, 2003).
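A minimal sketch of this substitutive wavelet scheme, using the PyWavelets
package, is given below for a single MS band; the wavelet family (`db4`) and
the decomposition depth are illustrative choices, not values taken from the
paper.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fusion(pan, ms_band, levels=2, wavelet="db4"):
    """Substitutive wavelet fusion sketch for one MS band: keep the MS
    approximation coefficients, inject the PAN detail coefficients.
    `pan` and `ms_band` are (H, W) arrays on the same grid."""
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=levels)
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=levels)

    # Approximation (low frequency) from MS, details (high frequency) from PAN.
    fused_coeffs = [ms_coeffs[0]] + list(pan_coeffs[1:])
    fused = pywt.waverec2(fused_coeffs, wavelet)
    return fused[: pan.shape[0], : pan.shape[1]]  # crop any padding
```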
In wavelet-based fusion systems, detail information is extracted from the PAN
image by means of wavelet transforms and injected into the MS image. Distortion
of the spectral information is reduced relative to the standard methods. To
achieve optimal fusion results, many wavelet-based fusion systems have been
tested by researchers (Chai Y, Li H.F, Qu J.F, 2010). A technique for fusing
SAR and visible MS images by means of the Curvelet transform has been proposed;
this scheme was found to be more effective for capturing edge information and
for denoising than the wavelet transform. Curvelet-based image fusion has been
used to combine Landsat ETM+ PAN and MS images; the proposed technique
simultaneously delivers richer information in the spatial and spectral domains.
The Contourlet transform, a flexible multi-resolution, local and directional
image expansion using contour segments, was proposed to resolve the problem
that the wavelet transform cannot efficiently represent the singularity of
linear curves in image processing (Garguet-Duport B, Girel J, 1996). The
Contourlet transform provides a flexible number of directions and captures the
intrinsic geometrical structure of images. Generally, as a feature-level fusion
technique, wavelet-based fusion clearly performs better than conventional
approaches in terms of reducing color distortion and denoising. It has been one
of the most popular fusion methods in remote sensing in recent years, and has
become a standard module in many commercial image processing packages such as
ENVI, PCI and ERDAS (Gonzalez-Audicana M, Otazu X, Fors O, 2005). Complications
and restrictions associated with these methods include high computational
complexity compared with the standard approaches, the spectral content of small
objects often being lost in the fused images, and the frequent need for the
user to tune suitable values for certain parameters such as thresholds (Li S,
Kwok J.T, Wang Y, 2002). The development of more elaborate wavelet-based fusion
algorithms, for example the ridgelet, curvelet and contourlet transforms, may
improve performance, though these newer schemes may bring greater complexity in
computation and parameter setting (Ventura F.N, Fonseca L.M.G, Santa Rosa
A.N.C, 2002).
(d) Machine Learning and Convolutional Neural Network
Machine learning is a field of computer science that gives computers the
ability to learn without being explicitly programmed (Yang X.H, Jing J.L, Liu
G, Hua L.Z, Ma D.W, 2007). In machine learning, a convolutional neural network
(CNN) is a class of deep, feed-forward artificial neural networks that has
successfully been applied to the analysis of visual imagery. CNNs have proved
to be very effective in areas such as image recognition and classification. The
general schematic diagram of the network is shown in Fig. 3. The input layer
has several neurons, which take the characteristic factors extracted and
normalized from the PAN image and the MS image. The activation function of each
neuron is the sigmoid function given by
\[
f(x) = \frac{1}{1 + e^{-x}}
\]
The hidden layer has numerous neurons and the output layer has one or more
neurons. The i-th neuron of the input layer connects with the j-th neuron of
the hidden layer by weight W_ij, and the weight between the j-th neuron of the
hidden layer and the t-th neuron of the output layer is V_jt (in this case
t = 1). The weighting function is employed to simulate and distinguish the
response relationship between features of the fused image and the matching
features of the original images (PAN image and MS image). The network model is
specified as follows (P.S.Jagadeesh Kumar, Krishna Moorthy, 2013):
\[
Y = f\!\left(\sum_{j=1}^{q} V_j H_j - c\right)
\]
where Y is the pixel value of the fused image exported from the neural network
model, q is the number of hidden nodes (q = 8 here), V_j is the weight between
the j-th hidden node and the output node (in this case, there is only one
output node), c is the threshold of the output node, and H_j is the output of
the j-th hidden node:
\[
H_j = f\!\left(\sum_{i=1}^{n} W_{ij}\, a_i - h_j\right)
\]
where W_ij is the weight between the i-th input node and the j-th hidden node,
a_i is the value of the i-th input factor, n is the number of input nodes
(n = 5), and h_j is the threshold of the j-th hidden node.
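The forward pass defined by the two equations above can be sketched directly.
The snippet below is an illustration of the stated model (a fully connected
feed-forward network with sigmoid activations), not the authors' trained code;
the random weights merely stand in for the result of training.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fused_pixel(a, W, h, V, c):
    """Forward pass per the equations above. `a` is the normalized
    feature vector (n = 5), `W` is (n, q) input-to-hidden weights,
    `h` the q hidden thresholds, `V` the q hidden-to-output weights,
    `c` the output threshold; q = 8 in the paper."""
    H = sigmoid(a @ W - h)      # H_j = f(sum_i W_ij a_i - h_j)
    return sigmoid(H @ V - c)   # Y = f(sum_j V_j H_j - c)

# Illustrative call with random weights (training not shown).
rng = np.random.default_rng(0)
n, q = 5, 8
y = fused_pixel(rng.random(n), rng.standard_normal((n, q)),
                rng.standard_normal(q), rng.standard_normal(q),
                rng.standard_normal())
```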
In the initial step of the network-based data fusion, the two registered images
are decomposed into numerous blocks of size M by N. Then, features of the
matching blocks in the two original images are extracted, and the normalized
feature vector fed to the neural network is built. The features employed here
to estimate the fusion result are generally spatial frequency, visibility, and
edge strength. The following step is to select some feature vectors to train
the neural network (Zhang Y, 2004). The network is a universal function
approximator that directly adapts to any nonlinear function defined by a
representative set of training data. Once trained, the model recalls the
functional relationship and can be employed for further computation. For these
reasons, the network approach has been found to perform strongly in nonlinear
models for multi-sensor data fusion (P.S.Jagadeesh Kumar, 2013).
Fig. 3 Model of convolutional neural network based classification
An artificial neural network groups input patterns through learning (Zhou J,
Civco D.L et al., 1998). The number of output neurons must be set before
building the neural network model. A neural network can approximate the
objective function to any specified accuracy if sufficient hidden units are
provided (Zhang Y, 2008). The network-based fusion method inherits the pattern
recognition capabilities of artificial neural networks, and at the same time
the learning capability of neural networks makes it practicable to customize
the image fusion method. Numerous applications have shown that network-based
methods have advantages over traditional statistical approaches, particularly
when the input multi-sensor data are incomplete or very noisy. The approach
often serves as an effective decision-level fusion tool because of its
self-learning characteristics, especially in land use and land cover
classification (Camara G, Souza R.C.M, Freitas U.M, Garrido J, 1996).
4 Fused Image Quality Evaluation Methods
Researchers have appraised diverse image fusion approaches using different
image quality measures. Usually, the effectiveness of an image-fusion technique
is assessed by comparing the resulting fused image with a reference image,
which is presumed to be ideal. This assessment may be grounded on spectral and
spatial characteristics, and can be accomplished both qualitatively and
quantitatively (G Palubinskas, 2015). Unfortunately, the reference image is not
always available in practice; therefore, it is necessary to simulate it or to
perform a quantitative, blind estimation of the quality of the fused images. To
measure the quality of an image after fusion, some criteria must be defined.
These include, for example, spatial and spectral resolution, amount of
information, visibility, contrast, and details of the objects of interest.
Quality evaluation is application dependent, so different applications may
require different aspects of image quality (Wei Z.G, Yuan J.H et al., 1999).
Normally, image evaluation approaches can be divided into two groups:
qualitative (subjective) and quantitative (objective) procedures. Qualitative
approaches involve visual assessment of the fused image against a reference
image, whereas quantitative examination uses quality metrics that measure the
spectral and spatial similarity between the multispectral and fused images.
(a) Qualitative Assessment
According to prior evaluation standards or specific experience, an individual
judgment or even grades can be assigned to the quality of an image. The
interpreter studies the tone, contrast, saturation, sharpness, and texture of
the fused images (Chavez P.S, Sides S.C, Anderson J.A, 1991). The final overall
quality judgment can be obtained by, for instance, a weighted mean based on the
individual rankings. The qualitative technique principally comprises absolute
and comparative measures, as shown in Table 1. This technique depends on the
expert's skill, so bias and a certain vagueness are involved. Qualitative
measures cannot be expressed by rigorous mathematical models; the method is
chiefly visual assessment.
TABLE I. QUALITATIVE ASSESSMENT FOR IMAGE QUALITY

Ranking   Absolute Measure   Comparative Measure
A         Outstanding        Best
B         Decent             Better
C         Reasonable         Average
D         Poor               Lower
E         Meager             Lowest
(b) Quantitative Assessment
Some quality metrics include: the average grey value, representing the
intensity of an image; the standard deviation; the information entropy; the
profile intensity curve, for assessing the detail of fused images; and the bias
and correlation coefficient, for computing the distortion between the original
image and the fused image in terms of spectral data (Chen H, Varshney P.K,
2007). Let F_i and R_i, i ∈ (1, …, N), be the N bands of the fused and
reference images, respectively. The following methods were used to determine
the difference in spectral and global data between each band of the merged
images and the reference images, and without a reference image.
(i) Method 1 (With reference image):
1. Correlation coefficient (CC) between the reference and the fused image,
which should be as near to 1 as possible (Wang Z, Bovik A.C, 2002).
2. Difference between the means of the reference and the fused image (DM), in
radiance, as well as its value relative to the mean of the original. The
smaller these differences, the better the spectral quality of the merged image;
therefore, the difference should be as near to 0 as possible (Wei Z.G, Yuan J.H
et al., 1999).
3. Standard deviation of the difference image (SSD), relative to the mean of
the reference image and expressed as a percentage. The lower its value, the
better the spectral quality of the fused image (Zheng Y, Qin Z, 2009).
4. The Universal Image Quality Indicator (UIQI) can be computed by:
\[
UIQI = \frac{4\,\sigma_{RF}\,\mu_R\,\mu_F}{\left(\sigma_R^{2} + \sigma_F^{2}\right)\left(\mu_R^{2} + \mu_F^{2}\right)}
\]
where σ_RF is the covariance between a band of the reference image and the
corresponding band of the fused image, and µ and σ are the means and standard
deviations of the images. The higher the UIQI index, the better the spectral
quality of the fused image. It is recommended to use moving windows of various
sizes to avoid errors due to the spatial dependence of the index
(Lillo-Saavedra M, Gonzalo C, 2006).
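A windowless sketch of the UIQI computation is shown below; in practice the
index is averaged over small moving windows, as recommended above.

```python
import numpy as np

def uiqi(ref, fused):
    """Universal Image Quality Indicator over whole bands (windowless
    sketch; production code would average over moving windows)."""
    mu_r, mu_f = ref.mean(), fused.mean()
    var_r, var_f = ref.var(), fused.var()
    cov = ((ref - mu_r) * (fused - mu_f)).mean()
    return 4 * cov * mu_r * mu_f / ((var_r + var_f) * (mu_r**2 + mu_f**2))
```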
To evaluate the global spectral quality of the fused image, the following
parameters can be employed:
5. The relative average spectral error index (RASE) characterizes the average
performance over all the bands of the fused image (Chen Y, Blum R.S, 2009):
\[
RASE = \frac{100}{\mu}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[DM^{2}(R_i) + SSD^{2}(R_i)\right]}
\]
where µ is the mean radiance of the N spectral bands (R_i) of the reference
image, and DM and SSD are defined above.
6. The relative global dimensional synthesis error (ERGAS) can be computed by:
\[
ERGAS = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\frac{DM^{2}(R_i) + SSD^{2}(R_i)}{\mu_i^{2}}}
\]
where h and l are the resolutions of the high and low spatial resolution
images, respectively, and µ_i is the mean radiance of each spectral band
involved in the fusion process; DM and SSD are defined above. The lower the
values of the RASE and ERGAS indexes, the better the spectral quality of the
fused images (Alparone L, Aiazzi B et al., 2004).
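Both global indexes follow directly from the per-band DM and SSD values; the
sketch below assumes band stacks of shape (H, W, N), with h_res and l_res
standing for the PAN and MS ground resolutions (e.g. 1 m and 4 m).

```python
import numpy as np

def rase_ergas(ref, fused, h_res, l_res):
    """RASE and ERGAS sketches from the formulas above. `ref` and
    `fused` are (H, W, N) band stacks on the same grid."""
    n = ref.shape[-1]
    errs, mus = [], []
    for i in range(n):
        r, f = ref[..., i], fused[..., i]
        dm = r.mean() - f.mean()     # difference of the band means
        ssd = (r - f).std()          # std of the difference image
        errs.append(dm**2 + ssd**2)
        mus.append(r.mean())
    errs, mus = np.array(errs), np.array(mus)
    rase = 100 / mus.mean() * np.sqrt(errs.mean())
    ergas = 100 * (h_res / l_res) * np.sqrt((errs / mus**2).mean())
    return rase, ergas
```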
7. An effective fusion scheme must permit the injection of a high degree of the
spatial detail of the PAN image into the MS image (G Palubinskas, 2015). The
detail information can be inspected visually. The average gradient index (AG)
may be employed for spatial quality evaluation. AG describes the changing
character of the image texture and the detail information (Liu Z, Forsyth D.S
et al., 2008). Higher values of the AG index correspond to better spatial
resolution. The AG index of the fused image at each band can be computed by:
\[
AG = \frac{1}{K L}\sum_{x=1}^{K}\sum_{y=1}^{L}\sqrt{\frac{1}{2}\left[\left(\frac{\partial F(x,y)}{\partial x}\right)^{2} + \left(\frac{\partial F(x,y)}{\partial y}\right)^{2}\right]}
\]
where K and L are the number of lines and columns of the fused image F.
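A direct sketch of the AG formula, approximating the partial derivatives with
forward differences, is given below.

```python
import numpy as np

def average_gradient(band):
    """Average gradient (AG) of one fused band, using forward
    differences as the partial derivatives in the formula above."""
    dx = np.diff(band, axis=1)[:-1, :]   # horizontal derivative
    dy = np.diff(band, axis=0)[:, :-1]   # vertical derivative
    return np.sqrt((dx**2 + dy**2) / 2).mean()
```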
(ii) Method 2 (Without reference image):
Image fusion rarely has a standard reference image available in practice (C.
Zhang, J. Pan, S. Chen et al., 2016). Traditional image-quality parameters such
as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and
normalized least-square error (NLSE) all require a reference, so they cannot be
employed for objective evaluation of the experimental results here (L. Li, Y.
Zhou, W. Lin, J. Wu et al., 2016). The no-reference evaluation methods commonly
used for analysing the experimental results are the information entropy (H_F),
the standard deviation (S_F) and the cross entropy (C).
1. The information entropy H_F can be computed by:
\[
H_F = -\sum_{i=0}^{N-1} P_F(i)\,\log_2 P_F(i)
\]
In the above equation, P_F is the grey-level distribution of the fused image
and N is the total number of grey levels. The information entropy H_F
represents the amount of information contained in the image: the larger the
value of H_F, the richer the image information and the better the visual effect
(Shi W, Zhu C, Tian Y, Nichol J, 2005).
2. The standard deviation S_F can be computed by:
\[
S_F = \sqrt{\sum_{i=0}^{N-1}\left(i - Av_F\right)^{2} P_F(i)}
\]
In the above equation, Av_F is the mean grey value of the image and P_F is the
grey-level distribution. The standard deviation S_F reflects the image
contrast: the larger the value of S_F, the stronger the image contrast and the
better the visual effect (Wei Z.G, Yuan J.H et al., 1999).
3. The cross entropy C can be computed by:
\[
C = \sum_{i=0}^{N-1} P_i \log_2 \frac{P_i}{Q_i}
\]
In the above equation, P_i is the grey-level distribution of the source image
and Q_i is the grey-level distribution of the fused image. The cross entropy C
measures the pixel-level difference between the two images: the smaller the
difference, the more information has been extracted, so a lower cross entropy C
indicates a better fusion result.
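All three no-reference metrics can be computed from grey-level histograms; the
sketch below assumes 8-bit integer images (256 grey levels) and guards the
logarithms against empty histogram bins.

```python
import numpy as np

def no_reference_metrics(fused, source, levels=256):
    """Information entropy, standard deviation and cross entropy from
    the grey-level histograms, per the formulas above."""
    pf = np.bincount(fused.ravel(), minlength=levels) / fused.size
    ps = np.bincount(source.ravel(), minlength=levels) / source.size

    nz = pf > 0
    h_f = -(pf[nz] * np.log2(pf[nz])).sum()          # information entropy

    grey = np.arange(levels)
    av_f = (grey * pf).sum()
    s_f = np.sqrt(((grey - av_f) ** 2 * pf).sum())   # standard deviation

    both = (ps > 0) & (pf > 0)
    c = (ps[both] * np.log2(ps[both] / pf[both])).sum()  # cross entropy
    return h_f, s_f, c
```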
5 Bucolic and Farming Region Classification
Farming is the backbone of the national economy and a key sector for
safeguarding food security. Timely availability of information on agriculture
is significant for taking informed decisions on food security issues
(P.S.Jagadeesh Kumar, J.Nedumaan, 2015). Many nations practice space technology
and land-based observations to generate periodic updates on crop production and
to accomplish precision agriculture, as shown in Fig. 4. Satellite-based
optical and radar imagery is extensively employed in monitoring agriculture;
radar imagery is particularly used during the rainy season (Nikolakopoulos K.G,
2005). The joint use of geospatial tools with crop production data and the
observation network supports reliable crop yield prediction, drought evaluation
and monitoring for appropriate agricultural needs.
Fig. 4 Farming data and monitoring strategy
Urbanization is proceeding at a quick pace owing to the increased population:
the share of the bucolic population increased from 30% in 1992 to more than 70%
in 2016. Comprehensive urbanization has had a dramatic effect on the
environment and the welfare of civic inhabitants. Studies that measure this
process and its influences are imperative for taking corrective actions and
designing improved urbanization strategies for the future (P.S.Jagadeesh Kumar,
2005). To realize these goals, comprehensive urban land cover/use maps are
essential. Presently, land cover information with resolutions ranging from low
to high is the principal data source used in studies such as bucolic
development simulation, estimation of bucolic public health, and calculation of
bucolic ecosystem services. Nevertheless, to study topics such as housing
establishment, urban transport, job accessibility and suburban movement, and
land use patterns, comprehensive evidence on urban land usage is needed owing
to the difference between the two notions: land use is a social concept that
describes human activities and their usage of land, while land cover is a
physical portrayal of the land surface (Nikolakopoulos K.G, 2005). Land cover
can be used to infer land usage, but the two notions are not completely
interchangeable. Nonetheless, high-resolution urban land usage maps covering
large spatial extents are comparatively rare, since the local information and
the methods vital for developing these kinds of maps are usually not
accessible, principally for emerging areas (Yang X.H et al., 2008).
Furthermore, bucolic land usage maps are generally produced by interpreting
aerial photographs, field survey outcomes, and supporting materials such as
assessment records or statistical information. The evolving nature of urban
growth frequently outpaces the intermittent efforts to update prevailing land
usage records and results in out-of-date maps (Zhang Y, 2002). To make the
situation worse, high-resolution land usage maps are normally kept out of the
reach of the public. Consequently, acquiring land usage maps that capture the
pace of urban growth in a timely and exact way at a comparatively large spatial
scale is a serious challenge in urban studies, both in India and in other
nations facing analogous circumstances.
Fig. 5 Bucolic and Farming Region, Salem, Tamil Nadu, India in 1996
Fig. 6 Bucolic and Farming Region, Salem, Tamil Nadu, India in 2016
Satellite-based remote sensing offers certain benefits in monitoring the
dynamics of urban land usage because of its large spatial coverage, high
temporal resolution, and widespread accessibility. Fig. 5 and Fig. 6 show the
bucolic and farming region classification of Salem district, Tamil Nadu, India
in the years 1996 and 2016, respectively. The data clearly reveal a drastic
transformation in the bucolic and farming landscape of Salem province over the
last two decades. Bucolic and farming region classification at consistent
intervals has been supported at sub-national scales, for example geoclimatic
regions, biogeographic provinces, atmospheric sub-divisions, bioclimatic
regions, and watershed units (Zhang Y, 1999). National-level mapping has gained
major impetus with the availability of multi-spectral and multi-resolution
remote sensing data having synoptic and temporal coverage. This initiative met
the pressing national requirements for planning and managing natural resources,
agriculture development in the catchment, afforestation, eco-development, and
assessments of watershed expansion and irrigation schemes (P.S.Jagadeesh Kumar,
J.Tisa, J.Lepika, 2017). Most of these initiatives were one-time efforts and
differ in terms of project goals, classification systems, mapping procedures
and satellite data quality. Presently, landscape dynamics and climate studies
require data on the phenology and indices of forests; variation of cropland,
fallow, sterile and wasteland; mapping of impervious surfaces such as built-up
zones; and features such as dams, mining, aquaculture, and marshlands (Pinho
C.M.D, Silva F.C, Fonseca L.M.G et al., 2008). To delineate these classes with
satisfactory precision, there is a requirement to use satellite data of high
and medium spatial resolution. Moreover, time-series maps must be consistent
with a universally acknowledged land cover classification system to serve as a
proxy for climate variables. In addition, the physical features of urban
focused sections, which might help to determine land usage at the segment
level, were rarely included in these studies. There is strong potential in
combining the strengths of these two data sources, i.e., assimilating social
data with remotely sensed data, to gain improved insights into urban land usage
patterns.
6 Implementation
The schematic block diagram of the implementation and analysis for effective
classification of bucolic and farming regions using image fusion is shown in
Fig. 7. The high-resolution panchromatic (PAN) image and the low-resolution
multispectral (MS) images were used to obtain the fused images. Different image
fusion algorithms, namely the IHS transform, the PCA method, the wavelet
transform and the machine-learning-based convolutional neural network (CNN),
were used to obtain the fused images. In order to attain the fused images, two
methods were exercised, namely Method 1 (with reference image) and Method 2
(without reference image). In Method 1, Dataset 1, consisting of the PAN image
and MS images with the corresponding reference image, is employed. In Method 2,
Dataset 2, consisting of the PAN image and MS images without reference images,
is employed. Both datasets comprise various satellite images such as Landsat
Thematic Mapper, SPOT, IKONOS, WorldView, SeaStar and GeoEye imagery. Once the
fused images were obtained with the distinct image fusion algorithms, they were
evaluated by means of various fused-image quality assessment metrics: the
correlation coefficient (CC), the difference between the means of the reference
and the merged images (DM), the standard deviation of the difference image
(SSD), the universal image quality indicator (UIQI), the relative average
spectral error index (RASE), the relative global dimensional synthesis error
(ERGAS) and the average gradient index (AG) for Method 1, and the information
entropy (HF), the standard deviation (SF) and the cross entropy (C) for Method
2, as portrayed in Table II and Table III.
The fused images from Method 1, Dataset 1 and from Method 2, Dataset 2 were
then subjected to an object-oriented classification procedure for
distinguishing the bucolic and farming regions. The object-oriented method
takes the form, texture and spectral information into account, corresponding to
color, shape, smoothness, compactness and scale parameters. The classification
begins with a primary phase of clustering neighboring pixels into meaningful
regions, which can then be used in the following phase of classification. Such
segmentation and topology generation must be set according to the resolution
and the scale of the expected objects. This can be done at multiple
resolutions, thus permitting the distinction of several levels of object
groups. Classification based on pixel-oriented approaches to image analysis is
limited: typically, such approaches have substantial difficulty with
high-resolution images, they yield inconsistent classification results, and
they fall short of expectations in extracting the object of interest. In
contrast, the object-oriented classification method yields higher accuracy than
pixel-based classification, minimum distance classification, parallelepiped
classification and maximum likelihood classification for high-resolution fused
images. Once the classification of bucolic and farming regions was done, the
results were measured for their efficiency across the distinct image fusion
algorithms. Overall accuracy and the kappa index were the metrics used to
measure the effectiveness of classification for each image fusion algorithm.
Kappa is a measure of agreement between two entities. Kappa is always less than
or equal to 1: a value of 1 indicates perfect classification and values less
than 1 indicate less-than-perfect classification. In rare situations, kappa can
be negative; this is a sign that the two observers agreed less than would be
expected purely by chance. The effectiveness of the classification results is
reported in Table IV and Table V using overall accuracy and kappa index for
Method 1, Dataset 1 and Method 2, Dataset 2, respectively.
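For reference, overall accuracy and the kappa index can both be derived from a
confusion matrix; the sketch below uses the standard Cohen's kappa formula,
with chance agreement estimated from the row and column marginals.

```python
import numpy as np

def overall_accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = classified classes)."""
    confusion = np.asarray(confusion, dtype=np.float64)
    total = confusion.sum()
    po = np.trace(confusion) / total                  # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / total**2  # chance
    return po, (po - pe) / (1 - pe)
```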
TABLE II. COMPARISON OF IMAGE FUSION ALGORITHMS FOR FUSED IMAGE QUALITY (METHOD 1, DATASET 1)
(Note: All metric values are calculated with respect to the reference image)

Fusion Algorithm             CC       DM       SSD      UIQI     RASE     ERGAS    AG       Qualitative Assessment
IHS Transform                0.6342   0.0546   76.62%   0.0086   0.1144   3.9452   0.0813   Average
PCA Method                   0.5256   0.0825   79.12%   0.0096   0.1362   3.2638   0.0714   Average
WT Transform                 0.7694   0.0086   56.32%   0.0516   0.0582   2.6934   0.0891   Better
Machine Learning based CNN   0.9864   0.0007   38.19%   0.0964   0.0357   1.9786   0.1079   Best

CC – Correlation coefficient
DM – Difference between the means of the reference and the merged images
SSD – Standard deviation of the difference image
UIQI – Universal image quality indicator
RASE – Relative average spectral error index
ERGAS – Relative global dimensional synthesis error
AG – Average gradient index
Fig. 7 Schematic Block Diagram of Implementation and Analysis of Bucolic and
Farming Region Classification using Image Fusion
TABLE III. COMPARISON OF IMAGE FUSION ALGORITHMS FOR FUSED IMAGE QUALITY (METHOD 2, DATASET 2)

Fusion Algorithm             HF      SF       C        Qualitative Assessment
IHS Transform                5.297   16.563   0.7561   Average
PCA Method                   5.686   17.821   0.7279   Average
WT Transform                 6.252   19.546   0.5467   Better
Machine Learning based CNN   7.127   23.698   0.4296   Best

HF – Information entropy
SF – Standard deviation
C – Cross entropy
TABLE IV. COMPARISON OF BUCOLIC AND FARMING REGION CLASSIFICATION (METHOD 1, DATASET 1)
(Note: All metric values are calculated with respect to the reference image)

Fusion Algorithm             Overall Accuracy   Kappa Index*   Qualitative Assessment
IHS Transform                71.93%             0.54           Average
PCA Method                   78.64%             0.57           Average
WT Transform                 83.89%             0.73           Better
Machine Learning based CNN   90.43%             0.92           Best

*Poor classification = less than 0.20; Fair = 0.20 to 0.40; Moderate = 0.40 to
0.60; Good = 0.60 to 0.80; Very good = 0.80 to 1.00
TABLE V. COMPARISON OF BUCOLIC AND FARMING REGION CLASSIFICATION (METHOD 2, DATASET 2)

Fusion Algorithm             Overall Accuracy   Kappa Index*   Qualitative Assessment
IHS Transform                71.56%             0.52           Average
PCA Method                   76.89%             0.56           Average
WT Transform                 86.52%             0.77           Better
Machine Learning based CNN   93.23%             0.89           Best

*Poor classification = less than 0.20; Fair = 0.20 to 0.40; Moderate = 0.40 to
0.60; Good = 0.60 to 0.80; Very good = 0.80 to 1.00
7 Results and Discussion
Fig. 8 and Fig. 9 illustrate remote sensing image fusion by means of Method 1,
Dataset 1 and Method 2, Dataset 2, respectively. The correlation coefficient
(CC) of the fused image with respect to the reference image for the four image
fusion algorithms is shown in Table II. CC is expected to be as near to 1 as
possible for the best fused image quality. In Method 1, Dataset 1, the CC of
the machine-learning-based CNN is observed to be 0.9864, for WT it is 0.7694,
for the PCA method it is 0.5256 and for the IHS transform it is 0.6342.
Clearly, CNN has a higher correlation coefficient than the other fusion
algorithms, indicating the best fused image quality. The same can be observed
in Fig. 10: the spatial quality of CNN is very effective. The finest quality
corresponds to a higher correlation coefficient, which means the object appears
white on the quality map; the darker the object appears, the poorer the spatial
quality and the lower the correlation coefficient. The difference between the
means of the reference and the fused images (DM) is expected to be as near to 0
as possible for the best spectral quality of the fused image. In Method 1,
Dataset 1, the DM of the machine-learning-based CNN is observed to be 0.0007,
for WT it is 0.0086, for the PCA method it is 0.0825 and for the IHS transform
it is 0.0546. From this observation, it can be affirmed that CNN has a DM value
very close to zero; hence, CNN has the best visualization compared to the other
fusion algorithms. The standard deviation of the difference image relative to
the mean of the reference image (SSD) should be lower for improved spectral
quality of the fused image. In Method 1, Dataset 1, the SSD of the
machine-learning-based CNN is observed to be 38.19%, for WT it is 56.32%, for
the PCA method it is 79.12% and for the IHS transform it is 76.62%. CNN has the
lowest SSD percentage, again confirming better spectral quality than the other
fusion algorithms.
The Universal Image Quality Indicator (UIQI) is expected to be higher for
better spectral quality of the fused image. In Method 1, Dataset 1, the UIQI
index of the machine-learning-based CNN is observed to be 0.0964, for WT it is
0.0516, for the PCA method it is 0.0096 and for the IHS transform it is 0.0086.
CNN has a higher UIQI index than the other fusion algorithms, showing the best
quality of fused image. The relative average spectral error index (RASE)
characterizes the average performance of the method over all bands.
Multispectral images have numerous bands; the relative spectral error between
the bands of the fused image and the reference image establishes the RASE. The
RASE should be lower for the best spectral quality of the fused image. In
Method 1, Dataset 1, the RASE of the machine-learning-based CNN is observed to
be 0.0357, for WT it is 0.0582, for the PCA method it is 0.1362 and for the IHS
transform it is 0.1144. From this observation, it can be affirmed that CNN has
the lowest RASE value, indicating the best fused image quality. The relative
global dimensional synthesis error (ERGAS) should also be lower for the best
spectral quality of the fused image. In Method 1, Dataset 1, the ERGAS of the
machine-learning-based CNN is observed to be 1.9786, for WT it is 2.6934, for
the PCA method it is 3.2638 and for the IHS transform it is 3.9452. From this
observation, CNN has the lowest ERGAS value, again indicating the best fused
image quality. The same can be observed in Fig. 12: the global quality of CNN
is very effective. The average gradient index (AG) describes the changing
character of the image texture and the detail information. Higher values of the
AG index correspond to higher spatial resolution. CNN has the highest value of
AG, yet again confirming the best spatial quality among the fusion algorithms.
The information entropy (HF) of the fused image without a reference image for
the various image fusion algorithms is reported in Table III. Higher values of
HF indicate richer image information and better visualization. In Method 2,
Dataset 2, the HF of the machine-learning-based CNN is observed to be 7.127,
for WT it is 6.252, for the PCA method it is 5.686 and for the IHS transform it
is 5.297. From this observation, CNN has the highest HF value; hence, CNN has
the best visualization among the fusion algorithms. The same can be observed in
Fig. 11: the spatial quality of CNN is very effective. The standard deviation
SF reflects the image contrast: the higher the SF, the stronger the image
contrast and the better the visualization. In Method 2, Dataset 2, the SF of
the machine-learning-based CNN is observed to be 23.698, for WT it is 19.546,
for the PCA method it is 17.821 and for the IHS transform it is 16.563. CNN has
a higher SF than the other fusion algorithms, showing the best visual quality
of the fused image. The cross entropy C measures the pixel-level difference
between two images. When the image difference is small, more information is
extracted, so the cross entropy C offers an improved appraisal of image fusion.
Lower values of C indicate richer image information and better image quality.
In Method 2, Dataset 2, the C of the machine-learning-based CNN is observed to
be 0.4296, for WT it is 0.5467, for the PCA method it is 0.7279 and for the IHS
transform it is 0.7561. From this observation, CNN has the lowest value of C.
The same can be observed in Fig. 13: the global quality of CNN is very
effective. Consequently, CNN yields the best fused image quality among the
image fusion algorithms.
Fig. 14 and Fig. 15 portray the classification of bucolic and farming regions
by the object-oriented method using the various fusion algorithms with and
without a reference image, i.e., Method 1, Dataset 1 and Method 2, Dataset 2,
respectively. The overall accuracy and kappa index of the bucolic and farming
region classification are reported in Table IV and Table V, respectively. The
overall accuracy provides the efficacy of the region classification: the higher
the accuracy, the better the classification. In Method 1, Dataset 1, the
overall accuracy of the machine-learning-based CNN is observed to be 90.43%,
for WT it is 83.89%, for the PCA method it is 78.64% and for the IHS transform
it is 71.93%. In Method 2, Dataset 2, the overall accuracy of the
machine-learning-based CNN is observed to be 93.23%, for WT it is 86.52%, for
the PCA method it is 76.89% and for the IHS transform it is 71.56%. From these
observations, in both methods CNN has the highest efficacy of bucolic and
farming region classification. Kappa is a measure of agreement between two
entities. Kappa is always less than or equal to 1: a value of 1 indicates
perfect classification and values less than 1 indicate less-than-perfect
classification. In Method 1, Dataset 1, the kappa index of the
machine-learning-based CNN is observed to be 0.92, for WT it is 0.73, for the
PCA method it is 0.57 and for the IHS transform it is 0.54. In Method 2,
Dataset 2, the kappa index of the machine-learning-based CNN is observed to be
0.89, for WT it is 0.77, for the PCA method it is 0.56 and for the IHS
transform it is 0.52. From these observations, in both methods CNN has the
kappa value closest to 1 among the image fusion algorithms in the
classification of bucolic and farming regions using the object-oriented method.
It can therefore be concluded that the machine-learning-based convolutional
neural network provides the best image fusion quality with respect to spatial
quality, spectral quality and global quality, with and without a reference
image. Consequently, only with the best fused image is a superlative
classification of the bucolic and farming regions of a remote sensing image
practicable.
Fig. 8 Illustration of Remote Sensing Image Fusion using Method 1, Dataset 1. (a) PAN Image Quick Bird.
(b) MS Image Quick Bird. (c1) Image Fused by IHS Transform. (c2) Image Fused by PCA Method.
(c3) Image Fused by Wavelet Transform. (c4) Image Fused by Machine Learning Based CNN.
Fig. 9 Illustration of Remote Sensing Image Fusion using Method 2, Dataset 2. (a) PAN Image Geo Eye.
(b) MS Image Geo Eye. (c1) Image Fused by IHS Transform. (c2) Image Fused by PCA Method.
(c3) Image Fused by Wavelet Transform. (c4) Image Fused by Machine Learning Based CNN.
Fig. 10 Spatial quality of Fig. 8 using Method 1, Dataset 1. (a) Image Fused by IHS Transform.
(b) Image Fused by PCA Method. (c) Image Fused by Wavelet Transform.
(d) Image Fused by Machine Learning Based CNN.
Fig. 11 Spatial quality of Fig. 9 using Method 2, Dataset 2. (a) Image Fused by IHS Transform.
(b) Image Fused by PCA Method. (c) Image Fused by Wavelet Transform.
(d) Image Fused by Machine Learning Based CNN.
Fig. 12 Global quality of Fig. 8 using Method 1, Dataset 1. (a) Image Fused by IHS Transform.
(b) Image Fused by PCA Method. (c) Image Fused by Wavelet Transform.
(d) Image Fused by Machine Learning Based CNN.
Fig. 13 Global quality of Fig. 9 using Method 2, Dataset 2. (a) Image Fused by IHS Transform.
(b) Image Fused by PCA Method. (c) Image Fused by Wavelet Transform.
(d) Image Fused by Machine Learning Based CNN.
Fig. 14 Bucolic and Farming Region Classification by Object Oriented Method using Method 1, Dataset 1.
(a) IHS Transform. (b) PCA Method. (c) Wavelet Transform. (d) Machine Learning Based CNN.
Fig. 15 Bucolic and Farming Region Classification by Object Oriented Method using Method 2, Dataset 2.
(a) IHS Transform. (b) PCA Method. (c) Wavelet Transform. (d) Machine Learning Based CNN.
8 Conclusion
Image data fusion has developed into a valuable tool in remote sensing for integrating
the best features of each sensor involved in the processing. Among the many image fusion
techniques, the most commonly used approaches, namely the IHS transform, PCA and the
wavelet transform, were tested against the proposed machine learning based CNN for the
quality of the fused PAN and MS images, with and without a reference image, in the
classification of bucolic and farming regions using the object oriented classification
method. Both qualitative and quantitative assessments were performed. With a reference
image, the correlation coefficient (CC), the difference between the means of the
reference and merged images (DM), the standard deviation of the difference image (SSD),
the universal image quality indicator (UIQI), the relative average spectral error index
(RASE), the relative global dimensionless synthesis error (ERGAS) and the average
gradient index (AG) were evaluated to examine the quality of the fused image. Likewise,
without a reference image, quantitative metrics such as information entropy (HF),
standard deviation (SF) and cross entropy (C) were evaluated. In both cases,
panchromatic and multispectral remote sensing image fusion using the machine learning
based convolutional neural network proved more effective for classifying bucolic and
farming regions than the IHS transform, the PCA method and the wavelet transform. The
IHS transform and PCA were observed to have lower complexity and faster processing
times, but their most significant drawback is colour distortion. Wavelet based systems
outperform those approaches in minimizing colour distortion, but they normally incur
greater computational complexity and more involved parameter setting. A further
challenge for the existing methods is the capacity to process multispectral,
hyperspectral and superspectral satellite sensor data; machine learning appears to be
one viable way to handle the high dimensionality of such data. From the above results,
it can be plainly and consistently concluded that the machine learning based
convolutional neural network affords the best image fusion quality with respect to
spatial, spectral and global quality, with and without a reference image. Consequently,
only with such a high-quality fused image is superior classification of bucolic and
farming regions in remote sensing imagery practically achievable.
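As a concrete illustration of two of the reference-based measures enumerated above, the sketch below computes the per-band correlation coefficient (CC) and ERGAS over a multiband image; the (bands, H, W) array layout and the PAN/MS pixel-size ratio are assumptions made for the example, not the paper's code.

```python
import numpy as np

def correlation_coefficient(ref_band, fused_band):
    """CC between one reference band and the corresponding
    fused band; values near 1 indicate high similarity."""
    r = ref_band.ravel().astype(float)
    f = fused_band.ravel().astype(float)
    return float(np.corrcoef(r, f)[0, 1])

def ergas(ref, fused, ratio=0.25):
    """Relative global dimensionless synthesis error (ERGAS).
    ref and fused have shape (bands, H, W); ratio is the PAN/MS
    pixel-size ratio (e.g. 1 m PAN / 4 m MS = 0.25).
    Lower values indicate better spectral fidelity."""
    ref = ref.astype(float)
    fused = fused.astype(float)
    terms = []
    for k in range(ref.shape[0]):
        rmse = np.sqrt(np.mean((ref[k] - fused[k]) ** 2))
        terms.append((rmse / ref[k].mean()) ** 2)  # band-wise relative error
    return float(100.0 * ratio * np.sqrt(np.mean(terms)))
```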
In the future, diverse fusion methods could be combined within a single framework.
Every fusion scheme has its own benefits and limits, so blending several schemes can be
a useful strategy for achieving better outcomes. Nevertheless, the selection and
arrangement of such schemes remain rather subjective and often hinge on the user's
experience. Further research is needed on aspects such as the design of a general
framework for combining diverse fusion procedures; the development of innovative
approaches that merge pixel-, feature- and decision-level image fusion; and the
development of automatic quality assessment techniques for evaluating fusion outcomes.
Automatic quality assessment is essential for appraising the potential benefits of
fusion, for determining an optimal parameter setting for a given fusion system, and for
comparing the results obtained with different algorithms. As yet, however, no automatic
solution has been shown to consistently yield high-quality fusion across diverse data
sets. It is expected that fusing data from several independent sensors will deliver
performance beyond what any single sensor can achieve and will reduce vulnerability to
sensor-specific failures and deployment constraints.
More Related Content

What's hot

Scale and resolution
Scale and resolutionScale and resolution
Scale and resolutionshabir dar
 
Cnn acuracia remotesensing-08-00329
Cnn acuracia remotesensing-08-00329Cnn acuracia remotesensing-08-00329
Cnn acuracia remotesensing-08-00329Universidade Fumec
 
Super-Resolution of Multispectral Images
Super-Resolution of Multispectral ImagesSuper-Resolution of Multispectral Images
Super-Resolution of Multispectral Imagesijsrd.com
 
Motivation for image fusion
Motivation for image fusionMotivation for image fusion
Motivation for image fusionVIVEKANAND BONAL
 
Particle image velocimetry
Particle image velocimetryParticle image velocimetry
Particle image velocimetryMohsin Siddique
 
Image interpretation keys & image resolution
Image interpretation keys & image resolutionImage interpretation keys & image resolution
Image interpretation keys & image resolutionPramoda Raj
 
Correction of Inhomogeneous MR Images Using Multiscale Retinex
Correction of Inhomogeneous MR Images Using Multiscale RetinexCorrection of Inhomogeneous MR Images Using Multiscale Retinex
Correction of Inhomogeneous MR Images Using Multiscale RetinexCSCJournals
 
Automatic traffic light controller for emergency vehicle using peripheral int...
Automatic traffic light controller for emergency vehicle using peripheral int...Automatic traffic light controller for emergency vehicle using peripheral int...
Automatic traffic light controller for emergency vehicle using peripheral int...IJECEIAES
 
Image Denoising Using Earth Mover's Distance and Local Histograms
Image Denoising Using Earth Mover's Distance and Local HistogramsImage Denoising Using Earth Mover's Distance and Local Histograms
Image Denoising Using Earth Mover's Distance and Local HistogramsCSCJournals
 
Computationally Efficient Methods for Sonar Image Denoising using Fractional ...
Computationally Efficient Methods for Sonar Image Denoising using Fractional ...Computationally Efficient Methods for Sonar Image Denoising using Fractional ...
Computationally Efficient Methods for Sonar Image Denoising using Fractional ...CSCJournals
 
RADAR Image Fusion Using Wavelet Transform
RADAR Image Fusion Using Wavelet TransformRADAR Image Fusion Using Wavelet Transform
RADAR Image Fusion Using Wavelet TransformINFOGAIN PUBLICATION
 
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...CSCJournals
 
CLASSIFICATION AND COMPARISION OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M...
CLASSIFICATION AND COMPARISION OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M...CLASSIFICATION AND COMPARISION OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M...
CLASSIFICATION AND COMPARISION OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M...ADEIJ Journal
 
Irrera gold2010
Irrera gold2010Irrera gold2010
Irrera gold2010grssieee
 
Fusion of Multispectral And Full Polarimetric SAR Images In NSST Domain
Fusion of Multispectral And Full Polarimetric SAR Images In NSST DomainFusion of Multispectral And Full Polarimetric SAR Images In NSST Domain
Fusion of Multispectral And Full Polarimetric SAR Images In NSST DomainCSCJournals
 
International Journal of Image Processing (IJIP) Volume (1) Issue (1)
International Journal of Image Processing (IJIP) Volume (1) Issue (1)International Journal of Image Processing (IJIP) Volume (1) Issue (1)
International Journal of Image Processing (IJIP) Volume (1) Issue (1)CSCJournals
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONINFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONIJCI JOURNAL
 
OBIA on Coastal Landform Based on Structure Tensor
OBIA on Coastal Landform Based on Structure Tensor OBIA on Coastal Landform Based on Structure Tensor
OBIA on Coastal Landform Based on Structure Tensor csandit
 

What's hot (20)

Scale and resolution
Scale and resolutionScale and resolution
Scale and resolution
 
Cnn acuracia remotesensing-08-00329
Cnn acuracia remotesensing-08-00329Cnn acuracia remotesensing-08-00329
Cnn acuracia remotesensing-08-00329
 
Super-Resolution of Multispectral Images
Super-Resolution of Multispectral ImagesSuper-Resolution of Multispectral Images
Super-Resolution of Multispectral Images
 
Motivation for image fusion
Motivation for image fusionMotivation for image fusion
Motivation for image fusion
 
Fd36957962
Fd36957962Fd36957962
Fd36957962
 
Particle image velocimetry
Particle image velocimetryParticle image velocimetry
Particle image velocimetry
 
Image interpretation keys & image resolution
Image interpretation keys & image resolutionImage interpretation keys & image resolution
Image interpretation keys & image resolution
 
Correction of Inhomogeneous MR Images Using Multiscale Retinex
Correction of Inhomogeneous MR Images Using Multiscale RetinexCorrection of Inhomogeneous MR Images Using Multiscale Retinex
Correction of Inhomogeneous MR Images Using Multiscale Retinex
 
Automatic traffic light controller for emergency vehicle using peripheral int...
Automatic traffic light controller for emergency vehicle using peripheral int...Automatic traffic light controller for emergency vehicle using peripheral int...
Automatic traffic light controller for emergency vehicle using peripheral int...
 
Image Denoising Using Earth Mover's Distance and Local Histograms
Image Denoising Using Earth Mover's Distance and Local HistogramsImage Denoising Using Earth Mover's Distance and Local Histograms
Image Denoising Using Earth Mover's Distance and Local Histograms
 
Computationally Efficient Methods for Sonar Image Denoising using Fractional ...
Computationally Efficient Methods for Sonar Image Denoising using Fractional ...Computationally Efficient Methods for Sonar Image Denoising using Fractional ...
Computationally Efficient Methods for Sonar Image Denoising using Fractional ...
 
RADAR Image Fusion Using Wavelet Transform
RADAR Image Fusion Using Wavelet TransformRADAR Image Fusion Using Wavelet Transform
RADAR Image Fusion Using Wavelet Transform
 
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ...
 
CLASSIFICATION AND COMPARISION OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M...
CLASSIFICATION AND COMPARISION OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M...CLASSIFICATION AND COMPARISION OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M...
CLASSIFICATION AND COMPARISION OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M...
 
Irrera gold2010
Irrera gold2010Irrera gold2010
Irrera gold2010
 
Fusion of Multispectral And Full Polarimetric SAR Images In NSST Domain
Fusion of Multispectral And Full Polarimetric SAR Images In NSST DomainFusion of Multispectral And Full Polarimetric SAR Images In NSST Domain
Fusion of Multispectral And Full Polarimetric SAR Images In NSST Domain
 
International Journal of Image Processing (IJIP) Volume (1) Issue (1)
International Journal of Image Processing (IJIP) Volume (1) Issue (1)International Journal of Image Processing (IJIP) Volume (1) Issue (1)
International Journal of Image Processing (IJIP) Volume (1) Issue (1)
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONINFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
 
OBIA on Coastal Landform Based on Structure Tensor
OBIA on Coastal Landform Based on Structure Tensor OBIA on Coastal Landform Based on Structure Tensor
OBIA on Coastal Landform Based on Structure Tensor
 

Similar to Machine learning fuses pan and multi images

Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...DR.P.S.JAGADEESH KUMAR
 
Earth Science and Remote Sensing Applications [Book]
Earth Science and Remote Sensing Applications [Book]Earth Science and Remote Sensing Applications [Book]
Earth Science and Remote Sensing Applications [Book]DR.P.S.JAGADEESH KUMAR
 
IRJET- Fusion of VNIR and SWIR Bands of Sentinel-2A Imagery
IRJET- Fusion of VNIR and SWIR Bands of Sentinel-2A ImageryIRJET- Fusion of VNIR and SWIR Bands of Sentinel-2A Imagery
IRJET- Fusion of VNIR and SWIR Bands of Sentinel-2A ImageryIRJET Journal
 
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent  High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent ijsc
 
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENTHIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENTijsc
 
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.pptURBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.pptgrssieee
 
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.pptURBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.pptgrssieee
 
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSIONCOLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSIONacijjournal
 
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION acijjournal
 
IMAGE FUSION IN IMAGE PROCESSING
IMAGE FUSION IN IMAGE PROCESSINGIMAGE FUSION IN IMAGE PROCESSING
IMAGE FUSION IN IMAGE PROCESSINGgarima0690
 
seyed armin hashemi
seyed armin hashemiseyed armin hashemi
seyed armin hashemiDheeraj Vasu
 
P.maria sheeba 15 mco010
P.maria sheeba 15 mco010P.maria sheeba 15 mco010
P.maria sheeba 15 mco010W3Edify
 
Color Guided Thermal image Super Resolution
Color Guided Thermal image Super ResolutionColor Guided Thermal image Super Resolution
Color Guided Thermal image Super ResolutionSafayet Hossain
 
PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution De...
PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution De...PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution De...
PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution De...journal ijrtem
 

Similar to Machine learning fuses pan and multi images (20)

Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
Panchromatic and Multispectral Remote Sensing Image Fusion Using Particle Swa...
 
Earth Science and Remote Sensing Applications [Book]
Earth Science and Remote Sensing Applications [Book]Earth Science and Remote Sensing Applications [Book]
Earth Science and Remote Sensing Applications [Book]
 
IRJET- Fusion of VNIR and SWIR Bands of Sentinel-2A Imagery
IRJET- Fusion of VNIR and SWIR Bands of Sentinel-2A ImageryIRJET- Fusion of VNIR and SWIR Bands of Sentinel-2A Imagery
IRJET- Fusion of VNIR and SWIR Bands of Sentinel-2A Imagery
 
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent  High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
High Resolution Mri Brain Image Segmentation Technique Using Holder Exponent
 
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENTHIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT
 
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.pptURBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
 
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.pptURBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
URBAN AREA PRODUCT SIMULATION FOR THE ENMAP HYPERSPECTRAL SENSOR.ppt
 
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSIONCOLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
 
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
COLOUR IMAGE REPRESENTION OF MULTISPECTRAL IMAGE FUSION
 
Remote sensing
Remote sensingRemote sensing
Remote sensing
 
A HYBRID APPROACH OF WAVELETS FOR EFFECTIVE IMAGE FUSION FOR MULTIMODAL MEDIC...
A HYBRID APPROACH OF WAVELETS FOR EFFECTIVE IMAGE FUSION FOR MULTIMODAL MEDIC...A HYBRID APPROACH OF WAVELETS FOR EFFECTIVE IMAGE FUSION FOR MULTIMODAL MEDIC...
A HYBRID APPROACH OF WAVELETS FOR EFFECTIVE IMAGE FUSION FOR MULTIMODAL MEDIC...
 
IMAGE FUSION IN IMAGE PROCESSING
IMAGE FUSION IN IMAGE PROCESSINGIMAGE FUSION IN IMAGE PROCESSING
IMAGE FUSION IN IMAGE PROCESSING
 
seyed armin hashemi
seyed armin hashemiseyed armin hashemi
seyed armin hashemi
 
P.maria sheeba 15 mco010
P.maria sheeba 15 mco010P.maria sheeba 15 mco010
P.maria sheeba 15 mco010
 
Color Guided Thermal image Super Resolution
Color Guided Thermal image Super ResolutionColor Guided Thermal image Super Resolution
Color Guided Thermal image Super Resolution
 
PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution De...
PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution De...PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution De...
PAN Sharpening of Remotely Sensed Images using Undecimated Multiresolution De...
 
Ijetr021113
Ijetr021113Ijetr021113
Ijetr021113
 
Ijetr021113
Ijetr021113Ijetr021113
Ijetr021113
 
40120140507003
4012014050700340120140507003
40120140507003
 
40120140507003
4012014050700340120140507003
40120140507003
 

More from DR.P.S.JAGADEESH KUMAR

Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...
Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...
Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...DR.P.S.JAGADEESH KUMAR
 
Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...
Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...
Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...DR.P.S.JAGADEESH KUMAR
 
Optical Picbots as a Medicament for Leukemia
Optical Picbots as a Medicament for LeukemiaOptical Picbots as a Medicament for Leukemia
Optical Picbots as a Medicament for LeukemiaDR.P.S.JAGADEESH KUMAR
 
Integrating Medical Robots for Brain Surgical Applications
Integrating Medical Robots for Brain Surgical ApplicationsIntegrating Medical Robots for Brain Surgical Applications
Integrating Medical Robots for Brain Surgical ApplicationsDR.P.S.JAGADEESH KUMAR
 
Automatic Speech Recognition and Machine Learning for Robotic Arm in Surgery
Automatic Speech Recognition and Machine Learning for Robotic Arm in SurgeryAutomatic Speech Recognition and Machine Learning for Robotic Arm in Surgery
Automatic Speech Recognition and Machine Learning for Robotic Arm in SurgeryDR.P.S.JAGADEESH KUMAR
 
Continuous and Discrete Crooklet Transform
Continuous and Discrete Crooklet TransformContinuous and Discrete Crooklet Transform
Continuous and Discrete Crooklet TransformDR.P.S.JAGADEESH KUMAR
 
A Theoretical Perception of Gravity from the Quantum to the Relativity
A Theoretical Perception of Gravity from the Quantum to the RelativityA Theoretical Perception of Gravity from the Quantum to the Relativity
A Theoretical Perception of Gravity from the Quantum to the RelativityDR.P.S.JAGADEESH KUMAR
 
Advanced Robot Vision for Medical Surgical Applications
Advanced Robot Vision for Medical Surgical ApplicationsAdvanced Robot Vision for Medical Surgical Applications
Advanced Robot Vision for Medical Surgical ApplicationsDR.P.S.JAGADEESH KUMAR
 
Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...
Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...
Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...DR.P.S.JAGADEESH KUMAR
 
Intelligent Detection of Glaucoma Using Ballistic Optical Imaging
Intelligent Detection of Glaucoma Using Ballistic Optical ImagingIntelligent Detection of Glaucoma Using Ballistic Optical Imaging
Intelligent Detection of Glaucoma Using Ballistic Optical ImagingDR.P.S.JAGADEESH KUMAR
 
Robotic Simulation of Human Brain Using Convolutional Deep Belief Networks
Robotic Simulation of Human Brain Using Convolutional Deep Belief NetworksRobotic Simulation of Human Brain Using Convolutional Deep Belief Networks
Robotic Simulation of Human Brain Using Convolutional Deep Belief NetworksDR.P.S.JAGADEESH KUMAR
 
Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...
Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...
Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...DR.P.S.JAGADEESH KUMAR
 
Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...
Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...
Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...DR.P.S.JAGADEESH KUMAR
 
Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...
Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...
Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...DR.P.S.JAGADEESH KUMAR
 
Machine Learning based Retinal Therapeutic for Glaucoma
Machine Learning based Retinal Therapeutic for GlaucomaMachine Learning based Retinal Therapeutic for Glaucoma
Machine Learning based Retinal Therapeutic for GlaucomaDR.P.S.JAGADEESH KUMAR
 
Fingerprint detection and face recognition for colonization control of fronti...
Fingerprint detection and face recognition for colonization control of fronti...Fingerprint detection and face recognition for colonization control of fronti...
Fingerprint detection and face recognition for colonization control of fronti...DR.P.S.JAGADEESH KUMAR
 
New Malicious Attacks on Mobile Banking Applications
New Malicious Attacks on Mobile Banking ApplicationsNew Malicious Attacks on Mobile Banking Applications
New Malicious Attacks on Mobile Banking ApplicationsDR.P.S.JAGADEESH KUMAR
 
Accepting the Challenges in Devising Video Game Leeway and Contrivance
Accepting the Challenges in Devising Video Game Leeway and ContrivanceAccepting the Challenges in Devising Video Game Leeway and Contrivance
Accepting the Challenges in Devising Video Game Leeway and ContrivanceDR.P.S.JAGADEESH KUMAR
 
A Comparative Case Study on Compression Algorithm for Remote Sensing Images
A Comparative Case Study on Compression Algorithm for Remote Sensing ImagesA Comparative Case Study on Compression Algorithm for Remote Sensing Images
A Comparative Case Study on Compression Algorithm for Remote Sensing ImagesDR.P.S.JAGADEESH KUMAR
 
AVC based Compression of Compound Images Using Block Classification Scheme
AVC based Compression of Compound Images Using Block Classification SchemeAVC based Compression of Compound Images Using Block Classification Scheme
AVC based Compression of Compound Images Using Block Classification SchemeDR.P.S.JAGADEESH KUMAR
 

More from DR.P.S.JAGADEESH KUMAR (20)

Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...
Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...
Bi-directional Recurrent Neural Networks in Classifying Dementia, Alzheimer’s...
 
Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...
Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...
Promise and Risks Tangled in Hybrid Wavelet Medical Image Fusion Using Firefl...
 
Optical Picbots as a Medicament for Leukemia
Optical Picbots as a Medicament for LeukemiaOptical Picbots as a Medicament for Leukemia
Optical Picbots as a Medicament for Leukemia
 
Integrating Medical Robots for Brain Surgical Applications
Integrating Medical Robots for Brain Surgical ApplicationsIntegrating Medical Robots for Brain Surgical Applications
Integrating Medical Robots for Brain Surgical Applications
 
Automatic Speech Recognition and Machine Learning for Robotic Arm in Surgery
Automatic Speech Recognition and Machine Learning for Robotic Arm in SurgeryAutomatic Speech Recognition and Machine Learning for Robotic Arm in Surgery
Automatic Speech Recognition and Machine Learning for Robotic Arm in Surgery
 
Continuous and Discrete Crooklet Transform
Continuous and Discrete Crooklet TransformContinuous and Discrete Crooklet Transform
Continuous and Discrete Crooklet Transform
 
A Theoretical Perception of Gravity from the Quantum to the Relativity
A Theoretical Perception of Gravity from the Quantum to the RelativityA Theoretical Perception of Gravity from the Quantum to the Relativity
A Theoretical Perception of Gravity from the Quantum to the Relativity
 
Advanced Robot Vision for Medical Surgical Applications
Advanced Robot Vision for Medical Surgical ApplicationsAdvanced Robot Vision for Medical Surgical Applications
Advanced Robot Vision for Medical Surgical Applications
 
Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...
Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...
Pragmatic Realities on Brain Imaging Techniques and Image Fusion for Alzheime...
 
Intelligent Detection of Glaucoma Using Ballistic Optical Imaging
Intelligent Detection of Glaucoma Using Ballistic Optical ImagingIntelligent Detection of Glaucoma Using Ballistic Optical Imaging
Intelligent Detection of Glaucoma Using Ballistic Optical Imaging
 
Robotic Simulation of Human Brain Using Convolutional Deep Belief Networks
Robotic Simulation of Human Brain Using Convolutional Deep Belief NetworksRobotic Simulation of Human Brain Using Convolutional Deep Belief Networks
Robotic Simulation of Human Brain Using Convolutional Deep Belief Networks
 
Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...
Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...
Classification and Evaluation of Macular Edema, Glaucoma and Alzheimer’s Dise...
 
Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...
Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...
Multilayer Perceptron Neural Network Based Immersive VR System for Cognitive ...
 
Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...
Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...
Computer Aided Therapeutic of Alzheimer’s Disease Eulogizing Pattern Classifi...
 
Machine Learning based Retinal Therapeutic for Glaucoma
Machine Learning based Retinal Therapeutic for GlaucomaMachine Learning based Retinal Therapeutic for Glaucoma
Machine Learning based Retinal Therapeutic for Glaucoma
 
Fingerprint detection and face recognition for colonization control of fronti...
Fingerprint detection and face recognition for colonization control of fronti...Fingerprint detection and face recognition for colonization control of fronti...
Fingerprint detection and face recognition for colonization control of fronti...
 
New Malicious Attacks on Mobile Banking Applications
New Malicious Attacks on Mobile Banking ApplicationsNew Malicious Attacks on Mobile Banking Applications
New Malicious Attacks on Mobile Banking Applications
 
Accepting the Challenges in Devising Video Game Leeway and Contrivance
Accepting the Challenges in Devising Video Game Leeway and ContrivanceAccepting the Challenges in Devising Video Game Leeway and Contrivance
Accepting the Challenges in Devising Video Game Leeway and Contrivance
 
A Comparative Case Study on Compression Algorithm for Remote Sensing Images
A Comparative Case Study on Compression Algorithm for Remote Sensing ImagesA Comparative Case Study on Compression Algorithm for Remote Sensing Images
A Comparative Case Study on Compression Algorithm for Remote Sensing Images
 
AVC based Compression of Compound Images Using Block Classification Scheme
AVC based Compression of Compound Images Using Block Classification SchemeAVC based Compression of Compound Images Using Block Classification Scheme
AVC based Compression of Compound Images Using Block Classification Scheme
 

Recently uploaded

(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝soniya singh
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...ranjana rawat
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations120cr0395
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).pptssuser5c9d4b1
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...ranjana rawat
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingrknatarajan
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escortsranjana rawat
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSRajkumarAkumalla
 

Recently uploaded (20)

(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
Complementary observations of the same scene may also be recorded by the same sensor at different polarizations, or by the same sensor carried on platforms flying at dissimilar heights (Dai F.C, Lee C.F, 2002).

Typically, sensors with high spectral resolution, which capture the radiance from diverse land covers in numerous bands of the electromagnetic spectrum, do not offer an ideal spatial resolution; that resolution may be insufficient for precise classification in spite of the good spectral resolution (Fonseca L.M.G, Prasad G.S.S.D, Mascarenhas N.D.A, 1993). On a high spatial resolution panchromatic (PAN) image, general geometric features can be identified effortlessly, whereas multispectral (MS) images carry richer spectral information. The usefulness of the images can be improved if the benefits of both high spatial and high spectral resolution are combined into a single image. The detailed structures of such a combined image can then be readily recognized and will assist numerous applications, for example built-up and environmental studies (Fonseca et al., 2008).

With suitable algorithms it is possible to merge the MS and PAN bands and yield a synthetic image with the best features of both. This procedure is known as multisensor merging, fusion, or sharpening (Laporterie-Dejean F, de Boissezon H, Flouzat G, Lefevre-Fonollosa M.J, 2005). Its objective is to integrate the spatial detail of the high-resolution PAN image and the colour information of the low-resolution MS image to obtain a high-resolution MS image (Ioannidou S, Karathanassi V, 2007). The outcome of image fusion is a new image which is more suitable for human and machine perception or for further image-processing tasks such as segmentation, feature extraction and object recognition. The fused image must provide the highest spatial information while still preserving good spectral information quality (Garzelli A et al., 2005). It is known that the spatial information of the PAN image is mostly carried by its high-frequency components, whereas the spectral information of the MS image is mostly carried by its low-frequency components (Jing L, Cheng Q, 2009).
If the high-frequency components of the MS image are simply replaced by the high-frequency components of the PAN image, the spatial resolution is enhanced but the spectral information carried by the high-frequency components of the MS image is lost (Guo Q, Chen S, Leung H, Liu S, 2010). The specific purposes of image fusion are to increase the spatial resolution, improve the geometric accuracy, enhance the presentation of topographic features, improve classification accuracy, strengthen change detection, and substitute or restore defective image data (Pohl C, Genderen J.L.V, 1998).
To yield fused images with improved quality, certain significant steps must be observed throughout the fusion process:

(1) The PAN and MS images must be acquired at adjacent times. Many variations may occur during the interval between acquisitions: differences in vegetation depending on the season of the year, dissimilar lighting conditions, construction of buildings, or changes triggered by natural disasters such as floods, earthquakes and volcanic eruptions.
(2) The spectral range of the PAN image must overlap the spectral range of all multispectral bands during the fusion process to preserve the image colour; this avoids colour distortion in the fused image.
(3) The spectral band of the high-resolution image must be as similar as possible to that of the substituted low-resolution component during the fusion process.
(4) The high-resolution image must be thoroughly contrast-matched to the substituted component to reduce residual radiometric artifacts.
(5) The PAN and MS images must be registered with an accuracy better than 0.5 pixel, avoiding artifacts in the fused image.

Some of the above aspects are less significant when the fused images come from areas of diverse extent acquired with diverse remote sensing practices (Temesgen B, Mohammed M.U, 2001).

2 Panchromatic and Multispectral Imaging System

Optical remote sensing makes use of visible, near-infrared and short-wave infrared sensors to produce images of the earth's surface by detecting the solar radiation reflected by targets on the land. Dissimilar materials reflect and absorb light differently at dissimilar wavelengths, as shown in Fig. 1. Hence, targets can be discriminated by their spectral reflectance signatures in remotely sensed images (Lillo-Saavedra M, Gonzalo C et al., 2005). Optical remote sensing systems are divided into different classes depending on the number of spectral bands used in the imaging process, namely panchromatic, multispectral, hyperspectral and superspectral imaging systems (Li Z, Jing Z et al, 2005).

In a panchromatic imaging system, the sensor is a single-channel detector sensitive to radiation within a broad wavelength range. If the wavelength range coincides with the visible range, the resulting image is called a panchromatic image, i.e. a black-and-white image. The physical quantity being measured is the apparent brightness of the targets (L. Dong, Q. Yang, H. Wu et al., 2015). Examples of panchromatic imaging systems are IKONOS Pan, QuickBird Pan, SPOT Pan, GeoEye Pan and LANDSAT ETM+ Pan. The sensor in multispectral imaging is a multi-channel detector with a limited number of spectral bands; every channel is sensitive to radiation within a narrow wavelength band. The resulting multilayer image contains both the brightness and the spectral colour information of the targets being observed (Marcelino E.V, Ventura F et al., 2003). Examples of multispectral systems are Landsat TM, MSS, Spot HRV-XS, Ikonos MS, QuickBird MS and GeoEye MS.
Fig. 1 Illustration of optical remote sensing

Optical remote sensing relies on the sun as the sole source of illumination. The solar irradiation spectrum above the atmosphere can be modeled by a black-body radiation spectrum with a source temperature of 5950 K, with peak irradiation located at about 550 nm wavelength (Miao Q, Shi C, Xu P, Yang M, Shi Y, 2011). Physical measurements of the solar irradiance have also been made by ground-based and spaceborne sensors. After passing through the atmosphere, the solar irradiation spectrum at the ground is modulated by the atmospheric transmission windows; significant energy remains only within the wavelength range from about 0.24 to 3 µm. When solar radiation strikes a target surface, it may be transmitted, absorbed or reflected (Pajares G, de la Cruz J.M, 2004). Dissimilar materials reflect and absorb light differently at dissimilar wavelengths. The reflectance spectrum of a material is a plot of the fraction of radiation reflected as a function of the incident wavelength, and it serves as a unique signature for the material (Tu T.M, Cheng W.C et al., 2007). In general, a material can be recognized from its spectral reflectance signature if the sensing system has adequate spectral resolution to discriminate its spectrum from those of other materials. This principle provides the basis for multispectral remote sensing (Pohl C, Genderen J.L.V, 1998).

Fig. 2 shows the distinctive reflectance spectra of five materials: clear water, turbid water, bare soil and two types of vegetation. The reflectance of clear water is generally low; it is highest at the blue end of the spectrum and declines as wavelength increases, so clear water appears dark bluish (Rahman M.M, Csaplovics E, 2007). Turbid water carries suspended sediment, which raises the reflectance at the red end of the spectrum and accounts for its brownish appearance. The reflectance of bare soil generally depends on its composition; it increases monotonically with increasing wavelength, so soil appears yellowish-red to the eye (X. Luo, Z. Zhang, X. Wu, 2016).
Fig. 2 Reflectance bands of multispectral imaging

Vegetation has a distinctive spectral signature which permits it to be distinguished readily from other kinds of land cover in an optical/near-infrared image. In both the blue and red regions of the spectrum the reflectance is low, owing to absorption by chlorophyll for photosynthesis; it peaks in the green region, which gives rise to the green colour of vegetation. In the near-infrared (NIR) region, the reflectance is much larger than in the visible band owing to the cellular structure of the leaves (Silva F.C, Dutra L.V et al., 2007). Hence, vegetation can be recognized by high NIR reflectance combined with generally low visible reflectance. This phenomenon has been used in reconnaissance missions during wartime for camouflage detection. The shape of the reflectance spectrum can be used for identification of vegetation type (Simone G, Farina A, Morabito F.C, Bruzzone L et al., 2002). For instance, the reflectance spectra of vegetation 1 and 2 can be distinguished although they display the common features of high NIR but low visible reflectance: vegetation 1 has larger reflectance in the visible region but lower reflectance in the NIR region. For the same vegetation type, the reflectance spectrum also depends on other factors such as leaf moisture and the health of the plants (Song H, Yu S, Yang X et al., 2007). The reflectance of vegetation is thus quite varied, contingent on the kind of plant and the plant's water content; reflectance of leaves normally increases when leaf liquid water content declines.

3 Image Fusion Algorithms

Ideally, image fusion methods must permit merging of images with dissimilar spectral and spatial resolution while preserving the radiometric data (Tu T.M, Huang P.S, Hung C.L, Chang C.P, 2004). Enormous effort has been put into developing fusion approaches that conserve the spectral data and increase the detail information in the fused image. Strategies based on the Intensity Hue Saturation (IHS) transform and Principal Components Analysis (PCA) are conceivably the most dominant approaches used to improve the spatial resolution of multispectral images with panchromatic images (Baatz A, 2000). However, both approaches suffer from the problem that the radiometry of the spectral bands is altered after the fusion process.
This is because the high-resolution panchromatic image generally has spectral features dissimilar from the intensity and the first principal components (Ling Y, Ehlers M, Usery E.L, 2008). More recently, novel methods have been proposed, such as those that combine the wavelet transform with the IHS model and the PCA transform, to reduce the colour and information distortion in the fused image. Below, the basic theory of the fusion approaches based on IHS, PCA, the Wavelet Transform and Machine Learning, which are the most traditional procedures employed in remote sensing applications, is presented.

(a) Intensity Hue Saturation (IHS) Transform

The IHS method is one of the most regularly used fusion schemes for sharpening. It has developed into a standard practice in image analysis for colour improvement, feature augmentation, enhancement of spatial resolution and the fusion of disparate datasets (Carper W, Lillesand T, Kiefer R, 1990). In the IHS space, spectral information is chiefly reflected in the hue and the saturation. From the visual system, one can conclude that intensity variation has a trivial effect on the spectral information and is easy to deal with. For the fusion of a high-resolution PAN image with multispectral remote sensing images, the goal is to preserve the spectral information while adding the detail information of high spatial resolution; thus, the fusion is especially convenient to handle in IHS space (Choi M, 2006).

The IHS scheme consists of transforming the R, G and B bands of the MS image into IHS components, substituting the intensity component with the PAN image, and performing the inverse transformation to attain a high spatial resolution MS image. The three multispectral bands, R, G and B, of a low-resolution image are first transformed into the IHS colour space using the linear transform (Tu T.M, Su S.C, Shyu H.C, Huang P.S, 2001):

$$
\begin{pmatrix} I \\ V_1 \\ V_2 \end{pmatrix} =
\begin{pmatrix}
\tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \\
-\tfrac{\sqrt{2}}{6} & -\tfrac{\sqrt{2}}{6} & \tfrac{2\sqrt{2}}{6} \\
\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} & 0
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix},
\qquad
H = \tan^{-1}\!\left(\frac{V_2}{V_1}\right),
\qquad
S = \sqrt{V_1^2 + V_2^2},
$$

where the I, H, S components are intensity, hue and saturation, and V1 and V2 are intermediate variables. Fusion proceeds by substituting the component I with the panchromatic high-resolution image, after matching its radiometric information to the component I. The fused image, which has both rich spectral information and high spatial resolution, is then attained by performing the inverse transformation from IHS back to the original RGB space:

$$
\begin{pmatrix} R' \\ G' \\ B' \end{pmatrix} =
\begin{pmatrix}
1 & -\tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} \\
1 & -\tfrac{1}{\sqrt{2}} & -\tfrac{1}{\sqrt{2}} \\
1 & \sqrt{2} & 0
\end{pmatrix}
\begin{pmatrix} \mathrm{PAN} \\ V_1 \\ V_2 \end{pmatrix}.
$$
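As a concrete illustration of the substitution scheme above, the following is a minimal Python/NumPy sketch of IHS pansharpening. It uses the well-known property of the linear IHS transform that substituting I with PAN and inverting is equivalent to adding (PAN − I) to each band; the function name is ours, and it assumes `pan` has already been resampled to the MS grid and radiometrically matched to I.

```python
import numpy as np

def ihs_fuse(r, g, b, pan):
    """IHS pansharpening sketch: substitute the intensity component with PAN.

    r, g, b : MS bands upsampled to the PAN grid (float arrays).
    pan     : panchromatic band, already radiometrically matched to I.
    For the linear IHS transform, replacing I by PAN and applying the
    inverse transform is equivalent to adding (PAN - I) to every band,
    since the first column of the inverse matrix is all ones.
    """
    i = (r + g + b) / 3.0      # intensity component of the forward transform
    delta = pan - i            # high-frequency detail to inject
    return r + delta, g + delta, b + delta
```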
Although the IHS technique has been extensively used, it cannot decompose an image into different frequency bands in frequency space, such as higher or lower frequencies (Schetselaar E. M, 1998). Thus, the IHS technique cannot be employed to improve particular image features. Moreover, the colour distortion of the IHS method is often significant. To decrease the colour distortion, the PAN image is matched to the intensity component before the substitution, or the hue and saturation components are stretched before the inverse transform. Certain approaches have been proposed that combine a standard IHS transform with FFT filtering of both the PAN image and the intensity component of the original MS image to reduce colour distortion in the fused image (Ling Y, Ehlers M, Usery E.L, Madden M, 2007). Studies show that numerous IHS transformation algorithms have been developed for converting the RGB values; some are also named HSV (hue, saturation, value) or HLS (hue, luminance, saturation). Although the intricacy of the models differs, they yield corresponding values for hue and saturation (Tu T.M, Su S.C, Shyu H.C, Huang P.S, 2001). Nonetheless, the algorithms vary in the technique used for computing the intensity component of the transformation.

(b) Principal Component Analysis (PCA) method

The fusion technique based on PCA is very simple. PCA is a general statistical technique that transforms multivariate data with correlated variables into data with uncorrelated variables; these new variables are obtained as linear combinations of the original variables (Chavez P.S, Kwakteng A.Y, 1989). PCA has been broadly employed in image encoding, image data compression, image enhancement and image fusion. In the fusion method, PCA produces uncorrelated images (PC1, PC2, …, PCn, where n is the number of input multispectral bands). The first principal component (PC1) is replaced with the PAN band, which has higher spatial resolution than the MS images (Cao W, Li B, Zhang Y, 2003). Then, the inverse PCA transformation is applied to recover the image in the RGB colour model.

In PCA image fusion, dominant spatial information and weak colour information is often a problem: the first principal component, which carries the maximum variance, is substituted by the PAN image, and this replacement exaggerates the influence of the PAN image in the fused product (Gonzalez-Audicana M, Saleta J et al., 2004). One remedy may be stretching the principal components to give a spherical distribution. The PCA method is also sensitive to the choice of area to be fused. Another complication is that the first principal component can be significantly dissimilar from the PAN image; if the grey values of the PAN image are adjusted to the grey values corresponding to the PC1 component before the substitution, the colour distortion is meaningfully reduced (Wang Q, Shen Y, Jin J, 2008).
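A minimal sketch of the PC1-substitution scheme just described, assuming an MS cube already upsampled to the PAN grid; the helper name and the mean/variance matching step are our choices, not prescribed by the paper.

```python
import numpy as np

def pca_fuse(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """PCA pansharpening sketch: substitute PC1 with the matched PAN band.

    ms  : (H, W, N) multispectral cube upsampled to the PAN grid.
    pan : (H, W) panchromatic band.
    """
    h, w, n = ms.shape
    x = ms.reshape(-1, n).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    # Eigen-decomposition of the band covariance matrix (N x N).
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]        # order PC1 first
    pcs = xc @ vecs                               # project onto components
    # Match PAN to PC1's mean and variance before substitution, as the
    # text recommends, to reduce colour distortion.
    p = pan.reshape(-1).astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    fused = pcs @ vecs.T + mean                   # inverse PCA transform
    return fused.reshape(h, w, n)
```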
(c) Wavelet Transform (WT)

Multi-resolution and multi-scale approaches, for example pyramid transformations, have been adopted for data fusion since the early 1980s (Chibani Y, Houacine A, 2000). Pyramid-based image fusion approaches, including the Laplacian pyramid transform, were all developed from the Gaussian pyramid transform and have been reviewed and extensively employed. The procedure of wavelet construction was placed in the framework of functional analysis, together with the fast wavelet transform algorithm and a general technique for building wavelets on an orthonormal basis (Amolins K, Zhang Y, Dare P, 2007). On this basis, wavelet transforms can be applied to image decomposition and reconstruction. Wavelet transforms deliver a framework in which an image is decomposed, with each level corresponding to a coarser resolution band (Chibani Y et al., 2003).

When fusing an MS image with a high-resolution PAN image by wavelet fusion, the PAN image is first decomposed into a set of low-resolution PAN images with corresponding wavelet coefficients (spatial details) for each level (Choi M, Kim R.Y, Nam M.R, Kim, H.O, 2005). Discrete bands of the MS image then substitute the low-resolution PAN at the resolution level of the original MS image. The high-resolution spatial detail is injected into every MS band by performing an inverse wavelet transform on each MS band together with the corresponding wavelet coefficients (Cao W, Li B, Zhang Y, 2003). In wavelet-based fusion systems, detail information is extracted from the PAN image by means of wavelet transforms and inserted into the MS image; distortion of the spectral information is reduced relative to the standard methods. To realize optimal fusion outcomes, many wavelet-based fusion systems have been verified by several researchers (Chai Y, Li H.F, Qu J.F, 2010).

A technique for fusing SAR and visible MS images by means of the Curvelet transform has been proposed; this scheme was found to be more competent at capturing edge information and denoising than the wavelet transform. Curvelet-based image fusion has been used to combine a Landsat ETM+ PAN and MS image; the technique simultaneously delivers richer information in the spatial and spectral domains. A flexible multi-resolution, local and directional image expansion using contour segments, the Contourlet transform, has been proposed to address the problem that the wavelet transform cannot efficiently represent the singularity of linear curves in image processing (Garguet-Duport B, Girel J, 1996). The Contourlet transform affords a flexible number of directions and captures the intrinsic geometrical structure of images. Generally, as a distinct feature-level fusion technique, wavelet-based fusion clearly performs better than conventional approaches in terms of reducing colour distortion and denoising effects. It has been one of the most prevalent fusion methods in remote sensing in recent years, and has become a standard component of numerous commercial image processing packages such as ENVI, PCI and ERDAS (Gonzalez-Audicana M, Otazu X, Fors O, 2005).
Complications and restrictions associated with these methods include high computational complexity relative to the standard approaches; spectral content of small objects frequently lost in the fused images; and the frequent need for the user to tune suitable values for certain parameters such as thresholds (Li S, Kwok J.T, Wang Y, 2002). The growth of more elaborate wavelet-based fusion algorithms, for example the ridgelet, curvelet and contourlet transforms, may improve the results, though these newer schemes may introduce greater complexity in the computation and in the setting of parameters (Ventura F.N, Fonseca L.M.G, Santa Rosa A.N.C, 2002).
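A sketch of the substitutive wavelet scheme described above for a single band, assuming PyWavelets is available; the choice of wavelet ("db4") and decomposition depth are illustrative assumptions, not values given in the paper.

```python
import numpy as np
import pywt

def wavelet_fuse_band(ms_band: np.ndarray, pan: np.ndarray,
                      wavelet: str = "db4", levels: int = 2) -> np.ndarray:
    """Substitutive wavelet fusion sketch for one MS band.

    Keeps the MS approximation coefficients (low-frequency, spectral
    content) and replaces the detail coefficients (high-frequency,
    spatial content) with those of the PAN image. Assumes ms_band has
    been upsampled to the same shape as pan.
    """
    ms_coeffs = pywt.wavedec2(ms_band.astype(np.float64), wavelet, level=levels)
    pan_coeffs = pywt.wavedec2(pan.astype(np.float64), wavelet, level=levels)
    # Element 0 is the approximation; the rest are (cH, cV, cD) detail tuples.
    fused = [ms_coeffs[0]] + list(pan_coeffs[1:])
    return pywt.waverec2(fused, wavelet)
```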
(d) Machine Learning and Convolutional Neural Network

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed (Yang X.H, Jing J.L, Liu G, Hua L.Z, Ma D.W, 2007). In machine learning, a convolutional neural network (CNN) is a class of deep, feed-forward artificial neural networks that has successfully been applied to analyze visual imagery. CNNs have proven to be very effective in areas such as image recognition and classification. The general schematic diagram of a convolutional neural network is shown in Fig. 3. The input layer has several neurons, which take the characteristic factors extracted and normalized from the PAN image and the MS image. The activation of each neuron is a sigmoid function given by

$$ f(x) = \frac{1}{1 + e^{-x}}. $$

The hidden layer has numerous neurons and the output layer has one or more neurons. The ith neuron of the input layer connects with the jth neuron of the hidden layer by weight Wij, and the weight between the jth neuron of the hidden layer and the tth neuron of the output layer is Vjt (in this case t = 1). The weighting function is employed to simulate and distinguish the response relation between features of the fused image and the matching features from the original images (PAN image and MS image). The network model is specified as follows (P.S.Jagadeesh Kumar, Krishna Moorthy, 2013):

$$ Y = f\!\left( \sum_{j=1}^{q} V_j H_j - c \right), $$

where Y is the pixel value of the fused image exported from the neural network model, q is the number of hidden nodes (q = 8 here), Vj is the weight between the jth hidden node and the output node (in this case, there is only one output node), c is the threshold of the output node, and Hj is the value exported from the jth hidden node:

$$ H_j = f\!\left( \sum_{i=1}^{n} W_{ij}\, a_i - h_j \right), $$

where Wij is the weight between the ith input node and the jth hidden node, ai is the value of the ith input factor, n is the number of input nodes (n = 5), and hj is the threshold of the jth hidden node.

As the initial step of CNN-based data fusion, the two registered images are decomposed into numerous blocks of size M by N. Then, features of the matching blocks in the two original images are extracted, and the normalized feature vector fed to the neural network can be built. The features employed here to estimate the fusion result are generally spatial frequency, visibility, and edges. The following step is to select some vector samples to train the neural network (Zhang Y, 2004). A neural network is a universal function approximator that directly adapts to any nonlinear function defined by a representative set of training data. Once trained, the model can memorize a functional relation and be employed for further assessments. For these reasons, the CNN concept has been found to perform strongly in nonlinear models for multiple-sensor data fusion (P.S.Jagadeesh Kumar, 2013).
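The two equations above define a small feed-forward pass per block; the following NumPy sketch makes that pass explicit. The function names are ours, and the weights W, V and thresholds h, c are assumed to come from a prior training stage the paper does not detail here.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation f(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def fused_pixel(a, W, h, V, c):
    """Forward pass of the fusion network defined by the equations above.

    a : (n,) normalized feature vector from a PAN/MS block pair (n = 5).
    W : (n, q) input-to-hidden weights W_ij;  h : (q,) hidden thresholds.
    V : (q,) hidden-to-output weights V_j;    c : scalar output threshold.
    Returns Y, the fused pixel value for the block.
    """
    H = sigmoid(a @ W - h)     # H_j = f(sum_i W_ij a_i - h_j)
    return sigmoid(H @ V - c)  # Y   = f(sum_j V_j H_j - c)
```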
Fig. 3 Model of convolutional neural network based classification

An artificial neural network groups input patterns through simple learning (Zhou J, Civco D.L et al., 1998). The number of output neurons must be set prior to building the neural network model. A neural network can approximate an objective function to any specified accuracy if sufficient hidden units are provided (Zhang Y, 2008). The CNN-based fusion method inherits the pattern recognition capabilities of artificial neural networks, and the learning capability of neural networks makes it practicable to customize the image fusion method. Numerous applications have indicated that CNN-based methods have additional benefits over traditional statistical approaches, particularly when the input multi-sensor data are inadequate or very noisy. The approach often serves as an effective decision-level fusion tool because of its self-learning characteristics, particularly in land use and land cover classification (Camara G, Souza R.C.M, Freitas U.M, Garrido J, 1996).

4 Fused Image Quality Evaluation Methods

Several investigators have appraised diverse image fusion approaches using dissimilar image quality measures. Usually, the effectiveness of an image-fusion technique is assessed by comparing the resulting fused image with a reference image, which is presumed to be ideal. This assessment may be grounded on spectral and spatial features, and can be accomplished both qualitatively and quantitatively (G Palubinskas, 2015). Regrettably, the reference image is not always obtainable in practice; therefore, it is essential to simulate it or to perform a quantitative, blind estimation of the fused images. For measuring the quality of an image after fusion, some features must be defined. These comprise, for example, spatial and spectral resolution, amount of information, perceptibility, contrast, or details of the object of interest. Quality evaluation is application dependent, so dissimilar applications may require distinct aspects of image quality (Wei Z.G, Yuan J.H et al., 1999). Normally, image evaluation approaches can be divided into two groups: qualitative (subjective) and quantitative (objective) procedures. Qualitative approaches encompass visual comparison between a reference image and the fused image, whereas quantitative examination employs quality metrics that measure the spectral and spatial agreement between the multispectral and fused images.
(a) Qualitative Assessment

According to prior evaluation standards or specific experience, individual judgments or even grades can be assigned to the quality of an image. The interpreter studies the tone, contrast, saturation, sharpness, and texture of the fused images (Chavez P.S, Sides S.C, Anderson J.A, 1991). The concluding overall quality judgment can be produced by, for instance, a weighted mean constructed on the distinct rankings. The qualitative technique principally comprises the complete and comparative measures shown in Table 1. This technique depends on the expert's skill or bias, and some vagueness is involved. Qualitative measures cannot be captured by rigorous mathematical representations; their method is chiefly visual assessment.

TABLE I. QUALITATIVE ASSESSMENT FOR IMAGE QUALITY

Ranking | Complete Measure | Comparative Measure
A       | Outstanding      | Best
B       | Decent           | Better
C       | Reasonable       | Average
D       | Poor             | Lower
E       | Meager           | Lowest

(b) Quantitative Assessment

Typical quality metrics include: the average grey value, representing the intensity of an image; the standard deviation, information entropy and profile intensity curve, for quantifying the detail of fused images; and the bias and correlation coefficient, for computing the distortion between the original image and the fused image in terms of spectral data (Chen H, Varshney P.K, 2007). Let Fi and Ri, i ∈ (1, …, N), be the N bands of the fused and reference images, respectively. The following methods were used to determine the difference in spectral and global information between each band of the merged images, with a reference image and without a reference image.

(i) Method 1 (With reference image):

1. Correlation coefficient (CC) between the reference and the fused image, which ought to be as near to 1 as possible (Wang Z, Bovik A.C, 2002).

2. Difference between the means of the reference and the fused image (DM), in radiance as well as relative to the mean of the original. The smaller this difference, the better the spectral quality of the merged image; the difference value must therefore be as near to 0 as possible (Wei Z.G, Yuan J.H et al., 1999).

3. Standard deviation of the difference image (SSD), relative to the mean of the reference image, stated as a percentage. The lower its value, the better the spectral quality of the fused image (Zheng Y, Qin Z, 2009).

4. Universal Image Quality Indicator (UIQI), which can be computed as

$$ \mathrm{UIQI} = \frac{4\,\sigma_{RF}\,\mu_R\,\mu_F}{\left(\sigma_R^2 + \sigma_F^2\right)\left(\mu_R^2 + \mu_F^2\right)}, $$

where σRF is the covariance between the band of the reference image and the band of the fused image, and µ and σ are the mean and standard deviation of the respective images. The higher the UIQI index, the better the spectral quality of the fused image. It is recommended to use moving windows of various sizes to avoid errors due to spatial dependence of the index (Lillo-Saavedra M, Gonzalo C, 2006).
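A minimal sketch of the UIQI computed globally over a band pair; note the text recommends moving windows, which would apply this same formula per window and average the results.

```python
import numpy as np

def uiqi(ref: np.ndarray, fused: np.ndarray) -> float:
    """Universal Image Quality Indicator (Wang and Bovik, 2002).

    Combines loss of correlation, luminance distortion and contrast
    distortion; a value of 1.0 means the two images are identical.
    """
    x = ref.ravel().astype(np.float64)
    y = fused.ravel().astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))    # covariance between the bands
    return 4.0 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2))
```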
To evaluate the global spectral quality of the fused image, the following indices can be employed.

5. The relative average spectral error index (RASE) characterizes the average performance over all the bands of the fused image (Chen Y, Blum R.S, 2009):

$$ \mathrm{RASE} = \frac{100}{\mu} \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left[ \mathrm{DM}^2(R_i, F_i) + \mathrm{SSD}^2(R_i, F_i) \right] }, $$

where µ is the mean radiance of the N spectral bands (Ri) of the reference image, and DM and SSD are defined above.

6. The relative global dimensional synthesis error (ERGAS) can be computed as

$$ \mathrm{ERGAS} = 100\,\frac{h}{l} \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \frac{ \mathrm{DM}^2(R_i, F_i) + \mathrm{SSD}^2(R_i, F_i) }{ \mu_i^2 } }, $$

where h and l are the resolutions of the high and low spatial resolution images, respectively, and µi is the mean radiance of each spectral band involved in the fusion process. The lower the values of the RASE and ERGAS indices, the better the spectral quality of the fused images (Alparone L, Aiazzi B et al., 2004).

7. An effective fusion scheme must permit the injection of a high degree of the spatial detail of the PAN image into the MS image (G Palubinskas, 2015); visually, the added detail can be observed. The average gradient index (AG) may be employed for spatial quality evaluation. AG describes the changing character of image texture and the detail information (Liu Z, Forsyth D.S et al., 2008); higher values of the AG index correspond to better spatial resolution. The AG index of the fused image at each band can be computed as

$$ \mathrm{AG} = \frac{1}{K L} \sum_{x=1}^{K} \sum_{y=1}^{L} \sqrt{ \frac{1}{2} \left( \left[ \frac{\partial F(x,y)}{\partial x} \right]^2 + \left[ \frac{\partial F(x,y)}{\partial y} \right]^2 \right) }, $$

where K and L are the numbers of lines and columns of the fused image F.
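A sketch of indices 1 to 3 and 5 to 7 under one common reading of the formulas above; it uses the fact that the per-band mean squared error equals DM² + SSD² in absolute units, and approximates the partial derivatives in AG by finite differences. Function names and the `h_over_l` parameterization are our conventions.

```python
import numpy as np

def cc(ref, fused):
    """Correlation coefficient between a reference and a fused band."""
    return np.corrcoef(ref.ravel(), fused.ravel())[0, 1]

def dm(ref, fused):
    """Difference of means, relative to the reference mean."""
    return (ref.mean() - fused.mean()) / ref.mean()

def ssd(ref, fused):
    """Std. dev. of the difference image, as a % of the reference mean."""
    return 100.0 * (ref.astype(np.float64) - fused).std() / ref.mean()

def rase(ref_bands, fused_bands):
    """Relative average spectral error (%) over N band pairs."""
    mu = np.mean([b.mean() for b in ref_bands])
    mse = np.mean([np.mean((r.astype(np.float64) - f) ** 2)
                   for r, f in zip(ref_bands, fused_bands)])
    return 100.0 / mu * np.sqrt(mse)

def ergas(ref_bands, fused_bands, h_over_l):
    """ERGAS index; h_over_l is the high/low resolution ratio (e.g. 1/4)."""
    terms = [np.mean((r.astype(np.float64) - f) ** 2) / r.mean() ** 2
             for r, f in zip(ref_bands, fused_bands)]
    return 100.0 * h_over_l * np.sqrt(np.mean(terms))

def avg_gradient(img):
    """Average gradient: mean magnitude of local intensity change."""
    f = img.astype(np.float64)
    dx = np.diff(f, axis=1)[:-1, :]   # horizontal finite differences
    dy = np.diff(f, axis=0)[:, :-1]   # vertical finite differences
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))
```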
(ii) Method 2 (Without reference image):

Image fusion rarely has a standard reference image available (C. Zhang, J. Pan, S. Chen et al., 2016). Traditional image-processing measures that require a reference, such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and normalized least-squares error (NSLE), therefore cannot be employed for objective evaluation of the experimental results (L. Li, Y. Zhou, W. Lin, J. Wu et al., 2016). The commonly used no-reference measures for analysing the experimental results are the information entropy (HF), standard deviation (SF) and cross entropy (C).

1. The information entropy HF can be computed as

$$ H_F = -\sum_{i=0}^{N-1} P_F(i)\, \log_2 P_F(i), $$

where PF is the estimated distribution of the fused-image pixel values and N is the total number of grey levels of the fused image. The information entropy HF represents the amount of information contained in an image; the larger the value of HF, the richer the image information and the better the visual effect (Shi W, Zhu C, Tian Y, Nichol J, 2005).

2. The standard deviation SF can be computed as

$$ S_F = \sqrt{ \sum_{i=0}^{N-1} \left( i - \mathrm{Av}_F \right)^2 P_F(i) }, $$

where AvF is the mean grey value of the image pixels and PF is the distribution of the image pixel values. The standard deviation SF reflects the image contrast; the larger the value of SF, the stronger the image contrast and the better the visual effect (Wei Z.G, Yuan J.H et al., 1999).

3. The cross-entropy C can be computed as

$$ C = \sum_{i} P_i \log_2 \frac{P_i}{Q_i}, $$

where Pi is the grey-level distribution of the source image and Qi is the grey-level distribution of the fused image. Cross entropy C measures the pixel-level difference between the two images: when the image difference is small, more information has been extracted, and a smaller cross entropy C indicates a better image fusion result.
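A sketch of the three no-reference measures, assuming 8-bit imagery so that the grey-level histogram has 256 bins; the `eps` guard and the skipping of empty bins (taking 0·log 0 = 0) are implementation choices of ours.

```python
import numpy as np

def _hist(img, levels=256):
    """Normalized grey-level histogram P(i) of an integer-valued image."""
    h, _ = np.histogram(img, bins=levels, range=(0, levels))
    return h / h.sum()

def entropy(img, levels=256):
    """Information entropy H_F = -sum P(i) log2 P(i)."""
    p = _hist(img, levels)
    p = p[p > 0]                         # skip empty bins (0 log 0 := 0)
    return -np.sum(p * np.log2(p))

def std_dev(img, levels=256):
    """Histogram-weighted standard deviation around the mean grey level."""
    p = _hist(img, levels)
    i = np.arange(levels)
    mean = np.sum(i * p)
    return np.sqrt(np.sum((i - mean) ** 2 * p))

def cross_entropy(src, fused, levels=256, eps=1e-12):
    """Cross entropy C = sum P(i) log2(P(i) / Q(i)), source vs. fused."""
    p, q = _hist(src, levels), _hist(fused, levels)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / (q[mask] + eps)))
```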
5 Bucolic and Farming Region Classification

Farming is the backbone of a national economy and a key sector for safeguarding food security. Timely availability of information on agriculture is significant for taking informed decisions on food security issues (P.S.Jagadeesh Kumar, J.Nedumaan, 2015). Many nations in the world practice space technology and land-based observations for generating periodic updates on crop production and take efforts to accomplish representative agriculture, as shown in Fig. 4. Satellite-based optical and radar imagery are extensively employed in monitoring agriculture; radar imagery is particularly used during the rainy season (Nikolakopoulos K.G, 2005). Joint usage of geospatial tools with crop production data and the observation network supports sound crop yield predictions, scarcity evaluation and monitoring for appropriate agricultural needs.

Fig. 4 Farming data and monitoring stratagem

Urbanization is proceeding at a quick pace owing to the increased population: the share of the population living in urban areas increased from 30% in 1992 to more than 70% in 2016. Comprehensive urbanization has had a dramatic effect on the environment and the welfare of civic inhabitants. Studies that measure this progression and its influences are imperative for taking corrective actions and designing improved urbanization strategies for the future (P.S.Jagadeesh Kumar, 2005). To realize these goals, comprehensive urban land cover/use charts are essential. Presently, land cover information with resolutions varying from low to high is the principal data source used in studies such as bucolic development simulation, estimation of bucolic public health, and calculation of bucolic ecosystem amenities. Nevertheless, to study topics such as housing establishment, urban transport, job accessibility, suburban movement, and land use patterns, comprehensive evidence on urban land usage is desirable owing to the difference between the two notions: land use is a conceptual notion that explains human activities and their usage of land, while land cover is a physical portrayal of the land surface (Nikolakopoulos K.G, 2005). Land cover can be used to deduce land usage, but the two notions are not completely interchangeable.
Nonetheless, high-resolution urban land usage charts covering enormous spatial ranges are comparatively sporadic, since the local information and the methods vital for developing these kinds of charts are usually not accessible, principally for emerging areas (Yang X.H et al., 2008). Furthermore, bucolic land usage charts are generally fashioned by interpreting aerial photographs, field analysis outcomes, and supporting materials such as assessment records or statistical information. The evolving nature of urban growth frequently outpaces the intermittent exertions to update prevailing land usage records, resulting in out-of-date charts (Zhang Y, 2002). To make the condition worse, high-resolution land usage charts are normally kept out of the reach of the public. Consequently, acquiring land usage charts that capture the pace of urban growth in a timely and exact way at a comparatively large spatial scale is a serious challenge in urban studies, both in India and in other nations facing analogous circumstances.

Fig. 5 Bucolic and Farming Region, Salem, Tamil Nadu, India in 1996

Fig. 6 Bucolic and Farming Region, Salem, Tamil Nadu, India in 2016

Satellite-based remote sensing embraces certain benefits in monitoring the subtleties of urban land usage because of its huge spatial coverage, high temporal resolution, and widespread accessibility. Fig. 5 and Fig. 6 show the bucolic and farming region classification of Salem district, Tamil Nadu, India in the years 1996 and 2016,
respectively. The data clearly reveal a drastic transformation in the bucolic and farming landscape of Salem province over the last two decades. Bucolic and farming region classification at consistent intervals has been supported at sub-national scales, for example geoclimatic regions, biogeographic provinces, atmospheric sub-divisions, bioclimatic regions, and changed watersheds (Zhang Y, 1999). National-level mapping has received major impetus with the accessibility of multi-spectral and multi-resolution remote sensing information having synoptic and temporal coverage. This initiative met the abrupt national requirements for scheduling and handling natural resources, agriculture development in the catchment, afforestation, eco-development, and the planning of watershed expansion and irrigation tactics (P.S.Jagadeesh Kumar, J.Tisa, J.Lepika, 2017). Most of these initiatives were one-time exertions and differ in terms of project ideas, classification systems, charting procedures and satellite data quality. Presently, landscape dynamics and climate studies require data on phenology and forest indices; variation of cropland, unused, sterile and wasteland; charting of non-permeable surfaces such as built-up zones; and features such as dams, mining, aquaculture, and marshlands (Pinho C.M.D, Silva F.C, Fonseca L.M.G et al., 2008). To demarcate these classes with satisfactory precision, there is a requirement to use satellite data of high and medium spatial resolution. Moreover, time-series charts must be consistent with a universally acknowledged land cover classification system to serve as a substitute for climate variables. Additionally, the physical features of urban-focussed sections, which might help to determine land usage at the segment level, were rarely involved in these studies. There is strong potential in combining the strengths of these two data sources, i.e., assimilating social data with remotely sensed data, to gain improved insights into urban land usage patterns.

6 Implementation

The schematic block diagram of the implementation and analysis for effective classification of bucolic and farming regions using image fusion techniques is shown in Fig. 7. The high-resolution panchromatic (PAN) image and the low-resolution multispectral (MS) images were used to obtain the fused images. Different image fusion algorithms, namely the IHS transform, the PCA method, the wavelet transform and machine learning based convolutional neural networks (CNN), were applied to obtain the fused images. In order to assess the fused images, two methods were exercised, namely Method 1 (with reference image) and Method 2 (without reference image). In Method 1, Dataset 1, consisting of the PAN image and MS images with the corresponding reference image, is employed. In Method 2, Dataset 2, consisting of the PAN image and MS images without reference images, is employed. Both datasets comprise various satellite images such as Landsat Thematic Mapper, SPOT, IKONOS, WorldView, SeaStar and GeoEye imagery.
Once the fused images were obtained from the distinct image fusion algorithms, they were evaluated by means of various fused-image quality assessment metrics, namely the correlation coefficient (CC), difference between the means of the reference and the merged images (DM), standard deviation of the difference image (SSD), universal image quality indicator (UIQI), relative average spectral error index (RASE), relative global dimensional synthesis error (ERGAS) and average gradient index (AG) for Method 1, and information entropy (HF), standard deviation (SF) and cross entropy (C) for Method 2, as portrayed in Table II and Table III.
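The evaluation workflow just described can be summarized schematically. The sketch below is our outline, not the authors' code; `fusers` and `metrics` are assumed to be dictionaries of the hypothetical helpers sketched in Sections 3 and 4.

```python
def evaluate_fusion(ms, pan, ref, fusers, metrics):
    """Schematic driver: fuse with each algorithm, score with each metric.

    fusers  : dict of name -> callable(ms, pan) returning a fused image
              (hypothetical wrappers around the Section 3 sketches).
    metrics : dict of name -> callable(ref, fused) returning a float
              (the Section 4 quality-index sketches).
    """
    results = {}
    for algo, fuse in fusers.items():
        fused = fuse(ms, pan)
        results[algo] = {name: score(ref, fused)
                         for name, score in metrics.items()}
    return results
```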
The fused images from Method 1, Dataset 1 and Method 2, Dataset 2 were individually subjected to an object-oriented classification procedure for distinguishing the bucolic and farming regions. The object-oriented method takes form, texture and spectral information into account, corresponding to colour, shape, smoothness, compactness and a scale parameter. The classification begins with a primary phase of clustering neighbouring pixels into meaningful regions, which can then be used in the subsequent phase of classification. Such segmentation and topology generation must be set according to the resolution and the scale of the expected objects. This can be done at multiple resolutions, thus permitting numerous levels of object groups to be distinguished.

Classification grounded on pixel-based approaches to image analysis is limited: typically, such approaches have substantial difficulty dealing with high-resolution images, they yield inconsistent classification results, and they fall short of expectations in extracting the object of interest. In contrast, for high-resolution fused images the object-oriented classification method yields higher accuracy than pixel-based classification, minimum distance classification, parallelepiped classification and maximum likelihood classification. Once the classifications of bucolic and farming regions were done, they were measured for their efficiency across the distinct image fusion algorithms. Overall accuracy and the kappa index were the metrics used to measure the effectiveness of classification for the distinct image fusion algorithms. Kappa is a measure of agreement between two entities; it is always less than or equal to 1. A value of 1 indicates perfect classification and values less than 1 indicate less-than-perfect classification. In rare situations kappa can be negative, a sign that the two observers agreed less than would be expected just by chance. The effectiveness of the classification results is presented in Table IV and Table V using overall accuracy and the kappa index for Method 1, Dataset 1 and Method 2, Dataset 2, respectively.

TABLE II. COMPARISON OF IMAGE FUSION ALGORITHMS FOR FUSED IMAGE QUALITY (METHOD 1, DATASET 1)
(Note: all metric values are calculated with respect to the reference image)

Fusion Algorithm           | CC     | DM     | SSD    | UIQI   | RASE   | ERGAS  | AG     | Qualitative Assessment
IHS Transform              | 0.6342 | 0.0546 | 76.62% | 0.0086 | 0.1144 | 3.9452 | 0.0813 | Average
PCA Method                 | 0.5256 | 0.0825 | 79.12% | 0.0096 | 0.1362 | 3.2638 | 0.0714 | Average
WT Transform               | 0.7694 | 0.0086 | 56.32% | 0.0516 | 0.0582 | 2.6934 | 0.0891 | Better
Machine Learning based CNN | 0.9864 | 0.0007 | 38.19% | 0.0964 | 0.0357 | 1.9786 | 0.1079 | Best

CC – Correlation coefficient; DM – Difference between the means of the reference and the merged images; SSD – Standard deviation of the difference image; UIQI – Universal image quality indicator; RASE – Relative average spectral error index; ERGAS – Relative global dimensional synthesis error; AG – Average gradient index.
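The overall accuracy and kappa values reported in Tables II to V can be derived from a classification confusion matrix; a minimal sketch follows, with a hypothetical two-class matrix purely for illustration (not data from the paper).

```python
import numpy as np

def accuracy_and_kappa(conf: np.ndarray):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    conf[i, j] = number of samples of true class i assigned to class j.
    """
    n = conf.sum()
    po = np.trace(conf) / n                                   # observed agreement
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / n ** 2 # chance agreement
    return po, (po - pe) / (1.0 - pe)

# Hypothetical 2-class (bucolic vs. farming) confusion matrix.
cm = np.array([[420, 35],
               [28, 517]])
oa, k = accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.2%}, kappa = {k:.2f}")
```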
Fig. 7 Schematic block diagram of implementation and analysis of bucolic and farming region classification using image fusion
TABLE III. COMPARISON OF IMAGE FUSION ALGORITHMS FOR FUSED IMAGE QUALITY (METHOD 2, DATASET 2)

Fusion Algorithm           | HF    | SF     | C      | Qualitative Assessment
IHS Transform              | 5.297 | 16.563 | 0.7561 | Average
PCA Method                 | 5.686 | 17.821 | 0.7279 | Average
WT Transform               | 6.252 | 19.546 | 0.5467 | Better
Machine Learning based CNN | 7.127 | 23.698 | 0.4296 | Best

HF – Information entropy; SF – Standard deviation; C – Cross entropy.

TABLE IV. COMPARISON OF BUCOLIC AND FARMING REGION CLASSIFICATION (METHOD 1, DATASET 1)
(Note: all metric values are calculated with respect to the reference image)

Fusion Algorithm           | Overall Accuracy | Kappa Index* | Qualitative Assessment
IHS Transform              | 71.93%           | 0.54         | Average
PCA Method                 | 78.64%           | 0.57         | Average
WT Transform               | 83.89%           | 0.73         | Better
Machine Learning based CNN | 90.43%           | 0.92         | Best

TABLE V. COMPARISON OF BUCOLIC AND FARMING REGION CLASSIFICATION (METHOD 2, DATASET 2)

Fusion Algorithm           | Overall Accuracy | Kappa Index* | Qualitative Assessment
IHS Transform              | 71.56%           | 0.52         | Average
PCA Method                 | 76.89%           | 0.56         | Average
WT Transform               | 86.52%           | 0.77         | Better
Machine Learning based CNN | 93.23%           | 0.89         | Best

*Kappa interpretation: poor classification = less than 0.20; fair = 0.20 to 0.40; moderate = 0.40 to 0.60; good = 0.60 to 0.80; very good = 0.80 to 1.00.
7 Results and Discussion

Fig. 8 and Fig. 9 illustrate remote sensing image fusion using Method 1, Dataset 1 and Method 2, Dataset 2, respectively. The correlation coefficient (CC) of the fused image with respect to the reference image for the four image fusion algorithms is shown in Table II. CC should be as near as possible to 1 for the best fused image quality. In Method 1, Dataset 1, the CC of machine learning based CNN is observed to be 0.9864, for WT it is 0.7694, for the PCA method it is 0.5256 and for the IHS transform it is 0.6342. Clearly, CNN has a higher correlation coefficient than the other fusion algorithms, marking it as having the best fused image quality. The same can be observed in Fig. 10: the spatial quality of CNN is very effective. The finest quality corresponds to a higher correlation coefficient, which means the object appears white on the quality map; the darker the object appears, the poorer the spatial quality and the lower the correlation coefficient.

The difference between the means of the reference and the fused images (DM) should be as near as possible to 0 for the best spectral quality of the fused image. In Method 1, Dataset 1, the DM of machine learning based CNN is observed to be 0.0007, for WT it is 0.0086, for the PCA method it is 0.0825 and for the IHS transform it is 0.0546. From this observation, it can be affirmed that CNN has a DM value very close to zero; hence, CNN proves to have the best fidelity compared with the other fusion algorithms. The standard deviation of the difference image relative to the mean of the reference image (SSD) should be lower for improved spectral quality of the fused image. In Method 1, Dataset 1, the SSD of machine learning based CNN is observed to be 38.19%, for WT it is 56.32%, for the PCA method it is 79.12% and for the IHS transform it is 76.62%. CNN has the lowest SSD percentage, again attesting to better spectral quality than the other fusion algorithms. The Universal Image Quality Indicator (UIQI) should be higher for better spectral quality of the fused image. In Method 1, Dataset 1, the UIQI index of machine learning based CNN is observed to be 0.0964, for WT it is 0.0516, for the PCA method it is 0.0096 and for the IHS transform it is 0.0086. CNN has a higher UIQI index than the other fusion algorithms, showing the best quality of fused image.

The relative average spectral error index (RASE) characterizes the average performance of a method for all bands: multispectral images have numerous bands, and the relative spectral error between the bands of the fused image and the reference image establishes the RASE. The RASE should be lower for the best spectral quality of the fused image. In Method 1, Dataset 1, the RASE of machine learning based CNN is observed to be 0.0357, for WT it is 0.0582, for the PCA method it is 0.1362 and for the IHS transform it is 0.1144. From this observation, it can be affirmed that CNN has a RASE value lower than the other fusion algorithms, showing the best fused image quality. The relative global dimensional synthesis error (ERGAS) should likewise be lower for the best spectral quality of the fused image. In Method 1, Dataset 1, the ERGAS of machine learning based CNN is observed to be 1.9786, for WT it is 2.6934, for the PCA method it is 3.2638 and for the IHS transform it is 3.9452.
From this observation, it can be stated that CNN has an ERGAS value lower than the other fusion algorithms, displaying the best fused image quality. The corresponding effect can be observed in Fig. 12: the global quality of CNN is very effective. The average gradient index (AG) describes the changing character of the image texture and the detail content; higher values of the AG index correspond to higher spatial resolution. CNN has the highest value of AG, yet again confirming the best spatial quality relative to the other fusion algorithms.
The information entropy (HF) of the fused image without a reference image for the various image fusion algorithms is shown in Table III. Higher values of HF indicate that the image information is richer and the visualization is better. In Method 2, Dataset 2, the HF of machine learning based CNN is observed to be 7.127, for WT it is 6.252, for the PCA method it is 5.686 and for the IHS transform it is 5.297. From this observation, it can be stated that CNN has the highest value of HF; hence, CNN proves to have the best visualization compared with the other fusion algorithms. The same can be observed in Fig. 11: the spatial quality of CNN is very effective. The standard deviation SF reflects the image contrast; with higher values of SF, the image contrast is stronger and the visualization is better. In Method 2, Dataset 2, the SF of machine learning based CNN is observed to be 23.698, for WT it is 19.546, for the PCA method it is 17.821 and for the IHS transform it is 16.563. CNN has a higher SF than the other fusion algorithms, showing the best visual quality of the fused image. Cross entropy C measures the pixel difference of two images: when the image difference is small, a greater quantity of information has been extracted, so lower values of C indicate richer image information and better image quality. In Method 2, Dataset 2, the C of machine learning based CNN is observed to be 0.4296, for WT it is 0.5467, for the PCA method it is 0.7279 and for the IHS transform it is 0.7561. From this observation, it can be stated that CNN has the lowest value of C. The corresponding effect can be observed in Fig. 13: the global quality of CNN is very effective. Consequently, CNN is confirmed to have the best fused image quality compared with the other image fusion algorithms.

Fig. 14 and Fig. 15 portray the classification of the bucolic and farming region by the object-oriented method through the various fusion algorithms, with and without a reference image, using Method 1, Dataset 1 and Method 2, Dataset 2, respectively. The overall accuracy and kappa index of the bucolic and farming region classification are shown in Table IV and Table V, respectively. The overall accuracy reflects the efficacy of the region classification: the higher the accuracy, the better the classification. In Method 1, Dataset 1, the overall accuracy of machine learning based CNN is observed to be 90.43%, for WT it is 83.89%, for the PCA method it is 78.64% and for the IHS transform it is 71.93%. In Method 2, Dataset 2, the overall accuracy of machine learning based CNN is observed to be 93.23%, for WT it is 86.52%, for the PCA method it is 76.89% and for the IHS transform it is 71.56%. From this observation, it can be stated that in both methods CNN has the highest efficacy of bucolic and farming region classification. Kappa is a measure of agreement between two entities; it is always less than or equal to 1, with a value of 1 indicating perfect classification and values less than 1 indicating less-than-perfect classification. In Method 1, Dataset 1, the kappa index of machine learning based CNN is observed to be 0.92, for WT it is 0.73, for the PCA method it is 0.57 and for the IHS transform it is 0.54. In Method 2, Dataset 2, the kappa index of machine learning based CNN is observed to be 0.89, for WT it is 0.77, for the PCA method it is 0.56 and for the IHS transform it is 0.52.
From this observation, it can be stated that in both methods CNN has a kappa value closer to 1 than the other image fusion algorithms in the classification of the bucolic and farming region using the object-oriented method. From the above observations, it can be evidently and unanimously concluded that the machine learning based convolutional neural network provides the best image fusion quality with respect to spatial quality, spectral quality and global quality, both with and without a reference image. Consequently, only with the best fused image is a superlative classification of the bucolic and farming region of a remote sensing image practicable.
Fig. 8 Illustration of remote sensing image fusion using Method 1, Dataset 1. (a) PAN image, QuickBird. (b) MS image, QuickBird. (c1) Image fused by IHS transform. (c2) Image fused by PCA method. (c3) Image fused by wavelet transform. (c4) Image fused by machine learning based CNN.
Fig. 9 Illustration of remote sensing image fusion using Method 2, Dataset 2. (a) PAN image, GeoEye. (b) MS image, GeoEye. (c1) Image fused by IHS transform. (c2) Image fused by PCA method. (c3) Image fused by wavelet transform. (c4) Image fused by machine learning based CNN.
Fig. 10 Spatial quality of Fig. 8 using Method 1, Dataset 1. (a) Image fused by IHS transform. (b) Image fused by PCA method. (c) Image fused by wavelet transform. (d) Image fused by machine learning based CNN.

Fig. 11 Spatial quality of Fig. 9 using Method 2, Dataset 2. (a) Image fused by IHS transform. (b) Image fused by PCA method. (c) Image fused by wavelet transform. (d) Image fused by machine learning based CNN.
Fig. 12 Global quality of Fig. 8 using Method 1, Dataset 1. (a) Image Fused by IHS Transform. (b) Image Fused by PCA Method. (c) Image Fused by Wavelet Transform. (d) Image Fused by Machine Learning Based CNN.
Fig. 13 Global quality of Fig. 9 using Method 2, Dataset 2. (a) Image Fused by IHS Transform. (b) Image Fused by PCA Method. (c) Image Fused by Wavelet Transform. (d) Image Fused by Machine Learning Based CNN.
Fig. 14 Bucolic and Farming Region Classification by Object Oriented Method using Method 1, Dataset 1. (a) IHS Transform. (b) PCA Method. (c) Wavelet Transform. (d) Machine Learning Based CNN.
Fig. 15 Bucolic and Farming Region Classification by Object Oriented Method using Method 2, Dataset 2. (a) IHS Transform. (b) PCA Method. (c) Wavelet Transform. (d) Machine Learning Based CNN.
8 Conclusion

Image data fusion has developed into a valuable tool in remote sensing for assimilating the best features of each sensor's data involved in the processing. Among the several image fusion techniques, the most commonly used approaches, namely the IHS transform, PCA and the wavelet transform, were tested against the proposed machine learning based CNN for the quality of the fused PAN and MS images, with and without a reference image, in the classification of the bucolic and farming region using the object oriented classification method. Qualitative and quantitative measures such as the correlation coefficient (CC), the difference between the means of the reference and merged images (DM), the standard deviation of the difference image (SSD), the universal image quality indicator (UIQI), the relative average spectral error index (RASE), the relative dimensionless global error in synthesis (ERGAS) and the average gradient index (AG) were evaluated to examine the quality of the fused image with a reference image. Likewise, quantitative metrics such as information entropy (HF), standard deviation (SF) and cross entropy (C) were evaluated to investigate the quality of the fused image without a reference image. In both cases, panchromatic and multispectral remote sensing image fusion using the machine learning based convolutional neural network proved more effective in the classification of the bucolic and farming region than the IHS transform, the PCA method and the wavelet transform.

The IHS transform and PCA are observed to have lower complexity and faster processing times, but their most significant problem is colour distortion. Wavelet based schemes perform better than those approaches in minimizing colour distortion, but they normally incur greater complexity in computation and parameter setting. An additional challenge for the existing methods is the capability to process multispectral, hyperspectral and superspectral satellite sensor data. Machine learning appears to be one feasible way to handle the high-dimensional nature of satellite sensor data. From the above explanations, it can be clearly and consistently concluded that the machine learning based convolutional neural network affords the best image fusion quality with respect to spatial quality, spectral quality and global quality, with and without a reference image. Consequently, only with the best fused image can superlative classification of the bucolic and farming region of a remote sensing image be practically effective.

In the future, diverse fusion methods can be combined within a single framework. Every fusion scheme has its own benefits and limits, and the blend of several fusion schemes can be a useful strategy for achieving better outcomes. Nevertheless, the choice and design of those fusion schemes are relatively subjective and frequently hinge on the user's experience. Further research is essential on aspects such as the design of a wide-ranging framework for combining diverse fusion procedures; the development of innovative approaches that can merge pixel-level, feature-level and decision-level image fusion; and the development of automatic quality assessment techniques for the evaluation of fusion outcomes. Automatic quality assessment is extremely necessary to appraise the probable benefits of fusion, to determine an optimal parameter setting for a given fusion system, and to compare outcomes attained with diverse algorithms.
Nevertheless, to date, no automatic solution has been realized that consistently yields high quality fusion for diverse data sets. It is anticipated that fusing data from several independent sensors will open the possibility of performance beyond what can be achieved by either sensor alone, and will reduce susceptibility to sensor-specific errors and deployment factors.
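To make one of the with-reference metrics listed above concrete, the following is a minimal global-form sketch of the universal image quality index (UIQI) of Wang and Bovik (2002). In practice the index is usually computed over sliding windows and the window scores averaged; this sketch omits that step and is an illustrative assumption, not the authors' evaluation code.

```python
# Minimal sketch of the universal image quality index (UIQI),
# Q = 4*s_xy*mx*my / ((vx + vy) * (mx^2 + my^2)),
# which equals 1 only when the fused image matches the reference exactly.
import numpy as np

def uiqi(ref, fused):
    """Global-form UIQI between a reference and a fused image."""
    x = ref.astype(float).ravel()
    y = fused.astype(float).ravel()
    mx, my = x.mean(), y.mean()            # mean luminance of each image
    vx, vy = x.var(), y.var()              # variance (contrast) of each image
    s_xy = np.mean((x - mx) * (y - my))    # covariance (structural agreement)
    return 4 * s_xy * mx * my / ((vx + vy) * (mx**2 + my**2))
```

The index combines loss of correlation, luminance distortion and contrast distortion in a single score in [-1, 1], which is why it is widely used alongside CC, RASE and ERGAS for with-reference fusion assessment.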
References

Alparone L, Aiazzi B et al. (2004) ‘A Global Quality Measurement of Pan-Sharpened Multispectral Imagery’, IEEE Geoscience and Remote Sensing Letters, Vol.1, No.4, pp.313-317.
Alparone L, Wald L et al. (2007) ‘Comparison of Pansharpening Algorithms: Outcome of the 2006 GRS-S Data-Fusion Contest’, IEEE Transactions on Geoscience and Remote Sensing, Vol.45, No.10, pp.3012-3021.
Amolins K, Zhang Y, Dare P. (2007) ‘Wavelet based image fusion techniques – An introduction, review and comparison’, ISPRS Journal of Photogrammetry and Remote Sensing, Vol.62, No.4, pp.249-263.
Baatz A. (2000) ‘Multiresolution Segmentation – an optimization approach for high quality multi-scale image segmentation’, Angewandte Geographische Informationsverarbeitung XII, Wichmann-Verlag, Heidelberg, Vol.12, pp.12-23.
Cao W, Li B, Zhang Y. (2003) ‘A remote sensing image fusion method based on PCA transform and wavelet packet transform’, Proceedings of the 2003 International Conference on Neural Networks and Signal Processing, Vol.2, pp.976-981.
Carper W, Lillesand T, Kiefer R. (1990) ‘The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multispectral image data’, Photogrammetric Engineering and Remote Sensing, Vol.56, No.4, pp.459-467.
Chai Y, Li H.F, Qu J.F. (2010) ‘Image fusion scheme using a novel dual-channel PCNN in lifting stationary wavelet domain’, Optics Communications, Vol.283, No.19, pp.3591-3602.
Chavez P.S, Kwakteng A.Y. (1989) ‘Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis’, Photogrammetric Engineering and Remote Sensing, Vol.55, No.3, pp.339-348.
Chavez P.S, Sides S.C, Anderson J.A. (1991) ‘Comparison of three different methods to merge multiresolution and multispectral data: Landsat and SPOT panchromatic’, Photogrammetric Engineering and Remote Sensing, Vol.57, No.3, pp.295-303.
Chen T, Zhang J, Zhang Y. (2005) ‘Remote sensing image fusion based on ridgelet transform’, Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Vol.2, pp.1150-1153.
Chen H, Varshney P.K. (2007) ‘A human perception inspired quality metric for image fusion based on regional information’, Information Fusion, Vol.8, No.2, pp.193-207.
Chen Y, Blum R.S. (2009) ‘A new automated quality assessment algorithm for image fusion’, Image and Vision Computing, Vol.27, No.10, pp.1421-1432.
Chibani Y, Houacine A. (2000) ‘On the use of the redundant wavelet transform for multisensor image fusion’, Proceedings of the 7th IEEE International Conference on Electronics, Circuits and Systems, Vol.1, pp.442-445.
Chibani Y et al. (2003) ‘Redundant versus orthogonal wavelet decomposition for multisensor image fusion’, Pattern Recognition, Vol.36, No.4, pp.879-887.
Choi M, Kim R.Y, Nam M.R, Kim H.O. (2005) ‘Fusion of multispectral and panchromatic satellite images using the curvelet transform’, IEEE Geoscience and Remote Sensing Letters, Vol.2, No.2, pp.136-140.
Choi M. (2006) ‘A new intensity-hue-saturation fusion approach to image fusion with a tradeoff parameter’, IEEE Transactions on Geoscience and Remote Sensing, Vol.44, No.6, pp.1672-1682.
Camara G, Souza R.C.M, Freitas U.M, Garrido J. (1996) ‘Integrating remote sensing and GIS by object-oriented data modeling’, Computers and Graphics, Vol.20, No.3, pp.395-403.
C. Zhang, J. Pan, S. Chen et al. (2016) ‘No reference image quality assessment using sparse feature representation in two dimensions spatial correlation’, Neurocomputing, Vol.173, pp.462-470.
Dai F.C, Lee C.F. (2002) ‘Landslide characteristics and slope instability modeling using GIS, Lantau Island, Hong Kong’, Geomorphology, Vol.42, No.3-4, pp.213-228.
Fonseca L.M.G, Prasad G.S.S.D, Mascarenhas N.D.A. (1993) ‘Combined interpolation restoration of Landsat images through FIR filter design techniques’, International Journal of Remote Sensing, Vol.14, No.13, pp.2547-2561.
Fonseca et al. (2008) ‘Multitemporal image registration based on multiresolution decomposition’, Revista Brasileira de Cartografia, Vol.60, No.3, pp.271-286.
Garguet-Duport B, Girel J, Chassery J.M, Pautou G. (1996) ‘The use of multiresolution analysis and wavelets transform for merging SPOT panchromatic and multispectral image data’, Photogrammetric Engineering and Remote Sensing, Vol.62, No.9, pp.1057-1066.
Garzelli A et al. (2005) ‘Interband structure modeling for pan-sharpening of very high-resolution multispectral images’, Information Fusion, Vol.6, No.3, pp.213-224.
Gonzalez-Audicana M, Saleta J et al. (2004) ‘Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition’, IEEE Transactions on Geoscience and Remote Sensing, Vol.42, No.6, pp.1291-1299.
Gonzalez-Audicana M, Otazu X, Fors O, Seco A. (2005) ‘Comparison between Mallat’s and the ‘à trous’ discrete wavelet transform based algorithms for the fusion of multispectral and panchromatic images’, International Journal of Remote Sensing, Vol.26, No.3, pp.595-614.
G. Palubinskas. (2015) ‘Joint quality measure for evaluation of pansharpening accuracy’, Remote Sensing, Vol.7, No.7, pp.9292-9310.
Guo Q, Chen S, Leung H, Liu S. (2010) ‘Covariance intersection based image fusion technique with application to pan-sharpening in remote sensing’, Information Sciences, Vol.180, No.18, pp.3434-3443.
Ioannidou S, Karathanassi V. (2007) ‘Investigation of the Dual-Tree Complex and Shift Invariant Discrete Wavelet Transforms on Quickbird Image Fusion’, IEEE Geoscience and Remote Sensing Letters, Vol.4, No.4, pp.166-170.
Jing L, Cheng Q. (2009) ‘Two improvement schemes of PAN modulation fusion methods for spectral distortion minimization’, International Journal of Remote Sensing, Vol.30, No.8, pp.2119-2131.
Laporterie-Dejean F, de Boissezon H, Flouzat G, Lefevre-Fonollosa M.J. (2005) ‘Thematic and statistical evaluations of five panchromatic/multispectral fusion methods on simulated PLEIADES-HR images’, Information Fusion, Vol.6, No.3, pp.193-212.
Li S, Kwok J.T, Wang Y. (2002) ‘Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images’, Information Fusion, Vol.3, pp.17-23.
Li Z, Jing Z et al. (2005) ‘Color transfer based remote sensing image fusion using non-separable wavelet frame transform’, Pattern Recognition Letters, Vol.26, No.13, pp.2006-2014.
L. Li, Y. Zhou, W. Lin, J. Wu et al. (2016) ‘No-reference quality assessment of deblocked images’, Neurocomputing, Vol.177, pp.572-584.
Lillo-Saavedra M, Gonzalo C et al. (2005) ‘Fusion of multispectral and panchromatic satellite sensor imagery based on tailored filtering in the Fourier domain’, International Journal of Remote Sensing, Vol.26, No.6, pp.1263-1268.
Lillo-Saavedra M, Gonzalo C. (2006) ‘Spectral or spatial quality for fused satellite imagery: a trade-off solution using the wavelet à trous algorithm’, International Journal of Remote Sensing, Vol.27, No.7, pp.1453-1464.
Ling Y, Ehlers M, Usery E.L, Madden M. (2007) ‘FFT-enhanced IHS transform method for fusing high-resolution satellite images’, ISPRS Journal of Photogrammetry and Remote Sensing, Vol.61, No.6, pp.381-392.
Ling Y, Ehlers M, Usery E.L, Madden M. (2008) ‘Effects of spatial resolution ratio in image fusion’, International Journal of Remote Sensing, Vol.29, No.7, pp.2157-2167.
Liu Z, Forsyth D.S et al. (2008) ‘A feature-based metric for the quantitative evaluation of pixel-level image fusion’, Computer Vision and Image Understanding, Vol.109, No.1, pp.56-68.
L. Dong, Q. Yang, H. Wu et al. (2015) ‘High quality multi-spectral and panchromatic image fusion technologies based on curvelet transform’, Neurocomputing, Vol.159, pp.268-274.
Marcelino E.V, Ventura F et al. (2003) ‘Evaluation of image fusion techniques for the identification of landslide scars using satellite data’, Geografia, Vol.28, No.3, pp.431-445.
Miao Q, Shi C, Xu P, Yang M, Shi Y. (2011) ‘A novel algorithm of image fusion using shearlets’, Optics Communications, Vol.284, No.6, pp.1540-1547.
Nikolakopoulos K.G. (2005) ‘Comparison of six fusion techniques for SPOT5 data’, Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Vol.4, pp.2811-2814.
Pajares G, de la Cruz J.M. (2004) ‘A wavelet-based image fusion tutorial’, Pattern Recognition, Vol.37, No.9, pp.1855-1872.
Pinho C.M.D, Silva F.C, Fonseca L.M.G et al. (2008) ‘Urban Land Cover Classification from High-Resolution Images Using the C4.5 Algorithm’, XXI Congress of the International Society for Photogrammetry and Remote Sensing, Beijing, Vol.XXXVII, Part B7, pp.695-699.
P.S.Jagadeesh Kumar. (2005) ‘A Study on the Compatibility of Hybrid Approaches to Satellite Image Compression’, Image and Vision Computing, Vol.23, No.8/2, pp.776-783.
P.S.Jagadeesh Kumar, J.Nedumaan. (2015) ‘A Comparative Case Study on Compression Algorithm for Remote Sensing Images’, World Congress on Engineering and Computer Science, Vol.25, Issue 12, pp.25-29, San Francisco, USA, 21-23 October 2015, IAENG.
P.S.Jagadeesh Kumar, J.Tisa, J.Lepika. (2017) ‘Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remote Sensing Imagery and Pattern Classification’, IAENG International Journal of Computer Science, Vol.56, No.3, pp.183-188.
P.S.Jagadeesh Kumar. (2013) ‘Compression of Compound Images Using Wavelet Transform’, Asian Journal of Computer Science and Technology, Vol.2, No.2, pp.263-268.
P.S.Jagadeesh Kumar, Krishna Moorthy. (2013) ‘Intelligent Parallel Processing and Compound Image Compression’, Advances in Parallel Computing, New Frontiers in Computing and Communications 2013, Vol.38, Issue 1, pp.196-205.
Pohl C, Genderen J.L.V. (1998) ‘Multisensor image fusion in remote sensing: concepts, methods and applications’, International Journal of Remote Sensing, Vol.19, No.5, pp.823-854.
Rahman M.M, Csaplovics E. (2007) ‘Examination of image fusion using synthetic variable ratio (SVR) technique’, International Journal of Remote Sensing, Vol.28, No.15, pp.3413-3424.
Schetselaar E.M. (1998) ‘Fusion by the IHS transform: should we use cylindrical or spherical coordinates?’, International Journal of Remote Sensing, Vol.19, No.4, pp.759-765.
Shi W, Zhu C, Tian Y, Nichol J. (2005) ‘Wavelet-based image fusion and quality assessment’, International Journal of Applied Earth Observation and Geoinformation, Vol.6, pp.241-251.
Silva F.C, Dutra L.V et al. (2007) ‘Urban Remote Sensing Image Enhancement Using a Generalized IHS Fusion Technique’, Proceedings of the Symposium on Radio Wave Propagation and Remote Sensing, Rio de Janeiro, Brazil.
Simone G, Farina A, Morabito F.C, Bruzzone L et al. (2002) ‘Image fusion techniques for remote sensing applications’, Information Fusion, No.3, pp.3-15.
Song H, Yu S, Yang X et al. (2007) ‘Fusion of multispectral and panchromatic satellite images based on contourlet transform and local average gradient’, Optical Engineering, Vol.46, No.2, 020502. doi:10.1117/1.2437125.
Temesgen B, Mohammed M.U, Korme T. (2001) ‘Natural hazard assessment using GIS and remote sensing methods, with reference to the landslide in the Wondogenet area, Ethiopia’, Physics and Chemistry of the Earth, Part C, Vol.26, No.9, pp.665-675.
Tu T.M, Su S.C, Shyu H.C, Huang P.S. (2001) ‘A new look at IHS-like image fusion methods’, Information Fusion, Vol.2, No.3, pp.177-186.
Tu T.M, Huang P.S, Hung C.L, Chang C.P. (2004) ‘A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery’, IEEE Geoscience and Remote Sensing Letters, Vol.1, No.4, pp.309-312.
Tu T.M, Cheng W.C et al. (2007) ‘Best tradeoff for high-resolution image fusion to preserve spatial details and minimize color distortion’, IEEE Geoscience and Remote Sensing Letters, Vol.4, No.2, pp.302-306.
Ventura F.N, Fonseca L.M.G, Santa Rosa A.N.C. (2002) ‘Remotely sensed image fusion using the wavelet transform’, Proceedings of the International Symposium on Remote Sensing of Environment (ISRSE), Buenos Aires, 8-12 April 2002.
Wang Q, Shen Y, Jin J. (2008) ‘Performance evaluation of image fusion techniques’, in Stathaki T. (Ed.): Image Fusion, Academic Press, Oxford, UK, ISBN 978-0-12-372529-5.
Wang Z, Bovik A.C. (2002) ‘A universal image quality index’, IEEE Signal Processing Letters, Vol.9, No.3, pp.81-84.
Wei Z.G, Yuan J.H et al. (1999) ‘A picture quality evaluation method based on human perception’, Acta Electronica Sinica, Vol.27, No.4, pp.79-82.
X. Luo, Z. Zhang, X. Wu. (2016) ‘A novel algorithm of remote sensing image fusion based on shift-invariant shearlet transform and regional selection’, AEU-International Journal of Electronics and Communications, Vol.70, No.2, pp.186-197.
Yang X.H, Jing J.L, Liu G, Hua L.Z, Ma D.W. (2007) ‘Fusion of multi-spectral and panchromatic images using fuzzy rule’, Communications in Nonlinear Science and Numerical Simulation, Vol.12, No.7, pp.1334-1350.
Yang X.H et al. (2008) ‘Fusion Algorithm for Remote Sensing Images Based on Nonsubsampled Contourlet Transform’, Acta Automatica Sinica, Vol.34, No.3, pp.274-282.
Zhang Y. (1999) ‘A new merging method and its spectral and spatial effects’, International Journal of Remote Sensing, Vol.20, No.10, pp.2003-2014.
Zhang Y. (2002) ‘Problems in the fusion of commercial high-resolution satellite images, Landsat 7 images, and initial solutions’, Proceedings of the Symposium on Geospatial Theory, Processing and Applications, Vol.34, Part 4, Ottawa, Canada.
Zhang Y. (2004) ‘Understanding image fusion’, Photogrammetric Engineering and Remote Sensing, Vol.70, No.6, pp.657-661.
Zhang Y. (2008) ‘Methods for image fusion quality assessment – a review, comparison and analysis’, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol.XXXVII, Part B7, Beijing, pp.1101-1109.
Zheng Y, Qin Z. (2009) ‘Objective Image Fusion Quality Evaluation Using Structural Similarity’, Tsinghua Science & Technology, Vol.14, No.6, pp.703-709.
Zhou J, Civco D.L et al. (1998) ‘A wavelet transform method to merge Landsat TM and SPOT panchromatic data’, International Journal of Remote Sensing, Vol.19, No.4, pp.743-757.