P.S.Jagadeesh Kumar, Tracy Lin Huan, Xianpei Li, Yanmin Yuan. (2018) ‘Panchromatic and Multispectral Remote Sensing Image Fusion using Particle Swarm Optimization of Convolutional Neural Network for Effective Comparison of Bucolic and Farming Region’, Earth Science and Remote Sensing Applications, Series of Remote Sensing/Photogrammetry, Vol. 43, pp.1-31, Springer.
The motivation for image fusion stems from recent advances in remote sensing. Because modern image sensors offer high resolution at low cost, multiple sensors are now used across a wide range of imaging applications. These sensors provide high spatial and spectral resolution together with faster scan rates, and the images they produce are more reliable and informative, capturing a complete picture of the scanned environment; they therefore improve the performance of dedicated imaging systems. Over the past decade, remote sensing, medical imaging, and surveillance systems are among the application areas that have benefited from such multi-sensor setups.
Satellite image processing is a technique for enhancing raw images received from cameras or sensors mounted on satellites, space probes, and aircraft, as well as pictures taken in everyday applications. One common task is creating thematic maps that show the spatial distribution of particular information, structured by spectral bands; the bands have constant density, and where they overlap their densities add. Such processing performs image analysis on multiple-scale images and captures comprehensive information about a scene for different applications. Example themes include soil, vegetation, water depth, and air. Monitoring such critical phenomena requires a huge volume of surveillance data and extremely powerful real-time processing infrastructure.
Chronological Calibration Methods for Landsat Satellite Images iosrjce
IOSR Journal of Applied Physics (IOSR-JAP) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of physics and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in applied physics. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Comparing canopy density measurement from UAV and hemispherical photography: ...IJECEIAES
UAV and hemispherical photography are common methods used in canopy density measurement. The two methods have opposite viewing angles: hemispherical photography measures canopy density upwardly, while a UAV captures images downwardly. This study aims to analyze and compare both methods as input data for canopy density estimation when linked with lower-spatial-resolution remote sensing data, i.e., Landsat imagery. We correlated the field data of canopy density with vegetation indices (NDVI, MSAVI, and AFRI) from Landsat-8. The canopy density values measured from UAV and hemispherical photography displayed a strong relationship, with a correlation coefficient of 0.706. Further results showed that both measurements can be used in canopy density estimation using satellite imagery, based on their high correlations with Landsat-based vegetation indices. The highest correlations from the downward and upward measurements appeared when linked with NDVI, at 0.962 and 0.652, respectively. Downward measurement using UAV exhibited a stronger relationship than hemispherical photography. The strong correlation between UAV data and Landsat data arises because both are captured from the vertical direction, and a 30 m Landsat pixel is a downscaled version of the aerial photograph. Moreover, field data collection can be conducted easily by deploying a drone to cover inaccessible sample plots.
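The comparisons above rest on Pearson correlation between field-measured canopy density and Landsat vegetation indices. As a minimal sketch (the plot values below are hypothetical, not the study's data), the coefficient can be computed directly:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical plot-level canopy density (UAV, %) vs. NDVI values.
density = [55.0, 62.0, 70.0, 78.0, 85.0, 91.0]
ndvi    = [0.41, 0.48, 0.55, 0.61, 0.69, 0.74]
r = pearson_r(density, ndvi)
```

A value of `r` near 1 for the downward (UAV) series and a lower value for the upward series would reproduce the pattern the abstract reports.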
Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+,
and EO-1 ALI sensors
Gyanesh Chander a,⁎, Brian L. Markham b, Dennis L. Helder c
a SGT, Inc., contractor to the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198-0001, USA
b National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), Greenbelt, MD 20771, USA
c South Dakota State University (SDSU), Brookings, SD 57007, USA
Multispectral images are used in aerial and space applications, target detection, and remote sensing. MS images are very rich in spectral resolution, but at the cost of spatial resolution. We propose a new method to increase the spatial resolution of MS images: a super-resolution technique using a Principal Component Analysis (PCA)-based approach that learns edge details from a database. Experiments have been carried out on real multispectral (MS) data; extending the approach to hyperspectral (HS) data is left as future work.
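The PCA edge-learning step is the paper's contribution; what super-resolution methods improve upon is a plain interpolation baseline. As a rough sketch of that baseline only (toy 2x2 grid, not the paper's method), a bilinear upscale can be written as:

```python
def upscale_bilinear(img, factor):
    """2-D bilinear upscaling of a grid of floats (a common SR baseline)."""
    h, w = len(img), len(img[0])
    H, W = h * factor, w * factor
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            # Map output coordinates back into the source grid.
            y = i * (h - 1) / (H - 1) if H > 1 else 0.0
            x = j * (w - 1) / (W - 1) if W > 1 else 0.0
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[i][j] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out

low = [[0.0, 1.0], [1.0, 0.0]]   # toy 2x2 "MS band"
high = upscale_bilinear(low, 2)  # 4x4 interpolated band
```

Interpolation of this kind blurs edges, which is exactly the deficiency a learned edge prior is meant to correct.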
APPLICATION OF REMOTE SENSING AND GIS IN AGRICULTURELagnajeetRoy
India is a country that depends on agriculture. In this era of technological advancement, agriculture is adopting new technologies ranging from robotic machinery to remote sensing and Geographical Information Systems (GIS). Remote sensing makes it easy to gather information about areas where humans cannot check conditions every day, while GIS helps prepare maps that accurately represent the data obtained through remote sensing. From disease estimation to water-stress assessment, and from groundwater quality indexing to acreage estimation, agriculture profits from remote sensing and GIS in many ways. These techniques are still new to the agricultural domain, and much more exploration is needed; new software is being developed in different parts of the world. Farmers today understand the benefits these techniques bring to the farm field, helping to increase productivity for future generations as technology enters traditional farming systems.
Automatic traffic light controller for emergency vehicle using peripheral int...IJECEIAES
Traffic lights play an important role in traffic management by controlling traffic on the road. Conditions at traffic-light junctions worsen especially during emergencies: in congestion, it is difficult for an emergency vehicle to cross a road involving many junctions, an unsafe situation that may cause accidents. An Automatic Traffic Light Controller for Emergency Vehicles was designed and developed to help emergency vehicles cross traffic-light junctions during emergency situations. The project used a Peripheral Interface Controller (PIC) to program a priority-based traffic light controller: during an emergency, a vehicle such as an ambulance can trigger the traffic light to change from red to green, automatically clearing its path, and the light returns to normal operation via Radio Frequency (RF) once the ambulance has finished crossing. Results showed the design is capable of responding within a range of 55 meters. The project was successfully designed, implemented, and tested.
CLASSIFICATION AND COMPARISON OF REMOTE SENSING IMAGE USING SUPPORT VECTOR M...ADEIJ Journal
Remote sensing is the collection of information about an object without any direct physical contact with it. It is widely used in fields such as oceanography, geology, and ecology. Remote sensing uses satellites to detect and classify particular objects or areas, including surface classes such as vegetation, buildings, soil, forest, and water. The approach here uses classifiers from previous images to decrease the number of training samples required when training a classifier for an incoming image: for each incoming image, a rough classifier is first predicted from the temporal trend of a set of previous classifiers, and the predicted classifier is then fine-tuned into a more accurate one with current training samples. The approach can thus be applied to sequential image data with only a small number of training samples required from each image. The method uses Landsat-8 images for the training and testing processes. First, signatures are generated for the input images using the classifier-prediction technique; the generated signatures are used for training, and SVM classification is used to classify the images. The final results show that leveraging a priori information from previous images provides a clear improvement for future images in multi-temporal image classification.
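At the core of this pipeline is an SVM decision boundary fine-tuned on a few labeled pixels. As a minimal, self-contained sketch (a linear SVM trained by subgradient descent on the hinge loss, with hypothetical two-band pixel signatures, not the paper's Landsat-8 data or classifier-prediction step):

```python
import random

def train_linear_svm(X, y, lam=0.01, lr=0.05, epochs=200, seed=0):
    """Train a linear SVM (primal hinge loss + L2 penalty) by subgradient
    descent. X: list of feature vectors, y: labels in {-1, +1}."""
    rng = random.Random(seed)
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(w[k] * X[i][k] for k in range(d)) + b)
            # Subgradient of max(0, 1 - margin) plus the L2 regularizer.
            if margin < 1:
                for k in range(d):
                    w[k] += lr * (y[i] * X[i][k] - lam * w[k])
                b += lr * y[i]
            else:
                for k in range(d):
                    w[k] -= lr * lam * w[k]
    return w, b

def predict(w, b, x):
    return 1 if sum(wk * xk for wk, xk in zip(w, x)) + b >= 0 else -1

# Hypothetical two-band signatures: +1 = "vegetation", -1 = "water".
X = [[0.8, 0.6], [0.9, 0.7], [0.7, 0.8], [0.1, 0.2], [0.2, 0.1], [0.15, 0.25]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

In the multi-temporal setting the abstract describes, `w` and `b` from a previous image would serve as the warm start rather than zeros.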
Investigation of Chaotic-Type Features in Hyperspectral Satellite Datacsandit
Hyperspectral images provide detailed spectral information with more than several hundred channels. On the other hand, the high dimensionality of hyperspectral images also causes classification problems, due to the huge ratio between the number of training samples and the features. In this paper, Lyapunov Exponents (LEs) are used to determine the chaotic-type structure of an EO-1 Hyperion hyperspectral image of a mixed forest site in Turkey. Experimental results demonstrate that the EO-1 Hyperion image has a chaotic structure, verified by checking the distribution of the LEs, and that the LEs can be used as discriminative features to improve classification accuracy for hyperspectral images.
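A Lyapunov exponent measures how fast nearby trajectories diverge; a positive value signals chaos. The paper estimates LEs from hyperspectral band series, but the standard textbook illustration, sketched here, uses the logistic map, whose largest LE at r = 4 is known analytically (ln 2):

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100000, burn=1000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the long-run average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(burn):                      # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)) + 1e-300)  # guard log(0)
        x = r * x * (1 - x)
    return total / n

le_chaotic = lyapunov_logistic(4.0)   # near ln(2): chaotic regime
le_stable  = lyapunov_logistic(2.5)   # negative: stable fixed point
```

The sign test (`le_chaotic > 0`, `le_stable < 0`) mirrors how the paper reads the LE distribution of the Hyperion image as evidence of chaotic structure.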
Tropical Cyclone Determination using Infrared Satellite Imageijtsrd
Many subcontinents have regions affected by cyclones every year. Cyclone prediction plays a major role in preventing the loss of lives and assets, since it relates directly to human lives and households. Satellite images provide an excellent view of clouds for weather forecasting, and infrared (IR) satellite images in particular serve many environmental applications. To find a tropical cyclone (TC) center, the basic stage is to extract the cyclone's main cloud. Manual segmentation of the storm region is a complicated, time-consuming task that requires human experts for every run, while semi- and fully automatic storm detection is a sophisticated and difficult process because cloud boundaries overlap. Fuzzy C-Means (FCM) clustering and morphological image processing are applied to segment each infrared satellite image. The effectiveness is tested on infrared cyclone images from the Kalpana satellite, obtained from India's INSAT system. 45 tropical cyclones occurred over the Bay of Bengal during 1989-2014; Cyclone Nargis, which struck Myanmar on 2 May 2008, is taken as a case study. Experimental results show that high location accuracy can be obtained. Thu Zar Hsan | Myint Myint Sein, "Tropical Cyclone Determination using Infrared Satellite Image", International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27934.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/27934/tropical-cyclone-determination-using-infrared-satellite-image/thu-zar-hsan
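The segmentation step above relies on Fuzzy C-Means, which, unlike hard clustering, assigns each pixel a degree of membership in every cluster. A minimal 1-D sketch on hypothetical IR intensities (cold cloud-top pixels vs. warm background, with a deterministic min/max initialization; not the paper's 2-D implementation):

```python
def fcm(points, c=2, m=2.0, iters=100):
    """Fuzzy C-Means on scalar values (e.g. IR brightness values).
    Returns cluster centers and the membership matrix u[i][j]."""
    # Simple deterministic initialization at the extremes.
    centers = [min(points), max(points)] if c == 2 else sorted(points)[:c]
    u = [[0.0] * c for _ in points]
    for _ in range(iters):
        # Update memberships from distances to the current centers.
        for i, p in enumerate(points):
            d = [abs(p - ck) or 1e-12 for ck in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                                    for k in range(c))
        # Update centers as membership-weighted means.
        for j in range(c):
            den = sum(u[i][j] ** m for i in range(len(points)))
            centers[j] = sum((u[i][j] ** m) * p
                             for i, p in enumerate(points)) / den
    return centers, u

# Hypothetical pixel intensities: cold (cloud) vs. warm (background).
pix = [0.10, 0.12, 0.15, 0.11, 0.85, 0.90, 0.88, 0.92]
centers, u = fcm(pix)
```

Thresholding the memberships (then applying morphological opening/closing, as the abstract describes) would yield the storm-cloud mask.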
Robust High Resolution Image from the Low Resolution Satellite Imageidescitation
In this paper, we propose a framework for detecting and locating land-cover classes from a low-resolution image, which can play a very important role in satellite surveillance imagery from MODIS data. The land-cover classes are obtained by constructing super-resolution images from the MODIS data, whose highest resolution is 250 meters per pixel, by magnifying and de-blurring the low-resolution satellite image through kernel regression. SR reconstruction is an image-interpolation technique used to increase the size of a single image; the SRKR algorithm takes a single low-resolution image and generates a de-blurred high-resolution image. We perform bi-cubic interpolation on the input low-resolution (LR) image with a desired scaling factor, and the KR model is then used to generate the de-blurred HR image. K-means, one of the simplest unsupervised learning algorithms for the well-known clustering problem, generates a specific number of disjoint, flat (non-hierarchical) clusters; K-means clustering is employed to compare MODIS data and recognize the land-cover types "Forest", "Land", "Sea", and "Ice".
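The final recognition step above is plain k-means with k = 4 classes. A self-contained 1-D sketch (hypothetical per-pixel reflectance-like values standing in for MODIS pixels; the real method clusters multi-band data):

```python
def kmeans(values, k, iters=50):
    """Plain k-means on scalar values, returning centers and labels."""
    vs = sorted(values)
    # Spread the initial centers evenly across the sorted range.
    centers = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest center.
        labels = [min(range(k), key=lambda j: abs(v - centers[j]))
                  for v in values]
        # Update step: recompute each center as its cluster mean.
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

# Hypothetical values for four cover types (ice, sea, land, forest).
vals = [0.02, 0.05, 0.20, 0.22, 0.55, 0.58, 0.85, 0.88]
centers, labels = kmeans(vals, 4)
```

Each of the four resulting clusters would then be mapped to one of the named land-cover types.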
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne...IOSR Journals
Abstract: We investigated the classification of satellite images and multispectral remote sensing data, focusing on uncertainty analysis in the produced land-cover maps. We proposed an efficient technique for classifying multispectral satellite images into road, building, and green areas using a Support Vector Machine (SVM). Classification is carried out in three modules: (a) preprocessing using Gaussian filtering and conversion from RGB to the Lab color space, (b) object segmentation using the proposed cluster-repulsion-based kernel Fuzzy C-Means (FCM), and (c) classification using a one-to-many SVM classifier. The goal of this research is to provide efficient classification of satellite images using object-based image analysis. The proposed work is evaluated on satellite images, and its accuracy is compared to FCM-based classification. The results show that the proposed technique achieves better results, reaching accuracies of 79%, 84%, 81%, and 97.9% for road, tree, building, and vehicle classification, respectively.
Keywords: satellite image, FCM clustering, classification, SVM classifier.
Hyperspectral object classification using hybrid spectral-spatial fusion and ...IJECEIAES
Because of its spectral-spatial and temporal resolution over large areas, hyperspectral imaging (HSI) has found widespread application in object classification. HSI is typically used to accurately determine an object's physical characteristics and to locate related objects with matching spectral fingerprints; as a result, it has been extensively applied to object identification in several fields, including surveillance, agricultural monitoring, environmental research, and precision agriculture. However, because of their enormous size, HSI objects take a long time to classify, which is why both spectral and spatial feature fusion are performed. Existing classification strategies lead to increased misclassification, and existing feature-fusion methods fail to preserve the inherent semantic features of objects. This study addresses these difficulties by introducing a hybrid spectral-spatial fusion (HSSF) technique that minimizes feature size while maintaining intrinsic object qualities, and a soft-margin kernel for a multi-layer deep support vector machine (MLDSVM) that reduces misclassification. Experiments on the standard Indian Pines dataset demonstrate that the HSSF-MLDSVM model performs substantially better in terms of accuracy and Kappa coefficient.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low-Power VLSI Design, etc.
REGION CLASSIFICATION AND CHANGE DETECTION USING LANDSAT-8 IMAGES ADEIJ Journal
Change detection in remote sensing images remains an important open problem for damage assessment. A new change detection method for Landsat-8 images based on homogeneous pixel transformation (HPT) is proposed. HPT transfers one image from its original feature space (e.g., gray space) to another feature space (e.g., spectral space) at the pixel level, so that pre-event and post-event images are represented in a common space for convenient change detection. HPT consists of two operations: forward transformation and backward transformation. In the forward transformation, each pixel of the pre-event image in the first feature space is taken, and its mapping pixel in the second space, corresponding to the post-event image, is estimated based on the known unchanged pixels; a noise-tolerant multi-value estimation method determines the mapping pixel using the K-nearest-neighbours technique. Once the mapping pixels of the pre-event image are identified, the difference values between the mapping image and the post-event image can be generated directly. Similar work is then done in the backward transformation to combine the post-event image with the first space, producing one more difference value per pixel. The two difference values are combined to improve the robustness of detection against noise and the heterogeneousness of the images. The Fast and Robust Fuzzy C-Means (FRFCM) clustering algorithm is employed to divide the integrated difference values into two clusters: changed pixels and unchanged pixels. Because the detection results may contain a few noisy regions as small erroneous detections, a spatial-neighbour-based noise filter is developed to reduce false alarms and missed detections.
Experiments on real Landsat-8 images of Tuticorin between 2013 and 2019 validate the percentage of changed regions found by the proposed method.
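The forward transformation above can be sketched in miniature: for each pixel, predict its expected post-event value from the k unchanged pixels whose pre-event values are closest, then difference against the observed post-event value. The 1-D "images" below are hypothetical, and this sketch omits the backward pass, FRFCM clustering, and the spatial noise filter:

```python
def forward_difference(pre, post, unchanged, k=2):
    """HPT-style forward step: estimate each pixel's expected post-event
    value from the k unchanged pixels with the closest pre-event values,
    then take the absolute difference from the observed post value."""
    diffs = []
    for i, v in enumerate(pre):
        nn = sorted(unchanged, key=lambda j: abs(pre[j] - v))[:k]
        predicted = sum(post[j] for j in nn) / k
        diffs.append(abs(post[i] - predicted))
    return diffs

# Hypothetical 1-D "images": pixel 4 brightened between the two dates.
pre  = [0.20, 0.21, 0.50, 0.52, 0.22, 0.51]
post = [0.20, 0.22, 0.51, 0.50, 0.80, 0.52]
unchanged = [0, 1, 2, 3, 5]              # indices known to be unchanged
d = forward_difference(pre, post, unchanged)
changed = [i for i, v in enumerate(d) if v > 0.1]
```

Only the genuinely changed pixel stands out in the difference values, which is the signal the FRFCM step would then cluster.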
An Efficient K-Nearest Neighbors Based Approach for Classifying Land Cover Re...IDES Editor
In recent times, researchers in the remote sensing community have been greatly interested in utilizing hyperspectral data for in-depth analysis of the Earth's surface. Hyperspectral imaging generally produces high-dimensional data, which creates a pressing need for approaches that can process such data efficiently. In this paper, we present an efficient approach for the analysis of hyperspectral data that combines non-linear manifold learning and the k-nearest neighbours (k-NN) technique. Instead of dealing with the high-dimensional feature space directly, the proposed approach employs non-linear manifold learning, which determines a low-dimensional embedding of the original high-dimensional data by computing the geodesic distances between samples. First, the dimensionality of the hyperspectral data is reduced via a pairwise distance matrix using Johnson's shortest-path algorithm and multidimensional scaling (MDS). Subsequently, the land cover regions in the hyperspectral data are classified based on the k nearest neighbours. The proposed k-NN based approach is evaluated on hyperspectral data collected by NASA's (National Aeronautics and Space Administration) AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) over the Kennedy Space Center, Florida. The classification accuracies of the proposed approach demonstrate its effectiveness for land cover classification of hyperspectral data.
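The pipeline described above (neighbourhood graph, shortest-path distances, MDS embedding, k-NN vote) can be sketched as follows. Floyd-Warshall is substituted for Johnson's algorithm purely for brevity, and all names are illustrative rather than the paper's implementation:

```python
import numpy as np

def geodesic_distances(X, k=2):
    # Approximate geodesic distances: symmetric k-NN graph, then
    # all-pairs shortest paths (Floyd-Warshall stands in for the
    # Johnson's-algorithm step described in the paper).
    n = len(X)
    euc = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(euc[i])[1:k + 1]:
            G[i, j] = G[j, i] = euc[i, j]
    for m in range(n):
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    return G

def mds_embed(D, dim=2):
    # Classical multidimensional scaling of a distance matrix.
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J            # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:dim]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

def knn_classify(train_emb, train_lab, test_emb, k=3):
    # Majority vote among the k nearest training samples.
    preds = []
    for t in test_emb:
        nn = np.argsort(np.linalg.norm(train_emb - t, axis=1))[:k]
        vals, counts = np.unique(train_lab[nn], return_counts=True)
        preds.append(vals[counts.argmax()])
    return np.array(preds)
```

Embedding along the manifold rather than in the raw feature space is what lets plain Euclidean k-NN separate classes that lie on a curved structure.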
Radar images are strongly preferred for analysing geospatial information about the earth's surface and assessing environmental conditions. Radar images are captured by different remote sensors, and these images are combined to obtain complementary information. To collect radar images, SAR (Synthetic Aperture Radar) sensors are used; they are active sensors that can gather information during day and night, unaffected by weather conditions. We discuss DCT and DWT image fusion methods, which give a more informative fused image, and we compare performance parameters between the two methods to determine the superior technique.
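As a concrete illustration of the DWT branch, a minimal sketch of one-level Haar fusion is given below: the approximation sub-bands are averaged and the larger-magnitude detail coefficients are kept. This is a common fusion rule, not necessarily the exact transform or rule used in the paper, and the function names are illustrative:

```python
import numpy as np

def haar2d(img):
    # One-level 2-D Haar decomposition -> (LL, LH, HL, HH).
    a = (img[0::2] + img[1::2]) / 2.0          # row averages
    d = (img[0::2] - img[1::2]) / 2.0          # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    # Exact inverse of haar2d.
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def dwt_fuse(img1, img2):
    # Average the approximations, keep the detail coefficient with the
    # larger magnitude (max-abs rule), then reconstruct.
    c1, c2 = haar2d(img1.astype(float)), haar2d(img2.astype(float))
    fused = [(c1[0] + c2[0]) / 2.0]
    for b1, b2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(b1) >= np.abs(b2), b1, b2))
    return ihaar2d(*fused)
```

Because the detail sub-bands carry edges, the max-abs rule tends to keep the sharper structure from whichever source image has it.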
Information Saturation in Multispectral Pixel Level Image Fusion (IJCI Journal)
The availability of imaging sensors operating in multiple spectral bands has led to the requirement for image fusion algorithms that combine the images from these sensors efficiently into a single image that is more informative as well as perceptible to the human eye. Multispectral image fusion is the process of combining optically acquired images from different spectral bands. In this paper, we use pixel-level image fusion based on principal component analysis (PCA) to combine satellite images of the same scene from seven different spectral bands. PCA is used because it is well suited to grayscale image fusion and gives good results: its aim is to reduce a large set of variables to a small set that still contains most of the information present in the original set. The paper compares different parameters, namely entropy, standard deviation and correlation coefficient, as the number of fused images is increased from two to seven. Finally, the paper shows that the information content of the fused image saturates after fusing four images.
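A minimal sketch of pixel-level PCA fusion and the entropy measure used to judge saturation is given below. The eigenvector-weighted sum is one common formulation of PCA fusion, and both function names are illustrative, not the paper's implementation:

```python
import numpy as np

def pca_fuse(bands):
    # Weight each band by the matching component of the first eigenvector
    # of the band covariance matrix (weights normalised to sum to one),
    # then sum: a common pixel-level PCA fusion rule.
    X = np.stack([b.ravel().astype(float) for b in bands])  # (bands, pixels)
    w, V = np.linalg.eigh(np.cov(X))
    pc1 = np.abs(V[:, np.argmax(w)])
    pc1 /= pc1.sum()
    return np.tensordot(pc1, X, axes=1).reshape(bands[0].shape)

def entropy(img, bins=256):
    # Shannon entropy (bits) of the grey-level histogram; one of the
    # quality measures compared in the paper.
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```

Fusing more bands and recomputing the entropy after each addition is how the saturation point (four images, per the paper) would be observed.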
Land scene classification from remote sensing images using improved artificia... (IJECEIAES)
Images obtained from remote sensing contain background complexities and similarities among objects that pose a challenge for the classification of land scenes. Land scenes are used in various fields such as agriculture, urbanization and disaster management to assess the condition of land surfaces and to help identify their suitability for planting crops or constructing buildings. Existing methods classify land scenes from remotely sensed images, but background complexities and the presence of similar objects prevent them from providing better results. To overcome these issues, an improved artificial bee colony optimization algorithm with a convolutional neural network (IABC-CNN) model is proposed to achieve better land scene classification. Images are collected from the aerial image dataset (AID), Northwestern Polytechnical University Remote Sensing Image Scene Classification 45 (NWPU-RESISC45) and University of California Merced (UCM) datasets. IABC effectively selects the best features from those extracted with visual geometry group-16 (VGG-16), and the selected features are passed to a multiclass support vector machine (MSVM) for classification. The proposed IABC-CNN achieves a better classification accuracy of 96.40% with an error rate of 3.6%.
Study and Analysis of HySI Data in 400 to 500 (IJAEMS Journal)
The ability to extract information about the world and present it in a way our visual perception can comprehend is the ultimate goal of imaging science in remote sensing. Hyperspectral imaging, also called imaging spectroscopy, is the most powerful tool in the field of remote sensing; it is a relatively new technique used by researchers to detect terrestrial features, vegetation and minerals. This paper reports an analysis of hyperspectral images. The hyperspectral imagery of the Amravati region in the Maharashtra province of India is first analysed using supervised classification, and the report presents the spectral analysis of the region. We acquired satellite imagery and performed the classification using the maximum likelihood classifier. The analysis is performed in ERDAS to determine the spectral reflectance against the number of bands. The analytical outcome of the paper represents the soil, water and vegetation indices of the region.
An Unsupervised Change Detection in Satellite Images Using MRFFCM Clustering (Editor IJCATR)
This paper presents a new approach for change detection in synthetic aperture radar images that incorporates a Markov random field (MRF) within the framework of fuzzy c-means (FCM). The objective is to partition the difference image, generated from multitemporal satellite images, into changed and unchanged regions. The difference image is generated from the log-ratio and mean-ratio images by an image fusion technique, and its quality depends on that technique; in the present work we propose an image fusion method based on the stationary wavelet transform. The difference image is then processed to discriminate changed regions from unchanged regions using fuzzy clustering. The analysis of the difference image uses an MRF approach that exploits inter-pixel class dependency in the spatial domain to improve the accuracy of the final change-detection map. Experimental results on real synthetic aperture radar images demonstrate that the change detection results obtained by MRFFCM exhibit less error than previous approaches. The quality of the proposed fusion algorithm is assessed with well-known image fusion measures, and the percentages of correct classification are calculated and verified.
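The fuzzy clustering step on the difference image can be sketched with plain FCM; the MRF spatial regularisation that distinguishes MRFFCM is deliberately omitted here, so this is only the baseline the paper builds on, with illustrative names:

```python
import numpy as np

def fcm(values, c=2, m=2.0, iters=50):
    # Plain fuzzy c-means on scalar difference values.
    v = values.ravel().astype(float)
    centers = np.quantile(v, np.linspace(0.0, 1.0, c))  # spread initial centres
    for _ in range(iters):
        d = np.abs(v[:, None] - centers[None, :]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                     # standard FCM membership
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * v[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return u.argmax(axis=1), centers
```

MRFFCM would add a neighbourhood term to the membership update so that isolated noisy pixels are pulled towards the label of their spatial neighbours.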
Panchromatic and Multispectral Remote Sensing Image Fusion using Particle Swarm Optimization of CNN for Effective Comparison of Bucolic and Farming Region
Series of Remote Sensing/Photogrammetry, Springer, 2018 (Print ISSN: 2198-0721)
Chapter 1
Panchromatic and Multispectral Remote
Sensing Image Fusion Using Particle Swarm
Optimization of Convolutional Neural
Network for Effective Comparison of Bucolic
and Farming Region
P.S.Jagadeesh Kumar, Tracy Lin Huan, Xianpei Li, Yanmin Yuan
Abstract. With the advance of remote sensing in earth observation engineering, remote sensing imagery of high multispectral and spatial resolution, such as Landsat Thematic Mapper, SPOT, IKONOS, WorldView, SeaStar and GeoEye imagery, has been acquired by distinct types of sensors and used in geographically related monitoring, planning, mining and information interpretation. To improve the quality of the fused images, researchers have proposed image fusion schemes for fusing panchromatic and multispectral images. This chapter focuses on the optimized fusion of high-resolution panchromatic and low-resolution multispectral images using particle swarm optimization of a convolutional neural network for categorizing bucolic and farming regions. Qualitative and quantitative evaluation approaches were used to measure the quality of the fused images with and without a reference image. The experimental results demonstrate that the proposed method provides better performance in enhancing the quality of the fused images and produced an effective comparison of bucolic and farming regions.
Keywords: Image fusion, particle swarm optimization, bucolic and farming region classification, convolutional neural network, multispectral imaging, panchromatic imaging.
Cite this chapter as: P.S.Jagadeesh Kumar, Tracy Lin Huan, Xianpei Li,
Yanmin Yuan. (2018) ‘Panchromatic and Multispectral Remote Sensing
Image Fusion using Particle Swarm Optimization of Convolutional Neural
Network for Effective Comparison of Bucolic and Farming Region’, Earth
Science and Remote Sensing Applications, Series of Remote Sensing
/Photogrammetry, Vol. 43, pp.1-31, Springer.
This work is funded and carried out at Dartmouth College, Hanover, New Hampshire, United States under the project titled “Bucolic and Farming Region Taxonomy Using Neural Networks for Remote Sensing Images”.
P.S.Jagadeesh Kumar et al.
Earth Science and Remote Sensing Applications, Vol. 43, pp.1-31, 2018, Springer
1 Introduction
Most of the earth observation satellites, for example QuickBird, SPOT, IKONOS, FORMOSAT or OrbView, and a few advanced airborne sensors record image data in one of two modes: a low-resolution multispectral (MS) mode or a high-resolution panchromatic (PAN) mode. A distinctive feature of these sensors is that the best spatial resolution is recorded in their panchromatic mode, while the multispectral recording mode produces images of reduced spatial resolution. The difference in spatial resolution between the panchromatic and the multispectral mode is usually bounded by the ratio of their corresponding ground sample distances (GSD) and varies between 1:3 and 1:6. This ratio may become worse if data from different satellites are used; for example, when the IKONOS panchromatic mode is combined with coarser multispectral data from another satellite, the resolution ratio can reach 1:11. The purpose of image fusion is to blend the panchromatic and the multispectral records to form a fused multispectral image that holds the spatial detail of the higher-resolution panchromatic image and the spectral qualities of the lower-resolution multispectral image. Applications for fused image datasets include town mapping, change detection, and bucolic and farming region classification. Image fusion methods have commonly been developed for single-sensor, single-date fusion; for instance, IKONOS panchromatic images are fused with the corresponding IKONOS multispectral images. Multisensor or multitemporal fusion is employed less often, for example with Landsat multispectral and SPOT panchromatic records. Consequently, most fusion procedures impose conditions when different sensors from different dates are combined.
With the continuous development of computer science and technology, research in the field of image processing has gradually expanded. A new trend is the use of nature-inspired computing in image processing; one of the emerging techniques is image segmentation based on swarm intelligence. Swarm intelligence (SI) is a newly developing area in various fields, including optimization. One very popular SI technique for finding optimized solutions is particle swarm optimization (PSO). PSO is a stochastic search method based on the sociological behaviour of bird flocking: it initializes a population of particles that simulates a flock of birds. The PSO algorithm is simple and fast to converge, so it can be applied to solve a wide range of optimization problems in many fields, including image processing tasks such as image segmentation. The objective is to provide an effective comparison of bucolic and farming regions based on PSO using convolutional neural networks.
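The particle update described above, a velocity driven by each particle's personal best and the swarm's global best, can be sketched generically. The snippet below minimises a toy sphere function; it says nothing about the chapter's actual CNN-tuning objective, whose details are not given here, and all parameter values are conventional defaults rather than the authors' settings:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    # Each particle keeps a velocity and a personal best, and is also
    # attracted towards the swarm's global best position.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, float(pval.min())

# e.g. minimising the sphere function drives the swarm towards the origin
best, val = pso(lambda p: float((p ** 2).sum()), dim=2)
```

In the chapter's setting, f would instead score a candidate set of CNN hyperparameters or fusion weights, with each particle encoding one candidate.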
2 Remote Sensing Image Fusion
Remote sensing image fusion denotes the procedure of merging two or more satellite sensor images into one composite image that assimilates the information contained within the discrete input images. The resulting image has a richer information content than any of the input sensor images. The aim of the remote sensing image fusion procedure is to evaluate the data at each pixel position in the input sensor images and to keep the information that best represents the true scene, or to improve the usefulness of the fused image, here for bucolic and farming region classification. Owing to system trade-offs related to data size and signal-to-noise-ratio restrictions, remote sensing images tend to have either low spectral resolution and high spatial resolution or vice versa. The selection of input data for the fusion procedure is highly dependent on the purpose of the image fusion: data that are suitable in one instance might be impractical in another. The choice hinges on the characteristics of the scene under study, the sensor features, data accessibility, and the availability of suitable algorithms for information extraction.
2.1 Panchromatic Images and Multispectral Images
In remote sensing, different sorts of sensors are available, both spaceborne and airborne. The images captured by these sensors can be arranged into two kinds: panchromatic and multispectral images. A panchromatic image is a single-band image acquired over a broad range of visible (and often near-infrared) wavelengths, which permits a fine ground sample distance. A multispectral image records the reflectance of the earth's surface in several narrower spectral bands; combining different bands, each displayed as red, green or blue in the final composite, brings out features of the earth's surface. On a high-spatial-resolution PAN image, general geometric features can easily be recognized, whereas the MS image carries richer spectral information. The utility of the images can be enhanced if the advantages of both high spatial and high spectral resolution are combined into one single image. The detailed structures of such a composite image can then be readily perceived, which supports many applications. With suitable algorithms it is possible to merge the MS and PAN bands and produce a synthetic image with the best properties of both. This approach is known as multisensor merging, combination or fusion. Its goal is to absorb the spatial detail of the high-resolution PAN image and the colour information of the low-resolution MS image to achieve a high-resolution MS image. Fig. 1, Fig. 2 and Fig. 3 show the GeoEye-1 0.5 m high-resolution panchromatic image, the GeoEye-1 low-resolution multispectral image, and the fused panchromatic and multispectral image, respectively.
Fig. 1. GeoEye-1, 0.5m High Resolution Panchromatic Image
Fig. 2. GeoEye-1, 0.5m Low Resolution Multispectral Image
Fig. 3. Fused Image of GeoEye-1, 0.5m
2.2 Need for Fusing Remote Sensing Images
Remote sensing delivers multimodal and multitemporal data from the Earth's surface. In order to handle these multidimensional data sources and to make the most of them, image fusion is a valuable tool. Over recent decades it has developed into a usable technique for extracting information of higher quality and reliability. As more sensors and advanced image fusion strategies have become available, researchers have conducted a large number of successful studies using image fusion. Remote sensing image fusion has become an established, state-of-the-art processing approach for extracting the optimal information from multisensor data, and it has demonstrated its usefulness and significance in numerous applications over the past 20 years. Early studies were devoted to understanding the complementarity of optical (i.e. visible and infrared) and microwave (i.e. radar) remote sensing, and to increasing spatial resolution while maintaining the spectral integrity of optical sensor images by means of pansharpening. The first successful applications were mapping, GIS, agriculture, stereo-photogrammetry, geology and flood monitoring. Pansharpening forms a sub-group within remote sensing image fusion. It emerged with the availability of single-platform multisensor images, notably the multispectral and panchromatic channels of the first SPOT satellite. Together with the recognition of the value of combined complementary images, fusion for pansharpening is one reason why image fusion has gained popularity and visibility, apart from the fact that more research has gone into this scientific field. The increase in available sensors, spatial resolution and computing power has also contributed to the popularity of remote sensing image fusion. This development is accompanied by the difficulty, for inexperienced users, of identifying suitable processing methods for multisensor datasets: choices must be made regarding the right images, the pre-processing steps, the image fusion approach and the final interpretation methods for the fused data.
3 Remote Sensing Images and Fusion Algorithms
Remote sensing techniques have proved to be an effective tool for monitoring the Earth's surface and environment on global, regional and even local scales, by providing essential coverage, mapping and classification of land cover features such as vegetation, soil, water and forests. The volume of remote sensing imagery continues to grow at an enormous rate owing to advances in sensor technology for both high spatial and high temporal resolution systems. An increasing quantity of image data from airborne and satellite sensors has become available, including multi-resolution images, multi-temporal images, multi-frequency/spectral bands and multi-polarization imagery. Remote sensing data are useful and easy to obtain over a large area at little cost, but owing to the effects of cloud, aerosols, solar elevation angle and bi-directional reflection, the surface energy parameters retrieved from remote sensing data are often incomplete; meanwhile, the natural variation of surface-parameter time-series plots is also affected. To reduce such effects, a time-composite method is generally adopted. The objective of multi-sensor image fusion is to integrate complementary and redundant information to provide a composite image that supports a better understanding of the entire scene.
3.1 Image Fusion Methods
In this sub-section, various image fusion approaches are described in brief. Image fusion methods can be divided into three types: pixel-level, feature-level, and decision-level image fusion.
3.1.1. Pixel Level Fusion Method. Fusion is performed on a pixel-by-pixel basis, as represented in Fig.4. It creates a fused image in which the information associated with each pixel is determined from a set of pixels in the source images, so as to improve the performance of subsequent image processing tasks. Pixel-level fusion is the lowest level of image fusion: a new image is formed whose pixel values are obtained by combining the pixel values of the different source images through some computation, under strict registration conditions. The new image keeps more of the raw data and provides rich and accurate image information, which can then be used for further analysis and processing such as feature extraction and classification. Pixel-level image fusion may be single-sensor, multi-sensor or multitemporal. The advantage of pixel-level fusion is the least loss of information; however, it has the largest amount of data to be handled, hence the slowest processing speed and higher hardware requirements.
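Two of the simplest pixel-level rules, combining registered source pixels by a direct computation, can be illustrated in a few lines. These generic rules are textbook examples, not the chapter's method, and the function names are illustrative:

```python
import numpy as np

def pixel_fuse_avg(img1, img2, alpha=0.5):
    # Weighted per-pixel average of two co-registered source images;
    # alpha weights the first image.
    return alpha * img1.astype(float) + (1.0 - alpha) * img2.astype(float)

def pixel_fuse_max(img1, img2):
    # Select, per pixel, the source value of larger magnitude.
    a, b = img1.astype(float), img2.astype(float)
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

The strict-registration requirement mentioned above matters here: both rules assume pixel (i, j) in each source depicts the same ground location.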
3.1.2. Feature Level Fusion Method. This requires the extraction of objects recognized in the various data sources, as shown in Fig.5. Salient features are extracted depending on their nature, for instance pixel intensities, edges or textures, and the corresponding features from the input images are then combined. Feature-level fusion is the intermediate level of image fusion: features such as edges, texture, shape, size, angle, speed and similarity of the region of interest are extracted from multiple images of the same geographic region by independent preprocessing. The extracted features are merged to form a joint feature set, which is then classified using statistical or other types of classifiers. Features from different source images, preprocessed with different schemes, are thus combined to form a decision.
Fig. 4. Schematic of Pixel Level Fusion
3.1.3. Decision-Level Fusion Method. This involves merging information at a higher level of abstraction: it combines the results from multiple algorithms to yield a final fused decision, as depicted in Fig.6. The input images are processed individually for information extraction, and the obtained information is then combined by applying decision rules to reinforce a common interpretation. Decision-level fusion is a high-level fusion whose results provide the basis for classification and control decision-making. In decision-level fusion the images are classified independently; the processed information is then refined by combining the information obtained from the different sources, and conflicts in the information are resolved according to certain decision rules. In the literature, two sorts of decision-level fusion are considered. Outputs from different types of classifiers for the same image may be combined to improve classification accuracy, or two complementary sources, such as optical imagery and radar data, can be classified independently and combined to produce a refined classification map. A variety of logical reasoning methods, statistical approaches and information-theoretic techniques can be used for decision-level fusion, for instance Bayesian inference, Dempster-Shafer evidence reasoning, voting schemes, cluster analysis, fuzzy set theory, neural networks and entropy-based methods. Decision-level fusion has good real-time performance and fault tolerance, but its pre-processing cost is higher. The data volume of decision-level fusion is the smallest and its resistance to interference the highest; the reliability and credibility of the fused results are high, and the performance of the multisensor system is improved.
Fig. 5. Schematic of Feature Level Fusion
Fig. 6. Schematic of Decision Level Fusion
3.2 Choosing an Image Fusion Algorithm for Remote Sensing Application
In this section, the selection of an image fusion algorithm for various remote sensing applications is discussed in brief.
3.2.1. In Intensity-Hue-Saturation (IHS) based image fusion, three bands of a multispectral image are transformed from the RGB domain into the IHS colour space. The panchromatic component is matched to the intensity of the IHS image and replaces the intensity component. A modified IHS fusion, developed for a better fit of the fused multispectral bands to the original data, can also be used. After matching, the panchromatic image replaces the intensity of the original IHS image, and the fused image is transformed back into the RGB colour space. This method works well with data from a single sensor, but for multitemporal or multisensor fusion the results are generally not satisfactory.
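The substitution described above can be sketched in its fast additive form: the intensity is taken as the band mean, the pan image is moment-matched to it, and the intensity difference is added back to every band. This is one common variant rather than the exact transform used in any particular package, and the function name is illustrative:

```python
import numpy as np

def ihs_fuse(ms_rgb, pan):
    # Fast IHS formulation: match the pan image to the intensity in
    # mean and standard deviation, then substitute it for the intensity.
    ms = ms_rgb.astype(float)                  # (H, W, 3) multispectral
    intensity = ms.mean(axis=2)                # intensity component
    p = pan.astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * intensity.std() + intensity.mean()
    return ms + (p - intensity)[..., None]     # add spatial detail to each band
```

Because the same intensity difference is added to all three bands, hue and saturation are approximately preserved while the spatial detail of the pan image is injected.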
3.2.2. Principal Component Analysis (PCA) is a statistical technique that
transforms a multivariate dataset of correlated variables into a dataset of
uncorrelated linear combinations of the original variables. For images, it
creates an uncorrelated feature space that can be used for further analysis
instead of the original multispectral feature space. Typically, the PCA
transform is applied to the multispectral bands and the panchromatic image is
histogram-matched to the first principal component. It then replaces the
selected component, and an inverse PCA transform takes the fused dataset back
into the original multispectral feature space. The advantage of the PCA-based
fusion method is that the number of bands is not restricted. It is, however, a
statistical method, which means it is sensitive to the area to be sharpened;
the fusion results may vary depending on the selected image subsets.
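A minimal numpy sketch of the PCA substitution described above (illustrative only; the eigen-decomposition of the band covariance matrix stands in for whatever PCA routine a given software package uses):

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA pan-sharpening sketch: ms is (H, W, B), pan is (H, W).
    PC1 is replaced by the histogram-matched PAN, then inverse PCA."""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    xc = x - mean
    # Eigen-decomposition of the band covariance matrix.
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]   # sort components by variance
    pcs = xc @ vecs                          # forward PCA
    pc1 = pcs[:, 0]
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
    pcs[:, 0] = p                            # replace PC1 with matched PAN
    return (pcs @ vecs.T + mean).reshape(h, w, b)   # inverse PCA
```

Because the replaced component is matched to the zero-mean first principal component, the per-band means of the fused image stay equal to those of the input, which is one simple sanity check for an implementation.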
3.2.3. The Ehlers fusion is based on an IHS transform coupled with filtering in
the Fourier domain. The method is extended to more than three bands by using
multiple IHS transforms until the number of bands is exhausted. A subsequent
Fourier transform of the intensity component and the panchromatic image allows
an adaptive filter design in the frequency domain. Using Fast Fourier Transform
(FFT) techniques, the spatial components to be enhanced or suppressed can be
accessed directly. The intensity spectrum is filtered with a low-pass filter,
while the panchromatic spectrum is filtered with a complementary high-pass
filter. After filtering, the images are converted back into the spatial domain
with an inverse FFT and added to form a fused intensity component with the
low-frequency information from the low-resolution multispectral image and the
high-frequency information from the high-resolution image. This new intensity
component and the original hue and saturation components of the multispectral
image form a new IHS image. These steps can be repeated with successive
three-band selections until all bands are fused with the panchromatic image.
The Ehlers fusion shows the best spectral preservation but also the highest
computation time.
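The Fourier-domain filtering step can be illustrated with numpy's FFT. This sketch uses an ideal circular low-pass mask and its complement as the high-pass; the actual Ehlers filter design is adaptive, so the fixed cutoff here is an assumed parameter.

```python
import numpy as np

def fft_fuse_intensity(intensity, pan, cutoff=0.125):
    """Ehlers-style Fourier step (sketch): keep low frequencies of the MS
    intensity and high frequencies of PAN, then invert back to space."""
    h, w = intensity.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fy ** 2 + fx ** 2)
    lowpass = (radius <= cutoff).astype(float)   # ideal low-pass mask
    highpass = 1.0 - lowpass                     # complementary high-pass
    fused_spectrum = (np.fft.fft2(intensity) * lowpass
                      + np.fft.fft2(pan) * highpass)
    return np.fft.ifft2(fused_spectrum).real
```

Because the two masks are exactly complementary, feeding the same image in as both inputs returns that image unchanged, which is a useful consistency check.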
3.2.4. The Wavelet Transform method is implemented in the Erdas Imagine
software package. For image fusion, a wavelet transform is applied to the
panchromatic image, resulting in a four-component image: a low-resolution
approximation component (LL) and three images of horizontal (HL), vertical
(LH), and diagonal (HH) wavelet coefficients, which contain information on
local spatial detail. The low-resolution component is then replaced by a
selected band of the multispectral image. This procedure is repeated for each
band until all bands are transformed. An inverse wavelet transform is applied
to the fused components to create the fused multispectral image. In general,
wavelet-fused images show good spectral preservation but poor spatial
improvement. The AWL method is one of the recent multiresolution wavelet-based
image fusion techniques. It was originally designed for a three-band
red-green-blue (RGB) multispectral image. In this method, the spectral
signature is preserved because the high-resolution panchromatic structure is
integrated into the luminance band of the original low-resolution
multispectral image; consequently, the method is defined for three bands. A
generalized version maintains the spectral signature of an n-band image in the
same way that AWL does for RGB images. This generalized method is called
proportional AWL (AWLP). It produces better results than standard wavelet
algorithms, but the spatial improvement is generally still not acceptable.
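A one-level Haar decomposition is enough to illustrate the LL replacement described above (an assumption for clarity: production methods typically use other wavelets and several decomposition levels):

```python
import numpy as np

def haar_fuse(pan, ms_band):
    """One-level Haar wavelet fusion sketch: decompose PAN into LL/HL/LH/HH,
    replace LL with the half-size MS band, then reconstruct.
    pan is (2H, 2W); ms_band is (H, W), already resampled to the LL grid."""
    a = pan[0::2, 0::2]; b = pan[0::2, 1::2]
    c = pan[1::2, 0::2]; d = pan[1::2, 1::2]
    HL = (a - b + c - d) / 2.0            # horizontal detail
    LH = (a + b - c - d) / 2.0            # vertical detail
    HH = (a - b - c + d) / 2.0            # diagonal detail
    LL = ms_band                          # inject spectral information
    # Inverse one-level Haar transform.
    out = np.empty_like(pan, dtype=float)
    out[0::2, 0::2] = (LL + HL + LH + HH) / 2.0
    out[0::2, 1::2] = (LL - HL + LH - HH) / 2.0
    out[1::2, 0::2] = (LL + HL - LH - HH) / 2.0
    out[1::2, 1::2] = (LL - HL - LH + HH) / 2.0
    return out
```

Replacing LL with PAN's own approximation component reconstructs PAN exactly, confirming the transform pair is consistent.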
3.2.5. The multiplicative method is derived from the four-component method.
Among the four possible arithmetic combination methods, only multiplication is
unlikely to distort the colors when transforming an intensity image into a
panchromatic image. The algorithm is therefore a simple multiplication of each
multispectral band with the panchromatic image. Its advantage is that it is
straightforward and simple. By multiplying the same information into all bands,
however, it creates spectral bands of higher correlation, which means it alters
the spectral characteristics of the original image data.
3.2.6. The Brovey transformation was developed to avoid the disadvantages of
the multiplicative method. It is a combination of arithmetic operations that
normalizes the spectral bands before they are multiplied with the panchromatic
image. The spectral properties, however, are usually not well preserved.
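The Brovey normalization can be sketched as follows (scaling conventions vary across implementations; this is one common form, with a small epsilon added to avoid division by zero in dark pixels):

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey transform sketch: each band is normalized by the sum of the
    MS bands before multiplication with PAN. ms: (H, W, B), pan: (H, W)."""
    total = ms.sum(axis=2, keepdims=True)     # per-pixel band sum
    return ms * pan[..., None] / (total + eps)
```

Note that when the panchromatic value happens to equal the band sum, each band is returned unchanged, so the transform only injects the ratio between PAN and the combined multispectral intensity.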
3.2.7. Color Normalization spectral sharpening is an extension of the Brovey
algorithm and groups the input image bands into spectral segments defined by the
spectral range of the panchromatic image. The corresponding band segments are
processed together. Each input band is multiplied by the sharpening band and then
normalized by dividing it by the sum of the input bands in the segment. This
method works well for data from one sensor, but if the spectral range of the
panchromatic image does not match the spectral range of the multispectral images
no spatial improvement is visible.
3.2.8. The Gram-Schmidt fusion first simulates a panchromatic band from the
lower-spatial-resolution spectral bands; this is typically accomplished by
averaging the multispectral bands. Subsequently, a Gram-Schmidt transform is
performed on the simulated panchromatic band and the multispectral bands, with
the simulated panchromatic band used as the first band. The high-spatial-
resolution panchromatic band then replaces the first Gram-Schmidt band.
Finally, an inverse Gram-Schmidt transform is applied to create the
pan-sharpened multispectral bands. This method usually produces good results
for fusing images from one sensor, but, like PCA, it is a statistical approach,
so the fusion results may vary depending on the selected datasets.
3.2.9. In High Pass Filtering (HPF) fusion, the ratio between the spatial
resolutions of the panchromatic and the multispectral image is calculated
first. A high-pass convolution filter kernel is created and used to filter the
high-resolution input data, with the size of the kernel based on the ratio. The
HPF image is added to each multispectral band. Before the summation, the HPF
image is weighted relative to the global standard deviation of the
multispectral bands, with the weight factors again computed from the ratio. As
a final step, a linear stretch is applied to the new multispectral image to
match the mean and standard deviation values of the original input
multispectral image. The method shows acceptable results for both multisensor
and multitemporal data, although the edges are sometimes emphasized too
strongly.
3.2.10. In the University of New Brunswick (UNB) fusion algorithm, a histogram
standardization is calculated for the multispectral and panchromatic bands of
the input images. The multispectral bands within the spectral range of the
panchromatic image are selected, and a regression analysis is computed using a
least-squares algorithm. The results are used as weights for the multispectral
bands: each band is multiplied by its corresponding weight and, after a
summation, a new synthesized image is produced. To create the fused image, each
standardized multispectral image is multiplied with the standardized
panchromatic image and divided by the synthesized image. This method was
designed for single-sensor, single-date images and does not deliver adequate
results for multisensor and multitemporal fusion. It is used as the standard
method for QuickBird pan-sharpening.
3.2.11. Neural networks are systems that seek to imitate the processes used in
biological nervous systems. A neural network consists of layers of processing
elements, or nodes, which may be interconnected in a variety of ways. A neural
network can be trained using a sample or training dataset, either supervised or
unsupervised depending on the training mode, to perform correct classifications
by systematically adjusting the weights in the activation function. The
activation function defines the processing in a single node. The ultimate goal
of neural network training is to minimize the cost or error function
for all possible cases through the input-output relationship. Neural networks
can be used to transform multisensor data into a joint declaration of identity
for a feature. Fig. 7 shows a four-layer network with each layer having several
processing elements.
Fig. 7. Four-layered Neural Network
3.3 Nature-Inspired Optimization Algorithms
Nature has inspired numerous scientists in many ways and is accordingly a rich
source of inspiration. Nowadays, most new algorithms are nature-inspired,
because they have been developed by drawing inspiration from nature. Even with
the emphasis on the source of inspiration, various levels of classification are
possible depending on further details such as the type of sub-source used. For
simplicity, the highest-level sources, such as biology, physics, or chemistry,
can be considered. In the most generic terms, the main source of inspiration is
nature; consequently, all such new algorithms can be referred to as
nature-inspired. By far, most nature-inspired algorithms are based on some
successful characteristics of biological systems. Hence, the largest fraction
of nature-inspired algorithms is biology-inspired, or bio-inspired for short.
Among bio-inspired algorithms, a special class of algorithms has been developed
by drawing inspiration from swarm intelligence, so a portion of the
bio-inspired algorithms can be called swarm-intelligence-based. Algorithms
based on swarm intelligence are among the most popular; good examples are ant
colony optimization, particle swarm optimization, cuckoo search, the bat
algorithm, and the firefly algorithm. Clearly, not all algorithms are based on
biological systems; many have been developed by drawing inspiration from
physical and chemical systems, and some are even based on music.
3.4 Swarm Intelligence
Swarm intelligence (SI) concerns the emergent, collective behavior of multiple
interacting agents that follow some simple rules. While each agent may be
considered unintelligent, the aggregate of many agents may exhibit
self-organizing behavior and can thus behave like a kind of collective
intelligence. Many algorithms have been developed by drawing inspiration from
swarm-intelligence systems in nature. All SI-based algorithms use multiple
agents, inspired by the collective behavior of social insects, such as ants,
termites, bees, and wasps, as well as by other animal societies such as flocks
of birds or schools of fish. Classical particle swarm optimization (PSO) uses
the swarming behavior of fish and birds, while the firefly algorithm (FA) uses
the flashing behavior of swarming fireflies. Cuckoo search (CS) is based on the
brood parasitism of some cuckoo species, while the bat algorithm uses the
echolocation of foraging bats. Ant colony optimization uses the interaction of
social insects (e.g., ants), while the class of bee algorithms is based
entirely on the foraging behavior of honey bees. SI-based algorithms are among
the most popular and widely used. There are many reasons for such popularity;
one is that SI-based algorithms typically share information among multiple
agents, so that self-organization, co-evolution, and learning during iterations
may provide the high efficiency of most SI-based algorithms. Another reason is
that multiple agents can be parallelized easily, so large-scale optimization
becomes more practical from the implementation point of view.
3.5 Benefits of Particle Swarm Optimization (PSO)
The advantages of PSO over other nature-inspired algorithms are:
1. PSO is based on swarm intelligence. It can be applied to both scientific
research and engineering practice.
2. PSO has no crossover and mutation operations. The search is carried out by
the velocity of the particles. Over the development of several generations,
only the most optimistic particle transmits information to the other particles,
and the speed of the search is fast.
3. The computation in PSO is very simple. Compared with other algorithms, it
offers greater optimization capability and can be completed easily.
4. PSO adopts a real-number code, which is decided directly by the solution.
The number of dimensions is equal to the constant of the solution.
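A minimal PSO implementation illustrates points 1-4 above. The inertia weight and acceleration coefficients are typical textbook values, not those used in the chapter, and the sphere function is just an illustrative fitness.

```python
import random

def pso(fitness, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch: minimizes `fitness` over `dim` real dimensions.
    Returns (gbest position, gbest value)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:               # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:              # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)         # simple convex test function
best, best_val = pso(sphere, dim=3)
```

Note point 2 in the list: the only update rule is the velocity equation, with no crossover or mutation step; the gbest term is how the best particle transmits information to the others.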
4 Machine Intelligence and Image Fusion
Machine Learning (ML) and Artificial Intelligence (AI) have advanced rapidly in
recent years. Techniques from both ML and AI have played a critical part in
image understanding, image fusion, image registration, and image segmentation.
Image retrieval and analysis techniques from ML extract information from images
and represent that information effectively and efficiently. These techniques
consist of conventional algorithms without feature learning, such as the
Support Vector Machine (SVM) and Neural Network (NN), and deep learning
algorithms, such as the Convolutional Neural Network (CNN), Recurrent Neural
Network (RNN), Long Short-Term Memory (LSTM), Extreme Learning Machine (ELM),
Generative Adversarial Networks (GANs), and so forth. The former algorithms are
limited in processing natural images in their raw form, are time-consuming,
rely on expert knowledge, and require a great deal of time for tuning the
features. The latter algorithms are fed with raw data, learn features
automatically, and are fast. These algorithms try to learn multiple levels of
abstraction and representation automatically from large sets of images that
exhibit the desired behavior of the data. The automated classification of
bucolic and farming regions in remote sensing images using traditional methods
has been demonstrated with significant accuracies for a considerable period of
time, but new advances in machine learning have ignited an explosion in deep
learning. CNN-based algorithms have demonstrated promising performance and
speed in various domains such as speech recognition, text recognition, lip
reading, computer-aided diagnosis, face recognition, drug discovery, and remote
sensing applications.
4.1 Why Convolutional Neural Networks?
Recent advances in machine learning have achieved promising results in many
challenging tasks. The state of the art in object detection is represented by
convolutional neural networks (CNNs), such as the Fast R-CNN algorithm. These
CNN-based methods improve detection performance significantly on several public
generic object detection datasets. CNNs have had a critical impact on remote
sensing applications and have achieved promising results in many difficult
object detection challenges. Compared with high-level image fusion, the
proposed method can achieve higher accuracy and computational efficiency. A CNN
consists of one or more convolutional layers, frequently with a subsampling
layer, which are followed by one or more fully connected layers as in a
standard neural network. The design of a CNN is inspired by the discovery of a
visual mechanism, the visual cortex, in the brain. The visual cortex contains a
large number of cells that are responsible for detecting light in small,
overlapping sub-regions of the visual field, called receptive fields. These
receptive fields act as local filters over
the input space, and the more complex cells have larger receptive fields. The
convolution layer in a CNN performs the function that is performed by the cells
in the visual cortex. A typical CNN is shown in Fig. 8. Each unit of a layer
receives inputs from a set of features located in a small neighborhood in the
previous layer, called a local receptive field. With local receptive fields,
units can extract elementary visual features such as oriented edges,
end-points, corners, and so on, which are then combined by the higher layers.
In the conventional model of pattern/image recognition, a hand-designed feature
extractor gathers relevant information from the input and eliminates irrelevant
variability. The extractor is followed by a trainable classifier, a standard
neural network that classifies feature vectors into classes. In a CNN, the
convolution layers play the role of the feature extractor, but they are not
hand-designed: the convolution filter kernel weights are determined as part of
the training process. Convolutional layers can extract local features because
they restrict the receptive fields of the hidden layers to be local.
Fig. 8. A typical 2 Stage Convolutional Neural Network
CNNs are used in a variety of areas, including image and pattern recognition,
speech recognition, natural language processing, and video analysis. There are
several reasons why convolutional neural networks are becoming important. In
traditional models for pattern recognition, feature extractors are
hand-designed. In CNNs, the weights of the convolutional layer used for feature
extraction and of the fully connected layer used for classification are
determined during the training process. The improved network structures of
CNNs lead to savings in memory requirements and computational complexity and,
at the same time, give better performance for applications where the input has
local correlation, e.g., image and speech. The large computational requirements
for training and evaluation of CNNs are sometimes met by graphics processing
units (GPUs), DSPs, or other silicon architectures optimized for high
throughput and low energy when executing the specific computational patterns of
CNNs. In fact, advanced processors, such as the Tensilica Vision P5 DSP for
Imaging and Computer Vision from Cadence, have a nearly ideal set of
computation and memory resources required for running CNNs at high efficiency.
4.2 Advantages of Convolutional Neural Networks
Convolutional neural networks are biologically motivated variants of the
multilayer perceptron (MLP), designed to imitate the behavior of a visual
cortex. These models mitigate the challenges posed by the MLP architecture by
exploiting the strong spatially local correlation present in natural images.
CNNs have the following distinguishing features:
1. Detection using a CNN is robust to distortions, such as changes in shape due
to the camera lens, different lighting conditions, different poses, the
presence of partial occlusions, horizontal and vertical shifts, and so forth.
Moreover, CNNs are shift-invariant since the same weight configuration is used
across space. In principle, it is possible to achieve shift invariance using
fully connected layers, but the result of training in that case is multiple
units with identical weight patterns at different locations of the input. To
learn these weight patterns, a large number of training instances would be
required to cover the space of possible variations.
2. In the same hypothetical scenario where a fully connected layer is used to
extract the features, an input image of size 32x32 and a hidden layer having
1000 features would require on the order of 10^6 coefficients, a huge memory
requirement. In a convolutional layer, the same coefficients are used across
different locations in the space, so the memory requirement is drastically
reduced.
3. In a standard neural network equivalent to a CNN, because the number of
parameters would be much higher, the training time would also increase
proportionately. In CNNs, since the number of parameters is reduced, training
time is proportionately reduced. Moreover, in practical training, a standard
neural network equivalent to a CNN would have more parameters, which would lead
to more noise being added during the training process.
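The memory argument in point 2 can be checked with simple arithmetic. The 5x5-kernel, 32-filter convolutional layer used for comparison is an illustrative assumption, not a configuration from the chapter.

```python
def fc_params(inputs, hidden):
    """Fully connected layer: one weight per (input, hidden) pair plus biases."""
    return inputs * hidden + hidden

def conv_params(kernel, in_ch, out_ch):
    """Convolutional layer: weights are shared across all spatial positions,
    so the count depends only on kernel size and channel counts."""
    return kernel * kernel * in_ch * out_ch + out_ch

dense = fc_params(32 * 32, 1000)   # the 32x32-input scenario from the text
conv = conv_params(5, 1, 32)       # e.g. 32 filters of size 5x5, 1 input channel
```

The dense layer needs 1,025,000 coefficients, on the order of 10^6 as stated, while the convolutional layer needs only 832, three orders of magnitude fewer.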
5 Implementation
As illustrated in Fig. 9, the whole fusion system is composed of four modules:
Module 1. An image fusion module, which fuses the panchromatic image and the
multispectral images, using particle swarm optimization of a convolutional
neural network, into an optimized fused image.
Module 2. An ROI classification and fine regression module, which is performed
to obtain the corresponding bucolic and farming regions.
Module 3. A qualitative and quantitative evaluation module, which evaluates the
quality of the optimized fused image with and without a reference image.
Module 4. A comparative evaluation module, which compares the classification of
bucolic and farming regions of the optimized fused image with and without a
reference image.
Initially, the panchromatic image and the multispectral image are fed as input
to the convolutional neural network in order to perform the fusion. Then
particle swarm optimization is performed for every 5x5 convolution, and a 2x2
subsampling is obtained, as illustrated in Fig. 10. At the outset, the Particle
Swarm Optimization (PSO) parameters are defined, and the fusion of the PAN
image and the MS image is achieved for the 5x5 optimized convolution. Then 2x2
subsampling optimization is applied for every 5x5 optimized convolution. The
fitness function is evaluated based on the required conditions for a maximum
number of iterations. If the number of iterations reaches the maximum or the
required condition is satisfied, the gbest is saved; otherwise the PSO is
updated and the fusion is performed again, and so on. Once the gbest is saved,
it is fed to the full-connection stage, i.e., the CNN classifier, to obtain the
optimized fused image of the PAN image and the MS image. Further details of
fusing panchromatic and multispectral images are given in Section 5.1. In the
second module, ROI classification and regression are applied to classify the
bucolic and farming regions, as described in Section 5.2. The third module
evaluates the quality of the optimized fused images with and without a
reference image, as described in Section 5.3. The fourth module is the
comparative evaluation of the bucolic and farming regions, dealt with in
Section 5.4.
Fig. 9. Block Diagram of Proposed Fusion System
Fig. 10. Particle Swarm Optimization (PSO) Framework
5.1 Performing Image Fusion of PAN and MS Images
Panchromatic and multispectral images from various sensors have been tested in
the proposed fusion system. A dataset containing 50 panchromatic images and 50
multispectral images from sensors such as WorldView, GeoEye, SPOT, QuickBird,
Landsat, and SeaStar, covering different scenes, has been utilized. Fusion was
also tested for the same scene taken at different times from different sensors.
The challenge is to classify the bucolic and farming regions of the fused
image. In general, the fused image obtained by fusing PAN and MS images from
day vision gives combined and detailed information, but the same does not hold
for night vision. This problem is overcome by the proposed fusion system, since
the fusion is based on an optimization method. The proposed system uses
particle swarm optimization due to its inherent stable convergence and low
computational complexity. The panchromatic image and the multispectral image
are fed as input to the convolutional neural network in order to perform the
fusion. Particle swarm optimization is then performed for each 5x5 convolution,
and its corresponding 2x2 subsampling is obtained. At the start,
the Particle Swarm Optimization parameters are defined and the fusion of the
PAN image and the MS image is achieved for the 5x5 optimized convolution. Then
2x2 subsampling optimization is applied for each 5x5 optimized convolution. The
fitness function is evaluated based on the required conditions for a maximum
number of iterations. If the number of iterations reaches the maximum or the
required condition is satisfied, the gbest is saved; otherwise the PSO is
updated and the fusion is performed again. Once the gbest is saved, it is fed
to the full-connection stage, i.e., the CNN classifier, to obtain the optimized
fused image of the PAN image and the MS image.
5.2 ROI Classification and Regression
An image decomposition strategy is considered whereby the Regions of Interest
(ROIs) are represented using a quad-tree representation, more specifically the
Minimum Bounding Rectangles (MBRs) surrounding the ROIs. The benefit offered is
that a quad-tree representation will maintain the structural information of the
ROI contained in the MBR. By applying a weighted frequent subgraph mining
algorithm, gSpan-ATW, to this representation, frequent subgraphs that occur
across the tree-represented set of MBRs can be identified. The identified
frequent subgraphs, each describing some part of the MBR in terms of size,
contour, color, intensity, and edge, can then be used to form the central
components of a feature space. This feature space can in turn be used to
describe a set of feature vectors for standard classification of bucolic and
farming regions, as shown in Fig. 11.
Fig. 11. ROI Based Classification and Regression Using Quadtree Approach
5.3 Evaluation of Optimized Fused Image
The assessment techniques are based on verification of the preservation of
spectral characteristics and the improvement of the spatial resolution. First,
the fused images are visually compared. The visual appearance may be subjective
and depends on the human interpreter, but the power of visual inspection as a
final check cannot be underestimated. Second, a number of statistical
evaluation methods are used to measure the color preservation. These methods
must be objective, reproducible, and of a quantitative nature. The following
quantitative evaluation strategies were utilized:
5.3.1 Correlation Coefficient (CC) between the original multispectral bands and
the fused bands. This value ranges from -1 to 1. The best correspondence
between fused and original image data is indicated by the highest correlation
values.
5.3.2 Per-pixel Deviation (PD). For this metric, it is necessary to degrade the
fused image to the spatial resolution of the original image. This image is then
subtracted from the original image on a per-pixel basis. As a final step, the
average deviation per pixel is computed, measured as a digital number based on
an 8-bit or 16-bit range. Here, zero is the best value.
5.3.3 Root Mean Square Error (RMSE) is computed from the differences in
standard deviation and mean between the fused and the original image. The best
possible value is again zero.
5.3.4 The Structural Similarity Index (SSIM) is a technique that combines a
comparison of luminance, contrast, and structure and is applied locally in an
8x8 square window. This window is moved pixel by pixel over the entire image.
At each step, the local statistics and the SSIM index are calculated within the
window. The values vary between 0 and 1; values close to 1 indicate the highest
correspondence with the original images. The goal is to find the fused image
with the best combination of spectral characteristics preservation and spatial
improvement.
5.3.5 High Pass Correlation (HCC) is the correlation between the original
panchromatic band and the fused bands after high-pass filtering. The high-pass
filter is applied to the panchromatic image and to each band of the fused
image. Then the correlation coefficients between the high-pass filtered bands
and the high-pass filtered panchromatic image are calculated.
5.3.6 Edge Detection (ED) in the panchromatic image and the fused multispectral
bands. For this, a Sobel filter is chosen and a visual analysis of the edges
detected in the panchromatic and the fused multispectral images is carried out.
This is done independently for each band. The value is given in percent and
varies between 0 and 100; 100% means that all of the edges in the panchromatic
image were detected in the fused image.
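Two of the simpler metrics above, CC and RMSE, can be sketched directly in numpy (the RMSE here is taken as the usual per-pixel root mean square of the difference image, which is one common convention):

```python
import numpy as np

def correlation_coefficient(a, b):
    """CC between two bands; values near 1.0 mean the best correspondence."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

def rmse(a, b):
    """Root mean square error between two bands; 0.0 is the best value."""
    diff = a.astype(float) - b.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```

Note the two metrics are complementary: adding a constant offset to a band leaves CC at 1.0 (the bands remain perfectly correlated) while RMSE reports exactly that offset.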
TABLE I. EVALUATION OF FUSED IMAGE QUALITY

Type                                                               CC      PD      RMSE    SSIM    HCC     ED      Qualitative Assessment
With Reference Image (WorldView-1, 0.5m Night Vision Image)       0.9261  0.0141  0.0011  0.8025  0.8453  97.12%  Best
With Reference Image (WorldView-2, 0.5m Daylight Vision Image)    0.9142  0.0125  0.0015  0.9001  0.9678  98.34%  Best
Without Reference Image (GeoEye-2, 0.5m Daylight Vision Image)    0.9522  0.0016  0.0014  0.9911  0.9541  99.12%  Best
Without Reference Image (QuickBird-1, 0.7m Daylight Vision Image) 0.9691  0.0019  0.0009  0.9898  0.9346  98.56%  Best

CC – Correlation coefficient; PD – Per-pixel deviation; RMSE – Root mean square error;
SSIM – Structure similarity index; HCC – High pass correlation; ED – Edge detection
TABLE II. COMPARISON OF BUCOLIC AND FARMING REGION CLASSIFICATION

Type                                                               Overall Accuracy  Kappa Index*  Qualitative Assessment
With Reference Image (WorldView-1, 0.5m Night Vision Image)       89.25%            0.89          Best
With Reference Image (WorldView-2, 0.5m Daylight Vision Image)    89.92%            0.92          Best
Without Reference Image (GeoEye-2, 0.5m Daylight Vision Image)    93.54%            0.97          Best
Without Reference Image (QuickBird-1, 0.7m Daylight Vision Image) 94.56%            0.96          Best

*Kappa: Poor classification = less than 0.20; Fair classification = 0.20 to 0.40;
Moderate classification = 0.40 to 0.60; Good classification = 0.60 to 0.80;
Very good classification = 0.80 to 1.00
5.4 Effective Comparison of Bucolic and Farming Region
Farming is the foundation of the national economy and an essential sector for
safeguarding food security. Timely availability of agricultural information is
significant for making informed decisions on food supply and human well-being.
Many countries employ space technology together with ground-based observations
to issue periodic reports on production data and to work toward sustainable
farming. Satellite-based optical and radar imagery are widely used in
agricultural research; radar imagery is particularly useful during the rainy
season, as it is unaffected by cloud cover. The joint use of geospatial tools
with crop models and observation networks enables reasonable crop yield
forecasts, drought assessment and monitoring for specific agricultural needs.
The assessment method for the comparative evaluation of bucolic and farming
regions relies on the overall accuracy and the kappa index. The overall
accuracy measures the efficacy of region classification: the higher the
accuracy, the better the classification. Kappa is a measure of agreement
between two raters and is at most equal to 1; a value of 1 indicates perfect
classification, and values below 1 indicate a less than perfect classification.
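The two criteria can be sketched from a classification confusion matrix as follows. These are the standard definitions of overall accuracy and Cohen's kappa, not the chapter's own code:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n                          # overall accuracy p_o
    expected = (confusion.sum(0) @ confusion.sum(1)) / n ** 2   # chance agreement p_e
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa
```

For example, a two-class matrix [[45, 5], [5, 45]] gives an overall accuracy of 0.90 and a kappa of 0.80, i.e. a "very good" classification on the scale above.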
6 Result and Analysis
Table I and Table II illustrate samples of the evaluation results for the
optimized fused image quality and for the comparison of bucolic and farming
region classification, respectively, tested on the proposed fusion system. The
quality of the optimized fused image is evaluated both with and without a
reference image. WorldView-1, 0.5m and WorldView-2, 0.5m represent the same
scene captured under night vision and daylight vision respectively; here, the
quality evaluation is computed with respect to a reference image. The results
show that the evaluation metrics (CC – correlation coefficient, PD – per-pixel
deviation, RMSE – root mean square error, SSIM – structural similarity index,
HCC – high-pass correlation, ED – edge detection) were found to be relatively
convincing, and their quantitative assessment is at its best, as shown in
Fig. 12-14 and Fig. 15-17. Conversely, GeoEye-2, 0.5m and QuickBird-1, 0.7m
represent different scenes captured under daylight vision; here, the fused
quality is evaluated without a reference image. The test results show that the
evaluation metrics were again convincing and the quantitative evaluation is at
its best, as shown in Fig. 18-20 and Fig. 21-23. Similarly, the entire dataset
is simulated and tested for the quality of the optimized fused image. The
comparison of bucolic and farming regions is evaluated with respect to the
overall accuracy and the kappa index. The classification of bucolic and farming
regions, performed on the ROI-based quadtree, shows high efficiency in both
overall accuracy and kappa index, as observed in Fig. 24-27, which clearly and
precisely indicate the farming region, road map, bucolic region and land cover
respectively.
Fig. 12. WorldView-1, 0.5m Night Vision Panchromatic Image
Fig. 13. WorldView-1, 0.5m Night Vision Multispectral Image
Fig. 14. Optimized Fused WorldView-1, 0.5m Night Vision Image
Fig. 15. WorldView-2, 0.5m Daylight Vision Panchromatic Image
Fig. 16. WorldView-2, 0.5m Daylight Vision Multispectral Image
Fig. 17. Optimized Fused WorldView-2, 0.5m Daylight Vision Image
Fig. 21. QuickBird-1, 0.7m Daylight Vision Panchromatic Image
Fig. 22. QuickBird-1, 0.7m Daylight Vision Multispectral Image
Fig. 23. Optimized Fused QuickBird-1, 0.7m Daylight Vision Image
Fig. 24. Bucolic and Farming Region Classification of Optimized Fused
WorldView-1, 0.5m Night Vision
Fig. 25. Bucolic and Farming Region Classification of Optimized Fused
WorldView-2, 0.5m Daylight Vision
Fig. 26. Bucolic and Farming Region Classification of Optimized Fused
GeoEye-2, 0.5m Daylight Vision
Fig. 27. Bucolic and Farming Region Classification of Optimized Fused
QuickBird-1, 0.7m Daylight Vision
7 Conclusion
Machine learning is one plausible strategy to handle the high-dimensional
nature of satellite sensor data. It was consistently found that machine
learning based convolutional neural networks deliver the best image fusion
quality with respect to spatial quality, spectral quality and global quality,
both with and without a reference image. Consequently, only with a well-fused
image can the superlative classification of bucolic and farming regions in
remote sensing imagery be practically effective. In general, the image obtained
by fusing PAN and MS images under daylight vision gives combined and detailed
information, but the same does not hold for night vision. This issue is
overcome by the proposed fusion framework, since the fusion relies on an
optimization technique using particle swarm optimization. Although the proposed
fusion framework provides higher precision and productivity, it suffers from
partial optimism and cannot resolve the problems of scattering.
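The chapter's PSO configuration (swarm size, inertia weight, and the CNN hyperparameters being optimized) is not restated here; the sketch below shows only the generic PSO update rule that such an optimization builds on, with illustrative parameter values:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia,
    cognitive pull (toward pbest) and social pull (toward gbest)."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The same loop applies when f evaluates a fusion-quality objective over CNN hyperparameters; the "partial optimism" noted above corresponds to the swarm's tendency to over-trust gbest and converge prematurely.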