Polarimetric SAR (POLSAR) and multispectral images capture different characteristics of the imaged objects: multispectral imagery carries information about surface material, while POLSAR carries information about the geometrical and physical properties of the objects. Merging the two should resolve many of the object recognition problems that arise when they are used separately. In this paper, we propose a new scheme for fusing a full-polarization radar image (POLSAR) with a multispectral optical satellite image (Egyptsat). The proposed scheme is based on the Non-Subsampled Shearlet Transform (NSST) and a multi-channel Pulse Coupled Neural Network (m-PCNN). We use the NSST to decompose images into low-frequency and band-pass sub-band coefficients. For the low-frequency coefficients, a fusion rule is proposed based on local energy and the dispersion index. For the band-pass sub-band coefficients, the m-PCNN guides how the fused coefficients are computed from image textural information.
The proposed method is applied to three batches of Egyptsat (red, green, and infrared bands) and RADARSAT-2 (C-band full-polarimetric: HH, HV, and VV polarizations) images. The batches are selected to respond differently to the different polarizations. Visual assessment of the fused images shows excellent clarity and delineation of the different objects, and quantitative evaluations show that the proposed method outperforms the other data fusion methods.
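The low-frequency fusion rule can be illustrated with a small numpy sketch. This is an assumption-laden illustration, not the authors' exact rule: the 3x3 window, the definition of the dispersion index as variance divided by mean, and the equal weighting of the two measures are all choices made here for demonstration.

```python
import numpy as np

def local_stats(coeff, k=3):
    """Local energy and dispersion index over a k x k sliding window
    (window size k=3 is an assumption, not taken from the paper)."""
    pad = k // 2
    p = np.pad(coeff, pad, mode="reflect")
    h, w = coeff.shape
    energy = np.zeros((h, w))
    disp = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i + k, j:j + k]
            energy[i, j] = np.sum(win ** 2)
            mu = win.mean()
            # dispersion index = variance / mean (assumed definition)
            disp[i, j] = win.var() / mu if mu != 0 else 0.0
    return energy, disp

def fuse_low_freq(a, b):
    """Per pixel, keep the low-frequency coefficient whose neighborhood
    has the larger combined energy/dispersion score (equal weights assumed)."""
    ea, da = local_stats(a)
    eb, db = local_stats(b)
    return np.where(ea + da >= eb + db, a, b)
```

A coefficient map with more local activity (energy and relative spread) wins the per-pixel selection; real implementations would normalize the two measures before combining them.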
OBIA on Coastal Landform Based on Structure Tensor (csandit)
This paper presents an OBIA method based on the structure tensor to identify complex coastal landforms. First, a Hessian matrix is built by Gabor filtering and the multiscale structure tensor is computed. Edge information is extracted from the trace of the structure tensor, and a watershed segmentation of the image is performed. Then textons are built and a texton histogram is created. Finally, the results are obtained by maximum likelihood classification with KL divergence as the similarity measure. The findings show that the structure tensor can capture multiscale, all-direction information with little data redundancy, and that the method described here achieves high classification accuracy.
Sub-windowed laser speckle image velocimetry by fast fourier transform technique
Abstract
In this work, laser speckle velocimetry, a unique optical method for measuring the velocity of fluid flow, is described. A laser sheet is formed and illuminated onto microscopic seeded particles to produce a speckle pattern at the recording plane. Double-frame, single-exposure speckle images are captured such that the second speckle image is shifted exactly in a known direction. The auto-correlation method suffers from directional ambiguity of the flow; to resolve this, the spatial shift of the second image is predetermined. Cross-correlation of the sub-interrogation areas is obtained by the Fast Fourier Transform technique, and four sub-windows are processed to obtain precise velocity information with vector map analysis.
International Journal of Engineering Research and Applications (IJERA) aims to cover the latest outstanding developments in all fields of engineering technology and science.
International Journal of Engineering Research and Applications (IJERA) is a team of researchers, not a publication service or private publisher running journals for monetary benefit; we are an association of scientists and academics who focus only on supporting authors who want to publish their work. The articles published in our journal can be accessed online, and all articles are archived for real-time access.
Our journal system primarily aims to bring out the research talent and the work of scientists, academics, engineers, practitioners, scholars, and postgraduate students of engineering and science. The journal covers scientific research in a broad sense rather than a niche area, enabling researchers from various verticals to publish their papers. It also aims to provide a platform for researchers to publish in a shorter time, enabling them to continue their work. All articles published are freely available to scientific researchers in government agencies, educators, and the general public. We are making serious efforts to promote our journal across the globe in various ways, and we are confident it will act as a scientific platform for all researchers to publish their work online.
Semi-Automatic Classification Algorithm: The differences between Minimum Dist... (Fatwa Ramdani)
This remote sensing e-course will focus on comparing the Minimum Distance, Maximum Likelihood, and Spectral Angle Mapper algorithms for semi-automatic classification of Landsat 8 OLI imagery in QGIS. The course will explain the concepts, demonstrate the algorithms in QGIS, and have students complete exercises to classify land cover and assess accuracy. Minimum Distance classifies pixels based on distance to class means, Maximum Likelihood uses probability, and Spectral Angle Mapper compares spectral angles insensitive to illumination.
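Two of the three decision rules can be sketched in a few lines of numpy. This is an illustrative implementation, not the QGIS plugin's code; the maximum likelihood rule is omitted because it additionally needs per-class covariance matrices.

```python
import numpy as np

def minimum_distance(pixels, means):
    """Assign each pixel to the class with the nearest mean (Euclidean).
    pixels: (n, bands); means: (classes, bands)."""
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def spectral_angle(pixels, means):
    """Assign each pixel to the class whose mean subtends the smallest
    spectral angle; insensitive to overall brightness (illumination)."""
    num = pixels @ means.T
    den = (np.linalg.norm(pixels, axis=1, keepdims=True)
           * np.linalg.norm(means, axis=1)[None, :])
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.argmin(ang, axis=1)
```

The angle between spectra is unchanged when a pixel is uniformly brightened or darkened, which is exactly the illumination insensitivity the course attributes to Spectral Angle Mapper.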
Different Image Fusion Techniques – A Critical Review (IJMER)
This document reviews and compares different image fusion techniques, including spatial domain and transform domain methods. Spatial domain techniques like simple averaging and maximum selection are disadvantageous because they can produce spatial distortions and reduce contrast in the fused image. Transform domain methods like discrete wavelet transform (DWT) and principal component analysis (PCA) perform better by preserving more spatial and spectral information. DWT fusion in particular minimizes spectral distortion and improves the signal-to-noise ratio over pixel-based approaches, though it results in lower spatial resolution. Tables in the document provide quantitative comparisons of different techniques using performance measures like peak signal-to-noise ratio, entropy, and normalized cross-correlation.
Abstract: Many applications, such as robot navigation, defense, medical imaging, and remote sensing, perform various processing tasks that become easier when all objects in different images of the same scene are combined into a single fused image. In this paper, we propose a fast and effective method for image fusion. The proposed method derives the intensity-based variations, both large and small scale, from the source images; guided filtering is employed for this extraction. A Gaussian and Laplacian pyramid approach is then used to fuse the layers obtained. Experimental results demonstrate that the proposed method achieves better fusion performance on all sets of images, and the results clearly indicate the feasibility of the proposed approach.
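A two-scale version of this idea can be sketched as follows. Note the assumptions: a plain mean (box) filter stands in for the guided filter, only one scale of base/detail separation is used instead of a full Gaussian/Laplacian pyramid, and the fusion rules (average the base layers, keep the stronger detail coefficient) are common defaults, not necessarily the paper's.

```python
import numpy as np

def box_filter(img, r=2):
    """Mean filter used here as a stand-in for the guided filter."""
    k = 2 * r + 1
    p = np.pad(img, r, mode="reflect")
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def two_scale_fuse(a, b):
    """Two-scale fusion: average the base (large-scale) layers and keep
    the stronger detail (small-scale) coefficient at each pixel."""
    base_a, base_b = box_filter(a), box_filter(b)
    det_a, det_b = a - base_a, b - base_b
    base = 0.5 * (base_a + base_b)
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)
    return base + detail
```

Replacing the box filter with an edge-preserving guided filter keeps strong edges in the base layer, which is the main reason the paper prefers it.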
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Performance analysis of contourlet based hyperspectral image fusion method (ijitjournal)
Recently, the contourlet transform has been widely used in hyperspectral image fusion owing to its advantages, such as high directionality and anisotropy, and studies show that contourlet-based fusion methods perform better than existing conventional methods, including wavelet-based fusion. Few studies have comparatively analyzed the performance of contourlet-based fusion methods; furthermore, none has analyzed them by focusing on their distinct transform mechanisms, or compared the original contourlet transform against its upgraded versions. In this paper, we investigate three kinds of contourlet transform: i) the original contourlet transform, ii) the nonsubsampled contourlet transform, and iii) the contourlet transform with sharp frequency localization. The latter two were developed to overcome the major drawbacks of the original transform, so it is necessary and beneficial to see how they perform in the context of hyperspectral image fusion. Our comparative analysis shows that the latter two transforms outperform the original contourlet transform in terms of increasing spatial resolution and preserving spectral information.
This remote sensing e-course focuses on principal component analysis (PCA) and classification techniques using remotely sensed SPOT 6 and Landsat 8 data. The course will illustrate how to analyze and classify the satellite imagery for land use mapping using open source GRASS software. Students will learn about PCA, how it is calculated in GRASS, and its benefits for classification. Exercises will have students run PCA on SPOT6 data to determine optimal band ratios for classification and produce a land use map.
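The PCA step the course teaches in GRASS can be sketched directly in numpy; the (bands, height, width) layout and the eigendecomposition of the band covariance matrix are the standard formulation, not code from the course.

```python
import numpy as np

def image_pca(bands):
    """PCA over a (n_bands, h, w) image stack.
    Returns the component images and their variances, ordered by
    decreasing explained variance."""
    n, h, w = bands.shape
    X = bands.reshape(n, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)      # centre each band
    cov = np.cov(X)                          # (n, n) band covariance
    vals, vecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    pcs = vecs[:, order].T @ X               # project onto components
    return pcs.reshape(n, h, w), vals[order]
```

When two bands are strongly correlated, almost all of the variance collapses into the first components, which is why PCA is useful for choosing inputs to classification.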
A new approach of edge detection in sar images using region based active cont... (eSAT Journals)
Abstract: This paper presents a new methodology for edge detection in complex radar images. The approach consists of an edge improvisation algorithm followed by edge detection. The highly heterogeneous nature of complex radar images makes edge enhancement necessary before edge detection, which justifies the use of the discrete wavelet transform in the edge improvisation algorithm. A region-based active contour model is then used as the edge detection algorithm. The paper proposes a distribution-fitting energy with a level set function and neighborhood means and variances as variables. The performance is tested on different images and the results are analyzed. Keywords: Edge detection, Edge improvisation, Synthetic Aperture Radar (SAR), wavelet transforms.
This lecture is about the particle image velocimetry technique. It includes discussion of the basic elements of a PIV setup: image capturing, laser light sheets, synchronization, and correlation analysis.
The document describes a method for multiresolution image fusion using nonsubsampled contourlet transform (NSCT). NSCT provides multiresolution analysis with shift invariance and directionality. The method uses NSCT to learn edges from a high-resolution panchromatic image and fuse it with low-resolution multispectral images. Maximum a posteriori estimation with a Markov random field prior is used to optimize the fused image. Experimental results on fusing QuickBird satellite images show the proposed method produces higher quality fused images compared to other fusion methods.
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo... (DR.P.S.JAGADEESH KUMAR)
This summarizes a document about classifying rural and agricultural areas using neural networks and remote sensing imagery. It discusses extracting texture features from gray scale and multispectral images to train a neural network for classification. Different features like histogram pixel intensity, texture constraints, and spatial pixel matrices were used. The neural network was able to effectively classify rural and agricultural regions by leveraging these extracted texture features from aerial images.
This document summarizes a proposed method for super-resolution of multispectral images using principal component analysis. It begins with background on multispectral imaging and issues with resolution. The proposed method first uses PCA to reduce the dimensionality of the multispectral data. It then learns edge details from a high-resolution database by matching blocks of the principal components. After learning, the modified principal components are inverse transformed to generate a higher resolution multispectral image. The method is tested on real multispectral data sets and shown to reconstruct higher resolution images.
This document discusses image fusion techniques at different levels of abstraction: pixel level, feature level, and decision level. It describes various fusion methods including numerical (e.g. multiplicative, Brovey), color related (e.g. IHS), statistical (e.g. PCA, Gram Schmidt), and feature level (e.g. Ehlers) techniques. Both qualitative (visual) and quantitative (statistical measures like RMSE, correlation coefficient, entropy) methods to assess fusion quality are outlined. Image fusion has applications in improving classification and displaying sharper resolution images.
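The quantitative measures mentioned (RMSE, correlation coefficient, entropy) can be sketched as follows; the 256-level histogram used for entropy is an assumption appropriate for 8-bit imagery.

```python
import numpy as np

def rmse(ref, fused):
    """Root-mean-square error between a reference and a fused image."""
    return float(np.sqrt(np.mean((ref.astype(float) - fused) ** 2)))

def correlation(ref, fused):
    """Pearson correlation coefficient between the two images."""
    return float(np.corrcoef(ref.ravel(), fused.ravel())[0, 1])

def entropy(img, levels=256):
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

A good fusion result keeps RMSE against each source low and correlation high while increasing entropy, i.e. information content.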
This document discusses band ratioing, image differencing, and principal and canonical component analysis techniques in remote sensing. Band ratioing involves dividing pixel values in one band by another band to enhance spectral differences. Image differencing calculates differences between images after alignment. Principal component analysis transforms correlated spectral data into fewer uncorrelated bands retaining most information, while canonical component analysis aims to maximize separability of user-defined features. These techniques can help analyze multispectral and hyperspectral remote sensing data.
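The band ratioing and image differencing operations described above are one-liners; this sketch adds a small epsilon to guard against division by zero, and the NDVI formula is given as a familiar example of a normalized band ratio.

```python
import numpy as np

def band_ratio(b1, b2, eps=1e-6):
    """Per-pixel ratio of two bands to enhance spectral differences."""
    return b1.astype(float) / (b2.astype(float) + eps)

def normalized_difference(b1, b2, eps=1e-6):
    """Normalized difference index, e.g. NDVI = (NIR - Red)/(NIR + Red)."""
    b1 = b1.astype(float)
    b2 = b2.astype(float)
    return (b1 - b2) / (b1 + b2 + eps)

def image_difference(t1, t2):
    """Change detection by simple differencing of co-registered images
    acquired at two dates."""
    return t2.astype(float) - t1.astype(float)
```

As the document notes, differencing is only meaningful after the two images have been aligned (co-registered).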
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWS (ijcga)
This document presents an appearance-based method for representing and rendering cast shadows without explicit geometric modeling. A cubemap-like illumination array is constructed to sample shadow images on a plane. The sampled object and shadow images are represented using Haar wavelets. This allows rendering of shadows onto an arbitrary 3D background by linearly combining the wavelet basis images based on the scene geometry and lighting. Experiments demonstrate soft, realistic shadows can be rendered this way under novel illumination distributions specified by environment maps.
Detection of Bridges using Different Types of High Resolution Satellite Images (idescitation)
Automatic detection of geographical objects such as roads, buildings, and bridges from remote sensing imagery is meaningful but difficult work. Bridges over water are typical geographical objects whose automatic detection is of great significance for many applications. Finding a Region Of Interest (ROI) containing only water areas is the most crucial task in bridge detection. This can be done with image processing / soft computing methods on images in the spatial domain, or with the Normalized Difference Water Index (NDWI) on images in the spectral domain. We have developed an efficient algorithm for bridge detection in which the ROI segmentation is done using both methods. Exact locations of bridges are obtained using knowledge models and the spatial resolution of the image. These knowledge models are applied in the algorithm in such a way that the thresholds are fixed automatically depending on the quality of the image. Using the algorithm, any type of bridge can be extracted irrespective of its inclination and shape.
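The NDWI route to the water ROI can be sketched as follows. The paper fixes its thresholds automatically from image quality; the fixed cut-off at zero used here is a simplifying assumption for illustration.

```python
import numpy as np

def ndwi(green, nir, eps=1e-6):
    """Normalized Difference Water Index: (Green - NIR) / (Green + NIR).
    Water reflects green light and strongly absorbs near-infrared."""
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / (g + n + eps)

def water_mask(green, nir, threshold=0.0):
    """Binary water ROI: NDWI above the threshold marks water pixels
    (fixed threshold assumed here; the paper derives it adaptively)."""
    return ndwi(green, nir) > threshold
```

Bridges then appear as narrow non-water gaps crossing the water mask, which is what the knowledge models in the paper exploit.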
Remotely sensed image segmentation using multiphase level set acm (Kriti Bajpai)
Satellite imagery is used in various research domains, but these images often suffer from quality issues. They can be improved by image enhancement algorithms in terms of contrast, brightness, noise reduction, and so on. These algorithms are employed to focus, sharpen, or smooth an image so that its attributes can be displayed and examined; hence, the objective of image enhancement depends on the specific application. The objective of this paper is to provide brief information about image enhancement techniques that yield progressive and optimal results for remote-sensing satellite imagery.
An Efficient K-Nearest Neighbors Based Approach for Classifying Land Cover Re... (IDES Editor)
In recent times, researchers in the remote
sensing community have been greatly interested in
utilizing hyperspectral data for in-depth analysis of
Earth’s surface. In general, hyperspectral imaging comes
with high dimensional data, which necessitates a pressing
need for efficient approaches that can effectively process
these high-dimensional data. In this paper, we present
an efficient approach for the analysis of hyperspectral
data by incorporating the concepts of Non-linear manifold
learning and k-nearest neighbor (k-NN). Instead of
dealing with the high dimensional feature space directly,
the proposed approach employs Non-linear manifold
learning that determines a low-dimensional embedding of
the original high dimensional data by computing the
geometric distances between the samples. Initially, the
dimensionality of the hyperspectral data is reduced to a
pairwise distance matrix by making use of the Johnson's
shortest path algorithm and Multidimensional scaling
(MDS). Subsequently, based on the k-nearest neighbors,
the classification of the land cover regions in the
hyperspectral data is achieved. The proposed k-NN based
approach is evaluated using the hyperspectral data
collected by the NASA’s (National Aeronautics and Space
Administration) AVIRIS (Airborne Visible/Infrared
Imaging Spectrometer) from Kennedy Space Center,
Florida. The classification accuracies of the proposed k-NN
based approach demonstrate its effectiveness in land
cover classification of hyperspectral data.
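The final classification step, after the manifold-learning embedding, can be sketched as a majority vote over Euclidean neighbors in the reduced feature space. The embedding itself (shortest-path geodesic distances plus MDS) is omitted here; this is an illustrative k-NN only, not the authors' pipeline.

```python
import numpy as np

def knn_classify(train_X, train_y, test_X, k=3):
    """k-NN majority vote in an (already reduced) feature space.
    train_X: (n, d); train_y: (n,) integer labels; test_X: (m, d)."""
    # pairwise Euclidean distances, test samples x training samples
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]      # k nearest training samples
    labels = train_y[idx]
    # majority vote per test sample
    return np.array([np.bincount(row).argmax() for row in labels])
```

Working in the low-dimensional embedding keeps the distance computation cheap, which is the point of reducing the hyperspectral data first.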
International Journal of Engineering and Science Invention (IJESI) – inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation and the impact of new technologies, and will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Panchromatic and Multispectral Remote Sensing Image Fusion Using Machine Lear... (DR.P.S.JAGADEESH KUMAR)
This document discusses machine learning techniques for fusing panchromatic and multispectral remote sensing images to classify rural and farming regions. It provides background on panchromatic and multispectral imaging systems and describes common fusion algorithms like IHS transform and PCA. The authors applied machine learning methods to fuse high resolution panchromatic and low resolution multispectral images. Qualitative and quantitative assessment showed the machine learning approach provided better performance than other fusion algorithms in enhancing image quality and enabling effective classification of rural and farming areas.
A New Approach of Medical Image Fusion using Discrete Wavelet Transform (IDES Editor)
MRI-PET medical image fusion has important clinical significance. Medical image fusion is the important step after registration, providing an integrated display of the two images. The PET image shows brain function at low spatial resolution, while the MRI image shows brain tissue anatomy and contains no functional information. Hence, a perfect fused image should contain both the functional information and more spatial detail, with no spatial or color distortion. The DWT coefficients of the MRI-PET intensity values are fused based on the even-degree method and the cross-correlation method. The performance of the proposed image fusion scheme is evaluated with PSNR and RMSE, and it is also compared with existing techniques.
The document evaluates and compares the performance of three background subtraction algorithms: frame difference, statistical approach for real-time robust background subtraction and shadow detection, and adaptive background mixture models for real-time tracking. The frame difference method is the simplest but depends on threshold selection, while the statistical approach provides average accuracy and speed and the adaptive mixture models method provides the best accuracy but highest computational cost. Experimental results on video data demonstrate the tradeoffs between accuracy, computational requirements, and ability to handle challenges like lighting changes for each algorithm.
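The frame-difference baseline, whose sensitivity to threshold selection the document notes, can be sketched in a few lines; the threshold value of 25 grey levels here is an arbitrary illustrative choice, not one from the evaluation.

```python
import numpy as np

def frame_difference(prev, curr, threshold=25):
    """Foreground mask by absolute frame differencing. The result
    depends strongly on the threshold, as noted above."""
    # cast to int so uint8 subtraction cannot wrap around
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > threshold
```

The statistical and mixture-model methods replace the single global threshold with a per-pixel background model, which is what buys their extra accuracy at extra computational cost.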
Attenuation correction for hybrid PET/MR imaging frameworks and dose planning for MR-based radiation treatment remain challenging because of the lack of high-energy photon attenuation information. We present a new method that uses learned nonlinear local descriptors and feature matching to predict pseudo-CT images from T1w and T2w MRI data. The nonlinear local descriptors are obtained by projecting the linear descriptors into a nonlinear high-dimensional space using an explicit feature map and a low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched within a constrained spatial range of the MR images in the training dataset. The pseudo-CT patches are then estimated by k-nearest-neighbor regression. The proposed method for pseudo-CT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects.
上海必和 (Shanghai Bihe): Advancements in hyperspectral and multi-spectral imaging (algous)
This document discusses advancements in hyperspectral and multi-spectral imaging. It begins with an abstract describing how a spectrograph's design impacts its performance. It then provides an introduction to hyperspectral imaging, describing its use in applications such as agriculture, forensics, and biomedical research. The document emphasizes that hyperspectral sensors require maintaining precise spatial and spectral integrity over a wide field of view. It evaluates different types of spectrograph designs and their ability to accurately reproduce spectral images across a focal plane without distortion or corruption between wavelengths and spatial positions.
This document discusses wavelet-based image fusion techniques. Image fusion combines information from multiple images of the same scene to create a fused image that is more informative than any single input image. The wavelet transform decomposes images into different frequency bands, and image fusion algorithms merge the corresponding bands from input images. Common fusion rules include choosing the maximum, minimum, mean, or a value from one image at each band location. The inverse wavelet transform then reconstructs the fused image. Wavelet-based fusion can integrate high spatial and high spectral information from images like panchromatic and multispectral satellite data.
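A one-level Haar decomposition with a mean rule on the approximation band and a max-absolute rule on the detail bands illustrates the scheme; real systems typically use deeper decompositions and other wavelets, and the choice of fusion rule per band is an assumption here.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = img.astype(float)
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0      # row averages
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0      # row differences
    ll = (lo[::2] + lo[1::2]) / 2.0
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[::2], lo[1::2] = ll + lh, ll - lh
    hi = np.empty_like(lo)
    hi[::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, ::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def wavelet_fuse(a, b):
    """Mean rule on the approximation band, max-absolute on details."""
    ca, cb = haar2(a), haar2(b)
    ll = 0.5 * (ca[0] + cb[0])
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(ca[1:], cb[1:])]
    return ihaar2(ll, *details)
```

Averaging the approximation band blends the coarse spectral content while the max-absolute rule keeps the sharpest edges from either input, matching the pan/multispectral use case described above.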
This document proposes a convolutional neural network (CNN) to automatically classify aerial and remote sensing images. The CNN has six layers - three convolutional layers to extract visual features from the images at different levels of abstraction, two fully-connected layers to integrate the extracted features, and a final softmax classifier layer to classify the images. The CNN is evaluated on two datasets and is shown to outperform state-of-the-art baselines in terms of classification accuracy, demonstrating its ability to learn spatial features directly from images without relying on handcrafted features or descriptors.
This document provides an overview of remote sensing and describes its key principles and applications. It defines remote sensing as acquiring information about planetary surfaces from a distance without direct contact. The main components of a remote sensing system are described as the energy source, atmosphere, target interaction, sensor recording, transmission and processing, interpretation and analysis, and applications. Common data types like raster and vector data are also explained. Remote sensing techniques like digital image processing, classification, and analysis are outlined. Examples of satellite imagery and classifications are provided.
This document discusses using k-means clustering to detect minerals from remote sensing images. It begins with an abstract describing the use of k-means clustering on hyperspectral images to segment and extract features for detecting minerals at the Giacomo deposit. It then provides background on remote sensing and k-means clustering algorithms, and describes the Giacomo mineral deposit in Peru, which contains silicon dioxide and titanium dioxide. It concludes by discussing the use of Sobel edge detection as part of the mineral detection process from remote sensing images.
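Plain k-means, the core of the segmentation step above, can be sketched as follows; pixels would first be flattened to a (n_pixels, n_bands) array, and the iteration count and seeding strategy are simplifying assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means on feature vectors X of shape (n, d).
    Returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    # initialise centroids from k distinct data points
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels, centroids
```

In the hyperspectral setting each cluster groups pixels with similar spectra, and clusters whose centroid spectra match the target mineral's signature become the detection candidates.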
This remote sensing e-course focuses on principal component analysis (PCA) and classification techniques using remotely sensed SPOT 6 and Landsat 8 data. The course will illustrate how to analyze and classify the satellite imagery for land use mapping using open source GRASS software. Students will learn about PCA, how it is calculated in GRASS, and its benefits for classification. Exercises will have students run PCA on SPOT6 data to determine optimal band ratios for classification and produce a land use map.
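Outside GRASS, the PCA computation the course teaches reduces to an eigen-decomposition of the band covariance matrix. A NumPy sketch (ours, with hypothetical function names):

```python
import numpy as np

def pca(X, n_components):
    """PCA of X with shape (pixels, bands): returns (scores, components, variances)."""
    Xc = X - X.mean(axis=0)                         # center each band
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]   # largest variance first
    return Xc @ vecs[:, order], vecs[:, order], vals[order]
```

For two strongly correlated bands, almost all variance lands in the first component, which is why PCA helps decorrelate multispectral data before classification.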
A new approach of edge detection in sar images using region based active cont...eSAT Journals
Abstract: This paper presents a new methodology for edge detection in complex radar images. The approach combines an edge improvisation (enhancement) algorithm with subsequent edge detection. Because SAR data are highly heterogeneous, edge enhancement is required before detection, which justifies the use of the discrete wavelet transform in the edge improvisation step. A region-based active contour model is then used as the edge detection algorithm. The paper proposes a distribution-fitting energy with a level set function, using neighborhood means and variances as variables. The performance is tested on different images and the results are analyzed. Keywords: Edge detection, Edge improvisation, Synthetic Aperture Radar (SAR), wavelet transforms.
This lecture covers the particle image velocimetry (PIV) technique. It includes discussion of the basic elements of a PIV setup, image capture, laser lighting, synchronization, and correlation analysis.
The document describes a method for multiresolution image fusion using nonsubsampled contourlet transform (NSCT). NSCT provides multiresolution analysis with shift invariance and directionality. The method uses NSCT to learn edges from a high-resolution panchromatic image and fuse it with low-resolution multispectral images. Maximum a posteriori estimation with a Markov random field prior is used to optimize the fused image. Experimental results on fusing QuickBird satellite images show the proposed method produces higher quality fused images compared to other fusion methods.
Congenital Bucolic and Farming Region Taxonomy Using Neural Networks for Remo...DR.P.S.JAGADEESH KUMAR
This summarizes a document about classifying rural and agricultural areas using neural networks and remote sensing imagery. It discusses extracting texture features from gray scale and multispectral images to train a neural network for classification. Different features like histogram pixel intensity, texture constraints, and spatial pixel matrices were used. The neural network was able to effectively classify rural and agricultural regions by leveraging these extracted texture features from aerial images.
This document summarizes a proposed method for super-resolution of multispectral images using principal component analysis. It begins with background on multispectral imaging and issues with resolution. The proposed method first uses PCA to reduce the dimensionality of the multispectral data. It then learns edge details from a high-resolution database by matching blocks of the principal components. After learning, the modified principal components are inverse transformed to generate a higher resolution multispectral image. The method is tested on real multispectral data sets and shown to reconstruct higher resolution images.
This document discusses image fusion techniques at different levels of abstraction: pixel level, feature level, and decision level. It describes various fusion methods including numerical (e.g. multiplicative, Brovey), color related (e.g. IHS), statistical (e.g. PCA, Gram Schmidt), and feature level (e.g. Ehlers) techniques. Both qualitative (visual) and quantitative (statistical measures like RMSE, correlation coefficient, entropy) methods to assess fusion quality are outlined. Image fusion has applications in improving classification and displaying sharper resolution images.
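Of the numerical methods listed, the Brovey transform is the simplest to state: each multispectral band is rescaled by the ratio of the panchromatic band to the sum of the multispectral bands. A sketch (our own, assuming the multispectral cube has already been resampled to the pan grid):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-12):
    """Brovey fusion: ms is (rows, cols, bands), pan is (rows, cols)."""
    total = ms.sum(axis=2) + eps          # eps guards against zero-sum pixels
    return ms * (pan / total)[:, :, None]
```

By construction, the fused bands sum back to the pan intensity at every pixel, which preserves the pan image's spatial detail.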
This document discusses band ratioing, image differencing, and principal and canonical component analysis techniques in remote sensing. Band ratioing involves dividing pixel values in one band by another band to enhance spectral differences. Image differencing calculates differences between images after alignment. Principal component analysis transforms correlated spectral data into fewer uncorrelated bands retaining most information, while canonical component analysis aims to maximize separability of user-defined features. These techniques can help analyze multispectral and hyperspectral remote sensing data.
APPEARANCE-BASED REPRESENTATION AND RENDERING OF CAST SHADOWSijcga
This document presents an appearance-based method for representing and rendering cast shadows without explicit geometric modeling. A cubemap-like illumination array is constructed to sample shadow images on a plane. The sampled object and shadow images are represented using Haar wavelets. This allows rendering of shadows onto an arbitrary 3D background by linearly combining the wavelet basis images based on the scene geometry and lighting. Experiments demonstrate soft, realistic shadows can be rendered this way under novel illumination distributions specified by environment maps.
Detection of Bridges using Different Types of High Resolution Satellite Imagesidescitation
Automatic detection of geographical objects such as roads, buildings and bridges from remote sensing imagery is a meaningful but difficult task. Bridges over water are typical geographical objects whose automatic detection is of great significance for many applications. Finding the Region Of Interest (ROI) containing only water areas is the most crucial task in bridge detection. This can be done with image processing / soft computing methods using images in the spatial domain, or with the Normalized Differential Water Index (NDWI) using images in the spectral domain. We have developed an efficient algorithm for bridge detection in which the ROI segmentation is done using both methods. Exact locations of bridges are obtained via knowledge models and the spatial resolution of the image. These knowledge models are applied in the algorithm in such a way that the thresholds are fixed automatically depending on the quality of the image. Using the algorithm, any type of bridge can be extracted irrespective of its inclination and shape.
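The NDWI mentioned above has a standard closed form, (Green - NIR) / (Green + NIR), with positive values typically flagging open water. A minimal sketch (ours; the threshold choice is exactly the part the abstract says their knowledge models automate):

```python
import numpy as np

def ndwi(green, nir, eps=1e-12):
    """McFeeters NDWI; values above ~0 typically indicate open water."""
    g, n = green.astype(float), nir.astype(float)
    return (g - n) / (g + n + eps)

def water_mask(green, nir, threshold=0.0):
    return ndwi(green, nir) > threshold
```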
Remotely sensed image segmentation using multiphase level set acmKriti Bajpai
Satellite imagery is used in various research domains. These images often suffer from quality issues, which can be mitigated by image enhancement algorithms in terms of contrast, brightness, noise reduction, etc. These algorithms are employed to focus, sharpen or smooth an image in order to display and examine its attributes; the objective of image enhancement therefore depends on the specific application. The objective of this paper is to provide brief information about image enhancement techniques that yield progressive and optimal results for remote-sensing satellite imagery.
An Efficient K-Nearest Neighbors Based Approach for Classifying Land Cover Re...IDES Editor
In recent times, researchers in the remote
sensing community have been greatly interested in
utilizing hyperspectral data for in-depth analysis of
Earth’s surface. In general, hyperspectral imaging comes
with high dimensional data, which necessitates a pressing
need for efficient approaches that can effectively process
these high dimensional data. In this paper, we present
an efficient approach for the analysis of hyperspectral
data by incorporating the concepts of Non-linear manifold
learning and k-nearest neighbor (k-NN). Instead of
dealing with the high dimensional feature space directly,
the proposed approach employs Non-linear manifold
learning that determines a low-dimensional embedding of
the original high dimensional data by computing the
geometric distances between the samples. Initially, the
dimensionality of the hyperspectral data is reduced to a
pairwise distance matrix by making use of the Johnson's
shortest path algorithm and Multidimensional scaling
(MDS). Subsequently, based on the k-nearest neighbors,
the classification of the land cover regions in the
hyperspectral data is achieved. The proposed k-NN based
approach is evaluated using the hyperspectral data
collected by the NASA’s (National Aeronautics and Space
Administration) AVIRIS (Airborne Visible/Infrared
Imaging Spectrometer) from Kennedy Space Center,
Florida. The classification accuracies of the proposed k-
NN based approach demonstrate its effectiveness in land
cover classification of hyperspectral data.
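The pipeline in this abstract (neighborhood graph, shortest-path geodesic distances, MDS embedding, then k-NN voting) can be reproduced on toy data in plain NumPy. This is our illustration, not the paper's code: Floyd-Warshall stands in for Johnson's shortest-path algorithm, which is fine at this scale.

```python
import numpy as np

def geodesic_distances(X, k=4):
    """k-NN graph + Floyd-Warshall shortest paths over its edges."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:   # keep the k nearest neighbours
            G[i, j] = G[j, i] = D[i, j]
    for m in range(n):                        # Floyd-Warshall relaxation
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    return G

def mds_embed(G, dims=2):
    """Classical MDS on a (finite) distance matrix."""
    n = len(G)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

def knn_predict(emb, labels, query_idx, k=3):
    """Classify one embedded sample by majority vote of its k neighbours."""
    d = np.linalg.norm(emb - emb[query_idx], axis=1)
    nearest = np.argsort(d)[1:k + 1]          # skip the query itself
    return np.bincount(labels[nearest]).argmax()
```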
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
Panchromatic and Multispectral Remote Sensing Image Fusion Using Machine Lear...DR.P.S.JAGADEESH KUMAR
This document discusses machine learning techniques for fusing panchromatic and multispectral remote sensing images to classify rural and farming regions. It provides background on panchromatic and multispectral imaging systems and describes common fusion algorithms like IHS transform and PCA. The authors applied machine learning methods to fuse high resolution panchromatic and low resolution multispectral images. Qualitative and quantitative assessment showed the machine learning approach provided better performance than other fusion algorithms in enhancing image quality and enabling effective classification of rural and farming areas.
A New Approach of Medical Image Fusion using Discrete Wavelet TransformIDES Editor
MRI-PET medical image fusion has important
clinical significance. Medical image fusion is the important
step after registration, which is an integrative display method
of two images. The PET image shows the brain function with
a low spatial resolution, MRI image shows the brain tissue
anatomy and contains no functional information. Hence, a
perfect fused image should contain both functional
information and more spatial characteristics with no spatial
& color distortion. The DWT coefficients of MRI-PET
intensity values are fused based on the even degree method
and cross correlation method. The performance of the proposed
image fusion scheme is evaluated with PSNR and RMSE and
it is also compared with the existing techniques.
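The two quality measures named at the end, RMSE and PSNR, are worth writing out, since PSNR is defined in terms of RMSE. The sketch is ours; the peak value assumes 8-bit images.

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between a reference and a test image."""
    diff = np.asarray(ref, float) - np.asarray(test, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(ref, test)
    return float('inf') if e == 0 else 20.0 * np.log10(peak / e)
```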
The document evaluates and compares the performance of three background subtraction algorithms: frame difference, statistical approach for real-time robust background subtraction and shadow detection, and adaptive background mixture models for real-time tracking. The frame difference method is the simplest but depends on threshold selection, while the statistical approach provides average accuracy and speed and the adaptive mixture models method provides the best accuracy but highest computational cost. Experimental results on video data demonstrate the tradeoffs between accuracy, computational requirements, and ability to handle challenges like lighting changes for each algorithm.
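The frame-difference baseline compared above is short enough to show in full; note the cast to a signed type before subtracting, since uint8 frames would otherwise wrap around. The sketch is ours.

```python
import numpy as np

def frame_difference_mask(prev_frame, frame, threshold=25):
    """Foreground mask: pixels whose absolute change exceeds the threshold."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return diff > threshold
```

The method's dependence on threshold selection, which the document highlights, is explicit here: everything hinges on the single `threshold` parameter.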
Attenuation correction for PET/MR hybrid imaging frameworks, as well as dose planning for MR-based radiation treatment, remains challenging because of the lack of high-energy photon attenuation information. We present a new method that uses learned nonlinear local descriptors and feature matching to predict pseudo-CT images from T1w and T2w MRI data. The nonlinear local descriptors are obtained by projecting the linear descriptors into a nonlinear high-dimensional space using an explicit feature map and low-rank approximation with supervised manifold regularization. The nearest neighbors of each local descriptor in the input MR images are searched within a constrained spatial range of the MR images in the training dataset. The pseudo-CT patches are then estimated through k-nearest-neighbor regression. The proposed procedure for pseudo-CT prediction is quantitatively analyzed on a dataset consisting of paired brain MRI and CT images from 13 subjects.
Advancements in Hyperspectral and Multi-spectral Imaging
This document discusses advancements in hyperspectral and multi-spectral imaging. It begins with an abstract describing how a spectrograph's design impacts its performance. It then provides an introduction to hyperspectral imaging, describing its use in applications such as agriculture, forensics, and biomedical research. The document emphasizes that hyperspectral sensors require maintaining precise spatial and spectral integrity over a wide field of view. It evaluates different types of spectrograph designs and their ability to accurately reproduce spectral images across a focal plane without distortion or corruption between wavelengths and spatial positions.
Gravitational lensing for interstellar power transmissionSérgio Sacani
We investigate light propagation in the gravitational field of multiple gravitational lenses. Assuming these lenses are sufficiently spaced to prevent interaction, we consider a linear alignment for the
transmitter, lenses, and receiver. Remarkably, in this axially-symmetric configuration, we can solve
the relevant diffraction integrals – a result that offers valuable analytical insights. We show that the
point-spread function (PSF) is affected by the number of lenses in the system. Even a single lens is
useful for transmission, whether it is used as part of the transmitter or acts on the receiver’s side.
We show that power transmission via a pair of lenses benefits from light amplification on both ends
of the link. The second lens plays an important role by focusing the signal to a much tighter spot;
but in practical lensing scenarios, that lens changes the structure of the PSF on scales much smaller
than the telescope, so that additional gain due to the presence of the second lens is independent of its
properties and is governed solely by the transmission geometry. While evaluating the signal-to-noise
ratio (SNR) in various transmitting scenarios, we see that a single-lens transmission performs on par
with a pair of lenses. The fact that the second lens amplifies the brightness of the first one creates a
challenging background for signal reception. Nevertheless, in all the cases considered here, we have
found practically-relevant SNR values. As a result, we were able to demonstrate the feasibility of
establishing interstellar power transmission links relying on gravitational lensing – a finding with
profound implications for applications targeting interstellar power transmission.
A mathematical algorithm was developed to calculate effective density values within Apollo lunar core samples using digitized radiographs. Code was written in MATLAB to produce density maps based on the algorithm. Factors like varying X-ray intensities due to the Inverse Square Law and different material thicknesses were accounted for. The resulting density maps provide reasonable values compared to previously measured bulk densities, but more information is needed to address issues in radiography and the physical parameters of the X-ray setup. With this additional information, density values as a function of depth and porosity can be accurately evaluated.
Investigating Multifractality of Solar Irradiance Data Through Wavelet Based M...CSCJournals
It has already been revealed that the daily solar irradiance data for the period from October 1984 to October 2003, obtained by the Earth Radiation Budget Satellite (ERBS), exhibit an anti-persistent trend with multi-periodic phenomena. Since the solar irradiance time series is a complex nonlinear signal, in this paper we detect the irregularity and multifractality in the signal using the continuous wavelet transform modulus maxima (WTMM) algorithm. The singularity spectrum of the signal has been obtained to measure the degree of multifractality of the solar irradiance signal.
Angular and position stability of a nanorod trapped in an optical tweezersAmélia Moreira
The document summarizes the analysis of angular and position stability of a nanorod trapped in an optical tweezers. It computes the optical trapping forces and torques on a nano-cylinder using T-matrix and radiation stress integration approaches. The results show that lateral forces are several times stronger than axial forces, and lateral torques are 1-2 orders stronger than end-face torques. Torques due to surface stress are much stronger than spin torques. The analysis explains why low aspect ratio nanorods are stably trapped normal to the beam axis.
The journal publishes original works with practical significance and academic value. Authors are invited to submit theoretical or empirical papers in all aspects of management, including strategy, human resources, marketing, operations, technology, information systems, finance and accounting, business economics, and public sector management. IJMRR is an international forum for research that advances the theory and practice of management. All papers submitted to IJMRR are subject to a double-blind peer review process.
Hyperspectral Data Compression Using Spatial-Spectral Lossless Coding TechniqueCSCJournals
Hyperspectral imaging is widely used in many applications, especially in vegetation, climate change, and desert studies. Such imaging produces a huge amount of data, which requires transmission, processing, and storage resources, especially for spaceborne imaging. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually yields a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of object identification performance. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we analyze the spectral cross correlation between bands for Hyperion hyperspectral data: the spectral cross correlation matrix is calculated and its strength assessed, and we propose a new technique to find highly correlated groups of bands in the hyperspectral data cube based on the "inter band correlation square". From the resultant groups of bands we derive a new predictor that can efficiently predict all bands within the data cube based on a weighted combination of spectral and spatial prediction; the results are evaluated against other state-of-the-art predictors for lossless compression.
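The first analysis step the abstract describes, a band-by-band spectral cross-correlation matrix, is straightforward to compute; the grouping is where the paper's "inter band correlation square" criterion comes in. The sketch below is ours and uses a simple greedy squared-correlation grouping only as an illustration of the idea:

```python
import numpy as np

def band_correlation_matrix(cube):
    """cube: (rows, cols, bands) -> (bands, bands) correlation matrix."""
    flat = cube.reshape(-1, cube.shape[2]).T     # one row per band
    return np.corrcoef(flat)

def correlated_groups(corr, threshold=0.9):
    """Greedy grouping of consecutive bands whose squared correlation with
    the group's first band stays above the threshold."""
    groups, start = [], 0
    n = corr.shape[0]
    for b in range(1, n + 1):
        if b == n or corr[start, b] ** 2 < threshold:
            groups.append(list(range(start, b)))
            start = b
    return groups
```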
Super resolution imaging using frequency wavelets and three dimensionalIAEME Publication
This document describes a method for achieving super resolution imaging using frequency wavelets and three-dimensional views with holographic technique. The method involves three main steps: 1) Registering low resolution images taken of the same specimen from different angles using digital holographic equipment to determine sub-pixel shifts between images; 2) Performing interpolation using frequency wavelet methods to increase resolution and add high frequency information; 3) Reconstructing a super resolution image by minimizing degradation from aliasing, noise, and blur. The resulting image has high resolving power, clarity, and a 3D view of the specimen.
Application of extreme learning machine for estimating solar radiation from s...mehmet şahin
This paper presents an extreme learning machine (ELM) model to estimate solar radiation in Turkey using satellite data from the National Oceanic and Atmospheric Administration advanced very high-resolution radiometer from 20 locations. The ELM model uses land surface temperature, altitude, latitude, longitude, month, and city as inputs to estimate solar radiation. The ELM model provided better estimation accuracy than an artificial neural network model and was about 23.5 times faster to train, demonstrating the potential of ELM for solar radiation estimation using remote sensing data.
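An ELM is simple enough to capture in a few lines, which is also why it trains so much faster than a backpropagated network: the hidden layer is random and only the output weights are solved for, in closed form. A NumPy sketch (ours, on a toy 1-D regression rather than the paper's satellite inputs):

```python
import numpy as np

def elm_fit(X, y, hidden=60, seed=0):
    """Random tanh hidden layer; output weights by least squares (pinv)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # fixed random input weights
    b = rng.normal(size=hidden)                 # fixed random biases
    beta = np.linalg.pinv(np.tanh(X @ W + b)) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```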
Hyperspectral Imaging Camera using Wavefront Division Interference
This document describes a new approach for hyperspectral imaging using wavefront division interference. The approach uses a spatial light modulator placed in the exit pupil of an imaging system to divide the wavefront from each object point into two parts with a variable phase delay. As the phase delay is changed, an interferogram is obtained for each wavelength, allowing the spectrum at each point to be determined. The approach converts a general imaging system into a hyperspectral imaging system with only a minor impact on optical performance. A prototype optical system was built to demonstrate this new wavefront division interference approach to hyperspectral imaging.
IRJET- Detection of Angle-of-Arrival of the Laser Beam using a Cylindrica...IRJET Journal
This document describes a method for detecting the angle of arrival of an incoming laser beam using a cylindrical lens and Gaussian curve fitting. A cylindrical lens is used to focus the laser beam onto a linear array of photodetectors. Gaussian curve fitting is then applied to the signal detected by the photodetectors to determine the centroid or peak of the Gaussian curve, which corresponds to the angle of arrival. The system is able to measure the angle of arrival with high angular resolution depending on the number of data points used in the Gaussian fit. Simulation and testing was performed using MATLAB to generate laser beams, add noise, fit curves to detector signals, and analyze errors at different angles of arrival.
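When the detector signal is clean, the Gaussian fit can even be done non-iteratively: the log of a Gaussian is a parabola, so a quadratic polyfit of the log-signal gives the peak position analytically. This is our simplified stand-in for the full curve-fitting procedure the document describes:

```python
import numpy as np

def gaussian_peak(positions, signal):
    """Estimate the peak location of a clean Gaussian profile by fitting
    a parabola to the log of the signal."""
    x = np.asarray(positions, float)
    s = np.asarray(signal, float)
    keep = s > s.max() * 0.1                  # use only the well-lit detectors
    a, b, _ = np.polyfit(x[keep], np.log(s[keep]), 2)
    return -b / (2 * a)                       # vertex of the log-parabola
```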
Calculation of solar radiation by using regression methodsmehmet şahin
Abstract. In this study, solar radiation was estimated at 53 locations over Turkey with varying climatic conditions using the Linear, Ridge, Lasso, Smoother, Partial least squares, KNN and Gaussian process regression methods. Data from 2002 and 2003 were used to obtain the regression coefficients of the respective methods. The coefficients were obtained from the input parameters: month, altitude, latitude, longitude and land-surface temperature (LST). The values for LST were obtained from the data of the National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer (NOAA-AVHRR) satellite. Solar radiation for 2004 was then calculated using the obtained coefficients in the regression methods, and the results were compared statistically. The most successful method was Gaussian process regression; the least successful was lasso regression. The mean bias error (MBE) of the Gaussian process regression method was 0.274 MJ/m2, its root mean square error (RMSE) was 2.260 MJ/m2, and its correlation coefficient was 0.941. These statistical results are consistent with the literature. The Gaussian process regression method is recommended for other studies.
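The three statistics used to rank the methods (MBE, RMSE, and the correlation coefficient) are each one line of NumPy; the sketch below is ours:

```python
import numpy as np

def mbe(measured, estimated):
    """Mean bias error, in the data's units (here MJ/m^2)."""
    return float(np.mean(np.asarray(estimated, float) - np.asarray(measured, float)))

def rmse(measured, estimated):
    """Root mean square error."""
    m, e = np.asarray(measured, float), np.asarray(estimated, float)
    return float(np.sqrt(np.mean((e - m) ** 2)))

def correlation(measured, estimated):
    """Pearson correlation coefficient."""
    return float(np.corrcoef(measured, estimated)[0, 1])
```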
This document provides an overview of synthetic aperture radar (SAR). SAR uses motion of a radar antenna mounted on a moving platform to synthesize a large antenna and create high-resolution radar images. It describes the basic principles of SAR, including how successive radar pulses are transmitted and echoes received to build up an image. Applications of SAR include remote sensing, mapping, and monitoring changes over time. Spectral estimation techniques are used to process SAR data and improve resolution. Polarimetry and interferometry are additional SAR techniques. Typical SAR systems are mounted on aircraft or satellites.
LOCALIZATION SCHEMES FOR UNDERWATER WIRELESS SENSOR NETWORKS: SURVEYIJCNCJournal
Underwater Wireless Sensor Networks (UWSNs) enable a variety of applications such as fish farming and water quality monitoring. One of the critical tasks in such networks is localization. Location information can be used in sensor networks for several purposes such as (i) data tagging, in which sensed information is not useful for the application unless the location of the sensed information is known, (ii) tracking objects, or (iii) multi-hop data transmission in geographic routing protocols. Since GPS does not work well underwater, several localization schemes have been developed for UWSNs. This paper surveys the state-of-the-art of localization schemes for UWSNs. It describes the existing schemes and classifies them into different categories. Furthermore, the paper discusses some open research issues that need further investigation in this area.
Adaptive optics are used in ground-based telescopes to directly image extrasolar planets and overcome atmospheric turbulence. Atmospheric turbulence causes distortions that blur planetary images. Adaptive optics systems measure wavefront distortions using a wavefront sensor and correct for them using a deformable mirror in a closed-loop system. This results in sharper, diffraction-limited images that help verify exoplanets. Future extremely large telescopes will use many more actuators on deformable mirrors to provide substantial correction, aiding the search for Earth-like exoplanets.
Heckbert, P. S.: Adaptive Radiosity Textures for Bidirectional Ray Tracing (Xavier Davias)
The document describes a new rendering method called bidirectional ray tracing using adaptive radiosity textures. It separates surface interaction into diffuse and specular components. It computes the specular component on the fly during ray tracing, and stores the diffuse component (radiosity) in adaptive radiosity textures on diffuse surfaces. These textures adaptively subdivide to resolve sharp shadows. It uses a three-pass algorithm: 1) a size pass records visibility, 2) a light pass traces light rays to deposit photons and construct textures, 3) an eye pass traces eye rays to render the image using the textures. This hybrid approach aims to provide accurate global illumination simulation for realistic image synthesis.
Fusion of Multispectral And Full Polarimetric SAR Images In NSST Domain

Gh. S. El-Tawel & A. K. Helmy
International Journal of Image Processing (IJIP), Volume (8) : Issue (6) : 2014

Gh. S. El-Tawel    ghada_eltawel@ci.suez.edu.eg
Computer Science Dept., Faculty of Computers and Informatics,
Suez Canal University, Ismailia, Egypt

A. K. Helmy    akhelmy@narss.sci.eg
Data Reception, Analysis and Receiving Station Affairs Division,
National Authority for Remote Sensing and Space Sciences, Cairo, Egypt
Abstract
Polarimetric SAR (POLSAR) and multispectral images provide different characteristics of the imaged objects. Multispectral imagery provides information about surface material, while POLSAR provides information about the geometrical and physical properties of the objects. Merging both should resolve many of the object recognition problems that exist when they are used separately. In this paper, we propose a new scheme for fusing a full-polarization radar image (POLSAR) with a multispectral optical satellite image (Egyptsat). The proposed scheme is based on the Non-Subsampled Shearlet Transform (NSST) and a multi-channel Pulse Coupled Neural Network (m-PCNN). We use NSST to decompose the images into low-frequency and band-pass sub-band coefficients. For the low-frequency coefficients, a fusion rule is proposed based on local energy and a dispersion index. For the band-pass sub-band coefficients, the m-PCNN is used to guide how the fused coefficients are calculated using image textural information.
The proposed method is applied to three batches of Egyptsat (red, green, and infrared) and Radarsat-2 (C-band full-polarimetric HH, HV, and VV polarization) images. The batches are selected to react differently to different polarizations. Visual assessment of the fused images shows excellent clarity and delineation of different objects, and quantitative evaluations show that the proposed method outperforms the other data fusion methods.
Keywords: Multi-spectral Data Fusion, POLSAR, NSST, m-PCNN.
1. INTRODUCTION
Image fusion is the process of combining different images originating from different sources to create more reliable information than that available from the individual sources; it has recently received great attention in the remote sensing field.
In optical image processing, many land cover types and surface materials are identical in their spectral characteristics, which causes great difficulty in image segmentation, classification, and feature extraction [1] [2]. Optical satellites typically use different sensors (visible, near-infrared, and shortwave infrared) to form images of the earth's surface. Different targets reflect and absorb differently at different wavelengths; thus, their spectral signatures can characterize the targets.
A synthetic aperture radar (SAR) system illuminates objects on the earth's surface with a guided microwave [3]; the objects then either absorb or scatter these waves in all directions. Absorption or scattering of the incident wave depends mainly on the physical characteristics of the object. The SAR system records only the part of the scattered wave that travels in the direction of the receiving antenna. Some SAR systems transmit and receive waves with different polarizations (POLSAR). In this case, the system records the polarization of the returned waves and measures both the intensity and phase of the backscattered waves, which mainly characterize the intrinsic structural and dielectric properties of the target [3].
Therefore, the imaging mechanisms of optical satellite images and radar satellite signals are evidently different. Optical images are mainly characterized by spectral resolution, a measure of the ability to discriminate features across the electromagnetic spectrum [4]. Polarimetric radar images provide a tool to identify different features based on dielectric properties and surface roughness. Integrating the spectral characteristics of an object, derived from multispectral images, with the physical properties (and surface roughness) derived from polarimetric SAR images can therefore play a great role in many remote sensing applications.
In this paper, we propose a new scheme for integrating multispectral optical images (MS) and multi-polarization POLSAR images. The Non-Subsampled Shearlet Transform (NSST) is used to decompose the input images into low-frequency and band-pass sub-bands. For the low-frequency coefficients, we introduce an adaptive weighted fusion structure based on regional local energy and local image texture features. Different textural factors (gradient, entropy, and spatial frequency) are taken as multiple stimuli to a multi-channel Pulse Coupled Neural Network (m-PCNN) to guide the fusion of the band-pass sub-bands. The rest of the paper is organized as follows: Section 2 introduces POLSAR data; Sections 3 and 4 briefly discuss the theory of the NSST and the m-PCNN; Section 5 presents the main framework of the proposed scheme and describes the fusion algorithm; experimental results and evaluations are discussed in Section 6; and we present the conclusions in the last section.
2. POLARIMETRIC SYNTHETIC APERTURE RADAR (POLSAR)
POLSAR imaging uses the fact that the state of the received scattered signal reflects the characteristics of the illuminated objects, such as roughness and dielectric constant [5]. Accordingly, polarimetric images can be used efficiently to recognize these properties. The scattering matrix [S] relates the incident and scattered waves [5, 6] and is expressed by:
$$\begin{bmatrix} E_H^r \\ E_V^r \end{bmatrix} = [S] \begin{bmatrix} E_H^t \\ E_V^t \end{bmatrix} = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix} \begin{bmatrix} E_H^t \\ E_V^t \end{bmatrix} \quad (1)$$

Where $E_H^r$, $E_V^r$, $E_H^t$, and $E_V^t$ are the received and transmitted electric fields of the corresponding polarizations respectively, and $S_{mn}$ are the matrix elements, defined as:

$$S_{mn} = |S_{mn}| \, e^{j\varphi_{mn}} \quad (2)$$
The incident and scattered waves are complex quantities (amplitude and phase) and are usually expressed in polarization terms (the direction of the incident/received wave). The four different combinations of transmitted and received polarizations are listed below.
HH: horizontal transmission and reception.
HV: horizontal transmission and vertical reception.
VV: vertical transmission and reception.
VH: vertical transmission and horizontal reception.
The SAR system used in this paper (RADARSAT-2) has a single antenna for both transmission and reception, so the relation S_HV = S_VH [7] holds for the rest of this paper.
From equation 2, the main parameters that characterize POLSAR data are the amplitudes (|S_HH|, |S_HV|, |S_VV|) and the phases (φ_HV and φ_VV). The phase values are not absolute [7]; they are relative to a certain phase plane, and the HH-polarization element is generally chosen as the reference phase. In this paper, we use the first three parameters, |S_HH|, |S_HV|, and |S_VV|, to be fused with the multispectral Egypt-sat data (see app-1).
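The three amplitude channels can be visualised as an R/G/B false-colour composite, as in the POLSAR panels of figure 4 (R=HH, G=HV, B=VV). A minimal sketch, assuming the common per-channel min-max stretch convention (the paper does not specify a normalization):

```python
import numpy as np

def polsar_false_color(s_hh, s_hv, s_vv):
    """Stack the three amplitude channels |S_HH|, |S_HV|, |S_VV| into an
    R/G/B false-colour composite. Each channel is stretched independently
    to [0, 1] -- an assumed visualisation convention, not a processing
    step specified by the paper."""
    def stretch(a):
        a = np.abs(a).astype(float)
        lo, hi = a.min(), a.max()
        return (a - lo) / (hi - lo) if hi > lo else np.zeros_like(a)
    return np.dstack([stretch(s_hh), stretch(s_hv), stretch(s_vv)])
```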
3. THE NON-SUBSAMPLED SHEARLET TRANSFORM (NSST)
Multi-resolution analysis tools such as the Discrete Wavelet Transform (DWT) have been widely applied to image fusion [8, 9]. The DWT relies on multi-scale geometric analysis and has many advantages such as localization and directionality. On the other hand, the wavelet transform suffers from imperfect directionality (directional selectivity is very limited and cannot capture optimal detail information); moreover, it is not shift-invariant, resulting in degraded information and poor fusion output [10]. To obtain better signal representations, researchers have introduced new analysis tools that are used extensively in image fusion, including the Curvelet [11], Ridgelet [12], and Contourlet [10, 13] transforms.
Easley et al. proposed the Non-Subsampled Shearlet Transform (NSST) [14], which combines the non-subsampled Laplacian pyramid transform with different shearing filters. NSST provides a multi-scale, multi-directional framework that decomposes an image into one low-frequency sub-band (the approximation component of the source image) and a series of directional band-pass sub-bands. NSST also satisfies the shift-invariance property, so it can capture more information in the different directional sub-bands than the wavelet and contourlet transforms. The shearlet decomposition is close to the contourlet transform, but it has the advantage that the number of directions of the shearing filter in NSST is not limited.
Additionally, the inverse contourlet transform requires inverting directional filter banks, whereas the inverse shearlet transform only requires a summation of the shearing filters. Consequently, the shearlet implementation is more computationally efficient [14].
In this work, we use NSST to decompose the input images into low-frequency and band-pass sub-band coefficients, apply a fusion rule, and then apply the inverse NSST to construct the fused image.
4. PULSE COUPLED NEURAL NETWORK
The Pulse Coupled Neural Network (PCNN) is a neural model with a single-layer architecture. To model an image with a PCNN, we consider the following: the neurons represent image pixels; pixel information (e.g., intensity or texture) is the external stimulus received by each neuron; and the relation between neighboring pixels provides the internal stimuli fed to each neuron. In this model, each neuron connects with its surrounding neighbors. A PCNN uses an internal activation system to accumulate the stimuli until they surpass a dynamic threshold, resulting in a pulse output. The images generated at different iterations indicate the fine details of the input image (edges and small objects). The PCNN model is fully described in [15, 16].
Many researchers use PCNNs for data fusion; for instance, Wang and Ma designed a single PCNN for medical image fusion [17, 18]. Miao introduced an adaptive system for fusing images with different spatial resolutions via an adaptive linking coefficient of the PCNN [19]. Others integrate the PCNN with multi-layer decomposition to obtain the fused image [20, 21].
Recently, a parallel version of the PCNN was introduced in which many PCNNs work in parallel, each network operating on a separate channel of the input [17]. We previously presented an image fusion method based on a dual pulse coupled neural network [22]. Wang presented an excellent review of PCNN applications [23].
Our proposed data fusion scheme is based on the m-PCNN, a modified version of the PCNN proposed by Wang and Ma [17, 18]. The m-PCNN can extend the number of inputs according to practical needs. We use texture information as multiple stimuli for the m-PCNN, and the output is used as a weight to guide the fusion process.
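The iterate-until-firing behaviour described above can be sketched as follows. The exact m-PCNN coupling equations differ across papers, so this is a hedged, minimal variant in the spirit of Wang and Ma [17, 18]: each channel's stimulus is modulated by the neighbours' previous pulses, the channel terms are multiplied together, and the cumulative firing count serves as a per-pixel activity (weight) map.

```python
import numpy as np

def mpcnn_firing_map(stimuli, beta=0.2, alpha_theta=0.2, v_theta=20.0, iters=10):
    """Minimal multi-channel PCNN sketch. `stimuli` is a list of equally
    sized 2-D arrays (one per channel, values roughly in [0, 1]). Returns
    the per-pixel count of pulses over `iters` iterations."""
    h, w = stimuli[0].shape
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])          # fixed linking weights
    Y = np.zeros((h, w))                          # previous pulse output
    theta = np.ones((h, w))                       # dynamic threshold
    fire_count = np.zeros((h, w))
    for _ in range(iters):
        # linking input: weighted sum of neighbours' previous pulses
        P = np.pad(Y, 1)
        L = sum(kernel[i, j] * P[i:i + h, j:j + w]
                for i in range(3) for j in range(3))
        # multi-channel internal activity: product of modulated channels
        U = np.ones((h, w))
        for S in stimuli:
            U *= S * (1.0 + beta * L)
        Y = (U > theta).astype(float)             # pulse where activity wins
        theta = theta * np.exp(-alpha_theta) + v_theta * Y
        fire_count += Y
    return fire_count
```

Pixels with strong stimuli fire earlier and more often, which is why the firing map can act as a clarity-driven fusion weight.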
5. GENERAL FRAMEWORK OF MULTISPECTRAL-POLSAR FUSION ALGORITHM
In this paper, we assume that the Egyptsat and POLSAR images have been co-registered; they are denoted by M(x, y) and R(x, y) respectively, and F(x, y) is the output fused image. Figure 1 shows the proposed framework of the image fusion process. In summary, the fusion approach consists of the following steps.
1- The POLSAR image is de-speckled, and the Egyptsat image is resampled (from 7.8 to 7 meters) to match the POLSAR spatial resolution.
2- The low-frequency and band-pass sub-band coefficients of the source images are calculated using NSST.
3- The low-frequency sub-band coefficients and the directional band-pass sub-band coefficients of the source images are merged according to specific rules, introduced in the next sections.
4- The inverse NSST is applied to obtain the fused image F(x, y).
5.1 Fusion Rule for Lowpass Coefficients
As the lowpass sub-band coefficients mainly retain the main energy and represent the approximation component of the source images, the fusion procedure should preserve this information. Many authors process the lowpass sub-band coefficients using a direct averaging rule [24], which is simple; however, this scheme often results in a low-contrast effect, because the information in the three bands of the POLSAR and multispectral images is complementary and both are desirable in the fused image. We present a new weighted-average formula that models the fusion hierarchy based on regional local energy and local image texture features.
It is difficult to know, or even estimate, which bands from the POLSAR and multispectral images should be fused together (in our case, three bands represent the multispectral image and three bands represent the POLSAR image). Figure 2 shows a schematic diagram of our method for fusing the lowpass coefficients; the procedure is summarized as follows:
1- For all bands, the local energy (equation 5) is calculated for each coefficient using its neighborhood (5 x 5 in our case). This is done for both the multispectral and POLSAR bands.
FIGURE 1: Block diagram of image fusion based on the NSST.
2- The coefficient with the maximum energy from the POLSAR image is fused with the coefficient with the least energy from the multispectral image (equation 3). This produces the first-band fused coefficient.
3- To obtain the second-band fused coefficient, the coefficient with the next-maximum energy from the POLSAR image is fused with the coefficient with the next-minimum energy from the multispectral image.
4- Finally, for the third band, the coefficient with the minimum energy from the POLSAR image is fused with the coefficient with the maximum energy from the multispectral image. The mathematical notation of the process is described below.
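The reciprocal max/min pairing in steps 2-4 can be sketched as a pair of per-pixel sorts. This is an illustrative reading of the rule under the assumption that pairing is decided independently at each coefficient location; the index maps say which POLSAR band meets which multispectral band at each step.

```python
import numpy as np

def pair_bands_by_energy(en_polsar, en_ms):
    """Sketch of the band-pairing rule: per pixel, POLSAR band energies are
    sorted descending and multispectral band energies ascending, so the
    strongest POLSAR response is paired with the weakest multispectral one,
    and so on. Inputs are (3, H, W) local-energy stacks; returns (3, H, W)
    index maps for the three fusion steps."""
    r_idx = np.argsort(-en_polsar, axis=0)   # POLSAR: max, next, min
    m_idx = np.argsort(en_ms, axis=0)        # MS: min, next, max
    return r_idx, m_idx
```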
The coefficient of the fused image at location (x, y) is calculated by:

$$I_F(x, y) = \frac{w_1 I_R(x, y) + w_2 I_M(x, y)}{2} \quad (3)$$

Where $I_R(x, y)$, $I_M(x, y)$, and $I_F(x, y)$ denote the lowpass sub-band coefficients located at $(x, y)$ for the POLSAR, multispectral, and fused images respectively.
The value of $I_R(x, y)$ is chosen among the coefficients of the three POLSAR bands according to:

$$I_R(x, y) = C(\mathrm{maximum}(en_{hh}, en_{hv}, en_{vv})) \quad (4)$$

Where C(...) is the lowpass sub-band coefficient of the band attaining the maximum. The energy $en_{xx}$ ($xx = hh$, $hv$, or $vv$) is a parameter used to measure the textural uniformity of an image, calculated as:

$$en(x, y) = \frac{1}{n \times n} \sum_{m = -(n-1)/2}^{(n-1)/2} \; \sum_{r = -(n-1)/2}^{(n-1)/2} C^2(x + m, \, y + r) \quad (5)$$

Where C is the lowpass coefficient of HH, HV, or VV, and m and r range over the n x n neighborhood.
In contrast, the value of $I_M(x, y)$ is picked from the coefficients of the three multispectral bands according to $I_M(x, y) = C(\mathrm{minimum}(en_1, en_2, en_3))$, where again C(...) is the lowpass sub-band coefficient.
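The local energy of equation (5) can be sketched directly with a sliding window. Boundary handling is an assumption (zero padding here), since the paper does not specify it.

```python
import numpy as np

def local_energy(C, n=5):
    """Local energy of lowpass coefficients (equation 5): the mean of the
    squared coefficients over an n x n neighbourhood, computed with a
    zero-padded sliding window."""
    r = n // 2
    P = np.pad(C.astype(float) ** 2, r)
    h, w = C.shape
    # sum the n*n shifted copies of the squared map, then normalise
    en = sum(P[i:i + h, j:j + w] for i in range(n) for j in range(n))
    return en / (n * n)
```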
FIGURE 2: Block Diagram of Image Fusion Rules for Low Coefficients.
The weights in equation (3) are calculated as follows:

$$w_1 = \frac{\mathrm{maximum}(en_{hh}, en_{hv}, en_{vv})}{\mathrm{minimum}(en_1, en_2, en_3)} \quad (6)$$

$$w_2 = \frac{1}{w_1} \quad (7)$$
If $w_1 > 1$, i.e., the maximum energy picked from the POLSAR image is greater than the minimum energy of the multispectral image, the proposed weighting strategy maximizes the contribution of the POLSAR image and minimizes that of the multispectral image.
On the other hand, if $w_1 < 1$, i.e., the minimum energy picked from the multispectral image is greater than the maximum energy of the POLSAR image, the strategy maximizes the contribution of the multispectral image and minimizes that of the POLSAR image. The procedure is repeated with the next maximum and minimum values to obtain the 3-band fused image.
In either case, the high-energy contribution is maximized while the low-energy contribution is minimized during generation of the fused image. This procedure overcomes the low-contrast drawback of the weighted-average scheme.
To fine-tune the fused coefficients and obtain a better effect than that explained above, we modify equations (6, 7) to take texture information into account when calculating the weight factor. We add a dispersion index D, a normalized degree of dispersion of a probability distribution: a measure used to quantify whether a set of observed occurrences is clustered or dispersed with respect to a standard statistical model. It is defined as the ratio of the variance $\sigma^2$ to the mean $\mu$, calculated over each image:

$$D = \frac{\sigma^2}{\mu} \quad (8)$$

Then the new weight is:

$$\hat{w}_1 = w_1 + \frac{D_S}{D_M} \quad (9)$$

Where $D_S$ and $D_M$ are the dispersion indices of the POLSAR and multispectral images.
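Equations (6)-(9) can be sketched per pixel as follows. The stacking convention (3, H, W) and scalar dispersion indices are assumptions for illustration; the paper computes the energies per coefficient as above.

```python
import numpy as np

def dispersion_index(img):
    """Equation (8): variance-to-mean ratio of an image."""
    return img.var() / img.mean()

def lowpass_weights(en_polsar, en_ms, D_S, D_M):
    """Per-pixel fusion weights for the lowpass rule (equations 6, 7, 9).
    en_polsar / en_ms are (3, H, W) stacks of local-energy maps for the
    POLSAR and multispectral bands; D_S and D_M are the scalar dispersion
    indices of the two images. This pairing follows the first-band rule
    (max POLSAR energy vs. min multispectral energy)."""
    e_max = en_polsar.max(axis=0)          # max energy among POLSAR bands
    e_min = en_ms.min(axis=0)              # min energy among MS bands
    w1 = e_max / e_min                     # equation (6)
    w2 = 1.0 / w1                          # equation (7)
    w1_hat = w1 + D_S / D_M                # equation (9): dispersion correction
    return w1_hat, w2
```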
5.2 Fusion Rule for High Frequency Sub-band Coefficients
The image edges, corners, and other fine details are concentrated in the high-frequency components obtained from the shearlet transform. Thus, the clarity and distortion of the fused image depend mainly on how these components are fused. A voting strategy is popularly applied to composite the high-frequency components: a decision map is constructed using different parameters to regulate whether the fused coefficients come from image 'A' or image 'B'.
In most existing fusion algorithms, some textural information is usually used to decide which image the fused coefficient comes from [25]. Features such as entropy, gradient, variance, and spatial frequency [26, 27] can represent image texture information. Essentially, they reflect detailed information in different ways and can be used to discriminate sharp and indistinct regions. These textural measurements can be used independently [26, 27] or jointly using specific rules [28]. In our method, we use texture information (entropy, gradient, and spatial frequency) as an indicator of the weighting factor in constructing the fusion rule. These texture factors are taken as multiple stimuli of the m-PCNN; the output then determines the fusion weight according to the values coming from the PCNN, which reflect the overall image clarity. The whole process is shown in figure 3 and can be summarized as follows:
For the current sub-band, let $C_I^{k,l}(x, y)$ be the band-pass coefficient at location $(x, y)$ in the $l$-th band at the $k$-th level, where $I$ represents HH, HV, VV, b1, b2, or b3.
For the same sub-band, calculate the maximum gradient ($GR_{\max}$), entropy ($EN_{\max}$), and spatial frequency ($SF_{\max}$) values among all bands:

$$GR_{\max} = \mathrm{maximum}\left( GR(C_I^{k,l}(x, y)) \right) \quad \forall (I) \quad (10)$$

$$EN_{\max} = \mathrm{maximum}\left( EN(C_I^{k,l}(x, y)) \right) \quad \forall (I) \quad (11)$$

$$SF_{\max} = \mathrm{maximum}\left( SF(C_I^{k,l}(x, y)) \right) \quad \forall (I) \quad (12)$$

Where $GR(C_I^{k,l}(x, y))$ is the gradient of the high-pass coefficient at location $(x, y)$ in the $l$-th sub-band at the $k$-th level; similarly, $EN(C_I^{k,l}(x, y))$ and $SF(C_I^{k,l}(x, y))$ are the entropy and spatial frequency of the high-pass coefficients respectively.
The maximum gradient, entropy, and spatial frequency are used as different image features to stimulate the m-PCNN. The fusion is done between the three winner bands (the bands that have the maximum gradient, entropy, and spatial frequency); we keep track of which bands attain those maxima. The coefficient of the fused image at location (x, y) is calculated by:

$$IC_F(x, y) = \frac{w}{3} \sum_{p=1}^{3} I_p(x, y) \quad (13)$$

Where $I_p(x, y)$ denotes the high-pass sub-band coefficient located at $(x, y)$ with the maximum value of gradient, entropy, or spatial frequency, and $w$ is the weight factor output from the m-PCNN.
FIGURE 3: Block diagram of image fusion for sub-band coefficients.
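The three texture stimuli fed to the m-PCNN can be sketched as follows. The paper does not fully specify its window-based variants, so these are common whole-band definitions from the fusion literature, stated as assumptions.

```python
import numpy as np

def texture_stimuli(band):
    """The three texture measures used as m-PCNN stimuli for a band-pass
    sub-band: average gradient, Shannon entropy, and spatial frequency."""
    b = band.astype(float)
    # average gradient: mean magnitude of first differences
    gx, gy = np.diff(b, axis=1), np.diff(b, axis=0)
    grad = 0.5 * (np.abs(gx).mean() + np.abs(gy).mean())
    # entropy of a 256-bin histogram of the coefficients
    hist, _ = np.histogram(b, bins=256)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    # spatial frequency: RMS of row and column first differences
    sf = np.sqrt((gx ** 2).mean() + (gy ** 2).mean())
    return grad, entropy, sf
```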
5.3 Assessment Criteria
Visual comparison between the fused images and the original images is conducted. Furthermore, quantitative analysis is applied to the results of the different fusion algorithms in terms of correlation coefficient, entropy, average gradient [29], and $Q^{AB/F}$ [30]. These measures are clarified below.
The correlation coefficient reflects the similarity of two images. The average gradient mirrors differences in image structure (a sharper image normally has a greater average gradient). The entropy specifies the overall randomness level in the image (the higher the entropy, the more detailed information the image contains). $Q^{AB/F}$ measures the amount of edge information transferred from the source images to the fused image using a Sobel edge detector; the larger its value, the better the fusion result.
For two images f(x, y) and B(x, y) of size M x N, the correlation coefficient of each band is defined as:

$$\mathrm{Correlation\ coefficient} = \frac{\sum_{x}\sum_{y} \left[ (f(x, y) - E_f)(B(x, y) - E_B) \right]}{\sqrt{\sum_{x}\sum_{y} (f(x, y) - E_f)^2 \cdot \sum_{x}\sum_{y} (B(x, y) - E_B)^2}} \quad (14)$$

Where $E_f$ and $E_B$ are the means of the two images, respectively.

$$\mathrm{average\ gradient} = \frac{1}{(M-1)(N-1)} \sum_{M}\sum_{N} \sqrt{\frac{1}{2}\left[\left(\frac{\partial f(x, y)}{\partial x}\right)^2 + \left(\frac{\partial f(x, y)}{\partial y}\right)^2\right]} \quad (15)$$

$$\mathrm{entropy} = -\sum_{l} p(l)\,\ln p(l) \quad (16)$$

Where $p(l)$ is the probability of the gray value $l$ appearing in the image.

$$Q^{AB/F} = \frac{\sum_{n=1}^{N}\sum_{m=1}^{M} \left( Q^{AF}(n, m)\,W^{A}(n, m) + Q^{BF}(n, m)\,W^{B}(n, m) \right)}{\sum_{n=1}^{N}\sum_{m=1}^{M} \left( W^{A}(n, m) + W^{B}(n, m) \right)} \quad (17)$$

Where $Q^{AF}(n, m) = Q_g^{AF}(n, m)\,Q_{\alpha}^{AF}(n, m)$; $Q_g^{AF}(n, m)$ and $Q_{\alpha}^{AF}(n, m)$ are the edge strength and orientation preservation values respectively; n, m represent the image location; and N, M are the image dimensions. $Q^{BF}(n, m)$ is defined similarly to $Q^{AF}(n, m)$. $W^{A}(n, m)$ and $W^{B}(n, m)$ reflect the importance of $Q^{AF}(n, m)$ and $Q^{BF}(n, m)$, respectively. The dynamic range of $Q^{AB/F}$ is [0, 1], and it should be as close to 1 as possible.
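The first three metrics (equations 14-16) can be sketched directly; $Q^{AB/F}$ is omitted here since its edge-strength and orientation terms require the full Sobel-based model of [30].

```python
import numpy as np

def correlation_coefficient(f, b):
    """Equation (14): zero-mean normalised cross-correlation of two bands."""
    fd, bd = f - f.mean(), b - b.mean()
    return (fd * bd).sum() / np.sqrt((fd ** 2).sum() * (bd ** 2).sum())

def average_gradient(f):
    """Equation (15), with the partial derivatives approximated by first
    differences over the common (M-1) x (N-1) grid."""
    dx = np.diff(f, axis=1)[:-1, :]
    dy = np.diff(f, axis=0)[:, :-1]
    return np.sqrt(0.5 * (dx ** 2 + dy ** 2)).mean()

def image_entropy(f, bins=256):
    """Equation (16): Shannon entropy (natural log) of the gray-level
    histogram."""
    p, _ = np.histogram(f, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()
```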
6. EXPERIMENTAL RESULTS
Data fusion algorithms in the literature take advantage of the complementary spatial/spectral resolution characteristics of multispectral and panchromatic data to produce spatially enhanced multispectral observations. In our case, the spatial resolutions are almost the same, while the complementary information resides in the spectral bands. We seek to integrate information from the red, green, and infrared bands (present in Egyptsat images) with information originating from POLSAR data that represents the geometric and physical characteristics of objects.
The proposed scheme is used to merge the multispectral Egyptsat image (Band 1, Band 2, and Band 3; see App-1) with the scattering matrix of POLSAR with HH, HV, and VH polarizations, taking into account that HV = VH. The POLSAR data set used is C-band Radarsat-2 (RS-2) data in full-polarimetric mode. The spatial resolution of POLSAR is 7 m, while that of Egyptsat is 7.8 m. Speckle in the POLSAR data is reduced using the enhanced Lee filter [31], while the Egyptsat image is resampled to match the POLSAR resolution. The data set has been accurately co-registered. To illustrate the results, three batches (pat-1, pat-2, and pat-3) of images are used, as shown in figure 4. The batches are carefully chosen to be sensitive to radar-object interaction (i.e., vertical and horizontal polarization). Comparing the proposed scheme with traditional remote sensing methods [32] such as principal component, Brovey, and IHS is not appropriate in our case, since these techniques aim to merge multispectral images with PAN/SAR images.
To evaluate the proposed method, we compare it with three fusion methods: SIST-based [33], PCNN-based [22], and Contourlet-based [13]. In these methods, the fusion process is performed separately for $S_{HH}$, $S_{HV}$, and $S_{VV}$, and the fused results are displayed in RGB. Moreover, the proposed scheme is compared with the method proposed by Lei Wang, which is able to fuse two multispectral images [34]; he used a Hidden Markov Tree (HMT) in the SIST domain to perform the fusion.
FIGURE 4: Three input batches. (a), (c), (e): multispectral images (in RGB color composite). (b), (d), (f): C-band raw polarimetric SAR images (R=HH, G=HV, B=VV). Copyright (MDA).
Figure 4 shows the source images used in this research. We use three batches of images in the fusion experiment; the images are selected to react differently to different POLSAR polarizations. One group focuses on a hilly area, the second emphasizes a flat region, and the last contains both. Figure 5 shows the fusion results for batch-1. Generally, although all methods inject a fair amount of information from the source images into the fused image, they fail to achieve an acceptable transfer of information, specifically that related to low-frequency regions. To clarify the visual assessment, figure 6 shows a closer look at these images.
Closer inspection of these results discloses the following:
As can be seen in figures 5 and 6, the fused image obtained by the HMT-based method presents low contrast and a vague appearance; moreover, it shows a loss of edge information and mixing of colors in the low-frequency sub-bands, due to the use of only the intensity component in the fusion process [34].
The fused image from the SIST-based method loses information to some extent in the low-frequency sub-bands; in addition, the edges suffer from over-smoothing.
The fused images from the Contourlet-based method show noticeable noise in high-frequency regions (the edges), as the transform is not shift-invariant, which leads to poor visual effects.
The PCNN method shows an improvement in visual effects, although it suffers from an overall hazy appearance. The proposed method introduces significant migration of information from the input images to the fused one in both the low-frequency and band-pass sub-bands.
FIGURE 5: The fusion results of Fig. 4 (a) and (b): (a)-(e) fused images using the proposed method, SIST-based, Contourlet-based, HMT-based, and PCNN-based methods, respectively.
According to the quantitative evaluation, table 1 lists the different metric measures used in this study; the best values are achieved by the proposed model regarding the amount of information transferred from the input images to the output and the strength of the edges. Zoomed areas of the second and third batches are shown in figures 7 and 8, and tables 2 and 3 show the quantitative measures of the fused images.
TABLE 2: Comparison of fusion results with different fusion methods (batch-2; the radar image is sensitive to horizontal polarization, an almost flat region).

                   Proposed Model   SIST-Based   Contourlet-Based   HMT-Based   PCNN-Based
Correlation        0.88             0.81         0.78               0.7         0.785
Average-gradient   6.192            6.1          5.87               5.4         5.7
Entropy            7.069            7            7.1                6.5         6.9
Q^AB/F             0.75             0.7          0.66               0.711       0.74

TABLE 1: Comparison of fusion results with different fusion methods (batch-1; the radar image is sensitive to vertical polarization, a steep region).

                   Proposed Model   SIST-Based   Contourlet-Based   HMT-Based   PCNN-Based
Correlation        0.88             0.85         0.83               0.8         0.83
Average-gradient   8.644            8.1          8.8                7.8         8.4
Entropy            8.457            8.2          8.5                7.38        8.1
Q^AB/F             0.792            0.77         0.73               0.61        0.69

FIGURE 6: Zoomed shot of the fusion results: (a) and (b): multispectral and POLSAR input images. (c)-(g): zoomed areas of the fused images using the proposed method, NSST-based, Contourlet-based, HMT-based, and PCNN-based methods, respectively.
Figures 7 and 8 give the same indication: some high-frequency components are eliminated from the fusion results of the other methods, as pointed out by the yellow arrow in figure 7. In figure 8, the valley structures are fully preserved by the proposed method, while the color information is variegated between the multispectral and POLSAR images.
TABLE 3: Comparison of fusion results with different fusion methods (batch-3; the radar image is sensitive to both horizontal and vertical polarization).

                   Proposed Model   SIST-Based   Contourlet-Based   HMT-Based   PCNN-Based
Correlation        0.91             0.91         0.88               0.73        0.86
Average-gradient   7.2              6.87         6.99               7.04        5.9
Entropy            8.15             7.59         8.1                7.9         6.5
Q^AB/F             0.68             0.679        0.523              0.61        0.59

FIGURE 7: Zoomed area of the second batch of images: (a) and (b): multispectral and POLSAR input images. (c)-(g): fused images using the proposed method, NSST-based, Contourlet-based, HMT-based, and PCNN-based methods, respectively.
7. CONCLUSION
This research investigates the fusion of full-polarimetric POLSAR data (RS-2) with multispectral optical imagery (Egyptsat). Applying the fusion process yields a new image that can be considered neither optical nor POLSAR: it is a synthesized image produced for better image understanding. To meet this requirement, we propose a new weighted-average scheme based on NSST and m-PCNN. First, the input images are transformed into low-frequency and band-pass sub-bands. The low frequencies of both images are fused by relating the maximum and minimum energies of the image bands in a reciprocal way; in addition, the fusion process takes the texture distribution into account by adding a dispersion index to the fused coefficients. Second, the m-PCNN is used to guide the fusion of the band-pass sub-bands by incorporating edge-information measurements. The experimental results indicate that the fused image has strengthened object structure detail.
FIGURE 8: Zoomed area of the third batch of images: (a) and (b): multispectral and POLSAR input images. (c)-(g): fused images using the proposed method, NSST-based, Contourlet-based, HMT-based, and PCNN-based methods, respectively.
ACKNOWLEDGMENTS
We express our deepest thanks and appreciation to the Canadian Space Agency (CSA) for providing the Radarsat-2 images for this study under the agreement of project ID5128. We also acknowledge the National Authority for Remote Sensing and Space Sciences for supporting this work.
Appendix 1. The spectral resolutions of the Egyptsat-1 data

Band    Description    Wavelength (μm)  Resolution (m)
Band 1  Green          0.51-0.59        7.80
Band 2  Red            0.61-0.68        7.80
Band 3  Near infrared  0.80-0.89        7.80
Band 4  Panchromatic   0.50-0.89        7.80
Band 5  Mid infrared   1.10-1.70        39.5