Image fusion provides users with detailed information about urban and rural environments, which is useful for applications such as urban planning and management when higher-spatial-resolution images are not available. There are many image-fusion methods. This paper implements, evaluates, and compares six satellite image-fusion methods, namely the wavelet 2D-M transform, Gram-Schmidt, high-frequency modulation, the high-pass filter (HPF) transform, simple mean value, and PCA. An Ikonos image (panchromatic, PAN, and multispectral, MULTI) showing the northwest of Bogotá (Colombia) is used to generate six fused images: MULTIWavelet 2D-M, MULTIG-S, MULTIMHF, MULTIHPF, MULTISMV, and MULTIPCA. To assess the efficiency of the six image-fusion methods, the resulting images were evaluated in terms of both spatial and spectral quality. To this end, four metrics were applied: the correlation index, the erreur relative globale adimensionnelle de synthèse (ERGAS), the relative average spectral error (RASE), and the Q index. The best results were obtained for the MULTISMV image, which exhibited a spectral correlation higher than 0.85, a Q index of 0.84, and the best spectral-assessment scores according to ERGAS and RASE, 4.36% and 17.39% respectively.
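As a rough illustration of the spectral-quality metrics this abstract names, here is a minimal NumPy sketch. The function names, the (bands, H, W) array layout, and the resolution-ratio convention are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def ergas(ref, fused, ratio=4):
    """ERGAS (lower is better). ratio = MS/PAN resolution ratio (4 for Ikonos).
    ref, fused: arrays of shape (bands, H, W)."""
    rmse2 = ((ref - fused) ** 2).mean(axis=(1, 2))   # per-band MSE
    means = ref.mean(axis=(1, 2))                    # per-band mean radiance
    return 100.0 / ratio * np.sqrt((rmse2 / means ** 2).mean())

def rase(ref, fused):
    """RASE (lower is better), normalised by the global mean radiance."""
    rmse2 = ((ref - fused) ** 2).mean(axis=(1, 2))
    return 100.0 / ref.mean() * np.sqrt(rmse2.mean())

def q_index(x, y):
    """Wang-Bovik universal quality index; equals 1.0 for identical images."""
    x, y = x.ravel(), y.ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

Identical reference and fused images give ERGAS = RASE = 0 and Q = 1, which is a quick sanity check for any implementation of these metrics.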
Novel Approach for Image Restoration and Transmission (CSCJournals)
This paper develops a new technique for image restoration and transmission in which the image size is halved after the image is transformed to the frequency domain by the discrete Fourier transform. The conjugate symmetry and mirror property of the transformed spectrum are exploited by deleting the redundant spectral values in the second half of the image after tracking and recording their conjugate locations; these redundant locations are stored using a one-to-one relationship. Through this halving procedure, the new image size is divided by two. A reconstruction procedure then redistributes the deleted spectral values to their associated locations, after which the image is restored by applying the inverse discrete Fourier transform back to the spatial domain. The restored image is evaluated using the peak signal-to-noise ratio, and the results are very satisfactory. The advantages of this technique appear in storage cost, where the number of memory locations is reduced by half; on the communication side, this work shows that the time needed to transmit the halved image is half that of the original.
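The conjugate-symmetry property the abstract relies on, F[u, v] = conj(F[-u, -v]) for a real-valued image, is the same redundancy NumPy's real-input FFT exploits: roughly half the spectrum is enough to reconstruct the image exactly. A small sketch (standard NumPy calls, not the paper's code):

```python
import numpy as np

img = np.arange(64, dtype=float).reshape(8, 8)

full = np.fft.fft2(img)    # full 8 x 8 complex spectrum
half = np.fft.rfft2(img)   # 8 x 5 complex spectrum: the redundant half is dropped

# the image is recovered exactly from the half spectrum
restored = np.fft.irfft2(half, s=img.shape)
```

Storing `half` instead of `full` is what roughly halves the memory and transmission cost for a real image.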
International Refereed Journal of Engineering and Science (IRJES) is a peer-reviewed online journal for professionals and researchers in the field of computer science. Its main aim is to resolve emerging and outstanding problems revealed by recent social and technological change. IRJES provides a platform for researchers to present and evaluate their work from both theoretical and technical aspects and to share their views.
www.irjes.com
REGION CLASSIFICATION AND CHANGE DETECTION USING LANDSAT-8 IMAGES (ADEIJ Journal)
Change detection in remote sensing images remains an important and open problem for damage assessment. A new change-detection method for Landsat-8 images based on homogeneous pixel transformation (HPT) is proposed. HPT transfers one image from its original feature space (e.g., grey space) to another feature space (e.g., spectral space) at the pixel level, so that the pre-event and post-event images are represented in a common space for convenient change detection. HPT consists of two operations: forward transformation and backward transformation. In the forward transformation, each pixel of the pre-event image in the first feature space is mapped to an estimated pixel in the second space, corresponding to the post-event image, based on the known unchanged pixels. A noise-tolerant multi-value estimation method determines the mapping pixel using the k-nearest-neighbours technique. Once the mapping pixels of the pre-event image are identified, the difference values between the mapped image and the post-event image can be generated directly. Similar work is done in the backward transformation to map the post-event image into the first space, yielding one more difference value per pixel. The two difference values are then combined to improve the robustness of the detection with respect to noise and the heterogeneity of the images. The fast and robust fuzzy c-means (FRFCM) clustering algorithm is employed to divide the integrated difference values into two clusters: changed pixels and unchanged pixels. The detection results may contain a few noisy regions as small erroneous detections, so a spatial-neighbour-based noise filter is developed to reduce false alarms and missed detections.
Experiments on real Landsat-8 images of Tuticorin between 2013 and 2019 validate the percentage of changed regions obtained by the proposed method.
The use of a fused image and a compressed model in a VLSI implementation is demonstrated. Distortion correction is also considered in this study; the distortion-correction models use least-squares estimation. Image fusion is widely employed in medical imaging: many images are obtained from various sensors, or multiple images are captured at different times by one sensor. CT scans give useful information on denser tissue with the least distortion, while magnetic resonance imaging (MRI) provides useful information on soft tissue, albeit with significant distortion. The DWT-based image-fusion approach employs the discrete wavelet transform, a multi-resolution analysis tool. A back-mapping expansion polynomial is used to reduce computational complexity. Using 0.18 µm technology, the proposed VLSI design achieves 218 MHz with 1480 logic elements.
In this paper, we analyze and compare the performance of fusion methods based on four different transforms: i) the wavelet transform, ii) the curvelet transform, iii) the contourlet transform, and iv) the nonsubsampled contourlet transform. The fusion framework and scheme are explained in detail, and two different sets of images are used in our experiments. Furthermore, eight different performance metrics are adopted to comparatively analyze the fusion results. The comparison shows that the nonsubsampled contourlet transform method performs better than the other three, both spatially and spectrally. We also observed from additional experiments that a decomposition level of 3 offered the best fusion performance, and decomposition levels beyond 3 did not significantly improve the results.
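The transform-domain fusion scheme these comparisons share can be sketched with a hand-rolled one-level 2-D Haar decomposition: average the approximation bands, keep the larger-magnitude detail coefficients, and invert. This is a generic NumPy-only illustration of the wavelet case (the Haar filters and the max-absolute fusion rule are our choices, not any of the compared papers' exact schemes):

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar decomposition: approximation + 3 detail bands."""
    a = (img[0::2] + img[1::2]) / 2          # row-pair low-pass
    d = (img[0::2] - img[1::2]) / 2          # row-pair high-pass
    cA = (a[:, 0::2] + a[:, 1::2]) / 2
    cH = (a[:, 0::2] - a[:, 1::2]) / 2
    cV = (d[:, 0::2] + d[:, 1::2]) / 2
    cD = (d[:, 0::2] - d[:, 1::2]) / 2
    return cA, cH, cV, cD

def ihaar2(cA, cH, cV, cD):
    """Exact inverse of haar2 (even-sized images assumed)."""
    h, w = cA.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = cA + cH, cA - cH
    d[:, 0::2], d[:, 1::2] = cV + cD, cV - cD
    out = np.empty((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(img1, img2):
    """Average the approximations, keep the larger-magnitude details."""
    c1, c2 = haar2(img1), haar2(img2)
    cA = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(c1[1:], c2[1:])]
    return ihaar2(cA, *details)
```

The curvelet, contourlet, and nonsubsampled contourlet variants follow the same decompose/select/reconstruct pattern; only the transform changes.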
Detection of Bridges using Different Types of High Resolution Satellite Images (idescitation)
Automatic detection of geographical objects such as roads, buildings, and bridges from remote sensing imagery is a meaningful but difficult task. A bridge over water is a typical geographical object, and its automatic detection is of great significance for many applications. Finding the region of interest (ROI) containing only water areas is the most crucial task in bridge detection. This can be done with image-processing or soft-computing methods using images in the spatial domain, or with the Normalized Difference Water Index (NDWI) using images in the spectral domain. We have developed an efficient algorithm for bridge detection in which the ROI segmentation is done using both methods. Exact locations of bridges are obtained from knowledge models and the spatial resolution of the image. These knowledge models are applied in the algorithm in such a way that the thresholds are fixed automatically depending on the quality of the image. Using the algorithm, bridges of any type are extracted irrespective of their inclination and shape.
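The NDWI mentioned above is a simple band ratio: water reflects green light but absorbs near-infrared, so (Green - NIR)/(Green + NIR) is positive over water. A minimal sketch (the zero threshold is the common textbook choice, not necessarily this paper's adaptive one):

```python
import numpy as np

def ndwi(green, nir):
    """NDWI = (Green - NIR) / (Green + NIR); water pixels come out positive."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-12)   # eps avoids division by zero

# a water mask from the usual zero threshold:
# mask = ndwi(green_band, nir_band) > 0
```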
PDE BASED FEATURES FOR TEXTURE ANALYSIS USING WAVELET TRANSFORM (IJCI JOURNAL)
In this paper, a novel method of partial differential equation (PDE) based features for texture analysis using the wavelet transform is proposed. The aim is to investigate texture descriptors that perform well at low computational cost. The wavelet transform is applied to obtain directional information from the image, and anisotropic diffusion is used to find a texture approximation from that directional information. The texture approximation is then used to compute various statistical features. LDA is employed to enhance class separability, and a k-NN classifier with tenfold experimentation is used for classification. The proposed method is evaluated on the Brodatz dataset, and the experimental results demonstrate its effectiveness compared to other methods in the literature.
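The anisotropic diffusion step is the PDE in question: it smooths homogeneous regions while a conduction function suppresses diffusion across strong edges. A minimal Perona-Malik sketch (parameter values and the exponential conduction function are illustrative defaults, not the paper's settings):

```python
import numpy as np

def perona_malik(img, niter=10, kappa=0.1, gamma=0.2):
    """Minimal Perona-Malik anisotropic diffusion with periodic boundaries.
    gamma <= 0.25 keeps the 4-neighbour explicit scheme stable."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conduction: ~0 across edges
    for _ in range(niter):
        # nearest-neighbour differences in the four compass directions
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, 1, 1) - u
        dw = np.roll(u, -1, 1) - u
        u += gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Statistical features (mean, variance, entropy, and so on) would then be computed on the diffused approximation rather than on the raw wavelet bands.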
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION (IJCI JOURNAL)
The availability of imaging sensors operating in multiple spectral bands has created a need for image-fusion algorithms that combine the images from these sensors efficiently into a single image that is more informative and more perceptible to the human eye. Multispectral image fusion is the process of combining optically acquired images from different spectral bands. In this paper, we use pixel-level image fusion based on principal component analysis (PCA) to combine satellite images of the same scene from seven different spectral bands. PCA is used because it performs well for greyscale image fusion; its aim is to reduce a large set of variables to a small set that still contains most of the information present in the large set. The paper compares different parameters, namely entropy, standard deviation, and correlation coefficient, as the number of fused images increases from two to seven. Finally, the paper shows that the information content of the fused image saturates after four images are fused.
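Pixel-level PCA fusion of the kind described above can be sketched by weighting each band by the leading eigenvector of the cross-band covariance matrix. A minimal NumPy version (the absolute-value normalisation of the eigenvector is our convention for illustration):

```python
import numpy as np

def pca_fuse(images):
    """Fuse co-registered same-size images: weight each one by the first
    principal component of the cross-band covariance matrix."""
    X = np.stack([im.ravel() for im in images])      # bands x pixels
    cov = np.cov(X)                                  # bands x bands
    vals, vecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    w = np.abs(vecs[:, -1])                          # leading eigenvector
    w = w / w.sum()                                  # weights sum to 1
    return np.tensordot(w, np.stack(images), axes=1)
```

With identical inputs the weights degenerate to a plain average, so the fused result equals the input, a convenient sanity check.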
Sub-windowed laser speckle image velocimetry by fast Fourier transform technique
Abstract
In this work, laser speckle velocimetry, an optical method for velocity measurement of fluid flow, is described. A laser sheet is generated and illuminated on microscopic seeding particles to produce a speckle pattern at the recording plane. Double-frame, single-exposure speckle images are captured in such a way that the second speckle image is shifted by a known amount in a known direction; this deliberate spatial shift of the second image resolves the directional ambiguity inherent in the auto-correlation method. Cross-correlation of the interrogation sub-areas is computed by the fast Fourier transform technique, and four sub-windows are processed to obtain precise velocity information with vector-map analysis.
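The FFT-based cross-correlation at the heart of this method finds the displacement between two interrogation windows as the location of the correlation peak. A minimal integer-pixel sketch (circular shifts assumed; real velocimetry codes add windowing and sub-pixel peak interpolation):

```python
import numpy as np

def shift_by_fft(ref, shifted):
    """Estimate the (row, col) displacement of `shifted` relative to `ref`
    from the peak of their FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(shifted) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates into the signed range (-N/2, N/2]
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```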
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGES (ijcnac)
This project presents a new image-compression technique for the coding of retinal and fingerprint images. Retinal images are used to detect diseases such as diabetes and hypertension; fingerprint images are used for security purposes. In this work, the contourlet transform of the retinal or fingerprint image is taken first, and its coefficients are quantized using an adaptive multistage vector-quantization scheme. The number of code vectors in the adaptive scheme depends on the dynamic range of the input image.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Abstract: Primarily due to progress in super-resolution imagery, segment-based image-analysis methods for generating and updating geographical information are becoming more and more important. This work presents image segmentation based on colour features with k-means clustering. The work is divided into two stages: first, the colour separation of the satellite image is enhanced using decorrelation stretching, and then the regions are grouped into a set of five classes using the k-means clustering algorithm. Spatial information is first gathered around every pixel, and two filtering procedures are added to suppress the effect of pseudo-edges. In addition, a spatial-information weight is constructed and clustered with k-means, and the regularization strength in every region is controlled by the cluster-centre value. The experimental results, on both simulated and real datasets, demonstrate that the proposed approach can effectively reduce the pseudo-edges of total-variation regularization.
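The k-means stage of such a pipeline can be sketched in plain NumPy over per-pixel colour vectors (for instance, the RGB values after decorrelation stretching). The initialisation and iteration count here are generic illustrations, not this paper's settings:

```python
import numpy as np

def kmeans_labels(pixels, k=5, iters=20, seed=0):
    """Plain k-means over per-pixel feature vectors; returns one class
    label per pixel. pixels: array of shape (n_pixels, n_features)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest centre
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# segmenting an H x W x 3 image into 5 classes:
# labels = kmeans_labels(img.reshape(-1, 3), k=5).reshape(img.shape[:2])
```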
An Unsupervised Change Detection in Satellite Images Using MRFFCM Clustering (Editor IJCATR)
This paper presents a new approach for change detection in synthetic aperture radar (SAR) images by incorporating a Markov random field (MRF) within the framework of fuzzy c-means (FCM). The objective is to partition the difference image, generated from multitemporal satellite images, into changed and unchanged regions. The difference image is generated from the log-ratio and mean-ratio images by an image-fusion technique, and its quality depends on that technique; here we propose an image-fusion method based on the stationary wavelet transform. The difference image is then processed to discriminate changed regions from unchanged regions using fuzzy clustering. The analysis of the difference image uses an MRF approach that exploits inter-pixel class dependency in the spatial domain to improve the accuracy of the final change-detection map. Experimental results on real SAR images demonstrate that the change-detection results obtained by MRFFCM exhibit less error than previous approaches. The quality of the proposed fusion algorithm is verified using well-known image-fusion measures and the percentage of correct classifications.
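The two difference operators this abstract fuses are simple per-pixel formulas. A minimal sketch of both (the epsilon guards and the particular mean-ratio form are common conventions, not necessarily this paper's exact definitions):

```python
import numpy as np

def log_ratio(im1, im2, eps=1e-6):
    """Log-ratio operator: robust to the multiplicative speckle noise of SAR."""
    return np.abs(np.log((im2 + eps) / (im1 + eps)))

def mean_ratio(im1, im2, eps=1e-6):
    """Mean-ratio operator: 0 where the two acquisitions agree."""
    return 1 - np.minimum(im1, im2) / (np.maximum(im1, im2) + eps)
```

Fusing the two maps (here, by wavelet fusion) aims to keep the log-ratio's noise robustness while preserving the mean-ratio's sensitivity to changed regions.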
Application of Image Retrieval Techniques to Understand Evolving Weather (ijsrd.com)
Multispectral satellite images provide valuable information for understanding the evolution of various weather systems such as tropical cyclones, shifts of the intertropical convergence zone, and the movement of various troughs; accurate prediction and estimation will save lives and property. This work deals with the development of an application that enables users to search for an image in a database using grey-level, texture, or shape features for meteorological satellite image retrieval. The grey-level feature is extracted using the histogram method, the texture feature using the grey-level co-occurrence method and a wavelet approach, and the shape feature vector using morphological operations. The similarity between the query image and the database images is calculated using the Euclidean distance, and the performance of the system is evaluated using precision.
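The grey-level branch of such a retrieval system reduces to comparing normalised histograms by Euclidean distance. A minimal sketch (bin count, value range, and function names are our illustrative choices):

```python
import numpy as np

def grey_histogram(img, bins=32):
    """Normalised grey-level histogram used as the retrieval feature vector."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def rank_by_distance(query, database):
    """Return database indices sorted by Euclidean distance to the query."""
    q = grey_histogram(query)
    d = [np.linalg.norm(q - grey_histogram(im)) for im in database]
    return np.argsort(d)
```

The texture (co-occurrence, wavelet) and shape (morphological) features would simply be further vectors concatenated or compared with the same distance.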
Radar images are strongly preferred for the analysis of geospatial information about the Earth's surface and for assessing environmental conditions. Radar images captured by different remote sensors are combined to obtain complementary information. To collect radar images, synthetic aperture radar (SAR) sensors are used; these are active sensors that can gather information day and night, unaffected by weather conditions. We discuss DCT and DWT image-fusion methods, which give a more informative fused image, and we compare performance parameters of the two methods to determine which technique is superior.
Analysis of the visualizing changes in radar time series using the REACTIV method (IJECEIAES)
A method for visualizing a temporal stack of synthetic aperture radar (SAR) images, called REACTIV, is presented in this work; it highlights in colour the zones that have undergone change over the observed period. The method has been widely tested on the Google Earth Engine (GEE) platform. It relies on the hue-saturation-value (HSV) visualization space and supports estimation only in the time domain, not spatially. The temporal coefficient of variation, of which several statistical properties are described, is encoded in the saturation channel. The limitations of the method are studied, and some applications are implemented in this study.
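The quantity REACTIV maps onto the saturation channel is the per-pixel temporal coefficient of variation, the ratio of the standard deviation to the mean along the time axis. A minimal sketch (the epsilon guard and the time-first array layout are our conventions):

```python
import numpy as np

def temporal_cv(stack):
    """Per-pixel coefficient of variation over time for a (time, H, W) stack.
    Stable pixels give low CV; pixels that change over the period give high CV."""
    return stack.std(axis=0) / (stack.mean(axis=0) + 1e-12)
```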
Quality Assessment of Pixel-Level Image Fusion Using Fuzzy Logic (ijsc)
Image fusion aims to reduce uncertainty and minimize redundancy while maximizing the relevant information from two or more images of a scene in a single composite image that is more informative and better suited to visual perception or to processing tasks such as medical imaging, remote sensing, concealed-weapon detection, weather forecasting, and biometrics. Image fusion combines registered images to produce a high-quality fused image with both spatial and spectral information; a fused image with more information improves the performance of the image-analysis algorithms used in different applications. In this paper, we propose a fuzzy-logic method to fuse images from different sensors in order to enhance quality, and we compare the proposed method with two other methods: image fusion using the wavelet transform, and weighted-average discrete-wavelet-transform-based image fusion using a genetic algorithm (GA). The evaluation uses the image quality index (IQI), mutual information measure (MIM), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), fusion factor (FF), fusion symmetry (FS), fusion index (FI), and entropy. The results show that the proposed fuzzy-based image-fusion approach improves the quality of the fused image compared to the earlier reported methods.
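One way fuzzy logic enters pixel-level fusion is through soft membership weights rather than a hard pick-one rule. The toy sketch below derives a sigmoidal membership from each source pixel's local gradient activity and blends accordingly; this is our own illustrative rule, not the paper's rule base:

```python
import numpy as np

def fuzzy_fuse(im1, im2, kappa=10.0):
    """Toy fuzzy fusion: each source pixel gets a sigmoidal membership based
    on its local gradient magnitude (activity); the fused pixel is the
    membership-weighted average of the two sources."""
    def activity(im):
        gy, gx = np.gradient(im)
        return np.hypot(gx, gy)
    # membership of source 1 rises smoothly where it is the more active image
    m1 = 1 / (1 + np.exp(-kappa * (activity(im1) - activity(im2))))
    m2 = 1 - m1
    return m1 * im1 + m2 * im2
```

When the two sources are equally active, the memberships settle at 0.5 each and the rule degenerates to a plain average.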
Image fusion is an image-enhancement approach that combines two or more images into a single image to increase visual perception; each source image has a different object in focus. The process is now broadly used in various image-processing applications such as medical imaging (MRI, CT [18], and PET), remote sensing, satellite imaging, and the design of intelligent robots. In this paper we survey the work done by various researchers to obtain a high-quality improved image by combining important and desirable features from two or more images into a single image. Different image-fusion rules, such as the maximum-selection scheme, the weighted-average scheme, and the window-based verification scheme, are discussed, as are image-fusion techniques using the DWT. Distinct blurred images are fused using the DWT, the SWT, and local correlation, and their results are compared. The role of fusion in image enhancement is also considered, and the results of the different fusion techniques are compared in figures and tables.
Slantlet transform used for faults diagnosis in robot arm (IJEECSIAES)
Robot-arm systems, which combine electrical and mechanical subsystems, are common targets for fault detection and diagnosis. A fault-detection and diagnosis study is presented for two robot arms. The disturbance due to faults at the robot's joints causes oscillations at the tip of the arm, and the multi-directional acceleration is analysed to extract the fault features. Simulations for planar and spatial robots are presented. Two feature (fault) detection methods are used in this paper: the first is the discrete wavelet transform, which has been applied in many previous works; the second is the Slantlet transform, an improved version of the discrete wavelet transform. A multi-layer perceptron artificial neural network is used for fault localization and classification. According to the results, the Slantlet transform with the multi-layer perceptron achieves better performance (4.7088e-05), a lower computation time (71.017308 s), and higher accuracy (100%) than the discrete wavelet transform with an artificial neural network for the same purpose.
The IoT and registration of MRI brain diagnosis based on genetic algorithm an...IJEECSIAES
Multimodal brain-image registration is a key technology for the accurate and rapid diagnosis and treatment of brain diseases. To achieve high-resolution registration, a fast sub-pixel registration algorithm based on a single-step discrete wavelet transform (DWT) is combined with a convolutional neural network (CNN) to classify the registration of brain tumours. In this work, a genetic algorithm and CNN classification are applied to the registration of magnetic resonance imaging (MRI) images. The approach follows eight steps: reading the source MRI brain image; loading the reference image; enhancing all MRI images with a bilateral filter; transforming the images with the 2-D DWT; evaluating each MRI image with an entropy-based fitness function; applying the genetic algorithm, selecting two images by roulette-wheel selection and crossing them over; classifying the result of the subtraction as normal or abnormal with the CNN; and, in the eighth step, using an Arduino and a global system for mobile (GSM) 8080 module to send a message to the patient. The proposed model is tested on an MRI database from the Medical City Hospital in Baghdad consisting of 550 normal and 350 abnormal images, split into 80% training and 20% testing; it achieves 98.8% accuracy.
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONIJCI JOURNAL
The availability of imaging sensors operating in multiple spectral bands has led to the requirement of
image fusion algorithms that would combine the image from these sensors in an efficient way to give an
image that is more informative as well as perceptible to human eye. Multispectral image fusion is the
process of combining images from different spectral bands that are optically acquired. In this paper, we
used a pixel-level image fusion based on principal component analysis that combines satellite images of the
same scene from seven different spectral bands. The purpose of using principal component analysis
technique is that it is well suited to grayscale image fusion and gives good results. The main aim of
PCA technique is to reduce a large set of variables into a small set which still contains most of the
information that was present in the large set. The paper compares different parameters namely, entropy,
standard deviation, correlation coefficient etc. for different number of images fused from two to seven.
Finally, the paper shows that the information content in an image gets saturated after fusing four images.
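As a toy illustration of the fusion step described above (a sketch under the assumption of co-registered, equally sized bands; not the paper's implementation), PCA-derived weights can drive a pixel-level fusion like this:

```python
import numpy as np

def pca_fuse(bands):
    """Fuse co-registered single-band images with PCA-derived weights.

    The bands are flattened and stacked, the band-to-band covariance is
    eigendecomposed, and the leading principal component (normalized to
    sum to 1) weights a pixel-wise sum of the input bands.
    """
    stack = np.stack([b.ravel().astype(float) for b in bands])  # (k, pixels)
    _, vecs = np.linalg.eigh(np.cov(stack))   # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])                   # leading principal component
    w /= w.sum()                              # convex weights
    return sum(wi * b.astype(float) for wi, b in zip(w, bands))
```

Because the weights are non-negative and sum to one, the fused image stays within the per-pixel range of the inputs.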
Sub-windowed laser speckle image velocimetry by fast fourier transform technique
Abstract
In this work, laser speckle velocimetry, an optical method for measuring fluid-flow velocity, is described. A laser sheet is generated and used to illuminate microscopic seeding particles, producing a speckle pattern at the recording plane. Double-frame, single-exposure speckle images are captured such that the second speckle image is shifted by a known amount in a known direction. The auto-correlation method suffers from an ambiguity in the direction of flow; to resolve this, the spatial shift of the second image is applied deliberately. Cross-correlation of the interrogation sub-areas is computed with the fast Fourier transform, and four sub-windows are processed to obtain precise velocity information presented as a vector map.
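The FFT cross-correlation step can be sketched as follows (a minimal version assuming an integer, circular shift between interrogation windows; the function name is ours, not the authors'):

```python
import numpy as np

def fft_shift_estimate(win1, win2):
    """Estimate the integer pixel displacement of win2 relative to win1
    via FFT-based circular cross-correlation."""
    f1 = np.fft.fft2(win1 - win1.mean())
    f2 = np.fft.fft2(win2 - win2.mean())
    corr = np.fft.ifft2(f1.conj() * f2).real      # cross-correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past N/2 correspond to negative displacements
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```

Shifting the second image by a known amount, as the abstract describes, removes the direction ambiguity that plain auto-correlation has.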
EFFICIENT IMAGE COMPRESSION USING LAPLACIAN PYRAMIDAL FILTERS FOR EDGE IMAGESijcnac
This project presents a new image compression technique for the coding of retinal and
fingerprint images. Retinal images are used to detect diseases like diabetes or
hypertension. Fingerprint images are used for the security purpose. In this work, the
contourlet transform of the retinal and fingerprint image is taken first. The coefficients of
the contourlet transform are quantized using adaptive multistage vector quantization
scheme. The number of code vectors in the adaptive vector quantization scheme depends
on the dynamic range of the input image.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Abstract: Primarily due to progress in super-resolution imagery, segment-based image-analysis methods for generating and updating geographical information are becoming increasingly important. This work presents an image segmentation based on colour features with K-means clustering. The work is divided into two stages: first, the colour separation of the satellite image is enhanced using decorrelation stretching, and then the regions are grouped into a set of five classes using the K-means clustering algorithm. The spatial information is first gathered around every pixel, and two filtering procedures are then added to suppress the effect of pseudo-edges. In addition, a spatial-information weight is constructed and clustered with K-means, and the regularization strength in every region is controlled by the cluster-centre value. Experimental results, on both simulated and real datasets, demonstrate that the proposed approach can effectively reduce the pseudo-edges of total-variation regularization.
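A bare-bones sketch of the K-means grouping stage (plain K-means on pixel colours only; the decorrelation stretching and pseudo-edge suppression described above are omitted):

```python
import numpy as np

def kmeans_pixels(img, k=5, iters=20, seed=0):
    """Group the pixels of an (H, W, 3) image into k colour classes
    and return an (H, W) label map."""
    h, w, c = img.shape
    pts = img.reshape(-1, c).astype(float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # assign every pixel to its nearest colour centre
        dist = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # move each centre to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```

The pairwise-distance tensor makes this memory-hungry for full satellite scenes; production code would tile the image or use a library implementation.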
An Unsupervised Change Detection in Satellite IMAGES Using MRFFCM ClusteringEditor IJCATR
This paper presents a new approach for change detection in synthetic aperture radar images by incorporating a Markov random field (MRF) within the framework of FCM. The objective is to partition the difference image, generated from multitemporal satellite images, into changed and unchanged regions. The difference image is generated from the log-ratio and mean-ratio images by an image-fusion technique, and its quality depends on that technique; in the present work, we propose an image-fusion method based on the stationary wavelet transform. The difference image is then processed to discriminate changed regions from unchanged regions using fuzzy clustering algorithms. The analysis of the DI is done with a Markov random field (MRF) approach that exploits the inter-pixel class dependency in the spatial domain to improve the accuracy of the final change-detection areas. Experimental results on real synthetic aperture radar images demonstrate that the change-detection results obtained by the MRFFCM exhibit less error than previous approaches. The quality of the proposed fusion algorithm is verified with well-known image-fusion measures, and the percentage of correct classifications is calculated and verified.
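For illustration, the log-ratio difference-image operator mentioned above reduces, pixel by pixel, to the following (a sketch of the DI step only; the SWT fusion and MRFFCM stages are not shown):

```python
import numpy as np

def log_ratio_di(img1, img2, eps=1e-6):
    """Pixel-wise log-ratio difference image for two co-registered
    SAR intensity images; eps guards against division by zero."""
    return np.abs(np.log((img2 + eps) / (img1 + eps)))
```

Identical scenes give a DI of zeros, and multiplicative changes show up additively, which is why the log-ratio is popular for speckled SAR data.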
Application of Image Retrieval Techniques to Understand Evolving Weatherijsrd.com
Multispectral satellite images provide valuable information for understanding the evolution of various weather systems, such as tropical cyclones, shifting of the intertropical convergence zone, and movements of various troughs; accurate prediction and estimation will save lives and property. This work deals with the development of an application that enables users to search for an image in a database using gray-level, texture, or shape features for meteorological satellite-image retrieval. The gray-level feature is extracted using the histogram method; the texture feature is extracted using the gray-level co-occurrence method and a wavelet approach; and the shape feature vector is extracted using morphological operations. The similarity between the query image and the database images is calculated using the Euclidean distance, and the performance of the system is evaluated using precision.
RADAR images are strongly preferred for the analysis of geospatial information about the Earth's surface and for assessing environmental conditions. Radar images are captured by different remote sensors, and these images are combined to obtain complementary information. To collect radar images, synthetic aperture radar (SAR) sensors are used; these are active sensors that can gather information day and night, unaffected by weather conditions. We discuss DCT- and DWT-based image-fusion methods, which give a more informative fused image, and we compare performance parameters between the two methods to determine the superior technique.
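A DWT fusion rule of the kind compared above can be sketched with a hand-rolled one-level Haar transform (the wavelet choice and fusion rule are our assumptions, since the text does not fix them): average the approximation band, keep the larger-magnitude detail coefficients.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform of an even-sized array."""
    a = (x[0::2] + x[1::2]) / 2.0              # row low-pass
    d = (x[0::2] - x[1::2]) / 2.0              # row high-pass
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,   # LL (approximation)
            (a[:, 0::2] - a[:, 1::2]) / 2.0,   # LH
            (d[:, 0::2] + d[:, 1::2]) / 2.0,   # HL
            (d[:, 0::2] - d[:, 1::2]) / 2.0)   # HH

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.zeros((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def dwt_fuse(img1, img2):
    """Fuse two images: average the approximation band, keep the
    larger-magnitude detail coefficients (a common DWT fusion rule)."""
    c1, c2 = haar_dwt2(img1.astype(float)), haar_dwt2(img2.astype(float))
    fused = [(c1[0] + c2[0]) / 2.0]
    for b1, b2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(b1) >= np.abs(b2), b1, b2))
    return haar_idwt2(*fused)
```

Fusing an image with itself reproduces it, a quick sanity check on the transform pair.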
Analysis of the visualizing changes in radar time series using the REACTIV m...IJECEIAES
A method for visualizing a temporal stack of synthetic aperture radar (SAR) images is presented in this work. The method, called REACTIV, highlights in colour the zones that have undergone change over the observed period. It has been tested extensively on the Google Earth Engine (GEE) platform; it relies on the hue-saturation-value (HSV) visualization space and supports estimation only in the time domain, not in the spatial domain. The temporal coefficient of variation is encoded in the saturation channel, and several of its statistical properties are described. The limitations are studied, and some applications are implemented in this study.
Quality Assessment of Pixel-Level Image Fusion Using Fuzzy Logic ijsc
Image fusion aims to reduce uncertainty and minimize redundancy in the output while maximizing the relevant information from two or more images of a scene, combining them into a single composite image that is more informative and better suited to visual perception or to processing tasks such as medical imaging, remote sensing, concealed-weapon detection, weather forecasting, and biometrics. Image fusion combines registered images to produce a high-quality fused image with both spatial and spectral information; a fused image with more information improves the performance of the image-analysis algorithms used in different applications. In this paper, we propose a fuzzy-logic method to fuse images from different sensors in order to enhance quality, and we compare it with two other methods, wavelet-transform-based image fusion and weighted-average discrete-wavelet-transform-based image fusion using a genetic algorithm (GA), using the quality-evaluation parameters image quality index (IQI), mutual information measure (MIM), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), fusion factor (FF), fusion symmetry (FS), fusion index (FI), and entropy. The results show that the proposed fuzzy-based image-fusion approach improves the quality of the fused image compared with the earlier reported methods.
Image fusion is an image-enhancement approach that increases visual perception by combining two or more images into a single image, each obtained with a different object in focus. The process is now broadly used in various image-processing applications such as medical imaging (MRI, CT [18], and PET), remote sensing, satellite imaging, and the design of intelligent robots. In this paper we review the work done by various researchers to obtain a high-quality improved image by combining important and desirable features from two or more images into a single image. Different image-fusion rules, such as the maximum-selection scheme, the weighted-average scheme, and the window-based verification scheme, are discussed, as are image-fusion techniques using the DWT. Distinct blurred images are fused using the DWT, the SWT, and local correlation, and their results are compared. The role of fusion in image enhancement is also considered, and the results of the different fusion techniques are compared in figures and tables.
Slantlet transform used for faults diagnosis in robot armIJEECSIAES
Robot-arm systems, which combine electrical and mechanical subsystems, are among the most common targets in the field of fault detection and diagnosis. A fault detection and diagnosis study is presented for two robot arms. Disturbances due to faults at the robot's joints cause oscillations at the tip of the arm, and the acceleration in multiple directions is analysed to extract fault features. Simulations for planar and space robots are presented. Two feature (fault) detection methods are used in this paper: the first is the discrete wavelet transform, which has been applied in many previous works; the second is the Slantlet transform, an improved form of the discrete wavelet transform. A multi-layer perceptron artificial neural network is used for fault localization and classification. According to the obtained results, the Slantlet transform with the multi-layer perceptron achieves better performance (4.7088e-05), lower computation time (71.017308 sec), and higher accuracy (100%) than the discrete wavelet transform with the same network.
Elliptical curve cryptography image encryption scheme with aid of optimizatio...IJEECSIAES
Image encryption enables users to transmit digital photographs safely over a wireless medium while maintaining enhanced anonymity and validity. Numerous studies are being conducted to strengthen image-encryption systems. Elliptic curve cryptography (ECC) is an effective tool for safely transferring images and recovering them at the receiver end in asymmetric cryptosystems. Its key generation produces a public and private key pair used to encrypt and decrypt an image: the sender encrypts the image with the public key before transmitting it, and the receiver decrypts it with the private key. This paper proposes an ECC-based image-encryption scheme using an enhancement strategy based on the gravitational search algorithm (GSA). The private-key generation step of the ECC system uses a GSA-based optimization process to boost the efficiency of image encryption. The image's peak signal-to-noise ratio (PSNR) serves as the fitness attribute in the optimization phase, demonstrating the efficacy of the proposed approach. Compared with the plain ECC method, the proposed encryption scheme is found to offer better optimal PSNR values.
Design secure multi-level communication system based on duffing chaotic map a...IJEECSIAES
Cryptography and steganography are among the most important sciences used to keep confidential data from potential spies and hackers; they can be used separately or together. Encryption rests on the basic principle of converting valuable information into a form that unauthorized persons cannot understand, while steganography is the science of embedding confidential data inside a cover in a way that cannot be recognized or seen by the human eye. This paper presents a high-resolution chaotic approach applied to images that hide information. A more secure and reliable system is designed to embed confidential data transmitted through transmission channels, by using encryption and steganography together. The proposed method achieves a very high level of hiding based on non-uniform (chaotic) systems, generating a random index vector (RIV) for the hidden data within the least-significant-bit (LSB) image pixels; this prevents degradation of image quality. Simulation results show a peak signal-to-noise ratio (PSNR) of up to 74.87 dB and a mean square error (MSE) as low as 0.0828, which sufficiently indicates the effectiveness of the proposed algorithm.
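The RIV-over-LSB idea can be sketched as below; note that the chaotic index generator of the paper is replaced here by a seeded NumPy permutation, so this is only an assumption-laden illustration of the embedding step, not the proposed algorithm:

```python
import numpy as np

def embed_lsb(pixels, bits, seed):
    """Hide a bit sequence in the LSBs of pixel positions chosen by a
    seed-driven random index vector (RIV)."""
    out = pixels.copy()
    idx = np.random.default_rng(seed).permutation(pixels.size)[:len(bits)]
    flat = out.ravel()                       # view into `out`
    flat[idx] = (flat[idx] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return out

def extract_lsb(pixels, n_bits, seed):
    """Recover the bits; the same seed regenerates the same RIV."""
    idx = np.random.default_rng(seed).permutation(pixels.size)[:n_bits]
    return (pixels.ravel()[idx] & 1).tolist()
```

Only least-significant bits change, so no pixel moves by more than one grey level, which is what keeps the PSNR of the stego image high.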
A new function of stereo matching algorithm based on hybrid convolutional neu...IJEECSIAES
This paper proposes a new hybrid method between the learning-based and handcrafted methods for a stereo matching algorithm. The main purpose of the stereo matching algorithm is to produce a disparity map. This map is essential for many applications, including three-dimensional (3D) reconstruction. The raw disparity map computed by a convolutional neural network (CNN) is still prone to errors in the low texture region. The algorithm is set to improve the matching cost computation stage with hybrid CNN-based combined with truncated directional intensity computation. The difference in truncated directional intensity value is employed to decrease radiometric errors. The proposed method’s raw matching cost went through the cost aggregation step using the bilateral filter (BF) to improve accuracy. The winner-take-all (WTA) optimization uses the aggregated cost volume to produce an initial disparity map. Finally, a series of refinement processes enhance the initial disparity map for a more accurate final disparity map. This paper verified the performance of the algorithm using the Middlebury online stereo benchmarking system. The proposed algorithm achieves the objective of generating a more accurate and smooth disparity map with different depths at low texture regions through better matching cost quality.
An internet of things-based automatic brain tumor detection systemIJEECSIAES
Due to advances in information and communication technologies, use of the internet of things (IoT) has driven an evolutionary process in the development of the modern health-care environment. In recent human health-care analysis, the number of brain-tumor patients has increased severely, placing brain tumors in the 10th position among the leading causes of death. Previous state-of-the-art techniques based on magnetic resonance imaging (MRI) face challenges in brain-tumor detection, as they require accurate image segmentation, and the wide variety of algorithms developed earlier to classify MRI images is computationally complex and expensive. In this paper, a cost-effective stochastic method for the automatic detection of brain tumors using the IoT is proposed. The system uses the physical activities of the brain to detect tumors: to track daily brain activity, a portable wrist band (Mi Band 2) and temperature and blood-pressure monitoring sensors embedded with an Arduino Uno are used, and the system achieves an accuracy of 99.3%. Experimental results show the effectiveness of the designed method in detecting brain tumors automatically, with better accuracy than previous approaches.
A YOLO and convolutional neural network for the detection and classification ...IJEECSIAES
Developing deep-learning systems for diagnosing chronic diseases is a challenge. Furthermore, localizing and identifying objects such as white blood cells (WBCs) in leukemia images without preprocessing or traditional hand segmentation of cells is difficult because the nucleus is irregular and distorted. This paper proposes a computer-aided detection system that depends entirely on deep learning, with three models (CAD3), to detect and classify three types of WBC, which is fundamental to diagnosing leukemia. The system uses a modified you-only-look-once (YOLO v2) algorithm and a convolutional neural network (CNN). The proposed system is trained and evaluated on a dataset created and prepared specifically for the addressed problem, without any traditional segmentation or preprocessing of the microscopic images. The study shows that dividing the problem into sub-problems achieves better performance and accuracy. The results show that CAD3 achieves an average precision (AP) of up to 96% in leukocyte detection and 94.3% accuracy in leukocyte classification; moreover, CAD3 produces a report containing complete information on the WBCs. Finally, CAD3 proves its efficiency on other datasets, such as the acute lymphoblastic leukemia image database (ALL-IDB1) and the blood cell count dataset (BCCD).
Development of smart machine for sorting of deceased onionsIJEECSIAES
Today, we are looking to raise farmers' incomes through various means and measures. Implementing new crop patterns, incorporating technology, and promoting the establishment of numerous agro-processing industries will play a major role in the agriculture sector; labour shortages are also one of the main concerns in many agricultural activities. In this paper we propose a technological advance in the onion-sorting process, applying image processing and a sensory mechanism to identify sprouted and rotten onions respectively. This yields quick, accurate, and prompt supply of goods to the market, independent of scarce but costly manpower. The efficiency of the prototype in identifying sprouted onions with the camera is observed to be up to 87%, and the response of the gas-sensing system in detecting rotten onions under the prescribed chamber dimensions is analysed, with encouraging results.
Efficient resampling features and convolution neural network model for image ...IJEECSIAES
The extended use of picture-enhancement and manipulation tools has made it easy to manipulate multimedia data, including digital images. Such manipulations disturb the truthfulness and lawfulness of images, resulting in misapprehension, and might disturb social security. Image-forensic approaches are employed to detect whether an image has been manipulated by attacks such as splicing and copy-move. This paper provides a competent tampering-detection technique using resampling features and a convolutional neural network (CNN). In this model, range spatial filtering (RSF)-CNN, the image is divided into consistent patches during preprocessing; within every patch, resampling features are extracted using an affine transformation and the Laplacian operator, and the extracted features are accumulated into descriptors by the CNN. A wide-ranging analysis assesses the tampering-detection and tampered-region segmentation accuracies of the proposed RSF-CNN procedure under various falsifications and post-processing attacks, including joint photographic experts group (JPEG) compression, scaling, rotation, noise addition, and multiple manipulations. The achieved results show that RSF-CNN-based tampering detection is appreciably more accurate than existing tampering-detection methodologies.
State and fault estimation based on fuzzy observer for a class of Takagi-Suge...IJEECSIAES
Singular nonlinear systems have received wide attention in recent years, and can be found in various applications of engineering practice. On the basis of the Takagi-Sugeno (T-S) formalism, which represents a powerful tool allowing the study and the treatment of nonlinear systems, many control and diagnostic problems have been treated in the literature. In this work, we aim to present a new approach making it possible to estimate simultaneously both non-measurable states and unknown faults in the actuators and sensors for a class of continuous-time Takagi-Sugeno singular model (CTSSM). Firstly, the considered class of CTSSM is represented in the case of premise variables which are non-measurable, and is subjected to actuator and sensor faults. Secondly, the suggested observer is synthesized based on the decomposition approach. Next, the observer’s gain matrices are determined using the Lyapunov theory and the constraints are defined as linear matrix inequalities (LMIs). Finally, a numerical simulation on an application example is given to demonstrate the usefulness and the good performance of the proposed dynamic system.
Hunting strategy for multi-robot based on wolf swarm algorithm and artificial...IJEECSIAES
Cooperation and coordination in multi-robot systems is a popular topic in robotics and artificial intelligence, thanks to its important role in solving problems that are better handled by several robots than by a single one. Cooperative hunting is an important problem in areas such as the military and industry, requiring cooperation between robots to accomplish the hunt effectively. This paper proposes a cooperative hunting strategy for a multi-robot system based on the wolf swarm algorithm (WSA) and the artificial potential field (APF), in which several robots hunt a dynamic target whose behaviour is unpredictable. The formation of robots within the system contains three roles, the leader, the follower, and the antagonist, each characterized by a different cognitive behaviour. The robots reach the hunting point accurately and rapidly while avoiding static and dynamic obstacles through the artificial potential field algorithm. Simulation results demonstrate the validity and effectiveness of the proposed strategy.
Agricultural harvesting using integrated robot systemIJEECSIAES
In today's competitive world, robot designs are developed to simplify and improve quality wherever necessary. The rise in technology and modernization has shifted people from the unskilled sector to the skilled sector. The agricultural sector's solution for harvesting fruits and vegetables is manual labour, plus a few agro-bots that are expensive and have various limitations. Although existing robots may achieve harvesting, such designs may not be affordable for small and medium-scale producers. The integrated robot system is designed to solve this problem and, compared with existing manual methods, appears to be the most cost-effective, efficient, and viable solution. The robot uses deep learning for image detection, and the object is acquired using robotic manipulators; it uses a Cartesian and articulated configuration to perform the picking action. Finally, the robot was operated to harvest carrots and cantaloupes, and the data from the harvested crops are used to assess the robot's accuracy.
Characterization of silicon tunnel field effect transistor based on charge pl...IJEECSIAES
The aim of this paper is an analytical model and realization of the characteristics of a tunnel field-effect transistor (TFET) based on charge plasma (CP). One of the main applications of a TFET operating on the CP technique is the biosensor: the CP-TFET can be used as an effective device to detect uncharged molecules in a bio-sample solution. Charge plasma is one of several recently introduced techniques for inducing charge carriers inside devices. In this paper we use a high work function at the source (ϕ=5.93 eV) to induce hole charges and a lower work function at the drain (ϕ=3.90 eV) to induce electron charges. Several electrical characteristics are considered to study the performance of the device, such as drain current (ID) versus gate voltage (Vgs), the ION/IOFF ratio, threshold voltage (VT), transconductance (gm), and subthreshold swing (SS). The significance of this paper lies in the improved performance of the device: results show that a high dielectric constant (K=12), an oxide thickness of Tox=1 nm, a channel length of Lch=42 nm, and a higher gate work function (ϕ=4.5 eV) give the best charge-plasma silicon TFET characteristics.
A new smart approach of an efficient energy consumption management by using a...IJEECSIAES
Many consumers of electric power exceed the consumption limits permitted by electrical power distribution stations. We therefore propose a validation approach that works intelligently, applying machine learning (ML) to teach electrical consumers how to consume properly without wasting energy. The validation approach belongs to a large family of intelligent processes related to energy consumption, called efficient energy consumption management (EECM) approaches, and it is connected through internet of things (IoT) technology to the Google Firebase Cloud, where a utility centre checks whether efficient energy consumption is satisfied. It divides the measured actual-power (Ap) data of the electrical model into two portions: the training portion is selected for different maximum actual powers, and the validation portion is determined from the minimum output power consumption and then compared with the actual required input power. Simulation results show that the energy-expenditure problem can be solved with good accuracy by reducing the maximum Ap rate over a 24-hour period for a single house, and the electricity bill is reduced as well.
Parameter selection in data-driven fault detection and diagnosis of the air c...IJEECSIAES
Data-driven fault detection and diagnosis (FDD) has proven simple yet powerful for identifying soft and abrupt faults in air-conditioning systems, leading to energy savings. However, the challenge is to obtain reliable operation data from an actual building; therefore, a lab-scale centralized chilled-water air-conditioning system was developed in this paper, with all necessary sensors installed to generate reliable operation data for the data-driven FDD. Nevertheless, in a practical system the number of sensors required would be extensive, as it depends on the number of rooms in the building; hence, the impact of the parameters in the dataset was also investigated to identify the parameters critical for fault classification. The analysis identified four critical parameters for data-driven FDD: the room temperatures (TTCx), the supplied chilled-water temperature (TCHWS), the supplied chilled-water flow rate (VCHWS), and the supplied cooled-water temperature (TCWS). Results showed that the data-driven FDD correctly diagnosed all six conditions with the proposed parameters at more than 92.3% accuracy, only 0.6-3.4% below the accuracy on the original dataset. The proposed parameters can therefore reduce the number of sensors used in practical buildings, reducing installation costs without compromising FDD accuracy.
Electrical load forecasting through long short term memoryIJEECSIAES
For a power supplier, meeting demand-supply equilibrium is of utmost importance. Electrical energy must be generated according to demand, as a
large amount of electrical energy cannot be stored. For the proper
functioning of a power supply system, an adequate model for predicting load is a necessity. In the present world, in almost every industry, whether it be healthcare, agriculture, and consulting, growing digitization and automation is a prominent feature. As a result, large sets of data related to these industries are being generated, which when subjected to rigorous analysis,
yield out-of-the-box methods to optimize the business and services offered. This paper aims to ascertain the viability of long short term memory (LSTM)
neural networks, a recurrent neural network capable of handling both longterm and short-term dependencies of data sets, for predicting load that is to
be met by a Dispatch Center located in a major city. The result shows appreciable accuracy in forecasting future demand.
A simple faulted phase-based fault distance estimation algorithm for a loop d...IJEECSIAES
This paper presents a single ended faulted phase-based traveling wave fault localization algorithm for loop distribution grids which is that the sensor can
get many reflected signals from the fault point to face the complexity of localization. This localization algorithm uses a band pass filter to remove noise from the corrupted signal. The arriving times of the faulted phasebased filtered signals can be obtained by using phase-modal and discrete
wavelet transformations. The estimated fault distance can be calculated using the traveling wave method. The proposed algorithm presents detail
level analysis using three detail levels coefficients. The proposed algorithm is tested with MATLAB simulation single line to ground fault in a 10 kV grounded loop distribution system. The simulation result shows that the
faulted phase time delay can give better accuracy than using conventional time delays. The proposed algorithm can give fault distance estimation accuracy up to 99.7% with 30 dB contaminated signal-to-noise ratio (SNR)
for the nearest lines from the measured terminal.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Student information management system project report ii.pdfKamal Acharya
Our project explains about the student management. This project mainly explains the various actions related to student details. This project shows some ease in adding, editing and deleting the student details. It also provides a less time consuming process for viewing, adding, editing and deleting the marks of the students.
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdffxintegritypublishin
Advancements in technology unveil a myriad of electrical and electronic breakthroughs geared towards efficiently harnessing limited resources to meet human energy demands. The optimization of hybrid solar PV panels and pumped hydro energy supply systems plays a pivotal role in utilizing natural resources effectively. This initiative not only benefits humanity but also fosters environmental sustainability. The study investigated the design optimization of these hybrid systems, focusing on understanding solar radiation patterns, identifying geographical influences on solar radiation, formulating a mathematical model for system optimization, and determining the optimal configuration of PV panels and pumped hydro storage. Through a comparative analysis approach and eight weeks of data collection, the study addressed key research questions related to solar radiation patterns and optimal system design. The findings highlighted regions with heightened solar radiation levels, showcasing substantial potential for power generation and emphasizing the system's efficiency. Optimizing system design significantly boosted power generation, promoted renewable energy utilization, and enhanced energy storage capacity. The study underscored the benefits of optimizing hybrid solar PV panels and pumped hydro energy supply systems for sustainable energy usage. Optimizing the design of solar PV panels and pumped hydro energy supply systems as examined across diverse climatic conditions in a developing country, not only enhances power generation but also improves the integration of renewable energy sources and boosts energy storage capacities, particularly beneficial for less economically prosperous regions. Additionally, the study provides valuable insights for advancing energy research in economically viable areas. 
Recommendations included conducting site-specific assessments, utilizing advanced modeling tools, implementing regular maintenance protocols, and enhancing communication among system components.
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
Indonesian Journal of Electrical Engineering and Computer Science
Vol. 25, No. 1, January 2022, pp. 256-264
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v25.i1.pp256-264
Journal homepage: http://ijeecs.iaescore.com
A comparative study for the assessment of Ikonos satellite
image-fusion techniques
Javier Medina1,2, Nelson Vera2, Erika Upegui1,2
1 Department of Cadastral Engineering and Geodesy, School of Engineering, Universidad Distrital Francisco José de Caldas (UDFJC), Bogotá, Colombia
2 Department of Communication and Information Science, MSc Program, Universidad Distrital Francisco José de Caldas (UDFJC), Bogotá, Colombia
ABSTRACT
Article history: Received Jun 30, 2021; Revised Nov 17, 2021; Accepted Dec 1, 2021
Image fusion provides users with detailed information about the urban and rural environment, which is useful for applications such as urban planning and management when higher spatial resolution images are not available. There are different image fusion methods. This paper implements, evaluates, and compares six satellite image-fusion methods, namely the wavelet 2D-M transform, Gram-Schmidt, high-frequency modulation, the high pass filter (HPF) transform, simple mean value, and PCA. An Ikonos image (panchromatic-PAN and multispectral-MULTI) showing the northwest of Bogotá (Colombia) is used to generate six fused images: MULTIWavelet 2D-M, MULTIG-S, MULTIHFM, MULTIHPF, MULTISMV, and MULTIPCA. In order to assess the efficiency of the six image-fusion methods, the resulting images were evaluated in terms of both spatial quality and spectral quality. To this end, four metrics were applied, namely the correlation index, erreur relative globale adimensionnelle de synthèse (ERGAS), relative average spectral error (RASE), and the Q index. The best results were obtained for the MULTISMV image, which exhibited spectral correlation higher than 0.85, a Q index of 0.84, and the highest scores in spectral assessment according to ERGAS and RASE, 4.36% and 17.39% respectively.
Keywords:
Fusion
Gram schmidt
High frequency modulation
PCA
Satellite images
Simple mean value
Wavelet 2D-M
This is an open access article under the CC BY-SA license.
Corresponding Author:
Javier Medina
Department of Cadastral Engineering and Geodesy, School of Engineering
Universidad Distrital Francisco José de Caldas
Carrera 7, No. 40b-53, Bogotá DC, Colombia
Email: rmedina@udistrital.edu.co
1. INTRODUCTION
Image fusion addresses the frequent need for a single satellite image that carries both high-resolution spectral (multi-spectral) and spatial (panchromatic) data. The input to the fusion process is commonly taken from satellite images originated with the same type of sensor, or else from several remote sensors, so that decision makers are provided with the most effective basis available [1]-[12]. Image fusion delivers detailed information about rural and urban environments [13], which is useful for several applications such as territorial planning [14], agriculture [15], environmental monitoring [16], [17], and medicine [18]-[21], among others.
Conventional image fusion processes are based on several methods such as RGB-to-IHS transformation, Brovey techniques, and multiplication, among others. However, these methods are not completely successful since spatial information is enhanced at the cost of degrading spectral information.
Given this limitation, other processes have been explored. Processes involving the two-dimensional wavelet
transform have been employed due to a reduced degradation of the spectral information contained in the original multi-spectral images, also leading to improvement in spatial resolution [22]-[31]. According to wavelet-based studies showing promising results for image fusion [5], [26], [27], [32], [33], the wavelet transform contributes to obtaining better fused images due to the way in which coefficients are computed as part of the transformation process [34], [35]. This process yields the wavelet planes, which keep more spatial and spectral information from the original images than other techniques. Wavelet-based techniques also produce an intensity component that maintains the spatial richness of images, along with tone and saturation components that maintain spectral richness when transforming the image composition from RGB to IHS [36].
The present work focuses on six satellite-image fusion methods, namely the wavelet 2D-M transform using the RGB-to-HSV color model [37], the Gram-Schmidt (G-S) method, high-frequency modulation (HFM), the high pass filter (HPF) transform, simple mean value (SMV), and principal component analysis (PCA). A step-by-step implementation process is proposed for applying each of the six methods. Moreover, a reference framework is established to assess the spectral and spatial quality of the resulting fused images (MULTIWavelet 2D-M, MULTIG-S, MULTIHFM, MULTIHPF, MULTISMV, and MULTIPCA) in terms of specific numerical indexes, namely the correlation coefficient, the relative average spectral error (RASE) index, the erreur relative globale adimensionnelle de synthèse (ERGAS) index, and the Q universal quality index.
2. METHODOLOGY
This section is threefold: a description of the satellite images employed in the study is provided;
followed by a description of the six image-fusion methods, which includes a proposal for implementing the
methods; finally, the metrics applied to assess and compare the (spatial and spectral) quality of the resulting
images are presented.
2.1. Satellite images and the region covered
The region covered by the satellite images corresponds to a western area of Bogotá (Colombia), specifically the area around the city airport, namely "El Dorado" airport. This region is covered by Ikonos sub-images of multi-spectral (MULTI) origin, as shown in Figure 1(a), and panchromatic (PAN) origin, as shown in Figure 1(b). The PAN sub-image has a spatial resolution of one (1) meter; this image was captured on December 13th, 2007 according to the UTM/WGS 84 reference system. The MULTI sub-image includes four-channel information; however, only three channels were involved in the present study (R-red, G-green and B-blue). The image has a spatial resolution of four (4) meters and was captured on the same date as the PAN sub-image, also sharing the same reference system. The two images were cropped to a width of 2048 pixels and a height of 2048 pixels, satisfying dyadic properties [23].
Figure 1. Original images: (a) MULTI sub-image and (b) PAN sub-image; 2048x2048 pixels
2.2. 2D wavelet transform
The wavelet discrete transform is a useful tool in the field of signal processing. This transform is
mainly applied to divide data sets into smaller components associated to different spatial frequencies, which
are represented by a common scale. Algorithms known as Mallat and ‘à trous’ are typical in applying the
discrete wavelet transform to image fusion. Each of these algorithms has its own mathematical properties and
leads to a different type of image decomposition; therefore, different types of fused images can be expected.
Although the ‘à trous’ algorithm appears to be less adequate than the Mallat algorithm when extracting
spatial details in the context of multiresolution analysis (from a theoretical perspective), this algorithm yields
images with significantly higher global quality [38]. For the six methods applied in this study, an RGB (true)
color composition must be rendered from a combination of the MULTI and the PAN sub-images, using the
same pixel size of the latter (1 meter).
Procedure for implementing the 2D-M wavelet transform (a modification of the Haar wavelet transform) [33], [34]:
- Step 1. Transform the RGB image into hue, saturation, value (HSV) and adjust the histograms of the PAN
image and the V component, yielding the new Vap component.
- Step 2. Apply the 2D-M wavelet transform to the V component (second decomposition level), yielding the corresponding approximation and detail coefficients; the A1v approximation coefficients contain the spectral information of the image, while the detail coefficients cV1v, cH1v and cD1v store the spatial information of the image. Decompose A1v a second time, yielding the A2v approximation coefficients that contain the spectral information of the image; meanwhile, cV2v, cH2v and cD2v, along with cV1v, cH1v and cD1v, correspond to the detail coefficients that hold the spatial information of the transformed image.
- Step 3. Apply the 2D-M wavelet transform to the panchromatic image (second decomposition level), yielding the corresponding approximation and detail coefficients; the A1p approximation coefficients contain the spectral information of the image, while the detail coefficients cV1p, cH1p and cD1p store the spatial information. Decompose A1p a second time to obtain the A2p second-level approximation coefficients, which contain the spectral information; meanwhile, cV2p, cH2p and cD2p, along with cV1p, cH1p and cD1p, correspond to the detail coefficients bearing the spatial information of the transformed image.
- Step 4. Generate a new value component (NI), using the A2v coefficients that store the information of the
V-component image together with the second-level detail coefficients of the panchromatic image (cV2p,
cH2p and cD2p) and the detail coefficients from the first-level decomposition (cV1p, cH1p and cD1p).
- Step 5. Apply the inverse 2D-M wavelet transform to NI to obtain the new value component (NV).
- Step 6. Given the new NV, and the original hue and saturation components, generate the new HS-NV.
- Step 7. Conduct the inverse image transformation from HS-NV to RGB. Thus, the new multispectral
image is obtained, which maintains the spectral resolution, leading to a gain in spatial resolution.
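As an illustrative sketch (not the authors' implementation), the decomposition/recomposition idea of steps 2-5 can be shown with a single-level Haar-style transform on a tiny array; the two-level 2D-M transform and the HSV conversion are omitted for brevity, and the array sizes and pixel values below are assumptions.

```python
# Illustrative single-level Haar-style decomposition and fusion on a tiny
# 4x4 array. The paper's method uses a two-level 2D-M transform on the HSV
# value component; level count, array size and values here are assumed.

def _transpose(m):
    return [list(c) for c in zip(*m)]

def _analyze(v):
    """Pairwise averages (approximation) and differences (detail)."""
    a = [(v[i] + v[i + 1]) / 2 for i in range(0, len(v), 2)]
    d = [(v[i] - v[i + 1]) / 2 for i in range(0, len(v), 2)]
    return a, d

def _synthesize(a, d):
    """Exact inverse of _analyze: x = a + d, y = a - d."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def haar2d(img):
    """One decomposition level: approximation A and details (cH, cV, cD)."""
    rows = [_analyze(r) for r in img]
    low, high = [r[0] for r in rows], [r[1] for r in rows]
    def col_split(mat):
        cols = [_analyze(c) for c in _transpose(mat)]
        return (_transpose([c[0] for c in cols]),
                _transpose([c[1] for c in cols]))
    A, cH = col_split(low)
    cV, cD = col_split(high)
    return A, (cH, cV, cD)

def ihaar2d(A, details):
    """Invert haar2d: undo the column step, then the row step."""
    cH, cV, cD = details
    def col_merge(a, d):
        return _transpose([_synthesize(ca, cd)
                           for ca, cd in zip(_transpose(a), _transpose(d))])
    low, high = col_merge(A, cH), col_merge(cV, cD)
    return [_synthesize(l, h) for l, h in zip(low, high)]

# As in steps 2-5: keep the value-component approximation (spectral content)
# and inject the panchromatic detail coefficients (spatial content).
V   = [[10, 10, 20, 20], [10, 10, 20, 20], [30, 30, 40, 40], [30, 30, 40, 40]]
PAN = [[12,  8, 22, 18], [ 9, 11, 19, 21], [33, 27, 43, 37], [29, 31, 39, 41]]
A_v, _ = haar2d(V)
_, d_pan = haar2d(PAN)
NV = ihaar2d(A_v, d_pan)  # new value component for the HSV recomposition
```

The `_synthesize` step makes the transform exactly invertible, so the only information exchanged during fusion is the detail (spatial) content.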
2.3. Gram-schmidt
The Gram-Schmidt image-fusion method is based on a general vector orthogonalization algorithm. The algorithm consists in taking non-orthogonal vectors and applying rotations to make them orthogonal. When applied to image processing, each band (the panchromatic, the red, the green, the blue and the infrared) corresponds to an n-dimensional vector (# dimensions = # pixels).
Implementation procedure:
- Step 1. Simulate the panchromatic band from the low-spatial-resolution spectral bands.
- Step 2. Apply the Gram-Schmidt transform to the simulated panchromatic band and also to the spectral
bands, employing the simulated panchromatic band as the first band.
- Step 3. Swap the high-spatial-resolution panchromatic band with the first Gram-Schmidt band.
- Step 4. Apply the inverse Gram-Schmidt transform to construct the high-resolution spectral bands.
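The orthogonalization at the core of the method can be sketched as follows. The band values are assumed toy data, and the simulated panchromatic band (taken here simply as the per-pixel band mean) is an illustrative choice, not the paper's simulation procedure.

```python
# Sketch of the vector orthogonalization underlying Gram-Schmidt fusion.
# Each band is flattened to one vector; the simulated panchromatic band
# goes first, so later bands are orthogonalized against it (step 2).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Classic Gram-Schmidt: subtract from each vector its projection
    onto every previously orthogonalized vector."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            coef = dot(w, b) / dot(b, b)
            w = [wi - coef * bi for wi, bi in zip(w, b)]
        basis.append(w)
    return basis

# Assumed 3-pixel toy bands; a simulated pan as the band mean (step 1).
red, green, blue = [4.0, 1.0, 0.0], [2.0, 3.0, 1.0], [1.0, 0.0, 5.0]
sim_pan = [(r + g + b) / 3 for r, g, b in zip(red, green, blue)]
gs = gram_schmidt([sim_pan, red, green, blue])
```

In the fusion itself (steps 3-4), the first transformed band is then replaced by the real high-resolution panchromatic band before inverting the transform.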
2.4. High-frequency modulation
This method is a variation of the so-called spatial-domain fusion methods [39], which focus on
transferring the high frequencies of a high-resolution image onto a low-resolution image. The high-frequency
representation of an image contains the information related to the details and such information can be
obtained by means of filtering operators or convolution. Basically, these methods consist in adding the high-
frequency components of the panchromatic image to each of the bands of a multi-spectral image:
$$R_i = (MS)_i + PA(Pan) \tag{1}$$

where $R_i$ represents a generic band in the fusion and $PA(Pan)$ is the result of applying a high-pass filter to the panchromatic image. The operation in (1) is similar to that of applying an enhancement filter to the high frequency components of an image (high-boost), although with a different image providing the high-pass information, namely the high-resolution panchromatic image in this case. Thus, the efficiency of this filter is based on the existence of radiometric correlation, which arises from the high-frequency components of the two images.
Implementation procedure:
- Step 1. Apply the high-pass filter and the low-pass filter to the panchromatic image.
- Step 2. In order to generate each of the fused images, add each band to the product obtained from dividing the band by the low-pass filtered image and multiplying by the high-pass filtered image.
- Step 3. Concatenate the fused images to obtain the new RGB image.
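Steps 1-2 can be sketched on a one-dimensional row of assumed pixel values; the 3-tap mean low-pass filter below is a placeholder for whatever kernel an implementation would actually use.

```python
# Toy sketch of high-frequency modulation on a 1-D pixel row (assumed data).
# LP is a 3-tap mean filter; HP is the residual pan - LP(pan). Each fused
# band follows step 2: band + (band / LP(pan)) * HP(pan).

def low_pass(v):
    """3-tap moving average with edge replication."""
    padded = [v[0]] + list(v) + [v[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(v))]

def hfm_fuse(band, pan):
    lp = low_pass(pan)
    hp = [p - l for p, l in zip(pan, lp)]          # high-frequency detail
    return [b + (b / l) * h for b, l, h in zip(band, lp, hp)]

pan  = [100.0, 100.0, 160.0, 100.0, 100.0]   # sharp edge in the pan image
band = [50.0, 50.0, 50.0, 50.0, 50.0]        # flat low-resolution band
fused = hfm_fuse(band, pan)
```

Algebraically, step 2 is equivalent to `band * pan / LP(pan)`, which is why the detail injected into each band is modulated by that band's own radiometry.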
2.5. High-pass filter transform
This method consists in adding the spatial information from the panchromatic band to the multi-
spectral information that has lower spatial resolution [40]. To this end, a high-pass filter is applied along with
a map-algebra operation [41].
Implementation procedure:
- Step 1. Obtain a constant R by dividing the multi-spectral image cell size by the panchromatic image cell size.
- Step 2. Based on this constant, a high-pass filter is generated and applied to the panchromatic image, yielding the standard deviations of both the multispectral image and the panchromatic image. A coefficient W is obtained, resulting from dividing the sum of the standard deviations by the product of the panchromatic standard deviation and the central value (M) as a function of the values of R [40].
- Step 3. Obtain the fused images; each image results from the addition of its corresponding band and the filtered image times W.
- Step 4. Concatenate the fused images to obtain the new RGB image.
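A minimal sketch of steps 2-3 follows. The weight `w` is a placeholder, not a value computed from the standard deviations and the central value M as the paper prescribes, and the pixel values are assumed.

```python
# Sketch of the HPF addition of step 3: fused band = band + W * HP(pan).
# The 3-tap mean kernel and the weight w are illustrative placeholders.

def high_pass(v):
    """Pan minus a 3-tap mean: keeps only local detail."""
    padded = [v[0]] + list(v) + [v[-1]]
    lp = [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
          for i in range(len(v))]
    return [p - l for p, l in zip(v, lp)]

def hpf_fuse(band, pan, w):
    hp = high_pass(pan)
    return [b + w * h for b, h in zip(band, hp)]

# For this Ikonos pair, R = MULTI cell size / PAN cell size = 4 m / 1 m = 4;
# w below is only a placeholder, not a value derived from the images.
pan  = [100.0, 100.0, 160.0, 100.0, 100.0]
band = [50.0] * 5
fused = hpf_fuse(band, pan, w=0.5)
```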
2.6. Simple mean value
This method applies a simple average equation to each of the combinations of output bands as:
- High-resolution Red Band = 0.5 * (original Red Band + Panchromatic Band)
- High-resolution Blue Band = 0.5 * (original Blue Band + Panchromatic Band)
- High-resolution Green Band = 0.5 * (original Green Band + Panchromatic Band)
Implementation procedure:
- Step 1. Generate fused images from the simple mean value average, which consists of the sum of a single band and the panchromatic band (the result divided by 2), thus obtaining the fused images.
- Step 2. Concatenate the fused bands so as to obtain the new fused RGB image.
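The three per-band equations above reduce to a single line of code per band; the pixel values below are assumed toy data.

```python
# The simple mean value rule, exactly as listed above (per band, per pixel).
def smv_fuse(band, pan):
    return [0.5 * (b + p) for b, p in zip(band, pan)]

red = [40.0, 60.0]      # assumed original band values
pan = [100.0, 120.0]    # assumed panchromatic values
fused_red = smv_fuse(red, pan)
```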
2.7. PCA
The principal component analysis, also coined PCA transformation, Karhunen-Loève transform, or
Hotelling transform [42], is intended to create new images (Principal Components-PC) from the original
images; such Principal Components (PCs) are non-correlated and lead to a reorganization of the original
information. By having the PCs, redundant information is avoided. Thus, the first PC is defined as the
direction of maximum data variance. The essence of this analysis is the transformation of a set of correlated
variables into a new set of non-correlated variables [43].
Implementation procedure:
- Step 1. Obtain as many principal components as the number of bands in the multi-spectral image. Thus,
PC1 holds spatial information, and the remaining PCs contain the spectral information.
- Step 2. Equate the histogram from the panchromatic image to that of the first principal component (PC1),
that is, to the component that holds information related to the set of bands.
- Step 3. Substitute the first principal component (PC1) with the modified panchromatic image (once the
histogram has been adjusted).
- Step 4. Apply the inverse transform to the resulting components so as to obtain the fused image, namely
the new, fused RGB image.
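A toy two-band version of steps 1-4 can be sketched as follows. Real images need as many principal components as bands; the closed-form 2x2 eigendecomposition and the mean/spread matching used for step 2 are simplifications, and all pixel values are assumed.

```python
import math

# Toy two-band PCA substitution (steps 1-4) with a closed-form 2x2
# eigendecomposition; real implementations use as many PCs as bands.

def pca2(x, y):
    """Principal axes and PC scores of two value lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cx = [v - mx for v in x]
    cy = [v - my for v in y]
    a = sum(v * v for v in cx) / n                 # var(x)
    c = sum(v * v for v in cy) / n                 # var(y)
    b = sum(u * v for u, v in zip(cx, cy)) / n     # cov(x, y)
    lam1 = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    # Eigenvector (e1, e2) for the dominant eigenvalue, normalized.
    e1, e2 = (b, lam1 - a) if b else (1.0, 0.0)
    norm = math.hypot(e1, e2)
    e1, e2 = e1 / norm, e2 / norm
    f1, f2 = -e2, e1                               # orthogonal second axis
    pc1 = [e1 * u + e2 * v for u, v in zip(cx, cy)]
    pc2 = [f1 * u + f2 * v for u, v in zip(cx, cy)]
    return (mx, my), (e1, e2), (f1, f2), pc1, pc2

def match(pan, ref):
    """Crude histogram matching for step 2: equate mean and spread."""
    n = len(pan)
    mp, mr = sum(pan) / n, sum(ref) / n
    sp = math.sqrt(sum((v - mp) ** 2 for v in pan) / n) or 1.0
    sr = math.sqrt(sum((v - mr) ** 2 for v in ref) / n)
    return [(v - mp) * sr / sp + mr for v in pan]

x = [10.0, 20.0, 30.0, 40.0]      # band 1 (assumed pixel values)
y = [12.0, 18.0, 33.0, 37.0]      # band 2
pan = [11.0, 24.0, 28.0, 45.0]    # higher-resolution detail
(mx, my), e, f, pc1, pc2 = pca2(x, y)
pc1_new = match(pan, pc1)         # step 3: substitute PC1 with matched pan
# Step 4: invert the transform to obtain the fused bands.
fused_x = [mx + e[0] * p1 + f[0] * p2 for p1, p2 in zip(pc1_new, pc2)]
fused_y = [my + e[1] * p1 + f[1] * p2 for p1, p2 in zip(pc1_new, pc2)]
```

Because PC1 carries the shared (spatial) variation and the remaining PCs the band-specific (spectral) variation, substituting only PC1 injects pan detail while leaving the spectral structure in place.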
2.8. Assessment metrics
For the evaluation and analysis of the resulting fused images obtained by means of MULTIWavelet 2D-M, MULTIG-S, MULTIHFM, MULTIHPF, MULTISMV, and MULTIPCA, the following indexes were computed: the correlation coefficient (corr), the relative average spectral error (RASE) index, the ERGAS index (erreur relative globale adimensionnelle de synthèse) and the universal quality index Q. This set of indexes is described below.
i) Correlation coefficient (corr). The correlation between pairs of bands $corr(A/B)$ from a fused image and the bands of an original image can be computed as (2):

$$corr(A/B) = \frac{\sum_{j=1}^{npix}(A_j - \bar{A})(B_j - \bar{B})}{\sqrt{\sum_{j=1}^{npix}(A_j - \bar{A})^2 \sum_{j=1}^{npix}(B_j - \bar{B})^2}} \tag{2}$$
Where $\bar{A}$ and $\bar{B}$ are the mean values of the two images. $corr(A/B)$ can take values between -1 and +1 and it has no units, as shown in Table 1, that is, the coefficient does not depend on the units of the original variables. The ideal value of the correlation coefficient, for both spectral and spatial assessment, is 1 [44].
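Equation (2) translates directly to code; the band values below are assumed toy data.

```python
import math

# Direct transcription of (2) for two flattened bands.
def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

band_fused    = [10.0, 20.0, 30.0, 40.0]   # assumed fused-band values
band_original = [12.0, 19.0, 31.0, 38.0]   # assumed original-band values
r = corr(band_fused, band_original)        # close to the ideal value of 1
```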
ii) ERGAS index. The evaluation of the quality of fused images has been conducted by observing the value of the ERGAS indexes obtained for the spectral and spatial features, as shown in Table 2. The spectral ERGAS index is defined in (3) [45]:

$$ERGAS_{Spectral} = 100\,\frac{h}{l}\sqrt{\frac{1}{NBands}\sum_{i=1}^{NBands}\left[\frac{(RMSE_{Spectral}(Band_i))^2}{(MULTI_i)^2}\right]} \tag{3}$$
Where $h$ and $l$ represent the spatial resolution of the PAN and MULTI images; $NBands$ corresponds to the number of bands of the fused image; $MULTI_i$ is the radiance value associated with the i-th band of the MULTI image [46], and RMSE is defined in (4):

$$RMSE_{Spectral}(Band_i) = \frac{1}{NP}\sqrt{\sum_{i=1}^{NP}(MULTI_i - FUS_i)^2} \tag{4}$$
In (4), $NP$ denotes the number of pixels of the $FUS_i(x, y)$ image. Moreover, in [47], an index coined $ERGAS_{Spatial}$ is proposed. This index follows the same principles of the ERGAS index. $ERGAS_{Spatial}$ is aimed at evaluating the spatial quality of fused images and is defined in (5):

$$ERGAS_{Spatial} = 100\,\frac{h}{l}\sqrt{\frac{1}{NBands}\sum_{i=1}^{NBands}\left[\frac{(RMSE_{Spatial}(Band_i))^2}{(PAN_i)^2}\right]} \tag{5}$$
Where $RMSE_{Spatial}$ is a term defined in (6):

$$RMSE_{Spatial}(Band_i) = \frac{1}{NP}\sqrt{\sum_{i=1}^{NP}(PAN_i - FUS_i)^2} \tag{6}$$
After evaluating image quality using these indexes (i.e., spatial and spectral ERGAS), values close to zero indicate excellent quality (see Table 2).
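A sketch of the spectral ERGAS of (3) on assumed toy bands follows, with two reading assumptions: RMSE is computed as the conventional square root of the mean squared error rather than the 1/NP-outside-the-root form of (4), and $MULTI_i$ is taken as the mean radiance of band i.

```python
import math

# Sketch of spectral ERGAS (3). Assumptions: conventional RMSE, and the
# per-band mean radiance as MULTI_i. For this Ikonos pair h/l = 1/4
# (1 m PAN, 4 m MULTI); values near zero indicate better quality.

def rmse(ref, fus):
    return math.sqrt(sum((r - f) ** 2 for r, f in zip(ref, fus)) / len(ref))

def ergas(ref_bands, fus_bands, h, l):
    terms = [(rmse(r, f) ** 2) / (sum(r) / len(r)) ** 2
             for r, f in zip(ref_bands, fus_bands)]
    return 100 * (h / l) * math.sqrt(sum(terms) / len(terms))

multi = [[100.0, 110.0, 120.0], [80.0, 90.0, 100.0]]   # original bands
fused = [[101.0, 108.0, 121.0], [79.0, 92.0, 100.0]]   # fused bands
score = ergas(multi, fused, h=1, l=4)
```

The spatial variant of (5) is the same computation with the PAN image as the reference; the RASE of (7) likewise follows the same per-band RMSE-over-mean structure.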
iii) RASE index. The RASE index is expressed as a percentage (7), as indicated by the results in Table 2:

$$RASE = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{i=1}^{n}\left[\frac{(RMSE(B_i))^2}{M_i^2}\right]} \tag{7}$$
Where $h$ is the resolution associated with the high-spatial-resolution (PAN) image, and $l$ is the resolution associated with the low-spatial-resolution (MULTI) image [46]. The best results are obtained when the percentage indicated by RASE approaches zero.
iv) Universal quality Q index. This quality index is intended to identify image distortion as a combination of three factors, namely correlation loss, luminance distortion and contrast distortion [48]. This index is computed according to (8) (see results in Table 2):

$$Q = \frac{\sigma_{xy}}{\sigma_x\sigma_y}\cdot\frac{2\bar{x}\bar{y}}{(\bar{x})^2+(\bar{y})^2}\cdot\frac{2\sigma_x\sigma_y}{\sigma_x^2+\sigma_y^2} \tag{8}$$
A higher quality of the fused image will be obtained as the value of Q approaches 1.
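Equation (8) can be transcribed directly for two flattened images; the pixel values below are assumed toy data.

```python
import math

# Universal quality index Q of (8): correlation term, luminance term,
# and contrast term; Q approaches 1 as the fused image approaches the
# reference image.
def q_index(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    cxy = sum((u - mx) * (v - my) for u, v in zip(x, y)) / n
    sx, sy = math.sqrt(vx), math.sqrt(vy)
    return ((cxy / (sx * sy)) *                 # correlation loss
            (2 * mx * my / (mx ** 2 + my ** 2)) *  # luminance distortion
            (2 * sx * sy / (vx + vy)))             # contrast distortion

ref = [10.0, 20.0, 30.0, 40.0]   # assumed reference pixels
fus = [11.0, 19.0, 32.0, 38.0]   # assumed fused pixels
q = q_index(ref, fus)
```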
3. RESULTS AND DISCUSSION
For qualitative assessment purposes, a visual inspection of the images is provided in Figure 2. The figure shows segments of the MULTI image, as shown in Figure 2(a), and the PAN image, as shown in Figure 2(b), as well as segments of the fused images obtained after applying the image fusion methods, Figure 2(c)-(h). In addition to the visual assessment provided in Figure 2, quantitative results are presented in Tables 1 and 2. Table 1 shows the results that describe the spatial and spectral correlation of fused images after applying the six methods.
Figure 2. Detail of sub-images: (a) MULTI, (b) PAN, (c) wavelet fused, (d) Gram-Schmidt fused, (e) high-frequency modulation fused, (f) HPF fused, (g) simple mean value fused, and (h) PCA fused
Table 1. Quantitative evaluation of Ikonos image quality, spatial and spectral correlation
Fused Image corr spatial corr spectral
R G B R G B
MULTIWavelet 2D-M 0.77 0.81 0.77 0.85 0.79 0.76
MULTIG-S 0.92 0.97 0.95 0.78 0.69 0.62
MULTIHFM 0.66 0.68 0.65 0.94 0.92 0.90
MULTIHPF 0.53 0.54 0.49 0.70 0.69 0.60
MULTISMV 0.84 0.85 0.88 0.88 0.87 0.80
MULTIPCA 0.94 0.97 0.89 0.55 0.54 0.57
In terms of the spatial correlation of images, the results show that the highest values of the coefficients are obtained for the MULTIG-S and MULTIPCA images in each of the three bands. Regarding the spectral correlation of images, the highest values can be observed for the MULTIWavelet 2D-M and MULTIHFM fused images. Table 2 shows the resulting values of computing the RASE index, the ERGAS index and the universal quality Q index, regarding both spatial and spectral features.
Table 2. Quality evaluation for the Ikonos sub-image, values of ERGAS, RASE, and Q indexes, spatial and
spectral evaluation
Fused image ERGAS spatial ERGAS spectral RASE spatial RASE spectral Q spatial Q spectral
MULTIWavelet 2D-M 6.22 5.38 24.89% 21.60% 0.77 0.79
MULTIG-S 3.63 5.97 14.52% 24.00% 0.93 0.70
MULTIHFM 8.92 5.22 35.68% 20.96% 0.64 0.89
MULTIHPF 15.32 13.93 61.31% 55.01% 0.41 0.52
MULTISMV 4.42 4.36 17.71% 17.39% 0.84 0.84
MULTIPCA 11.25 12.48 45.01% 49.99% 0.70 0.43
The results obtained by computing the spatial ERGAS index highlight the quality of the MULTIG-S fused image (spatial ERGAS equal to 3.63). Meanwhile, the spectral ERGAS index indicates that the MULTISMV fused image delivers better quality (spectral ERGAS equal to 4.36). Regarding the values of the spatial and spectral RASE indexes, the best results follow the same trend as observed for ERGAS: the most promising spatial RASE value is associated with the MULTIG-S image (14.52% spatial and 24.00% spectral), and the best spectral RASE score was obtained for the MULTISMV image (17.39%). Finally, the values of the Q index (spatial and spectral) show excellent results for both the MULTIG-S image and the MULTISMV image; however, the MULTISMV fused image outperforms the others since the two indexes (spectral and spatial) maintain equivalent values.
Previous work [1], [5], [26], [27], [32], [33], [35], [37], [38], [41], [43], [47] has shown that image fusion methods based on the wavelet transform (employing the ‘à trous’ and Mallat algorithms) are better suited for the fusion of images than traditional methods.
Moreover, Wald [45] proposed a set of requirements that should be met by quality tests and quality indexes:
1. Indexes should be unit-independent, including calibration instruments and gain values.
2. Indexes should not depend on the number of spectral bands.
3. Indexes should not depend on the relation among the spatial resolutions of the source images.
Therefore, the results provided for the qualitative analysis in Figure 2 and the quantitative analysis
(Tables 1 and 2) show that the best fused image (the image resulting in the smallest degradation of spectral
richness with significant spatial gain) was obtained when applying the Simple Mean Value algorithm
(MULTISMV).
4. CONCLUSION
The implementation of various mathematical proposals, namely the 2D-M wavelet transform, the Gram-Schmidt transform, high-frequency modulation, the HPF transform, simple mean value and principal component analysis, allowed generating fused images from the Ikonos satellite images that are useful for a variety of applications. Implementing various methods in a single study serves the interest of researchers, since no single best method has been found; instead, the best fused image must be found. The fused image showing the highest spatial gain, though with some degradation of spectral richness, is the MULTIG-S image. The simple mean value method (MULTISMV) allowed fused images with higher spatial and spectral quality; however, it can be concluded that images with higher spatial quality tend to degrade spectral richness and, conversely, images with higher spectral quality fail to offer excellent spatial features. The methods proposed herein lead to fused images that provide users with detailed information about urban and rural surroundings, which is useful for applications such as urban planning and management when there is no availability of higher spatial resolution images (e.g. drone-based images). The usefulness of the present study also extends to project development in fields such as agriculture, hydrology, environmental sciences and natural disaster scenarios (e.g. floods, forest fires), among others.
REFERENCES
[1] L. Alparone, L. Wald, J. Chanussot, C. Thomas, P. Gamba, and L. Bruce, “Comparison of Pansharpening Algorithms: Outcome
of the 2006 GRS-S Data Fusion Contest,” IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 10, pp. 3012-3021,
2007, doi: 10.1109/TGRS.2007.904923.
[2] A. A. Goshtasby and S. Nikolov, “Image Fusion: Advances in the State of the Art,” Information Fusion, vol. 8, pp. 114-118, 2007.
[3] D. Wu, A. Yang, L. Zhu, and C. Zhang, “Survey of Multi-sensor Image Fusion,” S. Ma et al. (Eds.): LSMS/ICSEE 2014, Part I,
CCIS 461, pp. 358-367, 2014, doi: 10.1007/978-3-662-45283-7_37.
[4] H. Ghassemian, “A Review of Remote Sensing Image Fusion Methods,” Information Fusion, 32, Part A, pp. 75-89, 2016, doi:
10.1016/j.inffus.2016.03.003.
[5] J. Jinju, N. Santhi, K. Ramar, and B. Sathya Bama, “Spatial frequency discrete wavelet transform image fusion technique for
remote sensing applications,” Engineering Science and Technology, an International Journal, vol. 22, no. 3, pp. 715-726, 2019,
doi: 10.1016/j.jestch.2019.01.004.
Indonesian J Elec Eng & Comp Sci ISSN: 2502-4752
A comparative study for the assessment of Ikonos satellite image-fusion techniques (Javier Medina)
263
[6] Z. W. Pan and H. L. Shen, “Multispectral Image super-resolution via RGB image fusion and radiometric calibration,” IEEE
Transactions on Image Processing, vol. 28, no. 4, pp. 1783-1797, 2019, doi: 10.1109/TIP.2018.2881911.
[7] S. C. Kulkarni and P. P. Rege, “Pixel level fusion techniques for SAR and optical images: A review,” Information Fusion, vol. 59,
pp. 13-29, 2020, doi: 10.1016/j.inffus.2020.01.003.
[8] R. V. Pandit and R. J. Bhiwani, “Morphology-based spatial filtering for efficiency enhancement of remote sensing image fusion,”
Computers & Electrical Engineering, vol. 89, p. 106945, 2021, doi: 10.1016/j.compeleceng.2020.106945.
[9] M. Diwakar et al., “A comparative review: Medical image fusion using SWT and DWT,” Materials Today: Proceedings, vol. 37,
no. Part 2, pp. 3411-3416, 2021, doi: 10.1016/j.matpr.2020.09.278.
[10] S. Singh and K. C. Tiwari, “Exploring the optimal combination of image fusion and classification techniques,” Remote Sensing
Applications: Society and Environment, vol. 24, p. 100642, 2021, doi: 10.1016/j.rsase.2021.100642.
[11] X. Zhang, “Benchmarking and comparing multi-exposure image fusion algorithms,” Information Fusion, vol. 74, pp. 111-131,
2021, doi: 10.1016/j.inffus.2021.02.005.
[12] C. S. Yilmaz, V. Yilmaz, and O. Gungor, “A theoretical and practical survey of image fusion methods for multispectral
pansharpening,” Information Fusion, vol. 79, pp. 1-43, 2022, doi: 10.1016/j.inffus.2021.10.001.
[13] T. C. Nietupski, R. E. Kennedy, H. Temesgen, and B. K. Kerns, “Spatiotemporal image fusion in Google Earth Engine for annual
estimates of land surface phenology in a heterogenous landscape,” International Journal of Applied Earth Observation and
Geoinformation, vol. 99, p. 102323, 2021, doi: 10.1016/j.jag.2021.102323.
[14] S. Wu and H. Chen, “Smart city oriented remote sensing image fusion methods based on convolution sampling and spatial
transformation,” Computer Communications, vol. 157, pp. 444-450, 2020, doi: 10.1016/j.comcom.2020.04.010.
[15] D. Li, Z. Song, C. Quan, X. Xu, and C. Liu, “Recent advances in image fusion technology in agriculture,” Computers and
Electronics in Agriculture, vol. 191, p. 106491, 2021, doi: 10.1016/j.compag.2021.106491.
[16] J. Kong et al., “Evaluation of four image fusion NDVI products against in-situ spectral-measurements over a heterogeneous rice
paddy landscape,” Agricultural and Forest Meteorology, vol. 297, p. 108255, 2021, doi: 10.1016/j.agrformet.2020.108255.
[17] Z. Cao, S. Chen, F. Gao, and X. Li, “Improving phenological monitoring of winter wheat by considering sensor spectral response
in spatiotemporal image fusion,” Physics and Chemistry of the Earth, Parts A/B/C, vol. 116, p. 102859, 2020, doi:
10.1016/j.pce.2020.102859.
[18] Y. Li, J. Zhao, Z. Lv, and J. Li, “Medical image fusion method by deep learning,” International Journal of Cognitive Computing
in Engineering, vol. 2, pp. 21-29, 2021, doi: 10.1016/j.ijcce.2020.12.004.
[19] H. Xu and J. Ma, “EMFusion: An unsupervised enhanced medical image fusion network,” Information Fusion, vol. 76,
pp. 177-186, 2021, doi: 10.1016/j.inffus.2021.06.001.
[20] X. Li, F. Zhou, and H. Tan, “Joint image fusion and denoising via three-layer decomposition and sparse representation,”
Knowledge-Based Systems, vol. 224, p. 107087, 2021, doi: 10.1016/j.knosys.2021.107087.
[21] G. Li, Y. Lin, and X. Qu, “An infrared and visible image fusion method based on multi-scale transformation and norm
optimization,” Information Fusion, vol. 71, pp. 109-129, 2021, doi: 10.1016/j.inffus.2021.02.008.
[22] M. Misiti, Y. Misiti, G. Oppenheim, and J.-M. Poggi, Wavelet Toolbox For Use with Matlab, The MathWorks Inc., Natick, MA
01760-2098, Version 4.4.1., 2009.
[23] Y. Nievergelt, Wavelets made easy, Ed Birkhäuser, New York, 2013, doi: 10.1007/978-1-4614-6006-0.
[24] C. Burrus, Wavelets and Wavelet Transforms, Sep. 24, 2015, http://cnx.org/content/col11454/1.6/
[25] G. Siddalingesh, A. Mallikarjun, H. Sanjeevkumar, and S. Kotresh, “Feature-Level Image Fusion Using DWT, SWT, and DT-
CWT,” Emerging Research in Electronics, Computer Science, and Technology, Lecture Notes in Electrical Engineering, vol. 248,
pp. 183-194, 2014, doi: 10.1007/978-81-322-1157-0_20.
[26] N. Jha, A. K. Saxena, A. Shrivastava, and M. Manoria, “A review on various image fusion algorithms,” 2017 International
Conference on Recent Innovations in Signal Processing and Embedded Systems (RISE), Oct. 2017, pp. 27-29,
doi: 10.1109/rise.2017.8378146.
[27] C. Periyasamy, “Satellite Image Enhancement Using Dual Tree Complex Wavelet Transform,” Bulletin of Electrical Engineering
and Informatics, vol. 6, no. 49, pp. 334-336, 2017, doi: 10.11591/eei.v6i4.861.
[28] A. Sharma and T. Gulati, “Change Detection from Remotely Sensed Images Based on Stationary Wavelet Transform,”
International Journal of Electrical and Computer Engineering, vol. 7, no. 6, pp. 3395-3401, 2017, doi:
10.11591/ijece.v7i6.pp3395-3401.
[29] V. Radhika, K. Veeraswamy, and S. S. Kumar, “Digital Image Fusion Using HVS in Block Based Transforms,” J Sign Process
Syst, vol. 90, pp. 947-957, 2018, doi: 10.1007/s11265-017-1252-8.
[30] Y. Liu, L. Wang, J. Cheng, C. Li, and X. Chen, “Multi-focus image fusion: A Survey of the state of the art,” Information Fusion,
vol. 64, pp. 71-91, 2020, doi: 10.1016/j.inffus.2020.06.013.
[31] M. Jindal, E. Bajal, A. Chakraborty, P. Singh, M. Diwakar, and M. Kumar, “A novel multi-focus image fusion paradigm: A
hybrid approach,” Materials Today: Proceedings, vol. 37, Part 2, pp. 2952-2958, 2021, doi: 10.1016/j.matpr.2020.08.704.
[32] J. Núñez, X. Otazu, O. Fors, A. Prades, V. Pala, and R. Arbiol, “Multiresolution-Based Image Fusion with Additive Wavelet
Decomposition,” IEEE Transactions on Geoscience and Remote Sensing, vol. 37, no. 3, pp. 1204-1211, 1999, doi: 10.1109/36.763274.
[33] J. Medina, C. Pinilla, and L. Joyanes, “Two-Dimensional Fast Haar Wavelet Transform for Satellite-Image Fusion,” Journal of
Applied Remote Sensing, vol. 7, no. 1, p. 073698, 2013, doi: 10.1117/1.JRS.7.073698.
[34] E. Upegui and J. Medina, Análisis de Imágenes usando las transformadas Fourier y Wavelet, Bogotá: Editorial Universidad
Distrital Francisco José de Caldas, 2019, ISBN 978-787-064-0.
[35] C. Gonzalo and M. Lillo-Saavedra, “A directed search algorithm for setting the spectral–spatial quality trade-off of fused images
by the wavelet à trous method,” Canadian Journal of Remote Sensing, vol. 34, no. 4, pp. 367-375, 2008, doi: 10.5589/m08-041.
[36] R. C. González and R. E. Woods, Digital Image Processing, 4th. Edition, Pearson, 2018, ISBN: 9780133356779.
[37] J. Medina, I. Carrillo, and E. Upegui, “Implementación y evaluación de la transformada Wavelet 2D À trous: usando Intensidad,
Luminancia y Value en Matlab para Fusión de Imágenes Satelitales,” RISTI, no. E41, pp. 396-409, 2021.
[38] C. Gonzalo-Martín and M. Lillo-Saavedra, “Fusión de Imágenes QuickBird Mediante una Representación Conjunta
Multirresolución-Multidireccional,” IEEE Latin America Transactions, vol. 5, no. 1, pp. 32-37, 2007, doi: 10.1109/T-
LA.2007.4444530.
[39] R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, 3rd edition, Elsevier, 2013.
[40] Hexagon Geospatial, ERDAS Imagine 2018, Online Documentation, 2018.
[41] C. Pohl and J. L. Van Genderen, “Review Article, Structuring contemporary remote sensing image fusion,” International Journal
of Image and Data Fusion, vol. 6, no. 1, pp. 3-21, 2015, doi: 10.1080/19479832.2014.998727.
[42] K. Shettigara, “A Linear Transformation Technique For Spatial Enhancement Of Multispectral Images Using A Higher
Resolution Data Set,” 12th Canadian Symposium on Remote Sensing Geoscience and Remote Sensing Symposium, July, 1989,
doi: 10.1109/IGARSS.1989.577945.
[43] R. J. Medina-Daza, N. E. Vera-Parra, and A. O. Restrepo-Rodríguez, “Sallfus, Library for Satellite Images Fusion on
Homogeneous and Heterogeneous Computing architectures,” IEEE Latin America Transactions, vol. 18, no. 12, pp. 2130-2137,
2020, doi: 10.1109/TLA.2020.9400441.
[44] M. R. Spiegel and L. J. Stephens, Estadística, Cuarta edición, McGraw-Hill, 2009.
[45] L. Wald, Data Fusion, Definition and Architectures: Fusion of Images of Different Spatial Resolution, Les Presses de l’École des
Mines, Paris, 2002.
[46] L. Wald, “Quality of high resolution synthesized images: is there a simple criterion?,” Proceedings of the third conference Fusion
of Earth data: merging point measurements, raster maps and remotely sensed image, Sophia Antipolis, France, January 26-28,
2000.
[47] M. Lillo-Saavedra, C. Gonzalo, A. Arquero, and E. Martinez, “Fusion of multispectral and panchromatic satellite sensor imagery
based on tailored filtering in the Fourier domain,” International Journal of Remote Sensing, vol. 26, pp. 1263-1268, 2005,
doi: 10.1080/01431160412331330239.
[48] Z. Wang and A. C. Bovik, “A Universal Image Quality Index,” IEEE Signal Processing Letters, vol. 9, no. 3, pp. 81-84, 2002,
doi: 10.1109/97.995823.
BIOGRAPHIES OF AUTHORS
Javier Medina holds a doctorate in Computer Science, with emphasis on geographic information
systems, from the Pontifical University of Salamanca, Madrid campus (Spain), where he was also accredited
in research proficiency (Diploma of Advanced Studies, DEA) in geographic information systems. He holds a
Master's in Teleinformatics, Specialist degrees in Software Engineering and in Geographic Information
Systems, and a Bachelor's in Mathematics from the Universidad Distrital Francisco José de Caldas. He is a
full-time Professor in the Faculty of Engineering of the same university, teaching in the Cadastral
Engineering and Geodesy curriculum project, the Master's in Information and Communication Sciences, and
the Doctorate in Engineering of the Faculty of Engineering of the Universidad Distrital
Francisco José de Caldas. He can be contacted at email: rmedina@udistrital.edu.co.
Nelson Vera is a Professor and Coordinator of the Master's program in Information
and Communication Sciences at the Universidad Distrital Francisco José de Caldas (Bogotá,
Colombia). He holds a Doctorate in Engineering from the same university and an Electronic Engineering
degree from the Universidad Surcolombiana (Neiva, Colombia). His research covers parallel computing,
high-performance computing, data science and bioinformatics. He can be contacted at email:
neverap@udistrital.edu.co.
Erika Upegui is an Associate Professor in the School of Engineering at the Universidad
Distrital Francisco José de Caldas. She is an active member of the Cadastral Engineering and Geodesy
programs, as well as the MSc program in Information and Communication Sciences and the
PhD program in Engineering. She obtained her PhD in Geography and Territorial
Planning at the Université de Franche-Comté (France) and holds an MSc in Remote Sensing and Geomatics
Applied to Environmental Sciences from Université Paris 7 (France). She also holds the title of
Specialist in Property Appraisal and Geographic Information Systems from the Universidad Distrital
Francisco José de Caldas (Colombia). She can be contacted at email: esupeguic@udistrital.edu.co.