An image fusion method that performs robustly for image sets heavily corrupted by noise is
presented in this paper. The approach combines the advantages of two state-of-the-art fusion techniques,
namely Independent Component Analysis (ICA) and Chebyshev Polynomial Analysis (CPA) fusion.
Fusion using ICA performs well in transferring the salient features of the input images into the composite
output, but its performance deteriorates severely under mild to moderate noise conditions. CPA fusion is
robust under severe noise conditions, but eliminates the high frequency information of the images
involved. We pro-pose to use ICA fusion within high activity image areas, identified by edges and strong
textured surfaces and CPA fusion in low activity areas identified by uniform background regions and weak
texture. A binary image map is used for selecting the appropriate method, which is constructed by a
standard edge detector followed by morphological operators. The results of the proposed approach are
very encouraging as far as joint fusion and denoising is concerned. The works presented may prove
beneficial for future image fusion tasks in real world applications such as surveillance, where noise is
heavily present.
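A minimal sketch of the region-selection step described above, assuming the ICA-fused and CPA-fused images have already been computed; the edge detector, structuring-element size, and function names are illustrative choices, not the authors' implementation.

```python
# Build a binary activity map from an edge detector plus morphological
# operators, then take ICA-fused pixels in high-activity areas and
# CPA-fused pixels elsewhere. `noisy_ref` is an 8-bit grayscale reference
# (ideally pre-smoothed), `fused_ica`/`fused_cpa` are precomputed results.
import cv2
import numpy as np

def select_fused(noisy_ref, fused_ica, fused_cpa,
                 canny_lo=50, canny_hi=150, struct_size=5):
    edges = cv2.Canny(noisy_ref, canny_lo, canny_hi)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (struct_size, struct_size))
    # Closing and dilation grow edge responses into full textured regions.
    activity = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    activity = cv2.dilate(activity, kernel) > 0
    # Pixel-wise selection between the two fusion results.
    out = np.where(activity, fused_ica, fused_cpa)
    return out.astype(fused_ica.dtype), activity
```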
The document presents a new approach for lossless reversible visible watermarking that allows embedding of various visible watermarks into images in a way that the original image can be recovered without any loss. The approach uses deterministic one-to-one compound mappings of pixel values to overlay visible watermarks while ensuring the mappings are reversible. Different types of visible watermarks, including opaque monochrome and translucent full color ones, can be embedded. Experimental results demonstrating the effectiveness of the proposed approach are also included.
This document proposes a remote sensing image fusion approach that combines the Brovey transform and wavelet transforms. The Brovey transform is used first to reduce spectral distortion, followed by a wavelet transform to reduce spatial distortion. The approach was tested on MODIS and SPOT data as well as ETM+ and SPOT data. Statistical analysis showed the proposed technique performed better than traditional fusion techniques like IHS, PCA, and the Brovey transform alone in terms of metrics like correlation coefficient, entropy, and structural similarity. Future work will focus on improving the technique and applying fused images to classification tasks.
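For reference, a minimal numpy sketch of the standard Brovey transform that this pipeline starts from; this covers only the Brovey half of the described Brovey + wavelet method, and the function is illustrative.

```python
# Standard Brovey transform: each multispectral band is scaled by the ratio
# of the panchromatic band to the sum of the multispectral bands.
import numpy as np

def brovey_fusion(ms_bands, pan, eps=1e-6):
    """ms_bands: (H, W, B) float array of co-registered multispectral bands
    upsampled to the panchromatic resolution; pan: (H, W) float pan band."""
    total = ms_bands.sum(axis=2) + eps          # per-pixel band sum
    ratio = pan / total                         # intensity substitution ratio
    return ms_bands * ratio[..., None]          # scale every band
```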
Single image super resolution with improved wavelet interpolation and iterati... (iosrjce)
IOSR journal of VLSI and Signal Processing (IOSRJVSP) is a double blind peer reviewed International Journal that publishes articles which contribute new results in all areas of VLSI Design & Signal Processing. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced VLSI Design & Signal Processing concepts and establishing new collaborations in these areas. Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chips and wafer fabrication, packaging, testing and systems applications. Generation of specifications, design and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor and process levels.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
The document describes an image fusion approach that uses adaptive fuzzy logic modeling for global processing followed by Markov random field modeling for local processing.
It begins by introducing image fusion and its applications. It then discusses existing fusion approaches and their limitations. The proposed approach first uses an adaptive fuzzy logic model to minimize redundant information globally. It then applies Markov random field modeling locally for fusion. Experimental results showed the proposed approach improved the universal image quality index by 30-35% compared to fusion with Markov random field modeling alone.
Automated Colorization of Grayscale Images Using Texture Descriptors (IDES Editor)
The document proposes a novel automated process called ACTD for colorizing grayscale images using texture descriptors without human intervention. It analyzes sample color images to extract coherent texture regions, then uses Gabor filtering for texture-based segmentation. Texture descriptors and color information are computed and stored for each region. These are then used to colorize a new grayscale image based on texture matching. The method combines techniques such as Gabor filtering, fuzzy C-means clustering with a new "Gki factor" for noise tolerance, and content-based image retrieval of texture descriptors. Preliminary results found the approach viable but improvements were needed for scale and rotation invariance.
Reduced-reference Video Quality Metric Using Spatial Information in Salient R... (TELKOMNIKA JOURNAL)
In multimedia transmission, it is important to rely on an objective quality metric which accurately
represents the subjective quality of processed images and video sequences. Maintaining acceptable
Quality of Experience in video transmission requires the ability to measure the quality of the video seen at
the receiver end. Reduced-reference metrics make use of side-information that is transmitted to the
receiver for estimating the quality of the received sequence with low complexity. This attribute enables
real-time assessment and visual degradation detection caused by transmission and compression errors. A
novel reduced-reference video quality metric, known as the Spatial Information in Salient Regions Reduced
Reference Metric, is proposed. The proposed approach makes use of spatial activity to estimate the
distortion of the received sequence after concealment. The statistical elements analysed in this work are based
on extracted edges and their luminance distributions. Results highlight that the proposed edge dissimilarity
measure has a good correlation with DMOS scores from the LIVE Video Database.
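As a hedged illustration of the kind of spatial-activity side-information such a metric can transmit, the sketch below computes a generic spatial-information feature (ITU-T P.910 style: the standard deviation of the Sobel gradient magnitude of a frame). It is not the paper's SIS-RR metric, which uses richer edge and luminance statistics.

```python
# Generic spatial-information feature of a grayscale video frame.
import cv2
import numpy as np

def spatial_information(frame_gray):
    gx = cv2.Sobel(frame_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame_gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return float(magnitude.std())   # higher value = more spatial activity
```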
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science, and technology, including new teaching methods, assessment, validation, and the impact of new technologies, and it continues to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
The document compares three image fusion techniques: wavelet transform, IHS (Intensity-Hue-Saturation), and PCA (Principal Component Analysis). For each technique, it describes the methodology, syntax used, and features. It then applies each technique to sample images to produce fused images. The RGB values of the fused images are recorded and compared in a table. The wavelet technique uses max area selection and consistency verification for feature selection. IHS transforms RGB to IHS values and replaces intensity with a panchromatic image. PCA replaces the first principal component with a high-resolution panchromatic image. The document concludes no single technique is best and the quality depends on the application.
Improving differently illuminant images with fuzzy membership based saturatio... (INFOGAIN PUBLICATION)
Illumination estimation is basic to white balancing digital color images and to color constancy. The key to automatic white balancing of digital images is to estimate precisely the color of the overall scene illumination. Many methods for estimating the illuminant's color have been proposed. Though not the most exact, some of the simplest and most extensively used methods are the gray world algorithm, white patch, max-RGB, gray edge using the first-order derivative, gray edge using the second-order derivative, and saturation weighting. The first three methods neglect scenes illuminated by multiple light sources. In this work, we investigate how illuminant estimation techniques can be improved using fuzzy membership. The main aim of this paper is to evaluate the performance of a fuzzy-enhancement-based saturation weighting technique for different light sources (single, multiple, indoor scene and outdoor scene) under different conditions. The experiments clearly show the effectiveness of the proposed technique over the available methods.
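A minimal sketch of the gray world baseline mentioned above: assume the average scene reflectance is achromatic and rescale each channel by the ratio of the global mean to the channel mean. This illustrates only the baseline, not the fuzzy-membership weighting proposed in the paper.

```python
# Gray world white balance for an RGB image.
import numpy as np

def gray_world(img):
    """img: (H, W, 3) float RGB image."""
    channel_means = img.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gray_mean = channel_means.mean()                  # target achromatic level
    gains = gray_mean / (channel_means + 1e-6)
    return np.clip(img * gains, 0.0, img.max())
```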
Inpainting refers to the art of restoring lost parts of an image and reconstructing them based on the background information; that is, image inpainting is the process of reconstructing lost or deteriorated parts of images using information from surrounding areas. In fine art museums, inpainting of degraded paintings is traditionally carried out by professional artists and is usually very time consuming. The purpose of inpainting is to reconstruct missing regions in a visually plausible manner so that the result seems reasonable to the human eye. Several approaches have been proposed for this task.
This paper gives an overview of different techniques of image inpainting. The work covers PDE-based inpainting algorithms and texture-synthesis-based inpainting algorithms, and presents a brief comparative survey of these two techniques.
This document discusses using super-resolution-based in-painting for object removal in images. It begins with an overview of in-painting and exemplar-based in-painting methods. It then proposes a new framework that combines exemplar-based in-painting with a single-image super-resolution method. This approach improves image quality by producing high-resolution outputs with less noise compared to exemplar-based in-painting alone. The document concludes the proposed method increases robustness for applications like satellite imaging and medical imaging by providing high quality images with damaged objects removed.
Comparative study on image segmentation techniques (gmidhubala)
This document discusses various image processing and analysis techniques. It describes image segmentation as separating an image into meaningful parts to facilitate analysis. Common segmentation techniques mentioned include thresholding, edge detection, color-based segmentation, and histograms. Thresholding involves separating foreground and background using a threshold value. Edge detection finds edges and contours. Color segmentation extracts information based on color. Histograms locate clusters of pixels to distinguish regions. The document provides examples of applying these techniques and concludes that segmentation partitions an image into homogeneous regions to extract high-level information.
PCA & CS based fusion for Medical Image Fusion (IJMTST Journal)
Compressive sampling (CS), also called compressed sensing, has generated a tremendous amount of excitement in the image processing community. It provides an alternative to Shannon/Nyquist sampling when the signal under acquisition is known to be sparse or compressible. In this paper, we propose a new, efficient image fusion method for compressed sensing imaging. In this method, we compute the two-dimensional discrete cosine transform of multiple input images; the resulting measurements are multiplied by a sampling filter to obtain compressed images, and the inverse discrete cosine transform is then taken. Finally, the fused image is obtained from these results using the PCA fusion method. The approach is also applied to multi-focus and noisy images. Simulation results show that our method provides promising fusion performance in both visual comparison and comparison using objective measures. Moreover, because this method does not require a recovery process, the computational time is greatly reduced.
Region filling and object removal by exemplar based image inpainting (Woonghee Lee)
To remove objects from a picture, or to restore a picture from scratches or holes, Criminisi et al. proposed an algorithm that combines texture synthesis and inpainting. I made this slide deck to introduce the algorithm for a class presentation, referring to the slides at http://bit.ly/1Ng7DNt. I hope it helps you understand the algorithm. Thank you.
A Survey on Exemplar-Based Image Inpainting Techniques (ijsrd.com)
Previous papers on exemplar-based image inpainting give an idea of how to inpaint a destroyed region, covering the Criminisi algorithm, the patch shifting scheme, and the search region prior method. Criminisi's and Sarawut's patch shifting schemes need more time to inpaint a damaged region, but the proposed method reduces time complexity by searching only in the region of the image related to the missing portion.
In this project we have implemented a tool to inpaint selected regions from an image. Inpainting refers to the art of restoring lost parts of image and reconstructing them based on the background information. The tool provides a user interface wherein the user can open an image for inpainting, select the parts
of the image that he wants to reconstruct. The tool would then automatically inpaint the selected area according to the background information. The image can
then be saved. The inpainting is based on the exemplar-based approach. The basic aim of this approach is to find examples (i.e. patches) from the image and
replace the lost data with it. Applications of this technique include the restoration of old photographs and damaged film; removal of superimposed text like
dates, subtitles etc.; and the removal of entire objects from the image like microphones or wires in special effects.
This document proposes a new method called multi-surface fitting for enhancing the resolution of digital images. The method fits multiple surfaces, with one surface fitted for each low-resolution pixel, and then fuses the multi-sampling values from these surfaces using maximum a posteriori estimation. This allows more low-resolution pixel information to be utilized to reconstruct the high-resolution image compared to other interpolation-based methods. The method is shown to effectively preserve image details without requiring assumptions about the image prior, as iterative techniques do. It provides error-free high resolution for test images.
This is a paper I wrote as part of my seminar "Inverse Problems in Computer Vision" while pursuing my M.Sc Medical Engineering at FAU, Erlangen, Germany.
The paper details a state-of-the-art method used for Single Image Super Resolution using Deep Convolutional Networks and the possible extensions to the original approach by considering compression and noise artifacts.
Feature based ghost removal in high dynamic range imaging (ijcga)
This paper presents a technique to reduce the ghost artifacts in a high dynamic range (HDR) image. In HDR imaging, we need to detect the motion between multiple exposure images of the same scene in order to prevent the ghost artifacts. First, we establish correspondences between the aligned reference image and the other exposure images using the zero-mean normalized cross correlation (ZNCC). Then, we find object motion regions using adaptive local thresholding of ZNCC feature maps and motion map clustering. In this process, we focus on finding accurate motion regions and on reducing false detection in order to minimize the side effects as well. Through experiments with several sets of low dynamic range images captured with different exposures, we show that the proposed method can remove the ghost artifacts better than existing methods.
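A small sketch of the ZNCC measure used above, computed between two image patches; patch extraction, the adaptive thresholding, and the motion-map clustering are not shown.

```python
# Zero-mean normalized cross correlation between two equally sized patches.
import numpy as np

def zncc(patch_a, patch_b, eps=1e-8):
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()                                  # remove local brightness
    b -= b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + eps
    return float((a * b).sum() / denom)            # in [-1, 1]; 1 = same structure
```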
Heckbert, P. S., Adaptive radiosity textures for bidirectional ray tracing (Xavier Davias)
The document describes a new rendering method called bidirectional ray tracing using adaptive radiosity textures. It separates surface interaction into diffuse and specular components. It computes the specular component on the fly during ray tracing, and stores the diffuse component (radiosity) in adaptive radiosity textures on diffuse surfaces. These textures adaptively subdivide to resolve sharp shadows. It uses a three-pass algorithm: 1) a size pass records visibility, 2) a light pass traces light rays to deposit photons and construct textures, 3) an eye pass traces eye rays to render the image using the textures. This hybrid approach aims to provide accurate global illumination simulation for realistic image synthesis.
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFICATION (sipij)
A trajectory-guided, concatenative approach for synthesizing high-quality, photo-realistic video from real image samples is proposed. The automated lip-reading system searches the library for the real image sample sequence closest to the HMM-predicted trajectory in the video data. The object trajectory is obtained by projecting the face patterns into a KDA feature space. Speaker's face identification is approached by synthesizing the identity surface of a subject's face from a small sample of patterns that sparsely cover the view sphere. A KDA algorithm is used for lip-reading image discrimination; the dimensionality of the fundamental lip feature vector is then reduced using the 2D-DCT, and the mouth-area dimensionality is further reduced with PCA to obtain the Eigenlips representation, as proposed in [33]. The subjective performance results of the cost function under the automatic lip-reading model did not clearly illustrate the superior performance of the method.
The document reviews approaches to image interpolation and super-resolution. It discusses several interpolation methods including polynomial-based, edge-directed, and soft-decision approaches. Edge-directed methods aim to preserve edge sharpness during upsampling by estimating edge orientations or fusing multiple orientations. New edge-directed interpolation uses a Wiener filter to estimate missing pixel values. Soft-decision adaptive interpolation and robust soft-decision interpolation further improve results by modeling image signals within local windows and incorporating outlier weighting. The document provides formulations and comparisons of these methods.
This document provides an introduction to image segmentation. It discusses how image segmentation partitions an image into meaningful regions based on measurements like greyscale, color, texture, depth, or motion. Segmentation is often an initial step in image understanding and has applications in identifying objects, guiding robots, and video compression. The document describes thresholding and clustering as two common segmentation techniques and provides examples of segmentation based on greyscale, texture, motion, depth, and optical flow. It also discusses region-growing, edge-based, and active contour model approaches to segmentation.
A Survey on Single Image Dehazing Approaches (IRJET Journal)
This document provides a survey of single image dehazing approaches. It begins with an introduction to the problem of haze in images and how it degrades quality. It then summarizes several existing single image dehazing methods, including those based on the atmospheric scattering model, dark channel prior, color attenuation prior, and deep learning approaches. The survey covers the key assumptions and limitations of each approach. Overall, the document reviews the progress that has been made in developing techniques to remove haze from a single input image.
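As a hedged sketch of one method reviewed in the survey, the dark channel prior step is shown below in the common form from He et al.: the dark channel is the per-pixel minimum over color channels followed by a local minimum filter, and low dark-channel values indicate haze-free regions. The window size and transmission formula are typical values, and the atmospheric light `atmosphere` is assumed to be estimated separately.

```python
# Dark channel and transmission estimation for single-image dehazing.
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """img: (H, W, 3) float image in [0, 1]."""
    min_rgb = img.min(axis=2)                               # min over channels
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)                       # local minimum filter

def estimate_transmission(img, atmosphere, omega=0.95, patch=15):
    """atmosphere: (3,) estimated atmospheric light."""
    normalized = img / np.maximum(atmosphere, 1e-6)
    return 1.0 - omega * dark_channel(normalized, patch)
```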
Many applications such as robot navigation, defense, medical and remote sensing perform
various processing tasks, which can be performed more easily when all objects in different images of the
same scene are combined into a single fused image. In this paper, we propose a fast and effective
method for image fusion. The proposed method derives the intensity-based variations, that is, the large-scale and
small-scale components, from the source images. In this approach, guided filtering is employed for this extraction.
Gaussian and Laplacian pyramidal approach is then used to fuse the different layers obtained.
Experimental results demonstrate that the proposed method can obtain better performance for fusion of
all sets of images. The results clearly indicate the feasibility of the proposed approach.
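A hedged sketch of the two-scale split used above: a guided filter gives a large-scale (base) layer and the residual is the small-scale (detail) layer. The paper fuses the layers with a Gaussian/Laplacian pyramid approach; this sketch substitutes simple averaging and max-absolute rules just to show the decomposition, and the function names and parameters are illustrative.

```python
# Guided-filter two-scale decomposition and a simple layer-wise fusion rule.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=8, eps=1e-3):
    size = 2 * r + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ip = uniform_filter(guide * src, size)
    corr_ii = uniform_filter(guide * guide, size)
    a = (corr_ip - mean_i * mean_p) / (corr_ii - mean_i ** 2 + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def two_scale_fuse(img1, img2):
    # Base layers via edge-preserving smoothing; residuals are detail layers.
    base1 = guided_filter(img1, img1)
    base2 = guided_filter(img2, img2)
    det1, det2 = img1 - base1, img2 - base2
    base = 0.5 * (base1 + base2)                                # average bases
    detail = np.where(np.abs(det1) > np.abs(det2), det1, det2)  # stronger detail wins
    return base + detail
```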
Compressed sensing applications in image processing & communication (Mayur Sevak)
This document discusses applications of compressed sensing in image processing and communication. It begins by explaining what compressed sensing is and how it works by acquiring signals using fewer samples than required by the Nyquist criterion through exploiting sparsity and incoherence. It then discusses how to implement compressed sensing using properties of sparsity and incoherence along with convex optimization for reconstruction. Applications discussed include image fusion, representation, denoising, remote sensing, MIMO and cognitive radio communications, and ultra-wideband channel estimation. Compressed sensing is shown to provide benefits like reducing data acquisition and transmission costs while enabling reliable signal reconstruction.
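A toy sketch of the core compressed-sensing idea summarized above: a k-sparse signal is measured with a random Gaussian matrix at a sub-Nyquist rate and recovered with Orthogonal Matching Pursuit from scikit-learn. The sizes and sensing matrix are illustrative assumptions.

```python
# Sparse signal recovery from random measurements via OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                  # sensing matrix
y = A @ x                                                      # m << n measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```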
This document describes an image fusion method using pyramidal decomposition. It proposes extracting fine details from input images using guided filtering and fusing the base layers of images across multiple exposures or focal points using a multiresolution pyramid approach. A weight map is generated considering exposure, contrast, and saturation to guide the fusion of base layers. The fused base layer is then combined with extracted fine details to produce a detail-enhanced fused image. The goal is to preserve details in both very dark and extremely bright regions of the input images. It is argued that this method can effectively fuse images from different exposures or focal points without introducing artifacts.
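A hedged sketch of a per-pixel weight map built from exposure, contrast, and saturation, in the style of Mertens et al. exposure fusion; the exact weighting used by the summarized method may differ, and the parameter values here are illustrative.

```python
# Per-pixel weight map combining contrast, saturation, and well-exposedness.
import cv2
import numpy as np

def weight_map(img, sigma=0.2):
    """img: (H, W, 3) float RGB image in [0, 1]."""
    gray = cv2.cvtColor(img.astype(np.float32), cv2.COLOR_RGB2GRAY)
    contrast = np.abs(cv2.Laplacian(gray, cv2.CV_32F))       # local contrast
    saturation = img.std(axis=2)                              # spread across channels
    well_exposed = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return contrast * saturation * well_exposed + 1e-12       # avoid all-zero weights
```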
Image Denoising Based On Sparse Representation In A Probabilistic Framework (CSCJournals)
Image denoising is an interesting inverse problem: by denoising we mean finding a clean image, given a noisy one. In this paper, we propose a novel image denoising technique based on the generalized k-density model as an extension of the probabilistic framework for solving the image denoising problem. The approach is based on using an overcomplete basis dictionary for sparsely representing the image of interest. To learn the overcomplete basis, we used the generalized k-density model based ICA. The learned dictionary is then used for denoising speech signals and other images. Experimental results confirm the effectiveness of the proposed method for image denoising. A comparison with other denoising methods is also made, and it is shown that the proposed method produces the best denoising effect.
An efficient image segmentation approach through enhanced watershed algorithm (Alexander Decker)
This document proposes an efficient image segmentation approach combining an enhanced watershed algorithm and color histogram analysis. The watershed algorithm is applied to preprocessed images after merging the results with an enhanced edge detection. Over-segmentation issues are addressed through a post-processing step applying color histogram analysis to each segmented region, improving overall performance. The document provides background on image segmentation techniques, reviews related work applying watershed algorithms, and discusses challenges like over-segmentation that watershed approaches can face.
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION (IJCI JOURNAL)
The availability of imaging sensors operating in multiple spectral bands has led to the requirement of
image fusion algorithms that would combine the image from these sensors in an efficient way to give an
image that is more informative as well as perceptible to human eye. Multispectral image fusion is the
process of combining images from different spectral bands that are optically acquired. In this paper, we
used a pixel-level image fusion based on principal component analysis that combines satellite images of the
same scene from seven different spectral bands. The purpose of using the principal component analysis
technique is that it is considered one of the best methods for grayscale image fusion and gives better results. The main aim of
the PCA technique is to reduce a large set of variables into a small set that still contains most of the
information present in the large set. The paper compares different parameters, namely entropy,
standard deviation, correlation coefficient etc. for different number of images fused from two to seven.
Finally, the paper shows that the information content in an image gets saturated after fusing four images.
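An illustrative sketch of pixel-level PCA fusion of several co-registered grayscale bands: weighting each band by the loadings of the first principal component is one common formulation, not necessarily the paper's exact procedure.

```python
# Pixel-level PCA fusion of N co-registered grayscale bands.
import numpy as np

def pca_fuse(bands):
    """bands: list of (H, W) float arrays from different spectral bands."""
    stack = np.stack([b.ravel() for b in bands], axis=1)   # (pixels, N)
    cov = np.cov(stack, rowvar=False)                       # N x N band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = np.abs(eigvecs[:, -1])                              # first PC loadings
    w = w / w.sum()                                         # normalized band weights
    return sum(wi * b for wi, b in zip(w, bands))
```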
This document presents a new method for image compression called Haar Wavelet Based Joint Compression Method Using Adaptive Fractal Image Compression (DWT+AFIC). It combines discrete wavelet transform with an existing adaptive fractal image compression technique to improve compression ratio and reconstructed image quality compared to previous fractal image compression methods. The document introduces fractal image compression and its limitations, describes the proposed DWT+AFIC method and 5 other compression techniques, provides simulation results on test images showing DWT+AFIC achieves higher peak signal to noise ratios and compression ratios than other methods, and concludes DWT+AFIC decreases encoding time while increasing compression ratio and maintaining reconstructed image quality.
Noise Level Estimation for Digital Images Using Local Statistics and Its Appl... (TELKOMNIKA JOURNAL)
In this paper, an automatic estimation technique for additive white Gaussian noise is proposed. The technique is built on the local statistics of Gaussian noise. In the field of digital signal processing, estimation of the noise is considered a pivotal process that many signal processing tasks rely on. The main aim of this paper is to design a patch-based estimation technique to estimate the noise level in natural images and use it in a blind noise removal technique. The estimation process utilizes selected patches containing the most contaminated sub-pixels in the tested images, using principal component analysis (PCA). The suggested noise level estimation technique is shown to be superior to state-of-the-art noise estimation and noise removal algorithms; the proposed algorithm produces the best performance in most cases compared with the investigated techniques in terms of PSNR, IQI, and visual perception.
Image Denoising of various images Using Wavelet Transform and Thresholding Te... (IRJET Journal)
The document discusses image denoising using wavelet transforms and thresholding techniques. It first provides background on image denoising and wavelet transforms. It then reviews several existing studies that used wavelet transforms like Haar, db4, and sym4 along with thresholding to denoise images corrupted with Gaussian and salt-and-pepper noise. Next, it describes the proposed denoising algorithm which involves adding noise to test images, decomposing the noisy images using different wavelet transforms, applying thresholding, and calculating metrics like PSNR to evaluate performance. The algorithm aims to eliminate noise in the wavelet domain using soft and hard thresholding followed by reconstruction.
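A hedged sketch of the general wavelet-thresholding pipeline reviewed above, using PyWavelets: decompose, soft-threshold the detail coefficients, and reconstruct. The wavelet, decomposition level, and universal-threshold rule are illustrative choices, not the surveyed papers' exact settings.

```python
# Wavelet-domain soft-threshold denoising of a 2-D image.
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=2):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Noise sigma estimated from the finest diagonal sub-band (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(img.size))           # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)
```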
Effective segmentation of sclera, iris and pupil in noisy eye images (TELKOMNIKA JOURNAL)
In today's sensitive environment, iris recognition has received the most attention among the various biometric technologies for personal authentication. One of the key steps in an iris recognition system is the accurate segmentation of the iris from its surrounding noise, including the pupil and sclera, in a captured eye-image. In our proposed method, the input image is initially preprocessed using bilateral filtering. After preprocessing, contour-based features such as brightness, color, and texture are extracted. Then entropy is measured based on the extracted contour-based features to effectively distinguish the data in the images. Finally, a convolutional neural network (CNN) is used for effective segmentation of the sclera, iris, and pupil regions based on the entropy measure. The results are analyzed to demonstrate that the proposed segmentation method performs better than the existing methods.
This document summarizes a research paper that proposes a new method for blind source separation (BSS) of bearing vibration signals. The method combines an improved k-means clustering algorithm with a Laplace potential function to better estimate the mixing matrix in underdetermined blind source separation (UBSS) problems. UBSS aims to separate signal sources from mixed measurements without knowledge of the mixing process. Existing k-means-based methods for estimating the mixing matrix in UBSS are sensitive to noise and initial cluster locations. The proposed method initializes cluster centroids using maximum distances between data points and updates centroids based on average distances within clusters. It then applies a Laplace potential function to fine-tune centroids and make the clustering more robust to noise.
Radar images are strongly preferred for the analysis of geospatial information about the Earth's surface and for assessing environmental conditions. Radar images are captured by different remote sensors, and those images are combined to obtain complementary information. To collect radar images, SAR (Synthetic Aperture Radar) sensors are used; these are active sensors that can gather information during day and night, regardless of weather conditions. We discuss DCT and DWT image fusion methods, which give a more informative fused image, and we compare performance parameters of these two methods to identify the superior technique.
A PROJECT REPORT ON REMOVAL OF UNNECESSARY OBJECTS FROM PHOTOS USING MASKING (IRJET Journal)
This document presents a project report on removing unnecessary objects from photos using masking techniques. It discusses using algorithms like Fast Marching and Navier-Stokes to fill in missing image data and maintain continuity across boundaries. The Fast Marching method begins at region boundaries and works inward, prioritizing completion of boundary pixels first. Navier-Stokes uses fluid dynamics equations to continue intensity value functions and ensure they remain continuous at boundaries. Color filtering can also be used to segment specific colored objects or regions. The project aims to implement these techniques to remove unwanted objects from images and fill the resulting gaps seamlessly.
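The two algorithms named above map directly onto OpenCV's built-in inpainting backends: the Fast Marching method (Telea) and the Navier-Stokes based method. The sketch below shows both calls; the file names are placeholder assumptions, and `mask` must be an 8-bit single-channel image marking the object to remove.

```python
# Object removal with OpenCV's Fast Marching and Navier-Stokes inpainting.
import cv2

img = cv2.imread("photo.jpg")                                  # hypothetical input
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)     # nonzero = remove

fm_result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)       # Fast Marching
ns_result = cv2.inpaint(img, mask, 3, cv2.INPAINT_NS)          # Navier-Stokes

cv2.imwrite("result_fast_marching.jpg", fm_result)
cv2.imwrite("result_navier_stokes.jpg", ns_result)
```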
This document discusses image compression using discrete wavelet transform (DWT) and principal component analysis (PCA). It first reviews several related works that use transforms like curvelet, wavelet and discrete cosine transform for image compression. It then describes preprocessing the input image using DWT to decompose it into sub-bands, and applying PCA on the high-frequency sub-bands to reduce dimensions and compress the image while preserving important boundaries. The algorithm is implemented and evaluated based on metrics like peak signal-to-noise ratio, standard deviation and entropy. Results show 95% accuracy in image identification from a database, though processing time increases significantly with database size.
Comparative Study of Compressive Sensing Techniques For Image Enhancement (CSCJournals)
Compressive Sensing is a new way of sampling signals at a sub-Nyquist rate. For many signals, this revolutionary technology strongly relies on the sparsity of the signal and incoherency between sensing basis and representation basis. In this work, compressed sensing method is proposed to reduce the noise of the image signal. Noise reduction and image reconstruction are formulated in the theoretical framework of compressed sensing using Basis Pursuit de-noising (BPDN) and Compressive Sampling Matching Pursuit (CoSaMP) algorithm when random measurement matrix is utilized to acquire the data. Ultimately, it is demonstrated that the proposed methods can't perfectly recover the image signal. Therefore, we have used a complementary approach for enhancing the performance of CS recovery with non-sparse signals. In this work, we have used a new designed CS recovery framework, called De-noising-based Approximate Message Passing (D-AMP). This method uses a de-noising algorithm to recover signals from compressive measurements. For de-noising purpose the Non-Local Means (NLM), Bayesian Least Squares Gaussian Scale Mixtures (BLS-GSM) and Block Matching 3D collaborative have been used. Also, in this work, we have evaluated the performance of our proposed image enhancement methods using the quality measure peak signal-to-noise ratio (PSNR).
Deep Convolutional Neural Networks (CNNs) have achieved impressive performance in
edge detection tasks, but their large number of parameters often leads to high memory and energy
costs for implementation on lightweight devices. In this paper, we propose a new architecture, called
Efficient Deep-learning Gradients Extraction Network (EDGE-Net), that integrates the advantages of Depthwise Separable Convolutions and deformable convolutional networks (DeformableConvNet) to address these inefficiencies. By carefully selecting proper components and utilizing
network pruning techniques, our proposed EDGE-Net achieves state-of-the-art accuracy in edge
detection while significantly reducing complexity. Experimental results on BSDS500 and NYUDv2
datasets demonstrate that EDGE-Net outperforms current lightweight edge detectors with only
500k parameters, without relying on pre-trained weights.
Corrosion Detection Using A.I : A Comparison of Standard Computer Vision Tech...csandit
In this paper we present a comparison between standard computer vision techniques and a Deep Learning approach for automatic metal corrosion (rust) detection. For the classic approach, a classification based on the number of pixels containing specific red components has been utilized. The code, written in Python, used OpenCV libraries to compute and categorize the images. For the Deep Learning approach, we chose Caffe, a powerful framework developed at the Berkeley Vision and Learning Center (BVLC). The test has been performed by classifying images and calculating the total accuracy for the two different approaches.
Amazon products reviews classification based on machine learning, deep learni...TELKOMNIKA JOURNAL
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave their reviews about the product. These reviews are very helpful for the store owners and the product’s manufacturers for the betterment of their work process as well as product quality. An automated system is proposed in this work that operates on two datasets D1 and D2 obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF), bag of words (BoW) and global vectors (GloVe), and Word2vec, respectively. Four machine learning (ML) models support vector machines (SVM), random forest (RF), logistic regression (LR), multinomial Naïve Bayes (MNB), two deep learning (DL) models convolutional neural network (CNN), long-short term memory (LSTM), and standalone bidirectional encoder representations (BERT) are used to classify reviews as either positive or negative. The results obtained by the standard ML, DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model in the case of D1 with an accuracy of 90% on features derived by word embedding models while the CNN provides the best accuracy of 97% upon word embedding features in the case of D2. The proposed model shows better overall performance on D2 as compared to D1.
Design, simulation, and analysis of microstrip patch antenna for wireless app...TELKOMNIKA JOURNAL
In this study, a microstrip patch antenna that works at 3.6 GHz was built and tested to see how well it works. In this work, Rogers RT/Duroid 5880 has been used as the substrate material, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm; it serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to show the recommended antenna design. The goal of this study was to get a more extensive transmission capacity, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was to get a higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the supplied antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Besides, the recreation uncovered that the transfer speed side-lobe level at phi was much better than those of the earlier works, at -28.8 dB, respectively. Thus, it makes a solid contender for remote innovation and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida...TELKOMNIKA JOURNAL
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure the networks suffers from membership authentication security features. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public key of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members of the network are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated with the existing IBC. The result obtained shows that the MIBC recorded an IDR of 99.3% against 94.3% obtained for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC shows lower CT values of 1.17 ms against 1.70 ms for EIBC. The MIBC can be used to improve the security of VANETs.
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) theory toward the IB. The proper research emphasized that the most essential component in explaining IB adoption behavior is behavioral intention to use IB adoption. TAM is helpful for figuring out how elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that may justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to get sufficient theoretical support from the existing literature for the essential elements and their relationships in order to unearth new insights about factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
The smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure that is concerned in controlling the smart antenna pattern to accommodate specified requirements such as steering the beam toward the desired signal, in addition to placing the deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has some drawbacks like slow convergence speed besides high steady state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust its step size. Computer simulation outcomes illustrate that the given model has fast convergence rate as well as low mean square error steady state.
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low power consumption and long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos.The data collected at each sensor node will be transmitted to the gateway via LoRa wireless transmission. Data will be collected, processed, and uploaded to a cloud database at the gateway. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system will sound a siren and send a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameter based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to a big data issue as many Internet of Things (IoT) devices are in the network. Based on previous research, most frequency spectrum was used, but some spectrums were not used, called spectrum hole. Energy detection is one of the spectrum sensing methods that has been frequently used since it is easy to use and does not require license users to have any prior signal understanding. But this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, the wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that the wavelet-based sensing has higher precision in detection and the interference towards primary user can be decreased.
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer, by flooding internet traffic to the server. As a result, the server cannot be accessed by legitimate users. A result of this attack causes enormous losses for a company because it can reduce the level of user trust, and reduce the company’s reputation to lose customers due to downtime. One of the services at the application layer that can be accessed by users is a web-based lightweight directory access protocol (LDAP) service that can provide safe and easy services to access directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer to get fast and accurate results for dealing with unbalanced data. Based on the results obtained, it is observed that DDoS attack detection using a deep learning approach on imbalanced data performs better when implemented using synthetic minority oversampling technique (SMOTE) method for binary classes. On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in multiclass when implemented using the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often affects the characteristics of either linear or nonlinear systems. Moreover, the stabilization process may deteriorate, incurring a catastrophic effect on system performance. As such, this manuscript addresses the concept of the matching condition for systems that suffer from mismatched uncertainties and exogenous disturbances. The perturbation towards the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and tabulated in Proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty from the system with mismatched uncertainty. Lastly, two-dimensional numerical expressions are provided to exercise the proposed proposition. The outcome shows that the matching condition has the ability to change the system into a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both their bit error rate (BER) and throughput output. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results of the improved system show a massive improvement in performance over the conventional system and significant improvements with massive MIMO, including the best throughput and BER. When compared to conventional systems, the improved system has a throughput that is around 22% higher and the best performance in terms of BER, but it still has around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
In this study, it is aimed to obtain two different asymmetric radiation patterns obtained from antennas in the shape of the cross-section of a parabolic reflector (fan blade type antennas) and antennas with cosecant-square radiation characteristics at two different frequencies from a single antenna. For this purpose, firstly, a fan blade type antenna design will be made, and then the reflective surface of this antenna will be completed to the shape of the reflective surface of the antenna with the cosecant-square radiation characteristic with the frequency selective surface designed to provide the characteristics suitable for the purpose. The frequency selective surface designed and it provides the perfect transmission as possible at 4 GHz operating frequency, while it will act as a band-quenching filter for electromagnetic waves at 5 GHz operating frequency and will be a reflective surface. Thanks to this frequency selective surface to be used as a reflective surface in the antenna, a fan blade type radiation characteristic at 4 GHz operating frequency will be obtained, while a cosecant-square radiation characteristic at 5 GHz operating frequency will be obtained.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber based optical sensor for iron detection is demonstrated in this paper. The sensor head consist of an unclad optical fiber with the unclad length of 1 cm and it has a straight structure. Results obtained shows a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity are achieved at 0.0328/ppm and 0.9824 respectively at the wavelength of 690 nm. With the same wavelength, other performance parameters are also studied. Resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm correspondingly. This iron sensor is advantageous in that it does not require any reagent for detection, enabling it to be simpler and cost-effective in the implementation of the iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses the progressive learning for structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can save robust accuracy while updating the new data and the new class data on the online training situation. The robustness accuracy arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) of the PSTOS-ELM. This method is suitable for applications that have data streaming and often have new class data. Our experiment compares the PSTOS-ELM accuracy and accuracy robustness while data is updating with the batch-extreme learning machine (ELM) and structural tolerance online sequential extreme learning machine (STOS-ELM) that both must retrain the data in a new class data case. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also can update new class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
For image segmentation, level set models are frequently employed. They offer the best solution to overcome the main limitations of deformable parametric models. However, the challenge when applying those models to medical images still lies in removing blur at image edges, which directly affects the edge indicator function, prevents adaptive segmentation of images and causes a wrong analysis of pathologies, which in turn prevents a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving a system of two-dimensional partial differential equations (PDE). The first PDE equation allows restoration using Euler's equation, similar to anisotropic smoothing based on a regularized Perona and Malik filter that eliminates noise while preserving edge information in accordance with the contours detected by the second equation, which segments the image based on the first equation's solutions. This approach allows developing a new algorithm which overcomes the studied model's drawbacks. Results of the proposed method give clear segments that can be applied to any application. Experiments on many medical images, in particular blurry images with high information losses, demonstrate that the developed approach produces superior segmentation results in terms of quantity and quality compared to other models already presented in previous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
algorithms, which are able to estimate a denoised image of acceptable quality, with the least
amount of transferred noise possible.
A standard approach to achieve joint fusion and denoising is to use an energy compact
model, such as those in the transfer domain, whereby only prominent basis components are
retained. To date there exist few techniques to specifically address this. ICA, via Sparse Code
Shrinkage (SCS), has been successfully tested for denoising on several occasions but suffers
from the limitation of estimating the noise variance beforehand [10]. SCS can similarly be
generalised to most other transform domain methods to handle noise. CPA on the other hand is
a type of low-pass filter that removes noise in heavily corrupted images without estimating its
variance, though at a cost of lower signal accuracy [1].
This gives rise to a new type of fusion technique, which can be viewed as an algorithm fusion process. In this paper an image fusion approach is proposed for jointly fusing and denoising corrupted digital images. We provide a new technique that combines the best aspects of both ICA and CPA, which we call the Hybrid fusion method [11].
1.2. A Proposed Region-Based Approach
The main aspect of Hybrid fusion is detecting edges, boundaries and texture. This is
achieved by dividing the pixels of a given image into active and inactive regions, in other words
distinguishing edges and rich texture as opposed to uniform background or weak texture. For
this task, we employ the Canny edge detection method. The primary purpose of classification is
to create a boundary among regions which correspond to different aspects of an image. Fusion
is then applied individually on each bounded region.
There are two main attractions of this technique: pixels can be processed more
efficiently if they are treated as a collective group within a region, rather than separate entities.
Region-based fusion may therefore help to overcome some drawbacks of pixel-based fusion,
like blurring, susceptibility to noise and misregistration [12]. The second advantage is that this
approach enables us to perform image fusion using a mixture of CPA and ICA, thus getting the
best of both approaches.
2. Algorithms Comprising Hybrid Fusion
In this section we discuss the methods that make up our proposed technique, namely
ICA and CPA. The basic properties of each method are presented, along with their advantages.
A detailed comparison has also been made between ICA and CPA in relevant aspects.
2.1. Introduction to Independent Component Analysis
Independent Components Analysis (ICA) is a popular method for distinguishing
underlying components in a random data set [13]. The rationale behind ICA may be explained
by the central limit theorem, whereby the linear mixture of two non-Gaussian and independent
source signals results in a signal that is more Gaussian-like than its sources. ICA searches
through the given mixed signal to precisely identify the most independent components – which
are defined to be the source signals. Given a number of observed mixed signals x, the ICA
attempts to identify the source signals s, by means of deriving a mixing matrix A [10] through the
Equation .
We solve for A by optimising an objective function of the distribution pertaining to s, either by maximising its non-Gaussianity or minimising the mutual information between the independent components. Further, a study by Hoyer and Hyvarinen [15] noted that performing ICA on a given set of image patches will yield a number of very localised, independent components that resemble the receptive fields of the human visual cortex in analysing scenery and detecting edges. They are also very similar to wavelet bases, and are therefore suitable for image analysis applications [16]. The aim is thus to estimate a finite number of bases that are able to represent most of the image patch’s structure.
The new domain is described by a set of 2D basis functions (basis images). The image
of interest is then represented as a linear combination of these bases. Bases are trained from a
selection of varying image patches, belonging to training images that are similar in content and
type to the complex set of images of interest. Estimation and optimisation methods are then
employed to solve the mixing matrix and subsequently generate the source signals via the
equation u = Wx, where W is the inverse of A, and u is the derived signal aimed to approximate the source s.
Using the kurtosis method in practice, we begin by defining an arbitrary value for the unmixing vector w and calculate the direction in which the kurtosis of the projection u = w^T x is increasing the most, based on the available samples of the observed vector x. A gradient method is then used to find a new vector w. The process is reiterated until a convergence criterion is satisfied and the source signal’s kurtosis is found. Thus theoretically, kurtosis can be used as an optimisation tool to address the ICA problem [2]. For a more detailed description of ICA, [10, 2, 17] are excellent references.
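For illustration, a minimal Python sketch of this patch-based ICA training is given below, using scikit-learn's FastICA with a kurtosis-like ('cube') contrast function; the patch size (7 × 7), number of patches (1000) and number of bases (48) follow the values quoted later in Section 4, while the training image, variable names and the remaining optimiser settings are illustrative assumptions rather than the authors' exact configuration.

import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d

def train_ica_bases(train_img, patch_size=7, n_bases=48, n_patches=1000, seed=0):
    # Sample random patches from a grayscale training image and flatten them.
    patches = extract_patches_2d(train_img, (patch_size, patch_size),
                                 max_patches=n_patches, random_state=seed)
    X = patches.reshape(n_patches, -1).astype(np.float64)
    X -= X.mean(axis=1, keepdims=True)          # remove the mean (DC) of each patch
    # Kurtosis-like contrast function ('cube'), as in the gradient scheme above.
    ica = FastICA(n_components=n_bases, fun='cube', whiten='unit-variance',
                  max_iter=1000, random_state=seed)
    ica.fit(X)
    # Rows of components_ are unmixing filters; columns of mixing_ are basis images.
    bases = ica.mixing_.T.reshape(n_bases, patch_size, patch_size)
    return ica, bases

Fusion then proceeds in this basis by projecting patches of each input image onto the learned bases and combining the coefficients with a fusion rule such as max-abs.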
2.2. Introduction to Chebyshev Polynomials Analysis
One-dimensional Chebyshev polynomials, written mathematically as T_n(t), can be defined via the recursive equation

$$T_0(t) = 1, \quad T_1(t) = t, \quad T_{n+1}(t) = 2t\,T_n(t) - T_{n-1}(t) \qquad (1)$$

whereby their properties have been explained in [19]. For one-dimensional signal approximation, the polynomials can be used to estimate a given signal f(t):

$$\tilde{f}(t) \approx \sum_{n=0}^{N} a_n T_n(t) \qquad (2)$$
where f̃ is the approximation and a_n is the coefficient of order n, which was proven to have the following form [1]:

$$a_n = \frac{2}{K} \sum_{k=1}^{K} f(t_k)\, T_n(t_k) \qquad (3)$$

with t_k denoting the K samples of the signal mapped onto [-1, 1].
The Chebyshev polynomials are sorted based on their order. Thus a finite order n used
in CPA expansion enables basic signal features to be retained while more complex polynomials
can be omitted. In contrast, ICA decomposes a signal into a set of equally independent basis
components which are randomly arranged. The difference is evident in performance evaluation
of non-noisy images whereby ICA outperforms CPA considerably due to its ability to detect and
enhance strong, independent features within a given scene. However due to the same reason,
CPA performs better than ICA when tested under noisy conditions [1]. The concept of CPA can
be generalised to other signal decomposition approaches whereby a finite number of bases are
acquired and used to adequately represent a signal.
A separable extension of 1D CPA, similar to the discrete cosine transform (DCT), was
subsequently introduced for use on image signals, called two-dimensional separable Chebyshev polynomials. Its definition and properties are given below [1]:

$$\tilde{f}(x, y) \approx \sum_{m=0}^{M} \sum_{n=0}^{N} a_{mn}\, T_m(x)\, T_n(y) \qquad (4)$$

and the coefficient a_{mn} is given by

$$a_{mn} = \frac{4}{K^2} \sum_{k=1}^{K} \sum_{l=1}^{K} f(x_k, y_l)\, T_m(x_k)\, T_n(y_l) \qquad (5)$$

over the K × K samples of an image patch.
For corrupted images, noise components tend to mostly occupy the higher frequency
spectrum. Incidentally, as higher order polynomials comprise high frequency components, the
idea therefore is to limit the CPA order so as to remove noise components at a cost of also
removing high energy information, including edges and strong texture. Effectively, CPA
approximation acts as a low-pass filter that eliminates unwanted noise at the expense of lower
signal accuracy. To extend this useful feature to fusion applications, comparisons are made
between image/data coefficients as done in [1, 20].
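To make the low-pass behaviour concrete, the following numpy sketch approximates a 1D signal by a truncated Chebyshev expansion using numpy.polynomial.chebyshev; the signal, the truncation degree and the least-squares fitting route are illustrative choices, and the paper's 2D separable form applies the same expansion along each patch axis.

import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_lowpass(signal, degree=8):
    # Map the sample index onto [-1, 1], the natural domain of T_n(t).
    t = np.linspace(-1.0, 1.0, len(signal))
    # Least-squares fit of the coefficients a_0..a_degree, then re-synthesis.
    coeffs = C.chebfit(t, signal, degree)
    return C.chebval(t, coeffs)

# Truncating at a low degree keeps coarse structure and suppresses
# high-frequency content (noise), at the cost of softened edges.
t = np.linspace(-1.0, 1.0, 256)
clean = np.where(t > 0.0, 1.0, 0.0) + 0.5 * t
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
smoothed = chebyshev_lowpass(noisy, degree=8)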
3. Hybrid Fusion Framework
3.1. Region-based fusion
In region-based schemes, pixels are segmented into larger regions. Each region is
formulated to be based on such criteria as ‘objects of interest’ or ‘degree of activity’. We are
then able to determine the actual contribution of regions from each input. There have been numerous studies of region-based fusion, as can be found in [3], [12], [21].
We define an image or scene as being composed of active and inactive regions. A
region is considered to be active if and only if there happens to be sufficient amount of
‘interesting’ information within, such as edges or strong texture. On the other hand, inactive
regions imply a lack of interesting information or a plain area with weak textural details.
In images corrupted with noise, the presence of noise is prone to spread through the
entire image, triggering a notable increase in the overall pixel activity. Our premise here is that
noise is more distinguishable, and therefore more easily suppressed, in inactive regions than
active ones. Noise components near an edge region, for example, would be mixed with the
textural details associated with that edge. It would therefore be difficult to differentiate and subsequently remove noise without affecting the regional texture. It is due to this that blurring tends to occur in some image denoising applications, such as those based on Chebyshev polynomials.
Of the previous fusion methods, ICA is extremely adept at modelling edges and
textured regions by virtue of its ability to capture essential image features comprehensively.
However, despite its denoising capability through the use of sparse-code features [14], the performance of ICA has been shown to decline, especially in very noisy examples. This has been highlighted in [1] and also in [11]. In contrast, the smoothness property of CPA makes it
better suited to filter noisy signals, and as such would be of convenient use on inactive regions
mentioned above.
Figure 1. Edge map derived from selecting all hard and soft edges from both input images: (a) UN visual camera, (b) Infrared, (c) Edge map, (d) Enhanced edge map
The ideal solution in dealing with noisy scenarios would therefore be to use ICA on
active regions which contain important structural image details, whilst inactive or background
regions are processed by CPA. We will attempt to implement this idea in the paper. In a way,
we are performing a second stage fusion on the results of the fusion methods that were initially
obtained. The method is flexible enough for us to replace ICA and CPA with a variety of other
methods; though for the purpose of denoising, the aforementioned methods are appropriate.
3.2. Determining active and inactive regions
Two input images from the UN Camp sequence database, shown in Figure 1, are set as examples of the region-defining process. Both images are processed by Canny edge detection,
which is mostly robust to noise and is essentially a multistage edge detection algorithm that
locates critical edge pixels at various scales. The filtered product, named the edge map, is
meant to contain salient information regarding the image. The respective edge maps for the
inputs are then combined via the logical OR operator to form a single, global edge map. This
step aims not to accurately determine correct edges or regions from the source images, but
rather to ensure all possible edges and regions have been preserved for further processing [16].
This is similar to a scheme proposed by Drajic in [16] and [22], where active regions are defined
by an activity indicator computed over an image patch: a region is considered active if the summed activity of the patch exceeds a predefined threshold.
The resulting map is shown in Figure 1. The enhanced edge map provides a guide of
the areas to be processed by either ICA or CPA. Active regions, denoting all hard edges and
texture, are represented by white pixels and will be fused by ICA, whereas inactive regions, i.e. those in black, are fused by CPA.
However, this is an overly simple model: blocking artefacts and pixel discontinuities tend to appear in the result, which requires refinement. The enhancement process comprises a morphological opening step, as explained below.
3.3. Mathematical Morphology for Edge Map Enhancement
To preserve edge details, an intermediary step comprising an opening morphological
operation is added to the edge map that will incorporate more pixels from ICA. Opening is a
combination of morphological operations that consists of dilating an image after it has been
eroded. The appeal of this step is that it discards small pixel artefacts and discontinuities, whilst
smoothing larger pixel structures. In reference to the edge map, the opening step will suppress
any single active pixel or small active region while at the same time increase the breadth of
larger active regions. This is clearly evident in Figure 1d where the edges have been thickened,
as compared to the original edge map in Figure 1c.
The map also illustrates inactive areas, which form the majority of the image space. This manoeuvre allows us to maximise the use of CPA for autonomous noise suppression on the majority of the scenery, and at the same time enables ICA to process edge lines and hard textural regions which represent important features in an image. In general, the additional step provides a clearer and more balanced usage ratio between pixels belonging to ICA and CPA.
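A minimal OpenCV sketch of the map construction described in Sections 3.2 and 3.3 is given below; the Canny thresholds, the structuring-element size, and the choice to dilate the thin Canny edges before the opening (so that the opening has non-trivial regions to act on) are illustrative assumptions rather than the authors' exact settings.

import cv2
import numpy as np

def build_edge_map(img_a, img_b, canny_lo=50, canny_hi=150, se_size=5):
    # Canny edge maps of both 8-bit grayscale inputs, combined with logical OR
    # so that every candidate edge from either sensor is preserved.
    edges = cv2.bitwise_or(cv2.Canny(img_a, canny_lo, canny_hi),
                           cv2.Canny(img_b, canny_lo, canny_hi))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    # Thicken the one-pixel-wide edges into regions, then apply a morphological
    # opening to discard isolated pixels and small artefacts while smoothing the
    # larger active structures.
    thick = cv2.dilate(edges, kernel)
    enhanced = cv2.morphologyEx(thick, cv2.MORPH_OPEN, kernel)
    # 1 = active region (fused by ICA), 0 = inactive region (fused by CPA).
    return (enhanced > 0).astype(np.float32)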
In practice, consider the two output images of ICA and CPA, F_ICA and F_CPA respectively. The edge map M is a binary map, with white pixels taking the value 1 and black pixels the value 0. The expression used to derive the Hybrid fusion result, F_Hybrid, can be written as a Hadamard product,

$$F_{\mathrm{Hybrid}} = M \circ F_{\mathrm{ICA}} + \bar{M} \circ F_{\mathrm{CPA}} \qquad (6)$$

where M̄ denotes the complement of M. Written pixel-wise, this becomes

$$F_{\mathrm{Hybrid}}(x, y) = M(x, y)\,F_{\mathrm{ICA}}(x, y) + \bar{M}(x, y)\,F_{\mathrm{CPA}}(x, y), \quad 1 \le x \le P,\ 1 \le y \le Q \qquad (7)$$

where x and y are the pixel coordinates and (P, Q) is the image size.

Table 1. Average Petrovic score, E(Q), for fusion experiments at noise levels 16 dB to 10 dB: comparison between ICA, CPA and Hybrid fusion.

Image set          Clock               3M                  Desk
Fusion method   Max-abs  Weighted   Max-abs  Weighted   Max-abs  Weighted
ICA             0.2893   0.3293     0.4860   0.5048     0.4165   0.4189
CPA             0.3144   0.3342     0.4954   0.4875     0.3926   0.3937
Hybrid fusion   0.3174   0.3453     0.5109   0.5076     0.4269   0.4117
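With the edge map in hand, the composition of Equations (6)-(7), as reconstructed above, reduces to an element-wise blend of the two fused images; the short numpy sketch below uses illustrative function and variable names.

import numpy as np

def hybrid_fusion(f_ica, f_cpa, edge_map):
    # edge_map is the binary map M (1 = active, 0 = inactive); active pixels
    # are taken from the ICA-fused image and the rest from the CPA-fused image.
    m = edge_map.astype(f_ica.dtype)
    return m * f_ica + (1.0 - m) * f_cpa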
The concept brought forth in this paper is similar to the way humans generally perceive
objects of interest within our line of vision. Uniform areas in the background tend to be plain and
carry little or no high frequency information. The general appeal of our method is that it can enhance strong texture and edges, thus giving priority to meaningful regions whilst de-prioritising the less important regions in the background.
4. Performance Evaluation of Hybrid Fusion
We tested our algorithm using corrupted images, to reflect real world conditions whereby the transmission of data may be prone to noise. Incremental Gaussian noise was added to multiple sets of input images, at SNR levels ranging from 16 dB down to 10 dB, to represent varying degrees of image corruption. Scenarios were taken from the multifocal Clock, 3M and Desk sets, all of which are common image datasets for evaluating fusion performance. For the experiment,
we obtained two grayscale images as source signals, from which the fusion will generate a
composite output image. For the sake of brevity only the Clock experiment is illustrated, as
shown in Figure 2a.
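The corruption procedure can be sketched as follows, assuming SNR is defined as the ratio of mean signal power to noise variance in dB (the paper does not state its exact definition, so this and the function name are illustrative):

import numpy as np

def add_gaussian_noise(img, snr_db, seed=None):
    # Additive white Gaussian noise scaled so the image reaches the target SNR.
    rng = np.random.default_rng(seed)
    img = img.astype(np.float64)
    noise_var = np.mean(img ** 2) / (10.0 ** (snr_db / 10.0))
    return img + rng.normal(0.0, np.sqrt(noise_var), img.shape)

# Corruption levels spanning the range used in the experiments.
snr_levels = [16, 14, 12, 10]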
For benchmarking purposes, we tested the performance of Hybrid fusion against its
predecessor methods, namely ICA and CPA. ICA training was carried out on 1000 patches belonging to images of similar content, each patch measuring 7 × 7 pixels. A total of 48 independent bases were derived. After several configurations and experiments, we concluded that this size and number of patches are large enough without being overly complex, and present a good balance between information content and computational complexity. Similarly for CPA, a suitable degree of Chebyshev polynomials was chosen and 7 × 7 overlapping patches were used. Overlapping is performed by a shift of one pixel per iteration. All fusion methods utilise the max-abs and weighted average fusion rules [3].
In non-noisy environments, CPA’s approximation results in a smooth fusion with less
discernible edges. CPA filters out most high frequency components, creating a blurred artefact
on the image. Yet through the incremental inclusion of noise elements, the image quality only degrades slightly, as has been shown in [1]. This may be explained by the inherent low-pass property of CPA, which automatically removes additive noise and other high frequency components. The limitation of CPA, however, is that it does not solve the problem of blurred edge details.
Therefore a combined output containing the best aspects of the above two methods is
desired. An output image should clearly detect and differentiate edges and important texture,
whilst at the same time adequately suppress the noise. The Hybrid method is able to
accomplish this by combining edge and textural regions from ICA and background regions from
CPA [11]. The result is an enhanced output with distinctively better subjective visual quality and
higher scores in objective evaluation.
Figure 2. Multifocal Clock image fusion results for different methods at SNR = 10 dB using the max-abs rule: a) Clock 1 (background focused), b) Clock 2 (foreground focused), c) ICA, d) CPA, e) Hybrid fusion
The objective image fusion performance measure [23], also called the Petrovic metric,
assesses the quality of visual information obtained from the fusion of input images. The metric
works by extracting all perceptually important information in the inputs, and subsequently measuring quantitatively the ability of the fusion method to transfer this information to the output image. The metric associates salient information with the edge strength and orientation of each
pixel, which are derived by the Sobel operator. A normalised weighted performance metric, Q^{AB/F}, is used to measure fusion performance. In the experiment, two fusion rules were used: max-abs and weighted average. Employing the Petrovic metric [23] on our set of images produces the results in Table 1. The table displays the average Petrovic score, E(Q), over SNR levels of 16 dB to 10 dB for each fusion method. An observation of the scores confirms that Hybrid fusion has the best performance of the three.
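The per-pixel edge strength and orientation that the metric relies on can be obtained from Sobel responses, e.g. as in the sketch below; this reproduces only the Sobel step described above, not the full Petrovic score, and the kernel size is an illustrative choice.

import numpy as np
import cv2

def edge_strength_orientation(img):
    # Horizontal and vertical Sobel gradients of a grayscale image.
    gx = cv2.Sobel(img.astype(np.float64), cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img.astype(np.float64), cv2.CV_64F, 0, 1, ksize=3)
    strength = np.hypot(gx, gy)          # edge strength g(x, y)
    orientation = np.arctan2(gy, gx)     # edge orientation alpha(x, y)
    return strength, orientation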
Figure 3. Multifocal Clock fusion performance for different fusion methods for SNR = 16dB to
10dB. a) Max-abs, b) Weighted average fusion rule. (ICA=blue, CPA=green, Hybrid=red)
The graphs displaying the performance of each method under the different noise levels are given in Figure 3. The graph for CPA depicts a very low gradient against the rising level of noise; the reason behind this has been discussed earlier in the section. In contrast, ICA tends to suffer drastically as the noise increases. Meanwhile, Hybrid fusion almost always retains the highest score. In the case of Clock and ‘3M’, Hybrid fusion scores have consistently been the
highest throughout all SNR levels. The promising results of Hybrid fusion are likely due to the
combinatory factor of the technique. Strong edge details belonging to ICA in active regions have
been retained while inactive and plain regions have been fittingly smoothed.
5. Conclusion
We have described a novel combinatory method for fusing images and have
successfully tested Hybrid fusion’s ability alongside existing methods. In our tests we have
looked into fusing two grayscale images. The positive results are proof that our approach is
valid and meaningful. The final output was derived by a piece-wise composition of two
prominent fusion techniques: ICA and CPA. It is therefore entirely possible to extend the scope
of our application to three or more methods, and not necessarily limited to the aforementioned
two. Our original aim was to generate the best possible image from a higher-level fusion step.
This is a feasible direction in which to take our research.
Hybrid fusion can essentially be defined as a regional composite of the fusion results of
ICA and CPA, whereby selected pixels of ICA and CPA are injected into an image framework
based on the regions from the edge map. The criterion for selection is defined according to active regions, translated as areas with edge and textural details, and inactive regions. It is for this reason that the findings are positive and that Hybrid fusion is able to score higher than other methods when tested using the objective fusion measure. The work presented here introduces
us to new possibilities for improvements especially in the sub-area of edge and saliency-based
image fusion.
References
[1] Z Omar, N Mitianoudis, T Stathaki. Two-dimensional Chebyshev polynomials for image fusion, 28th
Picture Coding Symposium, Nagoya. 2010.
[2] A Hyvarinen, E Oja, Independent component analysis: algorithms and applications. Neural Networks.
2000; 13(4-5): 411-430.
[3] T Stathaki. Image Fusion: Algorithms and Applications, Edited Book, Academic Press, 500. 2008.
[4] P Hill, N Canagarajah, D Bull. Image Fusion using Complex Wavelets. Proceedings of the 13th
British Machine Vision Conference, Cardiff, United Kingdom. 2002.
[5] www.metapix.de/methods.htm, accessed 3/7/2011.
[6] F Nencini, A Garzelli, S Baronti, L Alparone. Remote sensing image fusion using the curvelet
transforms, Information Fusion. 2007; 8: 143-156.
[7] M Choi, RY Kim, MR Nam, HO Kim. Fusion of multispectral and panchromatic satellite images using
the curvelet transform, IEEE Geoscience and Remote Sensing Letters. 2005; 2(2): 136-140.
[8] JL Starck, EJ Candes, DL Donoho. Gray and color image contrast enhancement by the curvelet
transform. IEEE Transaction on Image Processing. 2003; 12(6): 706-717.
[9] H Hariharan, A Gribok, B Abidi, M Abidi. Multi-modal face image fusion using Empirical Mode
Decomposition, Biometric Consortium Conference, Virginia, USA. 2005.
[10] DP Acharya, G Panda. A review of independent component analysis techniques and their applications.
IETE Technical Review. 2008; 25(6).
[11] Z Omar, N Mitianoudis, T Stathaki. Region-based image fusion using a combinatory Chebyshev-ICA
method. Proc. Intl. Conf. Acoustics, Speech and Signal Processing, Prague. 2011; 1213-1216.
[12] G Piella. A Region-based Multiresolution Image Fusion Algorithm. ISIF Fusion Conference 2002,
Annapolis. 2002.
[13] JV Stone. Independent component analysis: an introduction, TRENDS in Cognitive Sciences. 2002;
6(2): 59-64.
[14] E Oja, A Hyvarinen, P Hoyer. Image Feature Extraction and Denoising by Sparse Coding, Pattern
Analysis & Applications. 1999; 2: 104-110.
[15] A Hyvarinen, PO Hoyer. Emergence of phase and shift invariant features by decomposition of natural
images into independent feature subspaces, Neural Computation. 2000; 12(7): 1705–1720.
[16] N Mitianoudis, T Stathaki. Pixel-based and Region-based Image Fusion schemes using ICA bases.
Information Fusion. 2007; 8(2): 131-142.
[17] A Hyvarinen. Fast and robust fixed-point algorithm for independent component analysis. IEEE
Transactions on Neural Networks. 1999; 10(3): 626-634.
[18] A Hyvarinen, P Hoyer, E Oja. Image Denoising by Sparse Code Shrinkage. In S. Haykin and B. Kosko
(eds), Intelligent Signal Processing, IEEE Press. 2001.
[19] JC Mason, DC Handscomb. Chebyshev Polynomials, Chapman & Hall/CRC, Florida. 2003; 105-141.
[20] N Amthul. Image fusion using two dimensional Chebyshev polynomials, MSc Dissertation. Imperial
College. London. 2009.
[21] N Cvejic, J Lewis, D Bull, N Canagarajah. Region-based multimodal image fusion using ICA bases.
International Conference on Image Processing. Atlanta. 2006; 1801-1804.
[22] D Drajic, N Cvejic. Multimodal image fusion in presence of noise using sparse coding of ICA
coefficients. IEEE International Symposium on Signal Processing and Information Technology, 2007:
343-346.
[23] V Petrovic, T Cootes. Information Representation for Image Fusion Evaluation. 9th international
Conference on Information Fusion. 2006; 1-7.