This document presents a new method for fusing multi-focus images called the Neighbor Local Variability (NLV) method. The NLV method calculates the local variability of each pixel based on the quadratic difference between the pixel value and those of neighboring pixels. This variability is used to weight pixel values when fusing the images. The optimal size of the neighborhood considered depends on the variance and size of the blur filter, and can be estimated using a provided model. The NLV method is compared to other fusion methods and is shown to produce better results based on the Root Mean Square Error evaluation metric.
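The per-pixel variability described above can be sketched in a few lines. This is a minimal illustration, not the authors' exact NLV formulation: the neighbourhood `radius` and the weighting rule are assumptions, and the paper's model for choosing the optimal neighbourhood size is not reproduced.

```python
import numpy as np

def local_variability(img, radius=1):
    """Mean squared difference between each pixel and its neighbours
    inside a (2*radius+1)^2 window (border pixels use the available part)."""
    img = img.astype(float)
    h, w = img.shape
    lv = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win = img[y0:y1, x0:x1]
            lv[y, x] = np.mean((win - img[y, x]) ** 2)
    return lv

def fuse(img_a, img_b, radius=1):
    """Weight each source pixel by its local variability (in-focus regions
    vary more) and blend -- a simple stand-in for the NLV weighting."""
    va = local_variability(img_a, radius)
    vb = local_variability(img_b, radius)
    wa = va / (va + vb + 1e-12)
    return wa * img_a + (1 - wa) * img_b
```

Because the output is a convex combination of the two sources, every fused pixel lies between the corresponding source values, and the source with higher local variation dominates.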
LOCAL DISTANCE AND DEMPSTER-SHAFER FOR MULTI-FOCUS IMAGE FUSION sipij
This work proposes a new multi-focus image fusion method based on Dempster-Shafer theory and local variability (DST-LV). The method takes into account the behaviour of each pixel relative to its neighbours: it computes the quadratic distance between the value of each pixel I(x, y) and the values of all its neighbouring pixels. This local variability is used to determine the mass function defined in Dempster-Shafer theory, where the two classes studied are the fuzzy (blurred) part and the focused part. The results of the proposed method are significantly better than those of other methods.
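The combination step of Dempster-Shafer theory can be sketched for the two classes named above. How the paper maps local variability onto mass values is not reproduced here; the frame {'focused', 'blurred'} plus the full frame 'theta' (ignorance), and the example masses, are illustrative assumptions.

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    frame {'focused', 'blurred'}, with 'theta' standing for the whole
    frame (ignorance). Conflicting mass is renormalised away."""
    hyps = ['focused', 'blurred', 'theta']
    out = {h: 0.0 for h in hyps}
    conflict = 0.0
    for a in hyps:
        for b in hyps:
            p = m1[a] * m2[b]
            if a == b:
                out[a] += p
            elif 'theta' in (a, b):            # intersection with the frame
                out[a if b == 'theta' else b] += p
            else:                              # focused and blurred are disjoint
                conflict += p
    k = 1.0 - conflict                         # normalisation factor
    return {h: v / k for h, v in out.items()}
```

Combining two sources that both lean toward 'focused' yields a sharper belief in 'focused' than either source alone, which is the behaviour a fusion rule wants.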
De-speckling of underwater ultrasound images ajujohnkk
This document summarizes a research paper on enhancing the quality of underwater images using a speckle reducing anisotropic diffusion algorithm. The key points are:
1) Underwater images suffer from issues like limited visibility, low contrast, and speckle noise, which degrade image quality. Speckle reducing anisotropic diffusion (SRAD) is proposed to reduce speckle noise while preserving edges.
2) SRAD is based on partial differential equations and minimum mean square error filtering. It allows diffusion within homogeneous regions but inhibits diffusion across edges.
3) The document outlines the SRAD algorithm which uses a coefficient of variation to control diffusion based on local intensity gradients, reducing noise while sharpening edges.
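The edge-inhibited diffusion described above can be illustrated with one explicit update step. This is a simplified Perona-Malik-style sketch, not the SRAD equations themselves (SRAD drives diffusion with the instantaneous coefficient of variation rather than the plain gradient); `kappa` and `dt` are assumed illustrative values.

```python
import numpy as np

def diffusion_step(img, kappa=15.0, dt=0.15):
    """One explicit step of anisotropic diffusion: differences toward the
    four neighbours are attenuated by an edge-stopping function, so
    smoothing proceeds inside homogeneous regions but halts across
    strong edges."""
    img = img.astype(float)
    # differences toward N, S, E, W neighbours (replicated border)
    n = np.vstack([img[:1], img[:-1]]) - img
    s = np.vstack([img[1:], img[-1:]]) - img
    e = np.hstack([img[:, 1:], img[:, -1:]]) - img
    w = np.hstack([img[:, :1], img[:, :-1]]) - img
    g = lambda d: np.exp(-(d / kappa) ** 2)    # edge-stopping function
    return img + dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
```

A flat region is left untouched, while an isolated moderate-amplitude spike is pulled toward its neighbours; in practice the step is iterated until the image is sufficiently smoothed.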
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
The document proposes a new algorithm to reduce blocking artifacts in compressed images using a combination of the SAWS technique, Fuzzy Impulse Artifact Detection and Reduction Method (FIDRM), and Noise Adaptive Fuzzy Switching Median Filter (NAFSM). FIDRM uses fuzzy rules to detect noisy pixels, while NAFSM uses a median filter to correct pixels based on local information. Experimental results on test images show the proposed approach achieves better PSNR than other deblocking methods.
This document discusses edge detection algorithms for digital images. It begins with an abstract that introduces edge detection as an important part of image processing for object detection. The introduction then defines what constitutes an edge and explains that edge detection can help locate objects and measure properties in an image.
The body of the document first discusses what digital images are composed of and defines image processing. It then explains that edge detection aims to identify areas of high intensity variation in pixels to find object boundaries. Various edge detection algorithms are introduced, including Sobel, Roberts, Prewitt, Laplacian of Gaussian, and Canny. The document concludes by explaining that edge detection algorithms are either gradient-based, which calculate first-order intensity gradients, or Laplacian-based, which locate edges at zero crossings of the second derivative.
Performance Evaluation of 2D Adaptive Bilateral Filter For Removal of Noise F...CSCJournals
In this paper, we present a performance analysis of the adaptive bilateral filter in terms of peak signal-to-noise ratio and mean square error. The filter was evaluated while varying its parameters: the half-width values and the standard deviations. In the adaptive bilateral filter, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width; the variance of the range filter can also be adaptive. The filter is applied to sharpen grey-level and colour images by increasing the slope of the edges without producing overshoot or undershoot. The related graphs were plotted and the best filter parameters were obtained.
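The non-adaptive bilateral filter underlying this work is easy to sketch. This is the plain textbook filter, not the paper's adaptive variant (no adaptive offset, and fixed `sigma_s`/`sigma_r` values chosen here for illustration):

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Plain bilateral filter: each output pixel is a normalised average
    of its neighbours, weighted by spatial distance (sigma_s) and by
    intensity difference (sigma_r), which smooths noise while leaving
    strong edges intact."""
    img = img.astype(float)
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(win - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[y, x] = np.sum(wgt * win) / np.sum(wgt)
    return out
```

With `sigma_r` much smaller than the edge height, pixels across a step edge receive near-zero range weight, so the edge survives filtering; the adaptive version of the paper additionally tunes these parameters per region.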
An Efficient Approach of Segmentation and Blind Deconvolution in Image Restor...iosrjce
This paper introduces the concept of blind deconvolution for restoring a digital image, or small segments of a single image, that has been degraded by noise. Image restoration is used in many areas, such as decision-making in robotics and biomedical research for the analysis of tissues, cells and cellular constituents. Segmentation divides an image into multiple meaningful regions; it allows restoration of only a selected portion of the image, which reduces system complexity by focusing only on those parts that need to be restored. Many techniques exist for restoring a degraded image, such as the Wiener filter, the regularised filter and the Lucy-Richardson algorithm, but all of them require prior knowledge of the blur kernel. In blind deconvolution, the blur kernel is initially unknown. This paper uses a Gaussian low-pass filter to convolve the image, which minimises the ringing effect; ringing occurs when the transition between one point and another is not clearly defined. After removing these ringing artifacts, the restored image is visibly clearer. The aim of this paper is to provide a better algorithm for removing unwanted features from an image, with quality measured in terms of PSNR (Peak Signal-to-Noise Ratio) and MSE (Mean Square Error). The proposed technique also works well with motion blur.
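The blur-then-restore pipeline can be illustrated with non-blind Wiener deconvolution, where the Gaussian kernel is known. This is only a sketch of the restoration step the paper builds on: a blind method would additionally estimate the kernel, and the `nsr` regularisation constant is an assumption.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Normalised 2-D Gaussian low-pass kernel."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def pad_to(kernel, shape):
    """Embed the kernel in a zero image of the target shape, centred so
    that ifftshift moves its peak to the origin."""
    out = np.zeros(shape)
    kh, kw = kernel.shape
    y0 = shape[0] // 2 - kh // 2
    x0 = shape[1] // 2 - kw // 2
    out[y0:y0 + kh, x0:x0 + kw] = kernel
    return out

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    """Frequency-domain Wiener restoration for a KNOWN kernel. `nsr` is
    the assumed noise-to-signal ratio; it regularises division at the
    frequencies the blur has almost destroyed."""
    H = np.fft.fft2(np.fft.ifftshift(pad_to(kernel, blurred.shape)))
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```

For a smooth (low-frequency) image blurred by circular convolution with this kernel, the Wiener filter recovers the input almost exactly, since the attenuated frequencies are still well above the noise floor.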
Novel DCT based watermarking scheme for digital imagesIDES Editor
There is ever-growing interest in copyright protection of multimedia content, so digital watermarking techniques are widely practised; with today's internet connectivity and digital libraries, watermarking for protecting digital content is extensively researched. In this paper we present a novel watermark generation scheme based on the histogram of the image, and apply the watermark to the original image in the transform (DCT) domain. We further study the performance of the watermark against some common attacks on images. Experimental results show that the embedded watermark is imperceptible and image quality is not degraded.
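Embedding in the DCT domain can be sketched for a single 8x8 block. The paper's histogram-based watermark generation is not reproduced; the mid-frequency coefficient slots, the strength `alpha`, and the additive embedding rule are all illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ x applies a 1-D DCT to x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)
    return c

SLOTS = [(2, 3), (3, 2), (4, 1)]        # hypothetical mid-band positions

def embed(block, bits, alpha=8.0):
    """Add +/-alpha to a few mid-frequency DCT coefficients of an 8x8
    block, one coefficient per watermark bit."""
    C = dct_matrix(8)
    d = C @ block @ C.T                 # 2-D DCT of the block
    for (u, v), b in zip(SLOTS, bits):
        d[u, v] += alpha if b else -alpha
    return C.T @ d @ C                  # inverse 2-D DCT

def extract(marked, original):
    """Recover the bits from the sign of the coefficient differences
    (non-blind extraction: the original block is available)."""
    C = dct_matrix(8)
    diff = C @ (marked - original) @ C.T
    return [bool(diff[u, v] > 0) for (u, v) in SLOTS]
```

Because each modified coefficient spreads over the whole block through low-magnitude basis functions, the spatial perturbation stays small, which is what keeps the watermark imperceptible.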
Developing 3D Viewing Model from 2D Stereo Pair with its Occlusion RatioCSCJournals
We intend to make a 3D model from a stereo pair of images using a novel method of local matching in the pixel domain for calculating horizontal disparities. We also find the occlusion ratio using the stereo pair, followed by use of the Edge Detection and Image SegmentatiON (EDISON) system on one of the images, which provides a complete toolbox for discontinuity-preserving filtering, segmentation and edge detection. Instead of assigning a disparity value to each pixel, a disparity plane is assigned to each segment. We then warp the segment disparities to the original image to obtain the final 3D viewing model.
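The local matching step can be sketched as window-based block matching along epipolar rows. This is a generic SAD matcher, not the authors' exact method; window radius and disparity range are illustrative, and the segment-wise plane fitting is not reproduced.

```python
import numpy as np

def disparity_map(left, right, max_disp=4, radius=1):
    """Local matching: for each left-image pixel, slide a window over the
    right image along the same row and pick the horizontal shift with
    the smallest sum of absolute differences (SAD)."""
    left, right = left.astype(float), right.astype(float)
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(radius, h - radius):
        for x in range(radius + max_disp, w - radius):
            win = left[y - radius:y + radius + 1, x - radius:x + radius + 1]
            costs = [np.abs(win - right[y - radius:y + radius + 1,
                                        x - d - radius:x - d + radius + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

On a textured scene translated horizontally by a constant amount, the matcher recovers that shift at every interior pixel; real scenes need the segment-level regularisation described above to handle flat and occluded regions.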
An adaptive method for noise removal from real world imagesIAEME Publication
The document summarizes an adaptive method for noise removal from real world images. It proposes modifying the bilateral filter, which considers both spatial and intensity distances between pixels. The modified filter adapts its strength based on the local noise level in the image. It estimates the smoothing parameter by analyzing noise strength factors within blocks of different sizes. This helps determine the appropriate block size to use for a given image region. The filter aims to remove Gaussian noise while preserving edges and details to enhance image quality. Experimental results show it performs well across different images for a wide range of noise levels.
Region duplication forgery detection in digital imagesRupesh Ambatwad
Region duplication, or copy-move forgery, is a common type of tampering used to create a fake image. The field of blind image forensics is concerned with establishing the authenticity of digital images. Because the duplicated region in a copy-move forgery comes from the same image, detection is difficult: the tampering leaves no visual clue, but it does give rise to glitches at the pixel level.
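Those pixel-level glitches are what block-based detectors look for. A minimal sketch, under the simplifying assumption of an exact (uncompressed) copy: hash every sliding block and report positions that collide. Real detectors match robust block features instead of raw bytes so they survive recompression.

```python
import numpy as np

def find_duplicated_blocks(img, block=4):
    """Slide a block x block window over the image and group identical
    blocks by hashing their raw contents; the same block appearing at
    two positions is a candidate copy-move region. Real methods also
    filter matches by a minimum spatial offset, since overlapping flat
    regions collide trivially."""
    h, w = img.shape
    seen = {}
    matches = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                matches.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return matches
```

Pasting one region of a synthetic image over another is immediately flagged as a pair of matching block positions.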
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIPcsandit
Most watermarking methods use pixel values or coefficients as the judgment condition for embedding or extracting a watermark image. Variation of these values may lead to inaccurate conditions and hence incorrect judgments. To avoid this problem, we design a stable judgment mechanism whose outcome is not seriously influenced by such variation. The judgment principle depends on the scale relationship of two pixels: observing common signal processing operations, we find that the pixel values of a processed image usually remain stable unless the image has been manipulated by a cropping attack or a halftone transformation. This greatly helps reduce the modification strength introduced by image processing operations. Experimental results show that the proposed method can resist various attacks while keeping image quality intact.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...IDES Editor
In this paper a method is proposed to discriminate real-world scenes into natural and man-made scenes of similar depth. The global roughness of a scene image varies as a function of image depth: an increase in image depth leads to increased roughness in man-made scenes, whereas natural scenes exhibit smooth behaviour at higher image depth. This particular arrangement of pixels in the scene structure is well explained by the local texture information in a pixel and its neighbourhood. Our proposed method analyses the local texture information of a scene image using a texture unit matrix. For final classification we use both supervised and unsupervised learning, with a K-Nearest Neighbour classifier (KNN) and a Self-Organising Map (SOM) respectively. The technique's low computational complexity makes it useful for online classification.
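The texture unit underlying the texture unit matrix is simple to compute. A sketch under the standard He-Wang definition (the exact neighbour ordering used by the paper is assumed):

```python
import numpy as np

def texture_unit(window):
    """Texture unit of a 3x3 window: each of the 8 neighbours is coded
    0 (below centre), 1 (equal), or 2 (above), and the eight codes form
    a base-3 number in 0..6560."""
    c = window[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]   # assumed clockwise ordering
    code = 0
    for i, (y, x) in enumerate(order):
        v = window[y, x]
        e = 0 if v < c else (1 if v == c else 2)
        code += e * 3 ** i
    return code
```

A flat window, where every neighbour equals the centre, maps to the middle of the range, and a window whose neighbours all exceed the centre maps to the maximum; the histogram of these codes over the image is the texture unit matrix fed to the KNN and SOM classifiers.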
Qcce quality constrained co saliency estimation for common object detectionKoteswar Rao Jerripothula
Despite recent advances in joint processing of images, it may sometimes be less effective than single-image processing for object discovery problems. In this paper, aiming at common object detection, we address this problem by proposing QCCE, a novel Quality-Constrained Co-saliency Estimation method. The approach is to iteratively update the saliency maps through co-saliency estimation guided by quality scores, which indicate the degree of separation between foreground and background likelihoods (the easier the separation, the higher the quality of the saliency map). In this way, joint processing is automatically constrained by the quality of the saliency maps. Moreover, the proposed method can be applied to both unsupervised and supervised scenarios, unlike other methods designed for only one scenario. Experimental results demonstrate superior performance of the proposed method compared to state-of-the-art methods.
This document summarizes an article that proposes an automated technique for detecting JPEG ghosts, which indicate image tampering. The technique segments the image using graph-based segmentation. It then generates difference images by subtracting recompressed versions of the original image at different quality levels. Ghosts that coincide with image segments and fall within a size range are flagged as valid. The technique was able to automatically detect ghosts in 80% of images from a tampered image database where ghosts were manually visible. However, the overall forgery detection rate was low, as pasted regions sometimes appeared lighter instead of darker in difference images. Future work will aim to improve classification of ghosts as dark or light regions to enhance detection performance.
Improving Performance of Texture Based Face Recognition Systems by Segmenting...IDES Editor
Textures play an important role in image recognition. This paper investigates the performance of three texture-based feature extraction methods for face recognition. The methods compared are the Grey Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP) and Elliptical Local Binary Template (ELBT). Experiments were conducted on a facial expression database, the Japanese Female Facial Expression (JAFFE) set. Across all facial expressions, LBP with 16 vicinity pixels is found to be the best face recognition method among those tested. Experimental results also show that classification based on segmenting the face region improves recognition accuracy.
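The basic 8-neighbour LBP descriptor compared above can be sketched directly (the paper's 16-neighbour variant samples a larger circle, which is not reproduced here):

```python
import numpy as np

def lbp_code(window):
    """Basic 8-neighbour Local Binary Pattern of a 3x3 window: each
    neighbour >= centre contributes one bit, giving a code in 0..255."""
    c = window[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    return sum((1 << i) for i, (y, x) in enumerate(order) if window[y, x] >= c)

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels -- the
    texture descriptor typically compared between face regions."""
    h, w = img.shape
    hist = np.zeros(256, dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(img[y - 1:y + 2, x - 1:x + 2])] += 1
    return hist
```

Face recognition then reduces to comparing these histograms (often per region, which is why segmenting the face region helps) with a distance such as chi-squared.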
Random Valued Impulse Noise Removal in Colour Images using Adaptive Threshold...IDES Editor
To remove random-valued impulse noise from colour images, an efficient impulse detection and filtering scheme is presented. The locally adaptive threshold for impulse detection is derived from the pixels of the filtering window. Restoration of a noisy pixel is based on brightness and chromaticity information obtained from the neighbouring pixels in the filtering window. Experimental results demonstrate that the proposed scheme yields much superior performance in comparison with other colour image filtering methods.
The document proposes a hybrid method called Wavelet Embedded Anisotropic Diffusion (WEAD) for image denoising. WEAD is a two-stage filter that first applies anisotropic diffusion to reduce noise, followed by wavelet-based Bayesian shrinkage. This reduces the convergence time of anisotropic diffusion, allowing the image to be denoised with less blurring compared to anisotropic diffusion or wavelet methods alone. Experimental results on various images demonstrate that WEAD achieves better denoising performance than anisotropic diffusion or Bayesian shrinkage methods, as measured by higher PSNR and SSIM scores and fewer required iterations.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMScsandit
The document presents a study comparing fingerprint image compression using wavelets and wave atoms transforms. It finds that wave atoms transforms provide better performance than current wavelet-based standards like WSQ. Specifically:
- Wave atoms achieved higher PSNR values and compression ratios than wavelets when reconstructing images from a reduced number of coefficients.
- An algorithm was proposed using wave atom decomposition, non-uniform quantization, and entropy coding that achieved a compression ratio of 18 with a PSNR of 35.04 dB, outperforming the WSQ standard.
- Minutiae detection on original and reconstructed images showed wave atoms better preserved local fingerprint structures. Therefore, wave atoms are concluded to be more suitable than wavelets
Image enhancement is a method of improving the quality of an image, and contrast is a major aspect of it. Traditional contrast-enhancement methods like histogram equalization result in over- or under-enhancement of the image, especially for lower-resolution images. This paper aims at developing a new Fuzzy Inference System to enhance the contrast of low-resolution images while overcoming the shortcomings of the traditional methods. Results obtained using both approaches are compared.
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposureiosrjce
This document discusses boundary detection techniques for images. It proposes a generalized boundary detection method (Gb) that combines low-level and mid-level image representations in a single eigenvalue problem to detect boundaries. Gb achieves state-of-the-art results at low computational cost. Soft segmentation and contour grouping methods are also introduced to further improve boundary detection accuracy with minimal extra computation. The document presents outputs of Gb on sample images and concludes that Gb effectively detects boundaries in a principled manner by jointly resolving constraints from multiple image interpretation layers in closed form.
Local Distance and Dempster-Shafer for Multi-Focus Image Fusion sipij
The document describes a new method for fusing multi-focus images using Dempster-Shafer theory and local variability (DST-LV). The method calculates the quadratic distance between each pixel value and its neighboring pixels to determine local variability, which is then used to define mass functions in Dempster-Shafer theory. The proposed method was tested on 150 images and found to produce higher quality fused images compared to other popular fusion methods like PCA, DWT, and gradient bilateral, with lower RMSE values. Potential applications of the method include image fusion for drones, medical imaging, and food quality inspection.
Data-Driven Motion Estimation With Spatial AdaptationCSCJournals
The pel-recursive computation of 2-D optical flow raises a wealth of issues, such as the treatment of outliers, motion discontinuities and occlusion. Our proposed approach deals with these issues within a common framework. It relies on the use of a data-driven technique called Generalised Cross Validation to estimate the best regularisation scheme for a given pixel. In our model, the regularisation parameter is a general matrix whose entries can account for different sources of error. The motion vector estimation takes into consideration local image properties following a spatially adaptive approach where each moving pixel is supposed to have its own regularisation matrix. Preliminary experiments indicate that this approach provides robust estimates of the optical flow.
Denoising and Edge Detection Using SobelmethodIJMER
The main aim of our study is to detect edges in an image without noise interference, since in many images the edges carry important information. This paper presents a method that combines the Sobel operator with discrete wavelet de-noising to perform edge detection on images containing white Gaussian noise. Among the many edge detection methods, Sobel is one; however, using the Sobel operator or median filtering alone, salt-and-pepper noise cannot be removed properly. We therefore first use a complex wavelet transform to remove noise, and then apply the Sobel operator for edge detection. The experimental results show that, compared to other methods, this approach has a visibly stronger edge detection effect.
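The Sobel step of the pipeline above is the standard pair of 3x3 kernels; a minimal sketch (the wavelet de-noising stage is not reproduced, and borders are simply left at zero):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude from the two 3x3 Sobel kernels; large values
    mark edges (computed for interior pixels only)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient
    ky = kx.T                                   # vertical gradient
    img = img.astype(float)
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * win)
            gy = np.sum(ky * win)
            mag[y, x] = np.hypot(gx, gy)
    return mag
```

On a clean vertical step edge the response is large exactly along the edge and zero in flat regions, which is why de-noising first matters: impulse noise would otherwise produce spurious isolated responses.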
An adaptive method for noise removal from real world imagesIAEME Publication
The document summarizes an adaptive method for noise removal from real world images. It proposes modifying the bilateral filter, which considers both spatial and intensity distances between pixels. The modified filter adapts its strength based on the local noise level in the image. It estimates the smoothing parameter by analyzing noise strength factors within blocks of different sizes. This helps determine the appropriate block size to use for a given image region. The filter aims to remove Gaussian noise while preserving edges and details to enhance image quality. Experimental results show it performs well across different images for a wide range of noise levels.
Region duplication forgery detection in digital imagesRupesh Ambatwad
Region duplication or copy move forgery is a common type of tampering scheme carried out to create a fake image. The field on blind image forensics depends upon the authenticity of the digital image. As in copy move forgery the duplicated region belongs to the same image, the detection of tampering is complex as it does not leave a visual clue. But the tampering gives rise to glitches at pixel level
VARIATION-FREE WATERMARKING TECHNIQUE BASED ON SCALE RELATIONSHIPcsandit
Most watermark methods use pixel values or coefficients as the judgment condition to embed or
extract a watermark image. The variation of these values may lead to the inaccurate condition
such that an incorrect judgment has been laid out. To avoid this problem, we design a stable
judgment mechanism, in which the outcome will not be seriously influenced by the variation.
The principle of judgment depends on the scale relationship of two pixels. From the observation
of common signal processing operations, we can find that the pixel value of processed image
usually keeps stable unless an image has been manipulated by cropping attack or halftone
transformation. This can greatly help reduce the modification strength from image processing
operations. Experiment results show that the proposed method can resist various attacks and
keep the image quality friendly.
Texture Unit based Monocular Real-world Scene Classification using SOM and KN...IDES Editor
In this paper a method is proposed to discriminate
real world scenes in to natural and manmade scenes of similar
depth. Global-roughness of a scene image varies as a function
of image-depth. Increase in image depth leads to increase in
roughness in manmade scenes; on the contrary natural scenes
exhibit smooth behavior at higher image depth. This particular
arrangement of pixels in scene structure can be well explained
by local texture information in a pixel and its neighborhood.
Our proposed method analyses local texture information of a
scene image using texture unit matrix. For final classification
we have used both supervised and unsupervised learning using
K-Nearest Neighbor classifier (KNN) and Self Organizing
Map (SOM) respectively. This technique is useful for online
classification due to very less computational complexity.
Qcce quality constrained co saliency estimation for common object detectionKoteswar Rao Jerripothula
Despite recent advances in joint processing of images,
sometimes it may not be as effective as single image
processing for object discovery problems. In this paper while
aiming for common object detection, we attempt to address
this problem by proposing a novel QCCE: Quality Constrained
Co-saliency Estimation method. The approach here is to iteratively
update the saliency maps through co-saliency estimation
depending upon quality scores, which indicate the degree of
separation of foreground and background likelihoods (the easier
the separation, the higher the quality of saliency map). In this
way, joint processing is automatically constrained by the quality
of saliency maps. Moreover, the proposed method can be applied
to both unsupervised and supervised scenarios, unlike other
methods which are particularly designed for one scenario only.
Experimental results demonstrate superior performance of the
proposed method compared to the state-of-the-art methods.
This document summarizes an article that proposes an automated technique for detecting JPEG ghosts, which indicate image tampering. The technique segments the image using graph-based segmentation. It then generates difference images by subtracting recompressed versions of the original image at different quality levels. Ghosts that coincide with image segments and fall within a size range are flagged as valid. The technique was able to automatically detect ghosts in 80% of images from a tampered image database where ghosts were manually visible. However, the overall forgery detection rate was low, as pasted regions sometimes appeared lighter instead of darker in difference images. Future work will aim to improve classification of ghosts as dark or light regions to enhance detection performance.
Improving Performance of Texture Based Face Recognition Systems by Segmenting...IDES Editor
Textures play an important role in the recognition of
images. This paper investigates the performance of three
texture based feature extraction methods for face
recognition. The methods chosen for comparative study are the Grey Level
Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP)
and Elliptical Local Binary Template (ELBT). Experiments
were conducted on a facial expression database, Japanese
Female Facial Expression (JAFFE). With all facial expressions
LBP with 16 vicinity pixels is found to be a better face
recognition method among the tested methods. Experimental
results show that classification based on segmenting face
region improves recognition accuracy.
Random Valued Impulse Noise Removal in Colour Images using Adaptive Threshold...IDES Editor
To remove random valued impulse noise from
colour images, an efficient impulse detection and filtering
scheme is presented. The locally adaptive threshold for
impulse detection is derived from the pixels of the filtering
window. The restoration of the noisy pixel is done on the basis
of brightness and chromaticity information obtained from the
neighbouring pixels in the filtering window. Experimental
results demonstrate that the proposed scheme yields much
superior performance in comparison with other colour image
filtering methods.
The document proposes a hybrid method called Wavelet Embedded Anisotropic Diffusion (WEAD) for image denoising. WEAD is a two-stage filter that first applies anisotropic diffusion to reduce noise, followed by wavelet-based Bayesian shrinkage. This reduces the convergence time of anisotropic diffusion, allowing the image to be denoised with less blurring compared to anisotropic diffusion or wavelet methods alone. Experimental results on various images demonstrate that WEAD achieves better denoising performance than anisotropic diffusion or Bayesian shrinkage methods, as measured by higher PSNR and SSIM scores and fewer required iterations.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMScsandit
The document presents a study comparing fingerprint image compression using wavelets and wave atoms transforms. It finds that wave atoms transforms provide better performance than current wavelet-based standards like WSQ. Specifically:
- Wave atoms achieved higher PSNR values and compression ratios than wavelets when reconstructing images from a reduced number of coefficients.
- An algorithm was proposed using wave atom decomposition, non-uniform quantization, and entropy coding that achieved a compression ratio of 18 with a PSNR of 35.04 dB, outperforming the WSQ standard.
- Minutiae detection on original and reconstructed images showed wave atoms better preserved local fingerprint structures. Therefore, wave atoms are concluded to be more suitable than wavelets
Image enhancement is a method of improving the quality of an image, and contrast is a major aspect. Traditional contrast enhancement methods like histogram equalization result in over/under enhancement of the image, especially a lower resolution one. This paper aims at developing a new Fuzzy Inference System to enhance the contrast of low resolution images, overcoming the shortcomings of the traditional methods. Results obtained using both approaches are compared.
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposureiosrjce
This document discusses boundary detection techniques for images. It proposes a generalized boundary detection method (Gb) that combines low-level and mid-level image representations in a single eigenvalue problem to detect boundaries. Gb achieves state-of-the-art results at low computational cost. Soft segmentation and contour grouping methods are also introduced to further improve boundary detection accuracy with minimal extra computation. The document presents outputs of Gb on sample images and concludes that Gb effectively detects boundaries in a principled manner by jointly resolving constraints from multiple image interpretation layers in closed form.
Local Distance and Dempster-Shafer for Multi-Focus Image Fusionsipij
This work proposes a new method of image fusion using Dempster-Shafer theory and local variability
(DST-LV). This method takes into account the behaviour of each pixel with its neighbours. It consists in
calculating the quadratic distance between the value of the pixel I(x, y) at each point and the value of all
the neighbouring pixels. Local variability is used to determine the mass function defined in Dempster-Shafer theory. The two classes of Dempster-Shafer theory studied are: the fuzzy part and the focused part.
The results of the proposed method are significantly better when compared to the results of other
methods.
Local Distance and Dempster-Shafer for Multi-Focus Image Fusionsipij
The document describes a new method for fusing multi-focus images using Dempster-Shafer theory and local variability (DST-LV). The method calculates the quadratic distance between each pixel value and its neighboring pixels to determine local variability, which is then used to define mass functions in Dempster-Shafer theory. The proposed method was tested on 150 images and found to produce higher quality fused images compared to other popular fusion methods like PCA, DWT, and gradient bilateral, with lower RMSE values. Potential applications of the method include image fusion for drones, medical imaging, and food quality inspection.
Data-Driven Motion Estimation With Spatial AdaptationCSCJournals
The pel-recursive computation of 2-D optical flow raises a wealth of issues, such as the treatment of outliers, motion discontinuities and occlusion. Our proposed approach deals with these issues within a common framework. It relies on the use of a data-driven technique called Generalised Cross Validation to estimate the best regularisation scheme for a given pixel. In our model, the regularisation parameter is a general matrix whose entries can account for different sources of error. The motion vector estimation takes into consideration local image properties following a spatially adaptive approach where each moving pixel is supposed to have its own regularisation matrix. Preliminary experiments indicate that this approach provides robust estimates of the optical flow.
Denoising and Edge Detection Using SobelmethodIJMER
The main aim of our study is to detect edges in an image without any noise. In many images, edges carry important information. This paper presents a method which combines the Sobel operator and discrete wavelet de-noising to perform edge detection on images which include white Gaussian noise. Among the many methods for edge detection, Sobel is one; by using the Sobel operator or median filtering alone, salt and pepper noise cannot be removed properly, so we first use a complex wavelet to remove noise and then apply the Sobel operator to perform edge detection on the image. From the pictures obtained in the experiment, we can observe that, compared to other methods, this method has a more pronounced effect on edge detection.
ANALYSIS OF INTEREST POINTS OF CURVELET COEFFICIENTS CONTRIBUTIONS OF MICROS...sipij
This paper focuses on an improved edge model based on Curvelet coefficient analysis. The Curvelet transform is
a powerful tool for multiresolution representation of objects with anisotropic edges. Curvelet coefficient
contributions have been analyzed using the Scale Invariant Feature Transform (SIFT), commonly used to study
local structure in images. The permutation of Curvelet coefficients from the original image and the edge image
obtained from a gradient operator is used to improve the original edges. Experimental results show that this
method brings out details on edges when the decomposition scale increases.
This document describes a new method for designing digital filters using fractional-pixel sums. The method involves converting an image between two resolutions and using the pixel values from the intermediate resolution to generate directional masks. These masks can then be combined linearly to form different types of digital filters, such as moving-average filters. The filters produced by this method were tested on inverted bi-level images and shown to effectively remove blocking artifacts when compared to original continuous-tone images. The quality of inverted images was found to improve with higher resolution.
Deferred Pixel Shading on the PLAYSTATION®3Slide_N
This document summarizes a deferred pixel shading algorithm implemented on the PlayStation 3 system. The algorithm runs pixel shaders on the Synergistic Processing Elements of the Cell processor concurrently with the GPU for rendering images. Experimental results found that running the pixel shading on 5 SPEs achieved a performance of up to 85Hz at 720p resolution, comparable to running on a high-end GPU. This indicates that the Cell processor can effectively enhance GPU performance by offloading pixel shading work.
WAVELET DECOMPOSITION AND ALPHA STABLE FUSIONsipij
This article gives a new method of fusing multi-focus images combining the Laplacian pyramid and wavelet decomposition, using the alpha-stable distance as a selection rule. We start by decomposing the multi-focus images into several pyramid levels, then apply the wavelet decomposition to each level. The originality of this work is to use the alpha-stable distance to fuse the wavelet images at each level of the pyramid. To obtain the final fused image, we reconstruct the combined image at each level of the pyramid. We compare our method to other existing methods in the literature and find that it generally performs better.
FINGERPRINTS IMAGE COMPRESSION BY WAVE ATOMScsandit
Fingerprint image compression based on geometric transforms is an important
research topic; in recent years many transforms have been proposed to give the best
representation of a particular type of image, the fingerprint image, such as classical wavelets and
wave atoms. In this paper we present a comparative study between these transforms in
order to use them in compression. The results show that, for fingerprint images, wave atoms
offer better performance than the current transform based compression standard. The wave
atom transform brings a considerable contribution to the compression of fingerprint
images by achieving high compression ratios and PSNR values with a reduced number of
coefficients. In addition, the proposed method is verified with objective and subjective testing.
Fuzzy Entropy Based Optimal Thresholding Technique for Image Enhancement ijsc
Soft computing is likely to play a progressively important role in many applications, including image enhancement. The paradigm for soft computing is the human mind. The soft computing critique has been particularly strong with fuzzy logic. Fuzzy logic is a knowledge representation and a rule for the management of uncertainty. In this paper the multi-dimensional optimization problem is addressed by discussing optimal thresholding using fuzzy entropy for image enhancement. This technique is compared with bi-level and multi-level thresholding, and optimal threshold values are obtained for different levels of speckle-noisy and low contrasted images. The fuzzy entropy method produced better results compared to bi-level and multi-level thresholding techniques.
Image Interpolation Techniques with Optical and Digital Zoom Concepts -semina...mmjalbiaty
Full details about spatial and intensity resolution, optical and digital zoom concepts, and the three common interpolation algorithms for implementing zoom in image processing.
Effective Pixel Interpolation for Image Super ResolutionIOSR Journals
In the near future, there is an imminent demand for High Resolution images. In order to fulfil this
demand, Super Resolution (SR) is an approach used to renovate High Resolution (HR) image from one or more
Low Resolution (LR) images. The aspiration of SR is to dig up the self-sufficient information from each LR
image in that set and combine the information into a single HR image. Conventional interpolation methods can
produce sharp edges; however, they are approximators and tend to weaken fine structure. In order to overcome
the drawback, a new approach of Effective Pixel Interpolation method is incorporated. It has been numerically
verified that the resulting algorithm reinstates sharp edges and enhances fine structures satisfactorily,
outperforming conventional methods. The suggested algorithm has also proved efficient enough to be applicable
for real-time processing for resolution enhancement of image. Statistical examples are shown to verify the claim.
Image fusion technology is also used to fuse two processed images obtained through the algorithm.
Image enhancement techniques play a vital role in improving the quality of an image. An enhancement
technique basically enhances the foreground information, retains the background and improves the
overall contrast of an image. In some cases the background of an image hides its structural
information. This paper proposes an algorithm which enhances the foreground and the background
parts separately, stretches the contrast of the image at inter-object and intra-object level, and then
combines them into an enhanced image. The results are compared with various classical methods using image
quality measures.
The document discusses various edge detection algorithms used in image processing. It begins with an introduction to edge detection and its importance in image processing tasks like object detection, segmentation and data compression. It then classifies and describes some commonly used edge detection methods like Sobel, Roberts, Prewitt, Laplacian of Gaussian (LoG). Gradient based methods like Sobel, Roberts and Prewitt detect edges by calculating the gradient of the image. LoG is a laplacian based method that searches for zero crossings in the second derivative of an image to find edges. The document provides detailed explanations of how these algorithms work including the convolution masks and equations used to calculate edges.
2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr...IOSR Journals
Abstract: Due to higher processing power to cost ratio, it is now possible to replace the manual detection methods used in the IC (Integrated Circuit) industry by Image-processing based automated methods, to detect a broken pin of an IC connected on a PCB during manufacturing, which will make the process faster, easier and cheaper. In this paper an accurate and fast automatic detection method is used where the top view camera shots of PCBs are processed using advanced methods of 2-dimensional discrete wavelet pre-processing before applying edge-detection. Comparison with conventional edge detection methods such as Sobel, Prewitt and Canny edge detection without 2-D DWT is also performed. Keywords :2-dimensional wavelets, Edge detection, Machine vision, Image processing, Canny.
This document discusses techniques for image segmentation and edge detection. It proposes a generalized boundary detection method called Gb that combines low-level and mid-level image representations in a single eigenvalue problem to detect boundaries. Gb achieves state-of-the-art results at low computational cost. Soft segmentation is also introduced to improve boundary detection accuracy with minimal extra computation. Common methods for edge detection are described, including gradient-based, texture-based, and projection profile-based approaches. Improved Harris and corner detection algorithms are presented to more accurately detect edges and corners. The output of Gb using soft segmentations as input is shown to correlate well with occlusions and whole object boundaries while capturing general boundaries.
NEIGHBOUR LOCAL VARIABILITY FOR MULTIFOCUS IMAGES FUSION
Signal & Image Processing: An International Journal (SIPIJ) Vol.11, No.6, December 2020
DOI: 10.5121/sipij.2020.11603
NEIGHBOUR LOCAL VARIABILITY FOR MULTI-FOCUS IMAGES FUSION
Ias Sri Wahyuni¹ and Rachid Sabre²
¹ Universitas Gunadarma, Jl. Margonda Raya No. 100 Depok 16424, Indonesia
² Laboratory Biogéosciences CNRS, University of Burgundy/Agrosup Dijon, France
ABSTRACT
The goal of multi-focus image fusion is to integrate images with different focus objects in order to obtain a
single image with all focus objects. In this paper, we give a new method based on neighbour local
variability (NLV) to fuse multi-focus images. At each pixel, the method uses the local variability calculated
from the quadratic difference between the value of the pixel and the value of all pixels in its
neighbourhood. It expresses the behaviour of the pixel with respect to its neighbours. The variability
preserves edges because it detects sharp intensity changes in the image. The proposed fusion weights
each pixel by the exponential of its local variability. The quality of this fusion
depends on the size of the neighbourhood region considered. The size depends on the variance and the size
of the blur filter. We start by modelling the value of the neighbourhood region size as a function of the
variance and the size of the blur filter. We compare our method to other methods given in the literature.
We show that our method gives a better result.
KEYWORDS
Neighbour Local Variability; Multi-focus image fusion; Root Mean Square Error (RMSE)
1. INTRODUCTION
The limitation of the depth of field of optical lenses often causes difficulty in capturing an image
containing all objects in focus on the scene. Only objects in depth of field are sharp, while other
objects are out of focus. Merging multi-focus images is the solution to this problem. There are
different approaches that have been carried out in the literature. These approaches can be
separated into two kinds: the spatial domain method and the multi-scale fusion method. The
spatial domain fusion method is performed directly on the source images. The techniques of the
spatial domain directly consider the pixels of the image. Among the spatial domain fusion methods
are: the mean, principal component analysis (PCA) [1], the maximum selection rule, bilateral
gradient based methods [2] and the guided image filter (GIF) method [3]. A known drawback of
spatial domain approaches is that they produce spatial distortion in the merged image. Spatial
distortion can be handled very well by multi-scale approaches to image fusion. The merging process
in multi-scale methods is done on the source images after their decomposition into several scales.
The discrete wavelet transform (DWT) [4]-[9], Laplacian pyramid image fusion [10]-[17], the
discrete cosine transform with variance calculation (DCT+var) [18] and the salience detection
based method (SD) [19] are examples of image fusion techniques in the transform domain. The
objective of this work is to propose a multi-focus image fusion
at the pixel level based on the neighboring local variability (NLV). It consists in calculating for
each pixel the quadratic distance which separates it from the other pixels belonging to its
neighborhood. It expresses the behavior of the pixel with respect to all the pixels belonging to its
neighborhood. The variability preserves edges because it detects sharp intensity changes in
the image. The fusion of each pixel is given by a linear combination of the values of the pixel in each
image, weighted by the exponential of its local variability.
The precision of this fusion depends on the width of the region of pixels considered in the
neighbourhood. To determine this optimal width, we minimize the RMSE error. We propose a
model giving this width as a function of the variance and the size of the blur filter.
A comparative study of our method against other existing methods in the literature (DWT and
LP-DWT) was carried out; it showed that our method gives the best result in terms of the root
mean square error (RMSE).
This paper is organized as follows: the second section presents the stages of the proposed fusion
process and a model giving the size of the neighbourhood. In Section 3, we study the
experimental results and compare our method to some recent methods. Section 4 gives the
conclusion of this work. In Section 5, we give mathematical details to show the property of local
variability.
2. NEIGHBORING LOCAL VARIABILITY METHOD
Consider the fusion of two images, I1 and I2, that respectively have blurred parts B1 and B2. These
images have the same size, N1 x N2, where B1 and B2 are disjoint. The idea of the NLV merge
method is to add the pixel values of the two images weighted by the local variability of each
image. This local variability at (x, y) is calculated from the average of the squared
differences between the value of the pixel (x, y) and the values of its neighbours. The NLV at (x, y) is
defined as follows:
v_{a,k}(x, y) = sqrt( (1/R) Σ_{m=−a..a} Σ_{n=−a..a} | I_k(x, y) − I'_k(x + m, y + n) |² )   (1)
where k is the index of the kth source image (k = 1, 2), a is the size of the neighbourhood,

I'_k(x + m, y + n) = I_k(x + m, y + n), if 1 ≤ x + m ≤ N1 and 1 ≤ y + n ≤ N2,
I'_k(x + m, y + n) = I_k(x, y), otherwise,

R = (2a + 1)² − card(S),

S = { (m, n) ∈ [−a, a]² \ {(0, 0)} : I'_k(x + m, y + n) = I_k(x, y) }.
In Annex 1, we show that this local variability is small when the location is in a
blurred area (B1 or B2).
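For illustration, the local variability of equation (1) can be sketched in Python as follows. This is an illustrative sketch, not the authors' implementation; the function name and the NumPy array representation of the image are our own assumptions, and border handling follows the definition of I'_k above.

```python
import numpy as np

def nlv(image, a, x, y):
    """Neighbour local variability v_{a,k}(x, y) of equation (1).

    Out-of-bounds neighbours take the centre value I(x, y) itself, as in
    the definition of I'_k; positions whose effective value equals the
    centre form the set S and are excluded from the count R.
    """
    h, w = image.shape
    center = float(image[x, y])
    sq_sum = 0.0
    card_s = 0  # card(S): neighbours whose effective value equals the centre
    for m in range(-a, a + 1):
        for n in range(-a, a + 1):
            if m == 0 and n == 0:
                continue  # (0, 0) is excluded from S by definition
            xm, yn = x + m, y + n
            value = float(image[xm, yn]) if 0 <= xm < h and 0 <= yn < w else center
            if value == center:
                card_s += 1
            else:
                sq_sum += (center - value) ** 2
    r = (2 * a + 1) ** 2 - card_s  # R = (2a+1)^2 - card(S)
    return np.sqrt(sq_sum / r)
```

For a uniform region the variability is exactly zero, which is why blurred (smoothed) areas receive small weights in the fusion.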
In this work, we give a novel fusion method where each pixel of each image is weighted by the
exponential of its neighbour local variability. We calculate the neighbour local variability from the
quadratic difference between the value of the pixel and all the pixel values of its neighbours. The
idea comes from the fact that the variation of the values in a blurred region is smaller than the
variation of the values in a focused region. We use the neighbourhood, with size "a", of a pixel
defined as follows:

(x + i, y + j) where i ∈ {−a, −a + 1, ..., a − 1, a} and j ∈ {−a, −a + 1, ..., a − 1, a}.
For example, the neighbourhood with the smallest size ("a" = 1) contains: (x − 1, y − 1), (x − 1, y),
(x − 1, y + 1), (x, y − 1), (x, y + 1), (x + 1, y − 1), (x + 1, y), (x + 1, y + 1).
(x-1, y-1) (x-1,y) (x-1,y+1)
(x,y-1) (x,y) (x,y+1)
(x+1,y-1) (x+1,y) (x+1,y+1)
Fig. 2. Pixel at (x,y) within its neighborhood, a = 1.
The fusion with size "a" follows these steps.
Consider M original source images, I1, ..., IM, with different focus and the same size (N1 x N2).
Step 1: For each pixel of each source image, calculate the neighbour local variability (NLV),
v_{a,k}(x, y), defined in (1).
Step 2: The proposed fusion for the pixel (x, y) is F(x, y), calculated by the following model:
F(x, y) = [ Σ_{k=1..M} exp(v_{a,k}(x, y)) I_k(x, y) ] / [ Σ_{k=1..M} exp(v_{a,k}(x, y)) ]   (2)
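The weighted combination of equation (2) can be sketched as follows; this assumes the NLV maps have already been computed for each source image, and the names are illustrative only.

```python
import numpy as np

def nlv_fuse(images, v_maps):
    """Fuse M source images per equation (2): each pixel is weighted
    by the exponential of its neighbour local variability."""
    weights = [np.exp(v) for v in v_maps]          # exp(v_{a,k}(x, y))
    numerator = sum(w * img for w, img in zip(weights, images))
    denominator = sum(weights)
    return numerator / denominator
```

A pixel with much larger variability in one source (i.e. in focus there) dominates the weighted sum, so the fused image keeps the focused version of that pixel.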
We remark that the method depends on the size "a". To determine the value of "a", we look for the
value minimizing the Root Mean Square Error (RMSE), where RMSE is defined in Section 3. We
show that the value of "a" depends on the blurred area.
The choice of the neighborhood size "a" used in the NLV method depends on the variance (v) and
the size (s) of the blurring filter. Our problem is to obtain a model that gives the value of "a"
according to "v" and "s". We take a sample of 1000 images that we blur using a Gaussian
filter with different values of v and s (v = 1, 2, 3, ..., 35 and s = 1, 2, 3, ..., 20).
After that, for each image blurred with parameters "v" and "s", we apply our fusion method
with different values of "a" (a = 1, 2, ..., 17) and determine the value of "a" that gives the
minimum RMSE, denoted by a_I(v,s). Then we take the mean of a_I(v,s) over the 1000 images,
denoted ā(v,s), which is justified because the coefficient of variation is smaller than 0.1.
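The per-image selection of a_I(v,s) can be sketched as follows (Python; `fuse` and `metric` stand for any fusion routine, such as the NLV fusion of eq. (2), and any error measure, such as the RMSE of Section 3 — the function and parameter names here are ours):

```python
def best_a(fuse, metric, sources, reference, a_values=range(1, 18)):
    """Return the neighborhood size "a" in {1, ..., 17} whose fused
    result minimizes the given error metric against the reference."""
    scores = {a: metric(reference, fuse(sources, a)) for a in a_values}
    return min(scores, key=scores.get)
```

For example, with a fusion whose error is smallest at a = 3, `best_a` returns 3; in the paper this argmin is computed per image and then averaged over the 1000 images.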
Firstly, we studied the variation of "a" according to the variance "v" for each fixed size of the
blurring filter "s". We noted that this variation is logarithmic; see, for example, "s = 8" in Fig. 4.
Using nonlinear regression, we obtained the model: a = 2.1096 ln(v) + 2.8689.
Fig. 4. Graph between "a" and variance of blurring filter where "s"=8.
In general, the model is:

\[ a = c_1(s)\,\ln(v) + c_2(s) \qquad (3) \]

where $c_1$ and $c_2$ are functions that depend on "s". The graphs that describe $c_1$ and $c_2$,
respectively, are shown in Fig. 5 and Fig. 6.
Fig. 5. Graph of $c_1(s)$
Fig. 6. Graph of $c_2(s)$
By fitting a model of $c_1$ and a model of $c_2$ and introducing these models in (3), we get the
following general model:

\[ a(v,s) = \frac{13.0348761}{1 + 29.0909139\,\exp(-0.5324955\, s)}\,\ln(v) + \frac{75.062269 \times 0.434}{1.225175\, s}\,\exp\!\left(-0.5\left(\frac{\log s - 2.655551}{1.225175}\right)^{2}\right) \qquad (4) \]
As "a" is an integer, we have two choices: either the floor of a(v,s), denoted ⌊a(v,s)⌋, or the
ceiling of a(v,s), denoted ⌈a(v,s)⌉, where ⌈x⌉ = min{n ∈ ℤ | n ≥ x} and ⌊x⌋ = max{m ∈ ℤ | m ≤ x}.
Since the RMSE values of both choices are very slightly different, we can choose either of them.
We use a = ⌈a(v,s)⌉ in the remaining part of this paper.
To validate this model, we applied it to 100 images (we generated 100 pairs of multi-focus images
with different variance and blur filter size values), and the results were as good as expected. To use
the NLV method, one must first estimate the variance and size of the blur filter from the blurred
image. For this, some articles give methods to estimate the variance of the blur
filter and to detect blur, as in [23] - [27]. Subsequently, we developed another method in
which we combined the Laplacian pyramid with the NLV method, using NLV as the
selection rule, denoted LP-NLV.
3. EXPERIMENTAL STUDY
We applied the NLV method on a dataset of images [26] using MATLAB R2013a. These images
were blurred using a Gaussian filter with many values of variance and size. To simplify the
reading of the article, we present only two examples in 256x256 format (N1 = N2 = 256):
the first is the "bird" image (Fig. 1) and the second the "bottle" image (Fig. 2). Each example
consists of two images with a different focus and a reference image.
We then compare the NLV method to other methods existing in the literature: the PCA
method [1], the Discrete Wavelet Transform (DWT) method [6], the Laplacian pyramid methods
LP_PCA [15] and LP_DWT [17], and the bilateral gradient (BG) method [2].
For that, we used the following frequently used evaluation criterion:
Root Mean Square Error (RMSE)
RMSE measures how much the pixel values of the fused image F deviate from the reference
image R. RMSE is defined as follows:

\[ \mathrm{RMSE} = \sqrt{\frac{1}{rc}\sum_{i=1}^{r}\sum_{j=1}^{c}\left(R(i,j) - F(i,j)\right)^{2}} \qquad (5) \]

where R is the reference image, F is the fused image, r × c is the size of the input image, and (i, j)
denotes the pixel location. A smaller value of RMSE indicates a better fusion result; if the
RMSE is equal to zero, the fused image is identical to the reference image.
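Equation (5) has a direct implementation (a Python sketch; the paper's experiments used MATLAB):

```python
import numpy as np

def rmse(reference, fused):
    """Root Mean Square Error (eq. 5) between reference image R
    and fused image F of size r x c."""
    reference = np.asarray(reference, dtype=float)
    fused = np.asarray(fused, dtype=float)
    r, c = reference.shape
    return float(np.sqrt(((reference - fused) ** 2).sum() / (r * c)))
```

As stated above, identical images give an RMSE of exactly zero, and a constant offset of d gives an RMSE of |d|.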
For the two images presented in this paper, blurred with variance = 10 and blurring filter size = 5,
the model (4) gives the neighbor sizes "a" = 5 and "a" = 6. Here, we use "a" = 6 because it
results in a smaller RMSE than "a" = 5, although the RMSE values for "a" = 5 and "a" = 6 are
very slightly different.
Figure 1. Fusion of the 'bird' images (blurred image 1 and blurred image 2) by the proposed NLV method.
We found that the NLV method gives a better fusion than the other methods; see Fig. 1.
Table 1. Performance evaluation of image 'bird'

|      | PCA    | DWT    | LP-DWT | LP-PCA | DCT+var | Bilateral gradient | GIF    | SD      | NLV    | LP-NLV |
| RMSE | 6.9205 | 3.5678 | 1.5190 | 1.4681 | 2.6860  | 8.8378             | 2.2792 | 10.4547 | 0.5466 | 0.8431 |
Table 1 gives the RMSE values calculated for the ten methods. As we can see in Table 1
for the image 'bird', the smallest RMSE is given by the NLV method, the second smallest by
LP-NLV, and the third by LP-PCA. The NLV method is the best among the above methods, and
LP-NLV is better than LP-PCA and LP-DWT.
Figure 2. Fusion of the 'bottle' images (blurred image 1 and blurred image 2) by the proposed method (NLV).
We found that the NLV method performs better than the other methods; see Fig. 2.
To confirm our visual result, we calculated the evaluation metric RMSE for the ten methods; see
Table 2. Ranking these methods by increasing RMSE, the smallest value is obtained by NLV,
the second smallest by LP-NLV, and the third smallest by LP-PCA.
Table 2. Performance evaluation of images of 'the bottle'

|      | PCA    | DWT   | LP-DWT | LP-PCA | DCT+var | Bilateral gradient | GIF   | SD     | NLV   | LP-NLV |
| RMSE | 15.005 | 5.384 | 2.528  | 2.485  | 2.642   | 20.380             | 3.681 | 16.919 | 0.902 | 1.584  |
According to the RMSE evaluation measure, Table 3 gives the mean and standard deviation
of the RMSE for the considered methods applied to 150 images.
Table 3. Statistic parameters of the sample (150 images)

| Methods                     | PCA   | DWT   | LP_DWT | LP_PCA | DCT_var | BG     | NLV   | LP_NLV |
| Mean                        | 8.713 | 4.194 | 2.049  | 1.995  | 2.839   | 11.044 | 0.591 | 1.344  |
| Standard deviation          | 3.866 | 1.381 | 0.756  | 0.743  | 1.308   | 4.859  | 0.204 | 0.697  |
| Time of execution per image | 7s    | 5s    | 7s     | 7s     | 6s      | 6s     | 5s    | 7s     |
Table 3 shows that the proposed method (NLV) has the smallest average RMSE. The RMSE
histograms for the 150 images by the different methods (Figures 3, 4, 5, 6, 7, 8 and 9) show, for
almost all methods, that the RMSE values are almost symmetrically centered around the mean value.
Figure 3. Histogram of PCA method
Figure 4. Histogram of LP_PCA method
Figure 5. Histogram of DWT method
Figure 6. Histogram of LP_DWT method
Figure 7. Histogram of Bilateral gradient method
Figure 8. Histogram of NLV method
Figure 9. Histogram of LP_NLV method
In order to compare the proposed method analytically with the other methods, an analysis of
variance (ANOVA) with dependent samples (dependence by image) was carried out. The software R
gives the following ANOVA table:
Table 4. Anova table with one factor: method
Df Sum Sq Mean Sq F value Pr(>F)
Method 9 25467 2829.6 742 <2e-16 ***
Residuals 1341 5114 3.8
As Pr(>F) is smaller than 1% in Table 4, the methods are significantly different. The
Newman-Keuls test is then performed to compare the methods two-by-two, giving groups
that have significantly the same mean RMSE. The software R shows the results of the test as
follows:
Table 5. Test of Newman Keuls
$groups
RMSE groups
SD 12.6072900 a
BG 11.0447767 b
PCA 8.7139600 c
DWT 4.1941660 d
DCT_var 2.8395233 e
GIF 2.5146353 e
LP_DWT 2.0496413 f
LP_PCA 1.9954953 f
LP_NLV 1.3446913 g
NLV 0.5921593 h
From Table 5, all the methods are significantly different, except that the methods DCT_var and
GIF form the group "e" and the methods LP_DWT and LP_PCA form the group "f".
The proposed method NLV, in group "h", has the smallest mean (0.5921593) and is significantly
different from all the other methods. We conclude that the proposed method gives better results
than the other methods.
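For readers without R at hand, the F statistic underlying Table 4 can be sketched in Python (this is a simplified one-way, independent-samples version of the analysis; the paper's ANOVA additionally models the per-image dependence, so the numbers would differ):

```python
import numpy as np

def anova_f(groups):
    """One-way ANOVA F statistic: ratio of between-group to
    within-group mean squares over a list of samples."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    k = len(groups)
    n_total = all_vals.size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_between = ss_between / (k - 1)          # df = k - 1 (9 in Table 4)
    ms_within = ss_within / (n_total - k)      # residual df
    return ms_between / ms_within
```

Groups with identical means give F = 0, while well-separated RMSE distributions (as in Table 5) give a very large F, hence the small Pr(>F).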
4. CONCLUSION
In this paper we proposed an image fusion method based on neighbor local variability (NLV).
This method has been described in detail. Applying it to 150 images gives a
significant improvement in both visual and quantitative image fusion compared to other fusion
methods. We also used the Laplacian pyramid with NLV as the selection criterion, LP-NLV. Based
on the results of the experiment, we note that LP-NLV is better than LP-DWT and DWT.
The proposed method has the advantage of taking into account the variability between each pixel
and its neighbours; thus the coefficients of the pixels located in the focused part carry more
weight. This method can be extended to multimodal images, used in particular in medicine
(scanner, ultrasound, radiography, etc.), to reveal certain cancer cells seen in one
image and not visible in another image.
1) Drones are a technological means of capturing images at different altitudes, opening up
broad possibilities for improving photography. The captured images can be of the same
scene, zooming in on different objects and at different altitudes, producing several
images of the same scene but with different objects in focus.
2) In the food industry, cameras are used to take pictures to control the quality of some
products. Each camera targets one of many objects placed on a conveyor belt to detect
an anomaly. To obtain a photo containing all the objects clearly, one can use the proposed
fusion method, which gives more details.
The perspectives of this work:
• We intend to extend our method to color images because color conveys additional
information.
• Encouraged by these results, we plan to extend the NLV method to the fusion of more than
two images, taking into account the local variability of each image (intra-variability) and
the variability between the images (inter-variability). Inter-variability can detect "abnormal
pixels" among images.
REFERENCES
[1] Naidu, V.P.S. and. Raol, J.R. (2008) “Pixel-level Image Fusion using Wavelets and Principal
Component Analysis”, Defence Science Journal, Vol. 58, No. 3, pp. 338-352.
[2] Tian, J., Chen, L., Ma, L., Yu, W., (2011) “Multi-focus image fusion using a bilateral gradient-based
sharpness criterion”, Optic Communications, 284, pp 80-87.
[3] Zhan, K., Teng, J., Li, Q., Shi, J. (2015) “A novel explicit multi-focus image fusion method”, Journal
of Information Hiding and Multimedia Signal Processing, vol. 6, no. 3, pp.600-612.
[4] Mallat, S.G. (1989) “A Theory for multiresolution signal decomposition: The wavelet
representation”, IEEE Trans. Pattern Anal. Mach. Intel., 11(7), 674-93.
[5] Pajares, G., Cruz, J.M. (2004) “A Wavelet-Based Image Fusion Tutorial”, Pattern Recognition 37.
Science Direct.
[6] Guihong, Q., Dali, Z., Pingfan, Y. (2001) “Medical image fusion by wavelet transform modulus
maxima”. Opt. Express 9, pp. 184-190.
[7] Indhumadhi, N., Padmavathi, G., (2011) “Enhanced Image Fusion Algorithm Using Laplacian
Pyramid and Spatial Frequency Based Wavelet Algorithm”, International Journal of Soft Computing
and Engineering (IJSCE). ISSN: 2231-2307, Vol. 1, Issue 5.
[8] Sabre, R. Wahyuni, I.S, (2020) “Wavelet Decomposition and Alpha Stable”, Signal and Image
Processing (SIPIJ), Vol. 11, No. 1. pp. 11-24.
[9] Jinjiang Li , Genji Yuan and Hui Fan (2019) “Multifocus Image Fusion Using Wavelet-Domain-
Based Deep CNN”, Computational Intelligence and Neuroscience, Vol. 2019 Article ID 4179397 |
https://doi.org/10.1155/2019/4179397
[10] Burt, P.J., Adelson, E.H. (1983) “The Laplacian Pyramid as a Compact Image Code”, IEEE
Transactions on communication, Vol.Com-31, No 40.
[11] Burt, P.J. (1984) “The Pyramid as a Structure for Efficient Computation. Multiresolution Image
Processing and Analysis”, A. Rosenfeld, Ed., Springer-Verlag. New York.
[12] Burt, P.J., Kolezynski, R.J. (1993) “Enhanced Image Capture Through Fusion”, in: International
Conference on Computer Vision, pp. 173-182.
[13] Wang, W., Chang, F. (2011) “A Multi-focus Image Fusion Method Based on Laplacian Pyramid”,
Journal of Computers, Vol.6, No 12.
[14] Zhao, P., Liu, G., Hu, C., Hu, and Huang, H. (2013) “Medical image fusion algorithm on the Laplace-
PCA”. Proc. 2013 Chinese Intelligent Automation Conference, pp. 787-794.
[15] Verma, S., Kaur, K., Kumar, M. (2016) “Hybrid image fusion algorithm using Laplacian Pyramid
and PCA method”, Proceedings of the Second International Conference on Information and
Communication Technology for Competitive Strategies.
[16] Wahyuni, I.S, Sabre, R. (2019) “Multifocus Image Fusion Using Laplacian Pyramid Technique Based
on Alpha Stable Filter”, CRASE Vol. 5, No. 2. pp. 58-62.
[17] Wahyuni, I.S, Sabre, R. (2016) “Wavelet Decomposition in Laplacian Pyramid for Image Fusion”,
International Journal of Signal Processing Systems Vol. 4, No. 1. pp. 37-44.
[18] Haghighat, M.B.A, Aghagolzadeh, A., Seyedarabi, H. (2010) “Real-time fusion of multifocus images
for visual sensor networks”. Machine vision and image processing (MVIP), 2010 6th Iranian. 2010.
[19] Bavirisetti, D.P., and Dhuli, R.(2016) ‘‘Multi-focus image fusion using multi-scale image
decomposition and saliency detection’’, Ain Shams Eng. J., to be published. [Online]. Available:
http://dx.doi.org/ 10.1016/j.asej.2016.06.011.
[20] Pentland, A. (1984) “A new sense for depth of field”, IEEE Transactions on Pattern Analysis and
Machine Intelligent, Vol. 9, No. 4, pp. 523-531.
[21] Nayar, S.K.(1992) ”Shape from Focus System”, Proc. of IEEE Conf. Computer Vision and Pattern
Recognition, pp. 302-308.
[22] Gonzalez, R.C., Woods, R.E. (2002) “Digital Image Processing”, 2nd edition. Prentice Hall.
[23] Liu,R., Li, Z., Jia, J.(2008) “Image Partial Blur Detection and Classification”, Computer Vision and
Pattern Recognition, CVPR 2008. IEEE Conference DOI: 10.1109/CVPR.2008.4587465
[24] Aslantas, V. (2007) “A depth estimation algorithm with a single image.” Optic Express, Vol. 15,
Issue 8. OSA Publishing.
[25] Elder, J.H., Zucker, S.W. (1998) “Local Scale Control for Edge Detection and Blur Estimation.”
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 7.
[26] www.rawsamples.ch. Accessed: 15 November 2017.
[27] Kumar, A., Paramesran, R., Lim, C. L., and Dass, S.C. (2016) “Tchebichef moment based restoration
of Gaussian blurred images”. Applied Optics, Vol. 55, Issue 32, pp. 9006-9016.
5. ANNEX 1
Consider, without loss of generality, that we have a pixel (x, y) that is in focus in image I1 and
blurred in image I2, as in Fig. 1.
Fig. 1. Two multi-focus images, the yellow part is blurred area and the white part is clear (focus) area.
The neighbor local variability of images I1 and I2, respectively is defined in (1) by:
\[ v_{a,1}(x,y) = \exp\!\left(\sqrt{\frac{1}{R}\, r_1(x,y)}\right) \quad \text{and} \quad v_{a,2}(x,y) = \exp\!\left(\sqrt{\frac{1}{R}\, r_2(x,y)}\right), \]

where $r_1(x,y)$ and $r_2(x,y)$ can be written as follows:

\[ r_1(x,y) = \sum_{m=0}^{2a}\sum_{n=0}^{2a} \left|I_1(x,y) - I_1(x+(m-a),\, y+(n-a))\right|^2 \qquad (6) \]

\[ r_2(x,y) = \sum_{m=0}^{2a}\sum_{n=0}^{2a} \left|I_2(x,y) - I_2(x+(m-a),\, y+(n-a))\right|^2 \qquad (7) \]
Let $I_R$ be the reference image of the multi-focus images I1 and I2. Moreover, it is shown in [20]
and [21] that a blurred image can be seen as the convolution product of the reference image
with a Gaussian filter:
\[ I_1(x,y) = \begin{cases} (w_1 * I_R)(x,y), & (x,y) \in B_1 \\ I_R(x,y), & (x,y) \notin B_1 \end{cases} \qquad I_2(x,y) = \begin{cases} (w_2 * I_R)(x,y), & (x,y) \in B_2 \\ I_R(x,y), & (x,y) \notin B_2 \end{cases} \qquad (8) \]
where $w_1$ and $w_2$ are Gaussian filters defined by:

\[ w_1(k,l) = \frac{\exp\!\left(-\dfrac{k^2 + l^2}{2\sigma_1^2}\right)}{\sum_{k=-s_1}^{s_1}\sum_{l=-s_1}^{s_1} \exp\!\left(-\dfrac{k^2 + l^2}{2\sigma_1^2}\right)}, \quad (k,l) \in [-s_1, s_1]^2, \]

\[ w_2(k,l) = \frac{\exp\!\left(-\dfrac{k^2 + l^2}{2\sigma_2^2}\right)}{\sum_{k=-s_2}^{s_2}\sum_{l=-s_2}^{s_2} \exp\!\left(-\dfrac{k^2 + l^2}{2\sigma_2^2}\right)}, \quad (k,l) \in [-s_2, s_2]^2. \]
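A normalized Gaussian filter of this form can be built as follows (a Python sketch; the function and parameter names are ours):

```python
import numpy as np

def gaussian_kernel(sigma, s):
    """Normalized Gaussian filter w(k, l) on the window [-s, s]^2,
    as used to model the blurred regions in (8)."""
    k = np.arange(-s, s + 1)
    kk, ll = np.meshgrid(k, k, indexing="ij")
    w = np.exp(-(kk ** 2 + ll ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()   # weights sum to 1, matching the denominator above
```

The normalization makes the weights sum to one, so convolving a constant image with this kernel leaves it unchanged.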
The convolution product is defined as follows:

\[ (w_1 * I_R)(x,y) = \sum_{k=-s_1}^{s_1}\sum_{l=-s_1}^{s_1} w_1(k,l)\, I_R(x-k,\, y-l), \qquad (w_2 * I_R)(x,y) = \sum_{k=-s_2}^{s_2}\sum_{l=-s_2}^{s_2} w_2(k,l)\, I_R(x-k,\, y-l). \]
Put

\[ r_1(x,y) = \sum_{m=0}^{2a}\sum_{n=0}^{2a} \left|D^1_{(m,n)}(x,y)\right|^2 \quad \text{and} \quad r_2(x,y) = \sum_{m=0}^{2a}\sum_{n=0}^{2a} \left|D^2_{(m,n)}(x,y)\right|^2 \qquad (9) \]

where

\[ D^1_{(m,n)}(x,y) = I_1(x,y) - I_1(x+(m-a),\, y+(n-a)), \qquad (10) \]
\[ D^2_{(m,n)}(x,y) = I_2(x,y) - I_2(x+(m-a),\, y+(n-a)). \qquad (11) \]
Proposition:
The local variability on the blurred part is smaller than the local variability on the focused part.
Let $(x,y) \in B_2$ (the blurred part of I2) and $(x,y) \notin B_1$ (the focused part of I1); then
$r_2(x,y) \le r_1(x,y)$.
Proof:
For that, we use the Plancherel theorem:

\[ \sum_{m=0}^{2a}\sum_{n=0}^{2a} \left|D^1_{(m,n)}(x,y)\right|^2 = \frac{1}{(2a+1)^2} \sum_{m=0}^{2a}\sum_{n=0}^{2a} \left|\hat{D}^1_{(n,m)}(x,y)\right|^2 \qquad (12) \]

where $\hat{D}^1_{(n,m)}(x,y)$ is the Fourier transform of $D^1_{(m,n)}(x,y)$:

\[ \hat{D}^1_{(n,m)}(x,y) = FT\!\left[D^1_{(m,n)}(x,y)\right] = FT\!\left[I_1(x,y) - I_1(x+(m-a),\, y+(n-a))\right] \qquad (13) \]
As $(x,y) \in B_2$ and therefore $(x,y) \notin B_1$, from (8), equation (13) can be written as follows:

\[ \hat{D}^1_{(n,m)}(x,y) = FT\!\left[I_R(x,y) - I_R(x+(m-a),\, y+(n-a))\right] \qquad (14) \]
and

\[ I_2(x,y) = \sum_{k=-s_2}^{s_2}\sum_{l=-s_2}^{s_2} w_2(k,l)\, I_R(x-k,\, y-l). \qquad (15) \]

By using the definition of the convolution, equation (15) can be written as:

\[ I_2(x,y) = \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} w_2(k,l)\,\mathbf{1}_{[-s_2,s_2]^2}(k,l)\, I_R(x-k,\, y-l) \qquad (16) \]
and

\[ I_2(x,y) = \left(w_2\,\mathbf{1}_{[-s_2,s_2]^2}\right) * I_R(x,y) \qquad (17) \]

where

\[ \mathbf{1}_{[-s_2,s_2]^2}(k,l) = \begin{cases} 1, & \text{if } (k,l) \in [-s_2,s_2]^2 \\ 0, & \text{otherwise.} \end{cases} \]
The Fourier transform of $D^2_{(m,n)}(x,y)$ is

\[ \hat{D}^2_{(n,m)}(x,y) = FT\!\left[\left(w_2\,\mathbf{1}_{[-s_2,s_2]^2}\right) * I_R(x,y) - \left(w_2\,\mathbf{1}_{[-s_2,s_2]^2}\right) * I_R(x+(m-a),\, y+(n-a))\right] \]
This proves that the local variability in the blurred part is smaller than the local variability in the
focused part.
AUTHORS
Rachid Sabre received the PhD degree in statistics from the University of Rouen, Rouen, France, in 1993,
and the Habilitation (HdR) from the University of Burgundy, Dijon, France, in 2003. He joined Agrosup
Dijon, Dijon, France, in 1995, where he is an Associate Professor. From 1998 through 2010, he served as a
member of the “Institut de Mathématiques de Bourgogne”, France. He was a member of the Scientific
Council of AgroSup Dijon from 2009 to 2013. In 2012, he became a member of the “Laboratoire
Electronique, Informatique, et Image” (Le2i), France. Since 2019 he has been a member of the Laboratory
Biogeosciences UMR CNRS, University of Burgundy. He is author/co-author of numerous papers in
scientific and technical journals and conference proceedings. His research interests lie in the areas of
statistical processes and spectral analysis for signal and image processing.
Ias Sri Wahyuni was born in Jakarta, Indonesia, in 1986. She earned the B.Sc. and M.Sc. degrees in
mathematics from the University of Indonesia, Depok, Indonesia, in 2008 and 2011, respectively. In 2009,
she joined the Department of Informatics (Computing) System, Gunadarma University, Depok,
Indonesia, as a Lecturer. She is currently a PhD student at the University of Burgundy, Dijon, France. Her
current research interests include statistics and image processing.