Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ... (CSCJournals)
In this paper we present a comparative study of the fusion of visual and thermal images using different wavelet transformations. Coefficients of the discrete wavelet transforms of the visual and thermal images are computed separately and combined, and the inverse discrete wavelet transform is then taken to obtain the fused face image. Both the Haar and Daubechies (db2) wavelet transforms have been used to compare recognition results. For the experiments, the IRIS Thermal/Visual Face Database was used. Experimental results using the Haar and Daubechies wavelets show that the approach presented here achieves a maximum success rate of 100% in many cases.
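The pipeline the abstract describes (forward DWT on each image, combine coefficients, inverse DWT) can be sketched in a few lines of NumPy. This is a hypothetical minimal single-level Haar implementation for even-sized, registered images, not the authors' code; the abstract does not state the combination rule, so averaging the approximation subband and max-selecting the detail subbands is assumed here as one common choice.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar DWT. Returns approximation LL and details LH, HL, HH."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

def fuse(visual, thermal):
    """Fuse two registered images: average the approximation coefficients,
    keep the larger-magnitude detail coefficient at each position."""
    cv, ct = haar2d(visual), haar2d(thermal)
    LL = (cv[0] + ct[0]) / 2.0
    details = [np.where(np.abs(v) >= np.abs(t), v, t)
               for v, t in zip(cv[1:], ct[1:])]
    return ihaar2d(LL, *details)
```

The db2 variant used in the paper would replace the 2-tap averaging/differencing with the 4-tap Daubechies filters; the fusion and inversion steps are unchanged.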
Image Denoising Using Earth Mover's Distance and Local Histograms (CSCJournals)
In this paper an adaptive range and domain filtering method is presented. In the proposed method, local histograms are computed to tune the range and domain extents of a bilateral filter. A noise histogram is estimated to measure the noise level at each pixel of the noisy image, and the extents of the range and domain filters are determined from the pixel noise level. Experimental results show that the proposed method effectively removes noise while preserving detail. The proposed method performs better than the bilateral filter, and the restored test images have higher PSNR than those obtained with the popular BayesShrink wavelet denoising method.
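For reference, the baseline bilateral filter that the abstract adapts combines a spatial (domain) Gaussian and an intensity (range) Gaussian. A brute-force NumPy sketch follows; this is a generic fixed-parameter implementation, not the paper's adaptive version, and the window radius and sigmas are illustrative choices:

```python
import numpy as np

def bilateral(img, radius=2, sigma_d=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a float image in [0, 1]:
    each output pixel is a weighted mean over a (2*radius+1)^2 window,
    weighted by spatial closeness (domain, sigma_d) and intensity
    similarity (range, sigma_r)."""
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    norm = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            wgt = (np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_d ** 2))
                   * np.exp(-(shifted - img) ** 2 / (2.0 * sigma_r ** 2)))
            out += wgt * shifted
            norm += wgt
    return out / norm
```

The method in the abstract would make `radius` (domain extent) and `sigma_r` (range extent) per-pixel functions of the locally estimated noise level rather than constants.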
Medical images are often of low contrast and noisy (lacking clarity) because of the circumstances under which they are acquired. De-noising these images is a difficult task, as the result should be free of artifacts and of blurring at the edges. The Bayesian shrinkage method has been chosen for thresholding because of its subband-dependence property. Spatial-domain and wavelet-based de-noising techniques using soft thresholding are compared with the proposed technique based on a Genetic Algorithm (GA). The GA procedure is driven by PSNR, and the results are compared with existing spatial-domain and wavelet-based de-noising filters. The proposed algorithm gives improved visual clarity for diagnosis from medical images. The GA-based method assesses performance using the quantitative metric PSNR (Peak Signal-to-Noise Ratio) and visual inspection. Simulation results show that the proposed GA-based technique outperforms the existing de-noising filters.
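The PSNR-driven genetic search the abstract alludes to can be illustrated with a toy sketch. Everything below is hypothetical (population size, mutation scale, and the choice of a single Gaussian-blur strength as the gene are my own simplifications; the paper evolves shrinkage parameters, not a blur sigma), shown only to make the selection/crossover/mutation loop with a PSNR fitness concrete:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def gauss_smooth(img, sigma):
    """Separable Gaussian blur with reflect padding; this single smoothing
    strength is the 'chromosome' being evolved in this toy example."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2)); k /= k.sum()
    pad = np.pad(img, r, mode='reflect')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 1, tmp)

def ga_denoise(noisy, ref, pop=12, gens=15, seed=0):
    """Toy GA: fitness-ranked selection, arithmetic crossover, Gaussian
    mutation, with PSNR against a reference image as the fitness."""
    rng = np.random.default_rng(seed)
    sigmas = rng.uniform(0.3, 3.0, pop)
    for _ in range(gens):
        fit = np.array([psnr(ref, gauss_smooth(noisy, s)) for s in sigmas])
        parents = sigmas[np.argsort(fit)[::-1][:pop // 2]]      # selection
        children = (parents + rng.permutation(parents)) / 2.0   # crossover
        children += rng.normal(0.0, 0.1, children.size)         # mutation
        sigmas = np.clip(np.concatenate([parents, children]), 0.1, 4.0)
    fit = np.array([psnr(ref, gauss_smooth(noisy, s)) for s in sigmas])
    best = sigmas[np.argmax(fit)]
    return best, gauss_smooth(noisy, best)
```

In practice a clean reference is unavailable at test time, so fitness would be computed on training pairs or a no-reference surrogate; the abstract's comparison against spatial and wavelet filters is what motivates PSNR as the fitness here.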
The single image dehazing based on efficient transmission estimation (AVVENIRE TECHNOLOGIES)
We propose a novel haze imaging model for single image haze removal. The haze imaging model is formulated using the dark channel prior (DCP), scene radiance, intensity, atmospheric light and the transmission medium. The dark channel prior is based on statistics of outdoor haze-free images: we find that, in most local regions that do not cover the sky, some pixels (called dark pixels) very often have very low intensity in at least one color (RGB) channel. In hazy images, the intensity of these dark pixels in that channel is contributed mainly by the airlight; these dark pixels can therefore directly provide an accurate estimate of the haze transmission. Combining the haze imaging model with an interpolation method, we can recover a high-quality haze-free image and produce a good depth map.
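The dark-channel and transmission estimates described above are straightforward to sketch. The following is a minimal illustration, not the authors' code; the 15x15 patch, omega = 0.95 and the transmission floor t0 = 0.1 follow common dark-channel-prior practice rather than anything stated in the abstract:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the RGB channels, then a local minimum
    filter over a patch x patch window (edge-padded)."""
    mins = img.min(axis=2)
    r = patch // 2
    pad = np.pad(mins, r, mode='edge')
    h, w = mins.shape
    out = np.full((h, w), np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, pad[dy:dy + h, dx:dx + w])
    return out

def estimate_transmission(img, A, omega=0.95, patch=15):
    """t(x) = 1 - omega * dark_channel(I / A); A is the per-channel airlight."""
    return 1.0 - omega * dark_channel(img / A, patch)

def dehaze(img, A, t0=0.1):
    """Invert the haze imaging model I = J*t + A*(1 - t), flooring t at t0."""
    t = estimate_transmission(img, A)
    return (img - A) / np.maximum(t, t0)[..., None] + A
```

The abstract's interpolation step would refine the blocky transmission map before the final inversion; that refinement is omitted here.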
Feature isolation and extraction of satellite images for remote sensing appli... (IAEME Publication)
This document discusses techniques for enhancing satellite images using linear contrast stretching and intensity level slicing. It begins with an abstract describing image enhancement and its purpose of improving visual appearance and aspects of images. It then provides background on sources of image degradation and the need for enhancement. The document describes two techniques - linear contrast stretching and intensity level slicing. It provides examples of their application and results, showing how they can increase contrast and emphasize certain intensity levels. In conclusion, it states that choice of enhancement technique depends on image characteristics and application, and computational cost is important for real-time use.
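Both techniques named in the summary are one-liners over the pixel array. The sketch below is a generic NumPy rendition (percentile bounds and the 8-bit output range are my assumptions, not parameters from the document):

```python
import numpy as np

def linear_stretch(img, lo_pct=2.0, hi_pct=98.0):
    """Linear contrast stretching: map the [lo, hi] percentile range of
    the input linearly onto the full 8-bit range [0, 255]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-9) * 255.0
    return np.clip(out, 0.0, 255.0).astype(np.uint8)

def intensity_slice(img, low, high, value=255, background=None):
    """Intensity level slicing: highlight pixels whose value lies in
    [low, high]; keep the rest unchanged, or set them to `background`."""
    out = img.copy() if background is None else np.full_like(img, background)
    out[(img >= low) & (img <= high)] = value
    return out
```

Percentile bounds (rather than the raw min/max) make the stretch robust to a few outlier pixels, which matters for satellite scenes with sensor spikes.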
This lecture is about the particle image velocimetry (PIV) technique. It includes discussion of the basic elements of a PIV setup: image capturing, laser lighting, synchronization, and correlation analysis.
Boosting CED using robust orientation estimation (ijma)
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding it an external orientation field obtained from a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which a pre-calculated orientation, obtained using local and integration scales, is used. Experiments show that the proposed scheme performs much better in noisy environments than the traditional Coherence Enhancement Diffusion.
This document discusses atmospheric turbulence degraded image restoration using back propagation neural network. It proposes using a feed-forward neural network with 20 hidden layers and one output layer trained with backpropagation to restore images degraded by atmospheric turbulence and noise. The network is trained on normalized input images and tested on blurred images. Results show the proposed method achieves higher PSNR values than other techniques like kurtosis minimization and PCA, indicating better image quality restoration. Future work may incorporate median filtering and using first order image features for network weight assignment.
Investigation of Chaotic-Type Features in Hyperspectral Satellite Data (csandit)
This document analyzes the use of Lyapunov exponents to determine chaotic structure in hyperspectral satellite data. It investigates an EO-1 Hyperion hyperspectral image of a mixed forest site in Turkey. Lyapunov exponents are calculated from reconstructed phase spaces of spectral signals for different object classes. Positive and negative Lyapunov exponents indicate chaotic behavior is present. The results demonstrate Lyapunov exponents can be used as discriminative features to improve hyperspectral image classification accuracy by capturing the chaotic structures in the data.
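The core quantity is easy to demonstrate on a toy chaotic system. The sketch below is a generic illustration, not the paper's pipeline (which reconstructs phase spaces from measured spectral signals); it estimates the largest Lyapunov exponent of the logistic map directly from the map's known derivative, where a positive value signals chaotic divergence of nearby orbits:

```python
import numpy as np

def lyapunov_logistic(r, n=20000, burn=200, x0=0.4):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1-2x)|.
    Positive => chaos; negative => convergence to a stable cycle."""
    x = x0
    acc = 0.0
    for i in range(n + burn):
        x = r * x * (1.0 - x)
        if i >= burn:            # discard the transient before averaging
            acc += np.log(abs(r * (1.0 - 2.0 * x)))
    return acc / n
```

At r = 4 the exponent is known analytically to be ln 2, so the estimator can be sanity-checked; for real hyperspectral signals the derivative is unknown and must be estimated from neighbour divergence in the reconstructed phase space, as the paper does.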
Computationally Efficient Methods for Sonar Image Denoising using Fractional ... (CSCJournals)
Sonar images, produced by the coherent nature of the scattering phenomenon, inherit a multiplicative component called speckle and contain nearly homogeneous as well as textured regions with relatively rare edges. Speckle removal is a pre-processing step required in applications such as the detection and classification of objects in sonar images. In this paper, computationally efficient Fractional Integral Mask algorithms for removing speckle noise from sonar images are proposed. The Riemann-Liouville definition of fractional calculus is used to create fractional integral masks in eight directions. Computational efficiency is obtained by using a single mask that incorporates the significant coefficients of the eight directional masks, so that only one convolution operation is required. Heterogeneous patch classification of the sonar image is based on a newly proposed naive homogeneity index that depends on the texture strength of the patches, and the despeckling filters can be adjusted to these patches. Applying the mask convolution only to the selected patches further reduces the computational complexity. The non-homomorphic approach used in the proposed method avoids the undesired bias that occurs in the traditional homomorphic approach. Experiments show that the required mask size depends directly on the fractional order; the mask size can be reduced for lower fractional orders, ensuring reduced computational complexity at lower orders. Experimental results substantiate the effectiveness of the despeckling method, and several no-reference image performance evaluation criteria are used to evaluate it.
REMOVING RAIN STREAKS FROM SINGLE IMAGES USING TOTAL VARIATION (ijma)
ABSTRACT
Rainy image restoration is considered one of the most important aspects of image restoration for improving outdoor vision. Many fields make use of this kind of restoration, such as driver assistance, environmental monitoring, animal monitoring, computer vision, face recognition, object recognition and personal photography. Image restoration simply means removing the noise from images, and most images carry some noise from the environment. Moreover, image quality assessment plays an important role in the evaluation of image enhancement algorithms. In this research, we use total variation to remove rain streaks from a single image. It shows good performance compared to other methods, using the measurements MSE, PSNR and VIF for images with references, and BRISQUE for images without references.
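A minimal total-variation smoother can be sketched as explicit gradient descent on the Rudin-Osher-Fatemi objective. This is a generic TV denoiser, not the paper's rain-streak formulation (which typically decomposes the image into rain and background layers); the regularization weight, step size and iteration count below are illustrative:

```python
import numpy as np

def tv_denoise(img, lam=0.12, step=0.2, iters=150):
    """Gradient descent on 0.5*||u - f||^2 + lam * TV(u), with the TV
    term smoothed by a small eps to keep the gradient well-defined."""
    f = img.astype(float)
    u = f.copy()
    eps = 1e-6
    for _ in range(iters):
        # forward differences with Neumann (replicate) boundaries
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # divergence = adjoint of the forward differences
        divx = px.copy(); divx[:, 1:] -= px[:, :-1]
        divy = py.copy(); divy[1:, :] -= py[:-1, :]
        u = u - step * ((u - f) - lam * (divx + divy))
    return u
```

TV regularization suppresses thin oscillatory structures (such as rain streaks or noise) while preserving large piecewise-constant regions and their edges, which is why it suits this restoration task.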
Speckle noise reduction from medical ultrasound images using wavelet thresh... (IAEME Publication)
This document proposes a method for reducing speckle noise from medical ultrasound images using wavelet thresholding and anisotropic diffusion. It first takes the logarithm of noisy images to convert speckle noise from multiplicative to additive. It then applies median filtering, followed by wavelet denoising using soft thresholding of detail coefficients. Finally, it uses SRAD filtering to enhance contrast while retaining edges. The proposed method is tested on three images and is found to improve SNR and PSNR compared to other filters based on statistical measurements.
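Of the pipeline's stages, the wavelet soft-thresholding step is the easiest to show in isolation. The sketch below is a simplification of that one stage only (Haar in place of the paper's unspecified wavelet, and without the log-transform, median-filtering or SRAD stages); the universal threshold with a MAD noise estimate is a standard default, not taken from the document:

```python
import numpy as np

def soft(x, t):
    """Soft thresholding: shrink magnitudes by t, zeroing small values."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(img, thresh=None):
    """One-level Haar decomposition, soft-threshold the detail subbands,
    then reconstruct. thresh=None uses the universal threshold with a
    robust (median absolute deviation) noise estimate from HH."""
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0; LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0; HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    if thresh is None:
        sigma = np.median(np.abs(HH)) / 0.6745       # MAD noise estimate
        thresh = sigma * np.sqrt(2.0 * np.log(img.size))
    LH, HL, HH = soft(LH, thresh), soft(HL, thresh), soft(HH, thresh)
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2] = a + d; out[1::2] = a - d
    return out
```

In the full method this step runs on the log-transformed image, since the logarithm converts multiplicative speckle into the additive noise that thresholding assumes.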
NIR Three dimensional imaging of breast model using f-DOT (Nagendra Babu)
NIR three-dimensional optical imaging of a breast model using f-DOT with a target-specific contrast agent, together with three-dimensional mathematical modeling of DOT and f-DOT.
Effective segmentation of sclera, iris and pupil in noisy eye images (TELKOMNIKA JOURNAL)
In today's security-sensitive environment, iris recognition is the most closely studied technique among the various biometric technologies for personal authentication. One of the key steps in an iris recognition system is accurate segmentation of the iris from its surroundings, including the pupil and sclera, in a captured eye image. In the proposed method, the input image is first preprocessed using bilateral filtering. After preprocessing, contour-based features such as brightness, color and texture are extracted. Entropy is then measured over these contour-based features to effectively distinguish the regions in the image. Finally, a convolutional neural network (CNN) is used to segment the sclera, iris and pupil based on the entropy measure. The results are analyzed to demonstrate that the proposed segmentation method performs better than existing methods.
A Novel Approach for Edge Detection using Modified ACIES Filtering (idescitation)
The most damaging attribute of an image is noise, which may be introduced during image capture, transmission or processing, and may be dependent on or independent of the image content. To amplify the high-frequency components, this paper proposes a new class of filter, called the ACIES filter, whose spatial filter shape has a high positive component at the centre. Suppression of noise alone can only be achieved by smoothing the image; sharpening with the ACIES filter instead highlights the fine details of an image and enhances its clarity at the boundaries. Experimental results show that the performance of the proposed ACIES filter is acceptable and that the quality of the resulting images is remarkably good even under strongly noise-corrupted conditions. If the edges in an image can be identified precisely, then all the objects in the image can be located and basic properties such as region, edge, and contour can be measured. The performance of the proposed approach is measured with image quality metrics such as PSNR, MSE, NAE and NK.
Analysis of remote sensing imagery involves identifying targets through their tone, shape, size, pattern, texture, and relationships to other objects. Targets may be environmental or artificial features appearing as points, lines, or areas. Interpretation relies on how radiation is reflected or emitted from targets and recorded by sensors to form images. The key to interpretation is recognizing targets based on these visual elements.
Effect of kernel size on Wiener and Gaussian image filtering (TELKOMNIKA JOURNAL)
In this paper, the effect of the kernel size of Wiener and Gaussian filters on their image restoration quality has been studied and analyzed. Four kernel sizes, namely 3x3, 5x5, 7x7 and 9x9, were simulated. Two types of noise with zero mean and several variances were used: Gaussian noise and speckle noise. Several image quality indices were applied in the computer simulations, in particular mean absolute error (MAE), mean square error (MSE) and the structural similarity (SSIM) index. Many images were tested in the simulations; however, the results of three of them are shown in this paper. The results show that the Gaussian filter outperforms the Wiener filter for all values of Gaussian and speckle noise variance, mainly because it works well with the smallest kernel size. To obtain similar performance with Wiener filtering, a larger kernel size is required, which produces much more blur in the output image. The Wiener filter shows poor performance with the smallest kernel size (3x3), while the Gaussian filter shows its best results in that case. With the Gaussian filter, results similar to those obtained at low noise could be achieved at high noise variance, but with a larger kernel size.
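The kernel-size experiment is easy to reproduce in outline. The helper below builds a normalized Gaussian kernel of any of the four tested sizes and applies it with a same-size correlation; this is a generic sketch, not the paper's simulation code:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """size x size Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def filter2(img, kernel):
    """'Same'-size 2D correlation with reflect padding."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out
```

Running `filter2` with `gaussian_kernel(s, sigma)` for s in {3, 5, 7, 9} on the same noisy image, and scoring each output with MAE, MSE and SSIM against the clean reference, reproduces the comparison the abstract describes.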
This document summarizes a research paper that proposes a new method for removing speckle noise from ultrasound and optical coherence tomography medical images in the stationary wavelet domain. It first reviews existing techniques for speckle noise reduction such as wavelet shrinkage methods. It then presents the mathematical model of speckle noise and formulates the problem that existing wavelet methods do not provide shift invariance. The proposed method uses two-dimensional stationary wavelet transform to overcome this issue. It involves decomposing the noisy input image into subbands, estimating clean coefficients, and applying the inverse transform to obtain a denoised image. Results showed the method was able to remove speckle noise while better preserving edges.
Raw remote sensing images contain errors that must be corrected through pre-processing before analysis. Pre-processing involves radiometric, geometric, and atmospheric corrections. Radiometric corrections address distortions in pixel values from issues like noise, striping, or dropped scan lines. Geometric corrections rectify distortions caused by terrain, sensor geometry, and platform movement using ground control points. Atmospheric corrections reduce haze effects through techniques like dark object subtraction that assume minimum surface reflectance values. Pre-processing is essential for producing accurate, georeferenced images suitable for analysis and interpretation.
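The dark object subtraction step mentioned above amounts to removing a per-band offset. A minimal sketch, assuming multiband data stacked along the last axis; using a low percentile instead of the literal minimum is my own robustness tweak, not part of the summary:

```python
import numpy as np

def dark_object_subtraction(bands, percentile=0.1):
    """Atmospheric haze correction by dark object subtraction: assume the
    darkest pixels in each band should reflect ~0, so any value they do
    have is additive haze; subtract it per band and clip at zero."""
    bands = bands.astype(float)
    dark = np.percentile(bands, percentile, axis=(0, 1))  # per-band offset
    return np.clip(bands - dark, 0.0, None)
```

Haze scatters more at shorter wavelengths, so the subtracted offsets typically decrease from the blue band toward the infrared bands.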
This document summarizes a study on automatically detecting boundaries and regions of interest in ultrasound images of focal liver lesions. The researchers used texture analysis and gradient vector flow snakes to extract boundaries after reducing speckle noise. They tested several noise filters and found median filtering worked best, achieving the highest PSNR. Texture analysis via gray-level co-occurrence matrix extraction detected regions more accurately than range or standard deviation filters. Morphological operations and seed point determination were then used to generate the final region of interest. The proposed automatic method facilitates ultrasound image segmentation and analysis of focal liver lesions.
Hybrid Speckle Noise Reduction Method for Abdominal Circumference Segmentatio... (IJECEIAES)
Fetal biometric measurements such as the abdominal circumference (AC) are used to predict fetal weight or gestational age from ultrasound images. Automatic biometric measurement can improve efficiency in the ultrasonography examination workflow. The unclear boundaries of the abdomen image and the presence of speckle noise are the main challenges for automated AC measurement techniques. The key problem in improving the accuracy of automatic AC segmentation is how to remove noise while retaining the boundary features of objects. In this paper, we propose a hybrid ultrasound image denoising framework that combines a spatial-domain filtering method with a multiresolution-based method. In this technique, an ultrasound image is decomposed into subbands using the wavelet transform; a thresholding technique and the anisotropic diffusion method are applied to the detail subbands, while bilateral filtering modifies the approximation subband. The proposed denoising approach had the best edge-preservation performance and improved the accuracy of abdominal circumference segmentation.
Visual Quality for both Images and Display of Systems by Visual Enhancement u... (IJMER)
A Review on Methods of Identifying and Counting Aedes Aegypti Larvae using Im... (TELKOMNIKA JOURNAL)
Aedes aegypti is a small, slender mosquito that spreads flavivirus arboviruses through its blood feeding. Early detection of this species is very important, because once the larvae develop into adult mosquitoes, population control becomes much more complicated. Matters become worse when places that are difficult to access, such as water storage tanks, become favourite breeding sites for Aedes aegypti. There is therefore a need for automated identification and detection of Aedes aegypti larvae to help field operators during routine inspections, especially in hard-to-reach places. This paper reviews the methodologies that various researchers have used to identify and count Aedes aegypti. The objective of the review was to analyze techniques and methods for identifying and counting Aedes aegypti larvae across various fields of study from 2008 onwards, taking into account their performance and accuracy. The review finds that thresholding was the most widely used method, with high accuracy in image segmentation, followed by hidden Markov models, histogram correction, and morphology-based region growing.
Coherence enhancement diffusion using robust orientation estimation (csandit)
In this paper, a new robust orientation estimation for Coherence Enhancement Diffusion (CED) is proposed. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which a pre-calculated orientation, obtained using orientation diffusion, is used to find the true local scale. Experiments show that the proposed scheme performs much better in noisy environments than the traditional Coherence Enhancement Diffusion.
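The local/integration-scale orientation estimate that CED relies on is conventionally computed from the smoothed structure tensor. The sketch below is a standard structure-tensor implementation for illustration, not the paper's robust estimator (which adds orientation diffusion on top); the integration-scale sigma is an illustrative choice:

```python
import numpy as np

def local_orientation(img, sigma=2.0):
    """Dominant local orientation from the structure tensor, smoothed at
    an integration scale: theta = 0.5 * atan2(2*Jxy, Jxx - Jyy),
    measured from the x-axis (the local gradient direction)."""
    gy, gx = np.gradient(img.astype(float))
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2)); k /= k.sum()

    def smooth(a):
        # separable Gaussian at the integration scale, reflect-padded
        pad = np.pad(a, r, mode='reflect')
        a = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 0, pad)
        return np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 1, a)

    Jxx, Jyy, Jxy = smooth(gx * gx), smooth(gy * gy), smooth(gx * gy)
    return 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
```

Averaging the tensor entries, rather than the gradient vectors themselves, is what lets opposite gradient directions on the two sides of a ridge reinforce one another instead of cancelling.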
Developing a classification framework for landcover landuse change analysis i... (Carolina)
The document discusses developing a land cover/land use classification framework for Chile using remote sensing data. It describes using mathematical morphology techniques like morphological attribute profiles to integrate spatial context into spectral classifications, which significantly improved classification accuracy in three test subsets of a Landsat image - forests (61.3% to 80.8%), urban (75.5% to 92.2%) and agriculture (62.2% to 89.2%). However, the high-dimensional feature space requires specialized machine learning methods and high-performance computing.
This document presents a methodology for motion blur image restoration using an alternating direction balanced regularization filter. It begins with an introduction discussing image restoration and types of image degradation like blur and noise. It then discusses a literature review of existing techniques for motion blur parameter estimation and image restoration. The proposed methodology is described as estimating the motion blur angle and length using Gabor filters and radial basis functions, then restoring the image using an alternating direction balanced regularization filter. Experimental results on various standard test images are provided, comparing the proposed method to existing techniques based on metrics like PSNR and MSE. The conclusions discuss that the proposed method provides improved restoration quality over existing methods.
Tropical Cyclone Determination using Infrared Satellite Image (ijtsrd)
Many subcontinents around the world contain regions affected by cyclones every year. Cyclone prediction plays a major role in preventing the loss of life and property, because it relates directly to human lives and households. Satellite images provide an excellent view of clouds that can be used in weather forecasting, and infrared (IR) satellite images in particular serve many environmental applications. To find a tropical cyclone (TC) center, the basic stage is to extract the main cloud mass of the cyclone. In manual segmentation, selecting the storm region is a complicated, time-consuming task that also requires human experts for every processing step. Semi- and fully-automatic storm detection is a sophisticated and difficult process because of overlap between cloud boundaries. Fuzzy C-means (FCM) clustering and morphological image processing are applied to segment each infrared satellite image. The effectiveness is tested on infrared cyclone images from the Kalpana satellite, obtained from the INSAT satellite system of India. 45 tropical cyclones occurred during the period 1989-2014 over the Bay of Bengal; Cyclone Nargis, which struck Myanmar on 2 May 2008, is used as a case study. Experimental results show that high location accuracy can be obtained. Thu Zar Hsan | Myint Myint Sein, "Tropical Cyclone Determination using Infrared Satellite Image", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27934.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/27934/tropical-cyclone-determination-using-infrared-satellite-image/thu-zar-hsan
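The FCM step used for cloud segmentation can be sketched compactly. This is the textbook fuzzy C-means algorithm, not the paper's full pipeline (which follows clustering with morphological processing); a grayscale IR image could be clustered by passing `img.reshape(-1, 1)` as the feature matrix:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy C-means: alternate soft-membership and centroid updates.
    X is (n_samples, n_features); returns (centers, memberships U),
    where each row of U sums to 1 across the c clusters."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                      # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        p = 2.0 / (m - 1.0)
        U = d ** (-p) / np.sum(d ** (-p), axis=1, keepdims=True)
    return centers, U
```

Unlike hard k-means, each pixel receives a graded membership in every cluster, which suits the diffuse, overlapping cloud boundaries the abstract describes.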
Single Image Fog Removal Based on Fusion Strategy (csandit)
Images of outdoor scenes are degraded by absorption and scattering from suspended particles and water droplets in the atmosphere. The light travelling from a scene towards the camera is attenuated by fog and blended with the airlight, which adds whiteness to the scene. Fog removal is highly desirable in computer vision applications: removing fog can significantly increase the visibility of the scene and is more visually pleasing. In this paper, we propose a method that can handle both homogeneous and heterogeneous fog, tested on several types of synthetic and real images. We formulate the restoration problem as a fusion strategy that combines two images derived from a single foggy image: one derived using a contrast-based method and the other using a statistics-based approach. These derived images are then weighted by a specific weight map to restore the image. We have performed a qualitative and quantitative evaluation on 60 images, using mean square error and peak signal-to-noise ratio as the performance metrics to compare our technique with state-of-the-art algorithms. The proposed technique is simple and shows comparable or even slightly better results than the state-of-the-art single-image defogging algorithms.
Investigation of Chaotic-Type Features in Hyperspectral Satellite Datacsandit
This document analyzes the use of Lyapunov exponents to determine chaotic structure in hyperspectral satellite data. It investigates an EO-1 Hyperion hyperspectral image of a mixed forest site in Turkey. Lyapunov exponents are calculated from reconstructed phase spaces of spectral signals for different object classes. Positive and negative Lyapunov exponents indicate chaotic behavior is present. The results demonstrate Lyapunov exponents can be used as discriminative features to improve hyperspectral image classification accuracy by capturing the chaotic structures in the data.
Computationally Efficient Methods for Sonar Image Denoising using Fractional ...CSCJournals
Sonar images, produced due to the coherent nature of the scattering phenomenon, inherit a multiplicative component called speckle and contain almost homogeneous as well as textured regions with relatively rare edges. Speckle removal is a pre-processing step required in applications like the detection and classification of objects in the sonar image. In this paper, computationally efficient Fractional Integral Mask algorithms to remove speckle noise from sonar images are proposed. The Riemann-Liouville definition of fractional calculus is used to create fractional integral masks in eight directions. Computational efficiency is obtained by using a single mask that incorporates the significant coefficients from the eight directional masks, so that only one convolution operation is required. Heterogeneous patches in the sonar image are classified using a newly proposed naive homogeneity index, which depends on the texture strength of the patches, and the despeckling filters can be adjusted to these patches. Applying the mask convolution only to the selected patches further reduces the computational complexity. The non-homomorphic approach used in the proposed method avoids the undesired bias occurring in the traditional homomorphic approach. Experiments show that the required mask size depends directly on the fractional order; the mask size can be reduced for lower fractional orders, ensuring reduced computational complexity for lower orders. Experimental results substantiate the effectiveness of the despeckling method, which is evaluated using several no-reference image performance criteria.
REMOVING RAIN STREAKS FROM SINGLE IMAGES USING TOTAL VARIATIONijma
ABSTRACT
Rainy image restoration is considered one of the most important aspects of image restoration for improving outdoor vision. Many fields use this kind of restoration, such as driving assistance, environment monitoring, animal monitoring, computer vision, face recognition, object recognition and personal photos. Image restoration simply means removing noise from images, and most images contain some noise from the environment. Moreover, image quality assessment plays an important role in the evaluation of image enhancement algorithms. In this research, we use total variation to remove rain streaks from a single image. It shows good performance compared to other methods, using the MSE, PSNR, and VIF measures for images with a reference and BRISQUE for images without a reference.
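The total-variation idea the paper builds on can be sketched independently of the rain model: minimize a data-fidelity term plus a (smoothed) TV penalty by gradient descent. The NumPy sketch below is a generic illustration, not the paper's algorithm; the parameter values and the periodic border handling via `np.roll` are simplifications.

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.1, iters=100, eps=1e-6):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
    f = img.astype(np.float64)
    u = f.copy()
    for _ in range(iters):
        # forward differences approximate the image gradient
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        # divergence of the normalized gradient field (wrap-around borders)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)  # data term pulls to f, TV term smooths
    return u
```

Flat regions are smoothed while sharp edges, whose TV cost does not decrease when blurred, are largely preserved; this edge-preserving behavior is what makes TV attractive for removing thin streak-like artifacts.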
Speckle noise reduction from medical ultrasound images using wavelet threshIAEME Publication
This document proposes a method for reducing speckle noise from medical ultrasound images using wavelet thresholding and anisotropic diffusion. It first takes the logarithm of noisy images to convert speckle noise from multiplicative to additive. It then applies median filtering, followed by wavelet denoising using soft thresholding of detail coefficients. Finally, it uses SRAD filtering to enhance contrast while retaining edges. The proposed method is tested on three images and is found to improve SNR and PSNR compared to other filters based on statistical measurements.
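The logarithm step in the pipeline above exploits the fact that multiplicative speckle, I = S·n, becomes additive after a log transform: log I = log S + log n, so ordinary additive-noise filters apply. A minimal NumPy sketch of that trick, with a simple mean filter standing in for the paper's median/wavelet stages (the filter choice here is illustrative, not the proposed method):

```python
import numpy as np

def mean3(x):
    """3x3 mean filter with wrap-around borders (stand-in additive denoiser)."""
    out = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return out / 9.0

def log_domain_filter(img, smooth=mean3, eps=1e-6):
    """Log transform -> additive-noise filtering -> exponentiate back."""
    logged = np.log(img.astype(np.float64) + eps)  # speckle is now additive
    return np.exp(smooth(logged)) - eps
```

Any additive denoiser (median, wavelet soft thresholding, diffusion) can be slotted in as `smooth`; the exponential at the end returns the result to the intensity domain.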
NIR Three dimensional imaging of breast model using f-DOT Nagendra Babu
NIR three-dimensional optical imaging of a breast model using f-DOT with a target-specific contrast agent.
Three-dimensional mathematical modeling of DOT and f-DOT.
Effective segmentation of sclera, iris and pupil in noisy eye imagesTELKOMNIKA JOURNAL
In today’s sensitive environment, iris recognition is the technique that has attracted the most attention for personal authentication among the various biometric technologies. One of the key steps in an iris recognition system is accurate segmentation of the iris from its surrounding noise, including the pupil and sclera of a captured eye-image. In our proposed method, the input image is first preprocessed using bilateral filtering. After preprocessing, contour-based features such as brightness, color and texture are extracted. Entropy is then measured on the extracted contour-based features to effectively distinguish the data in the images. Finally, a convolutional neural network (CNN) is used for effective segmentation of the sclera, iris and pupil based on the entropy measure. The results are analyzed to demonstrate the better performance of the proposed segmentation method compared with existing methods.
A Novel Approach for Edge Detection using Modified ACIES Filteringidescitation
The most important degrading attribute of an image is noise, which may become evident during image capture, communication or processing, and may be dependent on or independent of the image content. To accentuate the high-frequency components, this paper proposes a new class of filter, called the ACIES filter, whose spatial filter shape has a high positive component at the centre. While suppression of noise can only be achieved by smoothing the image, sharpening with the ACIES filter highlights the fine details of an image and enhances its clarity at the boundaries. Experimental results show that the performance of the proposed ACIES filter is acceptable and that the quality of the resulting images remains remarkably good even under strongly noise-corrupted conditions. If the edges in an image can be recognized precisely, then all the objects in the image can be located and basic properties such as region, edge, and contour can be measured. The performance of the proposed approach is measured with image quality metrics such as PSNR, MSE, NAE and NK.
Analysis of remote sensing imagery involves identifying targets through their tone, shape, size, pattern, texture, and relationships to other objects. Targets may be environmental or artificial features appearing as points, lines, or areas. Interpretation relies on how radiation is reflected or emitted from targets and recorded by sensors to form images. The key to interpretation is recognizing targets based on these visual elements.
Effect of kernel size on Wiener and Gaussian image filteringTELKOMNIKA JOURNAL
In this paper, the effect of the kernel size of Wiener and Gaussian filters on their image restoration quality has been studied and analyzed. Four kernel sizes, namely 3x3, 5x5, 7x7 and 9x9, were simulated. Two different types of noise with zero mean and several variances have been used: Gaussian noise and speckle noise. Several image quality indices have been applied in the computer simulations; in particular, mean absolute error (MAE), mean square error (MSE) and the structural similarity (SSIM) index were used. Many images were tested in the simulations; the results of three of them are shown in this paper. The results show that the Gaussian filter has superior performance over the Wiener filter for all values of Gaussian and speckle noise variance, mainly because it can use the smallest kernel size. To obtain similar performance with Wiener filtering, a larger kernel size is required, which produces much more blur in the output image. The Wiener filter shows poor performance with the smallest kernel size (3x3), while the Gaussian filter shows its best results in that case. With the Gaussian filter, results similar to those obtained under low noise could be achieved under high noise variance, but with a larger kernel size.
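The kernel-size effect studied above is easy to reproduce for the Gaussian case. The sketch below builds a normalized 1-D Gaussian kernel and applies it separably with reflective padding; the sizes and sigmas are illustrative, not the paper's settings:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 1-D Gaussian kernel of odd length `size`."""
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def gaussian_filter(img, size, sigma):
    """Separable Gaussian smoothing: filter rows, then columns."""
    k = gaussian_kernel(size, sigma)
    r = size // 2
    padded = np.pad(img, r, mode="reflect")  # avoid zero-padding bias at borders
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, tmp)
```

On a noisy flat patch a 9x9 kernel suppresses noise more than a 3x3 one, at the cost of more blur near real structure, which is exactly the trade-off the paper quantifies with MAE, MSE and SSIM.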
This document summarizes a research paper that proposes a new method for removing speckle noise from ultrasound and optical coherence tomography medical images in the stationary wavelet domain. It first reviews existing techniques for speckle noise reduction such as wavelet shrinkage methods. It then presents the mathematical model of speckle noise and formulates the problem that existing wavelet methods do not provide shift invariance. The proposed method uses two-dimensional stationary wavelet transform to overcome this issue. It involves decomposing the noisy input image into subbands, estimating clean coefficients, and applying the inverse transform to obtain a denoised image. Results showed the method was able to remove speckle noise while better preserving edges.
Raw remote sensing images contain errors that must be corrected through pre-processing before analysis. Pre-processing involves radiometric, geometric, and atmospheric corrections. Radiometric corrections address distortions in pixel values from issues like noise, striping, or dropped scan lines. Geometric corrections rectify distortions caused by terrain, sensor geometry, and platform movement using ground control points. Atmospheric corrections reduce haze effects through techniques like dark object subtraction that assume minimum surface reflectance values. Pre-processing is essential for producing accurate, georeferenced images suitable for analysis and interpretation.
This document summarizes a study on automatically detecting boundaries and regions of interest in ultrasound images of focal liver lesions. The researchers used texture analysis and gradient vector flow snakes to extract boundaries after reducing speckle noise. They tested several noise filters and found median filtering worked best, achieving the highest PSNR. Texture analysis via gray-level co-occurrence matrix extraction detected regions more accurately than range or standard deviation filters. Morphological operations and seed point determination were then used to generate the final region of interest. The proposed automatic method facilitates ultrasound image segmentation and analysis of focal liver lesions.
Hybrid Speckle Noise Reduction Method for Abdominal Circumference Segmentatio...IJECEIAES
Fetal biometric size such as abdominal circumference (AC) is used to predict fetal weight or gestational age in ultrasound images. The automatic biometric measurement can improve efficiency in the ultrasonography examination workflow. The unclear boundaries of the abdomen image and the speckle noise presence are the challenges for the automated AC measurement techniques. The main problem to improve the accuracy of the automatic AC segmentation is how to remove noise while retaining the boundary features of objects. In this paper, we proposed a hybrid ultrasound image denoising framework which was a combination of spatial-based filtering method and multiresolution based method. In this technique, an ultrasound image was decomposed into subbands using wavelet transform. A thresholding technique and the anisotropic diffusion method were applied to the detail subbands, at the same time the bilateral filtering modified the approximation subband. The proposed denoising approach had the best performance in the edge preservation level and could improve the accuracy of the abdominal circumference segmentation.
Visual Quality for both Images and Display of Systems by Visual Enhancement u...IJMER
A Review on Methods of Identifying and Counting Aedes Aegypti Larvae using Im...TELKOMNIKA JOURNAL
Aedes aegypti mosquitoes are small, slender flying insects that spread arboviruses of the flavivirus genus through their blood feeding. Early detection of this species is very important because once the larvae turn into adult mosquitoes, population control becomes more complicated. Matters become worse when difficult-to-access places, such as water storage tanks, become favorite breeding places for Aedes aegypti mosquitoes. Therefore, automated identification and detection of Aedes aegypti larvae is needed to help field operators during routine inspections, especially at difficult-to-access places. This paper reviews the different methodologies that various researchers have used to identify and count Aedes aegypti. The objective of the review was to analyze the identification and counting techniques reported across various fields of study from 2008 onwards, taking into account their performance and accuracy. From the review, thresholding was the most widely used method, with high accuracy in image segmentation, followed by hidden Markov models, histogram correction, and morphological operations with region growing.
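Thresholding, the most widely used technique in the review above, is often automated with Otsu's method, which picks the grey level that maximizes the between-class variance of the histogram. A compact NumPy sketch (assuming an 8-bit grayscale image):

```python
import numpy as np

def otsu_threshold(img):
    """Return the grey level maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # probability of class 0 up to level t
    mu = np.cumsum(p * np.arange(256))      # cumulative mean up to level t
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```

Pixels above the returned level are labelled foreground (e.g. larvae) and the rest background; on a clearly bimodal histogram the threshold lands between the two modes.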
Coherence enhancement diffusion using robust orientation estimationcsandit
In this paper, a new robust orientation estimation for Coherence Enhancement Diffusion (CED) is proposed. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which the orientation, pre-calculated using orientation diffusion, is used to find the correct local scale. The experiments show that the proposed scheme works much better in noisy environments than traditional Coherence Enhancement Diffusion.
Developing a classification framework for landcover landuse change analysis i...Carolina
The document discusses developing a land cover/land use classification framework for Chile using remote sensing data. It describes using mathematical morphology techniques like morphological attribute profiles to integrate spatial context into spectral classifications, which significantly improved classification accuracy in three test subsets of a Landsat image - forests (61.3% to 80.8%), urban (75.5% to 92.2%) and agriculture (62.2% to 89.2%). However, the high-dimensional feature space requires specialized machine learning methods and high-performance computing.
This document presents a methodology for motion blur image restoration using an alternating direction balanced regularization filter. It begins with an introduction discussing image restoration and types of image degradation like blur and noise. It then discusses a literature review of existing techniques for motion blur parameter estimation and image restoration. The proposed methodology is described as estimating the motion blur angle and length using Gabor filters and radial basis functions, then restoring the image using an alternating direction balanced regularization filter. Experimental results on various standard test images are provided, comparing the proposed method to existing techniques based on metrics like PSNR and MSE. The conclusions discuss that the proposed method provides improved restoration quality over existing methods.
Tropical Cyclone Determination using Infrared Satellite Imageijtsrd
Many regions of the world are affected by cyclones every year. To prevent the loss of lives and assets, cyclone prediction plays a major role, as it is directly related to human lives and livelihoods. Satellite images provide an excellent view of clouds that can be used in weather forecasting, and infrared (IR) satellite images in particular serve many environmental applications. To find the tropical cyclone (TC) center, the basic stage is to extract the main cloud of the cyclone. In manual segmentation, selecting the storm region is a complicated, time-consuming task that also requires human experts for every processing run. Semi- and fully automatic storm detection is a sophisticated and difficult process because of the overlap between cloud boundaries. Fuzzy C-means (FCM) clustering and morphological image processing are applied to segment each infrared satellite image. The effectiveness is tested on infrared cyclone images from the Kalpana satellite, obtained from India's INSAT satellite system. 45 tropical cyclones occurred during the period 1989-2014 over the Bay of Bengal; Cyclone Nargis, which mainly affected Myanmar on 2 May 2008, is taken as a case study. Experimental results show that high location accuracy can be obtained. Thu Zar Hsan | Myint Myint Sein, "Tropical Cyclone Determination using Infrared Satellite Image", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27934.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/27934/tropical-cyclone-determination-using-infrared-satellite-image/thu-zar-hsan
Visibility Enhancement of Hazy Images using Depth Estimation ConceptIRJET Journal
This document presents a methodology to improve the visibility of hazy images using depth estimation. The proposed method first converts the input hazy image into white balance and contrast enhanced images. Depth estimation is then performed on these images to estimate the unknown depth from the camera to objects in the scene. Weight maps are generated from the white balance and contrast images and applied to Gaussian and Laplacian pyramids to estimate depth. A gamma correction is applied to the depth estimated image to further improve visibility. Experimental results show that the gamma corrected image has better visibility and a higher PSNR than the depth estimated image alone.
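The final gamma-correction step mentioned above is a simple power-law mapping; a minimal sketch for 8-bit images (the exponent convention used here, out = in^(1/gamma) with gamma > 1 brightening, is an assumption, since the paper does not state its value):

```python
import numpy as np

def gamma_correct(img, gamma=1.5):
    """Power-law mapping on a normalized 8-bit image; gamma > 1 brightens."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(255.0 * norm ** (1.0 / gamma), 0, 255).astype(np.uint8)
```

Black and white are fixed points of the mapping, while mid-tones are lifted, which is why the step improves visibility in dim, dehazed regions.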
This paper analyzes different haze removal methods. Haze causes trouble for many computer graphics/vision applications, as it reduces the visibility of the scene. Airlight and attenuation are the two basic phenomena of haze: airlight enhances the whiteness in the scene, while attenuation reduces the contrast. Haze removal techniques recover the colour and contrast of the scene, and many applications, such as object detection, surveillance and consumer electronics, apply them. This paper focuses broadly on methods for effectively eliminating haze from digital images and also indicates the demerits of current techniques.
Atmospheric Correction of Remotely Sensed Images in Spatial and Transform DomainCSCJournals
Remotely sensed data is an effective source of information for monitoring changes in land use and land cover. However remotely sensed images are often degraded due to atmospheric effects or physical limitations. Atmospheric correction minimizes or removes the atmospheric influences that are added to the pure signal of target and to extract more accurate information. The atmospheric correction is often considered critical pre-processing step to achieve full spectral information from every pixel especially with hyperspectral and multispectral data. In this paper, multispectral atmospheric correction approaches that require no ancillary data are presented in spatial domain and transform domain. We propose atmospheric correction using linear regression model based on the wavelet transform and Fourier transform. They are tested on Landsat image consisting of 7 multispectral bands and their performance is evaluated using visual and statistical measures. The application of the atmospheric correction methods for vegetation analyses using Normalized Difference Vegetation Index is also presented in this paper.
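Two of the building blocks above, dark object subtraction for haze correction and NDVI for vegetation analysis, have standard closed forms. A minimal NumPy sketch; the per-band DOS using the global minimum is the simplest textbook variant, not necessarily the paper's implementation:

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the darkest pixel value, assuming it should be ~zero reflectance."""
    return band - band.min()

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)
```

Healthy vegetation reflects strongly in the near-infrared and absorbs red, so its NDVI approaches 1, while bare soil and water sit near or below 0.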
A Survey on Single Image Dehazing ApproachesIRJET Journal
This document provides a survey of single image dehazing approaches. It begins with an introduction to the problem of haze in images and how it degrades quality. It then summarizes several existing single image dehazing methods, including those based on the atmospheric scattering model, dark channel prior, color attenuation prior, and deep learning approaches. The survey covers the key assumptions and limitations of each approach. Overall, the document reviews the progress that has been made in developing techniques to remove haze from a single input image.
High Efficiency Haze Removal Using Contextual Regularization AlgorithmIRJET Journal
This document presents a new contextual regularization algorithm for high efficiency haze removal. It begins with an overview of existing haze removal techniques and their limitations, such as halo effects and reduced image quality. It then proposes a method that estimates airlight using multiple transmission maps and cross bilateral filtering to remove noise and enhance edges. This integrated approach yields faster execution speeds and superior recovery effects compared to existing filters. The key contribution is a new contextual regularization that allows incorporating a filter bank into dehazing images. Experimental results show the proposed method removes haze without changing the original scene or producing saturated images, while existing techniques can remove wanted image information or produce unnatural results.
Review on Various Algorithm for Cloud Detection and Removal for ImagesIJERA Editor
Clouds are one of the significant obstacles in extracting information from tea lands using remote sensing imagery, and different approaches have been attempted to solve this problem with varying levels of success. In the past decade, a number of cloud removal approaches have been proposed. In this paper we review and discuss cloud detection and removal: the need for it, its principles, the cloud removal process, and various cloud removal algorithms. This paper attempts to give a recipe for selecting one of the popular cloud removal algorithms, such as the Information Cloning Algorithm, the Cloud Distortion Model and Filtering Procedure, and Semi-Automated Cloud/Shadow and Haze Identification and Removal. A cloud removal approach based on information cloning is introduced...Using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The patch-based information reconstruction is mathematically formulated as a Poisson equation and solved using a global optimization process. Which of these cloud detection algorithms to utilize is decided based on the specific requirements of the project.
Enhanced Vision of Hazy Images Using Improved Depth Estimation and Color Anal...IRJET Journal
The document presents a method for enhancing hazy images using improved depth estimation and color analysis. It proposes using an enhanced refined transmission technique to better estimate haze thickness and reduce color cast problems in hazy images. The method involves median filtering to reduce noise, adaptive gamma correction to enhance contrast, estimating depth maps, and using visibility restoration to recover hazy images. Experimental results show the proposed method produces superior results to other state-of-the-art methods by dramatically improving hazy images captured during inclement weather conditions.
1. The document proposes a new method for shadow detection and removal in satellite images using image segmentation, shadow feature extraction, and inner-outer outline profile line (IOOPL) matching.
2. Key steps include segmenting the image, detecting suspected shadows, eliminating false shadows through analysis of color and spatial properties, extracting shadow boundaries, and obtaining homogeneous sections through IOOPL similarity matching to determine radiation parameters for shadow removal.
3. Experimental results showed the method could successfully detect shadows and remove them to improve image quality for applications like object classification and change detection.
This document summarizes a research paper on using a hybrid dark channel prior method for visibility restoration in images degraded by haze or fog. It begins by introducing the problem of image degradation in poor weather conditions like fog. Then it describes existing techniques for image restoration, including the dark channel prior method, which estimates atmospheric light and transmission maps to recover the original scene. The document proposes a hybrid dark channel prior method that uses both small and large patch sizes to better estimate the haze density. Simulation results demonstrate that the hybrid method more effectively removes haze than traditional techniques. The paper concludes that the hybrid approach works well for fog and noise removal in single images under different weather conditions.
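The dark channel prior pipeline discussed above (dark channel, then transmission map, then scene recovery) can be sketched in a few lines. This is an illustrative single-patch-size version with a deliberately crude airlight estimate (the global maximum), not the paper's hybrid small/large-patch method:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Minimum over color channels followed by a local minimum filter."""
    mins = img.min(axis=2)
    r = patch // 2
    padded = np.pad(mins, r, mode="edge")
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1):
    """Invert the haze model I = J*t + A*(1 - t) using the dark channel prior."""
    A = img.max()                                    # crude airlight estimate
    t = 1.0 - omega * dark_channel(img / A)          # transmission from dark channel
    t = np.maximum(t, t0)[..., None]                 # floor t to avoid blow-up
    return (img - A) / t + A                         # recovered scene radiance J
```

The prior says haze-free patches have a near-zero dark channel, so any residual dark-channel value is attributed to haze; the floor `t0` prevents division blow-up in the densest regions, where the papers instead rely on refined transmission maps.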
A Review on Deformation Measurement from Speckle Patterns using Digital Image...IRJET Journal
This document reviews digital image correlation (DIC) for deformation measurement using speckle patterns. DIC is a non-contact optical method that uses digital images of a speckle pattern on a surface before and after deformation. By comparing the speckle patterns in the images, DIC can determine displacement and strain fields with high accuracy. The document discusses speckle pattern types, the DIC process, related works that have improved DIC methods, and applications of DIC such as for high-temperature testing. DIC provides full-field measurements and greater accuracy compared to conventional contact methods.
A Novel Dehazing Method for Color Accuracy and Contrast Enhancement Method fo...IRJET Journal
The document proposes a novel dehazing method for color accuracy and a contrast enhancement method for low light intensity images. The dehazing method involves three steps: 1) Region division based on white balance segmentation, 2) Estimation of local atmospheric light in each region, and 3) An iterative dehazing algorithm to remove haze from each region. The contrast enhancement method inverts the input image, applies the dehazing algorithm, and then inverts the dehazed image to produce an enhanced output. Experimental results show the proposed methods can effectively enhance images taken with mobile devices or cameras without color distortion.
The document summarizes a research project on single image haze removal using a variable fog-weight. It begins with an introduction on how haze degrades image quality and the need for haze removal techniques. It then discusses the motivation, literature review, objective, and main contribution of the proposed method. The method uses the dark channel prior to estimate the transmission map and atmospheric light. It then applies a variable fog-weight to modify the transmission map and reduce halo artifacts. A guided filter is used for transmission refinement before recovering the haze-free scene radiance. The method aims to improve on existing techniques by reducing time complexity and halo artifacts while enhancing image visibility.
1. The document discusses techniques for removing haze from digital images. It begins with an introduction to how haze forms and degrades image quality.
2. It then describes several categories of haze removal techniques, including multiple image dehazing methods that use multiple images and single image dehazing methods that rely on statistical assumptions. Specific techniques discussed include dark channel prior, guided image filtering, and bilateral filtering.
3. The document focuses on comparing different haze removal approaches and evaluating which methods produce higher quality results for single image dehazing.
IRJET- A Comparative Analysis of various Visibility Enhancement Techniques th...IRJET Journal
This document provides a summary and analysis of various single image defogging techniques. It begins with an abstract that outlines how different fog removal algorithms detect and remove fog to improve image visibility. It then reviews several fog removal techniques from research papers. These include using fog density perception to estimate transmission maps, enhancing contrast using dark channel priors, combining dark channel priors with fuzzy logic for efficiency, using dark channel priors and guided filters to extract transmission maps and enhance images. The document aims to analyze and compare different techniques for efficiently removing fog from digital images.
IRJET-Design and Implementation of Haze Removal SystemIRJET Journal
This document describes a system to remove haze from images using a dark channel algorithm and median filtering. It begins with an abstract that outlines the goals of improving visibility in bad weather conditions by removing haze from images. It then provides details on the proposed method, which uses a dark channel prior algorithm to estimate atmospheric light, calculate a transmission map, and recover scene radiance. A hybrid median filter is also used to remove impulse noise from the dehazed image. The method is tested on hazy images and results show improved visibility and color contrast compared to previous methods.
IJRET-V1I1P2 -A Survey Paper On Single Image and Video Dehazing MethodsISAR Publications
Most computer applications use digital images. Digital image processing plays an important role in the analysis and interpretation of data in digital form. Images taken in foggy weather conditions often suffer from poor visibility and clarity. After the study of several fast dehazing methods, such as Tan's dehazing technique, Fattal's dehazing technique and He et al.'s dehazing technique, the Dark Channel Prior (DCP) introduced by He et al. is the most substantive technique for dehazing. This survey aims to study the various existing methods used for dehazing, such as polarization, the dark channel prior, and depth-map-based methods.
IRJET - Contrast and Color Improvement based Haze Removal of Underwater Image...IRJET Journal
This document proposes a method for removing haze from underwater images using fusion techniques. It involves three main steps:
1. Removal of haze from the input underwater image using a water shield filter to extract a dehazed image.
2. Denoising the dehazed image using a sequential algorithm to compensate for uneven lighting and enhance image features.
3. Fusing the dehazed and denoised images to produce a clear output image with both haze and noise removed.
The method aims to improve underwater image visibility and contrast correction in a simple and effective manner. Evaluation on sample images demonstrates reduced haze and artifacts after processing.
VISUAL MODEL BASED SINGLE IMAGE DEHAZING USING ARTIFICIAL BEE COLONY OPTIMIZA...ijistjournal
Images are often degraded by atmospheric haze , a phenomenon due to the particles in the air that scatter light. Haze induces a loss of contrast,its visual effect is blurring of distant objects. This paper presents a novel algorithm for improving the visibility of an image degraded by haze. The proposed method uses a cost function based on human visual model to estimate airlight map. It employs Artificial Bee Colony optimization (ABC) as the optimization technique for estimating air light map. Image is dehazed by removing the estimatedairlight from the degraded image. The performance of the algorithm is tested and compared with various other dehazing methods and the proposed algorithm dehazes the image effectively outperforming other methods.
This document summarizes a research paper that examines pricing strategy in a two-stage supply chain consisting of a supplier and retailer. The supplier offers a credit period to the retailer, who then offers credit to customers. A mathematical model is formulated to maximize total profit for the integrated supply chain system. The model considers three cases based on the relative lengths of the credit periods offered at each stage. Equations are developed to represent the profit functions for the supplier, retailer and overall system in each case. The goal is to determine the optimal selling price that maximizes total integrated profit.
The document discusses melanoma skin cancer detection using a computer-aided diagnosis system based on dermoscopic images. It begins with an introduction to skin cancer and melanoma. It then reviews existing literature on automated melanoma detection systems that use techniques like image preprocessing, segmentation, feature extraction and classification. Features extracted in other studies include asymmetry, border irregularity, color, diameter and texture-based features. The proposed system collects dermoscopic images and performs preprocessing, segmentation, extracts 9 features based on the ABCD rule, and classifies images using a neural network classifier to detect melanoma. It aims to develop an automated diagnosis system to eliminate invasive biopsy procedures.
This document summarizes various techniques for image segmentation that have been studied and proposed in previous research. It discusses edge-based, threshold-based, region-based, clustering-based, and other common segmentation methods. It also reviews applications of segmentation in medical imaging, plant disease detection, and other fields. While no single technique can segment all images perfectly, hybrid and adaptive methods combining multiple approaches may provide better results. Overall, image segmentation remains an important but challenging task in digital image processing and computer vision.
This document presents a test for detecting a single upper outlier in a sample from a Johnson SB distribution when the parameters of the distribution are unknown. The test statistic proposed is based on maximum likelihood estimates of the four parameters (location, scale, and two shape) of the Johnson SB distribution. Critical values of the test statistic are obtained through simulation for different sample sizes. The performance of the test is investigated through simulation, showing it performs well at detecting outliers when the contaminant observation represents a large shift from the original distribution parameters. An example application to census data is also provided.
This document summarizes a research paper that proposes a portable device called the "Disha Device" to improve women's safety. The device has features like live location tracking, audio/video recording, automatic messaging to emergency contacts, a buzzer, flashlight, and pepper spray. It is designed using an Arduino microcontroller connected to GPS and GSM modules. When the button is pressed, it sends an alert message with the woman's location, sets off an alarm, activates the flashlight and pepper spray for self-defense. The goal is to provide women a compact, one-click safety system to help them escape dangerous situations or call for help with just a single press of a button.
- The document describes a study that constructed physical fitness norms for female students attending social welfare schools in Andhra Pradesh, India.
- Researchers tested 339 students in classes 6-10 on speed, strength, agility and flexibility tests. Tests included 50m run, bend and reach, medicine ball throw, broad jump, shuttle run, and vertical jump.
- The results showed that 9th class students had the best average time for the 50m run. 10th class students had the highest flexibility on average. Strength and performance generally improved with increased class level.
This document summarizes research on downdraft gasification of biomass. It discusses how downdraft gasifiers effectively convert solid biomass into a combustible producer gas. The gasification process involves pyrolysis and reactions between hot char and gases that produce CO, H2, and CH4. Downdraft gasifiers are well-suited for biomass gasification due to their simple design and ability to manage the gasification process with low tar production. The document also reviews previous studies on gasifier configuration upgrades and their impact on performance, and the principles of downdraft gasifier operation.
This document summarizes the design and manufacturing of a twin spindle drilling attachment. Key points:
- The attachment allows a drilling machine to simultaneously drill two holes in a single setting, improving productivity over a single spindle setup.
- It uses a sun and planet gear arrangement to transmit power from the main spindle to two drilling spindles.
- Components like gears, shafts, and housing were designed using Creo software and manufactured. Drill chucks, bearings, and bits were purchased.
- The attachment was assembled and installed on a vertical drilling machine. It is aimed at improving productivity in mass production applications by combining two drilling operations into one setup.
The document presents a comparative study of different gantry girder profiles for various crane capacities and gantry spans. Bending moments, shear forces, and section properties are calculated and tabulated for 'I'-section with top and bottom plates, symmetrical plate girder, 'I'-section with 'C'-section top flange, plate girder with rolled 'C'-section top flange, and unsymmetrical plate girder sections. Graphs of steel weight required per meter length are presented. The 'I'-section with 'C'-section top flange profile is found to be optimized for biaxial bending but rolled sections may not be available for all spans.
This document summarizes research on analyzing the first ply failure of laminated composite skew plates under concentrated load using finite element analysis. It first describes how a finite element model was developed using shell elements to analyze skew plates of varying skew angles, laminations, and boundary conditions. Three failure criteria (maximum stress, maximum strain, Tsai-Wu) were used to evaluate first ply failure loads. The minimum load from the criteria was taken as the governing failure load. The research aims to determine the effects of various parameters on first ply failure loads and validate the numerical approach through benchmark problems.
This document summarizes a study that investigated the larvicidal effects of Aegle marmelos (bael tree) leaf extracts on Aedes aegypti mosquitoes. Specifically, it assessed the efficacy of methanol extracts from A. marmelos leaves in killing A. aegypti larvae (at the third instar stage) and altering their midgut proteins. The study found that the leaf extract achieved 50% larval mortality (LC50) at a concentration of 49 ppm. Proteomic analysis of larval midguts revealed changes in protein expression levels after exposure to the extract, suggesting its bioactive compounds can disrupt the midgut. The aim is to identify specific inhibitor proteins in the midg
This document presents a system for classifying electrocardiogram (ECG) signals using a convolutional neural network (CNN). The system first preprocesses raw ECG data by removing noise and segmenting the signals. It then uses a CNN to extract features directly from the ECG data and classify arrhythmias without requiring complex feature engineering. The CNN architecture contains 11 convolutional layers and is optimized using techniques like batch normalization and dropout. The system was tested on ECG datasets and achieved classification accuracy of over 93%, demonstrating its effectiveness at automated ECG classification.
This document presents a new algorithm for extracting and summarizing news from online newspapers. The algorithm first extracts news related to the topic using keyword matching. It then distinguishes different types of news about the same topic. A term frequency-based summarization method is used to generate summaries. Sentences are scored based on term frequency and the highest scoring sentences are selected for the summary. The algorithm was evaluated on news datasets from various newspapers and showed good performance in intrinsic evaluation metrics like precision, recall and F-score. Thus, the proposed method can effectively extract and summarize online news for a given keyword or topic.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
76201940
Study on Haze Imaging Model and Image Restoration Schemes for Hazy Remote Sensing Images
N. Ameena Bibi, Dr. C. Vasanthanayaki

International Journal of Research in Advent Technology, Vol. 7, No. 6, June 2019
E-ISSN: 2321-9637
Available online at www.ijrat.org
doi: 10.32622/ijrat.76201940
Abstract— Remote sensing images taken under hazy conditions have poor visibility of the scene; therefore, haze removal or de-hazing is highly desired. A high-quality haze-free image can be recovered by combining the Dark Channel
Prior and the haze imaging model. Atmospheric light and
transmission medium are estimated based on the Dark
Channel Prior. Once the atmospheric light and transmission
medium are obtained, scene radiance can be easily estimated.
The recovered scene radiance is further refined using
restoration filters to remove the artifacts present on the image
boundaries. In this paper, the haze imaging model is
implemented for single remote sensing images, and the effectiveness of the haze imaging model is analyzed by varying the patch size, the pixel count p%, and the haze quality parameter
ω. To improve the de-hazing performance further, restoration
filters such as Gaussian filter, median filter, Wiener filter,
bilateral filter and guided filter have been implemented and a
performance analysis is carried out. The best performance is obtained for a patch size of 32x32, a pixel count of 0.05%, and a haze quality parameter ω of 0.85. The performance of
restoration filters is also analyzed in detail by computing
subjective and objective metrics. Experimental results show that the Gaussian filter outperforms the other filters and gives visually pleasing haze-free images.
Index Terms— Atmospheric light, Dark Channel Prior,
Haze imaging model, Restoration filters, Transmission
medium.
I. INTRODUCTION
The objective of image restoration is to reconstruct the original image from a degraded image without knowledge of either the true image or the degradation process. Due to
light scattering and absorption by atmospheric particles,
images/videos obtained from video surveillance, traffic
monitoring, astronomical imaging, medical imaging and
remote sensing will have poor visibility. Remote sensing
images have been widely used in various fields including
agriculture, forestry, hydrology and military with the
advantages of rich information, high spatial resolution, and a stable geometric location.
(Manuscript revised June 10, 2019 and published on July 10, 2019. N. Ameena Bibi, Assistant Professor, Department of Electronics and Communication Engineering, Government College of Technology, Coimbatore, India. Dr. C. Vasanthanayaki, Professor, Department of Electronics and Communication Engineering, Government College of Engineering, Salem, India.)
Remote sensing images show the
environmental information at terrestrial, atmospheric and
ocean phenomenology [1]. Remote sensing images are taken
at a considerable distance from the earth’s surface. During
propagation, the incoming energy interacts with the
atmosphere which degrades the quality of the received image
and this degradation is due to certain atmospheric effects
such as haze, fog, smoke, cloud etc. and they differ by
visibility distance. The atmospheric contribution is due to
scattering by air molecules and aerosols and the effect of the
path radiance is to reduce the image contrast leading to image
degradation [2]. Increasing hazy weather limits the potential
application of remote sensing images and poses a huge
challenge for the atmospheric correction of remote sensing
images [3]. Though it is difficult to obtain a haze-free image without a reference image, an efficient haze imaging model is developed and implemented to restore single hazy remote sensing images.
Haze is caused by particulate matter, whose main sources include smoke, road dust, and other particles emitted directly into the atmosphere, as well as particulate matter formed when gaseous pollutants react in the atmosphere; these particles dissolve with water vapour and grow into droplets when the humidity exceeds approximately 70%, producing the opaque condition known as haze [4]. Fuel burning in the transport sector is a major source of such air pollutants in urban areas [5]. These particles often grow in size as humidity increases and affect the interpretation of remote sensing images. Hazy weather
can lead to image color distortion and affect the resolution
and the contrast of the observed object. Haze formation is a combination of airlight and attenuation, where attenuation reduces the contrast and airlight increases the whiteness in the scene radiance [6]. Remote sensing images taken under
hazy conditions often lack visual vividness and appeal and
therefore have poor visibility. Therefore, an effective haze
removal method is of great significance for improving the
visibility of remote sensing images.
There are different approaches to improve the visibility in
hazy images, and they are divided into three categories: additional information approaches, multiple image approaches, and single image approaches. Additional information approaches employ scene information to remove haze and recover the realistic colors. Multiple image approaches
employ two or more images under different weather
conditions to remove haze and hence this requires additional
expense or hardware to perform effectively. Single image
haze removal techniques use strong assumptions and priors
and produce acceptable results, yet there remains a need to
restore the image using suitable filters. Single image
de-hazing is a challenging problem because only a hazy
image is given and also it does not require any additional
information. Single image de-hazing methods can be classified into two main categories: i) image enhancement methods based on image processing, and ii) image restoration methods based on a physical model. Image enhancement methods improve the contrast of the image but lose some of the image information; hence image restoration methods are preferred.
He et al proposed a simple and effective haze imaging
model [7]. Fattal R proposed a method to estimate
atmospheric light and transmission medium based on
assumption that transmission and surface shading are locally
correlated [8]. Tien Ying Kuo et al recommended the depth
estimation module and color analysis module for better
de-hazing. The Depth Estimation module uses a median
filter, and adopts the gamma correction to avoid halo artifacts
and the color analysis module is based on the gray world
assumption [9]. Bingquan Huo et al proposed a watershed segmentation method to estimate the atmospheric
light and also proposed a method to remove the halo artifacts
[10].
Bo Hao Chen et al developed an edge collapse de-hazing
algorithm to dynamically repair the transmission map and to
achieve better results [11]. Shih-Chia Huang et al have proposed a novel visibility restoration approach to effectively solve inadequate haze thickness estimation and
alleviate color cast problems [12]. Qingsong Zhu et al have
presented a linear modeling to model the scene depth of the
hazy image with a supervised learning method, as a result of
which the transmission medium can be easily estimated [13].
Xiaoxi Pan et al proposed a deformed haze imaging model to recover remote sensing images from non-uniform haze [14].
Xiaoguang Li et al employed an appropriate method to
extract the atmospheric light and also proposed a contour
preserving estimation to obtain the transmission medium
[15].
Single image de-hazing methods have proved to be
simple and effective. All the researchers have tested the haze
imaging model with fixed patch size, pixel count representing
A and haze quality parameter ω. In this paper, the effectiveness of the haze imaging model is evaluated for various patch sizes, pixel counts p% representing A, and haze quality parameters ω, and a comparative analysis of various restoration filters is carried out.
II. HAZE IMAGING MODEL
Visibility restoration of hazy images comprises two stages: the haze imaging model and the refinement phase. A hazy image is formed by the absorption and scattering of light by atmospheric particles between the sensor and the object. The haze imaging model is an optical model used to describe the formation of a hazy image captured in hazy weather conditions. The haze-free image obtained from the haze imaging model contains artifacts, which are removed using restoration filters in the refinement phase. Fig. 1 shows the block diagram of the visibility restoration of hazy images.
Figure 1: Flow of visibility restoration
Visibility degradation of a hazy image is caused by absorption and scattering of light by particles and gases in the atmosphere. The haze imaging model is used to describe the
formation of the hazy image and is given as
I(x)=J(x)t(x)+A(1-t(x)) (1)
where I(x) is the observed image, J(x) is the scene radiance, A is the global atmospheric light, and the transmission medium t(x) is the portion of the light that is not scattered and reaches the sensor. J(x)t(x) is the direct attenuation term, the scene radiance attenuated by the medium. Fig. 2 shows the pictorial representation of hazy image formation. The hazy image is a linear combination of direct transmission and airlight; therefore J(x) needs to be recovered from the hazy image I(x).
Figure 2: Pictorial representation of hazy image formation.
Obviously, it is an ill-posed problem since only I(x) is given
and hence A and t(x) need to be estimated. Once A and t(x)
are estimated, the scene radiance J(x) can be obtained. In
order to estimate Atmospheric light and transmission
medium, the Dark Channel Prior is used.
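As a concrete illustration of Eq. (1), the forward model can be sketched in a few lines of NumPy; the image, transmission, and atmospheric-light values below are synthetic placeholders, not data from the paper's experiments.

```python
import numpy as np

def apply_haze_model(J, t, A):
    """Synthesize a hazy image I from scene radiance J, transmission t,
    and global atmospheric light A (Eq. 1): I(x) = J(x)t(x) + A(1 - t(x))."""
    t = t[..., np.newaxis]          # broadcast t over the color channels
    return J * t + A * (1.0 - t)

# Toy example: a 2x2 RGB scene with uniform transmission 0.5
J = np.full((2, 2, 3), 0.2)        # dark scene radiance
t = np.full((2, 2), 0.5)           # half of the light reaches the sensor
A = np.array([1.0, 1.0, 1.0])      # white atmospheric light
I = apply_haze_model(J, t, A)      # every pixel becomes 0.2*0.5 + 1.0*0.5 = 0.6
```

Inverting this model for J given only I is exactly the ill-posed problem the following subsections address.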
A. Dark channel prior
Dark Channel Prior, proposed by He et al., is based on blackbody theory [7]. As a black body in the scene has no reflection, its pixel values are very close to zero. In a hazy image, however, the pixel values are non-zero, which characterizes the atmospheric light in the environment. The intensity of the dark channel is therefore a rough approximation of the thickness of the haze; the dark channel of a hazy image approximates the haze denseness [16].
J^dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) ) (2)
where Ω(x) is a local patch centered at x and J^c is a color channel of J.
In order to find the dark channel (DC) for a hazy image
an appropriate patch size should be selected with a proper pad size. Selection of the patch size is very important in the DCP. A small local patch is sufficient to estimate the dark channel and reduces calculation time, but a small patch size leads to oversaturation, which can be avoided by selecting a large patch size [17]. However, a large patch size generates halo effects and block artifacts along the edges of the recovered scene radiance. An image with complicated local textures needs a larger local patch to exclude false textures from the dark channel. The minimum patch size that does not produce false textures in the dark channel needs to be found for every hazy image by considering application-dependent image details.
B. Estimation of atmospheric light
The airlight becomes more dominant as the distance from the object to the observer increases. The haze imaging model indicates that two objects with different radiant energy, located at the same distance from the observer, have the same airlight [18]. The atmospheric light is best estimated in the most haze-opaque regions: the top p% brightest pixels in the dark channel are chosen as the most haze-opaque region, and the brightest of these pixels is taken as the global atmospheric light. This approach is more reliable than searching for the single brightest pixel in the entire image [18]. When a small patch size is used, pixels on bright objects are considered as candidate pixels, yielding inaccurate airlight estimation; a large patch size can prevent selecting such pixels. A patch size of 32x32 is effective, and it is less sensitive to the choice of the pixel-selection fraction p%.
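The top-p% selection described above can be sketched as follows; the default p = 0.001 (0.1%) and the toy image are illustrative assumptions of this sketch (the paper reports a pixel count of 0.05% as most effective):

```python
import numpy as np

def estimate_atmospheric_light(image, dark, p=0.001):
    """Pick the brightest image pixel among the top p fraction of
    dark-channel pixels, i.e., within the most haze-opaque region."""
    h, w = dark.shape
    n = max(1, int(h * w * p))
    idx = np.argpartition(dark.ravel(), -n)[-n:]  # top-n dark-channel pixels
    flat_img = image.reshape(-1, 3)
    intensity = flat_img[idx].sum(axis=1)         # brightness of candidates
    return flat_img[idx[np.argmax(intensity)]]    # brightest candidate as A

# Toy image: mostly dark, one hazy bright pixel at the top-left corner
img = np.zeros((10, 10, 3))
img[0, 0] = [0.9, 0.9, 0.8]
dark = img.min(axis=2)                            # dark channel of the toy image
A = estimate_atmospheric_light(img, dark)         # picks the hazy pixel
```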
C. Estimation of transmission medium
The transmission describes the portion of light that is
not scattered and reaches the sensor, and it is dark in dense
haze regions and bright in light haze regions [19]. The
transmission depends on two factors: i) the distance between
an object and the observer; ii) the scattering coefficient which
is related to the turbidity of haze and wavelength and also
transmission depends on unknown depths [20].Transmission
medium can be defined as
( ) ( )
(3)
d(x) is the distance between the sensor and observable object
J(x), ß is scattering coefficient and in clear weather condition
there is no scattering of light and hence ß=0, therefore t(x)=1
and the captured image will be the haze free image I(x)=J(x).
In hazy weather conditions, the transmission medium t(x) decreases as the scene depth increases, and the scene radiance is attenuated exponentially with depth. A(1 - t(x)) is the air light, which results from scattered light and leads to a shift of the scene color. When the distance increases, the scene depth also increases; when t(x) is zero, only air light reaches the sensor, and the captured image contains only air light and no scene radiance. t(x) can also be estimated as

t(x) = 1 - ω min_{y∈Ω(x)} ( min_c I^c(y) / A^c ) (4)

The results may seem unnatural when the haze is removed completely. Thus, He et al. added a constant ω in the range 0 to 1, which prevents underestimation of the transmission medium [7]. A good transmission medium should be uniform, otherwise halo artifacts occur in the de-hazed image. Because the edge information of the obtained transmission is seriously lost, the transmission medium needs to be refined to improve accuracy.
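The transmission estimate of Eq. (4) can be written directly in NumPy; this is a self-contained sketch (naming is ours), combining the channel minimum and the patch minimum in one function:

```python
import numpy as np

def transmission(img, A, omega=0.85, patch=15):
    """Eq. (4): t(x) = 1 - omega * min over the patch and the channels
    of I^c(y) / A^c.  `img` is HxWx3, `A` a length-3 atmospheric light."""
    norm = (img / A).min(axis=2)            # per-pixel min over channels of I/A
    r = patch // 2
    padded = np.pad(norm, r, mode="edge")
    h, w = norm.shape
    t = np.empty_like(norm)
    for i in range(h):
        for j in range(w):
            t[i, j] = 1.0 - omega * padded[i:i + patch, j:j + patch].min()
    return t
```

With ω < 1 a small amount of haze is deliberately kept: in a haze-opaque region where I ≈ A the estimate approaches 1 − ω rather than 0.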
D. Scene radiance
Once A and t(x) are estimated, the scene radiance can be computed as

J(x) = ( I(x) - A ) / t(x) + A (5)

The recovered scene radiance is prone to noise when the transmission t(x) is close to zero; therefore the transmission is restricted by a lower bound t0, which is assigned the value 0.1:

J(x) = ( I(x) - A ) / max( t(x), t0 ) + A (6)

Incorrect estimation of the transmission medium can lead to problems such as false textures and blocking artifacts in the recovered scene radiance. Therefore the recovered scene radiance is refined using filters.
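Eq. (6) is a single vectorized expression; a minimal sketch (function name is ours):

```python
import numpy as np

def recover(img, A, t, t0=0.1):
    """Eq. (6): J(x) = (I(x) - A) / max(t(x), t0) + A.  The transmission
    is clamped from below by t0 to avoid amplifying noise where t ~ 0."""
    t = np.maximum(t, t0)[..., None]        # broadcast t over color channels
    return (img - A) / t + A
```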
III. REFINEMENT PHASE-SCENE RECOVERY BY
RESTORATION FILTERS
The recovered scene radiance J(x) contains artifacts, which can be removed by restoration filters such as the Gaussian, median, bilateral, Wiener and guided filters. In this work, the restoration filters are implemented and their performance is evaluated by computing subjective and objective metrics.
A. Gaussian filter
The Gaussian filter is a low-pass linear filter that smoothens the image. It is useful in removing false color textures, but it unnecessarily blurs the scene radiance when there are no annoying false textures. The Gaussian filter is effective when a proper patch size is used. The 2D Gaussian function is defined as

G(x,y) = ( 1 / (2πσ²) ) e^( -(x² + y²) / (2σ²) ) (7)

The value of sigma σ governs the degree of smoothing, and eventually how well the edges are preserved. For the refinement of a hazy image, this filter restores the recovered scene radiance quickly and efficiently.
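Because Eq. (7) is separable, the 2D blur can be applied as two 1D convolutions, which is what makes the Gaussian filter fast. A minimal NumPy sketch (names and the 3σ truncation are our choices):

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian smoothing of a 2-D array (Eq. 7)."""
    r = max(1, int(3 * sigma))              # truncate the kernel at 3 sigma
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()                            # normalise: flat areas unchanged
    pad = np.pad(img, r, mode="reflect")
    rows = np.apply_along_axis(np.convolve, 1, pad, k, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="valid")
```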
B. Median filter
The median filter is a well-known nonlinear filter which effectively removes noise while preserving edges. Its performance is, however, not much better than a Gaussian blur at high noise levels. The major limitations of this filter are that fine details are lost for large window sizes and that it is time-consuming.
C. Bilateral filter
The bilateral filter is a nonlinear filter which computes the weighted average of nearby pixels, in a manner very similar to Gaussian convolution.

International Journal of Research in Advent Technology, Vol.7, No.6, June 2019
E-ISSN: 2321-9637
Available online at www.ijrat.org
doi: 10.32622/ijrat.76201940

In order to preserve edges while smoothing, the bilateral filter takes into account the difference in value with the neighbors. The key idea of the bilateral filter is that for a pixel to influence another pixel, it should not only occupy a nearby location but also have a similar value. The bilateral filter is defined as

BF[I]_p = (1 / W_p) Σ_{q∈S} G_{σs}(||p - q||) G_{σr}(|I_p - I_q|) I_q (8)

where the normalization factor is

W_p = Σ_{q∈S} G_{σs}(||p - q||) G_{σr}(|I_p - I_q|) (9)

G_{σs} is a spatial Gaussian weighting that decreases the influence of distant pixels, and G_{σr} is a range Gaussian that decreases the influence of pixels q whose intensity values differ from I_p. The parameters σs and σr specify the amount of filtering for the image I, and the bilateral filter is controlled by these two parameters. As the range parameter σr increases, the bilateral filter gradually approximates the Gaussian more closely, and as the spatial parameter σs increases, larger features get smoothened. The disadvantage of the bilateral filter is that it introduces gradient reversal and a staircase effect in the restored image.
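Eqs. (8)-(9) can be implemented naively for a grayscale image; this unoptimized sketch (our naming, intensities assumed in [0, 1]) makes the two Gaussian weights explicit:

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter (Eqs. 8-9) for a 2-D grayscale array."""
    r = int(2 * sigma_s)
    h, w = img.shape
    pad = np.pad(img, r, mode="reflect")
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g_s = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))   # spatial Gaussian
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * r + 1, j:j + 2 * r + 1]
            g_r = np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))  # range Gaussian
            wgt = g_s * g_r
            out[i, j] = (wgt * win).sum() / wgt.sum() # Eq. (8), W_p from Eq. (9)
    return out
```

With a small σr, pixels on the far side of an edge receive near-zero range weight, which is why the edge survives the smoothing.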
D. Wiener filter
The Wiener filter is an optimum filter which minimizes the mean square error. It is capable of handling both the degradation function and the noise. The main advantage of the Wiener filter is that it is stable in the presence of nulls. The Wiener filter can be used to detect color distortion when an image with large white objects is processed using the DCP, and it is useful in recovering the contrast of large white areas of a hazy image [16].
E. Guided filter
The guided filter is a fast, non-approximate, linear-time filter. It has edge-preserving and smoothing properties like the bilateral filter but does not suffer from gradient reversal artifacts. With the help of a guidance image, it can make the filtering output more structured. The filtering output is computed as

q_i = ā_i I_i + b̄_i (10)

where ā_i and b̄_i are the averages of a_k and b_k, respectively, over all windows w_k that contain pixel i, and

a_k = ( (1/|w|) Σ_{i∈w_k} I_i p_i - μ_k p̄_k ) / ( σ_k² + ε ) (11)

b_k = p̄_k - a_k μ_k (12)

where i is the index of a pixel, k is the index of a local square window w with radius r, μ_k and σ_k² are the mean and variance of the guidance I in window k, p̄_k is the mean of the input p in window k, and ε is a regularization parameter controlling the degree of smoothness. The image restored by the guided filter is closer to the real scene. Transmissions estimated by these methods are darker in dense haze regions than in light haze regions, which is consistent with the haze distribution in a real scene. The guided filter has better performance near edges than the bilateral filter [21].
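Eqs. (10)-(12) reduce to a handful of box (mean) filters, which is why the guided filter runs in linear time. A compact sketch under our own naming, with a cumulative-sum box filter:

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via cumulative sums."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    return (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w]) / k**2

def guided_filter(I, p, r=8, eps=1e-3):
    """Guided filter (Eqs. 10-12): q_i = mean(a)_i * I_i + mean(b)_i."""
    mu_I, mu_p = box(I, r), box(p, r)
    var_I = box(I * I, r) - mu_I**2           # sigma_k^2
    cov_Ip = box(I * p, r) - mu_I * mu_p
    a = cov_Ip / (var_I + eps)                # Eq. (11)
    b = mu_p - a * mu_I                       # Eq. (12)
    return box(a, r) * I + box(b, r)          # Eq. (10)
```

For transmission refinement, `p` is the coarse transmission map and `I` the (grayscale) hazy image used as guidance.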
IV. PERFORMANCE ANALYSIS OF THE HAZE
IMAGING MODEL
In this work, experiments were conducted to implement and analyze the haze imaging model and the restoration filters in order to obtain high-quality haze-free images. To test the effectiveness of the haze imaging model, the atmospheric light, transmission map and elapsed time were analyzed by varying the patch size, the pixel count p% representing A, and the haze quality parameter ω. The analysis was carried out for 40 input samples; in this paper, results are shown for three input samples. Fig. 3a, 3b and 3c are the hazy images used for the analysis of the haze imaging model.
A. Effect of patch size
Selection of the patch size plays a significant role in the dark channel prior. Most researchers use a fixed patch size for estimating the dark channel prior. As the atmospheric light mainly depends on the dark channel, the study is carried out for various patch sizes. Table I gives the estimated atmospheric light with respect to patch size for two different samples; from the results obtained, the atmospheric light does not change significantly for patch sizes greater than 32x32, hence a patch size of 32x32 is acceptable. It is observed that a small patch size results in inaccurate atmospheric light estimation because bright pixels are considered as candidate pixels. A good transmission map should be uniform to avoid halo artifacts in the de-hazed image. Fig. 4 shows the effect of patch size on dark channel estimation. Fig. 5a and 5b show the transmission map for patch sizes of 32x32 and 7x7 respectively. Uniform transmission is obtained for the large patch size of 32x32, while the small patch size tends to produce color textures.
B. Effect of pixel count p%
After computing the dark channel, a suitable quantity of bright pixels needs to be selected for atmospheric light estimation. A 400x400 hazy image was considered, and the study was made with 0.05%, 0.1% and 0.2% of the total number of pixels. Table II gives the effect of the pixel count p% and the patch size on atmospheric light estimation. It is found that for a large patch size the dark channel value obtained is small, and hence the value of the atmospheric light obtained is smaller. Even when p% is varied, the variation of the atmospheric light is not significant for a large patch size. A patch size of 32x32 and a pixel count p% = 0.05% give the best estimation of the atmospheric light.
C. Effect of haze quality parameter (ω)
If the haze is removed completely, the image seems unnatural due to the loss of depth information; hence ω is considered in the transmission medium estimation. In this work, ω = 0.85 and ω = 0.9 are used for transmission medium estimation. Fig. 6 shows the scene radiance for two samples: the image seems natural for an ω value of 0.85, which results in a good estimation of the transmission medium.
D. Computational time
To study the effect of the variation in parameters (patch size, pixel count p% representing A, and haze quality parameter ω) on the complexity of the haze imaging model implementation, the computational time is calculated and compared in Table III. When the patch size increases, the elapsed time increases proportionally for the DC (dark channel) and the transmission medium. The computation time is constant for the atmospheric light and the scene radiance across different patch sizes: around 0.02 seconds for the atmospheric light and 0.018 seconds for the scene radiance. Similarly, p% has no effect on the computation time for the atmospheric light.
a) sample1 b) sample2 c) sample3
Figure 3: Hazy images
a) 32x32 b) 15x15 c) 7x7
Figure 4: Effect of patch size on dark channel estimation
a) patch size 32x32 b) patch size 7x7
Figure 5: Transmission map
a) b) c) d)
Figure 6: Scene radiance for image samples: a) and b) for ω=0.85; c) and d) for ω=0.9
Table I Atmospheric light for various patch sizes

Patch size | Atmospheric light A (sample 1) | Atmospheric light A (sample 2)
42x42      | 0.5928                         | 0.6275
40x40      | 0.5905                         | 0.6287
36x36      | 0.5925                         | 0.627
32x32      | 0.5745                         | 0.6463
17x17      | 0.5847                         | 0.8060
15x15      | 0.6086                         | 0.8188
11x11      | 0.6764                         | 0.7323
7x7        | 0.7169                         | 0.746

Table II Comparison of atmospheric light for various patch sizes and pixel selections p%

Pixel selection p% | A for 7x7 | A for 15x15 | A for 32x32
0.1%               | 0.7092    | 0.5925      | 0.5984
0.2%               | 0.6958    | 0.6049      | 0.5840
0.05%              | 0.7169    | 0.6086      | 0.5745
Table III Comparison of computational time (seconds) for DC and t(x) for various parameters

ω    | Patch size | p%    | DC   | t(x)
0.85 | 32x32      | 0.10% | 6.5  | 4.23
0.85 | 32x32      | 0.2%  | 4.6  | 4.35
0.85 | 32x32      | 0.05% | 4.6  | 6.14
0.9  | 32x32      | 0.10% | 4.02 | 5.8
0.9  | 32x32      | 0.2%  | 5.31 | 5.37
0.9  | 32x32      | 0.05% | 4.81 | 5.53
0.85 | 17x17      | 0.10% | 4.6  | 3.31
0.85 | 17x17      | 0.2%  | 3.48 | 4.84
0.85 | 17x17      | 0.05% | 4.18 | 3.24
0.9  | 17x17      | 0.10% | 4.8  | 3.88
0.9  | 17x17      | 0.2%  | 3.92 | 2.96
0.9  | 17x17      | 0.05% | 3.2  | 3.6
0.85 | 15x15      | 0.10% | 4.5  | 3.4
0.85 | 15x15      | 0.2%  | 3.9  | 4.9
0.85 | 15x15      | 0.05% | 3.6  | 4.37
0.9  | 15x15      | 0.10% | 4.2  | 3.06
0.9  | 15x15      | 0.2%  | 4.12 | 3.63
0.9  | 15x15      | 0.05% | 4.15 | 3.98
Table IV Relation between relative humidity and correlation coefficient

Correlation coefficient | Relative humidity
0.95                    | below 50%
0.76                    | 50-60%
0.42 to 0.37            | greater than 60%
Table V Performance metrics of various restoration filters (columns per ω: haze density, EPI, correlation coefficient, SSIM)

Patch size 32x32, p% = 0.1
Filter             | ω=0.9: Haze density | EPI    | Corr. coeff. | SSIM   | ω=0.85: Haze density | EPI    | Corr. coeff. | SSIM
Before restoration | 0.2302 | 0.7051 | 0.7051 | 0.9722 | 0.2659 | 0.7007 | 0.7007 | 0.9757
Gaussian           | 0.2302 | 0.996  | 0.996  | 1      | 0.2658 | 0.9959 | 0.9959 | 1
Median             | 0.2297 | 0.9733 | 0.9733 | 1      | 0.2653 | 0.9737 | 0.9737 | 1
Bilateral          | 0.2303 | 0.8975 | 0.8975 | 1      | 0.2659 | 0.899  | 0.899  | 1
Wiener             | 0.2304 | 0.9493 | 0.9493 | 1      | 0.266  | 0.9489 | 0.9489 | 1
Guided             | 0.2303 | 1      | 1      | 1      | 0.2659 | 1      | 1      | 1

Patch size 32x32, p% = 0.2
Before restoration | 0.2226 | 0.7052 | 0.7052 | 0.9722 | 0.2592 | 0.7008 | 0.7008 | 0.9747
Gaussian           | 0.2225 | 0.9961 | 0.9961 | 1      | 0.2591 | 0.9959 | 0.9959 | 1
Median             | 0.222  | 0.9736 | 0.9736 | 1      | 0.2586 | 0.974  | 0.974  | 1
Bilateral          | 0.2226 | 0.8987 | 0.8987 | 1      | 0.2593 | 0.9001 | 0.9001 | 1
Wiener             | 0.2227 | 0.9501 | 0.9501 | 1      | 0.2594 | 0.9496 | 0.9496 | 1
Guided             | 0.2226 | 1      | 1      | 1      | 0.2593 | 1      | 1      | 1

Patch size 32x32, p% = 0.05
Before restoration | 0.2179 | 0.7043 | 0.7043 | 0.9727 | 0.2551 | 0.6999 | 0.6999 | 0.9752
Gaussian           | 0.2178 | 0.9961 | 0.9961 | 1      | 0.255  | 0.996  | 0.996  | 1
Median             | 0.2173 | 0.9738 | 0.9738 | 1      | 0.2545 | 0.9743 | 0.9743 | 1
Bilateral          | 0.2179 | 0.8997 | 0.8997 | 1      | 0.2551 | 0.9011 | 0.9011 | 1
Wiener             | 0.218  | 0.9508 | 0.9508 | 1      | 0.2553 | 0.9502 | 0.9502 | 1
Guided             | 0.2179 | 1      | 1      | 1      | 0.2551 | 1      | 1      | 1
V. QUANTITATIVE ANALYSIS OF VARIOUS
RESTORATION FILTERS
An efficient restoration filter has to be identified for better refinement of the de-hazed image. For this, the performance of the Gaussian, median, Wiener, bilateral and guided filters is analyzed with the following performance indicators.
A. Structural Similarity Index(SSIM)
It measures the structural similarity between the input and de-hazed images through correlation (s), luminance (l) and contrast (c):

s(I_GS, I_F) = ( σ_{GS,F} + c3 ) / ( σ_GS σ_F + c3 );  l(I_GS, I_F) = ( 2 μ_GS μ_F + c1 ) / ( μ_GS² + μ_F² + c1 );  c(I_GS, I_F) = ( 2 σ_GS σ_F + c2 ) / ( σ_GS² + σ_F² + c2 ) (13)

SSIM(I_GS, I_F) = s(I_GS, I_F) · l(I_GS, I_F) · c(I_GS, I_F) (14)

where I_GS, μ_GS and σ_GS² are the input hazy image, its mean and its variance, respectively; I_F, μ_F and σ_F² are the filtered image, its mean and its variance, respectively; and σ_{GS,F} is the covariance between the input hazy image and the filtered image. With c3 = c2/2, the SSIM simplifies to

SSIM(I_GS, I_F) = ( (2 μ_GS μ_F + c1)(2 σ_{GS,F} + c2) ) / ( (μ_GS² + μ_F² + c1)(σ_GS² + σ_F² + c2) ) (15)

where c1 = (0.01·dr)² and c2 = (0.03·dr)² with dr = 255. Larger values indicate higher structural similarity.
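Eq. (15), taken over the whole image as a single window, can be sketched as follows (the function name is ours; per-window SSIM maps would average this over local windows instead):

```python
import numpy as np

def ssim_global(x, y, dr=255.0):
    """Single-window SSIM of Eq. (15) computed over the whole image."""
    c1, c2 = (0.01 * dr)**2, (0.03 * dr)**2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # variances
    cxy = ((x - mx) * (y - my)).mean()        # covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```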
B. Correlation coefficient
The correlation coefficient is a measure of similarity between the original image and the de-hazed image. It is also used to estimate the relative humidity in the hazy image. Relative humidity is the major factor that influences the formation of hazy weather: when the relative humidity is below 60%, visibility is highly influenced by the concentration of particulate matter. The relation between relative humidity and correlation coefficient is shown in Table IV.
C. Edge Preservation Index(EPI)
It measures the edge preservation ability of a filter. With ΔI_GS and ΔI_F denoting high-pass filtered (e.g. Laplacian) versions of the input and filtered images, it is expressed as the normalized correlation

EPI = Σ ( ΔI_GS − mean(ΔI_GS) )( ΔI_F − mean(ΔI_F) ) / sqrt( Σ ( ΔI_GS − mean(ΔI_GS) )² · Σ ( ΔI_F − mean(ΔI_F) )² ) (16)

A larger value indicates better edge preservation.
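One common way to compute an edge preservation index is to correlate high-pass (Laplacian) versions of the two images; the 4-neighbour Laplacian kernel and the normalisation below are our own choices, shown only as an illustrative sketch:

```python
import numpy as np

def _laplacian(img):
    """4-neighbour Laplacian high-pass filter, edge-padded."""
    p = np.pad(img, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img

def epi(original, filtered):
    """Correlation of mean-removed Laplacians; 1 means edges fully kept."""
    a = _laplacian(original); a -= a.mean()
    b = _laplacian(filtered); b -= b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())
```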
D. Haze density
The haze density of a hazy image is obtained by convolving the hazy image with a smoothing function; the haze veil is then generated by means of L(x,y) [22]. The computation of the haze veil can be expressed as

L(x,y) = I(x,y) ⊗ F(x,y) (17)

where ⊗ denotes convolution and F(x,y) is the smoothing function. A smaller value implies that the restored image has less haze density.
The performance metrics of the various filters (Gaussian, median, Wiener, bilateral and guided) are shown in Table V, and the computational time for the various filters is shown in Table VI. A hazy image of size 400x400 with a haze density of 0.5249 is considered.
From the performance metrics obtained, the guided and Gaussian filters restore the image better than the other filters. The Gaussian filter is preferred over the guided filter for restoring the hazy image for three reasons: i) the computation time of the guided filter is twice that of the Gaussian filter; ii) the guided filter processes a grayscale image, and its output is not visually pleasing and seems dim when compared with the Gaussian-filtered output; iii) the guided filter requires additional information, i.e. a guidance image.
Table VI Computation time for various filters
When the computational times are compared, the Wiener and Gaussian filters compute fastest, but the Gaussian filter is preferred due to its good performance metrics.
VI. QUALITATIVE EVALUATION
An input hazy image of size 400x400 was considered and analyzed by varying the patch size, pixel selection p% and parameter ω. The quality of the output image is compared in terms of visual pleasantness and residual haze. Fig. 7 shows the hazy image of size 400x400 with a haze density of 0.5249. Fig. 8 shows the various filter outputs for a patch size of 32x32, p = 0.05% and ω = 0.85, and Fig. 9 shows the various filter outputs for a patch size of 32x32, p = 0.05% and ω = 0.9. Comparing Figs. 8 and 9, better results were obtained for a patch size of 32x32, pixel percentage 0.05% and ω = 0.85.
Figure 7: Input hazy image of size 400x400
Filter    | Computation time (seconds)
Gaussian  | 0.2056
Median    | 4.248
Bilateral | 10.745
Wiener    | 0.1803
Guided    | 0.453
Figure 8: For a patch size of 32x32, p=0.05% and ω=0.85
(a) Haze Free Image before filtering
(b) Gaussian Filtered Image
(c) Median Filtered Image
(d) Bilateral Filtered Image
(e) Wiener Filtered Image
(f) Guided Filtered Image
Figure 9: For a patch size of 32x32, p=0.05% and ω=0.9
(a) Haze Free Image before Filtering
(b) Gaussian Filtered Image
(c) Median Filtered Image
(d) Bilateral Filtered Image
(e) Wiener Filtered Image
(f) Guided Filtered Image
VII. CONCLUSION
For the estimation of the atmospheric light, the patch size and the pixel count p% representing A are important; for the transmission medium estimation, the patch size and the haze quality parameter ω are important. This paper aims at finding suitable parameters (patch size, pixel selection p% and parameter ω) for effective de-hazing. From the results, the suitable parameters for the haze imaging model were found to be a patch size of 32x32, a pixel count of 0.05% and ω = 0.85. In the refinement phase, the Gaussian filter removes the haze completely without sacrificing the fidelity of the colors, and all the performance metrics obtained are good, especially the low processing time. The computational time is highly dependent on the hardware, and hence in the future dedicated hardware can be used to reduce it. We believe that our detailed survey and performance analysis of the haze imaging model and restoration filters will help researchers gain a better understanding of DCP-based de-hazing.
REFERENCES
[1] J. Amanollahi, S. Kaboodvandpour, A. M. Abdullah, M. F. Ramli,
―Accuracy assessment of moderate resolution image spectro
radiometer products for dust storms in semiarid environment,‖ Int. J.
Environ. Sci. Technol., vol.8, pp. 373–380 , March 2011.
[2] M.B. Potdar , ―Modelling of radiation transfer in earth's hazy
atmosphere and calculation of path radiances in IRS LISS-I bands,‖
Journal of the Indian Society of Remote Sensing , vol.18, pp:65-75,
1990.
[3] Zheng Wang, Junshi Xia, Lihui Wang et al. ―Atmospheric Correction
Methods for GF-1 WFV1 Data in Hazy Weather,‖ Journal of the
Indian Society of Remote Sensing, pp: 1–12, April 2017.
[4] L. C. Abdullah, L. I. Wong, M. Saari, A. Salmiaton M. S. Abdul Rashid
, ―Particulate matter dispersion and haze occurrence potential studies
at a local palm oil mill,‖ Int. J. Environ. Sci. Technol, vol. 4, pp.
271–278, March 2007.
[5] Perez Martínez, P.J. Miranda, R.M. Nogueira, et al., ―Emission
factors of air pollutants from vehicles measured inside road tunnels in
São Paulo: case study comparison,‖ Int. J. Environ. Sci. Technol.
,vol.11, pp: 2155–2168, April 2014.
[6] Yadwinder Singh , Er. Rajan Goyal , ―Haze Removal in Color Images
Using Hybrid Dark Channel Prior and Bilateral Filter,‖ International
Journal on Recent and Innovation Trends in Computing and
Communication, vol.2, pp:4165-4171, December 2014.
[7] K. He, J. Sun, X. Tang, ―Single Image Haze Removal Using Dark
Channel Prior,‖ IEEE Trans. Pattern Anal. Machine Intell., vol. 33,
pp: 2341-2353, December 2011.
[8] R. Fattal, ―Single Image Dehazing,‖ Proc. ACM SIGGRAPH,
vol.98, pp:1-10, 2008.
[9] Tien Ying Kuo, Yi-chung Lo, ―Depth estimation from a monocular
view of the outdoors,‖ IEEE Transactions on consumer electronics,
vol.57, pp: 817-822, June 2011.
[10] Bingquan Huo, Fengling Yin, ―Image dehazing with dark channel prior
and novel estimation mode,‖ International Journal of multimedia and
Ubiquitous engineering, vol. 10, pp: 13-22, 2015.
[11] Bo Hao Chen, Shih-chia Huang, ―Edge collapse-based dehazing
algorithm for visibility restoration in real scenes,‖ Journal of display
technology, vol:12, pp:964-970, 2015.
[12] Shih Chia Huang, Bo Hao Chen, ―An advanced single-image visibility
restoration algorithm for real world hazy scenes,‖ IEEE Transactions
on industrial engineering , vol. 62, pp:2962-2972, May 2015.
[13] Qingsong Zhu , Jiamimg Mai, Ling Shao, ―A fast single image haze
removal algorithm using color attenuation prior,‖ IEEE transactions
on image processing, vol. 4, pp :3522-3533, 2015.
[14] Xiaoxi Pan, FengyingXie, Zhiguo Jiang, Jihao Yin, ―Haze Removal
for a Single Remote Sensing Image Based on Deformed Haze Imaging
Model,‖ IEEE signal processing letters, vol.22, pp: 1806-1810, 2015.
[15] Xiaoguang Li , Huiying Huang, ―An Improved Haze Removal
Algorithm Based on Genetic Fuzzy Clustering,‖ International Journal
of Hybrid Information Technology, vol. 8, pp:261-270, 2015.
[16] Harpoonamdeep Kaur, Dr. Rajiv Mahajan, ―A Review on Various
Visibility Restoration Techniques,‖ International Journal of Advanced
Research in Computer and Communication Engineering, vol.3, pp :
6622-6625, May 2014.
[17] Sungmin Lee, Seokmin Yun, Ju-Hun Nam, Chee Sun Won,
Seung-Won Jung, ―A review on dark channel prior based image
dehazing algorithms,‖ EURASIP Journal on Image and Video
Processing, vol.4, pp:1-23, 2016.
[18] Jiao Long, Zhenwei Shi, Wei Tang, Changshui Zhang, ―Single remote
sensing image dehazing,‖ IEEE. Geosci. Remote Sens. Letters,
vol.11, pp:59–63, January 2014.
[19] A. Elisee, Zhengning Wang, Liping Li, ―A comprehensive study on
fast image dehazing techniques‖ International Journal of Computer
Science and mobile computing, vol.2, pp :146-152, September 2013.
[20] Bolun Cai, Xiangmin Xu, Kui Jia, Chunmei Qing, Dacheng Tao, ―
Dehaze net :An end to end system for single image haze removal,‖
IEEE Transactions on Image Processing, vol. 25, pp:5187-5198, May
2016.
[21] Chieh Chi Kao, Jui Hsin Lai , Shao-Yi Chien, ―VLSI Architecture
Design of Guided Filter for 30fps full HD video,‖ IEEE Transactions
on Circuits and systems for videos, vol:24, pp:513-524, March 2014.
[22] Guo Fan, Tang Jin, Cai Zi-xing, ―Objective measurement for image
defogging algorithms,‖ J. Cent. South Univ., vol.21, pp: 272−286,
2014.
AUTHORS PROFILE
N. Ameena Bibi is an Assistant Professor of Electronics and Communication Engineering at Government College of Technology, Coimbatore. Her work focuses specifically on de-hazing and its impact on remote sensing and real-world images.
Dr. C. Vasanthanayaki is a Professor & Head of the Department of Electronics and Communication Engineering at Government College of Engineering, Salem. Her work focuses specifically on Image Processing and VLSI Systems.