IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Single image dehazing based on efficient transmission estimation (AVVENIRE TECHNOLOGIES)
We propose a novel haze imaging model for single image haze removal. The haze imaging model is formulated using the dark channel prior (DCP), scene radiance, intensity, atmospheric light and the transmission medium. The dark channel prior is based on the statistics of outdoor haze-free images. We find that, in most of the local regions which do not cover the sky, some pixels (called dark pixels) very often have very low intensity in at least one color (RGB) channel. In hazy images, the intensity of these dark pixels in that channel is mainly contributed by the airlight. Therefore, these dark pixels can directly provide an accurate estimation of the haze transmission. Combining a haze imaging model and an interpolation method, we can recover a high-quality haze-free image and produce a good depth map.
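The pipeline described above can be sketched in a few functions. This is a minimal illustrative version, assuming the conventional DCP parameter choices (15-pixel patches, omega = 0.95, the brightest 0.1% of dark-channel pixels for the airlight); none of these constants come from the abstract itself.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the RGB channels, then a local minimum
    filter over a patch x patch neighbourhood."""
    img = np.asarray(img, dtype=float)
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.full_like(mins, np.inf)
    h, w = mins.shape
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def estimate_atmospheric_light(img, dark, frac=0.001):
    """Atmospheric light A: brightest image pixels among the top `frac`
    of dark-channel values."""
    n = max(1, int(dark.size * frac))
    idx = np.argsort(dark.ravel())[-n:]
    return np.asarray(img, dtype=float).reshape(-1, 3)[idx].max(axis=0)

def estimate_transmission(img, A, omega=0.95, patch=15):
    """t(x) = 1 - omega * dark_channel(I / A); omega < 1 keeps a trace
    of haze so distant objects still look natural."""
    return 1.0 - omega * dark_channel(np.asarray(img, dtype=float) / A, patch)

def recover(img, A, t, t0=0.1):
    """Invert the haze imaging model I = J*t + A*(1 - t); t is clamped
    from below to avoid amplifying noise where transmission is tiny."""
    t = np.maximum(t, t0)[..., None]
    return (np.asarray(img, dtype=float) - A) / t + A
```

In practice the raw transmission map is refined (by the interpolation method mentioned above, or by guided filtering) before `recover` is applied.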
This paper analyzes different haze removal methods. Haze causes trouble for many computer graphics/vision applications because it reduces the visibility of the scene. Airlight and attenuation are the two basic phenomena of haze: airlight adds whiteness to the scene, while attenuation reduces its contrast. Haze removal techniques recover the colour and contrast of the scene, and many applications such as object detection, surveillance and consumer electronics apply them. This paper focuses on methods for effectively eliminating haze from digital images and also indicates the demerits of current techniques.
1. The document discusses techniques for removing haze from digital images. It begins with an introduction to how haze forms and degrades image quality.
2. It then describes several categories of haze removal techniques, including multiple image dehazing methods that use multiple images and single image dehazing methods that rely on statistical assumptions. Specific techniques discussed include dark channel prior, guided image filtering, and bilateral filtering.
3. The document focuses on comparing different haze removal approaches and evaluating which methods produce higher quality results for single image dehazing.
IJRET-V1I1P2 - A Survey Paper On Single Image and Video Dehazing Methods (ISAR Publications)
Most computer applications use digital images, and digital image processing plays an important role in the analysis and interpretation of data in digital form. Images taken in foggy weather conditions often suffer from poor visibility and clarity. After a study of several fast dehazing methods, such as Tan's technique, Fattal's technique and He et al.'s technique, the Dark Channel Prior (DCP) proposed by He et al. emerges as the most substantive technique for dehazing. This survey aims to study various existing methods used for dehazing, such as polarization, dark channel prior and depth-map-based methods.
The document discusses sources of distortion in underwater images such as light scattering and color change. It proposes a method called Wavelength Compensation and Dehazing (WCID) to enhance underwater image visibility and color fidelity. WCID uses a hazy image formation model and dark channel prior to estimate depth maps and remove haze. It can also detect and remove effects of artificial light sources. The method is shown to outperform other dehazing techniques in experiments by achieving higher signal-to-noise ratios and more robust performance at different water depths.
This document provides an overview of digital image processing. It discusses what image processing entails, including enhancing images, extracting information, and pattern recognition. It also describes various image processing techniques such as radiometric and geometric correction, image enhancement, classification, and accuracy assessment. Radiometric correction aims to reduce noise from sources like the atmosphere, sensors, and terrain. Geometric correction geometrically registers images. Image enhancement improves interpretability. Classification categorizes pixels. The document outlines both supervised and unsupervised classification methods.
Raw remote sensing images contain errors that must be corrected through pre-processing before analysis. Pre-processing involves radiometric, geometric, and atmospheric corrections. Radiometric corrections address distortions in pixel values from issues like noise, striping, or dropped scan lines. Geometric corrections rectify distortions caused by terrain, sensor geometry, and platform movement using ground control points. Atmospheric corrections reduce haze effects through techniques like dark object subtraction that assume minimum surface reflectance values. Pre-processing is essential for producing accurate, georeferenced images suitable for analysis and interpretation.
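The dark object subtraction mentioned above reduces to subtracting each band's minimum value, under the classic assumption that every band contains at least one near-zero-reflectance pixel, so the band minimum approximates the additive haze offset. A minimal sketch (the `(bands, H, W)` array layout is an illustrative choice):

```python
import numpy as np

def dark_object_subtraction(bands):
    """Subtract each band's minimum (the 'dark object' value) to reduce
    additive haze. `bands` has shape (n_bands, H, W)."""
    bands = np.asarray(bands, dtype=float)
    offsets = bands.reshape(bands.shape[0], -1).min(axis=1)
    return bands - offsets[:, None, None], offsets
```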
This survey paper has provided clear and detailed information about the degradation of the underwater images and enhancement techniques for improving the image quality. It describes the overall insight on the restoration techniques using neural networks and physical based methods. The datasets and subjective tasks required for the filtering of the underwater images are also covered.
A Review on Airlight Estimation Haze Removal Algorithms (IRJET Journal)
This document reviews algorithms for estimating airlight to remove haze from images. It discusses how haze degrades image quality by attenuating light reflected from objects and adding atmospheric light. Common haze removal techniques rely on an atmospheric scattering model. The dark channel prior method estimates atmospheric light using the fact that at least one color channel will have some pixels with very low intensities in haze-free images. Bilateral, trilateral and CLAHE filters can then be used as post-processing steps to improve results. The document aims to develop new airlight estimation methods with lower computational complexity.
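Of the post-processing filters listed, the bilateral filter is the easiest to sketch. This brute-force grayscale version (parameter names and defaults are illustrative, and production code would use an optimized implementation) smooths flat regions while preserving edges by weighting neighbours with both a spatial and a range Gaussian:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2-D grayscale image in [0, 1].
    Each output pixel is a weighted mean of its neighbours, where the
    weight falls off with spatial distance AND intensity difference."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            wgt = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                         - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            num += wgt * shifted
            den += wgt
    return num / den
```

Because the range term vanishes across a strong step edge, both sides of the edge are smoothed independently instead of being blurred together.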
A fast single image haze removal algorithm using color attenuation prior (LogicMindtech Nologies)
The document discusses underwater image enhancement techniques. It states that underwater images suffer from light scattering and color changes that reduce visibility and introduce haze. It proposes using the Wavelength Compensation and Dehazing (WCID) algorithm to enhance underwater images by compensating for these effects. WCID achieves superior visibility and color fidelity over other techniques like dark-channel dehazing. It works by using an underwater image formation model and a residual energy ratio to remove haze and restore clarity. The results show WCID produces the highest signal-to-noise ratio, demonstrating its effectiveness for underwater image enhancement.
Single Image Fog Removal Based on Fusion Strategy (csandit)
Images of outdoor scenes are degraded by absorption and scattering by the suspended particles and water droplets in the atmosphere. The light coming from a scene towards the camera is attenuated by fog and blended with the airlight, which adds whiteness to the scene. Fog removal is highly desired in computer vision applications, since removing fog significantly increases the visibility of the scene and is more visually pleasing. In this paper, we propose a method that can handle both homogeneous and heterogeneous fog and that has been tested on several types of synthetic and real images. We formulate the restoration problem based on a fusion strategy that combines two images derived from a single foggy image: one is derived using a contrast-based method, while the other is derived using a statistics-based approach. These derived images are then weighted by a specific weight map to restore the image. We have performed a qualitative and quantitative evaluation on 60 images, using the mean square error (MSE) and peak signal-to-noise ratio (PSNR) as performance metrics to compare our technique with state-of-the-art algorithms. The proposed technique is simple and shows comparable or even slightly better results than the state-of-the-art algorithms used for defogging a single image.
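The two metrics named in this evaluation are standard and straightforward to compute; for 8-bit images the peak value is 255:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two same-shaped images."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```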
Accelerated Joint Image Despeckling Algorithm in the Wavelet and Spatial Domains (CSCJournals)
This document summarizes an algorithm for reducing speckle noise in images using a two-stage approach combining wavelet and spatial domain filtering. The first stage estimates the optimal parameter value for a spatial speckle reduction filter based on edge pixel statistics and noise variance. The second stage then uses the optimized spatial filter to additionally smooth wavelet approximation sub-band coefficients. A complexity reduction method for wavelet decomposition is also proposed. Existing noise reduction methods like the Lee, Kuan and Frost filters are reviewed for context. The results of applying the proposed two-stage algorithm are promising in terms of improved image quality.
Image Denoising Using Earth Mover's Distance and Local Histograms (CSCJournals)
In this paper an adaptive range and domain filtering is presented. In the proposed method, local histograms are computed to tune the range and domain extensions of the bilateral filter. A noise histogram is estimated to measure the noise level at each pixel in the noisy image, and the extensions of the range and domain filters are determined based on the pixel noise level. Experimental results show that the proposed method effectively removes the noise while preserving the details. The proposed method performs better than the bilateral filter, and the restored test images have higher PSNR than those obtained by applying the popular BayesShrink wavelet denoising method.
High Efficiency Haze Removal Using Contextual Regularization Algorithm (IRJET Journal)
This document presents a new contextual regularization algorithm for high efficiency haze removal. It begins with an overview of existing haze removal techniques and their limitations, such as halo effects and reduced image quality. It then proposes a method that estimates airlight using multiple transmission maps and cross bilateral filtering to remove noise and enhance edges. This integrated approach yields faster execution speeds and superior recovery effects compared to existing filters. The key contribution is a new contextual regularization that allows incorporating a filter bank into dehazing images. Experimental results show the proposed method removes haze without changing the original scene or producing saturated images, while existing techniques can remove wanted image information or produce unnatural results.
Visibility Enhancement of Hazy Images using Depth Estimation Concept (IRJET Journal)
This document presents a methodology to improve the visibility of hazy images using depth estimation. The proposed method first converts the input hazy image into white balance and contrast enhanced images. Depth estimation is then performed on these images to estimate the unknown depth from the camera to objects in the scene. Weight maps are generated from the white balance and contrast images and applied to Gaussian and Laplacian pyramids to estimate depth. A gamma correction is applied to the depth estimated image to further improve visibility. Experimental results show that the gamma corrected image has better visibility and a higher PSNR than the depth estimated image alone.
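The gamma correction step mentioned above is a single power-law mapping. A minimal sketch, assuming intensities normalized to [0, 1] (the default gamma value is illustrative, not taken from the paper):

```python
import numpy as np

def gamma_correct(img, gamma=1.5):
    """Power-law mapping out = in ** (1/gamma). For gamma > 1 this lifts
    mid-tones (improving visibility in dark regions) while leaving the
    extremes 0 and 1 fixed."""
    img = np.clip(np.asarray(img, dtype=float), 0.0, 1.0)
    return img ** (1.0 / gamma)
```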
Review on Various Algorithm for Cloud Detection and Removal for Images (IJERA Editor)
Clouds are one of the significant obstacles in extracting information from tea lands using remote sensing imagery, and different approaches have been attempted to solve this problem with varying levels of success. In the past decade, a number of cloud removal approaches have been proposed. In this paper we review and discuss cloud detection and removal: the need for cloud removal, its principles, the cloud removal process and various cloud removal algorithms. The paper attempts to give a recipe for selecting one of the popular cloud removal algorithms, such as the information cloning algorithm, the cloud distortion model and filtering procedure, and semi-automated cloud/shadow and haze identification and removal. A cloud removal approach based on information cloning is introduced: using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The patch-based information reconstruction is mathematically formulated as a Poisson equation and solved using a global optimization process. The choice among these cloud detection algorithms is decided based on the specific requirements of the project.
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ... (CSCJournals)
In this paper we present a comparative study on fusion of visual and thermal images using different wavelet transformations. Here, coefficients of discrete wavelet transforms from both visual and thermal images are computed separately and combined. Next, the inverse discrete wavelet transformation is applied to obtain the fused face image. Both Haar and Daubechies (db2) wavelet transforms have been used to compare recognition results. For the experiments, the IRIS Thermal/Visual Face Database was used. Experimental results using Haar and Daubechies wavelets show that the approach presented here achieves a maximum success rate of 100% in many cases.
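The wavelet-domain fusion described above can be sketched with a one-level Haar transform. The fusion rules here (average the approximation band, pick the larger-magnitude detail coefficient) are common defaults for illustration, not necessarily the exact rules used in the paper:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar DWT: approximation (ll) plus three detail bands.
    Input must have even height and width."""
    x = np.asarray(x, dtype=float)
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row high-pass
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2], x[:, 1::2] = lo + hi, lo - hi
    return x

def fuse_wavelet(a, b):
    """Fuse two images: average approximations, max-abs select details."""
    A, B = haar2d(a), haar2d(b)
    ll = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(A[1:], B[1:])]
    return ihaar2d(ll, *details)
```

Max-abs selection keeps the sharper of the two sources at each location, which is why wavelet fusion preserves edges from both the visual and the thermal image.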
IRJET- A Comparative Analysis of various Visibility Enhancement Techniques th... (IRJET Journal)
This document provides a summary and analysis of various single image defogging techniques. It begins with an abstract that outlines how different fog removal algorithms detect and remove fog to improve image visibility. It then reviews several fog removal techniques from research papers. These include using fog density perception to estimate transmission maps, enhancing contrast using dark channel priors, combining dark channel priors with fuzzy logic for efficiency, using dark channel priors and guided filters to extract transmission maps and enhance images. The document aims to analyze and compare different techniques for efficiently removing fog from digital images.
This lecture is about the particle image velocimetry (PIV) technique. It includes discussion of the basic elements of a PIV setup, image capturing, laser lighting, synchronization and correlation analysis.
Analysis of remote sensing imagery involves identifying targets through their tone, shape, size, pattern, texture, and relationships to other objects. Targets may be environmental or artificial features appearing as points, lines, or areas. Interpretation relies on how radiation is reflected or emitted from targets and recorded by sensors to form images. The key to interpretation is recognizing targets based on these visual elements.
This document discusses band ratioing, image differencing, and principal and canonical component analysis techniques in remote sensing. Band ratioing involves dividing pixel values in one band by another band to enhance spectral differences. Image differencing calculates differences between images after alignment. Principal component analysis transforms correlated spectral data into fewer uncorrelated bands retaining most information, while canonical component analysis aims to maximize separability of user-defined features. These techniques can help analyze multispectral and hyperspectral remote sensing data.
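Band ratioing and image differencing reduce to elementwise array operations on co-registered bands. A hedged sketch (the small `eps` guarding division by zero is an implementation choice):

```python
import numpy as np

def band_ratio(b_num, b_den, eps=1e-9):
    """Pixel-wise ratio of two co-registered bands (e.g. NIR / red),
    enhancing spectral differences between surface materials."""
    return np.asarray(b_num, dtype=float) / (np.asarray(b_den, dtype=float) + eps)

def image_difference(img_t1, img_t2):
    """Change detection by subtracting two aligned acquisitions."""
    return np.asarray(img_t2, dtype=float) - np.asarray(img_t1, dtype=float)
```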
IRJET- Color Balance and Fusion for Underwater Image Enhancement: Survey (IRJET Journal)
This document summarizes research on methods for enhancing underwater images. It discusses how underwater images suffer from poor visibility due to light scattering and absorption in water. Several approaches are then summarized that aim to restore and enhance degraded underwater images through techniques like color balance and fusion. Specifically, the document surveys methods that use single-image approaches without specialized hardware by fusing color-compensated and white-balanced versions of the original image. It also discusses other literature on underwater image enhancement through techniques like dehazing, wavelength compensation, and contrast adjustment.
The document discusses superresolution technology that can improve the resolution of infrared camera images. It begins by explaining the basic problem that small objects may be invisible or measured incorrectly in infrared images due to pixel size limitations. It then describes how superresolution works by using multiple images and deconvolution algorithms to effectively decrease pixel pitch by 1.6x and increase usable resolution also by 1.6x compared to normal images. Experimental results show that superresolution detects spatial frequencies about 50% higher than the camera's detector cutoff and improves temperature measurement accuracy compared to interpolation. The technology will be available as a software update for all current Testo infrared cameras.
IRJET- Analysis of Underwater Image Visibility using White Balance Algorithm (IRJET Journal)
1) The document discusses several techniques for analyzing and improving the visibility of underwater images, including white balancing algorithms, red channel methods, and image fusion approaches.
2) White balancing aims to remove unwanted color casts to improve image aspects like color and contrast. Image fusion techniques combine input images and weight maps to enhance color contrast and visibility of distant objects degraded by the underwater medium.
3) The techniques were evaluated using metrics like PSNR and by comparing restored images to originals. Results found white balancing produced high accuracy recovery of important faded features while image fusion generally improved global contrast, color, and details.
Comparative study on image fusion methods in spatial domain (IAEME Publication)
This document provides a comparative study of various image fusion methods in the spatial domain. It begins by introducing image fusion and its applications. Section 2 then describes several common fusion algorithms in the spatial domain, including average, select maximum/minimum, Brovey transform, intensity hue saturation (IHS), and principal component analysis (PCA). Section 3 defines image fusion quality measures like entropy, mean squared error, and normalized cross correlation. Section 4 provides a comparative analysis of the spatial domain fusion techniques based on parameters like simplicity, type of resources, and disadvantages. It finds that spatial domain methods provide high spatial resolution but have issues like image blurring and producing less informative outputs. The document concludes that the best algorithm depends on the problem at hand.
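Two of the quality measures named in Section 3 can be sketched directly; the 256-bin histogram for 8-bit grey levels is a conventional choice:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits. Higher
    entropy suggests the fused image carries more information."""
    hist, _ = np.histogram(np.asarray(img).ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def normalized_cross_correlation(a, b):
    """Zero-mean normalized cross correlation in [-1, 1] between a fused
    image and a reference."""
    a = np.asarray(a, dtype=float).ravel(); b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean(); b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```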
Image interpretation keys and resolutions are essential for remote sensing. There are several keys that aid visual interpretation including tone, size, shape, texture, pattern, location, association, shadow and site. Higher image resolution means more discernible details, with pixel resolution referring to image size in pixels and spatial resolution depending on ground sample distance. Other types of resolutions include spectral, temporal, and radiometric resolutions which influence how finely differences can be distinguished.
An Efficient Visibility Enhancement Algorithm for Road Scenes Captured by Int... (madhuricts)
The document proposes an efficient algorithm to enhance visibility in road scene images captured during inclement weather. It uses a hybrid dark channel prior technique along with color analysis to estimate atmospheric light and transmission map. This is then used along with 3D geometric models to recover scene radiance and remove haze. The proposed method accurately estimates atmospheric light position and improves visibility while being computationally efficient compared to existing techniques. It has applications in intelligent transportation systems for traffic surveillance and vehicle detection/tracking.
Infrared image enhancement using wavelet transform (Alexander Decker)
This document summarizes an article about enhancing infrared images using wavelet transforms. It discusses how wavelet transforms can be used to separate image details into different frequency subbands. Then a homomorphic enhancement algorithm is applied to transform the details into illumination and reflectance components, amplifying the reflectance to make details more clear. Finally, an inverse wavelet transform is performed to reconstruct an enhanced infrared image with more visible details. The document provides background on infrared imaging and different infrared bands. It also reviews literature on using wavelets for target detection by exploiting scale, edge, and contrast differences between targets and clutter.
A Review on Airlight Estimation Haze Removal AlgorithmsIRJET Journal
This document reviews algorithms for estimating airlight to remove haze from images. It discusses how haze degrades image quality by attenuating light reflected from objects and adding atmospheric light. Common haze removal techniques rely on a atmospheric scattering model. The dark channel prior method estimates atmospheric light using the fact that at least one color channel will have some pixels with very low intensities in haze-free images. Bilateral, trilateral, and CLAHE filters can then be used as post-processing steps to improve results. The document aims to develop new airlight estimation methods with lower computational complexity.
A fast single image haze removal algorithm using color attenuation priorLogicMindtech Nologies
IMAGE PROCESSING Projects for M. Tech, IMAGE PROCESSING Projects in Vijayanagar, IMAGE PROCESSING Projects in Bangalore, M. Tech Projects in Vijayanagar, M. Tech Projects in Bangalore, IMAGE PROCESSING IEEE projects in Bangalore, IEEE 2015 IMAGE PROCESSING Projects, MATLAB Image Processing Projects, MATLAB Image Processing Projects in Bangalore, MATLAB Image Processing Projects in Vijayangar
The document discusses underwater image enhancement techniques. It states that underwater images suffer from light scattering and color changes that reduce visibility and introduce haze. It proposes using the Wavelength Compensation and Dehazing (WCID) algorithm to enhance underwater images by compensating for these effects. WCID achieves superior visibility and color fidelity over other techniques like dark-channel dehazing. It works by using an underwater image formation model and a residual energy ratio to remove haze and restore clarity. The results show WCID produces the highest signal-to-noise ratio, demonstrating its effectiveness for underwater image enhancement.
Single Image Fog Removal Based on Fusion Strategy csandit
Images of outdoor scenes are degraded by absorption and scattering by the suspended particles and water droplets in the atmosphere. The light coming from a scene towards the camera is attenuated by fog and is blended with the airlight which adds more whiteness into the scene. Fog removal is highly desired in computer vision applications. Removing fog from images can
significantly increase the visibility of the scene and is more visually pleasing. In this paper, we propose a method that can handle both homogeneous and heterogeneous fog which has been tested on several types of synthetic and real images. We formulate the restoration problem
based on fusion strategy that combines two derived images from a single foggy image. One of
the images is derived using contrast based method while the other is derived using statistical
based approach. These derived images are then weighted by a specific weight map to restore
the image. We have performed a qualitative and quantitative evaluation on 60 images. We use
the mean square error and peak signal-to-noise ratio as the performance metrics to compare
our technique with the state-of-the-art algorithms. The proposed technique is simple and shows
comparable or even slightly better results with the state-of-the-art algorithms used for
defogging a single image.
Accelerated Joint Image Despeckling Algorithm in the Wavelet and Spatial DomainsCSCJournals
This document summarizes an algorithm for reducing speckle noise in images using a two-stage approach combining wavelet and spatial domain filtering. The first stage estimates the optimal parameter value for a spatial speckle reduction filter based on edge pixel statistics and noise variance. The second stage then uses the optimized spatial filter to additionally smooth wavelet approximation sub-band coefficients. A complexity reduction method for wavelet decomposition is also proposed. Existing noise reduction methods like the Lee, Kuan and Frost filters are reviewed for context. The results of applying the proposed two-stage algorithm are promising in terms of improved image quality.
Image Denoising Using Earth Mover's Distance and Local HistogramsCSCJournals
In this paper an adaptive range and domain filtering is presented. In the proposed method local histograms are computed to tune the range and domain extensions of bilateral filter. Noise histogram is estimated to measure the noise level at each pixel in the noisy image. The extensions of range and domain filters are determined based on pixel noise level. Experimental results show that the proposed method effectively removes the noise while preserves the details. The proposed method performs better than bilateral filter and restored test images have higher PSNR than those obtained by applying popular Bayesshrink wavelet denoising method.
High Efficiency Haze Removal Using Contextual Regularization AlgorithmIRJET Journal
This document presents a new contextual regularization algorithm for high efficiency haze removal. It begins with an overview of existing haze removal techniques and their limitations, such as halo effects and reduced image quality. It then proposes a method that estimates airlight using multiple transmission maps and cross bilateral filtering to remove noise and enhance edges. This integrated approach yields faster execution speeds and superior recovery effects compared to existing filters. The key contribution is a new contextual regularization that allows incorporating a filter bank into dehazing images. Experimental results show the proposed method removes haze without changing the original scene or producing saturated images, while existing techniques can remove wanted image information or produce unnatural results.
Visibility Enhancement of Hazy Images using Depth Estimation Concept (IRJET Journal)
This document presents a methodology to improve the visibility of hazy images using depth estimation. The proposed method first converts the input hazy image into white balance and contrast enhanced images. Depth estimation is then performed on these images to estimate the unknown depth from the camera to objects in the scene. Weight maps are generated from the white balance and contrast images and applied to Gaussian and Laplacian pyramids to estimate depth. A gamma correction is applied to the depth estimated image to further improve visibility. Experimental results show that the gamma corrected image has better visibility and a higher PSNR than the depth estimated image alone.
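The gamma-correction step mentioned at the end of the pipeline is a standard power-law mapping; a minimal NumPy sketch (the exponent here is illustrative, not the paper's chosen value):

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Power-law (gamma) correction on a [0,1]-range image; gamma < 1
    brightens dark regions, gamma > 1 darkens them."""
    return np.clip(img, 0.0, 1.0) ** gamma

dim = np.array([[0.04, 0.25], [0.5, 1.0]])   # a dark toy patch
bright = gamma_correct(dim)                  # every value moves up toward 1
```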
Review on Various Algorithm for Cloud Detection and Removal for Images (IJERA Editor)
Clouds are one of the significant obstacles in extracting information from tea lands using remote sensing imagery. Different approaches have been attempted to solve this problem with varying levels of success, and in the past decade a number of cloud removal approaches have been proposed. In this paper we review and discuss cloud detection and removal, the need for it, its principles, the cloud removal process, and various cloud removal algorithms. The paper attempts to give a recipe for selecting one of the popular cloud removal algorithms, such as the information cloning algorithm, the cloud distortion model and filtering procedure, and semi-automated cloud/shadow and haze identification and removal. A cloud removal approach based on information cloning is introduced: using generic interpolation machinery based on solving Poisson equations, a variety of novel tools are introduced for seamless editing of image regions. The patch-based information reconstruction is mathematically formulated as a Poisson equation and solved using a global optimization process. The choice among these cloud detection algorithms is decided by the specific requirements of the project.
Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human ... (CSCJournals)
In this paper we present a comparative study on fusion of visual and thermal images using different wavelet transformations. Here, coefficients of discrete wavelet transforms from both visual and thermal images are computed separately and combined. Next, inverse discrete wavelet transformation is taken in order to obtain fused face image. Both Haar and Daubechies (db2) wavelet transforms have been used to compare recognition results. For experiments IRIS Thermal/Visual Face Database was used. Experimental results using Haar and Daubechies wavelets show that the performance of the approach presented here achieves maximum success rate of 100% in many cases.
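The coefficient-level fusion described above can be illustrated with a hand-rolled one-level Haar transform. The fusion rule shown, averaging the approximation band and keeping the larger-magnitude detail coefficient, is one common choice, not necessarily the authors' exact rule:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar DWT: returns approximation (LL) and three
    detail bands (LH, HL, HH). Image sides must be even."""
    a = (img[0::2, :] + img[1::2, :]) / 2; d = (img[0::2, :] - img[1::2, :]) / 2
    LL = (a[:, 0::2] + a[:, 1::2]) / 2; LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2; HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def fuse(img1, img2):
    """Fuse two registered images: average the approximations, keep the
    larger-magnitude detail coefficient from either source."""
    c1, c2 = haar2(img1), haar2(img2)
    LL = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(c1[1:], c2[1:])]
    return ihaar2(LL, *details)

x = np.random.default_rng(7).uniform(0.0, 1.0, (8, 8))
fused = fuse(x, x)    # fusing an image with itself must return it unchanged
```

The paper applies the same idea with Haar and Daubechies (db2) bases to registered visual and thermal face images before feeding the fused result to recognition.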
IRJET- A Comparative Analysis of various Visibility Enhancement Techniques th... (IRJET Journal)
This document provides a summary and analysis of various single image defogging techniques. It begins with an abstract that outlines how different fog removal algorithms detect and remove fog to improve image visibility. It then reviews several fog removal techniques from research papers. These include using fog density perception to estimate transmission maps, enhancing contrast using dark channel priors, combining dark channel priors with fuzzy logic for efficiency, using dark channel priors and guided filters to extract transmission maps and enhance images. The document aims to analyze and compare different techniques for efficiently removing fog from digital images.
This lecture is about the particle image velocimetry (PIV) technique. It includes discussion of the basic elements of a PIV setup, image capturing, laser lights, synchronization, and correlation analysis.
Analysis of remote sensing imagery involves identifying targets through their tone, shape, size, pattern, texture, and relationships to other objects. Targets may be environmental or artificial features appearing as points, lines, or areas. Interpretation relies on how radiation is reflected or emitted from targets and recorded by sensors to form images. The key to interpretation is recognizing targets based on these visual elements.
This document discusses band ratioing, image differencing, and principal and canonical component analysis techniques in remote sensing. Band ratioing involves dividing pixel values in one band by another band to enhance spectral differences. Image differencing calculates differences between images after alignment. Principal component analysis transforms correlated spectral data into fewer uncorrelated bands retaining most information, while canonical component analysis aims to maximize separability of user-defined features. These techniques can help analyze multispectral and hyperspectral remote sensing data.
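The first two operations described above are simple pixel-wise arithmetic; a minimal NumPy sketch on a toy two-band scene (band values are illustrative):

```python
import numpy as np

def band_ratio(b1, b2, eps=1e-6):
    """Divide one spectral band by another to enhance spectral differences;
    eps guards against division by zero-valued pixels."""
    return b1 / (b2 + eps)

def image_difference(t1, t2):
    """Pixel-wise change between two co-registered acquisitions."""
    return t2.astype(float) - t1.astype(float)

# Toy 2x2 scene: vegetation reflects strongly in NIR, weakly in red.
nir = np.array([[0.5, 0.5], [0.1, 0.1]])
red = np.array([[0.1, 0.1], [0.1, 0.1]])
ratio = band_ratio(nir, red)            # high values flag vegetated pixels
change = image_difference(red, nir)     # per-pixel difference of two bands
```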
IRJET- Color Balance and Fusion for Underwater Image Enhancement: Survey (IRJET Journal)
This document summarizes research on methods for enhancing underwater images. It discusses how underwater images suffer from poor visibility due to light scattering and absorption in water. Several approaches are then summarized that aim to restore and enhance degraded underwater images through techniques like color balance and fusion. Specifically, the document surveys methods that use single-image approaches without specialized hardware by fusing color-compensated and white-balanced versions of the original image. It also discusses other literature on underwater image enhancement through techniques like dehazing, wavelength compensation, and contrast adjustment.
The document discusses superresolution technology that can improve the resolution of infrared camera images. It begins by explaining the basic problem that small objects may be invisible or measured incorrectly in infrared images due to pixel size limitations. It then describes how superresolution works by using multiple images and deconvolution algorithms to effectively decrease pixel pitch by 1.6x and increase usable resolution also by 1.6x compared to normal images. Experimental results show that superresolution detects spatial frequencies about 50% higher than the camera's detector cutoff and improves temperature measurement accuracy compared to interpolation. The technology will be available as a software update for all current Testo infrared cameras.
IRJET- Analysis of Underwater Image Visibility using White Balance Algorithm (IRJET Journal)
1) The document discusses several techniques for analyzing and improving the visibility of underwater images, including white balancing algorithms, red channel methods, and image fusion approaches.
2) White balancing aims to remove unwanted color casts to improve image aspects like color and contrast. Image fusion techniques combine input images and weight maps to enhance color contrast and visibility of distant objects degraded by the underwater medium.
3) The techniques were evaluated using metrics like PSNR and by comparing restored images to originals. Results found white balancing produced high accuracy recovery of important faded features while image fusion generally improved global contrast, color, and details.
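One simple white-balancing scheme consistent with the description is the gray-world assumption; a NumPy sketch (the surveyed papers may use different estimators, and the colour cast here is simulated):

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world white balance: scale each channel so its mean matches
    the global mean, removing a uniform colour cast (e.g. the blue-green
    cast of underwater scenes)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means
    return np.clip(img * gains, 0.0, 1.0)

rng = np.random.default_rng(2)
scene = rng.uniform(0.2, 0.8, (16, 16, 3))
cast = scene * np.array([0.4, 0.8, 1.0])   # red attenuates fastest in water
balanced = gray_world_wb(cast)             # channel means equalized
```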
Comparative study on image fusion methods in spatial domain (IAEME Publication)
This document provides a comparative study of various image fusion methods in the spatial domain. It begins by introducing image fusion and its applications. Section 2 then describes several common fusion algorithms in the spatial domain, including average, select maximum/minimum, Brovey transform, intensity hue saturation (IHS), and principal component analysis (PCA). Section 3 defines image fusion quality measures like entropy, mean squared error, and normalized cross correlation. Section 4 provides a comparative analysis of the spatial domain fusion techniques based on parameters like simplicity, type of resources, and disadvantages. It finds that spatial domain methods provide high spatial resolution but have issues like image blurring and producing less informative outputs. The document concludes that while the best algorithm depends on the problem, spatial
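Two of the spatial-domain rules compared above, together with the entropy quality measure, can be sketched directly in NumPy (a minimal illustration, not the study's full benchmark):

```python
import numpy as np

def fuse_average(a, b):
    """Average fusion: robust but tends to blur complementary detail."""
    return (a + b) / 2.0

def fuse_select_max(a, b):
    """Select-maximum fusion: keeps the brighter pixel from either input."""
    return np.maximum(a, b)

def entropy(img, bins=64):
    """Shannon entropy of the grey-level histogram, a common fusion
    quality measure (higher usually means more information retained)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```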
Image interpretation keys and resolutions are essential for remote sensing. There are several keys that aid visual interpretation including tone, size, shape, texture, pattern, location, association, shadow and site. Higher image resolution means more discernible details, with pixel resolution referring to image size in pixels and spatial resolution depending on ground sample distance. Other types of resolutions include spectral, temporal, and radiometric resolutions which influence how finely differences can be distinguished.
An Efficient Visibility Enhancement Algorithm for Road Scenes Captured by Int... (madhuricts)
The document proposes an efficient algorithm to enhance visibility in road scene images captured during inclement weather. It uses a hybrid dark channel prior technique along with color analysis to estimate atmospheric light and transmission map. This is then used along with 3D geometric models to recover scene radiance and remove haze. The proposed method accurately estimates atmospheric light position and improves visibility while being computationally efficient compared to existing techniques. It has applications in intelligent transportation systems for traffic surveillance and vehicle detection/tracking.
Infrared image enhancement using wavelet transform (Alexander Decker)
This document summarizes an article about enhancing infrared images using wavelet transforms. It discusses how wavelet transforms can be used to separate image details into different frequency subbands. Then a homomorphic enhancement algorithm is applied to transform the details into illumination and reflectance components, amplifying the reflectance to make details more clear. Finally, an inverse wavelet transform is performed to reconstruct an enhanced infrared image with more visible details. The document provides background on infrared imaging and different infrared bands. It also reviews literature on using wavelets for target detection by exploiting scale, edge, and contrast differences between targets and clutter.
JPJ1450 Friendbook: A Semantic-based Friend Recommendation System for Social... (chennaijp)
Social networking on the internet is becoming more popular day by day; people connect through these websites daily, and it is now a major medium of communication, interaction, and socialization.
The document proposes MobiContext, a hybrid cloud-based bi-objective recommendation framework (BORF) for mobile social networks. It uses multi-objective optimization techniques to generate personalized venue recommendations. To address cold start and data sparsity issues, BORF performs data preprocessing using a Hub-Average inference model. It then implements a Weighted Sum Approach for scalar optimization and NSGA-II evolutionary algorithm for vector optimization to provide optimal venue suggestions to users. Experimental results on a large real dataset confirm the accuracy of the proposed framework.
Friendbook is a semantic-based friend recommendation system for social networks that recommends friends based on users' lifestyles rather than social graphs. It uses sensors in smartphones to discover users' lifestyles from daily activities and measures lifestyle similarity between users. Users are recommended as friends if their lifestyles are highly similar. Lifestyles are extracted from "life documents" of daily activities using Latent Dirichlet Allocation. Friendbook also incorporates feedback to improve recommendation accuracy. It was implemented on Android smartphones and evaluated on small and large-scale tests, finding recommendations accurately reflected real-life friend preferences.
The document provides a campaign analysis report for Blueprint clothing store. It details the client, target market of Queen's engineering students aged 18-30, and a campaign idea playing on the word "blueprint" where students could fill out an in-store form and be entered to win a dinner and drinks prize pack. The campaign used lifestyle, emotional, and timing appeals on Facebook in the Queen's Engineering group. The results were modest with few likes and comments, but the exposure to the targeted demographic was significant. Future efforts may see greater success holding the event earlier in the year with a single larger prize.
This document is a final project report submitted by Sailendra Sagar Patra and Sandeep Kumar Panda to Biju Patnaik University of Technology in partial fulfillment of their B.Tech degree. The report details their work on developing a fingerprint recognition system based on minutiae matching. It describes the algorithms used for fingerprint enhancement, segmentation, minutiae extraction and matching. Results demonstrating the different steps are also provided and compared.
This document summarizes a study that investigated the effect of water-to-cement ratio on sulfate corrosion of fine-grained concrete. Concrete samples with water-to-cement ratios of 0.4, 0.45, and 0.5 were exposed to 0.5% sulfuric acid or a solution simulating wastewater for periods of 1, 3, and 6 months. The pH and sulfate content of layers cut from the samples were measured. After 1 month of exposure, a small decrease in pH was observed only in surface layers, and sulfate penetration was limited to 5 mm. Longer exposure times showed slightly deeper sulfate penetration but did not significantly reduce pH or compromise reinforcement protection. Higher water-to-
Testing the flexural fatigue behavior of e glass epoxy laminates (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Determinants of global competitiveness on industrial performance an applicati... (eSAT Publishing House)
This document proposes a method for video copy detection using segmentation, MPEG-7 descriptors, and graph-based sequence matching. It extracts key frames from videos, extracts features from the frames using descriptors like CEDD, FCTH, SCD, EHD and CLD, and stores them in a database. When a query video is input, its features are extracted and compared to the database to detect if it matches any videos already in the database. Graph-based sequence matching is also used to find the optimal matching between video sequences despite transformations like changed frame rates or ordering. The method is shown to perform better than previous techniques at detecting copied videos through transformations.
Design and implementation of an architecture of embedded web server for wir... (eSAT Publishing House)
Contribution to the valorization of Moroccan wood in industry of laminated wo... (eSAT Publishing House)
A Novel Dehazing Method for Color Accuracy and Contrast Enhancement Method fo... (IRJET Journal)
The document proposes a novel dehazing method for color accuracy and a contrast enhancement method for low light intensity images. The dehazing method involves three steps: 1) Region division based on white balance segmentation, 2) Estimation of local atmospheric light in each region, and 3) An iterative dehazing algorithm to remove haze from each region. The contrast enhancement method inverts the input image, applies the dehazing algorithm, and then inverts the dehazed image to produce an enhanced output. Experimental results show the proposed methods can effectively enhance images taken with mobile devices or cameras without color distortion.
Enhanced Vision of Hazy Images Using Improved Depth Estimation and Color Anal... (IRJET Journal)
The document presents a method for enhancing hazy images using improved depth estimation and color analysis. It proposes using an enhanced refined transmission technique to better estimate haze thickness and reduce color cast problems in hazy images. The method involves median filtering to reduce noise, adaptive gamma correction to enhance contrast, estimating depth maps, and using visibility restoration to recover hazy images. Experimental results show the proposed method produces superior results to other state-of-the-art methods by dramatically improving hazy images captured during inclement weather conditions.
Atmospheric Correction of Remotely Sensed Images in Spatial and Transform Domain (CSCJournals)
Remotely sensed data is an effective source of information for monitoring changes in land use and land cover. However remotely sensed images are often degraded due to atmospheric effects or physical limitations. Atmospheric correction minimizes or removes the atmospheric influences that are added to the pure signal of target and to extract more accurate information. The atmospheric correction is often considered critical pre-processing step to achieve full spectral information from every pixel especially with hyperspectral and multispectral data. In this paper, multispectral atmospheric correction approaches that require no ancillary data are presented in spatial domain and transform domain. We propose atmospheric correction using linear regression model based on the wavelet transform and Fourier transform. They are tested on Landsat image consisting of 7 multispectral bands and their performance is evaluated using visual and statistical measures. The application of the atmospheric correction methods for vegetation analyses using Normalized Difference Vegetation Index is also presented in this paper.
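The NDVI mentioned at the end is a standard two-band index; a minimal NumPy sketch (band values are illustrative, not Landsat calibration):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Healthy vegetation reflects strongly in NIR, giving values near +1."""
    nir = nir.astype(float); red = red.astype(float)
    return (nir - red) / (nir + red + eps)

nir = np.array([[0.6, 0.3], [0.1, 0.05]])
red = np.array([[0.1, 0.1], [0.1, 0.05]])
index = ndvi(nir, red)   # large positive values flag vegetated pixels
```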
A Survey on Single Image Dehazing Approaches (IRJET Journal)
This document provides a survey of single image dehazing approaches. It begins with an introduction to the problem of haze in images and how it degrades quality. It then summarizes several existing single image dehazing methods, including those based on the atmospheric scattering model, dark channel prior, color attenuation prior, and deep learning approaches. The survey covers the key assumptions and limitations of each approach. Overall, the document reviews the progress that has been made in developing techniques to remove haze from a single input image.
IRJET- A Comprehensive Study on Image Defogging Techniques (IRJET Journal)
This document summarizes techniques for removing haze and other pollutants from images. It discusses using a dark channel prior method based on observations that at least one color channel has pixels with low values. Transmission maps and atmospheric light can be estimated using this dark channel prior. The document also discusses using depth estimation, wavelet-based techniques, enhancement-based techniques, filtering-based techniques, supervised learning-based techniques, fusion-based techniques, and meta-heuristic system-based techniques for haze removal. It provides an overview of these different haze removal techniques.
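The dark channel prior and the transmission estimate it yields can be sketched in NumPy. This is a minimal illustration with a small patch size and a synthetic image engineered to satisfy the prior; real pipelines add airlight estimation and transmission refinement:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: per-pixel minimum over RGB, then a local minimum
    filter. On haze-free outdoor images this is near zero almost
    everywhere, which is exactly the statistical prior."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.full(mins.shape, np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + mins.shape[0],
                                         dx:dx + mins.shape[1]])
    return out

def estimate_transmission(img, airlight, omega=0.95, patch=3):
    """He-et-al-style transmission estimate: t = 1 - omega * dark(I / A),
    keeping a little haze (omega < 1) for depth perception."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

rng = np.random.default_rng(3)
hazefree = rng.uniform(0.0, 1.0, (16, 16, 3))
hazefree[..., 0] *= 0.05          # force one channel low, as the prior assumes
A = np.array([0.9, 0.9, 0.9])     # assumed (not estimated) atmospheric light
t = estimate_transmission(hazefree, A)   # near 1 on a haze-free scene
```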
IRJET- Photo Restoration using Multiresolution Texture Synthesis and Convolut... (IRJET Journal)
The document discusses techniques for removing haze and fog from images. It presents a technique called IDeRS that uses an iterative dehazing model to remove haze and fog from remote sensing images. The IDeRS technique estimates atmospheric light independently of haze-opaque regions using a haze-line prior method. It then computes a transmission map using the dark channel prior model to estimate a raw transmission map. The technique achieves high signal-to-noise ratios and improves on other methods that did not completely remove haze and suffered from artifacts.
This document describes an image fusion method using pyramidal decomposition. It proposes extracting fine details from input images using guided filtering and fusing the base layers of images across multiple exposures or focal points using a multiresolution pyramid approach. A weight map is generated considering exposure, contrast, and saturation to guide the fusion of base layers. The fused base layer is then combined with extracted fine details to produce a detail-enhanced fused image. The goal is to preserve details in both very dark and extremely bright regions of the input images. It is argued that this method can effectively fuse images from different exposures or focal points without introducing artifacts.
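The exposure/contrast/saturation weight map described above follows the well-known Mertens-style exposure-fusion cues; a NumPy sketch (the combination rule and sigma are illustrative, not necessarily the paper's exact choices):

```python
import numpy as np

def fusion_weight(img, sigma=0.2):
    """Per-pixel weight as a product of three cues: contrast (Laplacian
    magnitude of the luma), saturation (std across RGB), and
    well-exposedness (closeness of each channel to mid-grey)."""
    gray = img.mean(axis=2)
    lap = np.abs(4 * gray
                 - np.roll(gray, 1, 0) - np.roll(gray, -1, 0)
                 - np.roll(gray, 1, 1) - np.roll(gray, -1, 1))
    sat = img.std(axis=2)
    expo = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
    return (lap + 1e-6) * (sat + 1e-6) * (expo + 1e-6)

img = np.ones((4, 4, 3))            # mostly over-exposed, flat white
img[1, 1] = [0.4, 0.5, 0.6]         # one well-exposed, coloured pixel
w = fusion_weight(img)              # that pixel should dominate the weights
```

In the full method these weights would be normalized across exposures and used to blend pyramid levels of the base layers.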
IRJET - Contrast and Color Improvement based Haze Removal of Underwater Image... (IRJET Journal)
This document proposes a method for removing haze from underwater images using fusion techniques. It involves three main steps:
1. Removal of haze from the input underwater image using a water shield filter to extract a dehazed image.
2. Denoising the dehazed image using a sequential algorithm to compensate for uneven lighting and enhance image features.
3. Fusing the dehazed and denoised images to produce a clear output image with both haze and noise removed.
The method aims to improve underwater image visibility and contrast correction in a simple and effective manner. Evaluation on sample images demonstrates reduced haze and artifacts after processing.
AUTOMATIC IDENTIFICATION OF CLOUD COVER REGIONS USING SURF (ijcseit)
Weather forecasting has become an indispensable application for predicting the state of the atmosphere at a future time based on cloud cover identification, but it generally needs the experience of a well-trained meteorologist. In this paper, a novel method is proposed for automatic cloud cover estimation typical to Indian territory. Speeded Up Robust Features (SURF) matching is applied to the satellite images to obtain affine-corrected images. The cloud regions extracted from the affine-corrected images by Otsu thresholding are superimposed on artistic grids representing latitude and longitude over India, and the segmented cloud and grid composition drive a look-up table mechanism to identify the cloud cover regions. Owing to its simplicity, the proposed method processes the test images quickly and provides accurate segmentation of cloud cover regions.
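The Otsu thresholding step used to extract cloud regions is a standard histogram method; a minimal NumPy sketch on a synthetic bimodal scene (the grid/look-up-table machinery of the paper is not reproduced):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: choose the grey level that maximizes the
    between-class variance of the foreground/background split."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Bimodal toy image: dark ground around 0.2, bright cloud block around 0.8.
rng = np.random.default_rng(4)
sky = np.clip(rng.normal(0.2, 0.03, (32, 32)), 0, 1)
sky[8:24, 8:24] = np.clip(rng.normal(0.8, 0.03, (16, 16)), 0, 1)
thr = otsu_threshold(sky)
cloud_mask = sky > thr            # True inside the bright "cloud" block
```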
1. The document proposes a new method for shadow detection and removal in satellite images using image segmentation, shadow feature extraction, and inner-outer outline profile line (IOOPL) matching.
2. Key steps include segmenting the image, detecting suspected shadows, eliminating false shadows through analysis of color and spatial properties, extracting shadow boundaries, and obtaining homogeneous sections through IOOPL similarity matching to determine radiation parameters for shadow removal.
3. Experimental results showed the method could successfully detect shadows and remove them to improve image quality for applications like object classification and change detection.
SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST... (ijcsit)
Shadows create significant problems in many computer vision and image analysis tasks such as object recognition, object tracking, and image segmentation. For a machine, it is very difficult to distinguish between a shadow and a real object; as a result, an object recognition system may incorrectly recognize a shadow region as an object, so the detection of shadows in images will enhance the performance of many machine vision tasks. This paper implements a shadow detection method based on the Tricolor Attenuation Model (TAM) enhanced with adaptive histogram equalization (AHE). TAM uses the concept of intensity attenuation of pixels in the shadow region, which is different for the three color channels. It originates from the idea that if the minimum attenuated color channel is subtracted from the maximum attenuated one, the shadow areas become darker in the resulting TAM image. This resulting image, however, is of low contrast due to the high correlation among the R, G and B color channels, so adaptive histogram equalization is used to enhance the contrast. The incorporation of AHE significantly improved the quality of the detected shadow region.
An efficient image segmentation approach through enhanced watershed algorithm (Alexander Decker)
This document proposes an efficient image segmentation approach combining an enhanced watershed algorithm and color histogram analysis. The watershed algorithm is applied to preprocessed images after merging the results with an enhanced edge detection. Over-segmentation issues are addressed through a post-processing step applying color histogram analysis to each segmented region, improving overall performance. The document provides background on image segmentation techniques, reviews related work applying watershed algorithms, and discusses challenges like over-segmentation that watershed approaches can face.
This document discusses atmospheric turbulence degraded image restoration using back propagation neural network. It proposes using a feed-forward neural network with 20 hidden layers and one output layer trained with backpropagation to restore images degraded by atmospheric turbulence and noise. The network is trained on normalized input images and tested on blurred images. Results show the proposed method achieves higher PSNR values than other techniques like kurtosis minimization and PCA, indicating better image quality restoration. Future work may incorporate median filtering and using first order image features for network weight assignment.
IRJET- Improving Interpretability of an Underwater using Undecimated Wavelet ... (IRJET Journal)
This document presents a method for improving the interpretability of underwater images using undecimated wavelet transform. It involves several steps: (1) performing histogram equalization on the underwater image to improve contrast, (2) using white balancing to enhance dark regions, and (3) fusing the outputs of steps 1 and 2 using undecimated wavelet transform. Undecimated wavelet transform is a type of stationary wavelet transform that avoids downsampling, improving shift invariance over the discrete wavelet transform. The proposed method is compared to existing wavelength decomposition and image dehazing algorithms. Initial results show improvements in color correction and contrast after applying white balancing, histogram equalization, and image fusion with the undecimated wavelet transform.
This document summarizes a research paper on using a hybrid dark channel prior method for visibility restoration in images degraded by haze or fog. It begins by introducing the problem of image degradation in poor weather conditions like fog. Then it describes existing techniques for image restoration, including the dark channel prior method, which estimates atmospheric light and transmission maps to recover the original scene. The document proposes a hybrid dark channel prior method that uses both small and large patch sizes to better estimate the haze density. Simulation results demonstrate that the hybrid method more effectively removes haze than traditional techniques. The paper concludes that the hybrid approach works well for fog and noise removal in single images under different weather conditions.
IRJET- Image De-Blurring using Blind De-Convolution AlgorithmIRJET Journal
The document describes a study on blind image deblurring using a blind deconvolution algorithm. It discusses how blurring occurs in images and various techniques used for image restoration. The proposed method uses a blind deconvolution technique to restore an original sharp image from a blurred input image without prior knowledge of the point spread function. It adds blur to test images using Gaussian, motion and average blur models. The algorithm then applies maximum likelihood estimation and blind deconvolution to restore the blurred image. Experimental results show that the blind deconvolution method can deblur images faster than conventional approaches.
REMOVING RAIN STREAKS FROM SINGLE IMAGES USING TOTAL VARIATIONijma
ABSTRACT
Rainy image restoration is considered asone of the most important image restorations aspects to improve the outdoor vision. Many fields have used this kind of restorations such as driving assistant, environment monitoring, animals monitoring, computer vision, face recognition, object recognition and personal photos. Image restoration simply means how to remove the noise from the images. Most of the images have some noises from the environment. Moreover, image quality assessment plays an important role in the valuation of image enhancement algorithms. In this research, we will use a total variation to remove rain streaks from a single image. It shows a good performance compared to other methods, using some measurements MSE, PSNR, and VIF for an image with references and BRISQUE for an image without references.
VISUAL MODEL BASED SINGLE IMAGE DEHAZING USING ARTIFICIAL BEE COLONY OPTIMIZA...ijistjournal
Images are often degraded by atmospheric haze , a phenomenon due to the particles in the air that scatter light. Haze induces a loss of contrast,its visual effect is blurring of distant objects. This paper presents a novel algorithm for improving the visibility of an image degraded by haze. The proposed method uses a cost function based on human visual model to estimate airlight map. It employs Artificial Bee Colony optimization (ABC) as the optimization technique for estimating air light map. Image is dehazed by removing the estimatedairlight from the degraded image. The performance of the algorithm is tested and compared with various other dehazing methods and the proposed algorithm dehazes the image effectively outperforming other methods.
VISUAL MODEL BASED SINGLE IMAGE DEHAZING USING ARTIFICIAL BEE COLONY OPTIMIZA...ijistjournal
Images are often degraded by atmospheric haze , a phenomenon due to the particles in the air that scatter light. Haze induces a loss of contrast,its visual effect is blurring of distant objects. This paper presents a novel algorithm for improving the visibility of an image degraded by haze. The proposed method uses a cost function based on human visual model to estimate airlight map. It employs Artificial Bee Colony optimization (ABC) as the optimization technique for estimating air light map. Image is dehazed by removing the estimatedairlight from the degraded image. The performance of the algorithm is tested and compared with various other dehazing methods and the proposed algorithm dehazes the image effectively outperforming other methods.
Similar to Improved single image dehazing by fusion (20)
Hudhud cyclone caused extensive damage in Visakhapatnam, India in October 2014, especially to tree cover. This will likely impact the local environment in several ways: increased air pollution as trees absorb less; higher temperatures without tree canopy; increased erosion and landslides. It also created large amounts of waste from destroyed trees. Proper management of solid waste is needed to prevent disease spread. Suggested measures include restoring damaged plants, building fountains to reduce heat, mandating light-colored buildings, improving waste management, and educating public on health risks. Overall, changes are needed to water, land, and waste practices to rebuild the environment after the cyclone removed green cover.
Impact of flood disaster in a drought prone area – case study of alampur vill...eSAT Publishing House
1) In September-October 2009, unprecedented heavy rainfall and dam releases caused widespread flooding in Alampur village in Mahabub Nagar district, a historically drought-prone area.
2) The flood damaged or destroyed homes, buildings, infrastructure, crops, and documents. It displaced many residents and cut off the village.
3) The socioeconomic conditions and mud-based construction of homes in the village exacerbated the flood's impacts, making damage more severe and recovery more difficult.
The document summarizes the Hudhud cyclone that struck Visakhapatnam, India in October 2014. It describes the cyclone's formation, rapid intensification to winds of 175 km/h, and landfall near Visakhapatnam. The cyclone caused extensive damage estimated at over $1 billion and at least 109 deaths in India and Nepal. Infrastructure like buildings, bridges, and power lines were destroyed. Crops and fishing boats were also damaged. The document then discusses coping strategies and improvements needed to disaster management plans to better prepare for future cyclones.
Groundwater investigation using geophysical methods a case study of pydibhim...eSAT Publishing House
This document summarizes the results of a geophysical investigation using vertical electrical sounding (VES) methods at 13 locations around an industrial area in India. The VES data was interpreted to generate geo-electric sections and pseudo-sections showing subsurface resistivity variations. Three main layers were typically identified - a high resistivity topsoil, a weathered middle layer, and a basement rock. Pseudo-sections revealed relatively more weathered areas in the northwest and southwest. Resistivity sections helped identify zones of possible high groundwater potential based on low resistivity anomalies sandwiched between more resistive layers. The study concluded the electrical resistivity method was useful for understanding subsurface geology and identifying areas prospective for groundwater exploration.
Flood related disasters concerned to urban flooding in bangalore, indiaeSAT Publishing House
1. The document discusses urban flooding in Bangalore, India. It describes how factors like heavy rainfall, population growth, and improper land use have contributed to increased flooding in the city.
2. Flooding events in 2013 are analyzed in detail. A November rainfall caused runoff six times higher than the drainage capacity, inundating low-lying residential areas.
3. Impacts of urban flooding include disrupted daily life, damaged infrastructure, and decreased economic activity in affected areas. The document calls for improved flood management strategies to better mitigate urban flooding risks in Bangalore.
Enhancing post disaster recovery by optimal infrastructure capacity buildingeSAT Publishing House
This document discusses enhancing post-disaster recovery through optimal infrastructure capacity building. It presents a model to minimize the cost of meeting demand using auxiliary capacities when disaster damages infrastructure. The model uses genetic algorithms to select optimal capacity combinations. The document reviews how infrastructure provides vital services supporting recovery activities and discusses classifying infrastructure into six types. When disaster reduces infrastructure services, a gap forms between community demands and available support, hindering recovery. The proposed research aims to identify this gap and optimize capacity selection to fill it cost-effectively.
Effect of lintel and lintel band on the global performance of reinforced conc...eSAT Publishing House
This document analyzes the effect of lintels and lintel bands on the seismic performance of reinforced concrete masonry infilled frames through non-linear static pushover analysis. Four frame models are considered: a frame with a full masonry infill wall; a frame with a central opening but no lintel/band; a frame with a lintel above the opening; and a frame with a lintel band above the opening. The results show that the full infill wall model has 27% higher stiffness and 32% higher strength than the model with just an opening. Models with lintels or lintel bands have slightly higher strength and stiffness than the model with just an opening. The document concludes lintels and lintel
Wind damage to trees in the gitam university campus at visakhapatnam by cyclo...eSAT Publishing House
1) A cyclone with wind speeds of 175-200 kph caused massive damage to the green cover of Gitam University campus in Visakhapatnam, India. Thousands of trees were uprooted or damaged.
2) A study assessed different types of damage to trees from the cyclone, including defoliation, salt spray damage, damage to stems/branches, and uprooting. Certain tree species were more vulnerable than others.
3) The results of the study can help in selecting more wind-resistant tree species for future planting and reducing damage from future storms.
Wind damage to buildings, infrastrucuture and landscape elements along the be...eSAT Publishing House
1) A visual study was conducted to assess wind damage from Cyclone Hudhud along the 27km Visakha-Bheemli Beach road in Visakhapatnam, India.
2) Residential and commercial buildings suffered extensive roof damage, while glass facades on hotels and restaurants were shattered. Infrastructure like electricity poles and bus shelters were destroyed.
3) Landscape elements faced damage, including collapsed trees that damaged pavements, and debris in parks. The cyclone wiped out over half the city's green cover and caused beach erosion around protected areas.
1) The document reviews factors that influence the shear strength of reinforced concrete deep beams, including compressive strength of concrete, percentage of tension reinforcement, vertical and horizontal web reinforcement, aggregate interlock, shear span-to-depth ratio, loading distribution, side cover, and beam depth.
2) It finds that compressive strength of concrete, tension reinforcement percentage, and web reinforcement all increase shear strength, while shear strength decreases as shear span-to-depth ratio increases.
3) The distribution and amount of vertical and horizontal web reinforcement also affects shear strength, but closely spaced stirrups do not necessarily enhance capacity or performance.
Role of voluntary teams of professional engineers in dissater management – ex...eSAT Publishing House
1) A team of 17 professional engineers from various disciplines called the "Griha Seva" team volunteered after the 2001 Gujarat earthquake to provide technical assistance.
2) The team conducted site visits, assessments, testing and recommended retrofitting strategies for damaged structures in Bhuj and Ahmedabad. They were able to fully assess and retrofit 20 buildings in Ahmedabad.
3) Factors observed that exacerbated the earthquake's impacts included unplanned construction, non-engineered buildings, improper prior retrofitting, and defective materials and workmanship. The professional engineers' technical expertise was crucial for effective post-disaster management.
This document discusses risk analysis and environmental hazard management. It begins by defining risk, hazard, and toxicity. It then outlines the steps involved in hazard identification, including HAZID, HAZOP, and HAZAN. The document presents a case study of a hypothetical gas collecting station, identifying potential accidents and hazards. It discusses quantitative and qualitative approaches to risk analysis, including calculating a fire and explosion index. The document concludes by discussing hazard management strategies like preventative measures, control measures, fire protection, relief operations, and the importance of training personnel on safety.
Review study on performance of seismically tested repaired shear wallseSAT Publishing House
This document summarizes research on the performance of reinforced concrete shear walls that have been repaired after damage. It begins with an introduction to shear walls and their failure modes. The literature review then discusses the behavior of original shear walls as well as different repair techniques tested by other researchers, including conventional repair with new concrete, jacketing with steel plates or concrete, and use of fiber reinforced polymers. The document focuses on evaluating the strength retention of shear walls after being repaired with various methods.
Monitoring and assessment of air quality with reference to dust particles (pm...eSAT Publishing House
This document summarizes a study on monitoring and assessing air quality with respect to dust particles (PM10 and PM2.5) in the urban environment of Visakhapatnam, India. Sampling was conducted in residential, commercial, and industrial areas from October 2013 to August 2014. The average PM2.5 and PM10 concentrations were within limits in residential areas but moderate to high in commercial and industrial areas. Exceedance factor levels indicated moderate pollution for residential areas and moderate to high pollution for commercial and industrial areas. There is a need for management measures like improved public transport and green spaces to combat particulate air pollution in the study areas.
Low cost wireless sensor networks and smartphone applications for disaster ma...eSAT Publishing House
This document describes a low-cost wireless sensor network and smartphone application system for disaster management. The system uses an Arduino-based wireless sensor network comprising nodes with various sensors to monitor the environment. The sensor data is transmitted to a central gateway and then to the cloud for analysis. A smartphone app connected to the cloud can detect disasters from the sensor data and send real-time alerts to users to help with early evacuation. The system aims to provide low-cost localized disaster detection and warnings to improve safety.
Coastal zones – seismic vulnerability an analysis from east coast of indiaeSAT Publishing House
This document summarizes an analysis of seismic vulnerability along the east coast of India. It discusses the geotectonic setting of the region as a passive continental margin and reports some moderate seismic activity from offshore in recent decades. While seismic stability cannot be assumed given events like the 2004 tsunami, no major earthquakes have been recorded along this coast historically. The document calls for further study of active faults, neotectonics, and implementation of improved seismic building codes to mitigate vulnerability.
Can fracture mechanics predict damage due disaster of structureseSAT Publishing House
This document discusses how fracture mechanics can be used to better predict damage and failure of structures. It notes that current design codes are based on small-scale laboratory tests and do not account for size effects, which can lead to more brittle failures in larger structures. The document outlines how fracture mechanics considers factors like size effect, ductility, and minimum reinforcement that influence the strength and failure behavior of structures. It provides examples of how fracture mechanics has been applied to problems like evaluating shear strength in deep beams and investigating a failure of an oil platform structure. The document argues that fracture mechanics provides a more scientific basis for structural design compared to existing empirical code provisions.
This document discusses the assessment of seismic susceptibility of reinforced concrete (RC) buildings. It begins with an introduction to earthquakes and the importance of vulnerability assessment in mitigating earthquake risks and losses. It then describes modeling the nonlinear behavior of RC building elements and performing pushover analysis to evaluate building performance. The document outlines modeling RC frames and developing moment-curvature relationships. It also summarizes the results of pushover analyses on sample 2D and 3D RC frames with and without shear walls. The conclusions emphasize that pushover analysis effectively assesses building properties but has limitations, and that capacity spectrum method provides appropriate results for evaluating building response and retrofitting impact.
A geophysical insight of earthquake occurred on 21 st may 2014 off paradip, b...eSAT Publishing House
1) A 6.0 magnitude earthquake occurred off the coast of Paradip, Odisha in the Bay of Bengal on May 21, 2014 at a depth of around 40 km.
2) Analysis of magnetic and bathymetric data from the area revealed the presence of major lineaments in NW-SE and NE-SW directions that may be responsible for seismic activity through stress release.
3) Movements along growth faults at the margins of large Bengal channels, due to large sediment loads, could also contribute to seismic events by triggering movements along the faults.
Effect of hudhud cyclone on the development of visakhapatnam as smart and gre...eSAT Publishing House
This document discusses the effects of Cyclone Hudhud on the development of Visakhapatnam as a smart and green city through a case study and preliminary surveys. The surveys found that 31% of participants had experienced cyclones, 9% floods, and 59% landslides previously in Visakhapatnam. Awareness of disaster alarming systems increased from 14% before the 2004 tsunami to 85% during Cyclone Hudhud, while awareness of disaster management systems increased from 50% before the tsunami to 94% during Hudhud. The surveys indicate that initiatives after the tsunami improved awareness and preparedness. Developing Visakhapatnam as a smart, green city should consider governance
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
An improved modulation technique suitable for a three level flying capacitor ...IJECEIAES
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed
simplified modulation technique paves the way for more straightforward and
efficient control of multilevel inverters, enabling their widespread adoption and
integration into modern power electronic systems. Through the amalgamation of
sinusoidal pulse width modulation (SPWM) with a high-frequency square wave
pulse, this controlling technique attains energy equilibrium across the coupling
capacitor. The modulation scheme incorporates a simplified switching pattern
and a decreased count of voltage references, thereby simplifying the control
algorithm.
IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 03 Issue: 05 | May-2014, Available @ http://www.ijret.org
IMPROVED SINGLE IMAGE DEHAZING BY FUSION
Nitish Gundawar¹, V. B. Baru²
¹ME Student, Department of Electronics and Telecommunication, Sinhgad College of Engineering, Pune
²Associate Professor, Department of Electronics and Telecommunication, Sinhgad College of Engineering, Pune
Abstract
One of the major problems in image processing is the restoration of images corrupted by various types of degradation. Images of
outdoor scenes often contain atmospheric degradation, such as haze and fog, caused by particles in the atmospheric medium
absorbing and scattering light as it travels to the observer. Although this effect may be desirable from an artistic standpoint, for a
variety of reasons one may need to restore an image corrupted by these effects, a process generally referred to as haze removal. This
paper introduces an improved haze removal technique based on a fusion strategy that combines two images derived from the original image.
These images are obtained by performing white balancing and contrast enhancement operations. The derived images are weighted
by specific weight maps, followed by Laplacian and Gaussian pyramid representations to reduce artifacts introduced by the weight
maps. Unlike other techniques, this approach requires only the original degraded image to remove haze, which makes it simple,
straightforward and effective.
Keywords: Outdoor applications, fusion, dehazing, image pyramid
-----------------------------------------------------------------------***----------------------------------------------------------------------
1. INTRODUCTION
Images of outdoor scenes often contain haze, fog, or other
types of atmospheric degradation caused by particles in the
atmospheric medium absorbing and scattering light as it
travels from the source to the observer. The image obtained at
the other end is characterized by reduced contrast and faded
colours. While this effect may be desirable in an artistic
setting, it is sometimes necessary to undo this degradation.
Weather conditions differ mainly in the types and sizes of the
particles involved and their concentration in space. A great
deal of effort has gone into measuring particle sizes and
concentrations for a variety of conditions, as shown in Table 1.
For example, many computer vision algorithms rely on the
assumption that the input image is exactly the scene radiance,
i.e. there is no disturbance from haze. When this assumption is
violated, algorithmic errors can be catastrophic. One could
easily see how a car navigation system that did not take this
effect into account could have dangerous consequences.
Accordingly, finding effective methods for haze removal is an
ongoing area of interest in the image processing and computer
vision fields. This task is important in several outdoor
applications such as remote sensing, intelligent vehicles,
underwater imaging and many more.
In this paper an improved fusion-based haze removal technique is
discussed. The main concept of fusion is to combine two or
more images into a single image that is more suitable for
some intended purpose [16]. Image fusion is therefore an
effective technique designed to maximize the relevant
information carried into the fused image.
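As a minimal illustration of this idea (an illustrative sketch, not the paper's implementation; the function name `fuse` and the use of NumPy are assumptions), per-pixel weighted fusion normalizes the weight maps so they sum to one at every pixel and then blends the inputs:

```python
import numpy as np

def fuse(images, weights):
    """Fuse input images with per-pixel weight maps.

    images:  list of H x W (or H x W x 3) float arrays
    weights: list of H x W float arrays, one per input
    """
    images = [np.asarray(im, dtype=float) for im in images]
    weights = [np.asarray(w, dtype=float) for w in weights]
    # Normalize the weight maps so they sum to 1 at every pixel.
    total = np.sum(weights, axis=0) + 1e-12
    norm = [w / total for w in weights]
    fused = np.zeros_like(images[0])
    for im, w in zip(images, norm):
        if im.ndim == 3:          # broadcast a 2-D weight over RGB channels
            w = w[..., None]
        fused += w * im
    return fused
```

With equal weights everywhere, this reduces to a plain average of the inputs; non-uniform weights let each input contribute only where it carries useful information.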
Table-1: Weather conditions and associated particle types,
sizes and concentrations [2]

Condition | Particle      | Size radius (µm) | Concentration (cm^-3)
Air       | Molecule      | 10^-4            | 10^19
Haze      | Aerosol       | 10^-2 - 1        | 10^3 - 10
Fog       | Water droplet | 1 - 10           | 100 - 10
Cloud     | Water droplet | 1 - 10           | 300 - 10
Rain      | Water drop    | 10^2 - 10^4      | 10^-2 - 10^-5
The main idea behind the fusion-based dehazing technique is to
combine images derived from the degraded image. Two images are
derived by performing white balance and contrast
enhancement operations on the original degraded image. This
ensures visibility in both the hazy and haze-free regions of the
image and also eliminates the unrealistic color cast introduced by
the atmospheric color. In the fusion framework the derived inputs are
weighted by three weight maps, i.e. luminance, chromatic and
saliency weight maps [1]. These weight maps ensure that
regions with good visibility are preserved. However, artifacts
introduced by the weight maps can be eliminated by fusing the
Laplacian pyramid representations of the derived inputs with the
Gaussian pyramid representations of the normalized weights, which
yields the dehazed version of the original degraded image.
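The multiscale blending described above can be sketched as follows. This is a simplified grayscale illustration under stated assumptions (power-of-two image sizes, average-pooling downsampling and nearest-neighbour upsampling in place of the Gaussian filtering a production implementation would use), not the authors' code:

```python
import numpy as np

def down(img):
    """2x downsample by average pooling (assumes even dimensions)."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def up(img):
    """2x nearest-neighbour upsample."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        img = down(img)
        pyr.append(img)
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - up(gp[i + 1]) for i in range(levels - 1)]
    lp.append(gp[-1])                      # coarsest level kept as-is
    return lp

def fuse_pyramids(inputs, weights, levels=3):
    # Normalize the weight maps pixel-wise.
    total = np.sum(weights, axis=0) + 1e-12
    weights = [w / total for w in weights]
    wp = [gaussian_pyramid(w, levels) for w in weights]    # weight pyramids
    ip = [laplacian_pyramid(im, levels) for im in inputs]  # input pyramids
    # Blend per level, then collapse the pyramid from coarse to fine.
    blended = [sum(w[l] * i[l] for w, i in zip(wp, ip)) for l in range(levels)]
    out = blended[-1]
    for l in range(levels - 2, -1, -1):
        out = up(out) + blended[l]
    return out
```

Because the Laplacian decomposition is exactly invertible here, a single input with a uniform weight map is reconstructed unchanged; with multiple inputs, blending each scale separately is what suppresses the halo artifacts that naive per-pixel weighting would introduce.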
The rest of the paper is structured as follows. In section
2, previous dehazing methods are briefly discussed. In section
3, theoretical aspects of light propagation are discussed. In
section 4, details of the fusion-based dehazing method are presented.
In the next section, experimental result analysis is performed
based on various parameters. In section 6, the conclusion is
highlighted and future work is outlined.
2. LITERATURE REVIEW
In many image processing and vision applications, enhancing
and restoring images represents a fundamental task. There is a
large number of dehazing methods, and these existing
methods can be grouped into several main classes.
Earlier haze removal techniques require multiple images of the
same scene or additional supplemental equipment. Methods
[2], [3] fall under this class. These methods improve
visibility in the restored image, but their main drawback is
their acquisition step, which in many cases is time consuming
and difficult to carry out.
Another class of methods comprises polarization techniques [4]-[7],
which are based on the fact that airlight is partially polarized. By
taking the difference of two images of the same scene under different
polarization angles, it becomes possible to estimate the
magnitude of the polarized haze light. The methods in
this class have shown less robustness for scenes with dense
haze, where polarized light is not the major degradation factor.
Another category of techniques assumes a known model of the
scene [8]. These techniques employ an approximated depth
map obtained after collecting information from several users
about areas that are or are not degraded by poor weather
conditions. Deep Photo [9] is a more precise system, since
it uses existing geo-referenced digital terrain and urban
models to restore foggy images. The depth information is
obtained by iteratively aligning the 3D models with the
outdoor images.
Restoration from a single image is a more challenging
problem. Solutions for such cases have been introduced only
recently [10]-[13]. These methods are roughly divided into
contrast-based and statistics-based approaches. The Tan [11] and
Tarel [13] methods belong to the first category, whereas the methods
of Fattal [10], He [12] and Kratz [14] belong to the second category.
Tan [11] observes that a haze-free image must have higher
contrast compared with the input hazy image, and removes the
haze by maximizing the local contrast of the restored image.
The results are visually compelling but may not be physically
valid. The contrast-based enhancing approach of Tarel [13]
has been shown to be computationally effective, but
assumes as well that the depth map must be smooth except
along edges with large depth jumps. Fattal [10] decomposes the
image into two components, i.e. the light reflected from the
surface (albedo) and the shading, and then estimates the scene
radiance based on independent component analysis (ICA),
assuming that the shading and object depth are locally
uncorrelated. The main drawback of this method is that it
cannot handle heavy-haze images. He [12] builds the
approach on a statistical observation of the dark channel, in
which the object depth in a hazy image is estimated based on the
dark channel prior, which assumes that at least one color channel
has a small pixel value in a haze-free image. To refine the depth
map of objects, alpha matting is performed. The dark channel prior
may be invalid when a scene object is similar to the airlight (e.g.
snowy ground or a white wall); the method also requires additional
post processing, which leads to higher complexity. An attempt was
made to remove the haze effect from an image using the fusion
principle in [15]. However, this approach requires visible and near
infrared (NIR) images of the same scene in order to perform
fusion. This approach is hard to carry out and hence time
consuming.
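To make the dark channel prior concrete, the following is a minimal sketch of the dark channel computation described by He [12] (illustrative NumPy code; the brute-force minimum filter is written for clarity, not speed, and the default patch size is an assumption chosen for illustration):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image: per-pixel minimum over the three
    colour channels, followed by a minimum filter over a local patch."""
    min_rgb = np.asarray(img, dtype=float).min(axis=2)   # H x W
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            # Minimum over the patch centred at (i, j).
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

For a haze-free image with at least one dark colour channel in every patch, this map is close to zero; in a hazy image it rises with haze density, which is what makes it usable as a transmission estimate.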
3. BACKGROUND: LIGHT PROPAGATION
In outdoor applications, the amount of light leaving a scene point
is not necessarily the amount that reaches the camera. In
almost every practical scenario, the light reflected from the
target object is scattered or absorbed in the atmosphere before
it reaches the camera. This happens due to the presence of a
turbid medium in the atmosphere, which deflects light from its
original course of propagation; as a result, images with poor
visibility and contrast are captured.
In computer vision, the optical model widely used to approximate
image formation in bad weather, shown in figure 1, is the image
degradation model (also called the atmospheric scattering model)
proposed by McCartney. As figure 1 shows, the captured image is a
linear combination of two components, direct attenuation (D) and
airlight (A), as described in equation 1.
I(x) = Direct attenuation D(x) + Airlight A(x)
I(x) = J(x) T(x) + A (1 - T(x)) ……… (1)
Fig-1: Image degradation model [1]
IJRET: International Journal of Research in Engineering and Technology    eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 03 Issue: 05 | May-2014, Available @ http://www.ijret.org
The first component, D(x) = J(x) T(x), represents how the scene
radiance is attenuated by the properties of the medium. The second
component, A(x) = A (1 - T(x)), is the main cause of color shifting.
Here I(x) is the observed intensity at pixel x, J(x) is the scene
radiance (the original haze-free image to be recovered), A is the
global atmospheric light, and T(x) is the medium transmission, i.e.
the portion of light that is not scattered and reaches the camera.
Assuming a homogeneous medium, the transmission T is determined as
T(x) = e^(-β d(x)) ……… (2)
Where,
β is the attenuation coefficient of the medium due to scattering
d(x) is the distance between the observer and the considered surface
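Under these definitions, equations 1 and 2 can be sketched in a few lines of NumPy. The depth map, β and A values below are illustrative only, not taken from the paper:

```python
import numpy as np

def transmission(depth, beta=1.0):
    """Transmission map T(x) = exp(-beta * d(x)) for a homogeneous medium (Eq. 2)."""
    return np.exp(-beta * depth)

def hazy_image(J, depth, A=0.8, beta=1.0):
    """Forward haze model I(x) = J(x) T(x) + A (1 - T(x)) (Eq. 1).

    J has shape (H, W, 3) in [0, 1]; depth has shape (H, W); A is a scalar
    global atmospheric light (a grey airlight is assumed here for simplicity).
    """
    T = transmission(depth, beta)[..., None]  # broadcast over the RGB channels
    return J * T + A * (1.0 - T)
```

At zero depth the model returns the scene radiance unchanged, while at large depth every pixel converges to the atmospheric light, which is exactly the color-shifting behaviour the two terms describe.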
4. FUSION BASED DEHAZING
This section presents the details of a fusion technique that employs
only inputs and weights derived from the original hazy image. The
fundamental idea is to combine several input images (guided by
weight maps) into a single one, keeping only their most significant
features. The technique removes haze from the degraded image in the
following three steps.
Step 1: Generate two input images from the original.
Step 2: Define weight measures.
Step 3: Fuse the inputs and weight measures.
Fig-2: Flow of fusion based dehazing approach
4.1 Definition of Inputs
The fusion based dehazing approach takes two inputs derived from the
original image. The first input is obtained by applying a white
balance operation to the original image. White balancing is an
important processing step that enhances the image appearance by
discarding unwanted color casts caused by varying illumination.
Nevertheless, white balancing alone cannot solve the visibility
problem, so an additional input is derived in order to enhance the
contrast of the degraded image.
The second input is chosen to increase contrast in the regions that
suffer from the influence of airlight. It is obtained by applying
either histogram equalization or gamma correction to the first
derived input, as shown in figure 3. This step significantly
amplifies visibility in the hazy parts, but fine image details are
destroyed in the process. To eliminate this degradation, appropriate
weight maps are defined for each input.
Fig-3: Inputs derived from the original image (original hazy image, white balance, contrast enhancement)
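The two derived inputs can be sketched as below. The gray-world white balance and the gamma value of 0.7 are assumed variants, since the paper does not fix a specific white-balance algorithm or gamma:

```python
import numpy as np

def white_balance_grayworld(img):
    """First input: white balance of the original image.

    A gray-world variant is used here as an illustration: each channel is
    scaled so that all three channel means become equal.
    """
    mean_rgb = img.reshape(-1, 3).mean(axis=0)
    gain = mean_rgb.mean() / mean_rgb          # per-channel correction gains
    return np.clip(img * gain, 0.0, 1.0)

def contrast_enhance_gamma(img, gamma=0.7):
    """Second input: gamma correction of the white-balanced image.

    gamma < 1 brightens mid-tones, increasing visibility in hazy regions;
    histogram equalization is the alternative named in the text.
    """
    return np.clip(img, 0.0, 1.0) ** gamma
```

Feeding the white-balanced image into the gamma step reproduces the derivation order of figure 3: the second input is computed from the first, not from the original.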
4.2 Weight Measures
The derived inputs are weighted by the following three weight maps,
which aim to preserve the regions with good visibility.
The luminance weight map measures the visibility of each pixel,
assigning high values to regions with good visibility and small
values to the rest. It is computed from the RGB color channels as
the deviation between each channel and the luminance of the input
[1]. However, this weight map tends to reduce global contrast and
color information. To overcome these effects, two additional weight
maps are defined: a chromatic map (color information) and a saliency
map (global contrast).
The chromatic weight map is designed to control the saturation gain
in the output image. It is computed simply as the distance between
each pixel's saturation value and the maximum of the saturation
range [1]; thus pixels with reduced saturation receive small values
while the most saturated pixels receive high values. This weight map
is motivated by the fact that humans generally prefer images
characterized by a high level of saturation.
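A minimal sketch of the luminance and chromatic weight maps, and of the per-pixel normalization used later, follows. The Gaussian fall-off with sigma = 0.3 in the chromatic map is an assumption, since [1] only specifies a distance to the maximum saturation:

```python
import numpy as np

def luminance_weight(img):
    # Deviation of each RGB channel from the luminance (mean of the channels).
    L = img.mean(axis=2, keepdims=True)
    return np.sqrt(((img - L) ** 2).mean(axis=2))

def chromatic_weight(img, sigma=0.3):
    # HSV-style saturation; weight falls off with distance from full saturation.
    mx = img.max(axis=2)
    mn = img.min(axis=2)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    return np.exp(-((sat - 1.0) ** 2) / (2.0 * sigma ** 2))

def combined_normalized_weights(inputs, eps=1e-12):
    # Multiply the per-input maps, then normalise so weights sum to 1 per pixel.
    ws = [luminance_weight(im) * chromatic_weight(im) for im in inputs]
    total = np.sum(ws, axis=0) + eps
    return [w / total for w in ws]
```

The normalization step matters for consistency: without it, regions where both inputs score highly would be over-amplified in the fused result.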
The Saliency weight map identifies the degree of
conspicuousness with respect to the neighborhood regions.
Visual saliency is the perceptual quality that makes an object,
person, or pixel stand out relative to its neighbors and thus
capture our attention [17]. Detecting visually salient image regions
plays an important role in applications such as object recognition,
object segmentation and adaptive compression. Following are some
requirements for a saliency detector [17]:
- Emphasize the largest salient objects.
- Uniformly highlight whole salient regions.
- Establish well-defined boundaries of salient objects.
- Discard high frequencies arising from texture, noise and blocking artifacts.
- Efficiently output full-resolution saliency maps.
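A rough sketch of frequency-tuned saliency in the spirit of [17] is given below. For simplicity it works directly in RGB rather than the Lab space of [17], and a small box blur stands in for the Gaussian blur:

```python
import numpy as np

def box_blur(channel, k=3):
    # Simple k x k box blur with edge padding, standing in for a Gaussian blur.
    pad = k // 2
    padded = np.pad(channel, pad, mode='edge')
    out = np.zeros_like(channel, dtype=float)
    h, w = channel.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def saliency_map(img):
    # Frequency-tuned saliency: per-pixel distance between the mean image
    # colour and a blurred version of the image (full-resolution output).
    mean_col = img.reshape(-1, 3).mean(axis=0)
    blurred = np.stack([box_blur(img[..., c]) for c in range(3)], axis=2)
    return np.sqrt(((blurred - mean_col) ** 2).sum(axis=2))
```

This construction satisfies the full-resolution requirement above by design, and the blur step discards the high frequencies from texture and noise.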
Figure 4 shows the different weight maps for each derived input. All
three measures have been observed to matter, but the first
(luminance) has the highest impact on visibility. The resulting
weight for each input is obtained by multiplying its three weight
maps, and the resultant weight maps are then normalized to yield
consistent results.
Fig -4: Weight maps of derived inputs
4.3 Multi-Scale Fusion
In practice, each input is decomposed into a pyramid by applying the
Laplacian operator at different scales. The Laplacian pyramid of an
image is obtained by applying a band-pass filter followed by
down-sampling [18]. As a band-pass filter, the pyramid construction
tends to enhance image features such as edges, which play an
important role in image interpretation. Each level of the Laplacian
pyramid represents the difference between successive levels of the
Gaussian pyramid. Similarly, a Gaussian pyramid is estimated for
each normalized weight map; the Gaussian pyramid is a sequence of
images obtained by applying a low-pass filter followed by
down-sampling [18]. The Gaussian and Laplacian pyramid
representations are shown in figure 5. Since both pyramids have the
same number of levels, the Laplacian levels of the inputs and the
Gaussian levels of the normalized weight maps are fused at each
level independently, yielding a fused pyramid that, once collapsed,
is the dehazed version of the original hazy degraded image, as shown
in figure 6.
Fig-5: Laplacian and Gaussian Pyramid representation
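The multi-scale fusion described above can be sketched as follows. For brevity, plain decimation and nearest-neighbour upsampling stand in for the blur-then-subsample operators of [18]; the collapse at the end inverts the Laplacian decomposition:

```python
import numpy as np

def downsample(img):
    # 2x decimation (a blur would precede this in the scheme of [18]).
    return img[::2, ::2, ...]

def upsample(img, shape):
    # Nearest-neighbour upsampling back to `shape` (height, width, ...).
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1], ...]

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    # Each level is the difference between successive Gaussian levels;
    # the coarsest Gaussian level is kept as the pyramid residual.
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - upsample(gp[i + 1], gp[i].shape) for i in range(levels - 1)]
    lp.append(gp[-1])
    return lp

def fuse(inputs, weights, levels=3):
    # Blend Laplacian(input) with Gaussian(weight) per level, then collapse.
    fused = None
    for im, w in zip(inputs, weights):
        lp = laplacian_pyramid(im, levels)
        wp = gaussian_pyramid(w[..., None], levels)  # broadcast over channels
        contrib = [l * g for l, g in zip(lp, wp)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    out = fused[-1]
    for lvl in fused[-2::-1]:
        out = upsample(out, lvl.shape) + lvl
    return out
```

With a single input and an all-ones weight map the pipeline reconstructs the input exactly, which is a convenient sanity check that the decomposition and collapse are mutually inverse.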
The fusion based dehazing approach has several advantages over
existing dehazing methods. First, it performs efficient per-pixel
computation, unlike the majority of existing methods, which process
patches. Second, its complexity is lower than that of most previous
strategies [10]-[14], since it does not need to estimate a depth
map. Finally, it runs faster, which makes it suitable for real-time
applications.
Fig-6: Final output (original hazy image and final dehazed image)
Fig-7: Comparison of Fattal [10] and Tarel [13] techniques with Fusion based dehazing
5. EXPERIMENTAL RESULTS
To evaluate the performance of the fusion based dehazing method, we
apply it to recover several hazy images and compare it with two
popular existing single-image methods, those of Fattal [10] and
Tarel [13]. In [10] an albedo-estimation-based method was proposed,
whereas the method in [13] is based on contrast enhancement. In our
experiments we run the method introduced in section 4 using MATLAB
12 on a PC with a 2.20 GHz Intel Core 2 Duo CPU. Figure 7 shows a
comparative analysis of the Fattal, Tarel and fusion based dehazing
approaches.
In order to check the robustness of the fusion based dehazing
approach, the peak signal-to-noise ratio (PSNR) and mean squared
error (MSE) are estimated, and it has been observed that this
approach outperforms the other two single-image dehazing techniques
of [10] and [13], as shown in table 2 and table 3.
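The two metrics are standard and can be computed as below (a textbook formulation for 8-bit images, not code from the paper):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return ((a.astype(float) - b.astype(float)) ** 2).mean()

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum pixel value."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Lower MSE and higher PSNR indicate a restored image closer to the reference, which is the sense in which the tables below compare the three methods.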
Table-2: Comparison of PSNR
Images   Fattal [10]   Tarel [13]   Fusion based approach [1]
Img1     6.5947        5.5342       5.5162
Img2     11.829        4.484        4.3151
Img3     12.572        6.1993       6.1832
Img4     8.7858        3.4572       3.4525
Img5     9.1482        4.5801       4.5784
Table-3: Comparison of MSE
Images   Fattal [10]   Tarel [13]   Fusion based approach [1]
Img1     14243         18183        18258
Img2     4257.6        23157        24675
Img3     2958.7        15584        15601
Img4     8600.1        29334        29365
Img5     7911.6        22650        22659
The key advantage of this technique is that it does not need to
estimate a depth map, which reduces complexity to a great extent.
Moreover, it has been observed that the final output is more
pleasing than that of the other methods. Furthermore, compared with
most existing techniques, an important advantage of fusion based
dehazing is its computation time: it can process a 600x800 image in
approximately 30-32 seconds, as shown in table 4.
Table-4: Comparison of computation time
Images   Fattal [10]   Tarel [13]   Fusion based approach [1]
Img1     167.59 sec    100.91 sec   30 sec
Img2     125.30 sec    107.82 sec   32 sec
Img3     116.56 sec    110.48 sec   31 sec
Img4     52 sec        41 sec       23 sec
Img5     75 sec        66 sec       24 sec
6. CONCLUSIONS AND FUTURE WORK
The fusion based dehazing approach discussed in this paper can
effectively restore image color balance and remove haze and fog. It
is the first fusion-based dehazing approach able to solve these
problems using only one degraded image. The technique rests on the
selection of appropriate inputs and weight maps, from which a fusion
procedure obtains the dehazed version of a hazy image. Moreover, it
has been observed that this approach outperforms the other
single-image dehazing techniques: it is faster than existing
single-image dehazing strategies and yields accurate results. In
future work we would like to test this approach on underwater images
and on images from intelligent vehicles.
ACKNOWLEDGEMENTS
I wish to express my sincere thanks and deep sense of gratitude to
my respected mentor and guide, Mr. V. B. Baru, Associate Professor
in the Department of Electronics and Telecommunication Engineering,
Sinhgad College of Engineering, Pune, for his technical advice,
encouragement and constructive criticism.
REFERENCES
[1] C. O. Ancuti and C. Ancuti, “Single image dehazing by
multi-scale fusion,” IEEE Trans. Image Process., vol. 22,
pp. 3271–3282, Aug. 2013.
[2] S. Narasimhan and S. Nayar, “Contrast restoration of
weather degraded images,” IEEE Trans. Pattern Anal.
Mach. Intell., vol. 25, no. 6, pp. 713–724, Jun. 2003.
[3] S. Narasimhan and S. Nayar, “Vision in bad weather,”
in Proc. IEEE Int. Conf. Comput. Vis., Sep. 1999, pp.
820–827.
[3] S. Narasimhan and S. Nayar, “Chromatic framework
for vision in bad weather,” in Proc. IEEE Conf.
Comput. Vis. Pattern Recognit., Jun. 2000, pp. 598–
605.
[4] T. Treibitz and Y. Y. Schechner, “Polarization:
Beneficial for visibility enhancement” in Proc. IEEE
Conf. Comput. Vis. Pattern Recognit., Jun. 2009, pp.
525–532.
[5] Y. Y. Schechner, S. Narasimhan, and S. Nayar,
“Polarization-based vision through haze,” Appl. Opt.,
vol. 42, no. 3, pp. 511–525, 2003.
[6] S. Shwartz, E. Namer, and Y. Schechner, “Blind haze
separation,” in Proc. IEEE Conf. Comput. Vis. Pattern
Recognit., 2006, pp. 1984–1991.
[7] Y. Schechner and Y. Averbuch, “Regularized image
recovery in scattering media,” IEEE Trans. Pattern
Anal. Mach. Intell., vol. 29, no. 9, pp. 1655–1660, Sep.
2007.
[8] S. Narasimhan and S. Nayar, “Interactive
(de)weathering of an image using physical models,” in
Proc. IEEE Workshop Color Photometric Methods
Comput. Vis., Oct. 2003, p. 1.
[9] J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or,
O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep
photo: Model-based photograph enhancement and
viewing,” ACM Trans. Graph., vol. 27, no. 5, p. 116,
2008.
[10] R. Fattal, “Single image dehazing,” ACM Trans.
Graph., SIGGRAPH, vol. 27, no. 3, p. 72, 2008.
[11] R. T. Tan, “Visibility in bad weather from a single
image,” in Proc. IEEE Conf. Comput. Vis. Pattern
Recognit., Jun. 2008, pp. 1–8.
[12] K. He, J. Sun, and X. Tang, “Single image haze
removal using dark channel prior,” in Proc. IEEE Conf.
Comput. Vis. Pattern Recognit., Jun. 2009, pp. 1956–
1963.
[13] J.-P. Tarel and N. Hautiere, “Fast visibility restoration
from a single color or gray level image,” in Proc. IEEE
Int. Conf. Comput. Vis., Sep.–Oct. 2009, pp. 2201–
2208.
[14] K. Nishino, L. Kratz, and S. Lombardi, “Bayesian
defogging,” Int. J. Comput. Vis., vol. 98, no. 3, pp.
263–278, 2012.
[15] L. Schaul, C. Fredembach, and S. Süsstrunk, “Color
image dehazing using the near-infrared,” in Proc. IEEE
Int. Conf. Image Process., Nov. 2009, pp. 1629–1632.
[16] P. J. Burt, K. Hanna, and R. J. Kolczynski, “Enhanced
image capture through fusion,” in Proc. IEEE Int.
Conf. Comput. Vis., May 1993, pp. 173–182.
[17] R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk,
“Frequency-tuned salient region detection,” in Proc.
IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2009,
pp. 1597–1604.
[18] P. Burt and T. Adelson, “The Laplacian pyramid as a
compact image code,” IEEE Trans. Commun., vol. 31,
no. 4, pp. 532–540, Apr. 1983.
[19] T. O. Aydin, R. Mantiuk, K. Myszkowski, and H.-P.
Seidel, “Dynamic range independent image quality
assessment,” ACM Trans. Graph. (Proc. SIGGRAPH),
vol. 27, pp. 1–10, 2008.