The document summarizes a research project on single image haze removal using a variable fog-weight. It begins with an introduction on how haze degrades image quality and the need for haze removal techniques. It then discusses the motivation, literature review, objective, and main contribution of the proposed method. The method uses the dark channel prior to estimate the transmission map and atmospheric light. It then applies a variable fog-weight to modify the transmission map and reduce halo artifacts. A guided filter is used for transmission refinement before recovering the haze-free scene radiance. The method aims to improve on existing techniques by reducing time complexity and halo artifacts while enhancing image visibility.
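As a rough illustration of the pipeline just described, the sketch below strings the steps together in Python/NumPy. It is a minimal sketch under common dark-channel-prior assumptions, not the author's exact implementation: in particular, the `omega` expression stands in for the variable fog-weight, whose precise form the summary does not give, and the guided filter is a simple box-filter version; all function names are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def dark_channel(img, patch=15):
    # Per-pixel minimum over RGB, then a local minimum (erosion) over the patch.
    return minimum_filter(img.min(axis=2), size=patch)

def atmospheric_light(img, dark, top=0.001):
    # Pick the brightest pixels among the top 0.1% of the dark channel.
    n = max(1, int(dark.size * top))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    return img.reshape(-1, 3)[idx].max(axis=0)

def guided_filter(guide, src, r=40, eps=1e-3):
    # Edge-preserving smoothing of src, guided by a grayscale guide image.
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mg, ms = mean(guide), mean(src)
    cov = mean(guide * src) - mg * ms
    var = mean(guide * guide) - mg * mg
    a = cov / (var + eps)
    b = ms - a * mg
    return mean(a) * guide + mean(b)

def dehaze(img, patch=15, t0=0.1):
    A = atmospheric_light(img, dark_channel(img, patch))
    norm_dark = dark_channel(img / A, patch)
    # Hypothetical "variable fog-weight": remove more haze where the dark channel
    # (a rough haze-density cue) is large, less in thin-haze regions.
    omega = 0.75 + 0.2 * norm_dark
    t = 1.0 - omega * norm_dark
    t = guided_filter(img.mean(axis=2), t, r=40, eps=1e-3)   # transmission refinement
    t = np.clip(t, t0, 1.0)[..., None]
    J = (img - A) / t + A          # invert I = J*t + A*(1 - t)
    return np.clip(J, 0.0, 1.0)

# Usage: img is a float RGB image in [0, 1], e.g. img = plt.imread("hazy.png")[..., :3]
```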
This document discusses digital image processing. It defines digital images as two-dimensional representations of values stored as pixels in computer memory. Digital image processing involves enhancing images, extracting information and features, and manipulating images using computer software. The document outlines common image processing techniques like image compression, enhancement, and measurement extraction. It also describes the basics of digital image editing using software to alter pixel values and change image properties.
This document discusses different types of electromagnetic radiation and their uses in digital image processing. It covers gamma rays, X-rays, ultraviolet rays, visible and infrared rays, microwaves, and radio bands. Applications described include medical imaging techniques like MRI, industrial inspection, astronomy, remote sensing, and law enforcement applications like license plate and fingerprint recognition. Radar imaging is also discussed as a key application using microwaves.
This document provides an overview of mathematical morphology and its applications to image processing. Some key points:
- Mathematical morphology uses concepts from set theory and uses structuring elements to probe and extract image properties. It provides tools for tasks like noise removal, thinning, and shape analysis.
- Basic operations include erosion, dilation, opening, and closing. Erosion shrinks objects while dilation expands them. Opening and closing combine these to smooth contours or fill gaps.
- Hit-or-miss transforms allow detecting specific shapes. Skeletonization reduces objects to 1-pixel wide representations.
- Morphological operations can be applied to binary or grayscale images. Structuring elements are used to specify the neighborhood of pixels.
These slides give a brief introduction to image restoration techniques: how to estimate the degradation function, and the common noise models and their probability density functions.
This document discusses image histogram equalization. It begins by defining an image histogram as a graphical representation of the number of pixels at each intensity value. Histogram equalization automatically determines a transformation function to produce a new image with a uniform histogram and increased contrast. This technique works by mapping the intensity values of the input image to a new range of values such that the histogram of the output image is uniform. The document provides an example of performing histogram equalization on an image and assigns related homework on digital image processing applications.
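For concreteness, a minimal NumPy sketch of the transformation described above (build the histogram, form the cumulative distribution, and use it as the intensity mapping) might look like this; the function name and 8-bit assumption are illustrative, not taken from the document.

```python
import numpy as np

def equalize_histogram(gray):
    """Map 8-bit intensities so the output histogram is (approximately) uniform."""
    hist = np.bincount(gray.ravel(), minlength=256)   # pixel count at each intensity
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                               # normalized cumulative distribution
    lut = np.round(255 * cdf).astype(np.uint8)        # transformation function s = T(r)
    return lut[gray]

# Usage: out = equalize_histogram(img)  # img is a 2-D uint8 array
```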
Homomorphic filtering is a technique used to remove multiplicative noise from images by transforming the image into the logarithmic domain, where the multiplicative components become additive. This allows the use of linear filters to separate the illumination and reflectance components, with a high-pass filter used to remove low-frequency illumination variations while preserving high-frequency reflectance edges. The filtered image is then transformed back to restore the original domain. Homomorphic filtering is commonly used to correct non-uniform illumination and simultaneously enhance contrast in grayscale images.
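A compact sketch of that log-filter-exponentiate chain is shown below; the Gaussian-shaped high-emphasis filter and the specific gain values are illustrative assumptions, since the summary does not fix them.

```python
import numpy as np

def homomorphic_filter(gray, sigma=30.0, gamma_low=0.5, gamma_high=2.0):
    """Attenuate low-frequency illumination and boost high-frequency reflectance."""
    log_img = np.log1p(gray.astype(np.float64))          # multiplicative -> additive
    F = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2               # squared distance from the center
    # Gaussian-shaped high-emphasis filter between gamma_low and gamma_high.
    H = gamma_low + (gamma_high - gamma_low) * (1 - np.exp(-D2 / (2 * sigma ** 2)))
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    out = np.expm1(filtered)                              # back from the log domain
    out = 255 * (out - out.min()) / (out.max() - out.min())
    return out.astype(np.uint8)
```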
The HSI (hue, saturation, intensity) color model represents color in a way that is more perceptually relevant to humans than the RGB (red, green, blue) model. Hue represents the color itself (such as red, yellow, or blue), saturation represents how pure the color is, i.e. how little it is diluted by gray, and intensity represents the brightness. The HSI model separates intensity from color information. Converting an image to HSI allows color manipulations, like changing hue or saturation, before converting back to RGB for display.
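A small sketch of the RGB-to-HSI conversion, using the commonly cited geometric formulas (exact conventions vary between textbooks, so treat this as one reasonable variant), could look like this:

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image with values in [0, 1] to H (radians), S, and I planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                               # intensity: average brightness
    s = 1.0 - np.min(rgb, axis=2) / (i + 1e-8)          # saturation: 0 = gray, 1 = pure color
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b <= g, theta, 2 * np.pi - theta)      # hue as an angle on the color circle
    return h, s, i
```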
Digital images can be enhanced in various ways to improve quality. There are three main categories of enhancement techniques: spatial domain, frequency domain, and combination methods. Spatial domain methods operate directly on pixel values using point processing or neighborhood filtering. Key spatial techniques include contrast stretching, thresholding, and histogram equalization. Frequency domain methods modify an image's Fourier transform. Common transformations include logarithmic, power-law, and piecewise linear functions, which can increase contrast or highlight certain grayscale ranges. Proper enhancement improves an image's features for desired applications.
After an image has been segmented into regions, the resulting pixels are usually represented and described in a suitable form for further computer processing.
The document discusses noise models and methods for removing additive noise from digital images. It describes several types of noise that can affect images, such as Gaussian, impulse, uniform, Rayleigh, gamma and exponential noise. It also presents various noise filters that can be used to remove noise, including mean filters like arithmetic, geometric and harmonic filters, and order statistics filters such as median, max, min and midpoint filters. The filters aim to reduce noise while retaining image detail as much as possible.
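As a quick illustration of a few of the filters named above, the following sketch applies an arithmetic mean, a median, and a midpoint filter with SciPy; the kernel size and the particular API calls are just one reasonable setup, not the document's own code.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter, maximum_filter, minimum_filter

def denoise_examples(gray, k=3):
    """Apply a few classic noise filters over a k x k neighborhood."""
    arithmetic_mean = uniform_filter(gray.astype(np.float64), size=k)   # good for Gaussian noise
    median = median_filter(gray, size=k)                                # good for impulse (salt-and-pepper) noise
    midpoint = 0.5 * (maximum_filter(gray, size=k).astype(np.float64)
                      + minimum_filter(gray, size=k))                   # good for uniform noise
    return arithmetic_mean, median, midpoint
```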
This document summarizes image compression techniques. It discusses:
1) The goal of image compression is to reduce the amount of data required to represent a digital image while preserving as much information as possible.
2) There are three main types of data redundancy in images - coding, interpixel, and psychovisual - and compression aims to reduce one or more of these.
3) Popular lossless compression techniques, like Run Length Encoding and Huffman coding, exploit coding and interpixel redundancies. Lossy techniques introduce controlled loss for further compression.
This document discusses various image compression standards and techniques. It begins with an introduction to image compression, noting that it reduces file sizes for storage or transmission while attempting to maintain image quality. It then outlines several international compression standards for binary images, photos, and video, including JPEG, MPEG, and H.261. The document focuses on JPEG, describing how it uses discrete cosine transform and quantization for lossy compression. It also discusses hierarchical and progressive modes for JPEG. In closing, the document presents challenges and results for motion segmentation and iris image segmentation.
Applications of Digital Image Processing in the Medical Field - Ashwani Srivastava
This document discusses different types of electromagnetic radiation used for imaging. It describes digital images as composed of pixels and notes that digital image processing involves manipulating digital images on a computer. It outlines different levels of image processing from low-level tasks like noise reduction to mid-level tasks like segmentation to high-level tasks like image analysis. It provides examples of imaging applications using gamma rays, X-rays, ultraviolet light, microwaves, radio waves, and magnetic resonance imaging.
This document discusses data compression techniques for digital images. It explains that compression reduces the amount of data needed to represent an image by removing redundant information. The compression process involves an encoder that transforms the input image, and a decoder that reconstructs the output image. The encoder uses three main stages: a mapper to reduce interpixel redundancy, a quantizer to reduce accuracy and psychovisual redundancy, and a symbol encoder to assign variable-length codes to the quantized values. The decoder performs the inverse operations of the encoder and mapper to reconstruct the original image, but does not perform the inverse of quantization which is a lossy process.
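The three encoder stages can be mimicked on a single image row with a toy example like the one below; the difference mapper, uniform quantizer, and run-length symbol coder are illustrative stand-ins for the general stages, not the scheme of any particular standard.

```python
import numpy as np

def toy_encoder(row, step=4):
    """Illustrative mapper -> quantizer -> symbol-encoder chain for one image row."""
    # Mapper: first-order differences reduce interpixel redundancy.
    mapped = np.diff(row.astype(np.int16), prepend=np.int16(0))
    # Quantizer: coarser levels discard psychovisually less important precision (lossy).
    quantized = np.round(mapped / step).astype(np.int16)
    # Symbol encoder: run-length pairs (value, count) exploit coding redundancy.
    symbols, runs = [], []
    for v in quantized:
        if symbols and symbols[-1] == v:
            runs[-1] += 1
        else:
            symbols.append(int(v)); runs.append(1)
    return list(zip(symbols, runs))

def toy_decoder(pairs, step=4):
    """Inverse of symbol coding and mapping; quantization itself is not invertible."""
    quantized = np.repeat([v for v, _ in pairs], [n for _, n in pairs])
    return np.cumsum(quantized * step).astype(np.int16)
```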
This paper analyzed different haze removal methods. Haze causes trouble for many computer graphics/vision applications because it reduces the visibility of the scene. Airlight and attenuation are the two basic phenomena of haze: airlight adds whiteness to the scene, while attenuation reduces its contrast. Haze removal techniques recover the colour and contrast of the scene, and many applications such as object detection, surveillance, and consumer electronics apply them. This paper focuses on methods for effectively eliminating haze from digital images and also indicates the demerits of current techniques.
This document discusses color image processing and different color models. It begins with an introduction and then covers color fundamentals such as brightness, hue, and saturation. It describes common color models like RGB, CMY, HSI, and YIQ. Pseudo color processing and full color image processing are explained. Color transformations between color models are also discussed. Implementation tips for interpolation methods in color processing are provided. The document concludes with thanks to the head of the computer science department.
1. The document discusses the key elements of digital image processing including image acquisition, enhancement, restoration, segmentation, representation and description, recognition, and knowledge bases.
2. It also covers fundamentals of human visual perception such as the anatomy of the eye, image formation, brightness adaptation, color fundamentals, and color models like RGB and HSI.
3. The principles of video cameras are explained including the construction and working of the vidicon camera tube.
Single image dehazing based on efficient transmission estimation - AVVENIRE TECHNOLOGIES
We propose a novel haze imaging model for single image haze removal. The haze imaging model is formulated using the dark channel prior (DCP), scene radiance, intensity, atmospheric light and the transmission medium. The dark channel prior is based on the statistics of outdoor haze-free images. We find that, in most local regions which do not cover the sky, some pixels (called dark pixels) very often have very low intensity in at least one color (RGB) channel. In hazy images, the intensity of these dark pixels in that channel is mainly contributed by the airlight. Therefore, these dark pixels can directly provide an accurate estimation of the haze transmission. Combining a haze imaging model and an interpolation method, we can recover a high-quality haze-free image and produce a good depth map.
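For reference, the standard haze imaging model and the dark-channel-based transmission estimate that this kind of method builds on are usually written as follows (notation assumed here, since the abstract does not spell it out):

$$I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr)$$
$$t(x) \approx 1 - \omega \min_{y \in \Omega(x)} \min_{c \in \{r,g,b\}} \frac{I^{c}(y)}{A^{c}}$$

where I is the observed hazy image, J the scene radiance, A the atmospheric light, t the transmission, Ω(x) a local patch around pixel x, and ω < 1 a constant that keeps a small amount of haze for depth perception.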
The document discusses edge detection methods including gradient based approaches like Sobel and zero crossing based techniques like Laplacian of Gaussian. It proposes a new algorithm that applies fuzzy logic to the results of gradient and zero crossing edge detection on an image to more accurately identify edges. The algorithm calculates gradient and zero crossings, applies fuzzy rules to classify pixels, and thresholds to determine final edge pixels.
Frequency Domain Image Enhancement Techniques - Diwaker Pant
The document discusses various techniques for enhancing digital images, including spatial domain and frequency domain methods. It describes how frequency domain techniques work by applying filters to the Fourier transform of an image, such as low-pass filters to smooth an image or high-pass filters to sharpen it. Specific filters discussed include ideal, Butterworth, and Gaussian filters. The document provides examples of applying low-pass and high-pass filters to images in the frequency domain.
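A minimal sketch of this frequency-domain filtering, here with a Gaussian transfer function (the cutoff value and function name are arbitrary choices for illustration), is:

```python
import numpy as np

def gaussian_frequency_filter(gray, cutoff=30.0, highpass=False):
    """Filter an image by multiplying its centered Fourier transform by a Gaussian mask."""
    F = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = np.exp(-D2 / (2 * cutoff ** 2))       # Gaussian low-pass transfer function (smoothing)
    if highpass:
        H = 1.0 - H                            # high-pass = 1 - low-pass (sharpening)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
```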
The document discusses various techniques for image segmentation including discontinuity-based approaches, similarity-based approaches, thresholding methods, region-based segmentation using region growing and region splitting/merging. Key techniques covered include edge detection using gradient operators, the Hough transform for edge linking, optimal thresholding, and split-and-merge segmentation using quadtrees.
This document provides an overview of digital image fundamentals and operations. It defines what a digital image is, how it is represented as a matrix, and common image types like RGB, grayscale, and binary. Pixels, resolution, neighborhoods, and basic relationships between pixels are discussed. The document also covers different types of image operations including point, local, and global operations as well as examples like arithmetic, logical, and geometric transformations. Finally, it introduces concepts of linear and nonlinear operations and announces the topic of the next lecture on image enhancement in the spatial domain.
Wavelet transform is one of the important methods of compressing image data so that it takes up less memory. Wavelet based compression techniques have advantages such as multi-resolution, scalability and tolerable degradation over other techniques.
This document provides an overview of digital image processing. It defines what an image is, noting that an image is a spatial representation of a scene represented as an array of pixels. Digital image processing refers to processing digital images on a computer. The key steps in digital image processing are image acquisition, enhancement, restoration, compression, morphological processing, segmentation, representation, and recognition. Digital image processing has many applications including medical imaging, traffic monitoring, biometrics, and computer vision.
Bilateral filtering for gray and color images - Harshal Ladhe
The document summarizes a research paper on bilateral filtering, which is an edge-preserving smoothing technique. Bilateral filtering smooths images while preserving edges by combining nearby pixel values based on both their geometric closeness and photometric similarity. It can smooth colors in a way that is perceptually tuned to human vision. In contrast to standard filters, bilateral filtering does not produce phantom colors along edges in color images. The paper introduces the concept of bilateral filtering and discusses its advantages over traditional filtering methods for edge-preserving smoothing of both gray-scale and color images.
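A naive, self-contained sketch of the bilateral filter for a grayscale image is given below; it is deliberately simple (and slow) so that the combination of spatial and photometric weights is explicit, and the sigma values are illustrative rather than taken from the paper.

```python
import numpy as np

def bilateral_filter(gray, radius=3, sigma_s=3.0, sigma_r=25.0):
    """Each output pixel is a weighted mean of its neighbors, weighted by both
    spatial closeness (sigma_s) and intensity similarity (sigma_r)."""
    img = gray.astype(np.float64)
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    # Precompute the spatial (domain) Gaussian weights for the window.
    ax = np.arange(-radius, radius + 1)
    spatial = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma_s ** 2))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range (photometric) weights: similar intensities count more.
            rng = np.exp(-((window - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```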
This document presents a blur classification approach using a Convolution Neural Network (CNN). It discusses types of image degradation including blur, different blur models, and prior work on blur classification using features and neural networks. The proposed method uses a CNN to classify images into four blur categories (motion, defocus, box, and Gaussian blur) based on the images' frequency spectra. The method is evaluated on a dataset with over 2800 synthetically blurred images from 24 people performing 10 gestures. The CNN achieves an average accuracy of 97% for blur classification, outperforming alternatives using multilayer perceptrons or handcrafted features.
IRJET - Contrast and Color Improvement based Haze Removal of Underwater Image... - IRJET Journal
This document proposes a method for removing haze from underwater images using fusion techniques. It involves three main steps:
1. Removal of haze from the input underwater image using a water shield filter to extract a dehazed image.
2. Denoising the dehazed image using a sequential algorithm to compensate for uneven lighting and enhance image features.
3. Fusing the dehazed and denoised images to produce a clear output image with both haze and noise removed.
The method aims to improve underwater image visibility and contrast correction in a simple and effective manner. Evaluation on sample images demonstrates reduced haze and artifacts after processing.
IJRET-V1I1P2 - A Survey Paper On Single Image and Video Dehazing Methods - ISAR Publications
Most computer applications use digital images. Digital image processing plays an important role in the analysis and interpretation of data that is in digital form. Images taken in foggy weather conditions often suffer from poor visibility and clarity. After a study of several fast dehazing methods, such as Tan's dehazing technique, Fattal's dehazing technique and Kaiming He et al.'s dehazing technique, the Dark Channel Prior (DCP) proposed by He et al. is found to be the most effective technique for dehazing. This survey aims to study various existing methods used for dehazing, such as polarization, dark channel prior, and depth-map-based methods.
Numerous studies have been conducted on enhancing sand-dust images using techniques like histogram equalization, Retinex-based methods, and treating it as a dehazing problem. Convolutional neural networks (CNNs) have also been applied to tasks like transmission map estimation, underwater image enhancement, and image restoration in various conditions. Challenges include reducing noise without losing details, and addressing issues like light absorption and scattering that cause low contrast, visibility and color distortion in hazy, sand-dust and underwater images. Ongoing research continues advancing knowledge in scene recovery fields like these.
A Review over Different Blur Detection Techniques in Image Processing - paperpublications3
Abstract: In the last few years there has been a lot of development and attention in the area of blur detection techniques. Blur detection techniques are very helpful in real-life applications and are used in image segmentation, image restoration and image enhancement. They are used to remove blur from a blurred region of an image caused by defocus of the camera or motion of an object. In this literature review we present some blur detection techniques, such as blind image de-convolution, low depth of field, edge sharpness analysis, and low directional high-frequency energy. After studying all these techniques we have found that a lot of future work is required to develop a perfect and effective blur detection technique.
1. The document proposes a new method for shadow detection and removal in satellite images using image segmentation, shadow feature extraction, and inner-outer outline profile line (IOOPL) matching.
2. Key steps include segmenting the image, detecting suspected shadows, eliminating false shadows through analysis of color and spatial properties, extracting shadow boundaries, and obtaining homogeneous sections through IOOPL similarity matching to determine radiation parameters for shadow removal.
3. Experimental results showed the method could successfully detect shadows and remove them to improve image quality for applications like object classification and change detection.
A Survey on Single Image Dehazing Approaches - IRJET Journal
This document provides a survey of single image dehazing approaches. It begins with an introduction to the problem of haze in images and how it degrades quality. It then summarizes several existing single image dehazing methods, including those based on the atmospheric scattering model, dark channel prior, color attenuation prior, and deep learning approaches. The survey covers the key assumptions and limitations of each approach. Overall, the document reviews the progress that has been made in developing techniques to remove haze from a single input image.
Semantic mapping of road scenes, PhD thesis. The main aim of the thesis is to investigate and propose solutions to the scene understanding problem of finding 'what' objects are present in the world and 'where' they are located.
A Review on Deformation Measurement from Speckle Patterns using Digital Image... - IRJET Journal
This document reviews digital image correlation (DIC) for deformation measurement using speckle patterns. DIC is a non-contact optical method that uses digital images of a speckle pattern on a surface before and after deformation. By comparing the speckle patterns in the images, DIC can determine displacement and strain fields with high accuracy. The document discusses speckle pattern types, the DIC process, related works that have improved DIC methods, and applications of DIC such as for high-temperature testing. DIC provides full-field measurements and greater accuracy compared to conventional contact methods.
EXTENDED WAVELET TRANSFORM BASED IMAGE INPAINTING ALGORITHM FOR NATURAL SCENE... - cscpconf
This paper proposes exemplar-based image inpainting using an extended wavelet transform. Image inpainting modifies an image, using the information available outside the region to be inpainted, in an undetectable way. The extended wavelet transform is two-dimensional: the Laplacian pyramid is first used to capture point discontinuities, followed by a directional filter bank that links point discontinuities into linear structures. The proposed model effectively captures the edges and contours of natural scene images.
Visual Environment by Semantic Segmentation Using Deep Learning: A Prototype ... - Tomohiro Fukuda
This document describes a proposed method for estimating sky view factor (SVF) using semantic segmentation with deep learning networks. Specifically:
- It develops a system using SegNet and U-Net deep learning models to perform pixel-wise semantic segmentation of sky and non-sky areas from images to calculate SVF ratios.
- The system was trained on 300 manually segmented images and tested on 100 fisheye photographs, achieving 98% accuracy in estimating SVF under different sky conditions.
- Future work is needed to apply the system to live video streams rather than static images. The method provides an efficient, high-precision way to estimate important urban environmental metrics like SVF.
Single Image Fog Removal Based on Fusion Strategy - csandit
Images of outdoor scenes are degraded by absorption and scattering caused by suspended particles and water droplets in the atmosphere. The light coming from a scene towards the camera is attenuated by fog and blended with the airlight, which adds whiteness to the scene. Fog removal is highly desired in computer vision applications, since removing fog can significantly increase the visibility of the scene and is more visually pleasing. In this paper, we propose a method that can handle both homogeneous and heterogeneous fog and has been tested on several types of synthetic and real images. We formulate the restoration problem based on a fusion strategy that combines two images derived from a single foggy image: one is derived using a contrast-based method, while the other is derived using a statistics-based approach. These derived images are then weighted by a specific weight map to restore the image. We have performed a qualitative and quantitative evaluation on 60 images, using the mean square error and peak signal-to-noise ratio as performance metrics to compare our technique with the state-of-the-art algorithms. The proposed technique is simple and shows comparable or even slightly better results than the state-of-the-art algorithms used for defogging a single image.
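Since the evaluation above relies on mean square error and peak signal-to-noise ratio, a small helper for computing them (assuming 8-bit images, hence a peak value of 255) might look like this:

```python
import numpy as np

def mse_psnr(reference, restored, peak=255.0):
    """Mean squared error and peak signal-to-noise ratio between two images."""
    err = (reference.astype(np.float64) - restored.astype(np.float64)) ** 2
    mse = err.mean()
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr
```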
Adversarial Photo Frame: Concealing Sensitive Scene Information in a User-Acc... - multimediaeval
Paper: http://ceur-ws.org/Vol-2670/MediaEval_19_paper_24.pdf
Youtube: https://www.youtube.com/watch?v=keLM9fmKJSI
Zhuoran Liu and Zhengyu Zhao, Adversarial Photo Frame: Concealing Sensitive Scene Information of Social Images in a User-Acceptable Manner. Proc. of MediaEval 2019, 27-29 October 2019, Sophia Antipolis, France.
Abstract:
Personal privacy protection has become more and more crucial in the era of big multimedia data and artificial intelligence. This paper presents our submission to the pixel privacy task, where we propose to fool the deep visual classification model used for recognition of sensitive scenes by adding an adversarial frame to the image. Experimental results indicate that our method can achieve strong adversarial effects while maintaining the visual appeal and social function of the transformed images.
Presented by Zhengyu Zhao
Two Dimensional Image Reconstruction Algorithms - mastersrihari
This document discusses two-dimensional image reconstruction algorithms. It begins with an introduction to image projections and reconstruction. It then describes different types of projections like parallel beam, fan beam, and truncated projections. It discusses the convolution back projection algorithm and its digital implementation. Results are shown for different filters. Applications include medical imaging. Present research focuses on limited data reconstruction. The document concludes that image reconstruction is an ill-posed problem.
In this paper, an attempt has been made to extract texture features from facial images using an improved illumination-invariant feature descriptor. The proposed local ternary pattern based feature extractor, viz. the Steady Illumination Local Ternary Pattern (SIcLTP), has been used to extract texture features from the Indian face database. The similarity matching between two extracted feature sets has been obtained using Zero Mean Sum of Squared Differences (ZSSD). The RGB facial images are first converted into the YIQ colour space to reduce the redundancy of the RGB images. The result obtained has been analysed using the Receiver Operating Characteristic curve and is found to be promising. Finally, the results are validated against the standard local binary pattern (LBP) extractor.
Implemented an advanced 2D Otsu method for image segmentation, which solved problems of the traditional Otsu method such as sensitivity to noise and shadow. The program was written and debugged in C on the VisionX system.
1. Department of Electronics and Communication Engineering
Single image haze removal using variable fog-weight
Presented by:
Name: MD Mohsin Ghazi
Roll No.: 1609731055
B.Tech (ECE)
3. Introduction
• The quality of an image is generally degraded by bad weather conditions and the presence of suspended particles such as fog, dust, mist and haze in the atmosphere.
• Therefore, dehazing of the image is needed to overcome the impact of these unwanted weather factors.
• Dehazing is the procedure of removing the haze effect from the degraded image and reconstructing its original colours.
• Reconstruction of the original colours of an image captured under bad weather conditions is highly desired in both computational photography and computer vision applications.
4. Introduction (continued)
• Therefore, extraction of haze from the captured image is a very challenging task.
• To enhance visibility and make images usable, many researchers have made numerous efforts and proposed different haze removal techniques.
• The role of haze removal is to remove the impact of the weather factor and improve the visibility of the image.
• The figure shows an image degraded by haze together with the corresponding dehazed image.
5. Motivation
• All conventional vision systems are designed to perform in clear weather.
• Under adverse weather conditions such as "mist, fog, rain, and snow", the contrast and colour of images are drastically altered or degraded.
• Most outdoor vision applications, such as "autonomous navigation, real-time surveillance, remote sensing, and automatic target recognition (ATR)", are incomplete without mechanisms that guarantee satisfactory performance under poor weather conditions.
• It is imperative to remove weather effects from images in order to make vision systems more reliable.
6. Literature review
1. Narasimhan, S.G. and Nayar, S.K., 2003. Contrast restoration of weather degraded images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(6), pp.713-724. Findings: restoration-based method; the algorithm is based on multiple images. Limitations: scene information is needed from the sensors or an existing database.
2. Narasimhan, S.G. and Nayar, S.K., 2000, June. Chromatic framework for vision in bad weather. In Proceedings IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2000 (Cat. No. PR00662) (Vol. 1, pp. 598-605). IEEE. Findings: general chromatic framework model; the algorithm is based on multiple images. Limitations: scene information is needed from the sensors or an existing database.
7. Literature Review (continued)
3. Treibitz, T. and Schechner, Y.Y., 2009, June. Polarization: Beneficial for visibility enhancement?. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (pp. 525-532). IEEE. Findings: polarization-based method. Limitations: it is complicated to obtain the source image.
4. Schechner, Y.Y., Narasimhan, S.G. and Nayar, S.K., 2003. Polarization-based vision through haze. Applied Optics, 42(3), pp.511-525. Findings: different polarizing conditions. Limitations: scene information is needed from the sensors or an existing database.
8. Literature Review (continued)
5. Shwartz, S., Namer, E. and Schechner, Y.Y., 2006, June. Blind haze separation. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06) (Vol. 2, pp. 1984-1991). IEEE. Findings: different polarizing conditions; independent component analysis. Limitations: it is complicated to obtain the source image; the method is incompatible, complex and time-consuming.
6. Schechner, Y.Y. and Averbuch, Y., 2007. Regularized image recovery in scattering media. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9), pp.1655-1660. Findings: polarization-based method. Limitations: it is complicated to obtain the source image.
9. Literature Review (continued)
7. Kopf, J., Neubert, B., Chen, B., Cohen, M., Cohen-Or, D., Deussen, O., Uyttendaele, M. and Lischinski, D., 2008. Deep photo: Model-based photograph enhancement and viewing. ACM Transactions on Graphics (TOG), 27(5), pp.1-10. Findings: based on additional information. Limitations: lengthy process, high time complexity and not an effective output.
8. Kim, J.H., Sim, J.Y. and Kim, C.S., 2011, May. Single image dehazing based on contrast enhancement. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1273-1276). IEEE. Findings: local contrast enhancement. Limitations: the output image becomes over-saturated and looks unnatural.
10. Literature Review (continued)
9. Tan, R.T., 2008, June. Visibility in bad weather from a single image. In 2008 IEEE Conference on Computer Vision and Pattern Recognition (pp. 1-8). IEEE. Findings: local contrast enhancement. Limitations: the output image becomes over-saturated and looks unnatural.
10. Schaul, L., Fredembach, C. and Süsstrunk, S., 2009, November. Color image dehazing using the near-infrared. In 2009 16th IEEE International Conference on Image Processing (ICIP) (pp. 1629-1632). IEEE. Findings: multi-resolution image fusion, near-infrared. Limitations: it is complicated to obtain the source image, and it yields a few halo artifacts.
11. Literature Review (continued)
11. Tarel, J.P. and Hautiere, N., 2009, September. Fast visibility restoration from a single color or gray level image. In 2009 IEEE 12th International Conference on Computer Vision (pp. 2201-2208). IEEE. Findings: dedicated tone mapping, white balance. Limitations: the output image is over-saturated, and halo artifacts also appear at the boundaries of edges.
12. Fattal, R., 2008. Single image dehazing. ACM SIGGRAPH 2008 Papers on - SIGGRAPH '08. Findings: constant albedo image, multi-albedo image. Limitations: the output image is over-saturated and looks unnatural.
12. Literature Review (continued)
13. He, K., Sun, J. and Tang, X., 2010. Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence, 33(12), pp.2341-2353.
Findings: Dark channel prior; soft matting.
Limitations: The soft-matting step makes the algorithm slow to execute, and a few halo artifacts appear in the output image.
14. Huang, S.C., Chen, B.H. and Wang, W.J., 2014. Visibility restoration of single hazy images captured in real-world weather conditions. IEEE Transactions on Circuits and Systems for Video Technology, 24(10), pp.1814-1824.
Findings: Depth estimation module, colour analysis module, and visibility restoration module.
Limitations: Does not remove haze properly for heavily hazed input images.
13. Literature Review (continued)
15. Wang, J.B., He, N., Zhang, L.L. and Lu, K., 2015. Single image dehazing with a physical model and dark channel prior. Neurocomputing, 149, pp.718-728.
Findings: Dark channel prior; variogram.
Limitations: Fails on heavily hazed input images.
16. Gao, R., Fan, X., Zhang, J. and Luo, Z., 2012, September. Haze filtering with aerial perspective. In 2012 19th IEEE International Conference on Image Processing (pp. 989-992). IEEE.
Findings: Dark channel prior.
Limitations: Fails in sky regions and on heavily hazed input images.
14. Literature Review (continued)
17. Kim, K., Kim, S. and Kim, K.S., 2017. Effective image enhancement techniques for fog-affected indoor and outdoor images. IET Image Processing, 12(4), pp.465-471.
Findings: Dark channel prior; contrast-limited adaptive histogram equalization with discrete wavelet transform.
Limitations: Inaccurate estimation of the transmission map in DCP; strongly affected by halo artifacts.
18. Wang, Z., Hou, G., Pan, Z. and Wang, G., 2017. Single image dehazing and denoising combining dark channel prior and variational models. IET Computer Vision, 12(4), pp.393-402.
Findings: Layered total variation, multichannel total variation, colour total variation, and dark channel prior.
Limitations: Fails on heavily hazed input images and is strongly affected by halo effects at object boundaries.
15. Literature Review (continued)
19. Xu, L., Wei, Y., Hong, B. and Yin, W., 2019. A Dehazing Algorithm Based on Local Adaptive Template for Transmission Estimation and Refinement. IEEE Access, 7, pp.125000-125010.
Findings: Dark channel prior; local adaptive template.
Limitations: The output becomes over-saturated and looks unnatural; the algorithm fails under heavy haze and only changes the colour of the hazy regions.
20. Kim, S.E., Park, T.H. and Eom, I.K., 2019. Fast single image dehazing using saturation based transmission map estimation. IEEE Transactions on Image Processing, 29, pp.1985-1998.
Findings: Dark channel prior; colour-veil removal.
Limitations: Not effective for heavily hazed input images; the output image remains hazy.
16. Literature Review (continued)
21. He, K., Sun, J. and Tang, X., 2012. Guided image filtering. IEEE transactions on pattern analysis and machine intelligence, 35(6), pp.1397-1409.
Findings: Guided filter.
Limitations: Strong halo effects appear at object boundaries.
22. Wang, W., Chang, F., Ji, T. and Wu, X., 2018. A fast single-image dehazing method based on a physical model and gray projection. IEEE Access, 6, pp.5641-5653.
Findings: Dark channel prior.
Limitations: Fails to remove haze properly in the output image, and halo artifacts remain at object boundaries.
17. Literature Review (continued)
23. Shen, L., Zhao, Y., Peng, Q., Chan, J.C.W. and Kong, S.G., 2018. An iterative image dehazing method with polarization. IEEE Transactions on Multimedia, 21(5), pp.1093-1107.
Findings: Dark channel prior; polarization.
Limitations: Fails when heavy haze is present in the input image.
24. Salazar-Colores, S., Cabal-Yepez, E., Ramos-Arreguin, J.M., Botella, G., Ledesma-Carrillo, L.M. and Ledesma, S., 2018. A fast image dehazing algorithm using morphological reconstruction. IEEE Transactions on Image Processing, 28(5), pp.2357-2366.
Findings: Dark channel prior; morphological reconstruction.
Limitations: Fails to remove haze properly in the output image, especially in areas where depth changes rapidly.
18. Literature Review (continued)
25. Kang, C. and Kim, G., 2018. Single image haze removal method using conditional random fields. IEEE Signal Processing Letters, 25(6), pp.818-822.
Findings: Conditional random field; tree-reweighted.
Limitations: Fails to remove haze properly when the haze density in the input image is high.
26. Liu, F. and Yang, C., 2014, August. A fast method for single image dehazing using dark channel prior. In 2014 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC) (pp. 483-486). IEEE.
Findings: Dark channel prior.
Limitations: The output image is affected by block effects, and haze is not removed properly where the haze density in the input image is high.
19. Objective
• In this proposed work, the main focus is to reduce time complexity, minimize halo artifacts, and improve the visibility of the input image.
20. Main contribution
• The main contribution of the proposed framework is to vary the fog-weight and thereby modify the inaccurate DCP transmission map according to the haze density of the input hazy image.
• After the transmission map is modified, a guided filter is used to suppress halo artifacts up to a threshold, which results in a high-quality haze-free image.
21. Haze removal method
Input image → Dark channel prior → Atmospheric light → Modified transmission map → Guided filter → Scene radiance recovery → Haze-free image
Flow chart of the proposed method
22. Haze removal method (continued)
Atmospheric scattering model:
• To describe the formation of a hazy image, the atmospheric scattering model is widely used in computer vision and image processing [22]:
H(x) = S(x) T(x) + A (1 − T(x))
where H(x) is the hazy image, S(x) is the recovered scene radiance, T(x) is the transmission map, and A is the atmospheric light.
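For intuition, the forward direction of this model (synthesizing a hazy image from a clean one) can be sketched in a few lines. The snippet below is only illustrative and is not part of the proposed method; the use of NumPy, image values in [0, 1], and the (H, W, 3) array layout are assumptions.

```python
import numpy as np

def synthesize_haze(scene, transmission, atmospheric_light):
    """Apply the scattering model H(x) = S(x) T(x) + A (1 - T(x)).

    scene:             haze-free image, float array in [0, 1], shape (H, W, 3)
    transmission:      per-pixel transmission T(x) in [0, 1], shape (H, W)
    atmospheric_light: global airlight A, array of shape (3,)
    """
    T = transmission[..., np.newaxis]            # broadcast T over the colour channels
    return scene * T + atmospheric_light * (1.0 - T)
```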
23. Haze removal method (continued)
Dark channel prior (DCP):
• DCP is based on statistical observations of many outdoor haze-free images. It has been found that, in most non-sky regions, haze-free images contain some pixels with very low intensity in at least one colour channel [13].
S_dark(x) = min_{c ∈ {r,g,b}} min_{y ∈ Ω(x)} S^c(y)
Estimate atmospheric light:
• The pixels of maximum intensity were regarded as the atmospheric light in [9], and this estimate was further refined in [12].
• In the proposed method, the top 0.1% brightest pixels in the dark channel are selected; among these, the pixel with the highest intensity is chosen as the atmospheric light.
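A compact NumPy/SciPy sketch of these two steps is given below, assuming a float RGB image in [0, 1]. The 15×15 patch size, the use of scipy.ndimage.minimum_filter for the local minimum, and measuring "highest intensity" as the channel sum are assumptions, not details taken from the slides.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Dark channel: per-pixel minimum over the colour channels, followed by a
    local minimum over a patch_size x patch_size window Omega(x) [13]."""
    return minimum_filter(image.min(axis=2), size=patch_size)

def estimate_atmospheric_light(image, dark, top_fraction=0.001):
    """Pick the top 0.1% brightest dark-channel pixels, then take the brightest
    of those pixels in the input image as the atmospheric light."""
    n_top = max(1, int(dark.size * top_fraction))
    candidate_idx = np.argsort(dark.ravel())[-n_top:]     # brightest in the dark channel
    candidates = image.reshape(-1, 3)[candidate_idx]
    # "highest intensity" is measured here as the channel sum (an assumption)
    return candidates[np.argmax(candidates.sum(axis=1))]
```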
24. Haze removal method (continued)
Modified transmission map:
• The transmission map T is obtained by applying the minimum operation of the dark channel prior to the hazy image normalized by the atmospheric light.
• The modified transmission map based on the DCP statistic is given by the equation below:
T(x) = 1 − ω_vr · min_c min_{y ∈ Ω(x)} ( H^c(y) / A^c )
ω_vr = log_10(R), 4 ≤ R ≤ 10
where ω_vr is the variable fog-weight, Ω(x) is the local patch used in the dark channel prior, and R is a real number.
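A minimal sketch of this step is shown below. How R is chosen from the haze density of the input is the paper's contribution and is not spelled out on this slide, so R is left as an explicit parameter here; the 15×15 patch size is also an assumption.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def modified_transmission(hazy, A, R, patch_size=15):
    """T(x) = 1 - w_vr * min_c min_{y in Omega(x)} ( H^c(y) / A^c ),
    with the variable fog-weight w_vr = log10(R), 4 <= R <= 10."""
    if not 4.0 <= R <= 10.0:
        raise ValueError("R is expected to lie in [4, 10]")
    w_vr = np.log10(R)                            # fog-weight in roughly [0.60, 1.0]
    normalized = hazy / A                         # divide each channel by its airlight
    dark = minimum_filter(normalized.min(axis=2), size=patch_size)
    return 1.0 - w_vr * dark
```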
25. Haze removal method (continued)
Guided Filter:
• A guided filter is used as the transmission refiner to suppress halo artifacts at object boundaries and to produce a high-quality haze-free image.
• Let H be the guidance image, p the image to be filtered, and q the resulting image. The local linear model of the guided filter is given as:
q_i = a_k H_i + b_k,  i ∈ ω_k
• where ω_k is a window of radius r centred at pixel k, and a_k, b_k are the linear coefficients [17].
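Below is a minimal grayscale guided filter built directly on this local linear model, using box means computed with scipy.ndimage.uniform_filter. The window radius and regularization eps are placeholder values, not parameters taken from the slides; OpenCV's contrib module also ships a guided filter that could be used instead.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=40, eps=1e-3):
    """Grayscale guided filter based on the local linear model q_i = a_k H_i + b_k [17].

    guide: guidance image (e.g. the grayscale hazy image), float in [0, 1]
    src:   image to be refined (here: the coarse transmission map)
    """
    box = lambda img: uniform_filter(img, size=2 * radius + 1)  # box mean over each window
    mean_I, mean_p = box(guide), box(src)
    cov_Ip = box(guide * src) - mean_I * mean_p
    var_I = box(guide * guide) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                    # per-window linear coefficients a_k
    b = mean_p - a * mean_I                       # per-window offsets b_k
    return box(a) * guide + box(b)                # output with window-averaged coefficients
```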
26. Haze removal method (continued)
Recovery of scene radiance:
• Using the modified transmission map and the estimated atmospheric light, the scene radiance S(x) is recovered from the atmospheric scattering model:
S(x) = (H(x) − A) / max(T(x), T_0) + A
• where T_0 is the lower bound of the transmission map, restricted to 0.1 in this work.
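The recovery step, and the way the earlier sketches chain together, can be written as follows; the clipping to [0, 1] and the example value R = 8 are assumptions for illustration.

```python
import numpy as np

def recover_scene(hazy, transmission, A, t0=0.1):
    """S(x) = (H(x) - A) / max(T(x), T0) + A, with the lower bound T0 = 0.1."""
    T = np.maximum(transmission, t0)[..., np.newaxis]
    return np.clip((hazy - A) / T + A, 0.0, 1.0)  # keep the result in displayable range

# Chaining the sketches from the previous slides (hazy: float RGB image in [0, 1]):
#   dark = dark_channel(hazy)
#   A    = estimate_atmospheric_light(hazy, dark)
#   T    = modified_transmission(hazy, A, R=8.0)   # R is tied to haze density in the paper
#   T    = guided_filter(hazy.mean(axis=2), T)     # transmission refinement
#   out  = recover_scene(hazy, T, A)
```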
27. Results and comparison
Result panels: input hazy image; our result without refinement; our result with refinement.
• All input hazy images are taken from the datasets of He et al. [13] and Kim et al. [21].
31. Conclusion
• This algorithm mainly focuses on avoiding block effects at object boundaries and on modifying the transmission map to provide a haze-free image that is not over-saturated, whether the haze density in the input image is low or high.
• Experiments on different types of hazy images confirm that the proposed algorithm can accurately estimate the transmission map and effectively avoid the block effect.
• The experimental results demonstrate that the proposed algorithm performs best in terms of both computational complexity and image quality.
32. List of Publications
IOP Science : Journal of Physics Conference Series
1. Mohammed Shoaib, Mohd Mohsin, Imbeshat Khalid Ansari, Harshat Maddhesiya, Upendra Kumar
Acharya, “Single image haze removal using variable fog-weight”, 2020 First International Conference
on Advances in Physical Science and Material (ICAPSM 2020). (Accepted and Presented)
33. References
1. Narasimhan, S.G. and Nayar, S.K., 2003. Contrast restoration of weather degraded images. IEEE
transactions on pattern analysis and machine intelligence, 25(6), pp.713-724.
2. Narasimhan, S.G. and Nayar, S.K., 2000, June. Chromatic framework for vision in bad weather. In
Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.
PR00662) (Vol. 1, pp. 598-605). IEEE.
3. Treibitz, T. and Schechner, Y.Y., 2009, June. Polarization: Beneficial for visibility enhancement?. In 2009
IEEE Conference on Computer Vision and Pattern Recognition (pp. 525-532). IEEE.
4. Schechner, Y.Y., Narasimhan, S.G. and Nayar, S.K., 2003. Polarization-based vision through haze.
Applied optics, 42(3), pp.511-525.
5. Shwartz, S., Namer, E. and Schechner, Y.Y., 2006, June. Blind haze separation. In 2006 IEEE Computer
Society Conference on Computer Vision and Pattern Recognition (CVPR'06) (Vol. 2, pp. 1984-1991).
IEEE.
34. References (continued)
6. Schechner, Y.Y. and Averbuch, Y., 2007. Regularized image recovery in scattering media. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 29(9), pp.1655-1660.
7. Kopf, J., Neubert, B., Chen, B., Cohen, M., Cohen-Or, D., Deussen, O., Uyttendaele, M. and Lischinski,
D., 2008. Deep photo: Model-based photograph enhancement and viewing. ACM transactions on
graphics (TOG), 27(5), pp.1-10.
8. Kim, J.H., Sim, J.Y. and Kim, C.S., 2011, May. Single image dehazing based on contrast enhancement.
In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp.
1273-1276). IEEE.
9. Tan, R.T., 2008, June. Visibility in bad weather from a single image. In 2008 IEEE Conference on
Computer Vision and Pattern Recognition (pp. 1-8). IEEE.
10.Schaul, L., Fredembach, C. and Süsstrunk, S., 2009, November. Color image dehazing using the near-
infrared. In 2009 16th IEEE International Conference on Image Processing (ICIP) (pp. 1629-1632).
IEEE.
35. References (continued)
11.Tarel, J.P. and Hautiere, N., 2009, September. Fast visibility restoration from a single color or gray level
image. In 2009 IEEE 12th International Conference on Computer Vision (pp. 2201-2208). IEEE.
12.Fattal, R., 2008, Single image dehazing. ACM SIGGRAPH 2008 Papers on - SIGGRAPH ’08.
13.He, K., Sun, J. and Tang, X., 2010. Single image haze removal using dark channel prior. IEEE
transactions on pattern analysis and machine intelligence, 33(12), pp.2341-2353.
14.Huang, S.C., Chen, B.H. and Wang, W.J., 2014. Visibility restoration of single hazy images captured in
real-world weather conditions. IEEE Transactions on Circuits and Systems for Video Technology, 24(10),
pp.1814-1824.
15.Wang, J.B., He, N., Zhang, L.L. and Lu, K., 2015. Single image dehazing with a physical model and
dark channel prior. Neurocomputing, 149, pp.718-728.
16.Gao, R., Fan, X., Zhang, J. and Luo, Z., 2012, September. Haze filtering with aerial perspective. In 2012
19th IEEE International Conference on Image Processing (pp. 989-992). IEEE.
36. References (continued)
17.He, K., Sun, J. and Tang, X., 2012. Guided image filtering. IEEE transactions on pattern analysis and
machine intelligence, 35(6), pp.1397-1409.
18.Kim, K., Kim, S. and Kim, K.S., 2017. Effective image enhancement techniques for fog-affected indoor
and outdoor images. IET Image Processing, 12(4), pp.465-471.
19.Wang, Z., Hou, G., Pan, Z. and Wang, G., 2017. Single image dehazing and denoising combining dark
channel prior and variational models. IET Computer Vision, 12(4), pp.393-402.
20.Xu, L., Wei, Y., Hong, B. and Yin, W., 2019. A Dehazing Algorithm Based on Local Adaptive Template for
Transmission Estimation and Refinement. IEEE Access, 7, pp.125000-125010.
21.Kim, S.E., Park, T.H. and Eom, I.K., 2019. Fast single image dehazing using saturation based
transmission map estimation. IEEE Transactions on Image Processing, 29, pp.1985-1998.
22.McCartney, E.J., 1976. Optics of the atmosphere: scattering by molecules and particles. New York: John Wiley and Sons.
37. References (continued)
23.Wang, W., Chang, F., Ji, T. and Wu, X., 2018. A fast single-image dehazing method based on a
physical model and gray projection. IEEE Access, 6, pp.5641-5653.
24.Shen, L., Zhao, Y., Peng, Q., Chan, J.C.W. and Kong, S.G., 2018. An iterative image dehazing method
with polarization. IEEE Transactions on Multimedia, 21(5), pp.1093-1107.
25.Salazar-Colores, S., Cabal-Yepez, E., Ramos-Arreguin, J.M., Botella, G., Ledesma-Carrillo, L.M. and
Ledesma, S., 2018. A fast image dehazing algorithm using morphological reconstruction. IEEE
Transactions on Image Processing, 28(5), pp.2357-2366.
26.Kang, C. and Kim, G., 2018. Single image haze removal method using conditional random fields. IEEE
Signal Processing Letters, 25(6), pp.818-822.
27.Liu, F. and Yang, C., 2014, August. A fast method for single image dehazing using dark channel prior. In
2014 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC)
(pp. 483-486). IEEE.