The document describes a proposed method for ghost and noise removal in exposure fusion for high dynamic range imaging. Key points:
- It uses exposure fusion within a subband architecture, where Haar wavelet filters decompose input images into subband images.
- A histogram-based method is used to generate motion maps for ghost removal, which are combined with exposure fusion weight maps.
- Multi-resolution bilateral filtering is applied to remove noise from the fused subband images.
- A gain control map is computed and applied to the denoised subband images to enhance details, before reconstructing the final HDR image.
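The fusion step at the heart of this pipeline can be illustrated with a minimal per-pixel sketch. This is not the proposed method itself: it omits the Haar subband decomposition, motion maps, bilateral filtering, and gain control, and uses only a well-exposedness weight (a Gaussian centred at mid-grey, as in Mertens-style exposure fusion). All names and parameter values are illustrative.

```python
import math

def well_exposedness(p, sigma=0.2):
    # Gaussian weight centred at mid-grey (0.5): well-exposed pixels get
    # high weight, under- and over-exposed pixels get low weight.
    return math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    # exposures: list of images, each a flat list of intensities in [0, 1].
    # Each output pixel is the weight-normalised blend across exposures.
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

dark = [0.05, 0.10, 0.20, 0.40]    # under-exposed frame
bright = [0.60, 0.70, 0.85, 0.95]  # over-exposed frame
result = fuse([dark, bright])
```

Each fused pixel lands between the two input values, pulled toward whichever exposure is closer to mid-grey at that location.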
Real-world scenes have a very wide range of luminance levels, but ordinary cameras cannot capture the true dynamic range of a natural scene. To extend the dynamic range of the captured image, a technique known as High Dynamic Range (HDR) imaging is generally used. HDR imaging is the process of capturing scenes with a larger intensity range than conventional sensors can capture, faithfully preserving detail in both the dark and bright parts of the scene. In this paper, HDR generation methods such as multiple-exposure fusion in the image domain and in the radiance domain are reviewed. The main issues in HDR imaging using the multiple-exposure combination technique are misalignment of input images, noise in the data sets, and ghosting artefacts; removing these artefacts is a major step in HDR reconstruction. Methods for removing misalignment and noise are discussed, and a detailed survey of ghost detection and removal techniques is given. Single-shot HDR imaging is a recent technique in which a single image, rather than multiple exposure inputs, is used to generate the HDR image. Various methods for single-shot HDR imaging are also reviewed.
Feature based ghost removal in high dynamic range imaging (ijcga)
This paper presents a technique to reduce the ghost artifacts in a high dynamic range (HDR) image. In HDR imaging, we need to detect the motion between multiple exposure images of the same scene in order to prevent the ghost artifacts. First, we establish correspondences between the aligned reference image and the other exposure images using the zero-mean normalized cross correlation (ZNCC). Then, we find object motion regions using adaptive local thresholding of ZNCC feature maps and motion map clustering. In this process, we focus on finding accurate motion regions and on reducing false detection in order to minimize the side effects as well. Through experiments with several sets of low dynamic range images captured with different exposures, we show that the proposed method can remove the ghost artifacts better than existing methods.
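The ZNCC measure used for the correspondence step can be sketched as follows; the function name and patch values are illustrative, not taken from the paper.

```python
import math

def zncc(a, b):
    # Zero-mean normalized cross correlation between two equally sized
    # patches (flat lists of intensities). Returns a value in [-1, 1]:
    # values near 1 indicate a good match; low values suggest motion.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    if da == 0 or db == 0:
        return 0.0  # flat patch: correlation undefined, treat as no match
    return num / (da * db)

patch = [10, 20, 30, 40]
brighter = [x * 2 + 5 for x in patch]  # same structure, different exposure
moved = [40, 30, 20, 10]               # structure changed (motion)
```

Because ZNCC subtracts the mean and divides by the standard deviation, it is invariant to gain and offset changes, which is exactly why it is suitable for comparing patches across differently exposed images: the brighter patch above still correlates perfectly, while the rearranged one does not.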
This document provides an overview of digital image processing and is divided into multiple parts. Part I discusses digital image fundamentals, image transforms, image enhancement, image restoration, image compression, and image segmentation. It introduces key concepts such as digital image systems, sampling and quantization, pixel relationships, and image transforms in both the spatial and frequency domains. Image processing techniques like filtering, histogram processing, and frequency domain filtering are also summarized.
This document provides an overview of image processing and various image segmentation techniques. It begins with basics of image processing, types of images, and image formats. Then it discusses basics of image segmentation, including watershed transformation, point detection, and region-based segmentation. It also covers various edge detection methods like first order derivative, second order derivative, and optimal edge detection using Canny edge detection. Specific techniques discussed include Roberts operator, Sobel operator, Prewitt operator, Laplacian, watershed transformation, and region growing segmentation.
Digital image processing - Image Enhancement (MATERIAL) (Mathankumar S)
This document discusses various image enhancement techniques including contrast stretching, compression of dynamic range, histogram equalization, and histogram specification. It provides definitions and explanations of these concepts with examples. Histogram equalization aims to produce an approximately uniform histogram to enhance an image, while histogram specification allows specifying a desired output histogram. Local enhancement can be achieved by applying these histogram processing methods over small non-overlapping regions instead of globally, to reduce edge effects.
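Histogram equalization can be made concrete with a small sketch: compute the histogram, accumulate it into a cumulative distribution function (CDF), and map each grey level through the normalised CDF. The 8-level toy image and function name here are illustrative.

```python
def equalize(image, levels=8):
    # image: flat list of integer grey levels in [0, levels - 1].
    n = len(image)
    hist = [0] * levels
    for p in image:
        hist[p] += 1
    # Cumulative distribution function of the grey levels.
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = min(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to equalise
        return list(image)
    # Map each level through the normalised CDF onto the full range,
    # which stretches a narrow histogram toward a uniform one.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in image]

# A low-contrast image concentrated in the dark levels gets spread out:
out = equalize([0, 0, 1, 1, 1, 2, 2, 7])
```

The mapping is monotonic, so pixel ordering is preserved while the occupied range is stretched to the full [0, levels - 1] span.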
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document discusses digital image processing and various image enhancement techniques. It begins with introductions to digital image processing and fundamental image processing systems. It then covers topics like image sampling and quantization, color models, image transforms like the discrete Fourier transform, and noise removal techniques like median filtering. Histogram equalization and homomorphic filtering are also summarized as methods for image enhancement.
IMAGE ENHANCEMENT IN CASE OF UNEVEN ILLUMINATION USING VARIABLE THRESHOLDING ... (ijsrd.com)
Uneven illumination always degrades the visual quality of images, resulting in poor understanding of their content. There is no universally accepted image enhancement algorithm, nor specific criteria that can fulfil all user needs. The processed image may differ greatly from the original in its visual effect, or it may remain similar to the original [1]. Integrating the advantages of various algorithms into practical image enhancement applications is a developing trend [2]. Zhang et al. [3] present an adaptive image contrast enhancement method based on local gamma correction guided by histogram analysis. In this paper, to avoid uneven illuminance, the image is divided into different segments. The method works locally, because applying enhancement techniques globally to portions that are already bright gives poor results; enhancement is applied only to the dark portions. An accurate method is needed that not only enhances the image but also preserves its information.
Digital Image Processing_ ch2 enhancement spatial-domain (Malik obeisat)
The document discusses image enhancement techniques in the spatial domain. It describes how image enhancement aims to process an image to make it more suitable for display or analysis by sharpening, smoothing, or normalizing illumination. Enhancement can be done as preprocessing or postprocessing. Common approaches include linear and non-linear operators that manipulate pixel values. Specific techniques covered include histogram equalization, thresholding, gamma correction, and filtering to modify contrast and brightness. The goal of histogram manipulation is to design transforms that modify an image histogram to have desired properties like increased contrast or matched to a reference histogram.
The document discusses various image enhancement techniques in digital image processing. It describes point operations like image negative, contrast stretching, thresholding, brightness enhancement, log transformation, and power law transformation. Contrast stretching expands the range of intensity levels and can be done by multiplying pixels with a constant, using a transfer function, or histogram equalization. Thresholding converts an image to binary by assigning pixel values above a threshold to one level and below to another. Log and power law transformations compress high intensity values and expand low values to enhance an image. Matlab code examples are provided for each technique.
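The point operations above follow the standard definitions s = c·log(1 + r) and s = c·r^γ. A brief sketch (in Python rather than the MATLAB of the source document; constants are illustrative):

```python
import math

def log_transform(p, c=1.0):
    # s = c * log(1 + r): compresses high intensities, expands low ones.
    return c * math.log(1 + p)

def power_law(p, gamma, c=1.0):
    # s = c * r^gamma: gamma < 1 brightens dark regions,
    # gamma > 1 darkens bright ones.
    return c * p ** gamma

# Normalised intensities in [0, 1]:
dark_pixel, bright_pixel = 0.1, 0.9
brightened = power_law(dark_pixel, 0.4)   # gamma < 1 lifts shadows
darkened = power_law(bright_pixel, 2.5)   # gamma > 1 suppresses highlights
```

With γ < 1 the dark pixel is raised well above its input value, while γ > 1 pulls the bright pixel down, matching the compress/expand behaviour described above.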
Abstract
The field of image processing has vast applications in medicine, forensics, research, and other areas. It includes various domains, such as enhancement, classification, and segmentation, which are widely used in these applications. Image enhancement is the preprocessing step on which the accuracy of the result depends. Image enhancement aims to improve the visual appearance of an image without affecting its original attributes; that is, image contrast is adjusted and noise is removed to produce a better-quality image. Hence image enhancement is one of the most important tasks in image processing. Enhancement is classified into two categories: spatial domain enhancement and frequency domain enhancement. Spatial domain enhancement acts on pixel values, whereas frequency domain enhancement acts on the Fourier transform of the image. The enhancement techniques to be used depend on the modality, climatic conditions, visual perspective, and so on. In this paper, we present a survey of various existing image enhancement techniques.
Keywords: Enhancement, Spatial domain enhancement, Frequency domain enhancement, Contrast, Modality.
This document provides an overview of digital image processing. It discusses what digital image processing is, provides a brief history, and outlines some of the key stages involved, including image acquisition, enhancement, restoration, morphological processing, segmentation, representation and description, object recognition, and compression. It also discusses some example applications like medical imaging, autonomous vehicles, traffic monitoring, and biometrics. The document uses images to illustrate different concepts and stages in digital image processing.
Digital image processing short question answers (Ateeq Zada)
This document discusses several techniques for 2D spatial image filtering and background subtraction in digital image processing. It covers linear filtering, Gaussian filtering, frame differencing, running averages, and mixtures of Gaussian models. The key techniques are linear filtering using a kernel or mask, Gaussian filtering to smooth images, and running averages or mixtures of Gaussians to model the background pixels over time while adapting to changes in illumination, motion, or scene geometry.
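The running-average background model mentioned above can be sketched in a few lines: the background estimate is an exponentially weighted average of past frames, and pixels that deviate from it by more than a threshold are flagged as foreground. Names, the learning rate, and the threshold are illustrative.

```python
def update_background(bg, frame, alpha=0.05):
    # Exponentially weighted running average: bg <- (1 - a)*bg + a*frame.
    # Slowly adapts to illumination changes while ignoring brief motion.
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    # Pixels differing from the model by more than thresh are foreground.
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0]
frame = [102.0, 100.0, 200.0]  # last pixel: a moving object appears
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)
```

The small alpha keeps the moving object from being absorbed into the background immediately; mixtures of Gaussians extend this idea by keeping several such models per pixel.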
This document discusses various techniques for image enhancement in spatial domain. It defines image enhancement as improving visual quality or converting images for better analysis. Key techniques covered include noise removal, contrast adjustment, intensity adjustment, histogram equalization, thresholding, gray level slicing, and image rotation. Conversion methods like grayscale and different file formats are also summarized. Experimental results and applications in fields like medicine, astronomy, and security are mentioned.
1) The document proposes a gradient-based method for low-light image enhancement. It extracts gradients from the input image, manipulates the gradients by applying higher gain to darker regions, and integrates the gradients while constraining the intensity range.
2) Experimental results show that the proposed method enhances low-light images effectively while avoiding saturation, compared to other techniques like histogram equalization.
3) The method runs in real-time and MATLAB code is available online for researchers.
This document provides an overview of digital image processing. It discusses key topics including digital image fundamentals, image transforms, image enhancement, image restoration, image compression, image segmentation, representation and description, and recognition and interpretation. The document outlines concepts and techniques within each of these topics at a high level over multiple sections and pages with headings, content lists, and explanatory diagrams.
The document discusses key concepts in digital image fundamentals including:
1. The electromagnetic spectrum and how light attributes like intensity and luminance are measured.
2. How digital images are acquired through image sensing and sampling/quantization.
3. Methods for representing digital images through matrices and binary values, and how resolution affects gray-level detail.
4. Digital zooming techniques like nearest neighbor, bilinear, and bicubic interpolation that control blurring and edge effects.
5. Concepts like pixel adjacency, connectivity, and distance measures between pixels.
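The distance measures in point 5 can be written down directly; D4 (city-block) counts horizontal and vertical steps, D8 (chessboard) counts king moves, and the Euclidean distance lies between them.

```python
import math

def d_euclidean(p, q):
    # Straight-line distance between pixel coordinates p and q.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    # City-block distance: steps along horizontal/vertical neighbours only.
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    # Chessboard distance: diagonal moves count as one step.
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```

For any pair of pixels, D8 ≤ Euclidean ≤ D4, since allowing diagonal moves can only shorten a path while forbidding them can only lengthen it.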
This document discusses various intensity transformation and spatial filtering techniques for digital image enhancement. It covers single pixel operations like negative image and contrast stretching. It also discusses neighborhood operations such as averaging and median filters. Finally, it discusses geometric spatial transformations like scaling, rotation and translation. The document provides details on basic intensity transformation functions including log, power law, and piecewise linear transformations. It also covers histogram processing techniques like histogram equalization, matching and local histogram processing. Spatial filtering and its mechanics are explained.
IRJET- Analysis of Underwater Image Visibility using White Balance Algorithm (IRJET Journal)
1) The document discusses several techniques for analyzing and improving the visibility of underwater images, including white balancing algorithms, red channel methods, and image fusion approaches.
2) White balancing aims to remove unwanted color casts to improve image aspects like color and contrast. Image fusion techniques combine input images and weight maps to enhance color contrast and visibility of distant objects degraded by the underwater medium.
3) The techniques were evaluated using metrics like PSNR and by comparing restored images to originals. Results found white balancing produced high accuracy recovery of important faded features while image fusion generally improved global contrast, color, and details.
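White balancing can be done in many ways; one common single-image heuristic is the gray-world assumption, sketched below. This is an illustrative sketch, not the specific algorithm evaluated in the paper.

```python
def gray_world(r, g, b):
    # Gray-world assumption: the average scene colour is neutral grey,
    # so each channel is scaled toward the overall mean. In underwater
    # images, where red is strongly attenuated, this boosts the red
    # channel and removes the blue-green cast.
    means = [sum(c) / len(c) for c in (r, g, b)]
    grey = sum(means) / 3
    gains = [grey / m if m > 0 else 1.0 for m in means]
    return [[p * k for p in c] for c, k in zip((r, g, b), gains)]

# A blue-green-heavy image: red channel is much weaker than the others.
r2, g2, b2 = gray_world([10, 20], [40, 60], [50, 70])
```

After balancing, the three channel means coincide, which is precisely what the gray-world assumption enforces.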
Image enhancement techniques can be divided into spatial and frequency domain methods. Spatial domain methods operate directly on pixel values using techniques like basic gray level transformations, contrast stretching and thresholding. These manipulations are used to accentuate image features, improve display quality or aid machine analysis by modifying pixel intensities within an image.
Digital image processing Tool presentation (dikshabehl5392)
The development of this image processing software will help the editing process to be done effectively. It requires less space on the hard disk, since it emphasizes only the crucial image processing functions, and the executable program likewise takes up little space.
This paper describes a strategic approach to enhancing underwater images. The image is degraded by the absorption and scattering of the light falling on the objects. This degraded version of the image is enhanced by fusion principles, deriving inputs and weight measures from it. Our strategy is very simple: white balance and global contrast techniques are applied to the original image, and the two processed outputs are then taken as inputs that are weighted by specific maps. This strategy provides better exposedness of the dark regions and improves contrast, while the edges are preserved and enhanced significantly. The algorithm effectively enhances underwater images, as clearly demonstrated in the experimental results on our images.
Visual Quality for both Images and Display of Systems by Visual Enhancement u... (IJMER)
The International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
IJMER covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment, among many others.
This document discusses image enhancement techniques in digital image processing. It defines image enhancement as modifying image attributes to make an image more suitable for a given task. The main techniques discussed are spatial domain enhancement methods like noise removal, contrast adjustment, and histogram equalization. Examples are provided to demonstrate the effects of these enhancement methods on images.
Digital images can be enhanced in various ways to improve quality. There are three main categories of enhancement techniques: spatial domain, frequency domain, and combination methods. Spatial domain methods operate directly on pixel values using point processing or neighborhood filtering. Key spatial techniques include contrast stretching, thresholding, and histogram equalization. Frequency domain methods modify an image's Fourier transform. Common transformations include logarithmic, power-law, and piecewise linear functions, which can increase contrast or highlight certain grayscale ranges. Proper enhancement improves an image's features for desired applications.
Pan sharpening is a process that merges high-resolution panchromatic imagery with lower-resolution multispectral imagery to create a single high-resolution color image. It provides the best of both types of imagery - high spectral resolution and high spatial resolution. There are several common pan sharpening methods including Brovey, IHS, Esri, simple mean, and Gram-Schmidt transformations. Landsat 8 imagery for example consists of bands with 30m resolution except for the panchromatic band which has 15m resolution, and pan sharpening can be applied in GIS software like ArcGIS to improve the resolution of the multispectral bands.
IRJET- Color Balance and Fusion for Underwater Image Enhancement: Survey (IRJET Journal)
This document summarizes research on methods for enhancing underwater images. It discusses how underwater images suffer from poor visibility due to light scattering and absorption in water. Several approaches are then summarized that aim to restore and enhance degraded underwater images through techniques like color balance and fusion. Specifically, the document surveys methods that use single-image approaches without specialized hardware by fusing color-compensated and white-balanced versions of the original image. It also discusses other literature on underwater image enhancement through techniques like dehazing, wavelength compensation, and contrast adjustment.
This document discusses various techniques for image enhancement. It begins with an introduction to image enhancement and its objectives. Then it describes several categories of enhancement techniques including point operations, histogram processing, and spatial and frequency domain filtering. Point operations include intensity transformations like contrast stretching and histogram equalization. Histogram processing techniques manipulate the image histogram for enhancement. Spatial filtering uses convolution with filters like smoothing and sharpening filters. The document provides detailed explanations and examples of these various image enhancement methods.
Efficient video indexing for fast motion video (ijcga)
Due to advances in recent multimedia technologies, various digital video contents have become available from different multimedia sources. Efficient management, storage, coding, and indexing of video are required because video contains a great deal of visual information and requires a large amount of memory. This paper proposes an efficient video indexing method for video with rapid motion or fast illumination change, in which motion information and feature points of specific objects are used. For accurate shot boundary detection, we use two steps: a block matching algorithm to obtain accurate motion information, and a modified displaced frame difference to compensate for the error in existing methods. We also propose an object matching algorithm based on the scale-invariant feature transform, which uses feature points to group shots semantically. Computer simulation with five fast-motion videos shows the effectiveness of the proposed video indexing method.
Ghost free image using blur and noise estimation (ijcga)
This paper presents an efficient image enhancement method based on the fusion of two different exposure images taken in low-light conditions. We use two degraded images with different exposures: a long-exposure image that preserves brightness but contains blur, and a short-exposure image that contains a lot of noise but preserves object boundaries. The weight map used for image fusion without blur and noise artifacts is computed using blur and noise estimation. To get a blur map, edges in the long-exposure image are detected at multiple scales and the amount of blur is estimated at the detected edges. A noise map is obtained by noise estimation using a denoised short-exposure image. Ghost effects between the two successive images are avoided using a moving object map generated by a sigmoid comparison function based on the ratio of the two input images. Result images are obtained by fusing the two degraded images with the weight maps. The proposed method can be extended to high dynamic range imaging without using information about a camera response function or generating a radiance map. Experimental results with various sets of images show the effectiveness of the proposed method in enhancing details and removing ghost artifacts.
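A moving-object weight of the kind described, built from a sigmoid comparison of the ratio of the two input exposures, might be sketched as follows. The abstract does not give the exact formulation; the gain compensation, centre, and slope values here are illustrative assumptions.

```python
import math

def motion_weight(long_px, short_px, gain, center=1.0, slope=10.0):
    # Ratio of the exposure-compensated short-exposure pixel to the
    # long-exposure pixel; a ratio far from `center` (1.0 for a static
    # scene) suggests a moving object. A sigmoid maps the deviation to
    # a soft weight in (0, 1): near 1 -> static, near 0 -> moving.
    ratio = (short_px * gain) / max(long_px, 1e-6)
    deviation = abs(ratio - center)
    return 1.0 / (1.0 + math.exp(slope * (deviation - 0.5)))

# gain=4 models a short exposure 1/4 the duration of the long one.
static_w = motion_weight(0.5, 0.125, gain=4)  # consistent pixel pair
moving_w = motion_weight(0.5, 0.45, gain=4)   # inconsistent: motion
```

Static regions receive weight near 1 and are fused normally, while pixels whose compensated ratio deviates strongly are down-weighted, suppressing ghosts without a hard binary mask.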
Digital Image Processing_ ch2 enhancement spatial-domainMalik obeisat
The document discusses image enhancement techniques in the spatial domain. It describes how image enhancement aims to process an image to make it more suitable for display or analysis by sharpening, smoothing, or normalizing illumination. Enhancement can be done as preprocessing or postprocessing. Common approaches include linear and non-linear operators that manipulate pixel values. Specific techniques covered include histogram equalization, thresholding, gamma correction, and filtering to modify contrast and brightness. The goal of histogram manipulation is to design transforms that modify an image histogram to have desired properties like increased contrast or matched to a reference histogram.
The document discusses various image enhancement techniques in digital image processing. It describes point operations like image negative, contrast stretching, thresholding, brightness enhancement, log transformation, and power law transformation. Contrast stretching expands the range of intensity levels and can be done by multiplying pixels with a constant, using a transfer function, or histogram equalization. Thresholding converts an image to binary by assigning pixel values above a threshold to one level and below to another. Log and power law transformations compress high intensity values and expand low values to enhance an image. Matlab code examples are provided for each technique.
Abstract
Field of image processing has vast applications in medical, forensic, research etc., It includes various domains like enhancement,
classification, segmentation, etc., which are widely used for these applications. Image Enhancement is the pre processing step on
which the accuracy of the result lies. Image enhancement aims to improve the visual appearance of an image, without affecting
the original attributes (i.e.,) image contrast is adjusted and noise is removed to produce better quality image. Hence image
enhancement is one of the most important tasks in image processing. Enhancement is classified into two categories spatial domain
enhancement and frequency domain enhancement. Spatial domain enhancement acts upon pixel value whereas frequency domain
enhancement acts on the Fourier transform of the image. The enhancement techniques to be used depend on modality, climatic
and visual perspective etc., In this paper, we present a survey on various existing image enhancement techniques.
Keywords: Enhancement, Spatial domain enhancement, Frequency domain enhancement, Contrast, Modality.
This document provides an overview of digital image processing. It discusses what digital image processing is, provides a brief history, and outlines some of the key stages involved, including image acquisition, enhancement, restoration, morphological processing, segmentation, representation and description, object recognition, and compression. It also discusses some example applications like medical imaging, autonomous vehicles, traffic monitoring, and biometrics. The document uses images to illustrate different concepts and stages in digital image processing.
Digital image processing short quesstion answersAteeq Zada
This document discusses several techniques for 2D spatial image filtering and background subtraction in digital image processing. It covers linear filtering, Gaussian filtering, frame differencing, running averages, and mixtures of Gaussian models. The key techniques are linear filtering using a kernel or mask, Gaussian filtering to smooth images, and running averages or mixtures of Gaussians to model the background pixels over time while adapting to changes in illumination, motion, or scene geometry.
This document discusses various techniques for image enhancement in spatial domain. It defines image enhancement as improving visual quality or converting images for better analysis. Key techniques covered include noise removal, contrast adjustment, intensity adjustment, histogram equalization, thresholding, gray level slicing, and image rotation. Conversion methods like grayscale and different file formats are also summarized. Experimental results and applications in fields like medicine, astronomy, and security are mentioned.
1) The document proposes a gradient-based method for low-light image enhancement. It extracts gradients from the input image, manipulates the gradients by applying higher gain to darker regions, and integrates the gradients while constraining the intensity range.
2) Experimental results show that the proposed method enhances low-light images effectively while avoiding saturation, compared to other techniques like histogram equalization.
3) The method runs in real-time and MATLAB code is available online for researchers.
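The gradient-manipulation idea can be illustrated on a 1-D signal: gradients originating in darker regions receive a larger gain before re-integration, and the integrated result is clamped to the valid intensity range. The gain law and constants below are illustrative, not the paper's exact formulation.

```python
# 1-D sketch of gradient-domain low-light enhancement (illustrative
# gain law, not the paper's exact formulation).

def enhance_1d(signal, max_gain=3.0, white=255.0):
    """Re-integrate gradients with larger gain applied in darker regions."""
    grads = [b - a for a, b in zip(signal, signal[1:])]
    out = [signal[0]]
    for g, x in zip(grads, signal):
        gain = 1.0 + (max_gain - 1.0) * (1.0 - x / white)  # darker -> larger gain
        value = out[-1] + gain * g
        out.append(min(white, max(0.0, value)))            # constrain intensity range
    return out

row = [10.0, 12.0, 14.0, 200.0, 202.0]
print(enhance_1d(row))  # dark-end differences are stretched; bright end barely changes
```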
This document provides an overview of digital image processing. It discusses key topics including digital image fundamentals, image transforms, image enhancement, image restoration, image compression, image segmentation, representation and description, and recognition and interpretation. The document outlines concepts and techniques within each of these topics at a high level over multiple sections and pages with headings, content lists, and explanatory diagrams.
The document discusses key concepts in digital image fundamentals including:
1. The electromagnetic spectrum and how light attributes like intensity and luminance are measured.
2. How digital images are acquired through image sensing and sampling/quantization.
3. Methods for representing digital images through matrices and binary values, and how resolution affects gray-level detail.
4. Digital zooming techniques like nearest neighbor, bilinear, and bicubic interpolation that control blurring and edge effects.
5. Concepts like pixel adjacency, connectivity, and distance measures between pixels.
This document discusses various intensity transformation and spatial filtering techniques for digital image enhancement. It covers single pixel operations like negative image and contrast stretching. It also discusses neighborhood operations such as averaging and median filters. Finally, it discusses geometric spatial transformations like scaling, rotation and translation. The document provides details on basic intensity transformation functions including log, power law, and piecewise linear transformations. It also covers histogram processing techniques like histogram equalization, matching and local histogram processing. Spatial filtering and its mechanics are explained.
IRJET- Analysis of Underwater Image Visibility using White Balance AlgorithmIRJET Journal
1) The document discusses several techniques for analyzing and improving the visibility of underwater images, including white balancing algorithms, red channel methods, and image fusion approaches.
2) White balancing aims to remove unwanted color casts to improve image aspects like color and contrast. Image fusion techniques combine input images and weight maps to enhance color contrast and visibility of distant objects degraded by the underwater medium.
3) The techniques were evaluated using metrics like PSNR and by comparing restored images to originals. Results found white balancing produced high accuracy recovery of important faded features while image fusion generally improved global contrast, color, and details.
Image enhancement techniques can be divided into spatial and frequency domain methods. Spatial domain methods operate directly on pixel values using techniques like basic gray level transformations, contrast stretching and thresholding. These manipulations are used to accentuate image features, improve display quality or aid machine analysis by modifying pixel intensities within an image.
Digital image processing Tool presentationdikshabehl5392
The development of this image processing software will help the editing process be done effectively. It requires less space on the hard disk, emphasizing only the crucial image processing functions, and the executable program will take up less space.
This paper describes a strategic approach to enhance underwater images. The image gets degraded due to the absorption and scattering of light falling on the objects. This degraded version of the image is enhanced by fusion principles, deriving inputs and weight measures from it. Our strategy is very simple: white balance and global contrast technologies are applied to the original image, and the two processed outputs are then taken as inputs that are weighted by specific maps. This strategy provides better exposedness of the dark regions and improves contrast, while edges are preserved and enhanced significantly. The algorithm effectively enhances underwater images, as clearly demonstrated in the experimental results on our images.
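The white-balance step referred to above is commonly implemented as a gray-world correction, in which each channel is scaled so its mean matches the global gray mean. This is a hedged sketch of that standard technique, not the paper's exact pipeline; channel means are assumed nonzero and the sample values are illustrative.

```python
# Gray-world white balance: scale each channel so its mean equals the
# global gray mean (illustrative channel data; nonzero means assumed).

def gray_world(r, g, b):
    means = [sum(c) / len(c) for c in (r, g, b)]
    gray = sum(means) / 3.0
    return tuple([v * gray / m for v in c] for c, m in zip((r, g, b), means))

r2, g2, b2 = gray_world([40.0, 60.0], [90.0, 110.0], [140.0, 160.0])
print([sum(c) / len(c) for c in (r2, g2, b2)])  # every channel now averages ~100
```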
Visual Quality for both Images and Display of Systems by Visual Enhancement u...IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
This document discusses image enhancement techniques in digital image processing. It defines image enhancement as modifying image attributes to make an image more suitable for a given task. The main techniques discussed are spatial domain enhancement methods like noise removal, contrast adjustment, and histogram equalization. Examples are provided to demonstrate the effects of these enhancement methods on images.
Digital images can be enhanced in various ways to improve quality. There are three main categories of enhancement techniques: spatial domain, frequency domain, and combination methods. Spatial domain methods operate directly on pixel values using point processing or neighborhood filtering. Key spatial techniques include contrast stretching, thresholding, and histogram equalization. Frequency domain methods modify an image's Fourier transform. Common transformations include logarithmic, power-law, and piecewise linear functions, which can increase contrast or highlight certain grayscale ranges. Proper enhancement improves an image's features for desired applications.
Pan sharpening is a process that merges high-resolution panchromatic imagery with lower-resolution multispectral imagery to create a single high-resolution color image. It provides the best of both types of imagery - high spectral resolution and high spatial resolution. There are several common pan sharpening methods including Brovey, IHS, Esri, simple mean, and Gram-Schmidt transformations. Landsat 8 imagery for example consists of bands with 30m resolution except for the panchromatic band which has 15m resolution, and pan sharpening can be applied in GIS software like ArcGIS to improve the resolution of the multispectral bands.
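The classic Brovey transform mentioned above has a compact closed form: each multispectral band is replaced by its fraction of the total intensity multiplied by the panchromatic value. A minimal per-pixel sketch (the sample band values are illustrative):

```python
# Classic Brovey pan sharpening: band_out = band * pan / (R + G + B),
# applied per pixel (sample values are illustrative).

def brovey(r, g, b, pan):
    total = r + g + b
    if total == 0:
        return (0.0, 0.0, 0.0)
    return (r * pan / total, g * pan / total, b * pan / total)

print(brovey(60.0, 90.0, 150.0, 240.0))  # (48.0, 72.0, 120.0)
```

A convenient property of this form is that the fused bands sum exactly to the pan value, which is why Brovey sharpens spatial detail well but can distort spectral balance relative to methods such as Gram-Schmidt.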
IRJET- Color Balance and Fusion for Underwater Image Enhancement: SurveyIRJET Journal
This document summarizes research on methods for enhancing underwater images. It discusses how underwater images suffer from poor visibility due to light scattering and absorption in water. Several approaches are then summarized that aim to restore and enhance degraded underwater images through techniques like color balance and fusion. Specifically, the document surveys methods that use single-image approaches without specialized hardware by fusing color-compensated and white-balanced versions of the original image. It also discusses other literature on underwater image enhancement through techniques like dehazing, wavelength compensation, and contrast adjustment.
This document discusses various techniques for image enhancement. It begins with an introduction to image enhancement and its objectives. Then it describes several categories of enhancement techniques including point operations, histogram processing, and spatial and frequency domain filtering. Point operations include intensity transformations like contrast stretching and histogram equalization. Histogram processing techniques manipulate the image histogram for enhancement. Spatial filtering uses convolution with filters like smoothing and sharpening filters. The document provides detailed explanations and examples of these various image enhancement methods.
Efficient video indexing for fast motion videoijcga
Due to advances in recent multimedia technologies, various digital video contents become available from different multimedia sources. Efficient management, storage, coding, and indexing of video are required because video contains lots of visual information and requires a large amount of memory. This paper proposes an efficient video indexing method for video with rapid motion or fast illumination change, in which motion information and feature points of specific objects are used. For accurate shot boundary detection, we make use of two steps: block matching algorithm to obtain accurate motion information and modified displaced frame difference to compensate for the error in existing methods. We also propose an object matching algorithm based on the scale invariant feature transform, which uses feature points to group shots semantically. Computer simulation with five fast-motion video shows the effectiveness of the proposed video indexing method.
Ghost free image using blur and noise estimationijcga
This paper presents an efficient image enhancement method by fusion of two different exposure images in
low-light condition. We use two degraded images with different exposures: one is a long-exposure image
that preserves the brightness but contains blur and the other is a short-exposure image that contains a lot of
noise but preserves object boundaries. The weight map used for image fusion without artifacts of blur and
noise is computed using blur and noise estimation. To get a blur map, edges in a long-exposure image are
detected at multiple scales and the amount of blur is estimated at detected edges. Also, we can get a noise
map by noise estimation using a denoised short-exposure image. Ghost effect between two successive
images is avoided according to the moving object map that is generated by a sigmoid comparison function
based on the ratio of two input images. We can get result images by fusion of two degraded images using
the weight maps. The proposed method can be extended to high dynamic range imaging without using
information of a camera response function or generating a radiance map. Experimental results with various
sets of images show the effectiveness of the proposed method in enhancing details and removing ghost
artifacts.
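The sigmoid comparison on the ratio of the two inputs can be sketched as follows; the steepness `k`, centre `c`, and the exposure ratio are illustrative assumptions, not the paper's calibrated values.

```python
import math

# Sketch of a sigmoid-based moving-object map from the ratio of two
# exposures (k, c and the exposure ratio are illustrative assumptions).

def motion_weight(long_px, short_px, exposure_ratio=4.0, k=10.0, c=0.5):
    """~0 for static pixels, ~1 where the expected exposure ratio is violated."""
    if short_px == 0:
        return 0.0
    ratio = long_px / (short_px * exposure_ratio)  # ~1.0 when the pixel is static
    deviation = abs(ratio - 1.0)
    return 1.0 / (1.0 + math.exp(-k * (deviation - c)))

static = motion_weight(200.0, 50.0)   # matches the 4x exposure ratio
moving = motion_weight(200.0, 10.0)   # ratio far from expected
print(round(static, 3), round(moving, 3))
```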
Terrain generation finds many applications such as in CGI movies, animations and video games. This paper describes a new and simple-to-implement terrain generator called the Uplift Model. It is based on the theory of crustal deformations by uplifts in Geology. When a number of uplifts are made on the Earth's surface, the final net effect is an average of the influence of each uplift at each point on the terrain. The result of applying this model from Nature is a very realistic-looking effect in the generated terrains. The model uses 6 parameters which allow for a great variety in landscape types produced. Comparisons are made with other existing terrain generation algorithms. Averaging causes erosion of the surface, whereas fractal surfaces tend to be very jagged and more suited to alien worlds.
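The averaging of uplift influences described above can be sketched directly; the Gaussian falloff and the sample uplift parameters are illustrative stand-ins for the model's 6 parameters.

```python
import math

# Toy sketch of the Uplift Model's averaging idea (illustrative
# Gaussian falloff and parameters, not the paper's exact formulation).

def height_at(x, y, uplifts):
    """Terrain height = mean influence of all uplifts at (x, y).

    uplifts: list of (cx, cy, amplitude, radius) tuples.
    """
    influences = [amp * math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * radius ** 2))
                  for cx, cy, amp, radius in uplifts]
    return sum(influences) / len(influences)  # averaging smooths ("erodes") the surface

uplifts = [(0.0, 0.0, 100.0, 5.0), (10.0, 0.0, 60.0, 3.0)]
print(round(height_at(0.0, 0.0, uplifts), 2))  # about half the first peak's amplitude
```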
IMAGE QUALITY ASSESSMENT OF TONE MAPPED IMAGESijcga
This paper proposes an objective assessment method for perceptual image quality of tone mapped images. Tone mapping algorithms are used to display high dynamic range (HDR) images on standard display devices that have low dynamic range (LDR). The proposed method implements visual attention to define perceived structural distortion regions in LDR images, so that a reasonable measurement of distortion between HDR and LDR images can be performed. Since the human visual system is sensitive to structural information, quality metrics that can measure structural similarity between HDR and LDR images are used. Experimental results with a number of HDR and tone mapped image pairs show the effectiveness of the proposed method.
Visual hull construction from semitransparent coloured silhouettesijcga
This paper attempts to create coloured semi-transparent shadow images that can be projected onto multiple screens simultaneously from different viewpoints. The inputs to this approach are a set of coloured shadow images and view angles, projection information and light configurations for the final projections. We propose a method to convert coloured semi-transparent shadow images to a 3D visual hull. A shadowpix-type method is used to incorporate varying-ratio RGB values for each voxel. This computes the desired image independently for each viewpoint from an arbitrary angle. An attenuation factor is used to curb the coloured shadow images beyond a certain distance. The end result is a continuous animated image that changes due to the rotated projection of the transparent visual hull.
MESH SIMPLIFICATION VIA A VOLUME COST MEASUREijcga
We develop a polygonal mesh simplification algorithm based on a novel analysis of the mesh geometry. Particularly, we propose first a characterization of vertices as hyperbolic or non-hyperbolic depending upon their discrete local geometry. Subsequently, the simplification process computes a volume cost for each non-hyperbolic vertex, in analogy with spherical volume, to capture the loss of fidelity if that vertex is decimated. Vertices of least volume cost are then successively deleted and the resulting holes re-triangulated using a method based on a novel heuristic. Preliminary experiments indicate a performance comparable to that of the best known mesh simplification algorithms.
3D-COMPUTER ANIMATION FOR A YORUBA NATIVE FOLKTALEijcga
Computer graphics has a wide range of applications, implemented in computer animation, computer modelling, etc. Since the invention of computer graphics, researchers have paid little attention to the possibility of converting oral tales, otherwise known as folktales, into animated cartoon videos. This paper describes how to develop cartoons of local folktales that will be of huge benefit to Nigerians. The activities were divided into 5 stages: analysis, design, development, implementation and evaluation, which involved various processes and the use of various specialized software and hardware. After the implementation of this project, the video characteristics were evaluated using a Likert scale. Analysis of 30 user responses indicated that 17 users (56.7%) rated the image quality as excellent, 9 users (30%) rated the video and image synchronization as excellent, 18 users (60%) rated the background noise as excellent, 11 users (36.67%) rated the character impression as excellent, 17 users (56.7%) rated the general assessment of the storyline as excellent, 11 users (36.67%) rated the video impression as excellent, and 10 users (33.33%) rated the voice quality as excellent.
Analysis of design principles and requirements for procedural rigging of bipe...ijcga
Character rigging is the process of endowing a character with a set of custom manipulators and controls that make it easy for animators to animate. These controls consist of simple joints, handles, or even separate character selection windows. This research paper presents an automated rigging system for quadruped characters with custom controls and manipulators for animation. The full character rigging mechanism is procedurally driven, based on various principles and requirements used by riggers and animators. The automation is achieved initially by creating widgets according to the character type. These widgets can then be customized by the rigger according to the character's shape, height and proportion. Then joint locations for each body part are calculated and the widgets are replaced programmatically. Finally, a complete and fully operational procedurally generated character control rig is created and attached to the underlying skeletal joints. The functionality and feasibility of the rig were analyzed against various sources of actual character motion, and a requirements criterion was met. The final rigged character provides an efficient and easy-to-manipulate control rig with no lagging and a high frame rate.
AUTOMATIC DETECTION OF LOGOS IN VIDEO AND THEIR REMOVAL USING INPAINTINGijcga
In this paper, we propose an algorithm that automatically detects and removes moving logos in video. The
proposed algorithm consists of logo detection, tracking and extraction, and removal. In the logo detection
step, we automatically detect moving logos using the saliency map and the scale invariant feature transform.
In the logo tracking and extraction step, an improved mean shift tracking method and backward tracking
technique are used in which only logo regions are extracted by using color information. Finally, in the logo
removal step, the detected logo region is filled in from its neighborhood using exemplar-based inpainting, which can fill a large detected region without artifacts. These steps are effectively interconnected
using control flags. Experimental results with various test sequences show that the proposed algorithm is
effective to detect, track, and remove moving logos in video.
Terra formation control or how to move mountainsijcga
The new Uplift Model of terrain generation is generalized here and provides new possibilities for terra
formation control unlike previous fractal terrain generation methods. With the Uplift Model fine-grained
editing is possible allowing the designer to move mountains and small hills to more suitable locations
creating gaps or valleys or deep bays rather than only being able to accept the positions dictated by the
algorithm itself. Coupled with this is a compressed file storage format considerably smaller in size than the traditional height field or height map storage requirements.
3D MESH SEGMENTATION USING LOCAL GEOMETRYijcga
We develop an algorithm to segment a 3D surface mesh into visually meaningful regions based on analyzing the local geometry of vertices. We begin with a novel characterization of mesh vertices as convex, concave or hyperbolic depending upon their discrete local geometry. Hyperbolic and concave vertices are considered potential feature region boundaries. We propose a new region growing technique starting from these boundary vertices, leading to a segmentation of the surface that is subsequently simplified by a region-merging method. Experiments indicate that our algorithm segments a broad range of 3D models at a quality comparable to existing algorithms. Its use of methods that belong naturally to discretized surfaces and its ease of implementation make it an appealing alternative in various applications.
Virtual Navigator Real-Time Ultrasound Fusion Imaging with Positron Emission ...rosopeplaton
Enzo Di Mauro, Marco Solbiati, Stefano De Beni, Leonardo Forzoni, Sara D’Onofrio, Luigi Solbiati
Real-time fusion imaging technologies are increasingly being used among interventional radiologists, mostly Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) datasets fused with Ultrasound (US) imaging. In addition, fusion of Positron Emission Tomography (PET) and CT is increasingly diffused in clinical practice, due to the wide availability of PET scanners and the capability to make either a direct (acquisitions performed within the same system) or an indirect (procedure performed on an external workstation, merging the two different sets of acquired data) fusion with CT data. The present work describes the feasibility of real-time fusion imaging directly between PET data and US imaging, with CT scans being used only for PET-US fusion registration. Data on multimodality registration precision and clinical applications are presented as well.
This document summarizes the work done by an intern during their summer internship in the Medical Physics Department of Radiology. The intern conducted research to predict cancer outcomes based on breast lesion features. Key work included feature extraction from mammograms, analyzing features to differentiate malignant and benign lesions using ROC analysis and LDA, and exploring features to predict invasive vs. non-invasive cancer. Top predictive features were FWHM ROI, diameter, and margin sharpness. The intern gained skills in medical image analysis, statistical analysis, and evaluating results to identify trends.
One of the most efficient ways of generating goal-directed walking motions is synthesising the final motion
based on footprints. Nevertheless, current implementations have not examined the generation of continuous
motion based on footprints, where different behaviours can be generated automatically. Therefore, in this
paper a flexible approach for footprint-driven locomotion composition is presented. The presented solution
is based on the ability to generate footprint-driven locomotion, with flexible features like jumping, running,
and stair stepping. In addition, the presented system examines the ability to generate the desired motion of the character based on predefined footprint patterns that determine which behaviour should be performed. Finally, the generation of transition patterns is examined, based on the velocity of the root and the number of footsteps required to achieve the target behaviour smoothly and naturally.
This document provides an overview of the physics of film screens used in radiography. It discusses the structure and components of radiographic film, intensifying screens, and cassettes. Intensifying screens absorb x-rays and convert them to light, which is more easily detected by the film. This increases the film's sensitivity and reduces the necessary radiation dose. The characteristic curve describes the film-screen combination's response to radiation by relating optical density to exposure. It is used to determine the film's speed, gamma, latitude, and fog level. Intensifying screens allow lower radiation doses to produce diagnostic images compared to film alone.
This study developed a new fluorescence-based assay to quantify endosome fusion in living cells. The assay uses BODIPY-labeled avidin, which exhibits a 10-fold increase in fluorescence upon binding to biotin. BHK fibroblasts were pulse-labeled with BODIPY-avidin and a red fluorescent marker. After specified chase times, a second cohort of endosomes was pulse-labeled with biotin-conjugated probes. Fusion was detected by increased BODIPY fluorescence in individual endosomes, measured by ratio imaging. Applying this assay, the study found that over 90% of avidin-labeled endosomes fused within 10 minutes, with fusion decreasing at longer chase times, indicating endosome
1. Physics deals with matter and energy through defining and characterizing interactions between the two.
2. Mechanics studies motion and forces, including quantities like speed, velocity, acceleration, force, momentum and Newton's Laws of Motion.
3. Solving physics problems involves identifying known and unknown quantities, selecting the appropriate equation, and using the correct units.
1. Doppler ultrasound works by detecting the frequency shift of reflected ultrasound waves from moving objects like blood cells. The frequency shift is proportional to the velocity of the moving object.
2. Continuous wave Doppler instruments emit a continuous ultrasound wave and detect the frequency shift of the reflected waves. This allows measurement of flow velocity but not location of the flow along the ultrasound beam.
3. Pulsed Doppler instruments emit short ultrasound pulses and measure the frequency shift of returning echoes. This allows separation of flow signals from different depths, providing velocity information with range resolution.
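The frequency shift described in these points follows the standard Doppler relation for reflected ultrasound, delta_f = 2 * f0 * v * cos(theta) / c, with f0 the transmit frequency, v the target velocity, theta the beam-to-flow angle, and c the speed of sound in tissue (about 1540 m/s). It is easy to evaluate:

```python
import math

# Standard Doppler shift for ultrasound reflected by a moving target.

def doppler_shift(f0_hz, velocity_ms, angle_deg, c_ms=1540.0):
    """delta_f = 2 * f0 * v * cos(theta) / c."""
    return 2.0 * f0_hz * velocity_ms * math.cos(math.radians(angle_deg)) / c_ms

# 5 MHz probe, 0.5 m/s blood flow, 60 degree beam-to-flow angle:
print(round(doppler_shift(5e6, 0.5, 60.0)))  # 1623 (Hz), an audible-range shift
```

Note the cosine dependence: at 90 degrees (beam perpendicular to flow) the shift vanishes, which is why insonation angles well below 90 degrees are used in practice.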
Pediatric stroke can be caused by a variety of factors such as cardiac diseases, infections like varicella, sickle cell disease, moyamoya disease, cerebral sinus thrombosis, and genetic conditions like MELAS. The presentation of pediatric stroke depends on the location and size of the lesion in the brain. Diagnosis involves imaging techniques like CT, MRI, MRA and angiography. Early diagnosis and treatment is important to prevent long-term neurological deficits in children.
GHOST AND NOISE REMOVAL IN EXPOSURE FUSION FOR HIGH DYNAMIC RANGE IMAGINGijcga
For producing a single high dynamic range image (HDRI), multiple low dynamic range images (LDRIs)
are captured with different exposures and combined. In high dynamic range (HDR) imaging, local motion
of objects and noise in a set of LDRIs can influence a final HDRI: local motion of objects causes the ghost
artifact and LDRIs, especially captured with under-exposure, make the final HDRI noisy. In this paper, we
propose a ghost and noise removal method for HDRI using exposure fusion with subband architecture, in
which Haar wavelet filters are used. The proposed method blends the weight maps of exposure fusion in the subband pyramid, where each weight map is produced for ghost artifact removal as well as exposure fusion.
Then, the noise is removed using multi-resolution bilateral filtering. After removing the ghost artifact and
noise in subband images, details of the images are enhanced using a gain control map. Experimental
results with various sets of LDRIs show that the proposed method effectively removes the ghost artifact and
noise, enhancing the contrast in a final HDRI.
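The subband architecture at the heart of the method rests on the Haar analysis/synthesis pair. Below is a minimal 1-D, one-level sketch of that split; a 2-D version would apply the same filters along rows and then columns. This illustrates the Haar decomposition only, not the authors' full fusion pipeline.

```python
# One-level 1-D Haar subband split and its exact inverse (illustration
# of the wavelet machinery, not the paper's full method).

def haar_analysis(signal):
    """Split into half-length approximation and detail subbands."""
    approx = [(a + b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def haar_synthesis(approx, detail):
    """Invert the split exactly."""
    out = []
    for s, d in zip(approx, detail):
        out.extend([s + d, s - d])
    return out

row = [10.0, 12.0, 80.0, 84.0]
approx, detail = haar_analysis(row)
print(approx, detail)                          # [11.0, 82.0] [-1.0, -2.0]
print(haar_synthesis(approx, detail) == row)   # True: perfect reconstruction
```

In the proposed method the fusion weights, the ghost-removal motion map, and the bilateral denoising all operate on such subband images before reconstruction.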
RADAR images are strongly preferred for the analysis of geospatial information about the Earth's surface to assess environmental conditions. Radar images are captured by different remote sensors, and those images are combined to obtain complementary information. To collect radar images, SAR (Synthetic Aperture Radar) sensors are used; these are active sensors that can gather information during day and night, unaffected by weather conditions. We have discussed DCT and DWT image fusion methods, which give us a more informative fused image, and have compared performance parameters of these two methods to determine the superior technique.
Registration is a process that transforms multiple sets of data into a common coordinate system, allowing comparison and integration of data obtained from different measurements or viewpoints. It is used in applications like computer vision, medical imaging, and satellite imagery analysis. The registration process is necessary to align data collected under different conditions into a unified view.
Image fusion is the process of combining two or more images so that specific objects appear with more precision. It is very common that when one object is in focus the remaining objects are less highlighted; to obtain an image highlighted in all areas, a different means is necessary, and image fusion provides it. In remote sensing, the increasing availability of spaceborne images and synthetic aperture radar images motivates different kinds of image fusion algorithms. In the literature a number of time-domain image fusion techniques are available, and a few transform-domain fusion techniques have been proposed. In transform-domain fusion techniques, the source images are decomposed, integrated into a single data set, and reconstructed back into the time domain. In this paper, singular value decomposition is utilized as a tool to obtain transform-domain data for image fusion. In the literature, the quality assessment of fusion techniques has mainly been by subjective tests; in this paper, objective quality assessment metrics are calculated for the existing and proposed techniques. It was found that the new image fusion technique outperformed the existing ones.
An Improved Image Fusion Scheme Based on Markov Random Fields with Image Enha...Editor IJCATR
Image fusion is an important field in many image processing and analysis tasks in which fused image data are acquired from multiple sources. In this paper, we investigate the fusion of remote sensing images that are highly corrupted by salt and pepper noise. We propose an image fusion technique based on Markov Random Fields (MRF). MRF models are powerful tools to analyze image characteristics accurately and have been successfully applied to a large number of image processing applications like image segmentation, image restoration and enhancement. To de-noise the corrupted image we propose a Decision Based Algorithm (DBA), a recent powerful algorithm that removes high-density salt and pepper noise using a sheer-sorting method. Many techniques have previously been proposed for image fusion. Experimental results show that our proposed image fusion algorithm gives better performance than previous techniques.
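The decision step of a DBA-style filter is easy to illustrate: only pixels at the extreme values (0 or 255) are treated as corrupted, and each is replaced by the median of its uncorrupted neighbours. The 1-D, two-neighbour version below is a simplified stand-in for the paper's algorithm, not its exact formulation.

```python
# Simplified 1-D decision-based salt-and-pepper filter (illustrative
# stand-in for the paper's DBA, which works on 2-D windows).

def dba_1d(row):
    """Replace only extreme-valued (0/255) pixels; leave valid pixels untouched."""
    out = list(row)
    for i in range(1, len(row) - 1):
        if row[i] in (0, 255):                                # decision: extremes are noise
            clean = [v for v in (row[i - 1], row[i + 1]) if v not in (0, 255)]
            if clean:
                out[i] = sorted(clean)[len(clean) // 2]       # (upper) median of clean pixels
    return out

noisy = [120, 0, 125, 255, 130, 128]
print(dba_1d(noisy))  # [120, 125, 125, 130, 130, 128]
```

The decision test is what separates this family from a plain median filter: pixels that are not at the extremes pass through unchanged, preserving edges and fine detail.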
1) The document proposes a method for color image enhancement using Laplacian pyramid decomposition and histogram equalization. It separates an input image into red, green, and blue color channels.
2) Each color channel is decomposed into a Laplacian pyramid, and histogram equalization is applied to enhance the contrast in each band-pass image; to avoid the over-enhancement of traditional histogram equalization, a smoothing technique is applied before contrast adjustment in each level.
3) The enhanced band-pass images are then recombined using the Laplacian pyramid reconstruction equation to produce enhanced color channels, which are combined to generate the output enhanced color image. The method aims to improve both local and global contrast while maintaining natural image quality.
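The decomposition/reconstruction machinery this method relies on can be sketched in 1-D: each pyramid level stores the difference between the signal and its smoothed, re-expanded version, so summing the levels reconstructs the input exactly; the per-band histogram equalization would sit between these two steps. Simple 2-tap averaging stands in here for the usual Gaussian filter.

```python
# 1-D Laplacian pyramid build and exact reconstruction (2-tap averaging
# stands in for the Gaussian filter; even-length signals assumed).

def reduce_level(signal):
    """Smooth and halve."""
    return [(a + b) / 2.0 for a, b in zip(signal[::2], signal[1::2])]

def expand_level(signal):
    """Double the length by sample duplication."""
    out = []
    for v in signal:
        out.extend([v, v])
    return out

def laplacian_pyramid(signal, levels):
    pyramid = []
    for _ in range(levels):
        low = reduce_level(signal)
        pyramid.append([s - e for s, e in zip(signal, expand_level(low))])
        signal = low
    pyramid.append(signal)                    # low-pass residual
    return pyramid

def reconstruct(pyramid):
    signal = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        signal = [e + b for e, b in zip(expand_level(signal), band)]
    return signal

row = [10.0, 14.0, 60.0, 64.0, 20.0, 24.0, 90.0, 94.0]
print(reconstruct(laplacian_pyramid(row, 2)) == row)  # True: lossless round trip
```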
Iaetsd a novel approach to assess the quality of tone mappedIaetsd Iaetsd
This document summarizes a research paper that proposes a new objective quality assessment algorithm called Tone Mapped image Quality Index (TMQI) to evaluate the quality of tone mapped low dynamic range (LDR) images derived from high dynamic range (HDR) images. The TMQI algorithm combines a multi-scale structural fidelity measure based on a modified structural similarity index with a naturalness measure based on intensity statistics of natural images. Validations using subject-rated image databases showed good correlations between subjective ratings and scores from the proposed TMQI algorithm. The TMQI provides an overall quality score for an entire image and can be used to optimize tone mapping operator (TMO) parameters and fuse multiple tone mapped images.
This document describes an image fusion method using pyramidal decomposition. It proposes extracting fine details from input images using guided filtering and fusing the base layers of images across multiple exposures or focal points using a multiresolution pyramid approach. A weight map is generated considering exposure, contrast, and saturation to guide the fusion of base layers. The fused base layer is then combined with extracted fine details to produce a detail-enhanced fused image. The goal is to preserve details in both very dark and extremely bright regions of the input images. It is argued that this method can effectively fuse images from different exposures or focal points without introducing artifacts.
With the improvements in Image acquisition systems there is an increasing concentration in the direction of
High Dynamic Range (HDR) images where the amount of intensity levels varies among 2 to 10,000. With these
numerous intensity levels the exact representation of luminance variations is entirely possible. But, because the
normal display devices are shaped to exhibit Low Dynamic Range (LDR) images, there is necessary to translate
HDR images to LDR images without down significant image structures in HDR images. In this paper four TMOs
like Reinhard, Gamma and color correction TMOs are evaluated .In this paper two novel TMOs are projected.
Keywords — HDR, LDR, Tone mapping, Gamma correction.
An Efficient Approach for Image Enhancement Based on Image Fusion with Retine...ijsrd.com
Aiming at problems of poor contrast and blurred edges in degraded images, a novel enhancement algorithm is proposed in present research. Image fusion refers to a technique that combines the information from two or more images of a scene into a single fused image.The Algorithm uses Retinex theory and gamma correction to perform a better enhancement of images. The algorithm can efficiently combine the advantages of Retinex and Gamma correction improving both color constancy and intensity of image.
SINGLE IMAGE SUPER RESOLUTION IN SPATIAL AND WAVELET DOMAINijma
This document presents a single image super resolution algorithm that uses both spatial and wavelet domains. The algorithm takes advantage of both domains by upsampling the image in the spatial domain using bicubic interpolation, then refining the high frequency subbands in the wavelet domain. It is iterative, using back projection to minimize reconstruction error between the original low resolution image and the downsampled output. Wavelet-based denoising is applied to the high frequency subband to remove noise before reconstruction. Experimental results on various test images show the proposed algorithm achieves similar or better PSNR than other compared methods.
Efficient Method of Removing the Noise using High Dynamic Range Imagerahulmonikasharma
The document summarizes a proposed method for removing noise from high dynamic range images using wavelet filtering and thresholding. It begins by introducing the concepts of tone mapping and high dynamic range imaging. It then describes the proposed local tone mapping algorithm which includes subband decomposition using Daubechies wavelets, denoising the subbands using wavelet filtering and soft thresholding, and color reproduction while limiting saturation. Simulation results show the method can reduce noise 4% faster than other methods while preserving small details and improving contrast. Key parameters like PSNR and MSE are calculated and figures demonstrate noise removal from grayscale and color images.
A Novel Color Image Fusion for Multi Sensor Night Vision ImagesEditor IJCATR
In this paper presents a simple and fast color fusion approach for night vision images. Image fusion involves merging of two
or more images in such a way, to get the most advantageous characteristics of each image. Here the Visible image is fused with the
InfraRed (IR) image, so the desired result will be single, highly informative image providing full information. This paper focuses on
color constancy and color contrast problem.
Firstly the contrast of the infrared and visible image is enhanced using Local Histogram Equation. Then the two enhanced
images are fused in three compounds of a LAB image using aDWT image fusion. This paper adopts an approach which transfer color
from the reference image to the fused image using Color Transfer Technology. To enhance the contrast between the target and the
background, a scaling factor is introduced in the transferring equation in the b channel. Finally our approach gives the Multiband
Fused image with the natural day-time color appearance and the hot targets are popped out with intense colors while the background
details present with the natural color appearance.
Image Quality Assessment of Tone Mapped Images ijcga
This paper proposes an objective assessment method for perceptual image quality of tone mapped images. Tone mapping algorithms are used to display high dynamic range (HDR) images onto standard display devices that have low dynamic range (LDR). The proposed method implements visual attention to define perceived structural distortion regions in LDR images, so that a reasonable measurement of distortion between HDR and LDR images can be performed. Since the human visual system is sensitive to structural information, quality metrics that can measure structural similarity between HDR and LDR images are used. Experimental results with a number of HDR and tone mapped image pairs show the effectiveness of the proposed method.
This document discusses an investigation of high dynamic range video using a conventional camera. It outlines approaches taken, including recovering luminance from multiple exposures, tone mapping, and exposure fusion. Results show that while high dynamic range images can be achieved, the frame rate is too low for video. Future work could focus on algorithms to correct alignment issues and parallel processing to enable real-time high dynamic range video.
Robust High Resolution Image from the Low Resolution Satellite Imageidescitation
In this paper, we propose a framework detecting and locating the land cover
classes from a low-resolution image, which can play a very important role in the satellite
surveillance image from the MODIS data. The lands cover classes by constructing super-
resolution images from the MODIS data. The highest resolution of the MODIS images is 250
meters per pixel. By magnifying and de-blurring the low-resolution satellite image through
the kernel regression. SR reconstruction is image interpolation that has been used to
increase the size of a single image. The SRKR algorithm takes a single low-resolution image
and generates a de-blurred high-resolution image. We perform bi-cubic interpolation on the
input low-resolution image (LR) with a desired scaling factor. Finally, the KR model is then
used to generate the de-blurred HR image. K-means is one of the simplest unsupervised
learning algorithms that solve the well-known clustering problem, which generates a
specific number of disjoint, flat (non-hierarchical) clusters. K-means clustering is employ in
order to compare MODIS data and recognize land cover type, i.e., “Forest”, “Land”, “sea”,
and “Ice”.
Icamme managed brightness and contrast enhancement using adapted histogram eq...Jagan Rampalli
This document describes a new method called Controlled Contrast Modified Histogram Equalization (CCMHE) to enhance image contrast while managing brightness. CCMHE divides the input image histogram into four sub-histograms based on the median brightness value. It then applies a clipping process to prevent over-enhancement before independently equalizing each sub-histogram. CCMHE also introduces an enhancement rate parameter to control the level of contrast adjustment in the output images. The proposed method aims to produce enhanced images with improved contrast and maintained overall brightness compared to other contemporary enhancement techniques.
Atmospheric Correction of Remotely Sensed Images in Spatial and Transform DomainCSCJournals
Remotely sensed data is an effective source of information for monitoring changes in land use and land cover. However remotely sensed images are often degraded due to atmospheric effects or physical limitations. Atmospheric correction minimizes or removes the atmospheric influences that are added to the pure signal of target and to extract more accurate information. The atmospheric correction is often considered critical pre-processing step to achieve full spectral information from every pixel especially with hyperspectral and multispectral data. In this paper, multispectral atmospheric correction approaches that require no ancillary data are presented in spatial domain and transform domain. We propose atmospheric correction using linear regression model based on the wavelet transform and Fourier transform. They are tested on Landsat image consisting of 7 multispectral bands and their performance is evaluated using visual and statistical measures. The application of the atmospheric correction methods for vegetation analyses using Normalized Difference Vegetation Index is also presented in this paper.
Using A Application For A Desktop ApplicationTracy Huang
An Intentional Infliction of Emotional Distress claim has four elements that must be proven:
1. The defendant's conduct was intentional or reckless.
2. The conduct was outrageous and intolerable, exceeding all bounds of decency.
3. The defendant's conduct caused the plaintiff to suffer emotional distress.
4. The plaintiff's emotional distress was severe.
Here, Samuel Taylor would need to prove all four elements against his former employer:
1. The employer intentionally terminated Samuel and made disparaging comments, satisfying the intent requirement.
2. Firing an employee without cause after 20 years and making false statements that severely damaged his reputation could be considered outrageous conduct exceeding all bounds of dec
Similar to Ghost and Noise Removal in Exposure Fusion for High Dynamic Range Imaging (20)
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Software Engineering and Project Management - Software Testing + Agile Method...Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Design and optimization of ion propulsion dronebjmsejournal
Electric propulsion technology is widely used in many kinds of vehicles in recent years, and aircrafts are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibrations. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion technology is proven to be feasible in the earth’s atmosphere. The study presented in this article shows the design of EHD thrusters and power supply for ion propulsion drones along with performance optimization of high-voltage power supply for endurance in earth’s atmosphere.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...PIMR BHOPAL
Variable frequency drive .A Variable Frequency Drive (VFD) is an electronic device used to control the speed and torque of an electric motor by varying the frequency and voltage of its power supply. VFDs are widely used in industrial applications for motor control, providing significant energy savings and precise motor operation.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Ghost and Noise Removal in Exposure Fusion for High Dynamic Range Imaging
International Journal of Computer Graphics & Animation (IJCGA) Vol.4, No.4, October 2014
GHOST AND NOISE REMOVAL IN EXPOSURE FUSION
FOR HIGH DYNAMIC RANGE IMAGING
Dong-Kyu Lee1, Rae-Hong Park1 and Soonkeun Chang2
1Department of Electronic Engineering, School of Engineering, Sogang University,
35 Baekbeom-ro (Sinsu-dong), Mapo-gu, Seoul 121-742, Korea
2Samsung Electronics Co., Ltd., Suwon, Gyeonggi-do 443-742, Korea
ABSTRACT
To produce a single high dynamic range image (HDRI), multiple low dynamic range images (LDRIs)
are captured with different exposures and combined. In high dynamic range (HDR) imaging, local motion
of objects and noise in a set of LDRIs can degrade the final HDRI: local motion of objects causes ghost
artifacts, and LDRIs, especially those captured with under-exposure, make the final HDRI noisy. In this paper, we
propose a ghost and noise removal method for HDRI using exposure fusion with a subband architecture, in
which Haar wavelet filters are used. The proposed method blends the weight maps of exposure fusion in the
subband pyramid, where the weight maps are produced for ghost artifact removal as well as exposure fusion.
Then, the noise is removed using multi-resolution bilateral filtering. After removing the ghost artifact and
noise in the subband images, details of the images are enhanced using a gain control map. Experimental
results with various sets of LDRIs show that the proposed method effectively removes ghost artifacts and
noise while enhancing the contrast of the final HDRI.
KEYWORDS
High Dynamic Range Imaging, Exposure Fusion, Ghost Removal, Noise Removal
1. INTRODUCTION
Digital sensors and display devices such as digital cameras and televisions have a limited
dynamic range: they cannot capture or display the full dynamic range with which people
perceive a real-world scene. For example, when a scene in which bright and dark regions coexist
is captured, these regions tend to be under- or over-saturated because of the limited dynamic
range. The dynamic range is one of the important criteria for evaluating image quality, especially
in devices supporting high-resolution images. Images and display devices supporting high
dynamic range (HDR) are therefore attractive to both producers and consumers today.
To acquire an HDR image (HDRI), HDR imaging techniques were proposed [1–5].
Multiple low dynamic range images (LDRIs) are captured with multiple exposures using the auto
exposure bracketing of a camera. Then, the captured LDRIs are combined into a single HDRI by
HDR imaging. However, when LDRIs are combined, several problems can occur due to global
and local motions [6–9] of a camera or an object, and noise [10–12] in the LDRIs. Ghost artifacts
and noise are the major problems in HDR imaging. Since an HDRI is generated from multiple images,
a moving camera or object causes ghost artifacts. HDR imaging is also used to obtain high-quality
images in low-light or back-light conditions, in which captured images tend to be noisy
due to camera settings with short exposure time or high sensitivity. Therefore,
noise is another critical issue in HDR imaging.
DOI : 10.5121/ijcga.2014.4401
HDR imaging can generally be classified into two approaches. In the first approach, HDR
imaging [1–3] consists of radiance map generation and tone mapping. First, an HDR radiance map,
which covers the entire dynamic range of the LDRIs, is generated [1]. Generally, in this radiance map
generation process, the ghost artifact due to local motion and the noise are removed [2, 3, 11]. Then,
the radiance map is tone-mapped back to an LDR representation to fit the range of display or
printing devices [13]. The second approach is image fusion, where LDRIs are blended directly
into a single HDRI using weight maps [4, 5, 14–17]. To remove the ghost artifact due to local
motion and the noise, the weight map is computed using image quality measures such as ghost and
noise as well as contrast, well-exposedness, and saturation.
In this paper, we propose a ghost and noise removal method using exposure fusion for HDR
imaging. In the proposed method, exposure fusion is used in the subband architecture, where
exposure fusion blends LDRIs directly using the weight maps guided by quality measures for the
HDR effect. To generate motion maps for removing the ghost artifact, the proposed histogram based
motion maps [18] are used, where the motion maps are combined with the weight maps of
exposure fusion. Fused subband images are denoised by multi-resolution bilateral filtering [19],
which is very effective in removing noise. After denoising, details of the subband images are
enhanced through gain control [20]. Next, the detail-preserved subband images are reconstructed
into a single final fused image.
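The pipeline above first decomposes each LDRI with Haar wavelet filters before fusing. As a minimal illustration (not the authors' code; a one-level 2-D block Haar transform, where the paper's L-level pyramid would repeat the analysis on the lowest band, and all function names here are ours):

```python
import numpy as np

def haar_analysis(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands.

    Operates on non-overlapping 2x2 blocks of an even-sized image; the
    multi-level pyramid would apply this recursively to the LL band.
    """
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0  # low-frequency (approximation) band
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_synthesis(ll, lh, hl, hh):
    """Inverse of haar_analysis: perfect reconstruction of the image."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```

Because the Haar filter pair gives perfect reconstruction, fusion, denoising, and gain control can all be performed on the subbands and then inverted without loss.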
The rest of the paper is organized as follows. In Section 2, image fusion for HDR imaging, ghost
removal methods, and noise reduction methods in HDR imaging are reviewed. Section 3 proposes
an exposure fusion method using subband architecture. In this section, the proposed histogram
based ghost removal method, noise reduction method, and gain control method in subband
architecture are described. Experimental results of the proposed and existing ghost and noise
removal methods for HDR imaging are compared and discussed in Section 4. Finally, Section 5
concludes the paper.
2. PREVIOUS WORK
To get a single HDRI, LDRIs are captured by exposure bracketing in a camera and combined in
HDR imaging [1–5]. However, image quality is rather degraded in HDR imaging unless some
artifacts such as ghost artifact and noise are reduced. In this section, we review previous work on
HDR imaging, ghost removal, and noise reduction.
2.1. Image Fusion for HDR Imaging
Image fusion for HDR imaging skips the process of generating an HDR radiance map and
directly fuses a set of multi-exposed images into a single HDRI [4]. It measures the quality of each
pixel in the LDRIs and computes a weighted average guided by the quality measures to produce a high-quality
image. It has several advantages: it fits into a simple acquisition pipeline, and it does
not require knowing the exposure time of every LDRI or calculating the camera response curve
from the exposure times. Compared with the case in which a single image is used for HDR
imaging [21], image fusion achieves better contrast and dynamic range because more
information, such as contrast, detail, and structure in the images, can be used.
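The weighted average guided by quality measures can be sketched as follows (a naive per-pixel blend; the methods reviewed here actually fuse in a pyramid or subband domain, and the function name is illustrative):

```python
import numpy as np

def fuse_exposures(ldris, weights, eps=1e-12):
    """Pixel-wise weighted average of K exposures (a naive sketch).

    `ldris` and `weights` are lists of K same-sized 2-D arrays; the
    weights come from quality measures such as contrast, saturation,
    and well-exposedness.
    """
    w = np.stack(weights).astype(float)
    w /= w.sum(axis=0) + eps          # normalize so weights sum to 1 per pixel
    return (np.stack(ldris) * w).sum(axis=0)
```

Blending per pixel like this causes seams and halos in practice, which is why exposure fusion [5] performs the blend over Gaussian/Laplacian pyramid levels instead.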
Goshtasby [4] proposed a block-based fusion method for HDR imaging. This method selects the
blocks that contain the most information and blends the selected blocks
together using a blending function defined by rational Gaussians. Raman and
Chaudhuri [15] proposed a bilateral filter based composition method. They computed the
difference between the LDRIs and the bilateral filtered image, and designed a weight function using
the difference, where weak edges and textures are given high weight values to preserve detail.
Li et al. [22] proposed a layer-based fusion algorithm. They used a global layer to improve the
robustness and color consistency, and the gradient domain was used to preserve details.
Bertalmio and Levine [23] introduced an energy function to preserve edge and color. They
measured the difference in edge information in the short-exposure image and the local color
difference in the long-exposure image.
In exposure fusion [5], quality measures for contrast, saturation, and well-exposedness are used to
compute the weight maps for each LDRI. Then, the weight maps and LDRIs are fused using
Gaussian and Laplacian pyramid decomposition, respectively. Shen et al. [20] presented a
detail-preserving exposure fusion. They applied the exposure fusion [5] to subband architecture using a
quadrature mirror filter (QMF), in which the gain control strategy [24] was used to preserve
details in HDRI.
2.2. Ghost Artifact Removal
Ghost artifact is due to global motion of a camera and local motion of objects in a scene. Even
though image registration [6, 25, 26] is used to remove global motion with a hand-held camera,
the ghost artifact still remains due to local motion of objects, because LDRIs are not captured
simultaneously and moving objects can be located at different positions while taking the LDRIs.
Figure 1 illustrates an example of the ghost artifact in HDRI. Figure 1(a) shows three LDRIs
(Bench, 1168×776) with different exposures after image registration [11]. Note that illuminations
of images and object positions change according to the exposure time of the LDRIs. Figure 1(b)
shows the final HDRI using exposure fusion [5]. Although the dynamic range in the image is
extended, the ghost artifact appears in the region that contains a moving object.
Figure 1. Example of the ghost effect in HDR imaging. (a) three LDRIs (1168×776, Bench), (b) HDRI
using exposure fusion [5] and its cropped image.
To generate the motion map, a region with large intensity change can simply be considered a
region of object motion. However, LDRIs are captured with different exposure times, so this
approach is not directly applicable to HDR imaging, and the illumination change between LDRIs
should be considered. Therefore, methods for removing the ghost artifact caused by local motion
use properties that are not sensitive to exposure time, such as variance, entropy, and
histograms of the LDRIs.
To detect object motion, a variance based method [2] uses the weighted variance of pixel values
between multiple-exposure LDRIs and divides images by thresholding into two regions: motion
region and background. On the other hand, Jacobs et al. [3] used the difference of local entropy
between multiple-exposure LDRIs to detect motion region. The measure using the difference of
local entropy is effective for detecting the features such as intensity edges and corners regardless
of exposures. However, it is sensitive to parameter values used to define motion and to the
window size for the local entropy. Khan et al. [27] proposed the ghost artifact removal method
based on the kernel density estimator. They computed the probability that the pixel is contained in
the background and used the probability as a weighting factor for constructing the radiance map.
Jia and Tang [28] used a voting method for color/intensity correction of input images. In this
method, global and local replacement functions are iteratively estimated using intensity values in
voting space and the replacement functions are employed to detect occlusion which causes ghost
artifact. While this method gives a high contrast image from defective input images with global
and local alignment, the computational load is high because of optimization process for
computing the replacement function.
Histogram based methods [7, 11] classify the intensity values into multiple levels. They regard
regions with a large difference between the level indices as motion regions. The computational load
of these methods is relatively low; however, they tend to over-detect wrong regions when the
intensity range is divided into a large number of levels.
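The histogram-based idea can be sketched as follows (a hedged illustration: equal-population quantization makes the level index roughly invariant to exposure time; the level count, tolerance, and function names are our illustrative choices, not the values used in [7, 11]):

```python
import numpy as np

def histogram_motion_map(ldris, n_levels=8, ref=0):
    """Sketch of a histogram-based motion map.

    Each exposure is quantized into `n_levels` rank levels using its own
    histogram (equal-population bins), so the level index is roughly
    invariant to exposure time. Pixels whose level index differs from
    the reference exposure by two or more levels are flagged as motion.
    """
    def levels(img):
        # equal-population quantization derived from the image's own histogram
        edges = np.quantile(img, np.linspace(0, 1, n_levels + 1)[1:-1])
        return np.digitize(img, edges)

    ref_lv = levels(ldris[ref])
    motion = np.zeros(ldris[ref].shape, dtype=bool)
    for k, img in enumerate(ldris):
        if k == ref:
            continue
        motion |= np.abs(levels(img) - ref_lv) >= 2  # tolerance of one level
    return motion
```

The one-level tolerance reflects the over-detection problem noted above: with too many levels and no tolerance, small quantization shifts between exposures would be misread as motion.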
2.3. Noise Removal
Tico et al. [14] proposed a noise and blur reduction method for HDR imaging. They used the
property that LDRIs captured with under-exposure are noisy, whereas those captured with over-exposure
are blurred. They first calibrated the LDRIs photometrically using the brightness transfer
function between the longest-exposure image and the remaining shorter-exposure images, and
fused the calibrated LDRIs with noise estimation in the wavelet domain. In the fusion step, a
weighted average is used, where the larger the noise variance of the pixel neighborhood is, the
smaller the computed weight of the pixel is.
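The inverse relation between neighborhood noise variance and fusion weight can be sketched as follows (a simplified stand-in for the wavelet-domain weighting of [14]; the exact weighting form and names are assumptions):

```python
import numpy as np

def variance_weights(noise_vars, eps=1e-6):
    """Inverse-variance fusion weights in the spirit of Tico et al. [14].

    Given per-pixel noise-variance estimates for K calibrated exposures,
    a pixel's weight falls as its neighborhood noise variance grows; the
    weights are normalized to sum to one across exposures.
    """
    w = 1.0 / (np.stack(noise_vars) + eps)
    return w / w.sum(axis=0)
```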
Akyuz and Reinhard [10] reduced noise in the radiance map generation of HDR imaging, in which
input LDRIs captured at a high sensitivity setting were used. They first generated the radiance map
of each LDRI using an inverse camera response curve, and computed the pixel-wise weighted
average of subsequent exposure images to reduce the noise. The weighting function depends on
exposure time and pixel values. They gave more weight to pixels of LDRIs captured with longer
exposure, but excluded over-saturated pixels from the averaging.
Min et al.’s method [11] selectively applies different types of denoising filters to motion regions
and static regions in radiance map generation that is based on Debevec and Malik’s method [1].
In motion regions, a structure-adaptive spatio-temporal smoothing filter is used, whereas in static
regions, a structure-adaptive spatial smoothing filter is used for each LDRI and then the weighted
averaging for filtered LDRIs is performed. This filter is effective for low-light noise removal with
edge preservation and a comparably low computational load.
3. HDRI GENERATION
The proposed method is based on the subband architecture [20]. Exposure fusion [5] with ghost
removal [18] for HDR, denoising [19], and gain control [24] to enhance detail are performed in
the subband framework of the proposed method. Figure 2 shows a block diagram of the proposed
HDR imaging framework using subband architecture [20], where thick lines represent more than
two images or subband images, whereas thin lines represent a single image or subband image.
Given the kth exposure LDRI I_k (1 ≤ k ≤ K), each LDRI is decomposed into a number of subband images
X_k^i (1 ≤ i ≤ 3L+1) by the analysis filter. With the decomposed subband images, exposure fusion is
performed, where the motion maps M′_k are generated and then combined with the weight maps of
exposure fusion for ghost artifact removal. The fused subband images F^i are denoised by multi-resolution
bilateral filtering and soft thresholding, where the lowest-frequency subband image F^(3L+1)
is denoised by bilateral filtering and the high-frequency subband images F^j (1 ≤ j ≤ 3L) by soft
thresholding. The denoised subband images are denoted as F̂^i. Next, the gain control
map for detail preservation is computed and applied to the denoised subband images. Finally, the
detail-preserved subband images F̃^i are reconstructed into a single final result image I′ by the synthesis filter.
Figure 2. Block diagram of the proposed HDR imaging framework using subband architecture.
This section is organized as follows. In Section 3.1, exposure fusion using subband architecture is
described. Sections 3.2 and 3.3 present the proposed histogram based ghost removal method and
denoising method using multi-resolution bilateral filter, respectively. In Section 3.4, to preserve
and enhance detail of a final image, gain control method is described.
3.1. Exposure Fusion Using Subband Architecture
The proposed method is based on exposure fusion [5], in which pixel-wise quality measures for
contrast, saturation, and well-exposedness are defined. Contrast measure C is computed as the
absolute value of the Laplacian filter response to the grayscale of each image. The use of this
measure makes edges and details preserved. Saturation measure S is defined as the standard
deviation within the color channel to assign a higher weight to pixels with more saturated color.
Well-exposedness measure E uses a Gaussian curve, where the closer the pixel value is to 0.5, the
higher the weight is given, where the pixel value is normalized to the range 0 to 1. Gaussian curve
is applied to each color channel, and results of three (red, green, and blue) color channels are
multiplied. Three measures at pixel (x, y) in the kth exposure image are combined using multiplication to produce the weight map

W_k(x, y) = (C_k(x, y))^{w_C} × (S_k(x, y))^{w_S} × (E_k(x, y))^{w_E}    (1)

where the subscript k represents the kth exposure image and the influence of the three measures of contrast, saturation, and well-exposedness is controlled by the power terms w_C, w_S, and w_E, respectively. For a consistent result, the weight map is normalized as
Ŵ_k(x, y) = W_k(x, y) / Σ_{k′=1}^{K} W_{k′}(x, y)    (2)
where K is the total number of LDRIs used to fuse for generating a single HDRI.
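The three quality measures and the normalized weight maps of Eqs. (1)–(2) can be sketched as follows. This is an illustrative reconstruction: the 4-neighbour Laplacian, the well-exposedness σ = 0.2, and wrap-around edge handling via np.roll are simplifying choices, not the paper's exact implementation.

```python
import numpy as np

def weight_maps(ldris, wc=1.0, ws=1.0, we=1.0, sigma=0.2):
    """Per-pixel quality weights of exposure fusion for a stack of
    RGB images normalized to [0, 1] (Eqs. (1)-(2))."""
    maps = []
    for img in ldris:
        gray = img.mean(axis=2)
        # contrast C: absolute Laplacian response (4-neighbour kernel)
        lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
               + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
        C = np.abs(lap)
        # saturation S: standard deviation across the colour channels
        S = img.std(axis=2)
        # well-exposedness E: Gaussian curve around 0.5, channel product
        E = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=2)
        maps.append((C ** wc) * (S ** ws) * (E ** we))
    total = np.sum(maps, axis=0) + 1e-12   # Eq. (2) denominator
    return [W / total for W in maps]       # weights sum to ~1 per pixel
```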
In exposure fusion [5], LDRIs and the weight maps are separately fused using a multi-resolution technique [29]. Let L{I} denote the Laplacian pyramid of an input LDRI I. Similarly, G{Ŵ} denotes the Gaussian pyramid of the normalized weight map Ŵ. The Laplacian pyramid of the final fused HDRI is generated as

L{I′}^l = Σ_{k=1}^{K} G{Ŵ_k}^l · L{I_k}^l    (3)

where the superscript l (1 ≤ l ≤ L) represents the layer index in a pyramid. A final HDRI I′ is obtained by reconstructing L{I′}^l. Shen et al. [20] applied exposure fusion [5] to the subband
architecture using QMF. They built the subband pyramid through an L-level QMF for
decomposing LDRIs and blended the subband images with the weighted maps of exposure fusion
[5]. The blended subband images are modified according to gain control maps [24] to preserve
details of the subband signals. A fused HDRI is reconstructed using detail-preserved subband
images.
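The pyramid blend of Eq. (3) can be sketched with a toy pyramid that uses 2×2 block averaging for downsampling and pixel replication for upsampling (real implementations use a Gaussian kernel); image sides are assumed divisible by 2^(L−1).

```python
import numpy as np

def down(x):  # 2x2 block average: toy Gaussian-pyramid reduce step
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):    # pixel replication: toy expand step
    return x.repeat(2, axis=0).repeat(2, axis=1)

def gauss_pyr(x, L):
    pyr = [x]
    for _ in range(L - 1):
        pyr.append(down(pyr[-1]))
    return pyr

def lap_pyr(x, L):
    pyr = []
    for _ in range(L - 1):
        small = down(x)
        pyr.append(x - up(small))   # detail layer
        x = small
    return pyr + [x]                # coarsest layer

def fuse_pyramids(ldris, weights, L):
    """Eq. (3): sum over k of the Gaussian weight pyramid times the
    Laplacian image pyramid, then collapse the blended pyramid."""
    out = [np.zeros_like(l) for l in lap_pyr(ldris[0], L)]
    for I, W in zip(ldris, weights):
        LI, GW = lap_pyr(I, L), gauss_pyr(W, L)
        for l in range(L):
            out[l] += GW[l] * LI[l]
    img = out[-1]
    for l in range(L - 2, -1, -1):
        img = up(img) + out[l]      # collapse: expand and add details
    return img
```

With uniform weight maps 1/K at every pixel, this blend reduces exactly to the per-pixel mean of the inputs, since both pyramid transforms are linear.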
The proposed method constructs the subband architecture with Haar wavelet filter and blends
weight maps of exposure fusion in the subband pyramid. Next, the proposed method computes
the gain control map to control the strength of the subband signals and modifies all the subbands
using the gain control map. The modified subband images are reconstructed using the subband
pyramid to a final result image.
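One analysis/synthesis level of the Haar filter pair can be sketched as follows; an L-level decomposition applies haar2d recursively to the LL band, yielding the 3L + 1 subbands used above. The 1/2 normalization is one of several equivalent conventions.

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar analysis: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2] + x[1::2]) / 2.0        # row pairs: average
    d = (x[0::2] - x[1::2]) / 2.0        # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """One-level 2D Haar synthesis: exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x
```

Perfect reconstruction of the analysis/synthesis pair is what lets all later processing happen safely in the subband domain.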
For an L-level subband pyramid, subband images X_k^i(x, y) of the kth exposure LDRI I_k are generated. To obtain a fused subband image F^i, the proposed method computes a weighted average of X_k^i at each pixel of the subband image as

F^i(x, y) = Σ_{k=1}^{K} W̃_k(x, y) X_k^i(x, y)    (4)

where W̃_k is a Gaussian-filtered version of the normalized weight map Ŵ_k, which prevents undesirable halos around edges. This weighted averaging makes the fused image have the good image quality defined by the quality measures. Each subband image contains image features extracted through image filtering. Thus, the weighted averaging blends image features instead of intensities in the subband architecture, which is effective for avoiding seams.
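The subband fusion of Eq. (4) can be sketched as below. A separable [1, 2, 1]/4 blur stands in for the Gaussian filtering of the weight maps, and the weights are renormalized after smoothing; both are simplifying choices of this sketch, not details specified by the paper.

```python
import numpy as np

def smooth(w):
    """Cheap separable [1,2,1]/4 blur standing in for the Gaussian
    filtering of the normalized weight maps (reduces halo artifacts)."""
    for ax in (0, 1):
        w = (np.roll(w, 1, ax) + 2 * w + np.roll(w, -1, ax)) / 4.0
    return w

def fuse_subband(subbands, norm_weights):
    """Eq. (4): weighted average of the subband images X_k^i of one
    subband index i, using smoothed normalized weight maps W~_k."""
    sm = [smooth(W) for W in norm_weights]
    total = np.sum(sm, axis=0) + 1e-12   # renormalize after smoothing
    return sum(W / total * X for W, X in zip(sm, subbands))
```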
3.2. Histogram Based Ghost Removal
In the proposed method for motion map generation [18], the histogram based multi-level
threshold map [7] is extended to remove the ghost artifact caused by local motion. When contrast
between moving objects and the background is small, or when textureless objects move, such motion produces a relatively larger change in intensity than in variance or local entropy [7]. Thus, the intensity histogram provides a useful means of local motion detection. The multi-level threshold
map was extended from a median threshold bitmap (MTB), which is a binary bitmap constructed
by partitioning an image using median pixel value of the image as a threshold [6]. The median
threshold is robust to illumination change and thus useful for comparing LDRIs. With this
robustness, the multi-level threshold map [7] was proposed to detect local motion more accurately
than MTB. This method classifies the intensity values into multi-levels and then finds motion
region using the difference between the level indices of intensity values, whereas MTB classifies
pixels into two levels according to intensity value.
The multi-level threshold method [7] detects local motion better than MTB; however, it may excessively detect wrong regions because it divides the intensity range into more than two levels. Since this excessive detection marks not only motion regions but also motion-free regions as motion, the effect of HDR imaging can be reduced. To avoid this drawback, the method
that adaptively detects motion was introduced [11]. The method separates the LDRI into three
regions: no difference region, small difference region, and large difference region. In the small
difference region, pixels that are connected to the large difference region are selected as motion
region of the large difference. However, the adaptive motion selection has a problem of how to
divide the LDRI into the three regions, because it is difficult to decide which pixels in small
difference region are selected as motion region.
To detect local motion, the proposed motion map [18] is generated based on rank according to the
intensity. It is assumed that the ranks of intensity within an LDRI are similar to those of the other
LDRIs except in the local motion region. Let r_k(x, y) (1 ≤ r_k ≤ R_k) be the rank at pixel (x, y) of the kth exposure image. It can be expressed using a cumulative distribution function of the intensity value at pixel (x, y). The rank r_k(x, y) is normalized to N bits as

r̂_k(x, y) = floor( r_k(x, y) / R_k × 2^N ),  1 ≤ r_k ≤ R_k    (5)

where R_k is the last rank in the kth exposure image and floor(x) represents the floor function that
maps a real number to the largest integer smaller than or equal to x. N is selected considering
hardware costs, where larger N gives better performance in a final HDRI, but requires higher
computational cost. It is assumed that the difference of normalized ranks between intensity values
at pixel (x, y) in the reference LDRI and kth exposure image is due to local motion. A reference
image serves as reference for detection of non-static regions. The reference image can be selected
as an image in which area of non-saturated region is the largest among LDRIs, or be produced
using pre-processing such as photometric calibration. However, we assume that in general the
middle-exposure LDRI is well-exposed and saturated region of the image is the least, thus we
select the middle-exposure LDRI as the reference image.
Then, the absolute difference of normalized ranks is computed as

d_k(x, y) = | r̂_ref(x, y) − r̂_k(x, y) |    (6)

where the subscript ref represents the middle exposure. Using d_k, the motion maps M_k are defined as

M_k(x, y) = { 0, for d_k(x, y) ≥ T_rank, k ≠ ref
            { 1, otherwise    (7)

where T_rank is the threshold value selected to reduce errors caused by normalized-rank changes between LDRIs due to different exposures or non-ideal intensity changes. In the motion maps, a pixel with zero (0) value represents a pixel with motion.
The motion maps M_k are clustered by applying the morphological operations [30], which are needed to avoid partial detection of a moving object. First, isolated pixels are removed, and then disconnected pixels are linked. Finally, the holes that are surrounded by 1-valued pixels are filled. After these morphological operations, the final motion maps are denoted as M′_k(x, y). The final rank based motion maps M′_k(x, y) are combined into the weight map in exposure fusion using multiplication as

W′_k(x, y) = (C_k(x, y))^{w_C} × (S_k(x, y))^{w_S} × (E_k(x, y))^{w_E} × M′_k(x, y).    (8)

This weight map W′_k(x, y) is normalized and filtered to produce W̃_k(x, y) as in (4). Using W̃_k, the proposed method generates the ghost-removed fused subband images F^i(x, y).
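Eqs. (5)–(7) can be sketched as follows, using the experimental settings N = 8 and T_rank = 24 from Section 4.2. Ties in intensity are broken by raster order here (the paper defines rank via the CDF), and the morphological cleanup [30] is omitted.

```python
import numpy as np

def normalized_rank(img, N=8):
    """Eq. (5): map each pixel to its intensity rank within the image,
    scaled to N bits. Ties are broken by raster order (a sketch-level
    simplification of the CDF-based rank)."""
    flat = img.ravel()
    order = np.argsort(flat, kind="stable")
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, flat.size + 1)   # ranks 1..R_k
    return np.floor(ranks / flat.size * 2 ** N).reshape(img.shape)

def motion_map(img, ref, N=8, t_rank=24):
    """Eqs. (6)-(7): 0 marks motion pixels, 1 marks static pixels."""
    d = np.abs(normalized_rank(ref, N) - normalized_rank(img, N))
    return np.where(d >= t_rank, 0, 1)
```

Because ranks, not raw intensities, are compared, a global exposure change between two LDRIs leaves the map unchanged, while a moving object reshuffles local ranks and is flagged.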
3.3. Noise Removal Using Multi-Resolution Bilateral Filtering
The proposed method removes noise in subband architecture. The subband images decomposed
by the wavelet filter are effectively used for noise removal. In the wavelet domain, whereas Tico et al. [14] only estimated the noise variance at every pixel in all LDRIs using the robust median estimator [31] and then computed the weight of each pixel according to the noise variance, the proposed method estimates and removes the noise in the fused subband images F^i using Zhang and Gunturk's method [19].
Zhang and Gunturk’s method [19] combines multi-resolution bilateral filtering with wavelet
thresholding for image denoising. This method decomposes an image into low- and high-frequency
subbands as the proposed method. The bilateral filter [32] is an edge-preserving
denoising filter, where the intensity value at each pixel is replaced by Gaussian weighted average
of intensity values of nearby pixels. This filter is applied to the lowest-frequency subband image.
Wavelet thresholding [33] can remove noise components with hard or soft thresholding
operations. Soft thresholding is applied to the high-frequency subband image. Bilateral filter and
wavelet thresholding are also used to provide an effective noise reduction method with visible-band
and wide-band image pair in Yoo et al.’s method [34].
The proposed method applies the bilateral filter to the lowest-frequency subband image F^{3L+1} and then obtains the bilateral-filtered image F̂^{3L+1}. On the other hand, the high-frequency subband images F^j (1 ≤ j ≤ 3L) are filtered by soft thresholding

F̂^j(x, y) = { 0, if |F^j(x, y)| ≤ T_soft
            { sgn(F^j(x, y)) ( |F^j(x, y)| − T_soft ), otherwise    (9)

where T_soft is the threshold in soft thresholding. Soft thresholding suppresses the noise by applying a nonlinear transform to the wavelet coefficients.
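The two denoising operators can be sketched as follows. The bilateral filter here is a naive O(r²) version with wrap-around borders via np.roll, for illustration only; the parameter values are placeholders, not the paper's settings.

```python
import numpy as np

def soft_threshold(F, t):
    """Eq. (9): shrink high-frequency subband coefficients toward zero;
    magnitudes below t are treated as noise and set to 0."""
    return np.sign(F) * np.maximum(np.abs(F) - t, 0.0)

def bilateral(img, sigma_s=1.5, sigma_r=0.1, radius=3):
    """Naive bilateral filter for the lowest-frequency subband:
    weights are Gaussian in both spatial and intensity distance,
    so smoothing stops at strong edges."""
    acc = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            w = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                 * np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2)))
            acc += w * shifted
            norm += w
    return acc / norm
```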
3.4. Gain Control Map
After obtaining the fused subband images, the proposed method uses an approach similar to Li et al.'s method [24] and Shen et al.'s method [20] to compute the gain control map. Generally, a defect of a denoised image is that detail and texture in the image are degraded during denoising. Thus, many denoising algorithms avoid strong parameter settings even though their denoising filters are designed to be strong. Therefore, detail enhancement is essential for restoring what is degraded by denoising. The proposed method preserves the detail and texture in the subband architecture without converting to other architectures. All processing, i.e., exposure fusion, ghost removal, noise reduction, and detail preservation, is performed in a single architecture, where the steps are closely related to each other.
The adaptation of photoreceptors to a new light level can be modeled as the logarithm of the input intensity. Thus, the proposed method controls the subband signals using the logarithm of the input intensity, so that the high contrast in areas with abruptly changing illumination is reduced and the details in texture regions are preserved.
To compute the gain control map, an activity map A^i (1 ≤ i ≤ 3L+1) [24] of the ith subband image is constructed from the absolute values of the local filter response as

A^i(x, y) = g(x, y, σ) ∗ |F̂^i(x, y)|    (10)
where a Gaussian filter g(x, y, σ) with a subband-dependent scale parameter σ is used. A gain control map G^i is defined using a gamma-like function of the activity map as

G^i(x, y) = ( (A^i(x, y) + ε) / δ )^{γ − 1}    (11)

where γ is a weighting factor between 0 and 1, and ε prevents singularity in the gain map. This mapping is a monotonically decreasing function: if the activity is high, the gain is decreased; otherwise, it is increased. δ is used as a gain control stability [24].
The gain control maps are aggregated into a single gain control map [24], because each activity in a subband is correlated with those in adjacent subbands. Thus, an activity map A_ag(x, y) aggregated over scales and orientations is computed as

A_ag(x, y) = Σ_{i=1}^{3L+1} A^i(x, y)    (12)

and from which, a single gain control map is derived as
G_ag(x, y) = ( (A_ag(x, y) + ε) / δ )^{γ − 1}    (13)

δ = a × Σ_{(x, y)} A_ag(x, y) / (Width × Height)    (14)

where Width and Height are the width and height of F̂, respectively. The single gain control map G_ag(x, y) is applied to the fused subband images F̂^i(x, y) to get F̃^i(x, y) as

F̃^i(x, y) = m_i × G_ag(x, y) × F̂^i(x, y)    (15)

where m_i controls the modification extent of the different frequencies according to the scale of the subband [24]. Finally, the gain-controlled subband images F̃^i are reconstructed to a single final result image I′ by a wavelet synthesis filter.
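Eqs. (10)–(15) can be sketched as follows under simplifying assumptions of this sketch: the Gaussian blur of Eq. (10) is omitted (activity taken as |coefficient|), all subbands are assumed to share one resolution, m_i = 1, and γ, ε, a are illustrative values rather than the paper's.

```python
import numpy as np

def gain_control(subbands, gamma=0.6, eps=1e-4, a=0.1):
    """Eqs. (10)-(15), simplified: aggregate subband activity into one
    gain map and rescale every subband with it."""
    # Eq. (10) simplified: activity = |coefficient| (Gaussian blur omitted)
    A_ag = np.sum([np.abs(F) for F in subbands], axis=0)   # Eq. (12)
    delta = a * A_ag.sum() / A_ag.size                     # Eq. (14)
    G_ag = ((A_ag + eps) / delta) ** (gamma - 1.0)         # Eq. (13)
    return [G_ag * F for F in subbands]                    # Eq. (15), m_i = 1
```

Since γ − 1 < 0, high-activity (high-contrast) coefficients receive gains below those of low-activity (texture) coefficients, compressing large illumination transitions while preserving detail.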
4. EXPERIMENTAL RESULTS AND DISCUSSIONS
We have tested the proposed method using various sets of real LDRIs with different exposures.
Before simulating the proposed method and other ghost and noise removal methods for HDR,
input images are aligned by a registration method used in [11] to reduce the distortion caused by
global motion.
We compare the performance of the proposed and existing ghost and noise removal methods in
views of subjective quality for HDR imaging, ghost artifact removal, noise removal, and
computation time.
4.1. HDR Imaging
Figure 3 shows a comparison of HDR results with and without gain control (House, 752 × 500).
Figure 3(a) shows input LDRIs with three different exposures. Figures 3(b) and 3(c) show results
by the proposed method without and with gain control, respectively, where the effect of HDR
imaging from three input LDRIs is produced. Edge and details in result image by the proposed
method with gain control in Figure 3(c) are stronger than those shown in Figure 3(b). By the gain
control, the proposed method can give detail-preserving results for a HDRI.
Figure 3. Comparison of HDR results with and without gain control (752 × 500, House). (a) input LDRIs,
(b) result by the proposed method without gain control, (c) result by the proposed method with gain control.
In Figure 4, the performances of the ghost removal methods are compared in view of HDR
imaging. Because the ghost removal methods may excessively detect wrong regions, the
performances of the ghost removal methods are evaluated by comparing area of HDR imaging
region. Three input LDRIs (Field, 728× 546) are used. Figures 4(a)–4(d) show results of variance
based method [2], Jacobs et al.’s method [3], adaptive multi-level threshold based method [11],
and proposed method, respectively. The proposed method exactly detects the motion region only,
so HDR imaging can be effectively applied to region without motion. In Figure 4(d), the
proposed method gives the best performance in view of contrast. Figures 4(a)–4(c) show the color
artifact in sky region, which is so-called color shift caused by tone mapping [1], where color
distortion can easily occur when the radiance map of luminance is tone-mapped back to fit the
range of devices. However, this artifact does not occur in the proposed method because exposure
fusion used in the proposed method fuses directly LDRIs.
Figure 4. Comparison of simulation results in view of HDR (728 × 546, Field). (a) variance based method
[2], (b) Jacobs et al.’s method [3], (c) adaptive multi-level threshold based method [11], (d) proposed
method.
4.2. Ghost Artifact Removal
For performance comparison of ghost artifact removal in HDR imaging, three existing methods
are used: variance based method [2], Jacobs et al.’s method [3], and tensor voting method [28].
For tone mapping method of variance based method [2] and Jacobs et al.’s method [3], Reinhard
et al.’s method [13] is used. These methods [2, 3] use the weighted variance and local entropy of
pixel values between multiple-exposure LDRIs, respectively. Tensor voting method [28]
estimates global and local replacement functions and votes for an optimal replacement function.
In our experiments with the proposed method, we set N = 8 and T_rank = 24, and an LDRI with middle exposure is selected as the reference image for ghost artifact removal. Therefore, the proposed method replaces the detected motion region with the corresponding region in the reference image.
The performances of ghost artifact removal in region where local motion occurs are compared in
Figure 5, where these figures are the cropped (251× 205) and enlarged from Figure 1. Figures
5(a) and 5(b) show the result of the variance based method [2] and Jacobs et al.’s method [3],
respectively. Note that the result of the proposed method in Figure 5(d) is better than those of the
two methods. These existing methods are not robust to motion of an object if the object and
background have similar structure. A result by tensor voting method [28] is shown in Figure 5(c),
where the motion region is replaced by that of under-exposure image, because in this method
under-exposure image is used as the reference image in which intensities of under-exposure
image are globally and locally aligned to intensities of middle- and over-exposure images. In
Figure 5(c), the ghost artifacts remain after HDR imaging. In terms of contrast and detail preservation, the proposed method gives better results than the other methods. In Figure 5(d) by
the proposed method, edges, corners, and details are shown sharply. The gain control in the
proposed method leads to high contrast and fine details of result HDRI.
Figure 5. Comparison of simulation results in view of ghost artifact removal (251 × 205, Bench). (a)
variance based method [2], (b) Jacobs et al.’s method [3], (c) tensor voting method [28], (d) proposed
method.
4.3. Noise Removal
The three existing methods used for comparison in view of noise removal are Debevec and
Malik’s method [1], Min et al.’s method [11], and Tico et al.’s method [14].
Figure 6(a) shows the noisy LDRI set (Window, 616× 462) consisting of three images with
different exposures. Figure 6(b) shows the cropped and enlarged region of red boxes in Figure
6(a). We can observe that dark regions in LDRIs with under-exposures are affected by noise.
Figure 6. Noisy LDRIs (616 × 462, Window). (a) input LDRIs, (b) cropped and enlarged regions of LDRIs.
Figure 7 shows the HDR imaging results with noise removal using LDRIs in Figure 6(a). Figures
7(a)–(d) show HDR imaging results by Debevec and Malik’s method [1], Min et al.’s method
[11], Tico et al.’s method [14], and the proposed method, respectively, where right images are the
cropped and enlarged regions from red boxes in left images. Note that Debevec and Malik’s
method and Min et al.’s method use HDR radiance map generation and tone mapping, whereas
Tico et al.’s method and the proposed method use image fusion method. Debevec and Malik used
simple weighted average functions for removing noise; however, this function is not effective in low-light conditions, as shown in Figure 7(a). Min et al. used the information of spatially and temporally neighboring pixels for noise removal. Although their method gives better results than Debevec and Malik's method, noise still remains, as shown in Figure 7(b). Figure 7(c) shows the
result of Tico et al.’s method, where noise variance is estimated at each pixel and small weight is
assigned to the pixels with large noise variance, however the performance of noise removal is
lower than the proposed method in Figure 7(d). In Figure 7(d), the proposed method effectively
removes noise as well as preserves edge and color of LDRIs.
Figure 7. Comparison of simulation results in view of noise removal (616 × 462). (a) Debevec and Malik’s
method [1], (b) Min et al.’s method [11], (c) Tico et al.’s method [14], (d) proposed method.
4.4. Computation Time
Table 1 shows the computation times of each process of the proposed method. The proposed
motion map generation is useful because of simple implementation and low complexity. It makes
the proposed method more practical for today’s digital cameras with mega pixel images.
However, the processes for denoising and gain control take more time than other processes.
Therefore, if the processes of denoising and gain control are adaptively used according to the
amount of noise and illumination level, high-quality HDRI can be obtained with less
computation.
Table 1. Computation Times of Each Process of the Proposed Method (563× 373 × 3).
Process Time (sec)
Decomposition 2.6573
Motion map generation 0.4359
Exposure fusion 6.0985
Denoising 7.1858
Gain control 1.2826
Reconstruction 0.7087
Total 18.3645
Table 2 shows comparison of the computation time of ghost removal process for three existing
and proposed ghost removal methods. We implemented the algorithms in Matlab on an Intel Core
i5 (2.67GHz, 4GB RAM) and used image set (Field, 728× 546× 3). For ghost removal, the
proposed method takes more time than variance based method [2] and multi-level threshold based
method [7]; however, it gives better results than the three existing methods.
Table 2. Comparison of the Computation Time of Ghost Removal Process (728 × 546 × 3, Field).
Method Time (sec)
Variance based method [2] 0.7594
Multi-level threshold based method [7] 0.8257
Jacobs et al.'s method [3] 3.2651
Proposed method 1.5170
5. CONCLUSIONS
We propose a ghost and noise removal method in exposure fusion using subband architecture.
Exposure fusion is constructed in the subband architecture and the proposed histogram based
motion maps are used for ghost artifact removal caused by local motion, where the motion maps
are combined with the weight maps of exposure fusion. Fused subband images are denoised by
multi-resolution bilateral filtering and soft thresholding, and then details of the subband images
are enhanced through the gain control. Finally, detail-preserved subband images are reconstructed
to a single HDRI. Experimental results show that the proposed method effectively detects only the motion region and removes noise, while giving good results for HDR imaging. Future research will focus on a more practical implementation of the proposed method. In real captured LDRI sets, the over-exposure LDRI is affected by blur; thus, future work will also address reducing the effect of blur degradation in generating a single high-quality HDRI.
ACKNOWLEDGEMENTS
This work was supported in part by Digital Imaging Business, Samsung Electronics, Co. Ltd.
REFERENCES
[1] P. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs”, Proc.
SIGGRAPH, Los Angeles, CA, Aug. 1997, pp. 369–378.
[2] E. Reinhard, G. Ward, P. Debevec, and S. Pattanaik, High Dynamic Range Imaging: Acquisition,
Display, and Image Based Lighting, Morgan Kaufmann, San Francisco, CA, 2006.
[3] K. Jacobs, C. Loscos, and G. Ward, “Automatic high dynamic range image generation for dynamic
scenes”, IEEE Computer Graphics and Applications, Vol. 28, No. 2, Mar. 2008, pp. 84–93.
[4] A. A. Goshtasby, “Fusion of multi-exposure images”, Image and Vision Computing, Vol. 23, No. 6,
June 2005, pp. 611−618.
[5] T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion: A simple and practical alternative to high
dynamic range photography”, Computer Graphics Forum, Vol. 28, No. 1, Mar. 2009, pp. 161–171.
[6] G. Ward, “Fast, robust image registration for compositing high dynamic range photographs for
handheld exposures”, Journal of Graphics Tools, Vol. 8, No. 2, Jan. 2003, pp. 17–30.
[7] T.-H. Min, R.-H. Park, and S. Chang, “Histogram based ghost removal in high dynamic range images”,
Proc. IEEE Int. Conf. Multimedia and Expo, New York, June/July 2009, pp. 530−533.
[8] W. Zhang and W.-K. Cham, “Gradient-directed composition of multi-exposure images,” Proc. IEEE
Conf. Computer Vision and Pattern Recognition, San Francisco, CA, June 2010, pp. 530−536.
[9] S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, “High dynamic range video”, ACM Trans.
Graphics, Vol. 22, No. 3, July 2003, pp. 319–325.
[10] A. O. Akyuz and E. Reinhard, “Noise reduction in high dynamic range imaging”, Journal of Visual
Communication and Image Representation, Vol. 18, No. 5, Oct. 2007, pp. 366–376.
[11] T.-H. Min, R.-H. Park, and S. Chang, “Noise reduction in high dynamic range images”, Signal, Image,
and Video Processing, Vol. 5, No. 3, Sept. 2011, pp. 315–328.
[12] A. A. Bell, C. Seiler, J. N. Kaftan, and T. Aach, “Noise in high dynamic range imaging”, Proc. Int.
Conf. Image Processing, San Diego, CA, Oct. 2008, pp. 561–564.
[13] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, “Photographic tone reproduction for digital
images”, ACM Trans. Graphics, Vol. 21, No. 3, July 2002, pp. 267–273.
[14] M. Tico, N. Gelfand, and K. Pulli, “Motion-blur-free exposure fusion”, in Proc. Int. Conf. Image
Processing, Hong Kong, China, Sept. 2010, pp. 3321–3324.
[15] S. Raman and S. Chaudhuri, “Bilateral filter based compositing for variable exposure photography”,
Proc. Eurographics Conf., Munich, Germany, Mar. 2009, pp. 1–4.
[16] B.-D. Choi, S.-W. Jung, and S.-J. Ko, “Motion-blur-free camera system splitting exposure time”,
IEEE Trans. Consumer Electronics, Vol. 54, No. 3, Aug. 2008, pp. 981−986.
[17] J. Jia, J. Sun, C.-K. Tang, and H.-Y. Shum, “Bayesian correction of image intensity with spatial
consideration”, Proc. 8th European Conf. Computer Vision, Lecture Notes in Computer Science, Vol.
3023, Prague, Czech Republic, May 2004, pp. 342−354.
[18] D.-K. Lee, R.-H. Park, and S. Chang, “Improved histogram based ghost removal in exposure fusion
for dynamic range images”, Proc. 15th IEEE Int. Symp. Consumer Electronics, Singapore, June 2011,
pp. 586–591.
[19] M. Zhang and B. Gunturk, “Multiresolution bilateral filtering for image denoising”, IEEE Trans.
Image Processing, Vol. 17, No. 12, Dec. 2008, pp. 2324–2333.
[20] J. Shen, Y. Zhao, and Y. He, “Detail-preserving exposure fusion using subband architecture”, The
Visual Computer, Vol. 28, No. 5, May 2012, pp. 463–473.
[21] S. Lee, “Compressed image reproduction based on block decomposition”, IET Image Processing, Vol.
3, No. 4, Aug. 2009, pp. 188–199.
[22] X. Li, F. Li, L. Zhuo, and D. D. Feng, “Layers-based exposure fusion algorithm”, IET Image
Processing, Vol. 7, No. 7, Oct. 2013, pp. 701–711.
[23] M. Bertalmio and S. Levine, “Variational Approach for the Fusion of Exposure Bracketed Pairs”,
IEEE Trans. Image Processing, Vol. 22, No. 2, Feb. 2013, pp. 712–723.
[24] Y. Li, L. Sharan, and E. H. Adelson, “Compressing and companding high dynamic range images with
subband architectures”, ACM Trans. Graphics, Vol. 24, No. 3, July 2005, pp. 836–844.
[25] S. Baker and I. Matthews, “Lucas-Kanade 20 years on: A unifying framework”, Int. Journal of
Computer Vision, Vol. 56, No. 3, Feb. 2004, pp. 221–255.
[26] Y. Keller and A. Averbuch, “Fast gradient methods based on global motion estimation for video
compression”, IEEE Trans. Circuits Syst. Video Technol., Vol. 13, No. 4, Apr. 2003, pp. 300−309.
[27] E. A. Khan, A. O. Akyuz, and E. Reinhard, “Robust generation of high dynamic range images”, Proc.
Int. Conf. Image Processing, Atlanta, GA, Oct. 2006, pp. 2005–2008.
[28] J. Jia and C. K. Tang, “Tensor voting for image correction by global and local intensity alignment”,
IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 27, No. 1, Jan. 2005, pp. 36–50.
[29] P. J. Burt and E. H. Adelson, “The Laplacian pyramid as a compact image code”, IEEE Trans.
Communications, Vol. 31, No. 4, Apr. 1983, pp. 532–540.
[30] R. C. Gonzalez and R. E. Woods, Digital Image Processing. 3rd ed., Upper Saddle River, NJ: Pearson
Education Inc., 2010.
[31] D. Donoho and I. Johnstone, “Ideal spatial adaptation by wavelet shrinkage”, Biometrika, Vol. 81, No.
3, Aug. 1994, pp. 425–455.
[32] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images”, Proc. Int. Conf. Computer
Vision, Bombay, India, Jan. 1998, pp. 839–846.
[33] S. G. Chang, B. Yu, and M. Vetterli, “Adaptive wavelet thresholding for image denoising and
compression”, IEEE Trans. Image Processing, Vol. 9, No. 9, Sept. 2000, pp. 1532–1546.
[34] Y. Yoo, W. Choe, J. Kwon, S. Park, S. Lee, and C.-Y. Kim, “Low-light imaging method with visible-band
and wide band image pair”, Proc. Int. Conf. Image Processing, Cairo, Egypt, Nov. 2009, pp.
2273–2276.
Authors
Dong-Kyu Lee received the B.S. and M.S. degrees in electronic engineering from Sogang University in
2010 and 2012, respectively. His current research interests are image processing and image enhancement.
Rae-Hong Park received the B.S. and M.S. degrees in electronics engineering from Seoul National
University, Seoul, Korea, in 1976 and 1979, respectively, and the M.S. and Ph.D. degrees in electrical
engineering from Stanford University, Stanford, CA, in 1981 and 1984, respectively. In 1984, he joined the
faculty of the Department of Electronic Engineering, School of Engineering, Sogang University, Seoul,
Korea, where he is currently a Professor. In 1990, he spent his sabbatical year as a Visiting Associate
Professor with the Computer Vision Laboratory, Center for Automation Research, University of Maryland
at College Park. In 2001 and 2004, he spent sabbatical semesters at Digital Media Research and
Development Center (DTV image/video enhancement), Samsung Electronics Co., Ltd., Suwon, Korea. In
2012, he spent a sabbatical year in Digital Imaging Business (RD Team) and Visual Display Business
(RD Office), Samsung Electronics Co., Ltd., Suwon, Korea. His current research interests are video
communication, computer vision, and pattern recognition. He served as Editor for the Korea Institute of
Telematics and Electronics (KITE) Journal of Electronics Engineering from 1995 to 1996. Dr. Park was
the recipient of a 1990 Post-Doctoral Fellowship presented by the Korea Science and Engineering
Foundation (KOSEF), the 1987 Academic Award presented by the KITE, the 2000 Haedong Paper Award
presented by the Institute of Electronics Engineers of Korea (IEEK), the 1997 First Sogang Academic
Award, and the 1999 Professor Achievement Excellence Award presented by Sogang University. He is a
co-recipient of the Best Student Paper Award of the IEEE Int. Symp. Multimedia (ISM 2006) and IEEE Int.
Symp. Consumer Electronics (ISCE 2011).
SoonKeun Chang received his B.S. degree in astronomy and space science from KyungHee University,
Korea, in 2000 and M.S. degree in control engineering from Kanazawa University in 2003. He received a
Ph.D. degree in control engineering from Tokyo Institute of Technology (TITech) in 2007. Now he works
at Samsung Electronics Co., Ltd., Korea. His main research interests include computer vision and image
processing.