Considering the sparseness property of images, a sparse-representation-based iterative deblurring method is presented for single-image deblurring under uniform and non-uniform motion blur. The approach is based on sparse and redundant representations over dictionaries adaptively trained on the single blurred, noisy image itself. The K-SVD algorithm is used to obtain a dictionary that describes the image content effectively. Comprehensive experimental evaluation demonstrates that the proposed framework, which integrates the sparseness property of images, adaptive dictionary training, and an iterative deblurring scheme, significantly improves deblurring performance, is comparable with state-of-the-art deblurring algorithms, and offers a powerful solution to an ill-conditioned inverse problem.
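As a hedged illustration of the sparse-coding machinery such dictionary-based methods rest on, the sketch below runs plain matching pursuit over a tiny hand-made dictionary; the atoms, signal, and one-step sparsity are illustrative assumptions, not the paper's learned K-SVD dictionary.

```python
# Toy sketch of the sparse-coding step behind K-SVD-style dictionary methods:
# matching pursuit picks, at each step, the dictionary atom most correlated
# with the current residual. Dictionary and signal are illustrative only.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(atoms, y, steps):
    residual, code = list(y), [0.0] * len(atoms)
    for _ in range(steps):
        # pick the unit-norm atom with the largest correlation to the residual
        j = max(range(len(atoms)), key=lambda i: abs(dot(atoms[i], residual)))
        c = dot(atoms[j], residual)
        code[j] += c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return code, residual

atoms = [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8)]   # unit-norm dictionary atoms
y = (1.2, 1.6)                                  # = 2 * atom 2: a 1-sparse signal
code, residual = matching_pursuit(atoms, y, steps=1)
```

One pursuit step recovers the single active atom exactly; K-SVD alternates such sparse coding with dictionary updates.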
A Pattern Classification Based Approach for Blur Classification (ijeei-iaes)
Blur type identification is one of the most crucial steps of image restoration. In blind restoration it is generally assumed that the blur type is known prior to restoration, which is not practical in real applications. Identifying the blur type is therefore highly desirable before applying a blind restoration technique to a blurred image. This paper presents an approach to categorize blur into three classes: motion, defocus, and combined blur. Curvelet-transform-based energy features are used to characterize blur patterns, and a neural network is designed for classification. Simulation results show the precision of the proposed approach.
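A minimal, hypothetical sketch of the feature idea behind such blur classification: motion blur is anisotropic (it wipes out detail along one direction) while defocus blur is isotropic. Plain finite differences stand in for the paper's curvelet energy features, and the fixed ratio is an illustrative assumption; the paper feeds such features to a trained neural network instead.

```python
# Directional high-frequency energies as crude blur-discriminating features.
def directional_energies(img):
    h = sum((row[i + 1] - row[i]) ** 2 for row in img for i in range(len(row) - 1))
    v = sum((img[r + 1][c] - img[r][c]) ** 2
            for r in range(len(img) - 1) for c in range(len(img[0])))
    return h, v

def classify(img, ratio=3.0):
    h, v = directional_energies(img)
    lo, hi = min(h, v), max(h, v)
    # strongly anisotropic energy suggests motion blur along one axis
    return "motion" if lo == 0 or hi / lo > ratio else "defocus-or-combined"

# horizontal motion blur removes horizontal detail but keeps vertical edges
motion_blurred = [[0, 0, 0, 0], [0, 0, 0, 0], [9, 9, 9, 9], [9, 9, 9, 9]]
# defocus smears equally in both directions
defocus_like = [[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
```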
Despeckling of Ultrasound Imaging using Median Regularized Coupled PDE (IDES Editor)
This paper presents an approach for reducing speckle in ultrasound images using a Coupled Partial Differential Equation (CPDE), obtained by uniting second-order and fourth-order partial differential equations. PDE-based speckle reduction is a noise-smoothing approach that is attracting wide attention because PDEs preserve edges well while reducing noise. We also introduce a median regulator to guide the energy source, boosting image features and regularizing the diffusion. The proposed method is tested on both simulated and real medical ultrasound images and compared with SRAD, Perona-Malik diffusion, and nonlinear coherent diffusion; it gives better results in terms of CNR, SSIM, and FOM.
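One explicit Perona-Malik diffusion step on a 1-D signal gives a hedged sketch of the second-order half of such a coupled model; the fourth-order term and the median regulator from the paper are omitted, and the `dt`/`kappa` values are illustrative assumptions.

```python
import math

# Edge-preserving diffusion: conductance ~1 in flat (noisy) areas, ~0 at edges.
def perona_malik_step(u, dt=0.2, kappa=2.0):
    out = u[:]
    for i in range(1, len(u) - 1):
        e, w = u[i + 1] - u[i], u[i - 1] - u[i]   # east/west differences
        ge = math.exp(-(e / kappa) ** 2)          # edge-stopping conductances
        gw = math.exp(-(w / kappa) ** 2)
        out[i] = u[i] + dt * (ge * e + gw * w)
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 10.0, 10.0]  # speckle on the left, one edge
smoothed = perona_malik_step(noisy)             # ripple damped, edge preserved
```

Iterating such steps is what "reduces noise while keeping the edge": the large jump's conductance is essentially zero, so it diffuses almost nothing.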
SHADOW DETECTION USING TRICOLOR ATTENUATION MODEL ENHANCED WITH ADAPTIVE HIST... (ijcsit)
Shadows create significant problems in many computer vision and image analysis tasks such as object
recognition, object tracking, and image segmentation. For a machine, it is very difficult to distinguish
between a shadow and a real object. As a result, an object recognition system may incorrectly recognize a
shadow region as an object. So the detection of shadows in images will enhance the performance of many
machine vision tasks. This paper implements a shadow detection method, which is based on Tricolor
Attenuation Model (TAM) enhanced with adaptive histogram equalization (AHE). TAM uses the concept of
intensity attenuation of pixels in the shadow region which is different for the three color channels. It
originates from the idea that if the minimum attenuated color channel is subtracted from the maximum
attenuated one, the shadow areas become darker in the resulting TAM image. But this resulting image will
be of low contrast due to the high correlation among R, G and B color channels. In order to enhance the
contrast, adaptive histogram equalization is used. The incorporation of AHE significantly improved the
quality of the detected shadow region.
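A hedged sketch of the core TAM observation described above: inside shadows the three color channels are attenuated unequally (in daylight scenes shadow light is sky-dominated, so blue is typically attenuated least and red most), and subtracting the least-attenuated channel from the most-attenuated one makes shadow pixels darker in the TAM image. The pixel values and zero threshold below are illustrative assumptions; the paper additionally applies adaptive histogram equalization because the raw TAM image has low contrast.

```python
# TAM image: max-attenuated channel (red) minus min-attenuated channel (blue).
def tam_image(pixels):
    return [r - b for r, g, b in pixels]

sunlit = (200, 180, 150)   # red-rich under direct sunlight
shadow = (60, 70, 90)      # sky-lit shadow: blue survives best
pixels = [sunlit, shadow, sunlit]
tam = tam_image(pixels)
mask = [v < 0 for v in tam]   # shadow pixels come out darker (negative here)
```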
An improved hybrid model for molecular image denoising, proposed by NeST Software, can give molecular image output with better SNR.
Literature Survey on Image Deblurring Techniques (Editor IJCATR)
Image restoration and recognition are of great importance today. Face recognition becomes difficult for blurred and poorly illuminated images, and this is where recognition and restoration techniques come into the picture. Many methods have been proposed in this regard; this paper examines the different methods and technologies discussed so far, along with their merits and demerits.
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure (iosrjce)
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESIS (IJCSEA Journal)
We provide a solution to the problem of occlusion in images by removing the occluding region and filling in the gap left behind. Inpainting algorithms fail to fill occlusions when the occluding region is large, since both structure and texture are lost. We decompose the image into structure and texture images using a decomposition method based on the sparseness of the image. Sparse reconstruction of the decomposed images results in an inpainted image with all structures intact. Texture synthesis is then performed on the texture-only image. Finally, the structure and texture images are combined to obtain an image in which the occlusion is filled. The visual effectiveness of our algorithm is compared with that of other inpainting algorithms.
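A toy 1-D sketch of the structure-filling half of such a pipeline: missing samples are filled by repeatedly averaging their known neighbours (harmonic inpainting), a far simpler stand-in for the sparse reconstruction step; the texture-synthesis half is not shown.

```python
# Gauss-Seidel sweeps drive occluded samples to the harmonic (linear) fill.
def inpaint_1d(signal, known, iters=200):
    out = [s if k else 0.0 for s, k in zip(signal, known)]
    for _ in range(iters):
        for i in range(1, len(out) - 1):
            if not known[i]:
                out[i] = 0.5 * (out[i - 1] + out[i + 1])
    return out

sig = [0.0, 2.0, 0.0, 0.0, 8.0, 10.0]          # samples 2 and 3 are occluded
known = [True, True, False, False, True, True]
filled = inpaint_1d(sig, known)                 # converges to 4.0 and 6.0
```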
This is about image segmentation. We will be using fuzzy logic and wavelet transforms for the segmentation. Fuzzy logic shall be used because of the inconsistencies that may occur during segmenting or
Detection of hard exudates using simulated annealing based thresholding mecha... (csandit)
Diabetic retinopathy is a disease commonly found in patients with diabetes mellitus. It causes severe damage to the retina and may lead to complete or partial loss of vision. In diabetic retinopathy the retinal blood vessels get damaged, and protein- and fat-based particles leak out of the damaged vessels and are deposited in the intra-retinal space. They normally appear as whitish marks of various shapes and are called exudates. Exudates are a primary indication of diabetic retinopathy. As the changes caused by the disease are irreversible, it must be detected in its early stages to prevent visual loss. However, detecting exudates in the early stages by visual inspection alone is extremely difficult because of the small diameter of the human eye, whereas an efficient automated computerized system can detect the disease at a very early stage. In this paper we propose one such method.
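A sketch of simulated-annealing threshold search of the kind the title describes. The objective below is the classic between-class variance (an Otsu-style criterion), used as a stand-in since the paper's exact formulation is not reproduced here; bright exudates would fall in the above-threshold class.

```python
import math
import random

def between_class_variance(hist, t):
    total = sum(hist)
    w0 = sum(hist[:t])
    w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    m0 = sum(i * hist[i] for i in range(t)) / w0
    m1 = sum(i * hist[i] for i in range(t, len(hist))) / w1
    return w0 * w1 * (m0 - m1) ** 2

def anneal_threshold(hist, steps=500, temp=50.0, seed=1):
    rng = random.Random(seed)
    t = len(hist) // 2
    f = best_f = between_class_variance(hist, t)
    best_t = t
    for k in range(steps):
        cand = min(max(t + rng.choice([-2, -1, 1, 2]), 1), len(hist) - 1)
        fc = between_class_variance(hist, cand)
        # always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools
        if fc >= f or rng.random() < math.exp(-(f - fc) / (temp / (k + 1))):
            t, f = cand, fc
        if f > best_f:
            best_t, best_f = t, f
    return best_t

# bimodal toy histogram: dark background near bin 2, bright exudates near bin 7
hist = [0, 5, 20, 5, 0, 0, 4, 10, 4, 0]
t = anneal_threshold(hist)   # lands between the two modes
```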
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
MR image compression based on selection of mother wavelet and lifting based w... (ijma)
Magnetic Resonance (MR) imaging is a medical imaging technique that requires enormous amounts of data to be stored and transmitted for high-quality diagnostic applications. Various algorithms have been proposed to improve the performance of compression schemes. In this paper we extend commonly used algorithms to image compression and compare their performance. For the compression technique, we link different wavelet approaches, using traditional mother wavelets and lifting-based Cohen-Daubechies-Feauveau wavelets with low-pass filters of lengths 9 and 7 (CDF 9/7), with the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. A novel image quality index that highlights the shape of the histogram of the target image is introduced to assess compression quality. The index is intended to be used "in one go" in place of the traditional Universal Image Quality Index (UIQI); it offers extra information about the distortion between the original and compressed images compared with UIQI. The proposed index models image compression as a combination of four major factors: loss of correlation, luminance distortion, contrast distortion, and shape distortion. The index is easy to calculate and applicable in various image processing applications. One of our contributions is to demonstrate, based on the proposed quality indexes, that the choice of mother wavelet is very important for achieving superior wavelet compression performance. Experimental results show that the proposed image quality index plays a significant role in the quality evaluation of image compression on the open-source "BrainWeb: Simulated Brain Database (SBD)".
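A minimal sketch of a UIQI-style quality index built from three of the four factors listed above (loss of correlation, luminance distortion, contrast distortion); the fourth, histogram-shape distortion, is omitted, so this illustrates the index family, not the proposed index itself.

```python
# Correlation term * luminance closeness * contrast closeness; peaks at 1
# for identical inputs.
def quality_index(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return (4 * cxy * mx * my) / ((vx + vy) * (mx ** 2 + my ** 2))

orig = [10.0, 20.0, 30.0, 40.0]
perfect = quality_index(orig, orig)                      # identical images: 1
degraded = quality_index(orig, [12.0, 18.0, 33.0, 37.0])  # compressed: below 1
```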
A Review on Image Segmentation using Clustering and Swarm Optimization Techni... (IJSRD)
The process of dividing an image into multiple regions (sets of pixels) is known as image segmentation. It makes an image easier to evaluate; the objective of segmentation is to make the image simpler and more meaningful. This paper presents a survey of general image segmentation techniques, clustering algorithms, and optimization methods, together with a study of related research. The latest research in each class of segmentation methods is reviewed, including recent work on biologically inspired swarm optimization techniques such as the ant colony optimization algorithm, the particle swarm optimization algorithm, the artificial bee colony algorithm, and their hybridizations, which are applied in several fields.
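A tiny 1-D k-means sketch of clustering-based segmentation: pixels are grouped by intensity and each group is summarized by its mean. This is illustrative only; the surveyed swarm methods optimize essentially the same objective with different search strategies.

```python
# Lloyd iterations: assign each value to its nearest center, then recenter.
def kmeans_1d(values, centers, iters=10):
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            j = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[j].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

pixels = [12, 14, 11, 200, 205, 199, 13, 201]   # two intensity populations
centers = sorted(kmeans_1d(pixels, [0.0, 255.0]))
```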
ROBUST STATISTICAL APPROACH FOR EXTRACTION OF MOVING HUMAN SILHOUETTES FROM V... (ijitjournal)
Human pose estimation is one of the key problems in computer vision and has been studied extensively in recent years. Its significance lies in higher-level tasks of understanding human actions, such as recognizing anomalous actions in videos and many related applications. Human poses can be estimated by extracting silhouettes, since silhouettes are robust to variations and convey the shape of the human body. Common challenges include illumination changes and variation in environments and human appearances, so a robust method is needed. This paper presents a study and analysis of existing approaches for silhouette extraction and proposes a robust technique for extracting human silhouettes from video sequences. A statistical approach, the Gaussian Mixture Model (GMM), is combined with the HSV (Hue, Saturation, Value) color space to build a robust background model, which is used for background subtraction to produce foreground blobs, called human silhouettes. Morphological operations are then performed on the foreground blobs. The silhouettes obtained from this work can be used in further tasks associated with interpreting human actions and activities, such as human action classification, pose estimation, and action recognition.
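A hedged sketch of background subtraction with a single running Gaussian per pixel, a one-component stand-in for the full GMM-plus-HSV model described above: a pixel is flagged foreground when it deviates from the model by more than k standard deviations, and only background pixels update the model.

```python
# Per-pixel running Gaussian background model over 1-D "frames".
class BackgroundModel:
    def __init__(self, first_frame, alpha=0.1, var=25.0):
        self.mean = [float(p) for p in first_frame]
        self.var = [var] * len(first_frame)
        self.alpha = alpha

    def update(self, frame, k=2.5):
        mask = []
        for i, p in enumerate(frame):
            d = p - self.mean[i]
            fg = d * d > (k * k) * self.var[i]   # Mahalanobis-style test
            mask.append(fg)
            if not fg:  # only background pixels adapt the model
                self.mean[i] += self.alpha * d
                self.var[i] += self.alpha * (d * d - self.var[i])
        return mask

bg = BackgroundModel([50, 50, 50, 50])
mask = bg.update([50, 51, 180, 49])   # a bright foreground pixel at index 2
```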
An Efficient Thresholding Neural Network Technique for High Noise Densities E... (CSCJournals)
Medical images corrupted by high noise densities lose their usefulness for diagnosis and early detection. Thresholding neural networks (TNN) with a new class of smooth nonlinear functions have been widely used to improve the efficiency of denoising. This paper introduces a better solution for medical images in noisy environments, serving the early detection of breast cancer tumors. The proposed algorithm consists of two consecutive phases: image denoising, where an adaptive-learning TNN with remarkable time improvement and good image quality is introduced, and a semi-automatic segmentation that extracts suspicious regions of interest (ROIs) as an evaluation of the proposed technique. A data set is then used to show the algorithm's superior image quality and reduced complexity, especially in very noisy environments.
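A sketch of the smooth soft-threshold nonlinearity at the heart of thresholding neural networks; this differentiable form is a common choice in the TNN literature and an assumption here, not necessarily the paper's new function class.

```python
import math

# Tends to the hard soft-threshold sign(x)*max(|x|-t, 0) as lam -> 0, but
# stays smooth everywhere, so the threshold t can be learned by gradient descent.
def tnn_soft(x, t, lam=0.01):
    return x + 0.5 * (math.sqrt((x - t) ** 2 + lam)
                      - math.sqrt((x + t) ** 2 + lam))

coeffs = [0.05, -0.08, 2.0, -1.5]           # small values ~ noise, large ~ signal
den = [tnn_soft(c, t=0.5) for c in coeffs]  # noise shrinks to ~0, signal kept
```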
Remote sensing image fusion using contourlet transform with sharp frequency l... (Zac Darcy)
This paper addresses four aspects of remote sensing image fusion: i) the image fusion method, ii) quality analysis of fusion results, iii) effects of the image decomposition level, and iv) the importance of image registration. First, a new contourlet-based image fusion method is presented, an improvement over wavelet-based fusion; it is then used within the main fusion process to analyze the final fusion results. The fusion framework, scheme, and datasets used in the study are discussed in detail. Second, the quality of the fusion results is analyzed using various quantitative metrics for both spatial and spectral analyses. Our results indicate that the proposed contourlet-based fusion method performs better than conventional wavelet-based fusion in both respects. Third, we analyzed the effect of the image decomposition level and observed that a decomposition level of 3 produced better fusion results than either smaller or greater numbers of levels. Last, we created four fusion scenarios to examine the importance of image registration; feature-based registration using the edge features of the source images produced better fusion results than intensity-based registration.
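A sketch of a standard coefficient-level fusion rule used in transform-domain fusion: after both sources are decomposed, detail coefficients are merged by absolute-maximum selection and the low-pass band by averaging. Plain lists stand in for contourlet subbands here, purely for illustration of the rule, not the paper's full pipeline.

```python
# Absolute-maximum rule keeps whichever source has the stronger detail.
def fuse_details(a, b):
    return [x if abs(x) >= abs(y) else y for x, y in zip(a, b)]

# Averaging the approximation (low-pass) band preserves overall radiometry.
def fuse_approx(a, b):
    return [(x + y) / 2 for x, y in zip(a, b)]

pan_detail = [5.0, -0.2, 0.1]   # panchromatic band: strong spatial detail
ms_detail = [0.3, -4.0, 0.0]    # multispectral band
fused_detail = fuse_details(pan_detail, ms_detail)
fused_approx = fuse_approx([10.0, 12.0], [14.0, 12.0])
```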
Multiple Person Tracking with Shadow Removal Using Adaptive Gaussian Mixture ... (IJSRD)
Person detection in a video surveillance system is a major real-world concern, with applications such as abnormal event detection, congestion analysis, human gait characterization, fall detection, person identification, gender classification, and care for elderly people. In this algorithm we use the Gaussian Mixture Model (GMM), a popular background subtraction method, for multi-person detection because it gives real-time object detection. Since GMM alone is still not robust, multi-person tracking with shadow removal fills this gap: in this work, a HOG-LBP hybrid approach combined with the GMM algorithm is presented for multi-person tracking with shadow removal.
Image Splicing Detection involving Moment-based Feature Extraction and Classi... (IDES Editor)
In the modern age, the digital image has taken
the place of the original analog photograph, and the forgery
of digital images has become increasingly easy, and harder
to detect. Image splicing is the process of making a
composite picture by cutting and joining two or more
photographs. An approach to efficient image splicing
detection is proposed here. The spliced image often
introduces a number of sharp transitions such as lines,
edges and corners. Phase congruency is a sensitive measure
of these sharp transitions and is hence proposed as a
feature for splicing detection. Statistical moments of
characteristic functions of wavelet sub-bands have been
examined to detect the differences between the authentic
images and spliced images. Image splicing detection can be
treated as a two-class pattern recognition problem, which
builds the model using moment features and some other
parameters extracted from the given test image. Artificial
neural network (ANN) is chosen as a classifier to train and
test the given images.
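A sketch of the moment-feature idea described above: splicing leaves unusually sharp transitions, which show up in the statistical moments of high-pass coefficients. Finite differences approximate a wavelet-like high-pass band here, and the rows are illustrative; in the paper, such feature vectors feed the ANN classifier.

```python
# First differences as a crude high-pass band.
def highpass(row):
    return [row[i + 1] - row[i] for i in range(len(row) - 1)]

# Absolute moments of the high-pass coefficients as splicing features.
def moments(xs, orders=(1, 2, 3)):
    n = len(xs)
    return [sum(abs(x) ** k for x in xs) / n for k in orders]

smooth_row = [10, 12, 14, 16, 18]       # authentic-looking gradual transition
spliced_row = [10, 10, 10, 200, 200]    # pasted region: one sharp jump
f_auth = moments(highpass(smooth_row))
f_splice = moments(highpass(spliced_row))
```

The higher-order moments explode for the spliced row, which is exactly the separation a trained classifier exploits.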
Image deblurring based on spectral measures of whiteness (ijma)
Image deblurring is an ill-posed inverse problem: reconstructing the sharp image from a blurred observation. The process involves restoring high-frequency information from the blurred image and includes a learning technique that initially focuses on the main edges of the image and then gradually takes details into account. Because blind image deblurring is ill-posed, it has an infinite number of solutions, leading to an ill-conditioned blur operator, so regularization or prior knowledge on both the unknown image and the blur operator is needed. The performance of this optimization problem depends on the regularization parameter and the number of iterations, and in existing methods the iterations have to be stopped manually. In this paper, a new criterion is proposed to set both the number of iterations and the regularization parameter automatically. The proposed criterion yields, on average, an ISNR only 0.38 dB below what is obtained by manual stopping. The results obtained with synthetically blurred images are good, even when the blur operator is ill-conditioned and the blurred image is noisy.
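A sketch of the stopping idea behind whiteness criteria: when deblurring has converged, the residual should look like white noise, so its autocorrelation at nonzero lags should be near zero. The normalized lag-1..3 autocorrelation energy below is a simplified stand-in for the paper's spectral whiteness measures, with hand-picked residual vectors for illustration.

```python
# Small whiteness score -> residual looks like white noise -> stop iterating.
def whiteness(residual, max_lag=3):
    n = len(residual)
    m = sum(residual) / n
    r = [x - m for x in residual]
    r0 = sum(x * x for x in r)
    energy = sum(sum(r[i] * r[i + k] for i in range(n - k)) ** 2
                 for k in range(1, max_lag + 1))
    return energy / (r0 * r0)

structured = [1, -1, 1, -1, 1, -1, 1, -1]                  # leftover structure
noise_like = [0.5, 0.1, -0.6, 0.9, -0.2, -0.8, 0.4, -0.3]  # fixed noise sample
w_struct, w_noise = whiteness(structured), whiteness(noise_like)
```

An iterative scheme would keep updating while the score is high and stop once it drops below a calibrated level.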
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposureiosrjce
IOSR Journal of Electronics and Communication Engineering(IOSR-JECE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of electronics and communication engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in electronics and communication engineering. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
REMOVING OCCLUSION IN IMAGES USING SPARSE PROCESSING AND TEXTURE SYNTHESISIJCSEA Journal
We provide a solution to problem of occlusion in images by removing the occluding region and filling in the gap left behind. Inpainting algorithms fail in filling occlusions when the occluding region is large since there is loss of both structure and texture. We decompose the image into structure and texture images using a decomposition method based on sparseness of the image. The sparse reconstruction of the decomposed images result in an inpainted image with all the structures made intact. A texture synthesis is performed on the texture only image. Finally the structure and texture images are combined to get an image where the occlusion is filled. The performance of our algorithm in terms of visual effectiveness is compared with other algorithms used for inpainting.
This is about Image segmenting.We will be using fuzzy logic & wavelet transformation for segmenting it.Fuzzy logic shall be used because of the inconsistencies that may occur during segementing or
Detection of hard exudates using simulated annealing based thresholding mecha...csandit
Diabetic retinopathy is a disease commonly found in case of diabetes mellitus patients. It causes
severe damage to retina and may lead to complete or partial visual loss. In case of diabetic
retinopathy retinal blood vessel gets damaged and protein and fat based particles gets leaked
out of the damaged blood vessels and are deposited in the intra-retinal space. They are
normally seen as whitish marks of various shape and are called as exudates. Exudates are
primary indication of diabetic retinopathy. As changes occurs due to the disease is irreversible
in nature, the disease must be detected in early stages to prevent visual loss. But detection of
exudates in early stages of the disease is extremely difficult only by visual inspection because of
small diameter of human eye. But an efficient automated computerized system can have the
ability to detect the disease in very early stage. In this paper we have proposed one such
method.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Mr image compression based on selection of mother wavelet and lifting based w...ijma
Magnetic Resonance (MR) image is a medical image technique required enormous data to be stored and
transmitted for high quality diagnostic application. Various algorithms have been proposed to improve the
performance of the compression scheme. In this paper we extended the commonly used algorithms to image
compression and compared its performance. For an image compression technique, we have linked different
wavelet techniques using traditional mother wavelets and lifting based Cohen-Daubechies-Feauveau
wavelets with the low-pass filters of the length 9 and 7 (CDF 9/7) wavelet transform with Set Partition in
Hierarchical Trees (SPIHT) algorithm. A novel image quality index with highlighting shape of histogram
of the image targeted is introduced to assess image compression quality. The index will be used in place of
existing traditional Universal Image Quality Index (UIQI) “in one go”. It offers extra information about
the distortion between an original image and a compressed image in comparisons with UIQI. The proposed
index is designed based on modelling image compression as combinations of four major factors: loss of
correlation, luminance distortion, contrast distortion and shape distortion. This index is easy to calculate
and applicable in various image processing applications. One of our contributions is to demonstrate the
choice of mother wavelet is very important for achieving superior wavelet compression performances based
on proposed image quality indexes. Experimental results show that the proposed image quality index plays
a significantly role in the quality evaluation of image compression on the open sources “BrainWeb:
Simulated Brain Database (SBD) ”.
A Review on Image Segmentation using Clustering and Swarm Optimization Techni...IJSRD
The process of dividing an image into multiple regions (set of pixels) is known as Image segmentation. It will make an image easy and smooth to evaluate. Image segmentation objective is to generate image more simple and meaningful. In this paper present a survey on image segmentation general segmentation techniques, clustering algorithms and optimization methods. Also a study of different research also been presented. The latest research in each of image segmentation methods is presented in this study. This paper presents the recent research in biologically inspired swarm optimization techniques, including ant colony optimization algorithm, particle swarm optimization algorithm, artificial bee colony algorithm and their hybridizations, which are applied in several fields.
ROBUST STATISTICAL APPROACH FOR EXTRACTION OF MOVING HUMAN SILHOUETTES FROM V...ijitjournal
Human pose estimation is one of the key problems in computer visionthat has been studied in the recent
years. The significance of human pose estimation is in the higher level tasks of understanding human
actions applications such as recognition of anomalous actions present in videos and many other related
applications. The human poses can be estimated by extracting silhouettes of humans as silhouettes are
robust to variations and it gives the shape information of the human body. Some common challenges
include illumination changes, variation in environments, and variation in human appearances. Thus there
is a need for a robust method for human pose estimation. This paper presents a study and analysis of
approaches existing for silhouette extraction and proposes a robust technique for extracting human
silhouettes in video sequences. Gaussian Mixture Model (GMM) A statistical approach is combined with
HSV (Hue, Saturation and Value) color space model for a robust background model that is used for
background subtraction to produce foreground blobs, called human silhouettes. Morphological operations
are then performed on foreground blobs from background subtraction. The silhouettes obtained from this
work can be used in further tasks associated with human action interpretation and activity processes like
human action classification, human pose estimation and action recognition or action interpretation.
An Efficient Thresholding Neural Network Technique for High Noise Densities E...CSCJournals
Medical images when infected with high noise densities lose usefulness for diagnosis and early detection purposes. Thresholding neural networks (TNN) with a new class of smooth nonlinear function have been widely used to improve the efficiency of the denoising procedure. This paper introduces better solution for medical images in noisy environments which serves in early detection of breast cancer tumor. The proposed algorithm is based on two consecutive phases. Image denoising, where an adaptive learning TNN with remarkable time improvement and good image quality is introduced. A semi-automatic segmentation to extract suspicious regions or regions of interest (ROIs) is presented as an evaluation for the proposed technique. A set of data is then applied to show algorithm superior image quality and complexity reduction especially in high noisy environments.
Remote sensing image fusion using contourlet transform with sharp frequency l...Zac Darcy
This paper addresses four different aspects of the remote sensing image fusion: i) image fusion method, ii)
quality analysis of fusion results, iii) effects of image decomposition level, and iv) importance of image
registration. First, a new contourlet-based image fusion method is presented, which is an improvement
over the wavelet-based fusion. This fusion method is then utilized withinthe main fusion process to analyze
the final fusion results. Fusion framework, scheme and datasets used in the study are discussed in detail.
Second, quality analysis of the fusion results is discussed using various quantitative metrics for both spatial
and spectral analyses. Our results indicate that the proposed contourlet-based fusion method performs
better than the conventional wavelet-based fusion methodsin terms of both spatial and spectral analyses.
Third, we conducted an analysis on the effects of the image decomposition level and observed that the
decomposition level of 3 produced better fusion results than both smaller and greater number of levels.
Last, we created four different fusion scenarios to examine the importance of the image registration. As a
result, the feature-based image registration using the edge features of the source images produced better
fusion results than the intensity-based imageregistration.
Multiple Person Tracking with Shadow Removal Using Adaptive Gaussian Mixture ...IJSRD
Person detection in a video surveillance system is major concern in real world. Several application likes abnormal event detection, congestion analysis, human gait characterization, fall detection, person identification, gender classification and for elderly people. In this algorithm, we use GMM method in background subtraction for multi person detection because of Gaussian Mixture Model (GMM) model is one such popular method this give a real time object detection. There is still not robustly but Multi person tracking with shadow removal fill this gap, in this work, HOG-LBP hybrid approach with GMM algorithm is presented for Multi person tracking with Shadow removal.
Image Splicing Detection involving Moment-based Feature Extraction and Classi...IDES Editor
In the modern age, the digital image has taken the place of the original analog photograph, and the forgery of digital images has become increasingly easy and harder to detect. Image splicing is the process of making a composite picture by cutting and joining two or more photographs. An approach to efficient image splicing detection is proposed here. The spliced image often introduces a number of sharp transitions such as lines, edges and corners. Phase congruency is a sensitive measure of these sharp transitions and is hence proposed as a feature for splicing detection. Statistical moments of characteristic functions of wavelet sub-bands have been examined to detect the differences between authentic images and spliced images. Image splicing detection can be treated as a two-class pattern recognition problem, which builds the model using moment features and some other parameters extracted from the given test image. An artificial neural network (ANN) is chosen as a classifier to train and test the given images.
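The moment features described above can be sketched as follows; the wavelet here is a one-level Haar decomposition and the bin count is an illustrative choice, not necessarily what the paper uses:

```python
import numpy as np

def haar_subbands(img):
    """One-level Haar decomposition into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    return (a+b+c+d)/4, (a+b-c-d)/4, (a-b+c-d)/4, (a-b-c+d)/4

def cf_moment(subband, order=1, bins=128):
    """n-th statistical moment of the characteristic function,
    i.e. the DFT of the sub-band histogram."""
    hist, _ = np.histogram(subband, bins=bins)
    cf = np.abs(np.fft.fft(hist))[: bins // 2]    # one-sided magnitude
    freqs = np.arange(bins // 2) / bins
    return float(np.sum(freqs**order * cf) / np.sum(cf))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64))
_, _, _, hh = haar_subbands(img)
m1 = cf_moment(hh, order=1)
```

Splicing tends to alter the tail behaviour of sub-band histograms, which shifts these characteristic-function moments; a classifier is then trained on the moment vector.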
Image deblurring based on spectral measures of whitenessijma
Image deblurring is an ill-posed inverse problem whose goal is to reconstruct the sharp image from a blurred observation. This process involves restoring high-frequency information from the blurred image. It includes a learning technique that initially focuses on the main edges of the image and then gradually takes details into account. As blind image deblurring is ill-posed, it has an infinite number of solutions, leading to an ill-conditioned blur operator. So regularization, or prior knowledge on both the unknown image and the blur operator, is needed to address this problem. The performance of this optimization problem depends on the regularization parameter and the iteration number. In existing methods the iterations have to be stopped manually. In this paper, a new idea is proposed to set the number of iterations and the regularization parameter automatically. The proposed criterion yields, on average, an ISNR only 0.38 dB below what is obtained by manual stopping. The results obtained with synthetically blurred images are good, even when the blur operator is ill-conditioned and the blurred image is noisy.
Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed l_1/l_2 Regular...Laurent Duval
The l1/l2 ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works, in the context of blind deconvolution. Indeed, it benefits from a scale invariance property much desirable in the blind context. However, the l1/l2 function raises some difficulties when solving the nonconvex and nonsmooth minimization problems resulting from the use of such a penalty term in current restoration methods. In this paper, we propose a new penalty based on a smooth approximation to the l1/l2 function. In addition, we develop a proximal-based algorithm to solve variational problems involving this function and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact l1/l2 term, on an application to seismic data blind deconvolution.
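One way to smooth the l1/l2 ratio, in the spirit of the paper (the exact smoothing terms used there may differ), is to replace both norms with differentiable surrogates:

```python
import numpy as np

def smoothed_l1(x, alpha=1e-2):
    """Differentiable surrogate of ||x||_1."""
    return np.sum(np.sqrt(x**2 + alpha**2) - alpha)

def smoothed_l1l2(x, alpha=1e-2, beta=1e-2, eta=1e-2):
    """Smooth log-ratio surrogate of the l1/l2 penalty; sparser signals
    score lower.  The smoothing constants here are illustrative."""
    return float(np.log((smoothed_l1(x, alpha) + beta)
                        / np.sqrt(np.sum(x**2) + eta**2)))

x_sparse = np.array([1.0, 0.0, 0.0, 0.0])   # same l2 norm as x_dense
x_dense = np.full(4, 0.5)
```

Because both numerator and denominator scale (almost) linearly with the signal amplitude, the ratio retains the scale-invariance that makes l1/l2 attractive for blind deconvolution, while the smoothing makes it amenable to proximal algorithms.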
Image Deblurring using L0 Sparse and Directional FiltersCSCJournals
Blind deconvolution refers to the process of recovering the original image from the blurred image when the blur kernel is unknown. This is an ill-posed problem which requires regularization to solve. The naive MAP approach for solving the blind deconvolution problem was found to favour no-blur solution which in turn led to its failure. It is noted that the success of the further developed successful MAP based deblurring methods is due to the intermediate steps in between, which produces an image containing only salient image structures. This intermediate image is essentially called the unnatural representation of the image. L0 sparse expression can be used as the regularization term to effectively develop an efficient optimization method that generates unnatural representation of an image for kernel estimation. Further, the standard deblurring methods are affected by the presence of image noise. A directional filter incorporated as an initial step to the deblurring process makes the method efficient to be used for blurry as well as noisy images. Directional filtering along with L0 sparse regularization gives a good kernel estimate in spite of the image being noisy. In the final image restoration step, a method to give a better result with lesser artifacts is incorporated. Experimental results show that the proposed method recovers a good quality image from a blurry and noisy image.
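The "unnatural representation" idea can be illustrated with a single hard-thresholding pass on image gradients; the real method alternates such a step with a quadratic solve, and the threshold below is arbitrary:

```python
import numpy as np

def l0_gradient_threshold(img, lam=0.02):
    """Keep only salient gradients: the hard-thresholding step that an
    L0 gradient prior induces (one pass of the alternating scheme)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    keep = gx**2 + gy**2 > lam        # L0 penalty zeroes small gradients
    return gx * keep, gy * keep

step = np.zeros((8, 8))
step[:, 4:] = 1.0                     # a single vertical edge
gx, gy = l0_gradient_threshold(step)
```

Small-magnitude gradients (noise, fine texture) are set to zero while strong edges survive, which is exactly the kind of intermediate image that stabilizes kernel estimation.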
Image Denoising Based On Sparse Representation In A Probabilistic FrameworkCSCJournals
Image denoising is an interesting inverse problem. By denoising we mean finding a clean image, given a noisy one. In this paper, we propose a novel image denoising technique based on the generalized k density model as an extension to the probabilistic framework for solving image denoising problem. The approach is based on using overcomplete basis dictionary for sparsely representing the image under interest. To learn the overcomplete basis, we used the generalized k density model based ICA. The learned dictionary used after that for denoising speech signals and other images. Experimental results confirm the effectiveness of the proposed method for image denoising. The comparison with other denoising methods is also made and it is shown that the proposed method produces the best denoising effect.
Analysis of Image Super-Resolution via Reconstruction Filters for Pure Transl...CSCJournals
In this work, a special case of the image super-resolution problem where the only type of motion is global translational motion and the blurs are shift-invariant is investigated. The necessary conditions for exact reconstruction of the original image by using finite impulse-response reconstruction filters are investigated and determined. If the number of available low-resolution images is larger than a threshold and the blur functions meet a certain property, a reconstruction filter set for perfect image super-resolution can be generated even in the absence of motion. Given that the conditions are satisfied, a method for exact super-resolution is presented to validate the analysis results and it is shown that for the fully determined case, perfect reconstruction of the original image is achieved. Finally, some realistic conditions that make the super-resolution problem ill-posed are treated and their effects on exact super-resolution are discussed.
NUMBER PLATE IMAGE DETECTION FOR FAST MOTION VEHICLES USING BLUR KERNEL ESTIM...paperpublications3
Abstract: Many recent advancements have been introduced in both hardware and software technologies; among these, image processing has attracted a great deal of interest. As the unique identifier of a vehicle, the number plate is a key clue for discovering stolen and over-speeding vehicles. The images captured from the camera are usually of low resolution and suffer severe loss of edge information, which poses a great challenge to existing blind deblurring methods. The blur kernel can be modeled as a linear uniform convolution parameterized by angle and length. In this paper, sparse representation is used to identify the blur kernel, and the length of the motion kernel is then estimated with the Radon transform in the Fourier domain. We evaluate our approach on real-world images and compare it with several popular blind image deblurring algorithms. The results obtained demonstrate the efficacy of our proposed approach.
Keywords: Image Segmentation, Angle Estimation, Length Estimation, Deconvolution Algorithm, Artificial Neural Network.
Title: NUMBER PLATE IMAGE DETECTION FOR FAST MOTION VEHICLES USING BLUR KERNEL ESTIMATION AND ANN
Author: S.KALAIVANI, K.PRAVEENA, T.PREETHI, N.PUNITHA
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
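A hedged sketch of the Fourier-domain Radon idea: linear motion blur imprints a periodic ripple on the magnitude spectrum, so projecting the log-spectrum at candidate angles and keeping the most strongly varying projection gives a blur-direction estimate (the paper's exact estimator and the angle convention here are assumptions):

```python
import numpy as np
from scipy.ndimage import rotate, uniform_filter1d

def estimate_blur_angle(blurred, angles=np.arange(0, 180, 5)):
    """Project the log magnitude spectrum at each candidate angle (a crude
    discrete Radon transform) and keep the angle whose projection varies
    the most, since motion blur ripples the spectrum along one direction."""
    spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
    variances = []
    for a in angles:
        proj = rotate(spec, float(a), reshape=False, order=1).sum(axis=0)
        variances.append(proj.var())
    return float(angles[int(np.argmax(variances))])

rng = np.random.default_rng(2)
img = rng.random((64, 64))
blurred = uniform_filter1d(img, size=9, axis=1)   # horizontal motion blur
angle = estimate_blur_angle(blurred)
```

Once the angle is known, the spacing of the spectral zeros along that direction gives the blur length, which is the second parameter the abstract mentions.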
Boosting CED Using Robust Orientation Estimationijma
In this paper, Coherence Enhancement Diffusion (CED) is boosted by feeding it an external orientation obtained from a new robust orientation estimation. In CED, proper scale selection is very important, as the gradient vector at that scale reflects the orientation of the local ridge. For this purpose, a new scheme is proposed in which the orientation is pre-calculated using local and integration scales. Experiments show that the proposed scheme performs much better in noisy environments than the traditional Coherence Enhancement Diffusion.
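The two-scale orientation estimate that CED relies on can be sketched with the classical structure tensor, using a local scale for the gradients and an integration scale for averaging (an illustrative stand-in for the paper's robust estimator):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_orientation(img, local_sigma=1.0, integration_sigma=4.0):
    """Ridge orientation from the structure tensor: gradients at the local
    scale, tensor entries averaged at the integration scale."""
    gy, gx = np.gradient(gaussian_filter(img, local_sigma))
    jxx = gaussian_filter(gx * gx, integration_sigma)
    jyy = gaussian_filter(gy * gy, integration_sigma)
    jxy = gaussian_filter(gx * gy, integration_sigma)
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)   # dominant direction

stripes = np.tile(np.sin(0.8 * np.arange(32)), (32, 1))  # vertical stripes
theta = ridge_orientation(stripes)
```

Averaging the tensor entries at the integration scale, rather than the gradients themselves, is what makes the estimate robust to noise: opposing gradient signs on either side of a ridge reinforce rather than cancel.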
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONIJCI JOURNAL
The availability of imaging sensors operating in multiple spectral bands has led to the requirement of image fusion algorithms that combine the images from these sensors efficiently to give an image that is more informative as well as perceptible to the human eye. Multispectral image fusion is the process of combining optically acquired images from different spectral bands. In this paper, we use pixel-level image fusion based on principal component analysis (PCA) to combine satellite images of the same scene from seven different spectral bands. PCA is chosen because it is well suited to grayscale image fusion and gives good results. The main aim of the PCA technique is to reduce a large set of variables to a small set that still contains most of the information present in the large set. The paper compares different parameters, namely entropy, standard deviation, correlation coefficient, etc., for different numbers of fused images, from two to seven. Finally, the paper shows that the information content in an image becomes saturated after fusing four images.
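The PCA fusion rule reduces to weighting each co-registered band by the components of the leading eigenvector of the band covariance matrix; a minimal sketch:

```python
import numpy as np

def pca_fuse(images):
    """Pixel-level PCA fusion: weight each co-registered band by the
    leading eigenvector of the band covariance matrix."""
    stack = np.stack([im.astype(float) for im in images])
    cov = np.cov(stack.reshape(len(images), -1))
    _, vecs = np.linalg.eigh(cov)                 # eigenvalues ascending
    w = np.abs(vecs[:, -1])
    w /= w.sum()                                  # nonnegative, sum to 1
    return np.tensordot(w, stack, axes=1)

rng = np.random.default_rng(3)
band = rng.random((32, 32))
fused = pca_fuse([band, band])                    # identical bands
```

The band with the largest variance contribution receives the largest weight, so the fused image concentrates the energy of the first principal component.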
Performance Comparison of PCA,DWT-PCA And LWT-PCA for Face Image RetrievalCSEIJJournal
This paper compares the performance of face image retrieval systems based on discrete wavelet transforms (DWT) and lifting wavelet transforms (LWT) combined with principal component analysis (PCA). These techniques are implemented and their performance is investigated using frontal facial images from the ORL database. Although the Discrete Wavelet Transform is effective in representing image features and is suitable for face image retrieval, it still encounters implementation problems, e.g., floating-point operations and decomposition speed. We use the advantages of the lifting scheme, a spatial approach for constructing wavelet filters, which provides a feasible alternative to the problems facing its classical counterpart. The lifting scheme has such attractive properties as convenient construction, simple structure, integer-to-integer transform, low computational complexity and flexible adaptivity, revealing its potential in face image retrieval. Compared to PCA and DWT-PCA, the lifting wavelet transform with PCA requires less computation, while DWT-PCA gives a higher retrieval rate.
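The integer-to-integer property of the lifting scheme is easy to see in the Haar case: a predict step and an update step, both invertible exactly in integer arithmetic:

```python
import numpy as np

def lifting_haar_fwd(x):
    """Forward integer Haar transform via lifting: predict then update."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even                # predict: detail coefficients
    s = even + (d >> 1)           # update: floor-average approximation
    return s, d

def lifting_haar_inv(s, d):
    """Exact integer inverse: undo the update, then undo the predict."""
    even = s - (d >> 1)
    odd = even + d
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([10, 13, 25, 7, 0, 255, 3, 3])
s, d = lifting_haar_fwd(x)
```

Because each lifting step only adds a function of the other channel, the inverse simply subtracts it back, giving perfect reconstruction with no floating-point arithmetic, which is the implementation advantage the abstract highlights over the classical DWT.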
An ensemble classification algorithm for hyperspectral imagessipij
Hyperspectral image analysis has been used for many purposes in environmental monitoring, remote sensing, vegetation research and land cover classification. A hyperspectral image consists of many layers, each representing a specific wavelength; the layers stack on top of one another, making a cube-like image covering the entire spectrum. This work aims to classify hyperspectral images and to produce an accurate thematic map. Spatial information of the hyperspectral images is collected by applying morphological profiles and local binary patterns. The support vector machine is an efficient algorithm for classifying hyperspectral images, and a genetic algorithm is used to obtain the best feature subset for classification. The selected features are classified to obtain the classes and produce a thematic map. Experiments are carried out with the AVIRIS Indian Pines and ROSIS Pavia University datasets. The proposed method achieves accuracies of 93% for Indian Pines and 92% for Pavia University.
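The local binary pattern feature mentioned above assigns each pixel an 8-bit code from its 3x3 neighbourhood; a basic (non-rotation-invariant) version:

```python
import numpy as np

def lbp_3x3(img):
    """8-bit local binary pattern code for each interior pixel: one bit
    per neighbour, set when the neighbour is >= the centre."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

codes = lbp_3x3(np.full((5, 5), 7))   # flat region: every bit is set
```

Histograms of these codes over a window form a texture descriptor, which is the spatial information fed alongside the morphological profile into the classifier.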
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDYcsandit
The majority of applications require high-resolution images to derive and analyze data accurately and easily, and image super-resolution plays an effective role in those applications. Image super-resolution is the process of producing a high-resolution image from a low-resolution image. In this paper, we study various image super-resolution techniques with respect to result quality and processing time. This comparative study compares four single-image super-resolution algorithms. For a fair comparison, the algorithms are tested on the same dataset and the same platform to show the major advantages of one over the others.
Single Image Super-Resolution Using Analytical Solution for L2-L2 Algorithmijtsrd
This paper addresses a unified framework for achieving single image super-resolution, which consists of recovering a high-resolution image from a blurred, decimated and noisy version. Single image super-resolution is also known as image enhancement or image scaling up. In this paper, four main steps are used for enhancing single image resolution: taking the input image, downsampling the image, applying an analytical solution, and L2 regularization. The decimation and blurring operators are handled through their particular properties in the frequency domain, which leads to a fast super-resolution approach, and an analytical solution is obtained and implemented for the L2 regularization, i.e., the L2-L2 optimized algorithm. This aims to reduce the computational cost of existing methods. Simulation results were taken on different images and different priors with an advanced machine learning technique, and the results were compared with the existing method. Varsha Patil | Meharunnisa SP "Single Image Super-Resolution Using Analytical Solution for L2-L2 Algorithm" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-4, June 2018, URL: http://www.ijtsrd.com/papers/ijtsrd15635.pdf http://www.ijtsrd.com/engineering/electronics-and-communication-engineering/15635/single-image-super-resolution-using-analytical-solution-for-l2-l2-algorithm/varsha-patil
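The frequency-domain trick behind such L2-L2 solutions is that, without decimation, the regularized problem diagonalizes under the DFT and admits a Wiener-like closed form; the decimation operator, which the paper also treats analytically, is omitted in this sketch:

```python
import numpy as np

def l2_deconv(y, h, lam=1e-2):
    """Tikhonov (L2-L2) deconvolution in closed form via the 2-D DFT:
    x = F^-1[ conj(H) * Y / (|H|^2 + lam) ]."""
    H = np.fft.fft2(h, s=y.shape)                 # zero-padded kernel DFT
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H)**2 + lam)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(4)
y = rng.random((16, 16))
x = l2_deconv(y, np.array([[1.0]]), lam=1e-8)     # identity-blur sanity check
```

Because every DFT coefficient is solved independently, the cost is dominated by two FFTs, which is what makes the analytical solution fast compared with iterative solvers.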
Survey on Single image Super Resolution TechniquesIOSR Journals
Super-resolution is the process of recovering a high-resolution image from multiple low-resolution images of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image based on a set of images, acquired from the same scene and denoted as 'low-resolution' images, to overcome the limitations and/or ill-posed conditions of the image acquisition process, thereby facilitating better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm, iterative back projection. We critique these methods and identify areas which promise performance improvements. Future directions for super-resolution algorithms are discussed, and finally the results of available methods are given.
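Iterative back projection, the dominant algorithm covered in the review, refines a high-resolution estimate by simulating the low-resolution observation and back-projecting the error; a minimal sketch with plain subsampling and no blur:

```python
import numpy as np

def ibp_upscale(lr, scale=2, iters=30, step=0.5):
    """Iterative back projection with plain subsampling as the forward
    model: repeatedly upsample the observation error into the estimate."""
    hr = np.zeros((lr.shape[0] * scale, lr.shape[1] * scale))
    for _ in range(iters):
        err = lr - hr[::scale, ::scale]           # simulate LR and compare
        hr += step * np.kron(err, np.ones((scale, scale)))
    return hr

rng = np.random.default_rng(5)
lr = rng.random((8, 8))
hr = ibp_upscale(lr)
```

In the full algorithm the forward model includes the blur and warp from the observation model, and errors from several low-resolution frames are back-projected jointly; this sketch keeps only the fixed-point iteration at the method's core.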
Uniform and non-uniform single image deblurring based on sparse representation and adaptive dictionary learning
The International Journal of Multimedia & Its Applications (IJMA) Vol.6, No.1, February 2014
DOI: 10.5121/ijma.2014.6104

UNIFORM AND NON-UNIFORM SINGLE IMAGE DEBLURRING BASED ON SPARSE REPRESENTATION AND ADAPTIVE DICTIONARY LEARNING

Ashwini M. Deshpande¹ and Suprava Patnaik²

¹ Department of Electronics and Telecommunication Engineering, TSSM's Bhivarabai Sawant College of Engineering and Research, Narhe, Pune (MS), India
² Department of Electronics Engineering, S. V. National Institute of Technology, Surat
ABSTRACT
Considering the sparseness property of images, a sparse representation based iterative deblurring method is presented for single image deblurring under uniform and non-uniform motion blur. The approach taken is based on sparse and redundant representations over dictionaries adaptively trained from the single blurred-noisy image itself. Further, the K-SVD algorithm is used to obtain a dictionary that describes the image contents effectively. Comprehensive experimental evaluation demonstrates that the proposed framework, integrating the sparseness property of images, adaptive dictionary training and an iterative deblurring scheme, significantly improves the deblurring performance, is comparable with state-of-the-art deblurring algorithms, and offers a powerful solution to an ill-conditioned inverse problem.
KEYWORDS
Image Deblurring, Non-Uniform Blur, Sparse Representation, Adaptive Dictionary
1. INTRODUCTION
Single image deblurring is one of the most active research areas in image processing. Being an inherently ill-posed problem, seeking a solution in terms of the correct pair of an unknown degradation function and the original image from a blurred and noisy observation is quite challenging, as multiple combinations of these two unknowns might have resulted in the available blurred-noisy image.
A high-quality image may undergo blurring due to atmospheric blur (in astronomy), defocusing, camera-shake blur (in photography), or other sources. Typically, motion blur is caused by relative motion between the camera and the imaged object during the exposure time. In low ambient light, a slower shutter speed is necessary to increase the exposure time, so camera shake is likely and degrades image quality significantly. Complex motion paths, as in the case of camera shake, are particularly challenging because the blur kernels are arbitrary and no parametric model can represent them.
A wide range of deblurring algorithms address this problem; they either remove simple motion blur or require user interaction for more complex cases. Only a few algorithms have succeeded in giving a satisfactory solution in the presence of both non-uniform camera motion paths and additive noise. Our proposed algorithm, based on sparse representation of images and adaptive dictionary learning, is a successful approach to removing uniform and non-uniform motion blur from a single blurred-noisy image. Comparative performance analysis against some of the leading deblurring algorithms proves the effectiveness of the proposed method in handling blurs such as linear blur, camera shake and hand shake.
Although the degradation process, in general, is nonlinear and space varying, a large number of problems can be addressed with a linear and shift-invariant (LSI) model. Hence, assuming the blur to be translation invariant and pixel-location independent, the observation process of the imaging system can be modelled as

g(x, y) = f(x, y) ⊗ h(x, y) + n(x, y)    (1)

where ⊗ is the 2-D convolution operator, f is the unknown original image, h is the PSF of the image formation system, g is the observed blurred-noisy image, and n denotes additive noise. This convolutional model approximates many real-world blurring processes very well and also provides a simplified computational form in the frequency domain. Thus, the task of single image deblurring is to restore a sharp image from a single blurred and noisy observation.
The image formation process can also be modelled in matrix-vector form or in the frequency domain. Defining g, h, f, and n as the vector versions of g(x, y), h(x, y), f(x, y) and n(x, y), respectively, the matrix-vector formulation is

g = Hf + n    (2)

where H is a two-dimensional sparse matrix with elements taken from h(x, y) so as to give the proper mapping from f to g. Equivalently, the Fourier-domain description of the imaging model is

G(u, v) = H(u, v)F(u, v) + N(u, v)    (3)

where G(u, v), F(u, v), H(u, v) and N(u, v) are the Fourier transforms of g(x, y), f(x, y), h(x, y) and n(x, y), respectively. In general, frequency-domain analysis of an imaging system provides additional cues about its behaviour. In our work, the matrix-vector form is used when referring to the sparse representation of the observed blurred-noisy image and, for convenience, the frequency-domain model is used for the iterative image deblurring part.
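The three model forms above can be exercised in a few lines of NumPy. The following sketch uses assumed toy values (a 32 × 32 random image and a 3 × 3 uniform PSF, not the paper's 256 × 256 setting) and simulates Eq. (1) through the Fourier-domain product of Eq. (3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 sharp image f and a 3x3 uniform blur PSF h
# (toy sizes chosen for illustration only).
f = rng.random((32, 32))
h = np.zeros((32, 32))
h[:3, :3] = 1.0 / 9.0                      # PSF embedded in the image grid

# Eq. (3): spatial convolution is a pointwise product in the Fourier domain.
F = np.fft.fft2(f)
H = np.fft.fft2(h)
g_clean = np.real(np.fft.ifft2(H * F))     # g = f (circularly convolved with) h

# Eq. (1): the observation also carries additive zero-mean Gaussian noise n.
g = g_clean + 0.5 * rng.standard_normal(f.shape)
```

The FFT route produces a circular convolution, which is the standard discrete counterpart of the LSI model.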
The rest of the paper is organized as follows. Section 2 presents a brief review of recent advances in image deblurring techniques. Problem formulation, the sparse model for image deblurring and the dictionary learning process are covered in Section 3. Section 4 discusses the iterative image deblurring approach taken in the proposed algorithm, whereas the complete proposed single image deblurring technique, SparseD, is put forth in Section 5. The experimental set-up as well as subjective and objective results compared with state-of-the-art methods are presented in Section 6, and conclusions are provided in Section 7.
2. RELATED WORK
For image deblurring, if the Point Spread Function (PSF) is known a priori, the task becomes a non-blind deconvolution problem. Several models and algorithms have been proposed to solve it; the Wiener filter [1] and the Lucy-Richardson method [2, 3] are two popular non-blind deconvolution methods. Chan proposed a TV-L model in [4], and Neelamani et al. [5] proposed the ForWaRD method, which combines Fourier and wavelet techniques to seek a solution to this problem.
If the PSF is unknown, the problem is termed blind deconvolution or blind deblurring. Early work on blind deblurring usually considered a single image and assumed a prior parametric form of the blur kernel, so that the kernel could be obtained by estimating only a few parameters. The linear motion blur kernel model used in these works is often too simplistic, whereas true motion blur kernels in practice often take an arbitrary form. To handle more complex motion blur, image-pair [6] or multi-image based approaches have been proposed, which obtain more information about the blur kernel by actively or passively capturing multiple images of the scene [7, 8].
Recently, progress has been made on removing complex motion blur from a single image. The algorithms fall into two typical categories: Bayesian (probabilistic) and regularization based. Bayesian-framework algorithms use probabilistic priors on the latent image, on the blur kernels or on the image edge distribution to derive the blur kernel [9, 10, 11], or manually select blurred edges to obtain a local blur kernel [12]. A possible limitation of this type of method is that the assumed probabilistic priors may not always hold for general natural images. The other popular approach formulates blind deconvolution as a joint minimization problem with regularization constraints on both the blur kernel and the latent image. Among existing regularization-based methods, the TV (Total Variation) norm and its variations have been the dominant choice of regularization term for various blind deblurring problems [13]. Shan et al. [14] presented a more sophisticated minimization model in which the fidelity term is a weighted l2 norm on the similarity of both image intensity and image gradients.
Sparse representation has recently attracted the image processing community for solving problems such as deblurring, denoising and super resolution. The sparsity principle approximates signals as a linear combination of a few dictionary elements. The sparsity of signal representations has also been supported by studies of human vision; Olshausen et al. [15] represented a natural image using a small number of basis functions [16]. The sparse representation problem is usually solved through l0 (generally intractable) or l1 (tractable) minimization with non-negativity constraints. l0-norm and l1-norm minimization techniques have been successfully applied to many vision tasks, including face recognition [17] and image super resolution [16, 18]. Also, with the growing realization that regular separable 1-D wavelets are inappropriate for handling images, several new tailored multi-scale and directional redundant transforms have been introduced for designing suitable dictionaries, including the curvelet, contourlet, wedgelet, bandlet and the steerable wavelet [19, 20].
In [21], the authors introduced the K-SVD algorithm, a way to learn a dictionary rather than use a predefined one [22]. In parallel, the introduction of matching pursuit [23] and basis pursuit denoising [24] made it possible to address image denoising as a direct sparse decomposition over redundant dictionaries. Begovic et al. [25] extended the K-SVD dictionary learning concept from non-scalable to scalable adaptive image reconstruction by regularizing the learning process of the dictionary elements.
Based on the high sparsity of the image in the framelet system and of the motion-blur kernel in the curvelet system, Cai et al. [26] presented an algorithm to remove camera shake from a single image; however, ringing is clearly visible in their results. Whyte et al. [27] proposed a geometrically motivated model of non-uniform image blur due to camera shake, which was substituted into the single image deblurring algorithm of Fergus et al. [9] to verify its performance. Our proposed method, based on sparse image modelling, shows comparable results to this method with less complexity. Recently, a new spatially adaptive penalty function was derived in [28] for non-uniform image deblurring. An image restoration method based on sparse representation with a learned dictionary is also reported by Huang et al. [29], although only linear motion blur is considered there; while similar to our approach, the deblurring results of our proposed method are superior to those of [29]. Instead of image deblurring, blurred image classification based on an adaptive dictionary is reported in [30].
The above-mentioned methods utilize prior statistics learned from, or assumed for, a single image, an image pair or a set of additional images for deblurring [9, 31, 14], whereas our proposed algorithm requires only a single input image. Under the framework of sparse representation, the proposed method takes advantage of dictionary learning performed adaptively from the observed blurred-noisy image and of an iterative deblurring approach.
3. SPARSE REPRESENTATION FOR IMAGE DEBLURRING
It is well known that natural images can be modelled with sparse representations over an over-complete dictionary: a signal (input image) can be approximated by a sparse linear combination of atom signals from such a dictionary. This section introduces the problem formulation and terminology used in sparse modeling of the image deblurring problem.
3.1. Problem Formulation
To construct a sparse model of an input image, it is assumed that the original image f is of size n × n pixels, with n = 256. The input image degradation is due to a known linear space-invariant blur kernel H and additive zero-mean white Gaussian noise with known variance σ². The blurred and noisy image g, of the same size as the input image, is formulated as g = Hf + n. The goal is to recover the sharp image f, where it is also assumed that f can be well described as Dα with a known dictionary D and a sparse vector α [32]. This becomes a minimization problem, described as follows.
Consider the linear system

f ∈ R^n,  f = Dα    (4)

where D ∈ R^(n×m) is the overcomplete dictionary and α is the decomposition coefficient vector of f. There are many solutions of f = Dα; among them, the one with the least number of non-zero coefficients αi is sought.
Therefore, the degradation model in Eq. (2) can be equivalently modelled as

g = HDα + n    (5)
To solve this l0-minimization problem, Donoho [33] proposed

α̂ = arg min ||α||_0  s.t.  g = HDα + n    (6)

where ||α||_0 is the l0-norm (i.e., the number of nonzero entries of α). The problem in Eq. (6) is computationally NP-hard; however, following the work of Donoho [33], the non-convex sparsity problem can be relaxed to the convex problem

α̂ = arg min ||α||_1  s.t.  g = HDα + n    (7)
Thus a solution to the optimization problem in Eq. (7), using an appropriate Lagrange multiplier λ, can be obtained as

min_α  λ||α||_1 + (1/2)||g − HDα||_2²    (8)

where the first term is the sparse penalty function and the second term is the data fidelity term.
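Eq. (8) is a standard l1-regularised least-squares (lasso) problem. As an illustration only — the paper itself obtains the sparse codes with OMP, not with this method — a minimal iterative soft-thresholding (ISTA) sketch on an assumed toy matrix A standing in for the combined operator HD might look like:

```python
import numpy as np

def ista(A, g, lam, n_iter=1000):
    """Iterative soft-thresholding: a standard solver for Eq. (8),
    min_a lam*||a||_1 + 0.5*||g - A a||^2, using step size 1/L with
    L = ||A||_2^2 bounding the Lipschitz constant of the gradient."""
    L = np.linalg.norm(A, 2) ** 2
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = a + A.T @ (g - A @ a) / L                          # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy instance (assumed sizes): noiseless observation of a 3-sparse code.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
a_true = np.zeros(60)
a_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
g = A @ a_true
a_hat = ista(A, g, lam=0.05)
```

With a small λ and a noiseless observation, the recovered code concentrates on the true support, illustrating why the relaxation of Eq. (7) is useful in practice.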
3.2. Dictionary Learning
The over-complete dictionary (i.e., a learned set of parameters) is central to sparse representation. For the image deblurring algorithm, we must specify a dictionary D that leads to an efficient sparse representation of the underlying image content. Instead of deploying a pre-chosen set of basis functions such as wavelets, steerable wavelets, curvelets or contourlets, the proposed deblurring algorithm learns the dictionary adaptively from the corrupted (blurred and noisy) image.
3.2.1. Dictionary Learning Process
K-SVD is a direct extension of the well-known K-means clustering method, and it is used for dictionary learning in our work. Whereas K-means encodes each sample using a single cluster centre, K-SVD sparsely encodes the samples using atoms from the dictionary D. The K-SVD [21] dictionary learning process is an iterative method of training a dictionary from a set of images or image patches; its goal is to find the overcomplete dictionary D that best sparsely represents the training signals. The K-SVD algorithm constructs the overcomplete dictionary and is flexible with respect to the pursuit algorithm used (Matching Pursuit (MP), Basis Pursuit (BP)). The K-SVD model can be described as follows:
min_{D,α} ||G − Dα||_2²  s.t.  ∀i, ||α_i||_0 < T_0    (9)

where T_0 is the given sparsity level, G = {g_i}_{i=1}^n are the training signals, α are the corresponding coefficients, and D is the dictionary to be found. There are two stages in the K-SVD algorithm:
• Sparse Coding Stage: Using MP or BP and
• Dictionary Update Stage: update one atom at a time.
Sparse coding is performed for each signal individually using any standard technique. The dictionary update step in K-SVD, performed atom by atom, is an easier and more efficient process than a matrix inversion. An example of a trained dictionary obtained during experimentation with the proposed method, for uniform and non-uniform motion blurred house images (with σ = 0.5 and σ = 1.5), is shown in Figure 1.
Figure 1. Example of trained dictionaries (size 64 × 256) for (a) non-uniform blurred (kernel f2 [10]) and noisy image, σ = 0.5, and (b) uniform motion blurred and noisy image, σ = 1.5
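The two K-SVD stages can be sketched as follows. This is a deliberately miniature illustration with assumed toy sizes: the sparse coding stage is replaced by the crudest possible 1-sparse assignment (a real implementation would use MP/BP/OMP, as the text notes), while the atom-by-atom rank-1 SVD update mirrors the K-SVD dictionary update stage:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy sizes: 20-dim signals, 30 atoms, 100 training signals.
Y = rng.standard_normal((20, 100))          # training signals, one per column
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms

# Sparse coding stage (crude stand-in): each signal keeps only its single
# best-matching atom, i.e. T0 = 1 in Eq. (9).
corr = D.T @ Y
idx = np.argmax(np.abs(corr), axis=0)
X = np.zeros((30, 100))
X[idx, np.arange(100)] = corr[idx, np.arange(100)]
err_before = np.linalg.norm(Y - D @ X)

# Dictionary update stage, one atom at a time: for atom k, form the residual
# of the signals that use it (with atom k's own contribution restored) and
# replace the atom and its coefficients by the rank-1 SVD approximation.
for k in range(30):
    users = np.nonzero(X[k])[0]
    if users.size == 0:
        continue
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                       # updated unit-norm atom
    X[k, users] = s[0] * Vt[0]              # updated coefficients
```

Because each rank-1 update is the best approximation of that atom's residual, the representation error never increases during the dictionary update sweep.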
6. The International Journal of Multimedia & Its Applications (IJMA) Vol.6, No.1, February 2014
52
3.2.2. Pursuit algorithms and OMP
The problem in Eq. (6) is computationally NP-hard, and there are two main approaches to approximating its solution. The first is the greedy family of methods, which build the solution one non-zero at a time. The second is the family of relaxation methods, which smooth the l0-norm and apply continuous optimization techniques [32]. All these techniques aim to approximate the solution of the problem posed in Eq. (6) and are commonly referred to as pursuit algorithms. The most basic is matching pursuit (MP), an iterative algorithm that starts by finding the one atom that best describes the input signal.
In [34], the authors proposed a refinement of the MP algorithm that improves convergence using an additional orthogonalization step. This orthogonal matching pursuit (OMP) re-computes the coefficients for the entire support each time an atom is added. This stage speeds up convergence and ensures that once an atom has been added to the support it will not be selected again, a consequence of the least-squares stage making the residual orthogonal to all atoms in the chosen support. For a finite dictionary with N elements, OMP is guaranteed to converge to the projection onto the span of the dictionary elements in at most N steps. Owing to these advantages, the OMP algorithm is chosen here to obtain α̂.

The restored image is thus obtained as ĝ = Dα̂, and the goal of the experimentation is to get as close as possible to the original image in terms of mean squared error minimization.
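A minimal OMP implementation following the description above (the dictionary sizes are assumed toy values, and the stopping rule here is simply a fixed number of atoms):

```python
import numpy as np

def omp(D, g, n_atoms):
    """Greedy OMP sketch: pick the atom most correlated with the residual,
    then re-fit ALL selected coefficients by least squares, which leaves
    the residual orthogonal to every chosen atom."""
    residual = g.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if k not in support:
            support.append(k)
        sol, *_ = np.linalg.lstsq(D[:, support], g, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = g - D[:, support] @ sol
    return coef

# Toy dictionary (assumed sizes) and an exactly 2-sparse signal.
rng = np.random.default_rng(3)
D = rng.standard_normal((40, 60))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
alpha = np.zeros(60)
alpha[[5, 20]] = [2.0, -1.5]
g = D @ alpha
alpha_hat = omp(D, g, n_atoms=2)
```

Because the coefficients are re-fitted by least squares after every selection, the residual becomes orthogonal to all chosen atoms, which is exactly why no atom is ever picked twice.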
4. IMAGE RESTORATION: AN ITERATIVE DEBLURRING APPROACH
There are a number of image restoration methods, such as the Wiener filter, the Richardson-Lucy algorithm, constrained least-squares filtering, and Bayesian and non-Bayesian methods; a detailed survey of these techniques is covered in [35].
A number of such algorithms impose spatial and Fourier domain constraints on the PSF and true image estimates, and update initial estimates iteratively. In [36], nonnegativity of both the PSF and the true image is imposed by setting negative values to zero in the spatial domain. In the Fourier domain, the current estimate F^(i) is updated as

F^(i+1)(u, v) = (1 − γ)F^(i)(u, v) + γ G(u, v)/H^(i)(u, v)    (10)

which is essentially a weighted average of F^(i)(u, v) and G(u, v)/H^(i)(u, v) with a weight parameter γ. The first part, F^(i), is obtained by taking the Fourier transform of f^(i) and therefore imposes the constraint of spatial-domain nonnegativity. The second part, G(u, v)/H^(i)(u, v), is a result of the Fourier-domain constraint G(u, v) = H^(i)(u, v)F^(i)(u, v). H^(i)(u, v) is updated in the same way as in Eq. (10) by exchanging H^(i)(u, v) and F^(i)(u, v). In this algorithm there is no theoretically optimal value for γ; it must be found empirically. Another way to represent Eq. (10) is as a Wiener-like update:
terms of Wiener-like update:
( 1)
2( )
2( )
( , ) ( , )
( , )
( , )
( , )
i
i
i
H u v G u v
F u v
H u v
F u v
η
∗
+
=
+
(11)
where η is a representative of the noise energy as in the Wiener filter solution [37].
In this paper, we have formulated a new iterative image deblurring approach by combining the Wiener-like update [1] with the conventional iterative blind deconvolution approach [36]. This formulation is given as

F^(i+1)(u, v) = F^(i)(u, v) + H^(i)*(u, v) G(u, v) / (|H^(i)(u, v)|² + η)    (12)
This iterative deblurring step minimizes the frequency-domain residual error; alternated with the dictionary update stage, it gives rise to the proposed sparse iterative deblurring algorithm, SparseD, which is summarized in the next section.
5. PROPOSED SPARSE ITERATIVE DEBLURRING ALGORITHM - SPARSED
By combining alternating steps of dictionary updating, for sparse representation of the input observation, with the iterative image deblurring approach described in Section 4, a new sparse iterative deblurring algorithm with adaptive dictionary learning is put forth. The main steps of the algorithm are as follows:

1. Initialize the dictionary D from Discrete Cosine Transform (DCT) bases.
2. Repeat the following steps until the mean squared error (MSE) is minimized or there is no further improvement in peak signal-to-noise ratio (PSNR) for the deblurred image:
3. Initialize f^(0) = g (as the initial estimate).
4. Update the dictionary D from the input blurred-noisy image using the K-SVD algorithm.
5. Perform iterative deblurring (with noise energy η = 0.35) with the known PSF:
• Take the Fourier transform of the ith estimate f^(i), i.e., F^(i)(u, v)
• Compute F^(i+1)(u, v) according to Eq. (12)
• Apply the inverse Fourier transform to F^(i+1)(u, v)
• Impose the non-negativity constraint and obtain the deblurred image
6. Using the trained dictionary from step 4 and the deblurred image from step 5, obtain α̂ with the help of the OMP algorithm.
7. Obtain the restored image as ĝ = Dα̂.
Alternating between iterative deblurring and denoising, with the dictionary updated from the most recent output, leads the proposed SparseD algorithm to satisfactory performance in removing complex motion blur in the presence of noise. Experimental evaluation proves the effectiveness of this algorithm.
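The steps above can be sketched end to end as follows. This is only a structural sketch under loudly stated substitutions: image and patch sizes are toy values, the dictionary is a complete (not overcomplete) orthonormal 2-D DCT that is never updated (no K-SVD), the per-patch coding uses plain hard thresholding rather than OMP, and the outer loop runs a fixed three passes. Note also that the additive update of Eq. (12), taken literally, adds the same Wiener-like correction each pass, so only a few iterations are exercised here:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy observation with a known PSF (assumed 16x16 size).
f_true = rng.random((16, 16))
h = np.zeros((16, 16))
h[:3, :3] = 1.0 / 9.0
H = np.fft.fft2(h)
g = np.real(np.fft.ifft2(H * np.fft.fft2(f_true)))
G = np.fft.fft2(g)
eta = 0.35                                  # noise energy, as in step 5

# Step 1: dictionary initialised from DCT bases -- here a complete,
# orthonormal 2-D DCT on 8x8 patches (the paper starts from DCT bases too).
n = 8
dct = np.array([[np.cos((2 * j + 1) * k * np.pi / (2 * n)) for j in range(n)]
                for k in range(n)])
dct /= np.linalg.norm(dct, axis=1, keepdims=True)
D = np.kron(dct, dct).T                     # columns are 64 orthonormal atoms

f_est = g.copy()                            # Step 3: f(0) = g
for _ in range(3):                          # Step 2: outer loop (fixed count)
    # Step 5: one Wiener-like frequency update, Eq. (12), plus non-negativity.
    F_est = np.fft.fft2(f_est) + np.conj(H) * G / (np.abs(H) ** 2 + eta)
    f_est = np.maximum(np.real(np.fft.ifft2(F_est)), 0.0)
    # Steps 4/6 stand-in: per-patch sparse approximation over D by keeping
    # only the 8 largest coefficients (instead of K-SVD + OMP).
    for r in range(0, 16, 8):
        for c in range(0, 16, 8):
            a = D.T @ f_est[r:r + 8, c:c + 8].reshape(64)
            a[np.argsort(np.abs(a))[:-8]] = 0.0
            f_est[r:r + 8, c:c + 8] = (D @ a).reshape(8, 8)
```

The skeleton shows how the frequency-domain deblurring step and the patch-wise sparse "denoising" step interleave; the paper's full method replaces the stand-ins with K-SVD dictionary updates and OMP coding.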
6. EXPERIMENTAL EVALUATION
Experimental results are evaluated on the test images house, boat, lena, cameraman and four test images from the dataset in [10], all of size 256 × 256, as shown in Figure 2. Two different types of blur kernels are used to simulate motion blur in the presence of varying amounts of additive Gaussian noise, as described below. The experimental setup used to simulate blurred images is:
• Uniform motion blur kernel (blur parameters: L = 15 pixels and θ = 40°)
• 8 different non-uniform blur kernels as given in [10]
• Additive white Gaussian noise with σ_n = 0.5 and σ_n = 1.5
Quantitative comparison of the proposed algorithm and competitive single image deblurring methods is carried out on the basis of PSNR and the Structural SIMilarity (SSIM) index as performance metrics.
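Of the two metrics, PSNR is simple enough to state inline (SSIM requires windowed local statistics and is omitted from this sketch):

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A constant error of 10 gray levels gives MSE = 100, i.e. about 28.13 dB.
ref = np.full((8, 8), 100.0)
est = ref + 10.0
```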
Figure 2. Example test images: (a) house, boat and lena; (b) test images collected from the dataset in [10]
A combination of one of the blur types listed above and additive white Gaussian noise with different standard deviations σ is used to simulate blurred-noisy images. The deblurring results of the proposed SparseD method are then compared with three state-of-the-art methods: the Wiener filter, the Richardson-Lucy (RL) algorithm and the deblurring algorithm of Shan et al. [14]. To verify the effectiveness and robustness of the proposed SparseD method, both quantitative (PSNR and SSIM) and qualitative results are presented. PSNR and SSIM results are reported in tabular form, followed by the deblurring results. The visual comparison shows many noise residuals and artifacts around edges in the images deblurred by the competing methods.
6.1. Experiment 1: Uniform motion blur with noise
After simulating motion blur with blur length L = 15 pixels and blur angle θ = 40°, deblurring is carried out using the proposed SparseD method, the Wiener filter, the Richardson-Lucy (RL) algorithm and the algorithm of Shan et al. [14]. The PSNR and SSIM results reported in Table 1 (for σ_n = 0.5) and Table 2 (for σ_n = 1.5) indicate how the SparseD method is superior to the other methods.
Table 1. PSNR (dB) and SSIM results of deblurred images (uniform motion blur with blur parameters L = 15 pixels and θ = 40°) (σ_n = 0.5)

Figure 3. Test image: house; case: uniform motion blur (L = 15 pixels and θ = 40°) (σ_n = 0.5). (a) Original image, (b) motion blur kernel, (c) noisy-blurred image, and deblurred images by: (d) SparseD method, (e) Wiener filter, (f) R-L algorithm, (g) Shan deblur [14]

Table 2. PSNR (dB) and SSIM results of deblurred images (uniform motion blur with blur parameters L = 15 pixels and θ = 40°) (σ_n = 1.5)
Figure 4. Test image: house; case: uniform motion blur (L = 15 pixels and θ = 40°) (σ_n = 1.5). (a) Original image, (b) motion blur kernel, (c) noisy-blurred image, and deblurred images by: (d) SparseD method, (e) Wiener filter, (f) R-L algorithm, (g) Shan deblur [14]
As shown in Figures 3 and 4, the visual appearance is pleasing in the case of the proposed method, whereas noise residuals and ringing artifacts around edges are present in the images deblurred by the other methods. Even when the blur parameters and noise sigma values are varied over a wide range, the proposed method maintains its superiority over the competing methods.
6.2. Experiment 2: Non-uniform blur with noise
The most promising results are obtained in the case of non-uniform blur removal with additive noise. This experiment is of particular interest because the non-uniform kernels portray the camera shake effect, which is tedious to remove from a single blurred image even when the blur kernels are known (by estimation or by a hardware attachment). In this experiment, 8 different non-uniform blur kernels are collected from the dataset of Levin et al. [10], generated with an actual camera setup. These kernels are shown in Figure 5.
Figure 5. Non-uniform blur kernels [10] and their sizes: (a) f1 (19 × 19), (b) f2 (17 × 17), (c) f3 (15 × 15),
(d) f4 (27 × 27), (e) f5 (13 × 13), (f) f6 (21 × 21), (g) f7 (23 × 23), (h) f8 (23 × 23)
Blurred images are obtained from these kernels by carrying out the simulation in the frequency domain. The PSNR and SSIM measures reported in Table 3 (for σ_n = 0.5) and Table 4 (for σ_n = 1.5), together with the deblurred images produced by all the algorithms (Figures 6 and 7), depict the effectiveness of the proposed SparseD algorithm. The competing methods fail to handle such complex blurs even with prior knowledge of the PSF.
Table 3. PSNR (dB) and SSIM results of deblurred images (non-uniform blur kernel f4 [10]) (σ_n = 0.5)
Figure 6. Test image: boat; case: non-uniform blur kernel (kernel f4 [10]) (σ_n = 0.5). (a) Original image, (b) non-uniform blur kernel, (c) noisy-blurred image, and deblurred images by: (d) SparseD method, (e) Wiener filter, (f) R-L algorithm, (g) Shan deblur [14]

Table 4. PSNR (dB) and SSIM results of deblurred images (non-uniform blur kernel f4 [10]) (σ_n = 1.5)
Figure 7. Test image: boat; case: non-uniform blur kernel (kernel f4 [10]) (σ_n = 1.5). (a) Original image, (b) non-uniform blur kernel, (c) noisy-blurred image, and deblurred images by: (d) SparseD method, (e) Wiener filter, (f) R-L algorithm, (g) Shan deblur [14]
In the same experimental scenario, further results are presented for different non-uniform kernels with moderate to high noise, to verify the robustness of the proposed method. The results are shown in Figure 8.
Figure 8. (a) Original image, (b) non-uniform blur kernel (f1 [10]), (c) noisy-blurred image (moderate noise, σ_n = 25), (d) deblurred using the proposed SparseD method, (e) non-uniform blur kernel (f2 [10]), (f) noisy-blurred image (high noise, σ_n = 100), (g) deblurred using the proposed SparseD method
7. CONCLUSIONS
In this paper, a sparse representation and adaptive dictionary learning based iterative image deblurring technique is proposed to deblur a single image degraded by uniform and non-uniform blur kernels in the presence of additive noise. The strength of our algorithm lies in its capability of deblurring complex, arbitrary motion paths with negligible or no ringing. The use of very efficient sparse modelling of natural images requires no assumed priors on the blur kernels or the input image. Adaptive dictionary learning from the blurred-noisy observation with the K-SVD step strengthens the denoising task, whereas the inclusion of iterative image deblurring, used to minimize the frequency-domain residual error, helps in deblurring the image effectively. The quality of the deblurred images, inspected visually as well as quantitatively (PSNR and SSIM), shows that the proposed method performs far better than the two non-blind and one blind deblurring algorithms compared. This demonstrates the usefulness of the method for deblurring camera-shaken images in practical applications such as photography, forensics, satellite imaging and medical imaging. The proposed algorithm can be further enhanced with a faster optimization step.
ACKNOWLEDGEMENTS
The authors would like to thank the anonymous reviewers for their constructive comments and valuable suggestions that helped to improve the quality of this work.
REFERENCES
[1] N. Wiener, (1949) “Extrapolation, Interpolation and Smoothing of Stationary Time Series with
Engineering Applications”, The MIT Press, Cambridge (Mass.), Wiley and Sons, New York.
[2] L. Lucy, (1974) “An iterative technique for the rectification of observed distributions,” The
Astronomical Journal, vol. 79, no. 6, pp. 745–754.
[3] W. H. Richardson, (1972) “Bayesian-based iterative method of image restoration,” Journal of the
Optical Society of America, vol. 62, no. 1, pp. 55–59.
[4] T. Chan and S. Esedoglu, (2005) “Aspects of total variation regularized L1 function approximation,” SIAM Journal on Applied Mathematics, vol. 65, no. 5, pp. 1817–1837.
[5] R. Neelamani, H. Choi, and R. Baraniuk, (2004) “Forward: Fourier-wavelet regularized
deconvolution for ill-conditioned systems,” IEEE Transactions on Signal Processing, vol. 52, no. 2,
pp. 418–433.
[6] L. Yuan, J. Sun, L. Quan, and H.-Y. Shum, (2007) “Image deblurring with blurred/noisy image
pairs,” in SIGGRAPH ’07: ACM SIGGRAPH 2007, ACM, New York.
[7] M. Ben-Ezra and S. K. Nayar, (2004) “Motion-based motion deblurring,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 689–698.
[8] R. Raskar, A. Agrawal, and J. Tumblin, (2006) “Coded exposure photography: motion deblurring
using fluttered shutter,” ACM Trans. Graph., vol. 25, pp. 795–804.
[9] R. Fergus, B. Singh, A. Hertzmann, S. Roweis, and W. Freeman, (2006) “Removing camera shake from a single photograph,” ACM Transactions on Graphics (TOG), vol. 25, no. 3, pp. 787–794.
[10] A. Levin, Y. Weiss, F. Durand, and W. Freeman, (2009) “Understanding and evaluating blind
deconvolution algorithms,” in Proceedings of the IEEE Conference on CVPR.
[11] N. Joshi, R. Szeliski, and D. Kriegman, (2008) “PSF estimation using sharp edge prediction,” in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[12] J. Jia, (2007) “Single image motion deblurring using transparency,” IEEE Computer Society
conference on computer vision and pattern recognition, pp. 1–8.
[13] T. F. Chan and C. K. Wong, (1998) “Total variation blind deconvolution,” in IEEE Trans. Image
Process., vol. 7, no. 3, pp. 370–375.
[14] Q. Shan, J. Jia, and A. Agarwala, (2008) “High-quality motion deblurring from a single image,” in
ACM Transactions on Graphics (SIGGRAPH), vol. 27, no. 3, pp. 73(1)–73(10).
[15] B. Olshausen and D. Field, (1997) “Sparse coding with an over-complete basis set: A strategy employed by V1,” in Vision Research, pp. 3311–3325.
[16] W. Dong, L. Zhang, G. Shi, and X. Wu, (2011) “Image deblurring and super-resolution by adaptive
sparse domain selection and adaptive regularization,” in IEEE Trans. on Image Processing, pp. 1838–
1857.
[17] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, (2009) “Robust face recognition via sparse
representation,” in IEEE Trans. on Pattern Analysis and Machine Intelligence.
[18] J. Yang, J. Wright, T. Huang, and Y. Ma, (2008) “Image super resolution as sparse representation of
raw patches,” in Proceedings of IEEE International Conference on CVPR.
[19] W. T. Freeman and E. H. Adelson, (1991) “The design and use of steerable filters,” in IEEE Pattern
Anal. Mach. Intell., vol. 13, no. 9, pp. 891–906.
[20] M. N. Do and M. Vetterli, (2003) in Contourlets, Beyond Wavelets, E. N. Y. A. G. V. Welland, Ed.,.
[21] M. Aharon, M. Elad, and A. Bruckstein, (2006) “K-SVD: An algorithm for designing overcomplete
dictionaries for sparse representation,” in IEEE Trans. on Signal Processing, vol. 54, no. 11, pp.
4311–4322.
[22] J. Mairal, M. Elad, and G. Sapiro, (2008) “Sparse representation for color image restoration,” in
IEEE Trans. on Image Processing, vol. 17, no. 1, pp. 53–69.
[23] S. Mallat and Z. Zhang, (1993) “Matching pursuits with time-frequency dictionaries,” in IEEE
Transactions on Signal Processing, vol. 41, no. 12, pp. 3397–3415.
[24] S. S. Chen, D. L. Donoho, and M. A. Saunders, (2001) “Atomic decomposition by basis pursuit,” in
SIAM Rev., vol. 43, no. 1, pp. 129–159.
[25] B. Begovic, V. Stankovic, and L. Stankovic, (2014) “Dictionary learning for scalable sparse image
representation with applications,” in Advances in Signal Processing, vol. 2, no. 2, pp. 55–74.
[26] J.-F. Cai, H. Ji, C.-Q. Liu, and Z. Shen, (2009) “Blind motion deblurring from a single image using
sparse approximation,” in Computer Vision and Pattern Recognition, pp. 104–111.
[27] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, (2012) “Non-uniform deblurring for shaken images,”
in IJCV, vol. 98, no. 2, pp. 168–186.
[28] H. Zhang, D. P. Wipf, and Y. Zhang, (2013) “Multi-image blind deblurring using a coupled adaptive
sparse prior,” in CVPR.
[29] H. Huang and N. Xiao, (2013) “Image deblurring based on sparse model with dictionary learning,” in
Journal of Information and Computational Science, vol. 10, no. 1, pp. 129–137.
[30] G. Sun, J. Yin, and X. Zhou, (2013) “Blurred image classification based on adaptive dictionary,” in
The International Journal of Multimedia & Its Applications (IJMA), vol.5, no.1, pp. 1–9.
[31] A. Levin, (2006) “Blind motion deblurring using image statistics,” in Advances in Neural Information
Processing Systems (NIPS).
[32] M. Elad, (2010) Sparse and Redundant Representations: From Theory to Applications in Signal and
Image Processing, Berlin: Springer.
[33] D. L. Donoho, (1995) “De-noising by soft-thresholding,” IEEE Transactions on Information Theory,
vol. 41, no. 3, pp. 613–627.
[34] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, (1993) “Orthogonal matching pursuit: Recursive
function approximation with applications to wavelet decomposition,” in Proceedings of the Asilomar
Conference on Signals, Systems, and Computers, vol. 1, pp. 40–44.
[35] D. Kundur and D. Hatzinakos, (1996) “Blind image deconvolution,” IEEE Signal Processing Mag.,
vol. 13, no. 3, pp. 43–64.
[36] G. R. Ayers and J. C. Dainty, (1988) “Iterative blind deconvolution method and its applications,”
Optics Letters, vol. 13, no. 7, pp. 547–549.
[37] R. L. Lagendijk, J. Biemond, and D. E. Boekee, (1988) “Regularized iterative image restoration with
ringing reduction,” in IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 36, no.
12, pp. 1874–1888.
AUTHORS
Ashwini M. Deshpande received the Bachelor’s degree in Electronics Engineering
from the University of Pune and the Master’s degree from Shivaji University, Kolhapur.
Presently she is a Research Scholar in the Department of Electronics Engineering,
Sardar Vallabhbhai National Institute of Technology, Surat, (Gujarat) India. Her main
research interests include Signal Processing, Digital Image Processing and Analysis,
Image Restoration, and Signal Modelling.
Suprava Patnaik received the Bachelor’s and Master’s degrees from the National Institute of
Technology, Rourkela, and the Ph.D. degree from the Indian Institute of Technology,
Kharagpur, India. Earlier she was associated with the ECED Department, SVNIT, Surat,
and presently she is working as a Professor in the Department of E&Tc Engg., Xavier
Institute of Engineering, Mumbai (MS), India. The areas of her research interest are
Signal Approximation, Image Processing, Computer Vision, Image Enhancement, and Soft
Computing. She has published a number of research papers in peer-reviewed
international journals and conferences organized by IEEE societies.