Survey on Single image Super Resolution Techniques
IOSR Journal of Electronics and Communication Engineering (IOSR-JECE)
e-ISSN: 2278-2834, p-ISSN: 2278-8735. Volume 5, Issue 5 (Mar.-Apr. 2013), PP 23-33
www.iosrjournals.org
Rujul R. Makwana 1, Nita D. Mehta 2
1 (PG Student, EC Department, Government Engineering College, Surat, India)
2 (Associate Professor, EC Department, Government Engineering College, Surat, India)
Abstract: Super-resolution is the process of recovering a high-resolution image from multiple low-resolution images of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image based on a set of images, acquired from the same scene and denoted as 'low-resolution' images, to overcome the limitation and/or ill-posed conditions of the image acquisition process for facilitating better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight the future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm, iterative back projection. We critique these methods and identify areas which promise performance improvements. Future directions for super-resolution algorithms are discussed, and finally results of the available methods are given.
Keywords: Super-resolution, POCS, IBP, Canny Edge Detection
I. Introduction
Super resolution is a method for reconstructing a high resolution image from several overlapping low-resolution images [1]. The low resolution input images are the result of re-sampling a high resolution image. The goal is to find the high resolution image which, when re-sampled on the lattice of the input images according to the imaging model, predicts the low resolution input images well.
The success of a super resolution algorithm is highly dependent on the accuracy of the model of the imaging process. If, for example, the motion computed for some of the images is not correct, the algorithm may degrade the image rather than enhance it. One solution proposed to handle local model inaccuracies and noise is regularization [2]. In most cases the enforced smoothness results in the suppression of high-frequency information, and the results are blurred. Regularization may be successful when the scene is strongly restricted, e.g. a binary text image [2].
Applications for super-resolution abound. NASA has been using super-resolution techniques for years
to obtain more detailed images of planets and other celestial objects. Closer to home, super-resolution can be
used to enhance surveillance videos to more accurately identify objects in the scene. One particular example of this is systems capable of automatically reading license plate numbers from severely pixelated video streams.
Another application is the conversion of standard NTSC television recordings to the newer HDTV format which
is of a higher resolution.
A variety of approaches for solving the super-resolution problem have been proposed. Initial attempts
worked in the frequency domain, typically recovering higher frequency components by taking advantage of the
shifting and aliasing properties of the Fourier transform. Deterministic regularization approaches, which work in
the spatial domain, enable easier inclusion of a priori constraints on the solution space (typically with a
smoothness prior). Stochastic methods have received the most attention lately as they generalize the
deterministic regularization approaches and enable more natural inclusion of prior knowledge. Other approaches
include nonuniform interpolation, projection onto convex sets, iterative back projection, and adaptive filtering.
With the increased emphasis on stochastic techniques has also come increased emphasis on learning priors from example data rather than relying on more heuristically derived information.
The paper is organized as follows. Section 2 presents Super Resolution Processing and a description of
the general model of imaging systems (observation model) that provides the SR image formulations. Section 3
presents the SR image reconstruction approaches that reconstruct a single high-resolution image from a set of
given low-resolution images acquired from the same scene. Section 4 presents Results and Discussion. Section 5
discusses several research challenges that remain open in this area for future investigation. Finally, Section 6 concludes this paper.
II. Super-Resolution Processing
Given a set of low resolution images that result from the observation of the same scene from slightly different views, a super resolution algorithm produces a single high resolution image by fusing the input LR images such that the final HR image reproduces the scene with a better fidelity than any of the LR images [11].
The central idea in super resolution processing is to convert temporal resolution into spatial resolution. In a broad sense, this approach can be used to perform any combination of the following image processing tasks:
- Registration
- Interpolation
- De-blurring
Fig.1 Phases of Super-Resolution[3]
First, the SRR algorithm receives several low-resolution corrupted images as inputs; then the registration or motion estimation process estimates the relative shifts between the LR images and the reference LR image with fractional pixel accuracy. Obviously, accurate sub-pixel motion estimation is a very important factor in the success of the SRR algorithm. Since the shifts between LR images are arbitrary, the registered HR image will not always match up to a uniformly spaced HR grid. Thus, non-uniform interpolation is necessary to obtain a uniformly spaced HR image from a composite of non-uniformly spaced LR images. Finally, image restoration (de-blurring) is applied to the up-sampled image to remove blurring and noise. Before presenting the review of existing SR algorithms, we first model the LR image acquisition process.
2.1 Observation Model
Based on the most common observation model (Fig. 2), the available low-resolution input images are obtained from the high-resolution original scene by warping, blurring, and down-sampling the scene. Consider the desired HR image of size $L_1 N_1 \times L_2 N_2$, written in lexicographic notation as the vector $\mathbf{x} = [x_1, x_2, \ldots, x_N]^T$, where $N = L_1 N_1 L_2 N_2$. Namely, $\mathbf{x}$ is the ideal un-degraded image, sampled at or above the Nyquist rate from a continuous scene which is assumed to be bandlimited. The parameters $L_1$ and $L_2$ represent the down-sampling factors in the observation model for the horizontal and vertical directions, respectively. Thus, each observed LR image is of size $N_1 \times N_2$. Let the $k$-th LR image be denoted in lexicographic notation as $\mathbf{y}_k = [y_{k,1}, y_{k,2}, \ldots, y_{k,M}]^T$ for $k = 1, 2, \ldots, p$ and $M = N_1 N_2$. It is assumed that $\mathbf{x}$ remains constant during the acquisition of the multiple LR images, except for any motion and degradation allowed by the model. Therefore, the observed LR images result from warping, blurring, and subsampling operators performed on the HR image $\mathbf{x}$. Assuming that each LR image is corrupted by additive noise, the observation model can be written as [3]
$\mathbf{y}_k = D B_k M_k \mathbf{x} + \mathbf{n}_k, \quad 1 \le k \le p \qquad (1)$
where $M_k$ is a warp matrix of size $L_1 N_1 L_2 N_2 \times L_1 N_1 L_2 N_2$, $B_k$ is an $L_1 N_1 L_2 N_2 \times L_1 N_1 L_2 N_2$ blur matrix, $D$ is an $(N_1 N_2) \times (L_1 N_1 L_2 N_2)$ subsampling matrix, and $\mathbf{n}_k$ is a lexicographically ordered noise vector. A block diagram of the observation model is illustrated in Fig. 2.
Fig.2 Observation Model
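As a rough illustration of the forward model in eq. (1), the sketch below synthesizes a few LR frames from one HR image by translating (the warp $M_k$), blurring ($B_k$), decimating ($D$) and adding noise ($\mathbf{n}_k$). The shift values, blur width, decimation factor and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

rng = np.random.default_rng(0)
hr = gaussian_filter(rng.random((128, 128)), 2.0)      # stand-in HR scene x
L1 = L2 = 2                                            # down-sampling factors

def observe(hr_img, dy, dx, sigma_blur=1.0, sigma_noise=0.01):
    warped = shift(hr_img, (dy, dx), order=1, mode="nearest")   # M_k: sub-pixel translation
    blurred = gaussian_filter(warped, sigma_blur)               # B_k: sensor/optical blur
    lr = blurred[::L1, ::L2]                                    # D: decimation
    return lr + sigma_noise * rng.standard_normal(lr.shape)     # n_k: additive noise

# p = 4 LR observations with different (known) sub-pixel shifts
lr_frames = [observe(hr, dy, dx) for dy, dx in [(0, 0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]]
print(lr_frames[0].shape)   # (64, 64) = (N1, N2)
```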
III. Super-Resolution Techniques
Image super resolution techniques can be mainly categorized as reconstruction-based techniques and learning-based techniques.
Table 1 Comparison between Frequency Domain and Spatial Domain Super Resolution Techniques

Criterion             | Frequency Domain                | Spatial Domain
Observation model     | Frequency domain                | Spatial domain
Motion model          | Global translation              | Almost unlimited
Degradation model     | Global translation              | Almost unlimited
Noise model           | Limited                         | Almost unlimited
A priori information  | Limited                         | Almost unlimited
Simplicity            | Very simple                     | Generally complex
Computational cost    | Low                             | High
Regularization        | Limited                         | Excellent
Extensibility         | Poor                            | Excellent
Applicability         | Limited                         | Wide
Performance           | Good for specific applications  | Good
In the learning-based approach, the relationship between an LR image and its corresponding high resolution (HR) image is examined via pairs of LR and HR patches, and the training data are used to predict the higher-resolution image [4]. The reconstruction-based approach can be employed in the frequency domain or the spatial domain and requires either a single image or multiple low resolution images. Simplicity in theory is a major advantage of the frequency domain approach. In addition, the frequency approach is also convenient for parallel implementation. However, this approach allows little flexibility to add a priori constraints, noise models, and spatially varying degradation models. Thus, its development in practical use is limited. On the other hand, spatial domain techniques are more flexible in incorporating a priori constraints and have better performance in the reconstructed images. Nevertheless, these methods also have drawbacks such as complicated theoretical work and a relatively large computational load. A comparison of the two main classes of super-resolution techniques, frequency domain and spatial domain, is given in Table 1 [5]-[9].
3.1 Frequency Domain Approach
The super-resolution problem was posed, along with a frequency domain solution, by Tsai and Huang [10]. Prior to their paper, interpolation was the best technique for increasing the resolution of images. The
frequency domain approach makes explicit use of the aliasing that exists in each LR image to reconstruct an HR
image. Tsai and Huang [10] first derived a system equation that describes the relationship between LR images
and a desired HR image by using the relative motion between LR images. The frequency domain approach is
based on the following three principles: i) the shifting property of the Fourier transform, ii) the aliasing
relationship between the continuous Fourier transform (CFT) of an original HR image and the discrete Fourier
transform (DFT) of observed LR images, and iii) the assumption that an original HR image is bandlimited.
These properties make it possible to formulate the system equation relating the aliased DFT coefficients of the
observed LR images to a sample of the CFT of an unknown image. For example, let us assume that there are
two 1-D LR signals sampled below the Nyquist sampling rate. From the above three principles, the aliased LR
signals can be decomposed into the de-aliased HR signal as shown in Fig. 3.
Fig.3. Aliasing relationship between LR image and HR image
Frequency-domain super resolution methods typically rely on familiar Fourier transform properties, specifically
the shifting and sampling theorems. Since these properties are generally very well known and understood, the
frequency domain approaches are easy to grasp and are intuitively appealing. Many of the frequency-domain
approaches are based on assumptions which enable the use of efficient procedures for computing the restoration,
the most important of which is the Fast Fourier Transform (FFT).
Let $x(t_1, t_2)$ denote a continuous HR image and $X(w_1, w_2)$ its CFT. The global translations, which are the only motion considered in the frequency domain approach, yield the $k$-th shifted image $x_k(t_1, t_2) = x(t_1 + \delta_{k1}, t_2 + \delta_{k2})$, where $\delta_{k1}$ and $\delta_{k2}$ are arbitrary but known values and $k = 1, 2, \ldots, p$. By the shifting property of the CFT, the CFT of the shifted image, $X_k(w_1, w_2)$, can be written as
$X_k(w_1, w_2) = \exp\!\left[\, j 2\pi (\delta_{k1} w_1 + \delta_{k2} w_2) \right] X(w_1, w_2) \qquad (2)$
The shifted image $x_k(t_1, t_2)$ is sampled with the sampling periods $T_1$ and $T_2$ to generate the observed LR image $y_k(n_1, n_2)$. From the aliasing relationship and the assumption of bandlimitedness of $X(w_1, w_2)$ ($|X(w_1, w_2)| = 0$ for $|w_1| \ge L_1 \pi / T_1$, $|w_2| \ge L_2 \pi / T_2$), the relationship between the CFT of the HR image and the DFT of the $k$-th observed LR image can be written as [11]
$\mathcal{Y}_k[n_1, n_2] = \dfrac{1}{T_1 T_2} \displaystyle\sum_{l_1=0}^{L_1 - 1} \sum_{l_2=0}^{L_2 - 1} X_k\!\left( \dfrac{2\pi}{T_1}\!\left( \dfrac{n_1}{N_1} + l_1 \right),\; \dfrac{2\pi}{T_2}\!\left( \dfrac{n_2}{N_2} + l_2 \right) \right) \qquad (3)$
By using lexicographic ordering for the indices $(n_1, n_2)$ on the right-hand side and $k$ on the left-hand side, a matrix-vector form is obtained:
$\mathbf{Y} = \Phi \mathbf{X} \qquad (4)$
where $\mathbf{Y}$ is a $p \times 1$ column vector whose $k$-th element is the DFT coefficient of $y_k[n_1, n_2]$, $\mathbf{X}$ is an $L_1 L_2 \times 1$ column vector containing the samples of the unknown CFT of $x(t_1, t_2)$, and $\Phi$ is a $p \times L_1 L_2$ matrix which relates the DFT of the observed LR images to the samples of the continuous HR image. Therefore, the reconstruction of a desired HR image requires us to determine $\Phi$ and solve this inverse problem. An extension of this approach for
a blurred and noisy image was provided by Kim et al. [12], resulting in a weighted least squares formulation. In
their approach, it is assumed that all LR images have the same blur and the same noise characteristics. This
method was further refined by Kim and Su [13] to consider different blurs for each LR image. Here, the
Tikhonov regularization
method is adopted to overcome the ill-posed problem resulting from the blur operator. Bose et al. [14] proposed the
recursive total least squares method for SR reconstruction to reduce effects of registration errors (errors in Φ). A
discrete cosine transform (DCT)-based method was proposed by Rhee and Kang [15]. They reduce memory
requirements and computational costs by using DCT instead of DFT. They also apply multichannel adaptive
regularization parameters to overcome ill-posedness such as underdetermined cases or insufficient motion
information cases. Theoretical simplicity is a major advantage of the frequency domain approach. That is, the
relationship between LR images and the HR image is clearly demonstrated in the frequency domain. The
frequency method is also convenient for parallel implementation, capable of reducing hardware complexity.
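As a toy illustration of the frequency-domain formulation in eq. (4), the 1-D sketch below builds the per-bin matrix $\Phi$ from known shifts and recovers the HR spectrum by least squares. It assumes integer shifts on the HR grid, circular boundaries, and no blur or noise, which are simplifications not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 2, 32                              # decimation factor and LR length
Nh = L * N                                # HR length
x = np.convolve(rng.standard_normal(Nh), np.ones(5) / 5, mode="same")  # HR signal
X = np.fft.fft(x)

shifts = [0, 1, 3]                        # known shifts (in HR samples), p = 3 >= L
Y = [np.fft.fft(np.roll(x, -s)[::L]) for s in shifts]   # shifted, decimated observations

X_hat = np.zeros(Nh, dtype=complex)
for m in range(N):                        # one small p x L system per LR frequency bin
    q = m + np.arange(L) * N              # HR bins that alias onto LR bin m
    Phi = np.array([[np.exp(2j * np.pi * qi * s / Nh) / L for qi in q] for s in shifts])
    rhs = np.array([Yk[m] for Yk in Y])
    X_hat[q], *_ = np.linalg.lstsq(Phi, rhs, rcond=None)

x_hat = np.fft.ifft(X_hat).real
print("max reconstruction error:", float(np.max(np.abs(x_hat - x))))
```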
3.2 Spatial Domain Approach
In this class of SR reconstruction methods, the observation model is formulated, and reconstruction is
effected in the spatial domain. The linear spatial domain observation model can accommodate global and non-
global motion, optical blur, motion blur, spatially varying PSF, non-ideal sampling, compression artifacts and
more. Spatial domain reconstruction allows natural inclusion of (possibly nonlinear) spatial domain a-priori
constraints (e.g. Markov random fields or convex sets) which result in bandwidth extrapolation in
reconstruction. Consider estimating an SR image $\mathbf{z}$ from multiple LR images $\mathbf{y}_r$, $r \in \{1, 2, \ldots, R\}$. Images are written as lexicographically ordered vectors. $\mathbf{y}_r$ and $\mathbf{z}$ are related as $\mathbf{y}_r = H_r \mathbf{z}$. The matrix $H_r$, which must be estimated, incorporates motion compensation, degradation effects and subsampling. The observation equation may be generalized to $\mathbf{Y} = H\mathbf{z} + \mathbf{N}$, where $\mathbf{Y} = [\mathbf{y}_1^T \cdots \mathbf{y}_R^T]^T$ and $H = [H_1^T \cdots H_R^T]^T$, with $\mathbf{N}$ representing observation noise.
Since the superresolution problem is fundamentally ill-posed, incorporation of prior knowledge is
essential to achieve good results. A variety of techniques exist for the super-resolution problem in the spatial
domain. These solutions include interpolation, deterministic regularized techniques, stochastic methods,
iterative back projection, and projection onto convex sets among others. The primary advantages to working in
the spatial domain are support for unconstrained motion between frames and ease of incorporating prior
knowledge into the solution.
3.2.1 Interpolation of Non-Uniformly Spaced Samples
Registering a set of LR images using motion compensation results in a single, dense composite image of non-uniformly spaced samples. An SR image may be reconstructed from this composite using techniques for reconstruction from non-uniformly spaced samples. Unfortunately, this technique generally works very poorly because of some inherent assumptions, the main problem being that camera sensors do not act as impulse functions. Since the observed data result from severely under-sampled, spatially averaged areas, the reconstruction step (which typically assumes impulse sampling) is incapable of reconstructing significantly more frequency content than is present in a single LR frame. Degradation models are limited, and no a-priori constraints are used. There is also a question as to the optimality of separate merging and restoration steps.
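A minimal sketch of this merge-then-interpolate idea is given below: registered LR pixels form a scattered point cloud over the HR grid, which is resampled onto a uniform grid with SciPy's griddata. The scene, offsets and sampling model are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
scene = rng.random((64, 64))                                    # stand-in continuous scene
offsets = [(0.0, 0.0), (0.25, 0.5), (0.5, 0.25), (0.75, 0.75)]  # known sub-pixel shifts (HR pixels)

pts, vals = [], []
for dy, dx in offsets:                                          # each LR frame samples a shifted coarse grid
    yy, xx = np.meshgrid(np.arange(0, 64, 2) + dy, np.arange(0, 64, 2) + dx, indexing="ij")
    pts.append(np.stack([yy.ravel(), xx.ravel()], axis=1))
    vals.append(map_coordinates(scene, [yy.ravel(), xx.ravel()], order=1, mode="wrap"))

pts, vals = np.vstack(pts), np.concatenate(vals)
gy, gx = np.mgrid[0:64, 0:64]                                   # uniform HR grid
hr_composite = griddata(pts, vals, (gy, gx), method="linear", fill_value=0.0)
print(hr_composite.shape)
```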
3.2.2 Deterministic Regularization
The deterministic regularized SR approach solves the inverse problem by using prior information about the solution to make the problem well posed. For example, CLS can be formulated by choosing an $\mathbf{x}$ to minimize the Lagrangian [16]
$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \left[ \sum_{k=1}^{p} \| \mathbf{y}_k - W_k \mathbf{x} \|^2 + \alpha \| C\mathbf{x} \|^2 \right] \qquad (5)$
where the operator $C$ is generally a high-pass filter and $\|\cdot\|$ denotes the $l_2$ norm. In eq. (5), a priori knowledge about a desirable solution is represented by a smoothness constraint, suggesting that most images are naturally smooth with limited high-frequency activity, so it is appropriate to minimize the amount of high-pass energy in the restored image. In eq. (5), $\alpha$ is the Lagrange multiplier, commonly referred to as the regularization parameter, that controls the tradeoff between fidelity to the data (expressed by $\sum_{k=1}^{p}\|\mathbf{y}_k - W_k\mathbf{x}\|^2$) and smoothness of the solution (expressed by $\|C\mathbf{x}\|^2$). Larger values of $\alpha$ generally lead to a smoother solution. This is useful when only a small number of LR images are available (the problem is underdetermined) or the fidelity of the observed data is low due to registration error and noise. On the other hand, if a large number of LR images are available and the amount of noise is small, a small $\alpha$ will lead to a good solution. The cost functional in eq. (5) is convex and differentiable with the use of a quadratic regularization term. Therefore, we can find a unique estimate $\hat{\mathbf{x}}$ which minimizes it.
One of the most basic deterministic iterative techniques considers solving
$\left[ \sum_{k=1}^{p} W_k^T W_k + \alpha C^T C \right] \hat{\mathbf{x}} = \sum_{k=1}^{p} W_k^T \mathbf{y}_k \qquad (6)$
and this leads to the following iteration for $\hat{\mathbf{x}}$:
$\hat{\mathbf{x}}^{(n+1)} = \hat{\mathbf{x}}^{(n)} + \beta \left[ \sum_{k=1}^{p} W_k^T \left( \mathbf{y}_k - W_k \hat{\mathbf{x}}^{(n)} \right) - \alpha C^T C \hat{\mathbf{x}}^{(n)} \right] \qquad (7)$
where $\beta$ represents the convergence parameter and $W_k^T$ contains an up-sampling operator and a type of blur and warping operator.
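The sketch below runs the gradient-style iteration of eq. (7) under simplifying assumptions: every frame shares the same $W_k$ (Gaussian blur plus 2x decimation, no warp), $C$ is a Laplacian-like high-pass, and the adjoint $W^T$ is only approximated, so this is an illustration rather than the paper's exact method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def W(x):                        # forward operator: blur then downsample by 2
    return gaussian_filter(x, 1.0)[::2, ::2]

def W_T(e):                      # crude adjoint: replicate-upsample then blur
    return gaussian_filter(np.kron(e, np.ones((2, 2))), 1.0)

def C(x):                        # high-pass operator
    return x - gaussian_filter(x, 1.0)

rng = np.random.default_rng(2)
hr_true = gaussian_filter(rng.random((64, 64)), 2.0)
lr_frames = [W(hr_true) + 0.01 * rng.standard_normal((32, 32)) for _ in range(4)]  # p = 4

alpha, beta = 0.05, 0.1          # regularization and convergence parameters
x = np.kron(lr_frames[0], np.ones((2, 2)))       # initial HR estimate
for _ in range(100):             # eq. (7): data-fidelity gradient minus smoothness gradient
    x = x + beta * (sum(W_T(y - W(x)) for y in lr_frames) - alpha * C(C(x)))
print(float(np.mean((x - hr_true) ** 2)))
```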
3.2.3 Projection Onto Convex Sets (POCS)
Another method for reducing the space of possible reconstructions is projection onto convex sets
(POCS). The POCS method describes an alternative iterative approach to incorporating prior knowledge about
the solution into the reconstruction process. With the estimates of registration parameters, this algorithm
simultaneously solves the restoration and interpolation problem to estimate the SR image. This is a set-theoretic
approach where each piece of a priori knowledge is formulated as a constraining convex set. Once the group of
convex sets is formed, an iterative algorithm is employed to recover a point on the intersection of the convex sets,
$g_{i+1} = P_M P_{M-1} \cdots P_2 P_1 \, g_i \qquad (8)$
where $P_j$ is the projection of a given point onto the $j$-th convex set and $M$ is the number of convex sets. In essence, we are restricting the final restored image to lie on the intersection of the constraining sets $\{P_j\}_{j=1}^{M}$. The reason we require convex sets is that convergence is guaranteed when each set is convex.
One potential group of convex sets is based on the $l_2$ distance measure,
$G_k = \left\{ g : \| W_k g - \mathbf{y}_k \|^2 \le 1 \right\}, \quad 1 \le k \le K \qquad (9)$
This defines a set of ellipsoids (one for each input image) and restricts the final solution to lie inside the ellipsoids. Other possible convex sets include ones based on the $l_\infty$ norm, those imposing smoothness, and those constraining the image intensity to be positive. Two problems with the POCS approach are that uniqueness is not guaranteed for the final recovered image and that defining the projections $P_j$ can be difficult.
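A toy POCS sketch of eq. (8) is given below: the estimate is alternately projected onto two convex sets, consistency with the observed samples and a pixel-intensity box. These sets are deliberately simple stand-ins, not the paper's exact $G_k$.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def project_data(x, y):
    # affine set {x : x[::2, ::2] = y}: overwrite the constrained samples
    x = x.copy()
    x[::2, ::2] = y
    return x

def project_box(x, lo=0.0, hi=1.0):
    # convex set of images with intensities in [lo, hi]
    return np.clip(x, lo, hi)

rng = np.random.default_rng(3)
hr_true = gaussian_filter(rng.random((64, 64)), 2.0)
y = hr_true[::2, ::2]                       # observed LR samples (ideal sampling, no noise)

x = np.kron(y, np.ones((2, 2)))             # initial estimate
for _ in range(20):
    x = project_box(project_data(x, y))     # g_{i+1} = P2 P1 g_i
print(float(np.mean((x - hr_true) ** 2)))
```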
3.2.4 Iterative Back Projection (IBP)
Super-resolution of the image is modelled as an inverse problem; that is, the goal of super resolution is to reverse the effect of the down-sampling, blurring and warping that relate the LR image to the desired HR image. Mathematically it is written as
$y = (X * W)\!\downarrow\! s \qquad (10)$
where $X$ is the original HR image, $y$ is the LR image, $W$ is the degeneration function, and $\downarrow s$ is the down-sampling process.
In the IBP approach the HR image is estimated by back-projecting the difference between the simulated LR image and the captured LR image onto the interpolated image. This iterative SR process repeats until the minimization of the cost function is achieved. A block diagram of the IBP algorithm is given in Fig. 4. Mathematically the SR step according to IBP is written as
$X = X^{(0)} + X_e \qquad (11)$
where $X^{(0)}$ is the interpolated image and $X_e$ is the error correction.
Fig. 4 Simple IBP Algorithm [17]
Given only one LR input image, the updating procedure can be summarized as doing the following two steps iteratively:
1) Compute the LR error as
$X_e^{(n)} = \left( y - y^{(n)} \right)\!\uparrow\! s \qquad (12)$
where the simulated LR image is estimated as
$y^{(n)} = \left( X^{(n)} * W \right)\!\downarrow\! s \qquad (13)$
2) Update the HR image by back-projecting the error as in equation (11).
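A minimal single-image IBP sketch following eqs. (11)-(13) is shown below: simulate an LR image from the current HR estimate, back-project the LR error, and iterate. The Gaussian blur kernel standing in for $W$ and the 2x scale factor are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade(x):                              # y = (x * W) downsampled by s  (eq. (13))
    return gaussian_filter(x, 1.0)[::2, ::2]

def upsample(e):                             # back-projection of the LR error onto the HR grid
    return zoom(e, 2, order=1)

rng = np.random.default_rng(4)
hr_true = gaussian_filter(rng.random((128, 128)), 2.0)
y = degrade(hr_true)                         # captured LR image

x = zoom(y, 2, order=3)                      # X^(0): initial interpolated image
for _ in range(30):
    err = y - degrade(x)                     # LR error, eq. (12)
    x = x + upsample(err)                    # HR update, eq. (11)
print(float(np.mean((x - hr_true) ** 2)))
```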
3.2.4.1 Proposed IBP using Canny Edge Detection
Though the IBP method can reduce the restoration error significantly in an iterative manner and gives good results, it projects the error back without edge guidance. In the proposed algorithm, extra high-frequency information is added using Canny edge detection together with the difference error between up-sampled versions of the initial and simulated LR images, so that it works as an edge-preserving algorithm. A block diagram of the proposed algorithm is shown in Fig. 5. The steps of the proposed algorithm are given in Table II.
Fig. 5 Canny Edge Detection Algorithm
Mathematically, SR according to the proposed algorithm is written as
$X = X^{(0)} + X_e + X_H \qquad (14)$
where $X^{(0)}$ is the initial interpolated image, $X_e$ is the error correction, and $X_H$ is the high-frequency estimate given by
$X_H = X * H_{Canny} \qquad (15)$
where $X$ is the estimated HR image and $H_{Canny}$ is the Canny edge high-pass filter.
In summary, the proposed algorithm in mathematical form is expressed as below. The estimated HR image after $n$ iterations is given by
$X^{(n+1)} = X^{(n)} + X_e^{(n)} + X_H^{(n)} \qquad (16)$
The estimate of the high-frequency component is given as
$X_H^{(n)} = X_H^{(0)} - \left\{ \left( y^{(n)}\!\uparrow\! s \right) * H_{Canny} \right\} \qquad (17)$
where $X_H^{(0)}$ is the high-frequency component of the image $X^{(0)}$, given by
$X_H^{(0)} = X^{(0)} * H_{Canny} \qquad (18)$
So, the final iterative process of eq. (16) is rewritten for the combination of IBP and Canny edge detection as
$X^{(n+1)} = X^{(n)} + \left( y - y^{(n)} \right)\!\uparrow\! s + \left\{ X_H^{(0)} - \left( y^{(n)}\!\uparrow\! s \right) * H_{Canny} \right\} \qquad (19)$
Table II Steps of the Proposed IBP + Canny Edge Detection Algorithm
NO | STEP
1  | Read the ground-truth image
2  | Apply Gaussian blurring and down-sampling to generate the observed LR image
3  | Apply initial interpolation on the observed LR image for an initial estimate of the HR image
4  | Apply Canny edge detection on the initial estimated HR image to obtain its high-frequency component
5  | Apply the degradation to the initial estimated HR image to generate the simulated LR image
6  | Apply Canny edge detection on the simulated LR image
7  | Subtract the simulated LR image from the observed LR image; this gives the error correction
8  | Subtract the step-6 output from the step-4 output to recover the lost high-frequency component
9  | Add the step-3 output to the step-8 output; this improves the quality of the initial estimated HR image
10 | This improved HR image becomes the initial estimated HR image for the next iteration
11 | Repeat steps 5 to 10 for a predefined number of iterations
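A hedged sketch of the IBP + Canny idea (eqs. (14)-(19) and Table II) is given below, assuming scikit-image is available: besides the usual back-projected error, high-frequency detail located with a Canny edge map is re-injected each iteration. The way $H_{Canny}$ is realised here (edge map masking a Laplacian-like residual) is an interpretation, not necessarily the authors' exact filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from skimage.feature import canny

def degrade(x):
    return gaussian_filter(x, 1.0)[::2, ::2]

def high_freq(img):
    # "Canny high-pass": keep the high-frequency residual only where Canny finds edges
    resid = img - gaussian_filter(img, 1.0)
    return resid * canny(img, sigma=1.0)

rng = np.random.default_rng(5)
hr_true = gaussian_filter(rng.random((128, 128)), 2.0)
y = degrade(hr_true)                                   # steps 1-2: observed LR image

x = zoom(y, 2, order=3)                                # step 3: initial HR estimate X^(0)
for _ in range(20):                                    # steps 5-11, iterated
    y_sim = degrade(x)                                 # step 5: simulated LR image
    err = zoom(y - y_sim, 2, order=1)                  # step 7: back-projected error X_e
    xh = high_freq(x) - zoom(high_freq(y_sim), 2, order=1)   # steps 4, 6, 8: lost detail X_H
    x = x + err + xh                                   # step 9 / eq. (16)
print(float(np.mean((x - hr_true) ** 2)))
```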
IV. Results and Discussion
To evaluate the SR systems effectively, this work assumes that the original HR images exist, that the image quality degradation results from Gaussian blurring, and that the Gaussian blurring function is known. To present the performance of these algorithms, several images are tested and the results are compared with nearest-neighbour interpolation (NN), bilinear interpolation (BI), bicubic interpolation (BC), POCS, IBP, and IBP + Canny edge detection. The image quality is determined based on PSNR evaluation.
As the performance criterion, the Peak Signal-to-Noise Ratio (PSNR) is calculated. For an $M \times N$ image, the measures are defined as [18]
$MSE = \dfrac{1}{MN} \displaystyle\sum_{i} \sum_{j} \left[ X(i,j) - X^{(n)}(i,j) \right]^2 \qquad (20)$
$PSNR = 10 \log_{10} \dfrac{255 \times 255}{MSE} \qquad (21)$
where $X(i,j)$ is the original HR image and $X^{(n)}(i,j)$ is the estimated HR image produced by the algorithm.
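The two measures in eqs. (20)-(21) translate directly into short helpers; the sketch below assumes 8-bit images (peak value 255) stored as NumPy arrays of the same shape.

```python
import numpy as np

def mse(ref, est):
    # eq. (20): mean squared error between reference and estimate
    ref = ref.astype(np.float64)
    est = est.astype(np.float64)
    return np.mean((ref - est) ** 2)

def psnr(ref, est):
    # eq. (21): peak signal-to-noise ratio in dB for 8-bit images
    m = mse(ref, est)
    return float("inf") if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)
```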
The image quality is determined based on PSNR evaluation. A good reconstruction algorithm generally provides a low value of MSE and high values of MSSIM and PSNR. The resultant images are shown in Figs. 6 and 7. The PSNR values obtained using the various techniques are given in Table III.
Table III PSNR Comparison (dB) of Single-Frame Algorithms

Test Image | NN      | BI      | Bicubic | POCS    | IBP     | IBP + Canny Edge
Apple      | 26.6500 | 29.1617 | 31.6337 | 30.7417 | 28.7719 | 42.3146
Football   | 26.9158 | 29.3864 | 30.4000 | 30.3136 | 29.2219 | 42.8838
Greens     | 19.9802 | 22.1634 | 24.6540 | 28.2204 | 22.1575 | 34.4326
Monkey     | 29.6891 | 31.2404 | 33.4968 | 32.7840 | 31.0809 | 43.6170
Flower     | 29.1997 | 32.0320 | 33.9559 | 32.4331 | 31.7295 | 46.0387
Fruits     | 28.4520 | 31.0726 | 34.1511 | 32.9744 | 30.9810 | 43.7963
Fig. 6 Resultant Football image using different algorithms: (a) original image, (b) LR image, (c) NN interpolation, (d) bilinear interpolation, (e) bicubic interpolation, (f) POCS, (g) IBP, (h) IBP + Canny Edge
Fig. 7 Resultant image using different algorithms: (a) original image, (b) LR image, (c) NN interpolation, (d) bilinear interpolation, (e) bicubic interpolation, (f) POCS
References
[13] S. P. Kim and W. Y. Su, "Recursive high-resolution reconstruction of blurred multiframe images," IEEE Trans. Image Processing, vol. 2, pp. 534-539, Oct. 1993.
[14] N. K. Bose, H. C. Kim, and H. M. Valenzuela, "Recursive implementation of total least squares algorithm for image reconstruction from noisy, undersampled multiframes," in Proc. IEEE Conf. Acoustics, Speech and Signal Processing, Minneapolis, MN, Apr. 1993, vol. 5, pp. 269-272.
[15] S. H. Rhee and M. G. Kang, "Discrete cosine transform based regularized high-resolution image reconstruction algorithm," Opt. Eng., vol. 38, no. 8, pp. 1348-1356, Aug. 1999.
[16] A. K. Katsaggelos, Ed., Digital Image Restoration. Heidelberg, Germany: Springer-Verlag, vol. 23, 1991.
[17] V. B. Patel, Chintan K. Modi, C. N. Paunwala, and S. Patnaik, "Hybrid approach for single image super resolution using ISEF and IBP," in IEEE Conference on Communication Systems and Network Technologies (CSNT), June 3-5, 2011.
[18] Liyakathunisa and C. N. Ravi Kumar, "A novel super resolution reconstruction of low resolution images progressively using DCT and zonal filter based denoising," International Journal of Computer Science & Information Technology (IJCSIT), vol. 3, no. 1, Feb. 2011.