This paper proposes a full-reference image quality metric that correlates well with human judgment of image appearance. The present work adds a new dimension to structural-approach-based full-reference image quality assessment for gray-scale images. The proposed method assigns more weight, referred to as perceptual weights, to distortions present in the visual regions of interest of the reference (original) image than to distortions present in the other regions of the image. The perceptual features and their weights are computed from local energy modeling of the original image. The proposed model is validated on the image database provided by LIVE (Laboratory for Image & Video Engineering, The University of Texas at Austin), using the evaluation metrics suggested in the Video Quality Experts Group (VQEG) Phase I FR-TV test.
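The weighting idea can be sketched in a few lines of NumPy (a hypothetical simplification, not the authors' exact model): local variance stands in for local energy, and squared error is weighted by the normalized energy map so that distortions in busy regions of interest count more than distortions in flat regions.

```python
import numpy as np

def box_mean(a, k=7):
    """Mean over a k x k neighborhood, computed from shifted copies."""
    pad = k // 2
    p = np.pad(a, pad, mode="reflect")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def weighted_distortion(ref, dist, k=7):
    """Squared error weighted by local energy (variance) of the reference."""
    ref, dist = ref.astype(float), dist.astype(float)
    mu = box_mean(ref, k)
    energy = np.maximum(box_mean(ref ** 2, k) - mu ** 2, 0.0)  # local variance
    w = energy / (energy.sum() + 1e-12)  # perceptual weights, summing to 1
    return float((w * (ref - dist) ** 2).sum())
```

With this weighting, a distortion of the same magnitude placed in a textured region scores worse than one placed in a flat background, which is the behavior the paper argues matches human judgment.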
A novel predicate for active region merging in automatic image segmentation - eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
International Journal of Engineering Research and Development - IJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture Engineering,
Aerospace Engineering.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Effect of kernel size on Wiener and Gaussian image filtering - TELKOMNIKA JOURNAL
In this paper, the effect of the kernel size of Wiener and Gaussian filters on their image restoration quality is studied and analyzed. Four kernel sizes, namely 3x3, 5x5, 7x7 and 9x9, were simulated. Two different types of zero-mean noise with several variances were used: Gaussian noise and speckle noise. Several image quality indices were applied in the computer simulations; in particular, mean absolute error (MAE), mean square error (MSE) and the structural similarity (SSIM) index were used. Many images were tested in the simulations; however, the results of three of them are shown in this paper. The results show that the Gaussian filter outperforms the Wiener filter for all values of Gaussian and speckle noise variance, especially when the smallest kernel size is used. To obtain similar performance with Wiener filtering, a larger kernel size is required, which produces much more blur in the output image. The Wiener filter shows poor performance with the smallest kernel size (3x3), while the Gaussian filter shows its best results in that case. With the Gaussian filter, results similar to those obtained at low noise could be obtained at high noise variance by using a larger kernel size.
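The kernel-size effect described above can be reproduced with a small NumPy experiment (a sketch on synthetic data, not the paper's simulation code): a smooth ramp image is corrupted with zero-mean Gaussian noise and denoised with Gaussian kernels of increasing size.

```python
import numpy as np

def gaussian_kernel(k, sigma):
    """Normalized k x k Gaussian kernel."""
    ax = np.arange(k) - k // 2
    g = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    kern = np.outer(g, g)
    return kern / kern.sum()

def conv2(img, kern):
    """Direct 2-D convolution with reflect padding."""
    k = kern.shape[0]
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += kern[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))   # smooth ramp image
noisy = clean + rng.normal(0.0, 20.0, clean.shape)      # zero-mean Gaussian noise

mse = {k: np.mean((conv2(noisy, gaussian_kernel(k, k / 6.0)) - clean) ** 2)
       for k in (3, 5, 7, 9)}
```

On this smooth test image every kernel size reduces the error and larger kernels reduce it further; on images with fine detail, larger kernels instead trade noise suppression for blur, which is the trade-off the paper quantifies.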
A NOVEL METRIC APPROACH EVALUATION FOR THE SPATIAL ENHANCEMENT OF PAN-SHARPEN... - cscpconf
Various methods can be used to produce high-resolution multispectral images from a high-resolution panchromatic image (PAN) and low-resolution multispectral images (MS), mostly at the pixel level. The quality of image fusion is an essential determinant of the value of fused images for many applications. Spatial and spectral quality are the two important indices used to evaluate the quality of any fused image. However, the jury is still out on the benefits of a fused image compared with its original images. In addition, there is a lack of measures for objectively assessing the spatial resolution delivered by fusion methods, so an objective assessment of the spatial resolution of fused images is required. Therefore, this paper describes a new approach proposed to estimate the spatial resolution improvement by the High Pass Division Index (HPDI), computed from the spatial frequency of the edge regions of the image, and presents a comparison of various analytical techniques for evaluating spatial quality and estimating the colour distortion added by image fusion, including MG, SG, FCC, SD, En, SNR, CC and NRMSE. In addition, this paper concentrates on the comparison of various image fusion techniques based on pixel- and feature-level fusion.
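The spatial-frequency measurement that such an index builds on can be sketched directly (a minimal illustration, not the authors' exact HPDI formula): the classic spatial frequency is the RMS of first differences along rows and columns.

```python
import numpy as np

def spatial_frequency(img):
    """Classic spatial frequency: RMS of horizontal and vertical first differences."""
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row (horizontal) activity
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column (vertical) activity
    return np.sqrt(rf ** 2 + cf ** 2)
```

A pan-sharpened image that successfully injects PAN detail should show higher spatial frequency in edge regions than the original low-resolution MS band; a ratio (division index) of the two values then gives a scalar spatial-quality score.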
Image Segmentation Using Pairwise Correlation Clustering - IJERA Editor
A pairwise hypergraph based image segmentation framework is formulated in a supervised manner for various images. The segmentation task is to infer the edge labels over the pairwise hypergraph by maximizing the normalized cuts. Correlation clustering, a graph partitioning algorithm, has been shown to be effective in a number of applications such as identification, clustering of documents, and image segmentation. The partitioning result is derived from an algorithm that partitions a pairwise graph into disjoint groups of coherent nodes. In pairwise correlation clustering, the pairwise graph used in correlation clustering is generalized to a superpixel graph, where a node corresponds to a superpixel and a link between adjacent superpixels corresponds to an edge. Pairwise correlation clustering also considers a feature vector that extracts several visual cues from a superpixel, including brightness, color, texture, and shape. Significant progress in clustering has been achieved by algorithms based on pairwise affinities between data points. The experimental results are shown by calculating the typical cut and inference in an undirected graphical model on several datasets.
A Novel Feature Extraction Scheme for Medical X-Ray Images - IJERA Editor
X-ray images are gray scale images with almost the same textural characteristics. Conventional texture or color features cannot be used for appropriate categorization in medical X-ray image archives. This paper presents a novel combination of methods, namely GLCM, LBP and HOG, for extracting distinctive invariant features from X-ray images belonging to the IRMA (Image Retrieval in Medical Applications) database, which can be used to perform reliable matching between different views of an object or scene. GLCM represents the distributions of intensities and the information about relative positions of neighboring pixels of an image. LBP features are invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A HOG feature vector represents the local shape of an object, capturing edge information over multiple cells. These features have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent experimental results obtained on true problems of rotation invariance, at particular rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns.
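Of the three descriptors, LBP is the simplest to sketch. The following NumPy version (a basic 8-neighbor LBP, not the rotation-invariant variant the final results rely on) encodes each interior pixel by thresholding its 8 neighbors against it, then histograms the codes into a texture feature vector.

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbor local binary pattern codes for interior pixels."""
    img = img.astype(np.int32)
    H, W = img.shape
    c = img[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.uint8)
    # clockwise neighbors starting at the top-left pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def lbp_feature(img):
    """Normalized 256-bin histogram of LBP codes, usable as a texture feature."""
    hist = np.bincount(lbp8(img).ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

Concatenating this histogram with GLCM statistics and a HOG vector gives the kind of combined descriptor the paper feeds to its classifiers.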
MRI IMAGE SEGMENTATION USING LEVEL SET METHOD AND IMPLEMENT AN MEDICAL DIAGNOS... - cseij
Image segmentation has played a vital role in image processing over the last few years. The goal of image segmentation is to cluster pixels into salient image regions, i.e., regions corresponding to individual surfaces, objects, or natural parts of objects. In this paper, we propose a medical diagnosis system that uses the level set method for segmenting MRI images: it investigates a new variational level set algorithm without re-initialization to segment the MRI image, and implements a competent medical diagnosis system in MATLAB. Here we have used the speed function and the signed distance function of the image in the segmentation algorithm. The system consists of a thresholding technique, a curve evolution technique and an eroding technique. Our proposed system was tested on several MRI brain images, giving promising results by detecting normal or abnormal conditions, especially the existence of tumors. The system is applied to both simulated and real images with promising results.
Rough Set based Natural Image Segmentation under Game Theory Framework - ijsrd.com
Over the past few decades, image segmentation has been successfully applied to a number of applications. When different image segmentation techniques are applied to an image, they produce different results, especially if the images are obtained under different conditions and have different attributes. Each technique works on a specific concept, so it is important to decide which image segmentation technique should be used for a given application domain. By combining the strengths of individual segmentation techniques, the resulting integrated method yields better results, enhancing the synergy beyond what the individual methods achieve alone. This work improves the technique of combining results of different segmentation methods using the concept of game theory. This is achieved through Nash equilibrium along with various similarity distance measures. Using game theory, the problem is divided into modules which are considered as players; the number of modules depends on the number of techniques to be integrated. The modules work in a parallel and interactive manner. The effectiveness of the technique is demonstrated by simulation results on different sets of test images.
Analysis of Multi-focus Image Fusion Method Based on Laplacian Pyramid - Rajyalakshmi Reddy
This paper presents a simple and efficient algorithm for multi-focus image fusion, which uses a multiresolution signal decomposition scheme called the Laplacian pyramid method. The principle of the Laplacian pyramid transform is introduced, and based on it the fusion strategy is described in detail. Analysis of the experimental results shows that this method has good performance, and the quality of the fused image is better than the results of other methods.
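A minimal version of the pyramid fusion pipeline can be sketched as follows (plain decimation without the usual Gaussian pre-blur, to keep the sketch short; the method described above blurs before downsampling):

```python
import numpy as np

def down(img):
    """2x decimation (no pre-blur, to keep the sketch short)."""
    return img[::2, ::2]

def up(img, shape):
    """Nearest-neighbor 2x upsampling, cropped to `shape`."""
    u = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return u[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        nxt = down(cur)
        pyr.append(cur - up(nxt, cur.shape))   # detail (Laplacian) level
        cur = nxt
    pyr.append(cur)                            # coarse residual
    return pyr

def fuse(a, b, levels=3):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)   # keep the stronger detail
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))                 # average the residual
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = up(out, lvl.shape) + lvl                    # synthesize back up
    return out
```

`fuse(x, x)` reconstructs `x` exactly, a quick sanity check that the analysis/synthesis pair is consistent; with two differently focused inputs, the max-absolute rule keeps the sharper coefficient at each position.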
Image Fusion Quality Assessment of High Resolution Satellite Imagery ... - CSCJournals
Considering the importance of fusion accuracy for the quality of fused images, it seems necessary to evaluate the quality of fused images before using them in further applications. Current quality evaluation metrics are mainly developed by applying quality metrics at the pixel level and evaluating the final quality by averaging. In this paper, an object-level strategy for quality assessment of fused images is proposed. Based on the proposed strategy, image fusion quality metrics are applied to image objects, and quality assessment of the fusion is conducted by inspecting fusion quality in those image objects. Results clearly show the inconsistency of fusion behavior across different image objects and the weakness of traditional pixel-level strategies in handling these heterogeneities.
COMPUTATIONALLY EFFICIENT TWO STAGE SEQUENTIAL FRAMEWORK FOR STEREO MATCHING - ijfcstjournal
Almost all existing stereo algorithms rest on a common assumption that corresponding color or intensity values will be similar to one another. In practice, however, this does not always hold, since image color and intensity values are regularly affected by different radiometric factors such as illumination direction, change of imaging device, and illuminant color. For this reason, the raw colors recorded by the camera should not be relied on entirely, and the common assumption of color consistency often does not hold well between the stereo images in real scenarios. Therefore, the performance of most conventional stereo algorithms can be seriously degraded under radiometric variations. In this work, we develop a new stereo matching algorithm that is insensitive to changes in radiometric conditions between stereo pairs, i.e. between the left and right images. Unlike other stereo algorithms, we propose a computationally efficient two-stage sequential framework for stereo matching which can handle various radiometric variations between the stereo pairs. Experimental results show that the proposed method performs extremely well compared to other state-of-the-art stereo methods under various changes in radiometric conditions for a given stereo pair, and its execution time is also found to be lower than that of existing methods.
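One standard way to obtain the radiometric insensitivity discussed above is the census transform, which replaces each pixel by the bit pattern of its local intensity orderings; the sketch below (an illustration of the principle, not the paper's actual two-stage framework) matches census codes by Hamming distance.

```python
import numpy as np

def census(img, w=5):
    """Census transform: bit b is set when neighbor b is brighter than the center."""
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    H, W = img.shape
    code = np.zeros((H, W), dtype=np.int64)
    bit = 0
    for dy in range(w):
        for dx in range(w):
            if dy == pad and dx == pad:
                continue
            nb = p[dy:dy + H, dx:dx + W]
            code |= (nb > img).astype(np.int64) << bit
            bit += 1
    return code

def popcount(a):
    """Number of set bits per element (Hamming weight)."""
    c = np.zeros(a.shape, dtype=np.int64)
    a = a.copy()
    while a.any():
        c += a & 1
        a >>= 1
    return c

def census_disparity(left, right, max_d=8, w=5):
    """Winner-take-all disparity from Hamming distance between census codes."""
    cl, cr = census(left, w), census(right, w)
    H, W = left.shape
    cost = np.full((H, W, max_d + 1), np.iinfo(np.int64).max, dtype=np.int64)
    for d in range(max_d + 1):
        cost[:, d:, d] = popcount(cl[:, d:] ^ cr[:, :W - d])
    return cost.argmin(axis=2)
```

Because the census code depends only on local intensity orderings, it is unchanged by gain and offset changes between the two cameras, which is exactly the radiometric robustness the framework targets.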
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION - IJCI JOURNAL
The availability of imaging sensors operating in multiple spectral bands has led to the requirement for image fusion algorithms that combine the images from these sensors in an efficient way, producing an image that is more informative as well as more perceptible to the human eye. Multispectral image fusion is the process of combining optically acquired images from different spectral bands. In this paper, we use pixel-level image fusion based on principal component analysis that combines satellite images of the same scene from seven different spectral bands. Principal component analysis is used because it is well suited to gray-scale image fusion and gives good results; the main aim of the PCA technique is to reduce a large set of variables to a small set which still contains most of the information present in the large set. The paper compares different parameters, namely entropy, standard deviation, correlation coefficient, etc., for different numbers of fused images, from two to seven. Finally, the paper shows that the information content of the image saturates after fusing four images.
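The PCA weighting step can be sketched in NumPy (a generic PCA-weighted fusion, assuming co-registered single-channel bands; not the paper's exact pipeline): the fusion weights are taken from the leading eigenvector of the inter-band covariance matrix.

```python
import numpy as np

def pca_fuse(images):
    """Fuse co-registered band images with weights from the first principal component."""
    X = np.stack([im.ravel().astype(float) for im in images])  # bands x pixels
    cov = np.cov(X)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])                    # leading eigenvector, sign-fixed
    w = w / w.sum()                            # normalize to unit-sum weights
    fused = np.tensordot(w, X, axes=1).reshape(images[0].shape)
    return fused, w
```

Adding a band that is highly correlated with the existing ones barely changes the weighted sum, which is one intuition for the saturation effect the paper reports after four bands.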
AUTOMATED IMAGE MOSAICING SYSTEM WITH ANALYSIS OVER VARIOUS IMAGE NOISE - ijcsa
Mosaicing is the blending together of several arbitrarily shaped images to form one large balanced image such that the boundaries between the original images are not seen. Image mosaicing creates a large field of view of a scene, and the resulting image can also be used for texture mapping of a 3D environment. Blended images have become widely necessary for images captured from real-time sensor devices, bio-medical equipment, satellite imaging, aerospace, security systems, brain mapping, genetics, etc. The idea behind this work is to automate the image mosaicing system so that blending is fast, easy and efficient even if a large number of images is considered. This work also provides an analysis of blending over images containing different kinds of distortion and noise, which further enhances the quality of the system and makes it more reliable and robust.
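The blending step at the heart of mosaicing can be sketched for the simplest case, a horizontal seam with linear (alpha-ramp) feathering across the overlap (illustrative only; a real mosaicing system also registers the images first):

```python
import numpy as np

def blend_horizontal(left, right, overlap):
    """Feather two images that share `overlap` columns into one seamless strip."""
    H, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((H, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]          # left-only part
    out[:, wl:] = right[:, overlap:]                        # right-only part
    alpha = np.linspace(1.0, 0.0, overlap)                  # cross-fade weights
    out[:, wl - overlap:wl] = (alpha * left[:, wl - overlap:]
                               + (1.0 - alpha) * right[:, :overlap])
    return out
```

The linear cross-fade hides the seam because the weights sum to one everywhere in the overlap, so consistent content passes through unchanged while small exposure differences are smoothed out.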
The purpose of this paper is to present a survey of image registration techniques. Registration is a fundamental task in image processing used to match two or more pictures taken, for example, at different times, from different sensors, or from different viewpoints. It geometrically aligns two images: the reference and the sensed image. Specific examples of systems where image registration is a significant component include matching a target with a real-time image of a scene. Various applications of image registration are target recognition, monitoring global land usage using satellite images, matching stereo images to recover shape for navigation, and aligning images from different medical modalities for diagnosis.
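For the common special case of pure translation, registration can be sketched with phase correlation (a minimal example of the alignment problem above, assuming periodic, integer-pixel shifts):

```python
import numpy as np

def phase_correlate(ref, moved):
    """Translation (dy, dx) such that np.roll(moved, (dy, dx), axis=(0, 1)) matches ref."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    corr = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    H, W = ref.shape
    if dy > H // 2:
        dy -= H          # wrap large peaks to negative offsets
    if dx > W // 2:
        dx -= W
    return int(dy), int(dx)
```

Because only the phase of the cross-power spectrum is kept, the correlation surface collapses to a sharp peak at the translation, making the method robust to uniform brightness changes between the two images.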
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION - ijistjournal
A different image fusion algorithm based on the self-organizing feature map is proposed in this paper, aiming to produce quality images. Image fusion integrates complementary and redundant information from multiple images of the same scene to create a single composite image that contains all the important features of the original images. The resulting fused image is thus more suitable for human and machine perception and for further image processing tasks. Existing fusion techniques based on direct operation on either pixels or segments fail to produce fused images of the required quality and are mostly application specific, and existing segmentation algorithms become complicated and time consuming when multiple images are to be fused. A new method of segmenting and fusing gray-scale images using Self-Organizing Feature Maps (SOM) is proposed in this paper. The Self-Organizing Feature Map is used to produce multiple slices of the source and reference images based on various combinations of gray scale, which can be dynamically fused depending on the application. The proposed technique is applied to and analyzed for the fusion of multiple images. The technique is robust in the sense that there is no loss of information, owing to the properties of Self-Organizing Feature Maps; noise removal in the source images is done during the processing stage, and fusion of multiple images is performed dynamically to get the desired results. Experimental results demonstrate that, for quality multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective quality.
Image Fusion and Image Quality Assessment of Fused Images - CSCJournals
Accurate diagnosis of tumor extent is important in radiotherapy. This paper presents the use of image fusion of PET and MRI images. Multi-sensor image fusion is the process of combining information from two or more images into a single image; the resulting image contains more information than the individual images. PET delivers high-resolution molecular imaging with a resolution down to 2.5 mm full width at half maximum (FWHM), which allows us to observe the brain's molecular changes using specific reporter genes and probes. On the other hand, 7.0 T MRI, with sub-millimeter resolution imaging of the cortical areas down to 250 µm, allows us to visualize the fine details of the brainstem areas as well as the many cortical and sub-cortical areas. The PET-MRI fusion imaging system provides complete information on neurological diseases as well as for the cognitive neurosciences. The paper presents PCA based image fusion and also focuses on an image fusion algorithm based on the wavelet transform to improve the resolution of the images, in which the two images to be fused are first decomposed into sub-images of different frequencies, then the information fusion is performed, and finally these sub-images are reconstructed into a result image with plentiful information. We also propose image fusion in Radon space. This paper presents an assessment of image fusion by measuring the quantity of enhanced information in fused images. We use entropy, mean, standard deviation, Fusion Mutual Information, cross correlation, Mutual Information, Root Mean Square Error, Universal Image Quality Index and relative shift in mean to compare fused image quality. Comparative evaluation of fused images is a critical step in evaluating the relative performance of different image fusion algorithms. In this paper, we also propose an image quality metric based on the human vision system (HVS).
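Two of the listed measures are easy to show concretely (a sketch of the assessment step, not the full battery of metrics used above): the entropy of the fused image and the Universal Image Quality Index between the fused image and a source image.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the gray-level histogram."""
    p, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def uiqi(x, y):
    """Universal Image Quality Index: equals 1.0 only for identical images."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

Higher entropy in the fused image suggests more information was retained, while UIQI against each source jointly measures loss of correlation, luminance distortion and contrast distortion.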
Image segmentation is useful in many applications; it can identify the regions of interest in a scene or annotate the data. Existing segmentation algorithms can be categorized into region-based segmentation, data clustering, and edge-based segmentation. Region-based segmentation includes the seeded and unseeded region growing algorithms, JSEG, and the fast scanning algorithm. Due to the presence of speckle noise, segmentation of Synthetic Aperture Radar (SAR) images is still a challenging problem. We propose a fast SAR image segmentation method based on the Particle Swarm Optimization-Gravitational Search Algorithm (PSO-GSA). In this method, threshold estimation is regarded as a search procedure that looks for an appropriate value in a continuous gray-scale interval; hence, the PSO-GSA algorithm is employed to search for the optimal threshold. Experimental results indicate that our method is superior to GA-based, AFS-based and ABC-based methods in terms of segmentation accuracy, segmentation time, and thresholding.
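The threshold search itself can be illustrated with an exhaustive stand-in for the PSO-GSA optimizer (same between-class-variance objective, simpler search; the metaheuristic only matters when the search space is large):

```python
import numpy as np

def between_class_variance(img, t):
    """Otsu-style objective: weighted squared distance between class means."""
    fg, bg = img[img > t], img[img <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    wf, wb = fg.size / img.size, bg.size / img.size
    return wf * wb * (fg.mean() - bg.mean()) ** 2

def best_threshold(img, candidates=range(1, 255)):
    # exhaustive stand-in for the PSO-GSA search over candidate thresholds
    return max(candidates, key=lambda t: between_class_variance(img, t))
```

PSO-GSA replaces the exhaustive scan with a population of particles that trade off swarm attraction and gravity-like forces, which pays off when thresholding is multi-level and the candidate space grows combinatorially.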
Analysis of wavelet-based full reference image quality assessment algorithm - journalBEEI
Measurement of image quality plays an important role in numerous image processing applications such as forensic science, image enhancement, medical imaging, etc. In recent years, there has been growing interest among researchers in creating objective Image Quality Assessment (IQA) algorithms that correlate well with perceived quality. Significant progress has been made on the full reference (FR) IQA problem in the past decade. In this paper, we compare five selected FR IQA algorithms on the TID2008 image dataset. The performance and evaluation results are shown in graphs and tables. The results of the quantitative assessment showed that the wavelet-based IQA algorithms outperformed the non-wavelet-based IQA methods, except for the WASH algorithm, whose predictions were better only for certain distortion types, since it takes into account the essential structural data content of the image.
Visual Image Quality Assessment Technique using FSIM - Editor IJCATR
The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity to a "reference" or "perfect" image in some perceptual space. In order to improve the assessment accuracy for white noise, Gaussian blur, JPEG2000 compression and other distorted images, this paper puts forward an image quality assessment method based on phase congruency and gradient magnitude. The experimental results show that this image quality assessment method has higher accuracy than traditional methods and can accurately reflect the visual perception of the human eye. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image.
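The gradient-magnitude half of such a method can be sketched as follows (phase congruency is omitted for brevity; the stabilizing constant `c` is an assumed value, playing the same role as in FSIM-style similarity maps):

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude from central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_similarity(ref, dist, c=170.0):
    """Mean gradient-magnitude similarity: 1.0 for identical images, lower otherwise."""
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dist)
    return float(np.mean((2.0 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)))
```

A full FSIM-style score would compute the same kind of ratio for phase congruency as well, then pool the two similarity maps with phase congruency as the weighting term.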
A novel predicate for active region merging in automatic image segmentation - eSAT Journals
Abstract: Image segmentation is an elementary task in computer vision and image processing. This paper deals with automatic image segmentation by a region merging method. Two essential problems in a region merging algorithm are the order of merging and the stopping criterion. These two problems are solved by a novel predicate defined by the sequential probability ratio test and a minimal cost criterion. In this paper we propose an active region merging algorithm which utilizes the information acquired from perceiving edges in color images in the L*a*b* color space. By means of a color gradient recognition method, pixels without edges are clustered and considered separately to recognize a preliminary portion of the input image. The color information, along with a region growth map consisting of completely grown regions, is used to perform active region merging to combine regions with similar characteristics. Experiments on real natural images are performed to demonstrate the performance of the proposed active region merging method. Index Terms: Adaptive threshold generation, CIE L*a*b* color gradient, region merging, Sequential Probability Ratio Test (SPRT).
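The SPRT idea behind such a merge predicate can be sketched for a one-dimensional case (hypothetical parameters; the predicate above is defined over region statistics, not raw pixel differences): intensity differences across a shared boundary are tested sequentially against N(0, σ²) (same region) versus N(δ, σ²) (different regions).

```python
import numpy as np

def sprt_merge_decision(diffs, sigma=8.0, delta=25.0, alpha=0.05, beta=0.05):
    """Wald's SPRT on boundary intensity differences: returns 'merge' or 'split'."""
    upper = np.log((1.0 - beta) / alpha)    # crossing up accepts H1 (different regions)
    lower = np.log(beta / (1.0 - alpha))    # crossing down accepts H0 (same region)
    llr = 0.0
    for d in diffs:
        llr += delta * (d - delta / 2.0) / sigma ** 2  # Gaussian log-likelihood ratio step
        if llr >= upper:
            return "split"
        if llr <= lower:
            return "merge"
    return "merge" if llr < 0 else "split"  # sample budget exhausted: use the sign
```

The appeal of the sequential test is that it usually decides after very few samples while still bounding both error probabilities by α and β, which is what makes it a cheap merging predicate.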
Image Segmentation Using Pairwise Correlation ClusteringIJERA Editor
A pairwise hypergraph based image segmentation framework is formulated in a supervised manner for various images. The image segmentation is to infer the edge label over the pairwise hypergraph by maximizing the normalized cuts. Correlation clustering which is a graph partitioning algorithm, was shown to be effective in a number of applications such as identification, clustering of documents and image segmentation.The partitioning result is derived from a algorithm to partition a pairwise graph into disjoint groups of coherent nodes. In the pairwise correlation clustering, the pairwise graph which is used in the correlation clustering is generalized to a superpixel graph where a node corresponds to a superpixel and a link between adjacent superpixels corresponds to an edge. This pairwise correlation clustering also considers the feature vector which extracts several visual cues from a superpixel, including brightness, color, texture, and shape. Significant progress in clustering has been achieved by algorithms that are based on pairwise affinities between the datasets. The experimental results are shown by calculating the typical cut and inference in an undirected graphical model and datasets.
A Novel Feature Extraction Scheme for Medical X-Ray ImagesIJERA Editor
X-ray images are gray scale images with almost the same textural characteristics, so conventional texture or color features cannot be used for appropriate categorization in medical X-ray image archives. This paper presents a novel combination of methods, GLCM, LBP and HOG, for extracting distinctive invariant features from X-ray images belonging to the IRMA (Image Retrieval in Medical Applications) database, features that can be used to perform reliable matching between different views of an object or scene. GLCM represents the distributions of intensities and information about the relative positions of neighboring pixels of an image. The LBP features are invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A HOG feature vector represents the local shape of an object, capturing edge information over multiple cells. These features have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent experimental results on genuine rotation-invariance problems, including particular rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation-invariant local binary patterns.
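As an illustration of the LBP features mentioned above, a minimal sketch of the basic 8-neighbour LBP code (before the rotation-invariant mapping) follows; the clockwise neighbour ordering is a convention chosen here, not taken from the paper.

```python
def lbp_code(patch):
    """Basic 8-neighbour LBP code for the centre of a 3x3 grayscale patch."""
    c = patch[1][1]
    # clockwise neighbour order starting at the top-left pixel
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i, v in enumerate(nbrs):
        if v >= c:        # set bit i when the neighbour is at least as bright
            code |= 1 << i
    return code
```

A histogram of these codes over the image (or over cells of it) is what actually serves as the texture descriptor.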
MRI Image Segmentation Using Level Set Method and Implementation of a Medical Diagnosis Systemcseij
Image segmentation has played a vital role in image processing over the last few years. The goal of image segmentation is to cluster pixels into salient image regions, i.e., regions corresponding to individual surfaces, objects, or natural parts of objects. In this paper, we propose a medical diagnosis system using the level set method for segmenting MRI images: a new variational level set algorithm without re-initialization is investigated to segment the MRI image, and a competent medical diagnosis system is implemented in MATLAB. The speed function and the signed distance function of the image are used in the segmentation algorithm. The system consists of a thresholding technique, a curve evolution technique and an eroding technique. The proposed system was tested on MRI brain images, giving promising results in detecting normal or abnormal conditions, especially the existence of tumors. The system is applicable to both simulated and real images.
Rough Set based Natural Image Segmentation under Game Theory Frameworkijsrd.com
Over the past few decades, image segmentation has been successfully applied to a number of applications. When different image segmentation techniques are applied to an image, they produce different results, especially if the images are obtained under different conditions and have different attributes. Each technique works on a specific concept, so it is important to decide which segmentation technique should be used for a given application domain. By combining the strengths of individual segmentation techniques, the resulting integrated method yields better results than any of the individual methods alone. This work improves the technique of combining results of different methods using the concept of game theory, achieved through Nash equilibrium along with various similarity distance measures. Using game theory, the problem is divided into modules which are treated as players; the number of modules depends on the number of techniques to be integrated. The modules work in a parallel and interactive manner. The effectiveness of the technique is demonstrated by simulation results on different sets of test images.
Analysis of Multi-focus Image Fusion Method Based on Laplacian PyramidRajyalakshmi Reddy
This paper presents a simple and efficient algorithm for multi-focus image fusion, which uses a multiresolution signal decomposition scheme called the Laplacian pyramid method. The principle of the Laplacian pyramid transform is introduced, and based on it the fusion strategy is described in detail. Analysis of the experimental results shows that this method performs well, and the quality of the fused image is better than the results of other methods.
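The Laplacian pyramid fusion strategy can be sketched in 1-D, with a crude average/expand pair standing in for the paper's smoothing filters; the choose-max rule for detail coefficients is a common convention assumed here, not necessarily the paper's exact rule.

```python
def downsample(sig):
    # average adjacent pairs (crude low-pass + decimation)
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def upsample(sig, n):
    # nearest-neighbour expansion back to length n
    out = []
    for v in sig:
        out.extend([v, v])
    return out[:n]

def laplacian_pyramid(sig, levels=2):
    """Detail bands plus a coarsest approximation (1-D sketch)."""
    pyr, cur = [], sig
    for _ in range(levels):
        low = downsample(cur)
        detail = [a - b for a, b in zip(cur, upsample(low, len(cur)))]
        pyr.append(detail)
        cur = low
    pyr.append(cur)  # coarsest approximation band
    return pyr

def fuse(pyr_a, pyr_b):
    # keep the larger-magnitude detail coefficient; average the base level
    fused = [[a if abs(a) >= abs(b) else b for a, b in zip(da, db)]
             for da, db in zip(pyr_a[:-1], pyr_b[:-1])]
    fused.append([(a + b) / 2 for a, b in zip(pyr_a[-1], pyr_b[-1])])
    return fused

def reconstruct(pyr):
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = [d + u for d, u in zip(detail, upsample(cur, len(detail)))]
    return cur
```

Because each detail band stores exactly what the expand step loses, decomposition followed by reconstruction is lossless, which is what makes the pyramid a safe place to mix coefficients from two source images.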
Image Fusion Quality Assessment of High Resolution Satellite Imagery ...CSCJournals
Considering the importance of fusion accuracy for the quality of fused images, it is necessary to evaluate the quality of fused images before using them in further applications. Current quality evaluation metrics are mainly developed by applying quality metrics at the pixel level and evaluating the final quality by averaging. In this paper, an object-level strategy for quality assessment of fused images is proposed. In the proposed strategy, image fusion quality metrics are applied to image objects, and quality assessment of the fusion is conducted by inspecting fusion quality within those objects. Results clearly show the inconsistency of fusion behavior across different image objects and the weakness of traditional pixel-level strategies in handling these heterogeneities.
COMPUTATIONALLY EFFICIENT TWO STAGE SEQUENTIAL FRAMEWORK FOR STEREO MATCHINGijfcstjournal
Almost all existing stereo algorithms share a common assumption that corresponding color or intensity values are similar to one another. In practice, however, this does not always hold: image color or intensity values are regularly affected by radiometric factors such as illumination direction, change of imaging device, and illuminant color. Consequently, a matching algorithm should not rely entirely on the raw color recorded by the camera, and the common color-consistency assumption often breaks down between real stereo images. Most conventional stereo algorithms can therefore degrade seriously in performance under radiometric variations. In this work, we develop a new stereo matching algorithm that is insensitive to changes in radiometric conditions between stereo pairs, i.e., between the left and right images. Unlike other stereo algorithms, we propose a computationally efficient two-stage sequential framework for stereo matching that can handle various radiometric variations between the stereo pairs. Experimental results show that the proposed method performs extremely well compared to other state-of-the-art stereo methods under various radiometric changes for a given stereo pair, and its execution time is lower than that of existing methods.
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSIONIJCI JOURNAL
The availability of imaging sensors operating in multiple spectral bands has led to the requirement of image fusion algorithms that combine the images from these sensors efficiently to give an image that is more informative as well as perceptible to the human eye. Multispectral image fusion is the process of combining optically acquired images from different spectral bands. In this paper, we use pixel-level image fusion based on principal component analysis to combine satellite images of the same scene from seven different spectral bands. Principal component analysis is chosen because it is well suited to grayscale image fusion and gives good results: its aim is to reduce a large set of variables into a small set that still contains most of the information present in the large set. The paper compares different parameters, namely entropy, standard deviation, correlation coefficient, etc., as the number of fused images varies from two to seven. Finally, the paper shows that the information content of the image saturates after fusing four images.
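Pixel-level PCA fusion of two bands can be sketched with the standard recipe (weights from the principal eigenvector of the 2x2 band covariance, normalised to sum to one); this is a two-band illustration under that assumption, not the paper's exact seven-band pipeline.

```python
def pca_fusion_weights(band_a, band_b):
    """Mixing weights for pixel-level PCA fusion of two co-registered bands."""
    n = len(band_a)
    ma, mb = sum(band_a) / n, sum(band_b) / n
    caa = sum((x - ma) ** 2 for x in band_a) / n
    cbb = sum((y - mb) ** 2 for y in band_b) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(band_a, band_b)) / n
    # largest eigenvalue of the covariance matrix [[caa, cab], [cab, cbb]]
    lam = 0.5 * (caa + cbb + ((caa - cbb) ** 2 + 4 * cab * cab) ** 0.5)
    # corresponding (unnormalised) eigenvector: (cab, lam - caa)
    v1, v2 = cab, lam - caa
    if v1 == 0 and v2 == 0:   # degenerate (constant bands): fall back to averaging
        v1 = v2 = 1.0
    s = v1 + v2
    return v1 / s, v2 / s

def pca_fuse(band_a, band_b):
    """Weighted pixel-wise combination along the principal component."""
    w1, w2 = pca_fusion_weights(band_a, band_b)
    return [w1 * x + w2 * y for x, y in zip(band_a, band_b)]
```

The weights favour the band with the larger variance along the principal direction, which is why PCA fusion tends to preserve the more "informative" band.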
AUTOMATED IMAGE MOSAICING SYSTEM WITH ANALYSIS OVER VARIOUS IMAGE NOISEijcsa
Mosaicing is the blending together of several arbitrarily shaped images to form one large balanced image such that the boundaries between the original images are not seen. Image mosaicing creates a large field-of-view image of a scene, and the resulting image can also be used for texture mapping of a 3D environment. Blended images have become a wide necessity for images captured from real-time sensor devices, bio-medical equipment, satellite imaging, aerospace, security systems, brain mapping, genetics, etc. The idea behind this work is to automate the image mosaicing system so that blending is fast, easy and efficient even when a large number of images is considered. This work also provides an analysis of blending over images containing different kinds of distortion and noise, which further enhances the quality of the system and makes it more reliable and robust.
The purpose of this paper is to present a survey of image registration techniques. Registration is a fundamental task in image processing used to match two or more pictures taken, for example, at different times, from different sensors, or from different viewpoints. It geometrically aligns two images: the reference and sensed images. Specific examples of systems where image registration is a significant component include matching a target with a real-time image of a scene. Various applications of image registration are target recognition, monitoring global land usage using satellite images, matching stereo images to recover shape for navigation, and aligning images from different medical modalities for diagnosis.
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSIONijistjournal
A different image fusion algorithm based on the self-organizing feature map is proposed in this paper, aiming to produce quality images. Image fusion integrates complementary and redundant information from multiple images of the same scene to create a single composite image that contains all the important features of the original images. The resulting fused image is thus more suitable for human and machine perception or for further image processing tasks. Existing fusion techniques based on direct operation on either pixels or segments fail to produce fused images of the required quality and are mostly application specific, and existing segmentation algorithms become complicated and time consuming when multiple images are to be fused. A new method of segmenting and fusing gray scale images adopting Self-Organizing Feature Maps (SOM) is proposed in this paper. The SOM is adopted to produce multiple slices of the source and reference images based on various combinations of gray scale, which can be dynamically fused depending on the application. The proposed technique is adopted and analyzed for fusion of multiple images. The technique is robust in the sense that there is no loss of information, owing to the properties of Self-Organizing Feature Maps; noise removal in the source images is done during the processing stage, and fusion of multiple images is performed dynamically to get the desired results. Experimental results demonstrate that, for quality multifocus image fusion, the proposed method performs better than some popular image fusion methods in both subjective and objective qualities.
Image Fusion and Image Quality Assessment of Fused ImagesCSCJournals
Accurate diagnosis of tumor extent is important in radiotherapy. This paper presents the use of image fusion of PET and MRI images. Multi-sensor image fusion is the process of combining information from two or more images into a single image; the resulting image contains more information than the individual images. PET delivers high-resolution molecular imaging with a resolution down to 2.5 mm full width at half maximum (FWHM), which allows us to observe the brain's molecular changes using specific reporter genes and probes. On the other hand, 7.0 T MRI, with sub-millimeter resolution of the cortical areas down to 250 µm, allows us to visualize the fine details of the brainstem areas as well as the many cortical and sub-cortical areas. The PET-MRI fusion imaging system provides complete information on neurological diseases as well as cognitive neurosciences. The paper presents PCA-based image fusion and also focuses on a wavelet-transform-based fusion algorithm to improve the resolution of the images, in which the two images to be fused are first decomposed into sub-images of different frequencies, the information fusion is then performed, and finally these sub-images are reconstructed into the result image with plentiful information. We also propose image fusion in Radon space. This paper presents an assessment of image fusion by measuring the quantity of enhanced information in fused images. We use entropy, mean, standard deviation, Fusion Mutual Information, cross correlation, Mutual Information, Root Mean Square Error, Universal Image Quality Index and Relative Shift in Mean to compare fused image quality. Comparative evaluation of fused images is a critical step in evaluating the relative performance of different image fusion algorithms. In this paper, we also propose an image quality metric based on the human vision system (HVS).
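Of the metrics listed, entropy is the simplest to compute from the image histogram; a minimal sketch (treating the image as a flat list of gray values):

```python
import math
from collections import Counter

def entropy(pixels):
    """Shannon entropy (bits per pixel) of a grayscale image's histogram."""
    counts = Counter(pixels)
    n = len(pixels)
    # sum of -p * log2(p) over the occupied histogram bins
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A fused image that carries more information than its inputs will typically show a higher entropy than either source, which is how the metric is used for fusion assessment.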
Image segmentation is useful in many applications: it can identify the regions of interest in a scene or annotate the data. Existing segmentation algorithms can be categorized into region-based segmentation, data clustering, and edge-based segmentation; region-based segmentation includes the seeded and unseeded region growing algorithms, JSEG, and the fast scanning algorithm. Due to the presence of speckle noise, segmentation of Synthetic Aperture Radar (SAR) images is still a challenging problem. We propose a fast SAR image segmentation method based on the Particle Swarm Optimization-Gravitational Search Algorithm (PSO-GSA). In this method, threshold estimation is regarded as a search procedure that looks for an appropriate value in a continuous grayscale interval; hence, the PSO-GSA algorithm is employed to search for the optimal threshold. Experimental results indicate that our method is superior to GA-based, AFS-based and ABC-based methods in terms of segmentation accuracy, segmentation time, and thresholding.
Analysis of wavelet-based full reference image quality assessment algorithmjournalBEEI
Measurement of image quality plays an important role in numerous image processing applications such as forensic science, image enhancement, medical imaging, etc. In recent years, there has been growing interest among researchers in creating objective Image Quality Assessment (IQA) algorithms that correlate well with perceived quality, and significant progress has been made on the full reference (FR) IQA problem in the past decade. In this paper, we compare 5 selected FR IQA algorithms on the TID2008 image dataset. The performance and evaluation results are shown in graphs and tables. The quantitative assessment showed that the wavelet-based IQA algorithms outperformed the non-wavelet-based IQA methods, except for the WASH algorithm, whose predictions were superior only for certain distortion types since it takes into account the essential structural content of the image.
Visual Image Quality Assessment Technique using FSIMEditor IJCATR
The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. In order to improve the assessment accuracy for white noise, Gaussian blur, JPEG2000 compression and other distorted images, this paper puts forward an image quality assessment method based on phase congruency and gradient magnitude. The experimental results show that this method has higher accuracy than traditional methods and can accurately reflect the human eye's visual perception of the image. In this paper, we propose an image information measure that quantifies the information present in the reference image and how much of this reference information can be extracted from the distorted image.
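The gradient-magnitude half of such a measure reduces to a per-pixel similarity ratio between reference and distorted gradients. The sketch below uses central differences and a stabilising constant `c` whose value is an assumption made here, not taken from the paper.

```python
def gradient_magnitude(img, x, y):
    """Central-difference gradient magnitude at interior pixel (x, y)."""
    gx = (img[y][x + 1] - img[y][x - 1]) / 2
    gy = (img[y + 1][x] - img[y - 1][x]) / 2
    return (gx * gx + gy * gy) ** 0.5

def similarity(g_ref, g_dist, c=0.01):
    """Per-pixel similarity of two gradient magnitudes (FSIM-style term).

    Equals 1 when the gradients match and decreases with distortion;
    the constant c keeps the ratio stable near zero gradients.
    """
    return (2 * g_ref * g_dist + c) / (g_ref ** 2 + g_dist ** 2 + c)
```

In FSIM-style metrics the analogous phase-congruency similarity term is computed the same way, and the per-pixel values are pooled (typically weighted by phase congruency) into a single score.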
Blind Image Quality Assessment with Local Contrast Features ijcisjournal
The aim of this research is to create a tool to evaluate distortion in images without the information about
the original image. The work extracts statistical information about the edges and boundaries in the image and
studies the correlation between the extracted features. Changes in structural information, such as the shape and
number of edges in the image, drive the quality prediction. Local contrast features are effectively
detected from the responses of Gradient Magnitude (G) and Laplacian of Gaussian (L) operations. Using
the joint adaptive normalisation, G and L are normalised. Normalised values are quantized into M and N
levels respectively. For these quantised M levels of G and N levels of L, Probability (P) and conditional
probability (C) are calculated. Four sets of values are formed, namely the marginal distribution of gradient magnitude Pg, the marginal distribution of Laplacian of Gaussian Pl, the conditional probability of gradient magnitude Cg and the conditional probability of Laplacian of Gaussian Cl; these four models are Pg, Pl, Cg and Cl. The assumption is that the dependencies between the gradient magnitude and Laplacian of Gaussian features reflect the level of distortion in the image. To quantify these dependencies, Spearman and Pearson correlations between Pg, Pl and Cg, Cl are calculated; the four correlation values of each image are the quantities of interest. Results are also compared with the classical Structural Similarity Index Measure (SSIM).
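The marginal and conditional distributions over quantised G and L levels amount to plain frequency counts; a sketch follows, where the function and variable names are illustrative rather than the authors'.

```python
from collections import Counter

def joint_stats(g_levels, l_levels, m, n):
    """Marginal and conditional distributions of quantised GM and LoG values.

    g_levels / l_levels are per-pixel quantised indices in [0, m) / [0, n).
    Returns (Pg, Pl, Cg, Cl) as plain dictionaries, mirroring the four
    feature sets described in the abstract.
    """
    total = len(g_levels)
    joint = Counter(zip(g_levels, l_levels))   # joint histogram of (G, L)
    pg = Counter(g_levels)
    pl = Counter(l_levels)
    Pg = {i: pg[i] / total for i in range(m)}
    Pl = {j: pl[j] / total for j in range(n)}
    # Cg[i][j]: probability of G level i given L level j, and vice versa
    Cg = {i: {j: (joint[(i, j)] / pl[j] if pl[j] else 0.0)
              for j in range(n)} for i in range(m)}
    Cl = {j: {i: (joint[(i, j)] / pg[i] if pg[i] else 0.0)
              for i in range(m)} for j in range(n)}
    return Pg, Pl, Cg, Cl
```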
A HVS based Perceptual Quality Estimation Measure for Color ImagesIDES Editor
Human eyes are the best evaluation model for
assessing the image quality as they are the ultimate receivers
in numerous image processing applications. Mean squared
error (MSE) and peak signal-to-noise ratio (PSNR) are the
two most common full-reference measures for objective
assessment of the image quality. These are well known for
their computational simplicity and applicability for
optimization purposes, but somehow fail to correlate with the
Human Visual System (HVS) characteristics. In this paper a
novel HVS based perceptual quality estimation measure for
color images is proposed. The effects of error, structural
distortion and edge distortion have been taken into account in
order to determine the perceptual quality of images
contaminated with various types of distortions such as noise,
blurring, compression, contrast stretching and rotation.
Subjective evaluation using Difference Mean Opinion Score
(DMOS), is also performed for assessment of the perceived
image quality. As depicted by the correlation values, the
proposed quality estimation measure proves to be an efficient
HVS based quality index. The comparisons in results also
show better performance than conventional PSNR and
Structural Similarity (SSIM).
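For reference, the PSNR baseline that such measures are compared against is computed from the MSE as follows (a peak value of 255 is assumed for 8-bit images):

```python
import math

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio between two equal-size grayscale images."""
    flat_r = [p for row in ref for p in row]
    flat_d = [p for row in dist for p in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_r, flat_d)) / len(flat_r)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak * peak / mse)
```

Its computational simplicity is exactly what the abstract notes: PSNR is a pure pixel-difference statistic, which is also why it fails to track HVS characteristics such as structural distortion.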
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDYcsandit
The majority of applications require high resolution images to derive and analyze data accurately and easily, and image super resolution plays an effective role in those applications. Image super resolution is the process of producing a high resolution image from a low resolution image. In this paper, we study various image super resolution techniques with respect to the quality of results and processing time. This comparative study introduces a comparison between four algorithms for single image super-resolution. For a fair comparison, the compared algorithms are tested on the same dataset and the same platform to show the major advantages of one over the others.
Image fusion is a technique of combining two or more images of the same scene to form a single fused image that shows the essential information from all of them. Image fusion is also used for removing noise from images: noise is an undesirable component which degrades the quality of an image and affects its clarity, and it can be of various kinds, for example Gaussian noise, impulse noise, uniform noise and so on. Images sometimes become corrupted during acquisition or transmission, or because of faulty memory locations in the hardware. Image fusion can be performed at three levels: pixel-level fusion, feature-level fusion and decision-level fusion. There are essentially two kinds of image fusion techniques: spatial domain techniques and transform domain techniques. Principal component analysis (PCA) fusion, the averaging method and high-pass filtering are spatial domain methods, while methods involving a transform, such as the Discrete Cosine Transform or the Discrete Wavelet Transform, are transform domain fusion methods. The various fusion methods each have advantages and disadvantages, and many techniques suffer from the problem of color artifacts appearing in the fused image. Related to fusion is the cyclopean view: one of the most astonishing properties of human stereo vision is the fusion of the left and right views of a scene into a single cyclopean one. Under typical viewing conditions, the world appears as seen from a virtual eye placed halfway between the left and right eye positions; this perceived picture of the world is never recorded directly by any sensory array, but constructed by our neural machinery. The term cyclopean refers to a type of visual stimulus that is defined by binocular disparity alone. It was suspected that stereopsis might reveal hidden objects, which could be helpful for discovering camouflaged items; the critical result of experiments using random-dot stereograms was that disparity alone is sufficient for stereopsis, whereas it had previously only been shown that binocular disparity was necessary for it.
Super-resolution (SR) is the process of obtaining a high resolution (HR) image or
a sequence of HR images from a set of low resolution (LR) observations. Block
matching algorithms are used for motion estimation to obtain motion vectors between the
frames in super-resolution. The implementation and comparison of two different types of
block matching algorithms viz. Exhaustive Search (ES) and Spiral Search (SS) are
discussed. Advantages of each algorithm are given in terms of motion estimation
computational complexity and Peak Signal to Noise Ratio (PSNR). The Spiral Search
algorithm achieves PSNR close to that of Exhaustive Search at less computation time than
that of Exhaustive Search. The algorithms that are evaluated in this paper are widely used
in video super-resolution and also have been used in implementing various video standards
like H.263, MPEG4, H.264.
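The Exhaustive Search referred to above can be sketched as a brute-force scan over a search window; the SAD cost and the window parameterisation are standard choices assumed here (Spiral Search visits the same candidates in a centre-outward order, which lets it terminate early).

```python
def sad(frame, x, y, block):
    """Sum of absolute differences between `block` and the frame patch at (x, y)."""
    return sum(abs(frame[y + j][x + i] - block[j][i])
               for j in range(len(block)) for i in range(len(block[0])))

def exhaustive_search(frame, block, cx, cy, radius):
    """Exhaustive-search block matching: try every offset in the window."""
    best = (None, float("inf"))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = cx + dx, cy + dy
            # skip candidates where the block would fall outside the frame
            if (0 <= x <= len(frame[0]) - len(block[0])
                    and 0 <= y <= len(frame) - len(block)):
                cost = sad(frame, x, y, block)
                if cost < best[1]:
                    best = ((dx, dy), cost)
    return best  # (motion vector, SAD cost)
```

With a window radius r this evaluates (2r+1)² candidates per block, which is the computational cost that faster search patterns trade against matching quality.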
Development and Comparison of Image Fusion Techniques for CT&MRI ImagesIJERA Editor
Image processing techniques primarily focus upon enhancing the quality of an image or a set of images to derive the maximum information from them. Image fusion is a technique for producing a superior quality image from a set of available images: it is the process of combining relevant information from two or more images into a single image, wherein the resulting image is more informative and complete than any of the input images. A lot of research is being done in this field, encompassing areas of computer vision, automatic object detection, image processing, parallel and distributed processing, robotics and remote sensing. This paper explains the theoretical and implementation issues of seven image fusion algorithms and the experimental results of the same. The fusion algorithms are assessed based on the study and development of several image quality metrics.
An Experimental Study into Objective Quality Assessment of Watermarked ImagesCSCJournals
In this paper, we study the quality assessment of watermarked and attacked images using extensive experiments and related analysis. The process of watermarking usually leads to loss of visual quality, and therefore it is crucial to estimate the extent of quality degradation and its perceived impact. To this end, we have analyzed the performance of four image quality assessment (IQA) metrics: Structural Similarity Index (SSIM), Singular Value Decomposition Metric (M-SVD), Image Quality Score (IQS) and PSNR, on watermarked and attacked images. The watermarked images are obtained by using three different schemes, viz., (1) DCT based random number sequence watermarking, (2) DWT based random number sequence watermarking and (3) RBF Neural Network based watermarking. The signed images are attacked by using five different image processing operations. We observe that the metrics behave identically for all three watermarking schemes. An important conclusion of our study is that PSNR is not a suitable metric for IQA, as it does not correlate well with the human visual system's (HVS) perception. It is also found that the M-SVD scatters significantly after embedding the watermark and after attacks as compared to SSIM and IQS; it is therefore a less effective quality assessment metric for watermarked and attacked images. In contrast to PSNR and M-SVD, SSIM and IQS exhibit more stable and consistent performance. Their comparison further reveals that, except for the case of counterclockwise rotation, IQS scatters relatively less for all other four attacks used in this work. It is concluded that IQS is comparatively more suitable for quality assessment of signed and attacked images.
COMPRESSION BASED FACE RECOGNITION USING DWT AND SVMsipij
Biometrics are used to identify a person effectively and are employed in almost all day-to-day
applications. In this paper, we propose compression based face recognition using the Discrete Wavelet Transform
(DWT) and Support Vector Machine (SVM). The novel concept of converting many images of single person
into one image using averaging technique is introduced to reduce execution time and memory. The DWT is
applied on averaged face image to obtain approximation (LL) and detailed bands. The LL band coefficients
are given as input to SVM to obtain Support vectors (SV’s). The LL coefficients of DWT and SV’s are fused
based on arithmetic addition to extract final features. The Euclidean Distance (ED) is used to compare test
image features with database image features to compute performance parameters. It is observed that the
proposed algorithm performs better than existing algorithms.
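A one-level Haar approximation (LL) band, as used before the SVM stage, can be sketched as 2x2 block averaging (the unnormalised Haar low-pass in both directions); this is a simplification standing in for a general DWT library call.

```python
def haar_ll(img):
    """One-level 2-D Haar approximation (LL) band of an even-sized image.

    Each non-overlapping 2x2 block is replaced by its average, halving
    the image in both dimensions.
    """
    return [[(img[y][x] + img[y][x + 1]
              + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]
```

The LL band keeps the coarse appearance of the face at a quarter of the pixel count, which is exactly why it is an attractive input for the SVM: most of the discriminative energy at a fraction of the dimensionality.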
An Experiment with Sparse Field and Localized Region Based Active Contour Int...CSCJournals
This paper discusses various experiments conducted on different types of level-set interactive segmentation techniques using Matlab software on selected images. The objective is to assess their effectiveness on specific natural images which have complex composition in terms of intensity, colour mix, indistinct object boundaries, low contrast, etc. Besides visual assessment, measures such as the Jaccard Index, Dice Coefficient and Hausdorff Distance have been computed to assess the accuracy of these techniques between segmented and ground truth images. This paper particularly discusses Sparse Field Matrix and Localized Region Based Active Contours, both based on level sets. These techniques were not found to be effective where the object boundary is not very distinct and/or has low contrast with the background, and they were also ineffective on images where the foreground object stretches up to the image boundary.
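The Jaccard Index and Dice Coefficient mentioned above are simple set-overlap measures; a minimal sketch over binary masks represented as sets of pixel coordinates:

```python
def jaccard(seg, truth):
    """Jaccard index between two binary masks given as sets of pixel coords."""
    union = len(seg | truth)
    return len(seg & truth) / union if union else 1.0

def dice(seg, truth):
    """Dice coefficient; algebraically equals 2J/(1+J) for Jaccard index J."""
    denom = len(seg) + len(truth)
    return 2 * len(seg & truth) / denom if denom else 1.0
```

Both range over [0, 1] with 1 meaning a perfect match against the ground truth; Dice weights the intersection more heavily, so it is always at least as large as Jaccard.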
A MORPHOLOGICAL MULTIPHASE ACTIVE CONTOUR FOR VASCULAR SEGMENTATIONijbbjournal
This paper presents a morphological active contour ideal for vascular segmentation in biomedical images.
The unenhanced images of vessels and background are successfully segmented using a two-step
morphological active contour based upon Chan and Vese’s Active Contour without Edges. Using dilation
and erosion as an approximation of curve evolution, the contour provides an efficient, simple, and robust
alternative to solving partial differential equations used by traditional level-set Active Contour models. The
proposed method is demonstrated with segmented data set images and compared to results garnered from
multiphase Active Contour without Edges, morphological watershed, and Fuzzy C-means segmentations.
MAGE Q UALITY A SSESSMENT OF T ONE M APPED I MAGESijcga
This paper proposes an objective assessment method
for perceptual image quality of tone mapped images.
Tone mapping algorithms are used to display high dy
namic range (HDR) images onto standard display
devices that have low dynamic range (LDR). The prop
osed method implements visual attention to define
perceived structural distortion regions in LDR imag
es, so that a reasonable measurement of distortion
between HDR and LDR images can be performed. Since
the human visual system is sensitive to structural
information, quality metrics that can measure struc
tural similarity between HDR and LDR images are
used. Experimental results with a number of HDR and
tone mapped image pairs show the effectiveness of
the proposed method.
Similar to Perceptual Weights Based On Local Energy For Image Quality Assessment (20)
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
Natural birth techniques - Mrs.Akanksha Trivedi Rama University
Perceptual Weights Based On Local Energy For Image Quality Assessment
Sudhakar Nagalla & Ramesh Babu Inampudi
International Journal of Image Processing (IJIP), Volume (8) : Issue (6) : 2014 468
Perceptual Weights Based On Local Energy For Image Quality
Assessment
Sudhakar Nagalla sudhakar.nagalla@becbapatla.ac.in
Department of Computer Science and Engineering
Bapatla Engineering College
Bapatla, 522102, India
Ramesh Babu Inampudi rinampudi@hotmail.com
Department of Computer Science and Engineering
Acharya Nagarjuna University
Guntur, 522510, India
Abstract
This paper proposes an image quality metric that can effectively measure the quality of an image
that correlates well with human judgment on the appearance of the image. The present work
adds a new dimension to the structural approach based full-reference image quality assessment
for gray scale images. The proposed method assigns more weight to the distortions present in the
visual regions of interest of the reference (original) image than to the distortions present in the
other regions of the image, referred to as perceptual weights. The perceptual features and their
weights are computed based on the local energy modeling of the original image. The proposed
model is validated using the image database provided by LIVE (Laboratory for Image & Video
Engineering, The University of Texas at Austin) based on the evaluation metrics as suggested in
the video quality experts group (VQEG) Phase I FR-TV test.
Keywords: Image Quality, HVS, Full-reference Quality Assessment, Perceptual Weights.
1. INTRODUCTION
Any image processing system should be aware of the impacts of processing on the visual quality
of the resulting image. Numerous algorithms for image quality assessment (IQA) have been
investigated and developed over the last several decades. Objective image quality measurement
seeks to assess the quality of images algorithmically. Objective image quality metrics can be
classified as full-reference, in which the algorithm has access to the original (considered to be
distortion-free) image; no-reference, in which the algorithm has access only to the distorted
image; and reduced-reference, in which the algorithm has partial information regarding the
original image. A comprehensive review of research and challenges in image quality assessment
is presented in [1].
In [2], a number of simple statistical image quality metrics based on numerical errors are
compared for gray scale image compression. These metrics include average difference,
maximum difference, absolute error, mean square error (MSE), peak MSE, Laplacian MSE,
histogram and Hosaka plot. It is observed that although some numerical measures correlate well
with the human response for a specific compression technique, they are not found to be reliable
for evaluation across various methods of compression. The most widely adopted statistical
feature is the Mean Squared Error (MSE). However, MSE and its variants may not correlate well
with subjective quality measures because human perception of image distortions and artifacts is
unaccounted for. A detailed discussion on MSE is presented by Girod [3].
Most HVS based quality assessment metrics share an error-sensitivity based paradigm [4], which
aims to quantify the strength of the errors between the reference and the distorted signals in a
perceptually meaningful way. The well-known Visible Differences Predictor (VDP) [5], Lubin's
algorithm [6], Teo and Heeger's metric [7], the information mean square error (IMSE) perceptual
image quality metric proposed by Tompa et al. [8], the measure of perceptual image quality of
Westen et al. [9], the comprehensive distortion metric for digital color images presented by
Winkler [10], the contrast signal-to-noise ratio (CSNR) image quality metric of Yao et al. [11], and
the visual information fidelity (VIF) metric introduced by Sheikh and Bovik [12] all belong to this
category. The rest of the paper is organized as follows: Section 2 explains the structural similarity
measure, Section 3 presents the local energy model for detecting image features, Section 4
explains the weighting of structural similarity indices and the formulation of the Perceptual
Structural Similarity index, and Section 5 presents the results, followed by conclusions.
2. STRUCTURAL SIMILARITY MEASURE
One distinct feature that makes natural image signals different from a "typical" image randomly
picked from the image space is that they are highly structured: the signal samples exhibit strong
dependencies amongst themselves. These dependencies carry important information about the
structures of objects in the visual scene. An image quality metric that ignores such dependencies
may fail to provide effective predictions of image quality. Structural similarity based methods
[13, 14] of image quality assessment claim to account for such dependencies in assessing image
quality. In [14] a more generalized and stable version of the universal quality index was proposed,
named the Structural SIMilarity (SSIM) quality measure.
Let x = {x_i | i = 1, 2, ..., N} and y = {y_i | i = 1, 2, ..., N} be two discrete non-negative signals
that are aligned with each other (e.g., two image patches extracted from the same spatial location
of the original image and the distorted image being compared). Let μ_x, μ_y, σ_x, σ_y, and σ_xy
denote the mean intensity of x, the mean intensity of y, the standard deviation of x, the standard
deviation of y, and the covariance between x and y, respectively. The Structural Similarity
measure between the image patches is defined in (1), where C1 and C2 are small constants
introduced to avoid instability when the denominator is close to zero.

SSIM(x, y) = [ (2 μ_x μ_y + C1)(2 σ_xy + C2) ] / [ (μ_x^2 + μ_y^2 + C1)(σ_x^2 + σ_y^2 + C2) ]   (1)
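Eq. (1) can be sketched directly in NumPy. The constants C1 and C2 below are the values commonly used for 8-bit images; the paper only states that they are small stabilizing constants, so treat them as assumptions:

```python
import numpy as np

def ssim_patch(x, y, C1=6.5025, C2=58.5225):
    """SSIM between two aligned gray-scale patches, per Eq. (1).

    Defaults C1 = (0.01 * 255)**2 and C2 = (0.03 * 255)**2 are the usual
    8-bit-image choices (an assumption; the paper does not fix them)."""
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()            # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()  # sigma_xy
    num = (2.0 * mu_x * mu_y + C1) * (2.0 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den
```

For identical patches the numerator and denominator coincide, so the index is exactly 1.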
Let X and Y be the two images being compared. A local moving-window approach is followed to
compute SSIM(X, Y). The window moves pixel by pixel from the top-left corner to the bottom-right
corner of the image. In each step, the local statistics and the index SSIM(x_j, y_j) are calculated
using (1) within the local window j. The SSIM index between X and Y is defined in (2), where N_s
is the number of local windows in the image and W_j(x_j, y_j) is the weight given to the j-th
window of the image. If all the local regions in the image are equally weighted, then
W_j(x_j, y_j) = 1. This results in the mean SSIM (MSSIM) measure employed in [14].
SSIM(X, Y) = [ Σ_{j=1}^{N_s} W_j(x_j, y_j) · SSIM(x_j, y_j) ] / [ Σ_{j=1}^{N_s} W_j(x_j, y_j) ]   (2)
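The moving-window computation of Eq. (2) can be sketched as follows. This is a self-contained illustration, not the authors' implementation: the 8×8 window size and the plain double loop are assumptions, and the inner helper restates Eq. (1):

```python
import numpy as np

def mean_ssim(X, Y, weight_fn=None, win=8):
    """Weighted SSIM over a pixel-by-pixel moving window, per Eq. (2).

    weight_fn(xj, yj) returns W_j for window j; weight_fn=None gives
    W_j = 1 for every window, reducing to the MSSIM of [14]."""

    def ssim(x, y, C1=6.5025, C2=58.5225):     # Eq. (1) on one window
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cxy = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
               ((mx * mx + my * my + C1) * (vx + vy + C2))

    X = np.asarray(X, dtype=np.float64)
    Y = np.asarray(Y, dtype=np.float64)
    h, w = X.shape
    num = den = 0.0
    for i in range(h - win + 1):               # window slides one pixel
        for j in range(w - win + 1):           # at a time
            xj = X[i:i + win, j:j + win]
            yj = Y[i:i + win, j:j + win]
            Wj = 1.0 if weight_fn is None else weight_fn(xj, yj)
            num += Wj * ssim(xj, yj)
            den += Wj
    return num / den
```

Note that any constant weighting function cancels out of the ratio, recovering the plain MSSIM.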
It may be noted that the MSSIM algorithm gives equal importance to distortions in all local
regions of the image. Wang et al. [14] suggest that the performance of SSIM can be improved
by weighting the local SSIM indices. They also suggest that prior knowledge about the
importance of different regions in the image, if available, can be converted into a weighting
function. A variety of such approaches can be found in [15]-[20].
Studies of visual attention and eye movements [6, 21, 22] have shown that humans attend to a
few areas in an image. Even when unlimited viewing time is provided, subjects continue to focus
on a few areas rather than scanning the whole image. These areas are often highly correlated
amongst different subjects when viewed in the same context. In order to automatically determine
the parts of an image that a human is likely to attend to, we need to understand the operation of
human visual attention and eye movements. In [23], many algorithms for defining visual regions
of interest were evaluated in comparison with eye fixations. The present work adopts the local
energy model to identify feature-rich local regions, which are normally attended to by humans,
and to define a weighting function that is proportional to the feature richness of the region. The
weighting function is used in (2) to define the Perceptual Structural SIMilarity index PSSIM^e
proposed in this paper.
3. LOCAL ENERGY MODELING FOR FEATURE DETECTION
The local energy model of feature detection postulates that features are perceived at points of
maximum phase congruency in an image. Venkatesh and Owens [24] show that points of
maximum phase congruency can be calculated equivalently by searching for peaks in the local
energy function. The calculation of energy from spatial filters in quadrature pairs has been central
to the models of human visual perception proposed by Heeger [25], Adelson and Bergen [26].
Local frequency and, in particular, phase information in signals are of importance in calculating
local energy. To preserve phase information, linear-phase filters must be used; that is, one must
use non-orthogonal filters that form symmetric/antisymmetric quadrature pairs. In this work, the
approach of Morlet et al. [27] is followed with a modification in the choice of filters: logarithmic
Gabor (log-Gabor) functions [28, 29] are used instead of Gabor filters, since the maximum
bandwidth of a Gabor filter is limited to approximately one octave and Gabor filters are not
optimal when one needs broad spectral information with maximal spatial localization.
Field [28] suggests that natural images are better coded by filters that have Gaussian transfer
functions when viewed on the logarithmic frequency scale. Log-Gabor functions, by definition,
have no DC component, and the transfer function of the log-Gabor function has an extended tail
at the high-frequency end. Field's studies of the statistics of natural images indicate that natural
images have amplitude spectra that fall off approximately as the inverse of frequency. To encode
images having such spectral characteristics, one should use filters with similar spectra. Field
suggests that log-Gabor functions, having extended tails, should be able to encode natural
images more efficiently than, say, ordinary Gabor functions, which would over-represent the low
frequency components and under-represent the high frequency components in any encoding.
Another point in support of the log-Gabor function is that it is consistent with measurements on
mammalian visual systems, which indicate cell responses that are symmetric on the log
frequency scale.
The local energy function is computed using log-Gabor filters at 4 scales, with center frequencies
of 1/3, 1/6, 1/12 and 1/24 cycles/pixel, and 6 orientations at 0° (horizontal), 30°, 60°, 90°
(vertical), 120°, and 150°. The following discussion [29] is made for a specific orientation θ of the
filter. If I(x) denotes the image signal and M_n^e and M_n^o denote the even-symmetric (cosine)
and odd-symmetric (sine) filters at a scale n, the respective responses e_n and o_n of each
quadrature pair of filters can be represented by the vector

[ e_n(x), o_n(x) ] = [ I(x) * M_n^e, I(x) * M_n^o ]   (3)
The amplitude A_n(x) and phase φ_n(x) of the transform at a given scale are given by

A_n(x) = sqrt( e_n(x)^2 + o_n(x)^2 )   (4)

φ_n(x) = atan2( e_n(x), o_n(x) )   (5)
At each point x in a signal, an array of these response vectors is obtained, one vector for each
scale of filter of the chosen orientation. These response vectors form the basis of a localized
representation of the signal, and they can be used to calculate the resultant local energy vector
at point x. The filter bank must be designed so that the transfer function of each filter overlaps
sufficiently with its neighbors, so that the sum of all the transfer functions forms a relatively
uniform coverage of the spectrum. If the local energy is to accurately represent the feature
strength at point x, then a broad range of frequencies in the signal must be retained. The local
energy E_θ(x) at point x of the image for a given orientation θ can be calculated from F(x),
formed by summing the even filter convolutions over all scales, and H(x), estimated by summing
the odd filter convolutions over all scales, as given by
F(x) = Σ_n e_n(x)   (6)

H(x) = Σ_n o_n(x)   (7)

E_θ(x) = sqrt( F(x)^2 + H(x)^2 )   (8)
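Equations (3)-(8) can be sketched for a one-dimensional signal: the even and odd responses of Eq. (3) are taken as the real and imaginary parts of the inverse FFT of the log-Gabor-filtered spectrum. The wavelengths (3, 6, 12, 24 pixels, matching the center frequencies above) and the bandwidth parameter sigma_on_f = 0.55 follow Kovesi's commonly used defaults and are assumptions here:

```python
import numpy as np

def local_energy_1d(signal, n_scales=4, min_wavelength=3,
                    mult=2.0, sigma_on_f=0.55):
    """Local energy of a 1-D signal via log-Gabor quadrature pairs,
    following Eqs. (3)-(8) in one dimension (illustrative sketch)."""
    N = len(signal)
    S = np.fft.fft(signal)
    freq = np.fft.fftfreq(N)            # signed frequencies
    radius = np.abs(freq)
    radius[0] = 1.0                     # avoid log(0); DC is zeroed below
    F = np.zeros(N)                     # sum of even responses, Eq. (6)
    H = np.zeros(N)                     # sum of odd responses,  Eq. (7)
    wavelength = min_wavelength
    for _ in range(n_scales):
        f0 = 1.0 / wavelength           # center frequency of this scale
        log_gabor = np.exp(-(np.log(radius / f0) ** 2) /
                           (2.0 * np.log(sigma_on_f) ** 2))
        log_gabor[0] = 0.0              # log-Gabor has no DC component
        log_gabor[freq < 0] = 0.0       # one-sided (analytic) filter
        resp = np.fft.ifft(S * log_gabor)
        F += np.real(resp)              # e_n(x)
        H += np.imag(resp)              # o_n(x)
        wavelength *= mult
    return np.sqrt(F ** 2 + H ** 2)     # E_theta(x), Eq. (8)
```

For a step signal the energy peaks at the discontinuities, which is exactly the behaviour the local energy model exploits for feature detection.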
Figure 1(a) and Figure 1(b) show the normalized maps of the local energy function of the Lena
image considering different ranges of frequencies. Figure 1(a) is the result of considering
frequencies larger than 0.2 pixels/cycle, while Figure 1(b) is the result of considering the
complete set of frequencies. One can observe that the latter makes a clearer distinction among
the significance of features than the former: the former shows the majority of features as equally
important, while the latter shows a broad-scale distinction.
(a) Energy function considering 2 scales (b) Energy function considering 4 scales
FIGURE 1: Energy function of Lena Image.
At each location in the image, the weighted local oriented energy E_θ(x) is calculated for each
orientation, and the sum over all orientations, E(x), is computed. The following algorithm
illustrates the above steps.
Let I(x) be the original image;
c ← 0.5; ε ← 0.0001; γ ← 10;
E(x) ← [ ];
for each orientation θ do
    sum_e(x) ← [ ];
    sum_o(x) ← [ ];
    sum_A(x) ← [ ];
    for each scale n do
        compute e_n(x), o_n(x) as in (3);
        compute A_n(x) as in (4);
        sum_e(x) ← sum_e(x) + e_n(x);
        sum_o(x) ← sum_o(x) + o_n(x);
        sum_A(x) ← sum_A(x) + A_n(x);
        if first scale then
            A_max(x) ← A_n(x);
        else
            A_max(x) ← max(A_n(x), A_max(x));
        end if
    end for
    compute E_θ(x) as in (8);
    E(x) ← E(x) + E_θ(x);
end for
Figure 2 shows the perceptual map of the Lena image based on local energy. The perceptual
importance map assigns larger weights (brighter values) to image features in the face, hair, hat
and background. It can also be observed that the perceptual weights assigned to the features
are well distributed.
FIGURE 2: Perceptual map of Lena image.
4. LOCAL ENERGY WEIGHTED STRUCTURAL SIMILARITY
We assume that the width w and the height h of the original image X and the distorted image
Y are exact multiples of 9. If the size does not conform to these dimensions, the images are
cropped on all sides so that a minimum amount of detail is lost. This requirement comes from the
fact that SSIM indices are computed in non-overlapping 9 × 9 regions. These regions in X and
Y are referred to as x_ij and y_ij respectively.
The computation of perceptual weights of local regions in the original image begins with the
log-Gabor decomposition of the image. Log-Gabor filters with 4 scales and 6 orientations are
used for this purpose. The algorithm presented earlier explains the computation of the local
maxima E(x) of the local energy function at each pixel location x of the original image X. Let the
matrix E(x) be divided into non-overlapping blocks of size 9 × 9. Each resulting block
corresponds to a non-overlapping block x_ij of the original image X, 1 ≤ i ≤ m = h/9 and
1 ≤ j ≤ n = w/9. The local maxima values present in each such block are summed up to obtain
the local maximum for the block x_ij. The resulting matrix is normalized, and these values are
proposed as the perceptual weights of the corresponding 9 × 9 local regions, indicative of the
human attention the regions call for. Let the resulting matrix be E, of size m × n.
The structural similarity index between corresponding blocks of X and Y is computed using (1)
to obtain the matrix SSIM of size m × n. The Weighted Structural SIMilarity measure PSSIM^e
between X and Y is calculated using (9). PSSIM^e indicates the quality of the distorted image on
a scale of 0 to 1, where a value of 1 indicates that the images are identical.
PSSIM^e = [ Σ_{i=1}^{m} Σ_{j=1}^{n} [E]_ij [SSIM]_ij ] / [ Σ_{i=1}^{m} Σ_{j=1}^{n} [E]_ij ]   (9)
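The block-weighting step and Eq. (9) can be sketched as follows. This is an illustrative reading of the text above, not the authors' code; `block_weights` and `pssim_e` are hypothetical helper names:

```python
import numpy as np

def block_weights(E_map, block=9):
    """Sum the local-energy map E(x) over non-overlapping block x block
    regions and normalize, giving the perceptual weight matrix E of
    size m x n described in the text."""
    h, w = E_map.shape
    m, n = h // block, w // block
    E = (E_map[:m * block, :n * block]
         .reshape(m, block, n, block)
         .sum(axis=(1, 3)))
    return E / E.max()

def pssim_e(E, SSIM):
    """Eq. (9): energy-weighted average of the m x n matrix of per-block
    SSIM indices, yielding the proposed PSSIM^e index."""
    E = np.asarray(E, dtype=np.float64)
    SSIM = np.asarray(SSIM, dtype=np.float64)
    return float((E * SSIM).sum() / E.sum())
```

When every block has the same SSIM value, the weighted average returns that value regardless of the weights, as expected of a normalized weighting.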
5. RESULTS
The proposed model is validated using the image database provided by LIVE (Laboratory for
Image & Video Engineering, The University of Texas at Austin) [30]. The psychometric study for
the development of the database covered 779 images distorted using five different distortion
types and collected more than 25,000 human image quality evaluations.
The distorted image database consists of twenty-nine high-resolution 24-bits/pixel RGB color
images (typically 768 × 512). The distortions include white Gaussian noise, Gaussian blur, a
simulated fast-fading Rayleigh (wireless) channel, JPEG compression, and JPEG2000
compression; with each type, the perceptual quality covered the entire quality range. Observers
were asked to report their perception of quality on a continuous linear scale divided into five
equal regions marked with the adjectives "Bad", "Poor", "Fair", "Good", and "Excellent". About
20-29 human observers rated each image. The raw scores for each subject were converted to
difference scores (between test and reference images), then converted to Z-scores, scaled back
to the 1-100 range, and finally a Difference Mean Opinion Score (DMOS) value was computed
for each distorted image.
The proposed model and the other models used for comparison are validated on the LIVE image
database using the evaluation metrics suggested in the Video Quality Experts Group (VQEG)
Phase I FR-TV test [31]. A nonlinear regression model is fitted between the DMOS values in the
database and the calculated image quality metric values (IQ) of the distorted images, for each
distortion and for each quality assessment model used in the comparison. The following logistic
function is used in the present work.

DMOS_p = 1 / (1 + exp( −b (IQ − c) )) + d   (10)
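As a sketch, the mapping of Eq. (10) can be fitted to (IQ, DMOS) pairs by least squares. A coarse grid search stands in here for a proper nonlinear regression routine, and the synthetic data are purely illustrative (the true parameters b = 8, c = 0.6, d = 0.1 are assumptions of the example, not values from the paper):

```python
import numpy as np

def logistic(iq, b, c, d):
    # Eq. (10): DMOSp = 1 / (1 + exp(-b * (IQ - c))) + d
    return 1.0 / (1.0 + np.exp(-b * (iq - c))) + d

# Synthetic (IQ, DMOS) pairs for illustration; real use would fit the
# metric values computed on the LIVE images against their DMOS scores.
rng = np.random.default_rng(1)
iq = np.linspace(0.2, 1.0, 60)
dmos = logistic(iq, 8.0, 0.6, 0.1) + rng.normal(0.0, 0.01, iq.size)

# Coarse grid search over (b, c); for each candidate, the offset d that
# minimizes the squared error has the closed form d = mean(residual).
best, best_err = None, np.inf
for b in np.linspace(2.0, 14.0, 25):
    for c in np.linspace(0.3, 0.9, 25):
        r = dmos - 1.0 / (1.0 + np.exp(-b * (iq - c)))
        d = r.mean()
        err = ((r - d) ** 2).sum()
        if err < best_err:
            best, best_err = (b, c, d), err

dmos_p = logistic(iq, *best)   # predicted DMOS after the nonlinear mapping
```

In practice a dedicated optimizer (e.g., a Levenberg-Marquardt routine) would replace the grid search; the closed-form offset keeps this sketch short and deterministic.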
The nonlinear regression function is used to transform the set of IQ values to a set of predicted
DMOS values, DMOSp, which are then compared with the actual DMOS values from the
subjective tests. The Correlation Coefficient (CC), the Mean Absolute Error (MAE), and the Root
Mean Squared Error (RMSE) between the subjective scores DMOS and predicted scores
DMOSp are evaluated as measures of prediction accuracy. Prediction consistency is quantified
using the Outlier Ratio (OR), defined as the fraction of predictions falling outside ±2 standard
deviations of the errors between DMOS and DMOSp. Finally, prediction monotonicity is
measured using the Spearman Rank-Order Correlation Coefficient (ROCC).
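The five evaluation measures above can be sketched in NumPy as follows (an illustrative helper, not VQEG reference code; the rank computation assumes no tied values, for brevity):

```python
import numpy as np

def vqeg_metrics(dmos, dmos_p):
    """Prediction accuracy (CC, MAE, RMSE), consistency (OR) and
    monotonicity (ROCC) between subjective DMOS and predicted DMOSp."""
    dmos = np.asarray(dmos, dtype=np.float64)
    dmos_p = np.asarray(dmos_p, dtype=np.float64)
    err = dmos - dmos_p
    cc = np.corrcoef(dmos, dmos_p)[0, 1]                 # Pearson CC
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    outlier_ratio = (np.abs(err) > 2.0 * err.std()).mean()

    def rank(a):                                          # 0-based ranks
        return np.argsort(np.argsort(a)).astype(np.float64)

    rocc = np.corrcoef(rank(dmos), rank(dmos_p))[0, 1]    # Spearman ROCC
    return {"CC": cc, "MAE": mae, "RMSE": rmse,
            "OR": outlier_ratio, "ROCC": rocc}
```

A perfect predictor yields CC = ROCC = 1 with zero MAE, RMSE and OR, which makes the helper easy to sanity-check.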
Table 1 shows the results for the proposed image quality index PSSIM^e in comparison with two
established image quality assessment models: PSNR (Peak Signal-to-Noise Ratio) and the
Structural SIMilarity quality measure SSIM [14]. The Correlation Coefficient (CC), Mean Absolute
Error (MAE), and Root Mean Squared Error (RMSE) values for the three assessment models
considered show that the prediction accuracy of the proposed model is superior to the others.
The values of the Spearman Rank-Order Correlation Coefficient (ROCC) indicate that the
proposed model correlates well with human judgment. However, the Outlier Ratio (OR) values
are marginally inferior compared with the other two models. This can be attributed to the fact that
human judgment is impulsive for images with higher levels of distortion, in contrast to the
computational algorithms for image quality assessment.
Figure 3 shows the scatter plots for different distortions, in which each data point represents the
true mean opinion score (DMOS) versus the predicted score of one test image by the proposed
method after the nonlinear mapping.
Model      CC     ROCC   MAE     RMSE   OR

White Noise
  PSNR     0.922  0.938  4.524   6.165  0.055
  SSIM     0.940  0.914  4.475   5.459  0.027
  PSSIM^e  0.967  0.958  3.344   4.073  0.034

Gaussian Blur
  PSNR     0.744  0.725  8.395  10.501  0.034
  SSIM     0.947  0.940  3.992   5.027  0.034
  PSSIM^e  0.971  0.966  3.020   3.711  0.041

Fast Fading
  PSNR     0.857  0.859  6.383   8.476  0.068
  SSIM     0.956  0.945  3.806   4.799  0.055
  PSSIM^e  0.965  0.964  3.471   4.315  0.041

JPEG Compression
  PSNR     0.842  0.828  6.636   8.622  0.062
  SSIM     0.891  0.863  5.386   7.236  0.057
  PSSIM^e  0.916  0.888  4.727   6.393  0.062

JPEG2000 Compression
  PSNR     0.859  0.851  6.454   8.269  0.059
  SSIM     0.899  0.894  5.687   7.077  0.023
  PSSIM^e  0.931  0.926  4.754   5.876  0.041

TABLE 1: Performance comparison of image quality assessment models.
6. CONCLUSION
In this paper, we proposed a full-reference perceptual image quality metric for gray-scale images
based on structural approaches unified with the perceptual regions humans attend to in a natural
image. The local energy model was used to identify feature-rich regions in natural images and to
formulate a weighting function for the distortions in a given image.
FIGURE 3: Scatter plots between DMOS and PSSIM^e for different image distortions.
The proposed model and the other models used for comparison are validated using the metrics
suggested by VQEG. The results show that the performance of the metric is close to human
(subjective) judgment, and that it is superior in performance to the other quality assessment
models considered. The metric is generic and applicable to a wide variety of image distortions,
such as white noise, Gaussian blur, fast fading, and different compression artifacts.
7. FUTURE RESEARCH
The present work formulated a framework for image quality assessment in which the model for
identifying perceptual regions and the process of computing image distortions are independent.
Such a framework facilitates a modular approach, so that the individual models can be modified
and optimized independently. As the framework for formulating the perceptual quality metric is
flexible, different combinations of distortion modeling and perceptual-region modeling can be
explored. In the present work, the notion of perceptual regions is used in image quality
assessment; it can be extended to other areas of image processing such as face recognition
and watermarking.
8. REFERENCES
[1] D. M. Chandler, "Seven challenges in image quality assessment: Past, present, and future
research," ISRN Signal Processing, Article ID 905685, 2013. [Online]. Available:
http://dx.doi.org/10.1155/2013/905685.

[2] A. M. Eskicioglu and P. S. Fisher, "Image quality measures and their performance," IEEE
Transactions on Communications, vol. 43, pp. 2959-2965, Dec. 1995.

[3] B. Girod, "What's wrong with mean-squared error," in Digital Images and Human Vision,
A. B. Watson, Ed., MIT Press, 1993, pp. 207-220.

[4] Z. Wang, H. R. Sheikh, and A. C. Bovik, "Objective video quality assessment," in The
Handbook of Video Databases: Design and Applications, B. Furht and O. Marques, Eds.,
CRC Press, 2003.

[5] S. Daly, "The visible differences predictor: an algorithm for the assessment of image fidelity,"
in Digital Images and Human Vision, A. B. Watson, Ed., MIT Press, 1993, pp. 197-206.

[6] J. Lubin, "A visual discrimination model for image system design and evaluation," in Visual
Models for Target Detection and Recognition, E. Peli, Ed., World Scientific Publishers,
Singapore, 1995, pp. 245-283.

[7] P. C. Teo and D. J. Heeger, "Perceptual image distortion," Proc. SPIE, vol. 2179, 1994,
pp. 127-141.

[8] D. Tompa, J. Morton, and E. Jernigan, "Perceptually based image comparison," Proc.
International Conference on Image Processing (ICIP), vol. 1, 2000, pp. 489-492.

[9] S. Westen, R. L. Lagendijk, and J. Biemond, "Perceptual image quality based on a multiple
channel HVS model," Proc. ICASSP, vol. 4, 1995, pp. 2351-2354.

[10] S. Winkler, "A perceptual distortion metric for digital color images," Proc. International
Conference on Image Processing (ICIP), vol. 3, Oct. 1998, pp. 399-403.

[11] S. Yao, W. Lin, E. Ong, and Z. Lu, "Contrast signal-to-noise ratio for image quality
assessment," Proc. IEEE International Conference on Image Processing (ICIP), vol. 1,
Sept. 2005, pp. 397-400.

[12] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Transactions on
Image Processing, vol. 15, Feb. 2006.

[13] Z. Wang and A. C. Bovik, "A universal image quality index," IEEE Signal Processing Letters,
vol. 9, pp. 81-84, Mar. 2002.

[14] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From
error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13,
pp. 600-612, Apr. 2004.

[15] D. Venkata Rao, N. Sudhakar, B. R. Babu, and L. Pratap Reddy, "An image quality
assessment technique based on visual regions of interest weighted structural similarity,"
ICGST International Journal on Graphics, Vision and Image Processing, vol. 6, pp. 69-75,
Sept. 2006.

[16] D. Venkata Rao, N. Sudhakar, B. R. Babu, and L. Pratap Reddy, "Image quality assessment
complemented with visual regions of interest," Proc. International Conference on Computing:
Theory and Applications (ICCTA 2007), IEEE Computer Society Press, vol. 2, Mar. 2007,
pp. 681-687.

[17] D. Venkata Rao and L. Pratap Reddy, "Image quality assessment based on perceptual
structural similarity," in Pattern Recognition and Machine Intelligence, ser. Lecture Notes in
Computer Science, Springer-Verlag, vol. 4815, Dec. 2007, pp. 87-94.

[18] D. Venkata Rao and L. Pratap Reddy, "Weighted similarity based on edge strength for image
quality assessment," International Journal of Computer Theory and Engineering, vol. 1,
pp. 138-141, June 2009.

[19] Z. Wang and Q. Li, "Information content weighting for perceptual image quality assessment,"
IEEE Transactions on Image Processing, vol. 20, pp. 1185-1198, 2011.

[20] P. Singh and D. M. Chandler, "F-MAD: a feature-based extension of the most apparent
distortion algorithm for image quality assessment," Proc. SPIE, Image Quality and System
Performance X, vol. 8653, Feb. 2013.

[21] J. Findlay, "The visual stimulus for saccadic eye movement in human observers,"
Perception, vol. 9, pp. 7-21, Sept. 1980.

[22] J. Senders, "Distribution of attention in static and dynamic scenes," Proc. SPIE, vol. 3016,
Feb. 1997, pp. 186-194.

[23] C. M. Privitera and L. W. Stark, "Algorithms for defining visual regions of interest:
Comparison with eye fixations," IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 22, Sept. 2000.

[24] S. Venkatesh and R. Owens, "An energy feature detection scheme," Proc. International
Conference on Image Processing, 1989, pp. 553-557.

[25] D. J. Heeger, "Normalization of cell responses in cat striate cortex," Visual Neuroscience,
vol. 9, pp. 181-197, 1992.

[26] E. H. Adelson and J. R. Bergen, "Spatiotemporal energy models for the perception of
motion," Journal of the Optical Society of America A, vol. 2, pp. 284-299, 1985.

[27] J. Morlet, G. Arens, E. Fourgeau, and D. Giard, "Wave propagation and sampling theory -
part II: Sampling theory and complex waves," Geophysics, vol. 47, pp. 222-236, Feb. 1982.

[28] D. J. Field, "Relations between the statistics of natural images and the response properties
of cortical cells," Journal of the Optical Society of America A, vol. 4, pp. 2379-2394, 1987.

[29] P. Kovesi, "Image features from phase congruency," Videre: A Journal of Computer Vision
Research, vol. 1, 1999.

[30] H. R. Sheikh, A. C. Bovik, L. Cormack, and Z. Wang, "LIVE image quality database."
[Online]. Available: http://live.ece.utexas.edu/research/quality.

[31] A. M. Rohaly, P. Corriveau, J. Libert, A. Webster, V. Baroncini, and J. Beerends, VQEG,
"Final report from the Video Quality Experts Group on the validation of objective models of
video quality assessment." [Online]. Available: http://www.vqeg.org/.