The aim of this research is to create a tool that evaluates distortion in images without any information about the
original image. The work extracts statistical information about the edges and boundaries in the image and studies
the correlation between the extracted features. Changes in structural information, such as the shape and number of
edges in the image, drive the quality prediction. Local contrast features are effectively detected from the responses
of the Gradient Magnitude (G) and Laplacian of Gaussian (L) operators. G and L are normalised using joint adaptive
normalisation, and the normalised values are quantised into M and N levels respectively. For these M levels of G and
N levels of L, the probability (P) and conditional probability (C) are calculated. Four sets of values are formed: the
marginal distribution of gradient magnitude Pg, the marginal distribution of Laplacian of Gaussian Pl, the conditional
probability of gradient magnitude Cg, and the conditional probability of Laplacian of Gaussian Cl. The assumption is
that the dependencies between the gradient magnitude and Laplacian of Gaussian features indicate the level of
distortion in the image. To quantify these dependencies, Spearman and Pearson correlations between Pg, Pl and Cg, Cl
are calculated, giving four correlation values per image as the features of interest. Results are also compared with
the classical Structural Similarity Index Measure (SSIM).
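The following is a minimal sketch of the feature-extraction pipeline described above, not the exact method: the simple divisive normalisation standing in for the joint adaptive normalisation, the Sobel operator, the LoG sigma and the bin counts M = N = 10 are all assumptions made for illustration.

```python
# Sketch: GM / LoG statistical features for no-reference quality prediction.
# Assumptions: grayscale float image, a simplified divisive normalisation in
# place of the joint adaptive normalisation, and M = N = 10 quantisation levels.
import numpy as np
from scipy import ndimage
from scipy.stats import pearsonr, spearmanr

def gm_log_features(img, sigma=0.5, levels=10, eps=1e-8):
    img = img.astype(np.float64)
    # Gradient magnitude (G) and Laplacian of Gaussian (L) responses.
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    G = np.hypot(gx, gy)
    L = np.abs(ndimage.gaussian_laplace(img, sigma=sigma))
    # Simplified joint normalisation: divide both maps by a shared local energy.
    energy = np.sqrt(ndimage.gaussian_filter(G**2 + L**2, sigma=2.0)) + eps
    Gn, Ln = G / energy, L / energy
    # Quantise into 'levels' bins and build the joint histogram K(m, n).
    qG = np.clip((Gn / (Gn.max() + eps) * levels).astype(int), 0, levels - 1)
    qL = np.clip((Ln / (Ln.max() + eps) * levels).astype(int), 0, levels - 1)
    K = np.zeros((levels, levels))
    np.add.at(K, (qG.ravel(), qL.ravel()), 1)
    K /= K.sum()
    Pg, Pl = K.sum(axis=1), K.sum(axis=0)        # marginal distributions
    Cg = (K / (Pl + eps)).mean(axis=1)           # P(G=m | L=n), averaged over n
    Cl = (K.T / (Pg + eps)).mean(axis=1)         # P(L=n | G=m), averaged over m
    # Correlations between the four feature sets, one value per pair of interest.
    return {
        "pearson_PgPl": pearsonr(Pg, Pl)[0],
        "spearman_PgPl": spearmanr(Pg, Pl)[0],
        "pearson_CgCl": pearsonr(Cg, Cl)[0],
        "spearman_CgCl": spearmanr(Cg, Cl)[0],
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(gm_log_features(rng.random((128, 128))))
```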
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
COMPRESSION BASED FACE RECOGNITION USING DWT AND SVM (sipij)
Biometrics are used to identify a person effectively and are employed in almost all day-to-day applications. In this
paper, we propose compression based face recognition using the Discrete Wavelet Transform (DWT) and a Support Vector
Machine (SVM). The novel concept of converting many images of a single person into one image using an averaging
technique is introduced to reduce execution time and memory. The DWT is applied to the averaged face image to obtain
the approximation (LL) and detail bands. The LL band coefficients are given as input to the SVM to obtain Support
Vectors (SVs). The LL coefficients of the DWT and the SVs are fused by arithmetic addition to extract the final
features. The Euclidean Distance (ED) is used to compare test image features with database image features to compute
the performance parameters. It is observed that the proposed algorithm performs better than existing algorithms.
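A minimal sketch of this pipeline is shown below, under stated assumptions: the images are already loaded as equal-sized grayscale arrays, a one-level Haar DWT and a linear SVM are used, and the "fusion by arithmetic addition" is approximated by adding the mean support vector to each LL feature vector.

```python
# Sketch of the DWT + SVM face-recognition pipeline described above (illustrative,
# not the paper's exact formulation).
import numpy as np
import pywt
from sklearn.svm import SVC

def ll_features(img):
    LL, _ = pywt.dwt2(img.astype(float), "haar")   # keep only the approximation band
    return LL.ravel()

def build_gallery(person_images, labels):
    # Average the images of each person into one image to cut time and memory.
    averaged = {p: np.mean([im for im, l in zip(person_images, labels) if l == p], axis=0)
                for p in set(labels)}
    persons = sorted(averaged)
    X = np.stack([ll_features(averaged[p]) for p in persons])
    y = np.arange(len(persons))
    svm = SVC(kernel="linear").fit(X, y)
    sv_mean = svm.support_vectors_.mean(axis=0)    # support vectors from the SVM
    gallery = X + sv_mean                          # fused final features
    return persons, gallery, sv_mean

def identify(test_img, persons, gallery, sv_mean):
    probe = ll_features(test_img) + sv_mean
    d = np.linalg.norm(gallery - probe, axis=1)    # Euclidean distance matching
    return persons[int(np.argmin(d))]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    imgs = [rng.random((32, 32)) + i for i in (0, 0, 1, 1)]   # toy data: two "persons"
    labels = ["A", "A", "B", "B"]
    persons, gallery, sv_mean = build_gallery(imgs, labels)
    print(identify(imgs[0], persons, gallery, sv_mean))
```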
ZERNIKE-ENTROPY IMAGE SIMILARITY MEASURE BASED ON JOINT HISTOGRAM FOR FACE RE... (AM Publications)
Image similarity for face recognition requires a combination of tools that are powerful and stable against challenges such as varying illumination, different environments and complex poses. In this paper, we combine two robust approaches to image similarity and face recognition, Zernike moments and information theory, into one proposed measure, the Zernike-Entropy Image Similarity Measure (Z-EISM). Z-EISM incorporates the concept of Picard entropy and a modified one-dimensional version of the two-dimensional joint histogram of the two images under test. Four datasets have been used to test and compare the proposed measure, showing that Z-EISM performs better than existing measures.
MRI IMAGE SEGMENTATION USING LEVEL SET METHOD AND IMPLEMENT A MEDICAL DIAGNOS... (cseij)
Image segmentation has played a vital role in image processing over the last few years. The goal of image segmentation is to cluster the pixels into salient image regions, i.e., regions corresponding to individual surfaces, objects, or natural parts of objects. In this paper, we propose a medical diagnosis system that uses the level set method for segmenting MRI images; it investigates a new variational level set algorithm without re-initialization to segment the MRI image and implements a competent medical diagnosis system using MATLAB. Here we use the speed function and the signed distance function of the image in the segmentation algorithm. The system consists of a thresholding technique, a curve evolution technique and an eroding technique. The proposed system was tested on MRI brain images, giving promising results by detecting normal or abnormal conditions, especially the existence of tumors. The system is applied to both simulated and real images with promising results.
Face Recognition based on STWT and DTCWT using two dimensional Q-shift Filters (IJERA Editor)
Biometrics are used to recognize a person more effectively than traditional methods of identification. In this paper, we propose face recognition based on the Single Tree Wavelet Transform (STWT) and the Dual Tree Complex Wavelet Transform (DTCWT). The face images are preprocessed to enhance image quality and resized. DTCWT and STWT are applied to the face images to extract features. The Euclidean distance is used to compare the features of database images with test face images to compute the performance parameters. The performance of STWT is compared with DTCWT, and it is observed that DTCWT gives better results than the STWT technique.
IJIP 742: Image Fusion Quality Assessment of High Resolution Satellite Imagery ... (CSCJournals)
Considering the importance of fusion accuracy for the quality of fused images, it is necessary to evaluate the quality of fused images before using them in further applications. Current quality evaluation metrics are mainly developed by applying quality metrics at the pixel level and evaluating the final quality by averaging. In this paper, an object level strategy for quality assessment of fused images is proposed. Based on the proposed strategy, image fusion quality metrics are applied to image objects, and the quality assessment of fusion is conducted by inspecting the fusion quality in those image objects. Results clearly show the inconsistency of fusion behavior across different image objects and the weakness of traditional pixel level strategies in handling these heterogeneities.
IDENTIFICATION OF SUITED QUALITY METRICS FOR NATURAL AND MEDICAL IMAGES (sipij)
Assessing the quality of the denoised image is one of the important tasks in image denoising applications.
Numerous quality metrics, each with particular characteristics, have been proposed by researchers. In
practice, the image acquisition system is different for natural and medical images, so the noise introduced in
these images is also different in nature. Considering this fact, the authors of this paper try to identify
suitable quality metrics for Gaussian, speckle and Poisson corrupted natural, ultrasound and X-ray images
respectively. Sixteen different quality metrics from the full reference category are evaluated with
respect to noise variance, and the suited quality metric for each type of noise is identified. A strong need to
develop noise-dependent quality metrics is also identified in this work.
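The evaluation protocol this describes can be illustrated with a small sketch: corrupt a reference image with the three noise types at increasing strength and track a full-reference metric against the noise level. PSNR is used here purely as a stand-in for the sixteen metrics evaluated in the paper, and the noise strengths are arbitrary.

```python
# Sketch: full-reference metric (PSNR) versus noise strength for Gaussian,
# speckle and Poisson corruption of a reference image (illustrative only).
import numpy as np

def psnr(ref, test, peak=1.0):
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak**2 / mse)

def corrupt(img, kind, strength, rng):
    if kind == "gaussian":
        noisy = img + rng.normal(0.0, strength, img.shape)      # additive noise
    elif kind == "speckle":
        noisy = img * (1.0 + rng.normal(0.0, strength, img.shape))  # multiplicative noise
    else:  # "poisson"
        scale = 1.0 / max(strength, 1e-6)                       # photon-counting noise
        noisy = rng.poisson(img * scale) / scale
    return np.clip(noisy, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    ref = rng.random((128, 128))
    for kind in ("gaussian", "speckle", "poisson"):
        scores = [round(psnr(ref, corrupt(ref, kind, s, rng)), 2)
                  for s in (0.01, 0.05, 0.1)]
        print(kind, scores)
```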
1) The document discusses various medical image fusion techniques including pixel level, feature level, and decision level fusion.
2) It proposes a novel pixel level fusion method called Iterative Block Level Principal Component Averaging fusion that divides images into blocks and calculates principal components for each block, as sketched after this list.
3) Experimental results on fusing noise free and noise filtered MR images show that the proposed method performs well in terms of average mutual information and structural similarity compared to other algorithms.
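The following is a minimal sketch of block-level principal component averaging fusion as summarised in point 2 above; it assumes two registered, equally sized grayscale source images and a fixed block size, and uses plain per-block PCA weights rather than the paper's exact iterative refinement.

```python
# Sketch: block-wise PCA-weighted averaging of two source images.
import numpy as np

def pca_fuse_block(a, b):
    # Treat the two blocks as two observations; PCA of their 2x2 covariance gives weights.
    data = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(data)
    vals, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, np.argmax(vals)])
    w = w / w.sum()
    return w[0] * a + w[1] * b

def block_pca_fusion(img1, img2, block=8):
    fused = np.zeros_like(img1, dtype=float)
    h, w = img1.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            fused[i:i+block, j:j+block] = pca_fuse_block(
                img1[i:i+block, j:j+block].astype(float),
                img2[i:i+block, j:j+block].astype(float))
    return fused

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    f = block_pca_fusion(rng.random((64, 64)), rng.random((64, 64)))
    print(f.shape, f.mean())
```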
A NOVEL PROBABILISTIC BASED IMAGE SEGMENTATION MODEL FOR REALTIME HUMAN ACTIV... (sipij)
Automatic human activity detection is one of the difficult tasks in image segmentation applications due to
variations in the size, type, shape and location of objects. In traditional probabilistic graphical
segmentation models, intra- and inter-region segments may affect the overall segmentation accuracy. Also,
both directed and undirected graphical models, such as Markov models and conditional random fields, have
limitations for human activity prediction and heterogeneous relationships. In this paper, we study and
propose a natural solution for automatic human activity segmentation using an enhanced probabilistic chain
graphical model. The system has three main phases, namely activity pre-processing, iterative threshold based
image enhancement, and a chain graph segmentation algorithm. Experimental results show that the proposed
system efficiently detects human activities at different levels of the action datasets.
This document summarizes a research paper on face recognition using Gabor features and PCA. It begins with an introduction to face recognition and discusses challenges like lighting, pose, and orientation. It then describes how the proposed system uses Gabor wavelets for preprocessing to reduce variations from pose, lighting, etc. Principal component analysis (PCA) is used to extract low dimensional and discriminating feature vectors from the preprocessed images. These feature vectors are then used for classification with k-nearest neighbors. The proposed system was tested on the Yale face database containing 100 images of 10 subjects with variable illumination and expressions.
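A minimal sketch of the Gabor preprocessing plus PCA plus k-nearest-neighbour pipeline summarised above follows; the hand-rolled Gabor filter bank, the orientation count, the PCA dimensionality and the toy data are all assumptions for illustration, with scikit-learn providing PCA and the classifier.

```python
# Sketch: Gabor filter-bank features -> PCA -> k-NN classification.
import numpy as np
from scipy.signal import fftconvolve
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def gabor_kernel(theta, sigma=2.0, lam=4.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orient=4):
    img = img.astype(float)
    responses = [np.abs(fftconvolve(img, gabor_kernel(t), mode="same"))
                 for t in np.linspace(0, np.pi, n_orient, endpoint=False)]
    return np.concatenate([r.ravel() for r in responses])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = np.stack([gabor_features(rng.random((32, 32)) + (i % 2)) for i in range(10)])
    y = np.array([i % 2 for i in range(10)])          # two toy subjects
    clf = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=1))
    clf.fit(X, y)
    print(clf.predict(X[:2]))
```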
Image De-noising and Enhancement for Salt and Pepper Noise using Genetic Algo... (IDES Editor)
Image enhancement through de-noising is one of the most important applications of digital image processing
and is still a challenging problem. Images are often received in defective condition due to poor image
sensors, a poor data acquisition process, transmission errors and so on, which creates problems for the
subsequent processes that must interpret such images. The proposed genetic filter is capable of removing
noise while preserving fine details as well as structural image content. It can be divided into (i) de-noising
filtering and (ii) enhancement filtering. Image de-noising and enhancement are an essential part of any image
processing system, whether the processed information is used for visual interpretation or for automatic
analysis. Experimental results on a set of standard test images, over a wide range of noise corruption levels,
show that the proposed filter outperforms standard procedures for salt and pepper removal both visually and
in terms of performance measures such as PSNR. Genetic algorithms will also be helpful in solving various
complex image processing tasks in the future.
Face Detection in Digital Image: A Technical Review (IJERA Editor)
Face detection, the task of locating faces in an input image, is an important part of any face processing system. In face detection, segmentation plays the major role in detecting the face, and there are many challenges to effective and efficient face detection. The aim of this paper is to present a review of several algorithms and methods used for face detection. We review the various surveys and relate the techniques according to how they extract features and which learning algorithms they adopt. A face detection system has two major phases: first, segmenting skin regions from an image and, second, deciding whether these regions contain a human face. A number of algorithms are used in face detection, such as genetic algorithms and the Hausdorff distance.
This document summarizes a research paper that proposes an algorithm for detecting brain tumors in MRI images based on analyzing bilateral symmetry. The algorithm first performs preprocessing like smoothing and contrast enhancement. It then identifies the bilateral symmetry axis of the brain. Next, it segments the image into symmetric regions, enhancing asymmetric edges that may indicate a tumor. Experiments showed the algorithm can automatically detect tumor positions and boundaries. The algorithm leverages the fact that brain MRI of a healthy person is nearly bilaterally symmetric, while a tumor disrupts this symmetry.
Authentication of Degraded Fingerprints Using Robust Enhancement and Matching... (IDES Editor)
A biometric system is an automated method of identifying a person based on physiological, biological and
behavioural traits. The physiological traits include the face, fingerprint, palm print and iris, which remain
permanent throughout an individual's lifetime. When these physiological traits have been degraded, the
authentication of an individual becomes very difficult, and the challenge of restoring a degraded
physiological image to an acceptable appearance in order to authenticate an individual is enormous. The
fingerprint is one of the most extensively used biometrics for authentication in areas where security is of
high importance, due to its accuracy and reliability. However, extracting features from degraded fingerprints
is the most challenging step in obtaining high fingerprint matching performance. This paper endeavors to
enhance the clarity of fingerprint minutiae, remove false minutiae and improve the matching performance using
a robust Gabor Filtering Technique (GFT) and a Back Propagation Artificial Neural Network (BP-ANN). The
experiments showed a remarkable improvement in the performance of the system.
Three-dimensional multimodal models of object classes are a great tool in modeling and recognition. This work presents multimodal involuntary emotion recognition during communication with mentally challenged persons, so that mental disorders can be identified without a doctor. The features are built upon emotion, motion and frequency to estimate the percentage of mental disorder, and different categories of image, video, audio and emotion data can be discriminated. For image classification, three-dimensional morphable models (3DMM) are used to fit the model to images within a framework for facial emotion recognition. With Guided Particle Swarm Optimization (GPSO), the emotion finding problem is treated as a search problem in which, at every point, the method identifies which of the possible emotions the current facial expression denotes, while a Genetic Algorithm (GA), with its flexible encoding and decoding of complex information, calculates the percentage of mental disorder. We propose using these different algorithms to identify mentally challenged persons.
An Approach to Face Recognition Using Feed Forward Neural Network (Editor IJCATR)
This document presents a face recognition approach that extracts multiple facial features from images using metrics like homogeneity, energy, covariance, etc. and trains a feedforward neural network for classification. 40 face images from 8 individuals under varying conditions are used as a training dataset from the ORL database. Test images are input and their features extracted and compared to the training data using the neural network. Recognition rates are found to be high, eliminating variations from pose. The approach effectively increases robustness over single feature methods by utilizing multiple discriminative facial features to recognize faces.
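The texture-statistics-plus-network idea summarised above can be sketched roughly as follows; a simple horizontal-neighbour co-occurrence matrix, the grey-level quantisation, scikit-learn's MLPClassifier standing in for the feedforward network, and the toy data are all assumptions, not the paper's exact feature set.

```python
# Sketch: co-occurrence texture statistics -> feed-forward network classification.
import numpy as np
from sklearn.neural_network import MLPClassifier

def glcm_features(img, levels=8):
    q = (img.astype(float) / 256 * levels).astype(int)          # quantise grey levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # horizontal pixel pairs
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    energy = np.sum(p**2)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    mean_i, mean_j = np.sum(i * p), np.sum(j * p)
    std_i = np.sqrt(np.sum(p * (i - mean_i) ** 2))
    std_j = np.sqrt(np.sum(p * (j - mean_j) ** 2))
    correlation = np.sum(p * (i - mean_i) * (j - mean_j)) / (std_i * std_j + 1e-8)
    return np.array([energy, contrast, homogeneity, correlation])

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    faces = [rng.integers(0, 256, (32, 32)) + (50 if k % 2 else 0) for k in range(16)]
    X = np.stack([glcm_features(np.clip(f, 0, 255)) for f in faces])
    y = np.array([k % 2 for k in range(16)])                    # two toy identities
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    print(net.score(X, y))
```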
This document summarizes a research paper on using bilateral symmetry analysis to detect brain tumors from MRI images. It begins by introducing the problem of brain tumor detection and importance of asymmetry analysis. It then describes the proposed algorithm which involves defining a bilateral symmetry axis between the two brain hemispheres and detecting any regions of asymmetry that could indicate a tumor. The algorithm uses edge detection techniques to find the symmetry axis. Performance is evaluated on sample patient data and results show the method can successfully identify tumor locations and sizes. In conclusion, analyzing bilateral symmetry is an effective approach for automated brain tumor detection from MRI images.
A Survey on Different Relevance Feedback Techniques in Content Based Image Re... (IRJET Journal)
This document summarizes several relevance feedback techniques used in content-based image retrieval to bridge the semantic gap between low-level visual features and high-level semantic concepts. It reviews subspace learning algorithms like feature adaptation and relevance feedback, probabilistic feature weighting with positive and negative examples, asymmetric bagging and random subspaces for support vector machines, navigation pattern-based relevance feedback, biased discriminative Euclidean embedding, and feature line embedding biased discriminant analysis. The goal of these techniques is to retrieve more semantically relevant images through an iterative feedback process between the user and retrieval system.
EXPLOITING REFERENCE IMAGES IN EXPOSING GEOMETRICAL DISTORTIONS (ijma)
Nowadays, image alteration in the mainstream media has become common, and the degree of manipulation is
facilitated by image editing software. In the past two decades the number of manipulated images has grown
rapidly. Hence, there are many outstanding images which have no provenance information or certainty of
authenticity. Constructing a scientific and automatic way of evaluating image authenticity is therefore an
important task, which is the aim of this paper. In spite of their outstanding performance, the image
forensics schemes developed so far have not provided verifiable information about the source of tampering.
This paper proposes a different kind of scheme that exploits a group of similar images to verify the source
of tampering. First, we give our definition of a tampered image. Distinctive features are obtained using the
Scale-Invariant Feature Transform (SIFT), and we then propose a clustering technique to identify the tampered
region based on the distinctive keypoints. In contrast to the k-means algorithm, our technique does not
require the initialization of a k value. The experimental results on the dataset indicate the efficacy of the
proposed scheme.
Analysis of wavelet-based full reference image quality assessment algorithm (journalBEEI)
Measurement of image quality plays an important role in numerous image processing applications such as forensic science, image enhancement and medical imaging. In recent years, there has been growing interest among researchers in creating objective Image Quality Assessment (IQA) algorithms that correlate well with perceived quality, and significant progress has been made on the full reference (FR) IQA problem in the past decade. In this paper, we compare 5 selected FR IQA algorithms on the TID2008 image dataset. The performance and evaluation results are shown in graphs and tables. The quantitative assessment shows that the wavelet-based IQA algorithms outperform the non-wavelet based IQA methods, with the exception of the WASH algorithm, whose predictions are better only for certain distortion types since it takes into account the essential structural content of the image.
HVDLP: HORIZONTAL VERTICAL DIAGONAL LOCAL PATTERN BASED FACE RECOGNITION (sipij)
A face image is an efficient biometric trait for recognizing human beings without expecting any co-operation from the person. In this paper, we propose HVDLP: Horizontal Vertical Diagonal Local Pattern based face recognition using the Discrete Wavelet Transform (DWT) and the Local Binary Pattern (LBP). In pre-processing, face images of different sizes are converted to a uniform size of 108×90 and colour images are converted to grayscale. The DWT is applied to the pre-processed images and the LL band of size 54×45 is obtained. The novel concept of HVDLP is introduced in the proposed method to enhance the performance. HVDLP is applied to 9×9 sub-matrices of the LL band to obtain the HVDLP coefficients, and the Local Binary Pattern (LBP) is applied to the HVDLP of the LL band. The final features are generated by using guided filters on the HVDLP and LBP matrices. The Euclidean Distance (ED) is used to compare the final features of the face database and test images to compute the performance parameters.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Rotation Invariant Face Recognition using RLBP, LPQ and CONTOURLET Transform (IRJET Journal)
This document discusses rotation invariant face recognition using three feature extraction techniques: Rotated Local Binary Pattern (RLBP), Local Phase Quantization (LPQ), and Contourlet transform. It first extracts features from input face images using these three techniques. It then applies Linear Discriminant Analysis to reduce the feature dimensions. Finally, it uses k-Nearest Neighbors classification to perform face recognition on the Jaffe dataset. Experimental results show that the face recognition accuracy without LDA is 99.06% and increases to 100% when LDA is used for feature dimension reduction.
Image annotation - Segmentation & Annotation (Taposh Roy)
This document discusses image annotation and segmentation. It begins with an overview of different types of image annotation including whole image classification, object detection, and image segmentation. It then covers supervised and unsupervised machine learning paradigms for image annotation, with a focus on supervised learning. Specific supervised annotation techniques for medical images are discussed like mean shift, normalized cuts, and level sets algorithms. Advanced clustering techniques for image segmentation like DBSCAN, HDBSCAN, and topological data analysis are also mentioned.
IRJET- Facial Emotion Detection using Convolutional Neural Network (IRJET Journal)
The document describes a study that aims to design a convolutional neural network (CNN) model to classify facial expressions in images into seven emotions (angry, disgust, fear, happy, sad, surprise, neutral) using deep learning techniques. The proposed CNN architecture contains five convolutional layers and five max pooling layers, along with two fully connected layers and a softmax output layer. The model is trained and evaluated on the FER-2013 dataset from Kaggle, achieving an accuracy of 65.34%. The goal of the research is to develop a model that can automatically and accurately detect emotions from facial expressions using CNNs.
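A minimal Keras sketch of an architecture matching that description (five convolution layers, five max-pooling layers, two fully connected layers and a softmax output over seven emotions) follows; the filter counts, dense-layer widths and 48x48 FER-2013-style input shape are assumptions rather than the paper's exact hyperparameters.

```python
# Sketch: CNN for 7-class facial emotion recognition (illustrative hyperparameters).
from tensorflow.keras import layers, models

def build_fer_cnn(input_shape=(48, 48, 1), n_classes=7):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128, 128, 256):          # five conv + five pooling stages
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    model.add(layers.Dense(256, activation="relu"))  # two fully connected layers
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_fer_cnn().summary()
```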
MSB based Face Recognition Using Compression and Dual Matching Techniques (CSCJournals)
Biometrics are used in almost all communication technology applications for secure recognition. In this paper, we propose MSB based face recognition using compression and dual matching techniques. Standard available face images are used to test the proposed method. The novel concept of considering only the four Most Significant Bits (MSBs) of each pixel is introduced to reduce the total number of bits to half of an image, for high speed computation and lower architectural complexity. The Discrete Wavelet Transform (DWT) is applied to the MSB-only image, and only the LL band coefficients are considered as final features. The features of the database and test images are compared using Euclidean Distance (ED) and an Artificial Neural Network (ANN) to test the performance of the proposed method. It is observed that the performance of the proposed method is better than that of existing methods.
Engineering Research Publication
Best International Journals, High Impact Journals,
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
Segmentation of Brain MR Images for Tumor Extraction by Combining Kmeans Clus... (CSCJournals)
Segmentation of images holds an important position in the area of image processing. It becomes even more important when dealing with medical images, where pre-surgery and post-surgery decisions are required for initiating and speeding up the recovery process [5]. Computer aided detection of abnormal growth of tissues is primarily motivated by the necessity of achieving the maximum possible accuracy. Manual segmentation of these abnormal tissues cannot be compared with modern high speed computing machines, which enable us to visually observe the volume and location of unwanted tissues. A well known segmentation problem within MRI is the task of labeling voxels according to their tissue type, which includes White Matter (WM), Grey Matter (GM), Cerebrospinal Fluid (CSF) and sometimes pathological tissues like tumor. This paper describes an efficient method for automatic brain tumor segmentation for the extraction of tumor tissues from MR images. It combines the Perona and Malik anisotropic diffusion model for image enhancement with the K-means clustering technique for grouping tissues belonging to a specific group. The proposed method uses T1, T2 and PD weighted gray level intensity images, and the proposed technique produced appreciable results.
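A minimal sketch combining Perona-Malik anisotropic diffusion with K-means intensity clustering, in the spirit of the combination described above, is shown below; the iteration count, conduction parameters, cluster count and intensity-only clustering are assumptions, and scikit-learn's KMeans is used for the grouping step.

```python
# Sketch: Perona-Malik smoothing followed by K-means grouping of a 2-D MR slice.
import numpy as np
from sklearn.cluster import KMeans

def perona_malik(img, n_iter=20, kappa=30.0, gamma=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conduction coefficients, g = exp(-(|grad u| / kappa)^2).
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def segment(img, n_clusters=4):
    smoothed = perona_malik(img)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        smoothed.reshape(-1, 1))
    return labels.reshape(img.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    slice_ = rng.random((64, 64)) * 255
    print(np.unique(segment(slice_)))
```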
The document proposes an efficient generalized signcryption scheme based on elliptic curve cryptography (ECC) that avoids computationally heavy bilinear pairing operations. It first identifies security issues in a previous tripartite signcryption scheme and proposes corrections. The corrected scheme is extended to support multiple receivers. It then further extends this signcryption scheme to a generalized signcryption scheme that provides either confidentiality, authentication, or a combination of both for messages with multiple receivers. The proposed schemes use only ECC operations and satisfy security properties like forward secrecy and public verification more efficiently than other existing schemes. An application of the generalized signcryption scheme for key management in wireless sensor networks is also discussed.
Performance Analysis of Clipping Technique for PAPR Reduction of MB-OFDM UWB S... (ijcisjournal)
Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) is an effective technique for ultra-wideband
(UWB) wireless communication applications; it divides the spectrum into various sub-bands whose bandwidth is
approximately 500 MHz. A major difficulty in multiband OFDM is its very large peak-to-average power ratio
(PAPR), which drives the signal into the amplifier's non-linear region, causing loss of orthogonality,
interference between the carrier signals, amplifier saturation and ultimately a limit on system capacity.
Many PAPR reduction algorithms have been reported in the literature; pre-coding, for example, is a PAPR
reduction step inserted after modulation in the OFDM system. This work presents PAPR reduction using
different clipping techniques, namely Classical Clipping (CC), Heaviside Clipping (HC), Deep Clipping (DC)
and Smooth Clipping (SC), and compares them. Each clipping method performs best at its own level. The
effectiveness of these strategies is evaluated in terms of average power variation, overall system
degradation and PAPR reduction. Finally, the results show that MB-OFDM with clipping reduces PAPR
effectively.
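The classical clipping variant mentioned above can be illustrated with a short sketch that clips the envelope of an OFDM-style time-domain signal and measures PAPR before and after; the subcarrier count, QPSK mapping and clipping ratio are illustrative assumptions, not the MB-OFDM parameters from the paper.

```python
# Sketch: classical amplitude clipping for PAPR reduction on a toy OFDM symbol.
import numpy as np

def papr_db(x):
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

def classical_clip(x, clipping_ratio=1.4):
    # Clip the envelope at A = clipping_ratio * RMS while keeping the phase.
    a = clipping_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.maximum(np.abs(x), 1e-12)
    return np.where(mag > a, a * x / mag, x)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n_sc = 128
    qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
    ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(n_sc)     # time-domain OFDM symbol
    print("PAPR before clipping: %.2f dB" % papr_db(ofdm_symbol))
    print("PAPR after clipping : %.2f dB" % papr_db(classical_clip(ofdm_symbol)))
```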
Gait Based Person Recognition Using Partial Least Squares Selection Scheme (ijcisjournal)
The document summarizes a research paper on gait-based person recognition using partial least squares selection. It presents an Arbitrary View Transformation Model (AVTM) that uses gait energy images and partial least squares (PLS) feature selection to improve gait recognition accuracy under varying viewing angles, clothing, and other conditions. The proposed AVTM PLS method is evaluated on the CASIA gait database and shown to achieve higher recognition rates compared to other existing methods, especially when there are changes in viewing angle, clothing, or whether the person is carrying something. Tables of results demonstrate the proposed method outperforms alternatives across different test conditions and ranges of gallery and probe viewing angles.
Fault Detection in Mobile Communication Networks Using Data Mining Techniques... (ijcisjournal)
This document discusses using data mining techniques and big data analytics to detect faults in mobile communication networks. It first provides background on data mining, mobile communication networks, and fault detection techniques. It then discusses using self-organizing maps, discrete wavelet transforms, and cluster analysis as data mining techniques to analyze network data and detect faults between source and destination nodes. The goal is to identify outlier nodes experiencing faults by analyzing patterns in large datasets from mobile networks.
Copy Move Forgery Detection Using GLCM Based Statistical Features (ijcisjournal)
Gray Level Co-occurrence Matrix (GLCM) features have mostly been explored in face recognition and CBIR; the
GLCM technique is explored here for copy-move forgery detection. GLCMs are extracted from all the images in
the database and statistics such as contrast, correlation, homogeneity and energy are derived. These
statistics form the feature vector. A Support Vector Machine (SVM) is trained on these features and the
authenticity of the image is decided by the SVM classifier. The proposed work is evaluated on the CoMoFoD
database; in all, 1200 forged and processed images are tested. The performance of the present work is
compared with recent methods.
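A minimal sketch of that feature-plus-classifier setup follows; it assumes 8-bit grayscale images, recent scikit-image function names (graycomatrix and graycoprops), an RBF SVM, and synthetic stand-in data in place of the CoMoFoD images, whose loading is omitted.

```python
# Sketch: GLCM statistics -> SVM decision on authentic (0) vs forged (1) images.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_statistics(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ("contrast", "correlation", "homogeneity", "energy")
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Toy stand-in data: plain images vs. images with a duplicated patch.
    images, labels = [], []
    for k in range(20):
        img = (rng.random((64, 64)) * 255).astype(np.uint8)
        if k % 2:                                   # simulate a copy-move forgery
            img[8:24, 8:24] = img[32:48, 32:48]
        images.append(img)
        labels.append(k % 2)
    X = np.stack([glcm_statistics(im) for im in images])
    clf = SVC(kernel="rbf").fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
```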
An Optimized Approach for Fake Currency Detection Using Discrete Wavelet Tran... (ijcisjournal)
This document describes a proposed method for detecting fake currency using discrete wavelet transform (DWT). It begins with background on image forgery detection techniques, including active approaches using watermarking and passive approaches that do not require prior information. The document then focuses on copy-move forgery detection and compares different techniques like DCT and DWT. It proposes a two-phase algorithm using DWT to identify matched and reference blocks, then verify the resemblance between blocks. The algorithm segments the image, applies DWT, lexicographically sorts blocks, calculates normalized shift vectors, and detects matched blocks. Simulation outputs show the original and tampered images with tampered regions highlighted.
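The block-matching core of that two-phase algorithm can be sketched roughly as follows: overlapping blocks are reduced with a one-level DWT, sorted lexicographically, and matching neighbours are grouped by shift vector. The block size, thresholds and the final verification step are illustrative assumptions rather than the paper's exact parameters.

```python
# Sketch: DWT block features, lexicographic sorting and shift-vector voting
# for copy-move detection.
import numpy as np
import pywt
from collections import Counter

def detect_copy_move(img, block=8, min_count=10):
    h, w = img.shape
    feats, positions = [], []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            LL, _ = pywt.dwt2(img[i:i+block, j:j+block].astype(float), "haar")
            feats.append(np.round(LL.ravel(), 1))
            positions.append((i, j))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])                 # lexicographic sort of rows
    shifts, pairs = Counter(), []
    for a, b in zip(order[:-1], order[1:]):
        if np.array_equal(feats[a], feats[b]):        # near-duplicate blocks
            (i1, j1), (i2, j2) = positions[a], positions[b]
            shifts[(i1 - i2, j1 - j2)] += 1
            pairs.append((positions[a], positions[b]))
    # A shift vector shared by many block pairs suggests a copied region.
    return [s for s, c in shifts.items() if c >= min_count], pairs

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    img = (rng.random((48, 48)) * 255).astype(np.uint8)
    img[4:20, 4:20] = img[24:40, 24:40]               # simulate a copy-move forgery
    print(detect_copy_move(img)[0])
```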
Agile development methods are commonly used to develop information systems iteratively, and they can
easily handle ever-changing business requirements. Scrum is one of the most popular agile software
development frameworks. Its popularity stems from the simplified process framework and its focus on
teamwork. The objective of Scrum is to deliver working software and demonstrate it to the customer faster
and more frequently during the software development project. However, the security requirements for the
information systems under development often have a low priority. This requirements prioritization issue results in
situations where the solution meets all the business requirements but is vulnerable to potential
security threats.
The major benefit of the Scrum framework is the iterative development approach and the opportunity to
automate penetration tests. Therefore, security vulnerabilities can be discovered and resolved more often,
which positively contributes to the overall protection of the information system against potential hackers.
In this research paper the authors propose how the agile software development framework Scrum can be
enriched by considering penetration tests and related security requirements during the software
development lifecycle. The authors apply the knowledge and expertise from their previous work on the
development of the new information system penetration tests methodology PETA, which focuses on
using COBIT 4.1 as the framework for managing these tests, and from previous work on
tailoring the project management framework PRINCE2 with Scrum.
The outcomes of this paper can be used primarily by security managers, users, developers and auditors.
Security managers may benefit from the iterative software development approach and penetration test
automation. Developers and users will better understand the importance of penetration tests and
will learn how to effectively embed the tests into the agile development lifecycle. Last but not least,
auditors may use the outcomes of this paper as recommendations for companies struggling with
penetration testing embedded in the agile software development process.
A New Method for Preserving Privacy in Data Publishing Against Attribute and ...ijcisjournal
The document proposes a new method for preserving privacy in data publishing against attribute and identity disclosure. It summarizes existing anonymization techniques like k-anonymity, l-diversity, and t-closeness that aim to prevent these privacy risks but have limitations. The proposed method applies suppression to select quasi-identifiers before generalizing the data into ranges within groups to anonymize the table, aiming to prevent both attribute and identity disclosure. It arranges the records into groups, finds minimum and maximum integer values within each group, and rewrites the quasi-identifiers as ranges to anonymize the table while preserving more utility than existing techniques.
DEVELOPMENT OF SECURE CLOUD TRANSMISSION PROTOCOL (SCTP) ENGINEERING PHASES :...ijcisjournal
Cloud computing technology provides various internet-based services. Many cloud computing vendors offer cloud services through their own service mechanisms. These mechanisms consist of various service parameters such as authentication, security, performance, availability, etc. Customers can access these cloud services through web browsers using HTTP protocols. Each protocol has its own way of achieving request-response services, authentication, confidentiality, etc. Cloud computing is an internet-based technology which provides Infrastructure, Storage and Platform services on demand through a browser using HTTP protocols. These protocol features can be enhanced using a cloud-specific protocol which provides strong authentication, confidentiality, security, integrity, availability and accessibility. We propose and present the Secure Cloud Transmission Protocol (SCTP) engineering phases; the protocol sits on top of existing HTTP protocols to provide strong authentication, security and confidentiality using multi-models. SCTP has a multi-level and multi-dimensional approach to achieve strong authentication and a multi-level security technique to achieve a secure channel. This protocol can be added on to existing HTTP protocols and can be used with any cloud service. This paper presents the proposed protocol engineering phases such as Service Specification, Synthesis, Analysis, Modelling, and the Implementation model with test suites. This paper represents a complete integration of our earlier proposed and published multilevel techniques.
High Capacity Image Steganography Using Adjunctive Numerical Representations ...ijcisjournal
LSB steganography is one of the most widely used methods for implementing covert data channels in
image file exchanges [1][2]. The low computational complexity and implementation simplicity of the algorithm are significant factors for its popularity, with the primary reason being low image distortion. Many attempts have been made to increase the embedding capacity of LSB algorithms by expanding into the second or third binary layers of the image while maintaining a low probability of detection with minimal distortive effects [2][3][4]. In this paper, we introduce an advanced technique for covertly embedding data within images using redundant number system decomposition over non-standard digital bit planes. Both grayscale and bit-mapped images are equally effective as cover files. It will be shown that this unique steganography method has minimal visual distortive effects while also preserving the cover file statistics, making it less susceptible to most general steganography detection algorithms.
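For background only, the classical LSB embedding baseline referenced above can be sketched in a few lines of numpy; this is a generic illustration, not the redundant-number-system method proposed in that paper, and the variable names are hypothetical.

import numpy as np

def lsb_embed(cover, payload_bits):
    # Replace the least significant bit of the first len(payload_bits) pixels.
    flat = cover.astype(np.uint8).ravel().copy()
    bits = np.asarray(payload_bits, dtype=np.uint8)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    # Read back the first n_bits least significant bits.
    return stego.astype(np.uint8).ravel()[:n_bits] & 1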
Wavelet Based on the Finding of Hard and Soft Faults in Analog and Digital Si...ijcisjournal
In this paper, methods for detecting both hard and soft faults in analog and digital signal circuits are
presented. They are based on the wavelet transform (WT). The fault detectability limit for the reference
circuits is set by statistically processing data obtained from a set of fault-free circuits. Two
wavelet-analysis algorithms are introduced, one based on a discrimination factor using Euclidean
distances and the other using Mahalanobis distances, both relying on wavelet energy calculation.
Simulation results from the proposed test methods on known analog and digital signal circuit benchmarks
are given. The results show the effectiveness of the two test metrics against other test methods, namely a
test method based on the RMS value of the measured signal and a test method utilizing the harmonic
magnitude components of the measured signal waveform.
Hardware Implementation of Algorithm for Cryptanalysisijcisjournal
Cryptanalysis of block ciphers involves massive computations which are independent of each other and can
be instantiated simultaneously so that the solution space is explored at a faster rate. With the advent of low
cost Field Programmable Gate Arrays (FPGAs), building special purpose hardware for computationally
intensive applications has now become possible. For this, the Data Encryption Standard (DES) is used as a
proof of concept. This paper presents the design for a hardware implementation of DES cryptanalysis on an
FPGA using exhaustive key search. Two architectures, viz. rolled and unrolled DES, are compared,
and based on experimental results the rolled architecture is implemented on the FPGA. The aim of this
work is to make cryptanalysis faster and better.
DWT Based Audio Watermarking Schemes : A Comparative Study ijcisjournal
The main problem encountered during multimedia transmission is its protection against illegal distribution
and copying. One of the possible solutions for this is digital watermarking. Digital audio watermarking is
the technique of embedding watermark content to the audio signal to protect the owner copyrights. In this
paper, we used three wavelet transforms i.e. Discrete Wavelet Transform (DWT), Double Density DWT
(DDDWT) and Dual Tree DWT (DTDWT) for audio watermarking and the performance analysis of each
transform is presented. The key idea of the basic algorithm is to segment the audio signal into two parts,
one for synchronization code insertion and the other for watermark embedding. Initially, the binary
watermark image is scrambled using a chaotic technique to provide secrecy. By using Quantization Index
Modulation (QIM), this method works as a blind technique. The comparative analysis of the three methods
is made by conducting robustness and imperceptibility tests on five benchmark audio signals.
General Kalman Filter & Speech Enhancement for Speaker Identificationijcisjournal
The presence of noise increases the dimensionality of the information. A noise suppression algorithm is developed
by combining the General Kalman Filter with the Estimate Maximization (EM) framework. This
combination is helpful and effective in identifying the noise characteristics of an acoustic environment.
Recursion between the Estimate step and the Maximization step enables the algorithm to deal with any model of noise.
The same speech enhancement procedure is applied in the pre-processing stage of a conventional speaker
identification method. Due to the non-stationary nature of noise and speech, adaptive algorithms are
required. The algorithm is first applied to the speech enhancement problem and then extended to the
pre-processing step of speaker identification. The present work is compared in terms of significant
metrics with existing and popular algorithms, and the results show that the developed algorithm outperforms
them.
Cryptography is an art and science of secure communication. Here the sender and receiver are guaranteed
the security through encryption of their data, with the help of a common key. Both the parties should agree
on this key prior to communication. The cryptographic systems which perform these tasks are designed to
keep the key secret while assuming that the algorithm used for encryption and decryption is public. Thus
key exchange is a very sensitive issue. In modern cryptographic algorithms this security is based on the
mathematical complexity of the algorithm. But quantum computation is expected to revolutionize the computing
paradigm in the near future. This presents a challenge to researchers to develop new cryptographic
techniques that can survive the quantum computing era. This paper reviews the radical use of quantum
mechanics for cryptography.
SECURITY ANALYSIS OF THE MULTI-PHOTON THREE-STAGE QUANTUM KEY DISTRIBUTIONijcisjournal
The document presents a security analysis of a multi-photon three-stage quantum key distribution protocol that exploits asymmetry in detection strategies between legitimate users and eavesdroppers. It is found that under intercept-resend and photon number splitting attacks, the mean photon number can be greater than 1 while still achieving security, allowing for less efficient detectors. Error probabilities for the eavesdropper are calculated under these attacks as a function of mean photon number.
To the networks rfwkidea32 16, 32-8, 32-4, 32-2 and rfwkidea32-1, based on th...ijcisjournal
In this article, based on the network IDEA32-16, we have developed five new networks: RFWKIDEA32-16,
RFWKIDEA32-8, RFWKIDEA32-4, RFWKIDEA32-2 and RFWKIDEA32-1, which do not use round keys in the
round functions. It is shown that in the offered networks, as in a Feistel network, encryption and
decryption use the same algorithm, and any transformation can be used as the round function.
An efficient algorithm for sequence generation in data miningijcisjournal
Data mining is the method or the activity of analyzing data from different perspectives and summarizing it
into useful information. There are several major data mining techniques that have been developed and are
used in the data mining projects which include association, classification, clustering, sequential patterns,
prediction and decision tree. Among different tasks in data mining, sequential pattern mining is one of the
most important tasks. Sequential pattern mining involves the mining of the subsequences that appear
frequently in a set of sequences. It has a variety of applications in several domains such as the analysis of
customer purchase patterns, protein sequence analysis, DNA analysis, gene sequence analysis, web access
patterns, seismologic data and weather observations. Various models and algorithms have been developed
for the efficient mining of sequential patterns in large amounts of data. This research paper analyzes the
efficiency of three sequence generation algorithms, namely GSP, SPADE and PrefixSpan, on a retail dataset
by applying various performance factors. From the experimental results, it is observed that the PrefixSpan
algorithm is more efficient than the other two algorithms.
A 130-NM CMOS 400 MHZ 8-Bit Low Power Binary Weighted Current Steering DAC ijcisjournal
A low power, low voltage 8-bit Digital to Analog Converter consisting of different current sources in a binary
weighted array architecture is designed. The weights of the current sources depend on the binary
weights of the bits. This current steering DAC is suitable for high speed applications. The proposed DAC
has a DNL and INL of ±0.04 and ±0.05 respectively and a power consumption of 16.67 mW.
This binary array architecture, implemented in 0.13 µm 1P2M CMOS technology, shows good performance
in DNL, INL and area compared with other work.
Visual Image Quality Assessment Technique using FSIMEditor IJCATR
The goal of quality assessment (QA) research is to design algorithms that can automatically
assess the quality of images in a perceptually consistent manner. Image QA algorithms generally
interpret image quality as fidelity or similarity with a “reference” or “perfect” image in some perceptual
space. In order to improve the assessment accuracy for white noise, Gaussian blur, JPEG2000 compression
and other distortions, this paper puts forward an image quality assessment method based on phase
congruency and gradient magnitude. The experimental results show that the image quality assessment
method has higher accuracy than traditional methods and can accurately reflect the image visual
perception of the human eye. In this paper, we propose an image information measure that quantifies the
information that is present in the reference image and how much of this reference information can be
extracted from the distorted image.
Perceptual Weights Based On Local Energy For Image Quality AssessmentCSCJournals
This paper proposes an image quality metric that can effectively measure the quality of an image that correlates well with human judgment on the appearance of the image. The present work adds a new dimension to the structural approach based full-reference image quality assessment for gray scale images. The proposed method assigns more weight to the distortions present in the visual regions of interest of the reference (original) image than to the distortions present in the other regions of the image, referred to as perceptual weights. The perceptual features and their weights are computed based on the local energy modeling of the original image. The proposed model is validated using the image database provided by LIVE (Laboratory for Image & Video Engineering, The University of Texas at Austin) based on the evaluation metrics as suggested in the video quality experts group (VQEG) Phase I FR-TV test.
A HVS based Perceptual Quality Estimation Measure for Color ImagesIDES Editor
Human eyes are the best evaluation model for
assessing the image quality as they are the ultimate receivers
in numerous image processing applications. Mean squared
error (MSE) and peak signal-to-noise ratio (PSNR) are the
two most common full-reference measures for objective
assessment of the image quality. These are well known for
their computational simplicity and applicability for
optimization purposes, but somehow fail to correlate with the
Human Visual System (HVS) characteristics. In this paper a
novel HVS based perceptual quality estimation measure for
color images is proposed. The effects of error, structural
distortion and edge distortion have been taken into account in
order to determine the perceptual quality of an image
contaminated with various types of distortions like noise,
blurring, compression, contrast stretching and rotation.
Subjective evaluation using the Difference Mean Opinion Score
(DMOS) is also performed for assessment of the perceived
image quality. As depicted by the correlation values, the
proposed quality estimation measure proves to be an efficient
HVS based quality index. The comparisons in results also
show better performance than conventional PSNR and
Structural Similarity (SSIM).
RATIONAL STRUCTURAL-ZERNIKE MEASURE: AN IMAGE SIMILARITY MEASURE FOR FACE REC...AM Publications
Image similarity or image distortion assessment is the underlying technology in many computer vision applications, and is the root of many algorithms used in image processing. Many similarity measures have been proposed with the aim of achieving a high level of accuracy, and each of these measures has its strengths as well as its weaknesses. In this paper, we present a highly efficient hybrid measure for image similarity that is based on structural and momental measures. We propose a similarity measure called the rational structural-Zernike measure (ZSM) to determine a reliable similarity between any two images, including human face images. This measure combines the best features of two structural measures, the well-known structural similarity index measure (SSIM) and the feature similarity index for image quality assessment (FSIM), with Zernike moments (ZMs), which have proven effective in the extraction of image features. Simulation results show that the proposed measure outperforms SSIM, FSIM, ZMs and the state-of-the-art measure Feature-Based Structural Measure (FSM) through its ability to detect similarity even under distortion and to recognise the similarity between images of human faces under various conditions of facial expression and pose.
This document summarizes a paper that proposes a reduced reference image quality assessment method using local entropy and a fuzzy inference system. It first extracts local entropy features from reference and distorted images to obtain probability distributions. It then calculates the Kullback-Leibler divergence (KLD) between distributions as a measure of distortion. KLD values are used to classify images into good, average, or bad quality classes. A fuzzy inference system is created using KLD as the input and predicted quality score as the output. Rules are defined to map KLD values to quality scores based on the image quality classes determined during threshold analysis of KLD values. The method aims to assess image quality with only partial information from the reference image.
Optimized Biometric System Based on Combination of Face Images and Log Transf...sipij
Biometrics are used to identify a person effectively. In this paper, we propose an optimised face
recognition system based on log transformation and the combination of face image feature vectors. The face
images are preprocessed using a Gaussian filter to enhance image quality. The log transformation
is applied to the enhanced image to generate features. The feature vectors of many images of a single person
are converted into a single vector using average arithmetic addition. The Euclidean distance (ED) is
used to compare the test image feature vector with database feature vectors to identify a person.
Experiments show that the performance of the proposed algorithm is better than that of existing algorithms.
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDYcsandit
The majority of applications require high resolution images to derive and analyze data
accurately and easily. Image super resolution plays an effective role in those applications.
Image super resolution is the process of producing high resolution image from low resolution
image. In this paper, we study various image super resolution techniques with respect to the
quality of results and processing time. This comparative study introduces a comparison between
four algorithms of single image super-resolution. For fair comparison, the compared algorithms
are tested on the same dataset and same platform to show the major advantages of one over the
others.
MR Image Compression Based on Selection of Mother Wavelet and Lifting Based W...ijma
Magnetic Resonance (MR) imaging is a medical imaging technique that requires enormous data to be stored and
transmitted for high quality diagnostic applications. Various algorithms have been proposed to improve the
performance of the compression scheme. In this paper we extend the commonly used algorithms to image
compression and compare their performance. For an image compression technique, we have linked different
wavelet techniques using traditional mother wavelets and lifting based Cohen-Daubechies-Feauveau
wavelets with the low-pass filters of the length 9 and 7 (CDF 9/7) wavelet transform with Set Partition in
Hierarchical Trees (SPIHT) algorithm. A novel image quality index highlighting the shape of the histogram
of the target image is introduced to assess image compression quality. The index is used in place of the
existing traditional Universal Image Quality Index (UIQI) “in one go”. It offers extra information about
the distortion between an original image and a compressed image in comparisons with UIQI. The proposed
index is designed based on modelling image compression as combinations of four major factors: loss of
correlation, luminance distortion, contrast distortion and shape distortion. This index is easy to calculate
and applicable in various image processing applications. One of our contributions is to demonstrate that the
choice of mother wavelet is very important for achieving superior wavelet compression performance based
on the proposed image quality indexes. Experimental results show that the proposed image quality index plays
a significant role in the quality evaluation of image compression on the open source “BrainWeb:
Simulated Brain Database (SBD) ”.
Happiness Expression Recognition at Different Age ConditionsEditor IJMTER
This document proposes a new robust subspace method called Proposed Euclidean Distance Score Level Fusion (PEDSLF) for recognizing happiness facial expressions with age variations. PEDSLF performs score level fusion of three subspace methods - PCA, ICA, and SVD. It normalizes the scores from each method and takes their maximum value for classification. The method is tested on two databases from FGNET and achieves recognition rates of 81.8% for ages 1-5 training and 10-15 testing, and 72% for ages 20-25 training and 30-35 testing. The results show PEDSLF performs better than the individual subspace methods for facial expression recognition with age variations.
Performance Analysis of SVM Classifier for Classification of MRI ImageIRJET Journal
This document discusses using support vector machines (SVM) to classify MRI brain images as normal, benign tumor, or malignant tumor. Key steps include preprocessing images using median and Gaussian filters, extracting features using gray level co-occurrence matrix (GLCM) analysis, and training and testing an SVM classifier on the extracted features to classify new MRI images. The methodology first segments regions of interest in the images using k-means clustering, then extracts GLCM texture features from those regions to train and test the SVM for tumor classification.
Development and Comparison of Image Fusion Techniques for CT&MRI ImagesIJERA Editor
Image processing techniques primarily focus upon enhancing the quality of an image or a set of images to derive
the maximum information from them. Image Fusion is a technique of producing a superior quality image from a
set of available images. It is the process of combining relevant information from two or more images into a
single image wherein the resulting image will be more informative and complete than any of the input images. A
lot of research is being done in this field encompassing areas of Computer Vision, Automatic object detection,
Image processing, parallel and distributed processing, Robotics and remote sensing. This project paves the way to
explaining the theoretical and implementation issues of seven image fusion algorithms and the experimental results
of the same. The fusion algorithms are assessed based on the study and development of some image
quality metrics.
Diabetes Mellitus Detection Based on Facial Texture Feature using the GLCMIRJET Journal
This document presents a study on detecting diabetes mellitus through analyzing facial texture features extracted from images using the Gray Level Co-occurrence Matrix (GLCM). Texture features such as contrast, correlation, energy, and homogeneity are extracted from facial images of people with and without diabetes. A Support Vector Machine classifier is then used to classify the images based on these features. The study achieved accuracy in distinguishing between diabetic and healthy samples based on a dataset of 40 facial images.
An Experimental Study into Objective Quality Assessment of Watermarked ImagesCSCJournals
In this paper, we study the quality assessment of watermarked and attacked images using extensive experiments and related analysis. The process of watermarking usually leads to loss of visual quality and therefore it is crucial to estimate the extent of quality degradation and its perceived impact. To this end, we have analyzed the performance of four image quality assessment (IQA) metrics – Structural Similarity Index (SSIM), Singular Value Decomposition Metric (M-SVD), Image Quality Score (IQS) and PSNR – on watermarked and attacked images. The watermarked images are obtained by using three different schemes, viz. (1) DCT based random number sequence watermarking, (2) DWT based random number sequence watermarking and (3) RBF Neural Network based watermarking. The signed images are attacked by using five different image processing operations. We observe that the metrics behave identically in case of all three watermarking schemes. An important conclusion of our study is that PSNR is not a suitable metric for IQA as it does not correlate well with the human visual system’s (HVS) perception. It is also found that the M-SVD scatters significantly after embedding the watermark and after attacks as compared to SSIM and IQS. Therefore, it is a less effective quality assessment metric for watermarked and attacked images. In contrast to PSNR and M-SVD, SSIM and IQS exhibit more stable and consistent performance. Their comparison further reveals that, except for the case of counterclockwise rotation, IQS scatters relatively less for all the other four attacks used in this work. It is concluded that IQS is comparatively more suitable for quality assessment of signed and attacked images.
The document discusses appearance-based face recognition using PCA and LDA algorithms. It summarizes the steps of each algorithm and compares their performance on preprocessed face images from the Faces94 database. Image preprocessing techniques like grayscale conversion and modified histogram equalization are applied before PCA and LDA to enhance image quality and improve recognition rates. The paper aims to study PCA and LDA with respect to recognition accuracy and dimensionality.
This document proposes a blind/no-reference image quality assessment algorithm called BLIINDS-II based on natural scene statistics of discrete cosine transform (DCT) coefficients. It extracts DCT coefficients from image blocks and models them using a generalized Gaussian distribution. The model parameters are used as features to predict image quality scores. When tested on the LIVE IQA database, BLIINDS-II correlates highly with human perceptions of quality, competitively with full-reference metrics like SSIM. It provides a computationally efficient, distortion-agnostic means of blind image quality assessment.
Face Recognition Using Gabor features And PCAIOSR Journals
This document summarizes a research paper on face recognition using Gabor features and principal component analysis (PCA). It begins by providing background on face recognition and discusses challenges like lighting, pose, and orientation. It then describes preprocessing faces using Gabor wavelets to extract discriminative features and reduce variations. PCA is used to further reduce the dimensionality of features into principal components. These components are used for classification, with nearest neighbor classification tested on the Yale face database. Results show the proposed approach improves recognition rates compared to Euclidean distance measures.
SIGNIFICANCE OF DIMENSIONALITY REDUCTION IN IMAGE PROCESSING sipij
The aim of this paper is to present a comparative study of two linear dimension reduction methods namely
PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to
transform the high dimensional input space onto the feature space where the maximal variance is
displayed. The feature selection in traditional LDA is obtained by maximizing the difference between
classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the
whole data set, whereas LDA tries to find the axes for best class separability. The neural network is trained
on the reduced feature set (using PCA or LDA) of images in the database for fast searching of images
from the database using the back propagation algorithm. The proposed method is experimented over a general
image database using Matlab. The performance of these systems has been evaluated by Precision and
Recall measures. Experimental results show that PCA gives better performance in terms of higher
precision and recall values with lower computational complexity than LDA.
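As a small illustration of the two reductions discussed above (the back-propagation retrieval network is not reproduced here), scikit-learn can be used as follows; the data, class count and component numbers are hypothetical.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical feature matrix (n_samples x n_features) and class labels.
X = np.random.rand(200, 64)
y = np.random.randint(0, 5, size=200)

X_pca = PCA(n_components=10).fit_transform(X)                           # unsupervised: directions of maximum variance
X_lda = LinearDiscriminantAnalysis(n_components=4).fit_transform(X, y)  # supervised: best class separability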
This document presents a study that uses pre-trained convolutional neural networks (CNNs) as feature extractors for blur detection in digital breast tomosynthesis (DBT) images. Specifically, it examines ResNet18, ResNet50, AlexNet, VGG16 and InceptionV3 CNNs connected to a support vector machine (SVM) classifier to label DBT images as blurry or not blurry. The CNN-SVM combinations are evaluated based on accuracy, receiver-operating characteristic curves, area under the curve, and execution time. The results found that InceptionV3 achieved the best accuracy of 0.9961 and area under the curve of 0.9961, while AlexNet had the shortest processing time. The study aims to
Quality assessment of resultant images after processingAlexander Decker
This document discusses quality assessment of images after processing. It provides an overview of traditional perceptual image quality assessment approaches, which are based on measuring errors between distorted and reference images. These methods involve channel decomposition, error normalization based on visual sensitivity, and error pooling. The document also discusses information theoretic approaches to quality assessment, which view it as an information fidelity problem rather than just a signal fidelity problem. These approaches relate visual quality to the mutual information shared between the reference and test images. However, these methods make assumptions that are difficult to validate.
Blind Image Quality Assessment with Local Contrast Features
International Journal on Cybernetics & Informatics (IJCI) Vol. 5, No. 4, August 2016
DOI: 10.5121/ijci.2016.5422
BLIND IMAGE QUALITY ASSESSMENT WITH LOCAL
CONTRAST FEATURES
Ganta Kasi Vaibhav,
PG Scholar, Department of Electronics and Communication Engineering,
University College of Engineering Vizianagaram, JNTUK.
Ch.Srinivasa Rao,
Professor, Department of Electronics and Communication Engineering,
University College of Engineering Vizianagaram, JNTUK.
ABSTRACT
The aim of this research is to create a tool to evaluate distortion in images without the information about
original image. Work is to extract the statistical information of the edges and boundaries in the image and
to study the correlation between the extracted features. Change in the structural information like shape and
amount of edges of the image derives quality prediction of the image. Local contrast features are effectively
detected from the responses of Gradient Magnitude (G) and Laplacian of Gaussian (L) operations. Using
the joint adaptive normalisation, G and L are normalised. Normalised values are quantized into M and N
levels respectively. For these quantised M levels of G and N levels of L, Probability (P) and conditional
probability(C) are calculated. Four sets of values namely marginal distributions of gradient magnitude Pg,
marginal distributions of Laplacian of Gaussian Pl, conditional probability of gradient magnitude Cg and
probability of Laplacian of Gaussian Cl are formed. These four segments or models are Pg, Pl, Cg and Cl.
The assumption is that the dependencies between features of gradient magnitude and Laplacian of
Gaussian can formulate the level of distortion in the image. To find out them, Spearman and Pearson
correlations between Pg, Pl and Cg, Cl are calculated. Four different correlation values of each image are
the area of interest. Results are also compared with classical tool Structural Similarity Index Measure
(SSIM)
KEYWORDS
Gradient Magnitude, Laplacian of Gaussian, Joint Adaptive Normalisation, Normalised Bivariate
Histograms, Spearman rank Correlation, Pearson Correlation Coefficient.
1. INTRODUCTION
Image quality assessment evaluates the quality of a distorted image. Factors which determine
image quality are noise, dynamic range, tone reproduction, colour accuracy, distortion, contrast,
exposure accuracy, lateral chromatic aberration, sharpness, colour moiré, vignetting and artefacts.
Distortion is defined as an abnormality, irregularity or variation caused in an image. This is
noticeable in low cost cameras. Distortions are caused during acquisition, compression,
transmission and storage. Changes in an image or its quality are either observed by
human subjects, called subjective measures, or calculated by mathematical operations, called
objective measures. Image quality assessment can also be categorised into with-reference models
and without-reference models. The first type of model finds the quality of an image by comparing
it with its original. The second type, also called blind image quality assessment,
finds the quality of a distorted image without comparing it with its original.
Local contrast features describe the structure of the image. Changes in the structure of the
image, such as the shape and amount of edges, are detected easily. Two general local contrast features are
the gradient magnitude and the Laplacian of Gaussian. Joint adaptive normalisation (JAN) normalises the G
and L channels jointly. The benefit of JAN is that it treats the horizontal, vertical and diagonal
features of the image together; it reduces the redundancies in the image, and normalisation stabilizes
the profiles of these features. The G and L distributions of distorted images are very different from those of natural
images. By quantising G and L into levels and thereby forming a joint probability function, statistics are derived.
Marginal distributions and conditional probability dependency measures are recorded, and four
different correlations are drawn from them.
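As a rough sketch of how these two local contrast responses can be computed (scipy is used here purely for illustration; the scale parameter sigma is an assumption, not a value taken from this paper):

import numpy as np
from scipy import ndimage

def gm_and_log(image, sigma=0.5):
    # image is a 2-D float array.
    # Gradient magnitude from Gaussian-derivative filters along rows and columns.
    gx = ndimage.gaussian_filter(image, sigma, order=(0, 1))
    gy = ndimage.gaussian_filter(image, sigma, order=(1, 0))
    G = np.hypot(gx, gy)
    # Laplacian of Gaussian response at the same scale.
    L = ndimage.gaussian_laplace(image, sigma)
    return G, L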
Over the years there have been several performance measures for image quality assessment.
DMOS/MOS (difference/mean opinion scores) are subjective measures based on human
judgements [4]. Databases established by the IQA community include LIVE, CSIQ and TID2008.
CSIQ, TID2008 and LIVE share four common types of distortion: JP2K, JPEG, WN and
Gaussian blur. These are used to identify the characteristics of the various distortions. To find the
performance of a method, the correlations between its scores and the subjective scores of
human judgements are calculated. The correlations used are the Spearman rank order correlation
coefficient (SRC) and the Pearson correlation coefficient (PCC). The proposed blind image quality
assessment model is compared with the Structural Similarity Index Measure (SSIM).
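For concreteness, SRC and PCC between a model's objective scores and the subjective scores can be obtained with scipy.stats as below; the two score arrays are hypothetical placeholders.

from scipy.stats import spearmanr, pearsonr

# predicted_scores: objective scores from the model; dmos_scores: subjective DMOS values.
src, _ = spearmanr(predicted_scores, dmos_scores)   # monotonic (rank) agreement
pcc, _ = pearsonr(predicted_scores, dmos_scores)    # linear agreement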
Experimental results show that there is equivalent information in each of the four sets of statistics
we derive. However, the joint statistics of the Pearson model give better results. Existing models
involve lengthy procedures to find the disturbances in the frequencies over different
distortions. Some models even change the features of the image. G and L operations are close to
the results of the human visual system, and G and L are independent over the distortions. The
marginal and independency distributions can determine the quality of the image. The joint adaptive
normalisation procedure normalises the G and L features. The proposed model uses independency
distributions to measure joint statistics, which leads to highly competitive results in terms of
quality prediction, generalisation ability and effectiveness. Existing models have high computational
complexity. To avoid regression, and to obtain a finer quality measure with no training or
regression methods, we derive a new method based on probability statistics.
This paper is organised into four sections. Section II gives a brief study of the existing models of
without-reference image quality assessment. Section III presents the features of the proposed
model in detail, with experimental results at each stage of the process. Section IV
concludes the paper.
2. RELATED WORK
2.1. Literature Survey
There are several image quality assessment models. Mean Square Error (MSE) is the primitive
measure [2]. When the original image is available as the known reference, a written explanation of the
measure exists in the literature. MSE was found to be a very important measure to
compare two signals. It provides a similarity index score that gives the degree of similarity.
The similarity index map shows the amount of distortion between two signals. It is simple, parameter free and
inexpensive, and it is employed widely for optimizing and assessing signal processing applications.
Yet it does not measure signal fidelity to the required extent. When an image is altered by two different
distortions at the same level, MSE gives only the amount of distortion, not its visual impact. MSE often runs
contrary to human perception. MSE led to the development of Minimum Mean Square Error and the Peak Signal to
Noise Ratio.
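For reference, the standard definitions of MSE and the derived PSNR can be written in a few lines of numpy; the peak value of 255 assumes 8-bit images.

import numpy as np

def mse(ref, dist):
    # Mean of the squared pixel-wise differences between reference and distorted images.
    return np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)

def psnr(ref, dist, peak=255.0):
    m = mse(ref, dist)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)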
The Structural Similarity Index Measure (SSIM) [3] is developed based on the luminance, contrast and
structure of an image. It is an objective image quality measure that quantifies the visibility of errors,
and it is a similarity measure for comparing any two signals. Like MSE, it is a full
reference model, and it depends on structural similarity. It has an SSIM index map, which shows
the comparison better than MSE. It is a complex measure for an image with large content.
Though the SSIM index gives better results than the traditional mean square error, it
cannot define the type of distortion present in the image. This novel work gave scope
for the study of the structure statistics of the image. Furthermore, it is used to simplify algorithms in
image processing systems. For image quality measurement, SSIM proved more effective than mean
square error, but it is computationally more expensive than MSE.
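As a hedged usage example (not the configuration used in this paper), the SSIM between a reference and a distorted grayscale image can be computed with scikit-image:

from skimage.metrics import structural_similarity

# ref and dist are grayscale arrays of the same shape; data_range matches the bit depth.
score, ssim_map = structural_similarity(ref, dist, data_range=255, full=True)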
Difference Mean Opinion Scoring (DMOS) is a performance evaluation study of existing
IQA methods along with subjective scores collected over a period of time from numerous
human subjects [4]. There is no replacement for the Human Visual System (HVS) [5]. When DMOS
was compared with objective measures like PSNR, MSE and SSIM, it proved that the Human Visual
System gives better evaluation. A subjective group verified and gave responses over the LIVE database,
which consists of 779 distorted images derived from 29 original images with five distortions [14]. It gave
importance to visual difference, and used the Spearman Rank Correlation and Root Mean Square
Error for validation. It has a 95% confidence criterion for deciding whether an image is distorted or not. This
study gave a valuable resource of distortion scores and led to the study of natural scene statistics.
DMOS is used as a benchmark for checking constructed models.
The reference image is not always available. Hence, there is a need for a without-reference image
quality assessment measure, also called Blind Image Quality Assessment. It assesses the quality
of an image completely blind, i.e., without any knowledge of the source distortion [5]. Distorted
Image Statistics (DIS), which are used to classify images into distortion categories, make it easy to
decide the type of distortion [5]. In this literature, a wavelet transform is performed on the image
and indices are derived from it. A shape parameter is defined using a Gaussian distribution. Given a training set and a
testing set of distorted images, a Support Vector Machine classifier is used to classify an image into five
distortion types. This is the first without-reference method, and it can differentiate the type of distortion.
The results of this model correlate with reference models. Its drawback is its computational
complexity. Any module of this methodology can be replaced with one which performs better, i.e. either
by increasing the number of distortions or by increasing the training set for better results. This model
can be used for video processing by adding measures of relevant perceptual features.
Using the probability of occurrence of such indices as features, a new approach, BLIINDS, was proposed [6]. It is a model built on features derived from Discrete Cosine Transform (DCT) domain statistics. While the previous no-reference methods were distortion specific, this approach can explain the type of distortion. It uses a Support Vector Machine (SVM) and correlates well with human visual perception. It is computationally convenient, as it is based entirely on a DCT framework, and it beats the performance of peak signal to noise ratio. The probabilistic prediction model was trained on a small sample of the data and only required the computation of the mean and the covariance of the training data.
It computes a blockiness measure and estimates the particular type of distortion, although the time taken for computation is high.
The final evolution is a reduction of complexity by combining natural scene statistics with distorted image statistics [1]. The work extracts the statistical information of the edges and boundaries in the image and studies the correlation between the extracted features: changes in structural information, such as the shape and amount of edges, drive the quality prediction. Local contrast features are effectively detected from the responses of the Gradient Magnitude (G) and Laplacian of Gaussian (L) operators. Adaptive procedures are used to normalize the values of G and L, and the normalized values are quantized into a fixed number of levels. The conditional probabilities and marginal distributions of G and L are then calculated and organised into three feature sets, giving three models M1, M2 and M3, which contain only the conditional probability values, only the marginal distribution values, and both together, respectively. Feeding these values into support vector regression over the set of images collected from the LIVE database, a probable quality score is determined for each distortion.
Training these values into a support vector machine is complex. This paper is therefore a thorough study of the conditional probability and marginal probability values of the gradient magnitude and Laplacian of Gaussian. Marginal distributions, conditional probabilities and the dependencies between them, which lead to highly competitive performance, are employed directly in this work. This avoids the training and learning of features, and so the complexity of the modelling is reduced. The procedure is a direct extraction of the amount of edges and change in the image, and an intensive measurement of the structural information is derived from the correlations between the amounts of variation caused in the image. Four correlations, namely the Pearson correlation coefficient between Pg and Pl (PRCP), the Pearson correlation coefficient between Cg and Cl (PRCQ), the Spearman rank correlation between Pg and Pl (SRCP) and the Spearman rank correlation between Cg and Cl (SRCQ), are proposed in this paper.
3. PROPOSED WORK
As discussed in the above section, the methodology to find out the profiles of the structural features and to derive the correlations between them is explained in stages below, along with the relevant outcome of each stage.
3.1. Local Features
Local contrast features give information about the amount of change in the structure of an image. Two general local contrast features are the Gradient Magnitude (G) and the Laplacian of Gaussian (L). Discontinuities in structural details, such as the luminance of an image or changes in intensities, are important for quality assessment, and they can be derived by performing the gradient magnitude and Laplacian of Gaussian operations. We use the CSIQ database, a commonly used international database for image quality assessment. The Trolley image from this database is considered and denoted by I. To standardize, the image is converted from RGB to grayscale; the gradient magnitude and Laplacian of Gaussian operators are then applied over the full size of the image in the following stages.
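A minimal sketch of this preparation step is given below, assuming the Trolley image is available locally as "trolley.png" (the file name is illustrative) and using the standard luminance weights for the grayscale conversion.

```python
import numpy as np
from PIL import Image

# Load the test image and convert RGB to grayscale; I is the image used
# in the following stages.
rgb = np.asarray(Image.open("trolley.png").convert("RGB"), dtype=np.float64)
I = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```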
3.2. Gradient Magnitude
The gradient magnitude is a first order derivative, often used to detect the edges in the image. The expression for the gradient magnitude is

$$G = \sqrt{(I * V)^2 + (I * H)^2}$$

where $*$ denotes 2-D convolution. The vertical Prewitt filter kernel (V) is taken as [-1 -1 -1; 0 0 0; 1 1 1] and the horizontal Prewitt filter kernel (H) as [-1 0 1; -1 0 1; -1 0 1]. Applying these kernels to the image and substituting into the expression for G above gives the gradient magnitude image. The original image, the vertical Prewitt filtered image, the horizontal Prewitt filtered image and the resultant gradient magnitude image are shown in figure 1.
Figure.1. 1) Original image 2) Vertical Prewitt Filtered Image 3) Horizontal Prewitt Filtered Image 4)
Resultant gradient magnitude image.
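A short sketch of this step with the Prewitt kernels given above; scipy.ndimage is used for the 2-D convolution, and the border handling mode is an assumption not fixed by the text.

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt kernels as given in the text.
V = np.array([[-1, -1, -1],
              [ 0,  0,  0],
              [ 1,  1,  1]], dtype=np.float64)
H = np.array([[-1, 0, 1],
              [-1, 0, 1],
              [-1, 0, 1]], dtype=np.float64)

def gradient_magnitude(I):
    # G = sqrt((I*V)^2 + (I*H)^2), with * denoting 2-D convolution.
    gv = convolve(I, V, mode="nearest")
    gh = convolve(I, H, mode="nearest")
    return np.sqrt(gv ** 2 + gh ** 2)

G = gradient_magnitude(I)
```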
3.3. Laplacian of Gaussian
The Laplacian of Gaussian is a second order derivative, as shown in the equation below. The expression for the Laplacian of Gaussian is

$$L = I * h_{LoG}, \qquad h_{LoG}(x,y) = \frac{\partial^2}{\partial x^2} g(x,y) + \frac{\partial^2}{\partial y^2} g(x,y)$$

where $g(x,y)$ is a Gaussian smoothing kernel and $*$ denotes convolution. The G and L operations reduce the spatial redundancies in the image. The Laplacian of Gaussian filtered image is shown in figure 2. Some consistencies between neighbouring structures still remain, so to remove these we perform joint adaptive normalisation.
Figure 2. Laplacian of Gaussian performed image.
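A corresponding sketch for the Laplacian of Gaussian response; SciPy provides the combined Gaussian-Laplace filter directly, and the value of sigma here is an assumed smoothing scale, not one fixed by the paper.

```python
from scipy.ndimage import gaussian_laplace

# Laplacian of Gaussian response of the grayscale image I.
L = gaussian_laplace(I, sigma=0.5)
```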
3.4. Joint Adaptive Normalisation
Joint adaptive normalization (JAN) is performed to remove the spatial redundancies that remain in the image [1]. It decomposes the channel responses into different frequencies and orientations, and both G and L are reduced according to a locally computed normalization factor.
Figure 3. 1) Laplacian of Gaussian 2) Gradient Magnitude 3) Joint Factored Image
3.5. Locally Adaptive Normalization Factor
A 3×3 mask whose values sum to 1 is applied to the image. As the mask is run over the squared joint response (the sum of the squared G and L values) and the square root of the result is taken, the locally adaptive normalization factor is obtained. The last step of this procedure is to compute the new values $\bar{G}(i,j)$ and $\bar{L}(i,j)$ by dividing G(i,j) and L(i,j) by the normalisation factor. The variation in the Buildings image before and after joint adaptive normalisation is shown in figure 4.
Figure 4. 1) Gradient Magnitude 2) GM after joint adaptive normalisation 3) Laplacian of Gaussian 4) LoG after joint adaptive normalisation
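A minimal sketch of the joint adaptive normalisation described above; the uniform 3×3 mask and the small constant guarding against division by zero are assumptions, since the text does not fix them.

```python
import numpy as np
from scipy.ndimage import convolve

def joint_adaptive_normalisation(G, L, eps=1e-8):
    # 3x3 mask whose weights sum to 1 (uniform weights assumed here).
    w = np.ones((3, 3), dtype=np.float64) / 9.0
    # Run the mask over the squared joint response and take the square root
    # to obtain the locally adaptive normalisation factor.
    N = np.sqrt(convolve(G ** 2 + L ** 2, w, mode="nearest"))
    # Reduce both feature maps by the normalisation factor.
    return G / (N + eps), L / (N + eps)

G_bar, L_bar = joint_adaptive_normalisation(G, L)
```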
3.6. Quantization
The features obtained on applying the gradient magnitude and Laplacian of Gaussian operators are quantised. This is performed to decrease the dynamic range and to bring the features into an optimum range. We quantize $\bar{G}(i,j)$ into M levels $\{g_1, g_2, \ldots, g_M\}$ and, similarly, $\bar{L}(i,j)$ into N levels $\{l_1, l_2, \ldots, l_N\}$. In this case we take 17 levels for each, to which we assign 17 different pixel values. This may be a lossy process, but it is done to derive the respective density functions of the gradient magnitude and Laplacian of Gaussian features. The resultant Trolley image after quantization into 17 levels is shown in figure 5.
Figure 5. 1) GM quantised into 17 levels 2) LOG quantised into 17 levels
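A sketch of the quantization step; a uniform quantizer into 17 integer levels is assumed here, as the text does not specify the bin placement.

```python
import numpy as np

def quantize(X, levels=17):
    # Uniformly quantize X into integer levels 1..levels.
    X = X - X.min()
    step = (X.max() + 1e-12) / levels
    return np.clip(np.floor(X / step).astype(int) + 1, 1, levels)

Gq = quantize(G_bar, 17)   # quantized gradient magnitude
Lq = quantize(L_bar, 17)   # quantized Laplacian of Gaussian
```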
3.7. Marginal Distributions and Conditional Probability
Dependency measures such as marginal distributions and conditional probabilities closely relate to the amount of distortion present in the image and give a more comprehensive evaluation of the extracted features. For the 17 levels of the quantized images, the marginal distributions and conditional probabilities are derived as follows.
To find out the marginal dependencies of $\bar{G}$ and $\bar{L}$, the procedure starts with deriving the joint empirical probability function over all levels:

$$K_{m,n} = P(\bar{G} = g_m, \bar{L} = l_n), \quad m = 1,\ldots,M, \; n = 1,\ldots,N$$

where $K_{m,n}$ is the normalised joint histogram of $\bar{G}$ and $\bar{L}$. The marginal distributions of $\bar{G}$ and $\bar{L}$ are given by the expressions below, and the marginal probabilities of $\bar{G}$ and $\bar{L}$ are shown in figure 6.

$$P_g(\bar{G} = g_m) = \sum_{n=1}^{N} K_{m,n}$$

$$P_l(\bar{L} = l_n) = \sum_{m=1}^{M} K_{m,n}$$

Figure 6. Marginal distributions of quantised $\bar{G}$ and $\bar{L}$.

Sometimes the marginal distributions do not show the dependencies between $\bar{G}$ and $\bar{L}$. The dependency between them is captured by the measure given in the equation below:

$$D_{m,n} = \frac{K_{m,n}}{P_g(g_m)\,P_l(l_n)}$$

Using the marginal distributions as weights, the conditional probabilities are derived for $\bar{G}$ and $\bar{L}$ as Cg and Cl. These probability distributions are otherwise called the independency distributions. The independency distributions of $\bar{G}$ and $\bar{L}$ are shown in figure 7.

$$C_g(\bar{G} = g_m) = P_g(g_m)\,\frac{1}{N}\sum_{n=1}^{N} D_{m,n}$$

$$C_l(\bar{L} = l_n) = P_l(l_n)\,\frac{1}{M}\sum_{m=1}^{M} D_{m,n}$$

Figure 7. Independency distributions of $\bar{G}$ and $\bar{L}$.
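A sketch of the computation of Pg, Pl, Cg and Cl from the quantized maps, following the expressions above; the small epsilon guarding empty bins is an added assumption.

```python
import numpy as np

def joint_statistics(Gq, Lq, M=17, N=17, eps=1e-12):
    # Joint empirical probability K[m, n] = P(G = g_m, L = l_n).
    K, _, _ = np.histogram2d(Gq.ravel(), Lq.ravel(),
                             bins=[M, N], range=[[1, M + 1], [1, N + 1]])
    K /= K.sum()
    Pg = K.sum(axis=1)                 # marginal distribution of G
    Pl = K.sum(axis=0)                 # marginal distribution of L
    D = K / (np.outer(Pg, Pl) + eps)   # dependency measure D[m, n]
    Cg = Pg * D.mean(axis=1)           # Cg(g_m) = Pg(g_m) * mean_n D[m, n]
    Cl = Pl * D.mean(axis=0)           # Cl(l_n) = Pl(l_n) * mean_m D[m, n]
    return Pg, Pl, Cg, Cl

Pg, Pl, Cg, Cl = joint_statistics(Gq, Lq)
```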
While finding out the marginal densities of GM and LOG and their corresponding profiles, it is seen that the changes in a distorted image and the randomness of the distortion are distinguishable through these profiles. The vertical, horizontal and diagonal profiles of $\bar{G}$ and $\bar{L}$ for an undistorted image are shown in figure 8; a distortion-free image has quite different distributions from a distorted version of it, and their profiles are plotted in the same way. The individual quantized G and L features are treated as random variables, and the figures below are plotted under the assumption of a binomial distribution of the data. Further, there is a need to identify the dependency between the distributions of G and L, so the joint probability density function between them is calculated; in the general case they turn out to be nearly independent, and the joint density reduces to the product of their individual marginal density functions. Sample profiles over 20 bins are shown in the plots of figure 8.
Figure 8. Horizontal, vertical and diagonal profiles of $\bar{G}$ and $\bar{L}$.
To represent the image accurately with a score, two correlations that can measure the relation between the structural features are used: the Spearman rank order correlation coefficient (SRC) and the Pearson correlation coefficient (PCC).
The dependencies between the features of the extracted horizontal, vertical and diagonal profiles can formulate the level of distortion in the image. To find them out, Spearman and Pearson correlations between Pg, Pl and Cg, Cl are calculated. We propose four models: the Pearson correlation coefficient between Pg and Pl (PRCP), the Pearson correlation coefficient between Cg and Cl (PRCQ), the Spearman rank correlation between Pg and Pl (SRCP) and the Spearman rank correlation between Cg and Cl (SRCQ). The scores of the four models and their comparison with the structural similarity value are tabulated in the columns below; highlighted values represent scores whose ordering coincides with the level of distortion.
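A sketch of the four proposed scores for a single image, using the SciPy implementations of the two correlation coefficients; Pg, Pl, Cg and Cl are the 17-bin distributions computed above.

```python
from scipy.stats import pearsonr, spearmanr

PRCP, _ = pearsonr(Pg, Pl)    # Pearson correlation of the marginals
PRCQ, _ = pearsonr(Cg, Cl)    # Pearson correlation of the conditionals
SRCP, _ = spearmanr(Pg, Pl)   # Spearman rank correlation of the marginals
SRCQ, _ = spearmanr(Cg, Cl)   # Spearman rank correlation of the conditionals
print(PRCP, PRCQ, SRCP, SRCQ)
```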
The scores of AWGN distorted images and their relevant SSIM values are recorded in table 1; for AWGN, Spearman correlations of the marginal distributions give the closest agreement. The scores of blur distorted images and their relevant SSIM values are given in table 2; for blur, Pearson correlations of the conditional probabilities give the closest agreement. The scores of images distorted with flicker noise and their relevant SSIM values are recorded in table 3; for flicker noise affected images, Pearson correlations of both the marginal distributions and the conditional probabilities give close agreement. The scores of JPEG distorted images and their relevant SSIM values are recorded in table 4; for JPEG, Pearson correlations of the marginal distributions give the closest agreement. The scores of JPEG2k distorted images and their relevant SSIM values are shown in table 5; for JPEG2k, Spearman correlations of the conditional probabilities and Pearson correlations of the marginal distributions give the closest agreement. Overall, the Pearson correlation coefficient of the marginal distributions proves to be exemplary.
Table 5. Scores of JPEG2k distorted images and their relevant SSIM value.
Level of Distortion PRCP PRCQ SRCP SRCQ SSIM
1 0.2414 0.9134 0.2918 0.9031 0.989
2 0.2321 0.9064 0.3333 0.8788 0.9637
3 0.1854 0.8377 0.3617 0.9031 0.9008
4 0.1334 0.9458 0.3818 0.8349 0.7839
5 0.0741 0.9203 0.4394 0.8788 0.6097
4. CONCLUSIONS
Existing BIQA models are complex, involving either elaborate decompositions or model learning with support vector regression, and a few of them, unlike the proposed method, modify the features of the image. Keeping this in mind, an attempt is made to use the correlations between the statistics of the local contrast features; since these statistics are computed independently, the image data is not disturbed. In this paper, simple normalisation procedures are used to derive joint statistics through joint adaptive normalisation. Marginal distributions, conditional probabilities and their dependencies led to highly competitive performance, and by avoiding the training and learning of the derived features, the complexity of the modelling is reduced. Amongst the four models, the Pearson correlation coefficient between Pg and Pl (PRCP) proved to be the most consistent, although all four models show affinity with structural similarity. While the Pearson correlation is a linear correlation, the Spearman correlation is a rank correlation; hence the results of the four proposed models differ across distortion types because of the variation in the structural profiles. This shows that variation in the image structure can indicate the type of distortion present in the image, which can lead to the development of newer models that determine the type of distortion.
REFERENCES
[1] W. Xue, X. Mou, L. Zhang, A. C. Bovik and X. Feng, "Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features", IEEE Trans. Image Processing, vol. 23, no. 11, pp. 4850-4862, 2014
[2] Z. Wang and A. C. Bovik, "Mean squared error: Love it or leave it? A new look at signal fidelity measures", IEEE Signal Processing Magazine, vol. 26, no. 1, pp. 98-117, 2009, doi: 10.1109/MSP.2008.930649
[3] Z. Wang and A. C. Bovik , H. R. Sheikh and E. P. Simoncelli, "Image quality assessment: From error
visibility to structural similarity", IEEE Trans. Image Processing, vol. 13, no. 4, pp. 600-612, 2004
[4] H. R. Sheikh, M. F. Sabir and A. C. Bovik, "A statistical evaluation of recent full reference image quality assessment algorithms", IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3440-3451, 2006
[5] A. K. Moorthy and A. C. Bovik, "A two-step framework for constructing blind image quality
indices",IEEE Signal Process. Lett. vol. 17, no. 5, pp. 513-516, 2010
[6] M. A. Saad, A.C. Bovik and C. Charrier, "A DCT statistics-based blind image quality index", IEEE
Signal Process. Lett. vol. 17, no. 6, pp. 583-586, 2010
[7] X. Marichal, W.Y. Ma and H. Zhang, "Blur determination in the compressed domain using DCT
information", Proc. ICIP, pp. 386-390
[8] A. Mittal, A. K. Moorthy and A. C. Bovik, "No-reference image quality assessment in the spatial
domain", IEEE Trans. Image Process., vol. 21, no. 12, pp. 4695-4708, 2012
[9] J. Canny, “A computational approach to edge detection", IEEE Trans. Pattern Anal. Mach. Intell., vol.
PAMI-8, no. 6, pp. 679-698, 1986
[10] M. A. Saad, A. C. Bovik and C. Charrier, "Blind image quality assessment: A natural scene statistics
approach in the DCT domain", IEEE Trans. Image Process., vol. 21, no. 8, pp. 3339-3352, 2012
[11] A. K. Moorthy and A. C. Bovik, "Blind image quality assessment: From natural scene statistics to
perceptual quality", IEEE Trans. Image Process., vol. 20, no. 12, pp. 3350-3364, 2011
[12] D. Marr and E. Hildreth, "Theory of edge detection", Proc. Roy. Soc. London B, Biol. Sci., vol. 207,
no. 1167, pp. 187-217, 1980
[13] G.-H. Chen, C.-L. Yang and S.-L. Xie, "Gradient-based structural similarity for image quality
assessment", Proc. ICIP, pp. 2929-2932
[14] H. R. Sheikh, Z. Wang, L. Cormack and A. C. Bovik, LIVE Image Quality Assessment Database Release 2, 2011. [Online]