This document proposes a new method called Symmetrical Weighted 2D Principal Component Analysis (SW2DPCA) for facial expression recognition. SW2DPCA reduces the dimensionality of the feature space extracted from face images with Gabor filters while removing redundant information. It does this by decomposing the training images into odd and even symmetrical components and applying weighted PCA to equalize the variance of the principal components. The method is tested on the JAFFE database, achieving a recognition accuracy of 95.24%, higher than the 2DPCA baseline method.
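The odd/even decomposition step can be sketched in a few lines of numpy, under the common definition of even/odd symmetry about the vertical face axis (a minimal illustration, not the authors' code):

```python
import numpy as np

def symmetric_decompose(img):
    """Split an image into even and odd horizontally symmetric components."""
    mirrored = img[:, ::-1]        # flip left-right
    even = (img + mirrored) / 2.0  # mirror-symmetric part
    odd = (img - mirrored) / 2.0   # antisymmetric part
    return even, odd

rng = np.random.default_rng(0)
face = rng.random((4, 4))
even, odd = symmetric_decompose(face)
assert np.allclose(even + odd, face)      # the parts sum back to the image
assert np.allclose(even, even[:, ::-1])   # even part is mirror-symmetric
assert np.allclose(odd, -odd[:, ::-1])    # odd part is antisymmetric
```

Any image is the exact sum of its two components, so no information is lost before the weighted PCA stage.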
Face Recognition System Using Local Ternary Pattern and Signed Number Multipl... (inventionjournals)
This paper presents a novel approach to face recognition. The task of face recognition is to verify a claimed identity by comparing a claimed image of an individual with other images in a database belonging to the same or other individuals. The proposed method uses the Local Ternary Pattern and signed bit multiplication to extract local features of a face. The image is divided into small non-overlapping windows, and features are extracted from these windows. The test image's features are compared with all training images using the Euclidean distance, and the image with the lowest distance is recognized as the true face image. If the distance between the test image and all training images exceeds a threshold, the test image is considered unrecognised (no match found). The face recognition rate of the proposed system is calculated by varying the number of images per person in the training database.
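The Local Ternary Pattern step can be sketched for a single 3×3 window; the threshold value and the window contents below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ltp_codes(window, t=5):
    """Ternary codes (-1/0/+1) for the 8 neighbours of a 3x3 window's centre."""
    c = window[1, 1]
    neigh = np.delete(window.flatten(), 4)  # 8 neighbours, centre removed
    codes = np.zeros(8, dtype=int)
    codes[neigh >= c + t] = 1               # clearly brighter than centre
    codes[neigh <= c - t] = -1              # clearly darker than centre
    return codes                            # in-between values stay 0

w = np.array([[90, 100, 80],
              [120, 100, 100],
              [70, 105, 130]])
codes = ltp_codes(w, t=5)
assert list(codes) == [-1, 0, -1, 1, 0, -1, 1, 1]
```

The tolerance band `[c-t, c+t]` is what makes LTP more noise-robust than the plain Local Binary Pattern, which thresholds strictly at the centre value.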
Hierarchical Approach for Total Variation Digital Image Inpainting (IJCSEA Journal)
The art of recovering an image from damage in an undetectable form is known as inpainting. Manual inpainting is most often a very time-consuming process; digitalizing the technique makes it automatic and faster. In this paper, after the user selects the regions to be reconstructed, the algorithm automatically reconstructs the lost regions with the help of the information surrounding them. Existing methods perform very well when the region to be reconstructed is very small, but fail to reconstruct properly as the area increases. This paper describes a hierarchical method in which the area to be inpainted is reduced over multiple levels and the Total Variation (TV) method is used to inpaint at each level. The algorithm performs better than existing algorithms such as nearest-neighbor interpolation, inpainting through blurring, and Sobolev inpainting.
Multimodal Biometrics Recognition by Dimensionality Diminution Method (IJERA Editor)
A multimodal biometric system utilizes two or more biometric modalities, e.g., face, ear, fingerprint, signature, or palmprint, to improve the recognition accuracy of conventional unimodal methods. We propose a new dimensionality reduction method called Dimension Diminish Projection (DDP). DDP not only preserves local information by capturing the intra-modal geometry, but also effectively extracts between-class structures relevant for classification. Experimental results show that the proposed method outperforms other algorithms, including PCA, LDA, and MFA.
An Illumination Invariant Face Recognition by Selection of DCT Coefficients (CSCJournals)
Face recognition is now popular in social networks and smartphones, but it is more difficult for poorly illuminated images. The objective of this work is to create an illumination invariant face recognition system using the 2D Discrete Cosine Transform (DCT) and Contrast Limited Adaptive Histogram Equalization (CLAHE), a technique also used for enhancing poor-contrast medical images. The proposed method selects 75% to 100% of the DCT coefficients and sets the high-frequency coefficients to zero. It resizes the image based on the selection percentage and then applies the inverse DCT, followed by CLAHE to adjust the contrast. The resized images reduce the computational complexity. The resulting illumination invariant face image is termed the 'En-DCT' image. The Fisherface subspace method is applied to the 'En-DCT' image to extract features, and the matching face image is obtained using cosine similarity. Recognition accuracy is tested on the AR database with 75% to 100% of the DCT coefficients to find the best range. The performance measures recognition rate, verification rate at 1% FAR (False Acceptance Rate), and Equal Error Rate (EER) are computed. The high recognition rates show that the proposed method is an efficient approach to illumination invariant face recognition.
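The DCT coefficient-selection step can be sketched with a plain numpy DCT-II basis; the `keep_low_freq` helper, the square toy image, and the fraction value are illustrative, and the paper's resizing and CLAHE stages are omitted:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2 / n)
    m[0, :] = np.sqrt(1 / n)
    return m

def keep_low_freq(img, frac=0.75):
    """2D DCT, zero the highest-frequency rows/columns, inverse 2D DCT.

    Assumes a square image for simplicity.
    """
    n = img.shape[0]
    D = dct_matrix(n)
    coeffs = D @ img @ D.T
    cut = int(round(n * frac))
    coeffs[cut:, :] = 0   # discard high-frequency rows
    coeffs[:, cut:] = 0   # and columns
    return D.T @ coeffs @ D

img = np.outer(np.arange(8.0), np.ones(8))
out = keep_low_freq(img, frac=1.0)
assert np.allclose(out, img)  # keeping all coefficients reconstructs exactly
```

Because the basis is orthonormal, keeping 100% of the coefficients is an exact round trip; smaller fractions act as a low-pass filter on the face image.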
Image restoration based on morphological operations (ijcseit)
Mathematical morphology, a class of nonlinear filters, is used throughout image processing: noise suppression, feature extraction, edge detection, image segmentation, shape recognition, texture analysis, image restoration and reconstruction, image compression, etc. It has evolved from traditional morphology to order morphology, soft mathematical morphology, and fuzzy soft mathematical morphology. This paper covers six morphological operations implemented in MATLAB: erosion, dilation, opening, closing, boundary extraction, and region filling.
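The erosion and dilation primitives, from which opening, closing, and boundary extraction are built, can be sketched in numpy (a naive loop-based illustration with a square structuring element, not the paper's MATLAB implementation):

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation: each output pixel is the max over a k x k window."""
    pad = k // 2
    p = np.pad(img, pad)  # pad with 0 so dilation cannot grow past the border
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = p[r:r + k, c:c + k].max()
    return out

def erode(img, k=3):
    """Binary erosion: each output pixel is the min over a k x k window."""
    pad = k // 2
    p = np.pad(img, pad, constant_values=1)  # pad with 1 so only real 0s erode
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = p[r:r + k, c:c + k].min()
    return out

img = np.zeros((7, 7), dtype=int)
img[2:5, 2:5] = 1                # a 3x3 square of foreground
opened = dilate(erode(img))      # opening = erosion followed by dilation
boundary = img - erode(img)      # boundary extraction: A minus (A eroded)
assert np.array_equal(opened, img)  # opening leaves the clean square intact
assert boundary.sum() == 8          # the square's 8-pixel outline
```

Opening and closing are just the two primitives chained in opposite orders, which is why implementations typically only ship erosion and dilation as kernels.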
Medoid based model for face recognition using eigen and fisher faces (ijscmcj)
Biometric technologies have gained remarkable impetus in high-security applications, and various biometric modalities are widely used. The need for unobtrusive biometric recognition can be fulfilled by face recognition, the most natural and non-intrusive authentication system. However, the vulnerability to variations in the face due to factors like pose, illumination, ageing, emotions, and expressions makes robust face recognition systems necessary. Various statistical models have been developed, with varying degrees of accuracy and efficiency. This paper discusses a new approach to the Eigenface and Fisherface methodology that uses the medoid instead of the mean as the statistic when calculating the Eigenfaces and Fisherfaces. The method not only requires less training but also demonstrates better time efficiency and performance than the conventional mean-based method.
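The medoid-for-mean substitution can be sketched as follows (a toy numpy illustration; the full Eigenface pipeline of covariance and eigendecomposition is omitted):

```python
import numpy as np

def medoid(X):
    """Medoid: the actual sample minimising total distance to all others."""
    # pairwise Euclidean distances between all rows of X
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return X[d.sum(axis=1).argmin()]

# three samples; the outlier at (10, 0) drags the mean but not the medoid
X = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]])
m = medoid(X)
assert np.array_equal(m, [1.0, 0.0])

# Eigenface-style centring would then subtract the medoid instead of the mean
centred = X - m
assert np.array_equal(centred[1], [0.0, 0.0])
```

Unlike the mean, the medoid is always one of the training faces, which is what makes it less sensitive to outlier images.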
AN ILLUMINATION INVARIANT FACE RECOGNITION USING 2D DISCRETE COSINE TRANSFORM... (ijcsit)
Automatic face recognition performance is affected by head rotation and tilt, lighting intensity and angle, facial expressions, aging, and partial occlusion of the face by hats, scarves, glasses, etc. In this paper, illumination normalization of face images is done by combining the 2D Discrete Cosine Transform (DCT) and Contrast Limited Adaptive Histogram Equalization (CLAHE). The proposed method keeps a certain percentage of the DCT coefficients and sets the rest to zero. The inverse DCT is then applied, followed by a logarithm transform and CLAHE. These steps create an illumination invariant face image, termed the 'DCT CLAHE' image. The Fisherface subspace method extracts features from the 'DCT CLAHE' image, and features are matched with cosine similarity. The proposed method is tested on the AR database, and performance measures such as recognition rate, verification rate at 1% FAR, and Equal Error Rate are computed. The experimental results show a high recognition rate on the AR database.
A Novel Feature Extraction Scheme for Medical X-Ray Images (IJERA Editor)
X-ray images are gray-scale images with almost the same textural characteristics, so conventional texture or color features cannot be used for appropriate categorization in medical X-ray image archives. This paper presents a novel combination of methods, GLCM, LBP, and HOG, for extracting distinctive invariant features from X-ray images belonging to the IRMA (Image Retrieval in Medical Applications) database that can be used to perform reliable matching between different views of an object or scene. The GLCM represents the distributions of intensities and information about the relative positions of neighboring pixels in an image. The LBP features are invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A HOG feature vector represents the local shape of an object, holding edge information over multiple cells. These features have been exploited in different algorithms for the automatic classification of medical X-ray images. Excellent experimental results obtained on true problems of rotation invariance, at particular rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns.
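A minimal GLCM, one of the three feature types above, can be computed directly in numpy. This sketch uses only the horizontal offset at distance 1; the paper presumably combines several offsets and angles:

```python
import numpy as np

def glcm(img, levels=4):
    """Grey-level co-occurrence matrix for the 0-degree, distance-1 offset."""
    m = np.zeros((levels, levels), dtype=int)
    for row in img:
        for a, b in zip(row[:-1], row[1:]):  # each horizontal pixel pair
            m[a, b] += 1
    return m

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
m = glcm(img)
# Haralick contrast: weighted by squared grey-level difference
contrast = sum(m[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
assert m[0, 0] == 2 and m[1, 1] == 2
assert contrast == 7
```

Texture descriptors such as contrast, energy, and homogeneity are all simple weighted sums over this matrix, which is why the GLCM is a compact intermediate representation.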
ANALYTICAL FORMULATIONS FOR THE LEVEL BASED WEIGHTED AVERAGE VALUE OF DISCRET... (ijsc)
In fuzzy decision-making processes based on linguistic information, operations on discrete fuzzy numbers are commonly performed; aggregation and defuzzification are among the most frequently used. Many aggregation and defuzzification operators produce results independent of the decision-maker's strategy. The Weighted Average Based on Levels (WABL) approach, on the other hand, can take into account the level weights and the decision-maker's "optimism" strategy. This gives flexibility to the WABL operator, which, through machine learning, can be trained in the direction of the decision-maker's strategy, producing more satisfactory results for the decision-maker. However, determining the WABL value requires calculating some integrals. In this study, the concept of WABL for discrete trapezoidal fuzzy numbers is investigated, and analytical formulas are proven that facilitate the calculation of the WABL value for these fuzzy numbers. Trapezoidal fuzzy numbers and their special form, triangular fuzzy numbers, are the most commonly used fuzzy number types in fuzzy modeling, so this study focuses on such numbers. Computational examples explaining the theoretical results are provided.
Face detection using the 3×3 block rank patterns of gradient magnitude images (sipij)
Face detection locates faces prior to various face-related applications. The objective of face detection is to determine whether or not there are any faces in an image and, if any, to detect the location of each face. Face detection in real images is challenging due to the large variability of illumination and face appearances. This paper proposes a face detection algorithm using the 3×3 block rank patterns of gradient magnitude images and a geometrical face model. First, the illumination-corrected image of the face region is obtained using the brightness plane that is produced from the locally minimum brightness of each block. Next, the illumination-corrected image is histogram equalized, the face region is divided into nine (3×3) blocks, and two directional (horizontal and vertical) gradient magnitude images are computed, from which the 3×3 block rank patterns are obtained. For face detection, using the FERET and GT databases, three types of 3×3 block rank patterns are determined a priori as templates based on the distribution of the sum of the gradient magnitudes of each block in the face candidate region, which is also composed of nine (3×3) blocks. The 3×3 block rank patterns roughly classify whether the detected face candidate region contains a face or not. Finally, facial features are detected and used to validate the face model: the face candidate is validated as a face if it matches the geometrical face model. The proposed algorithm is tested on the Caltech database images and real images. Experimental results with a number of test images show the effectiveness of the proposed algorithm.
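The block-ranking idea can be sketched as follows: split a candidate region into nine blocks, sum the gradient magnitude in each, and rank the blocks. The function name and toy data are illustrative; the paper's exact ranking and template-matching details differ:

```python
import numpy as np

def block_rank_pattern(gmag):
    """Rank the nine 3x3 blocks of a region by their summed gradient magnitude."""
    h, w = gmag.shape
    bh, bw = h // 3, w // 3
    sums = np.array([[gmag[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].sum()
                      for c in range(3)] for r in range(3)])
    # rank 0 = block with the largest gradient energy
    return sums.ravel().argsort()[::-1].argsort().reshape(3, 3)

rng = np.random.default_rng(2)
region = rng.random((9, 9))          # stand-in for a gradient magnitude image
ranks = block_rank_pattern(region)
assert sorted(ranks.ravel().tolist()) == list(range(9))  # a permutation of 0..8
```

Comparing such rank patterns against a priori templates is cheap, which is why it works as a coarse pre-filter before the geometrical face-model validation.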
Image enhancement plays a vital role in improving the quality of an image. An enhancement technique basically enhances the foreground information, retains the background, and improves the overall contrast of an image. In some cases the background of an image hides its structural information. This paper proposes an algorithm that enhances the foreground and background parts separately, stretches the contrast of the image at the inter-object and intra-object levels, and then combines them into a single enhanced image. The results are compared with various classical methods using image quality measures.
3D Face Recognition Method Using 2DPCA and Euclidean Distance Classification (IDES Editor)
In this paper, we present a 3D face recognition method that is robust to changes in facial expressions. Instead of locating many feature points, we only need to locate the nose tip as a reference point. After finding this reference point, the images are converted to a standard size. Two-dimensional principal component analysis (2DPCA) is employed to obtain feature matrices, and finally the Euclidean distance method is used to classify and compare the features. Experimental results on the CASIA 3D face database, which includes 123 individuals in total, demonstrate that the proposed method achieves up to 98% recognition accuracy under pose variation.
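The 2DPCA projection and Euclidean nearest-neighbour matching can be sketched in numpy (random toy data; `two_dpca` is an illustrative name, not the authors' code):

```python
import numpy as np

def two_dpca(images, k=2):
    """2DPCA: project images onto top-k eigenvectors of the image
    covariance matrix G = E[(A - mean)^T (A - mean)]."""
    mean = images.mean(axis=0)
    G = sum((a - mean).T @ (a - mean) for a in images) / len(images)
    vals, vecs = np.linalg.eigh(G)     # eigh returns ascending eigenvalues
    W = vecs[:, ::-1][:, :k]           # top-k eigenvectors as projection axes
    return [a @ W for a in images], W

def nearest(query_feat, train_feats):
    """Classify by smallest Frobenius distance between feature matrices."""
    d = [np.linalg.norm(query_feat - f) for f in train_feats]
    return int(np.argmin(d))

rng = np.random.default_rng(1)
imgs = rng.random((10, 8, 6))          # ten toy 8x6 "range images"
feats, W = two_dpca(imgs, k=2)
assert feats[0].shape == (8, 2)        # 2DPCA features are matrices
assert nearest(feats[3], feats) == 3   # an enrolled image matches itself
```

Unlike classical PCA, 2DPCA never flattens the image into a vector, so the covariance matrix stays small (here 6×6) and the features keep their 2D structure.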
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation (IJECEIAES)
Some image regions carry unbalanced information, such as blurred contours, shade, and uneven brightness; these are called ambiguous regions. Ambiguous regions cause problems during region merging in interactive image segmentation because they carry double information, as both object and background. We propose a new region merging strategy using fuzzy similarity measurement for image segmentation. The proposed method has four steps: the first is initial segmentation using the mean-shift algorithm; the second is manually placing markers to indicate the object and background regions; the third is determining the fuzzy (ambiguous) regions in the image; and the last is fuzzy region merging using fuzzy similarity measurement. The experimental results demonstrate that the proposed method segments natural images and dental panoramic images successfully, with average misclassification error (ME) values of 1.96% and 5.47%, respectively.
HVDLP: HORIZONTAL VERTICAL DIAGONAL LOCAL PATTERN BASED FACE RECOGNITION (sipij)
A face image is an efficient biometric trait for recognizing human beings without expecting any cooperation from the person. In this paper, we propose HVDLP, Horizontal Vertical Diagonal Local Pattern based face recognition using the Discrete Wavelet Transform (DWT) and Local Binary Pattern (LBP). In pre-processing, face images of different sizes are converted to a uniform size of 108×90, and color images are converted to gray scale. The DWT is applied to the pre-processed images, yielding an LL band of size 54×45. The novel concept of HVDLP is introduced to enhance performance: HVDLP is applied to 9×9 sub-matrices of the LL band to obtain the HVDLP coefficients. The LBP is then applied to the HVDLP of the LL band. The final features are generated by applying guided filters to the HVDLP and LBP matrices. The Euclidean Distance (ED) between the final features of the face database and test images is used to compute the performance parameters.
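The dimension-halving of the DWT LL band can be illustrated with a one-level Haar transform in numpy (a sketch under the assumption of a Haar wavelet; the paper does not state which wavelet it uses):

```python
import numpy as np

def haar_ll(img):
    """LL band of a one-level 2D Haar DWT: scaled average over 2x2 blocks."""
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]  # trim to even dimensions
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

img = np.ones((108, 90))
ll = haar_ll(img)
assert ll.shape == (54, 45)    # LL band halves each dimension
assert np.allclose(ll, 2.0)    # orthonormal Haar maps a constant 1 to 2
```

Each DWT level halves both dimensions, which is exactly how a 108×90 face image yields the 54×45 LL band that the later HVDLP and LBP stages operate on.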
Invariant Recognition of Rectangular Biscuits with Fuzzy Moment Descriptors, ... (CSCJournals)
In this paper a new approach for invariant recognition of broken rectangular biscuits is proposed using fuzzy membership-distance products, called fuzzy moment descriptors. Existing methods for recognizing flawed rectangular biscuits are mostly based on the Hough transform; however, these methods are prone to error due to noise and/or variation in illumination. Fuzzy moment descriptors are less sensitive to noise, making the approach invariant to such stray external disturbances. Further, the normalization and sorting of the moment vectors make the recognition process size and rotation invariant. In earlier studies, fuzzy moment descriptors have been successfully applied to the image matching problem. In this paper the algorithm is applied to the recognition of flawed and non-flawed rectangular biscuits. In general, the proposed algorithm has potential applications in industrial quality control.
Caricatures are facial drawings by artists that exaggerate certain facial parts. The exaggerations are often beyond realism, and yet the caricatures are still recognizable by humans. With the advent of deep learning, recognition performance by computers on real-world faces has become comparable to human performance, even under unconstrained situations. However, there is still a gap between computer and human caricature recognition performance, mainly due to the lack of publicly available large-scale caricature datasets. Our project deals with two tasks: identifying a public figure's name and gender from a caricature, and generating a caricature drawing from a frontal face photograph. For the former task, we use the IIIT-CFW dataset and report the accuracy obtained by our model. For the latter, the system learns how to exaggerate features based on training examples from a particular caricaturist.
A New Method Based on MDA to Enhance the Face Recognition Performance (CSCJournals)
A novel tensor-based method is presented to solve the supervised dimensionality reduction problem. In this paper, multilinear principal component analysis (MPCA) is used to reduce the tensor object dimension, and then multilinear discriminant analysis (MDA) is applied to find the best subspaces. Because the number of possible subspace dimensions for any kind of tensor object is extremely high, testing all of them to find the best one is not feasible, so this paper also presents a method to solve that problem. The main criterion of the algorithm differs from Sequential Mode Truncation (SMT); full projection is used to initialize the iterative solution and find the best dimension for MDA. This saves the extra time that would otherwise be spent finding the best dimension, so the execution time decreases considerably. It should be noted that both algorithms work with tensor objects of the same order, so the structure of the objects is never broken, and the performance of the method improves. The advantage of these algorithms is that they avoid the curse of dimensionality and perform better with small sample sizes. Finally, experiments on the ORL and CMU-PIE databases are provided.
Extraction of texture features by using Gabor filter in wheat crop disease de... (eSAT Journals)
Abstract
In a country like India, many people depend upon agriculture. Many farmers do not know about the new diseases affecting their farms, and as a disease changes, the disease-control policy also changes. Many farmers observe crop diseases very sharply, but problems occur whenever a new disease strikes the crops. The climate also changes frequently, and for such reasons farmers are unable to recognise various diseases. If a farmer cannot identify a disease quickly, it affects the life of the crop and, indirectly, the total productivity of the farm. The world is facing many problems due to rapid population growth, so our goal is to increase agricultural productivity using image processing technology, which can help farmers to a great extent [7].
In this research work, we detect crop disease using an Artificial Neural Network (ANN), which works very effectively. First, a digital image taken by a digital camera is passed through a Gaussian filter and then an adaptive median filter to remove the noise present in the image. The Gaussian filter removes Gaussian noise, the adaptive median filter removes impulsive noise, and the pipeline also reduces distortions present in the images. The image is then segmented: the CIELAB color space method is chosen to extract color components properly, and a Gabor filter is used for segmentation. After this, we distinguish crop diseases on the basis of texture features extracted by the Gabor filter [6].
Key Words: Artificial Neural Networks, Image Preprocessing, Image Acquisition, Feature Extraction, Classification
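The Gabor kernel used for texture extraction can be built directly in numpy (the parameter values below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """Real Gabor kernel: a Gaussian envelope modulating a cosine carrier.

    theta = orientation, lam = wavelength, psi = phase offset.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

k = gabor_kernel()
assert k.shape == (9, 9)
assert np.isclose(k[4, 4], 1.0)       # centre: envelope = 1, cos(0) = 1
assert np.allclose(k, k[:, ::-1])     # theta=0, psi=0 gives a symmetric kernel
```

A bank of such kernels at several orientations and wavelengths, convolved with the leaf image, produces the texture responses that the ANN classifier consumes.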
CONTRAST ENHANCEMENT TECHNIQUES USING HISTOGRAM EQUALIZATION METHODS ON COLOR... (IJCSEA Journal)
Histogram equalization (HE) is a simple and widely used image contrast enhancement technique. Its basic disadvantage is that it changes the brightness of the image. To overcome this drawback, various HE methods have been proposed; these preserve the brightness of the output image, but the result does not have a natural look. To overcome this problem, the present paper uses multi-HE methods, which decompose the image into several sub-images and apply the classical HE method to each sub-image. The algorithm is applied to various images and analysed using both objective and subjective assessment.
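Classical HE, the building block that the multi-HE methods apply per sub-image, can be sketched via the cumulative distribution function (the tiny 3×3 test image is illustrative):

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Classical histogram equalization via the CDF of the grey levels."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size          # cumulative distribution in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]                          # apply the lookup table

img = np.array([[52, 52, 60],
                [60, 70, 70],
                [70, 70, 90]], dtype=np.uint8)
out = hist_equalize(img)
assert out.max() == 255   # the brightest level is stretched to full range
assert out.min() == 57    # round(2/9 * 255): the darkest level's CDF value
```

Because the mapping follows the global CDF, the output mean brightness can shift noticeably, which is exactly the drawback the brightness-preserving and multi-HE variants address.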
Face skin color based recognition using local spectral and gray scale features (eSAT Journals)
Abstract The human face conveys much information about a person's identity. Human face recognition is one of the most challenging problems, and it can be used in many applications at security-sensitive places such as airports, defense, and banking sectors. This work uses color features obtained from color segmentation because, in real-time scenarios, color provides more information than a gray scale image; however, it has a drawback. To overcome this drawback, gray scale features are extracted from the co-occurrence matrix of the image, and for efficient recognition of human faces under different illumination conditions, spectral features are extracted from the face texture. These three feature vectors are concatenated into a single feature vector, and the Lenc-Kral matching technique is applied to measure the similarity between the database and query images; if the similarity is high, the face is recognized. Keywords: Face recognition, illumination condition, local texture features, color segmentation.
Performance Evaluation of 2D Adaptive Bilateral Filter For Removal of Noise F... (CSCJournals)
In this paper, we present a performance analysis of the adaptive bilateral filter using peak signal-to-noise ratio and mean square error. The filter was evaluated by changing its parameters, the half-width values and standard deviations. In the adaptive bilateral filter, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width; the variance of the range filter can also be adaptive. The filter is applied to improve the sharpness of gray-level and color images by increasing the slope of the edges without producing overshoot or undershoot. The related graphs were plotted and the best filter parameters were obtained.
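A plain (non-adaptive) bilateral filter can be sketched to show the spatial/range weighting that the adaptive variant builds on. The parameters and the step-edge test image are illustrative assumptions:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Basic bilateral filter: Gaussian weights in both space and intensity."""
    h, w = img.shape
    out = np.zeros((h, w))
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gs = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))  # fixed spatial weights
    p = np.pad(img, radius, mode='edge').astype(float)
    for r in range(h):
        for c in range(w):
            win = p[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            # range weights: nearby intensities count, dissimilar ones do not
            gr = np.exp(-(win - img[r, c])**2 / (2 * sigma_r**2))
            wgt = gs * gr
            out[r, c] = (wgt * win).sum() / wgt.sum()
    return out

step = np.tile(np.repeat([0.0, 100.0], 5), (10, 1))  # sharp vertical edge
sm = bilateral(step)
# smoothing happens within each flat region, but the edge is preserved
assert abs(sm[5, 0] - 0.0) < 1.0 and abs(sm[5, 9] - 100.0) < 1.0
```

The adaptive version studied in the paper additionally shifts and reshapes the range kernel per pixel, which is what lets it sharpen edge slopes rather than merely preserve them.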
TWO-LAYER SECURE PREVENTION MECHANISM FOR REDUCING E-COMMERCE SECURITY RISKS (ijcsit)
E-commerce is an important information system in the network and digital age. However, network intrusion, malicious users, virus attacks, and system security vulnerabilities continue to threaten the operation of e-commerce, putting its security to a serious test. How to improve e-commerce security has become a topic worthy of further exploration. Combining routine security testing with security event detection procedures, this paper proposes the Two-Layer Secure Prevention Mechanism (TLSPM). Applying TLSPM, the routine security test procedure can identify security vulnerabilities and defects and develop repair operations, while the security event detection procedure can detect security events in a timely manner and assist follow-up repair. TLSPM can enhance e-commerce security and effectively reduce the security risk to critical e-commerce data and assets.
Evaluation of employee performance is an important element in enhancing the quality of work and improving employees' motivation to perform well. It also provides a basis for upgrading and enhancing an organization: periodic employee performance evaluation assists management in recognizing the organization's strengths and weaknesses. This paper presents the design and implementation of a performance appraisal system using fuzzy logic. In addition to the normal performance evaluation modules, the system contains step-by-step inference engine processes. These processes demonstrate several calculation details of relation composition and aggregation methods such as the min operator, algebraic product, sup-min, and sup-product. The system provides a foundation for an add-on analysis module to analyze and report the final result using various similarity measures. An MS Access database was used to maintain the data, build the inference logic, and develop all the setting user interfaces.
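The sup-min composition mentioned above can be sketched for two small fuzzy relations (the membership values are toy examples):

```python
import numpy as np

def sup_min(R, S):
    """Sup-min composition of fuzzy relations R (m x n) and S (n x p):
    T[i, j] = max over k of min(R[i, k], S[k, j])."""
    return np.array([[np.max(np.minimum(R[i, :], S[:, j]))
                      for j in range(S.shape[1])]
                     for i in range(R.shape[0])])

R = np.array([[0.7, 0.5],
              [0.8, 0.4]])
S = np.array([[0.9, 0.6],
              [0.1, 0.7]])
T = sup_min(R, S)
assert np.allclose(T, [[0.7, 0.6],
                       [0.8, 0.6]])
```

The sup-product variant is identical in shape but replaces `np.minimum` with an elementwise product, which is why an inference engine can expose both through the same composition step.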
Performance and power comparisons between Nvidia and ATI GPUs (ijcsit)
In recent years, modern graphics processing units have been widely adopted in high performance computing to solve large-scale computation problems. The leading GPU manufacturers, Nvidia and ATI, have introduced series of products to the market. While sharing many similar design concepts, GPUs from these two manufacturers differ in several aspects of the processor cores and the memory subsystem. In this paper, we choose two recently released products, one from Nvidia and one from ATI, and investigate the architectural differences between them. Our results indicate that the two products have diverse advantages that are reflected in their performance on different sets of applications. In addition, we also compare the energy efficiencies of the two platforms, since power/energy consumption is a major concern in high performance computing systems.
Validation Study of Dimensionality Reduction Impact on Breast Cancer Classifi...ijcsit
A fundamental problem in machine learning is identifying the most representative subset of features from
which we can construct a predictive model for a classification task. This paper aims to present a validation
study of dimensionality reduction effect on the classification accuracy of mammographic images. The
studied dimensionality reduction methods were: locality-preserving projection (LPP), locally linear
embedding (LLE), Isometric Mapping (ISOMAP) and spectral regression (SR). We have achieved high
rates of classifications. In some combinations the classification rate was 100%. But in most of the cases the
classification rate is about 95%. It was also found that the classification rate increases with the size of the
reduced space and the optimal value of space dimension is 60. We proceeded to validate the obtained
results by measuring some validation indices such as: Xie-Beni index, Dun index and Alternative Dun
index. The measurement of these indices confirms that the optimal value of reduced space dimension is
d=60.
The impact of knowledge based trust (kbt) on the adoption and acceptability o...ijcsit
The introduction of cashless policy in Nigeria had gained a number of reactions over the adoption of
cashless economy (or cashless banking). The implication of this is that not many people have the
understanding of the benefits the cashless system would accrue to entire Nigerian populace. Although,
many have argued that the technological infrastructures available for the implementation of the cashless
economy is still a matter of concern. This study attempts to examine the impact of integrating Technology
Acceptance Model (TAM) with Knowledge-based Trust (KBT) on the adoption and acceptance of cashless
economy among Nigerian populace. We designed a paper-based questionnaire to harvest people’s view
about their intention on the adoption of cashless economy in Nigeria as well as their readiness to accept
the policy. Consequent upon the impact of trust on the acceptability of the cashless economy, we formulated
some hypotheses which was analysed with the use of T-statistics. The results of the hypothesis testing
indicate that the integration of KBT with TAM has a significant relationship on intention towards the
adoption and acceptance of cashless economy in Nigeria.
A Novel Feature Extraction Scheme for Medical X-Ray ImagesIJERA Editor
X-ray images are gray scale images with almost the same textural characteristic. Conventional texture or color
features cannot be used for appropriate categorization in medical x-ray image archives. This paper presents a
novel combination of methods like GLCM, LBP and HOG for extracting distinctive invariant features from Xray
images belonging to IRMA (Image Retrieval in Medical applications) database that can be used to perform
reliable matching between different views of an object or scene. GLCM represents the distributions of the
intensities and the information about relative positions of neighboring pixels of an image. The LBP features are
invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination A
HOG feature vector represents local shape of an object, having edge information at plural cells. These features
have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent
experimental results obtained in true problems of rotation invariance, particular rotation angle, demonstrate that
good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary
patterns.
ANALYTICAL FORMULATIONS FOR THE LEVEL BASED WEIGHTED AVERAGE VALUE OF DISCRET...ijsc
In fuzzy decision-making processes based on linguistic information, operations on discrete fuzzy numbers
are commonly performed. Aggregation and defuzzification operations are some of these often used
operations. Many aggregation and defuzzification operators produce results independent to the decisionmaker’s
strategy. On the other hand, the Weighted Average Based on Levels (WABL) approach can take
into account the level weights and the decision maker's "optimism" strategy. This gives flexibility to the
WABL operator and, through machine learning, can be trained in the direction of the decision maker's
strategy, producing more satisfactory results for the decision maker. However, in order to determine the
WABL value, it is necessary to calculate some integrals. In this study, the concept of WABL for discrete
trapezoidal fuzzy numbers is investigated, and analytical formulas have been proven to facilitate the
calculation of WABL value for these fuzzy numbers. Trapezoidal and their special form, triangular fuzzy
numbers, are the most commonly used fuzzy number types in fuzzy modeling, so in this study, such numbers
have been studied. Computational examples explaining the theoretical results have been performed.
Face detection using the 3×3 block rank patterns of gradient magnitude images (sipij)
Face detection locates faces prior to various face-related applications: the objective is to determine whether or not there are any faces in an image and, if so, to locate each face. Face detection in real images is challenging due to the large variability of illumination and face appearances. This paper proposes a face detection algorithm using the 3×3 block rank patterns of gradient magnitude images and a geometrical face model. First, an illumination-corrected image of the face region is obtained using a brightness plane produced from the locally minimum brightness of each block. Next, the illumination-corrected image is histogram equalized, the face region is divided into nine (3×3) blocks, and two directional (horizontal and vertical) gradient magnitude images are computed, from which the 3×3 block rank patterns are obtained. For face detection, three types of 3×3 block rank patterns are determined a priori as templates, using the FERET and GT databases, based on the distribution of the sum of the gradient magnitudes of each block in the face candidate region, which is also composed of nine (3×3) blocks. The 3×3 block rank patterns roughly classify whether the detected face candidate region contains a face or not. Finally, facial features are detected and used to validate the face model: the face candidate is validated as a face if it matches the geometrical face model. The proposed algorithm is tested on the Caltech database images and real images, and experimental results with a number of test images show its effectiveness.
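The block-rank computation at the heart of the algorithm can be sketched as follows; this is a minimal NumPy illustration, assuming a grey-scale candidate region whose dimensions divide evenly into 3×3 blocks (the paper's a-priori template patterns and geometrical face model are not reproduced here):

```python
import numpy as np

def block_rank_pattern(gray):
    """Rank the nine 3x3 blocks of a face-candidate region by their total
    gradient magnitude (a sketch of the block-rank idea; the template
    patterns matched against this are the paper's, not shown here)."""
    gy, gx = np.gradient(np.asarray(gray, dtype=float))
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    bh, bw = h // 3, w // 3
    sums = np.array([[mag[r*bh:(r+1)*bh, c*bw:(c+1)*bw].sum()
                      for c in range(3)] for r in range(3)])
    # rank 1 = block with the largest gradient energy
    order = np.argsort(-sums.ravel())
    ranks = np.empty(9, dtype=int)
    ranks[order] = np.arange(1, 10)
    return ranks.reshape(3, 3)
```

For a region with strong edges concentrated in the centre block, the centre receives rank 1, which is the kind of pattern the templates encode.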
Image enhancement plays a vital role in improving the quality of an image. An enhancement technique basically enhances the foreground information, retains the background, and improves the overall contrast of the image. In some cases the background of an image hides its structural information. This paper proposes an algorithm that enhances the foreground and background parts separately, stretches the contrast of the image at the inter-object and intra-object levels, and then combines them into a single enhanced image. The results are compared with various classical methods using image quality measures.
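A minimal sketch of the foreground/background idea, assuming a simple mean-intensity split and linear stretching of each part into a disjoint output range (the paper's actual split and stretch rules are not given and may differ):

```python
import numpy as np

def split_contrast_stretch(img):
    """Split pixels at the mean intensity, then linearly stretch the
    background to [0, 127] and the foreground to [128, 255].
    (Mean thresholding is an assumption, not the paper's method.)"""
    img = np.asarray(img, dtype=float)
    t = img.mean()
    out = np.empty_like(img)
    for mask, lo, hi in ((img <= t, 0.0, 127.0), (img > t, 128.0, 255.0)):
        if mask.any():
            v = img[mask]
            span = v.max() - v.min()
            out[mask] = lo if span == 0 else lo + (v - v.min()) * (hi - lo) / span
    return out.astype(np.uint8)
```

Stretching the two parts into separate ranges raises contrast both within each part (intra-object) and between them (inter-object).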
3D Face Recognition Method Using 2DPCA-Euclidean Distance Classification (IDES Editor)
In this paper, we present a 3D face recognition method that is robust to changes in facial expression. Instead of locating many feature points, we only need to locate the nose tip as a reference point. After finding this reference point, the images are converted to a standard size. Two-dimensional principal component analysis (2DPCA) is employed to obtain feature matrices, and finally the Euclidean distance is used to classify and compare the features. Experimental results on the CASIA 3D face database, which includes 123 individuals in total, demonstrate that the proposed method achieves up to 98% recognition accuracy under pose variation.
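The 2DPCA-plus-Euclidean pipeline described above can be sketched as follows; this is a simplified NumPy version (nose-tip alignment and size normalization are omitted, and the toy gallery below is hypothetical):

```python
import numpy as np

def two_dpca(images, d):
    """2DPCA: images is (N, h, w); returns the (w, d) projection matrix
    built from the top-d eigenvectors of the image covariance matrix
    G = mean_i (A_i - Abar)^T (A_i - Abar)."""
    A = np.asarray(images, dtype=float)
    mean = A.mean(axis=0)
    G = sum((img - mean).T @ (img - mean) for img in A) / len(A)
    vals, vecs = np.linalg.eigh(G)      # eigenvalues in ascending order
    return vecs[:, ::-1][:, :d]         # top-d columns

def nearest_face(test, gallery, X):
    """Classify by Euclidean distance between projected feature matrices."""
    f = np.asarray(test, dtype=float) @ X
    dists = [np.linalg.norm(f - np.asarray(g, dtype=float) @ X) for g in gallery]
    return int(np.argmin(dists))
```

Unlike classical PCA, 2DPCA works directly on the image matrices, so no vectorization of the images is needed before projection.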
Fuzzy Region Merging Using Fuzzy Similarity Measurement on Image Segmentation (IJECEIAES)
Some image regions carry unbalanced information, such as blurred contours, shade, and uneven brightness; such regions are called ambiguous regions. Ambiguous regions cause problems during the region merging step of interactive image segmentation because they carry double information, both as object and as background. We propose a new region merging strategy using fuzzy similarity measurement for image segmentation. The proposed method has four steps: the first step is initial segmentation using the mean-shift algorithm; the second step is manually placing markers to indicate the object and background regions; the third step is determining the fuzzy (ambiguous) regions in the image; and the last step is fuzzy region merging using fuzzy similarity measurement. The experimental results demonstrate that the proposed method successfully segments natural images and dental panoramic images, with average misclassification error (ME) values of 1.96% and 5.47%, respectively.
HVDLP: HORIZONTAL VERTICAL DIAGONAL LOCAL PATTERN BASED FACE RECOGNITION (sipij)
The face image is an efficient biometric trait for recognizing human beings without expecting any cooperation from the person. In this paper, we propose HVDLP, a Horizontal Vertical Diagonal Local Pattern based face recognition method using the Discrete Wavelet Transform (DWT) and the Local Binary Pattern (LBP). In pre-processing, face images of different sizes are converted to a uniform size of 108×90, and colour images are converted to grey scale. The DWT is applied to the pre-processed images, yielding an LL band of size 54×45. The novel concept of HVDLP is introduced in the proposed method to enhance performance: HVDLP is applied on 9×9 sub-matrices of the LL band to obtain HVDLP coefficients, and the Local Binary Pattern (LBP) is then applied to the HVDLP of the LL band. The final features are generated by applying guided filters to the HVDLP and LBP matrices. The Euclidean distance (ED) between the final features of database and test images is used to compute the performance parameters.
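As one ingredient of such pipelines, a plain 3×3 LBP operator can be sketched as follows (the proposed HVDLP, the DWT step and the guided filtering are not reproduced here):

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: each interior pixel becomes an 8-bit code, one bit
    per neighbour that is >= the centre pixel."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]
    # neighbour offsets, clockwise from the top-left corner of each 3x3 window
    offs = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offs):
        nb = g[dr:dr + c.shape[0], dc:dc + c.shape[1]]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```

The resulting code image is what histogram-based LBP descriptors are built from.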
Invariant Recognition of Rectangular Biscuits with Fuzzy Moment Descriptors, ... (CSCJournals)
In this paper a new approach to invariant recognition of broken rectangular biscuits is proposed using fuzzy membership-distance products, called fuzzy moment descriptors. Existing methods for recognizing flawed rectangular biscuits are mostly based on the Hough transform; however, these methods are prone to error due to noise and/or variation in illumination. Fuzzy moment descriptors are less sensitive to noise, making the approach invariant to such stray external disturbances. Further, normalization and sorting of the moment vectors make the recognition process size and rotation invariant. In earlier studies fuzzy moment descriptors were successfully applied to the image matching problem; in this paper the algorithm is applied to the recognition of flawed and non-flawed rectangular biscuits. In general the proposed algorithm has potential applications in industrial quality control.
Caricatures are facial drawings by artists that exaggerate certain facial parts. The exaggerations are often beyond realism, yet the caricatures are still recognizable by humans. With the advent of deep learning, computer recognition performance on real-world faces has become comparable to human performance even in unconstrained settings. However, there is still a gap between computer and human performance in caricature recognition, mainly due to the lack of large-scale publicly available caricature datasets. Our project deals with two tasks: identifying a public figure's name and gender from a caricature, and generating a caricature drawing from a frontal face photograph. For the former task, we use the IIIT-CFW dataset and report the accuracy obtained by our model; for the latter, the system learns how to exaggerate features based on training examples from a particular caricaturist.
A New Method Based on MDA to Enhance the Face Recognition Performance (CSCJournals)
A novel tensor-based method is presented to solve the supervised dimensionality reduction problem. In this paper, multilinear principal component analysis (MPCA) is used to reduce the dimension of the tensor objects, and then multilinear discriminant analysis (MDA) is applied to find the best subspaces. Because the number of possible subspace dimensions for any kind of tensor object is extremely high, testing all of them to find the best one is not feasible, so this paper also presents a method to solve that problem: unlike Sequential Mode Truncation (SMT), full projection is used to initialize the iterative solution and find the best dimension for MDA. This saves the extra time that would otherwise be spent searching for the best dimension, greatly decreasing execution time. It should be noted that both algorithms work with tensor objects of the same order, so the structure of the objects is never broken, which improves the performance of the method. The advantages of these algorithms are avoiding the curse of dimensionality and performing better in cases with small sample sizes. Finally, experiments on the ORL and CMU-PIE databases are provided.
Extraction of texture features by using gabor filter in wheat crop disease de... (eSAT Journals)
Abstract
In a country like India, many people depend upon agriculture, yet many farmers do not know about the new diseases affecting their farms. As diseases change, disease control policy must change with them. Many farmers observe crop diseases very closely, but problems occur whenever a new disease strikes their crops. The climate also changes rapidly, and for such reasons farmers are often unable to recognize the various diseases. If a farmer cannot identify a disease quickly, it affects the life of the crop and, indirectly, the total productivity of the farm. Since the world faces serious problems due to rapid population growth, our goal is to increase agricultural productivity using image processing technology, which can help farmers to a great extent [7].
In this research work, we detect crop disease using an artificial neural network (ANN), which works very effectively. First, a digital image taken with a digital camera is passed through a Gaussian filter and then through an adaptive median filter to remove the noise present in the image: the Gaussian filter removes Gaussian noise, while the adaptive median filter removes impulsive noise and also reduces distortions present in the image. The image is then passed to the segmentation stage, where we use the CIELAB colour space to extract colour components properly and a Gabor filter for segmentation. Finally, we distinguish crop diseases on the basis of texture features extracted by the Gabor filter [6].
Key words: artificial neural networks, image pre-processing, image acquisition, feature extraction, classification.
CONTRAST ENHANCEMENT TECHNIQUES USING HISTOGRAM EQUALIZATION METHODS ON COLOR... (IJCSEA Journal)
Histogram equalization (HE) is a simple and widely used image contrast enhancement technique. The basic disadvantage of HE is that it changes the brightness of the image. Various HE methods have been proposed to overcome this drawback; these methods preserve the brightness of the output image, but the result does not have a natural look. To overcome this problem, the present paper uses multi-HE methods, which decompose the image into several sub-images and apply the classical HE method to each sub-image. The algorithm is applied to various images and has been analysed using both objective and subjective assessment.
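Classical HE, the building block that the multi-HE methods apply per sub-image, can be sketched as follows (a standard CDF-based remapping for 8-bit grey images, not the paper's decomposition scheme):

```python
import numpy as np

def hist_equalize(img):
    """Classical histogram equalization: remap grey levels through the
    normalized cumulative histogram so the output spans [0, 255]."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

A multi-HE variant would first split the image (e.g. at the mean or median grey level) and run this routine on each sub-image to preserve overall brightness.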
Face skin color based recognition using local spectral and gray scale features (eSAT Journals)
Abstract: The human face conveys much information about a person's identity. Human face recognition is one of the most challenging problems, and it can be used in many applications at security-sensitive places such as airports and in the defence and banking sectors. In this work we use colour features obtained from colour segmentation, because in real-time scenarios colour provides more information than a grey-scale image, but this has a drawback. To overcome it, grey-scale features are extracted from the co-occurrence matrix of the image, and for efficient recognition of human faces under different illumination conditions, spectral features are extracted from the face texture. These three feature vectors are concatenated into a single feature vector, and the Lenc-Kral matching technique is applied to measure the similarity between the database and query images; if the similarity is high, the face is recognized. Keywords: face recognition, illumination condition, local texture features, colour segmentation.
Performance Evaluation of 2D Adaptive Bilateral Filter For Removal of Noise F... (CSCJournals)
In this paper, we present a performance analysis of the adaptive bilateral filter using the peak signal-to-noise ratio and the mean square error, evaluated while varying the filter's half-width parameters and standard deviations. In the adaptive bilateral filter, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width; the variance of the range filter can also be adaptive. The filter is applied to sharpen grey-level and colour images by increasing the slope of the edges without producing overshoot or undershoot. The related graphs were plotted and the best filter parameters obtained.
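A plain (non-adaptive) bilateral filter can be sketched as follows; the adaptive offset and width that the paper evaluates are not reproduced, and the parameter values shown are illustrative only:

```python
import numpy as np

def bilateral(img, half_width=2, sigma_s=2.0, sigma_r=25.0):
    """Plain bilateral filter: each output pixel is a weighted mean of its
    neighbourhood, weighted by both spatial distance and intensity
    difference, so edges are preserved while flat regions are smoothed."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.empty_like(img)
    ys, xs = np.mgrid[-half_width:half_width + 1, -half_width:half_width + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad = np.pad(img, half_width, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*half_width + 1, j:j + 2*half_width + 1]
            wgt = spatial * np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

With a small range sigma, pixels across a strong edge receive negligible weight, which is why a step edge passes through almost unchanged.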
TWO-LAYER SECURE PREVENTION MECHANISM FOR REDUCING E-COMMERCE SECURITY RISKS (ijcsit)
E-commerce is an important information system of the network and digital age. However, network intrusion, malicious users, virus attacks and system security vulnerabilities continue to threaten the operation of e-commerce, putting its security to a serious test. How to improve e-commerce security has therefore become a topic worthy of further exploration. Combining routine security testing with security event detection procedures, this paper proposes the Two-Layer Secure Prevention Mechanism (TLSPM). With TLSPM, the routine security test procedure identifies security vulnerabilities and defects and develops repair operations, while the security event detection procedure detects security events in a timely manner and assists with follow-up repair. TLSPM can enhance e-commerce security and effectively reduce the security risk to critical e-commerce data and assets.
Evaluation of employee performance is an important element in enhancing the quality of work and improving employees' motivation to perform well. It also provides a basis for upgrading and enhancing an organization: periodic evaluation of employees' performance helps management recognize the organization's strengths and weaknesses.
This paper presents the design and implementation of a performance appraisal system using fuzzy logic. In addition to the normal performance-evaluation modules, the system walks step by step through the inference-engine processes, showing the calculation details of relation composition and aggregation methods such as the min operator, the algebraic product, and sup-min and sup-product composition. The system provides a foundation for an add-on analysis module that analyzes and reports the final result using various similarity measures. An MS Access database was used to maintain the data, build the inference logic and develop all settings user interfaces.
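The relation-composition operators mentioned above can be sketched as follows; the small matrices in the example are hypothetical rating/grade relations, not the paper's data:

```python
import numpy as np

def sup_min_composition(R, S):
    """Sup-min (max-min) composition of fuzzy relations R (m x n) and
    S (n x p): T[i, k] = max_j min(R[i, j], S[j, k])."""
    m, p = R.shape[0], S.shape[1]
    T = np.empty((m, p))
    for i in range(m):
        for k in range(p):
            T[i, k] = np.max(np.minimum(R[i, :], S[:, k]))
    return T

def sup_product_composition(R, S):
    """Sup-product composition: T[i, k] = max_j R[i, j] * S[j, k]."""
    return np.max(R[:, :, None] * S[None, :, :], axis=1)

# Hypothetical example: employee-to-criterion ratings R composed with a
# criterion-to-grade relation S.
R = np.array([[0.8, 0.3], [0.5, 0.9]])
S = np.array([[0.6, 0.4], [0.7, 0.2]])
print(sup_min_composition(R, S))
```

Sup-min pairs the min operator with max aggregation; sup-product swaps in the algebraic product, which is exactly the pair of choices the abstract lists.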
Performance and power comparisons between Nvidia and ATI GPUs (ijcsit)
In recent years, modern graphics processing units have been widely adopted in high performance computing to solve large-scale computation problems. The leading GPU manufacturers, Nvidia and ATI, have introduced series of products to the market. While sharing many similar design concepts, GPUs from the two manufacturers differ in several aspects of their processor cores and memory subsystems. In this paper, we choose two recently released products, one from Nvidia and one from ATI, and investigate the architectural differences between them. Our results indicate that the two products have distinct advantages that are reflected in their performance on different sets of applications. We also compare the energy efficiency of the two platforms, since power/energy consumption is a major concern in high performance computing systems.
Validation Study of Dimensionality Reduction Impact on Breast Cancer Classifi... (ijcsit)
A fundamental problem in machine learning is identifying the most representative subset of features from which to construct a predictive model for a classification task. This paper presents a validation study of the effect of dimensionality reduction on the classification accuracy of mammographic images. The studied dimensionality reduction methods were locality-preserving projection (LPP), locally linear embedding (LLE), isometric mapping (ISOMAP) and spectral regression (SR). High classification rates were achieved: in some combinations the classification rate was 100%, and in most cases it was about 95%. It was also found that the classification rate increases with the size of the reduced space, with an optimal space dimension of 60. We then validated these results by measuring several validation indices, namely the Xie-Beni index, the Dunn index and the alternative Dunn index; these measurements confirm that the optimal reduced-space dimension is d = 60.
The impact of knowledge based trust (KBT) on the adoption and acceptability o... (ijcsit)
The introduction of the cashless policy in Nigeria has drawn a range of reactions to the adoption of a cashless economy (or cashless banking). The implication is that not many people understand the benefits the cashless system would bring to the Nigerian populace, and many have argued that the technological infrastructure available for implementing the cashless economy remains a matter of concern. This study examines the impact of integrating the Technology Acceptance Model (TAM) with Knowledge-Based Trust (KBT) on the adoption and acceptance of a cashless economy among the Nigerian populace. We designed a paper-based questionnaire to gather people's views about their intention to adopt a cashless economy in Nigeria as well as their readiness to accept the policy. Based on the impact of trust on the acceptability of the cashless economy, we formulated hypotheses that were analysed using t-statistics. The results of the hypothesis testing indicate that integrating KBT with TAM has a significant relationship with intention towards the adoption and acceptance of a cashless economy in Nigeria.
AN EXTENSION OF PROTÉGÉ FOR AN AUTOMATIC FUZZY-ONTOLOGY BUILDING U... (ijcsit)
The process of building an ontology is very complex and time-consuming, especially when dealing with huge amounts of data. Unfortunately, current marketed tools are very limited and do not meet all user needs; indeed, these tools build the core of the ontology from initial data that generates a large amount of information. In this paper, we aim to resolve these problems by adding an extension to the well-known ontology editor Protégé, in order to work towards a complete FCA-based framework that resolves the limitations of other tools in building fuzzy ontologies. We give some details on our semi-automatic collaborative tool, called the FOD Tab Plug-in, which takes into consideration another degree of granularity in the generation process. It follows a bottom-up strategy based on conceptual clustering, fuzzy logic and Formal Concept Analysis (FCA), and it defines the ontology from classes resulting from a preliminary classification of the data rather than from the initial large amount of data.
PREDICTION FOR SHORT-TERM TRAFFIC FLOW BASED ON OPTIMIZED W... (ijcsit)
Short-term traffic forecasting has been a very important consideration in many areas of transportation research for more than three decades, and data-driven short-term traffic forecasting is one of the most dynamic and developing research arenas, with an enormous published literature. To improve the forecasting accuracy of wavelet neural networks, an adaptive particle swarm optimization algorithm based on cloud theory was proposed, both to improve search performance and to speed up individual optimizing ability. The inertia weight changes adaptively depending on an X-conditional cloud generator, which has stable tendency and randomness properties. This adaptive particle swarm optimization algorithm was then used to optimize the weights and thresholds of a wavelet BP neural network, instead of the traditional gradient descent method, and the wavelet BP neural network was trained to search for the optimal solution. Based on the above theory, an improved wavelet neural network model based on the modified particle swarm optimization algorithm was proposed, and the availability of the modified prediction method was verified by predicting a real traffic-flow time series. Computer simulations show that the nonlinear fitting and accuracy of the modified prediction method are better than those of other prediction methods.
The evolution of the Web and its applications has in recent years undergone a mutation towards technologies that include the social dimension as a first-class entity, in which users, their interactions and the emerging social networks are at the center of this evolution. The web is growing and evolving in the intelligibility of its resources and data, the connectivity of its parts, and its autonomy as a whole system. The social dimension of the current and future web is at the root of its dynamics and evolution. It is thus fundamental to propose new underlying infrastructures for the web and web applications that make this social dimension more explicit and facilitate its exploitation. The work presented in this paper contributes to this initiative by proposing a multi-agent model based on coupling the system to its environment through its social dimension. Applied to a collaborative tagging system, exploiting the social dimension of tagging allows intelligent and better sharing of resources and enhances social learning between users.
Template matching is a basic method in image analysis for extracting useful information from images. In this paper, we suggest a new method for pattern matching. Our method transforms the two-dimensional template image into a one-dimensional vector; likewise, all sub-windows (of the same size as the template) in the reference image are transformed into one-dimensional vectors. Three similarity measures, SAD, SSD, and Euclidean distance, are used to compute the likeness between the template and all sub-windows in the reference image to find the best match. The experimental results show the superior performance of the proposed method over conventional methods on various templates of different sizes.
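The sliding-window comparison described above can be sketched as follows (a straightforward NumPy version using the three measures the abstract names):

```python
import numpy as np

def match_template(ref, tmpl, measure="sad"):
    """Slide the flattened template over every same-size sub-window of the
    reference image and return the (row, col) of the best match."""
    ref, tmpl = np.asarray(ref, dtype=float), np.asarray(tmpl, dtype=float)
    th, tw = tmpl.shape
    t = tmpl.ravel()
    best, best_pos = np.inf, (0, 0)
    for i in range(ref.shape[0] - th + 1):
        for j in range(ref.shape[1] - tw + 1):
            v = ref[i:i + th, j:j + tw].ravel()      # flattened sub-window
            d = (np.abs(v - t).sum() if measure == "sad"
                 else ((v - t) ** 2).sum() if measure == "ssd"
                 else np.linalg.norm(v - t))          # "euclidean"
            if d < best:
                best, best_pos = d, (i, j)
    return best_pos
```

Note that SSD and Euclidean distance always select the same position (one is the square of the other), so they differ only in the reported score.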
Information security approach in open distributed multi agent virtual learnin... (ijcsit)
This paper presents the main information security problems and threats in open multi-agent distributed e-learning information systems and proposes various approaches to counter information security attacks in virtual learning environments using a service-oriented architecture based on a multi-agent information system architecture. The solution is implemented as two types of multi-agent learning information systems: the first with centralized mobile-agent information security management and the second with decentralized mobile-agent security management. A migration-behaviour simulation for their active software components (software agents) is also proposed.
A Prediction Model for Taiwan Tourism Industry Stock Index (ijcsit)
Investors and scholars pay continuous attention to the stock market: each day, many investors attempt to use different methods to predict stock price trends. However, because stock prices are affected by the economy, politics, domestic and foreign situations, emergencies, human factors, and other unknown factors, it is difficult to establish an accurate prediction model. This study used a back-propagation neural network (BPN) as the research approach, with 29 input variables such as international exchange rates, indices of international stock markets, Taiwan stock market analysis indicators, and overall economic indicators, to predict Taiwan's monthly tourism industry stock index. The empirical findings show that the BPN prediction model has good predictive accuracy, with an absolute relative error of 0.090058 and a correlation coefficient of 0.944263. The model has low error and high correlation and can serve as a reference for investors and relevant industries.
Fulfillment request management (the approach) (ijcsit)
In this paper we introduce the term FRM (Fulfillment Request Management). With FRM, in a BSS/OSS environment we can use a unified approach to implement an SOA that integrates BSS with OSS and handles (1) orders, (2) events, and (3) processes, so that systems like an ESB, order management, and business process management can be implemented under one unified architecture and one unified implementation. We assume that all of the above are 'requests', and depending on the system we want to implement, a request can be an event, an order, a process, and so on. So instead of having N systems, we have one system that covers all of the above (ESB, order management, BPM, etc.). FRM offers certain advantages: (1) adaptation, (2) interoperability, (3) re-usability, (4) fast implementation, and (5) easy reporting. In this paper we present a set of the main principles for building an FRM system.
COMPRESSION BASED FACE RECOGNITION USING DWT AND SVM (sipij)
Biometrics are used to identify a person effectively and are employed in almost all applications of day-to-day life. In this paper, we propose compression-based face recognition using the Discrete Wavelet Transform (DWT) and a Support Vector Machine (SVM). The novel concept of converting many images of a single person into one image using an averaging technique is introduced to reduce execution time and memory. The DWT is applied to the averaged face image to obtain the approximation (LL) and detail bands. The LL band coefficients are given as input to the SVM to obtain support vectors (SVs). The LL coefficients of the DWT and the SVs are fused by arithmetic addition to extract the final features. The Euclidean distance (ED) between test image features and database image features is used to compute the performance parameters. It is observed that the proposed algorithm performs better than existing algorithms.
Local Region Pseudo-Zernike Moment- Based Feature Extraction for Facial Recog...aciijournal
In the domain of image processing, face recognition is one of the most well-known research fields. When
humans have very similar biometric properties, such as identical twins, face recognition is
considered a challenging problem. In this paper, the AdaBoost method is utilized to detect the
facial area of the input image. The facial area is then divided into local regions. Finally, a new,
efficient feature extractor for identical twins based on the geometric moment is applied to the
local regions of the face image. The feature extractor used is the Pseudo-Zernike Moment (PZM), which is
employed inside the local regions of the facial area of identical-twin images. To evaluate the proposed
method, two datasets, Twins Days Festival and Iranian Twin Society, are collected; the datasets
include scaled and rotated facial images of identical twins under different illuminations. The experimental
results demonstrate the ability of the proposed method to recognize a pair of identical twins in
different situations such as rotation, scaling, and changing illumination.
PERFORMANCE EVALUATION OF BLOCK-SIZED ALGORITHMS FOR MAJORITY VOTE IN FACIAL ...ijaia
Facial recognition (FR) is a pattern recognition problem in which images can be considered a matrix of
pixels. There are many challenges that affect the performance of face recognition, including illumination
variation, occlusion, and blurring. In this paper, a few preprocessing techniques are suggested to handle the
illumination variation problem. Other phases of the face recognition problem, such as feature extraction and
classification, are also discussed. Preprocessing techniques like Histogram Equalization (HE), Gamma Intensity
Correction (GIC), and Regional Histogram Equalization (RHE) are tested on the AT&T database. For
feature extraction, methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis
(LDA), Independent Component Analysis (ICA), and Local Binary Pattern (LBP) are applied. A Support
Vector Machine (SVM) is used as the classifier. Both holistic and block-based methods are tested using the
AT&T database. For twelve different combinations of preprocessing, feature extraction, and classification
methods, experiments involving various block sizes are conducted to assess the computational performance
and recognition accuracy on the AT&T dataset. Using the block-based method, 100% accuracy is achieved
with the combination of GIC preprocessing, LDA feature extraction, and SVM classification using 2x2
block sizing, while the holistic method yields a maximum accuracy of 93.5%. The block-sized algorithm
performs better than the holistic approach under poor lighting conditions. The SVM Radial Basis Function
performs extremely well on the AT&T dataset for both holistic and block-based approaches.
Most face recognition algorithms are generally capable of achieving a high level of accuracy when
the image is acquired under well-controlled conditions. The face should be still during the acquisition
process; otherwise, the resulting image will be blurred and hard to recognize. Forcing persons to stand
still during the process is impractical, so it is extremely likely that recognition must be performed on a
blurred image. It is therefore important to understand the relation between image blur and recognition
accuracy. The ORL database was used in this study. All images were in PGM format, 92 × 112 pixels, from
forty different persons, with ten images per person. Those images were randomly divided into training
and testing datasets with a 50-50 ratio. Singular value decomposition was used to extract the features.
The images in the testing datasets were artificially blurred to represent linear motion, and recognition
was performed. The blurred images were also filtered using various methods. The recognition accuracy levels
on the blurred faces and the filtered faces were compared. The numerical study suggests that, at best, the
image improvement processes are capable of improving the recognition accuracy by less than five percent.
MSB based Face Recognition Using Compression and Dual Matching TechniquesCSCJournals
Biometrics are used in almost all communication technology applications for secure recognition. In this paper, we propose MSB-based face recognition using compression and dual matching techniques. Standard available face images are used to test the proposed method. The novel concept of considering only the four Most Significant Bits (MSBs) of each pixel is introduced to reduce the total number of bits of an image by half, for high-speed computation and lower architectural complexity. The Discrete Wavelet Transform (DWT) is applied to the MSB-only image, and only the LL band coefficients are considered as the final features. The features of the database and test images are compared using Euclidean Distance (ED) and an Artificial Neural Network (ANN) to test the performance of the proposed method. It is observed that the performance of the proposed method is better than that of existing methods.
National Flags Recognition Based on Principal Component Analysisijtsrd
Recognizing an unknown flag in a scene is challenging due to the diversity of the data and the complexity of the identification process. Flags are associated with geographical regions, countries, and nations, and identifying the flags of different countries is a challenging and difficult task. The aim of the study is to propose a feature-extraction-based recognition system for Myanmar's national flags. Image features are acquired from the region and state flags, which are identified using principal component analysis (PCA). PCA is a statistical approach used for reducing the number of features in the national flag recognition system. Soe Moe Myint | Moe Moe Myint | Aye Aye Cho "National Flags Recognition Based on Principal Component Analysis" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26775.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/26775/national-flags-recognition-based-on-principal-component-analysis/soe-moe-myint
AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...ieijjournal
Face recognition is one of the most challenging problems in the domain of image processing and machine vision. The face recognition system is critical when individuals have very similar biometric signatures, such as identical twins. In this paper, the facial area in an image is detected using the AdaBoost approach. The facial area is then divided into local regions. Finally, a new, efficient feature extractor for identical twins based on the geometric moment is applied to the local regions of the face image. The geometric moment utilized is the Zernike Moment (ZM), used as a feature extractor inside the local regions of the facial area of identical-twin images. The proposed method is evaluated on two datasets, Twins Days Festival and Iranian Twin Society, which contain scaled and rotated facial images of identical twins under different illuminations. The results prove the ability of the proposed method to recognize a pair of identical twins and show that it is robust to rotation, scaling, and changing illumination.
SYMMETRICAL WEIGHTED SUBSPACE HOLISTIC APPROACH FOR EXPRESSION RECOGNITION
International Journal of Computer Science & Information Technology (IJCSIT) Vol 7, No 4, August 2015
DOI: 10.5121/ijcsit.2015.7408
SYMMETRICAL WEIGHTED SUBSPACE HOLISTIC
APPROACH FOR EXPRESSION RECOGNITION
G. P. Hegde¹, M. Seetha² and Nagaratna Hegde³
¹Department of Computer Science and Engineering, SDMIT, Ujire, VTU, Belgaum
²Department of Computer Science and Engineering, GNITS, Hyderabad
³Department of Computer Science and Engineering, VCE, Hyderabad
ABSTRACT
Human facial expression is a cognitive attribute used to convey opinions to others. This paper mainly
evaluates the performance of appearance-based holistic subspace methods derived from Principal Component
Analysis (PCA). In this work, texture features are extracted from face images using a Gabor filter. It was
observed that the extracted texture feature vector space has high dimensionality and a large amount of
redundant content, so training, testing, and classification times increase and the expression recognition
accuracy is reduced. To overcome this problem, the Symmetrical Weighted 2DPCA (SW2DPCA) subspace
method is introduced. The extracted feature vector space is projected into a subspace using the SW2DPCA
method, which is formed by applying weighted principles to the odd and even symmetrical decomposition
spaces of the training sample sets. Conventional PCA and 2DPCA yield lower recognition rates under large
variations in expression and lighting because of the many redundant variants in the feature space. The
proposed SW2DPCA method addresses this problem by reducing redundant content and discarding unequal
variants. In this work, the well-known JAFFE database is used for experiments and tested with the proposed
SW2DPCA algorithm. The experimental results show that the facial expression recognition accuracy of the
GF+SW2DPCA feature fusion subspace method increased to 95.24%, compared to the 2DPCA method.
KEYWORDS
Subspace, Gabor filter, Expression recognition, Symmetrical weight, Feature extraction, Classifier
1. INTRODUCTION
Facial expression immediately exhibits the emotional state of a human being and is a good means of
communication. Through different expressions people can interact with each other and understand each
other's intentions and opinions [1-3]. While driving a vehicle, a driver may become fatigued; this can be
detected by a facial expression recognition system. In security applications, expression recognition is
needed to help control criminal activity by identifying people's expressions. While playing games, a game
can be controlled according to facial expressions. Mind tempering needs facial expression recognition, and
a person can interact with computers by exhibiting expressions. Different approaches have been analysed by
several authors, and subspace-based approaches still find wide application for dimensionality reduction and
redundant-data reduction. Appearance-based and geometry-based subspace approaches are the most common
[4][5][6]. Subspace methods play a major role in dimensionality reduction, redundant-content removal, and
feature extraction from face images. Sometimes the direct application of subspace methods reduces the
recognition rate under large facial expressions, large variations of light, and face poses. Hence, it is more
robust to first use a feature extraction approach and then remove the redundant content from the feature
vector space using subspace methods. Various work has been carried out on the extraction of face features
and the classification of expressions. Ya Zheng, Xiuxin Chen, Chongchong Yu and Cheng Gao (2013)
proposed the fast local binary pattern method
(FLBP) for dimensionality reduction [7]. They showed that FLBP uses a fast PCA method that reduces the
feature vector space into a subspace, and they worked on the LBP histogram to decrease the complexity of
the algorithm. Yang Q. and Ding X. proposed symmetrical PCA for face recognition [8]. In this work, a
facial expression recognition algorithm based on feature extraction by a Gabor filter (GF) and subspace
projection by the SW2DPCA method is introduced.
2. BRIEF OVERVIEW OF SUBSPACE METHODS
In subspace models, un-correlation increases the recognition rate by reducing redundancy. Subspace
methods provide redundancy removal, feature extraction, and low-dimensional feature representation of face
objects with enhanced discriminatory power. Data reduction, redundant-content removal, and feature
extraction are used to solve, to a certain extent, the problem of poor facial expression recognition by
improving accuracy [9][10]. Reducing the number of variables in a large database is one of the properties
of subspace analysis [9],[10]-[16]. In the context of face recognition, face images are represented as
high-dimensional pixel arrays, and time and space complexity are the two main problems for certain tasks
in image analysis. A literature survey was carried out on various linear PCA-based subspace methods:
Principal Component Analysis (PCA) [9-11], CPCA [16], PC²A [17], SPCA [19], 2DPCA [20],
2D²PCA [21-22], and Enhanced PC²A [21]. A state-of-the-art chart is given in Table 2.
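Among the methods surveyed above, 2DPCA treats each image as a matrix rather than a flattened vector. As a minimal illustration, the sketch below implements one standard 2DPCA formulation (the image-covariance form); the toy image sizes and the number of retained axes k are illustrative assumptions, not values from the paper, and the variant used in the paper may differ in detail.

```python
import numpy as np

# Toy 2DPCA sketch: M training images of size p x q.
rng = np.random.default_rng(3)
M, p, q = 10, 12, 9
images = rng.random((M, p, q))

mean_img = images.mean(axis=0)
# Image covariance matrix G = (1/M) * sum_i (A_i - mean)^T (A_i - mean),
# built from whole 2D images; G is only q x q, far smaller than (p*q) x (p*q).
G = sum((A - mean_img).T @ (A - mean_img) for A in images) / M

eigvals, eigvecs = np.linalg.eigh(G)   # eigenvalues in ascending order
k = 3
X = eigvecs[:, -k:]                    # top-k projection axes (q x k)
Y = images[0] @ X                      # feature matrix for one image (p x k)
print(G.shape, Y.shape)                # (9, 9) (12, 3)
```

The key point is that the covariance matrix stays small (q x q), so the number of usable principal components is bounded by q rather than by the pixel count.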
3. PROPOSED METHOD
The main objective of this work is to convert the extracted feature vector space into a subspace using a
suitable PCA-based method, discarding redundant content to a certain extent from the set of images.
Conventional PCA and 2DPCA do not achieve efficient recognition because of the large number of principal
variants. The 2DPCA method considers a 2D image as a matrix rather than as 1D n-vectors [22]. In this
work the Symmetrical Weighted 2D Principal Component Analysis (SW2DPCA) method is proposed.
Initially, a Gabor filter bank (5 scales x 8 orientations = 40 filters) is formed. It was found that the
extracted feature vector space has a large dimensionality, which introduces redundant variant coefficients,
reduces the recognition rate, and increases the training and classification times. Observing these
drawbacks, the high-dimensional space is reduced by applying the SW2DPCA method. Once the PCA method
is applied to the image sets, the principal eigen-components show high variation and an unequal
distribution of variants, which would cause a low expression recognition rate. This large variation of the
principal components can be reduced and equalized by applying weighted principal component analysis.
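The Gabor stage described above can be sketched as follows. This is a minimal NumPy implementation of a 5-scale x 8-orientation real Gabor bank; the kernel size, the scale progression, and the sigma/lambda ratio are illustrative assumptions, since the paper does not list its filter parameters here.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel sampled on a size x size grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the filter orientation theta
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_r / lambd + psi)
    return envelope * carrier

def gabor_bank(scales=5, orientations=8, size=31):
    """Build a 5 x 8 = 40 filter bank as described in the text."""
    bank = []
    for s in range(scales):
        sigma = 2.0 * (1.5 ** s)   # assumed scale progression
        lambd = sigma / 0.56       # assumed sigma/lambda ratio
        for o in range(orientations):
            theta = o * np.pi / orientations
            bank.append(gabor_kernel(size, sigma, theta, lambd))
    return bank

bank = gabor_bank()
print(len(bank))   # 40 filters
```

Convolving a face image with all 40 kernels and concatenating the responses yields the high-dimensional texture feature space that SW2DPCA is then meant to compress.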
The total dimension of the image is given by N = p x q. The basic difference between PCA and 2DPCA is
that in 2DPCA each column of the input 2D image is a different sample; thus q such p-vectors from each
of the M images are used to build the covariance matrix. The number of principal components in 2DPCA
cannot exceed q, whereas the number k of components in the conventional counterpart is bounded by the
total number M of training samples (typically q ≪ N). Consider the 2D training set Ii = [I1, I2, ..., IM]
of image vectors, of size N x M, where M is the number of samples. Then compute the mean value of the
training set image vectors at each pixel level, called the mean face:

$$\bar{I} = \frac{1}{M}\sum_{i=1}^{M} I_i \qquad (1)$$

Then subtract the mean value from each image $I_i$ to compute the mean-subtracted image:

$$\Psi_i = I_i - \bar{I} \qquad (2)$$
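Equations (1) and (2) amount to stacking the flattened training images as columns of an N x M matrix and centering every column on the mean face. A toy NumPy sketch (the image and sample sizes are illustrative):

```python
import numpy as np

# Toy training set: M = 4 images of p x q = 6 x 5 pixels,
# flattened to N = 30 so the set is an N x M data matrix.
rng = np.random.default_rng(0)
p, q, M = 6, 5, 4
images = rng.random((M, p, q))

I = images.reshape(M, p * q).T               # N x M data matrix
mean_face = I.mean(axis=1, keepdims=True)    # Eq. (1): (1/M) * sum of I_i
A = I - mean_face                            # Eq. (2): Psi_i = I_i - mean face
print(A.shape)                               # (30, 4)
print(np.allclose(A.sum(axis=1), 0))         # centered columns sum to zero: True
```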
3. International Journal of Computer Science & Information Technology (IJCSIT) Vol 7, No 4, August 2015
95
A = [Ψ1, Ψ2, Ψ3, ..., ΨM] is the matrix obtained from equation (2), and its size is N x M. Then the
covariance matrix, which represents the scatter of all feature vectors about the mean vector, is computed as

$$C = \frac{1}{M}\sum_{i=1}^{M} \Psi_i \Psi_i^{T} = \frac{1}{M} A A^{T} \qquad (3)$$
Computing the eigenvectors directly from the N x N covariance matrix C is expensive, so they are obtained through the smaller M x M surrogate matrix

L = \frac{1}{M} A^T A \qquad (4)

whose eigenvector matrix is of size M x M.
In digital signal processing, even/odd decomposition breaks a signal into two component signals, one with even symmetry and one with odd symmetry. A signal of length S has even symmetry when it is a mirror image about its midpoint S/2: sample x[S/2 + 1] must equal x[S/2 - 1], sample x[S/2 + 2] must equal x[S/2 - 2], and so on. Similarly, odd symmetry occurs when the matching sample points have equal magnitudes but opposite signs: x[S/2 + 1] = -x[S/2 - 1], x[S/2 + 2] = -x[S/2 - 2], etc. Let f(x) be a non-zero, real-valued function whose domain is symmetric about the origin; that is, f(x) exists implies f(-x) exists. Then f(x) can be uniquely expressed as the sum of an even function and an odd function. Algebraically, a function is even if f(-x) = f(x); geometrically this means that the graph of y = f(x) is symmetric with respect to the y-axis. A function is odd if f(-x) = -f(x); geometrically the graph of y = f(x) is then symmetric with respect to the origin. For any such function we define the odd part fo(x) = [f(x) - f(-x)]/2 and the even part fe(x) = [f(x) + f(-x)]/2, so that f(x) = fe(x) + fo(x).
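As a quick numerical illustration (this is not code from the paper, and the sample function is an arbitrary choice), the even/odd split of a sampled function can be checked directly:

```python
import numpy as np

# Sample an arbitrary function f on a grid symmetric about the origin.
x = np.linspace(-1.0, 1.0, 101)
f = np.exp(x) + x**3                 # neither even nor odd

f_rev = f[::-1]                      # f(-x) on this symmetric grid
f_even = 0.5 * (f + f_rev)           # fe(x) = [f(x) + f(-x)] / 2
f_odd = 0.5 * (f - f_rev)            # fo(x) = [f(x) - f(-x)] / 2

# The decomposition is exact, and the parts have the claimed symmetries.
assert np.allclose(f_even + f_odd, f)
assert np.allclose(f_even, f_even[::-1])    # symmetric about x = 0
assert np.allclose(f_odd, -f_odd[::-1])     # antisymmetric about x = 0
```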
This odd-even rule can be applied to the decomposition of face images. Let the training set be I_1, I_2, \ldots, I_M and its mirror (horizontally flipped) counterpart be I'_1, I'_2, \ldots, I'_M. The i-th image can then be decomposed as I_i = I_{oi} + I_{ei}, where the odd symmetrical image is I_{oi} = (I_i - I'_i)/2 and the even symmetrical image is I_{ei} = (I_i + I'_i)/2, for i = 1, 2, \ldots, M. The odd symmetrical sample set (I_{o1}, I_{o2}, \ldots, I_{oM}) and the even symmetrical sample set (I_{e1}, I_{e2}, \ldots, I_{eM}) are thus both derived from the original training set by the mirror symmetry transform. The total scatter matrices of the three sample sets, i.e. the original, odd and even sets, are defined as
S_T = \sum_{i=1}^{M} \Psi_i \Psi_i^T \qquad (5)

S_o = \sum_{i=1}^{M} \Psi_{oi} \Psi_{oi}^T \qquad (6)

S_e = \sum_{i=1}^{M} \Psi_{ei} \Psi_{ei}^T \qquad (7)
where S_T = S_o + S_e; hence the eigenvalue decomposition of S_T is equivalent to the separate eigen-decompositions of S_o and S_e, and the image I_i can be reconstructed from the feature vectors of S_o and S_e. Let the non-zero eigenvalues of S_o and S_e be \lambda_{oi} and \lambda_{ej}, with corresponding eigenvectors w_{oi} and w_{ej}, where i = 1, \ldots, rank(S_o) and j = 1, \ldots, rank(S_e). The transformation (weight) matrices for the odd (\Omega_o) and even (\Omega_e) symmetrical sample sets then follow as

\Omega_o = [w_{o1}, w_{o2}, \ldots, w_{o r_o}], \quad \Lambda_o = \mathrm{diag}(\lambda_{o1}, \lambda_{o2}, \ldots, \lambda_{o r_o}) \qquad (8)

\Omega_e = [w_{e1}, w_{e2}, \ldots, w_{e r_e}], \quad \Lambda_e = \mathrm{diag}(\lambda_{e1}, \lambda_{e2}, \ldots, \lambda_{e r_e}) \qquad (9)

where r_o = rank(S_o) and r_e = rank(S_e).
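A small NumPy sketch (illustrative, not the paper's code) shows the mirror decomposition of images and checks the scatter identity S_T = S_o + S_e. Note that the check uses a mirror-closed toy training set (each image together with its horizontal flip), for which the odd/even cross terms cancel exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.random((4, 6, 6))                      # four toy "face images"

# Mirror-closed training set: each image plus its horizontal flip.
images = np.concatenate([base, base[:, :, ::-1]], axis=0)

# Odd/even symmetrical components: I_o = (I - I')/2, I_e = (I + I')/2.
mirrors = images[:, :, ::-1]
odd = 0.5 * (images - mirrors)
even = 0.5 * (images + mirrors)
assert np.allclose(odd + even, images)            # exact reconstruction

def scatter(samples):
    """Total scatter sum_i Psi_i Psi_i^T of mean-subtracted, flattened samples."""
    X = samples.reshape(len(samples), -1)
    Psi = X - X.mean(axis=0)
    return Psi.T @ Psi

S_T, S_o, S_e = scatter(images), scatter(odd), scatter(even)

# The eigen-problem for S_T splits into the two smaller ones for S_o and S_e.
assert np.allclose(S_T, S_o + S_e)
```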
The odd and even symmetrical images can then be represented as

P_{oi} = \Omega_o^T \Psi_{oi}, \qquad P_{ei} = \Omega_e^T \Psi_{ei} \qquad (10)
Here P_{oi} and P_{ei} are the odd symmetrical and even symmetrical features of the i-th face image. To reduce the influence of principal components that capture variation due to lighting or facial expression, each component can be treated equally, i.e. given equal variance, by transforming the conventional PCA feature space into a weighted PCA feature space with the following whitening transformation for the odd symmetrical and even symmetrical sample sets:
Q_o = \Lambda_o^{-1/2} \Omega_o^T = [\lambda_{o1}^{-1/2} w_{o1}, \lambda_{o2}^{-1/2} w_{o2}, \ldots, \lambda_{o r_o}^{-1/2} w_{o r_o}]^T \qquad (11)

Q_e = \Lambda_e^{-1/2} \Omega_e^T = [\lambda_{e1}^{-1/2} w_{e1}, \lambda_{e2}^{-1/2} w_{e2}, \ldots, \lambda_{e r_e}^{-1/2} w_{e r_e}]^T \qquad (12)
Here Q_o and Q_e are the transform matrices of the odd symmetrical and even symmetrical images for the W2DPCA feature space. In particular, the representation of the odd and even symmetrical images in the W2DPCA feature space is

\tilde{P}_{oi} = Q_o \Psi_{oi}, \qquad \tilde{P}_{ei} = Q_e \Psi_{ei} \qquad (13)

Since \Psi_i = \Psi_{oi} + \Psi_{ei}, the two transforms can be stacked into a single transform and weight matrix,

Q = \begin{bmatrix} Q_o \\ Q_e \end{bmatrix}, \qquad \Lambda = \mathrm{diag}(\Lambda_o, \Lambda_e) \qquad (14)

so that the combined W2DPCA feature of image I_i is

\tilde{P}_i = Q \Psi_i = \begin{bmatrix} \tilde{P}_{oi} \\ \tilde{P}_{ei} \end{bmatrix} \qquad (15)
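A minimal NumPy sketch of the whitening idea (with illustrative random data, not the paper's images) shows how Q = \Lambda^{-1/2} \Omega^T equalizes the variances of the projected components:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy samples with strongly unequal variances along the 5 coordinates.
X = rng.random((200, 5)) @ np.diag([5.0, 3.0, 1.0, 0.5, 0.1])
Psi = X - X.mean(axis=0)                   # mean-subtracted samples

# Eigen-decomposition of the sample scatter (covariance) matrix.
S = Psi.T @ Psi / len(Psi)
lam, W = np.linalg.eigh(S)                 # eigenvalues in ascending order
lam, W = lam[::-1], W[:, ::-1]             # re-sort: largest first

# Whitening transform Q = Lambda^{-1/2} W^T, as in equations (11)-(12).
Q = np.diag(lam ** -0.5) @ W.T
P = Psi @ Q.T                              # whitened (weighted) features

# After whitening, every principal component has unit variance.
assert np.allclose(P.var(axis=0), 1.0, atol=1e-8)
```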
For feature selection in SWPCA [24], all eigenvectors are ordered according to their eigenvalues, and the eigenvectors corresponding to the first m largest eigenvalues are selected. Since the variance (i.e. the eigenvalues) of the weighted even symmetrical components is larger than that of the corresponding weighted odd symmetrical components, it is natural to consider the even symmetrical components first, and then the odd symmetrical components if necessary; otherwise they are discarded. The expression recognition system using the proposed method is shown in figure 1.
Figure 1. Block diagram of proposed system for recognition of expression
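The eigenvalue-ordered feature selection step can be sketched as follows (an illustrative example on random data; the value of m here is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.random((100, 10))
Psi = X - X.mean(axis=0)
S = Psi.T @ Psi                      # scatter matrix of the samples

lam, W = np.linalg.eigh(S)
order = np.argsort(lam)[::-1]        # indices of eigenvalues, largest first
lam, W = lam[order], W[:, order]

m = 4                                # keep only the m leading components
W_m = W[:, :m]
features = Psi @ W_m                 # reduced-dimension features

assert features.shape == (100, m)
assert np.all(np.diff(lam) <= 1e-9)  # eigenvalues sorted non-increasing
```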
3. BRIEF ABOUT GABOR FILTER AND CLASSIFIER
3.1. Gabor Filter
Feature extraction by the Gabor filter is not the focus of this paper. Holistic texture features of a face image can be extracted using Gabor wavelets or Gabor filters (GF) [25]. In this work the phase and magnitude parts are extracted using a bank of 40 Gabor filters. The filter bank uses a group of wavelets and produces magnitude and phase responses at a specific frequency and a specific orientation; these properties make Gabor filters useful for texture analysis. The design of the Gabor filters follows [26-30]. The resulting Gabor feature vector space is high-dimensional and contains many redundant coefficients. The global Gabor filter bank with all m frequencies and n orientations is denoted G(m x n). Yi-Chun Lee and Chin-Hsing Chen [31] proposed feature extraction for face recognition based on Gabor filters and two-dimensional locality preserving projections. Gabor wavelets have been successfully and widely applied to face recognition [32-33], face detection [34], texture segmentation [35] and fingerprint recognition [36].
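A G(5x8) bank of 40 filters can be sketched as below. This is an illustrative construction, not the paper's design; the wavelengths, the envelope width sigma, and the kernel size are assumed values:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma=4.0):
    """Complex Gabor kernel: Gaussian envelope times a complex sinusoid
    oriented at angle theta. Parameter values here are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(2j * np.pi * x_theta / wavelength)

def convolve_same(img, ker):
    """FFT-based 2D convolution, cropped to the input size (odd-sized kernels)."""
    s = (img.shape[0] + ker.shape[0] - 1, img.shape[1] + ker.shape[1] - 1)
    full = np.fft.ifft2(np.fft.fft2(img, s) * np.fft.fft2(ker, s))
    r0, r1 = ker.shape[0] // 2, ker.shape[1] // 2
    return full[r0:r0 + img.shape[0], r1:r1 + img.shape[1]]

# Bank of 5 frequencies x 8 orientations = 40 filters, as in G(5x8).
wavelengths = [4, 6, 8, 11, 16]
orientations = [k * np.pi / 8 for k in range(8)]
bank = [gabor_kernel(31, w, t) for w in wavelengths for t in orientations]

# Filtering an image yields a complex response, split into magnitude and phase;
# stacking all 40 responses is what inflates the feature dimension.
image = np.random.default_rng(4).random((32, 32))
response = convolve_same(image, bank[0])
magnitude, phase = np.abs(response), np.angle(response)

assert len(bank) == 40
assert magnitude.shape == image.shape
```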
3.2. Classifier
After texture information is extracted from an image using Gabor filters, it is normally encoded into a feature vector space. Given two subspace feature vectors F_i and F_k, a distance function computes the difference between them; this value measures the dissimilarity between the images from which the features were extracted, so larger distances mean lower similarity. The Manhattan distance, also called the L1 norm or city-block metric [37], between train and test image feature vectors is given by

d_i = d(F_i, F_k) = \sum_{j} | F_k(j) - F_i(j) |

Here F_i is a vector describing the i-th face class of the training images. If d_i is less than some predefined threshold value \theta_i, the image is classified into the corresponding expression class.
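The thresholded nearest-neighbour matching described above can be sketched as follows (toy vectors and threshold are hypothetical, for illustration only):

```python
import numpy as np

def classify(test_vec, train_vecs, labels, threshold):
    """Nearest neighbour under the Manhattan (L1, city-block) distance.
    Returns None when even the best match exceeds the threshold."""
    d = np.abs(train_vecs - test_vec).sum(axis=1)   # d_i = sum_j |F_k(j) - F_i(j)|
    best = int(np.argmin(d))
    if d[best] > threshold:
        return None                                 # match not found
    return labels[best]

train = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
labels = ["happy", "sad", "angry"]
assert classify(np.array([0.9, 1.2]), train, labels, threshold=2.0) == "sad"
assert classify(np.array([10.0, 10.0]), train, labels, threshold=2.0) is None
```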
Based on the subspace-projected Gabor feature vector coefficients, all seven expressions are classified with an SVM classifier, and recognition of images is achieved using the Manhattan distance matching scores. We perform facial expression recognition with a support vector machine (SVM) [38] to evaluate the performance of the proposed method. SVM is a supervised machine learning technique that implicitly maps the data into a higher-dimensional feature space and then finds a linear hyperplane with maximal margin to separate the classes in that space. Given a training set of M labelled examples T = {(x_i, y_i) | i = 1, \ldots, M}, where x_i \in R^n and y_i \in {-1, 1}, a test sample x is classified by

f(x) = \mathrm{sign}\left( \sum_{i=1}^{M} a_i y_i K(x_i, x) + b \right)

where a_i are the Lagrange multipliers of the dual optimization problem, b is a bias, and K is a kernel function. Note that SVM allows domain-specific selection of the kernel function. Although many kernels have been proposed, the most frequently used are the linear, polynomial and radial basis function (RBF) kernels. In this study the SVM+RBF combination is used.
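The decision function above can be exercised directly in a few lines; the support vectors, multipliers and gamma below are hypothetical values chosen just to demonstrate the formula with an RBF kernel, not a trained model:

```python
import numpy as np

def rbf(u, v, gamma=0.5):
    """RBF kernel K(u, v) = exp(-gamma * ||u - v||^2)."""
    return np.exp(-gamma * np.sum((u - v) ** 2))

def svm_decision(x, support, y, alpha, b, gamma=0.5):
    """f(x) = sign(sum_i alpha_i * y_i * K(x_i, x) + b)."""
    s = sum(a * yi * rbf(xi, x, gamma) for a, yi, xi in zip(alpha, y, support))
    return 1 if s + b >= 0 else -1

# Hypothetical support vectors and multipliers, just to exercise the formula.
support = np.array([[0.0, 0.0], [2.0, 2.0]])
y = np.array([-1, 1])
alpha = np.array([1.0, 1.0])
b = 0.0
assert svm_decision(np.array([0.1, 0.1]), support, y, alpha, b) == -1
assert svm_decision(np.array([1.9, 1.9]), support, y, alpha, b) == 1
```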
4. TESTING AND RESULT ANALYSIS
4.1. Pre-processing
In this work the Japanese Female Facial Expression (JAFFE) [39] database is used for the experiments. The database contains 213 images of 7 facial expressions (6 basic facial expressions + 1 neutral) posed by 10 Japanese female models at 256x256 resolution. Each image has been rated on 6 emotion adjectives by 60 Japanese subjects. All images of this database were pre-processed to obtain pure facial expression images with normalized intensity and uniform size and shape. The pre-processing procedure used in this work detects facial feature points (eyes, nose and mouth) manually, rotates the image to line up the eye coordinates, and locates and crops the face region with a rectangle according to the face cropping model [40], as shown in figure 2(a). With d the distance between the two eyes as shown in figure 2(a), a rectangle of size 2.2d x 1.8d is formed; the images are then scaled down to a fixed size of 128x96, with the centre of the two eyes placed at a fixed position. Finally, illumination effects are removed using histogram equalization. Figure 2(b) shows some examples of pure facial expression images from the JAFFE database after pre-processing.
Figure 2. (a) Face cropping model (b) Few samples of cropped images of JAFFE database
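The scaling and illumination-normalization steps can be sketched as below. This is a simplified assumption-laden sketch: nearest-neighbour resizing stands in for whatever interpolation the authors used, and the eye-based rotation and 2.2d x 1.8d cropping are omitted:

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize (a stand-in for the scaling step)."""
    rows = np.arange(out_h) * img.shape[0] // out_h
    cols = np.arange(out_w) * img.shape[1] // out_w
    return img[rows][:, cols]

def hist_equalize(img):
    """Histogram equalization of an 8-bit grayscale image, the final
    pre-processing step used to remove illumination effects."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf.astype(np.uint8)[img]

# A toy 256x256 "face crop", scaled down to the fixed 128x96 size.
face = np.random.default_rng(5).integers(0, 256, (256, 256), dtype=np.uint8)
small = resize_nn(face, 128, 96)
equalized = hist_equalize(small)

assert small.shape == (128, 96)
assert equalized.dtype == np.uint8
assert equalized.max() == 255        # full output range after equalization
```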
4.2. Testing and Analysis of Results
The database is tested with the 2DPCA and SW2DPCA methods by projecting the Gabor filter feature space. A Support Vector Machine (SVM) classifier with an RBF kernel is used to classify the seven expressions of the JAFFE database. To create the SVM model, 210 images of the JAFFE database are considered: 70% of the images are used for training and 30% for testing, with hold-out cross-validation. The database is tested with all the subspace models and the proposed approach. In addition to a drastic reduction in the number of coefficients, a considerable improvement in the recognition rate is observed in the facial expression recognition experiment.
Table 1. Gabor feature dimension before and after implementing SW2DPCA

Gabor filter bank   Cropped image dimension   Number of filters   Original dimension   Reduced dimension
G(5x8)              128x96                    40                  491520               3840
Table 2. Brief about the state of the art

Subspace Methods          FERR
PCA [43]                  91.00%
2DPCA [44]                78.00%
2DPCA + features [44]     94.00%
2D2PCA + features [44]    84.50%
Log Gabor [45]            91.80%
Gabor Filter (GF) [7]     89.98%
GF+PCA [1]                81.70%
DCT+PCA+RBF [42]          90.50%
GF+PCA+PNN [42]           94.72%
GF+PCA+RBF [42]           94.11%

Table 3. Comparison of facial expression recognition rate (FERR) for JAFFE database

PCA based subspace methods   FERR in %   Classification Time in Sec.
GF+PCA [1]                   81.70%      -
GF+2DPCA                     93.33%      0.660622
GF+SW2DPCA                   95.24%      0.655970

Figure 3. Comparison of expression rates for GF+2DPCA and GF+SW2DPCA
Figure 4. Precision and recall rates (a) GF+2DPCA, (b) GF+SW2DPCA method
The experimental results show that the proposed SW2DPCA method improves the average recognition accuracy by reducing redundant variants through the odd/even symmetrical decomposition of the input images. The Gabor-filtered feature vector space is projected by SW2DPCA, so the original dimension of the feature space is reduced to 3840, as given in table 1. The overall facial expression recognition accuracy is found to be 95.24% for the pre-processed images, as shown in table 3. Three of the seven expressions reach 100% accuracy, and the disgust and neutral expressions are recognized more accurately with SW2DPCA. The classification accuracy of SW2DPCA is better than that of 2DPCA, as shown in figure 3. The precision rate is the ratio of the number of images correctly recognized to the total number of recognitions for each person's faces; similarly, the recall rate is the ratio of the number of images correctly recognized to the total number of input faces for each person.
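These per-class precision and recall rates can be computed as follows (the label lists are a hypothetical toy example, not the JAFFE results):

```python
import numpy as np

def precision_recall(y_true, y_pred, label):
    """Per-class precision and recall from true / predicted label lists."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == label) & (y_true == label))
    precision = tp / max(np.sum(y_pred == label), 1)   # correct / all recognitions
    recall = tp / max(np.sum(y_true == label), 1)      # correct / all true inputs
    return precision, recall

y_true = ["happy", "happy", "sad", "sad", "happy"]
y_pred = ["happy", "sad", "sad", "sad", "happy"]
p, r = precision_recall(y_true, y_pred, "happy")
assert p == 1.0 and abs(r - 2 / 3) < 1e-12
p, r = precision_recall(y_true, y_pred, "sad")
assert abs(p - 2 / 3) < 1e-12 and r == 1.0
```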
Figure 4 illustrates the correct and wrong recognition of expressions. The classification rate of the seven expressions of the JAFFE database is also improved. The experiments were performed using MATLAB R2013a.
5. CONCLUSIONS
Facial expression recognition using appearance-based methods has significant applications in different fields. Subspace-based methods have been applied to face recognition for over a decade. In this paper, holistic texture features are extracted with a Gabor filter, and the dimension of the extracted feature vector space is reduced with the proposed SW2DPCA subspace method. The proposed method performs well compared to the PCA and 2DPCA methods. All second-order principal components of SWPCA are equalized by minimizing the redundant content, which improves the recognition efficiency to a certain extent compared to the state-of-the-art methods mentioned in this paper, even under large expression variations. The experimental results show that the overall accuracy increases to 95.24%. All seven expressions are classified using a support vector machine classifier with a radial basis function kernel (SVM+RBF). Except for the sad expression of the JAFFE database, the accuracy of all six other expressions is higher with SW2DPCA than with 2DPCA.
ACKNOWLEDGEMENTS
This work was supported by Dr. M. Seetha, Professor, Department of Computer Science and Engineering, G. N. Institute of Technology and Science, Hyderabad, India. I also thank Dr. Nagaratna Hegde, Professor, Department of Computer Science and Engineering, Vasavi College of Engineering, Hyderabad, for providing the MATLAB R2013a tool.
REFERENCES
[1] Bettadapura V., “Face Expression Recognition and Analysis: The State of the Art,” in Proceedings of
IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. 22, pp.1424-1445, 2002.
[2] C. Shan, S. Gong, and P. McOwan, “Robust facial expression recognition using local binary
patterns,” in proceedings of the IEEE International Conference Image Processing, pp. 914‑17, 2005.
[3] Y. Tian, T. Kanade, and J.F. Cohn, “Facial Expression Analysis: Handbook of Face Recognition,”
New York City: Springer; 2003.
[4] Hsieh C., Lai S., and Chen Y., “An Optical Flow Based Approach to Robust Face Recognition under
Expression Variations,” IEEE Transaction on Image Processing, vol. 19, no. 1, pp. 233-240, Jan 2010.
[5] Ya Zheng, Xiuxin Chen, Chongchong Yu and Cheng Gao Facial Expression Feature Extraction Based
on FastLBP Journal of Software Vol. 8, No. 11, November 2013.
[6] Cruz, A.C, Bhanu, B. Thakoor,N.S. “Vision and Attention Theory Based Sampling for Continuous
Facial Emotion Recognition,” IEEE Transaction on Affective Computing, Volume:5 Issue:4 , April
2014.
[7] M.S. Bartlett, G. Littlewort, M. Frank, C. Lainscsek I. Fasel, and J. Movellan, “Recognizing facial
expression: Machine learning and application to spontaneous behavior,” in IEEE Conference on
Computer Vision and Pattern Recognition, Vol. 2, pp. 568‑73, 2005.
[8] Yang Q., Ding X., "Symmetrical PCA in face recognition," in Proc. IEEE International Conference on Image Processing, New York, USA, pp. 97-100, 2002.
[9] S.Z. Li, A.K. Jain, Gregory Shakhnarovich and Baback Moghaddam “Handbook of Face Recognition,
Face Recognition in Subspaces” Springer-Verlag London Limited 2011.
[10] Dimitris Bolis, Anastasios Maronidis, Anastasios Tefas, and Ioannis Pitas “Improving the Robustness
of Subspace Learning Techniques for Facial Expression Recognition” ICANN 2010, Part I, LNCS
6352, pp. 470–479, 2010. Springer-Verlag Berlin Heidelberg 2010.
[11] Jamal Hussain Shah, Muhammad Sharif, Mudassar Raza, and Aisha Azeem “A Survey: Linear and
Nonlinear PCA Based Face Recognition Techniques” The International Arab Journal of Information
Technology, Vol. 10, No. 6, November 2013.
[12] A. Adeel Mohammed, M. A. Sid-Ahmed, Rashid Minhas, and Q. M. Jonathan Wu, "A face portion based recognition system using multidimensional PCA," IEEE 54th International Midwest Symposium on Circuits and Systems, pp. 1-4, August 2011.
[13] Abegaz, T., Adams J, Bryant, K., Dozier, G., Popplewell, K., Ricanek, K., Shelton, J., and Woodard,
D.L., “Hybrid GAs for Eigen-based facial recognition,” IEEE Workshop on Computational
Intelligence in Biometrics and Identity Management, pp. 127 - 130, April 2011.
[14] Chengliang Wang, Libin Lan, Minjie Gu, and Yuwei Zhang, "Face Recognition Based on Principle Component Analysis and Support Vector Machine," 3rd International Workshop on Intelligent Systems and Applications, pp. 1-4, May 2011.
[15] Guan-Chun Luh, “Face recognition using PCA based immune networks with single training sample
per person,” International Conference on Machine Learning and Cybernetics, vol. 4, pp. 1773 - 1779,
July 2011.
[16] Ran He, Wei-Shi Zheng, Bao-Gang Hu, Xiang-Wei Kong, “Two-Stage Nonnegative Sparse
Representation for Large-Scale Face Recognition,” In: IEEE Trans. on Neural Networks and Learning
Systems, vol. 24, no. 1, Jan. 2013.
[17] Bag Soumen, and Sanyal Goutam, “An Efficient Face Recognition Approach using PCA and
Minimum Distance Classifier,” International Conference on Image Information Processing, pp. 1 - 6,
November 2011.
[18] Guoxia Sun, Huiqiang Sun and Liangliang Zhang, “Face Recognition Based on Symmetrical
Weighted PCA,” International Conference on Computer Science and Service System, pp. 2249 -
2252, August 2011.
[19] M.Turk, A Pentland. Eigenfaces for recognition, Journal of Cognitive Neuroscience 3 (1) (1991) 71-
86.
[20] Jianxin Wu, Zhi-Hua Zhou, "Face recognition with one training image per person," Pattern Recognition Letters 23 (2002) 1711-1719.
[21] Songcan Chen, Daoqiang Zhang, Zhi-Hua Zhou, "Enhanced (PC)2A for face recognition with one training image per person," Pattern Recognition Letters 25 (2004) 1171-1181.
[22] D. Zhang, Z. H. Zhou, "2D2PCA: 2-directional 2-dimensional PCA for efficient face representation and recognition," Neurocomputing 69(1-3) (2005) 224-231.
Authors
First-Author received the M. Tech. degree in Computer Science and Engineering from Visvesvaraya Technological University (VTU), Belgaum, India, in 2009-2010, and is pursuing a Ph.D. from VTU Belgaum. From 1994 to 2009 he was a Lecturer, and in 2010 he became an Assistant Professor, in the Department of Computer Science and Engineering at the SDM Institute of Technology, Ujire, Mangalore, India. SDMIT is affiliated to VTU Belgaum, India. First-Author is the author of over 18 technical publications and proceedings. His research interests include image processing, computer networks, software engineering and software architecture.
Second-Author received the M.S. degree in Software Systems Engineering from the Birla Institute of Technology and Science (BITS), Pilani, India, and the Ph.D. degree in Computer Science and Engineering from Jawaharlal Nehru Technological University (JNTU), Hyderabad, India, in 1999 and 2007, respectively. From 1999 to 2005 she was an Assistant Professor in the Computer Science Department of Chaitanya Bharathi Institute of Technology, Hyderabad, and from 2005 to 2008 she worked there as an Associate Professor. She is currently a Professor in the Department of Computer Science and Engineering of G. Narayanamma Institute of Technology and Sciences, affiliated to JNTU, Hyderabad, India. Second-Author is the author of over 41 technical publications. Her areas of interest are image processing, pattern recognition, artificial intelligence, neural networks, data communication and interfacing, data mining and computer organization. She is a recognized supervisor of Osmania University and a reviewer for the international journal Information Fusion and an ACM journal.