Facial expression plays a significant role in affective computing and is one of the main channels of non-verbal communication in human-computer interaction. Automatic recognition of human affect has become an increasingly challenging and interesting problem in recent years, and facial expressions are among the most significant cues for recognizing human emotion in daily life. A facial expression recognition system (FERS) can be developed for applications such as human affect analysis, health-care assessment, distance learning, driver fatigue detection, and human-computer interaction. There are three main components in recognizing a facial expression: detection of the face or its components, feature extraction from the face image, and classification of the expression. This study proposes feature extraction and classification methods for FER.
IRJET-Facial Expression Recognition using Efficient LBP and CNN (IRJET Journal)
This document presents a facial expression recognition system using efficient Local Binary Patterns (LBP) for feature extraction and a Convolutional Neural Network (CNN) for classification. LBP describes local texture features of images in a simple yet robust way. A CNN is used for classification as it can automatically extract both low-level and high-level features from images without needing separate feature extraction. The proposed system takes LBP feature maps as input to the CNN to improve its understanding and learning. When tested on the Cohn-Kanade dataset, the system achieved 90% accuracy in facial expression recognition.
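As an illustrative sketch (not taken from the paper), the basic 3x3 LBP operator that such systems build on can be written as follows; the paper's "efficient" LBP variant may differ in neighborhood size and binning:

```python
import numpy as np

def lbp_code(img, y, x):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel
    and pack the comparison bits into one byte."""
    c = img[y, x]
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offs):
        if img[y + dy, x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_map(img):
    """LBP feature map over all interior pixels of a grayscale image;
    this map is what would be fed to the CNN."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y - 1, x - 1] = lbp_code(img, y, x)
    return out
```

Feeding the CNN the LBP map rather than raw pixels hands it a texture encoding that is locally invariant to monotonic illumination changes.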
The document describes a new hierarchical deep learning algorithm for facial expression recognition (FER). The algorithm extracts appearance features from preprocessed LBP images using a convolutional neural network, and geometric features by tracking the coordinates of landmarks associated with facial action units, the muscle movements involved in expressions. These two types of features are fused in a hierarchical structure. The algorithm combines the softmax outputs of each network while also considering the second-highest predicted emotion. It further uses an autoencoder to generate neutral expression images, which helps extract dynamic features between neutral and emotional expressions. The algorithm achieved 96.46% accuracy on the CK+ dataset and 91.27% on the JAFFE dataset, outperforming other recent FER methods.
Critical evaluation of frontal image based gender classification techniques (Salam Shah)
The face conveys much about a person and has considerable importance in identification and verification processes. The human face provides information such as age, gender, facial expression, and ethnicity. Research has been carried out in the areas of face detection, identification, verification, and gender classification to correctly identify humans. The focus of this paper is gender classification, for which various methods have been formulated based on measurements of facial features. An efficient gender classification technique helps in accurately identifying a person as male or female and also enhances the performance of other applications such as computer-user interfaces, investigation, monitoring, business profiling, and human-computer interaction (HCI). In this paper, the most prominent gender classification techniques are evaluated in terms of their strengths and limitations.
This document summarizes 10 research papers on various techniques for facial expression recognition. The papers cover topics like using local gray code patterns and kernel canonical correlation analysis to extract facial features and recognize expressions. Other techniques discussed include using facial animation parameters and hidden Markov models, active appearance models to track facial features over video sequences, and using geometric deformation features and support vector machines to recognize expressions in image sequences. The document provides an overview of the different approaches researchers have taken and their relative performances on standard datasets.
Facial emotion recognition using deep learning detector and classifier (IJECEIAES)
Numerous research works have been put forward over the years to advance the field of facial expression recognition, which is still considered a challenging task today. The choice of image color space and the use of facial alignment as preprocessing steps may collectively have a significant impact on the accuracy and computational cost of facial emotion recognition, which is crucial for optimizing the speed-accuracy trade-off. This paper proposes a deep learning-based facial emotion recognition pipeline that predicts the emotion of detected face regions in video sequences. Five well-known state-of-the-art convolutional neural network architectures are used to train the emotion classifier and identify the architecture that gives the best speed-accuracy trade-off. Two distinct facial emotion training datasets are prepared to investigate the effect of image color space and facial alignment on recognition performance. Experimental results show that training a facial expression recognition model with grayscale, aligned facial images is preferable, as it offers better recognition rates with lower detection latency. The lightweight MobileNet_v1 is identified as the best-performing model, with WM=0.75 and RM=160 as its hyperparameters, achieving an overall accuracy of 86.42% on the testing video dataset.
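The two preprocessing choices the paper studies, grayscale conversion and face alignment, can be sketched minimally as below; the BT.601 luma weights and the eye-based roll angle are common conventions, not details taken from the paper:

```python
import numpy as np

def to_grayscale(rgb):
    """ITU-R BT.601 luma conversion, a common grayscale preprocessing step.
    rgb: array whose last axis holds (R, G, B) in [0, 1]."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def roll_angle(left_eye, right_eye):
    """In-plane rotation (degrees) that would level the inter-eye line;
    2D face alignment typically rotates the crop by minus this angle."""
    dy = right_eye[1] - left_eye[1]
    dx = right_eye[0] - left_eye[0]
    return float(np.degrees(np.arctan2(dy, dx)))
```

Both steps are cheap relative to CNN inference, which is why they can improve the speed-accuracy trade-off rather than merely the accuracy.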
Multi Local Feature Selection Using Genetic Algorithm For Face Identification (CSCJournals)
This document presents a face recognition algorithm that uses a multi-local feature selection approach based on genetic algorithms and pseudo Zernike moment invariants. The algorithm involves five stages: 1) face detection using an ellipse to approximate the face region, 2) extraction of facial features (eyes, nose, mouth) within regions using genetic algorithms to locate templates with maximum edge density, 3) generation of moment invariants from the facial features using pseudo Zernike polynomials, 4) classification of facial features using radial basis function neural networks, and 5) selection of multiple local features for face identification. The algorithm was tested on over 3000 images from three databases, achieving recognition rates over 89%, higher than global or single local feature approaches.
FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE... (sipij)
This paper proposes a deep learning method for facial verification of aging subjects. Facial aging comprises texture and shape variations that affect the human face as time progresses; accordingly, there is a demand for robust methods to verify facial images as they age. In this paper, a deep learning method based on a GoogLeNet pre-trained convolutional network, fused with Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) feature descriptors, is applied for feature extraction and classification. The experiments are based on facial images collected from the MORPH and FG-NET benchmark datasets. Euclidean distance is used to measure the similarity between pairs of feature vectors across the age gap. Experimental results show an improvement in validation accuracy on the FG-NET database, which reached 100%, while on the MORPH database the validation accuracy is 99.8%. The proposed method performs better and with higher accuracy than current state-of-the-art methods.
This document presents a novel fuzzy feature extraction method using Local Binary Patterns (LBP) for face recognition. It begins with an introduction to face recognition challenges and common approaches, then describes using LBP to extract local features from divided image windows. Features are extracted by computing membership values based on pixel intensities and taking their product with the central pixel value. Local and central pixel information values are combined as the total information for classification using Support Vector Machine (SVM) or K-Nearest Neighbor classifiers. Experimental results on two databases show the proposed approach achieves better recognition rates than other methods. The conclusion is that accounting for both central and neighborhood pixel information is effective for images with expression, illumination, and pose variations.
Face Recognition Using Gabor features And PCA (IOSR Journals)
This document summarizes a research paper on face recognition using Gabor features and principal component analysis (PCA). It begins by providing background on face recognition and discusses challenges like lighting, pose, and orientation. It then describes preprocessing faces using Gabor wavelets to extract discriminative features and reduce variations. PCA is used to further reduce the dimensionality of features into principal components. These components are used for classification, with nearest neighbor classification tested on the Yale face database. Results show the proposed approach improves recognition rates compared to Euclidean distance measures.
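A hedged sketch of the PCA-plus-nearest-neighbor stage described above (the Gabor filtering step is omitted, and the images are assumed to be already flattened into row vectors):

```python
import numpy as np

def pca_fit(X, k):
    """PCA on flattened (e.g. Gabor-filtered) face images.
    X: (n_samples, n_pixels). Returns mean and top-k principal axes."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centred data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu, Vt[:k]

def pca_project(X, mu, axes):
    """Project samples onto the retained principal components."""
    return (X - mu) @ axes.T

def nearest_neighbor(query, gallery, labels):
    """1-NN classification in PCA space with Euclidean distance."""
    d = np.linalg.norm(gallery - query, axis=1)
    return labels[int(np.argmin(d))]
```

PCA here serves purely as dimensionality reduction: classification happens in the low-dimensional component space, which keeps the nearest-neighbor search cheap.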
Selective local binary pattern with convolutional neural network for facial ... (IJECEIAES)
Variation in images in terms of head pose and illumination is a challenge in facial expression recognition. This research presents a hybrid approach that combines conventional and deep learning methods to improve facial expression recognition performance and address this challenge. We propose a selective local binary pattern (SLBP) method to obtain a more stable image representation that is fed to the learning process of a convolutional neural network (CNN). In the preprocessing stage, we use an adaptive gamma transformation to reduce illumination variability. The proposed SLBP selects the discriminant features in facial images with head pose variation using the median-based standard deviation of local binary pattern images. We experimented on the Karolinska directed emotional faces (KDEF) dataset, containing thousands of images with variations in head pose and illumination, and the Japanese female facial expression (JAFFE) dataset, containing seven facial expressions of Japanese females' frontal faces. The experiments show that the proposed method is superior to other related approaches, with an accuracy of 92.21% on the KDEF dataset and 94.28% on the JAFFE dataset.
Ensemble-based face expression recognition approach for image sentiment anal... (IJECEIAES)
The document presents an ensemble-based facial expression recognition (FER) model for image sentiment analysis. It combines three classification models - a customized convolutional neural network (CNN), ResNet50, and InceptionV3. The predictions from the three models are averaged using an ensemble classifier method to determine the final classification. The model is trained and tested on the FER-2013 dataset containing uncontrolled images. Experimental results show the ensemble model outperforms individual models in classifying some expressions like happy and neutral, achieving accuracies of 91.7% and 81.7% respectively. However, for other expressions like disgust and anger, ResNet50 performs better. The ensemble model achieves an overall accuracy of 72.3% for FER.
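The averaging ensemble described above reduces to a few lines; the emotion label list follows the FER-2013 convention, and the probability vectors stand in for the three models' softmax outputs:

```python
import numpy as np

# FER-2013 label order (a common convention, assumed here)
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def ensemble_predict(prob_list):
    """Average the softmax outputs of several classifiers (e.g. a custom
    CNN, ResNet50, InceptionV3) and take the argmax as the final label."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return EMOTIONS[int(np.argmax(avg))], avg
```

Averaging lets a confident minority model be outvoted only when the others are comparably confident, which is why per-class winners (e.g. ResNet50 on disgust) can still beat the ensemble.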
1) The document proposes four techniques for facial feature extraction - 2DPCA, LDA, KPCA, and KFA. It uses these techniques to extract features from images in the ORL database, applies LDA for classification, and recognizes faces using a feedforward neural network (FFNN).
2) It compares the recognition rates of the different feature extraction techniques when using 10, 20, 30, and 40 hidden neurons in the FFNN. The technique that performs best is KPCA, achieving a maximum recognition rate of 94.82% using 20 hidden neurons.
3) In conclusion, the goal of implementing and comparing different feature extraction techniques with an FFNN classifier on face images was achieved, with KPCA providing the best recognition performance.
Face Recognition using Improved FFT Based Radon by PSO and PCA Techniques (CSCJournals)
Face recognition is a problem that can be handled very well using a hybrid technique or mixed transform rather than a single technique, both in terms of performance and in terms of problem size. In this paper we use the Fourier-based Radon transform, improved by particle swarm optimization (PSO). PSO is used to select the optimum directions (projection angles) that achieve a very high recognition rate with fast computation. The number of directions selected using PSO is smaller than the number required by the ordinary Radon transform, which leads to a small number of features. These features are reduced further using PCA to produce a compact representation of the faces in the database. Our method has been applied to the ORL database and achieves a 100% recognition rate.
Local Descriptor based Face Recognition System (IRJET Journal)
This document describes a local descriptor-based face recognition system that uses the Asymmetric Region Local Binary Pattern (AR-LBP) operator along with Principal Component Analysis (PCA) for facial expression recognition. The proposed AR-LBP operator addresses limitations of existing LBP operators in terms of scale, feature histogram length, and discriminability. The system divides input face images into regions, extracts AR-LBP histograms from each region, and concatenates them into a feature vector. It was evaluated on three datasets and achieved recognition accuracies of 96.43%, 97.14%, and 86.67%, respectively. Evaluation using different similarity metrics found that Mahalanobis Cosine distance performed best. Experiments varied grid and operator sizes.
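The region-histogram construction the system uses can be sketched as follows, with plain LBP codes standing in for the paper's AR-LBP operator, whose exact definition is not reproduced here; grid size and bin count are illustrative defaults:

```python
import numpy as np

def region_histograms(lbp_img, grid=(4, 4), bins=256):
    """Divide an LBP-coded image into a grid of regions, histogram each
    region, and concatenate the histograms into one feature vector."""
    h, w = lbp_img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            block = lbp_img[i * h // gh:(i + 1) * h // gh,
                            j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(block, bins=bins, range=(0, bins))
            feats.append(hist / max(block.size, 1))  # normalise per region
    return np.concatenate(feats)
```

Concatenating per-region histograms preserves coarse spatial layout (eyes vs. mouth regions) that a single global histogram would discard, which is what makes similarity metrics like Mahalanobis cosine meaningful on these vectors.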
A face recognition system using convolutional feature extraction with linear... (IJECEIAES)
Face recognition is one of the important biometric authentication research areas for security purposes in fields such as pattern recognition and image processing. However, human face recognition poses a major problem for machine learning and deep learning techniques, since input images vary with people's poses, lighting conditions, expressions, ages, and illumination, which degrades recognition accuracy. In the present research, the resolution of the image patches is reduced by the max pooling layer in a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Due to optimization using the CNN in LCDRC, the distance between classes is maximized and the distance between features within a class is reduced. The results state that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy, where traditional LCDRC achieved 83.35% and 77.70% mean recognition accuracy, on the ORL and YALE databases respectively, for training number 8 (i.e., 80% training and 20% testing data).
Technique for recognizing faces using a hybrid of moments and a local binary... (IJECEIAES)
The face recognition process is widely studied and researchers have made great achievements, but there are still many challenges facing applications of face detection and recognition systems. This research contributes to overcoming some of those challenges and reducing the gaps in previous systems for identifying and recognizing faces of individuals in images. The research deals with increasing recognition precision using a hybrid method of moments and local binary patterns (LBP). The moment technique computes several critical parameters, which are used as descriptors and classifiers to recognize faces in images. The LBP technique has three phases: representation of a face, feature extraction, and classification. The face in the image is subdivided into variable-size blocks to compute their histograms and discover their features. Fidelity criteria are used to estimate and evaluate the findings. The proposed technique uses the standard Olivetti Research Laboratory dataset in the system's training and recognition phases. The experiments showed that the hybrid technique (moments and LBP) recognizes faces in images and provides a suitable representation for identifying them. The proposed technique increases accuracy, robustness, and efficiency, with recognition precision improved by 3% to reach 98.78%.
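As a minimal, hypothetical illustration of the moment descriptors such a hybrid builds on (the paper does not specify which moments it uses, so this shows standard raw and central image moments):

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(y, x)."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    return float((x ** p * y ** q * img).sum())

def central_moments(img):
    """Centroid plus a few second-order central moments, the kind of
    translation-invariant shape descriptors a moments-based recogniser
    builds on."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00
    cy = raw_moment(img, 0, 1) / m00
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    mu = {(p, q): float((((x - cx) ** p) * ((y - cy) ** q) * img).sum())
          for (p, q) in [(2, 0), (0, 2), (1, 1)]}
    return (cx, cy), mu
```

Central moments are invariant to translation by construction; combining them with block-wise LBP histograms pairs global shape cues with local texture cues.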
Possibility fuzzy c means clustering for expression invariant face recognition (IJCI JOURNAL)
Face, being the most natural method of identification for humans, is one of the most significant biometric modalities, and various methods to achieve efficient face recognition have been proposed. However, changes in the face owing to different expressions, pose, makeup, illumination, and age bring about marked variations in the facial image. These changes will inevitably occur; they can be controlled only to a certain degree, beyond which they are bound to happen and will adversely impact the performance of any face recognition system. This paper proposes a strategy to improve the classification methodology in face recognition by using Possibilistic Fuzzy C-Means Clustering (PFCM). This clustering technique was chosen for face recognition due to properties such as outlier insensitivity, which make it a suitable candidate for designing robust applications. PFCM is a hybridization of the Possibilistic C-Means (PCM) and Fuzzy C-Means (FCM) clustering algorithms. PFCM is a robust clustering technique, especially significant for its noise insensitivity, and it also resolves the coincident-clusters problem faced by other clustering techniques. The technique can therefore be used to increase the overall robustness of a face recognition system, increase its invariance, and make it a reliably usable biometric modality.
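For orientation, one iteration of the standard FCM update that PFCM extends can be sketched as below; PFCM additionally maintains a possibilistic typicality matrix, which is omitted here:

```python
import numpy as np

def fcm_step(X, centers, m=2.0):
    """One fuzzy c-means iteration: update memberships, then centres.
    X: (n, features); centers: (c, features); m: fuzzifier (> 1)."""
    # distances of every sample to every centre, shape (n, c)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    um = u ** m
    # centres are membership-weighted means of the samples
    new_centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, new_centers
```

In FCM the memberships of each sample must sum to 1 across clusters, which is exactly what makes outliers distort the centres; PFCM's typicality term relaxes that constraint to gain the noise insensitivity the abstract emphasizes.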
Feature extraction comparison for facial expression recognition using adapti... (IJECEIAES)
Facial expression recognition is an important part of the field of affective computing. Automatic analysis of human facial expressions is a challenging problem with many applications. Most existing automated systems for facial expression analysis attempt to recognize a few prototypical emotional expressions, such as anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. This paper compares feature extraction methods used to detect human facial expressions. The study compares the gray level co-occurrence matrix, local binary pattern, and facial landmark (FL) methods on two facial expression datasets, namely the Japanese female facial expression (JAFFE) dataset and the extended Cohn-Kanade (CK+) dataset. In addition, we propose an enhancement of the extreme learning machine (ELM) method, adaptive ELM (aELM), which adaptively selects the best number of hidden neurons to reach maximum performance. The result is that our proposed method can slightly improve the performance of the basic ELM method with the feature extractions mentioned above, obtaining a maximum mean accuracy of 88.07% on the CK+ dataset and 83.12% on the JAFFE dataset with FL feature extraction.
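A minimal sketch of the basic ELM that the paper's aELM enhances (the adaptive hidden-neuron selection is not shown; the Gaussian weights and tanh activation are common defaults, not the paper's choices):

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng=np.random.default_rng(0)):
    """Extreme learning machine: random, untrained input weights and a
    closed-form (pseudo-inverse) solve for the output weights.
    X: (n, features); Y: one-hot targets, (n, classes)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # random hidden-layer activations
    beta = np.linalg.pinv(H) @ Y    # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

Because only `beta` is solved for, training is a single linear solve; aELM's contribution is choosing `n_hidden` adaptively rather than by hand.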
This document summarizes a research paper on face recognition using Gabor features and PCA. It begins with an introduction to face recognition and discusses challenges like lighting, pose, and orientation. It then describes how the proposed system uses Gabor wavelets for preprocessing to reduce variations from pose, lighting, etc. Principal component analysis (PCA) is used to extract low dimensional and discriminating feature vectors from the preprocessed images. These feature vectors are then used for classification with k-nearest neighbors. The proposed system was tested on the Yale face database containing 100 images of 10 subjects with variable illumination and expressions.
This document provides a synopsis for a project on emotion detection from facial expressions. It outlines the objectives to develop an automatic emotion detection system using machine learning algorithms to analyze facial expressions in video frames and compare them to a database to classify emotions. The technical details discuss using a facial tracker and extracting features to represent expressions. Classification algorithms like KNN, SVM, and voting will be used for recognition and mapping expressions to emotions. Future work may include 3D processing, speech recognition, and detecting micro-expressions.
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M... (sipij)
The physiological biometric trait of face images is used to identify a person effectively. In this paper, we propose compression-based face recognition using transform-domain features fused at the matching level. The 2-D images are converted into 1-D vectors using the mean, to compress the number of pixels. The Fast Fourier Transform (FFT) and Discrete Wavelet Transform (DWT) are used to extract features. The low- and high-frequency coefficients of the DWT are concatenated to obtain the final DWT features. The performance parameters are computed by comparing database and test image features of the FFT and DWT using Euclidean distance (ED). The performance parameters of the FFT and DWT are fused at the matching level to obtain better results. It is observed that the performance of the proposed method is better than that of existing methods.
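A hedged sketch of the pipeline described above, assuming row means for the 2-D to 1-D compression and a one-level Haar wavelet for the DWT (the abstract specifies neither choice, and the fusion weight is arbitrary):

```python
import numpy as np

def haar_dwt_1d(v):
    """One-level Haar DWT: low-pass (pairwise averages) and high-pass
    (pairwise differences) coefficients."""
    v = v[: len(v) // 2 * 2]
    lo = (v[0::2] + v[1::2]) / np.sqrt(2)
    hi = (v[0::2] - v[1::2]) / np.sqrt(2)
    return lo, hi

def features(img):
    """Row-mean compression to 1-D, then FFT magnitudes and concatenated
    low+high Haar coefficients as the two transform-domain feature sets."""
    v = img.mean(axis=1)                 # 2-D -> 1-D via means (assumed axis)
    fft_feat = np.abs(np.fft.fft(v))
    lo, hi = haar_dwt_1d(v)
    return fft_feat, np.concatenate([lo, hi])

def fused_distance(f1, f2, w=0.5):
    """Match-level fusion: weighted sum of the Euclidean distances
    computed separately in the FFT and DWT feature domains."""
    d_fft = np.linalg.norm(f1[0] - f2[0])
    d_dwt = np.linalg.norm(f1[1] - f2[1])
    return w * d_fft + (1 - w) * d_dwt
```

Fusing at the matching level (combining distances) rather than the feature level (concatenating vectors) lets each transform keep its own scale.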
Human emotion detection and classification using modified Viola-Jones and con... (IAESIJAI)
Facial expression is a kind of nonverbal communication that conveys information about a person's emotional state. Human emotion detection and recognition remains a major task in computer vision (CV) and artificial intelligence (AI). Several algorithms have been proposed in the literature to recognize and identify the many sorts of emotions. In this paper, a modified Viola-Jones method is introduced to provide a robust approach capable of detecting and identifying human feelings such as anger, sadness, desire, surprise, anxiety, disgust, and neutrality in real time. The technique captures real-time pictures and then extracts the characteristics of the facial image to identify emotions very accurately. In this method, several feature extraction techniques, such as the gray-level co-occurrence matrix (GLCM), local binary pattern (LBP), and robust principal component analysis (RPCA), are applied to identify the distinct mood states, which are then categorized using a convolutional neural network (CNN) classifier. The obtained outcome demonstrates that the proposed method outperforms current human emotion recognition techniques in terms of recognition rate.
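As an illustration of the GLCM feature extraction mentioned above (the displacement and quantization levels are arbitrary choices for this sketch, not values from the paper):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one displacement (dx, dy):
    counts how often gray level i occurs with level j at that offset,
    normalised to a joint probability. img must hold ints < levels."""
    h, w = img.shape
    M = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def glcm_contrast(M):
    """Haralick contrast statistic: sum of (i - j)^2 * P(i, j)."""
    i, j = np.mgrid[:M.shape[0], :M.shape[1]]
    return float(((i - j) ** 2 * M).sum())
```

Scalar statistics such as contrast, energy, and homogeneity computed from the GLCM form the texture feature vector that is then passed to the classifier.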
Neural Network based Supervised Self Organizing Maps for Face Recognition (ijsc)
The word biometrics refers to the use of physiological or biological characteristics of humans to recognize and verify the identity of an individual. The face is one of the human biometrics suitable for passive identification, with uniqueness and stability. In this manuscript we present a new face-based biometric system based on neural-network-supervised self-organizing maps (SOM). We name our method SOM-F. We show that the proposed SOM-F method improves the performance and robustness of recognition, apply the method to a variety of datasets, and report the results.
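A minimal unsupervised SOM training loop, for orientation only; the paper's supervised SOM-F variant additionally uses label information, which is not modeled here, and the grid size and schedules are arbitrary:

```python
import numpy as np

def som_train(X, grid=(4, 4), iters=200, lr0=0.5, sigma0=1.5,
              rng=np.random.default_rng(0)):
    """Minimal SOM: repeatedly pick a sample, find its best-matching unit
    (BMU), and pull the BMU and its grid neighbours toward the sample,
    with linearly decaying learning rate and neighbourhood radius."""
    gh, gw = grid
    W = rng.standard_normal((gh, gw, X.shape[1]))
    coords = np.stack(np.mgrid[:gh, :gw], axis=-1).astype(float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 1e-3
        d = np.linalg.norm(W - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighbourhood on the 2-D grid around the BMU
        g = np.exp(-np.linalg.norm(coords - np.array(bmu), axis=2) ** 2
                   / (2 * sigma ** 2))
        W += lr * g[:, :, None] * (x - W)
    return W

def som_bmu(W, x):
    """Grid coordinates of the unit whose weight vector is closest to x."""
    return np.unravel_index(np.argmin(np.linalg.norm(W - x, axis=2)),
                            W.shape[:2])
```

After training, recognition reduces to comparing which map units fire for a probe face versus the gallery faces.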
Facial recognition based on enhanced neural networkIAESIJAI
Accurate automatic face recognition (FR) has only recently become a practical goal of biometrics research. Detection and recognition are the primary steps for identifying faces in this research, and the Viola-Jones algorithm is implemented to detect faces in images. This paper presents a neural network solution called modified bidirectional associative memory (MBAM). The basic idea is to extract the image of a human face, enter it into the MBAM, and identify it; the output ID for the face image should match the ID assigned to that image in the training phase. Tests were conducted on the suggested model using 100 images. Results show that FR accuracy is 100% for all images used, while accuracy after adding noise varies between images according to the noise ratio. Recognition results for mobile camera images were more satisfactory than those for the Face94 dataset.
A study of techniques for facial detection and expression classificationIJCSES Journal
Automatic recognition of facial expressions is an important component of human-machine interfaces and has attracted research attention since the 1990s. Although humans recognize faces without effort or delay, recognition by a machine is still a challenge: faces are highly dynamic in orientation, lighting, scale, expression, and occlusion. Applications lie in fields such as user authentication, person identification, video surveillance, information security, and data privacy. Approaches to facial recognition fall into two categories: holistic-based and feature-based. Holistic-based methods treat the image as one entity without isolating different regions of the face, whereas feature-based methods identify certain points on the face such as the eyes, nose, and mouth. In this paper, facial expression recognition is analyzed with various methods of facial detection, facial feature extraction, and classification.
Thermal Imaging Emotion Recognition final report 01Ai Zhang
This document is a thesis submitted by Ai Zhang to the University of Wollongong for a Bachelor of Engineering degree. The thesis investigates using thermal imaging to recognize human emotions from facial expressions. Zhang reviews previous research on visible light and infrared-based emotion recognition. Methods examined include feature extraction using PCA, segmentation, and classifying expressions with SVM or neural networks. Zhang proposes a new method using 2D wavelet transforms to extract features from thermal images and classify expressions with k-nearest neighbors. The algorithm is tested on a facial expression database and shows potential for infrared emotion recognition.
Hybrid model for detection of brain tumor using convolution neural networksCSITiaesprime
A brain tumor is the development of aberrant brain cells, some of which may turn cancerous. Magnetic resonance imaging (MRI) scans are the most common technique for finding brain tumors, and information about the aberrant tissue growth in the brain is discernible from them. In numerous research papers, machine learning and deep learning algorithms are used to detect brain tumors; applied to MRI pictures, they forecast a brain tumor in extremely little time, and better accuracy makes it easier to treat patients. The radiologist can make speedy decisions because of this forecast. The proposed work creates a hybrid convolutional neural network (CNN) model using a CNN for feature extraction and logistic regression (LR). The pre-trained model visual geometry group 16 (VGG16) is used for feature extraction; to reduce complexity and the number of parameters to train, the last eight layers of VGG16 were eliminated. From this transformed model the features are extracted as a vector array and fed into different machine learning classifiers such as support vector machine (SVM), naïve bayes (NB), LR, extreme gradient boosting (XGBoost), AdaBoost, and random forest for training and testing. The performance of the different classifiers is compared, and the CNN-LR hybrid combination outperformed the remaining classifiers. The recall, precision, F1-score, and accuracy of the proposed CNN-LR model are 94%, 94%, 94%, and 91% respectively.
Implementing lee's model to apply fuzzy time series in forecasting bitcoin priceCSITiaesprime
Over time, cryptocurrencies like Bitcoin have attracted investors' and speculators' interest. Bitcoin's dramatic rise in value in recent years has caught the attention of many who see it as a promising investment asset. After all, Bitcoin investment is inseparable from the price volatility that investors must mitigate. This research aims to use Lee's Fuzzy Time Series approach to forecast the price of Bitcoin. Lee's Fuzzy Time Series is a time series analysis method for getting around ambiguity and uncertainty in time series data, first introduced by Ching-Cheng Lee in his research on time series prediction; it is a development of several previous fuzzy time series (FTS) models, namely those of Song and Chissom and of Cheng and Chen. According to most previous studies, Lee's model conveys more precise forecasting results than the classic FTS models. This study used first and second orders, obtaining error values of 5.419% for the first order and 4.042% for the second order, which means the forecasting results are excellent. However, only the first order can be used to predict the next period's Bitcoin price: in the second order, the resulting relations for the next period have no groups in their fuzzy logical relationship group (FLRG), so the price in the next period cannot be predicted. This study contributes considerations for investors and the general public in deciding whether to keep, sell, or purchase cryptocurrencies.
Similar to Feature extraction and classification methods of facial expression: A survey
This document presents a novel fuzzy feature extraction method using Local Binary Patterns (LBP) for face recognition. It begins with an introduction to face recognition challenges and common approaches. It then describes using LBP to extract local features from divided image windows. Features are extracted by computing membership values based on pixel intensities and taking the product with central pixel value. Local and central pixel information values are combined as the total information for classification using Support Vector Machine (SVM) or K-Nearest Neighbor classifiers. Experimental results on two databases show the proposed approach achieves better recognition rates than other methods. The conclusion is that accounting for both central and neighborhood pixel information is effective for images with expression, illumination and pose variations.
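As background for the LBP-based feature extraction described above, the plain (non-fuzzy) 3x3 LBP operator can be sketched as follows. This is an illustrative NumPy version of the standard operator, not the paper's fuzzy variant:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern: each interior pixel is encoded as an
    8-bit code by thresholding its 8 neighbors against the center value."""
    img = np.asarray(img, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # Neighbor offsets in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the center pixels.
        neighbor = img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
        code |= (neighbor >= center).astype(np.int32) << bit
    return code
```

A histogram of these codes over the image (or over sub-windows, as in the paper) then serves as the texture feature vector.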
Face Recognition Using Gabor features And PCAIOSR Journals
This document summarizes a research paper on face recognition using Gabor features and principal component analysis (PCA). It begins by providing background on face recognition and discusses challenges like lighting, pose, and orientation. It then describes preprocessing faces using Gabor wavelets to extract discriminative features and reduce variations. PCA is used to further reduce the dimensionality of features into principal components. These components are used for classification, with nearest neighbor classification tested on the Yale face database. Results show the proposed approach improves recognition rates compared to Euclidean distance measures.
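The PCA dimensionality-reduction step mentioned above can be illustrated with a small NumPy sketch; the `pca_project` name and the SVD route are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components, computed
    via SVD of the mean-centered data matrix."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are the principal axes, ordered by explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]
    return Xc @ components.T, components, mean
```

Classification then operates on the low-dimensional projections instead of the raw (Gabor-filtered) pixels.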
Selective local binary pattern with convolutional neural network for facial ...IJECEIAES
Variation in images in terms of head pose and illumination is a challenge in facial expression recognition. This research presents a hybrid approach that combines the conventional and deep learning, to improve facial expression recognition performance and aims to solve the challenge. We propose a selective local binary pattern (SLBP) method to obtain a more stable image representation fed to the learning process in convolutional neural network (CNN). In the preprocessing stage, we use adaptive gamma transformation to reduce illumination variability. The proposed SLBP selects the discriminant features in facial images with head pose variation using the median-based standard deviation of local binary pattern images. We experimented on the Karolinska directed emotional faces (KDEF) dataset containing thousands of images with variations in head pose and illumination and Japanese female facial expression (JAFFE) dataset containing seven facial expressions of Japanese females’ frontal faces. The experiments show that the proposed method is superior compared to the other related approaches with an accuracy of 92.21% on KDEF dataset and 94.28% on JAFFE dataset.
Ensemble-based face expression recognition approach for image sentiment anal...IJECEIAES
The document presents an ensemble-based facial expression recognition (FER) model for image sentiment analysis. It combines three classification models - a customized convolutional neural network (CNN), ResNet50, and InceptionV3. The predictions from the three models are averaged using an ensemble classifier method to determine the final classification. The model is trained and tested on the FER-2013 dataset containing uncontrolled images. Experimental results show the ensemble model outperforms individual models in classifying some expressions like happy and neutral, achieving accuracies of 91.7% and 81.7% respectively. However, for other expressions like disgust and anger, ResNet50 performs better. The ensemble model achieves an overall accuracy of 72.3% for FER.
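The ensemble step described above — averaging the softmax outputs of several models before taking the argmax — reduces to a few lines. A hypothetical NumPy sketch:

```python
import numpy as np

def ensemble_average(prob_lists):
    """Average per-model class probabilities and pick the argmax class.
    prob_lists: list of (n_samples, n_classes) softmax outputs."""
    probs = np.mean(np.stack(prob_lists, axis=0), axis=0)
    return probs, probs.argmax(axis=1)
```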
1) The document proposes four techniques for facial feature extraction - 2DPCA, LDA, KPCA, and KFA. It uses these techniques to extract features from images in the ORL database, applies LDA for classification, and recognizes faces using a feedforward neural network (FFNN).
2) It compares the recognition rates of the different feature extraction techniques when using 10, 20, 30, and 40 hidden neurons in the FFNN. The technique that performs best is KPCA, achieving a maximum recognition rate of 94.82% using 20 hidden neurons.
3) In conclusion, the goal of implementing and comparing different feature extraction techniques with an FFNN classifier on face images was achieved, with KPCA providing the best recognition rate.
Face Recognition using Improved FFT Based Radon by PSO and PCA TechniquesCSCJournals
Abstract Face recognition is one of the problems that can be handled very well using a hybrid technique or mixed transform rather than a single technique, performing well even for large problem sizes. In this paper we present the use of the Fourier-based Radon transform, improved by particle swarm optimization (PSO). PSO is used here to select the optimum directions (projection angles) that achieve a very high recognition rate with fast computation. The number of directions selected using PSO is smaller than the number required by the ordinary Radon transform, which leads to a small number of features. These features are reduced further using PCA to produce a low-dimensional representation of the faces in the database. Our method has been applied to the ORL database and achieves a 100% recognition rate.
Local Descriptor based Face Recognition SystemIRJET Journal
This document describes a local descriptor-based face recognition system that uses the Asymmetric Region Local Binary Pattern (AR-LBP) operator along with Principal Component Analysis (PCA) for facial expression recognition. The proposed AR-LBP operator addresses limitations of existing LBP operators in terms of scale, feature histogram length, and discriminability. The system divides input face images into regions, extracts AR-LBP histograms from each region, and concatenates them into a feature vector. It was evaluated on three datasets and achieved recognition accuracies of 96.43%, 97.14%, and 86.67%, respectively. Evaluation using different similarity metrics found that Mahalanobis Cosine distance performed best. Experiments varied grid and operator sizes.
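The region-wise histogram construction described above — divide the coded image into a grid, histogram each region, and concatenate — can be sketched as follows. The grid layout and bin count are illustrative assumptions, and a plain LBP code image stands in for the paper's AR-LBP:

```python
import numpy as np

def regional_histogram(code_img, grid=(2, 2), n_bins=256):
    """Split a coded image into a grid of regions, histogram the codes in
    each region, and concatenate the histograms into one feature vector."""
    h, w = code_img.shape
    gy, gx = grid
    feats = []
    for i in range(gy):
        for j in range(gx):
            block = code_img[i * h // gy:(i + 1) * h // gy,
                             j * w // gx:(j + 1) * w // gx]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist)
    return np.concatenate(feats)
```

Keeping per-region histograms (rather than one global histogram) preserves coarse spatial information, which is what makes this representation discriminative for faces.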
A face recognition system using convolutional feature extraction with linear...IJECEIAES
Face recognition is one of the important biometric authentication research areas for security purposes in fields such as pattern recognition and image processing. However, human face recognition poses a major problem for machine learning and deep learning techniques, since input images vary with pose, lighting conditions, expression, age, and illumination, which degrades recognition accuracy. In the present research, the resolution of image patches is reduced by the max pooling layer in a convolutional neural network (CNN), which also makes the model more robust than the traditional feature extraction technique called local multiple pattern (LMP). The extracted features are fed into linear collaborative discriminant regression classification (LCDRC) for final face recognition. Due to optimization using the CNN in LCDRC, the distance between classes is maximized and the distance between features inside a class is reduced. The results show that CNN-LCDRC achieved 93.10% and 87.60% mean recognition accuracy, where traditional LCDRC achieved 83.35% and 77.70%, on the ORL and YALE databases respectively for training number 8 (i.e., 80% training and 20% testing data).
Technique for recognizing faces using a hybrid of moments and a local binary...IJECEIAES
The face recognition process is widely studied, and researchers have made great achievements, but there are still many challenges facing the applications of face detection and recognition systems. This research contributes to overcoming some of those challenges and reducing the gaps in previous systems for identifying and recognizing faces of individuals in images. The research deals with increasing the precision of recognition using a hybrid method of moments and local binary patterns (LBP). The moment technique computed several critical parameters, which were used as descriptors and classifiers to recognize faces in images. The LBP technique has three phases: representation of a face, feature extraction, and classification. The face in the image was subdivided into variable-size blocks to compute their histograms and discover their features. Fidelity criteria were used to estimate and evaluate the findings. The proposed technique used the standard Olivetti Research Laboratory dataset in the training and recognition phases. The experiments showed that adopting the hybrid technique (moments and LBP) recognized the faces in images and provided a suitable representation for identifying them. The proposed technique increases accuracy, robustness, and efficiency, and the results show an enhancement in recognition precision by 3%, reaching 98.78%.
Possibility fuzzy c means clustering for expression invariant face recognitionIJCI JOURNAL
Face, being the most natural method of identification for humans, is one of the most significant biometric modalities, and various methods to achieve efficient face recognition have been proposed. However, changes in the face owing to different expressions, pose, makeup, illumination, and age bring about marked variations in the facial image. These changes inevitably occur, can be controlled only to a certain degree, and adversely impact the performance of any face recognition system. This paper proposes a strategy to improve the classification methodology in face recognition by using Possibility Fuzzy C-Means clustering (PFCM). This clustering technique was chosen for face recognition due to properties such as outlier insensitivity, which make it a suitable candidate for designing robust applications. PFCM is a hybridization of the Possibilistic C-Means (PCM) and Fuzzy C-Means (FCM) clustering algorithms; it is a robust clustering technique, especially notable for its noise insensitivity, and it resolves the coincident-clusters problem faced by other clustering techniques. The technique can therefore be used to increase the overall robustness of a face recognition system, increasing its invariance and making face a reliably usable biometric modality.
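For reference, the Fuzzy C-Means membership update that PFCM builds on assigns each sample a degree of membership in every cluster, inversely related to its distance to each center relative to the others. A minimal NumPy sketch of the standard FCM formula (the possibilistic terms that PFCM adds are omitted):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy C-Means membership update:
    u[i, k] = 1 / sum_j (d_ik / d_ij) ** (2 / (m - 1)),
    where d_ik is the distance from sample i to center k."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    d = np.maximum(d, 1e-12)  # avoid division by zero at a center
    power = 2.0 / (m - 1.0)
    # ratio[i, k, j] = d_ik / d_ij; summing over j gives the denominator.
    ratio = d[:, :, None] / d[:, None, :]
    return 1.0 / np.sum(ratio ** power, axis=2)
```

Each row of the membership matrix sums to one; the full FCM algorithm alternates this update with recomputing the centers as membership-weighted means.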
Feature extraction comparison for facial expression recognition using adapti...IJECEIAES
Facial expression recognition is an important part of the field of affective computing. Automatic analysis of human facial expression is a challenging problem with many applications. Most existing automated systems for facial expression analysis attempt to recognize a few prototypical emotional expressions such as anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. This paper aims to compare feature extraction methods that are used to detect human facial expression. The study compares the gray level co-occurrence matrix, local binary pattern, and facial landmark (FL) on two facial expression datasets, namely Japanese female facial expression (JFFE) and extended Cohn-Kanade (CK+). In addition, we also propose an enhancement of the extreme learning machine (ELM) method, adaptive ELM (aELM), which adaptively selects the best number of hidden neurons to reach its maximum performance. The result is that our proposed method slightly improves the performance of the basic ELM method with the feature extractions mentioned above, obtaining a maximum mean accuracy of 88.07% on the CK+ dataset and 83.12% on the JFFE dataset with FL feature extraction.
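A basic (non-adaptive) extreme learning machine, which aELM extends, is simple to sketch: a fixed random hidden layer followed by a least-squares solve for the output weights, with no backpropagation. The following NumPy illustration is a hedged sketch, not the paper's aELM:

```python
import numpy as np

def elm_train(X, Y, n_hidden, seed=0):
    """Extreme learning machine: random input weights and biases,
    tanh hidden layer, output weights solved by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random hidden features
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The adaptive variant in the paper would additionally search over `n_hidden` and keep the best-performing setting.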
This document summarizes a research paper on face recognition using Gabor features and PCA. It begins with an introduction to face recognition and discusses challenges like lighting, pose, and orientation. It then describes how the proposed system uses Gabor wavelets for preprocessing to reduce variations from pose, lighting, etc. Principal component analysis (PCA) is used to extract low dimensional and discriminating feature vectors from the preprocessed images. These feature vectors are then used for classification with k-nearest neighbors. The proposed system was tested on the Yale face database containing 100 images of 10 subjects with variable illumination and expressions.
This document provides a synopsis for a project on emotion detection from facial expressions. It outlines the objectives to develop an automatic emotion detection system using machine learning algorithms to analyze facial expressions in video frames and compare them to a database to classify emotions. The technical details discuss using a facial tracker and extracting features to represent expressions. Classification algorithms like KNN, SVM, and voting will be used for recognition and mapping expressions to emotions. Future work may include 3D processing, speech recognition, and detecting micro-expressions.
COMPRESSION BASED FACE RECOGNITION USING TRANSFORM DOMAIN FEATURES FUSED AT M...sipij
Face images, a physiological biometric trait, are used to identify a person effectively. In this paper, we propose compression-based face recognition using transform-domain features fused at the matching level. The 2-D images are converted into 1-D vectors using the mean to compress the number of pixels. The Fast Fourier Transform (FFT) and Discrete Wavelet Transform (DWT) are used to extract features, with the low- and high-frequency coefficients of the DWT concatenated to obtain the final DWT features. The performance parameters are computed by comparing database and test image features of FFT and DWT using Euclidean distance (ED), and the FFT and DWT results are fused at the matching level to obtain better results. It is observed that the performance of the proposed method is better than that of existing methods.
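The matching step described above — comparing transform-domain feature vectors by Euclidean distance — can be sketched as follows, taking FFT magnitudes as the features. The helper names and the choice of `k` coefficients are illustrative assumptions:

```python
import numpy as np

def fft_features(vec, k=8):
    """Magnitudes of the first k FFT coefficients of a 1-D signal,
    used as a compact frequency-domain feature vector."""
    return np.abs(np.fft.fft(vec))[:k]

def match(test_feat, db_feats):
    """Index of the database entry closest in Euclidean distance."""
    dists = np.linalg.norm(db_feats - test_feat, axis=1)
    return int(np.argmin(dists))
```

Matching-level fusion would compute such distances for both FFT and DWT features and combine them (e.g., by a weighted sum) before taking the argmin.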
Human emotion detection and classification using modified Viola-Jones and con...IAESIJAI
Facial expression is a kind of nonverbal communication that conveys information about a person's emotional state. Human emotion detection and recognition remains a major task in computer vision (CV) and artificial intelligence (AI), and several algorithms have been proposed in the literature to recognize and identify the many sorts of emotions. In this paper, a modified Viola-Jones method is introduced to provide a robust approach capable of detecting and identifying human feelings such as anger, sadness, desire, surprise, anxiety, disgust, and neutrality in real time. The technique captures real-time pictures and then extracts the characteristics of the facial image to identify emotions accurately. Several feature extraction techniques, including the gray-level co-occurrence matrix (GLCM), local binary pattern (LBP), and robust principal component analysis (RPCA), are applied to identify the distinct mood states, which are then categorized using a convolutional neural network (CNN) classifier. The obtained results demonstrate that the proposed method outperforms current human emotion recognition techniques in terms of emotion recognition rate.
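Of the feature extraction techniques named above, the gray-level co-occurrence matrix is easy to illustrate: it counts how often each pair of gray levels co-occurs at a fixed spatial offset. A minimal sketch, where the offset and the number of levels are illustrative choices and the image is assumed already quantized to `levels` gray levels:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix: M[i, j] counts how often gray
    level i is followed by gray level j at offset (dy, dx)."""
    img = np.asarray(img)
    M = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                M[img[y, x], img[y2, x2]] += 1
    return M
```

Texture descriptors such as contrast, energy, and homogeneity are then computed as simple statistics of the (normalized) matrix.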
Sentiment analysis of online licensing service quality in the energy and mine...CSITiaesprime
The Ministry of Energy and Mineral Resources of the Republic of Indonesia regularly assessed public satisfaction with its online licensing services. Users rated their satisfaction at 3.42 on a scale of 4, below the organization's average of 3.53. Evaluating public service performance is crucial for quality improvement. Previous research relied solely on survey data to assess public satisfaction; this study goes further by analyzing user feedback in text form from an online licensing application to identify negative aspects of the service that need enhancement. The dataset spanned September 2019 to February 2023, with 24,112 entries. Classification methods were compared on accuracy among decision tree, random forest, naive bayes, stochastic gradient descent, logistic regression (LR), and k-nearest neighbor. The text data was converted into numerical form using CountVectorizer and term frequency-inverse document frequency (TF-IDF) techniques, along with unigrams and bigrams for dividing sentences into word segments. LR with bigram CountVectorizer ranked highest, with 89% average precision, F1-score, and recall, compared to the other five classification methods. The sentiment analysis polarity level was 36.2% negative. Negative sentiment revealed public expectations that the ministry improve the top three aspects: system, mechanism, and procedure; infrastructure and facilities; and service specification product types.
Trends in sentiment of Twitter users towards Indonesian tourism: analysis wit...CSITiaesprime
This research analyzes the sentiment of Twitter users regarding tourism in Indonesia using the keyword "wonderful Indonesia" as the tourism promotion identity. This study aims to gain a deeper understanding of the public sentiment towards "wonderful Indonesia" through social media data analysis. The novelty obtained provides new insights into valuable information about Indonesian tourism for the government and relevant stakeholders in promoting Indonesian tourism and enhancing tourist experiences. The method used is tweet analysis and classification using the K-nearest neighbor (KNN) algorithm to determine the positive, neutral, or negative sentiment of the tweets. The classification results show that the majority of tweets (65.1% out of a total of 14,189 tweets) have a neutral sentiment, indicating that most tweets with the "wonderful Indonesia" tagline are related to advertising or promoting Indonesian tourism. However, the percentage of tweets with positive sentiment (33.8%) is higher than those with negative sentiment (1.1%). This study also achieved training results with an accuracy rate of 98.5%, precision of 97.6%, recall of 98.5%, and F1-score of 98.1%. However, reassessment is needed in the future as Twitter users' sentiments can change along with the development of Indonesian tourism itself.
The impact of usability in information technology projectsCSITiaesprime
Achieving success in information system and technology (IS/IT) projects is a complex and multifaceted endeavour that has proven difficult. The literature is replete with project failures, but identifying the critical success factors contributing to favourable outcomes remains challenging. The triad of Time-Cost-Quality is widely accepted as key to achieving project success. While time and cost can be quantified and measured, quality is a more complex construct that requires different metrics and measurement approaches. Utilizing the PRISMA Methodology, this study initiated a comprehensive search across literature databases and identified 142 relevant articles pertaining to the specified keywords. A subset of ten articles was deemed suitable for further examination through rigorous screening and eligibility assessments. Notably, a primary finding indicates that despite recognizing usability as a critical element, there is a tendency to neglect usability enhancements due to time and resource constraints. Regarding the influence of usability on project success, the active involvement of end-users emerges as a pivotal factor. Moreover, fostering the enhancement of Human Computer Interaction (HCI) knowledge within the development team is essential. Failure to provide good usability can lead to project failure, undermining user satisfaction and adoption of the technology.
Collecting and analyzing network-based evidenceCSITiaesprime
Since nearly the beginning of the Internet, malware has been a significant deterrent to productivity for end users, both personal and business related. Due to the pervasiveness of digital technologies in all aspects of human lives, it is increasingly unlikely that a digital device is involved as goal, medium or simply ‘witness’ of a criminal event. Forensic investigations include collection, recovery, analysis, and presentation of information stored on network devices and related to network crimes. These activities often involve wide range of analysis tools and application of different methods. This work presents methods that helps digital investigators to correlate and present information acquired from forensic data, with the aim to get a more valuable reconstructions of events or action to reach case conclusions. Main aim of network forensic is to gather evidence. Additionally, the evidence obtained during the investigation must be produced through a rigorous investigation procedure in a legal context.
Agile adoption challenges in insurance: a systematic literature and expert re...CSITiaesprime
The drawback of agile is struggled to function in large businesses like banks, insurance companies, and government agencies, which are frequently associated with cumbersome processes. Traditional software development techniques were cumbersome and pay more attention to standardization and industry, this leads to high costs and prolonged costs. The insurance company does not embrace change and agility may find themselves distracted and lose customers to agile competitors who are more relevant and customer-centric. Thus, to investigate the challenges and to recognize the prospect of agile adoption in insurance industry, a systematic literature review (SLR) in this study was organized and validated by expert review from professional with expertise in agile. The project performance domain from project management body of knowledge (PMBOK) was applied to align the challenges and the solution. Academicians and practitioners can acquire the perception and knowledge in having exceeded understanding about the challenge and solution of agile adoption from the results.
Exploring network security threats through text mining techniques: a comprehe...CSITiaesprime
In response to the escalating cybersecurity threats, this research focuses on leveraging text mining techniques to analyze network security data effectively. The study utilizes user-generated reports detailing attacks on server networks. Employing clustering algorithms, these reports are grouped based on threat levels. Additionally, a classification algorithm discerns whether network activities pose security risks. The research achieves a noteworthy 93% accuracy in text classification, showcasing the efficacy of these techniques. The novelty lies in classifying security threat report logs according to their threat levels. Prioritizing high-risk threats, this approach aids network management in strategic focus. By enabling swift identification and categorization of network security threats, this research equips organizations to take prompt, targeted actions, enhancing overall network security.
An LSTM-based prediction model for gradient-descending optimization in virtua...CSITiaesprime
A virtual learning environment (VLE) is an online learning platform that allows many students, even millions, to study according to their interests without being limited by space and time. Online learning environments have many benefits, but they also have some drawbacks, such as high dropout rates, low engagement, and students' self-regulated behavior. Evaluating and analyzing the students' data generated from online learning platforms can help instructors to understand and monitor students learning progress. In this study, we suggest a predictive model for assessing student success in online learning. We investigate the effect of hyperparameters on the prediction of student learning outcomes in VLEs by the long short-term memory (LSTM) model. A hyperparameter is a parameter that has an impact on prediction results. Two optimization algorithms, adaptive moment estimation (Adam) and Nesterov-accelerated adaptive moment estimation (Nadam), were used to modify the LSTM model's hyperparameters. Based on the findings of research done on the optimization of the LSTM model using the Adam and Nadam algorithm. The average accuracy of the LSTM model using Nadam optimization is 89%, with a maximum accuracy of 93%. The LSTM model with Nadam optimisation performs better than the model with Adam optimisation when predicting students in online learning.
Generalization of linear and non-linear support vector machine in multiple fi...CSITiaesprime
Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. They belong to a family of generalized linear classifiers. In other terms, SVM is a classification and regression prediction tool that uses machine learning theory to maximize predictive accuracy. In this article, the discussion about linear and non-linear SVM classifiers with their functions and parameters is investigated. Due to the equality type of constraints in the formulation, the solution follows from solving a set of linear equations. Besides this, if the under-consideration problem is in the form of a non-linear case, then the problem must convert into linear separable form with the help of kernel trick and solve it according to the methods. Some important algorithms related to sentimental work are also presented in this paper. Generalization of the formulation of linear and non-linear SVMs is also open in this article. In the final section of this paper, the different modified sections of SVM are discussed which are modified by different research for different purposes.
Designing a framework for blockchain-based e-voting system for LibyaCSITiaesprime
A transition to democratic rule is considered the first step down a long road towards Libya’s recovery and prosperity. Thus, it strives to improve the country’s elections by introducing new technologies. A blockchain is a distributed ledger that is characterised by independence and security. Therefore, it has been widely applied in various fields ranging from credit encryption and digital currency. With the development of internet technology, electronic voting (E-voting) systems have been greatly popularised. However, they suffer from various security threats, which create a sense of distrust among existing systems. Integrating blockchain with online elections is a promising trend, which could lead to make an election transparent, immutable, reliable, and more secure. In this paper, we present a literature review and a case analysis of blockchain technology. Moreover, a framework for an E-voting system based on blockchain is proposed. The methodology is adopted on the basis of three activities, they are identification of the relevant literature about E-voting, system modelling, and the determination of suitable technological tools. The framework is secure and reliable. Thus, it could help increase the number of voters and ensure a high level of participation, as well as facilitate free and fair electoral processes.
Development reference model to build management reporter using dynamics great...CSITiaesprime
The digital technology transformation impacts changes in business patterns that require companies to innovate to act appropriately in making strategic decisions quickly, precisely, and accurately to increase efficiency, be practical company performance, and impacts changes in business patterns that require companies to innovate to act appropriately in making strategic decisions quickly to improve the performance. An enterprise resource planning (ERP) system is one step toward achieving performance. ERP system is one step to achieving performance. ERP system is essential for companies to automate the efficiency of business processes. The decisions from management in implementing the ERP system are necessary for ERP implementation to be successful. However, in practice, companies still experience complexity. For that, it needs to be considered related a business process reference model is essential to enhance efficiency in implementing the ERP used. This research discusses the business process reference model based on the ERP dynamics great plain (GP) application aggregated using management reporter (MR) to help users better understand the practical overview. The methodology utilizes a reference model based on Microsoft Dynamics GP guidelines with a business process redesign approach. This contributes to developing business processes to help users understand using the ERP dynamics GP application.
Social media and optimization of the promotion of Lake Toba tourism destinati...CSITiaesprime
Tourism is one of the largest contributors to Indonesia's foreign exchange earnings, surpassing taxation, energy, and gas. This study seeks to investigate the use of social media to optimize the promotion of Lake Toba as a tourist destination, which has been impacted by the COVID-19 pandemic. Using interview techniques and live field observations, it was discovered that social media, particularly the Instagram platform, play a significant role in promoting Lake Toba tourism. The Department of Culture and Tourism of the North Sumatra Province uses landscape photography as its primary promotion method, which has proved to be more effective and interesting than conventional methods such as the distribution of brochures or the use of manuals. The capture procedure and techniques for landscape photography were carried out by professional photographers in collaboration with the Department of Culture and Tourism of the North Sumatra Province. In addition to providing information, tourism sumut's Instagram account functions as a platform to raise public awareness about Lake Toba tourism and as a promotional medium for North Sumatra's tourist attractions on an international scale. Department of Culture and Tourism of the North Sumatra Province collaborates with travel agencies and local communities to disseminate Lake Toba tourism information.
Caring jacket: health monitoring jacket integrated with the internet of thing...CSITiaesprime
One of the policies that have been made by the World Health Organization (WHO) and the Indonesian government during this COVID-19 pandemic, is to use an oximeter for self-isolation patients. The oximeter is used to monitor the patient if happy hypoxia which is a silent killer, happens to the patient. To maintain body endurance, exercise is needed by COVID-19 patients, but doing too much exercise can also cause decreased immunity. That’s why fatigue level and exercise intensity need to be monitored. When exercising, social distancing protocol should be also reminded because can lower COVID-19 spreading up to 13.6%. To solve this issue, the Caring Jacket is proposed which is a health monitoring jacket integrated with an IoT system. This jacket is equipped with some sensors and the global positioning system (GPS) for tracking. The data from the test showed the temperature reading accuracy is up to 99.38%, the oxygen rate up to 97.31%, the beats per minute (BPM) sensor up to 97.82%, and the precision of all sensors is 97.00% compared with a calibrated device.
Net impact implementation application development life-cycle management in ba...CSITiaesprime
Digital transformation in the banking sector creates a lot of demand for application development, either new development or application enhancement. Continuous demand for reimagining, revamping, and running applications reliably needs to be supported by collaboration tools. Several big banks in Indonesia use Atlassian products, including Jira, Confluence, Bamboo, Bitbucket, and Crowd, to support strategic company projects. We need to measure the net impact of application development life-cycle management (ADLM) as a collaboration tool. Using the deLone and McLean model, process questionnaire data from banks in Indonesia that use ADLM. Processing data using structural equation modeling (SEM), multiple variables are analyzed statistically to establish, estimate, and test the causation model. The conclusions highlight that system quality strongly affected only User Satisfaction (p=0.049 and β=0.39). Information quality strongly affected use (p=0.001 and β=0.84) and strongly affected user satisfaction (p=0.169 and β=0.28). Service quality strongly affected only use (p=0.127 and β=0.31). Conclusion research verifies the information system's achievement approach described by DeLone and McLean. Importantly, it was discovered that system usability and quality were key indicators of ADLM success. To fulfill their objective, ADLM must be developed in a way that is simple to use, adaptable, and functional.
Circularly polarized metamaterial Antenna in energy harvesting wearable commu...CSITiaesprime
When battery powered sensors are spread out in places that are sometimes hard to reach, sustaining them become difficult. Therefore, to develop this technology on a large scale such as in the internet of things (IoT) scenario, it is necessary to figure out how to power them. The proffered solution in this work, is to get energy from the environment using energy harvesting Antennas. This work presents a wearable circular polarized efficient receiving and transmitting sensors for medical, IoT, and communication systems at the frequency range of WLAN, and GSM from 900 MHz up to 6 GHz. Using a cascaded system block of a circularly polarized Antenna, a rectifier and t-matching network, the design was successfully simulated. A DC charging voltage of 2.8V was achieved to power-up batteries of the wearable and IoT sensors. The major contribution of this work is the tri-band Antenna system which is able to harvest reflected Wi-Fi frequencies and also GSM frequencies combined in a miniaturized manner. This innovative configuration is a step forward in building devices with over 80% duty cycle.
An ensemble approach for the identification and classification of crime tweet...CSITiaesprime
Twitter is a famous social media platform, which supports short posts limited to 280 characters. Users tweet about many topics like movie reviews, customer service, meals they just ate, and awareness posts. Tweets carrying information about some crime scenes are crime tweets. Crime tweets are crucial and informative and separate classification is required. Identification and classification of crime tweets is a challenging task and has been the researcher’s latest interest. The researchers used different approaches to identify and classify crime tweets. This research has used an ensemble approach for the identification and classification of crime tweets. Tweepy and Twint libraries were used to collect datasets from Twitter. Both libraries use contrasting methods for extracting tweets from Twitter. This research has applied many ensemble approaches for the identification and classification of crime tweets. Logistic regression (LR), support vector machine (SVM), k-nearest neighbor (KNN), decision tree (DT), and random forest (RF) Classifier assigned with the weights of 1,2,1,1 and 1 respectively ensemble together by a soft weighted Voting classifier along with term frequency – inverse document frequency (TF-IDF) vectorizer gives the best performance with an accuracy of 96.2% on the testing dataset.
National archival platform system design using the microservice-based service...CSITiaesprime
Archives play a vital function concerning the dynamics of people and nations as an instrument to treasure information in diverse domains of politics, society, economics, culture, science, and technology. The acceleration of digital transformation triggers the implementation of a smart government that supports better public services. The smart government encourages a national archival system to facilitate archive producers and users. The four electronic-based government system (SPBE) factors in the archival sector and open archival information system (OAIS) as a data preservation standard are the benchmarks in developing this study's national archival platform system. An improved service computing system engineering (SCSE) framework adapted to the microservice architecture is used to aid the design of the national archival platform system. The proposed design met the four-factor service design validation of coupling, cohesion, complexity, and reusability. Also, the prototype suggests what resources are needed to put the design into action by passing the performance test of availability measurement.
Antispoofing in face biometrics: A comprehensive study on software-based tech...CSITiaesprime
The vulnerability of the face recognition system to spoofing attacks has piqued the biometric community's interest, motivating them to develop anti-spoofing techniques to secure it. Photo, video, or mask attacks can compromise face biometric systems (types of presentation attacks). Spoofing attacks are detected using liveness detection techniques, which determine whether the facial image presented at a biometric system is a live face or a fake version of it. We discuss the classification of face anti-spoofing techniques in this paper. Anti-spoofing techniques are divided into two categories: hardware and software methods. Hardware-based techniques are summarized briefly. A comprehensive study on software-based countermeasures for presentation attacks is discussed, which are further divided into static and dynamic methods. We cited a few publicly available presentation attack datasets and calculated a few metrics to demonstrate the value of anti-spoofing techniques.
The antecedent e-government quality for public behaviour intention, and exten...CSITiaesprime
An The main objective of the study is to identify the antecedent of leadership quality, public satisfaction and public behaviour intention of e-government service. Also, this study integrated e-government quality to expectation-confirmation model. In order to achieve these goals, observational research was then carried out to collect primary information, using the method of data dissemination and obtaining the opinion of 360 from the public using the e-government service and some of the e-government and software quality experts. The results of the study show that the positive association among the e-government services quality and public perceived usefulness, public expectation confirmation, leadership quality and public satisfaction that also play a positive role on the public behavior intention.
Feature extraction and classification methods of facial expression: A survey
Computer Science and Information Technologies
Vol. 2, No. 1, March 2021, pp. 26~32
ISSN: 2722-3221, DOI: 10.11591/csit.v2i1.p26-32
Journal homepage: http://iaesprime.com/index.php/csit
Feature extraction and classification methods of facial expression: a survey
Moe Moe Htay
Faculty of Computer Science, University of Computer Studies, Mandalay (UCSM), Patheingyi, Myanmar
Article Info

Article history:
Received May 10, 2020
Revised Jun 1, 2020
Accepted Jul 24, 2020

ABSTRACT

Facial expression plays a significant role in affective computing and is one of the non-verbal communication channels for human-computer interaction. Automatic recognition of human affect has become a more challenging and interesting problem in recent years. Facial expressions are significant features for recognizing human emotion in daily life. A facial expression recognition system (FERS) can be developed for applications such as human affect analysis, health care assessment, distance learning, driver fatigue detection, and human-computer interaction. Basically, there are three main components in recognizing a human facial expression: detection of the face or its components, feature extraction from the face image, and classification of the expression. This study surveys methods of feature extraction and classification for FER.
Keywords:
Expression classification
Facial datasets
Facial features
Feature extraction
This is an open access article under the CC BY-SA license.
Corresponding Author:
Moe Moe Htay,
Faculty of Computer Science,
University of Computer Studies, Mandalay (UCSM),
Patheingyi, Myanmar.
Email: moemoehtay@ucsm.edu.mm
1. INTRODUCTION
In the artificial intelligence era, facial expression recognition (FER) is an interesting and challenging task
owing to problems of limited datasets, varying environments, pose, occlusion, person variation, etc. FER
systems have been applied in many areas such as human-computer interaction (HCI), games, data-driven
animation, surveillance, and clinical monitoring [1]. Ekman and Friesen, psychologists from America, defined
six universal facial expressions: fear, happiness, anger, disgust, surprise, and sadness, and also developed the
action-unit-based facial action coding system (FACS) to describe the facial features of expressions [2]. Facial
expressions convey nonverbal communication cues that play a significant role in interpersonal relations. Some
works in the literature add other emotions such as neutral and contempt, as well as many compound facial
emotions. Some researchers employ handcrafted features extracted using algorithms, while others employ
complicated features extracted using deep learning methods. In this paper, we explore feature extraction
methods, feature descriptors, classification methods, methods of feature dimension reduction, frameworks of
facial expression recognition systems, and a comparison of their results. The remainder of the paper is organized
as follows. Section 2 reviews the literature of current FER systems. A typical FER system is shown in Section 3.
After that, two types of facial image features are discussed in Section 4, and Section 5 describes facial databases
for FER systems. Section 6 describes the problem statement of FER systems. In the last section, conclusions and
future work are presented.
2. LITERATURE OF CURRENT FER SYSTEM
Reference [3] used geometric feature extraction, regional local binary pattern (LBP) feature extraction,
fusion of both features using autoencoders, and a self-organizing map (SOM)-based classifier. The average
accuracy was 97.55% on MMI and 98.95% on the CK+ database; the SOM-based classifier is a significant
improvement over SVM, with increases of 3.94% for CK+ and 4.36% for MMI, respectively. Reference [4]
explored multiple feature fusion applying histograms of oriented gradients from three orthogonal planes
(HOG-TOP), with experiments on three datasets: CK+, GEMEP-FERA 2011, and acted facial expressions in
the wild (AFEW) 4.0. Reference [5] presented a FER model using Haar cascades for face component detection
and a neural network (NN) trained on eye features, with mouth features added, on the Japanese JAFFE
database. Compared with Sobel edge detection methods, the proposed method achieved better accuracy;
however, the problems of image illumination and pose remain, as does the need to fully meet theoretical and
practical requirements by integrating other biometric authentication methods and HCI perception methods.
Reference [6] examined an emotion recognition system using hybrid feature descriptors combining spatial bag
of features and spatial scale-invariant feature transform (SBoF-SSIFT) with K-nearest neighbor classifiers.
Codebook construction is applied after feature extraction to represent large feature sets by grouping similar
features into a specified number of clusters. The experiments showed accuracies of 98.33% and 98.5% on the
JAFFE and extended Cohn-Kanade (CK+) datasets, respectively. However, the recognition performance
depends on the number of clusters for codebook generation, the number of detected features, the levels of
image segmentation, and the size of the training dataset. Reference [7] implemented cognition- and
mapped-binary-pattern-based FER using the basic emotion model and the circumplex model on CK+, with 100
images for training and 50 images for testing. In the preprocessing step, unwanted information such as hair,
ears, and background is removed from the facial image. LBP and a pseudo 3D model are used to extract the
facial contours and to segment the face area into sub-regions. To reduce the dimension of the features, a
mapped local binary pattern is employed, followed by two classifiers, SVM and softmax. The results showed
that local features and expressions are correlated; moreover, the two classifiers differ little in performance.
Handling occlusion, complex conditions, and micro-expression recognition is left to future FER systems.
Reference [8] proposed the angled local directional pattern (ALDP) method for texture analysis of facial
expressions with six classifiers, k-NN, SVM, DT, RF, Gaussian NB, and perceptron, on the CK+ dataset. The
facial image is first detected using Haar-like features as in [5], then cropped and normalized. ALDP improved
accuracy to 99% with no preprocessing. Reference [9] proposed grey wolf optimization (GWO) for feature
selection and a GWO-neural network (GWO-NN) for classification. The face parts (eyes, nose, mouth, and
ears) are detected using the Viola-Jones algorithm, and SIFT is then used to extract feature points. Its accuracy
of 89.79% on CK+ is less than that of [8], and 91.22% was achieved overall. Reference [10] proposed a
framework with a high-dimensional combination of appearance and geometric features. The system used deep
sparse autoencoders (DSAE) to learn robust discriminative features and an active appearance model (AAM)
to locate 51 facial landmark points. Three feature descriptors, HoG, gray values, and LBP, are utilized to
describe the local features. The linear dimension reduction method PCA is used to compress the features, and
the resulting map is given as the input of the DSAE. The proposed framework achieved an accuracy of 95.79%
on the CK+ dataset using leave-one-subject-out cross-validation.
Reference [11] presented three models: a differential geometric fusion network (DGFN) with
handcrafted feature extraction, a deep facial sequential network (DFSN) based on a CNN with auto-extracted
features, and DFSN-1, which combines the advantages of DGFN and DFSN by mapping and concatenating
handcrafted and auto-extracted features. DFSN-1 achieved the best performance among the three models on
all of the CK+, Oulu-CASIA, and MMI datasets. Reference [12] used a deep convolutional neural network
(DCNN) with the Caffe framework and a Tesla K20Xm GPU. In preprocessing, the frontal face is detected
and cropped using OpenCV on facial images from CK+ and JAFFE. The experiments achieved 97% accuracy
with leave-one-subject-out cross-validation on CK+ and 98.12% with 10-fold cross-validation on JAFFE.
A further study reviewed 22 local binary pattern variants on the JAFFE and CK databases using the simple
parameter-free nearest neighbor classifier (1-NN). For the JAFFE database, the highest recognition accuracy
of 97.14% was achieved using dLBPα, ELGS, and LTP, while for the CK database the highest recognition rate
of 100% was achieved using the AELTP, BGC3, CSALTP, dLBPα, nLBPd, STS, and WLD descriptors. The
basic LBP descriptor achieved acceptable performance of 95.71% on JAFFE and 99.28% on CK. The study
can be extended to other problems and other datasets.
Reference [13] used a DCNN with data augmentation, cross entropy, and an L2 multi-class SVM. In [14],
weighted center regression adaptive feature mapping (W-CR-AFM) is used for feature distribution and a CNN
for feature training on CK+, the Radboud Faces database (RaFD), the Amsterdam dynamic facial expression
set (ADFES), and a proprietary database. Unlike other papers, spatial normalization and feature enhancement
preprocessing methods are used. The recognition accuracy obtained was 89.84%, 96.27%, and 92.70% for
CK+, RaFD, and ADFES, respectively. Reference [15] addressed the illumination problem of real-world facial
images using the fast Fourier transform and contrast limited adaptive histogram equalization (FFT+CLAHE)
for poor illumination, then applied a merged binary pattern code (MBPC). PCA is used as a method of feature
dimension reduction and k-NN as a classifier on the SFEW dataset. Reference [16] released a new database,
iCV-MEFED, at the FG workshop, and compared a multi-modality CNN with a plain CNN for micro emotion
recognition. The proposed network first extracts visual and geometrical feature information, then concatenates
these into a long vector. The feature vector is fed to a hinge loss layer. Using Caffe, the framework performs
better than the CNN, with a misclassification of 80.212137. Three further works came out of the workshop.
The first-winner method uses a CNN with a geometric representation of landmark displacement, leading to
better results compared with texture-only information. The recognition accuracy achieves 51.84% for seven
expressions and 13.7% for compound emotions, with an average time of 1.57 ms on a GPU or 30 ms on a
CPU [17].
Reference [18] employed a deep emotional attention model using a cross-channel CNN with an added
attention modulator on the bimodal face and body (FABO) benchmark database. The system applies a CNN
to learn the location of facial expressions in a cluttered scene. The study presented experiments with both one-
and two-expression attention mechanisms; the accuracy of the framework with attention is better than without.
Reference [19] proposed a robust facial landmark extraction method combining a data-driven fully
convolutional network (FCN) and a model-driven pre-trained point distribution model (PDM), with three steps:
estimation-correction-tuning (ECT). The response maps for global landmark estimation are computed by the
FCN, and the maximum points of the maps are then fitted with the PDM to generate an initial facial shape.
Finally, a weighted version of regularized landmark mean-shift (RLMS) is applied to fine-tune the facial shape
iteratively.
Reference [20] designed an NN architecture learned with three loss functions: fully supervised, weakly
supervised, and hybrid regularization. Experiments with the proposed model achieved promising results on
CK+ and JAFFE under lab environments and on SFEW in the wild. Reference [21] proposed a transductive
deep transfer learning (TDTL) architecture to address the problem of cross-database non-frontal facial
expression recognition, applying a VGGface 16-Net on the BU-3DFE and Multi-PIE datasets. The study found
that feature representation with the VGG network is better than traditional handcrafted features such as SIFT
and LBP at representing complicated features. Reference [22] also used the two datasets in experiments
addressing cross-domain and cross-view facial expression problems, using a transductive transfer regularized
least-squares regression (TTRLSR) model, color SIFT (CSIFT) features with 49 landmarks, and SVM
classifiers. The two databases have only four identical categories: neutral, surprise, happy, and disgust. The
experiments covered two settings: cross-domain with the same view, and cross-view within the same domain.
The PCA algorithm is also applied to reduce the feature dimension.
The studies in [3, 5-7] classified the six universal emotions: happiness, anger, sadness, surprise, fear,
and disgust. References [9, 13, 15, 23-24] classified one more class, neutral, and [8, 17, 23] added a contempt
class. All eight classes were classified by the studies in [10, 11, 16]. However, [21] and [22] worked on the
neutral, happiness, surprise, and disgust expressions. Chen et al. [4] worked with 5 classes of the GEMEP-FERA
2011 database and 7 classes of CK+ and AFEW. Li et al. [25] covered the seven basic emotions and 11
compound emotions: sadly angry, sadly surprised, sadly fearful, happily surprised, happily disgusted, sadly
disgusted, fearfully surprised, fearfully angry, angrily surprised, angrily disgusted, and disgustedly surprised.
Ferreira et al. [20] classified the 6 universal classes of JAFFE, SFEW with the 6 basic classes plus neutral, and
CK+ with 8 classes including contempt.
3. TYPICAL FER SYSTEM
A typical FER system is shown in the system flow of Figure 1. Face detection consists of three tasks:
locating the face, cropping the face, and scaling the face. Feature extraction methods, dimension reduction
methods, and classification methods can then be selected.
Figure 1. Typical FER system
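The detect-crop-scale-extract-classify flow of a typical FER system can be sketched as a minimal Python pipeline. The stub detector, block-mean descriptor, and nearest-centroid classifier below are illustrative placeholders standing in for the real methods surveyed in this paper, not any specific published approach:

```python
import numpy as np

def detect_face(image):
    # Stub detector: a real system would use a Haar cascade or CNN detector;
    # here we simply assume the face fills the central square region.
    h, w = image.shape
    s = min(h, w)
    return (h - s) // 2, (w - s) // 2, s

def crop_and_scale(image, box, size=48):
    # Crop the detected box and resize (nearest-neighbour) to a fixed input size.
    top, left, s = box
    face = image[top:top + s, left:left + s]
    idx = (np.arange(size) * s / size).astype(int)
    return face[np.ix_(idx, idx)]

def extract_features(face):
    # Placeholder descriptor: mean intensity of each of 16 blocks (4x4 grid).
    return face.reshape(4, 12, 4, 12).mean(axis=(1, 3)).ravel()

def classify(features, centroids):
    # Nearest-centroid classifier over per-expression feature centroids.
    dists = {label: np.linalg.norm(features - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

image = np.random.default_rng(0).integers(0, 256, (64, 80)).astype(float)
face = crop_and_scale(image, detect_face(image))
feat = extract_features(face)
centroids = {"happy": np.zeros(16), "sad": np.full(16, 128.0)}
print(classify(feat, centroids))
```

In a real system, each stage would be swapped for one of the surveyed choices, e.g. Viola-Jones detection, LBP or HOG-TOP descriptors, and an SVM or CNN classifier.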
4. FEATURES OF FACIAL IMAGES
Most FER systems use geometrical features, visual features, or both to extract features from face images.
4.1. Geometrical features
Geometrical methods estimate the locations of facial landmarks or of facial components such as the
eyebrows, the mouth, and the nose; these can be measured by distances, curvatures, deformations, and other
geometric properties to represent the geometric facial features, although they are sensitive to noise [3-4, 9,
16-17]. Reference [9] described a facial point extraction method to extract the points of the eyes, nose, mouth,
and ears based on the Viola-Jones object detection algorithm. Four key regions of the face are used to extract
geometric features in four steps: detect the face, detect the eyes, locate the eye centers to obtain the eye region
height, and estimate the nose and lip regions. In [17], a facial landmark displacement method is applied to
extract geometrical information. In [4], affective geometric features are extracted using the warp transformation
of facial landmarks to capture their configuration. A 68-point facial landmark set is described as the
geometrical representation of the face in [16].
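A simple instance of such a geometric descriptor is the set of pairwise landmark distances. The 5-point landmark layout and the normalisation by inter-ocular distance below are illustrative assumptions for the sketch, not taken from any specific surveyed paper:

```python
import numpy as np

# Hypothetical 5-point landmark set (x, y): left eye, right eye,
# nose tip, left mouth corner, right mouth corner.
landmarks = np.array([[30.0, 40.0], [70.0, 40.0],
                      [50.0, 60.0], [38.0, 80.0], [62.0, 80.0]])

def geometric_features(pts):
    # Pairwise Euclidean distances between landmarks, normalised by the
    # inter-ocular distance so the descriptor is scale-invariant.
    inter_ocular = np.linalg.norm(pts[0] - pts[1])
    i, j = np.triu_indices(len(pts), k=1)
    dists = np.linalg.norm(pts[i] - pts[j], axis=1)
    return dists / inter_ocular

feats = geometric_features(landmarks)
print(feats.shape)   # 10 pairwise distances for 5 points
```

Because of the normalisation, the same face at a different scale yields the same descriptor, which is why distance-based geometric features are usually normalised this way before classification.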
4.2. Appearance features
Appearance methods such as the scale invariant feature transform (SIFT), Gabor appearance features,
and local phase quantization can detect multi-scale, multi-direction local texture changes in either specific
regions or the whole face to encode the texture [3-4, 8-9, 16]. In [7], a mapped local binary pattern with four
neighborhoods is used to describe the change of local texture features, and the face is divided into six regions
(forehead, eyes, nose, mouth, left cheek, and right cheek) using a pseudo 3D model. The paper [8] described
the texture features using the angled local directional pattern, considering the center pixel. In [9], the scale
invariant feature transform is applied to extract unique and precise informative face features. The paper [3]
used the local binary pattern to extract local texture features of four basic face regions: the two eyes, nose, and
mouth. To extract dynamic texture features from video, [4] used histograms of oriented gradients from three
orthogonal planes (HOG-TOP). The visual features are extracted from the color image using a convolutional
neural network (CNN) as a feature descriptor in [16]. These approaches are time-consuming and produce
features of large dimension, so dimensionality reduction methods are used, which in turn affect the accuracy
of facial expression recognition.
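The basic LBP operator mentioned throughout this section thresholds the 8 neighbours of each pixel against the centre and reads the resulting bits as an 8-bit code; a histogram of those codes over a region is the texture descriptor. A minimal NumPy sketch of the standard 3x3 operator (not the mapped or angled variants of [7-8]):

```python
import numpy as np

def lbp_image(img):
    # Basic 3x3 local binary pattern: compare the 8 neighbours of each
    # pixel against the centre and pack the results into an 8-bit code.
    padded = np.pad(img, 1, mode="edge")
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(img.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:1 + dy + img.shape[0],
                           1 + dx:1 + dx + img.shape[1]]
        code |= ((neighbour >= img) << bit).astype(np.uint8)
    return code

def lbp_histogram(img, bins=256):
    # Normalised histogram of LBP codes: the usual region descriptor.
    hist, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return hist / hist.sum()

img = np.random.default_rng(1).integers(0, 256, (48, 48))
h = lbp_histogram(img)
print(h.shape)
```

Region-based systems such as [3] compute one such histogram per face region (eyes, nose, mouth) and concatenate them into the final feature vector.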
5. FACIAL DATASETS
Facial expression datasets come in two types by how the images are created: posed expression
datasets and spontaneous expression datasets. Researchers acquire facial images in three ways: peak
expression images only, image sequences portraying an emotion from neutral to its peak, and video clips with
emotional annotations. The two most widely used datasets are CK+ and JAFFE [26-29]. Real-world facial
databases include FER-2013, FERG-DB, SFEW 2.0 (static facial expressions in the wild), RAF-DB (real-world
affective face database), and the AffectNet database. Sample images of the basic facial expressions are
described in Table 1 for each dataset.
Table 1. Sample images of the facial image datasets (one row per dataset: CK+, JAFFE, FER-2013,
FERG-DB, SFEW, RAF-DB, AffectNet; columns: Happy, Sad, Surprise, Fear, Anger, Disgust;
the sample images are not reproduced in this text version)
5.1. Extended Cohn-Kanade dataset (CK+)
The CK+ dataset has been widely used for many years in facial expression research. It comprises 593
image sequences, varying in duration from 10 to 60 frames, collected from 123 subjects. The age range of the
subjects is 18-50 years; 31% are men and 69% are women. The images express seven categories of expression,
happy, sad, surprise, anger, fear, disgust, and neutral, covering the basic emotions. Each image has a resolution
of 640 × 490 or 640 × 480 pixels [27].
5.2. Japanese female facial expression dataset (JAFFE)
The JAFFE dataset is also widely used in expression recognition of human emotion. It consists of
213 images of 10 Japanese women covering seven expressions: the six basic expressions (happy, surprise, sad,
anger, fear, and disgust) and neutral. Each image has a resolution of 256 × 256 pixels [28].
5.3. FER-2013 dataset
The FER-2013 dataset contains about 28,000 labeled images. It was created in 2013 for a learning
challenge focused on three tracks: black box learning, facial expression recognition, and multimodal learning.
The images are 48 × 48 pixel grayscale faces in seven expression classes: the six basic expressions and
neutral [30].
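Assuming the commonly distributed CSV release of FER-2013, where each row holds an emotion label (0-6), a string of 48 × 48 space-separated pixel values, and a usage split, a row can be decoded as follows; the exact column layout is an assumption of this sketch:

```python
import numpy as np

def parse_fer2013_row(row):
    # Assumed row layout of the FER-2013 CSV release:
    #   emotion,pixels,usage
    # where "pixels" is 48*48 space-separated grayscale values and
    # "usage" is the split name (e.g. Training / PublicTest / PrivateTest).
    emotion, pixels, usage = row.strip().split(",")
    img = np.array(pixels.split(), dtype=int).astype(np.uint8).reshape(48, 48)
    return int(emotion), img, usage

# Synthetic row standing in for a real CSV line.
row = "3," + " ".join(["128"] * (48 * 48)) + ",Training"
label, img, usage = parse_fer2013_row(row)
print(label, img.shape, usage)
```

Decoding each row into a 48 × 48 array makes the dataset directly usable as input to the feature extraction or CNN stages discussed earlier.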
5.4. FERG-DB dataset
FERG-DB stands for the facial expression research group database, which consists of face images of
six stylized characters grouped into seven types of expression: the six basic expressions and neutral. The
dataset includes 55,767 images [31].
5.5. Static facial expression in the wild dataset (SFEW)
The images in SFEW are extracted from the temporal facial expression database Acted Facial
Expressions in the Wild (AFEW), which was itself extracted from movies. The database contains 700 images
labeled with the six basic expressions [16].
5.6. Real-world affective face database (RAF-DB)
The RAF-DB database is a large-scale facial expression database that includes facial images downloaded
from the internet. Each image is annotated with a seven-dimensional expression distribution vector [10].
5.7. AffectNet dataset
AffectNet is the largest database of facial expressions in the real world, containing more than 1,000,000
facial images downloaded from internet searches in six different languages with 1,250 emotion-related
keywords. The database defines eleven categories of expression: the six basic expressions, neutral, contempt,
none, uncertain, and non-face [16].
6. PROBLEM STATEMENT
A FER system needs to be developed to handle the problems of illumination, lighting, pose, aging, and
occlusion for real-world expression classification. The major challenges of the study include:
- Most research classifies the basic emotions; work on fine-grained emotion is relatively scarce.
- Research on micro-expression and compound emotion recognition systems is limited.
- Mathematical models need to be developed to extract more discriminant features from facial images in
the wild.
- Real-time facial expression recognition systems should be developed to meet practical applications.
- Deep learning models also need to be created to improve facial feature extraction and classification.
7. CONCLUSION AND FUTURE WORK
Facial expression recognition is an active research area that remains interesting to researchers because
of the problems of occlusion, brightness, viewing angle, pose, and background in real-life images, image
sequences, and videos. This review paper has presented methods of preprocessing, feature extraction, and
classification. FER research continues toward real-life applications such as driver drowsiness recognition,
distance learning assistance, clinical patient monitoring, teaching robots, and health care systems for children
with autism. In the future, FER systems will be developed for fine-grained facial expression recognition and
compound emotion recognition using facial images.
REFERENCES
[1] Kalsum, Tehmina, Anwar, Syed, Majid, Muhammad, Ali, Sahibzada. “Emotion Recognition from Facial Expressions
using Hybrid Feature Descriptors.” IET Image Processing. vol. 12, no. 6, January 2018.
[2] P. Ekman, W. V. Friesen. “Facial action coding system a technique for the measurement of facial movement.” Palo
Alto: Consulting Psychologists Press, pp. 271-302, 1978.
[3] A. Majumder, L. Behera and V. K. Subramanian. “Automatic Facial Expression Recognition System Using Deep
Network-Based Data Fusion.” in IEEE Transactions on Cybernetics, vol. 48, no. 1, pp. 103-114, Jan. 2018.
[4] J. Chen, Z. Chen, Z. Chi and H. Fu. “Facial Expression Recognition in Video with Multiple Feature Fusion.” in IEEE
Transactions on Affective Computing, vol. 9, no. 1, pp. 38-50, 1 Jan.-March 2018.
[5] Yang, Dongri, Abeer Alsadoon, P. W. Chandana Prasad, Ashutosh Kumar Singh and Amr Elchouemi. “An Emotion
Recognition Model Based on Facial Recognition in Virtual Learning Environment.” Procedia Computer Science.
vol. 125, pp. 2-10, 2018.
[6] T. Kalsum, S. M. Anwar, M. Majid, B. Khan and S. M. Ali. “Emotion recognition from facial expressions using
hybrid feature descriptors.” in IET Image Processing, vol. 12, no. 6, pp. 1004-1012, 2018.
[7] C. Qi et al. “Facial Expressions Recognition Based on Cognition and Mapped Binary Patterns.” in IEEE Access, vol.
6, pp. 18795-18803, 2018.
[8] A. M. M. Shabat and J. Tapamo. “Angled local directional pattern for texture analysis with an application to facial
expression recognition.” in IET Computer Vision, vol. 12, no. 5, pp. 603-608, 2018.
[9] N. P. Nirmala Sreedharan, B. Ganesan, R. Raveendran, P. Sarala, B. Dennis and R. Boothalingam R. “Grey Wolf
optimisation-based feature selection and classification for facial emotion recognition.” in IET Biometrics, vol. 7, no.
5, pp. 490-499, 2018.
[10] Zeng, N., Zhang, H., Song, B., Liu, W., Li, Y., Dobaie, A. M. “Facial expression recognition via learning deep sparse
autoencoders.” Neurocomputing, vol. 273, pp. 643-649, 2018.
[11] Y. Tang, X. M. Zhang and H. Wang. “Geometric-Convolutional Feature Fusion Based on Learning Propagation for
Facial Expression Recognition.” in IEEE Access, vol. 6, pp. 42532-42540, 2018.
[12] Mayya, V., Pai, R. M., & Pai, M. M., “Automatic facial expression recognition using DCNN.” Procedia Computer
Science, vol. 93, pp. 453-461, 2016.
[13] D. V. Sang, N. Van Dat and D. P. Thuan. “Facial expression recognition using deep convolutional neural networks.”
2017 9th International Conference on Knowledge and Systems Engineering (KSE), pp. 130-135, 2017.
[14] B. Wu and C. Lin. “Adaptive Feature Mapping for Customizing Deep Learning Based Facial Expression Recognition
Model.” in IEEE Access, vol. 6, pp. 12451-12461, 2018.
[15] Munir, A., Hussain, A., Khan, S. A., Nadeem, M., Arshid, S. “Illumination invariant facial expression recognition
using selected merged binary patterns for real world images.” Optik, vol. 158, pp. 1016-1025, 2018.
[16] Guo, J., Zhou, S., Wu, J., Wan, J., Zhu, X., Lei, Z., & Li, S. Z. “Multi-modality network with visual and geometrical
information for micro emotion recognition.” In Automatic face and Gesture Recognition (FG 2017), 12th IEEE
International Conference, pp.814-819, 2017.
[17] J. Guo et al. “Dominant and Complementary Emotion Recognition From Still Images of Faces.” in IEEE Access, vol.
6, pp. 26391-26403, 2018.
[18] Barros, P., Parisi, G.I., Weber, C., Wermter S. “Emotion-modulated attention improves expression recognition: A
deep learning model.” Neurocomputing, vol. 253, pp. 104-114, 2017.
[19] H. Zhang, Q. Li, Z. Sun and Y. Liu, "Combining Data-Driven and Model-Driven Methods for Robust Facial
Landmark Detection," in IEEE Transactions on Information Forensics and Security, vol. 13, no. 10, pp. 2409-2422,
Oct. 2018.
[20] P. M. Ferreira, F. Marques, J. S. Cardoso and A. Rebelo. “Physiological Inspired Deep Neural Networks for Emotion
Recognition.” in IEEE Access, vol. 6, pp. 53930-53943, 2018.
[21] Yan, K., Zheng, W., Zhang, T., Zong, Y., Cui, Z. “Cross-database non-frontal facial expression recognition based on
transductive deep transfer learning.” arXiv preprint arXiv: 1811.12774, 2018.
[22] W. Zheng, Y. Zong, X. Zhou and M. Xin. “Cross-Domain Color Facial Expression Recognition Using Transductive
Transfer Subspace Learning.” in IEEE Transactions on Affective Computing, vol. 9, no. 1, pp. 21-37, 2018.
[23] Tautkute, I., Trzcinski, T., and Bielski, A. “I Know How You Feel: Emotion with Facial Landmarks.” arXiv: preprint
arXiv: 1805.00326, 2018.
[24] B. Wu and C. Lin. “Adaptive Feature Mapping for Customizing Deep Learning Based Facial Expression Recognition
Model.” in IEEE Access, vol. 6, pp. 12451-12461, 2018.
[25] S. Li and W. Deng. “Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial
Expression Recognition.” in IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 356-370, Jan. 2019.
[26] C. Loob et al. “Dominant and Complementary Multi-Emotional Facial Expression Recognition Using C-Support
Vector Classification.” 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG
2017), Washington, DC, pp. 833-838, 2017.
[27] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar and I. Matthews. “The Extended Cohn-Kanade Dataset
(CK+): A complete dataset for action unit and emotion-specified expression.” 2010 IEEE Computer Society
Conference on Computer Vision and Pattern Recognition-Workshops, San Francisco, CA, pp. 94-101, 2010.
[28] Dhall, A., Goecke, R., Lucey, S., & Gedeon, T. Static facial expressions in the wild: data and experiment protocol.
CVHCI Google Scholar. [Online] https://fipa.cs.kit.edu/download/SFEW.pdf.
[29] Lyons, M. J., Akamatsu, S., Kamachi, M., Gyoba, J., & Budynek, J. “The Japanese female facial expression (JAFFE)
database.” In Proceedings of third international conference on automatic face and gesture recognition, pp. 14-16,
1998.
[30] Goodfellow I., Erhan D., Carrier PL., Courville A., Mirza M., Hamner B., Cukierski W., Tang Y., Lee DG., Zhou
Y., Ramaiah C., Feng F., Li R., Wang X., Athanasakis D., Shawe-Taylor J., Milakov M., Park J., Ionescu R., Popescu
M., Grozea C., Bergstra J., Xie J., Romaszko L., Xu B., Chaung Z., and Bengio Y. “Challenges in Representation
Learning: A report on three machine learning contests.” International Conference on Neural Information Processing,
Springer Berlin Heidelberg, 2013.
[31] Aneja, D., Colburn, A., Faigin, G., Shapiro, L., Mones, B. “Modeling stylized character expressions via deep
learning.” In Asian Conference on Computer Vision, Springer, Cham, pp. 136-135, 2016.