The document presents an automatic facial expression recognition system based on self-organizing feature maps (SOMs). The system addresses the three sub-problems of facial expression recognition: 1) face detection, 2) facial feature extraction, and 3) expression classification. It uses a modified SOM algorithm to automatically and effectively extract facial feature points from detected faces. The system was tested on two facial expression databases and achieved average correct recognition rates over 90%.
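As context for the SOM-based feature extraction described above, the sketch below shows a minimal self-organizing map training loop in NumPy. The grid size, decay schedules, and random input data are illustrative assumptions and do not reproduce the paper's modified SOM algorithm.

```python
# Minimal self-organizing map (SOM) sketch in NumPy -- an illustrative stand-in
# for the modified SOM used for facial feature-point extraction. Grid size,
# learning-rate schedule and data are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 8, 8, 2            # 8x8 map over 2-D inputs (e.g. pixel coordinates)
weights = rng.random((grid_h, grid_w, dim))
data = rng.random((500, dim))            # stand-in for candidate feature-point locations

for t, x in enumerate(data):
    lr = 0.5 * np.exp(-t / len(data))    # decaying learning rate
    sigma = 3.0 * np.exp(-t / len(data)) # decaying neighbourhood radius
    # best-matching unit (BMU)
    dists = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood update around the BMU
    ii, jj = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

print("trained SOM weight grid:", weights.shape)
```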
FACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTS - ijcseit
The document summarizes a research paper on facial feature extraction and lip tracking using facial points. It proposes a method to accurately extract facial features like eyes, nose, and mouth from images and then track selected facial points in image sequences using optical flow. A simple facial features model is developed using a triangular patch object model with vertices determined by tracked facial points. The model can track lip movements and synthesize facial expressions. Experimental results on a database show the model can successfully track features and deform to match expressions in original images.
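The point-tracking step this summary mentions can be illustrated with OpenCV's pyramidal Lucas-Kanade optical flow. The frame file names and the use of goodFeaturesToTrack as the point source are assumptions; the paper would seed the tracker with its selected facial points instead.

```python
# Sketch of Lucas-Kanade optical-flow point tracking with OpenCV, in the spirit
# of the facial-point tracking step described above. Frame paths are hypothetical.
import cv2
import numpy as np

prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Seed points (the paper would use selected facial points instead)
pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=30, qualityLevel=0.01, minDistance=7)

# Track the points into the next frame with pyramidal Lucas-Kanade
pts1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts0, None,
                                             winSize=(15, 15), maxLevel=2)

tracked = pts1[status.flatten() == 1]
print("tracked points:", tracked.reshape(-1, 2))
```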
1) The document presents a new face parts detection algorithm that combines the Viola-Jones object detection framework with geometric information of facial features.
2) It detects faces, then isolates regions of interest for the eyes, nose, and mouth. Eye pupils are located using iris recognition techniques.
3) The algorithm was tested on hundreds of images and showed promising results for automated facial feature detection.
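A rough OpenCV sketch of the Viola-Jones stage followed by a region-of-interest eye search is shown below. The geometric constraints and the iris-based pupil localization described in the summary are not reproduced; the image path and the "upper half of the face" heuristic are assumptions.

```python
# Viola-Jones face detection with a simple eye search restricted to the upper
# half of each detected face -- a stand-in for the paper's geometric priors.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("subject.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    # Limit the eye search to the upper half of the detected face region
    upper_face = gray[y:y + h // 2, x:x + w]
    eyes = eye_cascade.detectMultiScale(upper_face, scaleFactor=1.1, minNeighbors=5)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)
```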
Multi Local Feature Selection Using Genetic Algorithm For Face Identification - CSCJournals
This document presents a face recognition algorithm that uses a multi-local feature selection approach based on genetic algorithms and pseudo Zernike moment invariants. The algorithm involves five stages: 1) face detection using an ellipse to approximate the face region, 2) extraction of facial features (eyes, nose, mouth) within regions using genetic algorithms to locate templates with maximum edge density, 3) generation of moment invariants from the facial features using pseudo Zernike polynomials, 4) classification of facial features using radial basis function neural networks, and 5) selection of multiple local features for face identification. The algorithm was tested on over 3000 images from three databases, achieving recognition rates over 89%, which is higher than global or single local feature approaches, and proved robust to variations in translation, orientation, and scaling.
PARTIAL MATCHING FACE RECOGNITION METHOD FOR REHABILITATION NURSING ROBOTS BEDS - IJCSES Journal
To establish a face recognition system for rehabilitation nursing robot beds and monitor the patient on the bed in real time, we propose a face recognition method based on partial matching of Hu moments. First, a Haar classifier detects human faces automatically in dynamic video frames. Second, Otsu's thresholding method extracts facial features (eyebrows, eyes, mouth) from the face image and computes their Hu moments. Finally, the Hu moment feature set is used for automatic face recognition. Experimental results show that this method can efficiently identify faces in dynamic video and has high practical value (the accuracy rate is 91% and the average recognition time is 4.3 s).
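The pipeline sequence in this abstract can be sketched directly with OpenCV: Haar face detection, Otsu thresholding of the face region, then Hu moments of the binarized result. The matching step against a stored Hu-moment set is omitted, and the frame path and normalization choice are assumptions.

```python
# Haar face detection -> Otsu binarisation -> Hu moment feature vector.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("video_frame.png")                # hypothetical video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, 1.1, 5)
if len(faces):
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    # Otsu's method picks the binarisation threshold automatically
    _, binary = cv2.threshold(face, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Hu moments: seven translation/scale/rotation-invariant shape descriptors
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # Log transform is a common normalisation before matching
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)
    print("Hu moment feature vector:", hu)
```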
This document discusses a study on detecting eyes from facial images using morphological image processing in MATLAB. The study used 213 facial images from the JAFFE database and detected eyes through a 6-stage process involving filtering, edge detection, morphological operations, dividing the image, identifying eye candidates, and selecting final eyes. Simulation of the process on all images found an average 83% accuracy in eye detection. The technique provides a simple and effective approach to eye detection that is independent of facial expression or person.
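A very rough OpenCV analogue of the MATLAB pipeline summarized above is sketched below: smoothing, edge detection, morphological closing, splitting the upper face into left/right halves, and taking the largest blob in each half as an eye candidate. Parameter values are assumptions, not the study's.

```python
# Morphological eye-candidate detection sketch (OpenCV stand-in for the MATLAB study).
import cv2
import numpy as np

gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)    # hypothetical cropped face
blur = cv2.medianBlur(gray, 5)
edges = cv2.Canny(blur, 50, 150)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

h, w = closed.shape
upper = closed[: h // 2]                                # eyes lie in the upper half of the face
halves = {"left": upper[:, : w // 2], "right": upper[:, w // 2:]}

for side, region in halves.items():
    contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)          # largest blob as the eye candidate
        print(f"{side} eye candidate box:", cv2.boundingRect(c))
```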
Abstract: This paper presents a new face-parts information analyzer as a promising model for detecting faces and locating facial features in images. The main objective is to build fully automated facial measurement systems from images with complex backgrounds. Detection of facial features such as the eyes, nose, and mouth is an important step for many subsequent facial image analysis tasks. The face detection stage locates each part and marks it with a circle or rectangle. In this paper, face detection depends on matching the face against patterns obtained through pattern recognition. The study presents a simple, novel approach that combines a pool of techniques and algorithms: the Viola-Jones object detection framework together with geometric and symmetry information about the face parts in the image. Keywords: Face detection, Video frames, Viola-Jones, Skin detection, Skin color classification, Face recognition, Pattern recognition, Skin color.
Title: Face Detection Using Modified Viola Jones Algorithm
Author: Alpika Gupta, Dr. Rajdev Tiwari
International Journal of Recent Research in Mathematics Computer Science and Information Technology
ISSN 2350-1022
Paper Publications
Emotion Recognition from Facial Expression Based on Fiducial Points Detection... - IJECEIAES
The importance of emotion recognition lies in the role that emotions play in our everyday lives; emotions have a strong relationship with our behavior. Automatic emotion recognition therefore aims to equip machines with the human ability to analyze and understand a person's emotional state, in order to anticipate their intentions from facial expression. In this paper, a new approach is proposed to improve the accuracy of emotion recognition from facial expression, based on input features derived only from fiducial points. The approach first extracts 1176 dynamic features from image sequences, representing the ratios of Euclidean distances between facial fiducial points in the first frame and the corresponding fiducial points in the last frame. Second, a feature selection method retains only the most relevant of these features. Finally, the selected features are presented to a Neural Network (NN) classifier that maps the facial expression input to an emotion. The proposed approach achieved an emotion recognition accuracy of 99% on the CK+ database, 84.7% on the Oulu-CASIA VIS database, and 93.8% on the JAFFE database.
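The distance-ratio features this abstract describes can be illustrated as follows: pairwise Euclidean distances between fiducial points in the first and last frames, combined as ratios and fed to a small neural-network classifier. With 49 landmarks there are C(49,2) = 1176 pairs, matching the feature count above. The landmark coordinates and labels below are random placeholders, and scikit-learn's MLP stands in for the paper's NN.

```python
# Distance-ratio feature construction plus a toy neural-network classifier.
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def distance_ratios(first_pts, last_pts):
    """Ratio of each pairwise landmark distance in the last frame to the first frame."""
    pairs = list(combinations(range(len(first_pts)), 2))
    d0 = np.array([np.linalg.norm(first_pts[i] - first_pts[j]) for i, j in pairs])
    d1 = np.array([np.linalg.norm(last_pts[i] - last_pts[j]) for i, j in pairs])
    return d1 / (d0 + 1e-8)

# Toy training set: 40 sequences, 49 landmarks each, 3 emotion labels
X = np.array([distance_ratios(rng.random((49, 2)), rng.random((49, 2))) for _ in range(40)])
y = rng.integers(0, 3, size=40)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)
print("feature dimension:", X.shape[1], "- training accuracy:", clf.score(X, y))
```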
Automatic facial expression analysis is an active research area, especially in computer vision and robotics. In the work done so far, facial expression analysis is performed either by recognizing the facial expression directly, or indirectly by first recognizing action units (AUs) and then applying this information to expression analysis. The main challenges in facial expression analysis lie in face detection and tracking, facial feature extraction, and facial feature classification. The presented review gives a brief timeline of the research carried out on AU detection/estimation in static images and dynamic image sequences, and of the solutions proposed by researchers in this field since 2002. In short, the paper highlights the challenges and applications of AU detection and points to new research topics, which should increase productivity in this exciting and challenging field.
Facial expression identification by using features of salient facial landmarks - eSAT Journals
Abstract
Facial expression recognition/identification (FER) systems play a vital role in the field of biometrics. Accurately localizing the facial components is a challenging task in image analysis and computer vision, and accurate detection of the face and its components enables effective expression classification. This paper proposes a feature-based facial expression recognition system evaluated on the JAFFE and CK databases. Eighteen facial landmarks are located using a Haar cascade classifier, and the distances between 12 of these points are extracted as features. These features are classified using SVM and K-NN classifiers, which are compared on accuracy and execution time. The proposed algorithm gives better performance.
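An illustrative version of the SVM versus K-NN comparison described above is sketched below. The landmark-distance features and labels are synthetic placeholders; in the paper they come from Haar-cascade landmark detection on JAFFE and CK images.

```python
# Toy comparison of SVM and K-NN on 12 inter-landmark distance features.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.random((200, 12))                 # 12 inter-landmark distances per face (placeholder data)
y = rng.integers(0, 6, size=200)          # 6 basic expressions (toy labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

for name, clf in [("SVM", SVC(kernel="rbf")), ("K-NN", KNeighborsClassifier(n_neighbors=5))]:
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)
    print(f"{name}: accuracy={acc:.2f}, time={time.perf_counter() - start:.4f}s")
```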
Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boosted Classifiers - Yen Ho
This is a key paper: Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boosted Classifiers - face detection (100%) & feature extraction (93%) for expressionless faces
International Journal of Image Processing (IJIP) Volume (1) Issue (2) - CSCJournals
This document summarizes a research paper on a face recognition system that uses a multi-local feature selection approach. The proposed system consists of five stages: face detection, extraction of facial features like eyes, nose and mouth, generation of moments to represent the features, classification of facial features using RBF neural networks, and face identification. The system was tested on over 3000 images from three facial databases and achieved recognition rates over 89%, outperforming global feature-based and single local feature approaches. The technique was also found to be robust to variations in translation, orientation and scaling.
The document describes an algorithm for eye detection in face images. It begins with face detection using skin color detection in HSV color space. Then it finds the symmetric axis of the extracted face region using gradient orientation histograms to determine the location of the eyes. It further finds the symmetric axis within the eye region to locate the center of the eyes. The algorithm aims to accurately detect the eyes even when the face is rotated, which is important for applications like face recognition and gaze tracking.
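The first stage of the algorithm summarized above, skin-color face segmentation in HSV space, can be sketched with a few OpenCV calls. The HSV bounds are common rule-of-thumb values rather than the paper's, and the gradient-orientation symmetry analysis is not reproduced.

```python
# HSV skin-colour segmentation: keep the largest skin-coloured blob as the face region.
import cv2
import numpy as np

img = cv2.imread("portrait.jpg")                         # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower, upper = np.array([0, 40, 60]), np.array([25, 180, 255])  # assumed skin bounds
mask = cv2.inRange(hsv, lower, upper)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    face = img[y:y + h, x:x + w]
    cv2.imwrite("face_region.png", face)
```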
Comparative Study of Lip Extraction Feature with Eye Feature Extraction Algor... - Editor IJCATR
In recent times, along with advances and new inventions in science and technology, fraudsters and identity thieves have also become smarter, finding new ways to fool authorization and authentication processes. There is therefore a strong need for an efficient face recognition process, i.e., computer systems capable of recognizing the faces of authenticated persons. One way to make face recognition efficient is to extract features of the face. This paper compares the relative efficiency of the lip extraction and eye extraction features for face recognition in biometric devices; its purpose is to bring to light which feature extraction method provides better results under various conditions. For the recognition experiments, I used face images of persons from different sets of the YALE database. My dataset contains 132 images in total: 11 persons with 12 face images each.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method - IRJET Journal
This document proposes a hybrid approach for facial image recognition using feature extraction and classification methods. It will use Principal Component Analysis (PCA) for feature extraction to reduce the dimensionality of feature vectors and select the most important features. This will be followed by Support Vector Machine (SVM) classification to classify facial images. PCA is applied to eigenfaces derived from facial training images to form a feature space. Test images are projected into this space and classified by SVM based on distance between their eigenvectors and stored eigenvectors. The approach aims to improve classification accuracy over other methods by combining effective feature extraction and classification.
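The PCA-then-SVM combination this summary describes maps naturally onto a scikit-learn pipeline, sketched below. The image size, component count, and labels are placeholder choices, not the paper's settings.

```python
# PCA feature reduction followed by SVM classification, as a scikit-learn pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.random((100, 64 * 64))            # 100 flattened 64x64 face images (toy data)
y = rng.integers(0, 5, size=100)          # 5 identities (toy labels)

model = make_pipeline(PCA(n_components=30, whiten=True), SVC(kernel="rbf"))
model.fit(X, y)
print("predicted identity of first face:", model.predict(X[:1])[0])
```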
A study on face recognition technique based on eigenfaces - adique_ghitm
This document summarizes a study on face recognition techniques based on eigenfaces. It discusses the eigenface algorithm which represents faces as weighted combinations of eigenvectors derived from face images. The document outlines the eigenface initialization process and recognition steps. It also summarizes experimental results testing recognition accuracy on several databases using different numbers of training images per person. The conclusion discusses improving single-sample-per-person recognition for real-time applications like identifying individuals from CCTV footage using their Aadhaar card face image as the training sample.
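The eigenface mechanics outlined above (mean face, principal components, projection, nearest-neighbour matching) can be shown in a few lines of NumPy. The data below are random placeholders standing in for real face images.

```python
# Bare-bones eigenface sketch: eigenvectors of the centred training set,
# projection of a probe face, and nearest-neighbour matching in eigenspace.
import numpy as np

rng = np.random.default_rng(3)
train = rng.random((20, 32 * 32))                 # 20 flattened 32x32 training faces (placeholders)
mean_face = train.mean(axis=0)
centred = train - mean_face

# Eigenfaces = principal directions of the centred training faces
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:10]                              # keep the top 10 components

train_weights = centred @ eigenfaces.T            # each face as a weight vector

probe = rng.random(32 * 32)                       # hypothetical probe face
probe_weights = (probe - mean_face) @ eigenfaces.T
match = np.argmin(np.linalg.norm(train_weights - probe_weights, axis=1))
print("closest training face index:", match)
```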
This document reviews techniques for emotion recognition from facial expressions. It begins by outlining the general steps of emotion recognition systems as face detection, feature extraction, and classification. Popular techniques discussed include principal component analysis (PCA), local binary patterns (LBP), active appearance models, and Haar classifiers. PCA and LBP were found to provide higher recognition rates. The paper also reviews the Facial Action Coding System and compares the performance of different techniques based on recognition rate. In conclusion, PCA is identified as having the highest recognition rate and performance for emotion recognition.
A Review on Face Detection under Occlusion by Facial Accessories - IRJET Journal
This document reviews various methods for detecting faces that are partially occluded by accessories like sunglasses or scarves. It discusses approaches that divide the face into patches and use PCA to detect occluded regions. Other methods use particle filtering to track occluded objects over multiple frames, or detect occlusion through Gabor wavelets and SVM classification of facial components. More advanced techniques apply deep convolutional neural networks to simultaneously estimate positions of facial landmarks while being robust to occlusion, pose variations and illumination changes. The document concludes that occlusion detection is important for face recognition systems and that future work could aim to improve detection accuracy.
Facial landmarking localization for emotion recognition using Bayesian shape ... - csandit
This work presents a framework for emotion recognition based on facial expression analysis, using Bayesian Shape Models (BSM) for facial landmark localization. The facial feature tracking is compliant with the Facial Action Coding System (FACS) and is based on the Bayesian Shape Model, whose parameters are estimated with an implementation of the EM algorithm. We describe the characterization methodology derived from the parametric model and evaluate its accuracy for feature detection and for estimating the parameters associated with facial expressions, analyzing its robustness to pose and local variations. A methodology for emotion characterization is then introduced to perform the recognition. The experimental results show that the proposed model can effectively detect the different facial expressions, outperforming conventional approaches for emotion recognition and obtaining high performance in estimating the emotion present in a given subject. The model and characterization methodology detected the emotion type correctly in 95.6% of the cases.
Nowadays, face recognition is a widely used application of image analysis and pattern recognition. In the biometrics research area, automatic face and facial expression recognition attracts researchers' interest. To classify facial expressions into different categories, it is necessary to extract the important facial features that contribute to identifying each particular expression. Recognition and classification of human facial expressions by computer is an important issue in developing automatic facial expression recognition systems in the vision community. In this paper, a facial expression recognition system is proposed.
A Fast Recognition Method for Pose and Illumination Variant Faces on Video Sequences - IOSR Journals
This document summarizes a research paper that proposes a new face recognition method for video sequences with variations in pose and illumination. The proposed method uses an active appearance model without nonlinear programming to extract features, and a lazy classifier for recognition, in order to reduce computational complexity compared to previous methods. Experimental results show the proposed method achieves better recognition performance and lower computational cost than conventional techniques. The document provides background on video face recognition challenges and reviews related work on pose-invariant and illumination-robust recognition methods.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
IRJET - Survey of Iris Recognition Techniques - IRJET Journal
This document summarizes several techniques for iris recognition. It begins with an abstract describing iris recognition and its accuracy compared to other biometric traits. It then reviews four iris recognition techniques in the literature:
1. A technique using moment invariants and Euclidean or Mahalanobis distance classifiers that achieved 100% recognition rates.
2. A segmentation algorithm using Daugman's integro differential operator that improved discrimination capabilities over other methods.
3. A pupil localization technique using negative thresholds and neighbors, and iris boundary detection using contrast enhancement and thresholding, achieving accurate segmentation.
4. A technique using Gaussian mixture models, Gabor filter banks, and simulated annealing to generate iris masks and increase recognition rates.
Facial expression recognition using PCA and Gabor with JAFFE database - EditorIJAERD
This document discusses a facial expression recognition system that uses two different feature extraction methods - Principal Component Analysis (PCA) and Gabor filters - with the JAFFE facial expression database. PCA is used to reduce the dimensionality of the feature space, while Gabor filters are used to extract features due to their ability to encode spatial frequency and orientation information. The system that uses Gabor filters and PCA achieved better accuracy than one that used only PCA. The document provides mathematical background on PCA and Gabor filters and describes the steps of the facial expression recognition algorithm.
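The Gabor-plus-PCA combination reported above can be sketched with OpenCV's Gabor kernels followed by scikit-learn's PCA. The filter-bank parameters, downsampling step, and random image data are assumptions, not the paper's configuration.

```python
# Gabor filter-bank feature extraction followed by PCA dimensionality reduction.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def gabor_features(gray):
    """Concatenate responses of a small Gabor filter bank (4 orientations)."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        feats.append(cv2.resize(response, (16, 16)).flatten())  # downsample each response map
    return np.concatenate(feats)

rng = np.random.default_rng(4)
faces = (rng.random((30, 64, 64)) * 255).astype(np.uint8)   # toy stand-ins for JAFFE face crops
X = np.array([gabor_features(f) for f in faces])

X_reduced = PCA(n_components=10).fit_transform(X)            # PCA shrinks the Gabor feature space
print("Gabor feature dim:", X.shape[1], "-> after PCA:", X_reduced.shape[1])
```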
This document provides a literature review of face recognition techniques using face alignment and PCA. It discusses how face alignment techniques like Active Appearance Models (AAM) and Active Shape Models (ASM) are used to accurately align faces, which is important for face recognition. PCA is also discussed as a commonly used feature extraction and dimensionality reduction technique for face recognition. The document surveys recent research on face recognition using AAM for tasks like minimizing error between input and model images, modeling a wide range of facial appearances, and exploiting temporal correlations across image frames. It also discusses improvements to AAM modeling and fitting robustness.
The document summarizes face recognition techniques. It discusses how face recognition involves detecting faces, extracting and matching features. Common feature extraction methods discussed include principal component analysis, linear discriminant analysis, and neural networks. The document also summarizes different categories of face recognition approaches, such as template-based, statistical, neural network-based, and hybrid approaches. Local geometry-based features and other approaches like using range, infrared, or profile images are also mentioned.
Review of face detection systems based artificial neural networks algorithms - ijma
This document provides a review of face detection systems that are based on artificial neural network algorithms. It summarizes several studies that have used different types of neural networks for face detection, including:
1) Retinal connected neural networks and rotation invariant neural networks.
2) Principal component analysis combined with neural networks.
3) Convolutional neural networks, multilayer perceptrons, backpropagation neural networks, and polynomial neural networks.
4) Fast neural networks, evolutionary optimization of neural networks, and Gabor wavelet features with neural networks. Strengths and limitations of these different approaches are discussed.
REVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMS - ijma
Face detection is one of the most relevant applications of image processing and biometric systems. Artificial neural networks (ANN) have been used in the fields of image processing and pattern recognition, yet there is a lack of literature surveys giving an overview of the studies and research related to the use of ANN in face detection. Therefore, this research includes a general review of face detection studies and systems based on different ANN approaches and algorithms. The strengths and limitations of these studies and systems are also included.
Fiducial Point Location Algorithm for Automatic Facial Expression Recognition - ijtsrd
We present an algorithm for the automatic recognition of facial features in color images of either frontal or rotated human faces. The algorithm first identifies the sub-images containing each feature; afterwards, it processes them separately to extract the characteristic fiducial points. It then calculates the Euclidean distances between the center-of-gravity coordinate and the annotated fiducial point coordinates of the face image. A system that performs these operations accurately and in real time would be a big step toward achieving human-like interaction between man and machine. This paper surveys past work on these problems. The features are searched for in down-sampled images, while the fiducial points are identified in the high-resolution ones. Experiments indicate that the proposed method obtains good classification accuracy. D. Malathi, A. Mathangopi, Dr. D. Rajinigirinath, "Fiducial Point Location Algorithm for Automatic Facial Expression Recognition", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN: 2456-6470, Volume-3, Issue-3, April 2019, URL: https://www.ijtsrd.com/papers/ijtsrd21754.pdf
Paper URL: https://www.ijtsrd.com/computer-science/data-miining/21754/fiducial-point-location-algorithm-for-automatic-facial-expression-recognition/d-malathi
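The distance computation this abstract describes, Euclidean distances from each fiducial point to the center of gravity of the point set, is a one-liner in NumPy. The coordinates below are placeholders standing in for detected fiducial points.

```python
# Distances from fiducial points to their centre of gravity.
import numpy as np

fiducial_points = np.array([[120, 80], [180, 82], [150, 120], [130, 160], [170, 158]],
                           dtype=float)              # e.g. eye corners, nose tip, mouth corners

centroid = fiducial_points.mean(axis=0)              # centre of gravity of the landmark set
distances = np.linalg.norm(fiducial_points - centroid, axis=1)
print("centre of gravity:", centroid)
print("distances to fiducial points:", np.round(distances, 2))
```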
This document describes a real-time facial expression recognition system that can handle low-resolution images and full head motion in real-world environments. The system uses background subtraction, head detection and pose estimation to analyze faces. It extracts location features like eye and mouth positions and shape features of the mouth region. A neural network then recognizes expressions like smile, anger and surprise from the features. The system aims to automatically recognize expressions in challenging real-world conditions like those in meetings, addressing limitations of prior systems.
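Only the first stage of that real-time pipeline, background subtraction, is sketched below using OpenCV's MOG2 subtractor; head detection, pose estimation, and the expression classifier are not reproduced, and the video path is hypothetical.

```python
# Background subtraction (MOG2) as the first stage of a real-time analysis pipeline.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
cap = cv2.VideoCapture("meeting_room.avi")           # hypothetical low-resolution video

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                   # foreground (moving people) mask
    fg = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("foreground", fg)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```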
This document summarizes a research paper that proposes a new face recognition method capable of recognizing faces with expressions, glasses, and/or rotation. The method uses variance estimation of the red, green, and blue color components to compare extracted faces to those in a database. It also uses Euclidean distance to compare extracted facial features (eyes, nose, mouth) to those in the database. The method is divided into three steps: 1) variance estimation of color components, 2) facial feature extraction based on feature locations, and 3) identifying similar faces by scanning the database. Experimental results showed the method achieved good accuracy, speed, and used simple computations for face recognition.
IRJET - A Survey on Facial Expression Recognition Robust to Partial Occlusion - IRJET Journal
This document summarizes various approaches for facial expression recognition that are robust to partial facial occlusions. It begins by introducing the topic and importance of facial expression recognition systems that can handle real-world scenarios involving partial occlusions. It then categorizes and reviews key approaches in the literature, including feature reconstruction based on PCA or RPCA, sparse coding approaches using SRC or MLESR, sub-space based methods using Gabor filters or LGBPHS, and statistical prediction models using Bayesian or tracking methods. The document focuses on studies that have researched expression recognition for facial images with partial occlusions.
International Journal of Engineering Research and Development - IJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture engineering,
Aerospace Engineering.
This document summarizes a research paper on face recognition techniques. It discusses the history of face recognition research from the 1950s to present. Early work focused on identifying faces using geometric measurements of facial features. In the 1980s, neural networks and eigenfaces approaches using principal component analysis were introduced. The document also outlines common problems in face recognition systems, including variations in scale, illumination, expression, and noise. Finally, it reviews literature on human and machine face recognition systems.
Face expression recognition using Scaled-conjugate gradient Back-Propagation ... - IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment…. And many more.
This document summarizes a research paper on face recognition using principal component analysis (PCA). It discusses how PCA can be used to reduce the dimensionality of face images for recognition. The system detects faces in images, extracts features using PCA, and then compares new faces to those in a training database to recognize identities. The results showed an accuracy of 87.09% on a test set of 30 images using this PCA-based approach for face recognition. While effective, the system has limitations when faces vary significantly from the training data. Overall, PCA provides a way to analyze face patterns and identify faces with reasonable accuracy under controlled conditions.
This document discusses several techniques for face recognition, including linear discriminant analysis (LDA), eigenfaces, neural networks, and content-based image retrieval (CBIR). LDA and eigenfaces are statistical approaches that analyze facial features and expressions from a database of faces to enable classification of new faces. Neural networks can also be used for face detection by classifying image windows as containing faces or non-faces. CBIR allows image retrieval from large databases based on automatically extracted features like color, texture, and shape. Combining multiple techniques like color, texture, and shape features can improve accuracy of content-based image retrieval systems for applications like face recognition.
Face detection is one of the most common applications of image processing and biometric systems. Artificial neural networks have been used in many fields, such as image processing, pattern recognition, sales forecasting, customer research, and data validation. Face detection and recognition have become among the most popular biometric techniques over the past few years, yet there is a lack of research literature providing an overview of studies related to artificial-neural-network-based face detection. Therefore, this study includes a review of face detection studies and systems based on various artificial neural network methods and algorithms.
A Literature Review On Emotion Recognition System Using Various Facial Expres... - Lisa Graves
This document reviews techniques for emotion recognition from facial expressions. It begins by outlining the general steps of emotion recognition systems as face detection, feature extraction, and classification. Popular techniques discussed include principal component analysis (PCA), local binary patterns (LBP), active appearance models, and Haar classifiers. PCA generally provides higher recognition rates but LBP has lower computational complexity. The document concludes PCA has the best performance among the discussed techniques. It provides a table comparing the techniques and their performance as reported in various other papers.
Review of facial expression recognition system and used datasets - eSAT Journals
Abstract: The human face is the main cue for recognizing individuals and also conveys important information about the current state of a user's behavior through different expressions. Therefore, in the biometrics research area, automatic face and facial expression recognition attracts researchers' interest; other areas that use such techniques include computer science, medicine, and psychology. A face recognition system usually consists of many internal tasks, of which face detection is the first. Due to the wide variation across human faces, detecting a face is a complex process, but with the help of different modeling methods it becomes possible to recognize the face and hence the different facial expressions. This paper presents a literature review of the techniques and methods used for facial expression recognition. It also discusses the different facial expression datasets available for research or for testing existing facial expression recognition methods. Keywords: Facial Expression, Face Detection, Feature Extraction, Recognition, Datasets.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
This report is based on research whose content is drawn from books and websites. It covers the history of face recognition, how it works both traditionally and technically, and introduces some face recognition software and devices. Face recognition algorithms are also included in the report.
IRJET - Emotionalizer: Face Emotion Detection System - IRJET Journal
This document describes a facial emotion detection system called Emotionalizer. The system uses machine learning to analyze facial expressions in images and detect emotions like happy, sad, angry, fearful and disgust. It was developed in Python using techniques like pre-processing, skin color detection, facial feature extraction and a support vector machine classifier. The goal is to build a system that can automatically recognize emotions from faces as accurately as humans. It discusses previous related work on facial recognition and detection and outlines the objectives, methodology and evaluation of the Emotionalizer system.
IRJET - Emotionalizer: Face Emotion Detection System - IRJET Journal
This document describes a face emotion detection system called Emotionalizer. It uses machine learning and facial recognition techniques to detect emotions like happy, sad, angry, fearful and disgust based on facial expressions. The system analyzes images of faces and determines the appropriate emotion based on geometric changes in facial features. It was developed in Python using tools like OpenCV for facial detection and recognition. The goal is to build a system that can read emotions from facial expressions similarly to how humans perceive emotions.
Face Verification Across Age Progression using Enhanced Convolution Neural Ne... - sipij
This paper proposes a deep learning method for facial verification of aging subjects. Facial aging comprises the texture and shape variations that affect the human face as time progresses; accordingly, there is a demand for robust methods to verify facial images as they age. In this paper, a deep learning method based on a pre-trained GoogLeNet convolutional network, fused with Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) feature descriptors, is applied for feature extraction and classification.
An SOM-based Automatic Facial Expression Recognition System
International Journal on Soft Computing, Artificial Intelligence and Applications (IJSCAI), Vol. 2, No. 4, August 2013. DOI: 10.5121/ijscai.2013.2305
An SOM-based Automatic Facial Expression
Recognition System
Mu-Chun Su¹, Chun-Kai Yang¹, Shih-Chieh Lin¹, De-Yuan Huang¹, Yi-Zeng Hsieh¹, and Pa-Chun Wang²
¹ Department of Computer Science & Information Engineering, National Central University, Taiwan, R.O.C.
² Cathay General Hospital, Taiwan, R.O.C.
E-mail: muchun@csie.ncu.edu.tw
Abstract
Recently, a number of applications of automatic facial expression recognition systems have surfaced in many different research fields. Automatic facial expression recognition is a very challenging problem because it involves three sub-problems: 1) face detection, 2) facial expression feature extraction, and 3) expression classification. This paper presents an automatic facial expression recognition system based on self-organizing feature maps, which provides an effective solution to the aforementioned three sub-problems. The performance of the proposed system was evaluated on two well-known facial expression databases. The average correct recognition rates were over 90%.
Keywords
Facial expression recognition, SOM algorithm, face detection.
1. INTRODUCTION
Automatic facial expression recognition systems have been applied to many practical application
fields such as social robot interactions, human-computer interactions, human behavior analysis,
virtual reality, etc. Thus in recent years, the study of automatic facial expression recognition has
become a more and more important research topic for many researchers from different research
fields [1]-[23].
The human face is an elastic object that consists of organs, numerous muscles, skin, and bones. When a muscle contracts, the transformation of the corresponding skin area attached to the muscle results in a certain type of visual effect. Although the claim that there exist universal basic emotions across genders and races has not been confirmed, most existing vision-based facial expression studies accept the assumption defined by Ekman about the universal categories of emotions (i.e., happiness, sadness, surprise, fear, anger, and disgust) [24]. A human-observer-based system called the Facial Action Coding System (FACS) has been developed to facilitate objective measurement of subtle changes in facial appearance caused by contractions of the facial muscles [25]. The FACS is able to give a linguistic description of all visibly discriminable expressions via 44 action units.
The automatic facial expression recognition problem is very challenging because it involves three sub-problems: 1) face detection, 2) facial expression feature extraction, and 3) expression classification. Each sub-problem is difficult to solve due to many factors such as cluttered backgrounds, illumination changes, face scales, pose variations, head or body motions, etc. An overview of the research work in facial expression analysis can be found in [26]-[28]. The approaches to facial expression recognition can be divided into two classes in several different ways. In one way, they can be classified into static-image-based approaches (e.g., [10]) and image-sequence-based approaches (e.g., [2], [7]-[8], [11], [17], [20]-[22], etc.). While the static-image-based approach classifies expressions based on a single image, the image-sequence-based approach utilizes the motion information in an image sequence. In another way, they can be classified into geometrical-feature-based approaches (e.g., [1], [7], [15], etc.) and appearance-based approaches (e.g., [12], [16], etc.). The geometrical-feature-based approach relies on geometric facial features such as the locations and contours of the eyebrows, eyes, nose, mouth, etc. As for the appearance-based approach, the whole face or specific regions of a face image are used for feature extraction via some kind of filter or transformation. Some approaches can fully automatically recognize expressions, but some still need manual initialization before the recognition procedure.
In this paper we propose a simple approach to implement an automatic facial expression recognition system based on self-organizing feature maps (SOMs). The SOM algorithm is a well-known unsupervised learning algorithm in the field of neural networks [29]. A modified self-organizing feature map algorithm is developed to automatically and effectively extract facial feature points. Owing to the introduction of the SOMs, the motion of facial features can be tracked more reliably than with methods using a conventional optical flow algorithm. The remainder of this paper is organized as follows. A detailed description of the proposed expression recognition algorithm is given in Section 2. Simulation results are given in Section 3. Finally, Section 4 concludes the paper.
2. THE PROPOSED FACIAL EXPRESSION RECOGNITION SYSTEM
The proposed automatic facial expression recognition system can automatically detect human faces, extract facial features, and recognize facial expressions. The input to the proposed recognition algorithm is a sequence of images, since dynamic images provide more information about facial expressions than a single static image.
2.1 Face Detection
The first step for facial expression recognition is to solve the face detection sub-problem. Face
detection determines the locations and sizes of faces in an input image. Automatic human face
detection is not a trivial task because face patterns can have significantly variable image
appearances due to many factors such as hair styles, glasses, and races. In addition, the variations
of face scales, shapes and poses of faces in images also hinder the success of automatic face
detection systems. Several different approaches have been proposed to solve the problem of face
detection [30]-[33]. Each approach has its own advantages and disadvantages. In this paper, we
adopt the method proposed by Viola and Jones to detect faces from images [34]. This face
detection method can minimize computational time while achieving high detection accuracy.
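As a concrete illustration of this step, the sketch below runs Viola-Jones face detection using OpenCV's pretrained frontal-face Haar cascade. The paper does not give implementation details, so the cascade file and the detection parameters are illustrative assumptions rather than the authors' settings.

```python
# A minimal sketch of Viola-Jones face detection with OpenCV's pretrained Haar
# cascade; cascade file and parameters are illustrative assumptions.
import cv2

def detect_faces(gray_image):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Returns a list of (x, y, w, h) rectangles, one per detected face.
    return cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)

if __name__ == "__main__":
    img = cv2.imread("frame0.png")                      # first frame of the sequence (hypothetical file)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detect_faces(gray):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces.png", img)
```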
After a human face is detected, the proposed system adopts a composite method to locate the pupils. First of all, we adopt the Viola-Jones algorithm discussed in [34] to locate the eye regions. One problem associated with the Viola-Jones algorithm is that it can effectively locate the eye regions but cannot precisely locate the centers of the pupils. Therefore, we need to fine-tune the eye regions to precisely locate the pupils. We assume that the pupil regions are the darkest regions within the eye regions detected by the Viola-Jones algorithm. The segmentation task for locating the pupils can be easily accomplished if the histogram of the eye regions presents two obvious peaks; otherwise, correct threshold selection is usually crucial for successful threshold segmentation. We adopt the p-tile thresholding technique to automatically segment the pupil regions from the eye regions [35]. From our many simulation results, we found that the ratio between the pupil regions and the remaining eye regions could be chosen to be 1/10 (i.e., p = 10). After the pupils have been located, the face image is rotated, trimmed, and normalized to an image of size 80×60. We rotate the detected face image to make the pupils lie on a horizontal line. In addition, we normalize the distance between the two pupils to 25 pixels. Fig. 1 illustrates an example of a normalized face image.
Fig. 1. An example of face detection and normalization. (a) Detected face. (b) Detected eyes. (c) Rotated, trimmed, and normalized face.
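The following sketch illustrates the p-tile idea (p = 10 as stated above) for locating a pupil inside a detected eye region, followed by the rotation and scaling that make the pupils horizontal and 25 pixels apart. The exact crop placement of the 80×60 window and the function names are assumptions, not the paper's implementation.

```python
# A sketch of p-tile pupil segmentation (p = 10) and face normalization
# (pupils horizontal, 25 px apart, 80x60 crop); crop placement is an assumption.
import numpy as np
import cv2

def locate_pupil(eye_region, p=10):
    """eye_region: 2-D grayscale array; returns (row, col) of the pupil centroid."""
    # p-tile threshold: gray level below which the darkest p% of pixels fall.
    threshold = np.percentile(eye_region, p)
    rows, cols = np.nonzero(eye_region <= threshold)
    return rows.mean(), cols.mean()

def normalize_face(gray, left_pupil, right_pupil, pupil_dist=25):
    """Rotate/scale so the pupils are horizontal and 25 px apart, then crop 80x60."""
    (lr, lc), (rr, rc) = left_pupil, right_pupil
    angle = np.degrees(np.arctan2(rr - lr, rc - lc))      # angle of the pupil line
    scale = pupil_dist / np.hypot(rr - lr, rc - lc)
    center = ((lc + rc) / 2.0, (lr + rr) / 2.0)            # (x, y) for OpenCV
    M = cv2.getRotationMatrix2D(center, angle, scale)
    rotated = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
    cx, cy = int(center[0]), int(center[1])
    # 60 rows x 80 cols around the pupil midpoint (placement is an assumption).
    return rotated[cy - 20:cy + 40, cx - 40:cx + 40]
```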
2.2 Facial Feature Extraction
After the face in the first image frame has been detected, the next step is to extract necessary
information about the facial expression presented in the image sequence. Facial features can be
categorized into many different classes [26]-[28]. In general, two types of facial features can be extracted: geometrical features and appearance features [36]. While the appearance features can be extracted from either the whole face or some specific regions via some kind of filter (e.g., Gabor wavelet filters), geometrical features focus on the extraction of the shapes and locations of intransient facial features (e.g., eyes, eyebrows, nose, and mouth). In our system, geometrical features are extracted for facial expression recognition.
The movements of facial features such as the eyebrows, eyes, and mouth have a strong relation to the information about facial expressions; however, reliably extracting the exact locations of the intransient facial features is sometimes a very challenging task due to many disturbing factors (e.g., illumination, noise). Even if we can accurately locate the facial features, we still encounter another problem: extracting the motion information of the facial features.
One simple approach to solve the aforementioned two problems is to place a certain number of landmark points around the located facial feature regions and then use a tracking algorithm to track those landmark points and compute their displacement vectors. However, this approach has some bottlenecks to overcome. The first bottleneck is how and where to automatically locate the landmark points. Accurate location of landmark points usually requires intensive computational resources; therefore, some approaches adopt an alternative method and compute motion information in meshes or grids that cover some important intransient facial features (e.g., potential net [9], uniform grid [20], Candide wireframe [22]). Another bottleneck is that the performance of the adopted tracking algorithm may be sensitive to disturbing factors such as illumination changes and head or body motions. This problem usually results in the phenomenon that several landmark points are erroneously tracked to faraway locations.
To address the aforementioned two problems, we propose the use of self-organizing feature maps (SOMs) [29]. The SOM algorithm is one of the most popular unsupervised learning algorithms in the research field of neural networks. Recently, numerous technical reports have been written about successful applications of SOMs to a variety of problems. The principal goal of SOMs is to transform patterns of arbitrary dimensionality into the responses of one- or two-dimensional arrays of neurons, and to perform this transformation adaptively in a topologically ordered fashion.
In our previous work, we built a generic face model from the examination of a large number of faces [20]. Based on the generic face model proposed in [20], we further propose a normalized generic face model as shown in Fig. 2. Although the geometric relations between the eyes and the mouth vary a little from person to person, the eyes, eyebrows, nose, and mouth are basically enclosed in the three rectangles and the pentagon shown in Fig. 2. We adopt a pentagon instead of a rectangle for the mouth region to make the recognition performance as insensitive to beards as possible. This choice was concluded from our many simulation results.
Fig. 2. A normalized generic face model.
After we have located the four critical regions (i.e., the eyes, the nose, and the mouth), the next
step is to extract the motion information of these facial features.
To this end, we propose the use of SOMs. Four nets with sizes 6×6, 6×6, 7×7, and 4×4 are placed around the regions of the two eyes, the nose, and the mouth, respectively. In total, there are 137 neurons in the four SOMs. A modified SOM algorithm is developed to automatically and effectively extract facial feature points. The approximated gradient of each pixel inside the corresponding facial region is used as an input pattern to the modified SOM algorithm. The modified SOM algorithm is summarized as follows:
Step 1. Initialization: In the conventional SOM algorithm, the weight vectors are usually randomly initialized. Instead of adopting the random initialization scheme, we initialize the weight vectors $\vec{w}_j$, $1 \le j \le M \times M$, to lie within a rhombus as shown in Fig. 3(a). From many simulations, we found that the use of rhombuses was more efficient than the use of rectangles, not to mention the random initialization scheme.
Step 2. Winner Finding: Instead of directly presenting the gray level of each pixel, we present the approximated gradient of each pixel, $\vec{x} = [G_x, G_y]^T$, to the network and find the winning neuron. The two gradients, $G_x$ and $G_y$, represent the row edge gradient and the column edge gradient at the pixel, respectively. In our system, the Sobel operator was adopted for the computation of the gradients. The neuron with the largest value of the activation function is declared the winner of the competition. The winning neuron $j^*$ at time $k$ is found by using either the maximum-output criterion or the minimum-distance Euclidean criterion:

$$ j^* = \arg\min_{1 \le j \le M \times M} \left\| \vec{x}(k) - \vec{w}_j(k) \right\| \qquad (1) $$

where $\vec{x}(k) = [x_1(k), x_2(k)]^T = [G_x(k), G_y(k)]^T$ represents the $k$th input pattern corresponding to a pixel located in the facial feature region (i.e., the eye regions, the nose region, or the mouth region), $M \times M$ is the network size, and $\|\cdot\|$ indicates the Euclidean norm. For example, we input the gradients of the pixels inside the rectangle enclosing the nose (i.e., the region defined by $[23, 36] \times [32, 44]$ in Fig. 2) to the neural network of size $7 \times 7$.
Step 3. Weight Updating: Adjust the weights of the winner and its neighbors using the following rule:

$$ \vec{w}_j(k+1) = \vec{w}_j(k) + s_j\,\Lambda_{j,j^*}(k)\,\eta(k)\,[\vec{x}(k) - \vec{w}_j(k)], \quad \text{for } 1 \le j \le M \times M \qquad (2) $$

$$ \Lambda_{j,j^*}(k) = \exp\!\left( -\frac{d_{j,j^*}^2}{2\sigma^2(k)} \right) \qquad (3) $$

$$ s_j = \begin{cases} 1, & \text{if } |G_x(k)| + |G_y(k)| \ge 255 \\ \left( |G_x(k)| + |G_y(k)| \right) / 255, & \text{otherwise} \end{cases} \qquad (4) $$

where $\eta(k)$ is a positive learning-rate constant, $d_{j,j^*}$ denotes the lateral distance of neuron $j$ from the winning neuron $j^*$, $\sigma(k)$ is the "effective width" of the topological neighborhood, and $\Lambda_{j,j^*}(k)$ is the topological neighborhood function of the winning neuron $j^*$ at time $k$. The parameter $s_j$ is a weighting factor for the learning rate $\eta(k)$. It was introduced to make the learning rate larger when the absolute value of the approximated gradient of the pixel is large (e.g., larger than or equal to 255). We assume that pixels with high gradients convey more facial feature information. Due to the introduction of the weighting factor, the weight vectors of the network can quickly converge to important pixels in the corresponding facial regions, as shown in Fig. 3(b).
Step 4. Iterating: Go to Step 2 until a pre-specified number of iterations is reached or some termination criterion is satisfied.
After sufficient training, each weight vector in a trained SOM corresponds to a landmark point in the corresponding facial region, as shown in Fig. 4.
Fig. 3. The SOM training procedure. (a) The use of a rhombus for the initial weight vectors. (b) An example of a trained SOM after 50 iterations.
Fig. 4. The correspondence between the trained SOMs and the landmark points in the facial regions.
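The sketch below illustrates one plausible reading of the modified SOM described above: each neuron's weight is a 2-D point inside one facial region, weights start on a rhombus, and the update is scaled by a gradient-derived factor so the map converges toward high-gradient (feature) pixels. The exact input encoding, rhombus placement, and decay schedules are assumptions, not the authors' implementation.

```python
# A sketch of the modified SOM feature-point extraction under stated assumptions:
# neuron weights are 2-D points, samples are pixel coordinates of the region, and
# the learning rate is scaled by the Sobel gradient magnitude at the sampled pixel.
import numpy as np
import cv2

def train_som(region, grid=(7, 7), iterations=50, eta0=0.5, sigma0=2.0):
    """region: grayscale sub-image of one facial region (e.g., the nose box)."""
    h, w = region.shape
    gx = cv2.Sobel(region, cv2.CV_32F, 1, 0)             # row/column edge gradients
    gy = cv2.Sobel(region, cv2.CV_32F, 0, 1)
    mag = np.abs(gx) + np.abs(gy)

    # Step 1: rhombus initialization, a diamond lattice centred in the region.
    rows, cols = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij")
    u = (rows - (grid[0] - 1) / 2) / grid[0]
    v = (cols - (grid[1] - 1) / 2) / grid[1]
    weights = np.stack([h / 2 + (u + v) * h / 2, w / 2 + (u - v) * w / 2], axis=-1)

    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1).reshape(-1, 2)
    grid_pos = np.stack([rows, cols], axis=-1).astype(float)

    for k in range(iterations):
        eta = eta0 * (1 - k / iterations)                # decaying learning rate (assumption)
        sigma = max(sigma0 * (1 - k / iterations), 0.5)  # shrinking neighborhood width
        for p in coords[np.random.permutation(len(coords))]:
            s = min(mag[tuple(p)] / 255.0, 1.0)          # gradient-based weighting factor
            d = np.linalg.norm(weights - p, axis=-1)     # Step 2: winner finding
            win = np.unravel_index(np.argmin(d), d.shape)
            lat = np.linalg.norm(grid_pos - grid_pos[win], axis=-1)
            nb = np.exp(-lat ** 2 / (2 * sigma ** 2))    # neighborhood function, cf. Eq. (3)
            weights += (s * eta * nb)[..., None] * (p - weights)   # cf. Eq. (2)
    return weights    # each weight vector is one landmark point in the region
```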
2.3 Landmark Point Tracking
To track the landmark points, we adopt a two-stage neighborhood-correlation optical flow tracking algorithm. At the first stage, we adopt the optical flow method to automatically track the 137 landmark points in the image sequence. Similar to [7], we adopt the cross-correlation optical flow method. The cross-correlation between a T×T template in the previous image and a W×W searching window in the present image is calculated, and the position with the maximum cross-correlation value, provided it is larger than a pre-specified threshold, is located in the present image. The accuracy of the cross-correlation method is sensitive to illumination changes, noise, the template size, the moving speed, etc. Due to these disturbing factors, landmark points with correlation values smaller than the pre-specified threshold are apt to result in tracking errors; therefore, we cannot directly use the positions computed by the cross-correlation method for them. To provide an acceptable prediction of the positions of those points with small correlation values, we propose to fully exploit the topology-preserving property of the SOM. Our assumption is that nearby landmark points in a facial region move, to a certain extent, in a coordinated fashion. For a landmark point with a low correlation value, we use the average of the positions of its neighbors whose correlation values are larger than the threshold. For each landmark, the information about its neighbors is already embedded in the trained SOMs, as shown in Fig. 4.
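The following sketch shows the two-stage idea: normalized cross-correlation template matching per landmark, with a fallback to the average motion of reliably tracked SOM neighbors when the correlation peak is below a threshold. The window sizes, the threshold value, and the neighbor lookup structure are illustrative assumptions.

```python
# A sketch of neighborhood-correlation landmark tracking; T, W, thresh and the
# neighbour lists are assumptions, not the paper's exact values.
import numpy as np
import cv2

def track_points(prev, curr, points, neighbours, T=11, W=21, thresh=0.8):
    """points: (N, 2) array of (row, col); neighbours: list of index lists per point."""
    points = np.asarray(points, dtype=float)
    new_pts = points.copy()
    ok = np.zeros(len(points), dtype=bool)
    r, s = T // 2, W // 2
    for i, (y, x) in enumerate(points):
        y, x = int(y), int(x)
        tmpl = prev[y - r:y + r + 1, x - r:x + r + 1]
        win = curr[y - s:y + s + 1, x - s:x + s + 1]
        if tmpl.shape != (T, T) or win.shape != (W, W):
            continue                                    # too close to the image border
        res = cv2.matchTemplate(win, tmpl, cv2.TM_CCORR_NORMED)
        _, peak, _, loc = cv2.minMaxLoc(res)
        if peak >= thresh:                              # stage 1: reliable correlation peak
            new_pts[i] = (y - s + loc[1] + r, x - s + loc[0] + r)
            ok[i] = True
    # Stage 2: weakly correlated points inherit the mean displacement of their
    # reliably tracked SOM neighbours (topology-preserving property).
    disp = new_pts - points
    for i in np.nonzero(~ok)[0]:
        good = [j for j in neighbours[i] if ok[j]]
        if good:
            new_pts[i] = points[i] + disp[good].mean(axis=0)
    return new_pts
```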
2.4 Expression Recognition
There are in total 137 neurons in the four regions. Basically, the displacement vectors of these 137 landmark points located on the SOMs are used for the facial expression recognition. The displacement of each landmark point is calculated by subtracting its original position in the first image of the sequence from its final position in the last image. We cannot directly feed these 137 displacement vectors into a classifier for facial expression recognition because the sizes of the facial feature regions vary from person to person. In addition, head movement may affect the displacements. The 137 displacements therefore have to be normalized in some way before they are input to a classifier. To remedy the problem of head movement, we use the average displacement vector of the 16 landmark points corresponding to the 16 neurons of the 4×4 network in the nose region to approximate the head displacement vector. The head displacement vector is then subtracted from all displacement vectors. Next, the remaining 136 displacement vectors are re-sampled into 70 average displacement vectors (as shown in Fig. 5) in order to make the recognition system person-independent. We take the left eye region as an example to illustrate how we re-sample the 36 displacement vectors located in that region. First of all, we find a rectangle that circumscribes the 36 landmark points in the left eye region. Then we partition the rectangle into 20 small rectangles. The average displacement vector of the landmark points lying in the same small rectangle is computed. Therefore, 20 displacement vectors represent the left eye region. The same re-sampling procedure is applied to the right eye region and the mouth region. Since the mouth region is larger than the eye region, we use 30 small rectangles in the mouth region. Finally, there are in total 70 normalized displacement vectors, as shown in Fig. 5.
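As a sketch of this normalization, the code below removes the approximated head motion using the nose-region landmarks and then pools each region's displacements into small rectangles (20 per eye, 30 for the mouth), giving 70 averaged vectors. The region index lists and the grid shapes (4×5 for the eyes, 5×6 for the mouth) are illustrative assumptions, since the paper does not specify them.

```python
# A sketch of head-motion removal and regional pooling into 70 displacement vectors.
import numpy as np

def pool_region(points, disp, grid):
    """Average displacements of landmarks falling into grid cells of their bounding box."""
    mins, maxs = points.min(axis=0), points.max(axis=0) + 1e-6
    cell = np.floor((points - mins) / (maxs - mins) * grid).astype(int)
    cell = np.minimum(cell, np.array(grid) - 1)
    pooled = np.zeros((grid[0] * grid[1], 2))
    for c in range(grid[0] * grid[1]):
        mask = (cell[:, 0] * grid[1] + cell[:, 1]) == c
        if mask.any():
            pooled[c] = disp[mask].mean(axis=0)
    return pooled

def build_feature_vector(points, disp, idx):
    """points, disp: (137, 2) arrays; idx: dict of landmark indices per region (assumed)."""
    head = disp[idx["nose"]].mean(axis=0)          # approximate head displacement
    disp = disp - head                             # remove head motion
    feats = [pool_region(points[idx[r]], disp[idx[r]], g)
             for r, g in (("left_eye", (4, 5)), ("right_eye", (4, 5)), ("mouth", (5, 6)))]
    return np.concatenate(feats).ravel()           # 70 vectors -> 140-dim input to the MLP
```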
Finally, a multi-layer perceptron (MLP) with the structure 140×10×10×7 was adopted for the classification of the seven expressions, including the six basic facial expressions (i.e., happiness, sadness, surprise, fear, anger, and disgust) and a neutral facial expression. The structure of the MLP was chosen based on many simulation results.
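A minimal sketch of this classification stage is given below, using scikit-learn's MLPClassifier with two hidden layers of 10 units as a stand-in for the 140-10-10-7 perceptron. The training hyper-parameters are not stated in the paper, so those shown are assumptions.

```python
# A sketch of the 140-10-10-7 expression classifier; hyper-parameters are assumptions.
from sklearn.neural_network import MLPClassifier

EXPRESSIONS = ["happiness", "sadness", "surprise", "fear", "anger", "disgust", "neutral"]

def train_expression_mlp(X_train, y_train):
    """X_train: (n_sequences, 140) normalized displacement features; y_train: labels 0..6."""
    clf = MLPClassifier(hidden_layer_sizes=(10, 10), activation="logistic",
                        max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    return clf

# usage: predictions = train_expression_mlp(X_train, y_train).predict(X_test)
```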
Fig. 5. The final 70 displacement vectors re-sampled from the 137 displacement vectors.
3. SIMULATION RESULTS
The performance of the proposed system was tested on the well-known Cohn-Kanade Database [37]-[38] and the FG-NET database from the Technical University of Munich [39]. The Cohn-Kanade Database currently contains 2105 digitized image sequences performed by 182 adult subjects. This database has been FACS (Facial Action Coding System) coded. The FG-NET database is an image database containing face images of a number of subjects performing the six basic emotions. The database contains material gathered from 18 different individuals. Each individual performed all six desired actions three times. Additionally, three sequences with no expression at all were recorded. In total, there are 399 sequences in the database.

To provide accurately labeled sequences for training the MLP to recognize the 7 facial expressions (i.e., happiness, sadness, surprise, fear, anger, disgust, and neutral), we asked 13 subjects to visually evaluate the two databases and label each sequence with a certain expression. Via the majority consensus rule, we finally selected 486 image sequences from the Cohn-Kanade Database and 364 sequences from the FG-NET database, respectively.

The training data set consisted of 75% of the labeled data, and the remaining data was used as the testing data. For the Cohn-Kanade database, the recognition results are tabulated in Tables 1-2. The average correct recognition rates were 94% and 91% for the training data and testing data, respectively. The recognition results for the FG-NET database are tabulated in Tables 3-4. The average correct recognition rates were 88.8% and 83.9% for the training data and testing data, respectively. Comparisons with other existing methods are shown in Table 5. Table 5 shows that the performance of the proposed SOM-based facial expression recognition system was comparable to that of existing methods.
4. CONCLUSION
In this paper, an SOM-based automatic facial expression recognition system has been presented. The proposed system is able to automatically detect human faces, extract feature points, and perform facial expression recognition from image sequences. First of all, the method proposed by Viola and Jones is used to detect a face in an image. After a human face is detected, a composite method is applied to locate the pupils so that the detected face image can be rotated, trimmed, and normalized to an image of size 80×60. To alleviate the computational load of extracting facial features, we propose the use of SOMs. We then adopt a two-stage neighborhood-correlation optical flow tracking algorithm to track the facial features. Finally, a multi-layer perceptron (MLP) with the structure 140×10×10×7 is adopted for the classification of the seven expressions, including the six basic facial expressions (i.e., happiness, sadness, surprise, fear, anger, and disgust) and a neutral facial expression. Simulation results showed that the performance of the proposed SOM-based facial expression recognition system is comparable to that of existing methods.
ACKNOWLEDGMENTS
This paper was partly supported by the National Science Council, Taiwan, R.O.C, under NSC
101-2221-E-008-124-MY3 and NSC 101-2911-I-008-001, CGH-NCU Joint Research Foundation
102NCU-CGH-04, and LSH-NCU Joint Research Foundation 102NCU-CGH-04. Also, the
authors would like to thank Dr. Cohn et al. for providing us their image sequences in the Cohn-
Kanade AU-Coded Facial Expression Database.
REFERENCES
[1] G. W. Cottrell and J. Metcalfe, “EMPATH: Face, gender, and emotion recognition using holons,”
Advances in Neural Information Processing Systems, vol. 3, pp. 564-571,1991.
[2] K. Mase, “Recognition of facial expression from optical flow,” IEICE Trans., vol. E74, no. 10, pp.
3474-3483,1991.
[3] D. Terzopoulos and K. Waters, “Analysis and synthesis of facial image sequences using physical and
anatomical models,” IEEE Trans. Pattern Anal. Machine Intell., vol. 15, pp. 569-579,1993.
[4] I. A. Essa and A. Pentland, “A vision system for observing and extracting facial action parameters,”
in Proc. Computer Vision and Pattern Recognition, pp. 76-83, 1994.
[5] K. Matsuno, C. Lee, and S. Tsuji, “Recognition of human facial expressions without feature
extraction,” ECCV, pp. 513-520, 1994.
[6] T. Darrel, I. Essa, and A. P. Pentland, “Correlation and interpolation networks for real-time expression analysis/synthesis,” Advances in Neural Information Processing Systems (NIPS) 7, MIT Press, 1995.
[7] Y. Yacoob and L. D. Davis, “Recognizing human facial expressions from long image sequences using optical flow,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 18, no. 6, pp. 636-642, 1996.
[8] M. Rosenblum, Y. Yacoob, and L. S. Davis, “Human expression recognition from motion using a radial basis function network architecture,” IEEE Trans. on Neural Networks, vol. 7, no. 5, pp. 1121-1138, 1996.
[9] S. Kimura and M. Yachida, “Facial expression recognition and its degree estimation,” in Proc.
Computer Vision and Pattern Recognition, pp. 295-300, 1997.
[10] C. L. Huang and Y. M. Huang, “Facial expression recognition using model-based feature extraction
and action parameters classification,” Journal of Visual Communication and Image Representation,
vol. 8, no. 3, pp. 278-290, 1997.
[11] T. Otsuka and J. Ohya, “Spotting segments displaying facial expression from image sequences using HMM,” in Proc. IEEE Conf. on Automatic Face and Gesture Recognition, pp. 442-447, Apr. 1998.
[12] M.S. Bartlett, J.C. Hager, P. Ekman, and T.J. Sejnowski, “Measuring Facial Expressions by Computer
Image Analysis,” Psychophysiology, vol. 36, pp. 253-263, 1999.
[13] G. Donato, M.S. Bartlett, J.C. Hager, P. Ekman, and T.J. Sejnowski, “Classifying Facial Actions,”
IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 21, no.10,1999.
[14] J. J. Lien, T. Kanade, J. Cohn, and C. Li, “Detection, tracking, and classification of action units in
facial expression,”Journal of Robotics and Autonomous Systems, vol. 31, no. 3, pp. 131-146, 2000.
[15] Y. l. Tian, T. Kanade, and J. F. Cohn, “Recognizing Action Units for Facial Expression Analysis,”
IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, 2001.
[16] Y. l. Tian, T. Kanade, and J. F. Cohn, “Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity,” in Proc. of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 229-234, 2002.
[17] M. Yeasin, B. Bullot, and R. Sharma, “Recognition of facial expressions and measurement of levels of interest from video,” IEEE Trans. on Multimedia, vol. 8, pp. 500-508, June 2006.
[18] S. Kumano, K. Otsuka, J. Yamato, E. Maeda, and Y. Sato, “Pose-invariant facial expression recognition using variable-intensity templates,” in Proc. ACCV'07, vol. 4843, pp. 324-334, 2007.
[19] J. Wang and L. Yin, “Static topographic modeling for facial expression recognition and analysis,”
Computer Vision and Image Understanding, vol. 108, no. 1-2, pp. 19-34, Oct. 2007.
[20] M. C. Su, Y. J. Hsieh, and D. Y. Huang, "Facial Expression Recognition using Optical Flow without
Complex Feature Extraction," WSEAS Transactions on Computers, vol. 6, pp. 763-770, 2007.
[21] I. Kotsia and I. Pitas, “Facial expression recognition in image sequences using geometric deformation features and support vector machines,” IEEE Trans. on Image Processing, vol. 16, no. 1, pp. 172-187, 2007.
[22] P. Wang, F. Barrett, E. Martin, M. Milonova, R. E. Gur, R. C. Gur, C. Kohler, and R. Verma, “Automated video-based facial expression analysis of neuropsychiatric disorders,” Neuroscience Methods, vol. 168, pp. 224-238, Feb. 2008.
[23] I. Kotsia, I. Buciu, and I. Pitas, “An analysis of facial expression recognition under partial facial image occlusion,” Image and Vision Computing, vol. 26, no. 7, pp. 1052-1067, July 2008.
[24] P. Ekman and W. V. Friesen, “Constants across cultures in the face and emotion,” Journal of
Personality and Social Psychology, vol. 17, pp. 124-129, 1971.
[25] P. Ekman and W.V. Friesen, Facial Action Coding System(FACS), Consulting Psychologists Press,
1978.
[26] M. Pantic and L. J. M. Rothkrantz, “Automatic analysis of facial expressions: the state of the art,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1424-1445, 2000.
[27] B. Fasel and J. Luettin, “Automatic facial expression analysis: a survey,” Pattern Recognition, vol.
36, no. 1, pp. 259-275,2003.
[28] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, “A survey of affect recognition methods: audio, visual, and spontaneous expressions,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39-58, 2009.
[29] T. Kohonen, Self-Organizing Maps. Springer-Verlag, Berlin, 1995.
[30] S. H. Lin, S. Y. Kung, and L. J. Lin, “Face recognition/detection by probabilistic decision-based neural network,” IEEE Trans. on Neural Networks, vol. 8, no. 1, pp. 114-132, 1997.
[31] H. A. Rowley, S. Baluja, and T. Kanade, “Neural network-based face detection,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, 1998.
[32] K. K. Sung and T. Poggio, “Example-based learning for view-based human face detection,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 39-51, 1998.
[33] M. C. Su and C. H. Chou, “Associative-memory-based human face detection,” IEICE Trans. on Fundamentals of Electronics, Communications and Computer Sciences, vol. E84-D, no. 8, pp. 1067-1074, 2001.
[34] P. Viola and M. J. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proc. of the IEEE Computer Society International Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 511-518, 2001.
[35] M. Sonka, V. Hlvac, and R. Boyle, Image Processing, Analysis, and Machine Vision, PWS
Publishing, 1999.
[36] Y. l. Tian, T. Kanade, and J. F. Cohn, “Facial expression analysis,” Handbook of Face Recognition,
S. Z. Li and A. K. Jain, eds., Chap. 11, pp. 247-276, 2001.
[37] T. Kanade, J. Cohn, and Y. Tian, “Comprehensive Database for Facial Expression Analysis,” in Proc. of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 45-63, 2000.
[38] Cohn-Kanade AU-Coded Facial Expression Database. [Online]. Available: http://vasc.ri.cmu.edu/idb/html/face/facial_expression/index.html [Accessed: July 2008].
[39] Database with Facial Expressions and Emotions from the Technical University Munich. [Online]. Available: http://www.mmk.ei.tum.de/~waf/fgnet/ [Accessed: July 2008].
[40] S. Hadi, A. Ali, and K. Sohrab, “Recognition of six basic facial expressions by feature-points tracking using RBF neural network and fuzzy inference system,” in Proc. of the IEEE Int. Conf. on Multimedia and Expo, vol. 2, pp. 1219-1222, 2004.
[41] F. Wallhoff, B. Schuller, M. Hawellek, and G. Rigoll, “Efficient recognition of authentic dynamic
facial expressions on the Feedtum Database,” in IEEE Int. Conf. on Multimedia and Expo, July 2006,
pp. 493-496.
Table 1. The recognition performance for the training data set from the Cohn-Kanade Database. Su: Surprise, F: Fear, H: Happy, Sa: Sad, D: Disgust, A: Angry, and N: Neutral.

Real \ NN | Su | F | H | Sa | D | A | N | Recognition rate
Su | 70 | 1 | 0 | 0 | 0 | 0 | 0 | 98.6%
F | 0 | 24 | 0 | 0 | 1 | 0 | 2 | 88.9%
H | 0 | 0 | 77 | 0 | 0 | 0 | 1 | 98.7%
Sa | 0 | 0 | 0 | 52 | 1 | 2 | 1 | 92.9%
D | 0 | 1 | 0 | 0 | 33 | 5 | 2 | 80.5%
A | 0 | 1 | 0 | 1 | 0 | 38 | 1 | 92.7%
N | 0 | 0 | 0 | 0 | 1 | 1 | 48 | 96.0%
Average |  |  |  |  |  |  |  | 94%
Table 2. The recognition performance for the testing data set from the Cohn-Kanade Database. Su: Surprise, F: Fear, H: Happy, Sa: Sad, D: Disgust, A: Angry, and N: Neutral.

Real \ NN | Su | F | H | Sa | D | A | N | Recognition rate
Su | 20 | 0 | 0 | 0 | 0 | 0 | 2 | 91.0%
F | 0 | 7 | 0 | 0 | 0 | 2 | 0 | 77.8%
H | 0 | 0 | 25 | 0 | 0 | 0 | 2 | 92.6%
Sa | 0 | 0 | 0 | 18 | 1 | 0 | 0 | 94.7%
D | 0 | 1 | 0 | 0 | 12 | 1 | 0 | 85.7%
A | 0 | 0 | 0 | 0 | 0 | 12 | 2 | 85.7%
N | 0 | 0 | 0 | 0 | 0 | 0 | 17 | 100.0%
Average |  |  |  |  |  |  |  | 91%
Table 3. The recognition performance for the training data set from the FG-NET Database. Su: Surprise, F: Fear, H: Happy, Sa: Sad, D: Disgust, A: Angry, and N: Neutral.

Real \ NN | Su | F | H | Sa | D | A | N | Recognition rate
Su | 37 | 1 | 0 | 0 | 0 | 0 | 2 | 92.5%
F | 0 | 12 | 0 | 1 | 0 | 0 | 9 | 54.5%
H | 0 | 0 | 42 | 0 | 0 | 0 | 0 | 100.0%
Sa | 0 | 0 | 0 | 36 | 0 | 0 | 3 | 92.3%
D | 0 | 2 | 0 | 1 | 32 | 2 | 2 | 82.1%
A | 0 | 0 | 0 | 0 | 0 | 30 | 6 | 83.3%
N | 0 | 0 | 0 | 0 | 0 | 0 | 42 | 100.0%
Average |  |  |  |  |  |  |  | 88.8%

Table 4. The recognition performance for the testing data set from the FG-NET Database. Su: Surprise, F: Fear, H: Happy, Sa: Sad, D: Disgust, A: Angry, and N: Neutral.

Real \ NN | Su | F | H | Sa | D | A | N | Recognition rate
Su | 9 | 0 | 0 | 0 | 0 | 0 | 1 | 90.0%
F | 0 | 6 | 0 | 0 | 0 | 0 | 2 | 75.0%
H | 0 | 0 | 12 | 0 | 1 | 0 | 2 | 80.0%
Sa | 0 | 0 | 0 | 12 | 0 | 0 | 2 | 85.7%
D | 0 | 0 | 1 | 0 | 10 | 0 | 2 | 76.9%
A | 0 | 0 | 0 | 1 | 0 | 11 | 1 | 84.6%
N | 0 | 0 | 0 | 1 | 0 | 0 | 13 | 92.9%
Average |  |  |  |  |  |  |  | 83.9%
Table 5. Comparisons with other existing methods.

Method | Expressions | Recognition | Database | Features | Classifier
Our method | 7 | 93.2% | Cohn-Kanade (486 sequences: 3/4 for training and 1/4 for testing) | Modified SOM | MLP
Our method | 7 | 87.6% | FG-NET (364 sequences) | Modified SOM | MLP
Su et al. [20] | 5 | 95.1% | Cohn-Kanade (486 sequences) | Uniform grids | MLP
Yeasin et al. [17] | 6 | 90.9% | Cohn-Kanade (488 sequences) | Grid points | HMMs
Kotsia et al. [21] | 6 | 91.6% | Cohn-Kanade and JAFFE (leave-20%-out cross-validation) | Texture model | SVM
Seyedarabi et al. [40] | 6 | 91.6% | Cohn-Kanade (43 subjects for training and 10 subjects for testing) | Manual labeling | RBF
Wallhoff et al. [41] | 7 | 61.7% | FG-NET (5-fold cross-validation) | 2D-DCT | SVM