Jan Zizka (Eds) : CCSIT, SIPP, AISC, PDCTA - 2013
pp. 525–531, 2013. Β© CS & IT-CSCP 2013 DOI : 10.5121/csit.2013.3657
FACIAL LANDMARKING LOCALIZATION
FOR EMOTION RECOGNITION USING
BAYESIAN SHAPE MODELS
Hernan F. Garcia, Alejandro T. Valencia and Alvaro A. Orozco
Technological University of Pereira, La Julita, Pereira, Risaralda, Colombia
hernan.garcia@utp.edu.co, cristian.torres@utp.edu.co, aaog@utp.edu.co
ABSTRACT
This work presents a framework for emotion recognition based on facial expression analysis, using Bayesian Shape Models (BSM) for facial landmark localization. Facial feature tracking compliant with the Facial Action Coding System (FACS) is performed with the Bayesian Shape Model, whose parameters are estimated with an implementation of the EM algorithm. We describe the characterization methodology derived from the parametric model and evaluate its accuracy for feature detection and for estimating the parameters associated with facial expressions, analyzing its robustness to pose and local variations. A methodology for emotion characterization is then introduced to perform the recognition. The experimental results show that the proposed model can effectively detect the different facial expressions, outperforming conventional approaches for emotion recognition and obtaining high performance in estimating the emotion present in a given subject. The model and the characterization methodology detected the emotion type correctly in 95.6% of the cases.
KEYWORDS
Facial landmarking, Bayesian framework, Fitting, Action Unit
1. INTRODUCTION
This work addresses the problem of emotion recognition in image sequences with varying facial expression, illumination and pose, in a fully automatic way. Although some progress has already been made in emotion recognition, several unsolved issues still exist. For example, which features are the most important for emotion recognition remains an open problem, and one that has seldom been studied in computer science. In order to define which facial features are most important for recognizing the emotion type, we rely on Ekman's study to develop a methodology able to perform emotion recognition through a robust facial expression analysis [1].
Although there has been a variety of work in the emotion recognition field, the results are not yet optimal, because the techniques used for facial expression analysis are not robust enough. Busso et al. [2] present a system for recognizing emotions through facial expressions displayed in live video streams and video sequences. However, works such as those presented in [3] suggest
that developing a good methodology for characterizing emotional states based on facial expressions leads to more robust recognition systems. The Facial Action Coding System (FACS), proposed by Ekman et al. [1], is a comprehensive and anatomically based system used to measure all visually discernible facial movements in terms of atomic facial actions called Action Units (AUs). As AUs are independent of interpretation, they can be used for any high-level decision-making process, including the recognition of basic emotions according to Emotional FACS (EM-FACS) and the recognition of various affective states according to the FACS Affect Interpretation Database (FACSAID) introduced by Ekman et al. [4], [5], [6], [7]. From the detected features, it is possible to estimate the emotion present in a particular subject, based on a comparison of the estimated facial expression shape against a set of facial expressions for each emotion [3], [8], [9].
In this work, we develop a novel technique for emotion recognition based on computer vision, analysing the visual responses of the facial expression through the variations of the facial features, especially those in the regions defined by FACS. These features are detected using Bayesian estimation techniques, namely the Bayesian Shape Model (BSM) proposed in [10], which, from a-priori knowledge of the object (the face to be analyzed) and aided by a parametric model (Candide3 in this case) [11], estimates the object shape with a high level of accuracy. The BSM can be seen as an extension of the Active Shape Model (ASM) and the Active Appearance Model (AAM), which predominantly consider the shape of an object class [12], [13]. Facial feature detection is widely used in face recognition tasks, database searching, image restoration and other models based on coding sequences of images containing human faces. However, the accuracy with which these features are detected does not reach high performance levels, due to issues present in the scene such as lighting changes, variations in the pose of the face or elements that produce some kind of occlusion. Therefore, it is important to analyze how useful the image processing techniques developed in this work are. The rest of the paper is arranged as follows. Section 2 provides a detailed discussion of model-based facial feature extraction. Section 3 presents our emotion characterization method. Sections 4 and 5 discuss the experimental setup and results, respectively. The paper concludes in Section 6 with a summary and a discussion of future research.
2. A BAYESIAN FORMULATION TO SHAPE REGISTRATION
The probabilistic formulation of the shape registration problem contains two models: one denotes the prior shape distribution in tangent shape space, and the other is a likelihood model in image shape space. Based on these two models, we derive the posterior distribution of the model parameters.
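As a compact sketch of this decomposition (following the general form of the Bayesian shape model in [10]; the specific densities of this paper's derivation are not reproduced here), the posterior over the tangent-space shape parameters b and the similarity transformation T, given an observed image shape y, can be written as

```latex
p(\mathbf{b}, T \mid \mathbf{y}) \;\propto\;
\underbrace{p(\mathbf{y} \mid \mathbf{b}, T)}_{\text{likelihood in image shape space}}\;
\underbrace{p(\mathbf{b})}_{\text{prior in tangent shape space}},
\qquad
p(\mathbf{b}) = \mathcal{N}(\mathbf{0}, \Lambda),
\qquad
p(\mathbf{y} \mid \mathbf{b}, T) = \mathcal{N}\!\big(T(\bar{\mathbf{x}} + \Phi\,\mathbf{b}),\; \sigma^{2} I\big),
```

where x-bar is the mean shape, Phi the matrix of shape eigenvectors, Lambda the diagonal matrix of eigenvalues and sigma^2 an isotropic observation noise. These Gaussian forms are the standard ASM/BSM assumptions rather than a restatement of this paper's exact equations; the BSM maximizes this posterior with an EM-type algorithm, as noted in the abstract.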
2.1 Tangent Space Approximation
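The equations of this subsection are not reproduced in the extracted text; as a hedged illustration of the standard construction only, the sketch below projects Procrustes-aligned shapes onto the tangent space at the mean shape, the usual linearization employed by statistical shape models. Function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def to_tangent_space(shapes, mean_shape):
    """Project aligned shape vectors onto the tangent space at the mean shape.

    shapes     : (N, 2K) array of Procrustes-aligned shapes (x1, y1, ..., xK, yK).
    mean_shape : (2K,) mean shape.
    """
    mean = mean_shape / np.linalg.norm(mean_shape)   # unit-norm mean shape
    proj = shapes @ mean                             # dot product of each shape with the mean
    # Rescale each shape so its projection onto the mean equals 1; the residual
    # then lies in the hyperplane tangent to the shape manifold at the mean.
    tangent = shapes / proj[:, None] - mean
    return tangent
```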
Computer Science & Information Technology (CS & IT) 527
Fig. 1. The mean face model and the normalized face models of the training set [16]
3. EMOTION CHARACTERIZATION
3.1 Facial Expression Analysis
Facial expressions are generated by the contraction or relaxation of facial muscles, or by other physiological processes such as coloring of the skin, tears in the eyes or sweat on the skin. We restrict ourselves to the contraction or relaxation of the muscles. The contraction or relaxation of muscles can be described by a system of 43 Action Units (AUs) [1].
Labeling of the Emotion Type
After choosing which AUs are used to represent the facial expressions that correspond to each emotion, we chose the Candide3 vertexes used to define the configuration of the face and the calculation of the FACS [11]. Figure 2 shows the shapes of the Candide3 models reproducing each emotion.
Fig. 2. Shape of each emotion type
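The specific AU subsets chosen per emotion follow the FACS tables in [1] and are not listed in the text above. Purely as an illustrative sketch, the mapping below uses commonly cited EMFACS-style prototypes; treat the AU lists and names as assumptions, not as the sets actually used in this work.

```python
# Illustrative EMFACS-style prototypes (assumed, not taken from this paper):
# each basic emotion is described by a small set of Action Units (AUs).
EMOTION_PROTOTYPE_AUS = {
    "happiness": {6, 12},               # cheek raiser, lip corner puller
    "sadness":   {1, 4, 15},            # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},         # brow raisers, upper lid raiser, jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},         # brow lowerer, lid/lip tighteners
    "disgust":   {9, 15, 16},           # nose wrinkler, lip depressors
    "neutral":   set(),
}

def candidate_emotions(active_aus):
    """Return emotions whose prototype AUs are all present in the detected AU set."""
    active = set(active_aus)
    return [e for e, aus in EMOTION_PROTOTYPE_AUS.items() if aus and aus <= active]
```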
4. EXPERIMENTAL SETUP
In this work we used the MUCT database which consists of 3755 faces from 276 subjects with 76
manual landmarks [16].
4.1 Evaluation Metrics
The shape model is estimated for every subject, contemplating 5 changes of pose (-35° and 35° in the X axis, and -25° and 25° in the Y axis, to simulate nods) and 1 frontal view, which yields 1124 images with pose changes and 276 frontal images. The
first step is to compute the average error of the distance between the manually labeled landmark
points pi and points estimated by the model for all training and test images. We compute the
relative error between the manually labeled landmark points and landmark points estimated by
the model for the eyelids region [17].
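A minimal sketch of these two metrics is shown below: the average Euclidean distance between manually labeled and estimated landmarks, and a relative error obtained by normalizing that distance with a region scale. The inter-ocular normalization used here is an assumption for illustration; the paper's relative error follows the definition in [17].

```python
import numpy as np

def average_landmark_error(manual_pts, estimated_pts):
    """Mean Euclidean distance between corresponding landmark points.

    manual_pts, estimated_pts : (K, 2) arrays of (x, y) landmark coordinates.
    """
    return np.mean(np.linalg.norm(manual_pts - estimated_pts, axis=1))

def relative_error(manual_pts, estimated_pts, left_eye_center, right_eye_center):
    """Average landmark error normalized by the inter-ocular distance (assumed scale)."""
    scale = np.linalg.norm(np.asarray(left_eye_center) - np.asarray(right_eye_center))
    return average_landmark_error(manual_pts, estimated_pts) / scale
```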
4.2 Landmarking The Training Set
The initial step is the selection of images to incorporate into the training set. In order to decide
which images will be include in the training set, the desired variations must be considered. In
order to include the whole region of the labeled face, there will be used a set of landmarks, which
consist in a group of 76 landmark points that depict the whole face regions in detail (eyes, nose,
mouth, chin, etc.) [16].
Fig. 3. Samples of the landmarking process
Computer Science & Information Technology (CS & IT) 529
After estimating the landmark points with the BSM, Procrustes analysis is performed to align the estimated facial expression with respect to the standard shape (the neutral emotion). Then, we extract the estimated model vertexes and form combinations of these sets (the complete set, and the sets of eyes, eyebrows and mouth); a sketch of the alignment step follows.
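The sketch below is a generic ordinary Procrustes fit (translation, scale, rotation) of an estimated expression shape onto the neutral reference shape, in the spirit of [14]; it is an illustration under those assumptions, not the paper's exact implementation.

```python
import numpy as np

def procrustes_align(shape, reference):
    """Align `shape` to `reference` using translation, scale and rotation (2D).

    shape, reference : (K, 2) arrays of corresponding landmark points.
    Returns `shape` mapped into the frame of `reference`.
    """
    mu_s, mu_r = shape.mean(axis=0), reference.mean(axis=0)
    s0, r0 = shape - mu_s, reference - mu_r                       # remove translation
    s0n, r0n = s0 / np.linalg.norm(s0), r0 / np.linalg.norm(r0)   # remove scale
    # Orthogonal Procrustes: rotation R minimizing ||s0n @ R - r0n||_F
    U, _, Vt = np.linalg.svd(s0n.T @ r0n)
    R = U @ Vt
    # Map back to the reference's scale and centroid
    return (s0n @ R) * np.linalg.norm(r0) + mu_r
```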
5. RESULTS
5.1 Bayesian Shape Model Estimation Error
In Table 1 it can be seen that, although the accuracy of the point estimation is greater for images of the training set, the average error is also small for the test images. This is due to a rigorous training and model-building procedure, in which the largest possible variety of shapes to estimate was considered. Moreover, although the average error for images with pose changes is slightly higher than for frontal images, the accuracy of the model estimation remains high for images showing changes in the pose of the face. It is also worth highlighting that the average estimation times of the BSM are relatively small, which would be helpful for online applications.
5.2 Distribution of the Relative Error
Figure 4(a) shows the distribution function of the relative error against the successful detection rate. It can be observed that, for a relative error of 0.091 in the case of the right eye fit, 0.084 for the left eye and 0.125 for the mouth region in frontal images, the detection rate is 100%, indicating that the accuracy of the BSM model fit is high. In addition, for images with pose variations the BSM achieves 100% in fitting the eye regions for relative errors of 0.120 and 0.123, and 0.13 for the mouth region, all of which are much lower than the established criterion of 0.25. We consider that the criterion Rerr < 0.25 is not adequate to establish a detection as correct, and can be unsuitable when facial feature detection is to be performed in images at smaller scales. For this reason, a detection is considered successful for errors Rerr < 0.15 [18].
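The detection-rate curve in Figure 4(a) is the empirical cumulative distribution of the relative error: for each threshold, the fraction of images whose relative error falls below it. A minimal sketch of how such a curve and the Rerr < 0.15 success criterion can be computed is shown below (variable names are illustrative).

```python
import numpy as np

def success_rate(relative_errors, threshold=0.15):
    """Fraction of images whose relative fitting error is below `threshold`."""
    errs = np.asarray(relative_errors)
    return np.mean(errs < threshold)

def detection_rate_curve(relative_errors, thresholds):
    """Empirical cumulative distribution of the relative error (Figure 4(a) style)."""
    errs = np.asarray(relative_errors)
    return np.array([np.mean(errs < t) for t in thresholds])
```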
Fig. 4. (a) Distribution of the relative error; (b) fitting results of the BSM model.
Finally, a set of images fitted by the BSM model is shown in Figure 4(b), evidencing the robustness of the proposed model.
5.3 Emotion Recognition
Table 2 shows the average success rates in emotion detection obtained for all the sets used in this analysis on the MUCT database, showing the high accuracy obtained with the RMSE metric. Another important observation is that the sets which provide the most evidence about the emotional state are B,C (mouth and eyebrows, separately) with a 95.6% average success rate, B,C,O (mouth, eyebrows and eyes as separate sets) with 92.2%, and CO,B (eyebrows and eyes together, mouth separately) with 91.1%, with the first of these providing the most discriminative results for the estimation of the emotion detection rate.
Table 2. Success rate of emotion recognition for the MUCT database.
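The following is a hedged sketch of the recognition step described above: the aligned estimated shape (or a subset of its vertexes, e.g. mouth or eyebrows) is compared against each emotion's prototype shape with the RMSE, and the emotion with the smallest error is returned. Prototype shapes and vertex-index subsets are assumed inputs and are not reproduced from the paper.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two (K, 2) landmark arrays."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def recognize_emotion(aligned_shape, prototypes, vertex_subset=None):
    """Nearest-prototype emotion classification by RMSE.

    aligned_shape : (K, 2) shape, already Procrustes-aligned to the neutral reference.
    prototypes    : dict mapping emotion name -> (K, 2) prototype shape.
    vertex_subset : optional index array selecting a region (e.g. mouth, eyebrows).
    """
    idx = slice(None) if vertex_subset is None else vertex_subset
    scores = {e: rmse(aligned_shape[idx], proto[idx]) for e, proto in prototypes.items()}
    return min(scores, key=scores.get), scores
```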
6. CONCLUSION AND FUTURE WORK
This paper presents a Bayesian shape model for emotion recognition using facial expression analysis. By projecting shapes onto tangent space, we have built a model describing the prior distribution of face shapes and their likelihood. We have developed the BSM algorithm to recover the shape parameters and transformation parameters of an arbitrary figure. The analyzed regions correspond to the eyes, eyebrows and mouth, on which detection and tracking of features is done using the BSM. The results show that the estimation of the points is accurate and complies with the requirements of this type of system. Through quantitative analysis, the robustness of the BSM model in feature detection is evaluated, and it is maintained for poses in a range between [-40° to 40°] on X and [-20° to 20°] on Y. The model and the characterization methodology detected the emotion type in 95.55% of the evaluated cases, showing high performance in the set analysis for emotion recognition. In addition, given the high accuracy of the proposed feature detection and characterization in estimating the parameters associated with the emotion type, the designed system has great potential for emotion recognition and is of interest for human/computer interaction research.
Given the current problem of facial pose changes in the scene, it would be of great interest to design a 3D model that is robust to these variations. Also, taking advantage of the set analysis demonstrated in this work, it would be interesting to develop a system that, in addition to using the set analysis, adds an adaptive methodology to estimate the status of each facial action unit, in order to improve the performance of the emotion recognition system.
REFERENCES
[1] Ekman, P.: Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life. 2nd edn. Owl Books, New York (2007)
[2] Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C.M., Kazemzadeh, A., Lee, S., Neumann, U., Narayanan, S.: Analysis of emotion recognition using facial expressions, speech and multimodal information. In: Sixth International Conference on Multimodal Interfaces (ICMI 2004), ACM Press (2004) 205–211
[3] Song, M., You, M., Li, N., Chen, C.: A robust multimodal approach for emotion recognition. Neurocomputing 71 (2008) 1913–1920
[4] Ekman, P., Rosenberg, E.: What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford University Press (2005)
[5] Griesser, R.T., Cunningham, D.W., Wallraven, C., Bülthoff, H.H.: Psychophysical investigation of facial expressions using computer animated faces. In: Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization (APGV '07), ACM (2007) 11–18
[6] Cheon, Y., Kim, D.: A natural facial expression recognition using differential-AAM and k-NNs. In: Proceedings of the Tenth IEEE International Symposium on Multimedia (ISM '08), Washington, DC, USA, IEEE Computer Society (2008) 220–227
[7] Cheon, Y., Kim, D.: Natural facial expression recognition using differential-AAM and manifold learning. Pattern Recognition 42 (2009) 1340–1350
[8] Ioannou, S., Raouzaiou, A., Tzouvaras, V., Mailis, T., Karpouzis, K., Kollias, S.: Emotion recognition through facial expression analysis based on a neurofuzzy network. Neural Networks 18 (2005) 423–435
[9] Cohen, I., Garg, A., Huang, T.S.: Emotion recognition from facial expressions using multilevel HMM. In: Neural Information Processing Systems (2000)
[10] Xue, Z., Li, S.Z., Teoh, E.K.: Bayesian shape model for facial feature extraction and recognition. Pattern Recognition 36 (2004) 2819–2833
[11] Ahlberg, J.: Candide-3 - an updated parameterised face. Technical Report LiTH-ISY-R-2326, Dept. of Electrical Engineering, Linköping University, Sweden (2001)
[12] Cootes, T., Taylor, C.: Statistical models of appearance for computer vision. Technical report, University of Manchester (2004)
[13] Matthews, I., Baker, S.: Active appearance models revisited. International Journal of Computer Vision 60 (2004) 135–164
[14] Gower, J.C.: Generalised Procrustes analysis. Psychometrika 40 (1975) 33–51
[15] Larsen, R.: Functional 2D Procrustes shape analysis. In: 14th Scandinavian Conference on Image Analysis. Volume 3540 of LNCS, Springer, Berlin Heidelberg (2005) 205–213
[16] Milborrow, S., Morkel, J., Nicolls, F.: The MUCT landmarked face database. Pattern Recognition Association of South Africa (2010). http://www.milbo.org/muct
[17] Hassaballah, M., Ido, S.: Eye detection using intensity and appearance information. In: IAPR Conference on Machine Vision Applications (2009)
[18] Hassaballah, M., Ido, S.: Eye detection using intensity and appearance information. (2009) 801–809

More Related Content

What's hot

Facial Expression Recognition Based on Facial Motion Patterns
Facial Expression Recognition Based on Facial Motion PatternsFacial Expression Recognition Based on Facial Motion Patterns
Facial Expression Recognition Based on Facial Motion Patternsijeei-iaes
Β 
Face detection for video summary using enhancement based fusion strategy
Face detection for video summary using enhancement based fusion strategyFace detection for video summary using enhancement based fusion strategy
Face detection for video summary using enhancement based fusion strategyeSAT Publishing House
Β 
ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...
ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...
ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...ijcseit
Β 
Image processing
Image processingImage processing
Image processingVinod Gurram
Β 
IRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial OcclusionIRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial OcclusionIRJET Journal
Β 
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodA Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodIRJET Journal
Β 
A Study on Face Recognition Technique based on Eigenface
A Study on Face Recognition Technique based on EigenfaceA Study on Face Recognition Technique based on Eigenface
A Study on Face Recognition Technique based on Eigenfacesadique_ghitm
Β 
AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...
AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...
AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...ieijjournal
Β 
Face recognition across non uniform motion blur, illumination, and pose
Face recognition across non uniform motion blur, illumination, and poseFace recognition across non uniform motion blur, illumination, and pose
Face recognition across non uniform motion blur, illumination, and posePvrtechnologies Nellore
Β 
SPONTANEOUS SMILE DETECTION WITH APPLICATION OF LANDMARK POINTS SUPPORTED BY ...
SPONTANEOUS SMILE DETECTION WITH APPLICATION OF LANDMARK POINTS SUPPORTED BY ...SPONTANEOUS SMILE DETECTION WITH APPLICATION OF LANDMARK POINTS SUPPORTED BY ...
SPONTANEOUS SMILE DETECTION WITH APPLICATION OF LANDMARK POINTS SUPPORTED BY ...csandit
Β 
Face recognition across non uniform motion blur, illumination, and pose
Face recognition across non uniform motion blur, illumination, and poseFace recognition across non uniform motion blur, illumination, and pose
Face recognition across non uniform motion blur, illumination, and poseRaja Ram
Β 
Facial rigging for 3 d character
Facial rigging for 3 d characterFacial rigging for 3 d character
Facial rigging for 3 d characterijcga
Β 
5116ijscmc01
5116ijscmc015116ijscmc01
5116ijscmc01ijscmcj
Β 
An algorithm to detect drivers
An algorithm to detect driversAn algorithm to detect drivers
An algorithm to detect driversijscmcj
Β 
Reconstruction of partially damaged facial image
Reconstruction of partially damaged facial imageReconstruction of partially damaged facial image
Reconstruction of partially damaged facial imageeSAT Publishing House
Β 

What's hot (16)

Facial Expression Recognition Based on Facial Motion Patterns
Facial Expression Recognition Based on Facial Motion PatternsFacial Expression Recognition Based on Facial Motion Patterns
Facial Expression Recognition Based on Facial Motion Patterns
Β 
Face detection for video summary using enhancement based fusion strategy
Face detection for video summary using enhancement based fusion strategyFace detection for video summary using enhancement based fusion strategy
Face detection for video summary using enhancement based fusion strategy
Β 
ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...
ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...
ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...
Β 
Image processing
Image processingImage processing
Image processing
Β 
IRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial OcclusionIRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
IRJET- A Survey on Facial Expression Recognition Robust to Partial Occlusion
Β 
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction MethodA Hybrid Approach to Recognize Facial Image using Feature Extraction Method
A Hybrid Approach to Recognize Facial Image using Feature Extraction Method
Β 
A Study on Face Recognition Technique based on Eigenface
A Study on Face Recognition Technique based on EigenfaceA Study on Face Recognition Technique based on Eigenface
A Study on Face Recognition Technique based on Eigenface
Β 
AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...
AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...
AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...
Β 
Face recognition across non uniform motion blur, illumination, and pose
Face recognition across non uniform motion blur, illumination, and poseFace recognition across non uniform motion blur, illumination, and pose
Face recognition across non uniform motion blur, illumination, and pose
Β 
Ch 1
Ch 1Ch 1
Ch 1
Β 
SPONTANEOUS SMILE DETECTION WITH APPLICATION OF LANDMARK POINTS SUPPORTED BY ...
SPONTANEOUS SMILE DETECTION WITH APPLICATION OF LANDMARK POINTS SUPPORTED BY ...SPONTANEOUS SMILE DETECTION WITH APPLICATION OF LANDMARK POINTS SUPPORTED BY ...
SPONTANEOUS SMILE DETECTION WITH APPLICATION OF LANDMARK POINTS SUPPORTED BY ...
Β 
Face recognition across non uniform motion blur, illumination, and pose
Face recognition across non uniform motion blur, illumination, and poseFace recognition across non uniform motion blur, illumination, and pose
Face recognition across non uniform motion blur, illumination, and pose
Β 
Facial rigging for 3 d character
Facial rigging for 3 d characterFacial rigging for 3 d character
Facial rigging for 3 d character
Β 
5116ijscmc01
5116ijscmc015116ijscmc01
5116ijscmc01
Β 
An algorithm to detect drivers
An algorithm to detect driversAn algorithm to detect drivers
An algorithm to detect drivers
Β 
Reconstruction of partially damaged facial image
Reconstruction of partially damaged facial imageReconstruction of partially damaged facial image
Reconstruction of partially damaged facial image
Β 

Similar to FACIAL LANDMARKING LOCALIZATION FOR EMOTION RECOGNITION USING BAYESIAN SHAPE MODELS

A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...IJMER
Β 
Human’s facial parts extraction to recognize facial expression
Human’s facial parts extraction to recognize facial expressionHuman’s facial parts extraction to recognize facial expression
Human’s facial parts extraction to recognize facial expressionijitjournal
Β 
A study of techniques for facial detection and expression classification
A study of techniques for facial detection and expression classificationA study of techniques for facial detection and expression classification
A study of techniques for facial detection and expression classificationIJCSES Journal
Β 
A Literature Review On Emotion Recognition System Using Various Facial Expres...
A Literature Review On Emotion Recognition System Using Various Facial Expres...A Literature Review On Emotion Recognition System Using Various Facial Expres...
A Literature Review On Emotion Recognition System Using Various Facial Expres...Lisa Graves
Β 
Face recogntion using PCA algorithm
Face recogntion using PCA algorithmFace recogntion using PCA algorithm
Face recogntion using PCA algorithmAshwini Awatare
Β 
An SOM-based Automatic Facial Expression Recognition System
An SOM-based Automatic Facial Expression Recognition SystemAn SOM-based Automatic Facial Expression Recognition System
An SOM-based Automatic Facial Expression Recognition Systemijscai
Β 
Spontaneous Smile Detection with Application of Landmark Points Supported by ...
Spontaneous Smile Detection with Application of Landmark Points Supported by ...Spontaneous Smile Detection with Application of Landmark Points Supported by ...
Spontaneous Smile Detection with Application of Landmark Points Supported by ...cscpconf
Β 
Face detection using the 3 x3 block rank patterns of gradient magnitude images
Face detection using the 3 x3 block rank patterns of gradient magnitude imagesFace detection using the 3 x3 block rank patterns of gradient magnitude images
Face detection using the 3 x3 block rank patterns of gradient magnitude imagessipij
Β 
Model Based Emotion Detection using Point Clouds
Model Based Emotion Detection using Point CloudsModel Based Emotion Detection using Point Clouds
Model Based Emotion Detection using Point CloudsLakshmi Sarvani Videla
Β 
Fiducial Point Location Algorithm for Automatic Facial Expression Recognition
Fiducial Point Location Algorithm for Automatic Facial Expression RecognitionFiducial Point Location Algorithm for Automatic Facial Expression Recognition
Fiducial Point Location Algorithm for Automatic Facial Expression Recognitionijtsrd
Β 
Happiness Expression Recognition at Different Age Conditions
Happiness Expression Recognition at Different Age ConditionsHappiness Expression Recognition at Different Age Conditions
Happiness Expression Recognition at Different Age ConditionsEditor IJMTER
Β 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Editor IJARCET
Β 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Editor IJARCET
Β 
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...Waqas Tariq
Β 
Facial_recognition_Siva vadapalli1.pptx.ppt
Facial_recognition_Siva vadapalli1.pptx.pptFacial_recognition_Siva vadapalli1.pptx.ppt
Facial_recognition_Siva vadapalli1.pptx.pptvijaynaidu51
Β 
Paper id 29201416
Paper id 29201416Paper id 29201416
Paper id 29201416IJRAT
Β 
Criminal Detection System
Criminal Detection SystemCriminal Detection System
Criminal Detection SystemIntrader Amit
Β 
Facial Expression Recognition
Facial Expression Recognition Facial Expression Recognition
Facial Expression Recognition Rupinder Saini
Β 

Similar to FACIAL LANDMARKING LOCALIZATION FOR EMOTION RECOGNITION USING BAYESIAN SHAPE MODELS (20)

A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...
Β 
Human’s facial parts extraction to recognize facial expression
Human’s facial parts extraction to recognize facial expressionHuman’s facial parts extraction to recognize facial expression
Human’s facial parts extraction to recognize facial expression
Β 
A study of techniques for facial detection and expression classification
A study of techniques for facial detection and expression classificationA study of techniques for facial detection and expression classification
A study of techniques for facial detection and expression classification
Β 
A Literature Review On Emotion Recognition System Using Various Facial Expres...
A Literature Review On Emotion Recognition System Using Various Facial Expres...A Literature Review On Emotion Recognition System Using Various Facial Expres...
A Literature Review On Emotion Recognition System Using Various Facial Expres...
Β 
Face recogntion using PCA algorithm
Face recogntion using PCA algorithmFace recogntion using PCA algorithm
Face recogntion using PCA algorithm
Β 
An SOM-based Automatic Facial Expression Recognition System
An SOM-based Automatic Facial Expression Recognition SystemAn SOM-based Automatic Facial Expression Recognition System
An SOM-based Automatic Facial Expression Recognition System
Β 
Spontaneous Smile Detection with Application of Landmark Points Supported by ...
Spontaneous Smile Detection with Application of Landmark Points Supported by ...Spontaneous Smile Detection with Application of Landmark Points Supported by ...
Spontaneous Smile Detection with Application of Landmark Points Supported by ...
Β 
Face detection using the 3 x3 block rank patterns of gradient magnitude images
Face detection using the 3 x3 block rank patterns of gradient magnitude imagesFace detection using the 3 x3 block rank patterns of gradient magnitude images
Face detection using the 3 x3 block rank patterns of gradient magnitude images
Β 
Ck36515520
Ck36515520Ck36515520
Ck36515520
Β 
Model Based Emotion Detection using Point Clouds
Model Based Emotion Detection using Point CloudsModel Based Emotion Detection using Point Clouds
Model Based Emotion Detection using Point Clouds
Β 
Fiducial Point Location Algorithm for Automatic Facial Expression Recognition
Fiducial Point Location Algorithm for Automatic Facial Expression RecognitionFiducial Point Location Algorithm for Automatic Facial Expression Recognition
Fiducial Point Location Algorithm for Automatic Facial Expression Recognition
Β 
Happiness Expression Recognition at Different Age Conditions
Happiness Expression Recognition at Different Age ConditionsHappiness Expression Recognition at Different Age Conditions
Happiness Expression Recognition at Different Age Conditions
Β 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113
Β 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113
Β 
Real time facial expression analysis using pca
Real time facial expression analysis using pcaReal time facial expression analysis using pca
Real time facial expression analysis using pca
Β 
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
Face Emotion Analysis Using Gabor Features In Image Database for Crime Invest...
Β 
Facial_recognition_Siva vadapalli1.pptx.ppt
Facial_recognition_Siva vadapalli1.pptx.pptFacial_recognition_Siva vadapalli1.pptx.ppt
Facial_recognition_Siva vadapalli1.pptx.ppt
Β 
Paper id 29201416
Paper id 29201416Paper id 29201416
Paper id 29201416
Β 
Criminal Detection System
Criminal Detection SystemCriminal Detection System
Criminal Detection System
Β 
Facial Expression Recognition
Facial Expression Recognition Facial Expression Recognition
Facial Expression Recognition
Β 

More from cscpconf

ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR cscpconf
Β 
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATION
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATION4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATION
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATIONcscpconf
Β 
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...cscpconf
Β 
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIESPROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIEScscpconf
Β 
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGICA SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGICcscpconf
Β 
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS cscpconf
Β 
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS cscpconf
Β 
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICTWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICcscpconf
Β 
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINDETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINcscpconf
Β 
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...cscpconf
Β 
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEM
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMIMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEM
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMcscpconf
Β 
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...cscpconf
Β 
AUTOMATED PENETRATION TESTING: AN OVERVIEW
AUTOMATED PENETRATION TESTING: AN OVERVIEWAUTOMATED PENETRATION TESTING: AN OVERVIEW
AUTOMATED PENETRATION TESTING: AN OVERVIEWcscpconf
Β 
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORK
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKCLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORK
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Β 
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...cscpconf
Β 
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATA
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAPROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATA
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAcscpconf
Β 
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCH
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHCHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCH
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHcscpconf
Β 
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...cscpconf
Β 
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGESOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGEcscpconf
Β 
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTGENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
Β 

More from cscpconf (20)

ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR
ANALYSIS OF LAND SURFACE DEFORMATION GRADIENT BY DINSAR
Β 
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATION
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATION4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATION
4D AUTOMATIC LIP-READING FOR SPEAKER'S FACE IDENTIFCATION
Β 
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...
MOVING FROM WATERFALL TO AGILE PROCESS IN SOFTWARE ENGINEERING CAPSTONE PROJE...
Β 
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIESPROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES
PROMOTING STUDENT ENGAGEMENT USING SOCIAL MEDIA TECHNOLOGIES
Β 
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGICA SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC
A SURVEY ON QUESTION ANSWERING SYSTEMS: THE ADVANCES OF FUZZY LOGIC
Β 
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS
DYNAMIC PHONE WARPING – A METHOD TO MEASURE THE DISTANCE BETWEEN PRONUNCIATIONS
Β 
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS
INTELLIGENT ELECTRONIC ASSESSMENT FOR SUBJECTIVE EXAMS
Β 
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTICTWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC
TWO DISCRETE BINARY VERSIONS OF AFRICAN BUFFALO OPTIMIZATION METAHEURISTIC
Β 
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAINDETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN
DETECTION OF ALGORITHMICALLY GENERATED MALICIOUS DOMAIN
Β 
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...
GLOBAL MUSIC ASSET ASSURANCE DIGITAL CURRENCY: A DRM SOLUTION FOR STREAMING C...
Β 
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEM
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEMIMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEM
IMPORTANCE OF VERB SUFFIX MAPPING IN DISCOURSE TRANSLATION SYSTEM
Β 
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...
EXACT SOLUTIONS OF A FAMILY OF HIGHER-DIMENSIONAL SPACE-TIME FRACTIONAL KDV-T...
Β 
AUTOMATED PENETRATION TESTING: AN OVERVIEW
AUTOMATED PENETRATION TESTING: AN OVERVIEWAUTOMATED PENETRATION TESTING: AN OVERVIEW
AUTOMATED PENETRATION TESTING: AN OVERVIEW
Β 
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORK
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKCLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORK
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORK
Β 
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...
VALIDATION METHOD OF FUZZY ASSOCIATION RULES BASED ON FUZZY FORMAL CONCEPT AN...
Β 
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATA
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATAPROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATA
PROBABILITY BASED CLUSTER EXPANSION OVERSAMPLING TECHNIQUE FOR IMBALANCED DATA
Β 
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCH
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCHCHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCH
CHARACTER AND IMAGE RECOGNITION FOR DATA CATALOGING IN ECOLOGICAL RESEARCH
Β 
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...
SOCIAL MEDIA ANALYTICS FOR SENTIMENT ANALYSIS AND EVENT DETECTION IN SMART CI...
Β 
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGESOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGE
SOCIAL NETWORK HATE SPEECH DETECTION FOR AMHARIC LANGUAGE
Β 
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTGENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT
Β 

Recently uploaded

Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceSamikshaHamane
Β 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Mark Reed
Β 
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxEPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxRaymartEstabillo3
Β 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
Β 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
Β 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
Β 
Romantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptxRomantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptxsqpmdrvczh
Β 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17Celine George
Β 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...JhezDiaz1
Β 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
Β 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPCeline George
Β 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
Β 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxChelloAnnAsuncion2
Β 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxDr.Ibrahim Hassaan
Β 

Recently uploaded (20)

Rapple "Scholarly Communications and the Sustainable Development Goals"
Rapple "Scholarly Communications and the Sustainable Development Goals"Rapple "Scholarly Communications and the Sustainable Development Goals"
Rapple "Scholarly Communications and the Sustainable Development Goals"
Β 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in Pharmacovigilance
Β 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)
Β 
Raw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptxRaw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptx
Β 
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptxEPANDING THE CONTENT OF AN OUTLINE using notes.pptx
EPANDING THE CONTENT OF AN OUTLINE using notes.pptx
Β 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Β 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
Β 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
Β 
Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
Β 
Romantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptxRomantic Opera MUSIC FOR GRADE NINE pptx
Romantic Opera MUSIC FOR GRADE NINE pptx
Β 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
Β 
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
ENGLISH 7_Q4_LESSON 2_ Employing a Variety of Strategies for Effective Interp...
Β 
Model Call Girl in Bikash Puri Delhi reach out to us at πŸ”9953056974πŸ”
Model Call Girl in Bikash Puri  Delhi reach out to us at πŸ”9953056974πŸ”Model Call Girl in Bikash Puri  Delhi reach out to us at πŸ”9953056974πŸ”
Model Call Girl in Bikash Puri Delhi reach out to us at πŸ”9953056974πŸ”
Β 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
Β 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
Β 
How to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERPHow to do quick user assign in kanban in Odoo 17 ERP
How to do quick user assign in kanban in Odoo 17 ERP
Β 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
Β 
Model Call Girl in Tilak Nagar Delhi reach out to us at πŸ”9953056974πŸ”
Model Call Girl in Tilak Nagar Delhi reach out to us at πŸ”9953056974πŸ”Model Call Girl in Tilak Nagar Delhi reach out to us at πŸ”9953056974πŸ”
Model Call Girl in Tilak Nagar Delhi reach out to us at πŸ”9953056974πŸ”
Β 
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptxGrade 9 Q4-MELC1-Active and Passive Voice.pptx
Grade 9 Q4-MELC1-Active and Passive Voice.pptx
Β 
Gas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptxGas measurement O2,Co2,& ph) 04/2024.pptx
Gas measurement O2,Co2,& ph) 04/2024.pptx
Β 

FACIAL LANDMARKING LOCALIZATION FOR EMOTION RECOGNITION USING BAYESIAN SHAPE MODELS

  • 1. Jan Zizka (Eds) : CCSIT, SIPP, AISC, PDCTA - 2013 pp. 525–531, 2013. Β© CS & IT-CSCP 2013 DOI : 10.5121/csit.2013.3657 FACIAL LANDMARKING LOCALIZATION FOR EMOTION RECOGNITION USING BAYESIAN SHAPE MODELS Hernan F. Garcia1 , Alejandro T. Valencia1 and Alvaro A. Orozco1 Technological University of Pereira, La Julita Periera, Risaralda, Colombia, hernan.garcia@utp.edu.co,cristian.torres@utp.edu.co, aaog@utp.edu.co ABSTRACT This work presents a framework for emotion recognition, based in facial expression analysis using Bayesian Shape Models (BSM) for facial landmarking localization. The Facial Action Coding System (FACS) compliant facial feature tracking based on Bayesian Shape Model. The BSM estimate the parameters of the model with an implementation of the EM algorithm. We describe the characterization methodology from parametric model and evaluated the accuracy for feature detection and estimation of the parameters associated with facial expressions, analyzing its robustness in pose and local variations. Then, a methodology for emotion characterization is introduced to perform the recognition. The experimental results show that the proposed model can effectively detect the different facial expressions. Outperforming conventional approaches for emotion recognition obtaining high performance results in the estimation of emotion present in a determined subject. The model used and characterization methodology showed efficient to detect the emotion type in 95.6% of the cases. KEYWORDS Facial landmarking, Bayesian framework, Fitting, Action Unit 1. INTRODUCTION This work is focus to the problem of emotion recognition in images sequences with varying facial expression, ilumination and change pose in a fully automatic way. Although some progress has already been made in emotion recognition, several unsolved issues still exist. For example, it is still an open problem which features are the most important for emotion recognition. It is a subject that was seldom studied in computer science. In order to define which facial features are most important to recognize the emotion type, we rely on Ekman study to develop a methodology which be able to perform a emotion recognition trough a robust facial expression analysis [1]. Although there have been a variety of works in the emotion recognition field, the results are not yet optimal, because the techniques used for facial expressions analysis are not robust enough. Busso et al. [2], presents a system for recognizing emotions through facial expressions displayed in live video streams and video sequences. However, works such as those presented in [3] suggest
  • 2. 526 Computer Science & Information Technology (CS & IT) that developing a good methodology of emotional states characterization based on facial expressions, leads to more robust recognition systems. Facial Action Coding System (FACS) proposed by Ekman et al. [1], is a comprehensive and anatomically based system that is used to measure all visually discernible facial movements in terms of atomic facial actions called Action Units (AUs). As AUs are independent of interpretation, they can be used for any high-level decision-making process, including the recognition of basic emotions according to Emotional FACS (EM-FACS), the recognition of various affective states according to the FACS Affect Interpretation Database (FACSAID) introduced by Ekman et al. [4], [5], [6], [7]. From the detected features, it is possible to estimate the emotion present in a particular subject, based on an analysis of estimated facial expression shape in comparision to a set of facial expressions of each emotion [3], [8], [9]. In this work, we develop a novel technique for emotion recognition based on computer vision techniques by analysing the visual answers of the facial expression, from the variations that present the facial features, specially those in regions of FACS. Those features are detected by using bayesian estimation techniques as Bayesian Shape Model (BSM) proposed on [10], which from the a-priori object knowledge (face to be analyzed) and being help by a parametric model (Candide3 in this case) [11], allow to estimate the object shape with a high accuracy level. The motivation for its conception, is an extension of the Active Shape Model (ASM) and Active Appeareance Model (AAM) which predominantly considers the shape of an object class [12] [13]. Although facial feature detection is widely used in facial recognition tasks, databases searching, image restoration and other models based on coding sequences of images containing human faces. The accuracy which these features are detected does not reach high percentages of performance due some issues present at the scene as the lighting changes, variations in pose of the face in the scene or elements that produce some kind of occlusion. Therefore, it is important to analyze how useful would be the image processing techniques developed for this work. The rest of the paper is arranged as follows. Section 2 provides a detailed discussion of model-based facial feature extraction. Section 3 presents our emotion characterization method. Sections 4 and 5 discuss the experimental setup and results respectively. The paper concludes in Section 6, with a summary and discussion for future research. 2. A BAYESIAN FORMULATION TO SHAPE REGISTRATION The probabilistic formulation of shape registration problem contains two models: one denotes the prior shape distribution in tangent shape space and the other is a likelihood model in image shape space. Based on these two models we derive the posterior distribution of model parameters. 2.1 Tangent Space Approximation
  • 3. Computer Science & Information Technology (CS & IT) 527 Fig. 1. The mean face model and all the training sets normalized face models [16] 3. EMOTION CHARACTERIZATION 3.1 Facial Expression Analysis Facial expressions are generated by contraction or relaxation of facial muscles or by other physiological processes such as coloring of the skin, tears in the eyes or sweat on the skin. We restrict ourselves to the contraction or relaxation of the muscles category. The contraction or relaxation of muscles can be described by a system of 43 Action Units (AUs) [1] Labeling of the Emotion Type After choosing that AUs are used to represent the facial expressions that correspond to each emotion, we chose Candide3 vertexes used to define the
Figure 2 shows the shapes of the Candide3 model reproducing each emotion.

Fig. 2. Shape of each emotion type

4. EXPERIMENTAL SETUP

In this work we used the MUCT database, which consists of 3755 faces from 276 subjects with 76 manual landmarks [16].

4.1 Evaluation Metrics

The shape model is estimated for every subject, considering 5 pose changes (-35° and 35° around the X axis, and -25° and 25° around the Y axis to simulate nods) plus a frontal view, which yields 1124 images with pose changes and 276 frontal images. The first step is to compute the average error of the distance between the manually labeled landmark points p_i and the points estimated by the model, for all training and test images. We also compute the relative error between the manually labeled landmark points and the landmark points estimated by the model for the eyelid region [17].

4.2 Landmarking the Training Set

The initial step is the selection of images to incorporate into the training set. In order to decide which images will be included in the training set, the desired variations must be considered. In order to cover the whole region of the labeled face, a set of 76 landmark points is used, depicting all the facial regions in detail (eyes, nose, mouth, chin, etc.) [16].

Fig. 3. Samples of the landmarking process

After estimating the landmark points with the BSM, Procrustes analysis is performed to align the estimated facial expression with respect to the standard shape (the neutral emotion). Then, we take the estimated vertexes of the model and form combinations of these sets (the complete set, and the sets of eyes, eyebrows and mouth).
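A minimal sketch of this alignment step, assuming an ordinary similarity Procrustes superimposition in the spirit of [14] (function and variable names are ours, and a full implementation would also guard against reflections):

```python
import numpy as np

def procrustes_align(shape, reference):
    """Align `shape` to `reference` (both (N, 2) landmark arrays) with a
    similarity transform (translation, scale, rotation) and return the
    aligned copy of `shape`, expressed in the reference's frame."""
    mu_s, mu_r = shape.mean(axis=0), reference.mean(axis=0)
    s, r = shape - mu_s, reference - mu_r
    s_size, r_size = np.linalg.norm(s), np.linalg.norm(r)
    s, r = s / s_size, r / r_size
    # Optimal rotation and scale from the SVD of the cross-covariance
    # between the two centred, unit-size shapes.
    u, sigma, vt = np.linalg.svd(s.T @ r)
    rotation = u @ vt
    scale = sigma.sum()
    # Express the aligned shape back in the reference's original frame.
    return r_size * scale * (s @ rotation) + mu_r
```

The shape aligned to the neutral reference can then be sliced into the complete, eye, eyebrow and mouth vertex subsets used in the following section.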
5. RESULTS

5.1 Bayesian Shape Model Estimation Error

Table 1 shows that, although the accuracy of the point estimation is higher for images of the training set, the average error is also small for the test images. This is due to a rigorous training and model-building procedure in which the largest possible variety of shapes was considered. Moreover, although the average error for images with pose changes is slightly higher than for frontal images, the accuracy of the model estimate remains high for images that show changes in the pose of the face. It is also worth highlighting that the average estimation times of the BSM are relatively small, which would help in on-line applications.

5.2 Distribution of the Relative Error

Figure 4(a) shows the distribution function of the relative error against the successful detection rate. For frontal images, a relative error of 0.091 for the fit of the right eye, 0.084 for the left eye and 0.125 for the mouth region already yields a detection rate of 100%, indicating that the accuracy of the BSM fit is high. For images with pose variations, the BSM also reaches 100% for the eye regions at relative errors of 0.120 and 0.123, and at 0.13 for the mouth region; all of these are much lower than the established criterion of 0.25. We consider that the criterion Rerr < 0.25 is not adequate for declaring a detection correct and can be unsuitable when facial feature detection must be performed on images at smaller scales. For this reason, a detection is considered successful for errors Rerr < 0.15 [18].
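A short sketch of how this cumulative success-rate curve and the Rerr < 0.15 criterion can be evaluated, using the relative error of Section 4.1 normalized by the inter-ocular distance (the normalizer is our assumption, borrowed from the eye-detection literature [17], [18]; names are ours):

```python
import numpy as np

def relative_error(est, gt, left_eye_idx, right_eye_idx):
    """Mean point-to-point error for one face, normalized by the ground-truth
    inter-ocular distance (assumed normalizer, not stated explicitly in the
    paper). `est` and `gt` are (N, 2) landmark arrays."""
    err = np.mean(np.linalg.norm(est - gt, axis=1))
    interocular = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    return float(err / interocular)

def success_rate(relative_errors, threshold=0.15):
    """Fraction of images whose relative error falls below the criterion."""
    return float(np.mean(np.asarray(relative_errors) < threshold))

# The curve of Figure 4(a) corresponds to success_rate(...) evaluated over a
# sweep of thresholds rather than at the single value 0.15.
```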
Fig. 4. (a) Distribution of the relative error; (b) examples of BSM fitting results

Finally, a set of images fitted by the BSM model is shown in Figure 4(b), evidencing the robustness of the proposed model.

5.3 Emotion Recognition

Table 2 shows the average success rates in emotion detection obtained on the MUCT database for all the sets used in this analysis, confirming the high accuracy of the RMSE metric. Another important observation is that the sets providing the most evidence about the emotional state are B;C (mouth and eyebrows as separate sets) with an average success rate of 95.6%, B;C;O (mouth, eyebrows and eyes as separate sets) with 92.2%, and CO;B (eyebrows and eyes together, mouth separately) with 91.1%, with the first of these giving the most discriminant results for the emotion detection rate.

Table 2. Emotion recognition success rates for the MUCT database
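As a hedged sketch of how the reported RMSE comparison could be implemented, assuming each emotion is represented by a Candide3 prototype shape (Figure 2) and that the estimated shape has already been Procrustes-aligned to the neutral reference (function and parameter names are ours):

```python
import numpy as np

def classify_emotion(aligned_shape, prototypes, subset=None):
    """Return the emotion whose prototype shape gives the lowest RMSE against
    the aligned estimated shape. `prototypes` maps emotion name -> (N, 2)
    landmark array; `subset` optionally restricts the comparison to one vertex
    set, e.g. the mouth-and-eyebrow set reported as the most discriminant."""
    idx = np.arange(aligned_shape.shape[0]) if subset is None else np.asarray(subset)
    scores = {
        name: float(np.sqrt(np.mean((aligned_shape[idx] - proto[idx]) ** 2)))
        for name, proto in prototypes.items()
    }
    return min(scores, key=scores.get), scores
```

Sweeping `subset` over the complete, eye, eyebrow and mouth vertex groups would reproduce the kind of set analysis summarized in Table 2.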
6. CONCLUSION AND FUTURE WORKS

This paper presents a Bayesian shape model for emotion recognition using facial expression analysis. By projecting shapes to the tangent shape space, we have built a model describing the prior distribution of face shapes and their likelihood. We have developed the BSM algorithm to uncover the shape parameters and transformation parameters of an arbitrary figure. The analyzed regions correspond to the eyes, eyebrows and mouth, on which detection and tracking of features is done using the BSM. The results show that the estimation of the points is accurate and complies with the requirements of this type of system. Through quantitative analysis, the robustness of the BSM model in feature detection is evaluated, and it is maintained for poses in a range of [-40°, 40°] on X and [-20°, 20°] on Y. The model and characterization methodology used proved able to detect the emotion type in 95.55% of the evaluated cases, showing a high performance of the set analysis for emotion recognition. In addition, owing to the high accuracy in detecting and characterizing the features proposed to estimate the parameters associated with the emotion type, the designed system has great potential for emotion recognition and is of great interest for human/computer interaction research. Given the current problem of facial pose changes in the scene, it would be of great interest to design a 3D model that is robust to these variations. Also, taking advantage of the set analysis demonstrated in this work, it would be interesting to develop a system that, in addition to the set analysis, adds an adaptive methodology to estimate the status of each facial action unit in order to improve the performance of the emotion recognition system.

REFERENCES

[1] Ekman, P.: Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life. 2nd edn. Owl Books, New York (2007)
[2] Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C.M., Kazemzadeh, A., Lee, S., Neumann, U., Narayanan, S.: Analysis of emotion recognition using facial expressions, speech and multimodal information. In: Sixth International Conference on Multimodal Interfaces (ICMI 2004), ACM Press (2004) 205-211
[3] Song, M., You, M., Li, N., Chen, C.: A robust multimodal approach for emotion recognition. Neurocomputing 71 (2008) 1913-1920
[4] Ekman, P., Rosenberg, E.: What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). Oxford Univ. Press (2005)
[5] Griesser, R.T., Cunningham, D.W., Wallraven, C., Bülthoff, H.H.: Psychophysical investigation of facial expressions using computer animated faces. In: Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization (APGV '07), ACM (2007) 11-18
[6] Cheon, Y., Kim, D.: A natural facial expression recognition using differential-AAM and k-NNs. In: Proceedings of the Tenth IEEE International Symposium on Multimedia (ISM '08), Washington, DC, USA, IEEE Computer Society (2008) 220-227
[7] Cheon, Y., Kim, D.: Natural facial expression recognition using differential-AAM and manifold learning. Pattern Recognition 42 (2009) 1340-1350
[8] Ioannou, S., Raouzaiou, A., Tzouvaras, V., Mailis, T., Karpouzis, K., Kollias, S.: Emotion recognition through facial expression analysis based on a neurofuzzy network. Neural Networks 18 (2005) 423-435
[9] Cohen, I., Garg, A., Huang, T.S.: Emotion recognition from facial expressions using multilevel HMM. In: Neural Information Processing Systems (2000)
[10] Xue, Z., Li, S.Z., Teoh, E.K.: Bayesian shape model for facial feature extraction and recognition. Pattern Recognition 36 (2004) 2819-2833
[11] Ahlberg, J.: Candide-3 - an updated parameterised face. Technical Report LiTH-ISY-R-2326, Dept. of Electrical Engineering, Linköping University, Sweden (2001)
[12] Cootes, T., Taylor, C.: Statistical models of appearance for computer vision. Technical report, University of Manchester (2004)
[13] Matthews, I., Baker, S.: Active appearance models revisited. Int. J. Comput. Vision 60 (2004) 135-164
[14] Gower, J.C.: Generalised Procrustes analysis. Psychometrika 40 (1975) 33-51
[15] Larsen, R.: Functional 2D Procrustes shape analysis. In: 14th Scandinavian Conference on Image Analysis. Volume 3540 of LNCS, Springer Verlag, Berlin Heidelberg (2005) 205-213
[16] Milborrow, S., Morkel, J., Nicolls, F.: The MUCT Landmarked Face Database. Pattern Recognition Association of South Africa (2010) http://www.milbo.org/muct
[17] Hassaballah, M., Ido, S.: Eye detection using intensity and appearance information. In: IAPR Conference on Machine Vision Applications (2009)
[18] Hassaballah, M., Ido, S.: Eye detection using intensity and appearance information. (2009) 801-809