International Journal of Electrical and Computer Engineering (IJECE)
Vol. 8, No. 1, February 2018, pp. 52 – 59
ISSN: 2088-8708, DOI: 10.11591/ijece.v8i1.pp52-59
Emotion Recognition from Facial Expression Based on
Fiducial Points Detection and using Neural Network
Fatima Zahra Salmam¹, Abdellah Madani², and Mohamed Kissi³
¹,² LAROSERI Laboratory, Faculty of Sciences, University of Chouaib Doukkali, El Jadida, Morocco
³ LIM Laboratory, Faculty of Sciences and Technologies, University Hassan II, Casablanca, Morocco
Article Info
Article history:
Received: Jun 5, 2017
Revised: Nov 30, 2017
Accepted: Dec 16, 2017
Keyword:
Facial expression
Feature selection
Neural network
Supervised Descent Method
Best First
ABSTRACT
The importance of emotion recognition lies in the role that emotions play in our everyday
lives. Emotions have a strong relationship with our behavior. Hence, the goal of automatic
emotion recognition is to equip the machine with this human ability to analyze and understand
the human emotional state, in order to anticipate a person's intentions from facial expression. In
this paper, a new approach is proposed to enhance the accuracy of emotion recognition from
facial expression, based on input features derived only from fiducial points. The proposed
approach consists, first, of extracting 1176 dynamic features from image sequences, representing
the ratios of Euclidean distances between facial fiducial points in the first frame and the
corresponding distances in the last frame. Second, a feature selection method is used to retain
only the most relevant features. Finally, the selected features are presented to a Neural Network
(NN) classifier that classifies the facial expression input into an emotion. The proposed approach
has achieved an emotion recognition accuracy of 99% on the CK+ database, 84.7% on the
Oulu-CASIA VIS database, and 93.8% on the JAFFE database.
Copyright © 2018 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
Abdellah Madani
LAROSERI Laboratory, Computer Science Department
Faculty of Sciences, University of Chouaib Doukkali, El Jadida - Morocco
Email: madaniabdellah@gmail.com
1. INTRODUCTION
As emotions play an implicit role in the communication process and reflect human behavior, automatic
emotion recognition is a task of growing interest. To recognize human emotions, a wide range of features can be
used, such as facial expressions [1, 2], body gestures [2, 3], or speech [4, 5]. Giving a computer the capability of
emotion recognition (ER) is a scientific challenge around which different communities gather (signal processing,
image processing, artificial intelligence, robotics, human-computer interaction).
Mehrabian [6] affirms that facial expression represents 55% of the nonverbal communication that allows us
to understand the state or the emotion of a person. The objective of this work is to get a computer to detect human
emotions from facial expressions.
Facial expression is the most important key to understanding human emotions. Not all facial expressions
carry a meaning that can be classified into an emotion, but there are some basic emotions that are universal [7]
and expressed in the same way: happiness, sadness, fear, anger, disgust, and surprise.
The main steps of facial expression recognition are generally: face detection, feature extraction,
and facial expression classification. In the first step, we have to determine whether an image contains a face
or not. In the second step, we have to extract features or characteristics from the face that best describe
emotions. In the last step, we have to classify the extracted features into basic emotions. The main difficulty
usually comes from the second step, feature extraction: a set of features that best describes facial movement
must be found and used for classification. For this reason, the technique proposed in this paper is based on
image sequences and focuses on calculating 1176 Euclidean distances between all detected points to measure all
possible deformations of the face, because some distances may be more descriptive than those that appear
visually logical. Once the dynamic features have been calculated from the fiducial points, an additional feature
selection step (see Figure 1) is used to reduce the number of features by keeping only the most relevant ones.

Figure 1. Steps of an emotion recognition system
The rest of the paper is organized as follows: an overview of related work is presented in Section 2; our
proposed method is described in Section 3; the experimental setup is given in Section 4; in Section 5, the results
of our proposed approach are presented and discussed, with a comparison of recognition rates against previous
works. Section 6 concludes the paper and presents some research perspectives.
2. RELATED WORK
In general, facial expression recognition systems can be classified into two categories: geometric-feature-based
methods and global-feature-based methods [8]. In geometric-feature-based methods, only some parts of the
face, such as the eyes, nose, and mouth, are considered for feature extraction. A major disadvantage of such
methods is that they consume a lot of computation time to obtain accurate facial feature detection and tracking.
In global-feature-based methods, by contrast, the whole face is considered for feature extraction, as in [9], where
global features are extracted from face images using local Zernike moments. These methods are easy to use
because they work directly on facial images to describe facial textures.
A plethora of works aims to facilitate the recognition of emotions from facial expression using
static [10, 11, 12] or dynamic images [13, 14, 15, 16, 17, 18, 9].
By measuring dynamic facial motion in image sequences, Bassili [19] confirmed that dynamic images
give more accurate facial expression recognition results than single static ones.
Ekman and Friesen [14] proposed the FACS system, which describes movements of the face through
forty-four Action Units (AUs), each representing a movement of a particular part of the face (e.g., Brow
Lowerer). According to Ekman and Friesen, a facial expression can be characterized by a combination of AUs.
To demonstrate that AUs are capable of conveying emotional expressions, Basori et al. [20] generated the
emotional expressions of an avatar using combinations of AUs based on facial muscles.
Pantic et al. [15] focused their work on recognizing facial action units (AUs) and their temporal
models using profile-view face image sequences. To track 15 facial points in an input face-profile sequence, they
applied a particle filtering method [21].
Valstar et al. [22] proposed an automatic method to recognize 22 action units (AUs) and their models
using image sequences. First, to automatically detect 20 fiducial points, they used a Gabor-feature-based boosted
classifier; then, these points were tracked through the image sequence using a particle filtering method with
factorized likelihoods.
Pu et al. [16] suggested a new framework for facial expression analysis based on recognizing AUs
from image sequences. To detect and track fiducial points, they first applied AAM [23] to model the neutral facial
expression in the first frame, and then used the pyramidal implementation of Lucas-Kanade [24] to track the
feature points in the other frames. They classified facial expressions at two levels using random forests as the
classification method. The first level classifies AUs, taking as input the displacement vectors between the
neutral-expression frame and the peak-expression frame. The second level uses the detected AUs as input to
classify facial expressions.
Most facial expression recognition methods are AU-based [15, 22, 16, 20], often influenced by the
FACS system proposed by Ekman and Friesen [14]. Nevertheless, there are also techniques based only on
fiducial points to recognize facial expressions, which minimizes computation time.
Abdat et al. [17] focused on another geometric method to detect facial expressions. They used
twenty-one distances to encode facial expressions; these distances describe facial feature deformations compared
to the neutral state. Their method relies first on the Shi-Tomasi algorithm to extract feature points, and
second on the Lucas-Kanade algorithm [24] to detect and track the points; the distance vector, computed from
image sequences, was then used as a descriptor of the facial expression and fed to an SVM classifier.
Hammal et al. [13] developed a classification system based on belief theory and applied it to the
Hammal-Caplier database. They used five distances between different parts of the face (eyebrows, both eyes, and
mouth). In their work, distances were computed on expression skeletons from image sequences; however, only
four emotions (joy, surprise, disgust, and neutral) were considered out of the six basic emotions.
Perveen et al. [10] focused their work on three regions (eyebrows, eyes, mouth) to determine an emotion
from static images. First, they calculated the characteristic points of the face; then they evaluated some
animation parameters such as the openness of the eyes, the width of the eyes, the height of the eyebrows, the
opening of the mouth, and the width of the mouth. As a classification technique, they used a decision-tree-based
method, applied to only thirty images from the JAFFE database [25], and they recognized six emotions (happy,
surprise, fear, sad, angry, and neutral), excluding the disgust emotion.
Saeed et al. [26] proposed an emotion recognition system based on just eight fiducial points. They
represented six geometric features by measuring distances between the mouth, eyes, and eyebrows. These features
capture the changes of the face while an emotion occurs. The features were then presented to an SVM
classifier for emotion recognition. The system was applied to the Cohn-Kanade database (CK+) [27] and the
Binghamton University 3D Facial Expression Database [28] to recognize the six basic emotions.
Majumder et al. [29] suggested an emotion recognition model based on the Kohonen self-organizing
map (KSOM) that uses a 26-dimensional facial geometric feature vector calculated from three parts of the face
(lips, eyes, and eyebrows) to describe the changes of the six basic emotions. The experiments were conducted on
the MMI database [30].
The research studies cited above show that dynamic facial expressions from image sequences are more
descriptive for the task of emotion recognition and can increase accuracy in real-time applications compared to
static images.
3. PROPOSED METHOD
This section presents and justifies our proposed technique for emotion recognition from facial expression.
Our contribution concerns the feature extraction step, in which we propose to calculate all Euclidean distances
between fiducial points in the first and last frames to measure facial motion. First, we detect the face
using the Viola-Jones algorithm [31]; then, we detect and track 49 fiducial points using the powerful and recent
Supervised Descent Method (SDM) proposed by Xiong et al. [32]. From these points, which represent the four
parts of the face (eyebrows, eyes, nose, and mouth), we calculate all possible distances between each pair of
points; as a result, we get $\binom{49}{2} = 1176$ Euclidean distances. After that, to measure the dynamic
deformation relative to the neutral state, we calculate the distance ratios that represent the dynamic features,
computed between the first and the last frames (Section 3.1). Afterward, we use a feature selection method to
reduce the number of features and select only the most relevant ones. Finally, we present the selected dynamic
features to a neural network classifier for facial expression recognition.
3.1. Facial expression representation
Once we have detected the face using the Viola-Jones algorithm [31], we apply the SDM method [32] to
detect and track fiducial points in the image sequences.
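As a concrete illustration of this detection stage, the following minimal Python sketch runs Viola-Jones face
detection through OpenCV's bundled Haar cascade; since SDM itself is not part of stock OpenCV, the 49-point
landmark step is indicated only as a comment. The cascade parameters and the blank stand-in frame are our
illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Viola-Jones face detection via OpenCV's bundled Haar cascade, which
# implements the boosted-cascade detector of Viola and Jones [31].
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray_frame):
    """Return the bounding box (x, y, w, h) of the largest detected face,
    or None; the scale and neighbor parameters are illustrative choices."""
    faces = cascade.detectMultiScale(gray_frame, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(80, 80))
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])

# A blank stand-in frame; in practice this is the first frame of a sequence.
gray = np.zeros((240, 320), dtype=np.uint8)
box = detect_face(gray)
# The 49 fiducial points would then be located inside `box` by an SDM
# model [32]; a trained SDM implementation is assumed to be available.
```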
To measure the face deformation, we consider only the first and the last frames. First, we calculate
all Euclidean distances (1) from the 49 detected points, which are represented by their x and y coordinates (2) (3)
and correspond to the parts of the face as follows: 10 points for the eyebrows, 12 points for the eyes, 9 points
for the nose, and 18 points for the mouth. In total, we calculate 1176 distances in the first and in the last
frames. Then, we measure the dynamic deformation by calculating the ratio (4) between the frames: the ratio is
the distance calculated in the peak frame divided by the same distance calculated in the first frame. The dynamic
features (5) form a vector containing the 1176 ratios computed relative to the neutral state. An overview of the
facial expression representation process is presented in Figure 2.
$D = [D_1, D_2, \ldots, D_i, \ldots, D_t]$  (1)

$V_0 = [x_{10}, y_{10}, x_{20}, y_{20}, \ldots, x_{n0}, y_{n0}]$  (2)

$V_p = [x_{1p}, y_{1p}, x_{2p}, y_{2p}, \ldots, x_{np}, y_{np}]$  (3)

$\Delta D_i = \frac{D_{ip}}{D_{i0}}$  (4)

$DF = [\Delta D_1, \Delta D_2, \ldots, \Delta D_t]$  (5)
where:
$n$: the total number of fiducial points
$t$: the number of Euclidean distances calculated between each pair of points
$V_0$: the x and y coordinates of the detected points in the first frame
$V_p$: the x and y coordinates of the detected points in the peak frame
$D_{ip}$: the Euclidean distance $D_i$ in the peak frame
$D_{i0}$: the Euclidean distance $D_i$ in the first frame
Figure 2. Dynamic features representation process
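As a companion to Figure 2 and equations (1)-(5), here is a minimal NumPy sketch that computes the
$\binom{49}{2} = 1176$ pairwise Euclidean distances in the first and peak frames and their ratios; the random
landmarks stand in for real SDM output, and all function and variable names are ours.

```python
import numpy as np
from itertools import combinations

def pairwise_distances(points):
    """Euclidean distances between all pairs of the 49 (x, y) points (Eq. 1).

    points: array of shape (49, 2); returns a vector of C(49, 2) = 1176 values.
    """
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

def dynamic_features(points_first, points_peak):
    """Ratios of peak-frame to first-frame distances (Eqs. 4 and 5)."""
    d0 = pairwise_distances(points_first)   # D_i0: neutral (first) frame
    dp = pairwise_distances(points_peak)    # D_ip: peak (expressive) frame
    return dp / d0                          # DF: 1176 dynamic features

# Example with random landmarks standing in for real SDM output:
rng = np.random.default_rng(0)
p0 = rng.uniform(0, 100, size=(49, 2))
pp = p0 + rng.normal(0, 2, size=(49, 2))
df = dynamic_features(p0, pp)
assert df.shape == (1176,)
```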
3.2. Feature selection
One of the key issues in emotion classification is the choice of features used for prediction. For this reason,
a feature selection step has been used to choose the most relevant features. Generally, a feature selection step
combines an attribute subset evaluator with a search method: the attribute evaluator determines how a worth is
assigned to each subset of features, and the search method determines what style of search is performed [33].
In this paper, we have chosen CfsSubsetEval as the feature evaluator and Best First as the search method,
as implemented in Weka. The CfsSubsetEval evaluator assesses the worth of a subset of features by considering
the individual predictive ability of each feature along with the degree of redundancy between them; subsets of
features that are highly correlated with the class while having low inter-correlation are preferred. The Best
First method searches the space of feature subsets by greedy hill-climbing augmented with a backtracking
facility [33].
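For intuition about what CfsSubsetEval computes outside of Weka, the sketch below scores a subset with the
standard CFS merit, $\mathrm{merit}(S) = k\,\bar{r}_{cf} / \sqrt{k + k(k-1)\,\bar{r}_{ff}}$, and searches with
plain greedy forward selection instead of Weka's Best First search with backtracking. It assumes numerically
encoded class labels and Pearson correlations, so it is an approximation of the Weka behavior, not a
reimplementation.

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit of a feature subset: k*r_cf / sqrt(k + k*(k-1)*r_ff),
    where r_cf is the mean |correlation| of the subset's features with the
    class and r_ff the mean |correlation| between the subset's features.
    Pearson correlation on integer labels is a simplification of Weka's
    CfsSubsetEval, whose nominal-class handling differs."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def greedy_cfs(X, y):
    """Greedy forward search: repeatedly add the feature that improves the
    merit most (plain hill-climbing; Weka's Best First also backtracks)."""
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining:
        merit, f = max((cfs_merit(X, y, selected + [f]), f) for f in remaining)
        if merit <= best:
            break                       # no single feature improves the merit
        best = merit
        selected.append(f)
        remaining.remove(f)
    return selected
```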
3.3. Classification
A neural network (NN) classifier has been chosen to classify facial expressions based on the previously
selected dynamic features. It was trained on a multi-class emotion recognition task, using the backpropagation
algorithm and the sigmoid function as the activation function. Our NN is a single network with one hidden layer:
the first layer represents the input data, which are the DF; the second is the hidden layer; and the last
represents the output classes. The number of neurons in the hidden layer was chosen experimentally.
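The paper does not name an implementation, so as one plausible rendering the sketch below configures such a
network with scikit-learn's MLPClassifier: one hidden layer of sigmoid units trained by gradient descent with
backpropagation. The 20 hidden neurons follow Section 4.1; the learning rate and iteration cap are our
assumptions.

```python
from sklearn.neural_network import MLPClassifier

# One-hidden-layer sigmoid network trained by backpropagation, as described
# above; 20 hidden neurons per Section 4.1. scikit-learn is our choice of
# implementation; learning rate and iteration cap are assumptions.
clf = MLPClassifier(hidden_layer_sizes=(20,),   # single hidden layer
                    activation="logistic",      # sigmoid activation
                    solver="sgd",               # gradient descent + backprop
                    learning_rate_init=0.01,
                    max_iter=2000,
                    random_state=0)
# clf.fit(DF_train, y_train)     # DF_train: n_samples x 83 selected features
# y_pred = clf.predict(DF_test)  # predicted emotion classes
```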
4. EXPERIMENTAL SETUP
The experiments in this work were conducted on three well-known facial expression databases: the Extended
Cohn-Kanade (CK+) database [34, 27], the Oulu-CASIA VIS database [35], and the JAFFE database [25].
The CK+ database [34, 27] contains 327 labeled image sequences, each referring to one of seven expressions:
anger, contempt, disgust, fear, happiness, sadness, and surprise. For each image sequence, only the last frame
is provided with an expression label. The database breaks down as follows: 45 images of the angry expression,
18 images of the contempt expression, 59 images of the disgust expression, 25 images of the fear expression,
69 images of the happy expression, 28 images of the sad expression, and 83 images of the surprise expression.
The Oulu-CASIA VIS database [35] was captured under different lighting conditions; we used the sequences
captured under strong and good lighting, which cover 80 subjects. Each subject performs facial expressions
corresponding to the six basic expressions (anger, disgust, fear, happiness, sadness, and surprise). In total,
we have 480 expression-labeled image sequences.
The JAFFE database [25] contains 213 images from 10 Japanese female subjects. Each subject has 3
or 4 examples of each of the six basic expressions (anger, disgust, fear, happiness, sadness, and surprise) plus
the neutral expression. The database breaks down as follows: 30 images of the angry expression, 29 images of
the disgust expression, 32 images of the fear expression, 31 images of the happy expression, 31 images of the
sad expression, 30 images of the surprise expression, and 30 images of the neutral expression.
4.1. Training process
In this work, we carried out three experiments to examine the influence of each design choice on the
emotion recognition accuracy. All experiments were conducted on the three databases (CK+, Oulu-CASIA VIS, and
JAFFE), and each database was divided into 60% for training, 10% for validation, and 30% for testing, with a NN
of 20 neurons in the hidden layer in all experiments.
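For example, a 60/10/30 split can be obtained by chaining two calls to scikit-learn's train_test_split, as in
the sketch below; the random stand-in data, the stratification, and the seed are our choices rather than details
stated in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Random stand-ins for the 83 selected dynamic features and 7 emotion labels
# of one database; shapes are illustrative, not the paper's actual data.
rng = np.random.default_rng(0)
X = rng.normal(size=(327, 83))
y = rng.integers(0, 7, size=327)

# 60% training, then split the remaining 40% into 10% validation and 30%
# test (0.25 and 0.75 of the remainder). Stratification is our assumption.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.60, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, train_size=0.25, stratify=y_rest, random_state=0)
print(len(X_train), len(X_val), len(X_test))  # roughly 196 / 32 / 99
```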
The first experiment consists, first, of omitting the feature selection step and using the DF (5) directly
as input to our classifier; the NN classifier then takes 1176 features as input, with six or seven output classes
depending on the number of classes present in the database used. Second, it consists of applying the feature
selection step. Thus, after calculating the DF (5) for each image sequence in each database, we applied the
feature selection method across the three databases to reduce the number of features. First, we combined the
CK+, Oulu-CASIA VIS, and JAFFE databases into a single set containing 1020 samples and covering eight
expressions (anger, contempt, disgust, fear, happiness, sadness, surprise, and neutral). Then, we applied the
feature selection method to this combined set in order to select only the relevant features common to all of
them. As a result, we reduced the feature set from 1176 to 83 features. Finally, we trained three classifiers,
one per database; each NN classifier takes the 83 features as input, with six or seven output classes.
In the second experiment, we examined the classifiers' ability to classify new image sequences: the
classifier trained on the CK+ was tested on the Oulu-CASIA VIS and JAFFE databases, and vice versa.
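This protocol amounts to fitting on one corpus and scoring on the two others. The sketch below shows that loop
under the assumption that the three feature/label arrays share a common label encoding (which, as discussed
later in this section, does not fully hold before the databases are unified); the random data are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
def fake(n):
    """Random stand-ins for 83 selected features and 6 shared labels."""
    return rng.normal(size=(n, 83)), rng.integers(0, 6, size=n)

# Placeholder corpora; in the experiment these are the real CK+,
# Oulu-CASIA VIS, and JAFFE feature/label arrays with a shared encoding.
datasets = {"CK+": fake(327), "OULU-CASIA VIS": fake(480), "JAFFE": fake(213)}

for train_name, (X_tr, y_tr) in datasets.items():
    model = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                          solver="sgd", max_iter=500,
                          random_state=0).fit(X_tr, y_tr)
    for test_name, (X_te, y_te) in datasets.items():
        if test_name != train_name:
            acc = accuracy_score(y_te, model.predict(X_te))
            print(f"train={train_name:14s} test={test_name:14s} acc={acc:.2%}")
```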
The last experiment consists, first, of unifying the three databases, that is, deleting the emotions that
do not appear in all databases and keeping only the common ones; this leaves only 309 and 183 image
sequences for the CK+ and JAFFE databases respectively. Second, each classifier trained on one database is
tested on the two other databases while varying the size of the training set, to show how this influences
the emotion recognition accuracy.
5. RESULTS & DISCUSSION
Table 1 summarizes a comparison between the results achieved in our first experiment and those achieved
by Pu et al. [16] using random forests. The third column gives the emotion recognition accuracy achieved using
the calculated DF directly. The last column gives the accuracy obtained with the feature selection step,
where only 83 of the 1176 features are used.
The obtained results show that our method outperforms the AU-based method proposed by Pu et al. [16]
whether or not the feature selection process is used. Moreover, the feature selection process requires fewer
features and gives better results than using the DF directly.
Table 1. Comparison of emotion recognition accuracy (%) with and without the feature selection (FS) process

Database         Pu et al. [16]   Our approach, without FS   Our approach, with FS
CK+              96.3             98                         99
OULU-CASIA VIS   76.25            81.3                       84.7
JAFFE            -                90.6                       93.8
Table 2 presents the results achieved in the second experiment by the three classifiers trained separately
on the CK+, Oulu-CASIA VIS, and JAFFE databases. The first classifier, trained on the CK+ database (still using
the 83 features), gives an emotion recognition accuracy of 67.29% on the Oulu-CASIA VIS database and 43.66% on
the JAFFE database. The second classifier, trained on the Oulu-CASIA VIS database, gives an accuracy of 90.52%
on the CK+ database and 46.48% on the JAFFE database. The third classifier, trained on the JAFFE database,
gives an accuracy of 64.22% on the CK+ database and 47.29% on the Oulu-CASIA VIS database. The second
classifier, trained on the Oulu-CASIA VIS database, thus achieved a competitive emotion recognition accuracy,
unlike the classifiers trained on the CK+ and JAFFE databases.
Table 2. Accuracy (%) of each trained classifier when tested on the other databases

Training           Testing: CK+   OULU-CASIA VIS   JAFFE
CK+                -              67.29            43.66
OULU-CASIA VIS     90.52          -                46.48
JAFFE              64.22          47.29            -
This drop in results can be explained, first, by the training-set sizes of the CK+ database (196 image
sequences) and the JAFFE database (127 image sequences) compared to that of the Oulu-CASIA VIS database (288
image sequences), and second, by the expressions that do not exist in all databases: the contempt expression
appears in the CK+ with 18 occurrences but not in the other two databases, and, likewise, in the JAFFE database
30 neutral expressions are treated as an emotion class. Training a classifier on supplementary expressions that
do not exist in all databases therefore decreases the emotion recognition accuracy. For this reason, we carried
out the third and last experiment, which unifies the label sets of the databases and shows how this influences
our results.
Figure 3. Training our proposed method on one database and testing it on the other databases:
(a) training on the CK+, (b) training on the OULU-CASIA VIS, (c) training on the JAFFE
Figure 3 shows how the training size and the unified databases lead to an increase in emotion recognition
accuracy across all databases. Figure 3(a) shows that the accuracy increases from 67.29% to 72.08% on the
Oulu-CASIA VIS database and from 43.66% to 56.83% on the JAFFE database. Figure 3(b) shows that the accuracy
increases from 90.52% to 96.44% on the CK+ database and from 46.48% to 53.55% on the JAFFE database.
Figure 3(c) shows that the accuracy increases from 64.22% to 72.49% on the CK+ database and from 47.29% to
51.25% on the Oulu-CASIA VIS database.
6. CONCLUSION & FUTURE WORK
In this research work, we have proposed an automatic approach to the facial expression recognition task. Our
approach was tested using dynamic features calculated from the first and the last frames, which represent
the neutral state and an emotional state respectively. After detecting the face and the fiducial points in the
first and last frames, all possible Euclidean distances were calculated between each pair of points, giving 1176
distances; then, to measure the deformation, each distance calculated in the peak frame was divided by the same
distance calculated in the first frame. After that, we used a feature selection process to reduce the number of
features by keeping only the most relevant ones. In the last step of our proposed approach, we presented the
selected dynamic features to a neural network classifier for facial expression recognition.
Evaluating this approach on three well-known databases has given encouraging results with the neural
network classifier: an emotion recognition accuracy of 99% on the CK+ database, 84.7% on the Oulu-CASIA VIS
database, and 93.8% on the JAFFE database.
In our future work, we will continue developing our proposed system along several axes. First, we will
investigate the possibility of adding other features that represent the pose of the face. Second, we intend to
consider another source for recognizing emotions, namely voice intonation, using acoustic parameters. Finally,
our ultimate aim is to combine the two sources, facial expression and voice intonation, to automatically
recognize emotions from multimodal data using new deep learning classification approaches.
REFERENCES
[1] S. Lee and S.-Y. Shin, “Face song player according to facial expressions,” International Journal of Electrical
and Computer Engineering (IJECE), vol. 6, no. 6, pp. 2805–2809, 2016.
[2] P. Barros, G. I. Parisi, C. Weber, and S. Wermter, “Emotion-modulated attention improves expression
recognition: A deep learning model,” Neurocomputing, 2017.
[3] J. Arunnehru and M. K. Geetha, “Automatic human emotion recognition in surveillance video,” in Intelligent
Techniques in Signal Processing for Multimedia Security. Springer, 2017, pp. 321–342.
[4] H. K. Palo and M. N. Mohanty, “Classification of emotional speech of children using probabilistic neural
network,” International Journal of Electrical and Computer Engineering, vol. 5, no. 2, p. 311, 2015.
[5] S. Motamed, S. Setayeshi, and A. Rabiee, “Speech emotion recognition based on a modified brain emotional
learning model,” Biologically Inspired Cognitive Architectures, 2017.
[6] A. Mehrabian, “Communication without words,” Communication Theory, pp. 193–200, 2008.
[7] P. Ekman, “An argument for basic emotions,” Cognition & emotion, vol. 6, no. 3-4, pp. 169–200, 1992.
[8] C. Shan, S. Gong, and P. W. McOwan, “Facial expression recognition based on local binary patterns: A
comprehensive study,” Image and Vision Computing, vol. 27, no. 6, pp. 803–816, 2009.
[9] X. Fan and T. Tjahjadi, “A dynamic framework based on local Zernike moment and motion history image for
facial expression recognition,” Pattern Recognition, vol. 64, pp. 399–406, 2017.
[10] N. Perveen, S. Gupta, and K. Verma, “Facial expression recognition using facial characteristic points and
Gini index,” in Engineering and Systems (SCES), 2012 Students Conference on. IEEE, 2012, pp. 1–6.
[11] F. Z. Salmam, A. Madani, and M. Kissi, “Facial expression recognition using decision trees,” in 2016 13th
International Conference on Computer Graphics, Imaging and Visualization (CGiV). IEEE, 2016, pp. 125–130.
[12] A. T. Lopes, E. de Aguiar, A. F. De Souza, and T. Oliveira-Santos, “Facial expression recognition with
convolutional neural networks: Coping with few data and the training sample order,” Pattern Recognition,
vol. 61, pp. 610–628, 2017.
[13] Z. Hammal, L. Couvreur, A. Caplier, and M. Rombaut, “Facial expression recognition based on the belief
theory: comparison with different classifiers,” in Image Analysis and Processing–ICIAP 2005. Springer,
2005, pp. 743–752.
[14] P. Ekman and W. V. Friesen, “Facial action coding system: a technique for the measurement of facial
movement,” Palo Alto, 1978.
[15] M. Pantic and I. Patras, “Dynamics of facial expression: recognition of facial actions and their temporal
segments from face profile image sequences,” IEEE Transactions on Systems, Man, and Cybernetics, Part
B (Cybernetics), vol. 36, no. 2, pp. 433–449, 2006.
[16] X. Pu, K. Fan, X. Chen, L. Ji, and Z. Zhou, “Facial expression recognition from image sequences using twofold
random forest classifier,” Neurocomputing, vol. 168, pp. 1173–1180, 2015.
[17] F. Abdat, C. Maaoui, and A. Pruski, “Human-computer interaction using emotion recognition from facial ex-
pression,” in Computer Modeling and Simulation (EMS), 2011 Fifth UKSim European Symposium on. IEEE,
2011, pp. 196–201.
[18] A. Sánchez, J. V. Ruiz, A. B. Moreno, A. S. Montemayor, J. Hernández, and J. J. Pantrigo, “Differential
optical flow applied to automatic facial expression recognition,” Neurocomputing, vol. 74, no. 8, pp. 1272–1282,
2011.
[19] J. N. Bassili, “Emotion recognition: the role of facial movement and the relative importance of upper and lower
areas of the face.” Journal of personality and social psychology, vol. 37, no. 11, p. 2049, 1979.
[20] A. H. Basori and H. M. A. AlJahdali, “Emotional facial expression based on action units and facial muscle,”
International Journal of Electrical and Computer Engineering (IJECE), vol. 6, no. 5, pp. 2478–2487, 2016.
[21] M. K. Pitt and N. Shephard, “Filtering via simulation: auxiliary particle filters,” Journal of the American
Statistical Association, vol. 94, pp. 590–599, 1999.
[22] M. F. Valstar and M. Pantic, “Fully automatic recognition of the temporal phases of facial actions,” IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 1, pp. 28–43, 2012.
[23] I. Matthews and S. Baker, “Active appearance models revisited,” International Journal of Computer Vision,
vol. 60, no. 2, pp. 135–164, 2004.
[24] J.-Y. Bouguet, “Pyramidal implementation of the affine Lucas Kanade feature tracker: description of the
algorithm,” Intel Corporation, vol. 5, no. 1-10, p. 4, 2001.
[25] M. J. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, and J. Budynek, “The Japanese Female Facial Expression
(JAFFE) database,” 1998.
[26] A. Saeed, A. Al-Hamadi, R. Niese, and M. Elzobi, “Effective geometric features for human emotion
recognition,” in Signal Processing (ICSP), 2012 IEEE 11th International Conference on, vol. 1. IEEE, 2012,
pp. 623–627.
[27] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The extended cohn-kanade dataset
(ck+): A complete dataset for action unit and emotion-specified expression,” in Computer Vision and Pattern
Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on. IEEE, 2010, pp. 94–101.
[28] L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato, “A 3d facial expression database for facial behavior
research,” in 7th international conference on automatic face and gesture recognition (FGR06). IEEE, 2006,
pp. 211–216.
[29] A. Majumder, L. Behera, and V. K. Subramanian, “Emotion recognition from geometric facial features using
self-organizing map,” Pattern Recognition, vol. 47, no. 3, pp. 1282–1293, 2014.
[30] M. Valstar and M. Pantic, “Induced disgust, happiness and surprise: an addition to the mmi facial expression
database,” in Proc. 3rd Intern. Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion
and Affect, 2010, p. 65.
[31] P. Viola and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Computer Vision
and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on,
vol. 1. IEEE, 2001, pp. I–511.
[32] X. Xiong and F. De la Torre, “Supervised descent method and its applications to face alignment,” in
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 532–539.
[33] M. I. Devi, R. Rajaram, and K. Selvakuberan, “Generating best features for web page classification,” Webology,
vol. 5, no. 1, p. 52, 2008.
[34] T. Kanade, J. F. Cohn, and Y. Tian, “Comprehensive database for facial expression analysis,” in Automatic
Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on. IEEE, 2000,
pp. 46–53.
[35] G. Zhao, X. Huang, M. Taini, S. Z. Li, and M. Pietikäinen, “Facial expression recognition from near-infrared
videos,” Image and Vision Computing, vol. 29, no. 9, pp. 607–619, 2011.
BIOGRAPHIES OF AUTHORS
Fatima Zahra SALMAM is a Ph.D. student at the LAROSERI Laboratory, Faculty of Sciences, University of Chouaib
Doukkali, El Jadida (Morocco). She obtained a Master's degree in computer science, specializing in Business
Intelligence, from the University of Sultan Moulay Slimane, Morocco, in 2014. Her research interests include
emotion recognition, data mining, data analysis, computer vision, and artificial intelligence. She is preparing
a dissertation on emotion recognition from image and speech data using data mining techniques.
Abdellah Madani is currently a Professor and Ph.D. supervisor in the Department of Computer Science,
Chouaib Doukkali University, Faculty of Sciences, El Jadida, Morocco. His main research interests
include optimization algorithms, text mining, traffic flow, and modelling platforms. He is the author
of many research papers published in conference proceedings and international journals.
Mohamed Kissi received his Ph.D. degree in Computer Science from UPMC, France, in 2004.
He is currently a Professor in the Department of Computer Science, University Hassan II Casablanca,
Faculty of Sciences and Technologies, Mohammedia, Morocco. His current research interests include
machine learning, data and text mining (Arabic), and Big Data. He is the author of many research
papers published in conference proceedings and international journals on Arabic text mining,
bioinformatics, genetic algorithms, and fuzzy sets and systems.
 
Design of storage benchmark kit framework for supporting the file storage ret...
Design of storage benchmark kit framework for supporting the file storage ret...Design of storage benchmark kit framework for supporting the file storage ret...
Design of storage benchmark kit framework for supporting the file storage ret...
 
Detection of diseases in rice leaf using convolutional neural network with tr...
Detection of diseases in rice leaf using convolutional neural network with tr...Detection of diseases in rice leaf using convolutional neural network with tr...
Detection of diseases in rice leaf using convolutional neural network with tr...
 
A systematic review of in-memory database over multi-tenancy
A systematic review of in-memory database over multi-tenancyA systematic review of in-memory database over multi-tenancy
A systematic review of in-memory database over multi-tenancy
 
Agriculture crop yield prediction using inertia based cat swarm optimization
Agriculture crop yield prediction using inertia based cat swarm optimizationAgriculture crop yield prediction using inertia based cat swarm optimization
Agriculture crop yield prediction using inertia based cat swarm optimization
 
Three layer hybrid learning to improve intrusion detection system performance
Three layer hybrid learning to improve intrusion detection system performanceThree layer hybrid learning to improve intrusion detection system performance
Three layer hybrid learning to improve intrusion detection system performance
 
Non-binary codes approach on the performance of short-packet full-duplex tran...
Non-binary codes approach on the performance of short-packet full-duplex tran...Non-binary codes approach on the performance of short-packet full-duplex tran...
Non-binary codes approach on the performance of short-packet full-duplex tran...
 
Improved design and performance of the global rectenna system for wireless po...
Improved design and performance of the global rectenna system for wireless po...Improved design and performance of the global rectenna system for wireless po...
Improved design and performance of the global rectenna system for wireless po...
 
Advanced hybrid algorithms for precise multipath channel estimation in next-g...
Advanced hybrid algorithms for precise multipath channel estimation in next-g...Advanced hybrid algorithms for precise multipath channel estimation in next-g...
Advanced hybrid algorithms for precise multipath channel estimation in next-g...
 
Performance analysis of 2D optical code division multiple access through unde...
Performance analysis of 2D optical code division multiple access through unde...Performance analysis of 2D optical code division multiple access through unde...
Performance analysis of 2D optical code division multiple access through unde...
 
On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...
 
Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...
 
A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...
 
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
 

Recently uploaded

Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...ranjana rawat
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).pptssuser5c9d4b1
 
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...ranjana rawat
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxhumanexperienceaaa
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escortsranjana rawat
 
High Profile Call Girls Dahisar Arpita 9907093804 Independent Escort Service ...
High Profile Call Girls Dahisar Arpita 9907093804 Independent Escort Service ...High Profile Call Girls Dahisar Arpita 9907093804 Independent Escort Service ...
High Profile Call Girls Dahisar Arpita 9907093804 Independent Escort Service ...Call girls in Ahmedabad High profile
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxupamatechverse
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 

Recently uploaded (20)

Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
 
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
(TARA) Talegaon Dabhade Call Girls Just Call 7001035870 [ Cash on Delivery ] ...
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
 
High Profile Call Girls Dahisar Arpita 9907093804 Independent Escort Service ...
High Profile Call Girls Dahisar Arpita 9907093804 Independent Escort Service ...High Profile Call Girls Dahisar Arpita 9907093804 Independent Escort Service ...
High Profile Call Girls Dahisar Arpita 9907093804 Independent Escort Service ...
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptx
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 

Emotion recognition using facial fiducial points and neural networks

of faces or not. In the second step, we extract features or characteristics from the face that best describe emotions. In the last step, we classify the extracted features into basic emotions. The main difficulty usually lies in the second step, feature extraction: a set of features that best describes facial expression movement must be found and used for classification. For this reason, the technique proposed in this paper is based on image sequences and focuses on calculating 1176 Euclidean distances between all detected fiducial points, so as to measure all possible deformations of the face, because some distances may be more discriminative than those that merely appear visually logical.
Once we have calculated the dynamic features from fiducial points, an additional feature selection step (see Figure 1) is used to reduce the number of features by keeping only the most relevant ones.

Figure 1. Steps of an emotion recognition system

The rest of the paper is organized as follows: an overview of related work is presented in Section 2; our proposed method is presented in Section 3; the experimental setup is given in Section 4; Section 5 presents the results and a discussion of our proposed approach, with a comparison of recognition rates against previous works; Section 6 concludes the paper and presents some research perspectives.

2. RELATED WORK
In general, facial expression recognition systems can be classified into two categories: geometric-features-based methods and global-features-based methods [8]. In geometric-features-based methods, only some parts of the face, such as the eyes, nose, and mouth, are considered for feature extraction. Such methods consume a lot of computation time to obtain accurate facial feature detection and tracking, which is a major disadvantage. In global-features-based methods, by contrast, the whole face is considered for feature extraction, as in [9], where global features are extracted from face images using local Zernike moments. These methods are easy to use because they work directly on facial images to describe facial textures.

There is a plethora of work aiming to facilitate the recognition of emotions from facial expression using static [10, 11, 12] or dynamic images [13, 14, 15, 16, 17, 18, 9]. By measuring dynamic facial motions in image sequences, Bassili [19] confirmed that dynamic images give more accurate results in facial expression recognition than single static ones.

Friesen et al. [14] proposed the FACS system, which describes movements of the face through forty-four Action Units (AUs), each representing a movement of a particular part of the face (e.g., Brow Lowerer). According to Friesen et al., a facial expression can be characterized by a combination of AUs. To demonstrate that AUs are capable of conveying emotion expressions, Basori et al. [20] generated emotion expressions of an avatar using combinations of AUs based on facial muscles. Pantic et al. [15] focused their work on recognizing facial action units (AUs) and their temporal models using profile-view face image sequences; to track 15 facial points in an input face profile sequence, they applied a particle filtering method [21].

Valstar et al. [22] proposed an automatic method to recognize 22 action units (AUs) and their models using image sequences. First, to automatically detect 20 fiducial points, they used a Gabor-feature-based boosted classifier; these points were then tracked through the sequence of images using a particle filtering method with factorized likelihoods. Pu et al. [16] suggested a new framework for facial expression analysis based on recognizing AUs from image sequences. To detect and track fiducial points, they first applied AAM [23] to model the neutral facial expression in the first frame, and then used a pyramidal implementation of Lucas-Kanade [24] to track the feature points in the remaining frames. They classified facial expressions in two levels, using random forests as the classification method.
The first level classifies AUs, taking as input the displacement vectors between the neutral-expression frame and the peak-expression frame. The second level uses the detected AUs as input to classify facial expressions.

Most facial expression recognition methods are AU-based [15, 22, 16, 20], often influenced by the FACS system proposed by Friesen et al. [14]. Nevertheless, there are also techniques based only on fiducial points, which minimize computation time. Abdat et al. [17] focused on another geometric method to detect facial expression: they used twenty-one distances to encode facial expressions, describing the deformations of facial features relative to the neutral state. Their method relies first on the Shi-Tomasi algorithm to extract feature points, and second on the Lucas-Kanade algorithm [24] to track and detect points; the distance vector, computed from image sequences, is then used as a descriptor of the facial expression and fed to an SVM classifier. Hammal et al. [13] developed a classification system based on belief theory and applied it to the Hammal-Caplier database. They used five distances between different parts of the face (eyebrows, both eyes, and mouth). In their work, distances were computed on expression skeletons from image sequences; however, only
four emotions (joy, surprise, disgust, and neutral) of the six basic emotions were considered. Perveen et al. [10] focused on three regions (eyebrows, eyes, mouth) to recognize emotion from static images. They first located the characteristic points of the face and then evaluated animation parameters such as the openness of the eyes, the width of the eyes, the height of the eyebrows, the opening of the mouth, and the width of the mouth. As a classification technique, they used a decision-tree-based method, applied to only thirty images from the JAFFE database [25], and recognized six emotions (happy, surprise, fear, sad, angry, and neutral), excluding disgust. Saeed et al. [26] proposed an emotion recognition system based on just eight fiducial points. They derived six geometric features by measuring distances between the mouth, eyes, and eyebrows; these features represent the changes of the face while an emotion occurs, and were presented to an SVM classifier. The system was applied to the Cohn-Kanade database (CK+) [27] and the Binghamton University 3D Facial Expression database [28] to recognize the six basic emotions. Majumder et al. [29] suggested an emotion recognition model based on the Kohonen self-organizing map (KSOM) that uses a 26-dimensional facial geometric feature vector, calculated from three parts of the face (lips, eyes, and eyebrows), to describe the changes of the six basic emotions; the experiments were conducted on the MMI database [30].

The research studies cited above show that dynamic facial expressions from image sequences are more descriptive for the task of emotion recognition, and can increase accuracy in real-time applications compared with static images.

3. PROPOSED METHOD
This section presents and justifies our proposed technique for emotion recognition from facial expression. Our contribution concerns the feature extraction step, in which we propose to calculate all Euclidean distances between fiducial points in the first and last frames in order to measure facial motion. First, we detect the face using the Viola-Jones algorithm [31]; then we detect and track 49 fiducial points using the powerful and recent Supervised Descent Method (SDM) proposed by Xiong et al. [32]. From these points, which cover four parts of the face (eyebrows, eyes, nose, and mouth), we calculate all possible distances between each pair of points, obtaining C(49,2) = 1176 Euclidean distances. Then, to measure the dynamic deformation relative to the neutral state, we calculate the distance ratios between the first and last frames (Section 3.1); these ratios constitute the dynamic features. Afterward, we use a feature selection method to reduce the number of features and keep only the most relevant ones. Finally, we present the selected dynamic features to a neural network classifier for facial expression recognition.

3.1. Facial expression representation
Once the face has been detected with the Viola-Jones algorithm [31], we apply the SDM method [32] to detect and track fiducial points in image sequences. To measure the face deformation, we consider only the first and the last frames; the detection step is illustrated in the sketch below.
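As a minimal illustration of this step, the following Python sketch uses OpenCV's Haar-cascade implementation of Viola-Jones to locate the face in a frame. The 49-point SDM landmark tracker of Xiong et al. [32] is not part of OpenCV, so the landmark step is only stubbed here; `detect_landmarks` is a hypothetical placeholder, not the authors' implementation.

```python
# Minimal sketch of the detection step, assuming OpenCV (cv2) is installed.
# Viola-Jones face detection via OpenCV's Haar cascade; the 49-point SDM
# landmark tracker is not in OpenCV, so it is stubbed below.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the largest detected face as an (x, y, w, h) box, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])

def detect_landmarks(frame, face_box):
    """Hypothetical placeholder for the 49-point SDM landmark detector."""
    raise NotImplementedError("plug in an SDM or other 49-point landmark model")
```

In practice, `detect_face` would be run on the first and last frames of each sequence, and the landmark model would return the 49 (x, y) points used in the equations of this subsection.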
First, we calculate all Euclidean distances (1) from the 49 detected points, represented by their x and y coordinates (2)(3), which cover the parts of the face as follows: 10 points for the eyebrows, 12 points for the eyes, 9 points for the nose, and 18 points for the mouth. In total, we calculate 1176 distances in the first and in the last frames. Then, we measure the dynamic deformation by calculating the ratio (4) between frames: each ratio is the distance computed in the peak frame divided by the same distance computed in the first frame. The dynamic features (5) form a vector of 1176 ratios expressing deformation relative to the neutral state. An overview of the facial expression representation process is given in Figure 2.

D = [D_1, D_2, ..., D_i, ..., D_t]    (1)
V_0 = [x_{10}, y_{10}, x_{20}, y_{20}, ..., x_{n0}, y_{n0}]    (2)
V_p = [x_{1p}, y_{1p}, x_{2p}, y_{2p}, ..., x_{np}, y_{np}]    (3)
∆D_i = D_{ip} / D_{i0}    (4)
DF = [∆D_1, ∆D_2, ..., ∆D_t]    (5)

where
n: the total number of fiducial points;
t: the number of Euclidean distances calculated between each pair of points;
V_0: the x and y coordinates of the detected points in the first frame;
V_p: the x and y coordinates of the detected points in the peak frame;
D_{ip}: the i-th Euclidean distance in the peak frame;
D_{i0}: the i-th Euclidean distance in the first frame.

A direct implementation of these equations is sketched below.
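The feature computation reduces to a few lines of NumPy. The following sketch is a minimal reading of equations (1)-(5): it computes the 1176 pairwise distances for the first and peak frames and their ratios. The landmark arrays are assumed inputs, e.g., from a detector such as the one sketched above.

```python
# Dynamic-feature computation per equations (1)-(5): a minimal sketch.
# `first_pts` and `peak_pts` are assumed (49, 2) arrays of landmark
# coordinates from the first (neutral) and last (peak) frames.
from itertools import combinations
import numpy as np

def pairwise_distances(points):
    """All C(n,2) Euclidean distances between landmark pairs, eq. (1)."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

def dynamic_features(first_pts, peak_pts):
    """Ratios of peak-frame to first-frame distances, eqs. (4)-(5)."""
    d0 = pairwise_distances(first_pts)   # D_i0, first frame
    dp = pairwise_distances(peak_pts)    # D_ip, peak frame
    return dp / d0                       # DF: 1176 ratios for 49 points

# Example: 49 landmarks per frame yield a 1176-dimensional feature vector.
rng = np.random.default_rng(0)
df = dynamic_features(rng.random((49, 2)), rng.random((49, 2)))
assert df.shape == (1176,)
```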
Figure 2. Dynamic features representation process

3.2. Feature selection
One of the key issues in emotion classification is the set of features used for prediction. For this reason, a feature selection step is used to choose the most relevant features. In general, feature selection combines an attribute subset evaluator with a search method: the attribute evaluator determines how a worth is assigned to each subset of features, and the search method determines what style of search is performed [33]. In this paper, we chose CfsSubsetEval as the feature evaluator and Best First as the search method, as implemented in Weka. The CfsSubsetEval evaluator assesses the worth of a subset of features by considering the individual predictive ability of each feature along with the degree of redundancy between them; subsets that are highly correlated with the class while having low inter-correlation are preferred. The Best First method searches the space of feature subsets by greedy hill-climbing augmented with a backtracking facility [33].

3.3. Classification
A neural network (NN) classifier was chosen to classify facial expressions from the previously selected dynamic features. It was trained on a multi-class emotion recognition task using the backpropagation algorithm, with the sigmoid function as activation function. Our NN is a single network with one hidden layer: the first layer represents the input data (the DF), the second is the hidden layer, and the last represents the output classes. The number of neurons in the hidden layer was chosen experimentally. A compact stand-in for this selection-and-classification pipeline is sketched below.
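As a rough Python stand-in for Sections 3.2 and 3.3 (not the authors' Weka setup), the sketch below approximates the CfsSubsetEval + Best First selection with a simple univariate filter, and the backpropagation NN with a one-hidden-layer sigmoid MLP. The values 83 (selected features) and 20 (hidden neurons) come from the paper; the synthetic data are for illustration only.

```python
# Rough stand-in for the selection + classification pipeline; a simple
# univariate filter replaces Weka's CFS + Best First search, and a
# one-hidden-layer sigmoid MLP replaces the custom backpropagation NN.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def build_model(n_selected=83, n_hidden=20):
    """83 selected features and 20 hidden neurons follow the paper."""
    return make_pipeline(
        SelectKBest(f_classif, k=n_selected),          # filter-style selection
        MLPClassifier(hidden_layer_sizes=(n_hidden,),  # one hidden layer
                      activation="logistic",           # sigmoid units
                      max_iter=2000, random_state=0))

# Example with synthetic data shaped like the paper's features:
# 300 sequences x 1176 distance ratios, 6 emotion classes.
rng = np.random.default_rng(0)
X = rng.random((300, 1176))
y = rng.integers(0, 6, size=300)
model = build_model().fit(X, y)
print(model.score(X, y))
```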
4. EXPERIMENTAL SETUP
Our experiments were conducted on three well-known facial expression databases: the Extended Cohn-Kanade (CK+) database [34, 27], the Oulu-CASIA VIS database [35], and the JAFFE database [25].

The CK+ database [34, 27] contains 327 labeled image sequences, each referring to one of seven expressions: anger, contempt, disgust, fear, happiness, sadness, and surprise. For each image sequence, only the last frame is provided with an expression label. The database breaks down as follows: 45 sequences of anger, 59 of disgust, 25 of fear, 69 of happiness, 28 of sadness, and 83 of surprise.

The Oulu-CASIA VIS database [35] covers different lighting conditions; we used the strong and good lighting subsets, which contain 80 subjects. Each subject performs the six basic expressions (anger, disgust, fear, happiness, sadness, and surprise), giving 480 labeled image sequences in total.

The JAFFE database [25] contains 213 images from 10 Japanese female subjects. Each subject has 3 or 4 examples of each of the six basic expressions plus the neutral expression. The database breaks down as follows: 30 images of anger, 29 of disgust, 32 of fear, 31 of happiness, 31 of sadness, 30 of surprise, and 30 neutral images.

4.1. Training process
We carried out three experiments to observe the influence of each design choice on emotion recognition accuracy. All experiments were conducted on the three databases (CK+, Oulu-CASIA VIS, and JAFFE), each divided into 60% for training, 10% for validation, and 30% for testing (a split sketch is given at the end of this subsection), with an NN of 20 neurons in the hidden layer in all experiments.

The first experiment first omits the feature selection step and uses the DF (5) directly as input to our classifier, so the NN takes 1176 features as input and six or seven classes as output, depending on the database. It then adds the feature selection step: after computing the DF (5) for each image sequence in each database, we combined the CK+, Oulu-CASIA VIS, and JAFFE databases into a single database of 1020 samples covering eight expressions (anger, contempt, disgust, fear, happiness, sadness, surprise, and neutral), and applied the feature selection method to this combined database in order to select the common, most relevant features. As a result, the features were reduced from 1176 to 83. Finally, we trained three classifiers, one per database, each taking 83 features as input and six or seven classes as output.

In the second experiment, we examined the ability to classify new image sequences: the classifier trained on CK+ was tested on the Oulu-CASIA VIS and JAFFE databases, and vice versa.

The last experiment first unifies the three databases, i.e., removes the emotions that do not appear in all databases and keeps only the common ones, which leaves 309 and 183 image sequences for the CK+ and JAFFE databases respectively. It then tests each classifier trained on one database against the two other databases while varying the size of the training set, showing how this influences emotion recognition accuracy.
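For concreteness, the 60/10/30 protocol can be realized with two successive splits, as in the hedged sketch below; the stratification choice is an assumption, since the paper does not state how the partitions were drawn.

```python
# Hedged sketch of the 60/10/30 train/validation/test protocol.
# Stratified splitting is an assumption; the paper does not specify it.
from sklearn.model_selection import train_test_split

def split_60_10_30(X, y, seed=0):
    # Hold out 30% for testing first.
    X_tmp, X_test, y_tmp, y_test = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=seed)
    # 10/70 of the remaining 70% gives the 10% validation share.
    X_train, X_val, y_train, y_val = train_test_split(
        X_tmp, y_tmp, test_size=0.10 / 0.70, stratify=y_tmp, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```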
5. RESULTS & DISCUSSION
Table 1 compares the results achieved in our first experiment with those achieved by Pu et al. [16] using random forests. The third column gives the emotion recognition accuracy obtained when the calculated DF are used directly; the last column gives the accuracy with the feature selection step, where only 83 of the 1176 features are used. The results show that our method outperforms the AU-based method of [16] whether or not feature selection is used. Moreover, the feature selection step requires far fewer features and gives better results than using the DF directly.

Table 1. Comparison of emotion recognition accuracy (%) with and without the feature selection (FS) process

Database          Pu et al. [16]   Ours, without FS   Ours, with FS
CK+               96.3             98                 99
Oulu-CASIA VIS    76.25            81.3               84.7
JAFFE             -                90.6               93.8

Table 2 presents the results achieved in the second experiment by the three classifiers trained separately on the CK+, Oulu-CASIA VIS, and JAFFE databases. The first classifier, trained on the CK+ database (still with 83 features), gives an emotion recognition accuracy of 67.29% on Oulu-CASIA VIS and 43.66% on JAFFE. The second classifier, trained on the Oulu-CASIA VIS database, gives 90.52% on CK+ and 46.48% on JAFFE. The third classifier, trained on the JAFFE database, gives 64.22% on CK+ and 47.29% on Oulu-CASIA VIS. The second classifier, trained on the Oulu-CASIA VIS database, achieves competitive accuracy, unlike the classifiers
which were trained on the CK+ and JAFFE databases.

Table 2. Accuracy (%) of each trained classifier when tested on the other databases

Training \ Testing   CK+      Oulu-CASIA VIS   JAFFE
CK+                  -        67.29            43.66
Oulu-CASIA VIS       90.52    -                46.48
JAFFE                64.22    47.29            -

This drop in accuracy can be explained, first, by the training sizes of the CK+ database (196 image sequences) and the JAFFE database (127 image sequences) compared with that of the Oulu-CASIA VIS database (288 image sequences), and second, by expressions that do not exist in all databases: the contempt expression is present in CK+ with 18 occurrences but absent from the two other databases, and likewise the JAFFE database treats 30 neutral expressions as an emotion class. Classifiers trained on supplementary expressions that do not exist in all databases therefore lose emotion recognition accuracy, which is why we proceeded with the third and last experiment, unifying all the databases to show how this influences our results.

Figure 3. Training our proposed method on one database and testing it on the others: (a) training on CK+, (b) training on Oulu-CASIA VIS, (c) training on JAFFE

Figure 3 shows how the training size and the unified databases increase emotion recognition accuracy across all databases. Figure 3(a) shows that accuracy increases from 67.29% to 72.08% on Oulu-CASIA VIS and from 43.66% to 56.83% on JAFFE. Figure 3(b) shows that accuracy increases from 90.52% to 96.44% on CK+ and from 46.48% to 53.55% on JAFFE. Figure 3(c) shows that accuracy increases from 64.22% to 72.49% on CK+ and from 47.29% to 51.25% on Oulu-CASIA VIS.

6. CONCLUSION & FUTURE WORK
In this research work, we have proposed an automatic approach to the facial expression recognition task. Our approach was tested using dynamic features calculated from the first and last frames, which represent respectively the neutral state and an emotional state. After detecting the face and the fiducial points in the first and last frames, all possible Euclidean distances were calculated between each pair of points, giving 1176 distances; then, to measure the deformation, each distance calculated in the peak frame was divided by the same distance calculated in the first frame, consistent with equation (4). Afterwards, we used a feature selection process to reduce the number of features by keeping only the most relevant ones. In the last step of our proposed approach, the selected dynamic features were presented to a neural network classifier for facial expression recognition. Evaluating this approach on three well-known databases has given encouraging results, with an emotion recognition accuracy of 99% on the CK+ database, 84.7% on the Oulu-CASIA VIS database, and 93.8% on the JAFFE database.

In future work we will continue developing our proposed system along several axes. First, we will investigate the possibility of adding features that represent the pose of the face. Second, we also intend to consider another source for recognizing emotions, namely the intonation of the voice, using acoustic parameters.
Finally, our ultimate aim is to combine the two sources, facial expression and voice intonation, to automatically recognize emotions from multimodal data using new deep learning classification approaches.
REFERENCES
[1] S. Lee and S.-Y. Shin, "Face song player according to facial expressions," International Journal of Electrical and Computer Engineering (IJECE), vol. 6, no. 6, pp. 2805-2809, 2016.
[2] P. Barros, G. I. Parisi, C. Weber, and S. Wermter, "Emotion-modulated attention improves expression recognition: A deep learning model," Neurocomputing, 2017.
[3] J. Arunnehru and M. K. Geetha, "Automatic human emotion recognition in surveillance video," in Intelligent Techniques in Signal Processing for Multimedia Security. Springer, 2017, pp. 321-342.
[4] H. K. Palo and M. N. Mohanty, "Classification of emotional speech of children using probabilistic neural network," International Journal of Electrical and Computer Engineering (IJECE), vol. 5, no. 2, p. 311, 2015.
[5] S. Motamed, S. Setayeshi, and A. Rabiee, "Speech emotion recognition based on a modified brain emotional learning model," Biologically Inspired Cognitive Architectures, 2017.
[6] A. Mehrabian, "Communication without words," Communication Theory, pp. 193-200, 2008.
[7] P. Ekman, "An argument for basic emotions," Cognition & Emotion, vol. 6, no. 3-4, pp. 169-200, 1992.
[8] C. Shan, S. Gong, and P. W. McOwan, "Facial expression recognition based on local binary patterns: A comprehensive study," Image and Vision Computing, vol. 27, no. 6, pp. 803-816, 2009.
[9] X. Fan and T. Tjahjadi, "A dynamic framework based on local Zernike moment and motion history image for facial expression recognition," Pattern Recognition, vol. 64, pp. 399-406, 2017.
[10] N. Perveen, S. Gupta, and K. Verma, "Facial expression recognition using facial characteristic points and Gini index," in Engineering and Systems (SCES), 2012 Students Conference on. IEEE, 2012, pp. 1-6.
[11] F. Z. Salmam, A. Madani, and M. Kissi, "Facial expression recognition using decision trees," in 2016 13th International Conference on Computer Graphics, Imaging and Visualization (CGiV). IEEE, 2016, pp. 125-130.
[12] A. T. Lopes, E. de Aguiar, A. F. De Souza, and T. Oliveira-Santos, "Facial expression recognition with convolutional neural networks: Coping with few data and the training sample order," Pattern Recognition, vol. 61, pp. 610-628, 2017.
[13] Z. Hammal, L. Couvreur, A. Caplier, and M. Rombaut, "Facial expression recognition based on the belief theory: Comparison with different classifiers," in Image Analysis and Processing - ICIAP 2005. Springer, 2005, pp. 743-752.
[14] E. Friesen and P. Ekman, "Facial action coding system: A technique for the measurement of facial movement," Palo Alto, 1978.
[15] M. Pantic and I. Patras, "Dynamics of facial expression: Recognition of facial actions and their temporal segments from face profile image sequences," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 36, no. 2, pp. 433-449, 2006.
[16] X. Pu, K. Fan, X. Chen, L. Ji, and Z. Zhou, "Facial expression recognition from image sequences using twofold random forest classifier," Neurocomputing, vol. 168, pp. 1173-1180, 2015.
[17] F. Abdat, C. Maaoui, and A. Pruski, "Human-computer interaction using emotion recognition from facial expression," in Computer Modeling and Simulation (EMS), 2011 Fifth UKSim European Symposium on. IEEE, 2011, pp. 196-201.
[18] A. Sánchez, J. V. Ruiz, A. B. Moreno, A. S. Montemayor, J. Hernández, and J. J. Pantrigo, "Differential optical flow applied to automatic facial expression recognition," Neurocomputing, vol. 74, no. 8, pp. 1272-1282, 2011.
[19] J. N. Bassili, "Emotion recognition: The role of facial movement and the relative importance of upper and lower areas of the face," Journal of Personality and Social Psychology, vol. 37, no. 11, p. 2049, 1979.
[20] A. H. Basori and H. M. A. AlJahdali, "Emotional facial expression based on action units and facial muscle," International Journal of Electrical and Computer Engineering (IJECE), vol. 6, no. 5, pp. 2478-2487, 2016.
[21] M. K. Pitt and N. Shephard, "Filtering via simulation: Auxiliary particle filters," Journal of the American Statistical Association, vol. 94, pp. 590-599, 1999.
[22] M. F. Valstar and M. Pantic, "Fully automatic recognition of the temporal phases of facial actions," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 42, no. 1, pp. 28-43, 2012.
[23] I. Matthews and S. Baker, "Active appearance models revisited," International Journal of Computer Vision, vol. 60, no. 2, pp. 135-164, 2004.
[24] J.-Y. Bouguet, "Pyramidal implementation of the affine Lucas-Kanade feature tracker: Description of the algorithm," Intel Corporation, vol. 5, no. 1-10, p. 4, 2001.
[25] M. J. Lyons, S. Akamatsu, M. Kamachi, J. Gyoba, and J. Budynek, "The Japanese female facial expression (JAFFE) database," 1998.
[26] A. Saeed, A. Al-Hamadi, R. Niese, and M. Elzobi, "Effective geometric features for human emotion recognition," in Signal Processing (ICSP), 2012 IEEE 11th International Conference on, vol. 1. IEEE, 2012, pp. 623-627.
[27] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression," in Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on. IEEE, 2010, pp. 94-101.
[28] L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato, "A 3D facial expression database for facial behavior research," in 7th International Conference on Automatic Face and Gesture Recognition (FGR06). IEEE, 2006, pp. 211-216.
[29] A. Majumder, L. Behera, and V. K. Subramanian, "Emotion recognition from geometric facial features using self-organizing map," Pattern Recognition, vol. 47, no. 3, pp. 1282-1293, 2014.
[30] M. Valstar and M. Pantic, "Induced disgust, happiness and surprise: An addition to the MMI facial expression database," in Proc. 3rd Intern. Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect, 2010, p. 65.
[31] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, vol. 1. IEEE, 2001, pp. I-511.
[32] X. Xiong and F. De la Torre, "Supervised descent method and its applications to face alignment," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 532-539.
[33] M. I. Devi, R. Rajaram, and K. Selvakuberan, "Generating best features for web page classification," Webology, vol. 5, no. 1, p. 52, 2008.
[34] T. Kanade, J. F. Cohn, and Y. Tian, "Comprehensive database for facial expression analysis," in Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on. IEEE, 2000, pp. 46-53.
[35] G. Zhao, X. Huang, M. Taini, S. Z. Li, and M. Pietikäinen, "Facial expression recognition from near-infrared videos," Image and Vision Computing, vol. 29, no. 9, pp. 607-619, 2011.

BIOGRAPHIES OF AUTHORS
Fatima Zahra Salmam is a Ph.D. student at the LAROSERI Laboratory, Faculty of Sciences, University of Chouaib Doukkali, El Jadida (Morocco). She obtained a Master's degree in computer science, specializing in Business Intelligence, from the University of Sultan Moulay Slimane, Morocco, in 2014. Her research interests include emotion recognition, data mining, data analysis, computer vision, and artificial intelligence. She is preparing a dissertation on emotion recognition from image and speech data using data mining techniques.

Abdellah Madani is currently a Professor and Ph.D. tutor in the Department of Computer Science, Chouaib Doukkali University, Faculty of Sciences, El Jadida, Morocco. His main research interests include optimization algorithms, text mining, traffic flow, and modelling platforms. He is the author of many research papers published in conference proceedings and international journals.

Mohamed Kissi received his Ph.D. degree in Computer Science from UPMC, France, in 2004. He is currently a Professor in the Department of Computer Science, University Hassan II Casablanca, Faculty of Sciences and Technologies, Mohammedia, Morocco. His current research interests include machine learning, data and text mining (Arabic), and Big Data. He is the author of many research papers published in conference proceedings and international journals on Arabic text mining, bioinformatics, genetic algorithms, and fuzzy sets and systems.