EMOTION DETECTION FROM FACIAL EXPRESSIONS
A Project Synopsis
Submitted in partial fulfilment of the
requirements for the award of the degree
of
Bachelor of Technology
in
COMPUTER SCIENCE & ENGINEERING
Submitted by
SAURABH SRIVASTAV (1114310142)
SHISHIR AGARWAL (1114310149)
SANCHI GUPTA (1114310138)
VISHAL SAXENA (1114310183)
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
IMS ENGINEERING COLLEGE, GHAZIABAD
March 2014
1. PROJECT TITLE:
“Emotion detection from facial expressions”
2. TEAM MEMBER DETAILS:

NAME ROLL NO. E-MAIL
SAURABH SRIVASTAV 1114310142 saurabh.30.srivastav@gmail.com
SHISHIR AGARWAL 1114310149 shishiragarwal19@gmail.com
SANCHI GUPTA 1114310138 sanchigupta.0920@gmail.com
VISHAL SAXENA 1114310183 vish_on_net00@yahoo.com

3. MENTOR DETAILS:
Dr. Avdhesh Gupta, Associate Professor (Department of Computer Science & Engineering)
Email: avvipersonal@gmail.com

4. OBJECTIVES AND SCOPE:
The most informative channel for machine perception of emotions is facial expression. Effective human-computer intelligent interaction (HCII) requires the computer to detect emotions from facial expressions. This project aims to develop an automatic emotion-detection system by evaluating machine learning algorithms for facial expression recognition. The system will perform feature selection on each video frame, analyse the image, and compare it against an authentic database of natural emotions, classifying each frame as a class of human emotion by harnessing facial expression dynamics.

Expected features, novelty & significance of the proposed project:
Successful detection of facial features, including the eyes, eyebrows, nose, and mouth. The program can also find the motion distribution of the different facial features and return a fused image in which the facial features are shaded with colours representing motion magnitude. With a full motion distribution of the different facial features, the system can then be trained on different facial expressions.
Scope:
The scope lies in the growing trend towards more natural human-computer interaction, in which users communicate with computers without traditional interface devices. Emotion-sensing systems have a wide range of applications in fields such as:
- Research and education
- Security and law enforcement, e.g. detecting micro-expressions that indicate lying
- Psychiatric evaluation
- Telecommunication
- Communication controls in games
5. TECHNICAL DETAILS:
Facial expressions provide important cues about emotions, so several approaches have been proposed to classify human affective states. The features used are typically based on the local spatial position or displacement of specific points and regions of the face, unlike audio-based approaches, which use global statistics of the acoustic features [1].
The main task involves feature extraction and emotion recognition: pattern matching assigns the input to a specific class of emotion using classifiers. The basic parts are:
Face tracking and feature extraction: The face tracking we use is based on a system developed by Tao and Huang called the piecewise Bézier volume deformation (PBVD) tracker [2]. This face tracker uses a model-based approach in which an explicit 3D wireframe model of the face is constructed. In the first frame of the image sequence, landmark facial features such as the eye and mouth corners are selected interactively. A face model consisting of 16 surface patches embedded in Bézier volumes is then warped to fit the selected facial features. Surface patches defined this way are guaranteed to be continuous and smooth. The shape of the mesh can be changed by moving the control points of the Bézier volume.
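A Bézier volume evaluates each mesh point as a Bernstein-weighted combination of control points, so moving a control point deforms the embedded surface smoothly. The sketch below evaluates a trivariate Bézier volume in plain Python; the degree-1 unit-cube example is purely illustrative and is not the tracker's actual 16-patch face model:

```python
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_i^n(t) = C(n,i) * t^i * (1-t)^(n-i)."""
    return comb(n, i) * t**i * (1.0 - t) ** (n - i)

def bezier_volume_point(ctrl, u, v, w):
    """Evaluate a trivariate Bezier volume at parameters (u, v, w) in [0, 1].

    ctrl is a nested list of control points, ctrl[i][j][k] = (x, y, z).
    Moving one control point shifts every evaluated point whose Bernstein
    weights involve it, which is how a PBVD-style mesh deforms.
    """
    n, m, l = len(ctrl) - 1, len(ctrl[0]) - 1, len(ctrl[0][0]) - 1
    x = y = z = 0.0
    for i in range(n + 1):
        bu = bernstein(n, i, u)
        for j in range(m + 1):
            bv = bernstein(m, j, v)
            for k in range(l + 1):
                wgt = bu * bv * bernstein(l, k, w)
                px, py, pz = ctrl[i][j][k]
                x, y, z = x + wgt * px, y + wgt * py, z + wgt * pz
    return (x, y, z)

# Degree-1 toy volume: the unit cube's corners as control points.
cube = [[[(float(i), float(j), float(k)) for k in (0, 1)]
         for j in (0, 1)] for i in (0, 1)]
centre = bezier_volume_point(cube, 0.5, 0.5, 0.5)
```

For degree 1 this reduces to trilinear interpolation; the real tracker fits higher-degree volumes to the face.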
Expression representation: We use a fusion of face shape and texture as the representation of facial expression. This hybrid representation incorporates local pixel-intensity variation patterns while still adhering to a global shape constraint, which proves effective.
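One simple way such a hybrid shape-texture vector could be assembled (an illustrative sketch, not the project's exact representation): concatenate centroid-normalized landmark coordinates with normalized pixel intensities, so that neither part dominates later distance computations.

```python
def fuse_shape_texture(landmarks, patch):
    """Concatenate a normalized shape vector with a normalized texture vector.

    landmarks: list of (x, y) facial-landmark positions (the shape part).
    patch: list of 0-255 pixel intensities sampled from the face (texture).
    """
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    # Shape: translate to the centroid, scale by the largest offset.
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    spread = max(max(abs(x - cx) for x in xs),
                 max(abs(y - cy) for y in ys)) or 1.0
    shape = [v for x, y in landmarks
             for v in ((x - cx) / spread, (y - cy) / spread)]
    # Texture: map intensities into [0, 1].
    texture = [p / 255.0 for p in patch]
    return shape + texture
```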
Recognition and classification: Several machine learning classifiers are used to assign the query image to one of the emotion classes in the training set. Classifiers that can be used for recognition [3] include:
- KNN (k-nearest neighbour)
- SVM (support vector machine)
- Bayesian network
- PEBLS
- CN2
- SSS
- Decision tree
- Voting algorithms
KNN classifier: For a kNN classifier, the class label is determined by the consensus of the k nearest neighbours [4]. Traditionally, in the absence of prior knowledge of the statistical regularities in the data, Euclidean distance is used to measure the dissimilarity between instances. However, as some researchers have shown, kNN performance can be improved significantly by exploiting the inherent data embedding and learning a distance metric accordingly. The k-nearest-neighbour (kNN) classifier is an extension of the simple nearest-neighbour (NN) classifier. The nearest-neighbour classifier makes a simple nonparametric decision: each query image Iq is examined based on the distance of its features from the features of the other images in the training database. The nearest neighbour is the image with the minimum distance from the query image in feature space. The distance between two feature vectors can be measured with distance functions such as the city-block distance d1, the Euclidean distance d2, or the cosine distance dcos [5].
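As a minimal sketch of this step, the example below classifies a query feature vector by majority vote of its k nearest training examples, with the city-block, Euclidean, and cosine distances mentioned above; the feature vectors and labels are made up for illustration:

```python
from collections import Counter
from math import sqrt

def d1(a, b):
    """City-block (L1) distance."""
    return sum(abs(x - y) for x, y in zip(a, b))

def d2(a, b):
    """Euclidean (L2) distance."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dcos(a, b):
    """Cosine distance: 1 - cos(angle between a and b)."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def knn_classify(query, training, k=3, dist=d2):
    """Label `query` by majority vote of its k nearest training examples.

    training is a list of (feature_vector, label) pairs.
    """
    neighbours = sorted(training, key=lambda ex: dist(query, ex[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Made-up feature vectors standing in for extracted facial features.
train = [([0.1, 0.2], "happy"), ([0.0, 0.1], "happy"),
         ([0.9, 0.8], "sad"), ([1.0, 0.9], "sad")]
prediction = knn_classify([0.2, 0.2], train)
```

Swapping `dist=d1` or `dist=dcos` changes the neighbourhood structure, which is exactly where learned distance metrics can improve the classifier.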
Voting algorithm:
Methods for voting classification, such as bagging and boosting (AdaBoost) [2], have been shown to be very successful in improving the accuracy of certain classifiers on artificial and real-world datasets. A voting algorithm takes an inducer and a training set as input and runs the inducer multiple times while changing the distribution of training-set instances. The generated classifiers are then combined into a final classifier that is used to classify the test set.
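A minimal bagging sketch of this idea, assuming a toy one-feature "decision stump" inducer and made-up data (not the project's actual inducer or features): each round resamples the training set with replacement, trains one stump, and the final classifier takes a majority vote.

```python
import random

def bagging(train_set, induce, n_rounds=5, seed=0):
    """Bagging: run the inducer on bootstrap resamples of the training set
    (changing the distribution of instances) and combine the resulting
    classifiers by majority vote."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_rounds):
        sample = [rng.choice(train_set) for _ in train_set]  # bootstrap resample
        models.append(induce(sample))
    def vote(features):
        labels = [model(features) for model in models]
        return max(set(labels), key=labels.count)  # majority vote
    return vote

def stump_inducer(sample):
    """Toy inducer: a one-feature decision stump thresholded at the mean."""
    thr = sum(f[0] for f, _ in sample) / len(sample)
    above = [label for f, label in sample if f[0] >= thr]
    below = [label for f, label in sample if f[0] < thr]
    hi = max(set(above), key=above.count) if above else below[0]
    lo = max(set(below), key=below.count) if below else above[0]
    return lambda f: hi if f[0] >= thr else lo

train = [([0.1], "happy"), ([0.15], "happy"), ([0.2], "happy"),
         ([0.8], "sad"), ([0.85], "sad"), ([0.9], "sad")]
classifier = bagging(train, stump_inducer)
prediction = classifier([0.12])
```

Boosting (AdaBoost) differs in that later rounds reweight instances the earlier classifiers got wrong, rather than resampling uniformly.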
6. FLOWCHART:
TEXT/VIDEO INPUT → FACE EXTRACTION → FEATURE EXTRACTION → EMOTION RECOGNITION USING MACHINE LEARNING ALGORITHM → NEURAL NETWORK TRAINING → MAPPING ON EMOTION PLANE → RECOGNIZED EMOTION

7. FUTURE WORK:
Future work can analyse micro-expression hotspots by training facial-feature and edge changes on candidate emotions, forming a robust scoring system for hidden-emotion detection, or by mapping distribution curves onto the motion distribution curves of different emotions. The program can also be extended to detect more facial features, edges, and even gestures, providing more data for evaluating potential emotions; it could potentially be used to measure stress level, measure pain level, and detect lies.
3D image processing can be included in the program to detect emotions more efficiently.
Speech recognition can be combined with facial-expression detection to reduce confusion between emotion classes, creating a more robust and effective system.
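The flowchart's stages can be sketched as a simple chain of functions; every stage below is a hypothetical placeholder (not a real detector, extractor, or classifier) used only to show how frames flow through the pipeline:

```python
def run_pipeline(frame, stages):
    """Pass one video frame through the flowchart's stages in order."""
    data = frame
    for stage in stages:
        data = stage(data)
    return data

# Hypothetical placeholder stages; real ones would wrap a face detector,
# the PBVD feature extractor, and a trained classifier.
def face_extraction(frame):
    return {"face": frame["pixels"]}

def feature_extraction(face):
    return {"features": [sum(face["face"])]}

def emotion_recognition(feats):
    return "happy" if feats["features"][0] < 100 else "sad"

frame = {"pixels": [10, 20, 30]}
result = run_pipeline(frame,
                      [face_extraction, feature_extraction, emotion_recognition])
```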
8. HARDWARE & SOFTWARE REQUIREMENTS:
SOFTWARE REQUIREMENTS
- Windows XP or above
- MATLAB 2010 or above
- OpenCV
- Java Development Kit
- SQL
HARDWARE REQUIREMENTS
- 1 GB RAM
- Processor: dual core or above
9. REFERENCES:
[1] Busso, C., Deng, Z., Yildirim, S., Bulut, M., Lee, C. M., Kazemzadeh, A., & Narayanan,
S. (2004, October). Analysis of emotion recognition using facial expressions, speech and
multimodal information. In Proceedings of the 6th international conference on
Multimodal interfaces (pp. 205-211).
[2] Sebe, N., Lew, M. S., Sun, Y., Cohen, I., Gevers, T., & Huang, T. S. (2007). Authentic
facial expression analysis. Image and Vision Computing, 25(12), 1856-1863.
[3] Wan, S., & Aggarwal, J. K. (2013, April). A scalable metric learning-based voting
method for expression recognition. In Automatic Face and Gesture Recognition (FG),
2013 10th IEEE International Conference and Workshops on (pp. 1-8). IEEE.
[4] Deokar, S. Weighted k-nearest neighbour.
[5] Ebrahimpour, H., & Kouzani, A. Face recognition using bagging KNN.