First Doctoral Committee Meeting
INTERNAL FULL-TIME RESEARCH SCHOLAR
SIVASHANKAR P (2014PHD1116)
SCHOOL OF ELECTRONICS ENGINEERING
GUIDE
Dr. R. VISHNU PRIYA
ASSOCIATE PROFESSOR
SCHOOL OF COMPUTING SCIENCE & ENGINEERING
Motivation
According to social psychology, the impact of a communicated message splits roughly as:
Verbal part – 7% of the effect
Vocal part – 38% of the effect
Facial expression – 55% of the effect
Automating the objective measurement of facial activity would benefit:
Behavioral Science
Man-machine interaction
Introduction
The scope of AI can be extended by considering irrational aspects of thought (emotion, consciousness)
Cognitive theories:
 Emotions are an emergent property of the mind, which heuristically processes information in the cognitive domain.
 Emotions are psychological and physical reactions to a particular event.
Humans typically display emotions through:
 Voice
 Face
 Gestures
Facial Component
Human face
 Static facial signals - Permanent features (identification)
 Slow facial signals - Changes in the appearance (age)
 Artificial signals - Exogenous features (gender)
 Rapid facial signals - Temporal changes in neuromuscular activity
Non-verbal communication (facial expression, tone of voice, posture, eye gaze, etc.)
Facial expressions are produced by muscle contractions and:
Constitute a finite, small set of alternative expressions
Are discriminated using specific features
Refer to internal states (usually emotions)
Are universal in both configuration and meaning
Are irrational and complex
Darwin (1872) - facial expressions in man and animals
Background
Facial Action Coding System (FACS)
Ekman and Friesen (1978)
46 Action Units (AUs)
Combinations of AUs yield facial expressions (an illustrative mapping is sketched below)
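To make the AU-to-expression idea concrete, the sketch below encodes commonly cited prototypic AU combinations (EMFACS-style) for the six basic emotions as a Python dictionary. The exact AU sets vary across sources and are assumptions here, not taken from these slides.

```python
# Illustrative prototypic AU combinations per basic emotion (assumed, EMFACS-style).
PROTOTYPIC_AUS = {
    "happiness": {6, 12},             # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},          # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},       # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},
}

def match_emotions(detected_aus):
    """Return emotions whose full prototypic AU set appears among the detected AUs."""
    detected = set(detected_aus)
    return [emotion for emotion, aus in PROTOTYPIC_AUS.items() if aus <= detected]

print(match_emotions([6, 12, 25]))  # -> ['happiness']
```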
Cont.,
Six universally common expressions
Sign-based description – Action Units
Message-based description – Emotions
Gives a clear idea of the visual properties of each expression
Describes and analyses the movements of points belonging to the facial features
Phases of Facial Emotion Recognition
1. Face acquisition
Face detection
Normalization
2. Facial feature point extraction and tracking
Geometric-based method
Appearance-based method
Hybrid method
3. Facial expression classification
Machine learning techniques
Pipeline: face acquisition → feature point extraction and tracking → expression classification (a minimal skeleton sketch follows)
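The sketch below wires the three phases together; the stage functions are placeholders assumed to be supplied by the concrete steps sketched on later slides, not an existing API.

```python
def recognize(frame, detect_and_normalize, extract_and_track, classify, state=None):
    """Run one frame through the three phases: face acquisition,
    feature point extraction/tracking, and expression classification."""
    face = detect_and_normalize(frame)                 # phase 1: detect + normalize the face
    features, state = extract_and_track(face, state)   # phase 2: extract/track points (keeps tracker state)
    label = classify(features)                         # phase 3: map features to an expression label
    return label, state
```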
1. Face Acquisition
Image or image sequence
Onset, apex, and offset
Temporal information
Face detection
Haar cascade classifier (a detection sketch follows)
 Haar-like features
 Integral image
 Eliminate sub-images that do not contain the object
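A minimal face-detection sketch using OpenCV's pretrained frontal-face Haar cascade is shown below; the image path is a placeholder.

```python
# Haar-cascade face detection sketch (assumes opencv-python is installed).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("subject.jpg")                 # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale scans sub-windows at several scales and rejects those
# that fail the early stages of the cascade.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```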
Cont.,
AdaBoost classifier (a weight-update sketch follows)
Choose the weak classifier with the lowest weighted error
Update the sample weights
Decide the final (ensemble) classifier
Remaining difficulties: occlusions, variations in head pose, and lighting conditions
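The sketch below is a bare-bones discrete AdaBoost loop matching the three steps above; `weak_learners` is an assumed list of callables mapping X to labels in {-1, +1}, and y is a NumPy array of ±1 labels.

```python
import numpy as np

def adaboost(X, y, weak_learners, rounds=10):
    """Discrete AdaBoost sketch: y in {-1, +1}; each weak learner maps X -> {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # start with uniform sample weights
    ensemble = []                                # (alpha, weak_learner) pairs
    for _ in range(rounds):
        # choose the weak classifier with the lowest weighted error
        errors = [np.sum(w * (h(X) != y)) for h in weak_learners]
        best = weak_learners[int(np.argmin(errors))]
        err = max(min(errors), 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)
        # update and renormalize the sample weights
        w *= np.exp(-alpha * y * best(X))
        w /= w.sum()
        ensemble.append((alpha, best))
    # final classifier: sign of the alpha-weighted vote
    return lambda Xn: np.sign(sum(a * h(Xn) for a, h in ensemble))
```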
Normalization
A non-frontal face is warped to a frontal view
Slight head rotation is handled by translation and rotation (an alignment sketch follows)
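A simple eye-based alignment sketch is given below; the eye centers are assumed to come from a separate landmark detector, and the target positions and output size are arbitrary placeholders.

```python
# Rotate, scale, and translate the face so the eye centers land at fixed positions.
import cv2
import numpy as np

def align_face(img, left_eye, right_eye, size=128):
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))            # in-plane head rotation
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    scale = (0.5 * size) / np.hypot(dx, dy)           # desired inter-ocular distance
    M = cv2.getRotationMatrix2D(center, angle, scale)
    # translate so the eye midpoint ends up at a fixed location in the output crop
    M[0, 2] += size * 0.5 - center[0]
    M[1, 2] += size * 0.35 - center[1]
    return cv2.warpAffine(img, M, (size, size))
```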
2. Feature Point Extraction and Tracking
Feature point
Primary features - eye corners, mouth corners, nose tip, etc.
Secondary features - wrinkles, visibility of teeth, etc.
Optical flow
Motion of image pixels between frames
Advantages
• Captures dynamic events
• Simple
Disadvantages
• Noisy measurements
• Can degrade performance
Particle filtering (a tracking sketch follows)
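The sketch below tracks previously located feature points between two grayscale frames with pyramidal Lucas-Kanade optical flow in OpenCV; `prev_pts` is assumed to come from a landmark or corner detector.

```python
# Lucas-Kanade feature-point tracking sketch.
import cv2
import numpy as np

def track_points(prev_gray, curr_gray, prev_pts):
    """prev_pts: float32 array of shape (N, 1, 2) with point coordinates."""
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    good = status.ravel() == 1          # keep only points that were tracked successfully
    return prev_pts[good], next_pts[good]
```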
2.1 Geometric Model
Geometric method
Uses a priori information about the sizes and locations of low-level features (a distance-feature sketch follows this list)
Disadvantages
Difficult to design a deterministic physical model
A priori rules become useless under:
 Illumination changes
 Non-rigid motion
 Inaccurate image registration
 Motion discontinuities
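As a toy example of geometric features, the sketch below computes pairwise distances between a few primary landmarks, normalized by the inter-ocular distance; the landmark names are illustrative assumptions.

```python
# Simple geometric (distance-based) feature vector from landmark points.
import numpy as np
from itertools import combinations

def geometric_features(landmarks):
    """landmarks: dict of name -> (x, y) for a few primary feature points,
    including 'left_eye_outer' and 'right_eye_outer' (assumed names)."""
    pts = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    iod = np.linalg.norm(pts["left_eye_outer"] - pts["right_eye_outer"])
    feats = []
    for a, b in combinations(sorted(pts), 2):
        feats.append(np.linalg.norm(pts[a] - pts[b]) / iod)   # scale-invariant distance
    return np.array(feats)
```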
2.2 Appearance model
Texture and shape information
Active Shape Model (ASM)
Local appearance
Snake model
Steps:
 Shape representation
 Training procedure
 Point Distribution Model (PDM) - a sketch follows this list
 Landmark matching
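A minimal Point Distribution Model sketch: PCA over aligned training shapes, retaining enough modes to cover about 95% of the shape variance. The shapes here are random placeholders and are assumed to be already Procrustes-aligned.

```python
import numpy as np

rng = np.random.default_rng(0)
shapes = rng.normal(size=(50, 2 * 68))          # 50 training shapes, 68 (x, y) landmarks each

mean_shape = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
eigvals = (S ** 2) / (len(shapes) - 1)

# keep enough modes to explain ~95% of the shape variance
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95)) + 1
P = Vt[:k].T                                    # shape basis, shape (2*68, k)

def synthesize(b):
    """Generate a plausible shape from model parameters b (length k)."""
    return mean_shape + P @ b
```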
Cont.,
Active Appearance Model (AAM)
Whole appearance
Iteratively matches the model to the image
Disadvantage
Model initialization is difficult
Manual intervention
Complex training procedures
AAM fitting loop: place model in image → measure difference → update model → iterate
Other models
Hybrid model
Computational cost is high
Dimensionality reduction techniques - PCA, LDA, LBP (an LBP sketch follows the notes below)
General issues
 The same combination of action units can map onto multiple emotions
 Obtaining context information is difficult
 Other critical factors - duration and intensity
 Trained models are often unreliable for practical use
 No guarantee that the subject will perform the required expression
*PCA-Principal component analysis
*LDA-Linear discriminant analysis
*LBP-Local binary patterns
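A sketch of an LBP descriptor using scikit-image is shown below. Strictly speaking LBP is a texture operator rather than a dimensionality-reduction method, and it is typically combined with PCA/LDA to obtain compact appearance features; the parameters here are common defaults, not values from this work.

```python
# Uniform LBP histogram for a grayscale face crop.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_face, P=8, R=1):
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    n_bins = P + 2                               # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist                                  # compact per-region texture feature
```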
3. Classification
Machine learning techniques
Issues
How to define a set of categories/classes?
How to choose a classification mechanism?
Is the classifier capable of analyzing any subject?
Classifiers (a minimal SVM sketch follows)
Support Vector Machine (SVM)
Artificial Neural Network (ANN)
Hidden Markov Model (HMM)
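The sketch below trains a multi-class SVM with scikit-learn on toy random features standing in for, e.g., geometric distances or LBP histograms; the data, dimensions, and labels are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 59))                  # 300 samples, 59-dimensional feature vectors
y = rng.integers(0, 6, size=300)                # 6 emotion classes (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # one-vs-one multi-class SVM internally
clf.fit(X_tr, y_tr)
print("toy accuracy:", clf.score(X_te, y_te))   # ~chance level on random data
```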
Applications
Multimodal human-computer interface (HCI).
Educational intelligent tutoring system.
Aircraft and air traffic control, nuclear plant surveillance.
Video surveillance for security, driver state monitoring for automotive safety.
Pain assessment, lie detection.
Social media and customer surveys.
Challenges
• A key challenge is achieving optimal preprocessing, feature extraction
or selection, and classification, particularly under conditions of input
data variability.
• To attain successful recognition performance, most current
expression recognition approaches require some control over the
imaging conditions.
• The controlled imaging conditions typically cover the following
aspects.
(i) View or pose of the head.
(ii) Environment clutter and illumination.
(iii) Miscellaneous sources of facial variability.
Cont.,
• The need to control imaging conditions is detrimental to the widespread deployment of expression recognition systems, because many real-world applications require operational flexibility.
• Emotions also have acoustic characteristics.
• Although combining acoustic and visual characteristics promises improved recognition accuracy, developing effective combination techniques is a challenge that has not been widely addressed.
Aim
• Choosing an optimized set of feature points that accurately convey the emotion.
• Transforming them into a mathematical model that captures the true value of the features for better classification.
• Equipping the machine with the knowledge to classify these features for better recognition.
Literature | Method | No. of features | Classifier | Issues
Wang and Ruan (2010) | Orthogonal LFDA | 15 | Nearest neighbor | Performance is far from human perception
Zhang et al. (2012) | Local binary pattern + LFDA | 11 | SVM (one-against-one) | High training time
Gupta et al. (2011) | Hybrid (discrete cosine transform + Gabor filter + wavelet transform + Gaussian distribution) | Unknown | AdaBoost | Complexity, variability, subtle changes
Rahulamathavan et al. (2013) | LFDA (in the encrypted domain) | 40 | Nearest neighbor | Non-linear emotional facial features
Kharat and Dudul (2009) | Discrete cosine transform + statistical features of images | 71 | MLP NN, SVM, PCA and GFFNN | Complex processing
Zhao and Zhang (2011) | Local binary pattern + KDIsomap | 20 | Nearest neighbor | Wrong choice of patches for matching leads to low recognition rate
Zhang and Tjondronegoro (2011) | Patch-based Gabor | 185 | SVM (linear) | Design parameters used in this method are exceptionally hard to fix
Gu et al. (2010) | Radial encoded Gabor jets | 49 | KNN | Mask creation is time-consuming
Upper Face Demo
Lower Face Demo
Emotions and Their Respective Mouth Poses
Emotion | Mouth pose
Fear | Lip corners pulled sideways, tightening and elongating the mouth.
Happy | Lip corners pulled up.
Anger | Lips tightened and pressed together.
Surprise | Mouth opens as the jaw drops.
Disgust | Upper lip rises, mouth opens, tongue sticks out.
Sadness | Lip corners pulled straight.
References
[1] P. Ekman, W. V. Friesen, "Constants across cultures in the face and emotion", Journal of Personality and Social Psychology, 17(2), 124–129, 1971.
[2] P. Viola, M. J. Jones, "Robust real-time face detection", International Journal of Computer Vision, 57(2), 137–154, 2004.
[3] P. Ekman, W. V. Friesen, "The Facial Action Coding System: A Technique for the Measurement of Facial Movement", Consulting Psychologists Press, San Francisco, 1978.
[4] I. Kotsia, I. Pitas, "Facial expression recognition in image sequences using geometric deformation features and support vector machines", IEEE Transactions on Image Processing, pp. 172–187, Jan. 2007.
[5] T. F. Cootes, G. Edwards, C. Taylor, "Comparing active shape models with active appearance models", Proceedings of the British Machine Vision Conference, BMVA Press, pp. 173–182, 1999.
[6] Y.-L. Tian, T. Kanade, J. F. Cohn, "Recognizing action units for facial expression analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), February 2001.
[7] M. Pamplona Segundo, L. Silva, O. R. P. Bellon, "Automatic face segmentation and facial landmark detection in range images", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 40(5), October 2010.
[8] I. Kotsia, I. Buciu, I. Pitas, "An analysis of facial expression recognition under partial facial image occlusion", Image and Vision Computing, 26, 1052–1067, 2008.
[9] J. M. Buenaposada, E. Muñoz, L. Baumela, "Efficient illumination independent appearance-based face tracking", Image and Vision Computing, 27(5), 560–578, April 2009.
[10] M. Ilbeygi, H. Shah-Hosseini, "A novel fuzzy facial expression recognition system based on facial feature extraction from color face images", Engineering Applications of Artificial Intelligence, 25, 130–146, 2012.
