An Enhanced Independent Component-Based Human Facial Expression Recognition from Video

1. An Enhanced Independent Component-Based Human Facial Expression Recognition from Video
2. By: Umme Rumaan (57), Ansari Rumana (63)
3. Introduction
• Facial Expression Recognition (FER) from video is an essential research area in the field of Human-Computer Interfaces (HCI).
• Nowadays, consumer video cameras have become inexpensive and are being used extensively in many consumer devices such as laptops and mobile phones. Lately, these cameras have been used for face-related applications such as [1] face detection, [2] face recognition, and [3] facial expression recognition (FER).
4.
• FER has been regarded as one of the fundamental technologies for HCI, enabling computers to interact with humans in a way that resembles human-to-human interaction.
• For feature extraction from facial expression video images, most early FER research extracted useful features using Principal Component Analysis (PCA).
• PCA is a second-order statistical method that derives, in an unsupervised manner, the orthogonal bases capturing the maximum variability, and it provides global image features. It is also commonly used for dimension reduction.
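As a rough illustration of PCA-based dimension reduction on face images, here is a minimal sketch assuming scikit-learn is available; the 60x80 image size follows the preprocessing slide, while the helper name and number of components are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: PCA-based dimension reduction on flattened delta face images.
import numpy as np
from sklearn.decomposition import PCA

def pca_reduce(delta_images, n_components=30):
    """delta_images: array of shape (n_frames, 80, 60) of delta face images."""
    X = delta_images.reshape(len(delta_images), -1)  # flatten each frame to a 4800-D vector
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(X)                  # projections onto the principal axes
    return pca, features
```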
5. Method
Our proposed FER system consists of:
• preprocessing of the sequential facial expression images in the video,
• feature extraction via EICA-FLDA,
• codebook generation via a vector quantization algorithm, and
• modeling and recognition via HMM.
6. A. Preprocessing
• In preprocessing of the sequential facial expression images, image alignment is performed first to realign the common regions of the face.
• A face alignment approach is used that manually matches the eyes and mouth of the faces to designated coordinates.
• The typical realigned image is 60 by 80 pixels.
• Histogram equalization is then performed on the realigned images for lighting correction.
• Afterwards, the first frame of each input sequence is subtracted from the following frames to obtain delta images that capture the facial expression changes over time.
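A minimal sketch of these preprocessing steps, assuming OpenCV (cv2) and NumPy are available and that the faces have already been roughly aligned by eye and mouth positions as described above; the helper name is hypothetical.

```python
# Sketch of the preprocessing stage: resize, histogram equalization, delta images.
import cv2
import numpy as np

def preprocess_sequence(gray_frames):
    """gray_frames: list of aligned 8-bit grayscale face crops from one video clip."""
    frames = [cv2.resize(f, (60, 80)) for f in gray_frames]  # realigned size: 60x80 pixels
    frames = [cv2.equalizeHist(f) for f in frames]           # lighting correction
    frames = np.stack(frames).astype(np.float32)
    # Subtract the first frame from the following frames to obtain delta images
    # that capture the expression change over time.
    return frames[1:] - frames[0]
```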
7. B. Feature Extraction
• The key idea of feature extraction from a set of time-sequential facial expression images is the combination of EICA and FLDA.
• The purpose of this method is to find an optimal local representation of facial expression images in a low-dimensional space and to produce well-separated time-sequential features for robust recognition.
• It has the following steps:
(1) PCA is performed first for dimension reduction,
(2) ICA is applied on the reduced PCA subspace to find statistically independent basis images for the corresponding facial expression image representation,
(3) FLDA is then employed to compress the same classes as close as possible and to separate the different classes as far as possible.
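A hedged sketch of this PCA -> ICA -> FLDA chain, using scikit-learn's PCA, FastICA, and LinearDiscriminantAnalysis as stand-ins for the EICA-FLDA components described above; the component counts and function names are illustrative assumptions, not the authors' implementation.

```python
# Approximate sketch of the EICA-FLDA feature extraction pipeline.
import numpy as np
from sklearn.decomposition import PCA, FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_eica_flda(X_train, y_train, n_pcs=30):
    """X_train: (n_frames, n_pixels) flattened delta images; y_train: expression label per frame."""
    pca = PCA(n_components=n_pcs)
    X_pca = pca.fit_transform(X_train)            # (1) dimension reduction
    ica = FastICA(n_components=n_pcs, max_iter=1000)
    X_ica = ica.fit_transform(X_pca)              # (2) statistically independent components
    flda = LinearDiscriminantAnalysis()
    X_flda = flda.fit_transform(X_ica, y_train)   # (3) maximize between-class separation
    return (pca, ica, flda), X_flda

def transform_eica_flda(models, X):
    pca, ica, flda = models
    return flda.transform(ica.transform(pca.transform(X)))
```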
8. C. Vector Quantization
• To decode the temporal variations of the facial expression features, we employ discrete HMMs; the EICA-FLDA feature vectors are therefore quantized into a finite set of observation symbols using a codebook.
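A minimal sketch of codebook generation and symbol assignment, using scikit-learn's k-means as a stand-in for the LBG vector quantizer cited in [9]; the codebook size and helper names are assumptions.

```python
# Sketch: build a codebook over training features and map frames to discrete symbols.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(feature_vectors, codebook_size=32):
    """feature_vectors: (n_frames, n_features) EICA-FLDA features pooled over training clips."""
    return KMeans(n_clusters=codebook_size, n_init=10).fit(feature_vectors)

def quantize(codebook, sequence_features):
    # Each frame becomes a discrete observation symbol O_t (index of the nearest codeword).
    return codebook.predict(sequence_features)
```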
9. D. Modeling and Recognition via HMM
• While training the HMMs, the facial expression image sequences are projected onto the feature space and symbolized through vector quantization.
• Thus, for each training facial expression image sequence, the corresponding observation sequence O = {O1, O2, O3, ..., OT} is obtained, where T indicates the sequence length.
• The obtained observation symbol sequences are then used to train the corresponding expression HMMs: anger-HMM, joy-HMM, sad-HMM, disgust-HMM, fear-HMM, and surprise-HMM.
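A small self-contained sketch of how the per-expression discrete HMMs could be used at recognition time: it implements only the forward-algorithm scoring of an observation sequence (Baum-Welch training is omitted), and all parameter and helper names are illustrative assumptions.

```python
# Sketch: score a symbol sequence under each expression HMM and pick the best.
import numpy as np

def log_forward(symbols, start_prob, trans, emit):
    """Log-likelihood log P(O | HMM) of an observation sequence O = (O_1, ..., O_T).
    start_prob: (N,), trans: (N, N), emit: (N, M) parameters of a discrete HMM."""
    log_alpha = np.log(start_prob) + np.log(emit[:, symbols[0]])
    for o in symbols[1:]:
        # alpha_t(j) = [sum_i alpha_{t-1}(i) * trans(i, j)] * emit(j, O_t), in log space
        log_alpha = (np.logaddexp.reduce(log_alpha[:, None] + np.log(trans), axis=0)
                     + np.log(emit[:, o]))
    return np.logaddexp.reduce(log_alpha)

def classify_expression(symbols, expression_hmms):
    # expression_hmms: e.g. {"anger": (start, trans, emit), "joy": (...), ...}
    return max(expression_hmms, key=lambda name: log_forward(symbols, *expression_hmms[name]))
```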
10. EXPERIMENTAL SETUP
• A set of comparison experiments was performed under the same procedure.
• To report the recognition performance, we prepared training and testing video clips of variable length using the well-known Cohn-Kanade facial expression database.
• We thus tried to recognize the six universal facial expressions: anger, joy, sadness, disgust, fear, and surprise.
11. A. Facial Expression Database
• The facial expression database used in our experiments is the Cohn-Kanade AU-coded facial expression database, which consists of facial expression sequences starting from a neutral expression and progressing to a target expression.
• The face is captured in frontal view, and each subset is composed of several sequential frames of a specific expression.
• There are six universal expressions to be classified and recognized via the proposed approach.
B. Experiments
• The steps given in the methodology are then applied to these images using the facial expression DB.
• To evaluate the performance of the proposed system, we used 15 image sequences per expression for training and 40 image sequences per expression for testing.
12. Applications
• FER can be used in retail markets, for example to gauge customer reactions to products on sale.
• It is used in online games as a real-time application.
13. CONCLUSION
We have presented a robust video-based FER system using EICA-FLDA for facial expression feature extraction and HMM for recognition. We have illustrated the performance of our proposed method on sequential datasets for the six facial expression recognition problems. The experimental results show that EICA-FLDA, the linear discriminant approach applied to IC feature vectors obtained from an optimal representation of PCs, improves the feature extraction task. Furthermore, HMMs dealing with the EICA-FLDA-processed sequential facial expression images provide a superior recognition rate over conventional feature extraction approaches, reaching a mean recognition rate of 93.23%. Our system could be used in consumer systems for better human-computer interaction.
14. References
[1] M. T. Rahman and N. Kehtarnavaz, “Real-Time Face-Priority Auto Focus for Digital and Cell-Phone Cameras,” IEEE Transactions on Consumer Electronics, vol. 54, no. 4, pp. 1506–1513, 2008.
[2] D.-S. Kim, I.-J. Jeon, S.-Y. Lee, P.-K. Rhee, and D.-J. Chung, “Embedded Face Recognition based on Fast Genetic Algorithm for Intelligent Digital Photography,” IEEE Transactions on Consumer Electronics, vol. 52, no. 3, pp. 726–734, 2006.
[3] C. Padgett and G. Cottrell, “Representing face images for emotion classification,” Advances in Neural Information Processing Systems, vol. 9, Cambridge, MA: MIT Press, 1997.
[4] S. Mitra and T. Acharya, “Gesture Recognition: A Survey,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 3, pp. 311–324, 2007.
[5] G. Donato, M. S. Bartlett, J. C. Hager, P. Ekman, and T. J. Sejnowski, “Classifying Facial Actions,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 10, pp. 974–989, 1999.
[6] P. S. Aleksic and A. K. Katsaggelos, “Automatic facial expression recognition using facial animation parameters and multistream HMMs,” IEEE Transactions on Information Forensics and Security, vol. 1, pp. 3–11, 2006.
[7] L. R. Rabiner, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,” Proceedings of the IEEE, vol. 77, pp. 257–286, 1989.
[8] L. Zhang and G. W. Cottrell, “When Holistic Processing is Not Enough: Local Features Save the Day,” in Proceedings of the Twenty-Sixth Annual Cognitive Science Society Conference, 2004.
[9] Y. Linde, A. Buzo, and R. Gray, “An Algorithm for Vector Quantizer Design,” IEEE Transactions on Communications, vol. 28, no. 1, pp. 84–94, 1980.
[10] J. F. Cohn, A. Zlochower, J. Lien, and T. Kanade, “Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding,” Psychophysiology, vol. 36, pp. 35–43, 1999.
[11] J. J. Lee, M. Z. Uddin, P. T. H. Truc, and T.-S. Kim, “Spatiotemporal Depth Information-based Human Facial Expression Recognition Using FICA and HMM,” in Proceedings of the International Conference on Ubiquitous Healthcare, pp. 105–106, 2008.
[12] G. J. Iddan and G. Yahav, “3D imaging in the studio (and elsewhere…),” in Proceedings of SPIE, vol. 4298, pp. 48–55, 2001.
