International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol.1, No.1, April 2011
FACIAL EXTRACTION AND LIP TRACKING USING
FACIAL POINTS
Hlaing Htake Khaung Tin
University of Computer Studies, Yangon, Myanmar
hlainghtakekhaungtin@gmail.com
ABSTRACT
Automatic facial feature extraction is one of the most important and widely attempted problems in computer
vision. It is a necessary step in face recognition and facial image compression. Many methods have
been proposed in the literature for the facial feature extraction task. However, they still have
disadvantages, such as an incomplete reflection of face structure and texture, and they remain far from
satisfactory in terms of extraction accuracy and processing speed. In this paper, we propose
a method for fast and accurate extraction of feature points such as the eyes, nose, mouth and eyebrows
from dynamic images for the purpose of face recognition. The proposed method achieves high positional
accuracy at a low computing cost by combining shape extraction with geometric features of facial images
such as the eyes, nose and mouth. This paper also presents a facial expression synthesis system based on
facial point tracking in frontal image sequences. Selected facial points are automatically tracked using a cross-
correlation based optical flow. The proposed synthesis system uses a simple facial feature model with a
small set of control points that can be tracked in the original facial image sequences.
KEYWORDS
Facial feature point extraction; face recognition; geometric features.
1. INTRODUCTION
One of the most elusive goals in computer animation is the realistic animation of the human
face. Human face modelling and animation are very complex tasks because of the physical
structure of the face and the dynamics involving its psychological and behavioural aspects [11].
The objective of this paper is to study a statistical model for human face aging, which is then
used for face aging simulation and age estimation. Face aging simulation and prediction is an
interesting task with many applications in digital entertainment [1]. Personal
verification and identification is an actively growing area of research. Face, voice, lip
movements, hand geometry, odour, gait, iris, retina and fingerprint are the most commonly used
authentication modalities. All of these physiological and behavioural characteristics of a person
are called biometrics. The driving force behind progress in this field is the growing role of
the Internet and electronic transfers in modern society. Therefore, a considerable number of
applications are concentrated in the areas of electronic commerce and electronic banking.
Biometrics have a significant advantage over traditional authentication techniques because
biometric characteristics are not easily transferable, are unique to every person,
and cannot be lost, stolen or broken. A biometric is a measurable physiological or
behavioural characteristic of an individual used in personal identification and verification, and
the choice of a biometric solution depends on user acceptance, level of security, accuracy,
cost and implementation time [2].
2. RELATED WORK
In facial feature extraction, local features of the face such as the nose and the eyes are extracted
and used as input data; this has been a central step in several applications. Various
approaches have been proposed in the literature to extract these facial points from images or
video sequences of faces. The main categories of approaches are geometry-based, template-based,
colour segmentation and appearance-based techniques [5]. The ability for a computer
system to sense the user’s emotions opens a wide range of applications in different research
areas, including security, law enforcement, medicine, education, and telecommunications [6].
However, it is important not to confuse human emotion recognition with facial expression
recognition: the latter is merely a classification of facial deformations into a set of abstract
classes, solely based on visual information. Indeed, human emotions can only be inferred from
context, self-report, physiological indicators, and expressive behaviour which may or may not
include facial expressions [7]. Zhilin Wu et al. [8] have presented a combined method to
accurately track the outer lips by using the Gradient Vector Flow (GVF) snakes with parabolic
templates as an additional external force. This combination places fewer demands on both
boundary saliency and template accuracy. Erol [9] described a facial modelling and
animation system that used a muscle-based generic face model and deformed it using deformation
techniques to model the individualized faces. Two orthogonal photos of the real faces were used
for this purpose. Image processing techniques were employed to extract certain features on the
photographs, which were then refined manually by the user through the facilities of the user
interface of the system. The feature points located on the frontal and side views of a real face
were used to deform the generic model. Then, the muscle vectors in the individualized face
model were arranged accordingly. The individualized models produced in this manner were
animated using parametric interpolation techniques. Worral et al. [10] examined the
performance of low bit-rate 3-D "Talking Heads" in order to determine an appropriate
delivery scheme for the encoded data.
3. IMAGE DATABASE
In this work, we used a database prepared at the University of Computer Studies, Yangon. There
are 500 subjects in the database. Subjects sat directly in front of the camera, and for each person
there are on average five frontal-view frames. Image sequences for the frontal views were digitized
into a 640×490 pixel array with 8-bit greyscale. Table 1 shows the database
specifications.
Table 1. Database Specifications
Age: 15 to 70
Female: 70%
Male: 30%
Myanmar: 80%
Other: 20%
Resolution: 640 × 490, 8-bit greyscale
4. FACIAL FEATURE POINT TRACKING USING OPTICAL FLOW
In the first digitized frame, 21 key feature points were manually marked with a computer mouse
around facial features such as the eyes, eyebrows and mouth, as shown in Figure 1. The information
expressing the movement of the eyes, eyebrows and mouth is the most desirable for machine
recognition of facial expressions. We therefore confine ourselves to these components and determine
the facial feature points that are representative of the boundary between these components and the
skin. Some of these points were used only for specifying the model features in the first frame. The
other points were automatically tracked in the subsequent frames using cross-correlation based
optical flow.
Figure 1. Selected 21 facial feature points
Each point is the centre of an 11×11 flow window that captures horizontal and vertical flow.
Figure 2 shows the implementation of this method on two subsequent frames.
Figure 2. Cross-correlation optical flow calculation
The cross-correlation between an 11×11 window in the first frame and a 21×21 search window in the
next frame was calculated, and the position with the maximum cross-correlation between the two
windows was taken as the position of the feature point in the next frame. In this paper, we developed
a face model for synthesizing facial expressions by tracking the feature points described in Figure 1.
The following sections describe the specifications of the proposed face model.
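A minimal sketch of this matching step is given below, assuming greyscale frames stored as 2-D NumPy arrays and feature points given as (row, column) pairs away from the image border. The window sizes follow the paper (11×11 patch, 21×21 search region); the function names and the use of normalized cross-correlation as the matching score are illustrative assumptions rather than the paper's exact implementation.

    import numpy as np

    def track_point(prev_frame, next_frame, point, patch=11, search=21):
        """Track one feature point between two greyscale frames.

        An 11x11 window centred on `point` in prev_frame is compared, by
        normalized cross-correlation, against every candidate position inside
        a 21x21 search window in next_frame; the best match gives the new
        position. Assumes the point stays away from the image border.
        """
        y, x = point
        pr, sr = patch // 2, search // 2
        template = prev_frame[y - pr:y + pr + 1, x - pr:x + pr + 1].astype(np.float64)
        template = template - template.mean()

        best_score, best_pos = -np.inf, (y, x)
        for dy in range(-(sr - pr), sr - pr + 1):
            for dx in range(-(sr - pr), sr - pr + 1):
                cy, cx = y + dy, x + dx
                candidate = next_frame[cy - pr:cy + pr + 1, cx - pr:cx + pr + 1].astype(np.float64)
                candidate = candidate - candidate.mean()
                denom = np.sqrt((template ** 2).sum() * (candidate ** 2).sum())
                if denom == 0:
                    continue
                score = float((template * candidate).sum() / denom)
                if score > best_score:
                    best_score, best_pos = score, (cy, cx)
        return best_pos

    def track_sequence(frames, initial_points):
        """Propagate the manually marked points through all subsequent frames."""
        tracks = [list(initial_points)]
        for prev_frame, next_frame in zip(frames, frames[1:]):
            tracks.append([track_point(prev_frame, next_frame, p) for p in tracks[-1]])
        return tracks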
5. FACE MODEL
We confine ourselves to three facial features: the mouth, the eyes and the eyebrows. We used a muscle-
based triangular patch object model in which the vertices of the triangles were determined from the
feature point tracking results [3]. Deforming the triangles gives the model the ability to track shape.
In this section we describe the mouth, eye and eyebrow models.
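A possible representation of such a triangular patch model is sketched below: a fixed triangle topology whose vertex positions are replaced each frame by positions derived from the tracked feature points. The class and field names are hypothetical; the paper does not specify an implementation.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PatchModel:
        vertices: np.ndarray   # (V, 2) vertex coordinates of the model
        triangles: np.ndarray  # (T, 3) indices into `vertices`; topology is fixed

        def deform(self, new_vertices: np.ndarray) -> None:
            """Move the vertices to positions derived from the tracked points.
            The connectivity is unchanged, so the deformed triangles carry the
            shape of the tracked feature (mouth, eye or eyebrow)."""
            self.vertices = np.asarray(new_vertices, dtype=np.float64)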
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol.1, No.1, April 2011
4
5.1. Mouth Model
Figure 3 shows the selected mouth feature points and face features that were extracted
from these points.
Figure 3. Mouth features and feature points
Points 1, 2 and 3 were tracked in the subsequent frames and their positions were transferred to the
mouth model. The other points were used only in the first frame to determine features such as the
mouth width and the upper and lower lip thickness. Figure 4 shows the proposed mouth model. The
coordinates of vertices 1, 8 and 15 were determined directly from the normalized positions of the
feature points in Figure 3 (UL was normalized to one). The coordinates of the other vertices were
determined from the features MW, Mx, My, UL, LL and MH relative to vertices 1, 8 and 15. The
right-hand side of the model mirrors the left-hand side.
Figure 4. Mouth model
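As a rough illustration of this construction (not the paper's exact geometry), the sketch below builds a reduced set of mouth vertices from the three tracked points and a subset of the measured features, normalizes lengths by UL, and mirrors the left half to obtain the symmetric right half. The intermediate-vertex offsets are hypothetical.

    import numpy as np

    def mouth_model_vertices(corner, upper_mid, lower_mid, MW, UL, LL):
        """Place a simplified set of mouth-model vertices (illustrative only).

        corner, upper_mid, lower_mid : tracked points 1, 2 and 3 of Figure 3, as (x, y).
        MW, UL, LL : mouth width and upper/lower lip thickness measured in the
            first frame. All lengths are divided by UL so that UL == 1.
        """
        MW, LL = MW / UL, LL / UL
        UL = 1.0

        corner = np.asarray(corner, dtype=float)
        upper_mid = np.asarray(upper_mid, dtype=float)
        lower_mid = np.asarray(lower_mid, dtype=float)

        # Left half of the lip outline: the corner plus two hypothetical
        # intermediate vertices offset by the lip thicknesses (image y grows down).
        left_half = [
            corner,
            upper_mid + np.array([-MW / 4.0, -UL / 2.0]),
            lower_mid + np.array([-MW / 4.0,  LL / 2.0]),
        ]

        # The right half mirrors the left half about the vertical line through
        # the mid points, since the model is symmetric.
        mid_x = upper_mid[0]
        right_half = [np.array([2.0 * mid_x - v[0], v[1]]) for v in left_half]

        return np.vstack([upper_mid, lower_mid] + left_half + right_half)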
5.2. Eye and Eyebrow Model
Figure 5 shows the selected eye feature points and the features that were extracted from these
points.
Figure 5. Eye and Eyebrow features and feature points
Points 6, 7, 10 and 11 were tracked in the subsequent frames, and the other points were used only
in the first frame to determine features such as the width and height of the eye and eyebrow.
Figure 6 shows the proposed eye and eyebrow model. The coordinates of vertices 16, 17, 30 and 31
were determined directly from the normalized positions of feature points 6, 7, 10 and 11 in
Figure 5. The coordinates of the other vertices were determined from the features EH, EW, EBH and
EBW relative to vertices 16, 17, 30 and 31. For example, the width of the iris is assumed to be
half of EW. The right-hand side of the model mirrors the left-hand side. Face landmarks were
determined using 10 landmark points around the face in the first frame (points 12-21 in Figure 1).
These points were not tracked in subsequent frames, but the points around the jaw were
synchronized with the movement of point 2 in Figure 3.
Figure 6. Eye and eyebrow model
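The fragment below sketches the two rules just described: the iris width is taken as half the eye width EW, and the jaw landmarks, although not tracked themselves, are shifted by the displacement of mouth point 2. Function and variable names are illustrative.

    import numpy as np

    def iris_width(EW):
        """The iris width is assumed to be half of the eye width EW."""
        return EW / 2.0

    def synchronize_jaw(jaw_points, point2_first_frame, point2_current_frame):
        """Shift the (untracked) jaw landmarks by the displacement of mouth point 2.

        jaw_points : (N, 2) array of jaw landmark positions marked in the first
        frame (a subset of points 12-21 in Figure 1).
        """
        displacement = (np.asarray(point2_current_frame, dtype=float)
                        - np.asarray(point2_first_frame, dtype=float))
        return np.asarray(jaw_points, dtype=float) + displacement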
6. EXPERIMENTAL RESULTS
The proposed face model was evaluated on the prepared database. Figure 7 shows the result of
feature point tracking in the original face and the corresponding deformation of the model to
reproduce the same expressions. The algorithms showed good robustness and reasonable accuracy for
the photos in our test set. The tracked feature points in the original frames are also marked in
this figure. The proposed model can also be used for lip tracking and for a speech-synchronized
facial animation system. Figure 7 shows the results of lip tracking while pronouncing the word
"MinGaLarPar", which means "Hello" in Myanmar.
Figure 7. Lip tracking for pronouncing the word “MinGaLarPar”
7. CONCLUSION
In this paper, we presented a muscle-based face model that tracks facial points in real face image
sequences and reproduces the same expressions. The proposed model has a simple structure and uses a
small set of control points compared to similar face models. The model has low complexity and is
suitable for real-time implementations such as real-time facial animation. Because only frontal
images were used, the face model is 2-D. By using side images in addition to the frontal images and
assigning 3-D coordinates to the vertices of the model, a 3-D face model can be obtained. The
procedure is fully automatic, although user modification is allowed if necessary. The results can
also play a crucial role in a feature-based face recognition system. Future work includes the
investigation of additional features and the application of the method to the recognition of more
naturalistic facial expression videos.
ACKNOWLEDGEMENTS
First of all, the author thanks Dr Nilar Thein, Rector of the University of Computer Studies,
Yangon, Myanmar. The author is also grateful to her family, who offered strong moral and physical
support, care and kindness. Finally, the author is deeply thankful to Dr Myint Myint Sein,
Professor, University of Computer Studies, Yangon.
REFERENCES
[1] Jinli Suo, Song-Chun Zhu, Shiguang Shan, and Xilin Chen, "A Compositional and Dynamic Model
for Face Aging", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 3,
March 2010.
[2] Hadi Seyedarabi, Ali Aghagolzadeh, and Sohrab Khanmohammadi, "Facial Expressions
Animation and Lip Tracking Using Facial Characteristic Points and Deformable Model",
International Journal of Information and Communication Engineering, 1:8, 2005.
[3] Pohsiang Tsai, Tom Hintz, and Tony Jan, “Facial Behavior as Behavior Biometric? An
Empirical Study”, 2007.
[4] Nabil Hewahi, Aya Olwan, Nebal Tubeel, Salha EL-Asar, Zeinab Abu-Sultan, “Age
Estimation based on Neural Networks using Face Features”, Journal of Emerging
Trends in Computing and Information Science, Vol. 1, No. 2, October 2010.
[5] Elham Bagherian, Rahmita.Wirza.Rahmat and Nur Izura Udzir, “Extract of Facial
Feature Point”, International Journal of Computer Science and Network Security,
Vol.9, January 2009.
[6] Sebe, N., Sun, Y., Bakker, E., Lew, M., Cohen, I., Huang, T.: “Towards authentic
emotion recognition”, International Conference on Systems, Man and Cybernetics.
(2004).
[7] Cohn, J.F., "Foundations of human computing: facial expression and emotion",
International Conference on Multimodal Interfaces. (2006) 233-238.
[8] Z. Wu, P. S. Aleksic and A. K. Katsaggelos, "Lip Tracking for MPEG-4 Facial Animation," in
Proc. 4th IEEE Int. Conf. on Multimodal Interfaces, ICMI'02.
[9] F. Erol, “Modeling and Animating Personalized Faces”, M.Sc. Thesis, Bilkent University,
January 2001.
[10] S. T. Worral, A. H. Sadka, and A. M. Kondoz, "3-D Facial Animation for Very Low Bit
Rate Mobile Video", in Proc. IEEE Int. Conference on 3G Mobile Communication
Technologies, May 2002.
[11] Hussein Karam Hussein, "Towards Realistic Facial Modeling and Re-Rendering of Human Skin
Aging Animation", in Proc. IEEE Int. Conference on Shape Modeling, 2002.
Hlaing Htake Khaung Tin received the B.C.Sc and B.C.Sc (Hons.) degrees
from Government Computer College, Lashio, Northern Shan State, Myanmar
in 2003, and the Master of Computer Science degree from the University of
Computer Studies, Mandalay, Myanmar in 2006. She is currently pursuing her
Ph.D. at the University of Computer Studies, Yangon, Myanmar. Her research
interests mainly include face aging modelling, face recognition, and the
perception of human faces.