This paper addresses one-sample face recognition, a challenging problem in pattern recognition. In the proposed method, the frontal 2D face image of each person is divided into several sub-regions. After computing the 3D shape of each sub-region, a fusion scheme is applied to the sub-regions to create a total 3D shape for the whole face image. Then, the 2D face image is mapped onto the corresponding 3D shape to construct a 3D face image. Finally, by rotating the 3D face image, virtual samples with different views are generated. Experimental results on the ORL dataset using a nearest-neighbor classifier reveal an improvement of about 5% in recognition rate for one sample per person when the training set is enlarged with the generated virtual samples. Compared with other related works, the proposed method has the following advantages: 1) only a single frontal face image is required, and the outputs are virtual images with varying views for each individual; 2) it needs only 3 key points of the face (eyes and nose); 3) 3D shape estimation for generating virtual samples is fully automatic and faster than other 3D reconstruction approaches; 4) it is fully mathematical with no training phase, and the estimated 3D model is unique to each individual.
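The core of the virtual-sample step is a rigid rotation of the estimated 3D face followed by projection back to the image plane. A minimal sketch in Python/NumPy, assuming orthographic projection and an illustrative three-point "face"; the paper's sub-region fusion and 3D shape estimation are not reproduced here:

```python
import numpy as np

def rotate_and_project(points_3d, yaw_deg):
    """Rotate 3D face points about the vertical (y) axis and
    orthographically project back to 2D, yielding a virtual view.
    The projection model is an assumption for illustration."""
    theta = np.radians(yaw_deg)
    c, s = np.cos(theta), np.sin(theta)
    # Rotation about the y axis (yaw)
    R = np.array([[  c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [ -s, 0.0,   c]])
    rotated = points_3d @ R.T
    return rotated[:, :2]  # orthographic projection onto the image plane

# A toy "3D face": the three key points the paper uses
# (left eye, right eye, nose tip), with the nose raised in depth
face = np.array([[-1.0, 1.0, 0.0],
                 [ 1.0, 1.0, 0.0],
                 [ 0.0, 0.0, 1.0]])

frontal = rotate_and_project(face, 0)    # unchanged frontal view
virtual = rotate_and_project(face, 30)   # virtual 30-degree view
```

Sampling several yaw angles this way produces the enlarged multi-view training set the abstract describes.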
A novel approach for performance parameter estimation of face recognition bas... — IJMER
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Engineering Research and Applications (IJERA) is an open-access, online, peer-reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology and Science, Power Electronics, Electronics and Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing and Low-Power VLSI Design, etc.
This paper describes a robust face recognition system using a skin segmentation technique. It addresses the problem of detecting faces in color images under various lighting conditions. The face is first preprocessed using histogram equalization to avoid illumination problems and is then detected using a skin segmentation method. Principal component analysis with a neural network is used to recognize the extracted facial features.
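The two preprocessing stages named above can be sketched as follows. The skin thresholds used here are a commonly cited rule-based RGB classifier chosen for illustration, not necessarily the paper's exact model:

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram-equalize an 8-bit grayscale image: remap intensities
    so their cumulative distribution becomes approximately uniform."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

def skin_mask_rgb(rgb):
    """Rule-based skin classifier in RGB space (a simplified variant of
    widely used literature thresholds; an assumption for illustration)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20)
            & (r - np.minimum(g, b) > 15)
            & (np.abs(r - g) > 15) & (r > g) & (r > b))

# Toy 2x2 image: one skin-like pixel, three non-skin pixels
img = np.array([[[220, 140, 110], [ 10, 200,  10]],
                [[ 10,  10, 200], [255, 255, 255]]], dtype=np.uint8)
mask = skin_mask_rgb(img)
```

In a full pipeline the equalized face region inside the skin mask would then be passed to the PCA/neural-network recognizer.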
International Journal of Engineering Research and Development — IJERD Editor
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture Engineering,
Aerospace Engineering.
International Journal of Engineering Research and Development (IJERD) — IJERD Editor
International Journal of Engineering Research and Development is a premier international peer-reviewed open-access engineering and technology journal promoting the discovery, innovation, advancement and dissemination of basic and translational knowledge in engineering, technology and related disciplines.
In this paper, we present an automatic 3D face recognition system using geodesic distance in Riemannian geometry. In this approach, we consider three-dimensional face images as residing on a Riemannian manifold, and we compute the geodesic distance using Jacobi iterations as a solution of the Eikonal equation. For solving the Eikonal equation, unstructured simplicial meshes of the 3D face surface, such as tetrahedra and triangles, are important for accurately modeling material interfaces and curved domains, which are approximations to curved surfaces in R3. In the classification step, we use Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). To test this method and evaluate its performance, a series of experiments was performed on the 3D Shape REtrieval Contest 2008 database (SHREC2008).
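The Jacobi-iteration idea for the Eikonal equation can be illustrated on a structured grid (the paper itself works on unstructured triangle/tetrahedral meshes). This sketch solves |∇T| = 1 with a single source point using the standard upwind update, so T approximates the distance from that point:

```python
import numpy as np

def eikonal_jacobi(source, n_iter=200, h=1.0):
    """Jacobi iterations for |grad T| = 1 on a regular grid, with T = 0
    on the source set. A structured-grid sketch of the idea only; mesh
    geometry and Riemannian metrics are not modeled here."""
    BIG = 1e9  # stands in for "infinity" at unreached cells
    T = np.where(source, 0.0, BIG)
    for _ in range(n_iter):
        p = np.pad(T, 1, constant_values=BIG)
        a = np.minimum(p[:-2, 1:-1], p[2:, 1:-1])   # upwind x-neighbor
        b = np.minimum(p[1:-1, :-2], p[1:-1, 2:])   # upwind y-neighbor
        # Upwind solution of the local quadratic; fall back to the
        # one-sided update when the neighbor values differ too much
        disc = 2.0 * h * h - (a - b) ** 2
        two_sided = (a + b + np.sqrt(np.maximum(disc, 0.0))) / 2.0
        update = np.where(np.abs(a - b) >= h, np.minimum(a, b) + h, two_sided)
        T = np.minimum(T, np.where(source, 0.0, update))
    return T

src = np.zeros((9, 9), dtype=bool)
src[4, 4] = True                      # single source at the grid center
T = eikonal_jacobi(src)
```

Along grid axes the computed T equals the exact distance; on diagonals the coarse-grid solution overestimates it, with the error shrinking as the grid is refined.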
Implementation of Face Recognition in Cloud Vision Using Eigen Faces — IJERA Editor
Cloud computing comes in several different forms. A face is a complex multidimensional visual model, and developing a computational model for face recognition is difficult. The paper discusses a methodology for face recognition based on an information-theoretic approach of coding and decoding the face image. The proposed system is a combination of two stages: feature extraction using principal component analysis, and recognition using a backpropagation network. This paper also discusses our work on the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA, along with its initial performance results. The challenge lies in how to perform task partitioning from mobile devices to the cloud and distribute the compute load among cloud servers to minimize the response time, given diverse communication latencies and server compute powers.
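The eigenface feature-extraction stage (PCA on vectorized face images) can be sketched as follows. The toy data and the choice of k = 3 components are illustrative, and the backpropagation classifier is omitted:

```python
import numpy as np

def eigenfaces(train, k):
    """Compute the mean face and the top-k principal components
    ("eigenfaces") of a (n_samples, n_pixels) training matrix."""
    mean = train.mean(axis=0)
    centered = train - mean
    # SVD of the centered data: rows of Vt are the eigenfaces,
    # ordered by decreasing explained variance
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, basis):
    """Encode face vectors as k coefficients in the eigenface basis."""
    return (x - mean) @ basis.T

rng = np.random.default_rng(0)
faces = rng.normal(size=(8, 64))       # 8 toy "faces" of 64 pixels each
mean, basis = eigenfaces(faces, k=3)
codes = project(faces, mean, basis)
```

The low-dimensional codes, rather than raw pixels, would then be fed to the recognition network, which is what makes the cloud-side compute load manageable.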
Most face recognition algorithms can achieve a high level of accuracy when the image is acquired under well-controlled conditions. The face should be still during the acquisition process; otherwise, the resulting image is blurred and hard to recognize. Forcing people to stand still during the process is impractical, so it is very likely that recognition must be performed on a blurred image. It is therefore important to understand the relation between image blur and recognition accuracy. The ORL database was used in the study. All images were in PGM format, 92 × 112 pixels, from forty different persons, with ten images per person. The images were randomly divided into training and testing datasets with a 50-50 ratio. Singular value decomposition was used to extract the features. The images in the testing datasets were artificially blurred to represent linear motion, and recognition was performed. The blurred images were also filtered using various methods. The recognition accuracy on the blurred faces and on the filtered faces was compared. The numerical study suggests that, at best, the image improvement processes can raise the recognition accuracy by less than five percent.
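A minimal sketch of the study's ingredients, assuming a simple averaging model for linear horizontal motion blur and the top singular values as the SVD feature vector (toy random data stands in for the ORL images):

```python
import numpy as np

def motion_blur_horizontal(img, length):
    """Simulate linear horizontal motion blur by averaging `length`
    shifted copies of the image (wrap-around at the border; an
    illustrative stand-in for the study's blur model)."""
    out = np.zeros(img.shape, dtype=float)
    for s in range(length):
        out += np.roll(img.astype(float), s, axis=1)
    return out / length

def svd_features(img, k):
    """Use the top-k singular values of the image matrix as a compact
    feature vector, as in SVD-based face descriptors."""
    s = np.linalg.svd(img, compute_uv=False)
    return s[:k]

rng = np.random.default_rng(1)
face = rng.random((112, 92))           # ORL images are 92 x 112 pixels
blurred = motion_blur_horizontal(face, length=7)
f_sharp = svd_features(face, k=10)
f_blur = svd_features(blurred, k=10)
```

Comparing `f_sharp` with `f_blur` makes the study's question concrete: blur visibly perturbs the feature vector, and deblurring filters can only partially undo that perturbation.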
Face Recognition System Using Local Ternary Pattern and Signed Number Multipl... — inventionjournals
This paper presents a novel approach to face recognition. The task of face recognition is to verify a claimed identity by comparing a claimed image of an individual with other images belonging to the same or other individuals in a database. The proposed method utilizes Local Ternary Pattern and signed-bit multiplication to extract local features of a face. The image is divided into small non-overlapping windows, and processing is carried out on these windows to extract features. The test image's features are compared with all training images using the Euclidean distance. The image with the lowest Euclidean distance is recognized as the true face image. If the distance between the test image and all training images is greater than a threshold, the test image is considered unrecognized (no match found). The face recognition rate of the proposed system is calculated by varying the number of images per person in the training database.
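The Local Ternary Pattern step can be sketched as follows. The threshold t = 5 and the neighbor ordering are illustrative choices, and the ternary code is split into the usual upper (+1) and lower (−1) binary patterns:

```python
import numpy as np

def ltp_codes(win, t=5):
    """Local Ternary Pattern over the 8 neighbors of each interior pixel.
    A neighbor more than t above the center maps to +1, more than t
    below to -1, otherwise 0; the ternary code is encoded as two
    8-bit binary patterns ("upper" and "lower")."""
    c = win[1:-1, 1:-1].astype(int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = win[1 + dy:win.shape[0] - 1 + dy,
                1 + dx:win.shape[1] - 1 + dx].astype(int)
        upper |= (n >= c + t).astype(int) << bit
        lower |= (n <= c - t).astype(int) << bit
    return upper, lower

# One 3x3 window with center value 50 and threshold 5
win = np.array([[60, 50, 52],
                [58, 50, 40],
                [49, 51, 70]], dtype=np.uint8)
upper, lower = ltp_codes(win)
```

Histograms of these codes over the non-overlapping windows would form the feature vector compared by Euclidean distance.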
An efficient feature extraction method with pseudo zernike moment for facial ... — ijcsity
Face recognition is one of the most challenging problems in the domain of image processing and machine vision. A face recognition system is critical when individuals have very similar biometric signatures, such as identical twins. In this paper, a new efficient facial recognition method for identical twins is proposed based on a geometric moment. The utilized geometric moment is the Pseudo-Zernike Moment (PZM), used as a feature extractor inside the facial area of identical-twin images. The facial area inside an image is detected using the AdaBoost approach. The proposed method is evaluated on two datasets, Twins Days Festival and Iranian Twin Society, which contain scaled, shifted and rotated facial images of identical twins under different illuminations. The results prove the ability of the proposed method to recognize a pair of identical twins and show that it is robust to rotation, scaling and changing illumination.
ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T... — ijcseit
Face recognition is one of the most challenging problems in the domain of image processing and machine vision. A face recognition system is critical when individuals have very similar biometric signatures, such as identical twins. In this paper, a new efficient facial recognition method for identical twins is proposed based on a geometric moment. The utilized geometric moment is the Zernike Moment (ZM), used as a feature extractor inside the facial area of identical-twin images. The facial area in an image is detected using the AdaBoost approach. The proposed method is evaluated on two datasets, Twins Days Festival and Iranian Twin Society, which contain scaled and rotated facial images of identical twins under different illuminations. The results prove the ability of the proposed method to recognize a pair of identical twins and show that it is robust to rotation, scaling and changing illumination.
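A Zernike moment and the rotation invariance that motivates its use here can be sketched as follows. This is a textbook discrete approximation over the unit disk, not the paper's exact implementation:

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Magnitude of the Zernike moment Z_nm of a square image mapped
    onto the unit disk. |Z_nm| is invariant to in-plane rotation,
    which is why ZM features suit rotated face images."""
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    # Map the pixel grid to [-1, 1] x [-1, 1]
    x = (2 * xs - N + 1) / (N - 1)
    y = (2 * ys - N + 1) / (N - 1)
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    # Radial polynomial R_nm(rho)
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        coef = ((-1) ** s * factorial(n - s)
                / (factorial(s)
                   * factorial((n + abs(m)) // 2 - s)
                   * factorial((n - abs(m)) // 2 - s)))
        R += coef * rho ** (n - 2 * s)
    basis = R * np.exp(-1j * m * theta)
    Z = (n + 1) / np.pi * np.sum(img[inside] * basis[inside])
    return abs(Z)

rng = np.random.default_rng(2)
face = rng.random((32, 32))
rotated = np.rot90(face)               # 90-degree rotation of the same image
z_orig = zernike_moment(face, 4, 2)
z_rot = zernike_moment(rotated, 4, 2)
```

Because a 90-degree rotation only shifts the phase of Z_nm, `z_orig` and `z_rot` agree to floating-point precision; a vector of such magnitudes for several (n, m) orders forms the twin-discriminating feature.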
AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F... — ieijjournal
Face recognition is one of the most challenging problems in the domain of image processing and machine vision. A face recognition system is critical when individuals have very similar biometric signatures, such as identical twins. In this paper, the facial area in an image is detected using the AdaBoost approach. The facial area is then divided into several local regions. Finally, a new efficient identical-twin feature extractor based on a geometric moment is applied to the local regions of the face image. The utilized geometric moment is the Zernike Moment (ZM), used as a feature extractor inside the local regions of the facial area of identical-twin images. The proposed method is evaluated on two datasets, Twins Days Festival and Iranian Twin Society, which contain scaled and rotated facial images of identical twins under different illuminations. The results prove the ability of the proposed method to recognize a pair of identical twins and show that it is robust to rotation, scaling and changing illumination.
Multimodal Approach for Face Recognition using 3D-2D Face Feature Fusion — CSCJournals
3D face recognition has been an area of interest among researchers for the past few decades, especially in pattern recognition. The main advantage of 3D face recognition is the availability of geometrical information about the face structure, which is more or less unique to a subject. This paper focuses on the problem of person identification using 3D face data. Using unregistered 3D face data for feature extraction significantly increases the operational speed of a system with large database enrollment. In this work, unregistered 3D face data is fed to a classifier in multiple spectral representations of the same data. The Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) are used for the spectral representations, and the face recognition accuracy obtained when the feature extractors are used individually is evaluated. Depth information alone in the different spectral representations was not sufficient to increase the recognition rate, so a fusion of the texture and depth information of the face is proposed. Fusing the matching scores proves that recognition accuracy can be improved significantly by combining scores of multiple representations. The FRAV3D database is used for testing the algorithm.
Multi modal face recognition using block based curvelet features — ijcga
In this paper, we present a multimodal 2D+3D face recognition method using block-based curvelet features. The 3D surface of the face (depth map) is computed from stereo face images using a stereo vision technique. Statistical measures such as mean, standard deviation, variance and entropy are extracted from each block of the curvelet subband for both depth and intensity images independently. To compute the decision score, a KNN classifier is employed independently for both the intensity image and the depth map. The computed decision scores of the intensity image and the depth map are then combined at the decision level to improve the face recognition rate. The combination of intensity and depth map is verified experimentally using a benchmark face database. The experimental results show that the proposed multimodal method performs better than either individual modality.
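The per-block statistics can be sketched as follows. For brevity the statistics are computed on image blocks directly rather than on curvelet subbands, and the 16-bin entropy histogram is an illustrative choice:

```python
import numpy as np

def block_stats(img, block=8):
    """Mean, standard deviation, variance and entropy per block,
    concatenated into one feature vector. Applied here to the image
    itself; in the paper the same statistics are taken per curvelet
    subband block, for depth and intensity images separately."""
    feats = []
    h, w = img.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = img[i:i + block, j:j + block]
            hist, _ = np.histogram(b, bins=16, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            entropy = -np.sum(p * np.log2(p))  # Shannon entropy of the block
            feats += [b.mean(), b.std(), b.var(), entropy]
    return np.array(feats)

rng = np.random.default_rng(3)
img = rng.random((16, 16))             # toy image -> 2 x 2 = 4 blocks
f = block_stats(img)
```

Two such vectors, one from the intensity image and one from the depth map, each drive a KNN score; the two scores are then fused at the decision level.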
This project includes two face recognition systems implemented with Principal Component Analysis (PCA) and a Morphological Shared-Weight Neural Network (MSNN). From these systems we evaluate the performance of both techniques and, based on the accuracy achieved, determine which technique is better suited for face recognition.
A study of techniques for facial detection and expression classification — IJCSES Journal
Automatic recognition of facial expressions is an important component of human-machine interfaces, and it has attracted much research attention since the 1990s. Although humans recognize faces without effort or delay, recognition by a machine is still a challenge. Among its challenges are that faces are highly dynamic in orientation, lighting, scale, facial expression and occlusion. Applications lie in fields such as user authentication, person identification, video surveillance, information security and data privacy. The various approaches to facial recognition fall into two categories: holistic facial recognition and feature-based facial recognition. Holistic methods treat the image data as one entity without isolating different regions of the face, whereas feature-based methods identify certain points on the face such as the eyes, nose and mouth. In this paper, facial expression recognition is analyzed with various methods of facial detection, facial feature extraction and classification.
Local Region Pseudo-Zernike Moment- Based Feature Extraction for Facial Recog... — aciijournal
In the domain of image processing, face recognition is one of the most well-known research fields. When humans have very similar biometric properties, such as identical twins, face recognition is considered a challenging problem. In this paper, the AdaBoost method is utilized to detect the facial area of the input image. The facial area is then divided into several local regions. Finally, a new efficient identical-twin feature extractor based on a geometric moment is applied to the local regions of the face image. The used feature extractor is the Pseudo-Zernike Moment (PZM), which is employed inside the local regions of the facial area of identical-twin images. To evaluate the proposed method, two datasets, Twins Days Festival and Iranian Twin Society, were collected; they include scaled and rotated facial images of identical twins under different illuminations. The experimental results demonstrate the ability of the proposed method to recognize a pair of identical twins in different situations such as rotation, scaling and changing illumination.
Face detection is one of the most suitable applications for image processing and biometric programs. Artificial neural networks have been used in the many field like image processing, pattern recognition, sales forecasting, customer research and data validation. Face detection and recognition have become one of the most popular biometric techniques over the past few years. There is a lack of research literature that provides an overview of studies and research-related research of Artificial neural networks face detection. Therefore, this study includes a review of facial recognition studies as well systems based on various Artificial neural networks methods and algorithms.
Realtime face matching and gender prediction based on deep learningIJECEIAES
Face analysis is an essential topic in computer vision that dealing with human faces for recognition or prediction tasks. The face is one of the easiest ways to distinguish the identity people. Face recognition is a type of personal identification system that employs a person’s personal traits to determine their identity. Human face recognition scheme generally consists of four steps, namely face detection, alignment, representation, and verification. In this paper, we propose to extract information from human face for several tasks based on recent advanced deep learning framework. The proposed approach outperforms the results in the state-of-the-art.
FACIAL LANDMARKING LOCALIZATION FOR EMOTION RECOGNITION USING BAYESIAN SHAPE ...cscpconf
This work presents a framework for emotion recognition, based in facial expression analysis using Bayesian Shape Models (BSM) for facial landmarking localization. The Facial Action Coding System (FACS) compliant facial feature tracking based on Bayesian Shape Model. The BSM estimate the parameters of the model with an implementation of the EM algorithm. We describe the characterization methodology from parametric model and evaluated the accuracy for feature detection and estimation of the parameters associated with facial expressions, analyzing its robustness in pose and local variations. Then, a methodology for emotion characterization is introduced to perform the recognition. The experimental results show that the proposed model can effectively detect the different facial expressions. Outperforming conventional approaches for emotion recognition obtaining high performance results in the estimation of emotion present in a determined subject. The model used and characterizationmethodology showed efficient to detect the emotion type in 95.6% of the cases.
Facial landmarking localization for emotion recognition using bayesian shape ...csandit
This work presents a framework for emotion recognition, based in facial expression analysis
using Bayesian Shape Models (BSM) for facial landmarking localization. The Facial Action
Coding System (FACS) compliant facial feature tracking based on Bayesian Shape Model. The
BSM estimate the parameters of the model with an implementation of the EM algorithm. We
describe the characterization methodology from parametric model and evaluated the accuracy
for feature detection and estimation of the parameters associated with facial expressions,
analyzing its robustness in pose and local variations. Then, a methodology for emotion
characterization is introduced to perform the recognition. The experimental results show that
the proposed model can effectively detect the different facial expressions. Outperforming
conventional approaches for emotion recognition obtaining high performance results in the
estimation of emotion present in a determined subject. The model used and characterization
methodology showed efficient to detect the emotion type in 95.6% of the cases.
Research and Development of DSP-Based Face Recognition System for Robotic Reh...IJCSES Journal
This article describes the development of DSP as the core of the face recognition system, on the basis of
understanding the background, significance and current research situation at home and abroad of face
recognition issue, having a in-depth study to face detection, Image preprocessing, feature extraction face
facial structure, facial expression feature extraction, classification and other issues during face recognition
and have achieved research and development of DSP-based face recognition system for robotic
rehabilitation nursing beds. The system uses a fixed-point DSP TMS320DM642 as a central processing
unit, with a strong processing performance, high flexibility and programmability.
Face Images Database Indexing for Person Identification ProblemCSCJournals
Face biometric data are with high dimensional features and hence, traditional searching techniques are not applicable to retrieve them. As a consequence, it is an issue to identify a person with face data from a large pool of face database in real-time. This paper addresses this issue and proposes an indexing technique to narrow down the search space. We create a two level index space based on the SURF key points and divide the index space into a number of cells. We define a set of hash functions to store the SURF descriptors of a face image into the cell. The SURF descriptors within an index cell are stored into kd-tree. A candidate set is retrieved from the index space by applying the same hash functions on the query key points and kd-tree based nearest neighbor searching. Finally, we rank the retrieved candidates according to their occurrences. We have done our experiment with three popular face databases namely, FERET, FRGC and CalTech face databases and achieved 95.57%, 97.00% and 92.31% hit rate with 7.90%, 12.55% and 23.72% penetration rate for FERET, FRGC and CalTech databases, respectively. The hit rate increases to 97.78%, 99.36% and 100% for FERET, FRGC and CalTech databases, respectively when we consider top fifty ranks. Further, in our proposed approach, it is possible to retrieve a set of face templates similar with query template in the order of milliseconds. From the experimental results we can substantiate that application of indexing using hash function on SURF key points is effective for fast and accurate face image retrieval.
CDS is the criminal face identification by capsule neural network.
Solving the common problems in image recognition such as illumination problem, scale variability, and to fight against a most common problem like pose problem, we are introducing Face Reconstruction System.
Realtime human face tracking and recognition system on uncontrolled environmentIJECEIAES
Recently, one of the most important biometrics is that automatically recognized human faces are based on dynamic facial images with different rotations and backgrounds. This paper presents a real-time system for human face tracking and recognition with various expressions of the face, poses, and rotations in an uncontrolled environment (dynamic background). Many steps are achieved in this paper to enhance, detect, and recognize the faces from the image frame taken by web-camera. The system has three steps: the first is to detect the face, Viola-Jones algorithm is used to achieve this purpose for frontal and profile face detection. In the second step, the color space algorithm is used to track the detected face from the previous step. The third step, principal component analysis (eigenfaces) algorithm is used to recognize faces. The result shows the effectiveness and robustness depending on the training and testing results. The real-time system result is compared with the results of the previous papers and gives a success, effectiveness, and robustness recognition rate of 91.12% with a low execution time. However, the execution time is not fixed due depending on the frame background and specification of the web camera and computer.
Happiness Expression Recognition at Different Age ConditionsEditor IJMTER
Recognition of different internal emotions of human face under various critical
conditions is a difficult task. Facial expression recognition with different age variations is
considered in this study. This paper emphasizes on recognition of facial expression like
happiness mood of nine persons using subspace methods. This paper mainly focuses on new
robust subspace method which is based on Proposed Euclidean Distance Score Level Fusion
(PEDSLF) using PCA, ICA, SVD methods. All these methods and new robust method is
tested with FGNET database. An automatic recognition of facial expressions is being carried
out. Comparative analysis results surpluses PEDSLF method is more accurate for happiness
emotional facial expression recognition.
Real time voting system using face recognition for different expressions and ...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Similar to A Novel Mathematical Based Method for Generating Virtual Samples from a Frontal 2D Face Image for Single Training Sample Face Recognition (20)
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
Operation “Blue Star” is the only event in the history of Independent India where the state went into war with its own people. Even after about 40 years it is not clear if it was culmination of states anger over people of the region, a political game of power or start of dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from main stream due to denial of their just demands during a long democratic struggle since independence. As it happen all over the word, it led to militant struggle with great loss of lives of military, police and civilian personnel. Killing of Indira Gandhi and massacre of innocent Sikhs in Delhi and other India cities was also associated with this movement.
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
The Art Pastor's Guide to Sabbath | Steve ThomasonSteve Thomason
What is the purpose of the Sabbath Law in the Torah. It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
The Art Pastor's Guide to Sabbath | Steve Thomason
A Novel Mathematical Based Method for Generating Virtual Samples from a Frontal 2D Face Image for Single Training Sample Face Recognition
Reza Ebrahimpour, Masoom Nazari, Mehdi Azizi & Mahdieh Rezvan
International Journal of Computer Science and Security (IJCSS), Volume (5) : Issue (1) : 2011
Reza Ebrahimpour ebrahimpour@ipm.ir
Assistant Professor, Department of
Electrical Engineering
Shahid Rajaee University
Tehran, P. O. Box 16785-136, Iran
Masoom Nazari innocent1364@gmail.com
Department of Electrical Engineering
Shahid Rajaee University
Tehran, P. O. Box 16785-136, Iran
Mehdi Azizi azizi_php@gmail.com
Department of Electrical Engineering
Shahid Rajaee University
Tehran, P. O. Box 16785-136, Iran
Mahdieh Rezvan mhrezvan@gmail.com
Islamic Azad University, South Tehran Branch
Tehran, P. O. Box, 515794453, Iran
Abstract
This paper deals with one-sample face recognition, a challenging new problem in pattern recognition. In the proposed method, the frontal 2D face image of each person is divided into several sub-regions. After computing the 3D shape of each sub-region, a fusion scheme is applied to them to create the total 3D shape of the whole face. The 2D face image is then draped over the corresponding 3D shape to construct a 3D face model. Finally, by rotating the 3D face model, virtual samples with different views are generated. Experimental results on the ORL dataset using a nearest neighbor classifier reveal an improvement of about 5% in recognition rate for one sample per person when the training set is enlarged with the generated virtual samples. Compared with other related works, the proposed method has the following advantages: 1) only one single frontal face is required for recognition, and the outputs are virtual images with varying views for each individual; 2) it requires only 3 key points of the face (eyes and nose); 3) the 3D shape estimation for generating virtual samples is fully automatic and faster than other 3D reconstruction approaches; 4) it is fully mathematical with no training phase, and the estimated 3D model is unique for each individual.
Keywords: Face Recognition, Nearest Neighbor, Virtual Images, 3D Face Model, 3D Shape.
1. INTRODUCTION
Face recognition is an effective pathway between humans and computers, with many applications in information security, human identification, security validation, law enforcement, smart cards, access control, etc. For these reasons, industrial and academic researchers in computer vision and pattern recognition have paid significant attention to this task.
Most face recognition systems rely on a set of stored images of each person, called the training data. The efficiency of such systems falls considerably when the number of training samples is small (the Small Sample Size problem). For example, in ID card verification and mug-shot matching only one sample per person is available. Several methods have addressed this problem; below we introduce the ones from which our idea is derived.
Among the earliest and most famous appearance-based methods is PCA [1]. For one training sample per person, J. Wu et al. introduced the (PC)²A method [2]. In this method, a pre-processing step first computes a projection map of the face image and combines it with the original image; PCA is then applied to the combined image. Later, S.C. Chen et al. offered the E(PC)²A method [3], an enhanced version of (PC)²A. To increase the efficiency of the system, they enlarged the set of training samples by calculating the projection map in different orders and combining it with the original image. In [4], J. Yang offered the 2DPCA method for feature extraction. 2DPCA is a 2D extension of PCA; it has a lower computational load than PCA and higher efficiency when few training samples are available.
From another point of view, one can generate virtual samples to enlarge the training set and improve its representative ability. Various analysis-by-synthesis methods have been put forward, i.e., the labeled training samples are warped to cover different poses or re-lighted to simulate different illuminations [5-8]. Photometric stereo technologies such as illumination cones and the quotient image are used to recover the illumination or relight the sample face images. Likewise, shape-from-shading algorithms [9-11] have been explored to extract the 3D geometry of a face and to generate virtual samples by rotating the resulting 3D face models.
In our proposed method, we divide the frontal face into several sub-regions. After estimating the 3D shape of each sub-region, we combine them to create the 3D shape of the whole face. Then, we drape the 2D face image over its 3D shape to construct a 3D face model. Finally, different virtual samples with different views can be obtained by rotating the 3D face model to different angles.
Compared to previous works [8], this framework has the following advantages: 1) only one single frontal face is required for recognition, and the outputs are virtual images with varying views of the individual in the input image, which avoids burdensome enrollment work; 2) the framework needs only 3 key points of the face (eyes and nose); 3) the proposed 3D shape estimation for generating virtual samples is fully automatic and faster than other 3D reconstruction approaches; 4) the method has no training phase, is fully mathematical, and the estimated 3D model is unique for each individual.
Experimental results on the ORL dataset also demonstrate the advantage of our proposed method over traditional methods in which only the original sample of each individual is used for training.
2. OVERVIEW OF THE PROPOSED SCHEME
Aiming to solve the problem of recognizing a face image with a single training sample, an integrated scheme is designed that is composed of two parts: a database image synthesis part and a face image recognition part. Before recognition, synthesis is performed on the frontal pose of the face image. Through the synthesis part, the training database is enlarged by adding virtual images with other views. In the recognition stage, the one-nearest-neighbor rule is used to classify the test images. Therefore, the most important part of the proposed scheme is the synthesis part, which has a crucial effect on recognition accuracy.
2.1 Face image synthesis
This section summarizes the synthesis procedure proposed in our scheme and briefly introduces the key techniques used for generating virtual views.
The general shape of the human face is almost uniform: the main regions of the face, such as the eyes, nose and mouth, have nearly the same shape for all humans. For example, in a typical frontal 3D face image, the region around the eyes is somewhat sunken, while the region around the nose is raised; the ridge begins at the center of the brow and its height increases approximately linearly toward the tip of the nose. In the proposed method we divide the frontal face image into several sub-regions. After estimating the 3D shape of each sub-region, we combine them to create the 3D shape of the whole face.
To obtain the 3D shape of a face, we require a distance (depth) matrix, which can easily be computed from the distance between the two lenses of a 3D camera. For 2D images, however, we need to estimate the distance matrix. Consider the 2D face image placed in 3D space as shown in Figure 1.
Each pixel of this image represents one point in the Cartesian X-Y coordinate system, and Z can be regarded as the distance axis of the image. In our proposed method we aim to estimate the Z matrix of the face image in order to create virtual face images with different views, as detailed in the following steps.
It is worth noting that all of the equations used in our proposed method were obtained heuristically by experimenting with different values and functions.
1- Consider an m×n face image. We locate three key points on the face image (eyes and nose) automatically using the following method.
Note that to find the locations of the eyes and nose we first need to crop the face region. Because cropping is the first step of the proposed scheme and affects the subsequent steps dramatically, the face region should be cropped with 100% accuracy. No automatic algorithm achieves 100% accuracy so far (although some high-accuracy algorithms [12] still need a manually located point of the face, such as the nose location). We therefore cropped the entire ORL dataset manually, as shown in Figure 2.
The method finds the regions of the eyes and nose as follows:
a) Adjust the illumination of the face image and convert it into a binary face image (see Figure 3).
b) Divide the binary image into three regions to locate the positions of the left eye, right eye, nose and lips (Figure 4).
To accomplish this task, we use an eye detector based on histogram analysis. For eye, nose and lip localization, the following steps are performed: 1. compute the vertical and horizontal projections of the face pixels; 2. locate the top, bottom, right and left region boundaries where the projection value exceeds a certain threshold.
We assume that the eyes are located in the upper half of the face skin region; once the face area is found, the possible eye region is taken to be the upper portion of the face region. By analyzing the projection curve, we find its maximum and minimum points. Figure 4 shows the correspondence between these points and the positions of the facial organs: eyes, nostrils, and mouth. Only the positions of the eyes and lips are calculated in this case.
FIGURE 1: A 2D face image in 3D space
FIGURE 2: A sample of a manually cropped face image
FIGURE 3: Converting the illumination-adjusted face image into a binary image
Let the centers of the left and right eyes and the tip of the nose be denoted e_r, e_l and p_n, respectively.
2- Using the eye positions, we compute the distance between the eyes and the midpoint of that distance, as shown in equation (1):

c_x = (e_r(x) + e_l(x)) / 2,  x ∈ [1, n]
c_y = (e_r(y) + e_l(y)) / 2,  y ∈ [1, m]
σ = sqrt( (e_r(x) − e_l(x))² + (e_r(y) − e_l(y))² )    (1)

where σ is the distance between the left and right eyes and C = (c_x, c_y) is its midpoint.
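In code, equation (1) amounts to the following minimal sketch; the example eye coordinates used below are hypothetical, not taken from the paper.

```python
import math

def eye_geometry(e_r, e_l):
    """Eq. (1): sigma is the Euclidean distance between the two eye
    centers, and C = (c_x, c_y) is the midpoint of the segment
    joining them. e_r and e_l are (x, y) pixel coordinates."""
    c_x = (e_r[0] + e_l[0]) / 2.0
    c_y = (e_r[1] + e_l[1]) / 2.0
    sigma = math.hypot(e_r[0] - e_l[0], e_r[1] - e_l[1])
    return sigma, (c_x, c_y)
```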
3- Through equation (2), we make the face border (including the ears and the head outline) more sunken.
z_gf(x) |_{y∈[1,m]} = 2000σ / (50 + (x − c_x)/σ)    (2)
4- The brow region is nearly flat when seen from the side, and below the eyebrows lies the hollow of the eye sockets. Using the eye positions, we find the part of the matrix Z that represents the brow region and call it Z_fh. We can then obtain a good estimate of the brow region according to equation (3).
z_fh(y) |_{x∈[1,n]} = 1 / (1 + e^(−(y − c_y) / (0.2σ) / 5))    (3)
5- In the previous stage, the points below the brow sank, whereas the cheeks must be salient. To fix this notch and emphasize the cheek region, we use equations (4).
z_c1(y) |_{x∈[1,n]} = 1 / (1 + e^(−(y − c_y) / (0.08σ) / 5))
z_c2(y) |_{x∈[1,n]} = 1 / (1 + e^(−(y − c_y) / (0.12σ) / 5))
Z_cheek(x, y) = z_c1(x, y) · z_c2(x, y)    (4)
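Equations (3) and (4) share the same logistic profile along the vertical axis. The sketch below implements that family; note that equations (3)-(4) are reconstructed from garbled typography in the source, so the widths 0.08 and 0.12 and the factor of 5 reflect our reading and should be treated as assumptions.

```python
import numpy as np

def vertical_sigmoid(m, c_y, sigma, width):
    """Row-wise logistic profile, our reading of Eqs. (3)-(4):
    1 / (1 + exp(-(y - c_y) / (width * sigma) / 5)) for y = 1..m."""
    y = np.arange(1, m + 1, dtype=float)
    return 1.0 / (1.0 + np.exp(-(y - c_y) / (width * sigma) / 5.0))

def cheek_map(m, n, c_y, sigma):
    """Eq. (4): product of two logistic profiles, tiled over n columns."""
    z_c1 = vertical_sigmoid(m, c_y, sigma, 0.08)
    z_c2 = vertical_sigmoid(m, c_y, sigma, 0.12)
    return np.tile((z_c1 * z_c2)[:, None], (1, n))
```

Each profile is 0.5 at the eye line y = c_y and rises toward the lower face, which is what lets the cheek region stand out after the fusion step.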
6- Looking at the lower regions of the face, we find that in most faces the depth of the borders increases nearly exponentially with distance from the center of the face. According to equation (5), we estimate the matrix that accounts for this.
z_df(x) |_{y∈[1,m]} = e^(−(x − c_x)²/σ²)    (5)
7- In this stage we obtain an estimate of the distance matrix of the nose. Looking at the general form of the human face, the nose bridge begins at the midpoint between the eyes; its height increases almost linearly with a slight slope up to the nose tip and then decreases with a sharp slope, while both sides of the nose fall off exponentially, as shown in equation (6).
FIGURE 4: Eye, nose and lip localization using vertical and horizontal projections of the face pixels. The red rectangles indicate the boundaries of the eyes, nose and lips, and the blue lines indicate their center lines.
Z_nose(x, y) = 0.05(y − c_y) · e^(−(x − c_x)²/σ²),  for c_y < y < p_n(y);  0 otherwise    (6)
8- In the preceding stages, we estimated several sub-matrices of the distance matrix Z, each of which estimates one part of the face very well while increasing the error in other regions. The only remaining issue is therefore how to combine these matrices. Since the matrix corresponding to each sub-region of the face must be used there, we combine the estimated local matrices with equation (7) to obtain the total estimate of the 3D shape of the face image.
Z(x, y) = Z_gf(x, y) · ((1 − Z_df(x, y)) · (1 − Z_fh(x, y)) · (1 − Z_cheek(x, y))) + Z_nose(x, y)    (7)
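Putting steps 3-8 together, a compact sketch of the fused depth map might look as follows. This is our reconstruction of equations (2)-(7) from the garbled source, so the constants (2000, 50, 0.05, the sigmoid widths) and the use of an absolute value in the border term are assumptions; e_r, e_l and p_n are the pixel coordinates of the eyes and nose tip.

```python
import numpy as np

def estimate_depth(m, n, e_r, e_l, p_n):
    """Fuse the local depth maps into one m-by-n matrix Z, Eq. (7)."""
    sigma = float(np.hypot(e_r[0] - e_l[0], e_r[1] - e_l[1]))
    c_x = (e_r[0] + e_l[0]) / 2.0
    c_y = (e_r[1] + e_l[1]) / 2.0
    x = np.arange(1, n + 1, dtype=float)[None, :]   # column coordinates
    y = np.arange(1, m + 1, dtype=float)[:, None]   # row coordinates

    # Eq. (2): border sinking (absolute value assumed for symmetry).
    z_gf = 2000.0 * sigma / (50.0 + np.abs(x - c_x) / sigma)
    # Eq. (3): brow profile.
    z_fh = 1.0 / (1.0 + np.exp(-(y - c_y) / (0.2 * sigma) / 5.0))
    # Eq. (4): cheek map as a product of two logistic profiles.
    z_c1 = 1.0 / (1.0 + np.exp(-(y - c_y) / (0.08 * sigma) / 5.0))
    z_c2 = 1.0 / (1.0 + np.exp(-(y - c_y) / (0.12 * sigma) / 5.0))
    z_cheek = z_c1 * z_c2
    # Eq. (5): Gaussian falloff from the face center line.
    z_df = np.exp(-(x - c_x) ** 2 / sigma ** 2)
    # Eq. (6): nose ridge, nonzero only between the eye line and nose tip.
    z_nose = (0.05 * (y - c_y) * np.exp(-(x - c_x) ** 2 / sigma ** 2)
              * ((y > c_y) & (y < p_n[1])))

    # Eq. (7): each map dominates only in its own sub-region.
    return z_gf * (1 - z_df) * (1 - z_fh) * (1 - z_cheek) + z_nose
```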
Figure 5 schematically represents our proposed method for estimating the 3D shape of the face.
9- Now that we have the distance matrix Z, we can drape the 2D face image over its 3D shape and create the 3D face model, as shown in Figure 6.
FIGURE 5: Our proposed method for estimating the 3D shape of the face
FIGURE 6: The 3D face model after draping the 2D face image over its 3D shape
10- Finally, we rotate the 3D face model to different views to produce virtual images at several different angles and poses. Figure 7 shows some of the virtual faces generated by the proposed method on the ORL dataset.
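The rotation in step 10 can be sketched as a simple orthographic point splat: each pixel (x, y) with depth Z(x, y) is rotated about the vertical axis and re-projected. The paper does not describe its renderer, so this nearest-pixel version, with overwrites on collisions and holes left black, is only illustrative.

```python
import numpy as np

def virtual_view(img, depth, angle_deg):
    """Render a yaw-rotated view of a 2D image draped over its depth map."""
    m, n = img.shape
    theta = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:m, 0:n]
    xc = xs - n / 2.0                                   # center x on the face
    x_rot = xc * np.cos(theta) + depth * np.sin(theta)  # rotate about the y axis
    x_new = np.rint(x_rot + n / 2.0).astype(int)
    out = np.zeros_like(img)
    keep = (x_new >= 0) & (x_new < n)
    out[ys[keep], x_new[keep]] = img[keep]              # splat; later pixels overwrite
    return out
```

Calling this for a range of angles (the paper uses views from −20° to +20°) yields the enlarged virtual training set.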
3. EXPERIMENTAL RESULTS
In the proposed method we used only the frontal face image and generated 18 virtual images with views varying from −20° to +20°. We systematically evaluated the performance of our algorithm against the conventional algorithm that does not use the virtual faces synthesized from the personalized 3D face models.
To test the performance of our proposed method, experiments were performed on the ORL face database, which contains images of 40 individuals, each providing 10 different images. For some subjects, the images were taken at different times. The facial expressions and facial details (glasses or no glasses) also vary. The images were taken with a tolerance for tilting and rotation of the face of up to 20 degrees (−20° to +20°), and with some variation in scale of up to about 10 percent. All images are grayscale and cropped to a resolution of 48×48 pixels. Figure 8 shows some examples from the ORL dataset.
FIGURE 7: Virtual images with different views generated from a single frontal 2D face image (a: tilt up (6°) and angle (−25°:+25°); b: normal and angle (−25°:+25°); c: tilt down (6°) and angle (−25°:+25°); d: original image)
FIGURE 8: Some samples from the ORL database
In all the experiments, the conventional methods used only the frontal face of each person for
training, and all the other faces were used for testing. Comparison experiments were conducted
to evaluate the effectiveness of the virtual faces created from the 3D face model for face
recognition. We used PCA and 2DPCA for dimension reduction and extraction of useful features,
and nearest neighbor for classifying the test images.
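The recognition back end described above can be sketched as eigenfaces-style PCA for dimension reduction followed by a 1-nearest-neighbour classifier. This is a generic illustration of the cited techniques, not the authors' code:

```python
import numpy as np

def pca_fit(X, dim):
    """Eigenfaces-style PCA via SVD of the centred data matrix X (n, d);
    returns the sample mean and the top `dim` principal directions (d, dim)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:dim].T

def pca_project(X, mu, W):
    """Project row-vector samples into the `dim`-dimensional subspace."""
    return (X - mu) @ W

def nn_classify(train_feats, train_labels, feat):
    """1-nearest-neighbour: return the label of the closest training feature."""
    dists = np.linalg.norm(train_feats - feat, axis=1)
    return train_labels[int(np.argmin(dists))]
```

With 48×48 images each sample is a 2304-dimensional vector before projection; the `dim` parameter corresponds to the dimension axis of Table 1.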
Tables 1 and 2 compare the results of the proposed method and the conventional method.
By enlarging the training data with the generated virtual samples, the proposed method achieves
a higher top recognition rate (by about 5%) than traditional methods in which only one frontal face
image is used as the training sample.
4. CONCLUSION AND FUTURE WORK
In this paper, we proposed a simple but effective model that makes the face recognition task
applicable in situations where only one training sample per person is available.
In the proposed method, we select a frontal 2D face image of each person and divide it into
sub-regions. After computing the 3D shape of each sub-region, we combine the 3D shapes of the
sub-regions to create the total 3D shape for the whole 2D face image. Then, the 2D face image is
draped over the corresponding 3D shape to construct the 3D face model. Finally, by rotating the 3D
face image through different angles, different virtual views are generated and added to the training set.
Experimental results on the ORL face dataset using nearest neighbor as the classifier reveal an
improvement of 5% in correct recognition rate when using virtual samples, compared to using
only the frontal face image of each person.
Compared with other related works, the proposed method has the following advantages: 1) only
one single frontal face is required for face recognition, and the outputs are virtual images with
various views for the individual in the input image, which avoids burdensome enrollment work;
2) the framework needs only 3 key points of the face (eyes and nose); 3) the proposed 3D shape
estimation for generating virtual samples is fully automatic and faster than other 3D
reconstruction approaches; 4) the method has no training phase, is fully mathematical, and
the estimated 3D model is unique for each individual.
TABLE 1: Recognition rate comparison between face recognition with/without virtual faces using PCA

Dimension                   5       20      40      70      100
Without virtual views (%)   53.5    70.61   72.58   72.58   72.58
With virtual views (%)      70.12   73.22   78.50   78.40   78.30

TABLE 2: Recognition rate comparison between face recognition with/without virtual faces using 2DPCA

Dimension                   (48×1)  (48×2)  (48×5)  (48×8)  (48×10)
Without virtual views (%)   64.89   73.50   77.44   75.22   74.44
With virtual views (%)      71.37   79.10   82.10   81.20   80.83

Our experiments also show a top recognition rate of 82.50%, which is still far from satisfactory
compared to the average recognition accuracy that may be achieved by human beings. Other
techniques are therefore needed to further improve the performance of face recognition. One
possible way to achieve this goal is to generate more virtual views with different expressions and
illumination using more complex techniques; another possible direction is to explore classifiers
with greater complexity and higher accuracy.
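The (48×k) feature dimensions in Table 2 arise because 2DPCA projects each 48×48 image matrix directly onto the top-k eigenvectors of the image covariance matrix, yielding a 48×k feature matrix per image. A generic sketch of this extraction, not the authors' implementation:

```python
import numpy as np

def twodpca_fit(images, k):
    """2DPCA: eigen-decompose the image covariance matrix
    G = (1/n) * sum((A_i - Abar)^T (A_i - Abar)) built from the H x W
    image matrices themselves, and keep the top-k eigenvectors (W, k)."""
    A = np.asarray(images, dtype=float)          # (n, H, W)
    D = A - A.mean(axis=0)
    G = np.einsum('nhi,nhj->ij', D, D) / len(A)  # (W, W), symmetric
    evals, evecs = np.linalg.eigh(G)             # ascending eigenvalues
    return evecs[:, ::-1][:, :k]                 # top-k eigenvectors

def twodpca_project(image, proj):
    """Feature of one image: (H, W) @ (W, k) -> (H, k) feature matrix."""
    return np.asarray(image, dtype=float) @ proj
```

Unlike PCA, no vectorization of the image is needed, and the covariance matrix is only W×W (48×48 here), which makes the eigen-decomposition cheap.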
5. REFERENCES
[1] M. Turk and A. Pentland, “Eigenfaces for Recognition,” J. Cognitive Neuroscience, vol. 3,
no. 1, pp. 71-86, 1991.
[2] J. Wu, Z.H. Zhou, “Face recognition with one training image per person,” Pattern
Recognition Letters, vol. 23, no. 14, pp. 1711–1719, 2002.
[3] S.C. Chen, D.Q. Zhang, Z.H. Zhou, “Enhanced (PC)2A for face recognition with one training
image per person,” Pattern Recognition Letters, vol. 25, no. 10, pp. 1173–1181, 2004.
[4] J. Yang, D. Zhang, “Two-Dimensional PCA: A New Approach to Appearance-Based Face
Representation and Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence,
vol. 26, no. 1, pp. 131–137, 2004.
[5] T. Riklin-Raviv, A. Shashua, “The quotient image: class based re-rendering and recognition
with varying illuminations,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 2, pp. 129–139, 2001.
[6] A.S. Georghiades, P.N. Belhumeur, D.J. Kriegman, “From few to many: illumination cone
models for face recognition under variable lighting and pose,” IEEE Trans. Pattern Anal.
Mach. Intell., vol. 23, no. 6, pp. 643–660, 2001.
[7] A. Talukder, D. Casasent, “Pose-invariant recognition of faces at unknown aspect views,”
in Proc. IJCNN, Washington, DC, 1999.
[8] T. Vetter, T. Poggio, “Linear object classes and image synthesis from a single example
image,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. 733–741, 1997.
[9] R. Zhang, P.-S. Tsai, J.E. Cryer, M. Shah, “Shape from shading: a survey,” IEEE Trans. Pattern
Anal. Mach. Intell., vol. 21, no. 8, pp. 690–706, 1999.
[10] J. Atick, P. Griffin, N. Redlich, “Statistical approach to shape from shading: reconstruction of
three dimensional face surfaces from single two dimensional image,” Neural Comput, vol. 8,
pp. 1321–1340, 1996.
[11] T. Sim, T. Kanade, “Combining models and exemplars for face recognition: an illuminating
example,” Proceedings of the CVPR 2001 Workshop on Models versus Exemplars in
Computer Vision, 2001.
[12] J. Tu, Y. Fu, and T.S. Huang, “Locating Nose-Tips and Estimating Head Poses in Images
by Tensorposes,” IEEE Trans. Circuits and Systems for Video Technology, vol. 19, no. 1,
2009.