International Journal of Electronics and Communication Engineering & Technology (IJECET)
ISSN 0976 – 6464 (Print), ISSN 0976 – 6472 (Online)
Volume 3, Issue 1, January–June (2012), pp. 311–316
© IAEME: www.iaeme.com/ijecet.html
Journal Impact Factor (2011): 0.8500 (calculated by GISI), www.jifactor.com

FACE RECOGNITION ACROSS POSE WITH ESTIMATION OF POSE PARAMETERS

Abhishek Choubey
Head, Department of Electronics and Communication
R.K.D.F. Institute of Technology, Bhopal, INDIA
abhishekchoubey84@gmail.com

Girish D. Bonde
M.Tech. Student, Department of EC
R.K.D.F. Institute of Technology, Bhopal, INDIA
girish_bonde55@rediffmail.com

ABSTRACT

In this paper, we implement eigenface-based face recognition and compare the results with the fisherface algorithm. The process requires preprocessing: the images must be resized to a consistent size. The database used contains cropped faces of various sizes, so the need for face detection is eliminated. We compare two of the most frequently used algorithms, eigenface and fisherface, evaluating the performance of each against two constraints: pose and the size of the training data. Our study shows that the fisherface algorithm is robust in both cases. This leads us to conclude that the eigenface algorithm is beneficial when the database is large, but given the robustness of the fisherface algorithm, it would be the algorithm of choice if resources are not a problem. We have extended the work towards automatic estimation of pose parameters using a patch-based approach.

Keywords: Eigenface, Fisherface, pose

1. INTRODUCTION

The face plays a major role in our social intercourse, conveying identity and emotion. The human ability to recognize faces is remarkable: we can recognize thousands of faces learned throughout our lifetime and identify familiar faces at a glance even after years of separation. The skill is quite robust, despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses or changes in hairstyle.

We have implemented the eigenface and fisherface algorithms and tested them against two face databases, observing results across pose (out-of-plane face rotation). We evaluated performance against databases with both densely-sampled and sparsely-sampled facial poses. We have also extended the work towards automatic estimation of pose parameters.
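The abstract notes that all images were resized to a consistent size before recognition, and that the database already contains cropped faces. The authors' implementation is in MATLAB and is not reproduced here; the following Python/NumPy sketch only illustrates what such a preprocessing step could look like. The function names, the use of Pillow, and the 92×112 target size are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np
from PIL import Image

def preprocess_face(path, size=(92, 112)):
    """Load one cropped face, convert to grayscale, resize to a consistent
    shape, and flatten it to a 1-D vector (the target size is an assumed example)."""
    img = Image.open(path).convert("L")             # grayscale
    img = img.resize(size)                          # consistent size across the database
    return np.asarray(img, dtype=np.float64).ravel() / 255.0

def load_database(paths, size=(92, 112)):
    """Stack preprocessed faces into an (n_images, n_pixels) data matrix."""
    return np.vstack([preprocess_face(p, size) for p in paths])
```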
Our (Specific) Problem Statement

Given a training database of pre-processed face images, train an automated system to recognize the identity of a person from a new image of that person. Examine sensitivity to pose using the eigenface approach suggested in [1, 2] and the fisherface approach developed in [3]. Our new ideas include:

1. comparing results of eigenface and fisherface across pose,
2. testing dense and sparse training databases,
3. estimating the pose parameters of the face.

2. RELATED WORK

The Eigenface [1, 2] is the first method considered a successful technique of face recognition. The Eigenface method uses Principal Component Analysis (PCA) to linearly project the image space to a low-dimensional feature space.

The Fisherface [3] is an enhancement of the Eigenface method. The Eigenface method uses PCA for dimensionality reduction and thus yields projection directions that maximize the total scatter across all classes, i.e., across all images of all faces. The PCA projections are optimal for representation in a low-dimensional basis, but they may not be optimal from a discrimination standpoint. Instead, the Fisherface method uses Fisher's Linear Discriminant Analysis (FLDA or LDA), which maximizes the ratio of between-class scatter to within-class scatter.

3. COMPARISON BETWEEN EIGENFACE AND FISHERFACE

Eigenface and Fisherface are global approaches to face recognition that take the entire image as a 2-D array of pixels. The two methods are quite similar, as Fisherface is a modified version of Eigenface. Both make use of a linear projection of the images into a face space, which captures the common features of faces and finds a suitable orthonormal basis for the projection. The difference between them is the method of projection: Eigenface uses PCA, while Fisherface uses FLD. PCA works better for dimension reduction, and FLD works better for classification of different classes.

Eigenface

Eigenface is a practical approach for face recognition. Due to the simplicity of its algorithm, an Eigenface recognition system can be implemented easily. Besides, it is efficient in processing time and storage: PCA greatly reduces the dimension of an image in a short period of time. The accuracy of Eigenface is also satisfactory (over 90%) with frontal faces. However, this accuracy relies on a high correlation between the training data and the recognition data.
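The eigenface method just described projects mean-subtracted face images onto the leading principal components and recognizes a probe by nearest-neighbour matching in that subspace. The sketch below is a minimal NumPy illustration of that idea, using the common "snapshot" trick for the eigen-decomposition; the function names and interface are illustrative assumptions, not the paper's MATLAB code.

```python
import numpy as np

def train_eigenfaces(X, num_components=20):
    """Compute an eigenface basis from training images.

    X is an (n_images, n_pixels) matrix with one flattened face per row.
    Returns the mean face, the eigenface basis U, and the projections of the
    training faces onto U.
    """
    mean_face = X.mean(axis=0)
    A = X - mean_face                       # centred data, shape (n, d)
    # "Snapshot" trick: eigendecompose the small (n x n) matrix A A^T
    # instead of the huge (d x d) covariance A^T A.
    L = A @ A.T
    eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:num_components]
    U = A.T @ eigvecs[:, order]             # map back to pixel space: (d, num_components)
    U /= np.linalg.norm(U, axis=0)          # normalise each eigenface
    weights = A @ U                         # training projections, (n, num_components)
    return mean_face, U, weights

def recognize_eigenface(y, mean_face, U, weights, labels):
    """Project a probe face y and return the label of the nearest training face."""
    w = (y - mean_face) @ U
    dists = np.linalg.norm(weights - w, axis=1)
    return labels[np.argmin(dists)]
```

Here `num_components` plays the role of Mp in Section 3.1, and `labels` is a per-row array of person identities aligned with the training matrix X.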
Fisherface

Fisherface is similar to Eigenface but improves on it with better classification of different classes of images. With FLD, we can classify the training set to deal with different people and different poses, achieving better accuracy across pose than the Eigenface approach. Besides, because Fisherface removes the first three principal components, which are responsible for light-intensity changes, it is more invariant to light intensity.

Fisherface is more complex than Eigenface in finding the projection into face space. Calculating the ratio of between-class scatter to within-class scatter requires a lot of processing time. Moreover, due to the need for better classification, the projected dimension in face space is not as compact as in Eigenface, resulting in larger storage per face and more processing time in recognition.

The facial recognition software was developed using the MATLAB programming language by The MathWorks. This environment was chosen because it easily supports image processing, image visualization, and linear algebra. The software was tested against the UMIST database. UMIST was created by Daniel B. Graham, with the purpose of collecting a controlled set of images that vary pose uniformly from frontal to side view. The UMIST database has 565 total images of 20 people. The UMIST database images, displayed below, have uniform lighting and pose varying from side to frontal.

Figure 1: UMIST database images

3.1 Comparison by Size of Training Data

For these results, 20 recognition faces (one for each person) were randomly picked from the database, leaving 545 photos to use as training faces. Mp, the number of principal components to use, was chosen as 20.

All 20 of 20 images were correctly recognized, confirming the very good performance of eigenface with densely and uniformly sampled inputs. For this same database and setup, fisherface performs very similarly.
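As described above, Fisherface follows PCA with Fisher's Linear Discriminant, maximizing the ratio of between-class to within-class scatter. The NumPy sketch below shows one standard way to compute such a projection (PCA to n − c dimensions so the within-class scatter stays non-singular, then FLD); it is an illustrative reconstruction under those assumptions, not the authors' implementation.

```python
import numpy as np

def train_fisherfaces(X, labels, num_components=None):
    """Compute a Fisherface projection: PCA for dimensionality reduction,
    followed by Fisher's Linear Discriminant on the reduced data.

    X: (n_images, n_pixels) matrix of preprocessed faces; labels: per-row class ids.
    """
    labels = np.asarray(labels)
    classes = np.unique(labels)
    n, c = len(labels), len(classes)
    if num_components is None:
        num_components = c - 1                  # FLD yields at most c - 1 directions

    # PCA step: reduce to (n - c) dimensions so the within-class scatter is non-singular.
    mean_face = X.mean(axis=0)
    A = X - mean_face
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    W_pca = Vt[: n - c].T                       # (n_pixels, n - c)
    Y = A @ W_pca                               # reduced training data

    # FLD step: maximize between-class over within-class scatter in PCA space.
    d = Y.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    overall_mean = Y.mean(axis=0)
    for cls in classes:
        Yc = Y[labels == cls]
        mc = Yc.mean(axis=0)
        Sw += (Yc - mc).T @ (Yc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Yc) * (diff @ diff.T)
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1][:num_components]
    W_fld = eigvecs[:, order].real              # (n - c, num_components)

    W = W_pca @ W_fld                           # combined projection, (n_pixels, num_components)
    weights = A @ W                             # training projections
    return mean_face, W, weights
```

Recognition then proceeds exactly as in the eigenface sketch: project the probe with W and pick the nearest training face.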
3.2 Comparison by Image Pose

For these results, 20 recognition faces (one for each person) were randomly picked from the database, and 60 more photos were used as training faces. Three training faces were picked for each person: a frontal, a side, and a 45-degree view. (A sketch of this experiment split is given at the end of this section.)

Out of the 20 faces, 16 were correctly classified in the first match. Also notice that this approach is rather pose invariant: it often (13 times) picks out all 3 training images from the database. For comparison, the same setup was run using the eigenface algorithm. Here 14 of the 20 faces are correctly classified, and all 3 correct images are never found. Clearly, the fisherface algorithm performs better under pose variation when only a few samples across pose are available in the training set.

Table: Comparison of eigenface and fisherface

                                Fisherface                      Eigenface
  Computational complexity      slightly more complex           simple
  Effectiveness across pose     good, even with limited data    some, with enough data
  Sensitivity to lighting       little                          very

We find that both the eigenface and fisherface techniques work very well for a uniformly and densely sampled data set varied over pose. When a sparser data set across pose is available, the fisherface approach performs better than eigenface.
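The experiment in Section 3.2 amounts to a simple split-and-evaluate loop: one probe face per person, three training views per person, then nearest-neighbour classification in the projected space. The sketch below only illustrates that protocol; the `recognizer_train`/`recognizer_classify` callables and the pose tags ('frontal', '45', 'side') are assumptions, since the paper does not describe how its training views were selected from UMIST.

```python
import numpy as np

def run_pose_experiment(images, labels, poses, recognizer_train, recognizer_classify):
    """Illustrative split for the Section 3.2 experiment: one probe image per
    person and three training views (frontal, 45-degree, side) per person.

    images: (n, n_pixels) preprocessed faces; labels: person id per row;
    poses: an assumed per-row pose tag ('frontal', '45', 'side', or 'other').
    """
    labels, poses = np.asarray(labels), np.asarray(poses)
    rng = np.random.default_rng(0)

    train_idx, probe_idx = [], []
    for person in np.unique(labels):
        person_rows = np.flatnonzero(labels == person)
        probe = rng.choice(person_rows)
        probe_idx.append(probe)
        for view in ("frontal", "45", "side"):
            candidates = [i for i in person_rows if poses[i] == view and i != probe]
            if candidates:
                train_idx.append(candidates[0])   # one training face per view per person

    model = recognizer_train(images[train_idx], labels[train_idx])
    correct = sum(recognizer_classify(model, images[i]) == labels[i] for i in probe_idx)
    return correct, len(probe_idx)                # the paper reports 16/20 for fisherface
```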
4. ESTIMATION OF POSE PARAMETERS

Automatic estimation of head pose facilitates human facial analysis. It has widespread applications such as gaze-direction detection, video teleconferencing, and human-computer interaction (HCI). It can also be integrated into a multi-view face detection and recognition system. Most current methods estimate pose in a limited range or treat pose as a classification problem by assigning the face to one of many discrete poses [6, 7], and they are mainly tested on images taken in controlled environments, e.g. the FacePix dataset [8] (Fig. 2a). In short, a framework is missing for continuous face pose estimation in uncontrolled environments (Fig. 2b).

Figure 2: a) Example images from the FacePix database [8] typically used for pose estimation. These images are taken in controlled environments with fixed scale, lighting and background. b) We address the problem of estimating pose as a continuous parameter on "real world" images with large variations in background, illumination and expression.

Procedure

(a) Our method decomposes a test image Y into a regular grid of patches.
(b) There is a large predefined library of object instances. The library can be considered a palette from which image patches can be taken.
(c) The library is used to approximate each patch from the test image. The choice of library patch provides information about the true pose.
(d) Model parameters W are used to interpret these patch choices in a Bayesian framework to calculate a posterior over pose (e).

A simplified sketch of this pipeline is shown after Figure 3.

Figure 3: Example results. True pose, i.e. the human estimate (above), vs. our estimate (below).
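The procedure above is described only at a high level; the full Bayesian model with learned parameters W belongs to [5] and is not specified in this paper. As a rough, assumption-laden illustration, the sketch below replaces the learned model with a soft Gaussian vote from each matched library patch and returns the posterior-mean pose over a discretized pose grid.

```python
import numpy as np

def estimate_pose(test_image, library_patches, library_poses, pose_grid,
                  patch_size=8, sigma=5.0):
    """Simplified sketch of the patch-based pose estimator of Section 4.

    The test image is cut into a regular grid of patches; each patch is matched
    to its nearest library patch, whose annotated pose votes softly for poses on
    pose_grid. The Gaussian vote is a placeholder for the Bayesian model of [5].

    library_patches: (m, patch_size**2) flattened library patches;
    library_poses: (m,) pose annotations in degrees; pose_grid: candidate poses.
    """
    h, w = test_image.shape
    log_post = np.zeros(len(pose_grid))              # uniform prior over pose
    for r in range(0, h - patch_size + 1, patch_size):
        for c in range(0, w - patch_size + 1, patch_size):
            patch = test_image[r:r + patch_size, c:c + patch_size].ravel()
            dists = np.linalg.norm(library_patches - patch, axis=1)
            matched_pose = library_poses[np.argmin(dists)]
            # Poses near the matched patch's annotated pose get higher likelihood.
            log_post += -0.5 * ((pose_grid - matched_pose) / sigma) ** 2
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return float(np.sum(pose_grid * post))           # posterior-mean pose (degrees)
```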
5. CONCLUSION AND FUTURE WORK

The Eigenface and Fisherface methods were investigated and compared. The comparative experiment showed that the Fisherface method outperformed the Eigenface method, and the usefulness of the Fisherface method under varying pose and varying sizes of training databases was verified. Our results also show that a patch-based representation is suitable for face pose estimation: we achieve promising results on automatic face pose estimation in uncontrolled environments.

6. REFERENCES

1. M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cognitive Neuroscience, vol. 3, no. 1, 1991.
2. M. Turk and A. Pentland, "Face recognition using eigenfaces," Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1991, pp. 586-591.
3. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
4. Alan Brooks (in collaboration with Li Gao), "Face Recognition: Eigenface and Fisherface Performance Across Pose," ECE 432 Computer Vision with Professor Ying Wu, 2004.
5. Jania Aghajanian and Simon J. D. Prince, "Face Pose Estimation in Uncontrolled Environments," Department of Computer Science, University College London.
6. N. Kruger, M. Potzsch, T. Maurer, and M. Rinne, "Estimation of face position and pose with labeled graphs," in BMVC, pp. 735-743, 1996.
7. S. Z. Li, X. H. Peng, X. W. Hou, H. J. Zhang, and Q. S. Cheng, "Multi-view face pose estimation based on supervised ISA learning," in AFGR, 2002.
8. G. Little, S. Krishna, J. Black, and S. Panchanathan, "A methodology for evaluating robustness of face recognition algorithms with respect to changes in pose and illumination angle," in ICASSP, 2005.
9. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. Patt. Anal. Mach. Intell., vol. 19, pp. 711-720, 1997.
10. R. Bhati, S. Jain, D. K. Mishra, and D. Bhati, "A comparative analysis of different neural networks for face recognition using Principal Component Analysis and efficient variable learning rate," IEEE Fourth Asia International Conference on Mathematical/Analytical Modelling and Computer Simulation, pp. 354-359, 2010.
11. J. Huang, B. Heisele, and V. Blanz, "Component-based face recognition with 3D morphable models," in Proceedings, International Conference on Audio- and Video-Based Person Authentication, 2003.
12. M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Trans. Patt. Anal. Mach. Intell., vol. 12, 1990.
13. P. B. Khanale, "Recognition of Marathi numerals using artificial neural network," J. Artificial Intell., vol. 3, pp. 135-140, 2010.
14. A. Lanitis, C. J. Taylor, and T. F. Cootes, "Automatic face identification system using flexible appearance models," Image Vis. Comput., vol. 13, pp. 393-401, 1995.
15. Marian Stewart Bartlett, Javier R. Movellan, and Terrence J. Sejnowski, "Face recognition by independent component analysis," IEEE Transactions on Neural Networks, vol. 13, no. 6, 2002.
16. S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 98-113, 1993.
17. M. Turk and A. Pentland, "Eigenfaces for recognition," J. Cogn. Neurosci., vol. 3, pp. 72-86, 1991.
18. L. Wiskott, J.-M. Fellous, and C. Von Der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Trans. Patt. Anal. Mach. Intell., vol. 19, pp. 775-779, 1997.
19. W. Zhao, R. Chellappa, and A. Rosenfeld, "Face recognition: a literature survey," ACM Computing Surveys, vol. 35, no. 4, pp. 399-458, 2003.