Hoip10 articulo reconocimiento facial_univ_vigo


Published on

Article presented by the University of Vigo during the HOIP'10 workshop organized by the Information Systems and Interaction Unit of TECNALIA.

More information at http://www.tecnalia.com/es/ict-european-software-institute/index.htm


Face recognition and the head pose problem

José Luis Alba Castro (1), Lucía Teijeiro-Mosquera (1) and Daniel González-Jiménez (2)
(1) Signal Theory and Communications Department, University of Vigo; (2) Gradiant

ABSTRACT

One of the main advantages of face recognition is that it doesn't need active cooperation from the subject. This advantage turns into a challenging research topic when it comes to processing face images acquired in uncontrolled scenarios. Matching faces on the web or pointing out criminals in a crowd are real application examples where we find many of the open issues in face recognition: gallery-test pose mismatch, extreme illumination conditions, partial occlusions or severe expression changes. Among these problems, pose variation has been identified as one of the most recurrent in real-life applications. In this paper we give a short introduction to the face recognition problem through a combination of local and global approaches, and then explain an AAM-based fully automatic system for pose-robust face recognition that allows faster fitting to any non-frontal view.

1. INTRODUCTION TO FACE RECOGNITION

Identifying people from their face appearance is a natural and fundamental way of human interaction. That is the reason behind the great public acceptance of automatic face recognition as a biometric technology supporting many kinds of human-computer interfaces. We can find face recognition technology in many different applications such as access control to physical and logical facilities, video-surveillance, personalized human-machine interaction, multimedia database indexing and retrieval, meeting summarization, interactive videogames, interactive advertising, etc.
Some of these applications are quite mature and, under a set of restrictions, the performance of the face recognition module is high enough for normal operation. Nevertheless, face recognition is still a very challenging pattern recognition problem due to the high intra-class variability. The main sources of variability can be divided into four groups:

- Variability due to the 3D deformable and non-convex nature of heads: illumination, expression, pose/viewpoint.
- Variability due to short-term changes: occlusions, make-up, moustache, beard, glasses, hat, etc.
- Variability due to long-term changes: gaining weight, ageing.
- Variability due to changes of acquisition device and quality (resolution, compression, focusing, etc.).

The last source of variability is common to many other pattern recognition problems in computer vision, but the other sources are quite particular to the face "object". Among them, pose, illumination and expression (PIE) variability are the ones that have attracted most research effort since the nineties. For human beings it is quite disturbing that automatic systems fail so catastrophically when two pictures from the same person, with different expression, point of view and lighting conditions, are not categorized as the same identity, even if the pictures were taken two minutes apart. The great specialization of our brains in recognizing faces has been widely studied and nowadays, thanks to the study of prosopagnosia or "face blindness", it is accepted that there is a specific area in the temporal lobe dedicated to this complex task, and that both local characteristics and the whole ordered appearance of the "object" play an important role when recognizing a particular identity.
Holistic approaches to face recognition have been evolving since the very first works on eigenfaces [1], where a simple dimensionality reduction (classical PCA) was applied to a set of face images to get rid of small changes in appearance; through Fisherfaces [2], where a similar dimensionality reduction principle (LDA) was applied to maximize discriminability among classes instead of minimizing MSE for optimal reconstruction; and through probabilistic subspaces [3], where the concept of classes as separate identities is changed to binary classification over image differences: same identity against different identity. Finally, the AAM technique [4] appeared as a natural evolution of shape description techniques and as a way to geometrically normalize textured images from the same class before applying any kind of dimensionality reduction. Thousands of papers have been written on different modifications and combinations of the above techniques. In this paper we will extend the AAM approach to handle pose variations and normalize faces before feature extraction and matching for recognition.

Local approaches have largely been based on locating facial features and describing them jointly with their mutual distances. The elasticity of their geometrical relationship has been successfully described as a graph with anchor points in the EBGM (Elastic Bunch Graph Matching) model [5], where a cost function that fused local texture similarity and graph similarity was minimized in order to fit the EBGM model and to perform the comparison between two candidate faces. Since this model was launched, many others have tried to find more robust representations for the local texture using local statistics of simpler descriptors like LBP (Local Binary Patterns) [6], SIFT (Scale-Invariant Feature Transform) [7] or HoG (Histogram of Gradients) [8], most of them trying to obtain descriptors invariant to illumination changes and more robust to small face-feature location errors. In this paper, the feature extraction module of the face recognizer is based on Gabor jets applied to user-specific face landmarks from an illumination-normalized image, adding, this way, more discriminability and illumination invariance to the local matching process.

Local and holistic approaches to face recognition have been combined in very different ways to produce face recognizers robust to the main sources of variation. In our work we use a holistic approach that we call Pose-Dependent AAM to normalize the face as a frontal face, and a local approach to extract the texture information over a set of face points in order to match two normalized sets of local textures from different face images.

The rest of the paper is organized as follows: section 2 gives a brief review of methods that try to handle pose variations and rapidly focuses on 2D pose correction approaches, which are the main body of the paper. Section 3 explains how feature extraction and matching are performed once the face has been corrected. Section 4 gives some results comparing the proposed pose-dependent approach with the well-known view-based AAM. Section 5 closes the paper with some conclusions.

2. FACE RECOGNITION ACROSS POSE

The last decade has witnessed great research efforts to deal with pose variation in face recognition. Most of the main works are compiled in [9].
The brute-force solution to this problem consists of saving different views of each registered subject. Nevertheless, in many real applications it is difficult to take more than one enrollment shot per subject, or we cannot afford to store and/or match more than one image per subject; frequently, moreover, the stored face image is not frontal, as in video-surveillance. In these cases the approaches can be divided into those based on fully redesigning the recognizer [10][11][12] in order to match pose-invariant face features, and those that rely on creating virtual views of rotated faces and using a general-purpose recognizer to match the real image and the synthetic one under the same pose. Within this last approach, we distinguish between methods based on 3D modeling [13][14] and methods based on 2D [15][16][17][18]. In [15], a PDM-based PCA analysis allows identifying the two eigenvectors responsible for in-depth rotation, which happen to be those with the highest eigenvalues. Manipulating the projection coefficients associated to these eigenvectors allows synthesizing a virtual shape in a wide range of pitch and yaw values, and then rendering the texture of the new image using thin-plate splines from the original face image.

In this paper we present a fully automatic pose-robust face recognition system, extending the study in [15] to include automatic estimation of pose parameters through AAM-based landmarking, and designing a fully automatic system for face recognition across pose. We will present several variants for improving the speed and accuracy of the View-Based Active Appearance Model [18]. Since pose change is also one of the factors that degrade AAM landmarking performance, we compare View-Based AAM performance with a novel and faster variant coined Pose-Dependent AAM [19].
This approach is also based on dividing the nonlinear manifold created by face pose changes into several linear subspaces (different models for different poses), but it makes use of automatic pose estimation to decide between the different pose models in a multiresolution scheme, and it differs in the way virtual views are created.

In the next subsection we give a brief review of AAM and View-Based AAM to introduce the concepts and notation that root our work.

2.1. VIEW-BASED AAM

Active Appearance Models combine a powerful model of joint shape and texture with a gradient-descent fitting algorithm. AAM was first introduced by Cootes et al. in 1998 [4] and, since then, this modeling has been widely used in face and medical image analysis. During a training stage we use a set of manually landmarked images, selected as representative examples of face variability. All the images have been manually landmarked with 72 points; the positions of these 72 landmarks form the face shape of the image. The training set of landmarked face shapes s_i = (x_0i, y_0i, ..., x_ni, y_ni) is aligned using Procrustes analysis, in order to get invariance against 2D rigid changes (scale, translation, roll rotation). A shape model is created through PCA of the aligned training shapes (1). In the same way, textures g are warped to a reference frame and intensity-normalized before being combined in a PCA texture model (2). The joint shape-texture model is built by applying a third PCA to the suitably combined shape and texture coefficients. At this point we have a model that can represent a face with a small set of parameters, known as the appearance parameters c_i (3).

    s_i = s̄ + P_s b_s,i                          (1)
    g_i = ḡ + P_g b_g,i                          (2)
    s_i = s̄ + Q_s c_i ,   g_i = ḡ + Q_g c_i     (3)

Interpreting a previously unseen image is posed as an optimization problem in which the difference between the new image and the model (synthesized) image, i.e. the model reconstruction error, is minimized. For this purpose, after the model is built, a regression matrix R = δc/δr is calculated in order to learn the variation of the residual according to the variation of the appearance parameters. As we want to use a constant regression matrix, the residual r(p) = g_s − g_m needs to be calculated in a normalized reference frame (see [4] for details). Therefore, we minimize the squared error between the texture of the face normalized and warped to the reference frame, g_s, and the texture of the face reconstructed by the model, g_m, using the appearance parameters, where p includes both the appearance parameters and the rigid parameters (scale, translation, roll rotation). The assumption of a constant R lets us estimate it from our training set: we estimate δc/δr by numeric differentiation, systematically displacing each parameter from the known optimal value on training images and averaging over the training set. During the fitting stage, starting from a reasonably good initialization, the AAM algorithm iteratively corrects the appearance parameters using a gradient-descent scheme. The projection of the residual onto the regression matrix gives the optimal parameter update, δp = −R r(p), p_{i+1} = p_i + k δp. After updating the parameters, the residual (reconstruction error) is recalculated and the process is repeated until the error stops decreasing.
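The iterative correction loop described above is easy to prototype. The following numpy sketch assumes a caller-supplied `residual_fn(p)` that computes r(p) = g_s − g_m in the normalized frame, and a precomputed regression matrix R; these names are illustrative, not taken from the paper.

```python
import numpy as np

def aam_fit(p0, R, residual_fn, k=1.0, max_iter=50):
    """Iteratively refine AAM parameters p by projecting the texture
    residual onto the precomputed regression matrix R.
    residual_fn(p) must return r(p) = g_s - g_m in the reference frame."""
    p = p0.copy()
    err = np.sum(residual_fn(p) ** 2)
    for _ in range(max_iter):
        r = residual_fn(p)
        dp = -R @ r                 # delta-p = -R r(p)
        p_new = p + k * dp          # p_{i+1} = p_i + k * delta-p
        err_new = np.sum(residual_fn(p_new) ** 2)
        if err_new >= err:          # stop once the error no longer decreases
            break
        p, err = p_new, err_new
    return p
```

With a linear toy residual and R chosen as the pseudo-inverse of the Jacobian, −R r(p) points exactly at the optimum and the loop converges in one step; with real textures the damping constant k and the stop-on-error-increase rule do real work.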
Once the fitting converges, we have found the appearance parameters that best represent the new image in our model. This means that, given a target face with an unknown shape, we can recover the shape of the face by fitting it to the model and reading the shape out of the appearance parameters.

One of the drawbacks of the AAM is that the performance of the fitting algorithm decreases if the model has to explain large variations and non-Gaussian distributions, like face yaw and pitch. The straightforward solution to this problem is to divide the face space into clusters depending on the rotation angle, and to train a different AAM model for each cluster. As a result, we have one model for each view/pose group, and therefore each model has fewer variations to handle and nearly Gaussian distributions.

In [18], Cootes et al. trained different AAM models to fit different pose ranges. The view-based AAM can cope with large pose changes by fitting each new image with its most adequate model. The drawback of this approach is that the fitting relies on the ability to choose the best model for each image. The model selection approach proposed by Cootes consists of trying to fit each image with each one of the N view-based models (see figure 1). The fitting stops after a few iterations, and the model with the smallest reconstruction error at that point is chosen as the most adequate model for the image.

Figure 1. Landmarking process using the View-Based AAM. The input image is fitted to each of the models; after a few iterations the model with the smallest error is chosen. The fitting then continues with the chosen model until convergence.

From our point of view, the model selection of the view-based approach presents two different drawbacks.
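The selection procedure of Figure 1 can be sketched as follows; the `init_params`/`fit_step` interface of a view model is an assumption made for illustration, not an API from [18].

```python
import numpy as np

def select_view_model(image, models, probe_iters=5):
    """Cootes-style view selection: run a few fitting iterations with
    every pose model and keep the one with the lowest residual error.
    Each model is assumed to expose init_params(image) -> p and
    fit_step(image, p) -> (p, error)."""
    best_model, best_p, best_err = None, None, np.inf
    for m in models:
        p = m.init_params(image)
        for _ in range(probe_iters):
            p, err = m.fit_step(image, p)   # a few probe iterations only
        if err < best_err:                  # keep the best-fitting model
            best_model, best_p, best_err = m, p, err
    return best_model, best_p
```

Note that the probe iterations spent in the N−1 rejected models are discarded, which is exactly the waste the pose-dependent scheme below avoids.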
On one hand, using the residual in different models to choose the best model relies on the assumption that different models have comparable residuals. We argue that, even though this works fine for acquisition-controlled datasets, as we are going to show later on, this assumption is not necessarily true, as was reported for view-based eigenfaces [3]. On the other hand, fitting each image with each of the view-based models is computationally expensive, with a cost that grows with the number of models; notice that the iterations in the non-selected models are wasted. In the next section we propose a different way to select the best model for each image, based on the estimation of the rotation angle. In our approach (see figure 2), we use a multiresolution framework where a generic model including all pose variations is used to estimate the rotation angle. Once the rotation angle is estimated, we use the most adequate pose model to fit the image. In the next section we explain how to estimate the rotation angle.

2.2. POSE-DEPENDENT AAM

In the scheme of Figure 1 we can save computation time if the final view-based model is selected before iterating over incorrect models. A pose estimator is then needed to select the candidate model. We have explored two different approaches to estimate the head angle from manually or automatically landmarked images. In the first one, González-Jiménez et al. [15] show that the pitch and yaw rotation angles have a nearly linear variation with the first two shape parameters of a PDM model, coined then as pose parameters. In [18], Cootes et al. restrict their approach to yaw rotations, and claim that the appearance parameters follow an elliptic variation with the yaw rotation angle. Both approaches are different by nature, because the first one is based on empirical observation over the shape parameters, while the second one is based on a cylindrical model of the face and an extension of the shape model to the appearance model. It is easy to show that González's approach can also be seen as a simplification of Cootes's model in a range of yaw rotation angles of ±45º.
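Under the near-linear relation of [15], yaw estimation reduces to a one-dimensional regression on the first shape (pose) parameter. A minimal least-squares sketch, with hypothetical variable names:

```python
import numpy as np

def fit_pose_regressor(b1, yaw_deg):
    """Least-squares fit of yaw = a*b1 + c, the nearly linear relation
    between the first shape (pose) parameter b1 and the yaw angle.
    b1 and yaw_deg are arrays over a landmarked training set."""
    A = np.stack([b1, np.ones_like(b1)], axis=1)
    coef, *_ = np.linalg.lstsq(A, yaw_deg, rcond=None)
    return coef  # (slope a, intercept c)

def estimate_yaw(b1_value, coef):
    """Predict the yaw angle of a new face from its pose parameter."""
    return coef[0] * b1_value + coef[1]
```

For small angles this linear predictor and the elliptical model agree, which is the equivalence argued in the text.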
So far we have seen that the yaw angle can be estimated from both appearance and shape parameters, and that the shape-elliptical and shape-linear approaches are equivalent for small angles. We will extend the angle estimation approach to the creation of virtual frontal faces, and we will show comparative results among three different rotation models: appearance-elliptical (Cootes), shape-linear (González), and shape-elliptical (the result of mixing both approaches).

Once the yaw angle is estimated, we can proceed to manipulate it and create a virtual frontal view. Both gallery and probe images are processed; therefore, the face recognition across pose problem is reduced to a frontal face recognition problem, and a general-purpose recognizer can be used. In this subsection we compare the approaches presented in [15] and [18]. González-Jiménez et al. [15] proposed a linear model to estimate the head angle. They also proposed to build frontal synthetic images by creating synthetic frontal shapes, setting the pose parameter to zero and warping the image texture to the synthetic frontal shape. Each landmarked shape is projected into a PCA shape subspace. A PCA model of both frontal and yaw-rotated faces is used to capture the main variation in the first eigenvector, as explained before. Once a face is represented in our shape model by b_sj, the frontalization process consists of setting the pose parameter to zero and reconstructing the shape using the frontalized parameters. The frontalized shape is filled with the texture from the image [19]. In cases where the rotation is large and self-occlusions appear, symmetry is applied, and the texture of the visible half of the face is used to fill both the left and the right half of the frontalized shape. In [18], Cootes et al. modeled yaw rotation in the appearance subspace using an elliptical approach.
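The shape-linear frontalization of [15] amounts to zeroing the pose coefficient in the PDM subspace. A minimal numpy sketch, assuming shapes are stored as flattened landmark vectors and `P` holds the PCA eigenvectors as columns:

```python
import numpy as np

def frontalize_shape(shape, mean_shape, P, pose_idx=0):
    """Project a landmarked shape into the PDM subspace, zero the pose
    parameter, and reconstruct a virtual frontal shape.
    P: (2n, k) matrix with PCA eigenvectors as columns; pose_idx marks
    the eigenvector responsible for in-depth (yaw) rotation."""
    b = P.T @ (shape - mean_shape)   # shape parameters b_s
    b[pose_idx] = 0.0                # remove the yaw component
    return mean_shape + P @ b        # reconstructed frontal shape
```

Texture warping onto the frontalized shape and the symmetry fill for self-occluded halves are omitted; only the shape manipulation is shown.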
Once a face with angle θ is represented in the appearance model, a new head angle α can be obtained using the transformation in equation (4), where r is the residual vector not explained by the rotation model. A synthetic frontal face can be recovered from the appearance parameters. It has been demonstrated in [19] that when the model texture is used instead of the original texture from the input image, recognition results decrease, because the test subjects are not included in the model and, consequently, the representation of the test subjects' texture is less accurate. Therefore, to establish a fair comparison between [15] and [18], we frontalize the shape using both methods but always render the frontal face with the original texture from the image.

    c(α) = c_0 + c_x cos(α) + c_y sin(α) + r      (4)
    r = c − (c_0 + c_x cos(θ) + c_y sin(θ))

In table 1 we show comparative results of the two frontalization methods using manual landmarks. We also show the performance of the elliptical approach applied to the shape instead of to the appearance. We can see that the linear approach performs better than the elliptical approach for angles between −45º and +45º. Also, using the elliptical model in the shape subspace, instead of in the appearance subspace, performs slightly better; this may be because some information relevant to recognition is modified by the texture representation in the appearance subspace. In any case, these differences are not statistically significant.

    Method            Linear model   Shape-elliptical model   Texture-elliptical model
    Recognition rate  98.68%         95.44%                   94.71%

Table 1: Recognition rate with different frontalization methods. Results averaged over 34 subjects of the PIE database [20].

In order to run the frontalization method automatically in the whole recognition process, we have resorted to a computationally simpler method coined Pose-Dependent AAM. First we use a multiresolution AAM [21] that includes both left and right rotation to obtain a coarse approximation of the face shape and, hence, a coarse approximation of the shape parameters. Before the highest level of resolution, we decide which is the best model for our image based on the pose parameter; in the previous section we justified the selection of this parameter. The face is then finally landmarked with its corresponding model. The highest resolution of the generic multiresolution model is not used for fitting, being replaced by the pose-dependent model. Figure 2 shows the full landmarking process using the Pose-Dependent AAM. We use four Viola-Jones detectors [22] (face, eyes, nose, mouth) to estimate the 2D rigid parameters (scale, translation and tilt). A scaled and translated version of the mean shape is used as the initial shape at the lowest level of resolution. As we will see next, a good initialization improves the landmarking results. Once we have estimated scaling and translation, we fit the image with the low-resolution level of the generic model, jumping to the next resolution level after the error stops decreasing. Before the highest level of resolution, the decision about the image rotation angle is made and the adequate pose-dependent model is chosen.

The advantage of our approach is that we save the extra cost of landmarking the face with several pose-dependent models. The view-based approach runs a few iterations of the AAM algorithm for each model and uses the best-fitting model to landmark the image, while the iterations done in the non-selected models are wasted. On the contrary, using the pose-dependent approach no iterations are wasted: even if the generic multiresolution model cannot achieve a good fitting for every image, it helps to improve the initialization, and thus the number of iterations at the pose-dependent level decreases.
Figure 2: Landmarking process using the Pose-Dependent AAM. The image is registered with the generic multiresolution model. Before the highest level of resolution, pose estimation is performed. The image is then registered with the most adequate pose-dependent model.

3. FACE FEATURE EXTRACTION AND MATCHING

The frontal (or frontalized) face verification engine is based on multi-scale and multi-orientation Gabor features, which have been shown to provide accurate results for face recognition [23], due to biological reasons and because of their optimal resolution in both the frequency and spatial domains [24]. More specifically, the face recognition system relies on the extraction of local Gabor responses (jets) at each of the nodes located at facial points with informative shape characteristics. These points are selected by sampling the binary face image resulting from a ridges-and-valleys operator, and the jets are applied on the geometrically and photometrically corrected face region [25]. The similarity between two jets is given by their normalized dot product, and the final score between two faces combines the local similarities using trained functions in order to optimize discriminability in the matching process [26]. Figure 3 shows the process of obtaining the texture features from the face image. Figure 4 shows the full face recognition diagram, robust to pose changes between −45º and +45º.

In the next section some comparative results between Pose-Dependent and View-Based AAM will be presented. It is important to highlight that this scheme efficiently combines a local description of the face with a holistic representation that normalizes the faces before matching.
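The normalized-dot-product jet similarity can be written directly; the plain average in `face_score` is only a stand-in for the trained fusion functions of [26]:

```python
import numpy as np

def jet_similarity(j1, j2):
    """Normalized dot product between the magnitudes of two Gabor jets
    (complex vectors of multi-scale, multi-orientation responses)."""
    a, b = np.abs(j1), np.abs(j2)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def face_score(jets1, jets2):
    """Combine per-node similarities into one face-to-face score.
    A plain average is used here; the actual system trains fusion
    functions to weight the local similarities [26]."""
    return float(np.mean([jet_similarity(a, b)
                          for a, b in zip(jets1, jets2)]))
```

Because the magnitudes are normalized, the similarity is invariant to a global scaling of either jet, which contributes to the illumination robustness mentioned above.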
Figure 3: Feature extraction process: ridges-and-valleys thresholding and sampling of the face image, followed by multi-scale and multi-orientation Gabor filtering centered on the sampled points. Each point is represented by a texture vector called a jet.

Figure 4: Face recognition diagram. After illumination correction, both gallery (target) and probe images are landmarked using the pose-dependent approach. Frontal views of both faces are synthesized (shape frontalization plus texture warping) and matched to perform recognition.

4. COMPARATIVE RESULTS

This section shows the recognition results using the two schemes for registering the rotated image, together with the reference result obtained with manually landmarked images (72 points). These experiments are run over 34 subjects from the CMU PIE database in order to allow an easy comparison with the results presented in [15]. Tables 2, 3 and 4 show the recognition results for manually landmarked faces and for the fully automatic landmarking system, using both the Pose-Dependent scheme and the View-Based scheme. The recognition results are comparable for most of the probe-gallery pose combinations, decreasing for pose differences of 45º due to the propagation of larger landmarking errors in these poses for both automatic approaches.
    Gallery \ Probe    -45º     -22.5º    0º       +22.5º    +45º     Average
    -45º               -        100       100      100       91.18    97.79
    -22.5º             100      -         100      100       97.06    99.26
    0º                 100      100       -        100       100      100
    +22.5º             100      100       100      -         100      100
    +45º               91.18    94.12     100      100       -        96.68
    Average            97.79    98.53     100      100       97.06    98.68

Table 2: Face recognition results (%) using manual landmarks instead of AAM fitting.

    Gallery \ Probe    -45º     -22.5º    0º       +22.5º    +45º     Average
    -45º               -        97.06     97.06    94.12     91.18    94.85
    -22.5º             94.12    -         100      100       85.29    94.85
    0º                 94.12    100       -        100       100      98.53
    +22.5º             91.18    100       100      -         100      97.85
    +45º               88.23    94.12     100      97.06     -        94.85
    Average            91.91    97.79     99.26    97.79     94.12    96.18

Table 3: Face recognition results (%) using the Pose-Dependent solution.

    Gallery \ Probe    -45º     -22.5º    0º       +22.5º    +45º     Average
    -45º               -        97.06     97.06    94.12     91.18    94.85
    -22.5º             97.06    -         100      100       85.29    95.58
    0º                 97.06    100       -        100       94.12    98.53
    +22.5º             94.12    100       100      -         94.12    97.85
    +45º               88.23    91.18     100      97.06     -        94.85
    Average            94.12    97.06     99.26    98.53     91.78    96.02

Table 4: Face recognition results (%) using the View-Based solution.

It is clear that both approaches perform quite similarly and very close to the perfect fitting given by the manually landmarked faces. More interestingly, the Pose-Dependent solution was able to landmark 5 images per second, while the View-Based solution landmarked 2 images per second, using in both cases an Intel Core 2 Quad CPU (2.85 GHz).

5. CONCLUSIONS

In this paper we have presented a fully automatic system for face recognition using a combination of local and global approaches, introducing a scheme that avoids most of the errors due to pose variation. The multiresolution scheme of the Pose-Dependent AAM allowed faster fitting to the correct pose-dependent model than the pure view-based approach. Recognition results over the CMU PIE database showed recognition values similar to the View-Based AAM, and a performance quite close to that achieved with manually landmarked faces.

REFERENCES

[1] M. Turk and A. Pentland: "Eigenfaces for Recognition", J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[2] P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman: "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[3] A. Pentland, B. Moghaddam and T. Starner: "View-Based and Modular Eigenspaces for Face Recognition", Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 84-91, 1994.
[4] T. F. Cootes, G. J. Edwards and C. J. Taylor: "Active Appearance Models", Proc. Fifth European Conf. Computer Vision, H. Burkhardt and B. Neumann, eds., vol. 2, pp. 484-498, 1998.
[5] L. Wiskott, J. M. Fellous, N. Krüger and C. von der Malsburg: "Face Recognition by Elastic Bunch Graph Matching", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, 1997.
[6] T. Ahonen, A. Hadid and M. Pietikäinen: "Face Description with Local Binary Patterns: Application to Face Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037-2041, Dec. 2006.
[7] M. Bicego, A. Lagorio, E. Grosso and M. Tistarelli: "On the Use of SIFT Features for Face Authentication", Proc. Computer Vision and Pattern Recognition Workshop (CVPRW'06), p. 35, 2006.
[8] A. Albiol, D. Monzo, A. Martin, J. Sastre and A. Albiol: "Face Recognition Using HOG-EBGM", Pattern Recognition Letters, vol. 29, no. 10, pp. 1537-1543, 2008.
[9] X. Zhang and Y. Gao: "Face Recognition Across Pose: A Review", Pattern Recognition, vol. 42, no. 11, pp. 2876-2896, 2009.
[10] C. Castillo and D. Jacobs: "Using Stereo Matching for 2-D Face Recognition Across Pose", Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR '07), pp. 1-8, June 2007.
[11] Z. Wang, X. Ding and C. Fang: "Pose Adaptive LDA Based Face Recognition", Proc. 19th Int. Conf. Pattern Recognition (ICPR 2008), pp. 1-4, Dec. 2008.
[12] R. Gross, I. Matthews and S. Baker: "Appearance-Based Face Recognition and Light-Fields", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 4, pp. 449-465, April 2004.
[13] X. Zhang, Y. Gao and M. Leung: "Recognizing Rotated Faces from Frontal and Side Views: An Approach Toward Effective Use of Mugshot Databases", IEEE Transactions on Information Forensics and Security, vol. 3, no. 4, pp. 684-697, Dec. 2008.
[14] V. Blanz and T. Vetter: "A Morphable Model for the Synthesis of 3D Faces", Proc. SIGGRAPH '99, pp. 187-194, 1999.
[15] D. González-Jiménez and J. L. Alba-Castro: "Toward Pose-Invariant 2-D Face Recognition Through Point Distribution Models and Facial Symmetry", IEEE Transactions on Information Forensics and Security, vol. 2, no. 3, pp. 413-429, Sept. 2007.
[16] T. Shan, B. Lovell and S. Chen: "Face Recognition Robust to Head Pose from One Sample Image", Proc. 18th Int. Conf. Pattern Recognition (ICPR 2006), vol. 1, pp. 515-518, 2006.
[17] X. Chai, S. Shan, X. Chen and W. Gao: "Locally Linear Regression for Pose-Invariant Face Recognition", IEEE Transactions on Image Processing, vol. 16, no. 7, pp. 1716-1725, July 2007.
[18] T. F. Cootes, K. Walker and C. J. Taylor: "View-Based Active Appearance Models", Proc. Fourth IEEE Int. Conf. Automatic Face and Gesture Recognition, pp. 227-232, 2000.
[19] L. Teijeiro-Mosquera, J. L. Alba-Castro and D. González-Jiménez: "Face Recognition Across Pose with Automatic Estimation of Pose Parameters Through AAM-Based Landmarking", Proc. 20th Int. Conf. Pattern Recognition (ICPR 2010), pp. 1339-1342.
[20] T. Sim, S. Baker and M. Bsat: "The CMU Pose, Illumination, and Expression (PIE) Database of Human Faces", technical report.
[21] T. Cootes, C. Taylor and A. Lanitis: "Multi-Resolution Search with Active Shape Models", Proc. 12th IAPR Int. Conf. Pattern Recognition, vol. 1, pp. 610-612, Oct. 1994.
[22] P. Viola and M. Jones: "Rapid Object Detection Using a Boosted Cascade of Simple Features", Proc. Int. Conf. Computer Vision and Pattern Recognition, pp. 511-518, 2001.
[23] N. Poh, C. H. Chan, J. Kittler, S. Marcel, C. McCool, E. Argones Rúa, J. L. Alba Castro, M. Villegas, R. Paredes, V. Štruc, N. Pavešić, A. A. Salah, H. Fang and N. Costen: "An Evaluation of Video-to-Video Face Verification", IEEE Transactions on Information Forensics and Security, vol. 5, no. 4, pp. 781-801, Dec. 2010.
[24] J. G. Daugman: "Complete Discrete 2-D Gabor Transforms by Neural Networks for Image Analysis and Compression", IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 36, no. 7, pp. 1169-1179, July 1988.
[25] D. González-Jiménez and J. L. Alba-Castro: "Shape-Driven Gabor Jets for Face Description and Authentication", IEEE Transactions on Information Forensics and Security, vol. 2, no. 4, pp. 769-780, 2007.
[26] D. González-Jiménez, E. Argones-Rúa, J. L. Alba-Castro and J. Kittler: "Evaluation of Point Localization and Similarity Fusion Methods for Gabor Jet-Based Face Verification", IET Computer Vision, vol. 1, no. 3-4, pp. 101-112, Dec. 2007.