Handbook of Face Recognition
Stan Z. Li · Anil K. Jain
Editors

Handbook of Face Recognition

Second Edition
Editors

Stan Z. Li
Center for Biometrics and Security Research, Institute of Automation
Chinese Academy of Sciences
Room 1227, No. 95 Zhongguancun East Rd
Beijing 100190, People's Republic of China
szli@cbsr.ia.ac.cn

Anil K. Jain
Dept. of Computer Science & Engineering
Michigan State University
East Lansing, MI 48824-1226, USA
jain@cse.msu.edu

ISBN 978-0-85729-931-4
e-ISBN 978-0-85729-932-1
DOI 10.1007/978-0-85729-932-1
Springer London Dordrecht Heidelberg New York

British Library Cataloguing in Publication Data: a catalogue record for this book is available from the British Library.

Library of Congress Control Number: 2011936022

© Springer-Verlag London Limited 2011

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licenses issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

The use of registered names, trademarks, etc., in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Cover design: VTeX UAB, Lithuania
Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
Preface

Face recognition is one of the most important abilities that we use in our daily lives. There are several reasons for the growing interest in automated face recognition, including rising concerns for public security, the need for identity verification for physical and logical access, and the need for face analysis and modeling techniques in multimedia data management and digital entertainment. Research in automatic face recognition started in the 1960s. Recent years have seen significant progress in this area, and a number of face recognition and modeling systems have been developed and deployed. However, accurate and robust face recognition still offers a number of challenges to computer vision and pattern recognition researchers, especially under unconstrained environments.

This book is written with two primary motivations. The first is to compile major approaches, algorithms, and technologies available for automated face recognition. The second is to provide a reference book to students, researchers, and practitioners. The book is intended for anyone who plans to work in face recognition or who wants to become familiar with the state-of-the-art in face recognition. It also provides references for scientists and engineers working in image processing, computer vision, biometrics and security, computer graphics, animation, and the computer game industry. The material contained in the book fits the following categories: advanced tutorial, state-of-the-art survey, and a guide to current technology.

This second edition consists of twenty-seven chapters, with additions and updates from the sixteen chapters in the first edition. It covers all the subareas and major components necessary for developing face recognition algorithms, designing operational systems, and addressing related issues in large-scale applications. Each chapter focuses on a specific topic or system component, introduces background information, reviews up-to-date techniques, presents results, and points out challenges and future directions.

The twenty-seven chapters are divided into four parts according to the main problems addressed. Part I, Face Image Modeling and Representation, consists of ten chapters, presenting theories in face image modeling and facial feature representation. Part II, Face Recognition Techniques, also consists of ten chapters, presenting techniques for face detection, landmark detection, and face recognition in static face images, in video, in non-visible spectrum images, and in 3D. Part III, Performance Evaluation: Machines and Humans, consists of three chapters, presenting methods and programs for face recognition evaluation, as well as studies and comparisons with human performance. Part IV, Face Recognition Applications, consists of four chapters, presenting various applications of face recognition and related issues.

A project like this requires the efforts and support of many individuals and organizations. First of all, we would like to thank all the authors for their outstanding contributions which made this edition possible. We also thank Wayne Wheeler and Simon Rees, the Springer editors, for their support and patience during the course of this project. Thanks are also due to a number of individuals who have assisted us during the editing phase of this project, including Shikun Feng, Shengcai Liao, Xiangsheng Huang, Brendan Klare, Unsang Park, Abhishek Nagar, and not least Kim Thompson for her careful proofreading of the manuscript.

Stan Z. Li would like to acknowledge the support of the Chinese National Natural Science Foundation Project #61070146, the National Science and Technology Support Program Project #2009BAK43B26, the AuthenMetric R&D Funds, and the TABULA RASA project (http://www.tabularasa-euproject.org) under the Seventh Framework Programme for research and technological development (FP7) of the European Union (EU), grant agreement #257289. Anil Jain's research was partially supported by the WCU (World Class University) program funded by the Ministry of Education, Science and Technology through the National Research Foundation of Korea (R31-10008) to the Brain & Cognitive Engineering Department, Korea University, where he is an Adjunct Professor.

Stan Z. Li, Beijing, People's Republic of China
Anil K. Jain, East Lansing, USA
Contents

1 Introduction (Stan Z. Li and Anil K. Jain)

Part I Face Image Modeling and Representation

2 Face Recognition in Subspaces (Gregory Shakhnarovich and Baback Moghaddam)
3 Face Subspace Learning (Wei Bian and Dacheng Tao)
4 Local Representation of Facial Features (Joni-Kristian Kämäräinen, Abdenour Hadid, and Matti Pietikäinen)
5 Face Alignment Models (Phil Tresadern, Tim Cootes, Chris Taylor, and Vladimir Petrović)
6 Morphable Models of Faces (Reinhard Knothe, Brian Amberg, Sami Romdhani, Volker Blanz, and Thomas Vetter)
7 Illumination Modeling for Face Recognition (Ronen Basri and David Jacobs)
8 Face Recognition Across Pose and Illumination (Ralph Gross, Simon Baker, Iain Matthews, and Takeo Kanade)
9 Skin Color in Face Analysis (J. Birgitta Martinkauppi, Abdenour Hadid, and Matti Pietikäinen)
10 Face Aging Modeling (Unsang Park and Anil K. Jain)

Part II Face Recognition Techniques

11 Face Detection (Stan Z. Li and Jianxin Wu)
12 Facial Landmark Localization (Xiaoqing Ding and Liting Wang)
13 Face Tracking and Recognition in Video (Rama Chellappa, Ming Du, Pavan Turaga, and Shaohua Kevin Zhou)
14 Face Recognition at a Distance (Frederick W. Wheeler, Xiaoming Liu, and Peter H. Tu)
15 Face Recognition Using Near Infrared Images (Stan Z. Li and Dong Yi)
16 Multispectral Face Imaging and Analysis (Andreas Koschan, Yi Yao, Hong Chang, and Mongi Abidi)
17 Face Recognition Using 3D Images (I.A. Kakadiaris, G. Passalis, G. Toderici, E. Efraty, P. Perakis, D. Chu, S. Shah, and T. Theoharis)
18 Facial Action Tracking (Jörgen Ahlberg and Igor S. Pandzic)
19 Facial Expression Recognition (Yingli Tian, Takeo Kanade, and Jeffrey F. Cohn)
20 Face Synthesis (Yang Wang, Zicheng Liu, and Baining Guo)

Part III Performance Evaluation: Machines and Humans

21 Evaluation Methods in Face Recognition (P. Jonathon Phillips, Patrick Grother, and Ross Micheals)
22 Dynamic Aspects of Face Processing in Humans (Heinrich H. Bülthoff, Douglas W. Cunningham, and Christian Wallraven)
23 Face Recognition by Humans and Machines (Alice J. O'Toole)

Part IV Face Recognition Applications

24 Face Recognition Applications (Thomas Huang, Ziyou Xiong, and Zhenqiu Zhang)
25 Large Scale Database Search (Michael Brauckmann and Christoph Busch)
26 Face Recognition in Forensic Science (Nicole A. Spaun)
27 Privacy Protection and Face Recognition (Andrew W. Senior and Sharathchandra Pankanti)

Index
Contributors

Mongi Abidi Imaging, Robotics, and Intelligent Systems Lab, University of Tennessee, Knoxville, TN 37996, USA, abidi@utk.edu
Jörgen Ahlberg Division of Information Systems, Swedish Defence Research Agency (FOI), P.O. Box 1165, 583 34 Linköping, Sweden, jorahl@foi.se
Brian Amberg Department of Mathematics and Computer Science, University of Basel, Bernoullistrasse 16, 4056 Basel, Switzerland, brian.amberg@unibas.ch
Heinrich H. Bülthoff Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany, heinrich.buelthoff@tuebingen.mpg.de; Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
Simon Baker Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA, simonb@cs.cmu.edu
Ronen Basri The Weizmann Institute of Science, Rehovot 76100, Israel, ronen.basri@weizmann.ac.il
Wei Bian Centre for Quantum Computation & Intelligence Systems, FEIT, University of Technology, Sydney, NSW 2007, Australia, wei.bian@student.uts.edu.au
Volker Blanz Universität Siegen, Hölderlinstrasse 3, 57068 Siegen, Germany, blanz@mpi-sb.mpg.de
Michael Brauckmann L-1 Identity Solutions AG, Bochum, Germany, MBrauckmann@l1id.com
Christoph Busch Hochschule Darmstadt/Fraunhofer IGD, Darmstadt, Germany, christoph.busch@igd.fraunhofer.de
Hong Chang Imaging, Robotics, and Intelligent Systems Lab, University of Tennessee, Knoxville, TN 37996, USA, hchang2@utk.edu
Rama Chellappa Department of Electrical and Computer Engineering, Center for Automation Research, University of Maryland, College Park, MD 20742, USA, rama@umiacs.umd.edu
D. Chu Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX 77204, USA
Jeffrey F. Cohn Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260, USA, jeffcohn@pitt.edu
Tim Cootes Imaging Science and Biomedical Engineering, University of Manchester, Manchester, UK, t.cootes@man.ac.uk
Douglas W. Cunningham Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany, douglas.cunningham@tu-cottbus.de; Brandenburg Technical University, 03046 Cottbus, Germany
Xiaoqing Ding State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing 100084, China, dingxq@tsinghua.edu.cn
Ming Du Department of Electrical and Computer Engineering, Center for Automation Research, University of Maryland, College Park, MD 20742, USA, mingdu@umiacs.umd.edu
E. Efraty Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX 77204, USA
Ralph Gross Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA, rgross@cs.cmu.edu
Patrick Grother National Institute of Standards and Technology, Gaithersburg, MD 20899, USA, pgrother@nist.gov
Baining Guo Microsoft Research Asia, Beijing 100080, China, bainguo@microsoft.com
Abdenour Hadid Machine Vision Group, Department of Electrical and Information Engineering, University of Oulu, P.O. Box 4500, 90014 Oulu, Finland, hadid@ee.oulu.fi
Thomas Huang University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA, huang@ifp.uiuc.edu
David Jacobs University of Maryland, College Park, MD 20742, USA, djacobs@umiacs.umd.edu
Anil K. Jain Michigan State University, East Lansing, MI 48824, USA, jain@cse.msu.edu
I.A. Kakadiaris Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX 77204, USA, ioannisk@grip.cis.upenn.edu
Takeo Kanade Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA, tk@cs.cmu.edu
Reinhard Knothe Department of Mathematics and Computer Science, University of Basel, Bernoullistrasse 16, 4056 Basel, Switzerland, reinhard.knothe@unibas.ch
Andreas Koschan Imaging, Robotics, and Intelligent Systems Lab, University of Tennessee, Knoxville, TN 37996, USA, akoschan@utk.edu
Joni-Kristian Kämäräinen Machine Vision and Pattern Recognition Laboratory, Lappeenranta University of Technology, Lappeenranta, Finland, Joni.Kamarainen@lut.fi
Stan Z. Li Center for Biometrics and Security Research & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China, szli@cbsr.ia.ac.cn
Xiaoming Liu Visualization and Computer Vision Lab, GE Global Research, Niskayuna, NY 12309, USA, liux@ge.com
Zicheng Liu Microsoft Research, Redmond, WA 98052, USA, zliu@microsoft.com
J. Birgitta Martinkauppi Department of Electrical Engineering and Automation, University of Vaasa, Wolffintie 34, 65101 Vaasa, Finland, birmar@uwasa.fi
Iain Matthews Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA, iainm@cs.cmu.edu
Ross Micheals National Institute of Standards and Technology, Gaithersburg, MD 20899, USA, rossm@nist.gov
Baback Moghaddam Mitsubishi Electric Research Labs, Cambridge, MA 02139, USA, baback@merl.com
Alice J. O'Toole School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 W. Campbell Rd., Richardson, TX 75083-0688, USA, otoole@utdallas.edu
Igor S. Pandzic Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia, igor.pandzic@fer.hr
Sharathchandra Pankanti IBM Research, Yorktown Heights, NY 10598, USA, sharat@us.ibm.com
Unsang Park Michigan State University, East Lansing, MI 48824, USA, parkunsa@cse.msu.edu
G. Passalis Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX 77204, USA; Computer Graphics Laboratory, Department of Informatics and Telecommunications, University of Athens, Ilisia 15784, Greece
P. Perakis Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX 77204, USA; Computer Graphics Laboratory, Department of Informatics and Telecommunications, University of Athens, Ilisia 15784, Greece
Vladimir Petrović Imaging Science and Biomedical Engineering, University of Manchester, Manchester, UK
P. Jonathon Phillips National Institute of Standards and Technology, Gaithersburg, MD 20899, USA, jonathon@nist.gov
Matti Pietikäinen Machine Vision Group, Department of Electrical and Information Engineering, University of Oulu, P.O. Box 4500, 90014 Oulu, Finland, mkp@ee.oulu.fi
Sami Romdhani Department of Mathematics and Computer Science, University of Basel, Bernoullistrasse 16, 4056 Basel, Switzerland, sami.romdhani@unibas.ch
Andrew W. Senior Google Research, New York, NY 10011, USA, andrewsenior@google.com
S. Shah Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX 77204, USA
Gregory Shakhnarovich Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA, gregory@ai.mit.edu
Nicole A. Spaun Forensic Audio, Video and Image Analysis Unit, Federal Bureau of Investigation, Quantico, VA, USA, Nicole.Spaun@us.army.mil; United States Army Europe Headquarters, Heidelberg, Germany; USAREUR, CMR 420, Box 2872, APO AE 09036, USA
Dacheng Tao Centre for Quantum Computation & Intelligence Systems, FEIT, University of Technology, Sydney, NSW 2007, Australia, dacheng.tao@uts.edu.au
Chris Taylor Imaging Science and Biomedical Engineering, University of Manchester, Manchester, UK
T. Theoharis Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX 77204, USA; Computer Graphics Laboratory, Department of Informatics and Telecommunications, University of Athens, Ilisia 15784, Greece
Yingli Tian Department of Electrical Engineering, The City College of New York, New York, NY 10031, USA, ytian@ccny.cuny.edu
G. Toderici Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX 77204, USA
Phil Tresadern Imaging Science and Biomedical Engineering, University of Manchester, Manchester, UK
Peter H. Tu Visualization and Computer Vision Lab, GE Global Research, Niskayuna, NY 12309, USA, tu@ge.com
Pavan Turaga Department of Electrical and Computer Engineering, Center for Automation Research, University of Maryland, College Park, MD 20742, USA, pturaga@umiacs.umd.edu
Thomas Vetter Department of Mathematics and Computer Science, University of Basel, Bernoullistrasse 16, 4056 Basel, Switzerland, thomas.vetter@unibas.ch
Christian Wallraven Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany, wallraven@korea.ac.kr; Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
Liting Wang State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing 100084, China, wangltmail@tsinghua.edu.cn
Yang Wang Carnegie Mellon University, Pittsburgh, PA 15213, USA, wangy@cs.cmu.edu
Frederick W. Wheeler Visualization and Computer Vision Lab, GE Global Research, Niskayuna, NY 12309, USA, wheeler@ge.com
Jianxin Wu School of Computer Engineering, Nanyang Technological University, Singapore, Singapore, jxwu@ntu.edu.sg
Ziyou Xiong United Technologies Research Center, East Hartford, CT 06108, USA, xiongz@utrc.utc.com
Yi Yao Visualization and Computer Vision Lab, GE Global Research, Niskayuna, NY 12309, USA, yi.yao@ge.com
Dong Yi Center for Biometrics and Security Research & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China, dyi@cbsr.ia.ac.cn
Zhenqiu Zhang University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA, zzhang6@uiuc.edu
Shaohua Kevin Zhou Siemens Corporate Research, 755 College Road East, Princeton, NJ 08540, USA, kzhou@scr.siemens.com
Chapter 1
Introduction

Stan Z. Li and Anil K. Jain

S.Z. Li, Center for Biometrics and Security Research & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China. e-mail: szli@cbsr.ia.ac.cn
A.K. Jain, Michigan State University, East Lansing, MI 48824, USA. e-mail: jain@cse.msu.edu

S.Z. Li, A.K. Jain (eds.), Handbook of Face Recognition, DOI 10.1007/978-0-85729-932-1_1, © Springer-Verlag London Limited 2011

1.1 Face Recognition

Face recognition is a task that humans perform routinely and effortlessly in our daily lives. The wide availability of powerful and low-cost desktop and embedded computing systems has created an enormous interest in automatic processing of digital images in a variety of applications, including biometric authentication, surveillance, human-computer interaction, and multimedia management. Research and development in automatic face recognition follows naturally.

Face recognition has several advantages over other biometric modalities such as fingerprint and iris: besides being natural and nonintrusive, the most important advantage of face is that it can be captured at a distance and in a covert manner. Among the six biometric attributes considered by Hietmeyer [16], facial features scored the highest compatibility in a Machine Readable Travel Documents (MRTD) [27] system based on a number of evaluation factors, such as enrollment, renewal, machine requirements, and public perception, as shown in Fig. 1.1. Face recognition, as one of the major biometric technologies, has become increasingly important owing to rapid advances in image capture devices (surveillance cameras, cameras in mobile phones), the availability of huge amounts of face images on the Web, and increased demands for higher security.

Fig. 1.1 A scenario of using biometric MRTD systems for passport control (left), and a comparison of various biometric traits based on MRTD compatibility (right, from Hietmeyer [16] with permission)

The first automated face recognition system was developed by Takeo Kanade in his Ph.D. thesis work [18] in 1973. There was a dormant period in automatic face recognition until the work by Sirovich and Kirby [19, 38] on a low-dimensional face representation, derived using the Karhunen–Loève transform or Principal Component Analysis (PCA). It is the pioneering work of Turk and Pentland on Eigenface [42] that reinvigorated face recognition research. Other major milestones in face recognition include: the Fisherface method [3, 12], which applied Linear Discriminant Analysis (LDA) after a PCA step to achieve higher accuracy; the use of local filters such as Gabor jets [21, 45] to provide more effective facial features; and the design of the AdaBoost learning-based cascade classifier architecture for real-time face detection [44].

Face recognition technology has advanced significantly since the time when the Eigenface method was proposed. In constrained situations, for example where lighting, pose, stand-off, facial wear, and facial expression can be controlled, automated face recognition can surpass human recognition performance, especially when the database (gallery) contains a large number of faces.¹ However, automatic face recognition still faces many challenges when face images are acquired under unconstrained environments. In the following sections, we give a brief overview of the face recognition process, analyze technical challenges, propose possible solutions, and describe state-of-the-art performance.

¹Most individuals can identify only a few thousand people in real life.

This chapter provides an introduction to face recognition research. The main steps of face recognition processing are described. Face detection and recognition problems are explained from a face subspace viewpoint. Technology challenges are identified, and possible strategies for solving some of the problems are suggested.
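The low-dimensional representation behind Eigenface can be stated compactly. The following sketch is an editorial illustration only, not code from the handbook: it assumes face images have already been detected, cropped, and vectorized into equal-length rows, and the function names `eigenfaces` and `project` are hypothetical.

```python
import numpy as np

def eigenfaces(X, k):
    """PCA (Karhunen-Loeve) face basis. X is (n_images, n_pixels),
    one vectorized, geometrically normalized face per row."""
    mean = X.mean(axis=0)
    A = X - mean
    # Right singular vectors of the centered data are the principal
    # axes ("eigenfaces"), ordered by decreasing explained variance.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return mean, Vt[:k]          # mean face and top-k eigenfaces

def project(x, mean, W):
    """Low-dimensional representation of one face image x."""
    return W @ (x - mean)        # k coefficients instead of n_pixels
```

Recognition in this representation reduces to comparing coefficient vectors, for example by Euclidean distance; the Fisherface method mentioned above additionally applies LDA in such a PCA subspace to sharpen the separation between different persons.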
image against an enrollment face image whose identity is being claimed. Person verification for self-service immigration clearance using an e-passport is one typical application.

Face identification involves one-to-many matching that compares a query face against multiple faces in the enrollment database to associate the identity of the query face with one of those in the database. In some identification applications, one just needs to find the most similar face. In a watchlist check or face identification in surveillance video, the requirement is more than finding the most similar face: a confidence threshold is specified, and all faces whose similarity score is above the threshold are reported.

The performance of a face recognition system largely depends on a variety of factors such as illumination, facial pose, expression, age span, hair, facial wear, and motion. Based on these factors, face recognition applications may be divided into two broad categories in terms of a user's cooperation: (1) cooperative user scenarios and (2) noncooperative user scenarios.

The cooperative case is encountered in applications such as computer login, physical access control, and e-passports, where the user is willing to be cooperative by presenting his/her face in a proper way (for example, in a frontal pose with neutral expression and eyes open) in order to be granted the access or privilege. In the noncooperative case, which is typical in surveillance applications, the user is unaware of being identified. In terms of the distance between the face and the camera, near-field face recognition (less than 1 m) for cooperative applications (e.g., access control) is the least difficult problem, whereas far-field noncooperative applications (e.g., watchlist identification) in surveillance video are the most challenging.

Applications in between the above two categories can also be foreseen. For example, in face-based access control at a distance, the user is willing to be cooperative but is unable to present the face in a favorable condition with respect to the camera. This may present challenges to the system, even though such cases are still easier than identifying a subject who is not cooperative. In almost all cases, however, ambient illumination is the foremost challenge for most face recognition applications.

1.3 Processing Workflow

Face recognition is a visual pattern recognition problem, where the face, represented as a three-dimensional object that is subject to varying illumination, pose, expression, and other factors, needs to be identified based on acquired images. While two-dimensional face images are commonly used in most applications, certain applications requiring higher levels of security demand the use of three-dimensional (depth or range) images or optical images beyond the visual spectrum. A face recognition system generally consists of four modules, as depicted in Fig. 1.2: face localization, normalization, feature extraction, and matching. These modules are explained below.
Fig. 1.2 Depiction of face recognition processing flow

Face detection segments the face area from the background. In the case of video, the detected faces may need to be tracked across multiple frames using a face tracking component. While face detection provides a coarse estimate of the location and scale of the face, face landmarking localizes facial landmarks (e.g., eyes, nose, mouth, and facial outline). This may be accomplished by a landmarking module or face alignment module.

Face normalization is performed to normalize the face geometrically and photometrically. This is necessary because state-of-the-art recognition methods are expected to recognize face images with varying pose and illumination. The geometric normalization process transforms the face into a standard frame by face cropping. Warping or morphing may be used for more elaborate geometric normalization. The photometric normalization process normalizes the face based on properties such as illumination and gray scale.

Face feature extraction is performed on the normalized face to extract salient information that is useful for distinguishing faces of different persons and is robust with respect to the geometric and photometric variations. The extracted face features are used for face matching.

In face matching, the extracted features from the input face are matched against one or many of the enrolled faces in the database. The matcher outputs 'yes' or 'no' for 1:1 verification; for 1:N identification, the output is the identity of the input face when the top match is found with sufficient confidence, or unknown when the top match score is below a threshold. The main challenge in this stage of face recognition is to find a suitable similarity metric for comparing facial features.

The accuracy of face recognition systems highly depends on the features that are extracted to represent the face, which, in turn, depend on correct face localization and normalization.
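To make the four-module flow of Fig. 1.2 concrete, here is a minimal Python sketch. The module functions detect_face, normalize_face, and extract_features are hypothetical placeholders standing in for real components (e.g., a trained detector such as the cascade of [44], a landmarking model, and a learned feature extractor); only the matching logic, with its 'yes'/'no' verification output and thresholded open-set identification, follows the description above. The cosine score is used purely for illustration, since choosing a good similarity metric is itself a central design problem.

```python
import numpy as np

# Hypothetical stand-ins for the first three modules of Fig. 1.2.
def detect_face(image):
    return image                      # pretend the whole image is the face

def normalize_face(face):
    return (face - face.mean()) / (face.std() + 1e-8)   # photometric only

def extract_features(face):
    return face.ravel()               # raw pixels stand in for real features

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify(image, enrolled, threshold):
    """1:1 verification: 'yes' iff the claimed template matches well enough."""
    probe = extract_features(normalize_face(detect_face(image)))
    return cosine(probe, enrolled) >= threshold

def identify(image, gallery, threshold):
    """1:N open-set identification: best-matching subject id, or 'unknown'
    when even the top score falls below the confidence threshold."""
    probe = extract_features(normalize_face(detect_face(image)))
    scores = {sid: cosine(probe, tpl) for sid, tpl in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"
```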
While face recognition remains a challenging pattern recognition problem, it may be analyzed from the viewpoint of face subspaces or manifolds, as follows.

1.4 Face Subspace

Although face recognition technology has significantly improved and can now be performed successfully in "real time" for images and videos captured under favorable (constrained) situations, face recognition is still a difficult endeavor, especially for unconstrained tasks where viewpoint, illumination, expression, occlusion, and facial accessories can vary considerably. This can be illustrated from a face subspace or manifold viewpoint.

Subspace analysis techniques for face recognition are based on the fact that a class of patterns of interest, such as the face, resides in a subspace of the input image space. For example, a 64 × 64, 8-bit image with 4096 pixels can express a large number of pattern classes, such as trees, houses, and faces. However, among the 256^4096 > 10^9864 possible "configurations," only a tiny fraction correspond to faces. Therefore, the pixel-based image representation is highly redundant, and the dimensionality of this representation could be greatly reduced when only the face patterns are of interest.

The eigenface or PCA method [19, 42] derives a small number (typically 40 or fewer) of principal components or eigenfaces from a set of training face images. Given the eigenfaces as a basis for a face subspace, a face image is compactly represented by a low-dimensional feature vector, and a face can be reconstructed as a linear combination of the eigenfaces. The use of subspace modeling techniques has significantly advanced face recognition technology.
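The eigenface representation just described can be sketched in a few lines of numpy. The 40-component budget and the 64 × 64 image size echo the numbers in the text; everything else (data layout, SVD-based computation) is an illustrative choice, not the original authors' code.

```python
import numpy as np

def fit_eigenfaces(X, k=40):
    """X holds M vectorized training faces, one per row (e.g., 64*64 = 4096
    pixels per row). Returns the mean face and the top-k eigenfaces."""
    mean = X.mean(axis=0)
    # SVD of the centered data: the rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(x, mean, eigenfaces):
    # Compact k-dimensional feature vector for one face image x.
    return eigenfaces @ (x - mean)

def reconstruct(y, mean, eigenfaces):
    # A face is approximated as the mean plus a linear combination of
    # eigenfaces, weighted by the coefficients y.
    return mean + eigenfaces.T @ y
```

The residual ‖x − reconstruct(project(x, ...), ...)‖ measures how far x lies from the face subspace, which is also the basis for the face/nonface distinction discussed next.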
The manifold or distribution of all faces accounts for variations in facial appearance, whereas the nonface manifold accounts for all objects other than faces. If we examine these manifolds in the image space, we find them highly nonlinear and nonconvex [5, 41]. Figure 1.3(a) illustrates face versus nonface manifolds, and Fig. 1.3(b) illustrates the manifolds of two individuals within the entire face manifold. Face detection can be considered as a task of distinguishing between the face and nonface manifolds in the image (subwindow) space, and face recognition can be considered as a task of distinguishing between faces of different individuals in the face manifold.

Fig. 1.3 Face subspace or manifolds. a Face versus nonface manifolds. b Face manifolds of different individuals

Fig. 1.4 Nonlinearity and nonconvexity of face manifolds under (from top to bottom) translation, rotation, scaling, and gamma transformations

Figure 1.4 further demonstrates the nonlinearity and nonconvexity of face manifolds in a PCA subspace spanned by the first three principal components, where the plots are drawn from real face image data. Each plot depicts the manifolds of three individuals (in three colors). The data consists of 64 frontal face images for each
individual. A transform (horizontal translation, in-plane rotation, size scaling, and gamma transform for the four groups, respectively) is performed on each face image with 11 gradually varying parameters, producing 11 transformed face images; each transformed image is cropped to contain only the face region, and the 11 cropped face images form a sequence. A curve in this figure represents such a sequence in the PCA space, so there are 64 curves for each individual. The three-dimensional (3D) PCA space is projected onto three different 2D spaces (planes). We can observe the nonlinearity of the trajectories.

The following observations can be drawn from Fig. 1.4. First, while this example is demonstrated in the PCA space, more complex (nonlinear and nonconvex) trajectories are expected in the original image space. Second, although these face images have been subjected only to geometric transformations in the 2D plane and pointwise lighting (gamma) changes, even more complex trajectories are expected for geometric transformations in 3D space (for example, out-of-plane head rotations) and changes in ambient lighting.

1.5 Technology Challenges

As shown in Fig. 1.3, the problem of face detection is highly nonlinear and nonconvex, and face matching is even more so. Face recognition evaluation reports, for example the Face Recognition Technology (FERET) evaluation [34], the Face Recognition Vendor Test (FRVT) [31], and other independent studies, indicate that the performance of many state-of-the-art face recognition methods deteriorates with changes in lighting, pose, and other factors [8, 43, 50]. The key technical challenges in automatic face recognition are summarized below.

Large Variability in Facial Appearance. Whereas shape and reflectance are intrinsic properties of a face, the appearance (i.e., the texture) of a face is also influenced by several other factors, including the facial pose (or, equivalently, camera viewpoint), illumination, and facial expression. Figure 1.5 shows an example of large intra-subject variations caused by these factors. Aging is also an important factor that increases intra-subject variations, especially in applications requiring duplication of government-issued photo ID documents (e.g., driver licenses and passports). In addition, various imaging parameters, such as aperture, exposure time, lens aberrations, and sensor spectral response, also increase intra-subject variations. Face-based person identification is further complicated by possible small inter-subject variations (Fig. 1.6). All these factors are confounded in the image data, so "the variations between the images of the same face due to illumination and viewing direction are almost always larger than the image variation due to change in face identity" [30]. This variability makes it difficult to extract the intrinsic information about face identity from a facial image.

Fig. 1.5 Intra-subject variations in pose, illumination, expression, occlusion, accessories (e.g., glasses), color, and brightness. (Courtesy of Rein-Lien Hsu [17])

Fig. 1.6 Similarity of frontal faces between a twins (downloaded from www.marykateandashley.com); and b a father and his son (downloaded from BBC news, news.bbc.co.uk)

Complex Nonlinear Manifolds. As illustrated above, the entire face manifold is highly nonconvex, and so is the face manifold of any individual under various
changes. Linear methods such as PCA [19, 42], independent component analysis (ICA) [2], and linear discriminant analysis (LDA) [3] project the data linearly from a high-dimensional space (for example, the image space) to a low-dimensional subspace. As such, they are unable to preserve the nonconvex variations of face manifolds necessary to differentiate among individuals. In a linear subspace, the Euclidean distance and, more generally, the Mahalanobis distance do not perform well for discriminating between face and nonface manifolds and between manifolds of different individuals (Fig. 1.7(a)). This limits the power of linear methods to achieve highly accurate face detection and recognition in many practical scenarios.

High Dimensionality and Small Sample Size. Another challenge in face recognition is generalization ability, illustrated in Fig. 1.7(b). The figure depicts a canonical face image of size 112 × 92, which resides in a 10,304-dimensional feature space. The number of example face images per person (typically fewer than 10, and sometimes just one) available for learning the manifold is usually much smaller
than the dimensionality of the image space; a system trained on a small number of examples may not generalize well to unseen instances of the face.

Fig. 1.7 Challenges in face recognition from a subspace viewpoint. a Euclidean distance is unable to differentiate between individuals: when using Euclidean distance, an inter-person distance can be smaller than an intra-person distance. b The learned manifold or classifier is unable to characterize (i.e., generalize to) unseen images of the same face

1.6 Solution Strategies

There are two strategies for tackling the challenges outlined in Sect. 1.5: (i) extract invariant and discriminative face features, and (ii) construct a robust face classifier. A set of features, constituting a feature space, is deemed good if the face manifolds become simple in it (i.e., less nonlinear and nonconvex). This requires two stages of processing: (1) normalizing face images geometrically and photometrically (for example, using geometric warping into a standard frame and photometric illumination correction), and (2) extracting features from the normalized images, such as Gabor wavelets and LBP (local binary patterns), that are stable with respect to possible geometric and photometric variations.

A powerful classification engine is still necessary to deal with difficult nonlinear classification and regression problems in the constructed feature space, because normalization and feature extraction cannot fully solve the problems of nonlinearity and nonconvexity. Learning methods are useful tools for finding good features and building powerful, robust classifiers based on these features. The two stages of processing may be designed jointly using learning methods.

In the early development of face recognition [6, 13, 18, 36], geometric facial features such as eyes, nose, mouth, and chin were explicitly used. Properties of the features and relations (e.g., areas, distances, angles) between the features were used as descriptors for face recognition. Advantages of this approach include economy and efficiency in achieving data reduction and insensitivity to variations in illumination and viewpoint. However, facial feature detection and measurement
techniques developed to date are not sufficiently reliable for geometric feature-based recognition [9]. Furthermore, geometric properties alone are inadequate for face recognition because the rich information contained in the facial texture or appearance is not utilized. These are the main reasons why early feature-based techniques were not effective.

Statistical learning methods are the mainstream approach used in building current face recognition systems. Effective features and classifiers are learned from training data (appearance images or features extracted from them). During learning, both prior knowledge about faces and variations encountered in the training data are taken into consideration. The appearance-based approach, such as PCA [42] and LDA [3] based methods, has significantly advanced face recognition technology. Such an approach generally operates directly on an image-based representation (i.e., an array of pixel intensities) and extracts features in a subspace derived from training images. Using PCA, an "optimal" face subspace is constructed to represent only the face object; using LDA, a discriminant subspace is constructed to distinguish faces of different persons. It is now well known that LDA-based methods generally yield better results than PCA-based methods [3].

These linear, holistic appearance-based methods encode the prior knowledge contained in the training data and avoid the instability of the manual selection and tuning needed in the early geometric feature-based methods. However, they are not effective at describing local variations in facial appearance and are unable to capture subtleties of face subspaces: protrusions of nonconvex manifolds may be smoothed out and concavities may be filled in, thereby losing useful information. Note that appearance-based methods require that the face images be properly aligned, typically based on the eye locations.

Nonlinear subspace methods use nonlinear transforms to convert a face image into a feature vector in a discriminative feature space. Kernel PCA [37] and kernel LDA [29] use kernel tricks to map the original data into a high-dimensional space to make the data separable. Manifold learning, which assumes that face images occupy a low-dimensional manifold in the original space, attempts to model such manifolds; examples include ISOMAP [39], LLE [35], and LPP [15]. Although these methods achieve good performance on the training data, they tend to overfit and hence do not generalize well to unseen data.

The most successful approach to date for handling the nonconvex face distribution works with local appearance-based features extracted using appropriate image filters. This is advantageous in that the distributions of face images in a local feature space are less affected by changes in facial appearance. Early work in this direction included local feature analysis (LFA) [33] and Gabor wavelet-based features [21, 45]. Current methods are based on the local binary pattern (LBP) [1] and its many variants (for example, ordinal features [23], the Scale-Invariant Feature Transform (SIFT) [26], and Histograms of Oriented Gradients (HOG) [10]). While these features are general-purpose and can be extracted from arbitrary images, face-specific local filters may also be learned from images [7, 20].

A large number of local features can be generated by varying the parameters associated with the position, scale, and orientation of the filters. For example, more than
400 000 local appearance features can be generated when an image of size 100 × 100 is filtered with Gabor filters at five different scales and eight different orientations for all pixel positions. While some of these features are useful for face recognition, others may be less useful or may even degrade recognition performance. Boosting-based methods have been implemented to select good local features [46, 48, 49]. A discriminant analysis step can be applied to further transform the space of the selected local features into a discriminative subspace of lower dimensionality to achieve better face classification [22, 24, 25]. This leads to a framework for learning both effective features and powerful classifiers.
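As an illustration of such local features, the following sketch computes the basic 3 × 3 LBP code and pools the codes into regional histograms in the spirit of Ahonen et al. [1]. The 7 × 7 grid and the plain 256-bin histograms are illustrative simplifications (the original work, for instance, uses uniform patterns and weighted regions).

```python
import numpy as np

def lbp_histograms(face, grid=(7, 7)):
    """Basic 3x3 LBP codes pooled into one 256-bin histogram per grid
    cell; histograms are concatenated to keep the spatial layout."""
    img = face.astype(np.int32)
    center = img[1:-1, 1:-1]
    # Eight neighbors of each interior pixel, one bit per neighbor.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbor >= center).astype(np.int32) << bit
    feats = []
    for band in np.array_split(code, grid[0], axis=0):
        for cell in np.array_split(band, grid[1], axis=1):
            feats.append(np.bincount(cell.ravel(), minlength=256))
    return np.concatenate(feats)
```

Concatenating per-region histograms preserves the coarse spatial layout of the face, which is what makes this representation discriminative despite the simplicity of each local code.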
There have been only a few studies reported on face recognition at a distance. These approaches can be essentially categorized into two groups: (i) generating a super-resolution face image from a given low-resolution image [11, 32], and (ii) acquiring a high-resolution face image using a special camera system (e.g., a high-resolution camera or a PTZ camera) [4, 14, 28, 40, 47].

The availability of high-resolution face images (i.e., tens of megapixels per image) provides new opportunities in facial feature representation and matching. In the 2006 Face Recognition Vendor Test (FRVT) [31], the best face matching accuracies were obtained from high-resolution 2D images or 3D images. This underlines the importance of developing advanced sensors, as well as robust feature extraction and matching algorithms, in achieving high face recognition accuracy. The increasing popularity of infrared cameras likewise underscores the importance of sensing techniques.

1.7 Current Status

For cooperative scenarios, frontal face detection and tracking in a normal lighting environment is a reasonably well-solved problem. Assuming the face is captured with sufficient image resolution, 1:1 face verification also works satisfactorily for cooperative frontal faces. Figure 1.8 illustrates an application of face verification at the 2008 Beijing Olympic Games. This system verified the identity of a ticket holder (spectator) at entrances to the National Stadium (Bird's Nest). Each ticket was associated with a unique ID number, and the ticket holder was required to submit a registration form with a two-inch ID/passport photo attached. The face photo was scanned into the system. At the entrance, the ticket was read by an RFID reader, the face image was captured using a video camera and compared with the enrollment photo scan, and the verification result was produced.

A novel solution for dealing with uncontrolled illumination is to use active near-infrared (NIR) face imaging to control the illumination direction and strength, which enables the system to achieve high face recognition accuracy. NIR face recognition technology has been in use at the China-Hong Kong border for self-service immigration clearance since 2005 (see Fig. 1.8). (The ShenZhen (China)-Hong Kong border is the world's largest border crossing point, with more than 400 000 crossings every day.)

Fig. 1.8 1:1 face verification used at the 2008 Beijing Olympic Games, and 1:1 NIR face verification used at the China-Hong Kong border control since 2005

Fig. 1.9 An embedded NIR face recognition system for access control in 1:N identification mode, and watch-list face surveillance and identification at subways

One-to-many face identification using conventional, visible-band face images has not yet met the accuracy requirements of practical applications, even for cooperative scenarios. The main problem is uncontrolled ambient illumination. NIR face recognition provides a good solution, even for 1:N identification. Embedded NIR face recognition based access control products (Fig. 1.9) have been on the market since 2008.

Face recognition in noncooperative scenarios, such as watch-list identification, remains a challenging task. Major problems include pose, illumination, and motion blur. Because of the growing emphasis on security, there have been several watch-list identification application trials. The right side of Fig. 1.9 shows a snapshot of 1:N watch-list face surveillance and identification at a Beijing Municipal Subway station, aimed at identifying suspects in the crowd. CCTV cameras are mounted at the subway entrances and exits in such a way that images of frontal faces are more likely to be captured. The best system could achieve a recognition rate of up to 60% at a FAR of 0.1%.
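The operating point quoted above (recognition rate at FAR = 0.1%) comes from open-set, watch-list style evaluation. The sketch below shows one simplified way such numbers can be computed; the protocol details of the actual trial are not given in this chapter, so treat this as an illustration of the metric, not of that system.

```python
import numpy as np

def watchlist_performance(genuine_top, impostor_top, far=0.001):
    """Simplified open-set (watch-list) scoring.

    genuine_top:  list of (top_score, top_match_is_correct) pairs for
                  probes whose identity IS enrolled on the watch list.
    impostor_top: top match scores for probes NOT on the list.
    The threshold is set so that the fraction of impostor probes raising
    an alarm equals the target false accept rate (here FAR = 0.1%)."""
    threshold = float(np.quantile(impostor_top, 1.0 - far))
    # A watch-list probe counts as a success only if its top match is
    # both the correct identity and above the alarm threshold.
    hits = [ok and score >= threshold for score, ok in genuine_top]
    return threshold, float(np.mean(hits))
```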
1.8 Summary

Face recognition technology has made impressive gains, but it is still not able to meet the accuracy requirements of many applications. A sustained and collaborative effort is needed to address the many open problems in face recognition.

References

1. Ahonen, T., Hadid, A., Pietikainen, M.: Face recognition with local binary patterns. In: Proceedings of the European Conference on Computer Vision, pp. 469–481. Prague, Czech Republic (2004)
2. Bartlett, M.S., Lades, H.M., Sejnowski, T.J.: Independent component representations for face recognition. In: Proceedings of the SPIE, Conference on Human Vision and Electronic Imaging III, vol. 3299, pp. 528–539 (1998)
3. Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 711–720 (1997)
4. Bernardin, K., v. d. Camp, F., Stiefelhagen, R.: Automatic person detection and tracking using fuzzy controlled active cameras. In: Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2007)
5. Bichsel, M., Pentland, A.P.: Human face recognition and the face image set's topology. CVGIP, Image Underst. 59, 254–261 (1994)
6. Brunelli, R., Poggio, T.: Face recognition: Features versus templates. IEEE Trans. Pattern Anal. Mach. Intell. 15(10), 1042–1052 (1993)
7. Cao, Z., Yin, Q., Tang, X., Sun, J.: Face recognition with learning-based descriptor. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2010)
8. Chellappa, R., Wilson, C., Sirohey, S.: Human and machine recognition of faces: A survey. Proc. IEEE 83, 705–740 (1995)
9. Cox, I.J., Ghosn, J., Yianilos, P.: Feature-based face recognition using mixture-distance. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 209–216 (1996)
10. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 886–893 (2005)
11. Dedeoglu, G., Kanade, T., August, J.: High-zoom video hallucination by exploiting spatio-temporal regularities. In: Proceedings of IEEE International Conference on Computer Vision, pp. 151–158 (2004)
12. Etemad, K., Chellappa, R.: Face recognition using discriminant eigenvectors. In: Proceedings of the International Conference on Acoustics, Speech and Signal Processing (1996)
13. Goldstein, A.J., Harmon, L.D., Lesk, A.B.: Identification of human faces. Proc. IEEE 59(5), 748–760 (1971)
14. Hampapur, A., Pankanti, S., Senior, A., Tian, Y.-L., Brown, L., Bolle, R.: Face cataloger: multi-scale imaging for relating identity to location. In: Proc. IEEE Conference on Advanced Video and Signal Based Surveillance, pp. 13–20 (2003)
15. He, X., Yan, S., Hu, Y., Niyogi, P., Zhang, H.: Face recognition using Laplacianfaces. IEEE Trans. Pattern Anal. Mach. Intell. 27(3), 328–340 (2005)
16. Hietmeyer, R.: Biometric identification promises fast and secure processing of airline passengers. ICAO J. 55(9), 10–11 (2000)
17. Hsu, R.-L.: Face detection and modeling for recognition. PhD thesis, Michigan State University (2002)
18. Kanade, T.: Picture processing system by computer complex and recognition of human faces. PhD thesis, Kyoto University (1973)
19. Kirby, M., Sirovich, L.: Application of the Karhunen-Loeve procedure for the characterization of human faces. IEEE Trans. Pattern Anal. Mach. Intell. 12(1), 103–108 (1990)
20. Kumar, R., Banerjee, A., Vemuri, B.: Volterrafaces: discriminant analysis using Volterra kernels. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 150–155 (2009)
21. Lades, M., Vorbruggen, J., Buhmann, J., Lange, J., von der Malsburg, C., Wurtz, R.P., Konen, W.: Distortion invariant object recognition in the dynamic link architecture. IEEE Trans. Comput. 42, 300–311 (1993)
22. Lei, Z., Liao, S., Pietikäinen, M., Li, S.Z.: Face recognition by exploring information jointly in space, scale and orientation. IEEE Trans. Image Process. 20(1), 247–256 (2011)
23. Liao, S., Lei, Z., Zhu, X., Sun, Z., Li, S.Z., Tan, T.: Face recognition using ordinal features. In: Proceedings of IAPR International Conference on Biometrics, pp. 40–46 (2006)
24. Liao, S., Zhu, X., Lei, Z., Zhang, L., Li, S.Z.: Learning multi-scale block local binary patterns for face recognition. In: Proceedings of IAPR International Conference on Biometrics, pp. 828–837 (2007)
25. Liu, C.: Capitalize on dimensionality increasing techniques for improving face recognition grand challenge performance. IEEE Trans. Pattern Anal. Mach. Intell. 28(5), 725–737 (2006)
26. Lowe, D.G.: Object recognition from local scale-invariant features. In: Proceedings of IEEE International Conference on Computer Vision, p. 1150, Los Alamitos, CA (1999)
27. Machine Readable Travel Documents (MRTD). http://www.icao.int/mrtd/overview/overview.cfm
28. Marchesotti, L., Piva, S., Turolla, A., Minetti, D., Regazzoni, C.: Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions. In: Proceedings of SPIE Int'l Conf. Image and Video Communications and Processing (2005)
29. Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Müller, K.-R.: Fisher discriminant analysis with kernels. In: Neural Networks for Signal Processing IX, pp. 41–48 (1999)
30. Moses, Y., Adini, Y., Ullman, S.: Face recognition: The problem of compensating for changes in illumination direction. In: Proceedings of the European Conference on Computer Vision, vol. A, pp. 286–296 (1994)
31. NIST: Face Recognition Vendor Tests (FRVT) (2006). http://www.frvt.org
32. Park, J., Lee, S.: Stepwise reconstruction of high-resolution facial image based on interpolated morphable face model. In: Proc. Int'l Conf. Audio- and Video-based Biometric Person Authentication, pp. 102–111 (2005)
33. Penev, P., Atick, J.: Local feature analysis: A general statistical theory for object representation. Neural Syst. 7(3), 477–500 (1996)
34. Phillips, P.J., Moon, H., Rizvi, S.A., Rauss, P.J.: The FERET evaluation methodology for face-recognition algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1090–1104 (2000)
35. Roweis, S., Saul, L.: Nonlinear dimensionality reduction by locally linear embedding. Science 290(22), 2323–2326 (2000)
36. Samal, A., Iyengar, P.A.: Automatic recognition and analysis of human faces and facial expressions: A survey. Pattern Recognit. 25, 65–77 (1992)
37. Schölkopf, B., Smola, A., Müller, K.R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 10, 1299–1319 (1999)
38. Sirovich, L., Kirby, M.: Low-dimensional procedure for the characterization of human faces. J. Opt. Soc. Am. A 4(3), 519–524 (1987)
39. Tenenbaum, J., Silva, V., Langford, J.: A global geometric framework for nonlinear dimensionality reduction. Science 290(22), 2319–2323 (2000)
40. Tistarelli, M., Li, S., Chellappa, R. (eds.): Handbook of Remote Biometrics for Surveillance and Security. Springer, Berlin (2009)
41. Turk, M.: A random walk through eigenspace. IEICE Trans. Inf. Syst. E84-D(12), 1586–1695 (2001)
42. Turk, M.A., Pentland, A.P.: Eigenfaces for recognition. J. Cogn. Neurosci. 3(1), 71–86 (1991)
43. Valentin, D., Abdi, H., O'Toole, A.J., Cottrell, G.W.: Connectionist models of face processing: A survey. Pattern Recognit. 27(9), 1209–1230 (1994)
44. Viola, P., Jones, M.: Rapid object detection using a boosted cascade of simple features. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, p. 511 (2001)
45. Wiskott, L., Fellous, J., Krüger, N., von der Malsburg, C.: Face recognition by elastic bunch graph matching. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 775–779 (1997)
46. Yang, P., Shan, S., Gao, W., Li, S.Z., Zhang, D.: Face recognition using Ada-boosted Gabor features. In: Proceedings of International Conference on Automatic Face and Gesture Recognition, pp. 356–361 (2004)
47. Yao, Y., Abidi, B., Kalka, N., Schmid, N., Abidi, M.: Improving long range and high magnification face recognition: Database acquisition, evaluation, and enhancement. Comput. Vis. Image Underst. 111(2), 111–125 (2008)
48. Zhang, L., Li, S.Z., Qu, Z., Huang, X.: Boosting local feature based classifiers for face recognition. In: Proceedings of the First IEEE Workshop on Face Processing in Video, Washington, DC (2004)
49. Zhang, G., Huang, X., Li, S.Z., Wang, Y., Wu, X.: Boosting local binary pattern (LBP)-based face recognition. In: Li, S.Z., Lai, J., Tan, T., Feng, G., Wang, Y. (eds.) Advances in Biometric Personal Authentication, vol. 3338, pp. 180–187. Springer, Berlin (2005)
50. Zhao, W., Chellappa, R., Phillips, P., Rosenfeld, A.: Face recognition: A literature survey. ACM Comput. Surv. 35(4), 399–458 (2003)
Part I
Face Image Modeling and Representation
Chapter 2
Face Recognition in Subspaces

Gregory Shakhnarovich and Baback Moghaddam

2.1 Introduction

Images of faces, represented as high-dimensional pixel arrays, often belong to a manifold of intrinsically low dimension. Face recognition, and computer vision research in general, has witnessed a growing interest in techniques that capitalize on this observation and apply algebraic and statistical tools for extraction and analysis of the underlying manifold. In this chapter, we describe in roughly chronological order techniques that identify, parameterize, and analyze linear and nonlinear subspaces, from the original Eigenfaces technique to the recently introduced Bayesian method for probabilistic similarity analysis. We also discuss comparative experimental evaluation of some of these techniques, as well as practical issues related to the application of subspace methods for varying pose, illumination, and expression.

2.2 Face Space and Its Dimensionality

Computer analysis of face images deals with a visual signal (light reflected off the surface of a face) that is registered by a digital sensor as an array of pixel values. The pixels may encode color or only intensity; in this chapter, we assume the latter case (i.e., gray-level imagery). After proper normalization and resizing to a fixed m-by-n size, the pixel array can be represented as a point (i.e., a vector) in an mn-dimensional image space by simply writing its pixel values in a fixed (typically raster) order. A critical issue in the analysis of such multidimensional data is the
dimensionality, the number of coordinates necessary to specify a data point. Below we discuss the factors affecting this number in the case of face images.

2.2.1 Image Space Versus Face Space

To specify an arbitrary image in the image space, one needs to specify every pixel value. Thus, the "nominal" dimensionality of the space, dictated by the pixel representation, is mn, a high number even for images of modest size. Recognition methods that operate on this representation suffer from a number of potential disadvantages, most of them rooted in the so-called curse of dimensionality.

• Handling high-dimensional examples, especially in the context of similarity- and matching-based recognition, is computationally expensive.
• For parametric methods, the number of parameters one needs to estimate typically grows exponentially with the dimensionality. Often this number is much higher than the number of images available for training, making the estimation task in the image space ill-posed.
• Similarly, for nonparametric methods, the sample complexity (the number of examples needed to represent the underlying distribution of the data efficiently) is prohibitively high.

However, much of the surface of a face is smooth and has regular texture. Therefore, per-pixel sampling is in fact unnecessarily dense: the value of a pixel is typically highly correlated with the values of the surrounding pixels. Moreover, the appearance of faces is highly constrained; for example, any frontal view of a face is roughly symmetrical, has eyes on the sides, nose in the middle, and so on. A vast proportion of the points in the image space does not represent physically possible faces. Thus, the natural constraints dictate that face images are in fact confined to a subspace referred to as the face subspace.

2.2.2 Principal Manifold and Basis Functions

It is common to model the face subspace as a (possibly disconnected) principal manifold embedded in the high-dimensional image space. Its intrinsic dimensionality is determined by the number of degrees of freedom within the face subspace; the goal of subspace analysis is to determine this number and to extract the principal modes of the manifold. The principal modes are computed as functions of the pixel values and referred to as basis functions of the principal manifold.

To make these concepts concrete, consider a straight line in R^3, passing through the origin and parallel to the vector a = [a1, a2, a3]^T. Any point on the line can be described by three coordinates; nevertheless, the subspace that consists of all points on the line has a single degree of freedom, with the principal mode corresponding
to translation along the direction of a. Consequently, representing the points in this subspace requires a single basis function:

φ(x1, x2, x3) = ∑_{j=1}^{3} a_j x_j.

The analogy here is between the line and the face subspace, and between R^3 and the image space.

Note that, in theory, according to the described model, any face image should fall in the face subspace. In practice, owing to sensor noise, the signal usually has a nonzero component outside the face subspace. This introduces uncertainty into the model and requires algebraic and statistical techniques capable of extracting the basis functions of the principal manifold in the presence of noise. In Sect. 2.2.3, we briefly describe principal component analysis, which plays an important role in many such techniques. For a more detailed discussion, see Gerbrands [12] and Jolliffe [17].

2.2.3 Principal Component Analysis

Principal component analysis (PCA) [17] is a dimensionality reduction technique based on extracting the desired number of principal components of multidimensional data. The first principal component is the linear combination of the original dimensions that has the maximum variance; the nth principal component is the linear combination with the highest variance, subject to being orthogonal to the first n − 1 principal components.

The idea of PCA is illustrated in Fig. 2.1a; the axis labeled φ1 corresponds to the direction of maximum variance and is chosen as the first principal component. In a two-dimensional case, the second principal component is then determined uniquely by the orthogonality constraint; in a higher-dimensional space, the selection process would continue, guided by the variances of the projections.

PCA is closely related to the Karhunen-Loève Transform (KLT) [21], which was derived in the signal processing context as the orthogonal transform with the basis Φ = [φ1, ..., φN]^T that, for any k ≤ N, minimizes the average L2 reconstruction error for data points x:

ε(x) = ‖ x − ∑_{i=1}^{k} (φ_i^T x) φ_i ‖.    (2.1)

One can show [12] that, under the assumption that the data are zero-mean, the formulations of PCA and KLT are identical. Without loss of generality, we hereafter assume that the data are indeed zero-mean; that is, the mean face x̄ is always subtracted from the data.

The basis vectors in KLT can be calculated in the following way. Let X be the N × M data matrix whose columns x1, ..., xM are observations of a signal embedded in R^N; in the context of face recognition, M is the number of available face images, and N = mn is the number of pixels in an image. The KLT basis Φ is obtained by solving the eigenvalue problem Λ = Φ^T Σ Φ, where Σ is the covariance
matrix of the data,

Σ = (1/M) ∑_{i=1}^{M} x_i x_i^T.    (2.2)

Fig. 2.1 The concept of PCA/KLT. a Solid lines, the original basis; dashed lines, the KLT basis. The dots are selected at regularly spaced locations on a straight line rotated at 30° and then perturbed by isotropic 2D Gaussian noise. b The projection (1D reconstruction) of the data using only the first principal component

Here Φ = [φ1, ..., φN]^T is the eigenvector matrix of Σ, and Λ is the diagonal matrix with the eigenvalues λ1 ≥ ··· ≥ λN of Σ on its main diagonal, so φj is the eigenvector corresponding to the jth largest eigenvalue. It can then be shown that the eigenvalue λi is the variance of the data projected onto φi.

Thus, to perform PCA and extract k principal components of the data, one must project the data onto Φk, the first k columns of the KLT basis Φ, which correspond to the k highest eigenvalues of Σ. This can be seen as a linear projection R^N → R^k that retains the maximum energy (i.e., variance) of the signal. Another important property of PCA is that it decorrelates the data: the covariance matrix of Φ_k^T X is always diagonal. The main properties of PCA are summarized by

x ≈ Φ_k y,    Φ_k^T Φ_k = I,    E{y_i y_j} = 0 for i ≠ j    (2.3)

namely, approximate reconstruction, orthonormality of the basis Φk, and decorrelated principal components y_i = φ_i^T x, respectively. These properties are illustrated in Fig. 2.1, where PCA is successful in finding the principal manifold, and in Fig. 2.8a (see later), where it is less successful, owing to clear nonlinearity of the principal manifold.

PCA may be implemented via singular value decomposition (SVD). The SVD of an M × N matrix X (M ≥ N) is given by

X = U D V^T    (2.4)
where the M × N matrix U and the N × N matrix V have orthonormal columns, and the N × N matrix D has the singular values of X (the square roots of the eigenvalues of XX^T) on its main diagonal and zeros elsewhere. It can be shown that U = Φ, so SVD allows efficient and robust computation of PCA without the need to estimate the data covariance matrix Σ (2.2). When the number of examples M is much smaller than the dimension N, this is a crucial advantage.

2.2.4 Eigenspectrum and Dimensionality

An important, largely unsolved problem in dimensionality reduction is the choice of k, the intrinsic dimensionality of the principal manifold. No analytical derivation of this number for a complex natural visual signal is available to date. To simplify this problem, it is common to assume that in the noisy embedding of the signal of interest (in our case, a point sampled from the face subspace) in a high-dimensional space, the signal-to-noise ratio is high. Statistically, this means that the variance of the data along the principal modes of the manifold is high compared to the variance within the complementary space.

This assumption relates to the eigenspectrum, the set of eigenvalues of the data covariance matrix Σ. Recall that the ith eigenvalue is equal to the variance along the ith principal component; thus, a reasonable algorithm for detecting k is to search for the location along the decreasing eigenspectrum where the value of λi drops significantly. A typical eigenspectrum for a face recognition problem, and the natural choice of k for such a spectrum, is shown in Fig. 2.3b (see later). In practice, the choice of k is also guided by computational constraints, related to the cost of matching within the extracted principal manifold and the number of available face images. See Penev and Sirovich [29] as well as Sects. 2.3.2 and 2.3.4 for more discussion of this issue.
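Sections 2.2.3 and 2.2.4 translate directly into a short numpy sketch. Following the convention of Sect. 2.2.3, X here stores one zero-mean observation per column; the SVD route avoids forming Σ explicitly, and the energy-based rule for k is one common heuristic, offered alongside the "knee in the eigenspectrum" criterion described above rather than as the chapter's prescribed method.

```python
import numpy as np

def pca_svd(X):
    """PCA/KLT via SVD. X holds one zero-mean observation per column,
    shape (N, M): N = mn pixels, M face images, typically M << N."""
    U, d, _ = np.linalg.svd(X, full_matrices=False)
    eigvals = d ** 2 / X.shape[1]   # eigenvalues of Sigma = (1/M) X X^T
    return U, eigvals               # columns of U are the KLT basis

def choose_k(eigvals, energy=0.95):
    """Pick the dimensionality k that retains a fixed fraction of the
    total variance, one heuristic complementing the eigenspectrum-knee
    criterion of Sect. 2.2.4."""
    ratio = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratio, energy)) + 1
```

Projecting a new (mean-subtracted) image x onto the manifold is then y = U[:, :k].T @ x, and the properties in (2.3) follow from the orthonormal columns of U.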
2.3 Linear Subspaces

Perhaps the simplest case of principal manifold analysis arises under the assumption that the principal manifold is linear. After the origin has been translated to the mean face (the average image in the database) by subtracting it from every image, the face subspace is a linear subspace of the image space. In this section, we describe methods that operate under this assumption and under its generalization, a multilinear manifold.

Fig. 2.2 Eigenfaces: the average face on the left, followed by seven top eigenfaces. From Turk and Pentland [36], with permission

2.3.1 Eigenfaces and Related Techniques

In their ground-breaking work in 1990, Kirby and Sirovich [19] proposed the use of PCA for face analysis and representation. Their paper was followed by the "eigenfaces" technique of Turk and Pentland [35], the first application of PCA to face recognition. Because the basis vectors constructed by PCA had the same dimension as the input face images, they were named "eigenfaces." Figure 2.2 shows an example of the mean face and a few of the top eigenfaces. Each face image was projected (after subtracting the mean face) into the principal subspace; the coefficients of the PCA expansion were averaged for each subject, resulting in a single k-dimensional representation of that subject. When a test image was projected into the subspace, Euclidean distances between its coefficient vector and those representing each subject were computed. Depending on the smallest such distance and on the PCA reconstruction error (2.1), the image was classified as belonging to one of the familiar subjects, as a new face, or as a nonface. The latter demonstrates the dual use of subspace techniques for detection: when the appearance of an object class (e.g., faces) is modeled by a subspace, the distance from this subspace can serve to classify an object as a member or a nonmember of the class.

2.3.2 Probabilistic Eigenspaces

The role of PCA in the original Eigenfaces was largely confined to dimensionality reduction. The similarity between images I1 and I2 was measured in terms of the Euclidean norm of the difference Δ = I1 − I2 projected into the subspace, essentially ignoring the variation modes both within the subspace and outside it. This was improved in the extension of eigenfaces proposed by Moghaddam and Pentland [24, 25], which uses a probabilistic similarity measure based on a parametric estimate of the probability density p(Δ | Ω). A major dif