Face recognition uses principal component analysis (PCA) to reduce the dimensionality of face images and recognize faces. PCA identifies eigenfaces, the principal components that account for the most variance in face images, and represents each face as a linear combination of the eigenfaces plus the mean face. To recognize an unknown face, it projects the face onto the eigenface space, calculates its weight vector, and compares it to the trained faces using Euclidean distance. While PCA provides reasonably accurate face recognition with low error rates, it has high computational costs and limited recognition speed.
3. INTRODUCTION
WHAT IS FACE RECOGNITION?
“Face Recognition is the task of identifying an already detected face as a KNOWN
or UNKNOWN face and, in more advanced cases, telling exactly WHOSE face it is!”
WHAT IS PCA?
Principal Component Analysis (PCA) is a useful statistical technique that has found
application in fields such as face recognition and image compression.
It reduces the dimensionality of the data while retaining as much of the variation in
the original data set as possible.
The best low-dimensional space is determined by the best principal components.
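As a concrete illustration of this idea, here is a minimal PCA sketch in NumPy on synthetic data (the array shapes and the choice of keeping the top 2 components are assumptions for illustration, not part of the slides):

```python
import numpy as np

# Minimal PCA sketch on synthetic data: center, take the covariance,
# eigen-decompose, and keep the components with the most variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                    # 50 samples, 5 features

X_centered = X - X.mean(axis=0)                 # subtract the mean
C = np.cov(X_centered, rowvar=False)            # 5x5 covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)            # eigen-decomposition (ascending order)

order = np.argsort(eigvals)[::-1]               # sort by variance, descending
components = eigvecs[:, order[:2]]              # keep the top-2 principal components

X_reduced = X_centered @ components             # project onto the low-dim space
print(X_reduced.shape)                          # (50, 2)
```

The same center–decompose–project pattern is what the eigenface steps below apply to face images.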
4. OPERATIONS
Algorithm:
Initialization Operations in Face Recognition.
1. Prepare the Training Set as Face Vectors.
2. Normalize the Face Vectors.
3. Calculate the Eigenvectors.
4. Reduce Dimensionality.
5. Map Back to the Original Dimensionality.
6. Represent Each Face Image as a Linear Combination of all K Eigenvectors.
Then: Recognizing an Unknown Face.
5. STEP 1: PREPARE THE TRAINING SET AS FACE VECTORS
There are M = 16 images in the training set, each of size 112 × 92 pixels.
Convert each face image in the training set to a face vector: each image becomes a 10304 × 1 column vector Γᵢ (since 112 × 92 = 10304), stored in the face vector space.
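The conversion above can be sketched as follows (a minimal sketch where random arrays stand in for the actual 112 × 92 training photos):

```python
import numpy as np

# Sketch of Step 1, assuming M = 16 grayscale images of size 112x92
# (random arrays stand in for the actual training photos).
M, H, W = 16, 112, 92
rng = np.random.default_rng(0)
images = [rng.random((H, W)) for _ in range(M)]

# Flatten each image into a face vector Gamma_i of size N^2 x 1 = 10304 x 1,
# then stack the vectors side by side into the face vector space.
face_vectors = np.column_stack([img.flatten() for img in images])
print(face_vectors.shape)  # (10304, 16)
```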
6. STEP 2: NORMALIZE THE FACE VECTORS
(i) AVERAGE FACE VECTOR / MEAN IMAGE (Ψ)
From the M = 16 face vectors Γᵢ in the training set, calculate the average face vector Ψ = (1/M) Σᵢ Γᵢ and save it in the face vector space.
7. (ii) SUBTRACT THE MEAN IMAGE FROM EACH FACE IMAGE
For each of the M = 16 face vectors, subtract the mean image Ψ to obtain the normalized face vector Φᵢ:
Φᵢ = Γᵢ − Ψ   (e.g. Φ₁ = Γ₁ − Ψ)
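Step 2 (mean face plus subtraction) can be sketched like this (face_vectors is a hypothetical N² × M matrix standing in for the output of Step 1):

```python
import numpy as np

# Sketch of Step 2: compute the mean image Psi and the normalized
# face vectors Phi_i (synthetic data stands in for the real face vectors).
N2, M = 10304, 16
rng = np.random.default_rng(0)
face_vectors = rng.random((N2, M))              # columns are Gamma_i

psi = face_vectors.mean(axis=1, keepdims=True)  # mean image Psi, N^2 x 1
A = face_vectors - psi                          # columns are Phi_i = Gamma_i - Psi
print(A.shape)  # (10304, 16)
```

The matrix A whose columns are the Φᵢ is exactly the A used in the covariance formulas below.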
9. STEP 3: CALCULATE THE EIGENVECTORS
To calculate the eigenvectors, we need the covariance matrix C:
C = Σₙ₌₁¹⁶ Φₙ Φₙᵀ = A Aᵀ
where A = [Φ₁, Φ₂, Φ₃, …, Φ₁₆] is an N² × M matrix.
Hence C = (N² × M) · (M × N²) = N² × N² = 10304 × 10304.
10. C = 10304 × 10304, so there are N² = 10304 eigenvectors, each 10304 × 1 dimensional. Computing them is VERY TIME CONSUMING. But we need to find only K eigenvectors out of these N² eigenvectors, where K ≤ M.
SOLUTION: “DIMENSIONALITY REDUCTION”, i.e. calculate the eigenvectors from a covariance matrix of reduced dimensionality.
11. STEP 4: REDUCE DIMENSIONALITY
[Figure: the M = 16 normalized face vectors Φᵢ in the face vector space are mapped into a lower-dimensional subspace.]
12. Calculate the covariance matrix L of the lower-dimensional subspace:
L = Aᵀ A = (M × N²) · (N² × M) = M × M = 16 × 16
This yields 16 eigenvectors vᵢ, each 16 × 1 dimensional.
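The reduced covariance trick above can be sketched as follows (A is a hypothetical mean-subtracted matrix standing in for the real face data):

```python
import numpy as np

# Sketch of Step 4: eigen-decompose the small M x M matrix L = A^T A
# instead of the huge N^2 x N^2 matrix C = A A^T.
N2, M = 10304, 16
rng = np.random.default_rng(0)
A = rng.random((N2, M))
A -= A.mean(axis=1, keepdims=True)   # normalized face vectors as columns

L = A.T @ A                          # 16 x 16 instead of 10304 x 10304
eigvals, V = np.linalg.eigh(L)       # 16 eigenvectors v_i, each 16 x 1
print(L.shape, V.shape)              # (16, 16) (16, 16)
```

Decomposing a 16 × 16 matrix is trivial, whereas a 10304 × 10304 decomposition would be very expensive, which is the whole point of the reduction.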
13. Comparison: L = Aᵀ A is M × M = 16 × 16 and yields 16 eigenvectors vᵢ, each 16 × 1 dimensional, versus C = A Aᵀ = 10304 × 10304, which yields 10304 eigenvectors uᵢ, each 10304 × 1 dimensional.
14. However, the selected K eigenfaces MUST be in the ORIGINAL dimensionality of the face vector space, while the 16 eigenvectors vᵢ of L = Aᵀ A (16 × 16) live in the lower-dimensional subspace, each only 16 × 1 dimensional.
15. STEP 5: BACK TO THE ORIGINAL DIMENSIONALITY
Map each of the 16 eigenvectors vᵢ of L = Aᵀ A back to the original face vector space:
uᵢ = A vᵢ
This gives eigenvectors uᵢ in the original dimensionality, each 10304 × 1 dimensional.
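Step 5 can be sketched like this (the choice K = 8 and the unit-length normalization of each eigenface are assumptions for illustration; the slides only require K ≤ M):

```python
import numpy as np

# Sketch of Step 5: map the 16-dimensional eigenvectors v_i back to the
# original space via u_i = A v_i, then keep the top-K eigenfaces.
N2, M, K = 10304, 16, 8
rng = np.random.default_rng(0)
A = rng.random((N2, M))
A -= A.mean(axis=1, keepdims=True)

eigvals, V = np.linalg.eigh(A.T @ A)
order = np.argsort(eigvals)[::-1]        # largest eigenvalues first

U = A @ V[:, order[:K]]                  # u_i = A v_i, shape N^2 x K
U /= np.linalg.norm(U, axis=0)           # normalize each eigenface to unit length
print(U.shape)  # (10304, 8)
```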
16. From the 10304 × 1 dimensional eigenvectors uᵢ of C = A Aᵀ, the K eigenfaces are selected (the best principal components, i.e. those accounting for the most variance).
18. STEP 6: REPRESENT EACH FACE IMAGE AS A LINEAR COMBINATION OF ALL K EIGENVECTORS
Each face from the training set can be represented as a weighted sum of the K eigenfaces plus the mean face:
Face = ω₁ · (eigenface 1) + ω₂ · (eigenface 2) + ⋯ + ω_K · (eigenface K) + Ψ (mean image)
19. The K selected eigenfaces thus serve as a basis: every face in the training set is expressed as a weighted sum (ω₁, ω₂, …, ω_K) of the eigenfaces plus the mean face Ψ.
20. The weights of the iᵗʰ face form a weight vector
Ωᵢ = [ω₁ⁱ, ω₂ⁱ, ω₃ⁱ, …, ω_Kⁱ]ᵀ
which is the eigenface representation of the iᵗʰ face. We calculate each face's weight vector.
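Computing every training face's weight vector reduces to one matrix product, sketched here (the orthonormal stand-in eigenfaces and the shapes are illustrative assumptions):

```python
import numpy as np

# Sketch of Step 6: project each normalized face Phi_i onto the K
# eigenfaces to get its weight vector Omega_i.
N2, M, K = 10304, 16, 8
rng = np.random.default_rng(0)
A = rng.random((N2, M))
A -= A.mean(axis=1, keepdims=True)           # columns are Phi_i
U, _ = np.linalg.qr(rng.random((N2, K)))     # stand-in orthonormal eigenfaces

Omega = U.T @ A     # column i is Omega_i = [w_1^i, ..., w_K^i]^T
print(Omega.shape)  # (8, 16)
```

Each 16-image face is now summarized by just K = 8 numbers, which is what makes the later distance comparison cheap.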
21. RECOGNIZING AN UNKNOWN FACE
Given an unknown input image:
1. Convert the input to a face vector.
2. Normalize the face vector.
3. Project the normalized face vector onto the eigenspace.
4. Get the weight vector Ω_new = [ω₁, ω₂, ω₃, …, ω_K]ᵀ.
5. Compute the Euclidean distance E = ‖Ω_new − Ωᵢ‖ to each trained weight vector Ωᵢ.
If E < θₜ, the input is recognized as the matching trained face; otherwise it is reported as unknown.
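The recognition step can be sketched as follows (all data here is synthetic, and the threshold θₜ is an assumed, tunable value):

```python
import numpy as np

# Sketch of recognition: compare the probe's weight vector Omega_new to
# every stored weight vector Omega_i by Euclidean distance.
K, M = 8, 16
rng = np.random.default_rng(0)
Omega_train = rng.random((K, M))          # stored weight vectors Omega_i
Omega_new = Omega_train[:, 3].copy()      # probe identical to training face 3

dists = np.linalg.norm(Omega_train - Omega_new[:, None], axis=0)
best = int(np.argmin(dists))
theta_t = 0.5                             # assumed, tunable threshold
result = best if dists[best] < theta_t else "unknown"
print(result)  # 3
```

A probe far from every stored weight vector exceeds θₜ and is reported as "unknown" instead of a face index.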
22. PROS & CONS
Pros:
- PCA-based methods provide good face recognition with reasonably low error rates.
- A low-to-high dimensional eigenspace is available for alignment.
- Improved image reconstruction and recognition performance.
Cons:
- Implementation cost is too high.
- Limited input.
- Recognition time is too high.