Software Used in Face Recognition Technology
“A biometric is a physiological or behavioral characteristic
of a human being that can distinguish one person from
another and that theoretically can be used for identification
or verification of identity.”
WHAT IS BIOMETRICS?
Biometric applications available today are
categorized into two sectors:
Physiological: iris, fingerprints, hand, retina, and face
Behavioral: voice, typing pattern, signature
Face recognition systems (FRSs) are an important field in computer
vision, because they represent a non-invasive biometric identification technique.
1. A face detection algorithm is used for extracting faces from
video frames (training videos) and generating a face database.
2. Filtering and preprocessing are applied to face images obtained
in the previous step.
3. A collection of machine learning algorithms are trained using as
input data the faces obtained in the previous step.
4. Finally, the classifiers are used to classify faces obtained from new video frames (the test set).
Facial recognition is a form of computer vision that uses faces to
attempt to identify a person or verify a person’s claimed identity.
For face recognition there are two types of comparisons:
- Identification: figuring out "Who is X?",
  accomplished by the system performing a "one-to-many" search.
- Verification: answering the question "Is this X?",
  accomplished by the system performing a "one-to-one" comparison.
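The two comparison modes can be sketched with a toy distance-based matcher. The gallery vectors, names, and threshold below are invented for illustration only and are not part of the system described here:

```python
import numpy as np

# Toy "face print" gallery: one feature vector per enrolled person
# (values are arbitrary illustrative numbers).
gallery = {
    "alice": np.array([0.9, 0.1, 0.3]),
    "bob":   np.array([0.2, 0.8, 0.5]),
}

def identify(probe):
    """One-to-many: 'Who is X?' -- return the closest enrolled identity."""
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - probe))

def verify(probe, claimed, threshold=0.5):
    """One-to-one: 'Is this X?' -- compare only against the claimed identity."""
    return np.linalg.norm(gallery[claimed] - probe) < threshold

probe = np.array([0.85, 0.15, 0.25])
print(identify(probe))           # closest match in the whole gallery
print(verify(probe, "alice"))    # is the distance to alice under the threshold?
print(verify(probe, "bob"))
```

Identification scans every stored record; verification touches only the claimed one, which is why one-to-one checks scale so much better.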
Methods of face recognition
Feature extraction methods
These methods process the input image to identify, extract, and measure distinctive
facial features such as the eyes, mouth, nose, etc.
They then compute the geometric relationships among those facial points,
thus reducing the input facial image to a vector of geometric features.
Holistic approaches attempt to identify faces using global
representations, i.e., descriptions based on the entire image rather
than on local features of the face.
During the past decades, several ML algorithms have been
proposed for classification tasks.
Most of them were derived from a theoretical viewpoint, under
assumptions about the data distribution, the characteristics of the
classification task, the signal-to-noise ratio, etc.
In practice, these assumptions are often hard to verify.
Therefore, a practical way to select an appropriate model for a
given classification task is to compare the candidate classifiers
experimentally.
Five widely used machine learning classifiers:
K-Nearest Neighbor (KNN)
Locally-Weighted Learning (LWL)
Naive Bayes classifier (NB)
Decision Table Classifier (DT)
Single Decision Tree (SDT).
Single Decision Tree (SDT):
Decision tree induction is the learning of decision trees from class-labeled training tuples.
A decision tree is a flowchart-like tree structure.
Each internal node (non leaf node) denotes a test on an attribute.
Each branch represents an outcome of the test.
Each leaf node (or terminal node) holds a class label.
The topmost node in a tree is the root node.
A path is traced from the root to a leaf node.
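The flowchart structure above can be illustrated with a hand-written toy tree. The attributes (eye distance, mouth width), thresholds, and class labels below are invented purely for illustration: each `if` is an internal-node test, each branch an outcome of the test, and each `return` a leaf holding a class label.

```python
def classify_face(eye_distance, mouth_width):
    """Toy decision tree: trace a path from the root test down to a leaf."""
    if eye_distance < 60:          # root node: test on the first attribute
        if mouth_width < 40:       # internal node: test on a second attribute
            return "subject_A"     # leaf node: class label
        return "subject_B"         # leaf node: class label
    return "subject_C"             # leaf node: class label

print(classify_face(55, 35))   # path: root -> left branch -> left leaf
```

Classifying a sample is exactly the root-to-leaf traversal described above.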
Single Decision Tree (SDT):
Most algorithms for decision tree induction follow a top-down approach.
The algorithm starts with a training set of tuples and their associated class labels.
The training set is recursively partitioned into smaller subsets as the
tree is being built.
A splitting criterion determines how to split the data partition D into
smaller partitions according to the outcomes of an attribute test.
Well-known algorithms for generating decision trees include ID3, C4.5, and CART.
Consider an experiment in which two different videos of 10-second duration were used.
A total of 10 x 30 x 2 = 600 frames (10 seconds at 30 frames per
second, for two videos) were processed. The input video contained 6
different individuals, giving a total of 3,600 samples (600 for each
individual). Three versions of the dataset were generated: one at a
100 x 100 pixel face resolution, one at 50 x 50 pixels, and one at
25 x 25 pixels.
The face detector was implemented with OpenCV.
Faces were detected using the function cvHaarDetectObjects.
The Semi-Aided Labeling Module (SALM) reads the input
video and, for each frame where at least one face is detected by
the face detection module, lets the user assign an identity label to
the detected faces.
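The SALM flow can be sketched as a loop over frames. The original system calls OpenCV's cvHaarDetectObjects; since that detector and the labeling interface are not shown in the text, the `detect_faces` stub and the `ask_user` callback below are stand-ins invented for illustration:

```python
def detect_faces(frame):
    """Stand-in for the Haar-cascade detector (cvHaarDetectObjects in the
    original system): returns a list of (x, y, w, h) face rectangles."""
    # Hypothetical fixed detection, for illustration only.
    return [(10, 10, 50, 50)] if frame.get("has_face") else []

def semi_aided_labeling(video_frames, ask_user):
    """SALM sketch: for each frame where at least one face is detected,
    record the face region together with the user-supplied identity label."""
    dataset = []
    for frame in video_frames:
        for rect in detect_faces(frame):
            label = ask_user(frame, rect)   # the "semi-aided" step
            dataset.append((rect, label))
    return dataset

frames = [{"has_face": True}, {"has_face": False}, {"has_face": True}]
labeled = semi_aided_labeling(frames, ask_user=lambda f, r: "subject_1")
print(len(labeled))   # two frames contained a face
```

The user only intervenes to confirm or name detections, which is what makes the labeling "semi-aided" rather than fully manual.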
Filtering and Preprocessing (FPM)
This module performs the following transformations:
RGB to gray-scale transformation: to reduce the amount of
data to be processed, the 24-bit-per-pixel RGB format is transformed
into an 8-bit-per-pixel gray-scale format.
Scaling: the face images are scaled to a fixed number of rows
and columns. The output resolution for each face can be set by the
user according to the required accuracy.
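The two transformations can be sketched directly in NumPy (the original module uses OpenCV; the luminance weights below are the standard Rec. 601 coefficients, and nearest-neighbor sampling stands in for whatever interpolation the module actually uses):

```python
import numpy as np

def rgb_to_gray(rgb):
    """24-bit RGB -> 8-bit gray, using the standard luminance weights."""
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb @ weights).astype(np.uint8)

def scale_nearest(img, rows, cols):
    """Nearest-neighbor scaling to a fixed (rows, cols) face resolution."""
    r_idx = (np.arange(rows) * img.shape[0] / rows).astype(int)
    c_idx = (np.arange(cols) * img.shape[1] / cols).astype(int)
    return img[np.ix_(r_idx, c_idx)]

face = np.random.randint(0, 256, size=(100, 100, 3), dtype=np.uint8)
gray = rgb_to_gray(face)            # (100, 100), one byte per pixel
small = scale_nearest(gray, 25, 25) # one of the three dataset resolutions
print(gray.shape, small.shape)
```

Dropping color cuts the data to one third; scaling 100 x 100 faces down to 25 x 25 cuts the pixel count by a further factor of 16.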
Tabular Dataset Building Module (TDBM)
This module takes the image pixels and generates a tabular
dataset, in which the rows are the face samples of the subjects and
the columns are the image pixels.
The final column represents the class attribute.
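A minimal sketch of the TDBM step, assuming gray face images of a fixed resolution: each image is flattened into one row of pixel values and the class attribute is appended as the final column.

```python
import numpy as np

def build_tabular_dataset(faces, labels):
    """Flatten each gray face image into one row of pixel values and
    append the class attribute (subject label) as the final column."""
    pixels = np.stack([f.reshape(-1) for f in faces])   # one row per face
    classes = np.array(labels).reshape(-1, 1)           # final column
    return np.hstack([pixels, classes])

# Two toy 25 x 25 "faces" with illustrative class labels 0 and 1.
faces = [np.zeros((25, 25), dtype=int), np.ones((25, 25), dtype=int)]
table = build_tabular_dataset(faces, labels=[0, 1])
print(table.shape)   # (2, 626): 625 pixel columns + 1 class column
```

At the 25 x 25 resolution each sample becomes a 625-dimensional pixel vector, which is the representation the classifiers are trained on.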
For performing the training of the classification algorithms, the
following steps are required:
Permute and split dataset. This operation is performed by the
Random Permutation and Splitting Module (RPSM). Basically, a
random permutation of the samples contained in the tabular dataset
is performed, and the resulting dataset is divided into two datasets:
the training dataset and the test dataset.
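The RPSM step can be sketched as follows (the 70/30 split fraction and the seed are illustrative assumptions; the text does not state the actual split ratio):

```python
import numpy as np

def permute_and_split(dataset, train_fraction=0.7, seed=0):
    """RPSM sketch: randomly permute the rows of the tabular dataset,
    then divide it into a training set and a test set."""
    rng = np.random.default_rng(seed)
    permuted = dataset[rng.permutation(len(dataset))]
    cut = int(len(permuted) * train_fraction)
    return permuted[:cut], permuted[cut:]

data = np.arange(100).reshape(10, 10)   # 10 toy samples, 10 columns each
train, test = permute_and_split(data)
print(train.shape, test.shape)          # (7, 10) (3, 10)
```

Permuting before splitting matters here because consecutive video frames are nearly identical; without shuffling, the test set would contain only the last frames of the recording.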
Train classification algorithms. Each classification algorithm takes
as input the training data set generated by the RPSM, and performs
the model building for each classifier. Later, the model for each
classifier is stored on disk for later use in the classification step.
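KNN is one of the five classifiers listed earlier; a minimal 1-nearest-neighbor sketch shows the train-then-store pattern. The model format, the toy samples, and the file name are illustrative assumptions, with `pickle` standing in for whatever serialization the system actually uses:

```python
import pickle
import numpy as np

def fit_1nn(X, y):
    """'Training' a 1-NN classifier just stores the labeled samples."""
    return {"X": np.asarray(X, float), "y": list(y)}

def predict_1nn(model, x):
    """Label a face with the class of its nearest stored sample."""
    distances = np.linalg.norm(model["X"] - np.asarray(x, float), axis=1)
    return model["y"][int(np.argmin(distances))]

# Train on toy pixel rows, then store the model on disk for later use
# in the classification step (file name is illustrative).
model = fit_1nn([[0, 0], [10, 10]], ["subject_A", "subject_B"])
with open("knn_model.pkl", "wb") as f:
    pickle.dump(model, f)

with open("knn_model.pkl", "rb") as f:
    restored = pickle.load(f)
print(predict_1nn(restored, [1, 1]))   # prints "subject_A"
```

Persisting each trained model means the expensive training pass runs once, while the classification step can reload and reuse the models at will.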
This module, using the previously trained classifiers, takes as
input the faces from the test set, applies the filtering and
preprocessing operators, and evaluates each test face against the
model generated by each trained classifier.
After this comparison, the face image is classified with the label
(name) predicted by each classifier.
The output of each classifier is processed by the Performance
Evaluation Module (PEM), which generates a table with a
comparison among the several classifiers.
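The PEM comparison can be sketched as an accuracy table. The prediction lists below are invented purely to illustrate the table format; they are not results from the paper:

```python
import numpy as np

def accuracy(predicted, actual):
    """Fraction of test faces labeled with the correct subject."""
    return float(np.mean(np.asarray(predicted) == np.asarray(actual)))

# Ground-truth subject labels for six toy test faces, and the
# (hypothetical) predictions of two of the classifiers.
actual = ["A", "A", "B", "B", "C", "C"]
predictions = {
    "KNN": ["A", "A", "B", "B", "C", "C"],   # all correct
    "SDT": ["A", "B", "B", "B", "C", "C"],   # one error
}
for name, pred in predictions.items():
    print(f"{name}: {accuracy(pred, actual):.2f}")
```

Running every classifier on the same permuted test set is what makes the resulting table a fair head-to-head comparison.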
SOFTWARE USED IN FACE RECOGNITION TECHNOLOGY
Facial recognition software falls into a larger group of
technologies known as biometrics.
Here is the basic process used by the face recognition system to
capture and compare images:
When the system is attached to a video surveillance system, the
recognition software searches the field of view of a video camera
for faces.
If there is a face in the view, it is detected within a fraction of a
second. A multi-scale algorithm is used to search for faces in low
resolution.
Once a face is detected, the system determines the head's position,
size and pose. A face needs to be turned at least 35 degrees toward
the camera for the system to register it.
Normalization is performed regardless of the head's location and
distance from the camera. Light does not impact the normalization
process.
The system translates the facial data into a unique code. This
coding process allows for easier comparison of the newly acquired
facial data to stored facial data.
The newly acquired facial data is compared to the stored data and,
ideally, linked to at least one stored facial representation. Matching is
the mathematical technique the system uses to compare encoded faces. The system
can match multiple face prints at a rate of 60 million per minute from
memory or 15 million per minute from hard disk. Each comparison is
assigned a score on a scale of one to 10. If a score is above a predetermined
threshold, a match is declared.
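The scoring step can be sketched as follows. The mapping from comparison distance to the 1-to-10 scale, the maximum distance, and the threshold value are all illustrative assumptions; the text does not specify the actual scoring function:

```python
def match_score(distance, max_distance=100.0):
    """Map a comparison distance onto the system's 1-to-10 scale:
    identical face prints score 10, maximally different ones score 1."""
    similarity = max(0.0, 1.0 - distance / max_distance)
    return 1.0 + 9.0 * similarity

def is_match(distance, threshold=8.0):
    """Declare a match when the score exceeds a predetermined threshold."""
    return match_score(distance) >= threshold

print(match_score(0.0))   # identical prints score the maximum, 10.0
print(is_match(10.0))     # small distance -> high score -> match
print(is_match(90.0))     # large distance -> low score -> no match
```

Raising the threshold trades missed matches for fewer false alarms, which is why it is tuned per deployment rather than fixed.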
Advantages:
- Convenient and socially acceptable
- Easy to use
- Inexpensive technique of identification
Applications:
1. Replacement of PIN
2. Border control
3. Voter verification
4. Computer security
5. Government use
6. Commercial use
   a. Residential security
   b. Banking (ATM)
This work evaluates the suitability of both computer vision
and ML techniques for solving the problem of face detection and
recognition. Face recognition technologies have generally been
associated with very costly, high-security applications. Today the
core technologies have evolved, and the cost of the equipment is
dropping dramatically due to integration and other ongoing advances.
References
1. E. García Amaro, M. A. Nuño-Maganda, and M. Morales-Sandoval, "Evaluation of Machine
Learning Techniques for Face Detection and Recognition," IEEE, 2012.
2. C. Iancu, P. Corcoran, and G. Costache, "A Review of Face Recognition
Techniques for In-Camera Applications," IEEE, 2007.
3. B. C. Becker and E. G. Ortiz, "Evaluation of Face Recognition Techniques for Application
to Facebook," IEEE, 2008.
4. D. Bhattacharyya, R. Ranjan, F. Alisherov, and M. Choi, "Biometric authentication: A review,"
International Journal of u- and e- Service, Science and Technology, vol. 3, no. 2, pp. 23–
5. C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics).
Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2006.
6. G. Bradski and A. Kaehler, Learning OpenCV. O'Reilly Media Inc., 2008.