A
DISSERTATION REPORT
ON
FACE RECOGNITION IN REAL TIME
BY
ADITYA GUPTA
(121297012)
UNDER THE GUIDANCE OF
Dr. (Mrs.) M. A. Joshi
In fulfillment of
M. TECH ELECTRONICS & COMMUNICATION
(SIGNAL PROCESSING)
DEPARTMENT OF
ELECTRONICS & TELECOMMUNICATIONS ENGINEERING
COLLEGE OF ENGINEERING
PUNE – 411005.
Personal identity refers to a set of attributes (e.g., name, social security number, etc.) that are
associated with a person. Identity management is the process of creating, maintaining and
destroying identities of individuals in a population. A reliable identity management system is
urgently needed in order to combat the epidemic growth in identity theft and to meet the
increased security requirements in a variety of applications ranging from international border
crossing to accessing personal information. Establishing (determining or verifying) the
identity of a person is called person recognition or authentication and it is a critical task in
any identity management system.
It is necessary to replace knowledge-based (passwords) and token-based (identity cards) mechanisms with stronger authentication schemes for reliable identity determination, namely those based on biometrics.
Biometric authentication, or simply biometrics, offers a reliable solution to the problem of
identity determination by establishing the identity of a person based on “who he is”.
Biometric systems automatically verify a person’s identity based on his physical and
behavioural characteristics such as fingerprint, face, iris, voice and gait.
A number of physical and behavioural body traits can be used for biometric
recognition. Examples of physical traits include face, fingerprint, iris, palm print, and hand
geometry. Gait, signature and keystroke dynamics are some of the behavioural characteristics
that can be used for person authentication. Each biometric modality has its advantages and
limitations, and no single modality is expected to meet all the requirements such as accuracy,
practicality and cost imposed by all applications.
A typical biometric system consists of four main components, namely, sensor, feature
extractor, matcher and decision modules. A sensor acquires the biometric data from an
individual. A quality estimation algorithm is used many times to ascertain whether the
3
acquired biometric data is good enough to be processed by the subsequent components. When
the data is not of sufficiently high quality, it is usually re-acquired from the user. The feature
extractor computes only the salient information from the acquired biometric sample to form a
new representation of the biometric trait, generally termed as the feature set. Ideally, the
feature set should possess uniqueness for every single individual (extremely small inter-user
similarity) and also should be invariant with respect to changes in the different samples of the
same biometric trait collected from the same person (extremely small intra-user variability).
The feature set obtained during enrolment is stored in the system database as a template.
During authentication, the feature set extracted from the biometric sample is compared to the
template by the matcher, which determines the degree of similarity between the two feature
sets generated and stored. The identity of the user is then decided based on the similarity score given by the matcher module.
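The four-module flow just described can be sketched in code. A minimal sketch, assuming a toy feature extractor (mean and variance of pixel intensities), a distance-based matcher and an illustrative threshold; none of these concrete choices come from the report itself:

```python
import math

def extract_features(sample):
    """Feature extractor: reduce a raw biometric sample (a list of
    pixel intensities) to a compact feature set (mean, variance)."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / n
    return (mean, var)

def match(query_features, template):
    """Matcher: map the distance between feature sets to a similarity
    score in (0, 1]; 1.0 means the feature sets are identical."""
    return 1.0 / (1.0 + math.dist(query_features, template))

def decide(score, threshold=0.6):
    """Decision module: accept or reject based on the match score."""
    return "genuine" if score >= threshold else "impostor"

# Enrolment: the extracted feature set is stored as the template.
template = extract_features([10, 12, 11, 13, 12, 11])

# Authentication: a fresh sample of the same trait is matched.
score = match(extract_features([11, 12, 12, 13, 11, 11]), template)
decision = decide(score)
```

In a real system the feature extractor and matcher would of course be far more sophisticated; the point here is only the division of labour between the four modules.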
The functionalities provided by a biometric system can be categorized as verification and
identification. Figure 1.1 shows the enrollment and authentication stages of a biometric
system operating in the verification and identification modes. In verification, the user claims
an identity and the system verifies whether the claim is genuine. Here, the query is compared
only to the template corresponding to the claimed identity. If the input from user and the
template of the claimed identity have a high degree of similarity, then the claim is accepted as
“genuine”. Otherwise, the claim is rejected and the user is considered an “impostor”.
Identification functionality can be classified into positive and negative identification.
In positive identification, the user attempts to positively identify himself to the system; here
the user need not claim his identity explicitly. Screening is often used at airports to verify
whether a passenger’s identity matches any person on a “watch-list”; in this situation the
authorities need not establish the full identity of each individual. Screening can also be used to
prevent the issue of multiple credential records (e.g., driver’s license, passport) to the same
person. Negative identification is critical in applications such as welfare disbursement to
prevent a person from claiming multiple benefits (i.e., double dipping) under different names.
In both positive and negative identification, the user’s biometric input is compared with the
templates of all the persons enrolled in the database. The system simply checks for similarity
of the user’s input with the existing database and outputs whether the user is enrolled or not.
The number of enrolled users in the database can be quite large, which makes the identification process more challenging than verification.
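The difference between the two modes lies only in how many templates the query is compared against. A sketch of both, with a hypothetical similarity function and illustrative thresholds:

```python
def similarity(a, b):
    """Toy similarity between two feature vectors (higher = more alike)."""
    return 1.0 / (1.0 + sum(abs(x - y) for x, y in zip(a, b)))

def verify(query, database, claimed_id, threshold=0.5):
    """Verification (1:1): compare the query only to the claimed identity."""
    return similarity(query, database[claimed_id]) >= threshold

def identify(query, database, threshold=0.5):
    """Identification (1:N): compare against every enrolled template and
    return the best-matching identity, or None if no one is close enough."""
    best_id, best_score = None, 0.0
    for user_id, template in database.items():
        score = similarity(query, template)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None

# Two enrolled users with stored (toy) templates.
database = {"alice": (1.0, 2.0), "bob": (5.0, 1.0)}
```

The 1:N loop is exactly why identification cost grows with the number of enrolled users.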
Figure 1.1 shows the verification process and Figure 1.2 shows the identification process. (Both block diagrams show the user’s sample passing through the biometric sensor, quality assessment module and feature extractor; the feature set is stored in the system database at enrolment, while at authentication the matcher and decision module compare it against the stored template.)
Biometric traits collected over a period of time may vary dramatically. The variability
observed in the biometric feature set of an individual is known as intra-user variations. For
example, in the case of the face, factors such as facial expression, the person’s mood and
appearance at that instant, and feature extraction errors lead to large intra-user variations. On the
other hand, features extracted from the biometric traits of different individuals can be quite
similar. Appearance-based facial features will exhibit a large similarity for certain pairs of
individuals, e.g., identical twins, and such a similarity is usually referred to as inter-user similarity.
A biometric system can make two types of errors, namely, false rejection and false
acceptance. A false acceptance occurs when two samples from different individuals are
incorrectly recognized as a match due to large inter-user similarity. When the intra-user
variation is large, two samples of the same biometric trait of an individual may not be
recognized as a match and this leads to a false rejection error. Therefore, the basic measures
of the accuracy of a biometric system are False Acceptance Rate (FAR) and False Rejection
Rate (FRR).
A False Rejection Rate of 2% indicates that on average, 2 in 100 genuine attempts do
not succeed. A majority of the false rejection errors are usually due to incorrect interaction of
the user with the biometric sensor and can be easily rectified by allowing the user to present
his/her biometric trait again. A False Acceptance Rate of 0.2% indicates that on average, 2 in
1,000 impostor attempts are likely to succeed.
Other than false rejection and false acceptance, two other types of failures are also
possible in a practical biometric system. If an individual cannot interact correctly with the
biometric user interface or if the biometric samples of the individual are of very poor quality,
the sensor or feature extractor may not be able to process these individuals. Hence, they
cannot be enrolled in the biometric system and the proportion of individuals who cannot be
enrolled is referred to as Failure to Enroll Rate (FTER). In some cases, a particular sample
provided by the user during authentication cannot be acquired or processed reliably. This
error is called failure to capture and the fraction of authentication attempts in which the
biometric sample cannot be captured is known as Failure to Capture Rate (FTCR).
A match score is termed a genuine or authentic score if it indicates the similarity
between two samples of the same user. An impostor score measures the similarity between two
samples of different users. An impostor score that exceeds the threshold η results in a false
accept, while a genuine score that falls below the threshold η results in a false reject. The
Genuine Accept Rate (GAR) is defined as the fraction of genuine scores exceeding the
threshold η. Therefore,
FAR(η) = P(s ≥ η | impostor) = ∫_η^∞ p(s | impostor) ds    (1)

GAR(η) = P(s ≥ η | genuine) = ∫_η^∞ p(s | genuine) ds    (2)

where s denotes the match score and p(s | ·) the corresponding score density; the False Rejection Rate is FRR(η) = 1 − GAR(η).
Regulating the value of η changes the FRR and the FAR values, but for a given biometric
system, it is not possible to decrease both these errors simultaneously.
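This trade-off can be demonstrated empirically. A sketch on small hand-made score lists; raising η from 0.55 to 0.75 removes the false accept but introduces false rejects:

```python
def far(impostor_scores, eta):
    """FAR(eta): fraction of impostor scores at or above the threshold."""
    return sum(s >= eta for s in impostor_scores) / len(impostor_scores)

def gar(genuine_scores, eta):
    """GAR(eta): fraction of genuine scores at or above the threshold."""
    return sum(s >= eta for s in genuine_scores) / len(genuine_scores)

def frr(genuine_scores, eta):
    """FRR(eta) = 1 - GAR(eta)."""
    return 1.0 - gar(genuine_scores, eta)

# Illustrative score lists (genuine scores cluster high, impostor low).
genuine = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6]
impostor = [0.3, 0.5, 0.2, 0.65, 0.4, 0.1]

# At eta = 0.55 one impostor slips through but no genuine user is
# rejected; at eta = 0.75 no impostor passes but a third of genuine
# attempts are rejected -- both errors cannot be lowered at once.
```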
Though biometric systems have been used in real-world applications, the three main factors
that affect the accuracy of a biometric system design are FAR, GAR and the size of the
database. The challenge in biometrics is to design a system that operates at the extremes of all
three factors; in other words, the challenge is to develop a real-time biometric system that is
highly accurate and secure. The following are the major obstacles that hinder the design of
such an “ideal” biometric system.
An ideal biometric system should always provide the correct decision when a biometric
sample is presented. The main factors affecting the accuracy of a biometric system are:
 Non-universality: If every individual in the target population is able to present the
biometric trait for recognition, then the trait is said to be universal. Universality is
one of the basic requirements for a biometric identifier. However, not all biometric
traits are truly universal.
Due to the above factors, the error rates associated with biometric systems are higher than
what is required in many applications.
In the case of a biometric verification system, the size of the database (number of enrolled
users in the system) is not an issue because each authentication attempt basically involves
matching the query with a single template. In the case of large scale identification systems
where N identities are enrolled in the system, sequentially comparing the query with all the N
templates is not an effective solution due to two reasons. Firstly, the throughput of the system
would be greatly reduced if the value of N is quite large. For example, if the size of the
database is 1 million and if each match requires an average of 100 microseconds, then the
throughput of the system will be less than 1 per minute. Furthermore, the large number of
identities also affects the false match rate of the system adversely. Hence, there is a need for
efficiently scaling the system. This is usually achieved by a process known as filtering or
indexing where the database is pruned based on extrinsic (e.g., gender, ethnicity, age, etc.) or
intrinsic (e.g., fingerprint pattern class) factors and the search is restricted to a smaller
fraction of the database that is likely to contain the true identity of the user.
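The pruning step can be sketched as a simple pre-selection on extrinsic attributes. The metadata fields below are hypothetical; a deployed system would take them from the enrolment record:

```python
def filter_candidates(database, gender=None, age_range=None):
    """Prune the enrolled database on extrinsic factors so the matcher
    only sees the small fraction likely to hold the true identity."""
    candidates = []
    for record in database:
        if gender is not None and record["gender"] != gender:
            continue
        if age_range is not None and not (age_range[0] <= record["age"] <= age_range[1]):
            continue
        candidates.append(record)
    return candidates

database = [
    {"id": 1, "gender": "F", "age": 34},
    {"id": 2, "gender": "M", "age": 29},
    {"id": 3, "gender": "F", "age": 61},
    {"id": 4, "gender": "M", "age": 35},
]

# Only the pruned subset is passed on to the (expensive) matcher.
subset = filter_candidates(database, gender="M", age_range=(25, 40))
```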
Although it is difficult to steal someone’s biometric traits, it is still possible for an impostor to
circumvent a biometric system in a number of ways. For example, it is possible to construct
fake or spoof fingers using lifted fingerprint impressions (e.g., from the sensor surface) and
utilize them to circumvent a fingerprint recognition system. Behavioural traits like signature
and voice are more susceptible to such attacks than anatomical traits.
The most straightforward way to secure a biometric system is to put all the system
modules and the interfaces between them on a smart card (or more generally a secure
processor). In such systems, known as match-on-card or system-on-card technology, sensor,
feature extractor, matcher and template reside on the card. The advantage of this technology
is that the user’s biometric data never leaves the card which is in the user’s possession.
However, system-on-card solutions are not appropriate for most large-scale verification
applications because they are still expensive and users must carry the card with them at all
times. Moreover, system-on-card solutions cannot be used in identification applications.
One of the critical issues in biometric systems is protecting the template of a user
which is typically stored in a database or a smart card. Stolen biometric templates can be used
to compromise the security of the system in the following two ways. (i) The stolen template
can be replayed to the matcher to gain unauthorized access, and (ii) a physical spoof can be
created from the template to gain unauthorized access to the system (as well as other systems
which use the same biometric trait). Note that an adversary can covertly acquire the biometric
information of a genuine user (e.g., lift the fingerprint from a surface touched by the user).
Hence, spoof attacks are possible even when the adversary does not have access to the
biometric template. However, the adversary needs to be in the physical proximity of the
person he is attempting to impersonate in order to covertly acquire his biometric trait. On the
other hand, even a remote adversary can create a physical spoof if he gets access to the
biometric template information.
Biometric data is only one component in wider systems of security. Typical phases of
biometric security include:
1) Collection of data
2) Extraction
3) Comparison and Matching.
As a first step, a system must collect the biometric to be used (face, fingerprint, palm print).
Biometric capture must be done in a controlled environment. All biometric
systems have some sort of collection mechanism. This could be a reader or sensor upon
which a person places their finger or hand, a camera that takes a picture or video of their face
or eye. In order to “enrol” in a system, an individual presents their “live” biometric a number
of times so the system can build a composition or profile of their characteristic, allowing for
slight variations (e.g., different degrees of pressure when they place their finger on the
reader). Depending upon the purpose of the system, enrolment could also involve the
collection of other personally identifiable information.
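Building a composite profile from several “live” presentations, while allowing for slight variations, can be sketched as follows. The feature vectors and slack factor are illustrative assumptions:

```python
def enrol(presentations):
    """Build a composite template from several live presentations.
    Each presentation is a feature vector (a list of numbers)."""
    n = len(presentations)
    dims = len(presentations[0])
    mean = [sum(p[d] for p in presentations) / n for d in range(dims)]
    # Per-dimension spread: how much this user naturally varies.
    spread = [max(abs(p[d] - mean[d]) for p in presentations) for d in range(dims)]
    return {"mean": mean, "spread": spread}

def within_profile(template, live, slack=1.5):
    """Accept if every feature of the live sample falls within the
    recorded variation (times a slack factor) of the composite."""
    return all(
        abs(x - m) <= slack * max(s, 1e-6)
        for x, m, s in zip(live, template["mean"], template["spread"])
    )

# Three enrolment presentations of the same (toy) trait.
template = enrol([[10.0, 4.0], [11.0, 4.4], [10.5, 4.2]])
```

Recording the spread at enrolment is one simple way to tolerate the slight variations (e.g., different finger pressure) mentioned above.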
Commercially available biometric devices generally do not record full images of biometrics
the way law enforcement agencies collect actual fingerprints. Instead, specific features of the
biometric are “extracted.” Only certain attributes are collected (e.g., particular measurements
of a fingerprint or pressure points of a signature). Which parts are used is dependent upon the
type of biometric, as well as the design of the proprietary system. This extracted information,
sometimes called “raw data,” is converted into a mathematical code. Again, exactly how this
is done varies amongst the different proprietary systems.
To use a biometric system, the specific features of a person’s biometric characteristic are
measured and captured each time they present their “live” biometric. This extracted
information is translated into a mathematical code using the same method that created the
template. The new code created from the live scan is compared against a central database of
templates in the case of a one-to-many match (identification), or to a single stored template in
the case of a one-to-one match (verification). If it falls within a certain statistical range of
values, the match is considered successful.
One of the most interesting facts about most biometric technologies is that unique
biometric templates are generated every time a user interacts with a biometric system. These
templates, when processed by a vendor’s algorithm, are recognizable as being from the same
person, but are not identical.
Systems that consolidate evidence from multiple sources of biometric information in order to
reliably determine the identity of an individual are known as multibiometric systems.
Multibiometric systems can alleviate many of the limitations of unibiometric systems because
the different biometric sources usually compensate for the inherent limitations of the other
sources. Multibiometric systems offer the following advantages over unibiometric systems.
1. Combining the evidence obtained from different sources using an effective fusion scheme
can significantly improve the overall accuracy of the biometric system. The presence of
multiple sources also effectively increases the dimensionality of the feature space and reduces
the overlap between the feature spaces of different individuals.
2. Multibiometric systems can address the non-universality problem and reduce the FTER and
FTCR. For example, if a person cannot be enrolled in a fingerprint system due to worn-out
ridge details, he can still be identified using other biometric traits like face or iris.
3. Multibiometric systems can also provide a certain degree of flexibility in user
authentication. Suppose a user enrolls into the system using several different traits. Later, at
the time of authentication, only a subset of these traits may be acquired based on the nature of
the application under consideration and the convenience of the user. For example, consider a
banking application where the user enrolls into the system using face, voice and fingerprint.
During authentication, the user can select which trait to present depending on his
convenience. While the user can choose face or voice modality when he is attempting to
access the application from his mobile phone equipped with a digital camera, he can choose
the fingerprint modality when accessing the same application from a public ATM or a
network computer.
4. The availability of multiple sources of information considerably reduces the effect of noisy
data. If the biometric sample obtained from one of the sources is not of sufficient quality
during a particular acquisition, the samples from other sources may still provide sufficient
discriminatory information to enable reliable decision-making.
5. Multibiometric systems can provide the capability to search a large database in a
computationally efficient manner. This can be achieved by first using a relatively simple but
less accurate modality to prune the database before using the more complex and accurate
modality on the remaining data to perform the final identification task. This will improve the
throughput of a biometric identification system.
6. Multibiometric systems are resistant to spoof attacks because it is difficult to
simultaneously spoof multiple biometric sources. Further, a multibiometric system can easily
incorporate a challenge-response mechanism during biometric acquisition by acquiring a
subset of the traits in some random order (e.g., left index finger followed by face and then
right index finger). Such a mechanism will ensure that the system is interacting with a live
user. Further, it is also possible to improve the template security by combining the feature sets
from different biometric sources using an appropriate fusion scheme.
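The fusion schemes referred to above are most often applied at the match-score level. A minimal sketch of weighted-sum fusion, assuming the per-modality scores are already normalized to [0, 1]; the weights are illustrative:

```python
def fuse_scores(scores, weights):
    """Weighted-sum fusion of match scores from multiple biometric
    sources (e.g., face, fingerprint), each score in [0, 1]."""
    assert len(scores) == len(weights)
    total = sum(weights)
    return sum(s * w for s, w in zip(scores, weights)) / total

# Face alone is borderline, fingerprint is strong: the fused score
# lets the stronger modality compensate for the weaker one.
fused = fuse_scores([0.55, 0.92], weights=[1.0, 2.0])
```

Here the strong fingerprint score lifts the borderline face score, which is exactly the compensation effect described in point 1; a poorly chosen fusion rule, as noted below, can instead drag the combined accuracy down.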
Multibiometric systems have a few disadvantages when compared to unibiometric
systems. They are more expensive and require more resources for computation and storage
than unibiometric systems. Multibiometric systems generally require additional time for user
enrollment, causing some inconvenience to the user. Finally, the accuracy of a multibiometric
system can actually be lower than that of the unibiometric system if an appropriate technique
is not followed for combining the evidence provided by the different sources. Still,
multibiometric systems offer features that are attractive and as a result, such systems are
being increasingly deployed in security critical applications.
Biometric authentication has gained a lot of interest in the research community. Researchers have
proposed many systems with different modalities as inputs and with different new techniques
as well as new combinations of existing techniques. This chapter surveys recent
advances in authentication systems and techniques based on face and fingerprint
modalities.
3.1
Let I(Δx, Δy, θ) represent a rotation of the input image I by an angle θ around
the origin (usually the image center), shifted by Δx and Δy pixels in directions
x and y, respectively. Then the similarity between the two fingerprint
images T and I can be measured as

S(T, I) = max over Δx, Δy, θ of CC(T, I(Δx, Δy, θ))    (2.1)

where CC(T, I) = Tᵀ I is the cross-correlation between T and I. The cross-correlation
is a well-known measure of image similarity, and the maximization
in (2.1) allows us to find the optimal registration [1].
In practice, however, it cannot be used directly, for the following reasons:
 Non-linear distortion makes impressions of the same face significantly different in
terms of global structure; the measure is not immune to rotation and scaling.
 Skin condition causes image brightness, contrast, and ridge thickness to vary
significantly across different impressions. The use of more sophisticated correlation
measures may compensate for these problems.
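Equation (2.1) can be illustrated with a brute-force search over shifts (rotation omitted for brevity; it would add a third loop). A sketch on tiny flattened integer images; real systems would use FFT-based correlation instead:

```python
def cross_correlation(t, i):
    """CC(T, I) = sum of elementwise products (T^T I for flattened images)."""
    return sum(tv * iv for tv, iv in zip(t, i))

def shift(img, w, h, dx, dy):
    """Shift a flattened h x w image by (dx, dy), zero-filling borders."""
    out = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy
            if 0 <= sx < w and 0 <= sy < h:
                out[y * w + x] = img[sy * w + sx]
    return out

def best_registration(t, i, w, h, max_shift=1):
    """Maximize CC over candidate shifts, as in (2.1)."""
    best = None
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            cc = cross_correlation(t, shift(i, w, h, dx, dy))
            if best is None or cc > best[0]:
                best = (cc, dx, dy)
    return best

# Template, and an input that is the template shifted by one pixel.
T = [0, 1, 0,
     0, 1, 0,
     0, 1, 0]
I = [1, 0, 0,
     1, 0, 0,
     1, 0, 0]
cc, dx, dy = best_registration(T, I, 3, 3)
```

The search correctly recovers the one-pixel displacement, but the exhaustive loop also makes plain why this approach scales poorly once rotation and large shifts are included.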
Chengjun Liu et al. [1] present an independent Gabor features (IGF)
method and its application to face recognition. The IGF method first derives a Gabor feature
vector from a set of down-sampled Gabor wavelet representations of face images, then
reduces the dimensionality of the vector by means of principal component analysis, and
finally defines the independent Gabor features based on the independent component analysis
(ICA). As Gabor transformed face images exhibit strong characteristics of spatial locality,
scale, and orientation selectivity, application of ICA reduces redundancy and exhibits
independent features. With the FERET dataset, they achieved 100% recognition accuracy.
Syed Maajid Mohsin et al. [3] have experimented with a set of Gabor filter
banks. Using 30 filters and a nearest-neighbour classifier at the last stage, they achieved a
recognition accuracy of 92.5%. They report the training time of the 30 filters for a single
image as 5 seconds.
The main aim of the paper by Victor-Emil Neagoe et al. [4] is to take
advantage of fiducial approaches within a holistic approach applied to the face. The authors
employ a Gabor filter bank to extract features and try to localize the outputs of the filter bank
using a head model of the human face. They have experimented with the ORL face database;
with neural network classifiers, they obtained a recognition score of 96%.
The paper by Zhi-Kai Huang et al. [6] contributes to the area of colour
image processing. Using different colour transforms and models, they extract features from
the face using Gabor filters, and the feature set is fed to an SVM classifier. The authors
considered face images with multiple poses and varied illumination conditions. For the
YCbCr model they achieved 94% recognition accuracy.
In face detection, detecting multiple faces in a single frame is a tedious task.
P. K. Suri et al. [6] contribute in this area. The paper utilizes a Gabor filter bank at 5 scales
and 8 orientations, generating a set of 40 filters. With varying thresholds and Gabor features as
input to an NN classifier, they achieved 100% detection of
multiple faces in a single frame. This highlights the discriminating ability of Gabor features at
different orientations.
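For reference, the kind of Gabor filter bank used in the papers above can be generated directly from the standard Gabor formula. A sketch of kernel construction; the kernel size, scales and orientation count are illustrative, not taken from any one paper:

```python
import math

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real part of a Gabor kernel: a Gaussian envelope multiplied by
    a cosine carrier oriented at angle theta with wavelength lam."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xt = x * math.cos(theta) + y * math.sin(theta)
            yt = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xt * xt + gamma * gamma * yt * yt) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xt / lam + psi))
        kernel.append(row)
    return kernel

def filter_bank(size=7, scales=(2.0, 4.0), n_orientations=4):
    """A small bank: one kernel per (scale, orientation) pair."""
    return [
        gabor_kernel(size, sigma=s, theta=k * math.pi / n_orientations, lam=2 * s)
        for s in scales
        for k in range(n_orientations)
    ]

bank = filter_bank()
```

Convolving an image with every kernel in such a bank is what produces the large feature stream (e.g., 40 responses per pixel for 5 scales × 8 orientations) noted in the disadvantages below.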
 Advantages of Gabor based techniques: From the literature it is evident that Gabor
based techniques are powerful tools for capturing directional information. These
techniques also overcome slight variations in illumination conditions.
 Disadvantages: Though Gabor wavelets are directionally selective tools, they generate
a large stream of data. As we are designing in hardware, complexity must be as
low as possible; hence Gabor filters are not chosen for feature extraction.
Md. Tajmilur Rahman et al. [13] present an algorithm for face recognition using
neural networks trained by Gabor features. The system commences by convolving some
morphed images of a particular face with a series of Gabor filter coefficients at different scales
and orientations. Two novel contributions of this paper are the scaling of RMS contrast and
the use of morphing to improve recognition accuracy. The neural
network employed for face recognition is based on the Multi-Layer Perceptron (MLP)
architecture with the back-propagation algorithm and incorporates the convolution filter response
of the Gabor jet. This strategy achieved a correct recognition rate of 96%.
Lin-Lin Huang et al. [11] present a classification-based face detection method using
Gabor filter features. They have designed four filters corresponding to four orientations for
extracting facial features from local images in sliding windows. The feature vector based on
Gabor filters is used as the input of the face/non-face classifier, which is a polynomial neural
network (PNN) on a reduced feature subspace learned by principal component analysis
(PCA). They have achieved some good recognition accuracies while experimenting with
CMU database and synthetic images.
Bhaskar Gupta et al. [12] propose a classification-based face detection method using
Gabor filter features. The feature vector, generated with a set of 40 filters, is used as the input of the
classifier, which is a feed-forward neural network (FFNN) on a reduced feature subspace
learned by an approach simpler than principal component analysis (PCA). Instead of applying
dimensionality reduction techniques like PCA, some rows and columns of the Gabor feature
vector are simply deleted. Though this is not a sophisticated technique, they achieved
some good face classification rates.
Muhammad Azam, EPE Department, PNEC [24] proposes a new approach to face
recognition based on processing face images on a hexagonal lattice. A few
advantages of processing images on a hexagonal lattice are a higher degree of circular symmetry,
uniform connectivity, greater angular resolution, and a reduced need for storage. The proposed
methodology is a hybrid approach to face recognition: DCT is applied to hexagonally
converted images for dimensionality reduction and feature extraction, and these features are
stored in a database for recognition. An Artificial Neural Network (ANN), trained with
a quick back-propagation algorithm, is used for recognition. The
recognition rate on the Yale database was 92.77%, but the time taken for
recognition is very short.
Meng Joo Er et al. [25] propose an efficient method for high-speed face recognition based on the
discrete cosine transform (DCT) and Fisher’s linear discriminant (FLD). The dimensionality of
the original face image is reduced using the DCT; FLD is then applied to the truncated DCT
coefficient vectors, so the discriminating features are retained. Furthermore, parameter
estimation for the RBF neural networks is accomplished easily, which facilitates fast training.
The proposed system achieves excellent performance with high
training and recognition speed, with an error rate of 1.8%.
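The DCT-based dimensionality reduction used in the approaches above keeps only the first few low-frequency coefficients. A sketch of an orthonormal 1-D DCT-II and truncation on a flattened image; the coefficient count is illustrative:

```python
import math

def dct_2(signal):
    """Orthonormal DCT-II of a 1-D signal."""
    n = len(signal)
    out = []
    for k in range(n):
        s = sum(signal[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def dct_features(flat_image, n_coeffs=4):
    """Truncate to the first n_coeffs low-frequency DCT coefficients:
    the standard trick for reducing dimensionality before a classifier."""
    return dct_2(flat_image)[:n_coeffs]

# A tiny flattened "image": two flat regions of different intensity.
features = dct_features([4.0, 4.0, 4.0, 4.0, 2.0, 2.0, 2.0, 2.0])
```

Because most image energy concentrates in the low-frequency coefficients, the truncated vector preserves the coarse appearance while discarding fine detail and noise.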
Sidra Batool Kazmi [26] presents a method for automatic recognition of facial expressions
from face images by providing Discrete Wavelet Transform (DWT) features to a bank
of five parallel neural networks. Each neural network is trained to recognize a
particular facial expression, and they obtained a result of 96%.
 Advantages: Neural networks used along with DCT, DWT or Gabor features are useful for
facial expressions. Recognition rates of up to 95% have been observed, so it is quite a
useful technique.
 Disadvantages: A neural network must be built and its layers designed, which is
quite complex. As we are designing in hardware, it will be hard to
implement.
 Advantages: These techniques exploit the utility of Gabor wavelets for facial
expressions. This highlights the robustness of the Gabor feature set under different facial
expressions as well as under different illumination conditions. It also extracts
directional information.
 Disadvantages: We have to build filter banks, which increases the complexity. As we
are designing in hardware, complexity must be as low as possible; hence we
cannot go for it.
A. N. Rajagopalan et al. [14] propose a face recognition method that fuses
information acquired from global and local features of the face for improving performance.
Principal component analysis followed by Fisher analysis is used for dimensionality
reduction and construction of individual feature spaces. Before feature extraction, a block
histogram modification technique is applied to compensate for local changes in illumination.
PCA in conjunction with FLD is then used to encode the facial features in a lower
dimensional space. The distance in feature space (DIFS) values are calculated for all the
training images in each of the feature spaces and these values are used to compute the
distributions of the DIFS values. In the recognition phase, given a test image, the three facial
features are extracted and their DIFS values are computed in each feature space.
Young-Jun Song et al. [15] have proposed a shaded-face pre-processing technique using
front-face symmetry. The existing PCA face recognition technique has the shortcoming that
illumination variation lowers the recognition performance for a shaded face. This
method computes the difference between the illumination on either side of the nose line; if they
differ, the mirror image of one side is taken and PCA is then applied to generate the feature
set. With the Yale database, the authors achieved 98.9% accuracy.
Peter N. Belhumeur [16][35] shows a comparison of the PCA and LDA algorithms. The LDA
technique, another method based on linear discriminant projection of the image space to a
low-dimensional subspace, has similar computational requirements, but extensive
experimental results demonstrate that the LDA method has error rates lower than
those of the PCA technique in tests on the Harvard and Yale face databases.
Kamran Etemad et al. [17] focus on the linear discriminant analysis (LDA) of
different aspects of human faces in the spatial as well as in the wavelet domain. The LDA of
faces also provides a small set of features that carry the most relevant information for
classification purposes. The features are obtained through eigenvector analysis of scatter
matrices with the objective of maximizing between-class variations and minimizing within-
class variations. For a medium sized dataset, authors could achieve 97% recognition
accuracy.
 Advantages: These techniques achieve good recognition accuracies through de-
correlation of the input data, as they are statistical in nature. This helps in reducing the
redundancies present, and the feature set generated is also of smaller size, which helps in
avoiding the curse of dimensionality. It also extracts directional information that
cannot be given in detail by wavelets.
 Disadvantages: PCA and LDA are batch statistical methods, so they need the
whole data set to be aggregated simultaneously.
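The PCA step underlying these eigenface-style methods can be sketched with plain numpy: center the training images, take the SVD, and project each image onto the top principal axes. The data here is a random stand-in, not a face database:

```python
import numpy as np

def pca_fit(X, n_components):
    """X: rows are flattened face images. Returns the mean image and
    the top principal axes (rows of Vt from the SVD of centered data)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(x, mean, components):
    """Encode one image as a short coefficient vector."""
    return components @ (x - mean)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 64))          # 10 "images", 64 pixels each
mean, comps = pca_fit(X, n_components=3)
code = pca_project(X[0], mean, comps)
```

Note that the whole matrix X must be available before the SVD can be computed, which is precisely the batch-aggregation limitation mentioned in the disadvantages.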
Haifeng Hu [26] presents a discrete wavelet transform (DWT) based illumination
normalization approach for face recognition under varying lighting conditions. First, a DWT-
based denoising technique is employed to detect the illumination discontinuities in the detail
sub-bands, and the detail coefficients are updated using the obtained discontinuity
information. Finally, a multi-scale reflectance model is presented to extract the illumination-
invariant features. A recognition accuracy of 97.5% was achieved on the CMU PIE dataset.
X. Cao et al. [27] propose a novel wavelet-based approach that considers the correlation of neighbouring wavelet coefficients to extract an illumination invariant. This invariant represents the key facial structure needed for face recognition. Using the wavelet-based NeighShrink denoising technique, the method has better edge-preserving ability in low-frequency illumination fields and retains more useful information in high-frequency fields. Since training and testing images always have different illuminations, the method processes them differently. Experimental results on the Yale face database B and the CMU PIE face database show excellent recognition rates of up to 100%.
K. Jaya Priya et al. [28] propose a novel face recognition method for the one-sample problem. The approach is based on local appearance feature extraction using the directional multiresolution decomposition offered by the dual-tree complex wavelet transform (DT-CWT). It provides a local multiscale description of images with good directional selectivity, effective edge representation and invariance to shifts and in-plane rotations. The 2-D dual-tree complex wavelet transform is less redundant and computationally efficient. The fusion of local DT-CWT coefficients of the detail sub-bands is used to extract the facial features, which improves face recognition with small sample sizes in relatively short computation time. With the Yale face dataset, recognition accuracy of 93.33% was achieved.
M. Koteswara Rao [29] proposes a method based on the Discrete Wavelet Transform (DWT) and eigenvectors. Each face image is decomposed into four sub-bands using the DWT; the HH sub-band is useful to distinguish the images in the database and is exploited for face recognition. The HH sub-band is further processed using Principal Component Analysis (PCA), which extracts the relevant information from confusing data sets and reduces the higher dimensionality to a lower one. The feature vector is generated using DWT and PCA.
 Advantages: Wavelet techniques provide the advantage of multi-resolution analysis. They can be used to investigate the directional properties of a face in different frequency sub-bands, and help in recognizing redundant information as well as invariants in the input. Together with the multi-resolution property, this can be used to extract features under pose and illumination variations.
 Disadvantages: Wavelet transforms are characterized by inherently high computational complexity. Like Gabor wavelets, they produce high-dimensional coefficient sets.
Hazim Kemal Ekenel [21] proposes an algorithm in which local information is extracted using a block-based discrete cosine transform. The obtained local features are combined both at the feature level and at the decision level. The performance of the proposed algorithm, tested on the Yale and CMU PIE face databases, reaches up to 98.9%.
Aman R. Chadha [22] proposes an efficient method in which the Discrete Cosine Transform (DCT) of local and global features is used to recognize the corresponding face image from the database. Features such as the nose and eyes are extracted, given weightages depending upon their recognition rates, and then combined to give the result. Experiments on a database of 25 people show recognition of 94% after normalization.
 Advantages: DCT helps in recognizing redundant information as well as invariants in the input. DCT along with PCA or LDA gives a good recognition rate.
 Disadvantages: The DCT does not provide multi-resolution analysis.
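As an illustration of the block-based DCT idea discussed above, the following sketch computes a DCT per 8x8 block and keeps a few low-frequency coefficients from each; the block size, coefficient count and raster scan (instead of the usual zig-zag ordering) are assumptions for illustration, not details taken from [21] or [22].

```python
import numpy as np

def dct_mat(n):
    # orthonormal DCT-II basis matrix
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def local_dct_features(img, block=8, ncoef=5):
    # DCT of each non-overlapping block; keep the first few coefficients of
    # every block and concatenate them into one local-feature vector
    D = dct_mat(block)
    feats = []
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            coeffs = D @ img[r:r+block, c:c+block].astype(float) @ D.T
            # raster scan over the low-frequency corner, a stand-in for zig-zag
            feats.extend(coeffs.flat[:ncoef])
    return np.array(feats)
```

For a 16x16 input this yields four blocks of five coefficients, i.e. a 20-element vector.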
3.2
There have been many challenges in designing palm print and palm vein authentication systems, such as the hygiene issue arising from contact-based systems and the complexity involved in handling large feature vectors. Lin and Wan [39] proposed thermal imaging of palm dorsal surfaces, which typically captures the thermal pattern generated by the flow of (hot) blood in the cephalic and basilic veins. Goh Kah Ong Michael, Tee Connie and Andrew [40] introduce an innovative contactless palm print and palm vein recognition system. They designed a hand vein sensor that could capture the palm print and palm vein image using a low-resolution web camera. The captured images exhibit considerable noise. Huan Zhang proposed a Local Contrast Enhancement technique for ridge enhancement [41].
Principal Component Analysis (PCA) aims at finding a subspace whose basis vectors correspond to the maximum-variance directions in the original space. The features extracted by PCA are the best description of the data, but not the best discriminant features. Fisher Linear Discriminant (FLD) finds the set of most discriminant projection vectors that can map high-dimensional samples onto a low-dimensional space. The major drawback of applying FLD is that it may encounter the small-sample-size problem. Jing Liu and Yue Zhang introduce 2DFLD, which computes the covariance matrices in a subspace of the input space and achieves optimal discriminant vectors. This method gives greater recognition accuracy with reduced computational complexity [42].
David Zhang extracted texture features from low-resolution palm print images based on a 2D Gabor phase coding scheme [43]. Ajay Kumar used a minutiae-based technique for hand vein recognition, in which the structural similarity of hand vein triangulation and knuckle shape features are combined for discriminating the samples [44].
The key contributions from this paper can be summarized as follows.
1) The proposed system's major contribution is a contactless and registration-free authentication system, utilizing data captured simultaneously through a single sensor for both modalities.
2) We have developed a feature level fusion framework. The proposed method utilizes only 16 entropy based features for the palm print and palm vein modalities, facilitating a less complex integration scenario.
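To illustrate what a 16-element entropy-based feature vector might look like, the sketch below takes the Shannon entropy of each cell in a 4x4 grid over the image; the grid layout and histogram binning are assumptions, since the exact computation behind the 16 features is not specified here.

```python
import numpy as np

def entropy_features(img, grid=4, bins=16):
    # Shannon entropy of each cell of a grid x grid partition -> 16 features
    # for the default grid; layout and binning are illustrative assumptions.
    h, w = img.shape
    feats = []
    for r in range(grid):
        for c in range(grid):
            cell = img[r*h//grid:(r+1)*h//grid, c*w//grid:(c+1)*w//grid]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
            p = hist[hist > 0] / cell.size
            feats.append(float(-np.sum(p * np.log2(p))))
    return np.array(feats)
```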
Due to the increase in terrorist attacks, a robust biometric system is very much required to identify persons who may prove harmful to the nation. Biometrics like fingerprint and palm print are difficult to use here, as such persons will not be ready to give samples and can spoof the system. Hence we can use a biometric such as the face, where images can be taken from a camera far away from the person and then recognized. Such a system can prove highly useful in places like airports and railway stations.
The literature contains a plethora of techniques which investigate different properties and technical aspects of the face modality. Based upon appearance, texture and the information provided by transform domain techniques like wavelet filters, authors have developed many uni-modal authentication systems.
Many authors use linear and non-linear classifiers, such as neural network classifiers and support vector machines, in the classification stage. However, neural networks are difficult to design, and since we want fast operation with low complexity, we do not adopt them here.
We aim to design a secure, computationally efficient and reliable authentication system. Faces are unique and robust in nature, and are very easy to capture without drawing much attention of the user. Our system must therefore be robust for both side-face and frontal-face detection.
Since we are capturing faces, illumination will play an important role. Hence we need to apply some transform to reduce the effect of improper illumination. We can use wavelets like Daubechies, Haar, etc.; since we want a simple system design, we will use the simplest wavelet, the Haar wavelet. Gabor features could also be used for feature extraction, but they require a filter bank, which would make the system complex.
As we take frames from video, the dimensionality is quite large, so a dimension reduction technique is required; the PCA algorithm can be used. But PCA alone will not give effective results. To discriminate between the subjects, LDA can be employed. So here we will use both PCA and LDA. This increases complexity a bit, but it makes the system more effective. Finally, template matching is performed to identify the person.
In this section we discuss the designed, developed and experimented system architecture. The first quarter of the section discusses the basic theory of the selected modalities; the remaining section discusses the actual system architecture step by step.
One of the things that really impresses the viewer is the human ability to recognize faces. Humans can recognize thousands of faces and identify familiar faces despite large changes in the visual stimulus due to viewing conditions and expression.
Face recognition has been studied for over two decades in order to make a noticeable advance in this admired field, and it is still an active subject due to extensive practical applications. Many recent events, such as terrorist attacks, exposed serious weaknesses in even the most sophisticated security systems based on fingerprints and iris; many other human characteristics have also been studied in recent years, such as finger/palm geometry, voice, signature and face.
However, biometrics have drawbacks. Iris recognition is extremely accurate, but expensive to implement and not well accepted by people, as exposure to IR may cause eye problems. Fingerprints are reliable and non-intrusive, but not suitable for non-collaborative individuals. In contrast, face recognition seems to be a good compromise between reliability and social acceptance, and balances security and privacy well.
Assume for the moment we start with images, and we want to distinguish between
images of different people. Many face recognition systems have been developed to construct
a set of "images" that provides the best approximation of the overall image data set. The
training set is then projected onto this subspace. To query a new image, we simply project the
image onto this subspace and seek a training image whose projection is closest to it.
The main aim of face detection is to detect the face, if present, in a frame of video, and also to locate it in the image. This is a challenging task for computers, and has been one of the most studied research topics in the past; early efforts in face detection date back to the beginning of the 1970s. Some of the factors that make face detection such a difficult task are:
 Face orientation: A face can appear in many different poses. For instance, the face can appear in a frontal or a profile (i.e. sideways) position. Furthermore, a face can be rotated in plane by some angle, both horizontally and vertically (e.g. it may appear under an angle of 60°).
 Face size: The size of the human face can vary a lot; while taking video in real time, the size of the face may vary every second.
 Facial expression: A person who is laughing may have a totally different appearance from when he is in a rude mood. Facial expressions therefore directly affect the appearance of the face in the image.
 Facial features: Some people have a moustache, long hair or spectacles, others have a scar. These types of features are called facial features.
 Illumination conditions: Faces appear totally different under different illuminations. For instance, part of the face is very bright while the other part is very dark when light falls on the face from the side.
Fingerprints are the patterns formed on the epidermis of the fingertip. The fingerprints are of
three types: arch, loop and whorl. The fingerprint is composed of ridges and valleys. The
interleaved pattern of ridges and valleys are the most evident structural characteristic of a
fingerprint. There are three main fingerprint features
a) Global Ridge Pattern
b) Local Ridge Detail
c) Intra Ridge Detail
Fig 4.1. Sample Fingerprint Image
Global ridge detail:
There are two types of ridge flows: the pseudo-parallel ridge flows, and high-curvature ridge flows which are located around the core point and/or delta point(s). This representation relies on the ridge structure, global landmarks and ridge pattern characteristics.
Commonly used global fingerprint features are:
i) Singular points – They are discontinuities in the orientation field. There are two types of singular points: core and delta. A core is the uppermost point of a curving ridge, and a delta point is the point where three ridge flows meet. They are used for fingerprint registration and classification.
ii) Ridge orientation map – The local direction of the ridge-valley structure. It is helpful in classification, image enhancement, feature verification and filtering.
iii) Ridge frequency map – The reciprocal of the ridge distance in the direction perpendicular to the local ridge orientation. It is used for filtering of fingerprint images.
Local Ridge Detail:
This is the most widely used and studied fingerprint representation. Local ridge details are the
discontinuities of local ridge structure referred to as minutiae. They are used by forensic
experts to match two fingerprints. There are about 150 different types of minutiae. Among
these minutiae types, ridge ending and ridge bifurcation are the most commonly used as all
the other types of minutiae are combinations of ridge endings and ridge bifurcations.
26
The minutiae are relatively stable and robust to contrast, image resolution, and global distortion when compared to other representations. Although most automatic fingerprint recognition systems are designed to use minutiae as their fingerprint representation, the location and direction of a minutia point alone are not sufficient for achieving high performance. Minutiae-derived secondary features are therefore used, as the relative distance and radial angle are invariant with respect to rotation and translation of the fingerprint.
Intra Ridge Detail
On every ridge of the finger epidermis, there are many tiny sweat pores and other permanent
details. Pores are distinctive in terms of their number, position, and shape. However,
extracting pores is feasible only in high-resolution fingerprint images and with very high
image quality. Thus the cost is very high.
Fingerprint recognition is one of the popular biometric techniques. It refers to the automated
method of verifying a match between two fingerprint images. It is mainly used in the
identification of a person and in criminal investigations. It is formed by the ridge pattern of
the finger. Discontinuities in the ridge pattern are used for identification. These
discontinuities are known as minutiae. For minutiae extraction type, orientation and location
of minutiae are extracted. Two features of minutiae are used for identification: termination
and bifurcation.
The advantages of fingerprint recognition system are:
(a) They are highly universal, as the majority of the population have legible fingerprints.
(b) They are very reliable, as no two people (even twins) have the same fingerprint.
(c) Fingerprints are formed in the foetal stage and remain structurally unchanged throughout life.
(d) It is one of the most accurate forms of biometrics available.
(e) Fingerprint acquisition is non-intrusive and hence is a good option.
Fig. 4.2 Types of Minutiae (a)–(f)
Fig 4.3 Types of local ridge features: (a) Ridge ending (b) Bifurcation
4.8.1 For uni-modality system
Fig 4.1. Block Diagram for Proposed System
Our system aims to achieve following goals:
1. Secure and spoof-attack-free authentication
2. Low computational complexity, since the system has to be implemented on hardware
Proposed system architecture contains following main steps
1. Take images of modality. In case of face also perform face detection and crop the
face.
2. Rescaling of Image to 64 x 64 pixels.
3. Find out Discrete Cosine Transform (DCT).
4. Feature extraction by taking the standard deviation of predefined blocks of DCT
coefficients.
5. Store the feature vector.
6. Distance based matching/verification.
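Steps 2 to 4 above can be sketched as follows. The 8x8 block layout over the 64x64 DCT coefficient matrix is an assumption for illustration; the report selects specific coefficient blocks, which are not reproduced here.

```python
import numpy as np

def dct2(img):
    # 2-D orthonormal DCT-II via a basis matrix (no external dependencies)
    n = img.shape[0]
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    D[0] /= np.sqrt(2.0)
    return D @ img.astype(float) @ D.T

def dct_std_features(img64, block=8):
    # standard deviation of each 8x8 block of the 64x64 DCT coefficient
    # matrix -> 64 features per image
    coeffs = dct2(img64)
    return np.array([coeffs[r:r+block, c:c+block].std()
                     for r in range(0, 64, block)
                     for c in range(0, 64, block)])
```

The resulting vector is the template stored in the database and compared by a distance measure at verification time.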
4.8.2 Flow chart for fusion technique of multi-modal system
Fig 4.1. Block Diagram for Proposed System for fusion technique
Proposed system architecture contains following main steps
1. Take images of both modalities. In case of face also perform face detection and crop
the face.
2. Rescaling of Image to 64 x 64 pixels.
3. Find out Discrete Cosine Transform (DCT).
4. Feature extraction by taking the standard deviation of predefined blocks of DCT
coefficients.
5. Store the feature vector.
6. Fuse both the feature vectors.
7. Distance based matching/verification.
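Step 6, the feature-level fusion, can be sketched as a normalise-then-concatenate operation. The min-max normalisation here is an assumption for illustration; the report does not state how the two vectors are scaled before fusion.

```python
import numpy as np

def fuse_features(f_modality1, f_modality2):
    # min-max normalise each modality's vector, then concatenate into one
    # fused template (plain concatenation is the other common option)
    def minmax(v):
        v = np.asarray(v, dtype=float)
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng else np.zeros_like(v)
    return np.concatenate([minmax(f_modality1), minmax(f_modality2)])
```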
We used the Microsoft Visual Studio environment to simulate the results for the designed algorithms. Microsoft Visual Studio is an integrated development environment from Microsoft Corporation. OpenCV is used for coding of the algorithms.
To find the ROI in palm print and palm vein images, code is written in MATLAB.
The system configuration used to simulate the code is as follows:
CPU: Intel(R) Core(TM) i3-2330M
CPU Clock Frequency: 2.20 GHz
System Memory: 2 GB
A database of 40 students was collected through video for face under different conditions.
A database of palm prints of 150 students was collected. Images were acquired using a JAI-SDK camera, which can capture both palm print and palm veins at the same time.
A database of palm veins of 150 students was collected using the same JAI-SDK camera.
A database of fingerprints of 200 students was collected. The images were taken by a device named Finger Key; the thumb impression was captured.
Palm Print and Palm Vein
In this research, a JAI AD-080-GE camera is used to capture NIR hand vein images. The camera contains two 1/3” progressive scan CCDs with 1024x768 active pixels; one of the two CCDs is used to capture visible light images (400 to 700 nm), while the other captures light in the NIR band of the spectrum (700 to 1000 nm). Since most light sources do not irradiate with sufficient intensity in the NIR part of the spectrum, a dedicated NIR lighting system was built using infrared Light Emitting Diodes (LEDs) which have a peak wavelength at 830 nm.
Fig.3.1: Acquisition System
Fingerprint
In this research, a Finger Key device is used to capture fingerprint images.
Face
In this research, a Sony camera is used to capture video. It is a 9.1-megapixel camera with a C-mount lens from Carl Zeiss.
The following steps are followed for face recognition.
Frames are taken from the video after a certain amount of delay; from each video, 6 frames are captured to train the system. As illumination may vary, some frames may have improper illumination, which is balanced using histogram equalization.
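The frame sampling described above can be sketched as follows; with OpenCV one would seek to each returned index in a cv2.VideoCapture stream and apply cv2.equalizeHist to each grabbed grayscale frame.

```python
import numpy as np

def training_frame_indices(total_frames, n_frames=6):
    # evenly spaced frame positions, so the 6 training frames are separated
    # by a fixed delay instead of being consecutive near-duplicates
    return np.linspace(0, total_frames - 1, n_frames).astype(int)
```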
Faces are rich in directional data. We could also use the Fourier transform or DCT [20][21], but here we need a multi-resolution solution, since we need features of different frequency bands, including the high frequencies, and wavelets also offer directional selectivity. We use the low-frequency band, that is, the LL band. Wavelets additionally offer robust performance against illumination variations. For these reasons we have employed the wavelet as a feature extraction tool.
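A one-level Haar decomposition keeping only the LL band can be computed directly, since the orthonormal Haar low-pass filter simply averages neighbouring pixels along rows and then columns; this minimal sketch assumes even image dimensions.

```python
import numpy as np

def haar_ll(img):
    # one level of the 2-D Haar DWT, approximation (LL) band only:
    # each LL coefficient is the 2x2 block sum scaled by 1/2, matching the
    # orthonormal Haar filter [1/sqrt(2), 1/sqrt(2)] applied twice
    img = np.asarray(img, dtype=float)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 2.0
```

Applying this to a 64x64 face yields a 32x32 LL band that feeds the PCA stage.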
As inputs to the designed algorithm, video is captured from camera sources with different illumination and face positions. The resulting feature sets from the individual modalities are used for recognition.
Principal Component Analysis (PCA) is one of the most successful techniques used for image compression and recognition [10][11]. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to a smaller intrinsic dimensionality of the feature space (independent variables) [62]. Using PCA, each original image of the training set can be transformed into a corresponding Eigenface. An important feature of PCA is that one can reconstruct any original image from the training set by combining the Eigenfaces [12], which are nothing but characteristic features of the faces. By using all the Eigenfaces extracted from the original images, exact reconstruction of the original images is possible. But for practical applications, only a subset of the Eigenfaces is used; the reconstructed image is then an approximation of the original image. However, the loss due to omitting some of the Eigenfaces can be minimized by choosing only the most important features (Eigenfaces) [61].
A 2-D facial image can be represented as a 1-D vector by concatenating each row (or column) into a long thin vector.
1. Assume the training set of images is represented by $\Gamma_1, \Gamma_2, \ldots, \Gamma_m$, where each image of size $x \times y$ is represented by a vector of length $p$ and $m$ is the number of training images. Converting each image into a vector gives a set of size $m \times p$.
2. The mean face $\Psi$ is given by:
$$\Psi = \frac{1}{m}\sum_{i=1}^{m}\Gamma_i \qquad (1)$$
3. The mean-subtracted face $\Phi_i$ is given by:
$$\Phi_i = \Gamma_i - \Psi \qquad (2)$$
where $i = 1, 2, \ldots, m$ and $A = [\Phi_1\;\Phi_2\;\ldots\;\Phi_m]$ is the mean-subtracted matrix of size $p \times m$.
4. By implementing the matrix transformations, the vector matrix is reduced by:
$$C = A^T A \qquad (3)$$
where $C$ is the (reduced, $m \times m$) covariance matrix.
5. Find the eigenvectors and eigenvalues of the matrix $C$ and order the eigenvectors by highest eigenvalue.
6. With the sorted eigenvector matrix, the linear combinations of the training set images form the Eigenfaces $u_k$ as follows:
$$u_k = \sum_{i=1}^{m} v_{ki}\,\Phi_i, \qquad k = 1, 2, \ldots, m \qquad (4)$$
7. Instead of using $m$ Eigenfaces, $m'$ Eigenfaces ($m' \ll m$) corresponding to the most significant eigenvectors are used for training of each individual.
8. With the reduced Eigenface vectors, each image has its face weights given by
$$\omega_k = u_k^T\,(\Gamma - \Psi), \qquad k = 1, 2, \ldots, m' \qquad (5)$$
9. The weights form a feature vector given by
$$\Omega = [\omega_1, \omega_2, \ldots, \omega_{m'}]^T \qquad (6)$$
10. The reduced data is taken as the input to the next stage for extracting discriminating features.
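The PCA steps above can be sketched in code as follows; this is a minimal illustration of the procedure, not the exact implementation used in this work.

```python
import numpy as np

def train_eigenfaces(images, m_prime):
    # images: (m, p) array, one flattened face per row
    m, p = images.shape
    psi = images.mean(axis=0)                    # mean face, Eq. (1)
    A = (images - psi).T                         # p x m mean-subtracted matrix
    C = A.T @ A                                  # reduced m x m matrix, Eq. (3)
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:m_prime]     # top m' eigenvalues
    U = A @ vecs[:, order]                       # eigenfaces, p x m', Eq. (4)
    U /= np.linalg.norm(U, axis=0)               # normalise each eigenface
    W = U.T @ A                                  # m' x m training weights
    return psi, U, W
```

A test face is projected the same way, via `U.T @ (face - psi)`, before matching.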
Linear Discriminant Analysis (LDA) has been successfully used as a dimensionality reduction technique in many classification problems. The objective is to maximize the ratio of the between-class scatter matrix to the within-class scatter matrix.
The LDA is defined by the transformation [64]
$$W_{opt} = \arg\max_W \frac{\left|W^T S_B W\right|}{\left|W^T S_W W\right|} \qquad (7)$$
The columns of $W$ are the eigenvectors of $S_W^{-1} S_B$. It is possible to show that this choice maximizes the ratio
$$\det\!\left(W^T S_B W\right)/\det\!\left(W^T S_W W\right) \qquad (8)$$
These matrices are computed as follows:
$$S_W = \sum_{j=1}^{c}\sum_{i=1}^{N_j}\left(x_i^j - \mu_j\right)\left(x_i^j - \mu_j\right)^T \qquad (9)$$
where $\mu_j = \frac{1}{N_j}\sum_{i=1}^{N_j} x_i^j$, $x_i^j$ is the $i$th pattern of the $j$th class, and $N_j$ is the number of patterns of the $j$th class;
$$S_B = \sum_{j=1}^{c}\left(\mu_j - \mu\right)\left(\mu_j - \mu\right)^T \qquad (10)$$
The eigenvectors of LDA are called "Fisherfaces". The LDA transformation is strongly dependent on the number of classes ($c$), the number of samples ($m$), and the original space dimensionality ($d$). It is possible to show that there are at most $c-1$ nonzero eigenvectors, $c-1$ being the upper bound of the discriminant space dimensionality. We need at least $d+c$ samples to have a non-singular $S_W$ [64]. It is impossible to guarantee this condition in many real applications. Consequently, an intermediate transformation is applied to reduce the dimensionality of the image space; here we used the PCA transform. LDA thus derives a low-dimensional representation of a high-dimensional face feature vector space. From Eqn. 9 and Eqn. 10, the matrix $C$ is obtained as follows:
$$C = S_W^{-1} S_B \qquad (11)$$
The eigenvectors of this matrix give the discriminating feature vectors for the LDA method. The face vector is projected by the transformation matrix $W$, and the projection coefficients are used as the feature representation of each face image. The matching score between the test face image and a training image is calculated as the distance between their coefficient vectors; a smaller distance score means a better match. For the proposed work, the column vectors $w_i$ ($i = 1, 2, \ldots, c-1$) of matrix $W$ are referred to as Fisherfaces.
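The scatter-matrix construction and eigen-analysis above can be sketched as follows, operating on PCA-reduced vectors as the text prescribes; it is an illustrative sketch rather than the exact implementation used here.

```python
import numpy as np

def fisherfaces(X, labels):
    # X: (n, d) PCA-reduced feature vectors; labels: class id per row
    classes = np.unique(labels)
    d = X.shape[1]
    mu = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for cls in classes:
        Xc = X[labels == cls]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)        # within-class scatter, Eq. (9)
        diff = (mc - mu)[:, None]
        Sb += diff @ diff.T                  # between-class scatter, Eq. (10)
    vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1][:len(classes) - 1]
    return vecs.real[:, order]               # columns are Fisherfaces
```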
In the identification mode, the system determines whether an individual is enrolled or not by comparing the extracted features with those stored in the database. In an identification application, the biometric device reads a sample, processes it, and compares it against all samples in the template database. The comparison can be made with different distance metrics, which are discussed in the experimental results.
1. Euclidean Distance [32]: It states that the shortest distance between two points is a line. In the Euclidean distance metric, the difference in each dimension of the feature vectors of the test and database image is squared, which increases the divergence between the test and database image if the dissimilarity is more.
$$d(x, y) = \sqrt{\sum_{i=1}^{N} |x_i - y_i|^2} \qquad (12)$$
2. Lorentzian Distance [69]: One is added to guarantee the non-negative property and to avoid the log of zero. This metric is sensitive to small changes, because the log scale expands the lower range and compresses the higher one.
$$d(x, y) = \sum_{i=1}^{N} \ln\left(1 + |x_i - y_i|\right) \qquad (13)$$
3. Hellinger Distance [69]: In this distance, the square root of the sum of the squared square-root differences in each dimension is taken, which minimizes the difference if the similarity between the vectors is more.
$$d(x, y) = \sqrt{2\sum_{i=1}^{N}\left(\sqrt{x_i} - \sqrt{y_i}\right)^2} \qquad (14)$$
4. Canberra Distance [69]: Canberra distance is similar to Manhattan distance. The distinction is that the absolute difference between the variables of the two objects is divided by the sum of the absolute variable values prior to summing.
$$d(x, y) = \sum_{i=1}^{N} \frac{|x_i - y_i|}{|x_i| + |y_i|} \qquad (15)$$
where $x$ = testing feature vector, $y$ = trained feature vector, and $N$ = total number of features.
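The four metrics can be implemented directly. The `identify` helper at the end is an illustrative nearest-template loop, not the report's exact matching code; note that the Hellinger metric assumes non-negative features, and Canberra is undefined when both entries of a dimension are zero.

```python
import numpy as np

def euclidean(x, y):
    return np.sqrt(np.sum(np.abs(x - y) ** 2))                    # Eq. (12)

def lorentzian(x, y):
    return np.sum(np.log(1.0 + np.abs(x - y)))                    # Eq. (13)

def hellinger(x, y):
    # assumes non-negative feature values, as the square roots require
    return np.sqrt(2.0 * np.sum((np.sqrt(x) - np.sqrt(y)) ** 2))  # Eq. (14)

def canberra(x, y):
    return np.sum(np.abs(x - y) / (np.abs(x) + np.abs(y)))        # Eq. (15)

def identify(test_vec, templates, metric=canberra):
    # nearest-template identification: a smaller distance is a better match
    scores = [metric(test_vec, t) for t in templates]
    return int(np.argmin(scores))
```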
To reach the goals defined in previous chapters, we first experimented with individual biometric modalities. To check the designed algorithms, we experimented with standard databases for both face and fingerprint; trends in these initial experiments were conducive. We then experimented with the palm print and palm vein modalities. The choice of which blocks of DCT coefficients to use for taking the standard deviation is also important, so we followed this trend and also tweaked it to see the effect on system performance with the palm print, fingerprint and palm vein modalities. Initially, to decide how many testing and training samples to take, we conducted experiments with varying training and testing sets. This chapter gives a detailed account of experiments with the fingerprint database, palm print database, face database (using video as well as images) and palm vein database, with the respective performance indices. It also includes comparisons of recognition performance for each modality individually; this comparison is made for a common number of people in each case (100 people) for fingerprint, palm print and palm vein. Finally, taking inferences from the experiments with individual modalities into account, we implement the fusion scheme for fingerprint and face, fingerprint and palm print, fingerprint and palm vein, and palm print and palm vein. We also compare the results at the end for the fusion technique using different multimodal techniques.
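The FAR, FRR and GAR figures reported in the tables below can be computed from genuine and impostor distance scores as follows; the convention that a trial is accepted when its distance is at or below the threshold is an assumption for illustration.

```python
import numpy as np

def far_frr_gar(genuine_scores, impostor_scores, threshold):
    # distance scores: a trial is accepted when its score <= threshold
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    frr = 100.0 * np.mean(genuine > threshold)    # genuine trials rejected
    far = 100.0 * np.mean(impostor <= threshold)  # impostor trials accepted
    return far, frr, 100.0 - frr                  # GAR = 100 - FRR
```

Sweeping the threshold and recording (FAR, FRR) pairs produces the curves shown in the figures of this chapter.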
5.1 Experiment with FVC2002 database

Table 1. Results with FVC2002 taking lower frequency blocks of DCT coefficients mentioned in figure ().

Distance Metric   FAR    FRR    GAR   (40 people)
Euclidean         7.5    6.25   93.75
Canberra          6.25   5      95

Following figures show graphical representations of Table 1
Fig. 5.1 (a) FAR & FRR for Euclidean Distance (b) FAR & FRR for Canberra Distance

Table 2. Results with FVC2002 taking higher frequency blocks of DCT coefficients mentioned in figure ().

Distance Metric   FAR    FRR    GAR   (40 people)
Euclidean         2.5    2.5    97.5
Canberra          1.25   1.25   98.75
Following figures show graphical representations of Table 2
Fig. 5.2 (a) FAR & FRR for Euclidean Distance (b) FAR & FRR for Canberra Distance
Inferences:
Results from Tables 1 and 2 clearly show that FAR and FRR are lower (and GAR higher) when the higher frequency coefficients are taken. This is due to the fact that fingerprints contain edges, so their information lies in the higher frequencies.
5.2 Experiments with fingerprint database for varying no. of training sets

Table 3. Results taking 6 training and 2 testing images for varying threshold

Distance Metric   40 people (FAR, FRR, GAR)   65 people (FAR, FRR, GAR)
Euclidean         2.5, 0, 100                 6.25, 2.5, 97.5
Euclidean         1.25, 8.75, 91.25           2.5, 16.25, 83.75

Following figures show graphical representations of Table 3
Fig. 5.3 (a) FAR & FRR for Euclidean Distance (40 per) (b) FAR & FRR for Euclidean Distance (65 per)

Table 4. Results taking 4 training and 2 testing images for varying threshold

Distance Metric   40 people (FAR, FRR, GAR)   65 people (FAR, FRR, GAR)
Euclidean         5, 0, 100                   11.25, 11.25, 88.75
Euclidean         2.5, 13.75, 84.25           4.5, 33.75, 66.25

Following figures show graphical representations of Table 4
Fig. 5.4 (a) FAR & FRR for Euclidean Distance (40 per) (b) FAR & FRR for Euclidean Distance (65 per)
Inference:
The experiments show that the results for 6 training and 2 testing images are better than those for 4 training and 2 testing images. It can also be seen that there is a rapid increase in FAR and FRR as the database grows further. So for fingerprint we take 6 training images and 2 testing images.
5.3 Experiments with COEP fingerprint database

After experimenting with the FVC2002 dataset, we experimented with our own generated dataset. Here we discuss experimentation with the fingerprint database of COEP. The following table shows results for the middle finger dataset.

Table 5. Results with COEP database for 50 and 100 subjects

Distance Metric   50 people (FAR, FRR, GAR)   (FAR, FRR, GAR)     100 people (FAR, FRR, GAR)
Canberra          0.0, 0.0, 100               1.0, 0.0, 100       0.5, 2.0, 98.0
Manhattan         1.0, 0.0, 100               0.50, 0.0, 100      0.0, 0.50, 99.5
Euclidean         0.0, 0.0, 100               1.33, 1.67, 98.33   -, -, -
Gower             1.0, 0.0, 100               0.50, 0.0, 100.0    0.0, 0.5, 99.5
Following figures show graphical representations of Table 5
Fig. 5.5 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
Graph for 100 subjects
Fig. 5.6 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
Table 6. Results with COEP database for 150 subjects

Distance Metric   150 people (FAR, FRR, GAR)   (FAR, FRR, GAR)
Canberra          1.33, 0.0, 100               -, -, -
Manhattan         0.67, 2.0, 98.0              1.33, 0.67, 99.33
Euclidean         2.0, 2.67, 97.33             -, -, -
Gower             0.67, 2.0, 98.00             2.0, 0.0, 100

Following figures show graphical representations of Table 6
Fig. 5.7 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
Table 7. Results with COEP database for 200 subjects

Distance Metric   200 people (FAR, FRR, GAR)   (FAR, FRR, GAR)
Canberra          1.75, 1.25, 98.5             2.0, 0.0, 100
Manhattan         2.0, 0, 100.0                1.0, 1.25, 98.75
Euclidean         3.0, 1.0, 99                 -, -, -
Gower             2.0, 0, 100.0                1.0, 1.25, 98.75

Following figures show graphical representations of Table 7
Fig. 5.8 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
Inference:
The genuine acceptance rates in the preceding tables do not fall steeply as the number of entries in the database grows, which can be attributed to the stability of the middle-finger features. False acceptance rates, however, increase or remain almost constant with database size. Here Canberra distance performs better in terms of GAR, while Hellinger distance provides a more secure system with a similar GAR in both cases. This is because Canberra distance normalizes each feature difference, while Hellinger distance separates features well.
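The distance metrics compared throughout these tables can be sketched in a few lines of Python. This is an illustrative sketch, not the thesis implementation; in particular, the simplified Gower distance below assumes the per-feature ranges over the whole database are supplied by the caller.

```python
import math

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Sum of absolute per-feature differences (city-block).
    return sum(abs(x - y) for x, y in zip(a, b))

def canberra(a, b):
    # Each term is normalized by the magnitude of the pair,
    # so no single large feature dominates the score.
    return sum(abs(x - y) / (abs(x) + abs(y))
               for x, y in zip(a, b) if abs(x) + abs(y) > 0)

def gower(a, b, ranges):
    # Simplified Gower distance for numeric features: mean of
    # per-feature differences scaled by each feature's range.
    return sum(abs(x - y) / r for x, y, r in zip(a, b, ranges)) / len(a)
```

The per-term normalization in `canberra` is what the inference above refers to: large-magnitude features cannot dominate the distance.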
5.3 Experiments with COEP palm print database
First, the Region of Interest (ROI) is cropped from the palm print image; the procedure is described in the proposed methodology. We have experimented with our own generated dataset; here we discuss experimentation with the palm print database of COEP. The following tables show results for the palm print dataset. The experiment is done by taking 2 testing images and 3 training images of the palm print for each subject, where each image has been reduced to a feature vector of dimension 1 x 38.
Table 8. Results with COEP database

Distance    | 50 people             | 100 people
Metric      | FAR    FRR    GAR     | FAR    FRR    GAR
Canberra    | 0.0    0.0    100     | 1.0    0.0    100
Manhattan   | 1.0    0.0    100     | 1.50   1.50   98.5
Euclidean   | 1.0    0.0    100     | 3.5    1.0    99.0
Gower       | 1.0    0.0    100     | 1.50   1.0    99.0
Following figures show graphical representations of Table 8.

Fig. 5.9 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance

Fig. 5.10 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance
Table 9. Results with COEP database for 150 subjects (readings taken at two thresholds)

Distance    | Threshold 1           | Threshold 2
Metric      | FAR    FRR    GAR     | FAR    FRR    GAR
Canberra    | 1.33   0.0    100     | 0.67   1.67   98.33
Manhattan   | 1.67   1.0    99      | 1.0    2.67   97.33
Euclidean   | 1.67   3      97      | -      -      -
Gower       | 1.33   1.33   98.67   | -      -      -
Following figures show graphical representations of Table 9.

Fig. 5.11 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance
Inference:
As with the earlier experiments, the genuine acceptance rate in Tables 8 and 9 does not fall steeply as the database grows from 50 to 150 subjects, which can be attributed to the stability of the palm print features. False acceptance rates increase or remain almost constant with database size. Canberra distance again performs best in terms of GAR, because it normalizes each feature difference.
5.3 Experiments with COEP palm vein database
First, the Region of Interest (ROI) is cropped from the palm image; the procedure for extracting the ROI is described in the proposed methodology section. We have experimented with our own generated dataset; here we discuss experimentation with the palm vein database of COEP. The following table shows results for the palm vein dataset. The experiment is done by taking 2 testing images and 3 training images of the palm veins for each subject, where each image has been reduced to a feature vector of dimension 1 x 38.
Table 10. Results with COEP database for palm veins

Distance    | 50 people             | 100 people
Metric      | FAR    FRR    GAR     | FAR    FRR    GAR
Canberra    | 0.0    0.0    100     | 0.5    0      100
Manhattan   | 1      1      99      | 0.5    1.0    99
Euclidean   | 1      2      98      | 1      3      97
Gower       | 1      1      99      | 0.5    1      99
Following figures show graphical representations of Table 10.

Fig. 5.12 FAR & FRR for (a) Euclidean Distance, (b) Gower Distance, (c) Canberra Distance, (d) Manhattan Distance

Fig. 5.13 FAR & FRR for (a) Euclidean Distance, (b) Gower Distance, (c) Canberra Distance, (d) Manhattan Distance
Inference:
The palm vein results in Table 10 follow the same pattern: the genuine acceptance rate remains at or near 100% as the database grows from 50 to 100 subjects, and the false acceptance rate stays small. Canberra distance gives the best combination of FAR and GAR (0% FAR and 100% GAR at 50 subjects), again because it normalizes each feature difference.
5.4 Result of fusion of Palm Print and Palm Vein
Experiments with palm print and palm vein were previously carried out separately. Now we fuse the two modalities. In implementing the fusion strategy, we use a feature-level fusion scheme, so the individual feature sets are concatenated together. This increases the number of computations, as the feature length almost doubles.
We had already stored the features of the training and testing sets as text files while performing the unimodal experiments. Here the fused feature vector has dimension 1 x 76. During fusion, the palm print and palm vein images used belong to the same person. The experiment is done by taking 2 testing images and 3 training images each of palm print and palm vein.
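The feature-level fusion step described above amounts to simple concatenation; a minimal sketch, assuming each modality has already been reduced to its unimodal feature vector:

```python
def fuse_features(palm_print_vec, palm_vein_vec):
    # Feature-level fusion: concatenate the two unimodal
    # feature vectors into one longer vector. Two 1 x 38
    # vectors yield the 1 x 76 fused vector used here.
    return list(palm_print_vec) + list(palm_vein_vec)
```

Matching is then performed on the fused vector with the same distance metrics as in the unimodal experiments.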
Table 11. Results of fusion of palm veins and palm print

Distance    | 50 people             | 100 people
Metric      | FAR    FRR    GAR     | FAR    FRR    GAR
Canberra    | 0.0    0.0    100     | 0.0    0.0    100
Manhattan   | 0      0      100     | 0.0    0.0    100
Euclidean   | 0      0      100     | 1.0    0.0    100
Gower       | 0      0      100     | 0.0    0.0    100
Following figures show graphical representations of Table 11 for 50 subjects.

Fig. 5.14 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance
Following figures show graphical representations of Table 11 for 100 subjects.

Fig. 5.15 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance
Inference:
The results for the fusion of palm print and palm vein show a great improvement in the parameters. All distances achieve 100% GAR, and for the small dataset all distances perform superbly with 0% FAR and FRR. It is observed that as the database grows the FAR of Euclidean distance increases, but only by 1%, which is still less than the FAR of the unimodal Euclidean system for 100 subjects. We can recommend this bimodal system, as it shows better results than either individual modality.
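The FAR, FRR and GAR figures reported throughout these tables come from applying a decision threshold to match distances; a minimal sketch of that computation (illustrative only; `genuine` and `impostor` are hypothetical lists of distances, not thesis data):

```python
def far_frr(genuine, impostor, threshold):
    # genuine: distances for same-person comparisons.
    # impostor: distances for different-person comparisons.
    # A match is declared whenever distance <= threshold.
    far = sum(d <= threshold for d in impostor) / len(impostor) * 100
    frr = sum(d > threshold for d in genuine) / len(genuine) * 100
    gar = 100 - frr  # genuine acceptance rate, in percent
    return far, frr, gar
```

Sweeping `threshold` trades FAR against FRR, which is why several tables report readings at more than one threshold.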
Table 12. Results of fusion of palm veins and palm print of the exchanged database
Note: readings are taken at varying thresholds.

Distance    | 50 people             | 100 people
Metric      | FAR    FRR    GAR     | FAR    FRR    GAR
Canberra    | 0.0    0.0    100     | 2      0      100
Canberra    | 0.0    0.0    100     | 1      1      99

The following figure shows a graphical representation of Table 12 for 100 subjects.
Fig. 5.16 FAR & FRR for Canberra Distance
Inference:
The results for the fusion of palm print and palm vein on the exchanged database vary compared with the previous results: FAR and FRR increase as the database grows to 100 subjects. Hence we can say that the palm print and palm vein modalities are highly correlated.
Time Complexity
The fusion of palm print and palm vein is computationally light: the time taken by the system to load the database and test one subject using 2 test sets is 0.161 s.
Inference:
The palm print and palm vein fusion technique is highly efficient owing to its ease of acquisition, its accuracy (100% GAR) and its low time complexity. Hence we can also implement it in real-time operation.
5.5 Result of fusion of fingerprint and Palm Vein
Experiments with fingerprint and palm vein were previously carried out separately. Now we fuse the two modalities. In implementing the fusion strategy, we use a feature-level fusion scheme, so the individual feature sets are concatenated together. This increases the number of computations, as the feature length almost doubles.
We had already stored the features of the training and testing sets as text files while performing the unimodal experiments. Here the fused feature vector has dimension 1 x 57. The experiment is done by taking 2 testing images each of fingerprint and palm vein.
Table 13. Results of fusion of palm veins and fingerprint database

Distance    | 50 people             | 100 people
Metric      | FAR    FRR    GAR     | FAR    FRR    GAR
Canberra    | 0.0    0.0    100     | 0.50   0.0    100
Manhattan   | 1.0    0      100     | 0.50   0.0    100
Euclidean   | 1.0    0      100     | 1.0    1.0    99
Gower       | 0.0    0      100     | 1.0    1.5    98.5
Following figures show graphical representations of Table 13 for 50 subjects.

Fig. 5.17 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance
Following figures show graphical representations of Table 13 for 100 subjects.

Fig. 5.18 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance
Inference:
The results for the fusion of fingerprint and palm vein show a great improvement in the parameters. All distances achieve 100% GAR for the smaller dataset. Canberra distance performs superbly with 0% FAR and FRR for the database of 50 subjects; although its FAR rises to 0.5% for 100 subjects, GAR is still maintained at 100%. This is because Canberra distance normalizes each feature difference. GAR also remains at 100% with Manhattan distance. Euclidean distance has a constant FAR of 1%, which is still less than or equal to the unimodal FAR with the same distance. However, GAR decreases when Gower distance is used. We can recommend this bimodal system where a low FAR is required.
Time Complexity
The time complexity of the fingerprint and palm vein fusion is also measured. The time taken by the system to load the database and test one subject using 2 test sets is 0.371 s.
5.6 Result of fusion of fingerprint and Palm print
Experiments with fingerprint and palm print were previously carried out separately. Now we fuse the two modalities. In implementing the fusion strategy, we use a feature-level fusion scheme, so the individual feature sets are concatenated together. This increases the number of computations, as the feature length almost doubles.
We had already stored the features of the training and testing sets as text files while performing the unimodal experiments. Here the fused feature vector has dimension 1 x 57. The experiment is done by taking 2 testing images each of fingerprint and palm print.
Table 14. Results of fusion of palm print and fingerprint database

Distance    | 50 people             | 100 people
Metric      | FAR    FRR    GAR     | FAR    FRR    GAR
Canberra    | 0.0    0.0    100     | 0.50   0.0    100
Manhattan   | 0.0    0      100     | 0.50   0.50   99.5
Euclidean   | 0.0    0      100     | 1.0    1.0    99
Gower       | 1.0    0      100     | 0.50   0.50   99.5
Following figures show graphical representations of Table 14 for 50 subjects.

Fig. 5.19 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance
Following figures show graphical representations of Table 14 for 100 subjects.

Fig. 5.20 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance
Table 15. Results of fusion of palm print and fingerprint database for 150 subjects with different thresholds

Distance    | Threshold 1           | Threshold 2
Metric      | FAR    FRR    GAR     | FAR    FRR    GAR
Canberra    | 1.0    0.0    100     | 0.67   0.67   99.33
Manhattan   | 2.0    0.0    100     | 1.0    2.0    98.0
Euclidean   | 1.33   0.0    100     | 1.67   1.0    99.0
Gower       | 2.0    0.0    100     | 1.0    2.0    98.0
Following figures show graphical representations of Table 15 for 150 subjects.

Fig. 5.20 FAR & FRR for (a) Gower Distance, (b) Euclidean Distance, (c) Canberra Distance, (d) Manhattan Distance
Inference:
The results for the fusion of fingerprint and palm print show a great improvement in the parameters. All distances achieve 100% GAR for the smaller datasets. Canberra distance performs superbly with 0% FRR and 100% GAR for the database of 150 subjects. Its FAR increases to 0.67% for 150 subjects, compared with 0.5% for 100, which is still about half the FAR obtained with the same distance in the unimodal palm print and fingerprint systems. Manhattan distance gives a better GAR and lower FAR than Euclidean distance, and Gower distance gives a better GAR on the larger database. We can recommend this bimodal system, as it has a low FAR of 0.67% and a GAR of 99.33%.
Time Complexity
The time complexity of the fingerprint and palm print fusion is also measured. The time taken by the system to load the database and test one subject using 2 test sets is 0.4813 s, so we can implement it in real-time systems as well.
5.7 Experiments with ORL database for Face modality
The proposed system is first applied to the ORL database to check its accuracy. The system is tested using only one image; here the input is an image. From the literature survey it is well known that face information lies in the lower frequencies, hence the lower-frequency block is taken.
Table 16. Results with ORL database taking lower-frequency blocks of DCT coefficients mentioned in figure () for varying thresholds

Distance    | 40 people
Metric      | FAR    FRR    GAR
Euclidean   | 3.75   3.75   96.25
Euclidean   | 1.25   6      94

The following figure shows a graphical representation of Table 16.
Fig. 5.21 FAR & FRR for Euclidean Distance

Inference:
The results for the face modality show that a low FAR can be achieved, but only with a higher value of FRR.

After applying histogram equalization to the face we get a better quality image.
Fig 5.1 Image before histogram equalization    Fig 5.2 Image after histogram equalization
Fig 5.3 Image before histogram equalization    Fig 5.4 Image after histogram equalization
Figures 5.1 to 5.4 show that histogram equalization improves the image quality, which is very much required for face recognition, especially where half of the face is poorly illuminated, as will be the case in many of our images.
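The histogram equalization used above can be sketched as follows. This is a minimal pure-Python version for an 8-bit grayscale image stored as nested lists; the actual thesis implementation may differ in detail.

```python
def histogram_equalize(img, levels=256):
    # img: 2-D list of grayscale values in [0, levels - 1].
    flat = [p for row in img for p in row]
    n = len(flat)
    # Histogram and cumulative distribution function (CDF).
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Map each used level so the output histogram is roughly flat,
    # spreading the occupied intensity range over the full scale.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[p] for p in row] for row in img]
```

A low-contrast face (all pixels bunched in a narrow band) comes out stretched across the full intensity range, which is what improves recognition under uneven illumination.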
 Salt & Pepper Noise
Fig. 5.5 Before histogram equalization    Fig. 5.6 After histogram equalization
 Gaussian Noise
Fig. 5.7 Before histogram equalization    Fig. 5.8 After histogram equalization
The above results show that a noisy image cannot be cleaned by histogram equalization, so if the incoming image is noisy we must first apply a filter to remove the noise. To remove salt and pepper noise, a median filter can be applied. Results after filtering:
Salt and pepper noise
Fig 5.9 Images after applying median and Gaussian LPF on an image suffering from salt & pepper noise.
After applying the median and Gaussian low-pass filters, we find that the median filter works for salt and pepper noise while the Gaussian low-pass filter does not: black and white spots remain. Hence we use the median filter for this kind of noise.
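The median filtering step can be sketched as below (an illustrative 3x3 version; border pixels are simply copied through). The median works on impulse noise because an extreme salt or pepper value can never be the middle element of its neighbourhood.

```python
def median_filter3(img):
    # 3x3 median filter for a grayscale image given as nested lists.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # borders keep original values
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sorted(window)[4]  # median of 9 values
    return out
```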
Gaussian Noise
Fig 5.10 Images after applying median and Gaussian LPF on an image suffering from Gaussian noise.
After applying the median and Gaussian low-pass filters, we find that the Gaussian low-pass filter works somewhat for Gaussian noise, although some noise remains and the image gets blurred. The median filter does not remove Gaussian noise, so it should not be used for it.
Poisson Noise
Fig 5.11 Images after applying median and Gaussian LPF on an image suffering from Poisson noise.
After applying the median and Gaussian low-pass filters, we find that the Gaussian low-pass filter works well for Poisson noise, although the image gets blurred. The median filter does not remove Poisson noise efficiently, so it should not be used for Poisson noise either.
We can reduce the dimension of an image, which is simply a 2-D array. Here we use the PCA algorithm for dimensionality reduction. We first convert each image into a single-column vector. After combining all the images, we find their eigenvalues and eigenvectors and build a projection matrix from the eigenvectors having non-zero eigenvalues. We then project the images onto that projection matrix to obtain a matrix of reduced dimension.
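The PCA projection described above can be sketched as follows. This is an illustrative pure-Python version that uses power iteration to recover only the dominant eigenvector of the covariance matrix; the thesis keeps every eigenvector with a non-zero eigenvalue, and real implementations on full-size images use more efficient decompositions.

```python
def power_iteration(mat, steps=200):
    # Dominant eigenvector of a small symmetric matrix,
    # found by repeated multiplication and normalization.
    n = len(mat)
    v = [1.0] * n
    for _ in range(steps):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def pca_project(samples):
    # samples: equal-length feature vectors, one per image
    # (each image already flattened to a single column).
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    centered = [[s[j] - mean[j] for j in range(d)] for s in samples]
    # Covariance matrix of the mean-centered data.
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    axis = power_iteration(cov)
    # Project every sample onto the dominant principal axis.
    return [sum(c[j] * axis[j] for j in range(d)) for c in centered]
```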
Image Size | Size after conversion  | No. of eigenvector columns   | Size of projection | Dimension of reduced
           | to single column (Is)  | with non-zero eigenvalues    | matrix (Pmat)      | image ((Pmat)^T x Is)
30x30      | 900x1                  | 3                            | 900x3              | 3x1
50x50      | 2500x1                 | 3                            | 2500x3             | 3x1
100x100    | 10000x1                | 4                            | 10000x4            | 4x1

Table 5.1 shows dimensionality reduction using the PCA algorithm
Hence we can use the PCA algorithm to reduce the dimension of the images; this also reduces the number of calculations required and the memory needed to store the images, which can later be used for template matching.
The results show robust face detection. Here the size of the face does not matter, as long as it is more than 50x50 pixels.
Fig 5.9 and 5.10 show detection of faces in images.
No. of Images | Type    | Face Size (approx.) | Detected | FAR | FRR
10            | Frontal | 100x150             | 10       | 0   | 0
20            | Frontal | 100x150             | 20       | 0   | 0
30            | Frontal | 100x150             | 30       | 0   | 0
40            | Frontal | 100x150             | 40       | 0   | 0
50            | Frontal | 100x150             | 50       | 0   | 0

Table 5.1 shows FAR and FRR for face detection
FAR for Frontal Face Detection    FRR for Frontal Face Detection
No. of Images | Type    | Face Size (approx.) | Detected | FAR | FRR
10            | Frontal | 200x250             | 10       | 0   | 0
20            | Frontal | 200x250             | 20       | 0   | 0
30            | Frontal | 200x250             | 30       | 0   | 0
40            | Frontal | 200x250             | 40       | 0   | 0
50            | Frontal | 200x250             | 50       | 0   | 0

Table 5.2 shows FAR and FRR for face detection
FAR for Frontal Face Detection FRR for Frontal Face Detection
No. of Images | Type | Face Size (approx.) | Detected | FAR | FRR
10            | Side | 200x250             | 3        | 0   | 7

Table 5.3 shows FAR and FRR for face detection of side faces
FRR for Side Face Detection FAR for Side Face Detection
It can be seen that we have developed a robust face detection algorithm that can detect small faces in a frame or image, provided they are larger than 50x50 pixels. The algorithm used for face detection is the Viola-Jones algorithm, but the experimental results show that it cannot cope with side faces and fails to detect them. Using symmetry we could recover the other side of the face, but this takes time and makes our system more complex.
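The speed of the Viola-Jones detector rests on the integral image, which lets any rectangular Haar feature be evaluated in constant time regardless of its size. A minimal sketch of that building block (not the full cascade of classifiers):

```python
def integral_image(img):
    # Summed-area table: ii[y][x] holds the sum of all pixels
    # above and to the left of (x, y), inclusive.
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        run = 0
        for x in range(w):
            run += img[y][x]
            ii[y][x] = run + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    # Sum of any rectangle in O(1) with four table lookups,
    # which is what makes Haar-feature evaluation fast.
    total = ii[bottom][right]
    if top:
        total -= ii[top - 1][right]
    if left:
        total -= ii[bottom][left - 1]
    if top and left:
        total += ii[top - 1][left - 1]
    return total
```

A Haar feature is then just a difference of two or three such rectangle sums, and thousands of them can be tested per candidate window cheaply.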
No. of  | Type of | 200x200 input               | 256x256 input
images  | Wavelet | Time (s) | Low-freq. size   | Time (s) | Low-freq. size
40      | Haar    | 0.67     | 100x100          | 1.09     | 128x128
60      | Haar    | 0.78     | 100x100          | 1.42     | 128x128
40      | db4     | 1.14     | 103x103          | 1.81     | 131x131
60      | db4     | 1.98     | 103x103          | 2.76     | 131x131
40      | db8     | 2.15     | 107x107          | 3.125    | 135x135
60      | db8     | 3.05     | 107x107          | 4.229    | 135x135

Table 5.4 shows comparison of time requirements for Haar and Daubechies wavelets
It can be seen that the time taken by the Haar wavelet is much less, almost half that of 'db4', while 'db8' takes about three times longer than Haar. It can also be seen that the number of pixels in the low-frequency band is higher for 'db4' and 'db8', which further increases the time of the operations performed afterwards. Hence we use the Haar wavelet, which is simpler than the other wavelets.
Fig 5.11 and 5.12 show images after the Haar wavelet transform, which are used for feature extraction.
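One level of the Haar decomposition used here can be sketched as follows. Only the low-frequency (LL) band is computed, since that is the band used for features; this is an illustrative sketch, not the thesis code.

```python
def haar2d_level(img):
    # One level of the 2-D Haar wavelet transform: each 2x2 block
    # averages to one LL coefficient, halving both dimensions
    # (e.g. 200x200 -> 100x100, matching Table 5.4).
    h, w = len(img) // 2 * 2, len(img[0]) // 2 * 2
    ll = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            a, b = img[y][x], img[y][x + 1]
            c, d = img[y + 1][x], img[y + 1][x + 1]
            row.append((a + b + c + d) / 4)  # LL coefficient
        ll.append(row)
    return ll
```

The detail (LH, HL, HH) bands would be the corresponding differences; they are discarded here because face identity information concentrates in the low frequencies.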
Table shows the results for face recognition using videos.

No. of Videos | No. of Training | No. of Images Taken | FAR | FRR
Taken         | Images          | for Recognition     |     |
4             | 4               | 2                   | 0   | 0
6             | 4               | 2                   | 0   | 0
3             | 3               | 2                   | 0   | 0
5             | 4               | 2                   | 0   | 0
6             | 3               | 2                   | 0   | 0
Graph showing FAR and FRR for face recognition.
We have proposed a secure and computationally efficient biometric authentication system for face recognition. For the face recognition system, we used histogram equalization followed by the Haar wavelet, along with a PCA+LDA framework for feature reduction and dominant feature selection.
 Distance of the face from the camera does not matter much, but the face must be more than 50x50 pixels in a frame to be detected. Side face detection is still a problem.
 From the results it is clear that the implemented histogram equalization shows an improvement in the picture quality of noise-free images.
 Histogram equalization does not remove noise.
 Using the Haar wavelet for feature extraction reduces the complexity of the program in comparison with Gabor or other complex wavelets.
 Employing LDA after PCA not only reduces the overall dimensionality of the system but also increases the inter-class difference, which is the desired effect.
 Integration of information at an earlier stage (feature level) and its segregation into separate classes using Linear Discriminant Analysis leads to better results. This is because LDA tries to maximize the ratio of the between-class scatter matrix to the within-class scatter matrix; the discriminating features are retained by LDA, whose input is the concatenated features from the individual modalities.
 A localized histogram can be used to increase the quality of the input image, as it will further enhance the picture. [31]
 For feature extraction, different wavelets such as Daubechies [18] and cubic B-spline [19] can be used.
 DCT can also be applied after the wavelet transform, as it has been found in much of the literature to improve recognition [20][21].
 Multimodal biometrics can be used to protect the system from spoof attacks [33].
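The LDA step referred to above can be illustrated in its simplest setting, two classes with 2-D features, where the Fisher direction has the closed form w = Sw^-1 (m1 - m2). This is an illustrative sketch, not the multi-class implementation used in the thesis.

```python
def lda_direction(class_a, class_b):
    # Two-class Fisher LDA: the projection direction that maximizes
    # between-class scatter relative to within-class scatter.
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]
    ma, mb = mean(class_a), mean(class_b)
    # Within-class scatter matrix Sw (2x2), pooled over both classes.
    sw = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class_a, ma), (class_b, mb)):
        for p in pts:
            dx, dy = p[0] - m[0], p[1] - m[1]
            sw[0][0] += dx * dx; sw[0][1] += dx * dy
            sw[1][0] += dy * dx; sw[1][1] += dy * dy
    # Invert the 2x2 matrix directly and apply to the mean difference.
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * d[0] + inv[0][1] * d[1],
            inv[1][0] * d[0] + inv[1][1] * d[1]]
```

Projecting the PCA-reduced (or fused) features onto such directions is what widens the inter-class gap noted in the conclusions.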
References

[1] A. K. Jain, P. Flynn, A. A. Ross (Eds.), Handbook of Biometrics, Springer Science & Business Media, 2008.
[2] Chengjun Liu and Harry Wechsler, "Independent Component Analysis of Gabor Features for Face Recognition", IEEE Transactions on Neural Networks, Vol. 14, No. 4, July 2003.
[3] Syed Maajid Mohsin, Muhammad Younus Javed, Almas Anjum, "Face Recognition using Bank of Gabor Filters", IEEE 2nd International Conference on Emerging Technologies, Peshawar.
[4] Victor-Emil Neagoe, John Mitrache, "A Feature-Based Face Recognition Approach Using Gabor Wavelet Filters Cascaded With Concurrent Neural Modules", World Automation Congress (WAC), July 24-26, 2006, Budapest, Hungary.
[5] Zhi-Kai Huang, Wei-Zhong Zhang, Hui-Ming Huang, Ling-Ying Hou, "Using Gabor Filters Features for Multi-Pose Face Recognition in Color Images", IEEE Second International Symposium on Intelligent Information Technology Application, 2008.
[6] P. K. Suri, Ekta Walia and Amit Verma, "Novel Face Detection Using Gabor Filter Bank with Variable Threshold", Proceedings of Springer International Conference on High Performance Architecture and Grid Computing, July 19-20, 2011.
[8] Xiaoyang Tan and Bill Triggs, "Fusing Gabor and LBP Feature Sets for Kernel-based Face Recognition", Proceedings of 3rd Springer International Conference on Analysis and Modeling of Faces and Gestures, 2007.
[9] Poonam Sharma, K. V. Arya, R. N. Yadav, "Efficient face recognition using wavelet-based generalized neural network", Elsevier Signal Processing, Vol. 93, Issue 6, pp. 1557-1565, June 2013.
[10] Shufu Xie, Shiguang Shan, Xilin Chen, Xin Meng, Wen Gao, "Learned local Gabor patterns for face representation and recognition", Elsevier Signal Processing, Vol. 89, Issue 2, pp. 2333-2344, 2009.
[11] Lin-Lin Huang, Akinobu Shimizu, Hidefumi Kobatake, "Robust face detection using Gabor filter features", Pattern Recognition Letters, Vol. 26, Issue 11, pp. 1641-1649, 2005.
[12] Bhaskar Gupta, Sushant Gupta, Arun Kumar Tiwari, "Face Detection Using Gabor Feature Extraction and Artificial Neural Network", IEEE First Workshop on Image Processing and Applications, pp. 1-6, 2008.
[13] Md. Tajmilur Rahman and Md. Alamin Bhuiyan, "Face Recognition using Gabor Filters", Proceedings of 11th IEEE International Conference on Computer and Information Technology, Khulna, Bangladesh, 2008.
[14] A. N. Rajagopalan, K. Srinivasa Rao, Y. Anoop Kumar, "Face recognition using multiple facial features", Elsevier Pattern Recognition Letters, Vol. 28, pp. 335-341, 2007.
[15] Young-Jun Song, Young-Gil Kim, Un-Dong Chang, Heak Bong Kwon, "Face recognition robust to left/right shadows; facial symmetry", Elsevier Pattern Recognition, Vol. 39, pp. 1542-1545, 2009.
[16] D. Sridhar, "Face Image Classification Using Combined Classifier", IEEE International Conference on Signal Processing, Image Processing and Pattern Recognition, 2013.
[17] Kamran Etemad and Rama Chellappa, "Discriminant analysis for recognition of human face images", Springer Lecture Notes in Computer Science, Vol. 1206, pp. 125-142, 1997.
[18] Yong Chen, "Face Recognition Using Cubic B-spline Wavelet Transform", IEEE Pacific-Asia Workshop, 2008.
[19] Mohamed El Aroussi, "Curvelet-Based Feature Extraction with B-LDA for Face Recognition", IEEE, 2009.
[20] Meihua Wang, Hong Jiang and Ying Li, "Face Recognition based on DWT/DCT and SVM", International Conference on Computer Application, 2010.
[21] Hazim Kemal Ekenel, Rainer Stiefelhagen, "Local Appearance Based Face Recognition Using DCT".
[22] Aman R. Chadha, Pallavi P. Vaidya, "Face Recognition Using DCT with Local and Global Features", IEEE International Conference, 2011.
[23] Meng Joo Er, "High-Speed Face Recognition Based on Discrete Cosine Transform and RBF Neural Networks", IEEE Transactions on Neural Networks, Vol. 16, May 2005.
[24] Muhammad Azam, "Discrete Cosine Transform (DCT) Based Face Recognition in Hexagonal Images", IEEE, 2010.
[25] Kamran Etemad and Rama Chellappa, "Discriminant analysis for recognition of human face images", Springer Lecture Notes in Computer Science, Vol. 1206, pp. 125-142, 1997.
[26] Haifeng Hu, "Variable lighting face recognition using discrete wavelet transform", Elsevier Pattern Recognition Letters, Vol. 32, pp. 1526-1534, 2011.
[27] X. Cao, W. Shen, L. G. Yu, Y. L. Wang, J. Y. Yang, Z. W. Zhang, "Illumination invariant extraction for face recognition using neighboring wavelet coefficients", Elsevier Pattern Recognition, Vol. 45, pp. 1299-1305, 2012.
[28] K. Jaya Priya, R. S. Rajesh, "Local Fusion of Complex Dual-Tree Wavelet Coefficients Based Face Recognition for Single Sample Problem", Elsevier Procedia Computer Science, Vol. 2, pp. 94-100, 2010.
[29] M. Koteswara Rao, "Face recognition using DWT and eigenvectors", 1st International Conference on Emerging Technology Trends in Electronics, 2012.
[30] Sidra Batool Kazmi, "Wavelets Based Facial Expression Recognition Using a Bank of Neural Networks", IEEE, 2010.
[31] Tripti Goel, "Rescaling of Low Frequency DCT Coefficients with Kernel PCA for Illumination Invariant Face", IEEE, 2012.
[32] Sung-Hyuk Cha, "Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions", International Journal of Mathematical Models and Methods in Applied Sciences, Issue 4, Volume 1, 2007.
[33] Djamel Bouchaffra, Abbes Amira, "Structural hidden Markov models for biometrics: Fusion of face and fingerprint", Elsevier Pattern Recognition, Vol. 41, pp. 852-867, 2008.
[34] R. Raghavendra, Bernadette Dorizzi, Ashok Rao, G. Hemantha Kumar, "Designing efficient fusion schemes for multimodal biometric systems using face and palmprint", Elsevier Pattern Recognition, Vol. 44, pp. 1076-1088, 2011.
[35] P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, pp. 711-720, 1997.
[36] P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, pp. 711-720, 1997.
[37] R. Jafri and H. R. Arabnia, "A Survey of Face Recognition Techniques", Journal of Information Processing Systems, Vol. 5, No. 2, June 2009.
[38] Sanchit, Maurício Ramalho, Paulo Lobato Correia, Luís Ducla Soares, "Biometric Identification through Palm and Dorsal Hand Vein Patterns", 2011.
[39] C.-L. Lin and K.-C. Fan, "Biometric verification using thermal images of palm-dorsa vein patterns", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, pp. 199-213, Feb. 2004.
[40] Goh Kah Ong Michael, Tee Connie, Andrew Teoh Beng Jin, "Design and Implementation of a Contactless Palm Print and Palm Vein Sensor", 11th International Conference on Control, Automation, Robotics and Vision, Singapore, December 2010.
[41] Huan Zhang, Dewen Hu, "A Palm Vein Recognition System", International Conference on Intelligent Computation Technology and Automation, 2010.
[42] Jing Liu, Yue Zhang, "Palm-Dorsa Vein Recognition Based on Two-Dimensional Fisher Linear Discriminant", IEEE, 2011.
[43] David Zhang, "Online palmprint identification".
[44] Ajay Kumar, Venkata Prathyusha, "Personal Authentication Using Hand Vein Triangulation and Knuckle Shape", IEEE Transactions on Image Processing, Vol. 18, No. 9, September 2009.
[45] A. El-Zaart, "Images thresholding using Isodata technique with gamma distribution", Pattern Recognition and Image Analysis, Vol. 20, No. 1, pp. 29-41, 2010.
[46] S. Suzuki and K. Abe, "Topological Structural Analysis of Digitized Binary Images by Border Following", CVGIP, Vol. 30, No. 1, pp. 32-46.
[47] Zhong Qu, Zheng-yong Wang, "Research on pre-processing of palmprint image based on adaptive threshold and Euclidean distance", Sixth International Conference on Natural Computation (ICNC), pp. 4238-4242, 2010.
BiometricspptBiometricsppt
Biometricsppt
 
Biometric Identification system.pptx
Biometric Identification system.pptxBiometric Identification system.pptx
Biometric Identification system.pptx
 
IRJET-Biostatistics in Indian Banks: An Enhanced Security Approach
IRJET-Biostatistics in Indian Banks: An Enhanced Security ApproachIRJET-Biostatistics in Indian Banks: An Enhanced Security Approach
IRJET-Biostatistics in Indian Banks: An Enhanced Security Approach
 
Jss academy of technical education
Jss academy of technical educationJss academy of technical education
Jss academy of technical education
 
Fingerprint detection
Fingerprint detectionFingerprint detection
Fingerprint detection
 
Personal authentication using 3 d finger geometry (synopsis)
Personal authentication using 3 d finger geometry (synopsis)Personal authentication using 3 d finger geometry (synopsis)
Personal authentication using 3 d finger geometry (synopsis)
 
A NOVEL BINNING AND INDEXING APPROACH USING HAND GEOMETRY AND PALM PRINT TO E...
A NOVEL BINNING AND INDEXING APPROACH USING HAND GEOMETRY AND PALM PRINT TO E...A NOVEL BINNING AND INDEXING APPROACH USING HAND GEOMETRY AND PALM PRINT TO E...
A NOVEL BINNING AND INDEXING APPROACH USING HAND GEOMETRY AND PALM PRINT TO E...
 
Biometrics_basicsandcharacteristics_.pdf
Biometrics_basicsandcharacteristics_.pdfBiometrics_basicsandcharacteristics_.pdf
Biometrics_basicsandcharacteristics_.pdf
 
Seminar report on Error Handling methods used in bio-cryptography
Seminar report on Error Handling methods used in bio-cryptographySeminar report on Error Handling methods used in bio-cryptography
Seminar report on Error Handling methods used in bio-cryptography
 
Biometrics
BiometricsBiometrics
Biometrics
 
Dr Gurumurthi V. Ramanan Face Recognition - Presentation
Dr Gurumurthi V. Ramanan Face Recognition - PresentationDr Gurumurthi V. Ramanan Face Recognition - Presentation
Dr Gurumurthi V. Ramanan Face Recognition - Presentation
 
Biometrics
BiometricsBiometrics
Biometrics
 
MULTIMODAL BIOMETRIC SECURITY SYSTEM
MULTIMODAL BIOMETRIC SECURITY  SYSTEMMULTIMODAL BIOMETRIC SECURITY  SYSTEM
MULTIMODAL BIOMETRIC SECURITY SYSTEM
 
Biometrics
BiometricsBiometrics
Biometrics
 
Biometric systems
Biometric systemsBiometric systems
Biometric systems
 
Robust Analysis of Multibiometric Fusion Versus Ensemble Learning Schemes: A ...
Robust Analysis of Multibiometric Fusion Versus Ensemble Learning Schemes: A ...Robust Analysis of Multibiometric Fusion Versus Ensemble Learning Schemes: A ...
Robust Analysis of Multibiometric Fusion Versus Ensemble Learning Schemes: A ...
 

ADITYA_Thesis

  • 2. 2 Personal identity refers to a set of attributes (e.g., name, social security number) that are associated with a person. Identity management is the process of creating, maintaining and destroying identities of individuals in a population. A reliable identity management system is urgently needed to combat the epidemic growth in identity theft and to meet the increased security requirements of applications ranging from international border crossing to accessing personal information. Establishing (determining or verifying) the identity of a person is called person recognition or authentication, and it is a critical task in any identity management system. Knowledge-based mechanisms (passwords) and token-based mechanisms (identity cards) are insufficient for reliable identity determination; stronger authentication schemes, namely those based on biometrics, are needed. Biometric authentication, or simply biometrics, offers a reliable solution to the problem of identity determination by establishing the identity of a person based on “who he is”. Biometric systems automatically verify a person’s identity based on physical and behavioural characteristics such as fingerprint, face, iris, voice and gait. A number of physical and behavioural body traits can be used for biometric recognition. Examples of physical traits include face, fingerprint, iris, palm print, and hand geometry. Gait, signature and keystroke dynamics are some of the behavioural characteristics that can be used for person authentication. Each biometric modality has its advantages and limitations, and no single modality is expected to meet all the requirements, such as accuracy, practicality and cost, imposed by all applications. A typical biometric system consists of four main components, namely, the sensor, feature extractor, matcher and decision modules. A sensor acquires the biometric data from an individual. A quality estimation algorithm is often used to ascertain whether the
  • 3. 3 acquired biometric data is good enough to be processed by the subsequent components. When the data is not of sufficiently high quality, it is usually re-acquired from the user. The feature extractor computes only the salient information from the acquired biometric sample to form a new representation of the biometric trait, generally termed the feature set. Ideally, the feature set should be unique to each individual (extremely small inter-user similarity) and should also be invariant across different samples of the same biometric trait collected from the same person (extremely small intra-user variability). The feature set obtained during enrolment is stored in the system database as a template. During authentication, the feature set extracted from the biometric sample is compared to the template by the matcher, which determines the degree of similarity between the generated and stored feature sets. The identity of the user is decided based on the similarity score produced by the matcher module. The functionalities provided by a biometric system can be categorized as verification and identification. Figure 1.1 shows the enrollment and authentication stages of a biometric system operating in the verification and identification modes. In verification, the user claims an identity and the system verifies whether the claim is genuine. Here, the query is compared only to the template corresponding to the claimed identity. If the input from the user and the template of the claimed identity have a high degree of similarity, the claim is accepted as “genuine”; otherwise, the claim is rejected and the user is considered an “impostor”. Identification can be classified into positive and negative identification. In positive identification, the user attempts to positively identify himself to the system. Here, the user need not claim an identity explicitly.
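A minimal sketch of the verification (1:1) decision described above, assuming feature sets are numeric vectors; the distance-based match score and the threshold value are illustrative assumptions, not a vendor's actual matcher:

```python
import numpy as np

def match_score(query, template):
    # Convert the Euclidean distance between two feature sets into a
    # similarity score in (0, 1]: identical feature sets score 1.0.
    return 1.0 / (1.0 + float(np.linalg.norm(query - template)))

def verify(query, template, threshold=0.5):
    # Verification (1:1): compare the query only against the claimed
    # identity's stored template and accept or reject the claim.
    score = match_score(query, template)
    return "genuine" if score >= threshold else "impostor"

template = np.array([0.61, 0.27, 0.84])                 # stored at enrolment
print(verify(np.array([0.60, 0.30, 0.80]), template))   # small intra-user variation
print(verify(np.array([0.00, 0.99, 0.05]), template))   # a different person
```

Small intra-user variation keeps the score above the threshold, while a different person's feature set falls below it.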
Screening is often used at airports to verify whether a passenger’s identity matches any person on a “watch-list”. Here, the authorities need not establish the exact identity of each individual, only whether they appear on the list. Screening can also be used to prevent the issuance of multiple credential records (e.g., driver’s license, passport) to the same person. Negative identification is critical in applications such as welfare disbursement to prevent a person from claiming multiple benefits (i.e., double dipping) under different names. In both positive and negative identification, the user’s biometric input is compared with the templates of all the persons enrolled in the database. The system simply checks the user’s input against the existing database and outputs whether or not the user is enrolled.
  • 4. 4 The number of enrolled users in the database can be quite large, which makes identification more challenging than verification. Figure 1.1: Verification process. Figure 1.2: Identification process. (Both block diagrams show the pipeline User → Biometric Sensor → Quality Assessment Module → Feature Extractor → Matcher → Decision Module, with templates drawn from the System Database; the verification diagram additionally takes a claimed User Identity as input.)
  • 5. 5 Biometric traits collected over a period of time may vary dramatically. The variability observed in the biometric feature set of an individual is known as intra-user variation. For example, in the case of the face, factors such as facial expression, the person’s mood at that instant, changes in appearance, and feature extraction errors lead to large intra-user variations. On the other hand, features extracted from the biometric traits of different individuals can be quite similar. Appearance-based facial features will exhibit a large similarity for pairs of similar-looking individuals, e.g., identical twins, and such a similarity is usually referred to as inter-user similarity. A biometric system can make two types of errors, namely, false rejection and false acceptance. A false acceptance occurs when two samples from different individuals are incorrectly recognized as a match due to large inter-user similarity. When the intra-user variation is large, two samples of the same biometric trait of an individual may not be recognized as a match, and this leads to a false rejection error. Therefore, the basic measures of the accuracy of a biometric system are the False Acceptance Rate (FAR) and the False Rejection Rate (FRR). A False Rejection Rate of 2% indicates that, on average, 2 in 100 genuine attempts do not succeed. A majority of the false rejection errors are usually due to incorrect interaction of the user with the biometric sensor and can be easily rectified by allowing the user to present his/her biometric trait again. A False Acceptance Rate of 0.2% indicates that, on average, 2 in 1,000 impostor attempts are likely to succeed. Other than false rejection and false acceptance, two other types of failures are also possible in a practical biometric system. If an individual cannot interact correctly with the biometric user interface, or if the biometric samples of the individual are of very poor quality, the sensor or feature extractor may not be able to process these individuals.
Hence, they cannot be enrolled in the biometric system, and the proportion of individuals who cannot be enrolled is referred to as the Failure to Enroll Rate (FTER). In some cases, a particular sample provided by the user during authentication cannot be acquired or processed reliably. This error is called failure to capture, and the fraction of authentication attempts in which the biometric sample cannot be captured is known as the Failure to Capture Rate (FTCR). A match score is termed a genuine or authentic score if it indicates the similarity between two samples of the same user. An impostor score measures the similarity between two
  • 6. 6 samples of different users. An impostor score that exceeds the threshold η results in a false accept, while a genuine score that falls below the threshold η results in a false reject. The Genuine Accept Rate (GAR) is defined as the fraction of genuine scores exceeding the threshold η. Therefore, GAR(η) = P(s ≥ η | genuine) = ∫_η^∞ p(s | genuine) ds (1) and FAR(η) = P(s ≥ η | impostor) = ∫_η^∞ p(s | impostor) ds (2), where p(s | genuine) and p(s | impostor) are the genuine and impostor score distributions. Regulating the value of η changes the FRR and the FAR values, but for a given biometric system it is not possible to decrease both of these errors simultaneously. Though biometric systems have been used in real-world applications, three main factors govern biometric system design: FAR, GAR, and the size of the database. The challenge in biometrics is to design a system that operates at the extremes of all three factors; in other words, the challenge is to develop a real-time biometric system that is highly accurate and secure. Several major obstacles hinder the design of such an “ideal” biometric system. An ideal biometric system should always provide the correct decision when a biometric sample is presented. The main factors affecting the accuracy of a biometric system are:  Non-universality: If every individual in the target population is able to present the biometric trait for recognition, then the trait is said to be universal. Universality is one of the basic requirements for a biometric identifier. However, not all biometric traits are truly universal. Due to the above factors, the error rates associated with biometric systems are higher than what is required in many applications. In the case of a biometric verification system, the size of the database (number of enrolled users in the system) is not an issue because each authentication attempt basically involves matching the query with a single template. In the case of large-scale identification systems
  • 7. 7 where N identities are enrolled in the system, sequentially comparing the query with all N templates is not an effective solution, for two reasons. Firstly, the throughput of the system is greatly reduced when N is large. For example, if the size of the database is 1 million and each match requires an average of 100 microseconds, then the throughput of the system will be less than one identification per minute. Furthermore, the large number of identities also adversely affects the false match rate of the system. Hence, the system must be scaled efficiently. This is usually achieved by a process known as filtering or indexing, where the database is pruned based on extrinsic (e.g., gender, ethnicity, age) or intrinsic (e.g., fingerprint pattern class) factors and the search is restricted to a smaller fraction of the database that is likely to contain the true identity of the user. Although it is difficult to steal someone’s biometric traits, it is still possible for an impostor to circumvent a biometric system in a number of ways. For example, it is possible to construct fake or spoof fingers using lifted fingerprint impressions (e.g., from the sensor surface) and utilize them to circumvent a fingerprint recognition system. Behavioural traits like signature and voice are more susceptible to such attacks than anatomical traits. The most straightforward way to secure a biometric system is to put all the system modules and the interfaces between them on a smart card (or, more generally, a secure processor). In such systems, known as match-on-card or system-on-card technology, the sensor, feature extractor, matcher and template all reside on the card. The advantage of this technology is that the user’s biometric data never leaves the card, which is in the user’s possession.
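The throughput arithmetic above, and the gain from filtering/indexing, can be sketched as follows; the 10% penetration rate after pruning is an illustrative assumption, not a figure from the text:

```python
# Sequential 1:N search vs. an indexed search that prunes the database first,
# using the figures from the text (1 million identities, 100 microseconds per match).
n_identities = 1_000_000
match_time_s = 100e-6                        # 100 microseconds per comparison

sequential_s = n_identities * match_time_s   # exhaustive search time per query
print(f"exhaustive search: {sequential_s:.0f} s per query")

# Filtering/indexing: suppose an extrinsic attribute (e.g. gender) plus an
# intrinsic pattern class together prune the search to 10% of the database.
pruned_fraction = 0.10                       # assumed penetration rate
indexed_s = n_identities * pruned_fraction * match_time_s
print(f"indexed search:    {indexed_s:.0f} s per query")
```

The exhaustive search takes 100 s per query, i.e. below one identification per minute, exactly as the text argues; pruning the search space reduces this proportionally.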
However, system-on-card solutions are not appropriate for most large-scale verification applications because they are still expensive and users must carry the card with them at all times. Moreover, system-on-card solutions cannot be used in identification applications. One of the critical issues in biometric systems is protecting the template of a user, which is typically stored in a database or on a smart card. Stolen biometric templates can be used to compromise the security of the system in the following two ways: (i) the stolen template can be replayed to the matcher to gain unauthorized access, and (ii) a physical spoof can be created from the template to gain unauthorized access to the system (as well as to other systems which use the same biometric trait). Note that an adversary can covertly acquire the biometric information of a genuine user (e.g., lift a fingerprint from a surface touched by the user). Hence, spoof attacks are possible even when the adversary does not have access to the
  • 8. 8 biometric template. However, the adversary needs to be in the physical proximity of the person he is attempting to impersonate in order to covertly acquire his biometric trait. On the other hand, even a remote adversary can create a physical spoof if he gets access to the biometric template information.
  • 9. 9 Biometric data is only one component in wider systems of security. The typical phases of biometric security are: 1) collection of data, 2) feature extraction, and 3) comparison and matching. As a first step, a system must collect the biometric to be used (face, fingerprint, palm print). Capture of a biometric must be done in a controlled environment. All biometric systems have some sort of collection mechanism: a reader or sensor upon which a person places a finger or hand, or a camera that takes a picture or video of the face or eye. In order to “enrol” in a system, an individual presents their “live” biometric a number of times so the system can build a composite profile of the characteristic, allowing for slight variations (e.g., different degrees of pressure when placing a finger on the reader). Depending upon the purpose of the system, enrolment may also involve the collection of other personally identifiable information. Commercially available biometric devices generally do not record full images of biometrics the way law enforcement agencies collect actual fingerprints. Instead, specific features of the biometric are “extracted”: only certain attributes are collected (e.g., particular measurements of a fingerprint or pressure points of a signature). Which parts are used depends upon the type of biometric, as well as the design of the proprietary system. This extracted information,
  • 10. 10 sometimes called “raw data,” is converted into a mathematical code. Again, exactly how this is done varies among the different proprietary systems. To use a biometric system, the specific features of a person’s biometric characteristic are measured and captured each time they present their “live” biometric. This extracted information is translated into a mathematical code using the same method that created the template. The new code created from the live scan is compared against a central database of templates in the case of a one-to-many match (identification), or to a single stored template in the case of a one-to-one match (verification). If it falls within a certain statistical range of values, the match is considered successful. One of the most interesting facts about most biometric technologies is that a unique biometric template is generated every time a user interacts with the system. These templates, when processed by a vendor’s algorithm, are recognizable as being from the same person, but are not identical.
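The one-to-many comparison described above can be sketched as follows, using cosine similarity as a stand-in for a vendor's proprietary matching algorithm; the templates, threshold and user names are illustrative assumptions:

```python
import numpy as np

def identify(query, database, threshold=0.9):
    # One-to-many matching (identification): compare the live-scan code
    # against every enrolled template and report the best match above
    # the threshold, or None if no template is close enough.
    best_id, best_score = None, -1.0
    for user_id, template in database.items():
        score = float(np.dot(query, template) /
                      (np.linalg.norm(query) * np.linalg.norm(template)))
        if score > best_score:
            best_id, best_score = user_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

db = {
    "alice": np.array([0.9, 0.1, 0.3]),   # hypothetical enrolled templates
    "bob":   np.array([0.2, 0.8, 0.5]),
}
who, score = identify(np.array([0.88, 0.12, 0.33]), db)
print(who)
```

A one-to-one verification is the same comparison restricted to the single template of the claimed identity.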
  • 11. 11 Systems that consolidate evidence from multiple sources of biometric information in order to reliably determine the identity of an individual are known as multibiometric systems. Multibiometric systems can alleviate many of the limitations of unibiometric systems because the different biometric sources usually compensate for the inherent limitations of the other sources. Multibiometric systems offer the following advantages over unibiometric systems. 1. Combining the evidence obtained from different sources using an effective fusion scheme can significantly improve the overall accuracy of the biometric system. The presence of multiple sources also effectively increases the dimensionality of the feature space and reduces the overlap between the feature spaces of different individuals. 2. Multibiometric systems can address the non-universality problem and reduce the FTER and FTCR. For example, if a person cannot be enrolled in a fingerprint system due to worn-out ridge details, he can still be identified using other biometric traits like face or iris. 3. Multibiometric systems can also provide a certain degree of flexibility in user authentication. Suppose a user enrolls into the system using several different traits. Later, at the time of authentication, only a subset of these traits may be acquired, based on the nature of the application under consideration and the convenience of the user. For example, consider a banking application where the user enrolls into the system using face, voice and fingerprint. During authentication, the user can select which trait to present depending on his convenience. While the user can choose the face or voice modality when attempting to access the application from a mobile phone equipped with a digital camera, he can choose the fingerprint modality when accessing the same application from a public ATM or a network computer. 4.
The availability of multiple sources of information considerably reduces the effect of noisy data. If the biometric sample obtained from one of the sources is not of sufficient quality during a particular acquisition, the samples from other sources may still provide sufficient discriminatory information to enable reliable decision-making. 5. Multibiometric systems can provide the capability to search a large database in a computationally efficient manner. This can be achieved by first using a relatively simple but less accurate modality to prune the database before using the more complex and accurate
  • 12. 12 modality on the remaining data to perform the final identification task. This will improve the throughput of a biometric identification system. 6. Multibiometric systems are resistant to spoof attacks because it is difficult to simultaneously spoof multiple biometric sources. Further, a multibiometric system can easily incorporate a challenge-response mechanism during biometric acquisition by acquiring a subset of the traits in some random order (e.g., left index finger followed by face and then right index finger). Such a mechanism will ensure that the system is interacting with a live user. Further, it is also possible to improve the template security by combining the feature sets from different biometric sources using an appropriate fusion scheme. Multibiometric systems have a few disadvantages when compared to unibiometric systems. They are more expensive and require more resources for computation and storage than unibiometric systems. Multibiometric systems generally require additional time for user enrollment, causing some inconvenience to the user. Finally, the accuracy of a multibiometric system can actually be lower than that of the unibiometric system if an appropriate technique is not followed for combining the evidence provided by the different sources. Still, multibiometric systems offer features that are attractive and as a result, such systems are being increasingly deployed in security critical applications.
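One common fusion scheme mentioned above, the weighted-sum rule at the score level, can be sketched as follows; all the raw scores, score ranges and weights below are illustrative assumptions:

```python
import numpy as np

def min_max_normalize(score, lo, hi):
    # Map a raw matcher score onto [0, 1] so scores from different
    # modalities become comparable before fusion.
    return (score - lo) / (hi - lo)

def fuse(scores, weights):
    # Weighted-sum rule: one simple score-level fusion scheme.
    return float(np.dot(scores, weights))

# Hypothetical raw scores and score ranges for three matchers.
face_s   = min_max_normalize(78, 0, 100)    # face matcher
finger_s = min_max_normalize(41, 0, 60)     # fingerprint matcher
iris_s   = min_max_normalize(0.9, 0, 1.2)   # iris matcher

fused = fuse([face_s, finger_s, iris_s], weights=[0.3, 0.4, 0.3])
print(round(fused, 3))
```

The fused score is then thresholded exactly like a single-matcher score; as the text warns, a poorly chosen normalization or weighting can make the combined system worse than its best single modality.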
  • 13. 13 Biometric authentication has gained a lot of interest in the research community. Researchers have proposed many systems with different modalities as inputs, using new techniques as well as new combinations of existing techniques. This chapter surveys recent advances in authentication systems and techniques based on the face and fingerprint modalities. 3.1 Let I(Δx, Δy, θ) represent a rotation of the input image I by an angle θ around the origin (usually the image centre), shifted by Δx and Δy pixels in the x and y directions, respectively. Then the similarity between two fingerprint images T and I can be measured as S(T, I) = max over (Δx, Δy, θ) of CC(T, I(Δx, Δy, θ)) (2.1), where CC(T, I) = T^T I is the cross-correlation between T and I. The cross-correlation is a well-known measure of image similarity, and the maximization in (2.1) allows us to find the optimal registration. [1] In practice, however, it cannot be used directly, for the following reasons:  Non-linear distortion makes impressions of the same finger significantly different in terms of global structure, and the measure is not immune to rotation and scaling.  Skin condition causes image brightness, contrast, and ridge thickness to vary significantly across different impressions. The use of more sophisticated correlation measures may compensate for these problems.
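A minimal sketch of the registration search in (2.1), restricted to integer pixel shifts (the rotation over θ is omitted for brevity) and treating the images as NumPy arrays; the image size and shift range are illustrative assumptions:

```python
import numpy as np

def cross_correlation(t, i):
    # CC(T, I) = T^T I: the inner product of the two images viewed as vectors.
    return float(np.sum(t * i))

def best_registration(t, i, max_shift=2):
    # Maximise CC over integer shifts (dx, dy), as in (2.1) without rotation.
    best = (-np.inf, 0, 0)
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(i, shift=(dy, dx), axis=(0, 1))
            cc = cross_correlation(t, shifted)
            if cc > best[0]:
                best = (cc, dx, dy)
    return best

rng = np.random.default_rng(1)
T = rng.random((8, 8))
I = np.roll(T, shift=(1, -1), axis=(0, 1))   # the same image, displaced
cc, dx, dy = best_registration(T, I)
print(dx, dy)
```

The search correctly recovers the displacement that undoes the shift; adding rotation and sub-pixel shifts makes the search space, and the cost, grow quickly, which is one reason the plain correlation approach is impractical.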
  • 14. 14 Chengjun Liu et al. [1] present an independent Gabor features (IGF) method and its application to face recognition. The IGF method first derives a Gabor feature vector from a set of down-sampled Gabor wavelet representations of face images, then reduces the dimensionality of the vector by means of principal component analysis, and finally defines the independent Gabor features based on independent component analysis (ICA). As Gabor-transformed face images exhibit strong characteristics of spatial locality, scale, and orientation selectivity, applying ICA reduces redundancy and yields independent features. On the FERET dataset, they achieve 100% recognition. Syed Maajid Mohsin et al. [3] experiment with a Gabor filter bank; using 30 filters and a nearest-neighbour classifier at the last stage, they achieve a recognition accuracy of 92.5%. They report a training time of 5 seconds per image for the 30 filters. The main aim of the paper by Victor-Emil Neagoe et al. [4] is to take advantage of fiducial approaches within a holistic approach to the face. The authors employ a Gabor filter bank to extract features and localize the filter-bank outputs using a head model of the human face. Experimenting with the ORL face database and neural network classifiers, they obtain a recognition score of 96%. The paper by Zhi-Kai Huang et al. [6] contributes to the area of colour image processing: using different colour transforms and models, they extract features from faces using Gabor filters, and the feature set is fed to an SVM classifier. The authors consider face images with multiple poses and varied illumination conditions; for the YCbCr model they achieve 94% recognition accuracy. In face detection, detecting multiple faces in a single frame is a tedious task; Dr. P. K. Suri et al. [6] contribute in this area.
This paper utilizes a Gabor filter bank at 5 scales and 8 orientations, generating a set of 40 filters. With a varying threshold and Gabor features as input to an NN classifier, they achieve a highest accuracy of 100% detection of multiple faces in a single frame. This highlights the discriminating ability of Gabor features at different orientations.
  • 15. 15  Advantages of Gabor-based techniques: From the literature it is evident that Gabor-based techniques are powerful tools for capturing directional information. These techniques also overcome the problem of slight variations in illumination conditions.  Disadvantages: Though Gabor wavelets are directionally selective, they generate a large stream of data. As we are designing in hardware, the complexity must be as low as possible; hence we cannot use Gabor filters for feature extraction. Md. Tajmilur Rahman et al. [13] present an algorithm for face recognition using neural networks trained with Gabor features. The system commences by convolving some morphed images of a particular face with a series of Gabor filter coefficients at different scales and orientations. Two novel contributions of this paper are the scaling of RMS contrast and the use of morphing to improve recognition accuracy. The neural network employed for face recognition is based on the Multi-Layer Perceptron (MLP) architecture with the back-propagation algorithm and incorporates the convolution filter response of the Gabor jet. This strategy achieves a correct recognition rate of 96%. Lin-Lin Huang et al. [11] present a classification-based face detection method using Gabor filter features. They design four filters corresponding to four orientations for extracting facial features from local images in sliding windows. The feature vector based on the Gabor filters is used as the input to the face/non-face classifier, which is a polynomial neural network (PNN) on a reduced feature subspace learned by principal component analysis (PCA). They achieve good recognition accuracies experimenting with the CMU database and synthetic images. Bhaskar Gupta et al.
[12] propose a classification-based face detection method using Gabor filter features. The feature vector, generated with a set of 40 filters, is used as the input to the classifier, a feed-forward neural network (FFNN) operating on a reduced feature subspace learned by an approach simpler than principal component analysis (PCA): instead of applying dimensionality-reduction techniques like PCA, some rows and columns are simply deleted from the Gabor feature vector. Though this is not a sophisticated technique, they achieve good face classification rates.
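The 5-scale, 8-orientation filter banks used throughout these papers can be sketched as follows; the kernel size and the scale-to-sigma/wavelength mapping below are illustrative assumptions, not parameters from any of the cited papers:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    # Real part of a 2-D Gabor filter: a Gaussian envelope modulated
    # by a sinusoidal carrier oriented at angle theta.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t =  x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier  = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# A 5-scale x 8-orientation bank (40 filters), as in the surveyed papers.
bank = [gabor_kernel(ksize=31, sigma=2.0 * s, theta=o * np.pi / 8, lambd=4.0 * s)
        for s in range(1, 6) for o in range(8)]
print(len(bank), bank[0].shape)
```

Convolving one face image with all 40 kernels multiplies the data volume forty-fold, which is exactly the "large stream of data" objection raised above against hardware implementation.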
  • 16. 16 Muhammad Azam (EPE Department, PNEC) [24] proposes a new approach to face recognition based on processing face images on a hexagonal lattice. Advantages of processing images on a hexagonal lattice include a higher degree of circular symmetry, uniform connectivity, greater angular resolution, and reduced storage requirements. The proposed methodology is a hybrid approach: the DCT is applied to the hexagonally converted images for dimensionality reduction and feature extraction, the resulting features are stored in a database for recognition, and an Artificial Neural Network (ANN) trained with a quick back-propagation algorithm is used for recognition. The recognition rate on the Yale database was 92.77%, with a very low recognition time. Meng Joo Er et al. [25] propose an efficient method for high-speed face recognition based on the discrete cosine transform (DCT) and Fisher’s linear discriminant (FLD). The dimensionality of the original face image is reduced using the DCT; the FLD is then applied to the truncated DCT coefficient vectors, so that the discriminating features are retained. Parameter estimation for the RBF neural networks is then carried out easily, which facilitates fast training. The proposed system achieves excellent performance, with high training and recognition speed and an error rate of 1.8%. Sidra Batool Kazmi [26] presents a method for automatic recognition of facial expressions from face images by providing Discrete Wavelet Transform (DWT) features to a bank of five parallel neural networks. Each neural network is trained to recognize a particular facial expression, and they obtain a result of 96%.  Advantages: Using neural networks along with DCT, DWT or Gabor features is useful for facial expressions, showing recognition rates of up to 95%; hence it is quite a useful technique.
  • 17. 17  Disadvantages: We have to build a neural network and design its layers, which is quite complex. As we are designing in hardware, it will be hard to implement.  Advantages: These techniques exploit the utility of Gabor wavelets for facial expressions. This highlights the robustness of the Gabor feature set under different facial expressions as well as under different illumination conditions; it also extracts directional information.  Disadvantages: We have to build filter banks, which increases the complexity. As we are designing in hardware, complexity must be as low as possible; hence we cannot use this approach. A. N. Rajagopalan et al. [14] propose a face recognition method that fuses information acquired from global and local features of the face for improved performance. Principal component analysis followed by Fisher analysis is used for dimensionality reduction and construction of the individual feature spaces. Before feature extraction, a block histogram modification technique is applied to compensate for local changes in illumination. PCA in conjunction with FLD is then used to encode the facial features in a lower-dimensional space. The distance-in-feature-space (DIFS) values are calculated for all the training images in each of the feature spaces, and these values are used to compute the distributions of the DIFS values. In the recognition phase, given a test image, the three facial features are extracted and their DIFS values are computed in each feature space.
  • 18. 18 Young-Jun Song et al. [15] proposed a shaded-face pre-processing technique using front-face symmetry. The existing PCA-based face recognition technique has the shortcoming that illumination variation lowers the recognition performance for a shaded face. This method computes the difference between the illumination on either side of the nose line; if the two sides differ, the mirror image of one side is taken and PCA is then applied to generate the feature set. With the Yale database, the authors achieved 98.9% accuracy. Peter N. Belhumeur [16][35] compares the PCA and LDA algorithms. The LDA technique, another method based on linearly projecting the image space to a low-dimensional subspace, has similar computational requirements, but extensive experimental results demonstrate that LDA has lower error rates than PCA in tests on the Harvard and Yale face databases. Kamran Etemad et al. [17] focus on linear discriminant analysis (LDA) of different aspects of human faces in the spatial as well as the wavelet domain. The LDA of faces provides a small set of features that carry the most relevant information for classification purposes. The features are obtained through eigenvector analysis of scatter matrices with the objective of maximizing between-class variation and minimizing within-class variation. For a medium-sized dataset, the authors achieved 97% recognition accuracy.  Advantages: Being statistical in nature, these techniques achieve good recognition accuracy through de-correlation of the input data. This reduces the redundancy present, and the generated feature set is also smaller, which helps avoid the curse of dimensionality.  Disadvantages: PCA and LDA are batch learning methods, so they need the whole dataset aggregated simultaneously.
  • 19. 19 Haifeng Hu [26] presents a discrete wavelet transform (DWT) based illumination normalization approach for face recognition under varying lighting conditions. First, a DWT-based denoising technique is employed to detect the illumination discontinuities in the detail sub-bands, and the detail coefficients are updated using the obtained discontinuity information. Finally, a multi-scale reflectance model is presented to extract the illumination-invariant features. Recognition accuracy of 97.5% was achieved on the CMU PIE dataset. X. Cao et al. [27] propose a novel wavelet-based approach that considers the correlation of neighbouring wavelet coefficients to extract an illumination invariant, which represents the key facial structure needed for face recognition. Using the wavelet-based NeighShrink denoising technique, the method has better edge-preserving ability in low-frequency illumination fields and better preservation of useful information in high-frequency fields. The method applies different processing to training and testing images, since these images always have different illumination. Experimental results on the Yale Face Database B and the CMU PIE face database show excellent recognition rates of up to 100%. K. Jaya Priya et al. [28] propose a novel face recognition method for the one-sample problem. This approach is based on local appearance feature extraction using the directional multiresolution decomposition offered by the dual-tree complex wavelet transform (DT-CWT). It provides a local multiscale description of images with good directional selectivity, effective edge representation, and invariance to shifts and in-plane rotations.
The 2-D dual-tree complex wavelet transform is less redundant and computationally efficient. The fusion of local DT-CWT coefficients of the detail sub-bands is used to extract the facial features, which improves face recognition with a small sample size in a relatively short computation time. With the Yale face dataset, recognition accuracy of 93.33% was achieved. M. Koteswara Rao [29] proposes a method based on the Discrete Wavelet Transform (DWT) and eigenvectors. Each face image is decomposed into four sub-bands using the DWT; the HH sub-band is useful
  • 20. 20 to distinguish the images in the database and is exploited for face recognition. The HH sub-band is further processed using Principal Component Analysis (PCA), which extracts the relevant information from confusing data sets and provides a solution for reducing higher dimensionality to lower dimensionality. The feature vector is generated using DWT and PCA.  Advantages: Wavelet techniques provide the advantage of multiresolution analysis. They can be used to investigate the directional properties of the face in different frequency sub-bands, and they help in recognizing redundant information as well as invariants in the input. This property, along with the multiresolution property, can be used to extract features under pose and illumination variations.  Disadvantages: Wavelet transforms are characterized by inherently high computational complexity and, like Gabor wavelets, they exhibit high dimensionality in their coefficients. Hazim Kemal Ekenel [21] proposes an algorithm in which local information is extracted using a block-based discrete cosine transform. The obtained local features are combined both at the feature level and at the decision level. The performance of the proposed algorithm, tested on the Yale and CMU PIE face databases, shows results of up to 98.9%. Aman R. Chadha [22] proposes an efficient method in which the Discrete Cosine Transform (DCT) is used on local and global features to recognize the corresponding face image from the database. Features such as the nose and eyes are extracted, given weights according to their recognition rates, and then combined to give the result. Evaluated on a database of 25 people, the method shows a recognition rate of 94% after normalization.  Advantages: DCT helps in recognizing redundant information as well as invariants in the input; DCT along with PCA or LDA gives a good recognition rate.  Disadvantages: The DCT does not provide multiresolution analysis.
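The block-based DCT feature extraction surveyed above can be sketched as follows. This is a minimal illustration, not any cited author's exact method: the 8x8 block size and the choice of keeping the first five low-frequency coefficients per block (in simple raster order rather than zig-zag order) are our assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct_features(img, block=8, keep=5):
    """Block-based DCT features: for each block, apply a 2-D DCT and keep
    the first `keep` low-frequency coefficients (raster order for brevity)."""
    h, w = img.shape
    feats = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            b = img[r:r+block, c:c+block].astype(float)
            # 2-D DCT-II applied separably along rows then columns
            d = dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
            feats.extend(d.flatten()[:keep])
    return np.array(feats)

img = np.random.rand(64, 64)   # stand-in for a normalized face image
v = block_dct_features(img)
print(v.shape)                 # 64 blocks x 5 coefficients = (320,)
```

The low-frequency coefficients of each block capture the local average intensity and coarse structure, which is why such features survive moderate illumination changes.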
  • 21. 21 3.2 There have been many challenges in designing palm print and palm vein authentication systems, such as the hygiene issue arising from contact-based systems and the complexity involved in handling large feature vectors. Lin and Wan [39] proposed thermal imaging of palm dorsal surfaces, which typically captures the thermal pattern generated by the flow of (hot) blood in the cephalic and basilic veins. Goh Kah Ong Michael and Tee Connie Andrew [40] introduce an innovative contactless palm print and palm vein recognition system. They designed a hand vein sensor that captures the palm print and palm vein image using a low-resolution web camera; the captured images exhibit considerable noise. Huan Zhang proposed a local contrast enhancement technique for ridge enhancement [41]. Principal Component Analysis (PCA) aims at finding a subspace whose basis vectors correspond to the maximum-variance directions of the original space. The features extracted by PCA are the best description of the data, but not the best discriminant features. Fisher Linear Discriminant (FLD) finds the set of most discriminant projection vectors that map high-dimensional samples onto a low-dimensional space. The major drawback of applying FLD is that it may encounter the small-sample-size problem. Jing Liu and Yue Zhang introduce 2DFLD, which computes the covariance matrices in a subspace of the input space and achieves optimal discriminant vectors; this method gives greater recognition accuracy with reduced computational complexity [42]. David Zhang extracted texture features from low-resolution palm print images based on a 2-D Gabor phase coding scheme [43]. Ajay Kumar used a minutiae-based technique for hand vein recognition, in which the structural similarity of hand vein triangulation and knuckle shape features are combined to discriminate the samples [44]. The key contributions of this paper can be summarized as follows.
1) The proposed system's major contribution is a contactless and registration-free authentication system, utilizing data captured simultaneously through a single sensor for both modalities. 2) We have developed a feature-level fusion framework. The proposed method utilizes only 16 entropy-based features for the palm print and palm vein modalities, facilitating a less complex integration scenario.
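A 16-element entropy feature vector of the kind mentioned in contribution 2) can be sketched as follows. The 4x4 block scheme and the use of Shannon entropy over each block's 8-bit histogram are our assumptions for illustration; the report only states that 16 entropy-based features are used.

```python
import numpy as np

def entropy_features(img, grid=4):
    """Hypothetical sketch: split the image into a 4x4 grid and compute the
    Shannon entropy of each block's 8-bit intensity histogram (16 features)."""
    h, w = img.shape
    bh, bw = h // grid, w // grid
    feats = []
    for r in range(grid):
        for c in range(grid):
            block = img[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
            hist = np.bincount(block.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]                     # skip empty bins (0 * log 0 = 0)
            feats.append(-np.sum(p * np.log2(p)))
    return np.array(feats)

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(entropy_features(img).shape)           # (16,)
```

Entropy summarizes the texture richness of each region in a single number, which is why so few features can suffice for a low-complexity fusion scenario.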
  • 22. 22 Due to the increase in terrorist attacks, a robust biometric system is urgently required to identify persons who may prove harmful to the nation; hence we can develop such a biometric system to identify them. Biometrics such as fingerprint and palm print are difficult to use in this setting, as subjects may be unwilling to provide them and can spoof the system. A biometric such as the face, however, can be captured by a camera far away from the person, and the image can then be recognized; this can prove to be a highly useful system in places like airports and railway stations. The literature covers a plethora of techniques that investigate different properties and technical aspects of the face modality. Based on appearance, texture, and information provided by transform-domain techniques such as wavelet filters, authors have developed many uni-modal authentication systems. Many authors use linear and non-linear classifiers, such as neural network classifiers and support vector machines, for the classification stage. However, neural networks are difficult to design, and since we want fast operation with low complexity, we do not adopt them. We aim to design a secure, computationally efficient, and reliable authentication system. Faces are unique and robust in nature, and are very easy to capture without drawing the user's attention, so our system must be robust for both side-face and frontal-face detection. Since we are capturing faces, illumination plays an important role; hence we need to apply a transform to reduce the effect of improper illumination. Wavelets such as Daubechies or Haar can be used, but since we want a simple system design, we use the simplest wavelet, the Haar wavelet. Gabor features could also be used for feature extraction, but they require a filter bank, which would make the system complex.
Since we take frames from video, the dimensionality is quite large, so a dimension reduction technique is required; the PCA algorithm can be used. However, PCA alone will not give effective results: to discriminate between subjects, LDA can be employed. Hence we use both PCA and LDA. This increases complexity a little, but it also makes the system more effective, so we adopt it. Finally, template matching is performed to identify who the person is.
  • 23. 23 In this section we discuss the designed, developed, and experimented system architecture. The first quarter of the section discusses the basic theory of the selected modalities; the remaining sections discuss the actual system architecture step by step. One of the things that truly impresses an observer is the human ability to recognize faces: humans can recognize thousands of faces and identify familiar faces despite large changes in the visual stimulus due to viewing conditions and expression. Motivated by this human ability, face recognition has been studied for over two decades and is still an active subject due to its extensive practical applications. Many recent events, such as terrorist attacks, have exposed serious weaknesses in even the most sophisticated security systems. Besides fingerprints and iris, many other human characteristics have been studied in recent years, such as finger/palm geometry, voice, signature, and face. However, each biometric has drawbacks. Iris recognition is extremely accurate but expensive to implement and not widely accepted, as prolonged IR exposure may cause eye problems. Fingerprints are reliable and non-intrusive, but not suitable for non-collaborative individuals. Face recognition, on the contrary, seems to be a good compromise between reliability and social acceptance, and balances security and privacy well. Assume for the moment that we start with images and want to distinguish between images of different people. Many face recognition systems construct a set of basis "images" that provides the best approximation of the overall image data set. The training set is then projected onto this subspace. To query a new image, we simply project it onto the subspace and seek the training image whose projection is closest to it.
  • 24. 24 The main aim of face detection is to detect a face if present in a frame of video and to locate it. This is a challenging task for computers and has been one of the most studied research topics of the past decades; early efforts in face detection date back to the beginning of the 1970s. Some of the factors that make face detection such a difficult task are:  Face orientation: A face can appear in many different poses; for instance, frontal or profile (i.e., sideways). Furthermore, a face can be rotated in plane by some angle, both horizontally and vertically (e.g., it may appear at an angle of 60°).  Face size: The size of the human face can vary a lot, and when capturing video in real time the face size may change every second.  Facial expression: The same person may look totally different when laughing than when angry; facial expressions directly affect the appearance of the face in the image.  Facial features: Some people have a moustache, long hair, spectacles, or a scar; such characteristics are called facial features.  Illumination conditions: Faces appear totally different under different illumination; for instance, when light falls from the side of the face, part of the face is very bright while the other part is very dark. Fingerprints are the patterns formed on the epidermis of the fingertip. Fingerprints are of three types: arch, loop, and whorl. A fingerprint is composed of ridges and valleys, and their interleaved pattern is its most evident structural characteristic. There are three main fingerprint features: a) Global Ridge Pattern b) Local Ridge Detail c) Intra-Ridge Detail
  • 25. 25 Fig 4.1. Sample Fingerprint Image. Global ridge detail: There are two types of ridge flows: pseudo-parallel ridge flows and high-curvature ridge flows, which are located around the core point and/or delta point(s). This representation relies on the ridge structure, global landmarks, and ridge pattern characteristics. Commonly used global fingerprint features are: i) Singular points – discontinuities in the orientation field. There are two types of singular points: core and delta. A core is the uppermost point of a curving ridge, and a delta point is the point where three ridge flows meet. They are used for fingerprint registration and classification. ii) Ridge orientation map – the local direction of the ridge-valley structure. It is helpful in classification, image enhancement, feature verification, and filtering. iii) Ridge frequency map – the reciprocal of the ridge distance in the direction perpendicular to the local ridge orientation. It is used for filtering fingerprint images. Local Ridge Detail: This is the most widely used and studied fingerprint representation. Local ridge details are the discontinuities of the local ridge structure, referred to as minutiae. They are used by forensic experts to match two fingerprints. There are about 150 different types of minutiae; among these, ridge endings and ridge bifurcations are the most commonly used, as all other minutiae types are combinations of ridge endings and ridge bifurcations.
  • 26. 26 The minutiae are relatively stable and robust to contrast, image resolution, and global distortion when compared to other representations. Although most automatic fingerprint recognition systems are designed to use minutiae as their fingerprint representation, the location and direction of a minutia point alone are not sufficient for achieving high performance. Minutiae-derived secondary features are therefore used, as relative distance and radial angle are invariant with respect to rotation and translation of the fingerprint. Intra-Ridge Detail: On every ridge of the finger epidermis there are many tiny sweat pores and other permanent details. Pores are distinctive in terms of their number, position, and shape. However, extracting pores is feasible only in high-resolution fingerprint images of very high image quality; thus the cost is very high. Fingerprint recognition is one of the most popular biometric techniques. It refers to the automated method of verifying a match between two fingerprint images, and is mainly used for identifying persons and in criminal investigations. Identification relies on discontinuities in the ridge pattern, known as minutiae. For minutiae extraction, the type, orientation, and location of minutiae are extracted; two types of minutiae are used for identification: terminations and bifurcations. The advantages of a fingerprint recognition system are: (a) They are highly universal, as the majority of the population has legible fingerprints. Fig. 4.2 Types of minutiae (a)–(f).
  • 27. 27 (b) They are very reliable, as no two people (even twins) have the same fingerprint. (c) Fingerprints are formed in the foetal stage and remain structurally unchanged throughout life. (d) It is one of the most accurate forms of biometrics available. (e) Fingerprint acquisition is non-intrusive and hence a good option. Fig 4.3 Types of local ridge features: (a) Ridge ending, (b) Bifurcation.
  • 28. 28
  • 29. 29 4.8.1 Flow chart for uni-modal system. Fig 4.1. Block Diagram for Proposed System. Our system aims to achieve the following goals: 1. Secure and spoof-attack-free authentication. 2. Low computational complexity, since the system has to be implemented in hardware. The proposed system architecture contains the following main steps: 1. Capture images of the modality; in the case of the face, also perform face detection and crop the face. 2. Rescale the image to 64 x 64 pixels. 3. Compute the Discrete Cosine Transform (DCT). 4. Extract features by taking the standard deviation of predefined blocks of DCT coefficients. 5. Store the feature vector. 6. Perform distance-based matching/verification. (Block diagram: Input → Resize to dimension of 64 x 64 → Feature extraction using standard deviation of DCT blocks → Distance measure ↔ Template database → Result.)
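Steps 2–6 of the pipeline above can be sketched as follows. This is a minimal sketch under assumed parameters: the report does not fix the block grid, so an 8x8 grid of DCT-coefficient blocks (one standard-deviation feature per block) is our assumption, and Euclidean distance stands in for the generic distance measure.

```python
import numpy as np
from scipy.fftpack import dct

def dct_std_features(img, grid=8):
    """Steps 3-4: 2-D DCT of the (already 64x64) image, then the standard
    deviation of each block in an assumed 8x8 grid of coefficient blocks."""
    d = dct(dct(img.astype(float), axis=0, norm='ortho'), axis=1, norm='ortho')
    step = d.shape[0] // grid
    return np.array([d[r:r+step, c:c+step].std()
                     for r in range(0, d.shape[0], step)
                     for c in range(0, d.shape[1], step)])

def match(test_vec, templates):
    """Step 6: return the index of the nearest stored template (Euclidean)."""
    dists = [np.linalg.norm(test_vec - t) for t in templates]
    return int(np.argmin(dists))

img = np.random.rand(64, 64)          # stand-in for a resized input image
print(dct_std_features(img).shape)    # (64,): one std-dev per DCT block
```

In practice a threshold on the winning distance decides acceptance versus rejection, which is where the FAR/FRR trade-off reported later comes from.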
  • 30. 30 4.8.2 Flow chart for fusion technique of multi-modal system. Fig 4.1. Block Diagram for Proposed System for fusion technique. The proposed system architecture contains the following main steps: 1. Capture images of both modalities; in the case of the face, also perform face detection and crop the face. 2. Rescale the images to 64 x 64 pixels. 3. Compute the Discrete Cosine Transform (DCT). 4. Extract features by taking the standard deviation of predefined blocks of DCT coefficients. 5. Store the feature vectors. 6. Fuse both feature vectors. 7. Perform distance-based matching/verification. We used the Microsoft Visual Studio environment to simulate the results of the designed algorithms. Microsoft Visual Studio is an integrated development environment from Microsoft Corporation; OpenCV is used for coding the algorithms. (Block diagram: Input of 1st modality and input of 2nd modality → Resize to dimension of 64 x 64 → Feature extraction using standard deviation of DCT blocks → Feature-level fusion → Distance measure ↔ Template database.)
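Step 6 of the fusion pipeline (fuse both feature vectors) can be sketched as below. The min-max normalization before concatenation is our assumption, added so that neither modality dominates the distance computation; the report itself only specifies feature-level fusion.

```python
import numpy as np

def fuse_features(v1, v2):
    """Feature-level fusion sketch: min-max normalize each modality's
    feature vector, then concatenate them into one fused vector."""
    def minmax(v):
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng else np.zeros_like(v)
    return np.concatenate([minmax(v1), minmax(v2)])

face = np.random.rand(64)   # e.g. DCT-block std-dev features of the face
palm = np.random.rand(64)   # features of the second modality
fused = fuse_features(face, palm)
print(fused.shape)          # (128,)
```

The fused vector is then matched against fused templates with the same distance measures used in the uni-modal case.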
  • 31. 31 To find the ROI in palm print and palm vein images, the code is written in MATLAB. The system configuration used to simulate the code is as follows: CPU: Intel(R) Core(TM) i3-2330M; CPU clock frequency: 2.20 GHz; system memory: 2 GB. A face database of 40 students was collected through video under different conditions. A palm print database of 150 students was collected; the images were acquired using a JAI-SDK camera, which can capture both palm print and palm veins at the same time. A palm vein database of 150 students was likewise collected with the same JAI-SDK camera. A fingerprint database of 200 students was collected; the images were captured by a device named Finger Key, taking thumb impressions. Palm Print and Palm Vein: In this research, a JAI AD-080-GE camera is used to capture NIR hand vein images. The camera contains two 1/3” progressive-scan CCDs with 1024x768 active pixels; one CCD captures visible-light images (400 to 700 nm), while the other captures light in the NIR band of the spectrum (700 to 1000 nm). Since most light sources do not irradiate
  • 32. 32 with sufficient intensity in the NIR part of the spectrum, a dedicated NIR lighting system was built using infrared light-emitting diodes (LEDs), which have a peak wavelength of 830 nm. Fig. 3.1: Acquisition System. Fingerprint: In this research, the Finger Key device is used to capture fingerprint images. Face: In this research, a Sony camera is used to capture video. It is a 9.1-megapixel camera with a C-mount lens from Carl Zeiss. The following steps are followed for face recognition. Frames are extracted from the video after a certain delay; from each video, 6 frames are captured to train the system. As illumination may vary, some frames may be improperly illuminated, which is balanced using histogram equalization. Faces are rich in directional data. We could also use the Fourier transform or the DCT [20][21], but here we need a multiresolution solution, since we also need features of different frequency bands, including high frequencies. Wavelets offer directional selectivity, and we use the low-frequency LL band. Wavelets also offer robust performance against illumination variations. For these reasons we have employed the wavelet transform as the feature extraction tool.
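The two preprocessing steps just described, histogram equalization followed by a one-level Haar decomposition keeping the LL band, can be sketched in plain NumPy. This is an illustrative sketch: the orthonormal scaling and the 8-bit histogram scheme are conventional choices, not parameters specified by the report.

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # map the cumulative distribution onto the full 0-255 range
    cdf = (cdf - cdf.min()) * 255 / max(cdf.max() - cdf.min(), 1)
    return cdf[img].astype(np.uint8)

def haar_ll(img):
    """One level of the 2-D Haar DWT, keeping only the LL (approximation)
    band: low-pass filtering along rows then columns halves each dimension."""
    a = img.astype(float)
    rows = (a[0::2, :] + a[1::2, :]) / np.sqrt(2)
    return (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)

frame = (np.random.rand(64, 64) * 255).astype(np.uint8)
ll = haar_ll(hist_equalize(frame))
print(ll.shape)                       # (32, 32): LL band of a 64x64 frame
```

The LL band retains the smoothed face structure at half resolution, which both shrinks the feature dimensionality and suppresses high-frequency illumination artifacts.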
  • 33. 33 As input to the designed algorithm, video is captured from camera sources with different illumination and face positions. The resulting feature sets from individual modalities are used for recognition. Principal Component Analysis (PCA) is one of the most successful techniques used for image compression and recognition [10][11]. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to a smaller intrinsic dimensionality of feature space (independent variables) [62]. Using PCA, each original image of the training set can be transformed into a corresponding Eigenface. An important feature of PCA is that any original image from the training set can be reconstructed by combining the Eigenfaces [12], which are nothing but characteristic features of the faces. Using all the Eigenfaces extracted from the original images, exact reconstruction of the original images is possible; in practical applications, only a subset of the Eigenfaces is used, so the reconstructed image is an approximation of the original. The loss due to omitting some of the Eigenfaces can, however, be minimized by choosing only the most important features (Eigenfaces) [61]. A 2-D facial image can be represented as a 1-D vector by concatenating each row (or column) into a long thin vector. 1. Assume the training set of images is Γ_1, Γ_2, ..., Γ_m, where each image Γ_i(x, y) has p pixels and m is the number of training images; converting each image into a vector gives an m x p set of vectors. 2. The mean face Ψ is given by: Ψ = (1/m) Σ_{i=1..m} Γ_i   (1) 3. The mean-subtracted face Φ_i is given by: Φ_i = Γ_i − Ψ, i = 1, 2, ..., m   (2) and A = [Φ_1 Φ_2 ... Φ_m] is the mean-subtracted matrix of size p x m.
  • 34. 34 4. By the matrix transformation, the covariance computation is reduced to: C = AᵀA   (3) where C is the reduced (m x m) covariance matrix. 5. Find the eigenvectors and eigenvalues of the C matrix and order the eigenvectors by highest eigenvalue. 6. With the sorted eigenvector matrix, the linear combinations of the training-set images form the Eigenfaces u_k: u_k = A v_k, k = 1, 2, ..., m   (4) where v_k is the k-th eigenvector of C. 7. Instead of using all m Eigenfaces, only the m′ most significant eigenvectors (m′ << m) are considered for training each individual. 8. With the reduced Eigenface basis, each image has its weight given by: w_k = u_kᵀ(Γ − Ψ), k = 1, 2, ..., m′   (5) 9. The weights form a feature vector: Ω = [w_1, w_2, ..., w_{m′}]ᵀ   (6) 10. The reduced data are taken as the input to the next stage for extracting discriminating features. Linear Discriminant Analysis (LDA) has been successfully used as a dimensionality reduction technique in many classification problems. The objective is to maximize the ratio of the between-class scatter matrix S_B to the within-class scatter matrix S_W. The LDA is defined by the transformation [64]: y = Wᵀx   (7) The columns of W are the eigenvectors of S_W⁻¹S_B; it is possible to show that this choice maximizes the ratio det(WᵀS_B W)/det(WᵀS_W W)   (8)
  • 35. 35 These matrices are computed as follows: S_B = Σ_{j=1..c} N_j (μ_j − μ)(μ_j − μ)ᵀ   (9) where μ_j = (1/N_j) Σ_i x_i^j is the mean of the j-th class, x_i^j is the i-th pattern of the j-th class, N_j is the number of patterns of the j-th class, and μ is the overall mean; S_W = Σ_{j=1..c} Σ_i (x_i^j − μ_j)(x_i^j − μ_j)ᵀ   (10) The eigenvectors of LDA are called "Fisherfaces". The LDA transformation is strongly dependent on the number of classes (c), the number of samples (m), and the original space dimensionality (d). It is possible to show that there are at most c−1 nonzero eigenvectors, c−1 being the upper bound of the discriminant space dimensionality. We need at least d+c samples to have a non-singular S_W [64]. It is impossible to guarantee this condition in many real applications; consequently, an intermediate transformation is applied to reduce the dimensionality of the image space. Here we use the PCA transform. LDA derives a low-dimensional representation of a high-dimensional face feature vector space. From Eqn. 9 and Eqn. 10, the projection matrix W is obtained from the eigenvectors of S_W⁻¹S_B   (11) The projection coefficients give the discriminating feature vectors for the LDA method. The face vector is projected by the transformation matrix W, and the projection coefficients are used as the feature representation of each face image. The matching score between a test face image and a training image is calculated as the distance between their coefficient vectors; a smaller distance score means a better match. For the proposed work, the column vectors w_i (i = 1, 2, ..., c−1) of matrix W are referred to as Fisherfaces. In identification mode, the system determines whether an individual is enrolled or not by comparing the extracted features with those stored in the database: the biometric device reads a sample, processes it, and compares it against all samples in the template database. Comparisons can be made with different distance metrics, which are discussed in the experimental results.
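The PCA and LDA stages above can be sketched together as follows. This is a compact illustration of Eqns. 1–11, not the report's exact implementation: the toy data, the choice of m′ = 5, and the use of a pseudo-inverse for S_W (to sidestep the small-sample-size singularity discussed above) are our assumptions.

```python
import numpy as np

def pca(X, m_prime):
    """Eigenfaces via the reduced covariance trick (Eqns. 1-6).
    X: (m, p) matrix with one flattened face per row."""
    psi = X.mean(axis=0)                      # mean face (Eqn. 1)
    A = (X - psi).T                           # p x m mean-subtracted matrix
    C = A.T @ A                               # m x m reduced covariance (Eqn. 3)
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:m_prime]  # keep m' largest eigenvalues
    U = A @ vecs[:, order]                    # Eigenfaces u_k = A v_k (Eqn. 4)
    U /= np.linalg.norm(U, axis=0)
    return psi, U

def lda(Y, labels):
    """Fisherfaces on PCA-reduced weights Y (n, m'): eigenvectors of
    S_W^{-1} S_B, at most c-1 of them (Eqns. 9-11)."""
    mu = Y.mean(axis=0)
    classes = np.unique(labels)
    Sw = np.zeros((Y.shape[1], Y.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in classes:
        Yc = Y[labels == c]
        mc = Yc.mean(axis=0)
        Sw += (Yc - mc).T @ (Yc - mc)         # within-class scatter
        d = (mc - mu)[:, None]
        Sb += len(Yc) * (d @ d.T)             # between-class scatter
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(vals.real)[::-1][:len(classes) - 1]
    return vecs.real[:, order]

# toy data: 3 subjects x 4 images of 64 "pixels" (synthetic, for illustration)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, size=(4, 64)) for i in range(3)])
labels = np.repeat([0, 1, 2], 4)
psi, U = pca(X, m_prime=5)
W = lda((X - psi) @ U, labels)
print(W.shape)        # (5, 2): m' = 5 inputs, c - 1 = 2 Fisherfaces
```

Projecting a test face through U and then W yields the coefficient vector that is matched against the stored templates by distance.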
  • 36. 36 1. Euclidean Distance [32]: The shortest distance between two points is a straight line. In the Euclidean distance metric, the difference in each dimension of the feature vectors of the test and database image is squared, which increases the divergence between the test and database image when the dissimilarity is larger: d(x, y) = √( Σ_{i=1..N} |x_i − y_i|² )   (12) 2. Lorentzian Distance [69]: One is added to guarantee the non-negative property and to avoid log of zero. This metric is sensitive to small changes because the log scale expands the lower range and compresses the higher one: d(x, y) = Σ_{i=1..N} ln(1 + |x_i − y_i|)   (13) 3. Hellinger Distance [69]: The square root of the sum of squared square-root differences in each dimension is taken, which minimizes the difference when the similarity between vectors is higher: d(x, y) = √( 2 Σ_{i=1..N} (√x_i − √y_i)² )   (14) 4. Canberra Distance [69]: Canberra distance is similar to Manhattan distance; the distinction is that the absolute difference between the variables of the two objects is divided by the sum of the absolute variable values prior to summing: d(x, y) = Σ_{i=1..N} |x_i − y_i| / (|x_i| + |y_i|)   (15) where x = testing feature vector, y = trained feature vector, and N = total number of features.
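The four metrics of Eqns. 12–15 can be written directly as NumPy functions. The only additions beyond the formulas are defensive choices noted in the comments (the 0/0 convention for Canberra, and the non-negativity assumption for Hellinger).

```python
import numpy as np

def euclidean(x, y):                                   # Eqn. 12
    return np.sqrt(np.sum((x - y) ** 2))

def lorentzian(x, y):                                  # Eqn. 13
    return np.sum(np.log(1.0 + np.abs(x - y)))         # +1 avoids log(0)

def hellinger(x, y):                                   # Eqn. 14
    # assumes non-negative feature values, as the square roots require
    return np.sqrt(2.0 * np.sum((np.sqrt(x) - np.sqrt(y)) ** 2))

def canberra(x, y):                                    # Eqn. 15
    denom = np.abs(x) + np.abs(y)
    safe = np.where(denom > 0, denom, 1.0)
    # define 0/0 as 0 so identical zero-valued features contribute nothing
    return np.sum(np.where(denom > 0, np.abs(x - y) / safe, 0.0))

x = np.array([1.0, 4.0, 9.0])
y = np.array([1.0, 1.0, 4.0])
print(euclidean(x, y))   # sqrt(0 + 9 + 25) = sqrt(34) ≈ 5.831
print(canberra(x, y))    # 0/2 + 3/5 + 5/13 ≈ 0.985
```

Because Canberra normalizes each dimension by its magnitude, it weights small-valued features more heavily than Euclidean distance does, which is relevant when comparing the FAR/FRR results for the two metrics in the next chapter.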
  • 37. 37 To reach the goals defined in the previous chapters, we first experimented with individual biometric modalities. To check the designed algorithms, we experimented with standard databases for both face and fingerprint; trends in these initial experiments were conducive, and we then experimented further with the palm print and palm vein modalities. The choice of which blocks of DCT coefficients to use when taking the standard deviation is also important, so we followed this trend and also tweaked it to see the effect on system performance with the palm print, fingerprint, and palm vein modalities. Initially, to decide how many testing and training samples to take, we conducted experiments with varying training and testing sets. This chapter gives a detailed account of the experiments with the fingerprint database, palm print database, face database (using video as well as images), and palm vein database, together with the associated performance indices. It also compares recognition performance for each modality individually; this comparison is made for a common number of people (100) for fingerprint, palm print, and palm vein. Finally, taking the inferences from the experiments with individual modalities into account, we implement fusion schemes for fingerprint and face, fingerprint and palm print, fingerprint and palm vein, and palm print and palm vein. We also compare the results of the fusion technique using different multimodal techniques at the end.
5.1 Experiment with FVC2002 database

Table 1. Results with FVC2002 taking lower-frequency blocks of DCT coefficients mentioned in figure ().
Distance Metric | 40 people: FAR  FRR   GAR
Euclidean       |            7.5  6.25  93.75
Canberra        |            6.25 5     95

Following figures show graphical representations of Table 1: Fig. 5.1 (a) FAR & FRR for Euclidean Distance; Fig. 5.1 (b) FAR & FRR for Canberra Distance.

Table 2. Results with FVC2002 taking higher-frequency blocks of DCT coefficients mentioned in figure ().
Distance Metric | 40 people: FAR  FRR   GAR
Euclidean       |            2.5  2.5   97.5
Canberra        |            1.25 1.25  98.75
  • 39. 39 Following figures show graphical representations of Table 2: (a) Euclidean distance, (b) Canberra distance. Fig. 5.2 (a) FAR & FRR for Euclidean Distance, (b) FAR & FRR for Canberra Distance. Inference: The results in Tables 1 and 2 clearly show that FAR and FRR are lower (and GAR higher) when the higher-frequency coefficients are taken. This is due to the fact that fingerprints contain edges, so their information lies in the higher frequencies. 5.2 Experiments with fingerprint database for varying number of training sets. Table 3. Results taking 6 training and 2 testing images for varying threshold
  • 40. 40
Distance Metric | 40 people: FAR  FRR   GAR   | 65 people: FAR  FRR   GAR
Euclidean       |            2.5  0     100   |            6.25 2.5   97.5
Euclidean       |            1.25 8.75  91.25 |            2.5  16.25 83.75

Following figures show graphical representations of Table 3: Fig. 5.3(a) FAR & FRR for Euclidean Distance (40 persons); Fig. 5.3(b) FAR & FRR for Euclidean Distance (65 persons).

Table 4. Results taking 4 training and 2 testing images for varying threshold
Distance Metric | 40 people: FAR  FRR   GAR   | 65 people: FAR   FRR   GAR
Euclidean       |            5    0     100   |            11.25 11.25 88.75
Euclidean       |            2.5  13.75 84.25 |            4.5   33.75 66.25
The following figures show graphical representations of Table 4.

Fig. 5.4 (a) FAR & FRR for Euclidean Distance (40 people) (b) FAR & FRR for Euclidean Distance (65 people)

Inference: The experiments show that the results for 6 training and 2 testing images are better than those for 4 training and 2 testing images. It can also be seen that FAR and FRR increase rapidly as the database grows. For fingerprint we therefore use 6 training and 2 testing images.

5.3 Experiments with the COEP fingerprint database

After experimenting with the FVC2002 dataset, we experimented with our own dataset, the COEP fingerprint database. The following table shows results for the middle finger dataset.

Table 5. Results with the COEP database for 50 and 100 subjects (readings taken at varying thresholds)
                   50 people               100 people              100 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR       FAR    FRR    GAR
Canberra           0.0    0.0    100       1.0    0.0    100       0.5    2.0    98.0
Manhattan          1.0    0.0    100       0.50   0.0    100       0.0    0.50   99.5
Euclidean          0.0    0.0    100       1.33   1.67   98.33     -      -      -
Gower              1.0    0.0    100       0.50   0.0    100.0     0.0    0.5    99.5

The following figures show graphical representations of Table 5.
Fig. 5.5 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance

Graphs for 100 subjects:
Fig. 5.6 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance

Table 6. Results with the COEP database for 150 subjects (readings taken at two thresholds)

                   150 people              150 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           1.33   0.0    100       -      -      -
Manhattan          0.67   2.0    98.0      1.33   0.67   99.33
Euclidean          2.0    2.67   97.33     -      -      -
Gower              0.67   2.0    98.00     2.0    0.0    100

The following figures show graphical representations of Table 6.
Fig. 5.7 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance

Table 7. Results with the COEP database for 200 subjects (readings taken at two thresholds)
                   200 people              200 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           1.75   1.25   98.5      2.0    0.0    100
Manhattan          2.0    0      100.0     1.0    1.25   98.75
Euclidean          3.0    1.0    99        -      -      -
Gower              2.0    0      100.0     1.0    1.25   98.75

The following figures show graphical representations of Table 7.
Fig. 5.8 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance

Inference: The genuine acceptance rates in Tables 5 to 7 do not drop steeply as the number of entries in the database grows, which can be attributed to the stability of the middle finger features. The false acceptance rates, however, increase or remain almost constant with increasing database size. Here Canberra distance performs better in terms of GAR, while Hellinger distance provides a more secure system with similar GAR in both cases: Canberra distance normalizes each feature of the difference vector, while Hellinger distance separates the features well.

5.4 Experiments with the COEP palm print database

Initially we cropped the Region of Interest (ROI) from the palm print image; the procedure is described in the proposed methodology. We experimented with our own dataset, the COEP palm print database. The following table shows results for the palm print dataset. Here
the experiment is done by taking 2 testing and 3 training palm print images per subject, where each image is reduced to a feature vector of dimension 1 x 38.

Table 8. Results with the COEP palm print database

                   50 people               100 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           0.0    0.0    100       1.0    0.0    100
Manhattan          1.0    0.0    100       1.50   1.50   98.5
Euclidean          1.0    0.0    100       3.5    1.0    99.0
Gower              1.0    0.0    100       1.50   1.0    99.0

The following figures show graphical representations of Table 8.
Fig. 5.9 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
Fig. 5.10 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance

Table 9. Results with the COEP palm print database for 150 subjects (readings taken at two thresholds)

                   150 people              150 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           1.33   0.0    100       0.67   1.67   98.33
Manhattan          1.67   1.0    99        1.0    2.67   97.33
Euclidean          1.67   3      97        -      -      -
Gower              1.33   1.33   98.67     -      -      -
The following figures show graphical representations of Table 9.

Fig. 5.11 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
Inference: The genuine acceptance rates in Tables 8 and 9 do not drop steeply as the number of entries in the database grows, which can be attributed to the stability of the palm print features. The false acceptance rates, however, increase or remain almost constant with increasing database size. Here Canberra distance performs better in terms of GAR, while Hellinger distance provides a more secure system with similar GAR in both cases: Canberra distance normalizes each feature of the difference vector, while Hellinger distance separates the features well.

5.5 Experiments with the COEP palm vein database

Initially we cropped the Region of Interest (ROI) from the palm image; the ROI extraction procedure is described in the proposed methodology section. We experimented with our own dataset, the COEP palm vein database. The following table shows results for the palm vein dataset. Here the experiment is done by taking 2 testing and 3 training palm vein images per subject, where each image is reduced to a feature vector of dimension 1 x 38.

Table 10. Results with the COEP database for palm veins

                   50 people               100 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           0.0    0.0    100       0.5    0      100
Manhattan          1      1      99        0.5    1.0    99
Euclidean          1      2      98        1      3      97
Gower              1      1      99        0.5    1      99
The following figures show graphical representations of Table 10.

Fig. 5.12 (a) FAR & FRR for Euclidean Distance (b) FAR & FRR for Gower Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
The following figures show graphical representations of Table 10.

Fig. 5.13 (a) FAR & FRR for Euclidean Distance (b) FAR & FRR for Gower Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
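The four matching distances compared throughout these tables can be written directly from their definitions, following the distance survey cited as [32]. The small epsilon guarding division by zero in the Canberra distance is an implementation assumption:

```python
import numpy as np

def euclidean(p, q):
    return float(np.sqrt(np.sum((p - q) ** 2)))

def manhattan(p, q):
    return float(np.sum(np.abs(p - q)))

def gower(p, q):
    # Manhattan distance averaged over the feature dimensions
    return float(np.mean(np.abs(p - q)))

def canberra(p, q):
    # per-feature normalization, so large features do not dominate
    return float(np.sum(np.abs(p - q) / (np.abs(p) + np.abs(q) + 1e-12)))
```

The per-feature normalization of the Canberra distance is the property the inferences in this chapter appeal to when explaining its good GAR.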
Inference: The genuine acceptance rates in Table 10 do not drop steeply as the number of entries in the database grows, which can be attributed to the stability of the palm vein features. The false acceptance rates, however, increase or remain almost constant with increasing database size. Here Canberra distance performs better in terms of GAR, while Hellinger distance provides a more secure system with similar GAR in both cases: Canberra distance normalizes each feature of the difference vector, while Hellinger distance separates the features well.

5.6 Result of fusion of palm print and palm vein

Experiments with palm print and palm vein were carried out separately above. We now fuse the two modalities using a feature-level fusion scheme: the individual feature sets are concatenated. This increases the number of computations, since the feature length almost doubles. The features of the training and testing sets had already been stored as text files during the unimodal trials. The fused feature vector has dimension 1 x 76. In this experiment the fused palm print and palm vein images belong to the same person, and the experiment is done by taking 2 testing and 3 training images each of palm print and palm vein.

Table 11. Results of fusion of palm vein and palm print

                   50 people               100 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           0.0    0.0    100       0.0    0.0    100
Manhattan          0      0      100       0.0    0.0    100
Euclidean          0      0      100       1.0    0.0    100
Gower              0      0      100       0.0    0.0    100

The following figures show graphical representations of Table 11 for 50 subjects.

Fig. 5.14 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance

The following figures show graphical representations of Table 11 for 100 subjects.
Fig. 5.15 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
Inference: The fusion of palm print and palm vein shows a large improvement in all parameters. All distances achieve 100% GAR, and for the small dataset all distances perform excellently with 0% FAR and FRR. As the database grows, the FAR of Euclidean distance increases, but only by 1%, which is still less than the FAR of the unimodal system using Euclidean distance for 100 subjects. We can recommend this bimodal system, as it gives better results than the individual modalities.

Table 12. Results of fusion of palm vein and palm print on the exchanged database (readings taken at varying thresholds)

                   50 people               100 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           0.0    0.0    100       2      0      100
Canberra           0.0    0.0    100       1      1      99

The following figure shows a graphical representation of Table 12 for 100 subjects.

Fig. 5.16 FAR & FRR for Canberra Distance
Inference: The fusion of palm print and palm vein on the exchanged database shows variation compared with the previous results: FAR and FRR increase as the database grows to 100 subjects. Hence we can say that the palm print and palm vein modalities are highly correlated.

Time complexity: The fusion of palm print and palm vein is not computationally demanding. The time taken by the system to load the database and test one subject using 2 testing sets is 0.161 s.

Inference: The fusion of palm print and palm vein is highly efficient owing to its ease of acquisition, its accuracy (100% GAR) and its low time complexity. Hence it can also be used in real-time operation.

5.7 Result of fusion of fingerprint and palm vein

Experiments with fingerprint and palm vein were carried out separately above. We now fuse the two modalities using a feature-level fusion scheme: the individual feature sets are concatenated. This increases the number of computations, since the feature length grows. The features of the training and testing sets had already been stored as text files during the unimodal trials. The fused feature vector has dimension 1 x 57. The experiment is done by taking 2 testing images each of fingerprint and palm vein.
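The feature-level fusion described above reduces to concatenating the unimodal feature vectors (here, a fingerprint vector and the 1 x 38 palm vein vector giving the 1 x 57 fused vector) and matching the fused vector against the enrolled templates. A sketch, with Euclidean matching used purely for illustration:

```python
import numpy as np

def fuse(f_finger, f_vein):
    """Feature-level fusion: simple concatenation of the two feature sets."""
    return np.concatenate([f_finger, f_vein])

def identify(probe, gallery):
    """Return the index of the closest enrolled fused template and its
    distance (Euclidean distance used here for illustration)."""
    d = np.linalg.norm(gallery - probe, axis=1)
    i = int(np.argmin(d))
    return i, float(d[i])
```

The same two functions cover every fusion pair in this chapter; only the source of the two unimodal vectors changes.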
Table 13. Results of fusion of the palm vein and fingerprint databases

                   50 people               100 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           0.0    0.0    100       0.50   0.0    100
Manhattan          1.0    0      100       0.50   0.0    100
Euclidean          1.0    0      100       1.0    1.0    99
Gower              0.0    0      100       1.0    1.5    98.5

The following figures show graphical representations of Table 13 for 50 subjects.
Fig. 5.17 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance

The following figures show graphical representations of Table 13 for 100 subjects.
Fig. 5.18 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance

Inference: The fusion of fingerprint and palm vein shows a large improvement in all parameters. All distances achieve 100% GAR for the small dataset. Canberra distance performs excellently with 0% FAR and FRR for 50 subjects; although its FAR rises slightly for 100 subjects, GAR is maintained at 100%. This is because Canberra distance normalizes each feature of the difference vector. GAR is also maintained at 100% with Manhattan distance. Euclidean distance has a constant FAR of 1%, which is still less than or equal to the FAR of the unimodal systems using the same distance. With Gower distance, however, the GAR of the system decreases. We can recommend this bimodal system where a low FAR is required.

Time complexity: The time complexity of fingerprint and palm vein fusion was measured. The time taken by the system to load the database and test one subject using 2 testing sets is 0.371 s.

5.8 Result of fusion of fingerprint and palm print
Experiments with fingerprint and palm print were carried out separately above. We now fuse the two modalities using a feature-level fusion scheme: the individual feature sets are concatenated. This increases the number of computations, since the feature length grows. The features of the training and testing sets had already been stored as text files during the unimodal trials. The fused feature vector has dimension 1 x 57. The experiment is done by taking 2 testing images each of fingerprint and palm print.

Table 14. Results of fusion of the palm print and fingerprint databases

                   50 people               100 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           0.0    0.0    100       0.50   0.0    100
Manhattan          0.0    0      100       0.50   0.50   99.5
Euclidean          0.0    0      100       1.0    1.0    99
Gower              1.0    0      100       0.50   0.50   99.5
The following figures show graphical representations of Table 14 for 50 subjects.

Fig. 5.19 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
The following figures show graphical representations of Table 14 for 100 subjects.

Fig. 5.20 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance
Table 15. Results of fusion of the palm print and fingerprint databases for 150 subjects, with different thresholds

                   150 people              150 people
Distance Metric    FAR    FRR    GAR       FAR    FRR    GAR
Canberra           1.0    0.0    100       0.67   0.67   99.33
Manhattan          2.0    0.0    100       1.0    2.0    98.0
Euclidean          1.33   0.0    100       1.67   1.0    99.0
Gower              2.0    0.0    100       1.0    2.0    98.0

The following figures show graphical representations of Table 15 for 150 subjects.
Fig. 5.20 (a) FAR & FRR for Gower Distance (b) FAR & FRR for Euclidean Distance (c) FAR & FRR for Canberra Distance (d) FAR & FRR for Manhattan Distance

Inference: The fusion of fingerprint and palm print shows a large improvement in all parameters. All distances achieve 100% GAR for the small datasets. Canberra distance performs excellently with 0% FRR and 100% GAR for 150 subjects; its FAR rises to 0.67% for 150 subjects compared with 0.5% for 100, which is still half that of the same distance in the unimodal palm print and fingerprint systems. Manhattan gives better GAR and lower FAR than Euclidean distance, and Gower distance gives better GAR on the larger database. We can recommend this bimodal system, as it has a low FAR of 0.67% with a GAR of 99.33%.

Time complexity: The time complexity of fingerprint and palm print fusion was measured. The time taken by the system to load the database and test one subject using 2 testing sets is 0.4813 s, so it can be implemented in real-time systems as well.

5.9 Experiments with the ORL database for the face modality

The proposed system is first applied to the ORL database to check its accuracy. The system is tested using only one image, with the input in image form. As from the literature survey it is
very well known that face information lies in the lower frequencies, the lower-frequency block of coefficients is taken.

Table 16. Results with the ORL database taking the lower-frequency blocks of DCT coefficients mentioned in figure (), for varying thresholds

Distance Metric    FAR     FRR     GAR
Euclidean          3.75    3.75    96.25
Euclidean          1.25    6       94

The following figure shows a graphical representation of Table 16.

Fig. 5.21 FAR & FRR for Euclidean Distance

Inference: The results for the face modality show that a low FAR can be achieved, but only at the cost of a higher FRR. After applying histogram equalization to the face images we obtain better image quality.
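The histogram equalization step applied to the face images can be sketched in a few lines. This is the standard 8-bit formulation, assuming the input is not a constant image:

```python
import numpy as np

def hist_equalize(img):
    """Map each gray level through the normalized cumulative histogram so
    the output intensities spread over the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first non-zero bin
    lut = np.round((cdf - cdf_min) * 255.0 / (img.size - cdf_min))
    # gray levels absent from the image are never indexed, so the cast is safe
    return lut.astype(np.uint8)[img]
```

Pixels in the dark half of an unevenly lit face are spread over a wider intensity range, which is why the equalized images below look better.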
Fig. 5.1 Image before histogram equalization    Fig. 5.2 Image after histogram equalization

Fig. 5.3 Image before histogram equalization    Fig. 5.4 Image after histogram equalization

Figures 5.1 to 5.4 show that histogram equalization improves image quality, which is very much required for face recognition, especially when half of the face is poorly illuminated, as will be the case in many of our images.

 Salt & Pepper Noise
Fig. 5.5 Before histogram equalization    Fig. 5.6 After histogram equalization

 Gaussian Noise

Fig. 5.7 Before histogram equalization    Fig. 5.8 After histogram equalization

The above results show that noise in the image cannot be removed by histogram equalization. If the incoming image is noisy, a filter must therefore be applied; salt-and-pepper noise can be removed with a median filter.

Results after applying filtering

Salt and pepper noise

Fig. 5.9 Images after applying median and Gaussian LPF on an image suffering from salt & pepper noise.
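The median filtering used here against salt-and-pepper noise can be sketched with a plain 3x3 neighbourhood median (a NumPy sketch; the 3x3 window and replicated edge pixels are assumptions):

```python
import numpy as np

def median3x3(img):
    """Replace each pixel with the median of its 3x3 neighbourhood;
    isolated salt/pepper impulses are eliminated."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    neigh = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(neigh, axis=0).astype(img.dtype)
```

An impulse pixel is an extreme value in its neighbourhood, so the median simply ignores it, which is why this filter succeeds where the Gaussian low-pass only smears the spots.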
After applying the median and Gaussian low-pass filters we find that the median filter works for salt-and-pepper noise, while the Gaussian low-pass filter does not: black and white spots remain. Hence we use the median filter for this kind of noise.

Gaussian noise

Fig. 5.10 Images after applying median and Gaussian LPF on an image suffering from Gaussian noise.

After applying the median and Gaussian low-pass filters we find that the Gaussian low-pass filter works somewhat for Gaussian noise, although some noise remains and the image gets blurred. The median filter does not remove Gaussian noise, so we cannot use it for this kind of noise.

Poisson noise

Fig. 5.11 Images after applying median and Gaussian LPF on an image suffering from Poisson noise.

After applying the median and Gaussian low-pass filters we find that the Gaussian low-pass filter works well for Poisson noise, although the image gets blurred. The median filter does not remove Poisson noise very efficiently, so we cannot use it for Poisson noise either.

We can reduce the dimension of an image, which is simply a 2-D array. Here we use the PCA algorithm for dimensionality reduction. We first convert each image into a single-column matrix; after combining all the images we compute their eigenvalues and eigenvectors and build a projection matrix from the eigenvectors with non-zero eigenvalues. We then project the images onto this projection matrix to obtain a matrix of reduced dimension.
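The PCA procedure just described can be sketched as follows. The small n x n "snapshot" eigen-decomposition and the relative eigenvalue tolerance are implementation assumptions; the steps otherwise follow the text (flatten to columns, keep eigenvectors with non-zero eigenvalues, project):

```python
import numpy as np

def pca_project(images):
    """Flatten each image to a column, keep the eigenvectors belonging to
    non-zero eigenvalues, and project the data onto them."""
    X = np.stack([im.ravel() for im in images], axis=1).astype(float)  # d x n
    Xc = X - X.mean(axis=1, keepdims=True)
    # eigen-decomposition of the small n x n Gram matrix instead of d x d
    vals, vecs = np.linalg.eigh(Xc.T @ Xc)
    keep = vals > vals.max() * 1e-9          # "non-zero" eigenvalues
    P = Xc @ vecs[:, keep]                   # projection matrix, d x k
    P /= np.linalg.norm(P, axis=0)
    return P.T @ Xc                          # reduced images, k x n
```

With 4 images of 30x30 pixels, centering leaves at most 3 non-zero eigenvalues, consistent with the 3x1 reduced dimension reported in Table 5.1.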
Image Size   Size after conversion     No. of eigenvector columns     Size of projection     Dimension of reduced
             to single column (Is)     with non-zero eigenvalues      matrix (Pmat)          image ((Pmat)^T x Is)
30x30        900x1                     3                              900x3                  3x1
50x50        2500x1                    3                              2500x3                 3x1
100x100      10000x1                   4                              10000x4                4x1

Table 5.1 Dimensionality reduction using the PCA algorithm

Hence the PCA algorithm can be used to reduce the dimension of images, which also reduces the number of calculations required and the memory needed to store them; the reduced images can later be used for template matching.

The results show robust face detection; the size of the face does not matter, provided it is more than 50x50 pixels. Figs. 5.9 and 5.10 show detection of faces in images.

No. of Images    Type       Face Size (approx.)    Detected    FAR    FRR
10               Frontal    100x150                10          0      0
20               Frontal    100x150                20          0      0
30               Frontal    100x150                30          0      0
40               Frontal    100x150                40          0      0
50               Frontal    100x150                50          0      0

Table 5.1 FAR and FRR for face detection

(Graphs: FAR and FRR for frontal face detection)

No. of Images    Type       Face Size (approx.)    Detected    FAR    FRR
10               Frontal    200x250                10          0      0
20               Frontal    200x250                20          0      0
30               Frontal    200x250                30          0      0
40               Frontal    200x250                40          0      0
50               Frontal    200x250                50          0      0

Table 5.2 FAR and FRR for face detection

(Graphs: FAR and FRR for frontal face detection)
No. of Images    Type    Face Size (approx.)    Detected    FAR    FRR
10               Side    200x250                3           0      7

Table 5.3 FAR and FRR for side face detection

(Graphs: FRR and FAR for side face detection)

It can be seen that we have developed a robust face detection algorithm that can detect small faces in a frame or image, provided they are larger than 50x50 pixels. The algorithm used for face detection is the Viola-Jones algorithm; however, the experimental results show that it cannot handle side faces and fails to detect them. Using symmetry we could recover the other half of the face, but this takes time and makes the system more complex.

No. of Images    Wavelet    Size       Time (s)    Low-freq. size    Size       Time (s)    Low-freq. size
40               Haar       200x200    0.67        100x100           256x256    1.09        128x128
60               Haar       200x200    0.78        100x100           256x256    1.42        128x128
40               db4        200x200    1.14        103x103           256x256    1.81        131x131
60               db4        200x200    1.98        103x103           256x256    2.76        131x131
40               db8        200x200    2.15        107x107           256x256    3.125       135x135
60               db8        200x200    3.05        107x107           256x256    4.229       135x135

Table 5.4 Comparison of time requirements for the Haar and Daubechies wavelets
It can be seen that the time taken by the Haar wavelet is much lower, almost half that of 'db4', while 'db8' takes about 3 times longer than Haar. The number of pixels in the low-frequency band is also larger for 'db4' and 'db8', which further increases the time of the operations performed afterwards. Hence we use the Haar wavelet, which is simpler than the other wavelets.

Figs. 5.11 and 5.12 show images after the Haar wavelet transform, which are used for feature extraction.

The following table shows the results for face recognition using videos.

No. of Videos    Training Images    Images for Recognition    FAR    FRR
4                4                  2                         0      0
6                4                  2                         0      0
3                3                  2                         0      0
5                4                  2                         0      0
6                3                  2                         0      0

(Graph: FAR and FRR for face recognition)
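The single-level Haar decomposition used before feature extraction (which halves a 200x200 image to a 100x100 low-frequency band, as in Table 5.4) can be sketched in plain NumPy. The averaging normalization is an assumption; an orthonormal variant would scale by 1/sqrt(2) instead:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar transform: pairwise averages/differences
    along rows, then columns, give the LL, LH, HL and HH sub-bands; the
    half-size LL band is kept for face features."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass
    LL = (a[0::2, :] + a[1::2, :]) / 2.0
    HL = (a[0::2, :] - a[1::2, :]) / 2.0
    LH = (d[0::2, :] + d[1::2, :]) / 2.0
    HH = (d[0::2, :] - d[1::2, :]) / 2.0
    return LL, LH, HL, HH
```

Because the Haar filters are just sums and differences of pixel pairs, this transform is cheaper than the longer db4/db8 filters, matching the timing comparison above.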
We have proposed a secure and computationally efficient biometric authentication system for face recognition. For the face recognition system, we used histogram equalization followed by the Haar wavelet, along with a PCA+LDA framework for feature reduction and dominant feature selection.

 The distance of the face from the camera does not matter much, but the face must occupy more than 50x50 pixels in a frame to be detected. Side face detection remains a problem.
 The results make clear that the implemented histogram equalization improves the picture quality of noise-free images.
 Histogram equalization does not remove noise.
 Using the Haar wavelet for feature extraction reduces the complexity of the program compared with Gabor or other complex wavelets.
 Employing LDA after PCA not only reduces the overall dimensionality of the system but also increases the inter-class separation, which is desirable.
 Integrating information at an early stage (feature level) and separating it into classes using Linear Discriminant Analysis improves the results, because LDA tries to maximise the ratio of the between-class scatter matrix to the within-class scatter matrix. The discriminating features are retained by LDA, whose input is the concatenated features from the individual modalities.
 Localized histogram equalization can be used to enhance the quality of the input image further. [31]
 For feature extraction, different wavelets such as Daubechies [18] and cubic B-spline [19] can be used.
 We can also apply the DCT after the wavelet transform, as this has been found in the literature to improve recognition [20][21].
 Multimodal biometrics can be used to protect the system from spoof attacks [33].
[1] A. K. Jain, P. Flynn, A. A. Ross (Eds.), Handbook of Biometrics, Springer Science & Business Media, 2008.
[2] Chengjun Liu and Harry Wechsler, “Independent Component Analysis of Gabor Features for Face Recognition”, IEEE Transactions on Neural Networks, Vol. 14, No. 4, July 2003.
[3] Syed Maajid Mohsin, Muhammad Younus Javed, Almas Anjum, “Face Recognition using Bank of Gabor Filters”, IEEE 2nd International Conference on Emerging Technologies, Peshawar.
[4] Victor-Emil Neagoe, John Mitrache, “A Feature-Based Face Recognition Approach Using Gabor Wavelet Filters Cascaded With Concurrent Neural Modules”, World Automation Congress (WAC) 2006, July 24-26, Budapest, Hungary.
[5] Zhi-Kai Huang, Wei-Zhong Zhang, Hui-Ming Huang, Ling-Ying Hou, “Using Gabor Filters Features for Multi-Pose Face Recognition in Color Images”, IEEE Second International Symposium on Intelligent Information Technology Application, 2008.
[6] P. K. Suri, Ekta Walia, Amit Verma, “Novel Face Detection Using Gabor Filter Bank with Variable Threshold”, Proceedings of Springer International Conference on High Performance Architecture and Grid Computing, July 19-20, 2011.
[8] Xiaoyang Tan and Bill Triggs, “Fusing Gabor and LBP Feature Sets for Kernel-based Face Recognition”, Proceedings of 3rd Springer International Conference on Analysis and Modeling of Faces and Gestures, 2007.
[9] Poonam Sharma, K. V. Arya, R. N. Yadav, “Efficient face recognition using wavelet-based generalized neural network”, Elsevier Signal Processing, Vol. 93, Issue 6, pp. 1557-1565, June 2013.
[10] Shufu Xie, Shiguang Shan, Xilin Chen, Xin Meng, Wen Gao, “Learned local Gabor patterns for face representation and recognition”, Elsevier Signal Processing, Vol. 89, Issue 2, pp. 2333-2344, 2009.
[11] Lin-Lin Huang, Akinobu Shimizu, Hidefumi Kobatake, “Robust face detection using Gabor filter features”, Pattern Recognition Letters, Vol. 26, Issue 11, pp. 1641-1649, 2005.
[12] Bhaskar Gupta, Sushant Gupta, Arun Kumar Tiwari, “Face Detection Using Gabor Feature Extraction and Artificial Neural Network”, IEEE First Workshop on Image Processing and Applications, pp. 1-6, 2008.
[13] Md. Tajmilur Rahman and Md. Alamin Bhuiyan, “Face Recognition using Gabor Filters”, Proceedings of 11th IEEE International Conference on Computer and Information Technology, Khulna, Bangladesh, 2008.
[14] A. N. Rajagopalan, K. Srinivasa Rao, Y. Anoop Kumar, “Face recognition using multiple facial features”, Elsevier Pattern Recognition Letters, Vol. 28, pp. 335-341, 2007.
[15] Young-Jun Song, Young-Gil Kim, Un-Dong Chang, Heak Bong Kwon, “Face recognition robust to left/right shadows: facial symmetry”, Elsevier Pattern Recognition, Vol. 39, pp. 1542-1545, 2009.
[16] D. Sridhar, “Face Image Classification Using Combined Classifier”, International Conference on Signal Processing, Image Processing and Pattern Recognition, IEEE, 2013.
[17] Kamran Etemad and Rama Chellappa, “Discriminant analysis for recognition of human face images”, Springer Lecture Notes in Computer Science, Vol. 1206, pp. 125-142, 1997.
[18] Yong Chen, “Face Recognition Using Cubic B-spline Wavelet Transform”, IEEE Pacific-Asia Workshop, 2008.
[19] Mohamed El Aroussi, “Curvelet-Based Feature Extraction with B-LDA for Face Recognition”, IEEE, 2009.
[20] Meihua Wang, Hong Jiang and Ying Li, “Face Recognition based on DWT/DCT and SVM”, International Conference on Computer Application, 2010.
[21] Hazim Kemal Ekenel, Rainer Stiefelhagen, “Local Appearance based Face Recognition using DCT”.
[22] Aman R. Chadha, Pallavi P. Vaidya, “Face Recognition using DCT with Local and Global Features”, IEEE International Conference, 2011.
[23] Meng Joo Er, “High-Speed Face Recognition Based on Discrete Cosine Transform and RBF Neural Networks”, IEEE Transactions on Neural Networks, Vol. 16, May 2005.
[24] Muhammad Azam, “Discrete Cosine Transform (DCT) Based Face Recognition in Hexagonal Images”, IEEE, 2010.
[25] Kamran Etemad and Rama Chellappa, “Discriminant analysis for recognition of human face images”, Springer Lecture Notes in Computer Science, Vol. 1206, pp. 125-142, 1997.
[26] Haifeng Hu, “Variable lighting face recognition using discrete wavelet transform”, Elsevier Pattern Recognition Letters, Vol. 32, pp. 1526-1534, 2011.
[27] X. Cao, W. Shen, L. G. Yu, Y. L. Wang, J. Y. Yang, Z. W. Zhang, “Illumination invariant extraction for face recognition using neighboring wavelet coefficients”, Elsevier Pattern Recognition, Vol. 45, pp. 1299-1305, 2012.
[28] K. Jaya Priya, R. S. Rajesh, “Local Fusion of Complex Dual-Tree Wavelet Coefficients Based Face Recognition for Single Sample Problem”, Elsevier Procedia Computer Science, Vol. 2, pp. 94-100, 2010.
[29] M. Koteswara Rao, “Face recognition using DWT and eigenvectors”, 1st International Conference on Emerging Technology Trends in Electronics, 2012.
[30] Sidra Batool Kazmi, “Wavelets Based Facial Expression Recognition Using a Bank of Neural Networks”, IEEE, 2010.
[31] Tripti Goel, “Rescaling of Low Frequency DCT Coefficients with Kernel PCA for Illumination Invariant Face”, IEEE, 2012.
[32] Sung-Hyuk Cha, “Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions”, International Journal of Mathematical Models and Methods in Applied Sciences, Issue 4, Volume 1, 2007.
[33] Djamel Bouchaffra, Abbes Amira, “Structural hidden Markov models for biometrics: Fusion of face and fingerprint”, Elsevier Pattern Recognition, Vol. 41, pp. 852-867, 2008.
[34] R. Raghavendra, Bernadette Dorizzi, Ashok Rao, G. Hemantha Kumar, “Designing efficient fusion schemes for multimodal biometric systems using face and palmprint”, Elsevier Pattern Recognition, Vol. 44, pp. 1076-1088, 2011.
[35] P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman, “Eigenfaces vs. Fisherfaces: recognition using class specific linear projection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, pp. 711-720, 1997.
[36] P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman, “Eigenfaces vs. Fisherfaces: recognition using class specific linear projection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, pp. 711-720, 1997.
[37] R. Jafri and H. R. Arabnia, “A Survey of Face Recognition Techniques”, Journal of Information Processing Systems, Vol. 5, No. 2, June 2009.
[38] Sanchit, Maurício Ramalho, Paulo Lobato Correia, Luís Ducla Soares, “Biometric Identification through Palm and Dorsal Hand Vein Patterns”, 2011.
[39] C.-L. Lin and K.-C. Fan, “Biometric verification using thermal images of palm-dorsa vein patterns”, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, pp. 199-213, Feb. 2004.
[40] Goh Kah Ong Michael, Tee Connie, Andrew Teoh Beng Jin, “Design and Implementation of a Contactless Palm Print and Palm Vein Sensor”, 11th International Conference on Control, Automation, Robotics and Vision, Singapore, December 2010.
[41] Huan Zhang, Dewen Hu, “A Palm Vein Recognition System”, International Conference on Intelligent Computation Technology and Automation, 2010.
[42] Jing Liu, Yue Zhang, “Palm-Dorsa Vein Recognition Based on Two-Dimensional Fisher Linear Discriminant”, IEEE, 2011.
[43] David Zhang, “Online palmprint identification”.
[44] Ajay Kumar, Venkata Prathyusha, “Personal Authentication Using Hand Vein Triangulation and Knuckle Shape”, IEEE Transactions on Image Processing, Vol. 18, No. 9, September 2009.
[45] A. El-Zaart, “Images thresholding using Isodata technique with gamma distribution”, Pattern Recognition and Image Analysis, Vol. 20, No. 1, pp. 29-41, 2010.
[46] S. Suzuki and K. Abe, “Topological Structural Analysis of Digitized Binary Images by Border Following”, CVGIP, Vol. 30, No. 1, pp. 32-46.
[47] Zhong Qu, Zheng-yong Wang, “Research on pre-processing of palmprint image based on adaptive threshold and Euclidean distance”, Sixth International Conference on Natural Computation (ICNC), pp. 4238-4242, 2010.