A Project on
LATENT FINGERPRINT MATCHING USING AUTOMATED
FINGERPRINT IDENTIFICATION SYSTEM
Submitted in partial fulfillment of the requirements for the degree of
Bachelor of Technology
in
Electronics and Communication Engineering
by
Manish Negi
Pratiksha Yadav
Shubham
Rishi Raj Singh Rawat
Under the guidance of
Mr. Manoj Kumar
DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING
G. B. PANT ENGINEERING COLLEGE, PAURI, UTTARAKHAND, INDIA
JUNE 2015
DECLARATION
We hereby declare that this dissertation entitled “LATENT FINGERPRINT MATCH-
ING USING AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM” submitted
to the Department of Electronics and Communication Engineering, G. B. Pant Engi-
neering College, Pauri Garhwal (Uttarakhand) for the award of Bachelor of Technology
degree in Electronics and Communication Engineering is a bonafide work carried out
by us under the guidance of Mr. Manoj Kumar and that it has not been submitted
anywhere for any award. Where other sources of information have been used, they have
been acknowledged.
Date: 08 June 2015 Manish Negi
Place: GBPEC, Pauri Pratiksha Yadav
Shubham
Rishi Raj Singh Rawat
CERTIFICATE
This is to certify that the dissertation entitled “LATENT FINGERPRINT MATCHING
USING AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM” being submit-
ted by Manish Negi, Pratiksha Yadav, Shubham and Rishi Raj Singh Rawat in the
partial fulfilment of the requirements for the award of Bachelor of Technology degree
in Electronics and Communication Engineering to the Department of Electronics and
Communication Engineering, G. B. Pant Engineering College, Pauri Garhwal (Uttarak-
hand) is a bonafide work carried out by them under my guidance and supervision.
To the best of my knowledge, the matter embodied in the dissertation has not been
submitted for the award of any other degree or diploma.
Date: 08 June 2015 Mr. Manoj Kumar
Place: GBPEC, Pauri Assistant Professor
SUPERVISOR
PREFACE
Among all the biometric techniques, fingerprint-based identification is the oldest method
which has been successfully used in numerous applications. Everyone has unique, im-
mutable fingerprints. Identifying suspects based on impressions of fingers lifted from
crime scenes (latent prints) is a routine procedure that is extremely important to foren-
sics and law enforcement agencies. Latents are partial fingerprints that are usually
smudgy, cover only a small area, and contain large distortion. Due to these characteristics,
latents have a significantly smaller number of minutiae points compared to full (rolled
or plain) fingerprints.
A fingerprint is made of a series of ridges and furrows on the surface of the finger. The
uniqueness of a fingerprint can be determined by the pattern of ridges and furrows as
well as the minutiae points. Minutiae points are local ridge characteristics that occur
at either a ridge bifurcation or a ridge ending. Minutiae are very important features for
fingerprint representation, and most practical fingerprint recognition systems store only
the minutiae template in the database for further usage.
ACKNOWLEDGEMENT
We place on record and warmly acknowledge the continuous encouragement, invaluable
supervision, timely suggestions and inspired guidance offered by our guide Mr. Manoj
Kumar, Assistant Professor, Department of Electronics & Communication Engineering,
G. B. Pant Engineering College, Pauri Garhwal (Uttarakhand) in bringing this project
to a successful completion. We are also grateful to Dr. Y. Singh, Head and Professor,
Electronics & Communication Engineering Department and Dr. A. K. Gautam, As-
sociate Professor, Electronics & Communication Engineering Department, G. B. Pant
Engineering College, Pauri Garhwal (Uttarakhand) for helping us through the entire
duration of the project. Last but not least, we express our sincere thanks to all our
friends who have patiently extended all kinds of help for accomplishing this undertaking.
Our sincere thanks and acknowledgements are due to all our family members who have
constantly encouraged us for completing this project.
Manish Negi
Pratiksha Yadav
Shubham
Rishi Raj Singh Rawat
ABSTRACT
In this project, we propose a new fingerprint matching algorithm which is especially
designed for matching latents. The proposed algorithm uses a robust alignment algorithm
(based on the local minutiae descriptor MCC) to align fingerprints, and measures similarity
between fingerprints by considering both minutiae and orientation field information. The
conventional methods that utilize minutiae information treat them as a point set and
find the matched points from different minutiae sets. These minutiae are used for fin-
gerprint recognition, in which the fingerprint’s orientation field is reconstructed from
virtual minutiae and further utilized in the matching stage to enhance the system’s per-
formance. A decision fusion scheme is used to combine the reconstructed orientation
field matching with conventional minutiae based matching. Since orientation field is an
important global feature of fingerprints, the proposed method can obtain better results
than conventional methods. In our project, the system is implemented using a MATLAB GUI
in which virtual minutiae are considered.
CONTENTS
Declaration
Certificate
Preface
Acknowledgement
Abstract
1. INTRODUCTION
1.1 Introduction
1.2 Motivation
1.3 Thesis Organization
1.4 Fingerprint
1.5 Feature Extraction
1.5.1 Minutiae
1.5.2 Orientation field
1.6 Need For Automated Extraction System
1.7 Application
2. FINGERPRINT ENHANCEMENT TECHNIQUE
2.1 Introduction
2.2 Binarization
2.2.1 Thresholding
2.3 Thinning
3. FEATURE EXTRACTION
3.1 Introduction
3.2 Minutiae Extraction
3.3 Normalization
3.4 Orientation Field
3.5 Segmentation
4. DATABASE AND FINGERPRINT MATCHING
4.1 Introduction
4.2 Database FVC2002
4.3 Fingerprint Matching
4.3.1 Alignment
4.3.2 Similarity Measure
5. IMPLEMENTATION OF THE PROPOSED ALGORITHM
5.1 Introduction
5.2 Enhancement of the fingerprint image
5.3 Minutiae Extraction
5.3.1 Ridge Bifurcation
5.3.2 Minutiae Table
5.3.3 False Minutiae Removal
5.4 Orientation field
5.4.1 Segmentation and Region of interest
5.5 Minutiae Match
6. RESULT
6.1 Result and Discussions
7. CONCLUSION
8. APPENDIX
8.1 Matlab Code
REFERENCES
LIST OF FIGURES
1.1 Block Diagram of proposed algorithm.
1.2 Three types of fingerprint impressions. (a) Rolled; (b) plain; (c) latent.
1.3 Ridge Ending and Bifurcation
1.4 The orientation of a ridge pixel in a fingerprint.
2.1 Binarized output of a fingerprint
2.2 Thinned output of a fingerprint
3.1 (a) Mask for bifurcation (b) Mask for termination.
3.2 Examples of a ridge ending and bifurcation pixel. (a) A Crossing Number of one corresponds to a ridge ending pixel. (b) A Crossing Number of three corresponds to a bifurcation pixel.
3.3 Minutiae extracted image
3.4 (a) Orientation field with white background. (b) Orientation field with thinned image.
3.5 Ridges and valleys on a fingerprint image
3.6 A fingerprint image and its foreground and background regions
4.1 One fingerprint image from each database
4.2 Sample images from the database.
5.1 (a) Input image. (b) Binarized output
5.2 (a) Binarized image. (b) Thinned output
5.3 (a) Thinned image. (b) Extracted ridge ending and bifurcation.
5.4 (a) Thinned image. (b) Orientation field.
5.5 Marked region of interest
5.6 Similarity Comparison
6.1 Graphical User Interface (GUI) for creating database
6.2 Binarization
6.3 Thinning
6.4 Minutiae extraction
6.5 Orientation field
6.6 Marked region of interest
6.7 Matching with similar fingerprint
6.8 Matching with non-similar fingerprint
LIST OF TABLES
3.1 Property of crossing number
5.1 Extracted information from image in terms of ridge termination and bifurcation.
6.1 Results after comparing similarities between the input and other fingerprints in database FVC2002
6.2 Extracted information from image in terms of ridge termination, bifurcation and orientation field
CHAPTER 1
INTRODUCTION
1.1 Introduction
Fingerprint recognition has been used by law enforcement agencies to identify suspects
and victims for several decades [1]. Recent advances in automated fingerprint identifi-
cation technology, coupled with the pronounced need for reliable person identification,
have resulted in the increased use of fingerprints in both government and civilian appli-
cations such as border control, employment background check and secure facility access.
Fingerprints obtained at crime scenes are mostly latent images. Latent fingerprints
refer to the impressions unintentionally left on items handled or touched by
fingers. Such fingerprints are often not directly visible unless some physical or chemical
technique is applied to enhance them. Since the early 20th century latent fingerprints
have served as important evidence for law enforcement agencies to apprehend and con-
vict criminals [2].
Given a latent fingerprint (with manually marked minutiae) and a rolled fingerprint, we
extract additional features from both prints, align them in the same coordinate system,
and compute a match score between them. The proposed matching approach uses minu-
tiae and orientation field from both latent and rolled prints. To enable reliable feature
extraction, a latent fingerprint image, which is often of very poor quality, needs to go
through an image enhancement stage, which connects broken ridges, separates joined
ridges, and removes overlapping patterns. These steps are shown in Fig. 1.1.
Here we consider the problem of biometric verification in a more formal manner. In
a verification problem, the biometric signal from the user is compared against a single
enrolled template. This template is chosen based on the claimed identity of the user.
Each user i is represented by a biometric Bi. It is assumed that there is a one-to-one
correspondence between the biometric Bi and the identity i of the individual. The fea-
ture extraction phase results in a machine representation (template) Ti of the biometric.
During verification, the user claims an identity j and provides a biometric signal Bj.
The feature extractor then derives the corresponding machine representation Tj [3]. The
recognition consists of computing a similarity score S(Ti, Tj). The claimed identity is
accepted if S(Ti, Tj) > Th for some threshold Th. The choice of the
threshold also determines the trade-off between user convenience and system security as
will be seen in the ensuing section.
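As an illustration of this decision rule, the following MATLAB sketch compares two hypothetical template vectors with a simple similarity measure; the template values, the cosine-style similarity, and the threshold are all illustrative assumptions, not the system's actual representation:
% Minimal sketch of the verification decision rule.
% Ti, Tj and the similarity measure are illustrative, not the actual templates.
Ti = [0.12 0.85 0.33 0.47];                 % enrolled template for claimed identity i
Tj = [0.10 0.80 0.30 0.50];                 % template from the presented biometric
S  = dot(Ti, Tj) / (norm(Ti) * norm(Tj));   % one possible similarity S(Ti, Tj)
Th = 0.9;                                   % threshold trades convenience vs. security
if S > Th
    disp('Claimed identity accepted');
else
    disp('Claimed identity rejected');
end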
Figure 1.1: Block Diagram of proposed algorithm.
1.2 Motivation
The motivation behind this fingerprint image enhancement and minutiae extraction
process is to improve the quality of the fingerprint and to extract the minutiae points.
The extraction process should avoid false minutiae while preserving the true ridge
endings and ridge bifurcations. The minutiae extracted from a fingerprint depend heavily
on the quality of the input image. In order to extract true minutiae from the fingerprint,
we need to remove the noise from the input image, and for that we need an enhancement
algorithm.
1.3 Thesis Organization
The outline of the dissertation is described below:
In chapter 2 we have explained various image enhancement techniques applied to latent
fingerprints for better results during the matching process. In chapter 3 we have explained
the local minutiae descriptor, which is used to extract information from the fingerprint,
and also the orientation field detection. In chapter 4 we have explained the database used
in this project and how matching between two fingerprints is done. In chapter 5 we have
explained the algorithm implemented to obtain the desired results. Finally, chapter 6 is
dedicated to the results, which show the outputs of the various operations performed.
1.4 Fingerprint
We touch things every day: a coffee cup, a car door, a computer keyboard, etc. Each time
we touch, it is likely that we leave behind our unique signature in our fingerprints. No
two people have exactly the same fingerprints. Even identical twins, with identical DNA,
have different fingerprints. This uniqueness allows fingerprints to be used in all sorts of
ways, including background checks, biometric security, mass disaster identification, and
of course, in criminal situations. There are essentially three types of fingerprints in law
enforcement applications:
1. Rolled, which is obtained by rolling the finger nail-to-nail either on a paper (in this
case ink is first applied to the finger surface) or the platen of a scanner as shown
in Fig. 1.2(a).
2. Plain, which is obtained by placing the finger flat on a paper or the platen of a
scanner without rolling as shown in Fig. 1.2(b).
3. Latents, which are lifted from surfaces of objects that are inadvertently touched
or handled by a person typically at crime scenes [3] as shown in Fig. 1.2(c).
(a) (b) (c)
Figure 1.2: Three types of fingerprint impressions. (a) Rolled; (b) plain; (c) latent.
Rolled prints contain the largest amount of information about the ridge structure on
a fingerprint since they capture the largest finger surface area; latents usually contain
the least amount of information for matching or identification because of their size
and inherent noise. Compared to rolled or plain fingerprints, latents are smudgy and
blurred, capture only a small finger area, and have large nonlinear distortion due to
pressure variations.
1.5 Feature Extraction
In pattern recognition and in image processing, feature extraction is a special form of
dimensionality reduction. Transforming the input data into a set of features is called
feature extraction. If the extracted features are carefully chosen, it is expected that the
feature set will capture the relevant information from the input data, so that the desired
task can be performed using this reduced representation instead of the full-size input [4].
1.5.1 Minutiae
Minutiae refer to specific points on a fingerprint ridge pattern. These include characteristics
such as ridge bifurcations and ridge endings, as shown in Fig. 1.3.
(a)Ridge Ending- the abrupt end of a ridge
(b)Ridge Bifurcation- a single ridge that divides into two ridges
Figure 1.3: Ridge Ending and Bifurcation
1.5.2 Orientation field
Orientation field defines the local orientation of the ridges contained in the fingerprint,
as shown in Fig. 1.4. It is reconstructed from minutiae location and direction for the
latent, and it is used to improve fingerprint matching performance.
Figure 1.4: The orientation of a ridge pixel in a fingerprint.
1.6 Need For Automated Extraction System
1. Reducing the time spent by latent examiners in manual markup. A crime scene
can contain as many as hundreds of latents. However, only a small portion of them
can be processed simply because law enforcement agencies do not have sufficient
manpower. It can take twenty minutes or even longer to mark the minutiae in a
single latent. Automatic feature extraction can improve the efficiency of processing
latents, leading to more identifications in less time [5].
2. Improving the compatibility between minutiae in latents and full fingerprints. In
current practice, minutiae in latents are manually marked while minutiae in full
fingerprints are automatically extracted. This can cause a compatibility problem.
Although this compatibility issue is not a severe problem for full fingerprint match-
ing, this problem cannot be underestimated in the case of latent matching, since in
a tiny and smudgy latent, every minutia plays an important role. To address this
issue, AFIS vendors usually provide training courses to latent examiners on how
to better mark minutiae for their particular AFIS system, since different vendors'
systems are not very consistent in extracting minutiae. However, it takes time for
fingerprint examiners to get familiar with a system. This problem can be alleviated
provided features in latents are also extracted by automatic algorithms [6].
3. Improving repeatability/reproducibility of latent identification. The minutiae in
the same latent marked by different latent examiners or even by the same examiner
(but at different times) may not be the same. This is one of the reasons why different
latent examiners or even the same examiner (but at different times) make different
matching decisions on the same latent-exemplar pair [7].
1.7 Application
1. Identifying suspects based on impressions of fingers lifted from crime scenes (latent
prints) is a routine procedure that is extremely important to forensics and law
enforcement agencies.
2. Verifying the matching between driver fingerprint and the fingerprint features
stored on the license assures that the driver is indeed the person that the license
is issued for. This task can be done on-site where the fingerprint features obtained
from the driver by live scanning is compared with the features magnetically stored
on the driver license. Current "smart card" technology allows abundant memory
capacity to store the features on the card. A driver/license match means that the
license indeed belongs to the driver; this, however, does not guarantee that the driver
license is not falsified. To check the validity of the driver license, the police officer
has the option to make an additional inquiry against the database, in which case a
license validity check will result.
3. Since 2000, electronic fingerprint readers have been introduced for security ap-
plications such as log-in authentication for the identification of computer users.
However, some less sophisticated devices have been discovered to be vulnerable to
quite simple methods of deception, such as fake fingerprints cast in gels. In 2006,
fingerprint sensors gained popularity in the notebook PC market. Built-in sensors
in ThinkPads, VAIOs, HP Pavilion laptops, and others also double as motion de-
tectors for document scrolling, like a scroll wheel. Following the release of the
iPhone 5S model, a group of German hackers announced on September 21, 2013,
that they had bypassed Apple’s new Touch ID fingerprint sensor by photographing
a fingerprint from a glass surface and using that captured image as verification.
4. Electronic registration and library access: Fingerprints and, to a lesser extent, iris
scans can be used to validate electronic registration, cashless catering, and library
access. By 2007, this practice was particularly widespread in UK schools, and it
was also starting to be adopted in some states in the US.
CHAPTER 2
FINGERPRINT ENHANCEMENT
TECHNIQUE
2.1 Introduction
A critical step in an Automatic Fingerprint Matching System is to automatically and re-
liably extract minutiae from input fingerprint images. However, the performance of the
minutiae extraction algorithm relies heavily on the quality of the input fingerprint image.
In order to ensure the extraction of true minutiae points, it is essential to incorporate
the enhancement algorithm. Reliable and sound verification of fingerprints in any AFIS
is always preceded with a proper detection and extraction of its features. A fingerprint
image is first enhanced before the features contained in it are detected or extracted.
A well enhanced image will provide a clear separation between the valid and spurious
features. Spurious features are those minutiae points that are created due to noise or
artifacts and they are not actually part of the fingerprint.
2.2 Binarization
Most minutiae extraction algorithms operate on binary images where there are only two
levels of interest: the black pixels that represent ridges, and the white pixels that rep-
resent valleys. Binarization is the process that converts a grey-level image into a binary
image. The binarization process involves examining the grey-level value of each pixel
in the enhanced image as shown in Fig. 2.1 and if the value is greater than the global
threshold, then the pixel value is set to a binary value one; otherwise, it is set to zero.
The equation used to binarize the grey-scale image is given below [8]:

g(x, y) = 1,  if f(x, y) > T
g(x, y) = 0,  if f(x, y) ≤ T

where f(x, y) is the grey-level value of a pixel in the input image, g(x, y) is the corresponding
binarized pixel, and T is the global threshold.
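A minimal MATLAB sketch of this thresholding rule follows; the input file name and the threshold value are illustrative:
% Global-threshold binarization of a grey-level fingerprint image.
f = imread('fingerprint.tif');          % illustrative input path
if ndims(f) == 3, f = rgb2gray(f); end  % handle colour scans
T = 160;                                % illustrative global threshold
g = f > T;                              % logical image: g(x,y) = 1 where f(x,y) > T
imshow(g)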
Figure 2.1: Binarized output of a fingerprint
2.2.1 Thresholding
In this method, the grey-level value of each pixel in the filtered image is examined and,
if the value is greater than the threshold, the pixel value is set to a binary one; otherwise,
it is set to zero. A good threshold makes each cluster as tight as possible and eliminates
overlap between the clusters. The threshold is taken after a careful selection from a series
of within-class and between-class variance values ranging from 0 to 1, choosing the value
that optimally supports the maximum separation of the ridges from the valleys. The clear
separation of the ridges from the valleys verifies the correctness of the algorithm as
proposed in [9] and implemented in this project.
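The variance-based selection described above is closely related to Otsu's method, which MATLAB provides as graythresh; a short sketch under that assumption:
% Otsu-style threshold selection on a filtered grey-level image.
f = im2double(imread('fingerprint.tif'));  % illustrative input path
level = graythresh(f);                     % threshold in [0,1] maximizing class separation
bw = im2bw(f, level);                      % binarize ridges against valleys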
2.3 Thinning
The final image enhancement step typically performed prior to minutiae extraction is
thinning. Thinning is a morphological operation that successively erodes away the fore-
ground pixels until they are one pixel wide. A standard thinning algorithm is employed,
which performs the thinning operation using two subiterations. This algorithm is accessi-
ble in MATLAB via the ‘thin’ operation under the bwmorph function. Each subiteration
begins by examining the neighbourhood of each pixel in the binary image, and based on
a particular set of pixel-deletion criteria, it checks whether the pixel can be deleted or
not. These subiterations continue until no more pixels can be deleted. The application
of the thinning algorithm to a fingerprint image preserves the connectivity of the ridge
structures while forming a skeletonized version of the binary image as shown in Fig. 2.2.
Figure 2.2: Thinned output of a fingerprint
This skeleton image is then used in the subsequent extraction of minutiae. The process
involving the extraction of minutiae from a skeleton image will be discussed in the next
chapter.
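A minimal sketch of the thinning step with MATLAB's bwmorph follows; the binary image bw is assumed to have ridges as foreground:
% Morphological thinning until the ridges are one pixel wide.
skel = bwmorph(bw, 'thin', Inf);   % Inf repeats the operation until nothing changes
imshow(skel)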
CHAPTER 3
FEATURE EXTRACTION
3.1 Introduction
After a fingerprint image has been enhanced, the next step is to extract the minutiae
from the enhanced image. Following the extraction of minutiae, a final image post
processing stage is performed to eliminate false minutiae. This chapter provides discus-
sion on the methodology and implementation of techniques for minutiae extraction and
orientation field. The proposed matching approach uses minutiae and orientation field
from both latent and rolled prints. Minutiae are manually marked by latent examin-
ers in the latent, and automatically extracted using commercial matchers in the rolled
print. Based on minutiae, local minutiae descriptors are built and used in the proposed
descriptor-based alignment and scoring algorithms. Orientation field is reconstructed
from minutiae location and direction for the latents as proposed in [10],and orientation
field is automatically extracted from the rolled print images by using a gradient-based
method.
3.2 Minutiae Extraction
The most commonly employed method of minutiae extraction is the Crossing Number
(CN) concept [11] [12]. This method involves the use of the skeleton image where the
ridge flow pattern is eight-connected. The minutiae are extracted by scanning the local
neighbourhood of each ridge pixel in the image using a 3×3 window as shown in Fig.
3.1. The CN value is then computed, which is defined as half the sum of the differences
between pairs of adjacent pixels in the eight-neighbourhood. Using the properties of the
CN as shown in Table 3.1, the ridge pixel can then be classified as a ridge ending, bifur-
cation or non-minutiae point. For example, a ridge pixel with a CN of one corresponds
to a ridge ending, and a CN of three corresponds to a bifurcation.
(a) (b)
Figure 3.1: (a) Mask for bifurcation (b) Mask for termination.
Table 3.1: Properties of the crossing number

CN   Property
0    Isolated point
1    Ridge ending point
2    Continuing ridge point
3    Bifurcation point
4    Crossing point
This approach involves using a 3×3 window to examine the local neighbourhood of
each ridge pixel in the image. A pixel is then classified as a ridge ending if it has only
one neighbouring ridge pixel in the window, and classified as a bifurcation if it has three
neighbouring ridge pixels. Consequently, it can be seen that this approach is very similar
to the Crossing Number method. The CN for a ridge pixel P is given by eq. 3.1 [13].
CN = (1/2) Σ_{i=1}^{8} |P_i − P_{i+1}|,   P_9 = P_1        (3.1)
where Pi is the pixel value in the neighbourhood of P. For a pixel P, its eight neighbouring
pixels are scanned in an anti-clockwise direction.
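A sketch of the CN computation for a single ridge pixel is given below; the skeleton image skel and the in-bounds pixel coordinates (r, c) are assumed inputs, with the eight neighbours enumerated anti-clockwise:
% Crossing Number of the ridge pixel at (r, c) in skeleton image skel.
dr = [ 0 -1 -1 -1  0  1  1  1];            % row offsets of P1..P8, anti-clockwise
dc = [ 1  1  0 -1 -1 -1  0  1];            % column offsets of P1..P8
P  = zeros(1, 8);
for k = 1:8
    P(k) = skel(r + dr(k), c + dc(k));     % neighbour pixel values (0 or 1)
end
CN = 0.5 * sum(abs(P - P([2:8 1])));       % eq. 3.1, with P9 = P1
% CN == 1 -> ridge ending;  CN == 3 -> bifurcation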
After the CN for a ridge pixel has been computed, the pixel can then be classified
according to the property of its CN value: as shown in Fig. 3.2, a ridge pixel with a
CN of one corresponds to a ridge ending, and a CN of three corresponds to a bifurcation.
For each extracted minutia point, the following information is
recorded:
1. x and y coordinates,
2. orientation of the associated ridge segment, and
3. type of minutiae (ridge ending or bifurcation).
(a)CN=1 (b)CN=3
Figure 3.2: Examples of a ridge ending and bifurcation pixel. (a) A Crossing Number
of one corresponds to a ridge ending pixel. (b) A Crossing Number of
three corresponds to a bifurcation pixel.
We propose the use of a local minutiae descriptor known as the Minutia Cylinder-Code
(MCC) to improve robustness against distortion. Local
descriptors have been widely used in fingerprint matching (e.g. [14, 15]). Feng and
Zhou [16] evaluated the performance of local descriptors associated with fingerprint
matching in four categories of fingerprints: good quality, poor quality, small common
region, and large plastic distortion. They also coarsely classified the local descriptors as
image-based, texture-based, and minutiae-based descriptors. A minutia cylinder records
the neighborhood information of a minutia as a 3-D function; the minutiae-extracted
image is shown in Fig. 3.3. The cylinder contains several layers, and each layer represents
the density of neighboring minutiae along the corresponding direction. The cylinder can
be concatenated into a vector, and therefore the similarity between two minutiae cylinders
can be computed efficiently.
Figure 3.3: Minutiae extracted image
3.3 Normalization
The next step in the fingerprint enhancement process is image normalization. Normal-
ization is used to standardize the intensity values in an image by adjusting the range of
grey-level values so that they lie within a desired range. Let I(i, j) represent the
grey-level value at pixel (i, j) and N(i, j) represent the normalized grey-level value at
pixel (i, j). The normalized image is defined by eq. 3.2:

N(i, j) = M0 + sqrt( V0 (I(i, j) − M)² / V ),   if I(i, j) > M
N(i, j) = M0 − sqrt( V0 (I(i, j) − M)² / V ),   otherwise        (3.2)
Where M and V are the estimated mean and variance of I(i, j), respectively, and M0
and V0 are the desired mean and variance values, respectively. Normalization does not
change the ridge structures in a fingerprint; it is performed to standardize the dynamic
levels of variation in grey-level values, which facilitates the processing of the subsequent
image enhancement stages.
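A vectorized MATLAB sketch of eq. 3.2 follows; the input path and the desired mean M0 and variance V0 are illustrative values:
% Normalization of image I to a desired mean M0 and variance V0 (eq. 3.2).
I  = im2double(imread('fingerprint.tif'));  % illustrative input path
M  = mean(I(:));  V = var(I(:));            % estimated mean and variance
M0 = 0.5;  V0 = 0.05;                       % illustrative target statistics
D  = sqrt(V0 * (I - M).^2 / V);             % deviation term of eq. 3.2
N  = M0 + sign(I - M) .* D;                 % +D where I > M, -D otherwise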
3.4 Orientation Field
Orientation field can be used in several ways to improve fingerprint matching perfor-
mance, such as by matching orientation fields directly and fusing scores with other
matching scores, or by enhancing the images to extract more reliable features. Orienta-
tion field estimation using gradient-based method is very reliable [13] in good quality im-
ages. However, when the image contains noise, this estimation becomes very challenging.
A few model-based orientation field estimation methods have been proposed [17, 18] that
use singular points as input to the model. In the latent fingerprint matching case, it is
very challenging to estimate the orientation field based only on the image due to the
poor quality and small area of the latent. Moreover, if singular points are to be used,
they need to be manually marked (and they are not always present) in the latent finger-
print image. Hence, we use a minutiae-based orientation field reconstruction algorithm
proposed, which takes manually marked minutiae in latents as input and outputs an ori-
entation field as shown in Fig. 3.4. This approach estimates the local ridge orientation
in a block by averaging the direction of neighboring minutiae. The orientation field is
reconstructed only inside the convex hull of minutiae. Since the directions of manually
marked minutiae are very reliable, the orientation field reconstructed using this approach
is quite accurate except in areas absent of minutiae or very close to singular points. For
rolled fingerprints, the orientation field is automatically extracted using a gradient-based
method.
The steps for calculating the orientation at pixel (i, j) are as follows:
(a) (b)
Figure 3.4: (a) Orientation field with white background. (b) Orientation field with
thinned image.
1. Firstly, a block of size W x W is centered at pixel (i,j) in the normalized fingerprint
image.
2. For each pixel in the block, compute the gradients ∂x(i, j) and ∂y(i, j), which are the
gradient components in the x and y directions, respectively. The horizontal Sobel
operator is used to compute ∂x(i, j):

[ 1 0 −1 ]
[ 2 0 −2 ]
[ 1 0 −1 ]

The vertical Sobel operator is used to compute ∂y(i, j):

[  1  2  1 ]
[  0  0  0 ]
[ −1 −2 −1 ]
3. The local orientation at pixel (i, j) can then be estimated using eqs. 3.3, 3.4 and 3.5:

V_x(i, j) = Σ_{u=i−W/2}^{i+W/2} Σ_{v=j−W/2}^{j+W/2} 2 ∂x(u, v) ∂y(u, v)        (3.3)

V_y(i, j) = Σ_{u=i−W/2}^{i+W/2} Σ_{v=j−W/2}^{j+W/2} ( ∂x²(u, v) − ∂y²(u, v) )        (3.4)

θ(i, j) = (1/2) tan⁻¹( V_x(i, j) / V_y(i, j) )        (3.5)

where θ(i, j) is the least-squares estimate of the local ridge orientation of the block
centered at pixel (i, j).
4. Smooth the orientation field in a local neighborhood using a Gaussian filter. The
orientation image is first converted into a continuous vector field, defined by eqs. 3.6
and 3.7:

Φ_x(i, j) = cos(2θ(i, j))        (3.6)

Φ_y(i, j) = sin(2θ(i, j))        (3.7)

where Φ_x and Φ_y are the x and y components of the vector field, respectively. After
the vector field has been computed, Gaussian smoothing is performed as given by
eqs. 3.8 and 3.9:
Φ′_x(i, j) = Σ_{u=−w/2}^{w/2} Σ_{v=−w/2}^{w/2} G(u, v) Φ_x(i − u·w, j − v·w)        (3.8)

Φ′_y(i, j) = Σ_{u=−w/2}^{w/2} Σ_{v=−w/2}^{w/2} G(u, v) Φ_y(i − u·w, j − v·w)        (3.9)

where G is a Gaussian low-pass filter of size w × w.

5. The final smoothed orientation field O at pixel (i, j) is defined by eq. 3.10:

O(i, j) = (1/2) tan⁻¹( Φ′_y(i, j) / Φ′_x(i, j) )        (3.10)
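A compact MATLAB sketch of steps 1–5 follows; the block size, smoothing kernel, and input path are illustrative choices:
% Gradient-based orientation field estimation (eqs. 3.3-3.10).
I  = im2double(imread('fingerprint.tif'));           % assumed normalized image
W  = 16;                                             % illustrative block size
Gy = imfilter(I, fspecial('sobel'),  'replicate');   % vertical Sobel -> y-gradient
Gx = imfilter(I, fspecial('sobel')', 'replicate');   % transposed -> x-gradient
Vx = conv2(2 .* Gx .* Gy,  ones(W), 'same');         % eq. 3.3, summed over each block
Vy = conv2(Gx.^2 - Gy.^2,  ones(W), 'same');         % eq. 3.4
theta = 0.5 * atan2(Vx, Vy);                         % eq. 3.5
% Smooth in the doubled-angle domain (eqs. 3.6-3.10).
g   = fspecial('gaussian', 5, 1);                    % illustrative Gaussian filter
Phx = imfilter(cos(2 * theta), g, 'replicate');      % eq. 3.6 + smoothing
Phy = imfilter(sin(2 * theta), g, 'replicate');      % eq. 3.7 + smoothing
O   = 0.5 * atan2(Phy, Phx);                         % eq. 3.10, smoothed orientation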
3.5 Segmentation
There are two regions that describe any fingerprint image; namely the foreground region
and the background region. The foreground regions are the regions containing the ridges
and valleys. As shown in Fig. 3.5, the ridges are the raised and dark regions of a
fingerprint image while the valleys are the low and white regions between the ridges.
The foreground region, often referred to as the Region of Interest (RoI), is shown in Fig.
3.6. The background regions are mostly the outside regions where the noise introduced
into the image during enrolment is mostly found. The essence of segmentation is to
reduce the burden associated with image enhancement by ensuring that focus is only on
the foreground regions while the background regions are ignored.
Figure 3.5: Ridges and valleys on a fingerprint image
The background regions possess very low grey-level variance values while the fore-
ground regions possess very high grey-level variance values. A block processing approach
used in [19] [20] is adopted in this research for obtaining the grey-level variance values.
The approach first divides the image into blocks of size W × W and then obtains the
variance V(k) of block k from eqs. 3.11 and 3.12:

V(k) = (1/W²) Σ_{i=1}^{W} Σ_{j=1}^{W} ( I(i, j) − M(k) )²        (3.11)

M(k) = (1/W²) Σ_{a=1}^{W} Σ_{b=1}^{W} J(a, b)        (3.12)

where I(i, j) and J(a, b) are the grey-level values of pixels (i, j) and (a, b), respectively,
in block k.
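A sketch of this block-variance segmentation using blockproc follows; the block size and the variance threshold are illustrative:
% Block-wise variance segmentation: foreground = high grey-level variance.
I = im2double(imread('fingerprint.tif'));            % illustrative input path
W = 16;                                              % block size
varFun = @(b) var(b.data(:)) * ones(size(b.data));   % eq. 3.11 per block
V = blockproc(I, [W W], varFun);                     % per-pixel block variance map
mask = V > 0.01;                                     % illustrative variance threshold
roi  = I .* mask;                                    % keep foreground, zero background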
Figure 3.6: A fingerprint image and its foreground and background regions
CHAPTER 4
DATABASE AND FINGERPRINT
MATCHING
4.1 Introduction
In this chapter we report the orientation field estimation performance and the resulting
matching performance on the FVC2002 fingerprint database and an overlapped
fingerprint input. Finally, we discuss the impact of reference fingerprints on orientation
field estimation.
4.2 Database FVC2002
FVC2002 is the Second International Competition for Fingerprint Verification Algo-
rithms. The evaluation was held in April 2002 and the results of the 31 participants
were presented at 16th ICPR (International Conference on Pattern Recognition). This
initiative is organized by D. Maio, D. Maltoni, R. Cappelli from Biometric Systems Lab
(University of Bologna), J. L. Wayman from the U.S. National Biometric Test Center
(San Jose State University) and A. K. Jain from the Pattern Recognition and Image Pro-
cessing Laboratory of Michigan State University. A sample image from each database
FVC2002 is shown in Fig. 4.1.
The size of database FVC2002 is established as 110 fingers, 8 impressions per finger
(880 impressions) (Fig. 4.2). Collecting some additional data provides a margin in
case of collection errors, and also allowed us to systematically choose from the collected
Figure 4.1: One fingerprint image from each database
impressions to include in the test databases. An automatic all-against-all comparison
was first performed by using an internally-developed fingerprint matching algorithm, to
discover possible data-collection errors. False match and false non-match errors were
manually analyzed: two labeling errors were discovered and removed. Fingerprints in
each database were then sorted by quality according to a quality index [21]. The top-
ten quality fingers were removed from each database since they do not constitute an
interesting case study. The remaining 110 fingers were split into set A (100 fingers -
evaluation set) and set B (10 fingers - training set). To make set B representative of
the whole database, the 110 collected fingers were ordered by quality, then the 8 images
from every tenth finger were included in set B. The remaining fingers constituted set A.
After set B was made available to the participants, some of them informed
us of the presence of fingerprint pairs whose relative rotation exceeded the maximum
specification of about 35 degrees. We were not much surprised by this: although
the persons in charge of data collection were informed of the constraint, the require-
ment of exaggerating rotation while remaining within a maximum of about 35 degrees
between any two samples is not simple to implement in practice, especially when the
volunteers are untrained users. A further semiautomatic analysis was then necessary
to ensure that, in the evaluation set A, the samples were compliant with the initial
specifications: maximum rotation and non-null overlap between any two impressions of
the same finger. Software was developed to support us in this daunting task. All of
the 12 originally collected impressions of the same fingers were displayed at the same
time and the authors selected a subset of 8 impressions by point and click. Once the
selection was made, the software automatically compared the selected impressions and
a warning was issued in case the rotation or displacement between any two pairs exceeded
the maximum allowed. Fortunately, the 12 samples at our disposal always allowed us to
find a subset of 8 impressions compliant with the specification.
Figure 4.2: Sample images from the database.
4.3 Fingerprint Matching
In order to estimate the alignment error, we use ground truth mated minutiae pairs from
FVC2002, which are marked by fingerprint examiners, to compute the average distance
between the true mated pairs after alignment. If the average Euclidean distance for a
given latent is less than a prespecified number of pixels in at least one of the ten best
alignments, then we consider it a correct alignment. This alignment is done to remove
false minutiae detected in the latent sample.
4.3.1 Alignment
In the latent matching case, singularities are not always present in latents, making it
difficult to base the alignment of the fingerprint on singular points alone. To obtain
manually marked orientation field is expensive, and to automatically extract orientation
field from a latent image is a very challenging problem. Since manually marking minutiae
is a common practice for latent matching, our approach to align two fingerprints is based
on minutiae. Local descriptors can also be used to align two fingerprints. In this case,
usually the most similar minutiae pair is used as a base for the transformation parameters
(rotation and translation), and the most similar pair is chosen based on a measure of
similarity between the local descriptors of the minutiae pair. Given two sets of points
(minutiae), a matching score is computed for each transformation in the discretized
set of all allowed transformations. For each pair of minutiae, one minutia from each
image (latent or full), and for given scale and rotation parameters, unique translation
parameters can be computed. Each parameter receives a vote that is proportional to the
matching score for the corresponding transformation. In our approach, the alignment is
conducted in a similar way, but the evidence for each parameter is accumulated based
on the similarity between the local descriptors of the two involved
minutiae. The assumption here is that true mated minutiae pairs will
vote for very similar sets of alignment parameters, while non-mated minutiae pairs will
vote randomly throughout the parameter space. As a result, the set of parameters
that presents the highest evidence is considered the best one. For robustness, ten sets
of alignment parameters with strong evidence are considered. In order to make the
alignment computationally efficient and also more accurate, we use the minutiae pairs
that vote for a peak to compute a rigid transformation between the two fingerprints.
The use of voting minutiae pairs to compute the transformation gives more accurate
alignment parameters than directly using the peak parameters.
4.3.2 Similarity Measure
For each of the 10 different alignments, a matching score between two fingerprints is
computed by comparing minutiae and orientation fields. The maximum value of the 10
scores is chosen as the final matching score between the two fingerprints. To compute
minutiae matching score under a given alignment, we first find the corresponding minu-
tiae pairs (one in the latent, one in the rolled print). For this purpose, we align the
minutiae sets of the two fingerprints and then find a one-to-one matching between the
two minutiae sets using a greedy algorithm; the resulting score is given by eq. 4.1. For each minutia
ml in the latent, a set of candidate minutiae in the rolled print is found. A minutia mr
in the rolled print is called a candidate if it has not been yet matched to any minutia,
and both its location and angle are sufficiently close to ml. The threshold values Ts
for spatial distance and TA for angle distance were determined empirically. Among all
candidates, the one closest to ml in location is chosen as the matching minutia of ml.
SM = (1/N) Σ_{i=1}^{N} sc(i) · ss(i)        (4.1)

where sc(i) denotes the similarity between the minutiae cylinder codes of the ith pair of
matched minutiae, ss(i) = 1 − ds(i)/(2·Ts) maps the spatial distance ds(i) of the ith pair
into a similarity score, and N denotes the number of minutiae in the latent. According to
eq. 4.1, the matching score depends on the number of matching minutiae, which itself is
affected by the distance threshold Ts. However, due to the large distortion present in many
latents, it is difficult to choose an appropriate value for Ts.
While a large threshold value will lead to more matching minutiae for distorted mated
pairs, the number of matching minutiae for non-mated pairs will increase too. Hence,
we use two different values (15 pixels and 25 pixels) and for each threshold, a set of
matching minutiae is found and a matching score is computed using the above equation.
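A simplified MATLAB sketch of this greedy pairing and the score of eq. 4.1 follows; the aligned minutiae arrays, the precomputed MCC similarity matrix sc, and the thresholds are assumed inputs:
% Greedy one-to-one minutiae pairing and matching score (eq. 4.1).
% latent, rolled: N-by-3 and M-by-3 arrays of [x y angle], already aligned.
% sc: N-by-M matrix of MCC similarities (assumed precomputed).
Ts = 15;  Ta = pi/6;                        % illustrative spatial / angular thresholds
N  = size(latent, 1);
used = false(size(rolled, 1), 1);           % rolled minutiae already matched
SM = 0;
for i = 1:N
    d  = hypot(rolled(:,1) - latent(i,1), rolled(:,2) - latent(i,2));
    da = abs(angle(exp(1j * (rolled(:,3) - latent(i,3)))));   % wrapped angle difference
    cand = ~used & d < Ts & da < Ta;        % unmatched, close in location and angle
    if any(cand)
        d(~cand) = Inf;
        [dmin, j] = min(d);                 % closest candidate wins
        used(j) = true;
        SM = SM + sc(i, j) * (1 - dmin / (2 * Ts));   % sc(i) * ss(i) of eq. 4.1
    end
end
SM = SM / N;                                % final minutiae matching score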
CHAPTER 5
IMPLEMENTATION OF THE
PROPOSED ALGORITHM
5.1 Introduction
Using MATLAB Version 7.11.0 (R2010b), both the proposed enrolment and verification
phases are implemented as described in the following subsections.
5.2 Enhancement of the fingerprint image
(a) (b)
Figure 5.1: (a)Input image.(b) Binarized output
The first step is to enhance the fingerprint image by setting the contrast level using
the imadjust() function. Binarization of the image is then done with a threshold value of
160; the binarization result on a monochrome image is shown in Fig. 5.1.
Thinning is then performed using the bwmorph() function, as shown in Fig. 5.2. This
morphological operator works on binary images and applies the chosen operation n times,
where n can be Inf, in which case the operation is repeated until the image no longer
changes.
Syntax:
BW2 = bwmorph(BW, operation, n)
When used with the thin option, bwmorph() uses the following algorithm:
1. Divide the image into two distinct subfields in a checkerboard pattern.
2. In the first subiteration, delete pixel p from the first subfield.
3. In the second subiteration, delete pixel p from second subfield.
(a) (b)
Figure 5.2: (a)Binarized image.(b) Thinned output
The two subiterations together make up one iteration of the thinning algorithm.
When the user specifies an infinite number of iterations (n=Inf), the iterations are
repeated until the image stops changing. The conditions are all tested using applylut
with precomputed lookup tables.
5.3 Minutiae Extraction
Ridge Ending
Ridge endings are found by using the nlfilter() function, as shown in Fig. 5.3. It performs
general sliding-neighborhood operations.
Syntax:
B = nlfilter(A, [m n], fun)
B = nlfilter(A, ’indexed’,...)
B = nlfilter(A, [m n], fun) applies the function fun to each m-by-n sliding block of the
grayscale image A. fun is a function that accepts an m-by-n matrix as input and returns
a scalar result.
c = fun(x), where fun must be a function handle.
Parameterizing Functions, in the MATLAB Mathematics documentation, explains how
to provide additional parameters to the function fun. c is the output value for the center
pixel in the m-by-n block x. nlfilter calls fun for each pixel in A. nlfilter zero-pads the
m-by-n block at the edges, if necessary.
B = nlfilter(A, ’indexed’,...) processes A as an indexed image, padding with 1’s if A is
of class single or double and 0’s if A is of class logical, uint8, or uint16.
5.3.1 Ridge Bifurcation
Ridge bifurcations are found by using the bwlabel() function, as shown in Fig. 5.3. It
labels connected components in a 2-D binary image.
Syntax:
L = bwlabel(BW, n)
[L, num] = bwlabel(BW, n)
L = bwlabel(BW, n) returns a matrix L, of the same size as BW, containing labels for
the connected objects in BW. The variable n can have a value of either 4 or 8, where
4 specifies 4-connected objects and 8 specifies 8-connected objects. If the argument is
omitted, it defaults to 8.
The elements of L are integer values greater than or equal to 0. The pixels labeled 0 are
the background. The pixels labeled 1 make up one object; the pixels labeled 2 make up
a second object; and so on.
[L, num] = bwlabel(BW, n) returns in num the number of connected objects found in
BW.
(a) (b)
Figure 5.3: (a)Thinned image. (b) Extracted ridge ending and bifurcation.
5.3.2 Minutiae Table
For constructing the minutiae table 5.1, we used the round-towards-infinity function ceil().
It rounds the elements of A to the nearest integers greater than or equal to A. For complex
A, the imaginary and real parts are rounded independently.
Syntax:
B = ceil(A)
Table 5.1: Extracted information from image in terms of ridge termination and bifurcation.
Ridge termination Ridge bifurcation
144 60
150 66
172 97
127 109
146 120
212 127
191 131
168 136
153 145
132 152
115 157
211 157
215 162
133 192
114 197
180 202
139 208
211 214
192 215
145 218
167 225
163 232
168 239
190 241
215 243
194 246
158 249
5.3.3 False Minutiae Removal
The preprocessing stage does not totally heal the fingerprint image. For example, false
ridge breaks due to insufficient amount of ink and ridge cross-connections due to over
inking are not totally eliminated. Actually all the earlier stages themselves occasionally
introduce some artifacts which later lead to spurious minutiae. These false minutiae
will significantly affect the accuracy of matching if they are simply regarded as genuine
minutiae.
Our procedures for removing false minutiae are listed below; a MATLAB sketch follows the list.
1. If the distance between one bifurcation and one termination is less than D and
the two minutiae are in the same ridge, remove both of them, where D is the
average inter-ridge width, i.e., the average distance between two parallel
neighbouring ridges.
2. If the distance between two bifurcations is less than D and they are in the same
ridge, remove the two bifurcations.
3. If two terminations are within a distance D, their directions are coincident with
a small angle variation, and they satisfy the condition that no other termination is
located between the two terminations, then the two terminations are regarded as
false minutiae derived from a broken ridge and are removed.
4. If two terminations are located in a short ridge with length less than D, remove
the two terminations.
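A sketch of rules 1 and 2 using pairwise distances is given below; pdist2 from the Statistics Toolbox stands in for the DistEuclidian helper used in the appendix, and the same-ridge check is omitted for brevity:
% Sketch of rules 1 and 2: drop termination/bifurcation pairs closer than D.
% term, bif: K-by-2 arrays of [x y] minutiae coordinates.
D = 10;                                            % illustrative inter-ridge width
dist = pdist2(bif, term);                          % bifurcation-termination distances
[bi, ti] = find(dist < D);                         % spurious pairs (rule 1)
bif(unique(bi), :)  = [];
term(unique(ti), :) = [];
dd = pdist2(bif, bif) + diag(Inf(size(bif,1),1));  % ignore self-distances
bif(any(dd < D, 2), :) = [];                       % close bifurcation pairs (rule 2)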
5.4 Orientation field
1. fspecial( ) function creates predefined 2-D filter.
Syntax:
h = fspecial(type)
h = fspecial(type, parameters)
h = fspecial(type) creates a two-dimensional filter h of the specified type. fspecial
returns h as a correlation kernel, which is the appropriate form to use with imfilter.
type is a string specifying the filter type.
For example, h = fspecial('gaussian', hsize, sigma) returns a rotationally symmetric
Gaussian lowpass filter of size hsize with standard deviation sigma (positive). hsize
can be a vector specifying the number of rows and columns in h, or a scalar, in which
case h is a square matrix.
2. filter2() function works as a 2-D digital filter.
Syntax:
Y = filter2(h,X)
Y = filter2(h,X,shape)
Y = filter2(h,X) filters the data in X with the two-dimensional FIR filter in the
matrix h. It computes the result, Y, using two-dimensional correlation, and returns
the central part of the correlation that is the same size as X.
(a) (b)
Figure 5.4: (a)Thinned image.(b) Orientation field.
3. The quiver() function produces a quiver, or velocity, plot as shown in Fig. 5.4.
Syntax:
h = quiver(x, y, u, v)
A quiver plot displays velocity vectors as arrows with components (u,v) at the
points (x,y).
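A short sketch tying these functions together to display a smoothed field; the gradient images gradx and grady and the 16-pixel arrow spacing are assumptions:
% Smooth assumed gradient images with a Gaussian and plot the field with quiver.
g  = fspecial('gaussian', 9, 2);                   % 9x9 Gaussian kernel, sigma = 2
gx = filter2(g, gradx);                            % gradx, grady: assumed gradients
gy = filter2(g, grady);
[X, Y] = meshgrid(1:16:size(gx,2), 1:16:size(gx,1));   % one arrow per 16-px block
quiver(X, Y, gx(1:16:end, 1:16:end), gy(1:16:end, 1:16:end))
axis ij equal tight                                % image-style axes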
5.4.1 Segmentation and Region of interest
Another function, regionprops(), measures properties of image regions; it is used to find
the region of interest, as shown in Fig. 5.5.
Syntax:
STATS = regionprops(L, properties)
STATS = regionprops(L, properties) measures a set of properties for each labeled region
in the label matrix L. Positive integer elements of L correspond to different regions. For
example, the set of elements of L equal to 1 corresponds to region 1; the set of elements
of L equal to 2 corresponds to region 2; and so on.
STATS is a structure array with length equal to the number of objects in BW, CC.NumObjects,
or max(L(:)). The fields of the structure array denote different properties for each re-
gion, as specified by properties.
Figure 5.5: Marked region of interest
5.5 Minutiae Match
Given two sets of minutiae from two fingerprint images, the minutiae match algorithm
determines whether the two minutiae sets are from the same finger or not. Fig. 5.6
shows the similarity measure between two fingerprints.
Figure 5.6: Similarity Comparison
An alignment-based match algorithm includes two consecutive stages: the alignment
stage and the match stage.
1. Alignment stage: Given two fingerprint images to be matched, choose one minutia
from each image and calculate the similarity of the two ridges associated with the
two reference minutiae. If the similarity is larger than a threshold, transform each
set of minutiae to a new coordinate system whose origin is at the reference point
and whose x-axis is coincident with the direction of the reference point; a sketch of
this transformation appears after the list.
2. Match stage: After we obtain the two sets of transformed minutiae points, we use
an elastic match algorithm to count the matched minutiae pairs, treating two
minutiae with nearly the same position and direction as identical.
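The coordinate transformation of the alignment stage can be sketched as follows; the reference minutia (xr, yr, thetar) and the K-by-3 minutiae array are assumed inputs:
% Translate and rotate a minutiae set into the reference minutia's frame.
% minu: K-by-3 array of [x y theta]; (xr, yr, thetar): chosen reference minutia.
R  = [cos(thetar) sin(thetar); -sin(thetar) cos(thetar)];   % rotation by -thetar
xy = (minu(:, 1:2) - repmat([xr yr], size(minu, 1), 1)) * R';
th = mod(minu(:, 3) - thetar, 2*pi);               % directions relative to reference
aligned = [xy th];                                 % minutiae in the new coordinates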
CHAPTER 6
RESULT
6.1 Result and Discussions
This chapter consists of the results generated by the GUI software which is shown in
Fig. 6.1, designed by using MATLAB.
First we created a database of 10 fingerprints. The steps involved in it are as follows:
1. Input the image to the software along with the details of the person:
Figure 6.1: Graphical User Interface(GUI) for creating database
After saving the personal information, we have to extract the features of the fin-
gerprint.
2. Applying Binarization technique:
Figure 6.2: Binarization
3. Applying Thinning process:
Figure 6.3: Thinning
4. Marking minutiae Points:
Figure 6.4: Minutiae extraction
5. Calculating and marking orientation field:
Figure 6.5: Orientation field
37
6. Marking the region of interest:
Figure 6.6: Marked region of interest
The above figures show the output images of the different operations performed.
The numerical values obtained in the background are used to match the
fingerprint.
7. After creating the database, we match fingerprints against it. The software takes
a latent image as input, matches the minutiae points and orientation field against
the database, and generates a matching score. The following results were obtained:
Table 6.1: Results after comparing similarities between the input and other fingerprints
in database FVC2002

FVC Database   Input   Match Score
101_1          101_1   1.000
101_2          101_1   0.770
102_1          101_1   0.197
102_2          101_1   0.245
103_1          101_1   0.180
103_2          101_1   0.217
104_1          101_1   0.247
104_2          101_1   0.223
Table 6.2: Extracted information from image in terms of ridge termination, bifurcation
and orientation field
Ridge termination Ridge bifurcation Orientation field
144 60 135, 67
150 66 133, 101
172 97 198, 101
127 109 192, 107
146 120 136, 122
212 127 220, 172
191 131 0, 0
168 136 0, 0
153 145 0, 0
132 152 0, 0
115 157 0, 0
211 157 0, 0
215 162 0, 0
133 192 0, 0
114 197 0, 0
180 202 0, 0
139 208 0, 0
211 214 0, 0
192 215 0, 0
145 218 0, 0
167 225 0, 0
163 232 0, 0
168 239 0, 0
190 241 0, 0
215 243 0, 0
194 246 0, 0
158 249 0, 0
Figure 6.7: Matching with similar fingerprint
Figure 6.8: Matching with non-similar fingerprint
CHAPTER 7
CONCLUSION
The primary focus of the work in this project is on the enhancement of fingerprint im-
ages, and the subsequent extraction of minutiae. Firstly, we have implemented a series
of techniques for fingerprint image enhancement to facilitate the extraction of minutiae.
Experiments were then conducted using a combination of both synthetic test images
and real fingerprint images in order to provide a well-balanced evaluation on the perfor-
mance of the implemented algorithm. The use of synthetic images has provided a more
quantitative and accurate measure of the performance, whereas real images rely on
qualitative measures of inspection but can provide a more realistic evaluation, as they
give a natural representation of fingerprint imperfections such as noise and corrupted
elements. The experimental results have shown that, combined with an accurate esti-
mation of the orientation and ridge frequency, our Automated Fingerprint Identification
System is able to effectively enhance the clarity of the ridge structures while reducing
noise. In contrast, for low quality images that exhibit high intensities of noise, the filter
is less effective in enhancing the image due to inaccurate estimation of the orientation
and ridge frequency parameters. However, in practice, this does not pose a significant
limitation as fingerprint matching techniques generally place more emphasis on the well-
defined regions, and will disregard an image if it is severely corrupted. Overall, the
results have shown that our Automated Fingerprint Identification System is useful to
employ prior to minutiae extraction.
CHAPTER 8
APPENDIX
8.1 Matlab Code
% -------------------------------------------------------------------------
clear all;
clc;
addpath(genpath(pwd));
% % LOAD FINGERPRINT TEMPLATE DATABASE
load('db.mat')
% % EXTRACT FEATURES FROM AN ARBITRARY FINGERPRINT
[filename, PathName] = uigetfile('*.jpg;*.png;*.tif;*.jpeg;*.bmp', 'Load Image File');
img = imread([PathName '/' filename]);
figure(1)
imshow(img)
img = imresize(img, [300 300]);
if ndims(img) == 3; img = rgb2gray(img); end % Convert colour images to greyscale
disp(['Extracting features from ' filename ' ...']);
img = imadjust(img, [.3 .7], []);
J = img(:,:,1) > 160;            % Binarization by global thresholding
figure(2)
imshow(J)
set(gcf, 'position', [1 1 600 600]);
K = bwmorph(~J, 'thin', Inf);    % Thinning to one-pixel-wide ridges
figure(3)
imshow(K)
ffnew = extMinutia(img, K)
figure(8)
% % CALCULATE MATCHING SCORE AGAINST EVERY TEMPLATE IN THE DATABASE
load('xdb.mat')                  % x: number of templates in the database
for i = 1:x
    S(i) = match(ffnew, ff{i});  % ff: cell array of minutiae templates from db.mat
    drawnow
end
% % OFFER MATCHED FINGERPRINTS
Matched_FingerPrints = find(S > 0.65)
% -------------------------------------------------------------------------
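% The creation of the template database db.mat is done through the GUI of
% Fig. 6.1 and is not part of this listing. The commented sketch below is an
% assumption of what the enrolment loop looks like; the folder name
% 'database' is hypothetical, while the preprocessing mirrors the script
% above and the stored variables (ff, x) match the ones loaded there.
%
%   files = dir('database/*.tif');     % hypothetical folder of enrolled prints
%   x = numel(files);
%   ff = cell(1, x);
%   for i = 1:x
%       img = imread(fullfile('database', files(i).name));
%       img = imresize(img, [300 300]);
%       if ndims(img) == 3, img = rgb2gray(img); end
%       img = imadjust(img, [.3 .7], []);
%       J = img(:,:,1) > 160;          % binarize
%       K = bwmorph(~J, 'thin', Inf);  % thin
%       ff{i} = extMinutia(img, K);    % minutiae template
%   end
%   save('db.mat', 'ff');
%   save('xdb.mat', 'x');
% -------------------------------------------------------------------------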
% MINUTIAE EXTRACTION
function [a5] = extMinutia(I, K)
fun = @minutie;
L = nlfilter(K, [3 3], fun);   % crossing-number value at every ridge pixel
% Ridge endings (crossing number 1)
LTerm = (L == 1);
LTermLab = bwlabel(LTerm);
propTerm = regionprops(LTermLab, 'Centroid')
CentroidTerm = round(cat(1, propTerm(:).Centroid));
figure(4)
imshow(K)
hold on
plot(CentroidTerm(:,1), CentroidTerm(:,2), 'ro')
hold off
CentroidFinX = CentroidTerm(:,1);
CentroidFinY = CentroidTerm(:,2);
% Ridge bifurcations (crossing number 3)
LSep = (L == 3);
LSepLab = bwlabel(LSep);
propSep = regionprops(LSepLab, 'Centroid', 'Image');
CentroidSep = round(cat(1, propSep(:).Centroid));
CentroidSepX = CentroidSep(:,1);
CentroidSepY = CentroidSep(:,2);
figure(5)
imshow(K)
hold on
plot(CentroidSepX, CentroidSepY, 'g*')
hold off
figure(6)
imshow(K)
hold on
plot(CentroidTerm(:,1), CentroidTerm(:,2), 'ro')
plot(CentroidSepX, CentroidSepY, 'g*')
hold off
D = 10;
% % Process 1: remove ending/bifurcation pairs closer than D pixels
Distance = DistEuclidian([CentroidSepX CentroidSepY], [CentroidFinX CentroidFinY]);
SpuriousMinutae = Distance < D;
[i, j] = find(SpuriousMinutae);
CentroidSepX(i) = [];
CentroidSepY(i) = [];
CentroidFinX(j) = [];
CentroidFinY(j) = [];
% % Process 2: remove bifurcation pairs closer than D pixels
D = 7;
Distance = DistEuclidian([CentroidSepX CentroidSepY]);
SpuriousMinutae = Distance < D;
[i, j] = find(SpuriousMinutae);
CentroidSepX(i) = [];
CentroidSepY(i) = [];
% % Process 3: remove ending pairs closer than D pixels
D = 6;
Distance = DistEuclidian([CentroidFinX CentroidFinY]);
SpuriousMinutae = Distance < D;
[i, j] = find(SpuriousMinutae);
CentroidFinX(i) = [];
CentroidFinY(i) = [];
% Build the region of interest (ROI) from the ridge skeleton
Kopen = imclose(K, strel('square', 7));
KopenClean = imfill(Kopen, 'holes');
KopenClean = bwareaopen(KopenClean, 5);
KopenClean([1 end], :) = 0;
KopenClean(:, [1 end]) = 0;
ROI = imerode(KopenClean, strel('disk', 10));
% % Suppress minutiae lying outside the ROI
[m, n] = size(K(:,:,1));
indFin = sub2ind([m, n], CentroidFinX, CentroidFinY);
Z = zeros(m, n);
Z(indFin) = 1;
size(ROI')
size(Z)
ZFin = Z.*ROI';
[CentroidFinX, CentroidFinY] = find(ZFin);
indSep = sub2ind([m, n], CentroidSepX, CentroidSepY);
Z = zeros(m, n);
Z(indSep) = 1;
ZSep = Z.*ROI';
[CentroidSepX, CentroidSepY] = find(ZSep);
figure(7)
imshow(I)
hold on
image(255*ROI)
alpha(0.5)
plot(CentroidFinX, CentroidFinY, 'ro', 'linewidth', 2)
plot(CentroidSepX, CentroidSepY, 'go', 'linewidth', 2)
hold off
% Pad the coordinate lists to a common length and collect them in one table
m1 = max(length(CentroidFinX), length(CentroidFinY));
m2 = max(length(CentroidSepX), length(CentroidSepY));
m3 = max(m1, m2)
a1 = [CentroidFinX(1:length(CentroidFinX), 1); zeros(m3 - length(CentroidFinX), 1)];
a2 = [CentroidFinY(1:length(CentroidFinY), 1); zeros(m3 - length(CentroidFinY), 1)];
a3 = [CentroidSepX(1:length(CentroidSepX), 1); zeros(m3 - length(CentroidSepX), 1)];
a4 = [CentroidSepY(1:length(CentroidSepY), 1); zeros(m3 - length(CentroidSepY), 1)];
a5 = [a1, a2, a3, a4];
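% -------------------------------------------------------------------------
% The helper "minutie" passed to nlfilter above is not reproduced in the
% original listing. The following sketch is an assumption reconstructed from
% how its output is used (L == 1 marks ridge endings, L == 3 bifurcations):
% for a ridge pixel it returns the number of ridge neighbours in the 3x3
% window, which on a thinned skeleton is 1 at an ending and 3 at a
% bifurcation.
function y = minutie(x)
c = ceil(size(x)/2);          % centre of the 3x3 window
if x(c(1), c(2)) == 0
    y = 0;                    % not a ridge pixel
else
    y = sum(x(:)) - 1;        % ridge neighbours of the centre pixel
end
end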
% -------------------------------------------------------------------------
% COORDINATION TRANSFORM FUNCTION
function [T] = transform(M, i)
% Translate and rotate the minutiae set M into the reference frame of
% minutia i (position XRef, YRef; direction ThRef)
Count = size(M, 1);
XRef = M(i, 1);
YRef = M(i, 2);
ThRef = M(i, 4);
T = zeros(Count, 4);
R = [cos(ThRef) sin(ThRef) 0; -sin(ThRef) cos(ThRef) 0; 0 0 1]; % Transformation matrix
for i = 1:Count
    B = [M(i,1) - XRef; M(i,2) - YRef; M(i,4) - ThRef];
    T(i, 1:3) = R*B;
    T(i, 4) = M(i, 3);
end
end
% -------------------------------------------------------------------------
% COORDINATION TRANSFORM FUNCTION
function [Tnew] = transform2(T, alpha)
Count = size(T, 1);
Tnew = zeros(Count, 4);
R = [cos(alpha) sin(alpha) 0 0; -sin(alpha) cos(alpha) 0 0; 0 0 1 0; 0 0 0 1]; % Transformation matrix
for i = 1:Count
    B = T(i,:) - [0 0 alpha 0];
    Tnew(i,:) = R*B';
end
end
% -------------------------------------------------------------------------
% RIDGE ORIENTATION CALCULATION
function [orientim, reliability, coherence] = ridgeorient(im, gradientsigma, blocksigma, orientsmoothsigma)
if ~exist('orientsmoothsigma', 'var'), orientsmoothsigma = 0;
end
[rows, cols] = size(im);
% Calculate image gradients.
sze = fix(6*gradientsigma); if ~mod(sze,2); sze = sze+1; end
f = fspecial('gaussian', sze, gradientsigma); % Generate Gaussian filter.
[fx, fy] = gradient(f);                       % Gradient of Gaussian.
Gx = filter2(fx, im);
% Gradient of the image in x
Gy = filter2(fy, im);
% ... and y
% Estimate the local ridge orientation at each point by finding the
% principal axis of variation in the image gradients
Gxx = Gx.^2;     % Covariance data for the image gradients
Gxy = Gx.*Gy;
Gyy = Gy.^2;
% Smooth the covariance data to perform a weighted summation of the data
sze = fix(6*blocksigma);
if ~mod(sze,2);
    sze = sze+1;
end
f = fspecial('gaussian', sze, blocksigma);
Gxx = filter2(f, Gxx);
Gxy = 2*filter2(f, Gxy);
Gyy = filter2(f, Gyy);
% Analytic solution of principal direction
denom = sqrt(Gxy.^2 + (Gxx - Gyy).^2) + eps;
sin2theta = Gxy./denom;   % Sine and cosine of doubled angles
cos2theta = (Gxx - Gyy)./denom;
if orientsmoothsigma
    sze = fix(6*orientsmoothsigma);
    if ~mod(sze,2);
        sze = sze+1;
    end
    f = fspecial('gaussian', sze, orientsmoothsigma);
    cos2theta = filter2(f, cos2theta);   % Smoothed sine and cosine of
    sin2theta = filter2(f, sin2theta);   % doubled angles
end
orientim = pi/2 + atan2(sin2theta, cos2theta)/2;
% Reliability and coherence of the orientation estimate
Imin = (Gyy + Gxx)/2 - (Gxx - Gyy).*cos2theta/2 - Gxy.*sin2theta/2;
Imax = Gyy + Gxx - Imin;
reliability = 1 - Imin./(Imax + .001);
coherence = ((Imax - Imin)./(Imax + Imin)).^2;
reliability = reliability.*(denom > .001);
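% Example usage (the parameter values are illustrative, not taken from the
% original report): estimate the orientation field of a segmented and
% normalised image normim, then overlay it with the plotting routine below.
%   [orientim, reliability] = ridgeorient(normim, 1, 5, 5);
%   plotridgeorient(orientim, 20, normim, 4, normim);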
% -------------------------------------------------------------------------
% PLOTTING OF RIDGE ORIENTATION
function plotridgeorient(orient, spacing, im, figno, I)
if fix(spacing) ~= spacing
    error('spacing must be an integer');
end
[rows, cols] = size(orient);
lw = 2;              % linewidth
len = 0.8*spacing;   % length of orientation lines
% Subsample the orientation data according to the specified spacing
s_orient = orient(spacing:spacing:rows-spacing, ...
                  spacing:spacing:cols-spacing);
xoff = len/2*cos(s_orient);
yoff = len/2*sin(s_orient);
if nargin >= 3   % Display fingerprint image
    if nargin == 4
        imshow(im, figno);
    else
        imshow(im);
    end
end
% Determine placement of orientation vectors
[x, y] = meshgrid(spacing:spacing:cols-spacing, ...
                  spacing:spacing:rows-spacing);
x = x - xoff;
y = y - yoff;
% Orientation vectors
u = xoff*2;
v = yoff*2;
imshow(I)
hold on
quiver(x, y, u, v, 0, '.', 'linewidth', 1, 'color', 'r');
axis equal, axis ij, hold off
% -------------------------------------------------------------------------
% RIDGE SEGMENTATION
function [normim, mask, maskind] = ridgesegment(im, blksze, thresh)
im = normalise(im);   % normalise to have zero mean, unit std dev
% Blocks whose grey-level standard deviation is below thresh are treated
% as background
fun = inline('std(x(:))*ones(size(x))');
stddevim = blkproc(im, [blksze blksze], fun);
mask = stddevim > thresh;
maskind = find(mask);
% Renormalise image so that the *ridge regions* have zero mean, unit
% standard deviation
im = im - mean(im(maskind));
normim = im/std(im(maskind));
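% Example usage (illustrative values): segment the ridge regions with 16x16
% blocks and a standard-deviation threshold of 0.1.
%   [normim, mask] = ridgesegment(im, 16, 0.1);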
% -------------------------------------------------------------------------
function n = normalise(im, reqmean, reqvar)
if ~(nargin == 1 | nargin == 3)
    error('No of arguments must be 1 or 3');
end
if nargin == 1   % Normalise to the range 0 - 1
    if ndims(im) == 3          % colour image: normalise the value channel
        hsv = rgb2hsv(im);
        v = hsv(:,:,3);
        v = v - min(v(:));
        v = v/max(v(:));
        hsv(:,:,3) = v;
        n = hsv2rgb(hsv);
    else                       % Assume greyscale
        if ~isa(im, 'double'), im = double(im);
        end
        n = im - min(im(:));
        n = n/max(n(:));
    end
else             % Normalise to desired mean and variance
    if ndims(im) == 3          % colour image
        error('cannot normalise colour image to desired mean and variance');
    end
    if ~isa(im, 'double'), im = double(im); end
    im = im - mean(im(:));
    im = im/std(im(:));
    n = reqmean + im*sqrt(reqvar);
end
% -------------------------------------------------------------------------
% TRANSFORMED MINUTIAE MATCHING SCORE
function [sm] = score(T1, T2)
Count1 = size(T1, 1);
Count2 = size(T2, 1);
n = 0;
T = 15;    % distance threshold (pixels)
TT = 14;   % direction threshold (degrees)
for i = 1:Count1
    Found = 0; j = 1;
    while (Found == 0) && (j <= Count2)
        dx = (T1(i,1) - T2(j,1));
        dy = (T1(i,2) - T2(j,2));
        d = sqrt(dx^2 + dy^2);
        if d < T
            DTheta = abs(T1(i,3) - T2(j,3))*180/pi;
            DTheta = min(DTheta, 360 - DTheta);
            if DTheta < TT
                n = n + 1;
                Found = 1;
            end
        end
        j = j + 1;
    end
end
sm = sqrt(n^2/(Count1*Count2)); % Similarity Index
end
% -------------------------------------------------------------------------
% PAIRWISE EUCLIDEAN DISTANCES BETWEEN MINUTIAE SETS
function D = DistEuclidian(dataset1, dataset2)
h = waitbar(0, 'Distance Computation');
switch nargin
    case 1   % distances within a single set
        [m1, n1] = size(dataset1);
        m2 = m1;
        D = zeros(m1, m2);
        for i = 1:m1
            waitbar(i/m1)
            for j = 1:m2
                if i == j
                    D(i,j) = NaN;   % ignore self-distances
                else
                    D(i,j) = sqrt((dataset1(i,1)-dataset1(j,1))^2 + (dataset1(i,2)-dataset1(j,2))^2);
                end
            end
        end
    case 2   % distances between two sets
        [m1, n1] = size(dataset1);
        [m2, n2] = size(dataset2);
        D = zeros(m1, m2);
        for i = 1:m1
            waitbar(i/m1)
            for j = 1:m2
                D(i,j) = sqrt((dataset1(i,1)-dataset2(j,1))^2 + (dataset1(i,2)-dataset2(j,2))^2);
            end
        end
    otherwise
        error('only one or two input arguments')
end
close(h)
% -------------------------------------------------------------------------
% FINGERPRINT MATCHING SCORE
function [S] = match(M1, M2, display_flag)
if nargin == 2;
    display_flag = 0;
end
M1 = M1(M1(:,3) < 5, :);
M2 = M2(M2(:,3) < 5, :);
count1 = size(M1, 1);
count2 = size(M2, 1);
bi = 0; bj = 0; ba = 0;   % Best i, j, alpha
S = 0;                    % Best similarity score
for i = 1:count1
    T1 = transform(M1, i);
    for j = 1:count2
        if M1(i,3) == M2(j,3)
            T2 = transform(M2, j);
            for a = -5:5   % Alpha (degrees)
                T3 = transform2(T2, a*pi/180);
                sm = score(T1, T3);
                if S < sm
                    S = sm;
                    bi = i;
                    bj = j;
                    ba = a;
                end
            end
        end
    end
end
if display_flag == 1
    figure, title(['Similarity Measure: ' num2str(S)]);
    T1 = transform(M1, bi);
    T2 = transform(M2, bj);
    T3 = transform2(T2, ba*pi/180);
    plot_data(T1, 1);   % plot_data is a plotting helper not reproduced here
    plot_data(T3, 2);
end
end
% -------------------------------------------------------------------------
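% Example usage (illustrative): compare two minutiae templates and display
% the aligned minutiae by setting the optional display flag; 0.65 is the
% decision threshold used in the main script above.
%   S = match(ff{1}, ffnew, 1);
%   if S > 0.65, disp('Fingerprints match'), end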
REFERENCES
[1] A. A. Paulino, J. Feng, and A. K. Jain, “Latent fingerprint matching using descriptor-based Hough transform,” IEEE Transactions on Information Forensics and Security, vol. 8, pp. 1–15, Jan 2013.
[2] A. A. Paulino, J. Feng, and A. K. Jain, “Latent fingerprint matching using descriptor-based Hough transform,” Proc. Int. Joint Conf. Biometrics, pp. 1–7, Oct 2011.
[3] A. Jain, L. Hong, and R. Bolle, “On-line fingerprint verification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 302–314, 1997.
[4] B. Janani, S. Valarmathi, A. Kumar, and S. Boobalakumaran, “Identification of palmprint and fingerprint using improved hierarchical minutiae matching,” International Journal of Innovative Science, Engineering and Technology, vol. 1, Nov 2014.
[5] J. Feng, J. Zhou, and A. K. Jain, “Orientation field estimation for latent fingerprint enhancement,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, Aug 2012.
[6] B. T. Ulery, R. A. Hicklin, J. Buscaglia, and M. A. Roberts, “Accuracy and reliability of forensic latent fingerprint decisions,” Proceedings of the National Academy of Sciences, 2011.
[7] L. Haber and R. N. Haber, “Error rates for human latent fingerprint examiners,” pp. 339–360, 2003.
[8] R. Kausalya and A. Ramya, International Journal of Advanced Research in Computer and Communication Engineering, vol. 3, Feb 2014.
[9] R. Thai, “Fingerprint image enhancement and minutiae extraction,” School of Computer Science and Software Engineering, University of Western Australia, 2003.
[10] J. Feng and A. K. Jain, “Fingerprint reconstruction: From minutiae to phase,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, pp. 209–223, Feb 2011.
[11] J. C. Amengual, A. Juan, J. C. Pérez, F. Prat, S. Sáez, and J. M. Vilar, “Real-time minutiae extraction in fingerprint images,” in Proc. of the 6th Int. Conf. on Image Processing and its Applications, pp. 871–875, July 1997.
[12] S. Kasaei, M. Deriche, and B. Boashash, “Fingerprint feature extraction using block-direction on reconstructed images,” IEEE Region TEN Conf. on Digital Signal Processing Applications, pp. 303–306, Dec 1997.
[13] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. New York: Springer-Verlag, 2009.
[14] R. Cappelli, M. Ferrara, and D. Maltoni, “Minutia cylinder-code: A new representation and matching technique for fingerprint recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, pp. 2128–2141, Dec 2010.
[15] M. Tico and P. Kuosmanen, “Fingerprint matching using an orientation-based minutia descriptor,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1009–1014, Aug 2003.
[16] J. Feng and J. Zhou, “A performance evaluation of fingerprint minutia descriptors,” Proc. Int. Conf. on Hand-Based Biometrics, pp. 1–6, Aug 2011.
[17] B. G. Sherlock and D. M. Monro, “A model for interpreting fingerprint topology,” Pattern Recognition, vol. 26, pp. 1047–1055, 1993.
[18] S. Huckemann, T. Hotz, and A. Munk, “Global models for the orientation field of fingerprints: An approach based on quadratic differentials,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1507–1519, Sep 2008.
[19] L. Hong, Y. Wan, and A. Jain, Pattern Recognition and Image Processing Laboratory, Department of Computer Science, Michigan State University, pp. 1–30, 2006.
[20] R. Thai, “Fingerprint image enhancement and minutiae extraction,” PhD thesis, School of Computer Science and Software Engineering, University of Western Australia, pp. 1–30, 2003.
[21] D. Maio and D. Maltoni, “Direct gray-scale minutiae detection in fingerprints,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 27–40, Sep 1997.
More Related Content

What's hot

Flow And Throughput Improvement
Flow And Throughput ImprovementFlow And Throughput Improvement
Flow And Throughput ImprovementRamon Saviñon
 
rigid and flexiable pavement of highway Project bbjr report
rigid and flexiable pavement of highway Project bbjr reportrigid and flexiable pavement of highway Project bbjr report
rigid and flexiable pavement of highway Project bbjr reportrakeshchoudhary129
 
Logistics Feasibility Study for Ultra Mega Power Plant (UMPP)
Logistics Feasibility Study for Ultra Mega Power Plant  (UMPP)Logistics Feasibility Study for Ultra Mega Power Plant  (UMPP)
Logistics Feasibility Study for Ultra Mega Power Plant (UMPP)Genex Logistics
 
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...Jason Cheung
 
Eta nonfab-deploy-guide-2019oct
Eta nonfab-deploy-guide-2019octEta nonfab-deploy-guide-2019oct
Eta nonfab-deploy-guide-2019octssuserae99fb
 
Eta design-guide-2019oct
Eta design-guide-2019octEta design-guide-2019oct
Eta design-guide-2019octssuserae99fb
 
Smart attendance system using facial recognition
Smart attendance system using facial recognitionSmart attendance system using facial recognition
Smart attendance system using facial recognitionVigneshLakshmanan8
 
Motorola enterprise wlan design guide version 1.2
Motorola enterprise wlan design guide version 1.2Motorola enterprise wlan design guide version 1.2
Motorola enterprise wlan design guide version 1.2Advantec Distribution
 
MEng Report Merged - FINAL
MEng Report Merged - FINALMEng Report Merged - FINAL
MEng Report Merged - FINALAmit Ramji ✈
 
CFD-Assignment_Ramji_Amit_10241445
CFD-Assignment_Ramji_Amit_10241445CFD-Assignment_Ramji_Amit_10241445
CFD-Assignment_Ramji_Amit_10241445Amit Ramji ✈
 
Yuhang Chen - Internship Report
Yuhang Chen - Internship ReportYuhang Chen - Internship Report
Yuhang Chen - Internship ReportYuhang Chen
 
Global Digital Inclusion Benchmarking Study
Global Digital Inclusion Benchmarking StudyGlobal Digital Inclusion Benchmarking Study
Global Digital Inclusion Benchmarking StudyCatherine Henry
 
One step compensation_v6
One step compensation_v6One step compensation_v6
One step compensation_v6Luis Baquero
 
IMechE Report Final_Fixed
IMechE Report Final_FixedIMechE Report Final_Fixed
IMechE Report Final_FixedAmit Ramji ✈
 

What's hot (20)

Abrek_Thesis
Abrek_ThesisAbrek_Thesis
Abrek_Thesis
 
Flow And Throughput Improvement
Flow And Throughput ImprovementFlow And Throughput Improvement
Flow And Throughput Improvement
 
rigid and flexiable pavement of highway Project bbjr report
rigid and flexiable pavement of highway Project bbjr reportrigid and flexiable pavement of highway Project bbjr report
rigid and flexiable pavement of highway Project bbjr report
 
Logistics Feasibility Study for Ultra Mega Power Plant (UMPP)
Logistics Feasibility Study for Ultra Mega Power Plant  (UMPP)Logistics Feasibility Study for Ultra Mega Power Plant  (UMPP)
Logistics Feasibility Study for Ultra Mega Power Plant (UMPP)
 
Report
ReportReport
Report
 
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
Trinity Impulse - Event Aggregation to Increase Stundents Awareness of Events...
 
Eta nonfab-deploy-guide-2019oct
Eta nonfab-deploy-guide-2019octEta nonfab-deploy-guide-2019oct
Eta nonfab-deploy-guide-2019oct
 
final_sustainability_report_09dec2015-8
final_sustainability_report_09dec2015-8final_sustainability_report_09dec2015-8
final_sustainability_report_09dec2015-8
 
Eta design-guide-2019oct
Eta design-guide-2019octEta design-guide-2019oct
Eta design-guide-2019oct
 
Smart attendance system using facial recognition
Smart attendance system using facial recognitionSmart attendance system using facial recognition
Smart attendance system using facial recognition
 
Internship report on flutter lawyer app
Internship report  on flutter lawyer appInternship report  on flutter lawyer app
Internship report on flutter lawyer app
 
Human computer interaction
Human computer interactionHuman computer interaction
Human computer interaction
 
Motorola enterprise wlan design guide version 1.2
Motorola enterprise wlan design guide version 1.2Motorola enterprise wlan design guide version 1.2
Motorola enterprise wlan design guide version 1.2
 
MEng Report Merged - FINAL
MEng Report Merged - FINALMEng Report Merged - FINAL
MEng Report Merged - FINAL
 
CFD-Assignment_Ramji_Amit_10241445
CFD-Assignment_Ramji_Amit_10241445CFD-Assignment_Ramji_Amit_10241445
CFD-Assignment_Ramji_Amit_10241445
 
Yuhang Chen - Internship Report
Yuhang Chen - Internship ReportYuhang Chen - Internship Report
Yuhang Chen - Internship Report
 
Training Report
Training ReportTraining Report
Training Report
 
Global Digital Inclusion Benchmarking Study
Global Digital Inclusion Benchmarking StudyGlobal Digital Inclusion Benchmarking Study
Global Digital Inclusion Benchmarking Study
 
One step compensation_v6
One step compensation_v6One step compensation_v6
One step compensation_v6
 
IMechE Report Final_Fixed
IMechE Report Final_FixedIMechE Report Final_Fixed
IMechE Report Final_Fixed
 

Viewers also liked

Review of three categories of fingerprint recognition 2
Review of three categories of fingerprint recognition 2Review of three categories of fingerprint recognition 2
Review of three categories of fingerprint recognition 2prjpublications
 
Correlation based Fingerprint Recognition
Correlation based Fingerprint RecognitionCorrelation based Fingerprint Recognition
Correlation based Fingerprint Recognitionmahesamrin
 
A High Performance Fingerprint Matching System for Large Databases Based on GPU
A High Performance Fingerprint Matching System for Large Databases Based on GPUA High Performance Fingerprint Matching System for Large Databases Based on GPU
A High Performance Fingerprint Matching System for Large Databases Based on GPUAlpesh Kurhade
 
Biometric Fingerprint Recognintion based on Minutiae Matching
Biometric Fingerprint Recognintion based on Minutiae MatchingBiometric Fingerprint Recognintion based on Minutiae Matching
Biometric Fingerprint Recognintion based on Minutiae MatchingNabila mahjabin
 
Fingerprint, seminar at IASRI, New Delhi
Fingerprint, seminar at IASRI, New DelhiFingerprint, seminar at IASRI, New Delhi
Fingerprint, seminar at IASRI, New DelhiNishikant Taksande
 
50409621003 fingerprint recognition system-ppt
50409621003  fingerprint recognition system-ppt50409621003  fingerprint recognition system-ppt
50409621003 fingerprint recognition system-pptMohankumar Ramachandran
 
Fingerprint recognition
Fingerprint recognitionFingerprint recognition
Fingerprint recognitionvarsha mohite
 
Iaetsd latent fingerprint recognition and matching
Iaetsd latent fingerprint recognition and matchingIaetsd latent fingerprint recognition and matching
Iaetsd latent fingerprint recognition and matchingIaetsd Iaetsd
 
Latent fingerprint matching using descriptor
Latent fingerprint matching using descriptorLatent fingerprint matching using descriptor
Latent fingerprint matching using descriptorvishakhmarari
 
Super Glue Enhancement Techniques-Fingerprint Evidence.
Super Glue Enhancement Techniques-Fingerprint Evidence.Super Glue Enhancement Techniques-Fingerprint Evidence.
Super Glue Enhancement Techniques-Fingerprint Evidence.Timothy Babcock
 
Latent Fingerprint Individualization
Latent Fingerprint IndividualizationLatent Fingerprint Individualization
Latent Fingerprint IndividualizationRmcauley
 
Finger print based EVM by saurabh
Finger print based EVM by saurabhFinger print based EVM by saurabh
Finger print based EVM by saurabhSaurabh Kumar
 
Developmentof Image Enhancement and the Feature Extraction Techniques on Rura...
Developmentof Image Enhancement and the Feature Extraction Techniques on Rura...Developmentof Image Enhancement and the Feature Extraction Techniques on Rura...
Developmentof Image Enhancement and the Feature Extraction Techniques on Rura...IOSR Journals
 
Report finger print
Report finger printReport finger print
Report finger printEshaan Verma
 
Latent fingerprint and vein matching using ridge feature identification
Latent fingerprint and vein matching using ridge feature identificationLatent fingerprint and vein matching using ridge feature identification
Latent fingerprint and vein matching using ridge feature identificationeSAT Publishing House
 

Viewers also liked (20)

Dip fingerprint
Dip fingerprintDip fingerprint
Dip fingerprint
 
Fingerprint
FingerprintFingerprint
Fingerprint
 
Review of three categories of fingerprint recognition 2
Review of three categories of fingerprint recognition 2Review of three categories of fingerprint recognition 2
Review of three categories of fingerprint recognition 2
 
Correlation based Fingerprint Recognition
Correlation based Fingerprint RecognitionCorrelation based Fingerprint Recognition
Correlation based Fingerprint Recognition
 
A High Performance Fingerprint Matching System for Large Databases Based on GPU
A High Performance Fingerprint Matching System for Large Databases Based on GPUA High Performance Fingerprint Matching System for Large Databases Based on GPU
A High Performance Fingerprint Matching System for Large Databases Based on GPU
 
Biometric Fingerprint Recognintion based on Minutiae Matching
Biometric Fingerprint Recognintion based on Minutiae MatchingBiometric Fingerprint Recognintion based on Minutiae Matching
Biometric Fingerprint Recognintion based on Minutiae Matching
 
Finger print
Finger printFinger print
Finger print
 
Fingerprint, seminar at IASRI, New Delhi
Fingerprint, seminar at IASRI, New DelhiFingerprint, seminar at IASRI, New Delhi
Fingerprint, seminar at IASRI, New Delhi
 
50409621003 fingerprint recognition system-ppt
50409621003  fingerprint recognition system-ppt50409621003  fingerprint recognition system-ppt
50409621003 fingerprint recognition system-ppt
 
Fingerprint recognition
Fingerprint recognitionFingerprint recognition
Fingerprint recognition
 
Iaetsd latent fingerprint recognition and matching
Iaetsd latent fingerprint recognition and matchingIaetsd latent fingerprint recognition and matching
Iaetsd latent fingerprint recognition and matching
 
Latent fingerprint matching using descriptor
Latent fingerprint matching using descriptorLatent fingerprint matching using descriptor
Latent fingerprint matching using descriptor
 
Super Glue Enhancement Techniques-Fingerprint Evidence.
Super Glue Enhancement Techniques-Fingerprint Evidence.Super Glue Enhancement Techniques-Fingerprint Evidence.
Super Glue Enhancement Techniques-Fingerprint Evidence.
 
Fingerprints
FingerprintsFingerprints
Fingerprints
 
Latent Fingerprint Individualization
Latent Fingerprint IndividualizationLatent Fingerprint Individualization
Latent Fingerprint Individualization
 
Finger print based EVM by saurabh
Finger print based EVM by saurabhFinger print based EVM by saurabh
Finger print based EVM by saurabh
 
Developmentof Image Enhancement and the Feature Extraction Techniques on Rura...
Developmentof Image Enhancement and the Feature Extraction Techniques on Rura...Developmentof Image Enhancement and the Feature Extraction Techniques on Rura...
Developmentof Image Enhancement and the Feature Extraction Techniques on Rura...
 
Report finger print
Report finger printReport finger print
Report finger print
 
Latent fingerprint and vein matching using ridge feature identification
Latent fingerprint and vein matching using ridge feature identificationLatent fingerprint and vein matching using ridge feature identification
Latent fingerprint and vein matching using ridge feature identification
 
K0167683
K0167683K0167683
K0167683
 

Similar to LATENT FINGERPRINT MATCHING USING AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM

Distributed Mobile Graphics
Distributed Mobile GraphicsDistributed Mobile Graphics
Distributed Mobile GraphicsJiri Danihelka
 
High Performance Traffic Sign Detection
High Performance Traffic Sign DetectionHigh Performance Traffic Sign Detection
High Performance Traffic Sign DetectionCraig Ferguson
 
Specification of the Linked Media Layer
Specification of the Linked Media LayerSpecification of the Linked Media Layer
Specification of the Linked Media LayerLinkedTV
 
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...Artur Filipowicz
 
Steganography final report
Steganography final reportSteganography final report
Steganography final reportABHIJEET KHIRE
 
Report on e-Notice App (An Android Application)
Report on e-Notice App (An Android Application)Report on e-Notice App (An Android Application)
Report on e-Notice App (An Android Application)Priyanka Kapoor
 
project Report on LAN Security Manager
project Report on LAN Security Managerproject Report on LAN Security Manager
project Report on LAN Security ManagerShahrikh Khan
 
nasa-safer-using-b-method
nasa-safer-using-b-methodnasa-safer-using-b-method
nasa-safer-using-b-methodSylvain Verly
 
bonino_thesis_final
bonino_thesis_finalbonino_thesis_final
bonino_thesis_finalDario Bonino
 
Towards Digital Twin of a Flexible manufacturing system with AGV
Towards Digital Twin of a Flexible manufacturing system with AGV Towards Digital Twin of a Flexible manufacturing system with AGV
Towards Digital Twin of a Flexible manufacturing system with AGV YasmineBelHajsalah
 
Final Report - Major Project - MAP
Final Report - Major Project - MAPFinal Report - Major Project - MAP
Final Report - Major Project - MAPArjun Aravind
 
Pratical mpi programming
Pratical mpi programmingPratical mpi programming
Pratical mpi programmingunifesptk
 
Dragos Datcu_PhD_Thesis
Dragos Datcu_PhD_ThesisDragos Datcu_PhD_Thesis
Dragos Datcu_PhD_Thesisdragos80
 
Applying Machine Learning Techniques to Revenue Management
Applying Machine Learning Techniques to Revenue ManagementApplying Machine Learning Techniques to Revenue Management
Applying Machine Learning Techniques to Revenue ManagementAhmed BEN JEMIA
 
Im-ception - An exploration into facial PAD through the use of fine tuning de...
Im-ception - An exploration into facial PAD through the use of fine tuning de...Im-ception - An exploration into facial PAD through the use of fine tuning de...
Im-ception - An exploration into facial PAD through the use of fine tuning de...Cooper Wakefield
 

Similar to LATENT FINGERPRINT MATCHING USING AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM (20)

Distributed Mobile Graphics
Distributed Mobile GraphicsDistributed Mobile Graphics
Distributed Mobile Graphics
 
High Performance Traffic Sign Detection
High Performance Traffic Sign DetectionHigh Performance Traffic Sign Detection
High Performance Traffic Sign Detection
 
document
documentdocument
document
 
Specification of the Linked Media Layer
Specification of the Linked Media LayerSpecification of the Linked Media Layer
Specification of the Linked Media Layer
 
MS_Thesis
MS_ThesisMS_Thesis
MS_Thesis
 
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
Virtual Environments as Driving Schools for Deep Learning Vision-Based Sensor...
 
Steganography final report
Steganography final reportSteganography final report
Steganography final report
 
Report on e-Notice App (An Android Application)
Report on e-Notice App (An Android Application)Report on e-Notice App (An Android Application)
Report on e-Notice App (An Android Application)
 
project Report on LAN Security Manager
project Report on LAN Security Managerproject Report on LAN Security Manager
project Report on LAN Security Manager
 
nasa-safer-using-b-method
nasa-safer-using-b-methodnasa-safer-using-b-method
nasa-safer-using-b-method
 
bonino_thesis_final
bonino_thesis_finalbonino_thesis_final
bonino_thesis_final
 
Towards Digital Twin of a Flexible manufacturing system with AGV
Towards Digital Twin of a Flexible manufacturing system with AGV Towards Digital Twin of a Flexible manufacturing system with AGV
Towards Digital Twin of a Flexible manufacturing system with AGV
 
Final Report - Major Project - MAP
Final Report - Major Project - MAPFinal Report - Major Project - MAP
Final Report - Major Project - MAP
 
Manual
ManualManual
Manual
 
Pratical mpi programming
Pratical mpi programmingPratical mpi programming
Pratical mpi programming
 
Thesis_Prakash
Thesis_PrakashThesis_Prakash
Thesis_Prakash
 
Dragos Datcu_PhD_Thesis
Dragos Datcu_PhD_ThesisDragos Datcu_PhD_Thesis
Dragos Datcu_PhD_Thesis
 
Applying Machine Learning Techniques to Revenue Management
Applying Machine Learning Techniques to Revenue ManagementApplying Machine Learning Techniques to Revenue Management
Applying Machine Learning Techniques to Revenue Management
 
Graduation Report
Graduation ReportGraduation Report
Graduation Report
 
Im-ception - An exploration into facial PAD through the use of fine tuning de...
Im-ception - An exploration into facial PAD through the use of fine tuning de...Im-ception - An exploration into facial PAD through the use of fine tuning de...
Im-ception - An exploration into facial PAD through the use of fine tuning de...
 

Recently uploaded

complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...asadnawaz62
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and usesDevarapalliHaritha
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learningmisbanausheenparvam
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxvipinkmenon1
 
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEINFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEroselinkalist12
 
Churning of Butter, Factors affecting .
Churning of Butter, Factors affecting  .Churning of Butter, Factors affecting  .
Churning of Butter, Factors affecting .Satyam Kumar
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLDeelipZope
 
Concrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxConcrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxKartikeyaDwivedi3
 
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2RajaP95
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 

Recently uploaded (20)

young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Serviceyoung call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
 
complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...complete construction, environmental and economics information of biomass com...
complete construction, environmental and economics information of biomass com...
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and uses
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learning
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptx
 
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETEINFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
INFLUENCE OF NANOSILICA ON THE PROPERTIES OF CONCRETE
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
 
Churning of Butter, Factors affecting .
Churning of Butter, Factors affecting  .Churning of Butter, Factors affecting  .
Churning of Butter, Factors affecting .
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCL
 
Concrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxConcrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptx
 
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 

LATENT FINGERPRINT MATCHING USING AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM

  • 1. A Project on LATENT FINGERPRINT MATCHING USING AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM Submitted in partial fulfillment of the requirements for the degree of Bachelor of Technology in Electronics and Communication Engineering by Manish Negi Pratiksha Yadav Shubham Rishi Raj Singh Rawat Under the guidance of Mr. Manoj Kumar DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING G. B. PANT ENGINEERING COLLEGE, PAURI, UTTARAKHAND, INDIA JUNE 2015
  • 2. DECLARATION We hereby declare that this dissertation entitled “LATENT FINGERPRINT MATCH- ING USING AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM” submitted to the Department of Electronics and Communication Engineering, G. B. Pant Engi- neering College, Pauri Garhwal (Uttarakhand) for the award of Bachelor of Technology degree in Electronics and Communication Engineering is a bonafide work carried out by us under the guidance of Mr. Manoj Kumar and that it has not been submitted anywhere for any award. Where other sources of information have been used, they have been acknowledged. Date: 08 June 2015 Manish Negi Place: GBPEC, Pauri Pratiksha Yadav Shubham Rishi Raj Singh Rawat
  • 3. CERTIFICATE This is to certify that the dissertation entitled “LATENT FINGERPRINT MATCHING USING AUTOMATED FINGERPRINT IDENTIFICATION SYSTEM” being submit- ted by Manish Negi, Pratiksha Yadav, Shubham and Rishi Raj Singh Rawat in the partial fulfilment of the requirements for the award of Bachelor of Technology degree in Electronics and Communication Engineering to the Department of Electronics and Communication Engineering, G. B. Pant Engineering College, Pauri Garhwal (Uttarak- hand) is a bonafide work carried out by them under my guidance and supervision. To the best of my knowledge, the matter embodied in the dissertation has not been submitted for the award of any other degree or diploma. Date: 08 June 2015 ˜ Mr. Manoj Kumar Place: GBPEC, Pauri Assistant Professor ˜ SUPERVISOR
  • 4. PREFACE Among all the biometric techniques, fingerprint-based identification is the oldest method which has been successfully used in numerous applications. Everyone has unique, im- mutable fingerprints. Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is a routine procedure that is extremely important to foren- sics and law enforcement agencies. Latents are partial fingerprints that are usually smudgy, with small area and containing large distortion. Due to these characteristics, latents have a significantly smaller number of minutiae points compared to full (rolled or plain) fingerprints. A fingerprint is made of a series of ridges and furrows on the surface of the finger. The uniqueness of a fingerprint can be determined by the pattern of ridges and furrows as well as the minutiae points. Minutiae points are local ridge characteristics that occur at either a ridge bifurcation or a ridge ending. Minutiae are very important features for fingerprint representation, and most practical fingerprint recognition systems store only the minutiae template in the database for further usage.
  • 5. ACKNOWLEDGEMENT We place on record and warmly acknowledge the continuous encouragement, invaluable supervision, timely suggestions and inspired guidance offered by our guide Mr. Manoj Kumar, Assistant Professor,Department of Electronics & Communication Engineering, G. B. Pant Engineering College, Pauri Garhwal (Uttarakhand) in bringing this project to a successful completion. We are also grateful to Dr. Y. Singh, Head and Professor, Electronics & Communication Engineering Department and Dr. A. K. Gautam, As- sociate Professor, Electronics & Communication Engineering Department, G. B. Pant Engineering College, Pauri Garhwal (Uttarakhand) for helping us through the entire duration of the project. Last but not the least we express our sincere thanks to all our friends who have patiently extended all kind of help for accomplishing this undertaking. Our sincere thanks and acknowledgements are due to all our family members who have constantly encouraged us for completing this project. Manish Negi Pratiksha Yadav Shubham Rishi Raj Singh Rawat
  • 6. ABSTRACT In this project, we propose a new fingerprint matching algorithm which is especially designed for matching latents. The proposed algorithm uses a robust alignment algo- rithm (local based descriptor MCC) to align fingerprints and measure similarities be- tween fingerprints by considering both minutiae and orientation field information. The conventional methods that utilize minutiae information treat them as a point set and find the matched points from different minutiae sets. These minutiae are used for fin- gerprint recognition, in which the fingerprint’s orientation field is reconstructed from virtual minutiae and further utilized in the matching stage to enhance the system’s per- formance. A decision fusion scheme is used to combine the reconstructed orientation field matching with conventional minutiae based matching. Since orientation field is an important global feature of fingerprints, the proposed method can obtain better results than conventional methods. In our project it is implemented using MATLAB-GUI where virtual minutiae are considered.
  • 7. CONTENTS Declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v 1. INTRODUCTION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.3 Thesis Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.4 Fingerprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.5 Feature Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.5.1 Minutiae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.5.2 Orientation field . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.6 Need For Automated Extraction System . . . . . . . . . . . . . . . 6 1.7 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2. FINGERPRINT ENHANCEMENT TECHNIQUE . . . . . . . . . . . 9 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.2 Binarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.2.1 Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.3 Thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
  • 8. 3. FEATURE EXTRACTION . . . . . . . . . . . . . . . . . . . . . . . . . . 12 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 3.2 Minutiae Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 3.3 Normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 3.4 Orientation Field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 3.5 Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 4. DATABASE AND FINGERPRINT MATCHING . . . . . . . . . . . . 21 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 4.2 Database FVC2002 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 4.3 Fingerprint Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 4.3.1 Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 4.3.2 Similarity Measure . . . . . . . . . . . . . . . . . . . . . . . . . . 25 5. IMPLEMENTATION OF THE PROPOSED ALGORITHM . . . . . 26 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 5.2 Enhancement of the fingerprint image . . . . . . . . . . . . . . . . 26 5.3 Minutiae Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 5.3.1 Ridge Bifurcation . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 5.3.2 Minutiae Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 5.3.3 False Minutiae Removal . . . . . . . . . . . . . . . . . . . . . . . 30 5.4 Orientation field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 5.4.1 Segmentation and Region of interest . . . . . . . . . . . . . . . . 33 5.5 Minutiae Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 6. RESULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 6.1 Result and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . 35 7. CONCLUSION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 vii
  • 9. 8. APPENDIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 8.1 Matlab Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 REFERENCES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 viii
  • 10. LIST OF FIGURES 1.1 Block Diagram of proposed algorithm. . . . . . . . . . . . . . . . . . . . 2 1.2 Three types of fingerprint impressions. (a) Rolled; (b) plain; (c) latent. . 4 1.3 Ridge Ending and Bifurcation . . . . . . . . . . . . . . . . . . . . . . . . 5 1.4 The orientation of a ridge pixel in a fingerprint. . . . . . . . . . . . . . . 6 2.1 Binarized output of a fingerprint . . . . . . . . . . . . . . . . . . . . . . . 10 2.2 Thinned output of a fingerprint . . . . . . . . . . . . . . . . . . . . . . . 11 3.1 (a) Mask for bifurcation (b) Mask for termination. . . . . . . . . . . . . . 13 3.2 Examples of a ridge ending and bifurcation pixel. (a) A Crossing Number of one corresponds to a ridge ending pixel. (b) A Crossing Number of three corresponds to a bifurcation pixel. . . . . . . . . . . . . . . . . . . . . . . 14 3.3 Minutiae extracted image . . . . . . . . . . . . . . . . . . . . . . . . . . 15 3.4 (a) Orientation field with white background. (b) Orientation field with thinned image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 3.5 Ridges and valleys on a fingerprint image . . . . . . . . . . . . . . . . . . 19 3.6 A fingerprint image and its foreground and background regions . . . . . . 20 4.1 One fingerprint image from each database . . . . . . . . . . . . . . . . . 22 4.2 Sample images from the database. . . . . . . . . . . . . . . . . . . . . . . 23 5.1 (a)Input image.(b) Binarized output . . . . . . . . . . . . . . . . . . . . 26 5.2 (a)Binarized image.(b) Thinned output . . . . . . . . . . . . . . . . . . . 27 5.3 (a)Thinned image. (b) Extracted ridge ending and bifurcation. . . . . . . 29 5.4 (a)Thinned image.(b) Orientation field. . . . . . . . . . . . . . . . . . . . 32 5.5 Marked region of interest . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
  • 11. 5.6 Similarity Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 6.1 Graphical User Interface(GUI) for creating database . . . . . . . . . . . . 35 6.2 Binarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 6.3 Thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 6.4 Minutiae extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 6.5 Orientation field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 6.6 Marked region of interest . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 6.7 Matching with similar fingerprint . . . . . . . . . . . . . . . . . . . . . . 40 6.8 Matching with non-similar fingerprint . . . . . . . . . . . . . . . . . . . . 40 x
  • 12. LIST OF TABLES 3.1 Property of crossing number . . . . . . . . . . . . . . . . . . . . . . . . . 13 5.1 Extracted information from image in term of ridge termination and bi- furcation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 6.1 Results after comparing similarities between input with other fingerprints in database FVC2002 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 6.2 Extracted information from image in term of ridge termination ,bifurca- tion and orientation field . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
  • 13. CHAPTER 1 INTRODUCTION 1.1 Introduction Fingerprint recognition has been used by law enforcement agencies to identify suspects and victims for several decades [1]. Recent advances in automated fingerprint identifi- cation technology, coupled with the pronounced need for reliable person identification, have resulted in the increased use of fingerprints in both government and civilian appli- cations such as border control, employment background check and secure facility access. Fingerprints obtained during the crime scenes are mostly latent images. Latent Fin- gerprints refer to the impressions unintentionally left on item handled or touched by fingers. Such fingerprints are often not directly visible unless some physical or chemical technique is applied to enhance them. Since the early 20th century latent fingerprints have served as important evidence for law enforcement agencies to apprehend and con- vict criminals [2]. Given a latent fingerprint (with manually marked minutiae) and a rolled fingerprint, we extract additional features from both prints, align them in the same coordinate system, and compute a match score between them. The proposed matching approach uses minu- tiae and orientation field from both latent and rolled prints. To enable reliable feature extraction, a latent fingerprint image, which is often of very poor quality, needs to go through an image enhancement stage, which connects broken ridges, separates joined ridges, and removes overlapping patterns. These steps are shown in Fig. 1.1. Here we consider the problem of biometric verification in a more formal manner. In
  • 14. a verification problem, the biometric signal from the user is compared against a single enrolled template. This template is chosen based on the claimed identity of the user. Each user i is represented by a biometric Bi. It is assumed that there is a one-to-one correspondence between the biometric Bi and the identity i of the individual. The fea- ture extraction phase results in a machine representation (template) Ti of the biometric. During verification; the user claims an identity j and provides a biometric signal Bj. The feature extractor now derives the corresponding machine [3] representation Tj. The recognition consists of computing a similarity score S (Ti, Tj). The claimed identity is assumed to be true if the S(Ti, Tj) > Th for some threshold Th. The choice of the threshold also determines the trade-off between user convenience and system security as will be seen in the ensuing section. Figure 1.1: Block Diagram of proposed algorithm. 2
  • 15. 1.2 Motivation The motivation behind this fingerprint image enhancement and minutiae extraction process is to improve the quality of fingerprint and to extract the minutiae points. And in the extraction process we should not get the false minutiae and preserve the true ridge endings and ridge bifurcations. The minutiae extracted from the fingerprint heavily depends upon the quality of the input fingerprint. In order to extract true minutiae from the fingerprint we need to remove the noise from the input image and for that we need an enhancement algorithm. 1.3 Thesis Organization The dissertation is divided into six chapters and their outline is described as given below: In chapter 2 we have explained various image enhancement techniques on latent finger- print for better results during matching process. In chapter 3 we have explained about the local minutiae descriptor, which is used to extract the information from fingerprint and also explain about the orientation field detection. In chapter 4 we have explained about the database used in this project and how matching between two fingerprint is done.In chapter 5 we have explained about the algorithm implemented to get desired results. Finally, chapter 6 is dedicated to the result part which shows output of various operations done. 1.4 Fingerprint We touch things every day: a coffee cup, a car door, a computer keyboard, etc. Each time we touch, it is likely that we leave behind our unique signature in our fingerprints. No two people have exactly the same fingerprints. Even identical twins, with identical DNA, have different fingerprints. This uniqueness allows fingerprints to be used in all sorts of ways, including background checks, biometric security, mass disaster identification, and 3
  • 16. of course, in criminal situations. There are essentially three types of fingerprints in law enforcement applications: 1. Rolled, which is obtained by rolling the finger nail-to-nail either on a paper (in this case ink is first applied to the finger surface) or the platen of a scanner as shown in Fig. 1.2(a). 2. Plain, which is obtained by placing the finger flat on a paper or the platen of a scanner without rolling as shown in Fig. 1.2(b). 3. Latents, which are lifted from surfaces of objects that are inadvertently touched or handled by a person typically at crime scenes [3] as shown in Fig. 1.2(c). (a) (b) (c) Figure 1.2: Three types of fingerprint impressions. (a) Rolled; (b) plain; (c) latent. Rolled prints contain the largest amount of information about the ridge structure on a fingerprint since they capture the largest finger surface area; latent usually contain the least amount of information for matching or identification because of their size and inherent noise. Compared to rolled or plain fingerprints, latents are smudgy and blurred, capture only a small finger area, and have large nonlinear distortion due to pressure variations. 4
1.5 Feature Extraction
In pattern recognition and image processing, feature extraction is a special form of dimensionality reduction. Transforming the input data into a set of features is called feature extraction. If the features are carefully chosen, it is expected that the feature set will capture the relevant information from the input data, so that the desired task can be performed using this reduced representation instead of the full-size input [4].

1.5.1 Minutiae
Minutiae are specific points on a fingerprint: local ridge characteristics such as ridge endings and ridge bifurcations, as shown in Fig. 1.3. A ridge ending is the abrupt end of a ridge; a ridge bifurcation is a single ridge that divides into two ridges.

Figure 1.3: Ridge ending and bifurcation.

1.5.2 Orientation field
The orientation field defines the local orientation of the ridges contained in the fingerprint, as shown in Fig. 1.4. For a latent, it is reconstructed from minutiae locations and directions. It is used to improve fingerprint matching performance.
Figure 1.4: The orientation of a ridge pixel in a fingerprint.

1.6 Need for an Automated Extraction System
1. Reducing the time spent by latent examiners in manual markup. A crime scene can contain as many as hundreds of latents, yet only a small portion of them can be processed, simply because law enforcement agencies do not have sufficient manpower. It can take twenty minutes or even longer to mark the minutiae in a single latent. Automatic feature extraction can improve the efficiency of processing latents, leading to more identifications in less time [5].

2. Improving the compatibility between minutiae in latents and full fingerprints. In current practice, minutiae in latents are manually marked while minutiae in full fingerprints are automatically extracted, which can cause a compatibility problem. Although this issue is not severe for full fingerprint matching, it cannot be underestimated in the case of latent matching, since in a tiny and smudgy latent every minutia plays an important role. To address this issue, AFIS vendors usually provide training courses to latent examiners on how to better mark minutiae for their particular AFIS, since different vendors' systems are not very consistent in extracting minutiae. However, it takes time for fingerprint examiners to become familiar with a system. This problem can be alleviated if features in latents are also extracted by automatic algorithms [6].

3. Improving the repeatability and reproducibility of latent identification. The minutiae in
the same latent marked by different latent examiners, or even by the same examiner at different times, may not be the same. This is one of the reasons why different latent examiners, or the same examiner at different times, make different matching decisions on the same latent-exemplar pair [7].

1.7 Applications
1. Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is a routine procedure that is extremely important to forensics and law enforcement agencies.

2. Verifying the match between a driver's fingerprint and the fingerprint features stored on the license assures that the driver is indeed the person to whom the license was issued. This task can be done on-site, where the fingerprint features obtained from the driver by live scanning are compared with the features magnetically stored on the driver's license. Current "smart card" technology provides ample memory capacity to store the features on the card. A driver/license match means that the license indeed belongs to the driver; this, however, does not guarantee that the license is not falsified. To check the validity of the license, the police officer has the option of making an additional inquiry against the database, resulting in a license validity check.

3. Since 2000, electronic fingerprint readers have been introduced for security applications such as log-in authentication and the identification of computer users. However, some less sophisticated devices have been found to be vulnerable to quite simple methods of deception, such as fake fingerprints cast in gels. In 2006, fingerprint sensors gained popularity in the notebook PC market. Built-in sensors in ThinkPads, VAIOs, HP Pavilion laptops, and others also double as motion detectors for document scrolling, like a scroll wheel. Following the release of the
iPhone 5S, a group of German hackers announced on September 21, 2013, that they had bypassed Apple's new Touch ID fingerprint sensor by photographing a fingerprint from a glass surface and using that captured image for verification.

4. Electronic registration and library access: fingerprints and, to a lesser extent, iris scans can be used to validate electronic registration, cashless catering, and library access. By 2007, this practice was particularly widespread in UK schools, and it was also starting to be adopted in some US states.
CHAPTER 2
FINGERPRINT ENHANCEMENT TECHNIQUE

2.1 Introduction
A critical step in an Automatic Fingerprint Matching System is to automatically and reliably extract minutiae from input fingerprint images. However, the performance of the minutiae extraction algorithm relies heavily on the quality of the input fingerprint image. To ensure the extraction of true minutiae points, it is essential to incorporate an enhancement algorithm. Reliable and sound verification of fingerprints in any AFIS is always preceded by proper detection and extraction of its features: a fingerprint image is first enhanced before the features contained in it are detected or extracted. A well-enhanced image provides a clear separation between valid and spurious features. Spurious features are minutiae points created by noise or artifacts; they are not actually part of the fingerprint.

2.2 Binarization
Most minutiae extraction algorithms operate on binary images in which there are only two levels of interest: the black pixels, which represent ridges, and the white pixels, which represent valleys. Binarization is the process that converts a grey-level image into a binary image. It involves examining the grey-level value of each pixel in the enhanced image, as shown in Fig. 2.1; if the value is greater than the global
threshold, the pixel value is set to binary one; otherwise, it is set to zero. The following equation is used to binarize a grey-scale image [8]:

g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 if f(x, y) <= T,

where f(x, y) is the value of a pixel in the grey-scale image, T is the global threshold, and g(x, y) is the binarized image.

Figure 2.1: Binarized output of a fingerprint.

2.2.1 Thresholding
In this method, the grey-level value of each pixel in the filtered image is examined and, if the value is greater than the threshold, the pixel value is set to binary one; otherwise, it is set to zero. A well-chosen threshold makes each cluster as tight as possible and eliminates the overlap between them. The threshold is selected from a series of within-class and between-class variance values, normalized to the range 0 to 1, so as to optimally support the maximum separation of the ridges from the valleys. The clear separation of the ridges from the valleys verifies the correctness of the algorithm proposed in [9] and implemented in this project.
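The variance-based threshold selection described above corresponds to Otsu's method, which MATLAB exposes as graythresh(). The following is a minimal sketch of this binarization step, with the image file name as an assumption rather than part of the project code:

f = imread('fingerprint.tif');           % assumed input file
if ndims(f) == 3, f = rgb2gray(f); end   % handle colour scans
T = graythresh(f);                       % threshold in [0,1] from between-class variance
g = im2bw(f, T);                         % g(x,y) = 1 where f exceeds the scaled threshold
imshow(g)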
2.3 Thinning
The final image enhancement step typically performed prior to minutiae extraction is thinning. Thinning is a morphological operation that successively erodes away the foreground pixels until they are one pixel wide. A standard thinning algorithm is employed, which performs the operation using two subiterations; it is accessible in MATLAB via the 'thin' option of the bwmorph function. Each subiteration begins by examining the neighbourhood of each pixel in the binary image and, based on a particular set of pixel-deletion criteria, checks whether the pixel can be deleted. The subiterations continue until no more pixels can be deleted. Applying the thinning algorithm to a fingerprint image preserves the connectivity of the ridge structures while forming a skeletonized version of the binary image, as shown in Fig. 2.2. This skeleton image is then used in the subsequent extraction of minutiae, a process discussed in the next chapter.

Figure 2.2: Thinned output of a fingerprint.
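As a minimal sketch (assuming a binary input in which ridge pixels are 1), the whole thinning step reduces to a single bwmorph call:

bw = im2bw(imread('fingerprint.tif'));   % assumed binarized input image
skel = bwmorph(bw, 'thin', Inf);         % thin until ridges are one pixel wide
imshow(skel)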
CHAPTER 3
FEATURE EXTRACTION

3.1 Introduction
After a fingerprint image has been enhanced, the next step is to extract the minutiae from the enhanced image. Following the extraction of minutiae, a final post-processing stage is performed to eliminate false minutiae. This chapter discusses the methodology and implementation of techniques for minutiae extraction and orientation field estimation. The proposed matching approach uses minutiae and orientation field from both latent and rolled prints. Minutiae are manually marked by latent examiners in the latent and automatically extracted using commercial matchers in the rolled print. Based on the minutiae, local minutiae descriptors are built and used in the proposed descriptor-based alignment and scoring algorithms. The orientation field is reconstructed from minutiae locations and directions for the latents, as proposed in [10], and automatically extracted from the rolled print images using a gradient-based method.

3.2 Minutiae Extraction
The most commonly employed method of minutiae extraction is the Crossing Number (CN) concept [11] [12]. This method uses the skeleton image, in which the ridge flow pattern is eight-connected. The minutiae are extracted by scanning the local neighbourhood of each ridge pixel in the image using a 3×3 window, as shown in Fig. 3.1.
The CN value is then computed; it is defined as half the sum of the differences between pairs of adjacent pixels in the eight-neighbourhood. Using the properties of the CN shown in Table 3.1, the ridge pixel can then be classified as a ridge ending, a bifurcation, or a non-minutiae point. For example, a ridge pixel with a CN of one corresponds to a ridge ending, and a CN of three corresponds to a bifurcation.

Figure 3.1: (a) Mask for bifurcation. (b) Mask for termination.

Table 3.1: Properties of the crossing number
CN | Property
0 | Isolated point
1 | Ridge ending point
2 | Continuing ridge point
3 | Bifurcation point
4 | Crossing point

This approach uses a 3×3 window to examine the local neighbourhood of each ridge pixel in the image. A pixel is classified as a ridge ending if it has only one neighbouring ridge pixel in the window, and as a bifurcation if it has three neighbouring ridge pixels. Consequently, this approach is essentially equivalent to the Crossing Number method. The CN for a ridge pixel P is given by eq. 3.1 [13]:
CN = (1/2) Σ_{i=1}^{8} |P_i − P_{i+1}|,  with P_9 = P_1,   (3.1)

where P_i is the value of the ith pixel in the neighbourhood of P. For a pixel P, its eight neighbouring pixels are scanned in an anti-clockwise direction. After the CN for a ridge pixel has been computed, the pixel can be classified according to the property of its CN value: as shown in Fig. 3.2, a ridge pixel with a CN of one corresponds to a ridge ending, and a CN of three corresponds to a bifurcation. For each extracted minutiae point, the following information is recorded:

1. x and y coordinates,
2. orientation of the associated ridge segment, and
3. type of minutiae (ridge ending or bifurcation).

Figure 3.2: Examples of a ridge ending and a bifurcation pixel. (a) A Crossing Number of one corresponds to a ridge ending pixel. (b) A Crossing Number of three corresponds to a bifurcation pixel.
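The following is a minimal sketch of the CN test of eq. 3.1 for a single skeleton pixel; the function name and neighbour ordering are illustrative assumptions, not the project's exact code. Applied over the whole skeleton with nlfilter(K, [3 3], @crossingNumber), as done in Chapter 5, it yields the CN value at every pixel.

function cn = crossingNumber(nhood)
% nhood: 3x3 neighbourhood of a ridge pixel. Neighbours P1..P8 are taken
% cyclically around the centre, with P9 = P1 as in eq. 3.1.
p = double([nhood(1,2) nhood(1,1) nhood(2,1) nhood(3,1) ...
            nhood(3,2) nhood(3,3) nhood(2,3) nhood(1,3)]);
cn = 0.5 * sum(abs(diff([p p(1)])));   % CN = 1: ending, CN = 3: bifurcation
end

Only pixels that are themselves ridge pixels are classified; CN values of 0, 2, and 4 mark isolated points, continuing ridge points, and crossings, respectively.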
We propose the use of a local minutiae descriptor known as the Minutia Cylinder Code (MCC) to improve robustness against distortion.

Local Minutiae Descriptor: Local descriptors have been widely used in fingerprint matching (e.g. [14, 15]). Feng and Zhou [16] evaluated the performance of local descriptors associated with fingerprint matching in four categories of fingerprints: good quality, poor quality, small common region, and large plastic distortion. They also coarsely classified the local descriptors as image-based, texture-based, and minutiae-based descriptors. A minutia cylinder records the neighbourhood information of a minutia as a 3-D function; a minutiae-extracted image is shown in Fig. 3.3. The cylinder contains several layers, and each layer represents the density of neighbouring minutiae along the corresponding direction. The cylinder can be concatenated into a vector, and therefore the similarity between two minutia cylinders can be computed efficiently.

Figure 3.3: Minutiae extracted image.

3.3 Normalization
The next step in the fingerprint enhancement process is image normalization. Normalization is used to standardize the intensity values of an image by adjusting the range of its grey-level values so that they lie within a desired range. Let I(i, j) represent the grey-level value at pixel (i, j) and N(i, j) represent the normalized grey-level value at pixel (i, j). The normalized image is defined by eq. 3.2:

N(i, j) = M0 + sqrt(V0 (I(i, j) − M)^2 / V)   if I(i, j) > M,
N(i, j) = M0 − sqrt(V0 (I(i, j) − M)^2 / V)   otherwise,   (3.2)

where M and V are the estimated mean and variance of I(i, j), respectively, and M0 and V0 are the desired mean and variance values, respectively.
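A minimal sketch of eq. 3.2 follows; the function name is an illustrative assumption:

function N = normalizeFingerprint(I, M0, V0)
% M0, V0: desired mean and variance; M, V are estimated from the input image.
I = double(I);
M = mean(I(:));                  % estimated mean
V = var(I(:));                   % estimated variance
D = sqrt(V0 * (I - M).^2 / V);   % deviation term of eq. 3.2
N = M0 - D;
N(I > M) = M0 + D(I > M);        % use the + branch where I(i,j) > M
end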
Normalization does not change the ridge structures in a fingerprint; it is performed to standardize the dynamic range of grey-level values, which facilitates the subsequent image enhancement stages.

3.4 Orientation Field
The orientation field can be used in several ways to improve fingerprint matching performance, such as by matching orientation fields directly and fusing the scores with other matching scores, or by enhancing the images to extract more reliable features. Orientation field estimation using a gradient-based method is very reliable in good quality images [13]; however, when the image contains noise, the estimation becomes very challenging. A few model-based orientation field estimation methods have been proposed [17, 18] that use singular points as input to the model. In the latent matching case, it is very challenging to estimate the orientation field based only on the image, due to the poor quality and small area of the latent. Moreover, if singular points are to be used, they need to be manually marked (and they are not always present) in the latent fingerprint image. Hence, we use the minutiae-based orientation field reconstruction algorithm proposed in [10], which takes manually marked minutiae in latents as input and outputs an orientation field, as shown in Fig. 3.4. This approach estimates the local ridge orientation in a block by averaging the directions of neighbouring minutiae. The orientation field is reconstructed only inside the convex hull of the minutiae. Since the directions of manually marked minutiae are very reliable, the orientation field reconstructed with this approach is quite accurate, except in areas devoid of minutiae or very close to singular points.

For rolled fingerprints, the orientation field is automatically extracted using a gradient-based method. The steps for calculating the orientation at pixel (i, j) are as follows:
Figure 3.4: (a) Orientation field with white background. (b) Orientation field with thinned image.

1. First, a block of size W × W is centered at pixel (i, j) in the normalized fingerprint image.

2. For each pixel in the block, compute the gradients ∂x(i, j) and ∂y(i, j), which are the gradient magnitudes in the x and y directions, respectively. The horizontal Sobel operator is used to compute ∂x(i, j):

[ 1  0 −1 ]
[ 2  0 −2 ]
[ 1  0 −1 ]

The vertical Sobel operator is used to compute ∂y(i, j):

[ 1  2  1 ]
[ 0  0  0 ]
[ −1 −2 −1 ]

3. The local orientation at pixel (i, j) can then be estimated using eqs. 3.3, 3.4, and 3.5:
V_x(i, j) = Σ_{u=i−W/2}^{i+W/2} Σ_{v=j−W/2}^{j+W/2} 2 ∂x(u, v) ∂y(u, v),   (3.3)

V_y(i, j) = Σ_{u=i−W/2}^{i+W/2} Σ_{v=j−W/2}^{j+W/2} (∂x^2(u, v) − ∂y^2(u, v)),   (3.4)

θ(i, j) = (1/2) tan^{−1}(V_x(i, j) / V_y(i, j)),   (3.5)

where θ(i, j) is the least-square estimate of the local orientation at the block centered at pixel (i, j).

4. Smooth the orientation field in a local neighbourhood using a Gaussian filter. The orientation image is first converted into a continuous vector field, defined by eqs. 3.6 and 3.7:

Φ_x(i, j) = cos(2θ(i, j)),   (3.6)
Φ_y(i, j) = sin(2θ(i, j)),   (3.7)

where Φ_x and Φ_y are the x and y components of the vector field, respectively. After the vector field has been computed, Gaussian smoothing is performed as given by eqs. 3.8 and 3.9:

Φ'_x(i, j) = Σ_{u=−w/2}^{w/2} Σ_{v=−w/2}^{w/2} G(u, v) Φ_x(i − uw, j − vw),   (3.8)

Φ'_y(i, j) = Σ_{u=−w/2}^{w/2} Σ_{v=−w/2}^{w/2} G(u, v) Φ_y(i − uw, j − vw),   (3.9)

where G is a Gaussian low-pass filter of size w × w.
5. The final smoothed orientation field O at pixel (i, j) is defined by eq. 3.10:

O(i, j) = (1/2) tan^{−1}(Φ'_x(i, j) / Φ'_y(i, j)).   (3.10)

3.5 Segmentation
Two regions describe any fingerprint image: the foreground region and the background region. The foreground region contains the ridges and valleys. As shown in Fig. 3.5, the ridges are the raised, dark regions of a fingerprint image, while the valleys are the low, white regions between the ridges. The foreground region, often referred to as the Region of Interest (RoI), is shown in Fig. 3.6. The background regions are mostly the outer regions, where the noise introduced into the image during enrolment is mostly found. The essence of segmentation is to reduce the burden of image enhancement by ensuring that the focus is only on the foreground regions while the background regions are ignored.

Figure 3.5: Ridges and valleys on a fingerprint image.

The background regions possess very low grey-level variance values, while the foreground regions possess very high grey-level variance values. A block processing approach used in [19] [20] is adopted in this work for obtaining the grey-level variance values.
The approach first divides the image into blocks of size W × W; the variance V(k) for the pixels in block k is then obtained from eqs. 3.11 and 3.12:

V(k) = (1/W^2) Σ_{i=1}^{W} Σ_{j=1}^{W} (I(i, j) − M(k))^2,   (3.11)

M(k) = (1/W^2) Σ_{a=1}^{W} Σ_{b=1}^{W} J(a, b),   (3.12)

where I(i, j) and J(a, b) are the grey-level values of pixels (i, j) and (a, b), respectively, in block k.

Figure 3.6: A fingerprint image and its foreground and background regions.
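A minimal sketch of this block-variance segmentation is given below; the block size, threshold, and test image are assumed values, and the project itself uses the equivalent blkproc-based routine listed in the Appendix:

I = double(imread('cameraman.tif'));              % any greyscale test image
W = 16; thresh = 100;                             % assumed parameters
fun = @(b) var(b.data(:)) * ones(size(b.data));   % V(k) over block k, eq. 3.11
varim = blockproc(I, [W W], fun);
mask = varim > thresh;                            % keep high-variance (foreground) blocks
imshow(mask)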
CHAPTER 4
DATABASE AND FINGERPRINT MATCHING

4.1 Introduction
In this chapter we report the orientation field estimation performance and the resulting matching performance on the FVC2002 latent fingerprint database and an overlapped fingerprint input. Finally, we discuss the impact of reference fingerprints on orientation field estimation.

4.2 Database FVC2002
FVC2002 is the Second International Competition for Fingerprint Verification Algorithms. The evaluation was held in April 2002, and the results of the 31 participants were presented at the 16th ICPR (International Conference on Pattern Recognition). The initiative was organized by D. Maio, D. Maltoni, and R. Cappelli from the Biometric Systems Lab (University of Bologna), J. L. Wayman from the U.S. National Biometric Test Center (San Jose State University), and A. K. Jain from the Pattern Recognition and Image Processing Laboratory of Michigan State University. A sample image from each FVC2002 database is shown in Fig. 4.1.

The size of each FVC2002 database is 110 fingers with 8 impressions per finger (880 impressions) (Fig. 4.2). Collecting some additional data provided a margin in case of collection errors, and also allowed the organizers to systematically choose, from the collected
impressions, which ones to include in the test databases.

Figure 4.1: One fingerprint image from each database.

An automatic all-against-all comparison was first performed using an internally developed fingerprint matching algorithm to discover possible data-collection errors. False match and false non-match errors were manually analyzed: two labeling errors were discovered and removed. The fingerprints in each database were then sorted by quality according to a quality index [21]. The ten highest-quality fingers were removed from each database, since they do not constitute an interesting case study. The remaining 110 fingers were split into set A (100 fingers, the evaluation set) and set B (10 fingers, the training set). To make set B representative of the whole database, the 110 collected fingers were ordered by quality and the 8 images of every tenth finger were included in set B; the remaining fingers constituted set A. After set B was made available to the participants for training, some of them reported the presence of fingerprint pairs whose relative rotation exceeded the maximum specification of about 35 degrees. This was not surprising: although the persons in charge of data collection were informed of the constraint, the requirement
that rotation be exaggerated while remaining within a maximum of about 35 degrees between any two samples is not simple to enforce in practice, especially when the volunteers are untrained users. A further semiautomatic analysis was then necessary to ensure that, in the evaluation set A, the samples complied with the initial specifications: maximum rotation and non-null overlap between any two impressions of the same finger. Software was developed to support this daunting task: all 12 originally collected impressions of the same finger were displayed at the same time, a subset of 8 impressions was selected by point and click, and once the selection was made, the software automatically compared the selected impressions and issued a warning if the rotation or displacement between any pair exceeded the maximum allowed. Fortunately, the 12 available samples always allowed a subset of 8 impressions compliant with the specification to be found.

Figure 4.2: Sample images from the database.

4.3 Fingerprint Matching
In order to estimate the alignment error, we use ground-truth mated minutiae pairs from FVC2002, marked by fingerprint examiners, to compute the average distance between the true mated pairs after alignment. If the average Euclidean distance for a given latent is less than a prespecified number of pixels in at least one of the ten best alignments, we consider it a correct alignment. This alignment is done to support the removal of false minutiae detected in the latent sample.
4.3.1 Alignment
In the latent matching case, singularities are not always present in latents, making it difficult to base the alignment of the fingerprint on singular points alone. Manually marked orientation fields are expensive to obtain, and automatically extracting an orientation field from a latent image is a very challenging problem. Since manually marking minutiae is common practice in latent matching, our approach to aligning two fingerprints is based on minutiae.

Local descriptors can also be used to align two fingerprints. In this case, the most similar minutiae pair is usually used as the base for the transformation parameters (rotation and translation), the most similar pair being chosen according to a measure of similarity between the local descriptors of the minutiae pair. Given two sets of points (minutiae), a matching score is computed for each transformation in the discretized set of all allowed transformations. For each pair of minutiae, one minutia from each image (latent or full), and for given scale and rotation parameters, unique translation parameters can be computed; each parameter set receives a vote proportional to the matching score of the corresponding transformation. In our approach, the alignment is conducted in a similar way, but the evidence for each parameter set is accumulated based on the similarity between the local descriptors of the two minutiae involved. The assumption here is that true mated minutiae pairs will vote for very similar sets of alignment parameters, while non-mated minutiae pairs will vote randomly throughout the parameter space. As a result, the set of parameters with the highest evidence is considered the best one; for robustness, the ten sets of alignment parameters with the strongest evidence are considered. In order to make the alignment computationally efficient and more accurate, we use the minutiae pairs that vote for a peak to compute a rigid transformation between the two fingerprints. Using the voting minutiae pairs to compute the transformation gives more accurate alignment parameters than directly using the peak parameters.
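The following is a hypothetical sketch of this descriptor-weighted voting, not the exact AFIS implementation; the bin sizes and function name are assumptions. Each minutiae pair proposes a rotation and translation, and its vote is weighted by the local-descriptor similarity sim(i, j):

function [bestBin, acc] = alignVote(latent, rolled, sim)
% latent: Nx3 and rolled: Mx3 arrays of [x y theta]; sim(i,j): similarity
% of the local descriptors (e.g. MCC) of latent minutia i and rolled minutia j.
binT = 10; binR = pi/18;   % assumed translation (pixels) and rotation bins
acc = containers.Map('KeyType', 'char', 'ValueType', 'double');
for i = 1:size(latent, 1)
    for j = 1:size(rolled, 1)
        dth = mod(rolled(j,3) - latent(i,3) + pi, 2*pi) - pi;   % rotation hypothesis
        c = cos(dth); s = sin(dth);
        dx = rolled(j,1) - (c*latent(i,1) - s*latent(i,2));     % translation hypothesis
        dy = rolled(j,2) - (s*latent(i,1) + c*latent(i,2));
        key = sprintf('%d_%d_%d', round(dx/binT), round(dy/binT), round(dth/binR));
        if isKey(acc, key)
            acc(key) = acc(key) + sim(i, j);   % accumulate descriptor-weighted evidence
        else
            acc(key) = sim(i, j);
        end
    end
end
v = cell2mat(values(acc));
k = keys(acc);
[~, idx] = max(v);
bestBin = k{idx};   % parameter bin with the strongest evidence
end

In the full algorithm, the ten strongest bins are retained, and the minutiae pairs voting for each bin are used to fit the rigid transformation.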
4.3.2 Similarity Measure
For each of the 10 alignments, a matching score between the two fingerprints is computed by comparing minutiae and orientation fields; the maximum of the 10 scores is chosen as the final matching score between the two fingerprints. To compute the minutiae matching score under a given alignment, we first find the corresponding minutiae pairs (one in the latent, one in the rolled print). For this purpose, we align the minutiae sets of the two fingerprints and then find a one-to-one matching between them using a greedy algorithm; the resulting score is expressed by eq. 4.1. For each minutia m_l in the latent, a set of candidate minutiae in the rolled print is found. A minutia m_r in the rolled print is a candidate if it has not yet been matched to any minutia and both its location and angle are sufficiently close to those of m_l. The threshold values T_s for spatial distance and T_A for angle distance were determined empirically. Among all candidates, the one closest to m_l in location is chosen as the matching minutia of m_l.

S_M = (1/N) Σ_{i=1}^{N} s_c(i) s_s(i),   (4.1)

where s_c(i) denotes the similarity between the minutia cylinder codes of the ith pair of matched minutiae, s_s(i) = 1 − d_s(i)/(2 T_s) maps the spatial distance d_s(i) of the ith pair of matched minutiae into a similarity score, and N denotes the number of minutiae in the latent. According to eq. 4.1, the matching score depends on the number of matching minutiae, which is itself affected by the distance threshold. However, due to the large distortion present in many latents, it is difficult to choose an appropriate value for T_s: a large threshold leads to more matching minutiae for distorted mated pairs, but the number of matching minutiae for non-mated pairs increases too. Hence, we use two different values (15 pixels and 25 pixels); for each threshold, a set of matching minutiae is found and a matching score is computed using eq. 4.1.
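A minimal sketch of the greedy pairing and score of eq. 4.1 follows; the function name is an assumption, and sc(i, j) stands for precomputed minutia-cylinder-code similarities:

function SM = minutiaeScore(lat, rol, sc, Ts, TA)
% lat: Nx3 latent minutiae [x y theta]; rol: Mx3 rolled minutiae;
% Ts, TA: empirical spatial and angle thresholds.
N = size(lat, 1);
used = false(size(rol, 1), 1);
SM = 0;
for i = 1:N
    best = 0; bestd = inf;
    for j = 1:size(rol, 1)
        d = norm(lat(i,1:2) - rol(j,1:2));   % spatial distance
        dth = abs(lat(i,3) - rol(j,3));
        dth = min(dth, 2*pi - dth);          % angle distance
        if ~used(j) && d < Ts && dth < TA && d < bestd
            best = j; bestd = d;             % closest unmatched candidate
        end
    end
    if best > 0
        used(best) = true;
        SM = SM + sc(i, best) * (1 - bestd/(2*Ts));   % sc(i) * ss(i)
    end
end
SM = SM / N;   % eq. 4.1
end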
CHAPTER 5
IMPLEMENTATION OF THE PROPOSED ALGORITHM

5.1 Introduction
Using MATLAB Version 7.11.0 (R2010b), both the proposed enrolment and verification phases are implemented as described in the next three subsections.

5.2 Enhancement of the fingerprint image
The first step is to enhance the fingerprint image by adjusting the contrast level using the imadjust() function. Binarization of the image is then performed with a threshold value of 160. The binarization result on a monochrome image is shown in Fig. 5.1.

Figure 5.1: (a) Input image. (b) Binarized output.
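As a minimal sketch of this enhancement step (the test image name is an assumption; the project's full script appears in the Appendix):

img = imread('fingerprint.jpg');               % assumed input file
if ndims(img) == 3, img = rgb2gray(img); end
img = imadjust(img, [0.3 0.7], []);            % stretch the mid-range contrast
bw = img > 160;                                % binarize above grey level 160
imshow(bw)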
Thinning is then done using the bwmorph() function, as shown in Fig. 5.2. This morphological operator works on binary images and applies the chosen operation n times; n can be Inf, in which case the operation is repeated until the image no longer changes.

Syntax: BW2 = bwmorph(BW, operation, n)

When used with the 'thin' option, bwmorph() uses the following algorithm:

1. Divide the image into two distinct subfields in a checkerboard pattern.
2. In the first subiteration, delete pixel p from the first subfield.
3. In the second subiteration, delete pixel p from the second subfield.

Figure 5.2: (a) Binarized image. (b) Thinned output.

The two subiterations together make up one iteration of the thinning algorithm. When the user specifies an infinite number of iterations (n = Inf), the iterations are repeated until the image stops changing. The conditions are all tested using applylut with precomputed lookup tables.
5.3 Minutiae Extraction

Ridge Ending
Ridge endings are found using the nlfilter() function, as shown in Fig. 5.3(b). It performs general sliding-neighborhood operations.

Syntax:
B = nlfilter(A, [m n], fun)
B = nlfilter(A, 'indexed', ...)

B = nlfilter(A, [m n], fun) applies the function fun to each m-by-n sliding block of the grayscale image A. fun is a function handle that accepts an m-by-n matrix as input and returns a scalar result, c = fun(x), where c is the output value for the center pixel of the m-by-n block x. (Parameterizing Functions, in the MATLAB Mathematics documentation, explains how to provide additional parameters to fun.) nlfilter calls fun for each pixel in A and zero-pads the m-by-n block at the edges where necessary. B = nlfilter(A, 'indexed', ...) processes A as an indexed image, padding with 1's if A is of class single or double, and with 0's if A is of class logical, uint8, or uint16.

5.3.1 Ridge Bifurcation
Ridge bifurcations are found using the bwlabel() function, as shown in Fig. 5.3(b). It labels connected components in a 2-D binary image.

Syntax:
L = bwlabel(BW, n)
[L, num] = bwlabel(BW, n)

L = bwlabel(BW, n) returns a matrix L, of the same size as BW, containing labels for the connected objects in BW. The variable n can have a value of either 4 or 8:
4 specifies 4-connected objects and 8 specifies 8-connected objects; if the argument is omitted, it defaults to 8. The elements of L are integer values greater than or equal to 0. The pixels labeled 0 are the background; the pixels labeled 1 make up one object, the pixels labeled 2 make up a second object, and so on. [L, num] = bwlabel(BW, n) additionally returns in num the number of connected objects found in BW.

Figure 5.3: (a) Thinned image. (b) Extracted ridge endings and bifurcations.

5.3.2 Minutiae Table
For constructing the minutiae table (Table 5.1), we used the round-towards-infinity function ceil(). It rounds the elements of A to the nearest integers greater than or equal to A; for complex A, the imaginary and real parts are rounded independently.

Syntax: B = ceil(A)
Table 5.1: Extracted information from the image in terms of ridge termination and bifurcation.

Ridge termination and bifurcation points (x, y):
(144, 60), (150, 66), (172, 97), (127, 109), (146, 120), (212, 127), (191, 131), (168, 136), (153, 145), (132, 152), (115, 157), (211, 157), (215, 162), (133, 192), (114, 197), (180, 202), (139, 208), (211, 214), (192, 215), (145, 218), (167, 225), (163, 232), (168, 239), (190, 241), (215, 243), (194, 246), (158, 249)

5.3.3 False Minutiae Removal
The preprocessing stage does not completely heal the fingerprint image. For example, false ridge breaks due to an insufficient amount of ink, and ridge cross-connections due to over-inking, are not totally eliminated. In fact, the earlier stages themselves occasionally introduce artifacts that later lead to spurious minutiae. These false minutiae will significantly affect the accuracy of matching if they are simply regarded as genuine
minutiae. Our procedure for removing false minutiae is as follows:

1. If the distance between a bifurcation and a termination is less than D and the two minutiae lie on the same ridge, remove both of them, where D is the average inter-ridge width, i.e., the average distance between two parallel neighbouring ridges.

2. If the distance between two bifurcations is less than D and they lie on the same ridge, remove both bifurcations.

3. If two terminations are within a distance D, their directions coincide to within a small angle variation, and no other termination lies between them, then the two terminations are regarded as false minutiae derived from a broken ridge and are removed.

4. If two terminations lie on a short ridge of length less than D, remove both terminations.

5.4 Orientation field

1. The fspecial() function creates predefined 2-D filters.

Syntax:
h = fspecial(type)
h = fspecial(type, parameters)

h = fspecial(type) creates a two-dimensional filter h of the specified type, where type is a string with one of a set of predefined values. fspecial returns h as a correlation kernel, which is the appropriate form to use with imfilter. For example, h = fspecial('gaussian', hsize, sigma) returns a rotationally symmetric Gaussian lowpass filter of size hsize with (positive) standard deviation sigma.
hsize can be a vector specifying the number of rows and columns in h, or a scalar, in which case h is a square matrix.

2. The filter2() function applies a 2-D digital filter.

Syntax:
Y = filter2(h, X)
Y = filter2(h, X, shape)

Y = filter2(h, X) filters the data in X with the two-dimensional FIR filter in the matrix h. It computes the result, Y, using two-dimensional correlation, and returns the central part of the correlation, which is the same size as X.

3. The quiver() function produces a quiver (velocity) plot, as shown in Fig. 5.4. A quiver plot displays velocity vectors as arrows with components (u, v) at the points (x, y).

Syntax: quiver(x, y, u, v)

Figure 5.4: (a) Thinned image. (b) Orientation field.

A short sketch combining these three functions is given below.
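The sketch is illustrative only: the orientation matrix is a random stand-in for the field computed in Section 3.4, and the filter size is an assumed value.

theta = rand(30, 30) * pi;                 % stand-in orientation field
h = fspecial('gaussian', 5, 1);            % 5x5 Gaussian low-pass filter
c2 = filter2(h, cos(2*theta));             % smooth the doubled-angle components
s2 = filter2(h, sin(2*theta));
theta_s = 0.5 * atan2(s2, c2);             % smoothed orientation
[x, y] = meshgrid(1:30, 1:30);
quiver(x, y, cos(theta_s), -sin(theta_s), 0.5)   % draw the orientation vectors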
5.4.1 Segmentation and Region of Interest
The regionprops() function, which measures properties of image regions, is used to find the region of interest, as shown in Fig. 5.5.

Syntax: STATS = regionprops(L, properties)

STATS = regionprops(L, properties) measures a set of properties for each labeled region in the label matrix L. Positive integer elements of L correspond to different regions: the set of elements of L equal to 1 corresponds to region 1, the set of elements equal to 2 corresponds to region 2, and so on. STATS is a structure array with length equal to the number of labeled objects, max(L(:)). The fields of the structure array give the different properties of each region, as specified by properties.

Figure 5.5: Marked region of interest.
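As a minimal sketch (with a synthetic mask standing in for the segmented fingerprint), the region of interest can be located as follows:

mask = false(100, 100);
mask(30:80, 25:75) = true;                      % stand-in foreground region
L = bwlabel(mask);                              % label connected components
stats = regionprops(L, 'Area', 'BoundingBox');
[~, biggest] = max([stats.Area]);
roi = stats(biggest).BoundingBox                % bounding box of the largest region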
5.5 Minutiae Match
Given the minutiae sets of two fingerprint images, the minutiae match algorithm determines whether the two sets come from the same finger. Fig. 5.6 shows the similarity measure between two fingerprints.

Figure 5.6: Similarity comparison.

An alignment-based match algorithm consists of two consecutive stages, an alignment stage followed by a match stage:

1. Alignment stage: Given two fingerprint images to be matched, choose one minutia from each image and calculate the similarity of the two ridges associated with the two referenced minutiae points. If the similarity is larger than a threshold, transform each set of minutiae to a new coordinate system whose origin is at the referenced point and whose x-axis is coincident with the direction of the referenced point.

2. Match stage: After obtaining the two sets of transformed minutiae points, we use an elastic match algorithm to count the matched minutiae pairs, treating two minutiae as identical when they have nearly the same position and direction.
CHAPTER 6
RESULT

6.1 Results and Discussion
This chapter presents the results generated by the GUI software designed in MATLAB, shown in Fig. 6.1. First, we created a database of 10 fingerprints. The steps involved are as follows:

1. Load the image into the software together with the details of the person:

Figure 6.1: Graphical User Interface (GUI) for creating the database.

After saving the personal information, we extract the features of the fingerprint.
2. Apply the binarization technique:

Figure 6.2: Binarization.

3. Apply the thinning process:

Figure 6.3: Thinning.
4. Mark the minutiae points:

Figure 6.4: Minutiae extraction.

5. Calculate and mark the orientation field:

Figure 6.5: Orientation field.
6. Mark the region of interest:

Figure 6.6: Marked region of interest.

The figures above show the output images of the different operations performed. The numerical values obtained in the background are used to match against the fingerprint.

7. After creating the database, we match fingerprints against it. The software takes a latent image as input, matches its minutiae points and orientation against the database, and generates a matching score. The following results were obtained:

Table 6.1: Results after comparing similarities between the input and other fingerprints in the FVC2002 database
FVC Database | Input | Match Score
101 1 | 101 1 | 1
101 2 | 101 1 | 0.770
102 1 | 101 1 | 0.197
102 2 | 101 1 | 0.245
103 1 | 101 1 | 0.180
103 2 | 101 1 | 0.217
104 1 | 101 1 | 0.247
104 2 | 101 1 | 0.223
Table 6.2: Extracted information from the image in terms of ridge termination, bifurcation, and orientation field
Ridge termination / bifurcation | Orientation field
144 60 | 135, 67
150 66 | 133, 101
172 97 | 198, 101
127 109 | 192, 107
146 120 | 136, 122
212 127 | 220, 172
191 131 | 0, 0
168 136 | 0, 0
153 145 | 0, 0
132 152 | 0, 0
115 157 | 0, 0
211 157 | 0, 0
215 162 | 0, 0
133 192 | 0, 0
114 197 | 0, 0
180 202 | 0, 0
139 208 | 0, 0
211 214 | 0, 0
192 215 | 0, 0
145 218 | 0, 0
167 225 | 0, 0
163 232 | 0, 0
168 239 | 0, 0
190 241 | 0, 0
215 243 | 0, 0
194 246 | 0, 0
158 249 | 0, 0
Figure 6.7: Matching with a similar fingerprint.

Figure 6.8: Matching with a non-similar fingerprint.
CHAPTER 7
CONCLUSION

The primary focus of the work in this project is the enhancement of fingerprint images and the subsequent extraction of minutiae. First, we implemented a series of techniques for fingerprint image enhancement to facilitate the extraction of minutiae. Experiments were then conducted using a combination of synthetic test images and real fingerprint images, in order to provide a well-balanced evaluation of the performance of the implemented algorithm. The use of synthetic images provides a more quantitative and accurate measure of performance, whereas real images rely on qualitative inspection but offer a more realistic evaluation, since they naturally exhibit fingerprint imperfections such as noise and corrupted elements. The experimental results have shown that, combined with an accurate estimation of the orientation and ridge frequency, our Automated Fingerprint Identification System is able to effectively enhance the clarity of the ridge structures while reducing noise. In contrast, for low quality images that exhibit high intensities of noise, the filter is less effective in enhancing the image, due to inaccurate estimation of the orientation and ridge frequency parameters. In practice, however, this does not pose a significant limitation, as fingerprint matching techniques generally place more emphasis on the well-defined regions and will disregard an image if it is severely corrupted. Overall, the results show that our Automated Fingerprint Identification System is useful to employ prior to minutiae extraction.
CHAPTER 8
APPENDIX

8.1 Matlab Code

% ------------------------------------------------------------------
clear all; clc;
addpath(genpath(pwd));

% LOAD FINGERPRINT TEMPLATE DATABASE
load('db.mat')

% EXTRACT FEATURES FROM AN ARBITRARY FINGERPRINT
[filename, PathName] = uigetfile('*.jpg;*.png;*.tif;*.jpeg;*.bmp', 'Load image File');
img = imread([PathName '/' filename]);
figure(1)
imshow(img)
img = imresize(img, [300 300]);
if ndims(img) == 3; img = rgb2gray(img); end   % colour images
disp(['Extracting features from ' filename ' ...']);
img = imadjust(img, [.3 .7], []);
J = img(:,:,1) > 160;
figure(2)
imshow(J)
set(gcf, 'position', [1 1 600 600]);
K = bwmorph(~J, 'thin', Inf);
figure(3)
imshow(K)
ffnew = extMinutia(img, K);
figure(8)

% CALCULATE MATCHING SCORE IN COMPARISON WITH EACH STORED TEMPLATE
load('xdb.mat')   % provides x (template count) and the templates ff (assumed cell array)
for i = 1:x
    S(i) = match(ffnew, ff{i});
    drawnow
end

% REPORT MATCHED FINGERPRINTS
MatchedFingerPrints = find(S > 0.65)

% ------------------------------------------------------------------
% MINUTIAE EXTRACTION
function [a5] = extMinutia(I, K)
fun = @minutie;
L = nlfilter(K, [3 3], fun);   % crossing-number value at each pixel
LTerm = (L == 1);              % candidate ridge endings
LTermLab = bwlabel(LTerm);
propTerm = regionprops(LTermLab, 'Centroid');
CentroidTerm = round(cat(1, propTerm(:).Centroid));
figure(4)
imshow(K)
hold on
plot(CentroidTerm(:,1), CentroidTerm(:,2), 'ro')
hold off
CentroidFinX = CentroidTerm(:,1);
CentroidFinY = CentroidTerm(:,2);
LSep = (L == 3);               % candidate ridge bifurcations
LSepLab = bwlabel(LSep);
propSep = regionprops(LSepLab, 'Centroid', 'Image');
CentroidSep = round(cat(1, propSep(:).Centroid));
CentroidSepX = CentroidSep(:,1);
CentroidSepY = CentroidSep(:,2);
figure(5)
imshow(K)
hold on
plot(CentroidSepX, CentroidSepY, 'g*')
hold off
figure(6)
imshow(K)
hold on
plot(CentroidTerm(:,1), CentroidTerm(:,2), 'ro')
plot(CentroidSepX, CentroidSepY, 'g*')
hold off
D = 10;

% Process 1: remove ending/bifurcation pairs closer than D
Distance = DistEuclidian([CentroidSepX CentroidSepY], [CentroidFinX CentroidFinY]);
SpuriousMinutae = Distance < D;
[i, j] = find(SpuriousMinutae);
CentroidSepX(i) = [];
CentroidSepY(i) = [];
CentroidFinX(j) = [];
CentroidFinY(j) = [];

% Process 2: remove bifurcation pairs closer than D
D = 7;
Distance = DistEuclidian([CentroidSepX CentroidSepY]);
SpuriousMinutae = Distance < D;
[i, j] = find(SpuriousMinutae);
CentroidSepX(i) = [];
CentroidSepY(i) = [];

% Process 3: remove ending pairs closer than D
D = 6;
Distance = DistEuclidian([CentroidFinX CentroidFinY]);
SpuriousMinutae = Distance < D;
[i, j] = find(SpuriousMinutae);
CentroidFinX(i) = [];
CentroidFinY(i) = [];

Kopen = imclose(K, strel('square', 7));
KopenClean = imfill(Kopen, 'holes');
KopenClean = bwareaopen(KopenClean, 5);
KopenClean([1 end], :) = 0;
KopenClean(:, [1 end]) = 0;
ROI = imerode(KopenClean, strel('disk', 10));

% Suppress minutiae lying outside the region of interest
[m, n] = size(K(:,:,1));
indFin = sub2ind([m, n], CentroidFinX, CentroidFinY);
Z = zeros(m, n);
Z(indFin) = 1;
size(ROI')   % (debug) check that the mask and minutiae map sizes agree
size(Z)
ZFin = Z .* ROI';
[CentroidFinX, CentroidFinY] = find(ZFin);
indSep = sub2ind([m, n], CentroidSepX, CentroidSepY);
Z = zeros(m, n);
Z(indSep) = 1;
ZSep = Z .* ROI';
[CentroidSepX, CentroidSepY] = find(ZSep);
figure(7)
imshow(I)
hold on
image(255*ROI)
alpha(0.5)
plot(CentroidFinX, CentroidFinY, 'ro', 'linewidth', 2)
plot(CentroidSepX, CentroidSepY, 'go', 'linewidth', 2)
hold off
m1 = max(length(CentroidFinX), length(CentroidFinY));
m2 = max(length(CentroidSepX), length(CentroidSepY));
m3 = max(m1, m2);
% Pad the coordinate lists to a common length with zeros
a1 = [CentroidFinX(1:length(CentroidFinX), 1); zeros(m3-length(CentroidFinX), 1)];
a2 = [CentroidFinY(1:length(CentroidFinY), 1); zeros(m3-length(CentroidFinY), 1)];
a3 = [CentroidSepX(1:length(CentroidSepX), 1); zeros(m3-length(CentroidSepX), 1)];
a4 = [CentroidSepY(1:length(CentroidSepY), 1); zeros(m3-length(CentroidSepY), 1)];
a5 = [a1, a2, a3, a4];

% ------------------------------------------------------------------
% COORDINATE TRANSFORM FUNCTION
function [T] = transform(M, i)
Count = size(M, 1);
XRef  = M(i, 1);
YRef  = M(i, 2);
ThRef = M(i, 4);
T = zeros(Count, 4);
R = [cos(ThRef) sin(ThRef) 0; -sin(ThRef) cos(ThRef) 0; 0 0 1];   % transformation matrix
for i = 1:Count
    B = [M(i,1)-XRef; M(i,2)-YRef; M(i,4)-ThRef];
    T(i, 1:3) = R*B;
    T(i, 4) = M(i, 3);
end
end

% ------------------------------------------------------------------
% COORDINATE TRANSFORM FUNCTION (rotation by alpha)
function [Tnew] = transform2(T, alpha)
Count = size(T, 1);
Tnew = zeros(Count, 4);
R = [cos(alpha) sin(alpha) 0 0; -sin(alpha) cos(alpha) 0 0; 0 0 1 0; 0 0 0 1];   % transformation matrix
for i = 1:Count
    B = T(i,:) - [0 0 alpha 0];
    Tnew(i,:) = R*B';
end
end

% ------------------------------------------------------------------
% RIDGE ORIENTATION CALCULATION
function [orientim, reliability, coherence] = ...
    ridgeorient(im, gradientsigma, blocksigma, orientsmoothsigma)
if ~exist('orientsmoothsigma', 'var'), orientsmoothsigma = 0; end
[rows, cols] = size(im);
% Calculate image gradients.
sze = fix(6*gradientsigma);
if ~mod(sze,2); sze = sze+1; end
f = fspecial('gaussian', sze, gradientsigma);   % generate Gaussian filter
[fx, fy] = gradient(f);                         % gradient of Gaussian
Gx = filter2(fx, im);   % gradient of the image in x
Gy = filter2(fy, im);   % ... and y
% Estimate the local ridge orientation at each point
Gxx = Gx.^2;            % covariance data for the image gradients
Gxy = Gx.*Gy;
Gyy = Gy.^2;
sze = fix(6*blocksigma);
if ~mod(sze,2); sze = sze+1;
end
f = fspecial('gaussian', sze, blocksigma);
Gxx = filter2(f, Gxx);
Gxy = 2*filter2(f, Gxy);
Gyy = filter2(f, Gyy);
% Analytic solution of principal direction
denom = sqrt(Gxy.^2 + (Gxx - Gyy).^2) + eps;
sin2theta = Gxy./denom;          % sine and cosine of doubled angles
cos2theta = (Gxx-Gyy)./denom;
if orientsmoothsigma
    sze = fix(6*orientsmoothsigma);
    if ~mod(sze,2); sze = sze+1; end
    f = fspecial('gaussian', sze, orientsmoothsigma);
    cos2theta = filter2(f, cos2theta);   % smoothed sine and cosine of
    sin2theta = filter2(f, sin2theta);   % the doubled angles
end
orientim = pi/2 + atan2(sin2theta, cos2theta)/2;
Imin = (Gyy+Gxx)/2 - (Gxx-Gyy).*cos2theta/2 - Gxy.*sin2theta/2;
Imax = Gyy+Gxx - Imin;
reliability = 1 - Imin./(Imax+.001);
coherence = ((Imax-Imin)./(Imax+Imin)).^2;
reliability = reliability.*(denom > .001);

% ------------------------------------------------------------------
% PLOTTING OF RIDGE ORIENTATION
function plotridgeorient(orient, spacing, im, figno, I)
if fix(spacing) ~= spacing
    error('spacing must be an integer');
end
[rows, cols] = size(orient);
lw = 2;               % linewidth
len = 0.8*spacing;    % length of orientation lines
% Subsample the orientation data according to the specified spacing
s_orient = orient(spacing:spacing:rows-spacing, ...
                  spacing:spacing:cols-spacing);
xoff = len/2*cos(s_orient);
yoff = len/2*sin(s_orient);
if nargin >= 3   % display fingerprint image
    if nargin == 4
        imshow(im, figno);
    else
        imshow(im);
    end
end
% Determine placement of orientation vectors
[x, y] = meshgrid(spacing:spacing:cols-spacing, ...
                  spacing:spacing:rows-spacing);
x = x - xoff;
y = y - yoff;
% Orientation vectors
u = xoff*2;
v = yoff*2;
imshow(I)
hold on
quiver(x, y, u, v, 0, '.', 'linewidth', 1, 'color', 'r');
axis equal, axis ij, hold off

% ------------------------------------------------------------------
% RIDGE SEGMENTATION
function [normim, mask, maskind] = ridgesegment(im, blksze, thresh)
im = normalise(im);   % normalise to have zero mean, unit std dev
fun = inline('std(x(:))*ones(size(x))');
stddevim = blkproc(im, [blksze blksze], fun);
mask = stddevim > thresh;
maskind = find(mask);
% Renormalise image so that the *ridge regions* have zero mean,
% unit standard deviation.
im = im - mean(im(maskind));
normim = im/std(im(maskind));

% ------------------------------------------------------------------
% IMAGE NORMALISATION
function n = normalise(im, reqmean, reqvar)
if ~(nargin == 1 || nargin == 3)
    error('No of arguments must be 1 or 3');
end
if nargin == 1   % normalise to the range 0-1
    if ndims(im) == 3
        hsv = rgb2hsv(im);
        v = hsv(:,:,3);
        v = v - min(v(:));
        v = v/max(v(:));
        hsv(:,:,3) = v;
        n = hsv2rgb(hsv);
    else   % assume greyscale
        if ~isa(im, 'double'), im = double(im); end
        n = im - min(im(:));
        n = n/max(n(:));
    end
else   % normalise to desired mean and variance
    if ndims(im) == 3   % colour image
        error('cannot normalise colour image to desired mean and variance');
    end
    if ~isa(im, 'double'), im = double(im); end
    im = im - mean(im(:));
    im = im/std(im(:));
    n = reqmean + im*sqrt(reqvar);
end

% ------------------------------------------------------------------
% TRANSFORMED MINUTIAE MATCHING SCORE
function [sm] = score(T1, T2)
Count1 = size(T1, 1);
Count2 = size(T2, 1);
n = 0;
T = 15;    % spatial distance threshold (pixels)
TT = 14;   % angle distance threshold (degrees)
for i = 1:Count1
    Found = 0;
    j = 1;
    while (Found == 0) && (j <= Count2)
        dx = (T1(i,1) - T2(j,1));
        dy = (T1(i,2) - T2(j,2));
        d = sqrt(dx^2 + dy^2);
        if d < T
            DTheta = abs(T1(i,3) - T2(j,3))*180/pi;
            DTheta = min(DTheta, 360 - DTheta);
            if DTheta < TT
                n = n + 1;
                Found = 1;
            end
        end
        j = j + 1;
    end
end
sm = sqrt(n^2/(Count1*Count2));   % similarity index
end

% ------------------------------------------------------------------
% EUCLIDEAN DISTANCES BETWEEN MINUTIAE SETS
function D = DistEuclidian(dataset1, dataset2)
h = waitbar(0, 'Distance Computation');
switch nargin
    case 1
        [m1, n1] = size(dataset1);
        m2 = m1;
        D = zeros(m1, m2);
        for i = 1:m1
            waitbar(i/m1)
            for j = 1:m2
                if i == j
                    D(i,j) = NaN;
                else
                    D(i,j) = sqrt((dataset1(i,1)-dataset1(j,1))^2 + (dataset1(i,2)-dataset1(j,2))^2);
                end
            end
        end
    case 2
        [m1, n1] = size(dataset1);
        [m2, n2] = size(dataset2);
        D = zeros(m1, m2);
        for i = 1:m1
            waitbar(i/m1)
            for j = 1:m2
                D(i,j) = sqrt((dataset1(i,1)-dataset2(j,1))^2 + (dataset1(i,2)-dataset2(j,2))^2);
            end
        end
    otherwise
        error('only one or two input arguments')
end
close(h)

% ------------------------------------------------------------------
% FINGERPRINT MATCHING SCORE
function [S] = match(M1, M2, display_flag)
if nargin == 2;
    display_flag = 0;
end
M1 = M1(M1(:,3) < 5, :);
M2 = M2(M2(:,3) < 5, :);
count1 = size(M1, 1);
count2 = size(M2, 1);
bi = 0; bj = 0; ba = 0;   % best i, j, alpha
S = 0;                    % best similarity score
for i = 1:count1
    T1 = transform(M1, i);
    for j = 1:count2
        if M1(i,3) == M2(j,3)
            T2 = transform(M2, j);
            for a = -5:5   % alpha
                T3 = transform2(T2, a*pi/180);
                sm = score(T1, T3);
                if S < sm
                    S = sm;
                    bi = i; bj = j; ba = a;
                end
            end
        end
    end
end
if display_flag == 1
    figure, title(['Similarity Measure: ' num2str(S)]);
    T1 = transform(M1, bi);
    T2 = transform(M2, bj);
    T3 = transform2(T2, ba*pi/180);
    plot_data(T1, 1);
    plot_data(T3, 2);
end
end
% ------------------------------------------------------------------
REFERENCES

[1] A. A. Paulino, J. Feng, and A. K. Jain, "Latent fingerprint matching using descriptor-based Hough transform," IEEE Transactions on Information Forensics and Security, vol. 8, pp. 1–15, Jan 2013.

[2] A. A. Paulino, J. Feng, and A. K. Jain, "Latent fingerprint matching using descriptor-based Hough transform," in Proc. Int. Joint Conf. Biometrics, pp. 1–7, Oct 2011.

[3] A. Jain, L. Hong, and R. Bolle, "On-line fingerprint verification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 302–314, 1997.

[4] B. Janani, S. Valarmathi, A. Kumar, and S. Boobalakumaran, "Identification of palmprint and fingerprint using improved hierarchical minutiae matching," International Journal of Innovative Science, Engineering and Technology, vol. 1, Nov 2014.

[5] J. Feng, J. Zhou, and A. K. Jain, "Orientation field estimation for latent fingerprint enhancement," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, Aug 2012.

[6] B. T. Ulery, R. A. Hicklin, J. Buscaglia, and M. A. Roberts, "Accuracy and reliability of forensic latent fingerprint decisions," Proceedings of the National Academy of Sciences, 2011.

[7] L. Haber and R. N. Haber, "Error rates for human latent fingerprint examiners," pp. 339–360, 2003.
[8] R. Kausalya and A. Ramya, International Journal of Advanced Research in Computer and Communication Engineering, vol. 3, Feb 2014.

[9] R. Thai, "Fingerprint image enhancement and minutiae extraction," School of Computer Science and Software Engineering, University of Western Australia, 2003.

[10] J. Feng and A. K. Jain, "Fingerprint reconstruction: From minutiae to phase," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, pp. 209–223, Feb 2011.

[11] J. C. Amengual, A. Juan, J. C. Pérez, F. Prat, S. Sáez, and J. M. Vilar, "Real-time minutiae extraction in fingerprint images," in Proc. of the 6th Int. Conf. on Image Processing and its Applications, pp. 871–875, July 1997.

[12] S. Kasaei, M. Deriche, and B. Boashash, "Fingerprint feature extraction using block-direction on reconstructed images," in Proc. IEEE Region TEN Conf. on Digital Signal Processing Applications, pp. 303–306, Dec 1997.

[13] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. New York: Springer-Verlag, 2009.

[14] R. Cappelli, M. Ferrara, and D. Maltoni, "Minutia cylinder-code: A new representation and matching technique for fingerprint recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, pp. 2128–2141, Dec 2010.

[15] M. Tico and P. Kuosmanen, "Fingerprint matching using an orientation-based minutia descriptor," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1009–1014, Aug 2003.

[16] J. Feng and J. Zhou, "A performance evaluation of fingerprint minutia descriptors," in Proc. Int. Conf. Hand-Based Biometrics, pp. 1–6, Aug 2011.
[17] B. G. Sherlock and D. M. Monro, "A model for interpreting fingerprint topology," Pattern Recognition, vol. 26, pp. 1047–1055, 1993.

[18] S. Huckemann, T. Hotz, and A. Munk, "Global models for the orientation field of fingerprints: An approach based on quadratic differentials," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1507–1519, Sep 2008.

[19] L. Hong, Y. Wan, and A. Jain, Pattern Recognition and Image Processing Laboratory, Department of Computer Science, Michigan State University, pp. 1–30, 2006.

[20] R. Thai, "Fingerprint image enhancement and minutiae extraction," thesis, School of Computer Science and Software Engineering, University of Western Australia, pp. 1–30, 2003.

[21] D. Maio and D. Maltoni, "Direct gray-scale minutiae detection in fingerprints," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 27–40, Sep 1997.