TRIBHUVAN UNIVERSITY
INSTITUTE OF ENGINEERING
HIMALAYA COLLEGE OF ENGINEERING
[CODE: CT 755]
A
FINAL YEAR PROJECT REPORT
ON
IRIS RECOGNITION SYSTEM
BY:
Bina Acharya (070/BCT/11)
Manjila Khanal (070/BCT/23)
Rabindra Khadka (070/BCT/35)
Radeep Chapagain (070/BCT/36)
A REPORT SUBMITTED TO DEPARTMENT OF ELECTRONICS AND
COMPUTER ENGINEERING IN PARTIAL FULFILLMENT OF THE
REQUIREMENTS FOR THE BACHELOR'S DEGREE IN COMPUTER
ENGINEERING
DEPARTMENT OF ELECTRONICS AND COMPUTER ENGINEERING
LALITPUR, NEPAL
AUGUST, 2017
ii
A
PROJECT REPORT
ON
IRIS RECOGNITION SYSTEM
Prepared For
Department of Electronics and Computer Engineering
Himalaya College of Engineering
Chyasal, Lalitpur
Prepared By
Bina Acharya (070/BCT/11)
Manjila Khanal (070/BCT/23)
Rabindra Khadka (070/BCT/35)
Radeep Chapagain (070/BCT/36)
AUGUST, 2017
iii
Acknowledgment
It gives us immense pleasure to express our deepest sense of gratitude and sincere
thanks to our highly respected supervisor, Er. Hari Prasad Pokhrel, for his
insightful advice, motivating suggestions, invaluable guidance, constant support,
and encouragement throughout our project hours.
We would like to express our sincere thanks to Er. Alok Kaflea (Project
Coordinator, Department of Electronics and Computer Engineering) for giving us
the opportunity to undertake this project. We express our deep gratitude to
Er. Ashok GM (Head of Department, Electronics and Computer Engineering,
Himalaya College of Engineering) for his regular support, cooperation, and
coordination. The timely facilities provided by the department throughout the
project hours are also gratefully acknowledged.
We would like to convey our thanks to the teaching and non-teaching staff of the
Department of Electronics and Computer Engineering, HCOE, for their invaluable
help and support throughout the period of the project. We also express our
gratitude to all our friends and everyone who has been a part of this project by
providing their comments and suggestions.
Bina Acharya
Manjila Khanal
Rabindra Khadka
Radeep Chapagain
iv
Abstract
This report on “Iris Recognition” is submitted in partial fulfillment of the
requirements for the Bachelor's degree in Computer Engineering.
“Iris Recognition” is a biometric application. A biometric system is a technological
system in which a person is identified by the unique features possessed by that individual.
Due to the increasing need for security, such techniques are gaining popularity.
Several biometric features, such as fingerprint, iris, and voice, have been continuously
investigated and are still under consideration. Among these, iris recognition has been
a hot topic in pattern recognition and machine learning.
In this project we attempt to develop an application that identifies a person using the
unique pattern of his or her iris. In this application, the person is identified by matching
the features of the iris against the data stored in the database.
Keywords: Iris recognition, Biometric, Pattern recognition.
v
Table of Contents
Acknowledgment .....................................................................................................iii
Abstract ................................................................................................................... iv
Table of Contents.....................................................................................................v
List of Figures .........................................................................................................vii
Abbreviations.........................................................................................................viii
CHAPTER 1: INTRODUCTION ............................................................................1
1.1 Background ....................................................................................................2
1.2 Objective ........................................................................................................3
1.3 Problem Statement .........................................................................................3
1.4 Scope and Application ...................................................................................3
CHAPTER 2: LITERATURE REVIEW .................................................................4
2.1 Background ....................................................................................................5
2.2 Biometric Security..........................................................................................5
CHAPTER 3: REQUIREMENT ANALYSIS AND FEASIBILITY STUDY.........9
3.1 Feasibility Analysis......................................................................................10
3.1.1 Technical Feasibility .................................................................................. 10
3.1.2 Operational Feasibility............................................................................... 11
3.1.3 Economic Feasibility.................................................................................. 11
3.1.4 Schedule Feasibility................................................................................... 12
3.2 Requirement Definition................................................................................12
3.2.1 Functional Requirements........................................................................... 12
3.2.2 Non-Functional Requirements ................................................................... 12
3.3 Model and Software Process........................................................................13
CHAPTER 4: SYSTEM DESIGN AND ARCHITECTURE................................14
4.1 Use Case Diagram........................................................................................15
CHAPTER 5: METHODOLOGY .........................................................................17
vi
5.1 Image Acquisition........................................................................................18
5.2 Pre-Processing..............................................................................................18
5.2.1 Grayscale.................................................................................................. 18
5.2.2 Median filter............................................................................................. 19
5.2.3 Mean Filter............................................................................................... 20
5.3 Segmentation................................................................................................21
5.3.1 Pupil center detection............................................................................... 21
5.3.2 Canny edge detector................................................................................. 22
5.3.3 Iris radius detection................................................................................... 23
5.4 Normalization...............................................................................................23
5.5 Matching.......................................................................................................24
CHAPTER 6: TESTING........................................................................................25
6.1. Unit Testing.................................................................................................26
6.2. System Testing............................................................................................26
6.3. Performance Testing ...................................................................................27
6.4. Verification and Validation.........................................................................27
CHAPTER 7: DISCUSSION.............................................................................28
CHAPTER 8: CONCLUSION ..............................................................................29
Screenshots.............................................................................................................33
Reference ...............................................................................................................36
vii
List of Figures
Figure 1.1 Human Eye .............................................................................................2
Figure 5.2 System Architecture .............................................................................13
Figure 6.1 Original Gantt chart..............................................................................15
Figure 6.2 Current Gantt chart ...............................................................................16
viii
Abbreviations
App Application
ATM Automated Teller Machine
CASIA Chinese Academy of Sciences, Institute
of Automation
Dr. Doctor
IEEE Institute of Electrical and Electronics
Engineers
Open CV Open Source Computer Vision
PINs Personal Identification Numbers
UI User Interface
1
CHAPTER 1: INTRODUCTION
2
1.1 Background
A biometric system is a technological system in which a person is identified by the
unique features possessed by that individual (such as voice, fingerprint, facial features,
hand gestures, or iris). In any biometric system, a sample of the feature is first
captured and transformed into a biometric template. This template is later
compared with other templates to determine the identity.
The iris is a thin circular diaphragm which lies between the cornea and the lens of
the human eye. It is perforated close to its center by a circular aperture called the pupil.
The average diameter of the iris is 12 mm, and the size of the pupil varies from 10% to 80% of
the iris diameter. The unique pattern of the iris is formed during the first year of life;
it is random and not related to any genetic factors. Due to this epigenetic nature of iris
patterns, even identical twins possess uncorrelated iris patterns.
Figure 1.1 Human Eye
Compared with other biometric technologies, such as face, voice and fingerprint
recognition, iris recognition can be considered the most reliable form of biometric technology. In
addition, the iris has many special optical and physiological characteristics which can
be exploited to defend against possible forgery. [1]
3
1.2 Objective
The main objective of our application is to identify an individual with high
efficiency and accuracy by analyzing the random patterns visible within the iris of
an eye.
1.3 Problem Statement
Conventionally passwords, secret codes and PINs are used for identification which
can be easily stolen, observed or forgotten. In pattern recognition problems, the key
issue is the relation between inter-class and intra-class variability: objects can be
reliably classified only if the variability among different instances of a given class
is less than the variability between different classes. For example in face
recognition, difficulties arise from the fact that the face is a changeable social organ
displaying a variety of expressions, as well as being an active 3D object whose
image varies with viewing angle, pose, illumination, accoutrements, and age. So as
an alternative we propose to use biometrics (iris recognition) system to identify an
individual.
1.4 Scope and Application
The proposed system is an iris recognition system that can be used in various fields
for identification and authentication. Some of its applications are:
 Computer login: as a password
 Secure access to bank accounts at ATMs
 Premises access control (home, office, laboratory, etc.)
 Forensics: birth certificates; tracing missing or wanted persons
 Credit card authentication
 Secure financial transactions (e-commerce)
 Anti-terrorism (e.g. security screening at airports)
 Any existing use of keys, cards, PINs or passwords
4
CHAPTER 2: LITERATURE REVIEW
5
2.1 Background
Research on biometric methods has gained renewed attention in recent years
brought on by an increase in security concerns. The increasing crime rate has
influenced people and their governments to take action and be more proactive in
security issues. This need for security also extends to the need for individuals to
protect, among other things, their working environments, homes, personal
possessions and assets. Many biometric techniques have been developed and are
being improved with the most successful being applied in everyday law
enforcement and security applications. Biometric methods include several state-of-
the-art techniques. Among them, iris recognition is considered the most
powerful technique for security authentication in the present context.
Advances in sensor technology and an increasing demand for biometrics are driving
a burgeoning biometric industry to develop new technologies. As commercial
incentives increase, many new technologies for person identification are being
developed, each with its own strengths and weaknesses and a potential niche
market.
2.2 Biometric Security
 The term “Biometrics” is derived from the Greek words “bio” (life) and
“metrics” (to measure) (Rood and Hornak, 2008). Automated biometric
systems have only become available over the last few decades, due to the
significant advances in the field of computer and image processing.
Although biometric technology seems to belong in the twenty-first century,
the history of biometrics goes back thousands of years. The ancient
Egyptians and the Chinese played a large role in biometrics history. Today,
the focus is on using biometric face recognition, iris recognition, retina
recognition and identifying characteristics to stop terrorism and improve
security measures. This section provides a brief history on biometric
security and fingerprint recognition.
 During 1858, the first recorded systematic capture of hand and finger images
for identification purposes was carried out by Sir William Herschel of the Civil Service
of India, who recorded a handprint on the back of a contract for each worker
to distinguish employees (Komarinski, 2004).
 During 1870, Alphonse Bertillon developed a method of identifying
individuals based on detailed records of their body measurements, physical
descriptions and photographs. This method was termed as “Bertillonage” or
anthropometrics and the usage was aborted in 1903 when it was discovered
that some people share the same measurements and physical characteristics
(State University of New York at Canton, 2003).
 Sir Francis Galton, in 1892, developed a classification system for
fingerprints using minutiae characteristics that is being used by researchers
and educationalists even today. Sir Edward Henry, during 1896, paved way
to the success of fingerprint recognition by using Galton's theory to identify
prisoners by their fingerprint impressions. He devised a classification
system that allowed thousands of fingerprints to be easily filed, searched
and traced. He helped in the first establishment of fingerprint bureau in the
same year and his method gained worldwide acceptance for identifying
criminals (Scottish Criminal Record Office, 2002).
 The concept of using iris pattern for identification was first proposed by
Ophthalmologist Frank Burch in 1936 (Iradian Technologies, 2003). During
1960, the first semi-automatic face recognition system was developed by
Woodrow W. Bledsoe, which used the location of eyes, ears, nose and
mouth on the photographs for recognition purposes. In the same year, the
first model of acoustic speech production was created by a Swedish
Professor, Gunnar Fant. His invention is used in today's speaker recognition
system (Woodward et al, 2003).
 The first automated signature recognition system was developed by North
American Aviation during 1965 (Mauceri, 1965). This technique was later used,
in 1969, by the Federal Bureau of Investigation (FBI) in its
investigations to reduce the man hours invested in the analysis of signatures.
The year 1970 introduced face recognition for authentication.
Goldstein et al. (1971) used 21 specific markers, such as hair color and lip
thickness, to automate the recognition process. The main disadvantage of
such a system was that all these features had to be manually identified and
computed.
 During the same period, Dr. Joseph Perkell produced the first behavioral
components of speech to identify a person (Woodward et al, 2003). The first
commercial hand geometry system was made available in 1974 for physical
access control, time and attendance, and personal identification. The success
of this first automated biometric system motivated several funding agencies,
such as the FBI and NIST, to fund the development of scanners and feature extraction
technology (Ratha and Bolle, 2004), which would eventually lead to the
development of a perfect human recognizer. This resulted in the first
prototype of a speaker recognition system in 1976, which was developed by
Texas Instruments and was tested by the US Air Force and the MITRE
Corporation. In 1996, hand geometry was implemented successfully at
the Olympic Games, and the system was able to handle the
enrollment of over 65,000 people.
8
 Drs. Leonard Flom and Aran Safir, in 1985, found that no two irises are
alike, and their findings were awarded a patent in 1986. In the year 1988,
the first semi-automated facial recognition system was deployed by the
Lakewood Division of the Los Angeles County Sheriff's Department for
identifying suspects (Angela, 2009). This was followed by several landmark
contributions by Sirovich and Kirby (1989), Turk and Pentland
(1991), and Phillips et al. (2000) in the field of face recognition.
 The next stage in fingerprint automation occurred at the end of 1994 with
the Integrated Automated Fingerprint Identification System (IAFIS)
competition. The competition identified and investigated three major
challenges:
(1) Digital fingerprint acquisition
(2) Local ridge characteristic extraction and
(3) Ridge characteristic pattern matching (David et al., 2005).
 The first Automated Fingerprint Identification System (AFIS) was
developed by Palm System in 1993. During 1995, the iris biometric was
officially released as a commercial authentication tool by the Defense Nuclear
Agency and IriScan.
 The year 2000 saw the first Face Recognition Vendor Test (FRVT, 2000),
sponsored by US Government agencies, and the same year paved the way
for the first research paper on the use of vascular patterns for recognition
(Im et al., 2001). During 2003, the ICAO (International Civil Aviation
Organization) adopted blueprints for the integration of biometric
identification information into passports and other Machine Readable
Travel Documents (MRTDs). Facial recognition was selected as the
globally interoperable biometric for machine-assisted identity confirmation
with MRTDs.
 The first statewide automated palm print database was deployed by the US
in 2004. The Face Recognition Grand Challenge (FRGC) began in the same
year to advance work on the identification problem. In 2005, "Iris on the Move"
was announced at the Biometric Consortium Conference, enabling the collection
of iris images from individuals walking through a portal. [2]
9
CHAPTER 3: REQUIREMENT ANALYSIS AND
FEASIBILITY STUDY
10
3.1 Feasibility Analysis
A feasibility study is a preliminary study which investigates the information of
prospective users and determines the resources requirements, costs, benefits and
feasibility of proposed system. A feasibility study takes into account various
constraints within which the system should be implemented and operated. In this
stage, the resource needed for the implementation such as computing equipment,
manpower and costs are estimated. The estimated are compared with available
resources and a cost benefit analysis of the system is made. The feasibility analysis
activity involves the analysis of the problem and collection of all relevant
information relating to the project. The main objectives of the feasibility study are
to determine whether the project would be feasible in terms of economic feasibility,
technical feasibility and operational feasibility and schedule feasibility or not. It is
to make sure that the input data which are required for the project are available.
Thus we evaluated the feasibility of the system in terms of the following categories:
 Technical feasibility
 Operational feasibility
 Economic feasibility
 Schedule feasibility
3.1.1 Technical Feasibility
Evaluating the technical feasibility is the trickiest part of a feasibility study. This is
because, at this point in time, there is no detailed design of the system, making
it difficult to assess issues like performance and costs (on account of the kind of
technology to be deployed). A number of issues have to be considered while
doing a technical analysis: we must understand the different technologies involved in the
proposed system. Before commencing the project, we have to be very clear about
which technologies are required for the development of the new
system, and whether the required technology is available. The iris recognition system is
technically feasible. All the tools necessary for this system are easily available. It uses NetBeans
for application development. Though all the tools seem to be easily available, there
will be other challenges too.
3.1.2 Operational Feasibility
A proposed project is beneficial only if it can be turned into an information system that
will meet the operating requirements. Simply stated, this test of feasibility asks whether
the system will work when it is developed and installed, and whether there are major
barriers to implementation.
Since the proposed system was to help reduce the hardships encountered in the
current verification system, the new system was considered operationally
feasible. The purpose of any project is that the targeted audience or client uses the
software. Thus it is necessary for developers to understand the needs of the targeted
audience and implement them in the software.
The targeted users of our system are any organizations where authentication of
individuals plays a vital role. Although the system may not be 100% efficient, it is easy
to use, and minor errors can be handled after feature extraction.
3.1.3 Economic Feasibility
Economic feasibility attempts to weigh the costs of developing and implementing a
new system against the benefits that would accrue from having the new system in
place. This feasibility study gives the top management the economic justification
for the new system. A simple economic analysis which gives the actual comparison
of costs and benefits is much more meaningful in this case. In addition, it proves
to be a useful point of reference against which to compare actual costs as the project progresses.
There could be various types of intangible benefits on account of automation. These
include increased customer satisfaction, improvement in product quality, better
decision making, timeliness of information, expedited activities, improved
accuracy of operations, better documentation and record keeping, faster retrieval of
information, and better employee morale.
This is an application-based project, so the necessary tools can be obtained by
students at an affordable price. The creation of the application is not costly.
12
3.1.4 Schedule Feasibility
A project will fail if it takes too long to be completed before it is useful. Typically,
this means estimating how long the system will take to develop, and whether it can be
completed in a given period of time using methods like the payback period.
Schedule feasibility is a measure of how reasonable the project timetable is. Given our
technical expertise, are the project deadlines reasonable? Some projects are initiated
with specific deadlines. It is necessary to determine whether the deadlines are
mandatory or desirable.
A minor deviation was encountered from the original schedule decided at the
beginning of the project. The application development is feasible in terms of
schedule. [3]
3.2 Requirement Definition
After an extensive analysis of the problems in the system, we familiarized ourselves
with the requirements that the system needs. These requirements are categorized into
functional and non-functional requirements, listed below:
3.2.1 Functional Requirements
Functional requirements are the functions or features that must be included in any
system to satisfy the business needs and be acceptable to the users. Based on this,
the functional requirements that the system must satisfy are as follows:
 System should detect the individual on the basis of iris.
 System should process the input given by the user only if it is an image
file.
3.2.2 Non-Functional Requirements
Non-functional requirements are a description of the features, characteristics and
attributes of the system, as well as any constraints that may limit the boundaries of
the proposed system. The non-functional requirements are essentially based on
performance, information, economy, control, security, efficiency and services.
[4] Based on these, the non-functional requirements are as follows:
 User friendly
 System should provide better accuracy
 To perform with efficient throughput and response time
3.3 Model and Software Process
For the development of software, from the beginning through to the completion of the final
product, a specific process model has to be used. In the software development life cycle, it is
possible to use any software model, such as waterfall, incremental, prototype, agile, etc.,
based on the project requirements. In this project, the agile model has been used. It is
a type of incremental model in which software is developed in incremental and
rapid cycles. Whenever new changes have to be made, the agile model allows easy
implementation at very little cost. It minimizes risk by developing software in small
iterations. The planning, development and testing phases have been used iteratively to
implement new changes easily, which reduces the risk of project failure.
14
CHAPTER 4: SYSTEM DESIGN AND
ARCHITECTURE
15
4.1 Use Case Diagram
16
4.2 Diagram
17
CHAPTER 5: METHODOLOGY
18
The project we have been working on, the “Iris Recognition System”, has been
completed. We have used Java as our platform. The whole system is divided into the
following subsystems.
5.1 Image Acquisition
In this phase we acquire basic information from the user, such as name, phone
number, email address, and an image of an eye. This information is stored by the
system. The image first goes through several processing steps and only then is stored.
5.2 Pre-Processing
Pre-processing of an image refers to operations performed on the image at the lowest level
of abstraction. A raw image without pre-processing may have a variety of problems,
and therefore it is not likely to produce the best computer vision results. The aim of
pre-processing is an improvement of the image data that suppresses unwanted distortions
or enhances image features that matter for further processing. Image pre-processing can
have a dramatic positive effect on the quality of feature extraction and the results
of image analysis. [5]
5.2.1 Grayscale Image
Grayscale is a range of shades of gray without apparent color; the darkest possible
shade is black and the lightest possible is white. Grayscale images use only one
channel of color. Converting an image to grayscale is a common pre-processing
technique in image processing, because processing operations then need to be
applied to only a single plane, which is simpler than working with an RGBA image.
There are different methods for converting an image into grayscale.
The lightness method averages the most prominent and least prominent colors:
(max(R, G, B) + min(R, G, B)) / 2.
The average method simply averages the values: (R + G + B) / 3.
The luminosity method is a more sophisticated version of the average method. It
also averages the values, but it forms a weighted average to account for human
perception. The formula for luminosity is 0.21 R + 0.72 G + 0.07 B. [6]
We have used the luminosity method, because red, green and blue are not equally
bright and a weighted average accounts for this.
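As a minimal sketch of this step, the following Java snippet converts an image to grayscale with the luminosity weights given above, using the standard BufferedImage API. The class and method names (GrayscaleConverter, toGrayscale) are illustrative and are not taken from the project source.

import java.awt.image.BufferedImage;

public class GrayscaleConverter {

    // Convert an RGB image to a single-channel grayscale image using the
    // luminosity weights described above (0.21 R + 0.72 G + 0.07 B).
    public static BufferedImage toGrayscale(BufferedImage src) {
        BufferedImage gray = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                int lum = (int) Math.round(0.21 * r + 0.72 * g + 0.07 * b);
                gray.setRGB(x, y, (lum << 16) | (lum << 8) | lum);
            }
        }
        return gray;
    }
}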
5.2.2 Median filter
The median filter is a non-linear digital filtering technique often used to remove
noise from an image. It preserves useful information while reducing noise.
The median filter considers each pixel in the image in turn and looks at its nearby
neighborhood. First, all the pixel values from the surrounding neighborhood are
sorted, and then the pixel being considered is replaced by the middle (median) value.
If the neighborhood under consideration contains an even number of pixels, the
average of the two middle pixel values is used. [7]
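In place of a worked example, the following is a minimal sketch of a 3×3 median filter over a grayscale image stored as a plain 2D array. The method name medianFilter3x3 and the choice to copy border pixels unchanged are our own simplifying assumptions.

import java.util.Arrays;

public class MedianFilter {

    // Apply a 3x3 median filter to a grayscale image stored as a 2D int array.
    // Border pixels are copied unchanged for simplicity.
    public static int[][] medianFilter3x3(int[][] src) {
        int h = src.length, w = src[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            out[y] = Arrays.copyOf(src[y], w);
        }
        int[] window = new int[9];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int k = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        window[k++] = src[y + dy][x + dx];
                    }
                }
                Arrays.sort(window);
                out[y][x] = window[4];   // middle value of the 9 sorted samples
            }
        }
        return out;
    }
}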
5.2.3 Mean Filter
The mean filter is a simple sliding-window spatial filter that replaces the center
value in the window with the average (mean) of all the pixel values in the window.
Mean filtering is usually thought of as a convolution filter. Like other convolutions
it is based around a kernel, which represents the shape and size of the neighborhood
to be sampled when calculating the mean. Often a 3×3 square kernel is used,
although larger kernels (e.g. 5×5 squares) can be used for more severe smoothing.
The effect of applying a small kernel more than once is similar, but not identical, to a
single pass with a larger kernel. [8]
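For comparison, a similar sketch of a 3×3 mean (box) filter is shown below, under the same assumptions about the array representation and border handling as the median filter sketch above.

public class MeanFilter {

    // Apply a 3x3 mean (box) filter: each interior pixel becomes the average
    // of itself and its eight neighbours. Borders are copied unchanged.
    public static int[][] meanFilter3x3(int[][] src) {
        int h = src.length, w = src[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            System.arraycopy(src[y], 0, out[y], 0, w);
        }
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        sum += src[y + dy][x + dx];
                    }
                }
                out[y][x] = sum / 9;   // average of the nine window values
            }
        }
        return out;
    }
}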
5.3 Segmentation
5.3.1 Pupil center detection
The main objective here is to detect the pupil center. The algorithm scans through
the median-filtered image from top left to bottom right and makes no assumptions
about the position of the pupil.
First it finds a pixel whose intensity is below a threshold (a combination of the lowest
intensity in the current image and the current variance). Then the number of pixels
(the block size) adjacent to its right whose intensity is also below the threshold is
counted. The center of the detected block is taken as the center of the pupil. If the
block is the largest observed so far (i.e. larger than the current maximum block size),
the algorithm checks whether the block of pixels running vertically up and down from
that center is also below the threshold and variance. If so, the maximum block size is
updated and the center of that block becomes the new pupil center.
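A condensed sketch of this row-scan idea is given below. The threshold computation is abstracted into a single parameter and the vertical check is reduced to a run-length test, so the helper names (findPupilCenter, verticalRun) and the acceptance rule are illustrative rather than the project's exact code.

import java.awt.Point;

public class PupilDetector {

    // Scan a grayscale image row by row and return an estimate of the pupil
    // center: the midpoint of the longest horizontal run of dark pixels whose
    // vertical run at that point is also sufficiently dark. 'threshold' stands
    // in for the combined lowest-intensity/variance threshold described above.
    public static Point findPupilCenter(int[][] img, int threshold) {
        int h = img.length, w = img[0].length;
        Point best = new Point(w / 2, h / 2);  // fall-back guess
        int bestSize = 0;
        for (int y = 0; y < h; y++) {
            int x = 0;
            while (x < w) {
                if (img[y][x] < threshold) {
                    int start = x;
                    while (x < w && img[y][x] < threshold) x++;   // horizontal run
                    int size = x - start;
                    int cx = start + size / 2;
                    if (size > bestSize && verticalRun(img, cx, y, threshold) >= size / 2) {
                        bestSize = size;
                        best = new Point(cx, y);
                    }
                } else {
                    x++;
                }
            }
        }
        return best;
    }

    // Length of the dark run extending up and down from (x, y).
    private static int verticalRun(int[][] img, int x, int y, int threshold) {
        int up = y, down = y;
        while (up > 0 && img[up - 1][x] < threshold) up--;
        while (down < img.length - 1 && img[down + 1][x] < threshold) down++;
        return down - up + 1;
    }
}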
22
5.3.2 Canny edge detector
The Canny edge detector is an edge detection operator that uses a multi-stage algorithm
to detect a wide range of edges in an image. It detects the edges of the image based
on the current threshold and sigma values and generates a binary image that
shows the edges of the image. The Canny edge detector aims to satisfy three main
criteria:
 Low error rate: A good detection of only existing edges.
 Good localization: The distance between detected edge pixels and real edge
pixels has to be minimized.
 Minimal response: Only one detector response per edge.
The algorithm works in the following steps:
1) Filter out any noise. The Gaussian filter is used for this purpose; for example,
a Gaussian kernel of size 5 may be used.
2) Find the intensity gradient of the image. For this, we follow a procedure
analogous to the Sobel operator:
a) Apply a pair of convolution masks Gx and Gy (in the x and y directions).
b) Find the gradient strength and direction with
G = √(Gx² + Gy²) and Ɵ = arctan(Gy / Gx).
The direction is rounded to one of four possible angles (namely 0,
45, 90 or 135).
3) Non-maximum suppression is applied. This removes pixels that are not
considered to be part of an edge. Hence, only thin lines will remain.
4) Hysteresis is the final step. The Canny algorithm uses two thresholds (upper and
lower):
a) If a pixel gradient is higher than the upper threshold, the pixel is
accepted as an edge.
b) If a pixel gradient is below the lower threshold, then it is rejected.
c) If the pixel gradient is between the two thresholds, then it will be
accepted only if it is connected to a pixel that is above the upper
threshold. [9]
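If the OpenCV library listed in the abbreviations is used from Java, the complete multi-stage detector can be invoked in a few lines. The sketch below is an assumed usage of the OpenCV Java bindings based on the tutorial cited in [9]; the file names and the threshold values 50 and 150 are illustrative only.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class CannyDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Read the pre-processed eye image as grayscale.
        Mat gray = Imgcodecs.imread("eye.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat blurred = new Mat();
        Mat edges = new Mat();
        // Smooth first, then apply Canny with lower/upper hysteresis thresholds
        // (50 and 150 here are illustrative values only).
        Imgproc.GaussianBlur(gray, blurred, new Size(5, 5), 0);
        Imgproc.Canny(blurred, edges, 50, 150);
        Imgcodecs.imwrite("eye_edges.png", edges);
    }
}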
5.3.3 Iris radius detection
Now that the pupil center and the edges have been detected, we start from the pupil
radius identified during pupil center detection and search for a radius at which the
circle in the edge image has at least a certain number of black pixels (edges) on or
nearly on the circle. If the ratio of the pupil radius to the candidate iris radius lies
between two bounds (about 0.30 to 0.40) and the percentage of pixels along the circle
defined by the current radius and the pupil center that are black is greater than a
certain percentage, then the iris radius is successfully detected.
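A minimal sketch of this radius search is given below, assuming the edge image is available as a boolean array. The 0.6 edge-coverage fraction, the 0.30 to 0.40 ratio bounds taken from the paragraph above, and the method name findIrisRadius are illustrative choices.

public class IrisRadiusDetector {

    // Starting from the detected pupil radius, grow the candidate radius until
    // enough edge pixels lie on the circle around the pupil center. The 0.6
    // edge-coverage fraction and the search limit are illustrative values.
    public static int findIrisRadius(boolean[][] edges, int cx, int cy, int pupilRadius) {
        int maxRadius = Math.min(edges.length, edges[0].length) / 2;
        for (int r = pupilRadius + 1; r < maxRadius; r++) {
            int onEdge = 0, samples = 0;
            for (int deg = 0; deg < 360; deg++) {
                double rad = Math.toRadians(deg);
                int x = cx + (int) Math.round(r * Math.cos(rad));
                int y = cy + (int) Math.round(r * Math.sin(rad));
                if (y < 0 || y >= edges.length || x < 0 || x >= edges[0].length) continue;
                samples++;
                if (edges[y][x]) onEdge++;
            }
            double ratio = (double) pupilRadius / r;
            boolean ratioOk = ratio >= 0.30 && ratio <= 0.40;
            if (samples > 0 && ratioOk && (double) onEdge / samples > 0.6) {
                return r;   // enough edge pixels found on this circle
            }
        }
        return -1;   // not found
    }
}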
5.4 Normalization
The main goal is to define the area between the pupil radius and the iris radius. For
this, the circular iris region is transformed into a rectangular one. For each coordinate
in the image, we determine the polar angle and the distance between the iris radius
and the pupil radius, as well as the relative distance from the pupil radius to the point.
Using this information we convert each polar coordinate to Cartesian coordinates in
each iteration, using the formula:
X= cos (Ɵ)* r + (x-coordinate of center)
Y= sin (Ɵ)* r + (y-coordinate of center)
Where,
X= x Cartesian coordinate
Y= y Cartesian coordinate
r = radius of pupil and relative distance
Ɵ = angle of the current polar coordinate
Centre = pupil center
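A minimal sketch of this unwrapping, using the formulas above with nearest-neighbour sampling, is shown below. The output dimensions (angularSamples × radialSamples) and the method name unwrapIris are assumptions, not the report's exact parameters.

public class IrisNormalizer {

    // Unwrap the annular iris region (between pupilRadius and irisRadius,
    // centred at (cx, cy)) into a rectangular image of the given size using
    // the polar-to-Cartesian formulas above.
    public static int[][] unwrapIris(int[][] gray, int cx, int cy,
                                     int pupilRadius, int irisRadius,
                                     int angularSamples, int radialSamples) {
        int[][] rect = new int[radialSamples][angularSamples];
        for (int i = 0; i < angularSamples; i++) {
            double theta = 2.0 * Math.PI * i / angularSamples;
            for (int j = 0; j < radialSamples; j++) {
                // Move from the pupil boundary towards the iris boundary.
                double r = pupilRadius
                        + (irisRadius - pupilRadius) * j / (double) (radialSamples - 1);
                int x = (int) Math.round(Math.cos(theta) * r) + cx;
                int y = (int) Math.round(Math.sin(theta) * r) + cy;
                if (y >= 0 && y < gray.length && x >= 0 && x < gray[0].length) {
                    rect[j][i] = gray[y][x];
                }
            }
        }
        return rect;
    }
}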
5.5 Matching
Initially, an image is stored in the database only after the iris portion has been unwrapped.
So, when a new identity is to be matched, the median filter is first applied, the pupil
center is detected, and both the pupil and iris radii are found. Then the iris region is
unrolled for the new identity, the median filter is applied again, and this image is
compared to each one in the database by subtracting the intensities of the two images
and determining the change for each pixel. The average percentage change per pixel
between the subject and the stored identity is then determined. Iteratively, this
percentage change between the two images is compared for each image in the
database, and finally the best match is identified.
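The comparison described here can be sketched as a mean absolute per-pixel difference between two unwrapped iris images of equal size, with the smallest score taken as the best match. The helper names averageDifference and bestMatch are illustrative.

public class IrisMatcher {

    // Average absolute intensity difference per pixel between two unwrapped
    // iris images of identical dimensions (lower means a better match).
    public static double averageDifference(int[][] probe, int[][] template) {
        double total = 0;
        int count = 0;
        for (int y = 0; y < probe.length; y++) {
            for (int x = 0; x < probe[0].length; x++) {
                total += Math.abs(probe[y][x] - template[y][x]);
                count++;
            }
        }
        return total / count;
    }

    // Return the index of the stored template with the smallest average
    // difference, i.e. the best match for the probe image.
    public static int bestMatch(int[][] probe, java.util.List<int[][]> templates) {
        int best = -1;
        double bestScore = Double.MAX_VALUE;
        for (int i = 0; i < templates.size(); i++) {
            double score = averageDifference(probe, templates.get(i));
            if (score < bestScore) {
                bestScore = score;
                best = i;
            }
        }
        return best;
    }
}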
25
CHAPTER 6: TESTING
26
6.1. Unit Testing
Unit testing is performed to test modules against the detailed design. Inputs to the
process are usually compiled modules from the coding process. Each module is
assembled into a larger unit during the unit testing process.
Testing has been performed on each phase of project design and coding. We carry
out testing of the module interfaces to ensure the proper flow of information into and
out of each program unit. We make sure that temporarily stored data
maintains its integrity throughout the algorithm's execution by examining the local
data structures. Finally, all error-handling paths are also tested. [10]
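As an illustration of the kind of unit test described here, a minimal JUnit 4 sketch is shown below. It exercises the grayscale helper sketched in the Methodology chapter; the class and method names are illustrative, and JUnit itself is an assumed choice of test framework.

import static org.junit.Assert.assertEquals;

import java.awt.image.BufferedImage;
import org.junit.Test;

public class GrayscaleConverterTest {

    // A pure white pixel should map to (approximately) the maximum gray level.
    @Test
    public void whitePixelStaysWhite() {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFFFFFF);
        BufferedImage gray = GrayscaleConverter.toGrayscale(img);
        assertEquals(0xFF, gray.getRGB(0, 0) & 0xFF);
    }
}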
6.2. System Testing
We usually perform system testing to find errors resulting from unanticipated
interactions between the sub-systems and system components. Once the source code
is generated, the software must be tested to detect and rectify all possible errors
before it is delivered to the customer. To find errors, a series of test cases must be
developed which ultimately uncover all the errors that may exist. Different
software testing techniques can be used for this process. These techniques provide
systematic guidance for designing tests that:
 Exercise the internal logic of the software components,
 Exercise the input and output domains of a program to uncover errors in
program function, behavior and performance.
We test the software using two methods:
White-box testing: internal program logic is exercised using this test case design
technique.
Black-box testing: software requirements are exercised using this test case design
technique.
Both techniques help in finding the maximum number of errors with minimal effort
and time.
27
6.3. Performance Testing
Performance testing is done to test the run-time performance of the software within
the context of the integrated system. These tests are carried out throughout the testing
process. For example, the performance of individual modules is assessed during
white-box testing as part of unit testing.
6.4. Verification and Validation
The testing process is part of the broader subject of verification and
validation. We have to acknowledge the system specifications and try to meet the
customer’s requirements, and for this sole purpose we have to verify and validate
the product to make sure everything is in place. Verification and validation are two
different things: one is performed to ensure that the software correctly implements
a specific functionality, and the other is done to ensure that the customer requirements
are properly met by the end product.
Verification asks 'are we building the product right?' while validation asks
'are we building the right product?' [11]
28
CHAPTER 7: RESULT ANALYSIS AND LIMITATION
29
7.1 Result Analysis
After facing a number of errors and successfully eliminating them, we have
completed our project with continuous effort. At the end of the project the results
can be summarized as:
 A user friendly desktop application to use.
 No expertise is required for using the application.
 Organizations can use the application to authenticate individuals.
 A strong method of authentication compared to other traditional
mechanisms.
7.2 Limitations
CHAPTER 8: CONCLUSION
30
We have completed our project using Java as our programming language and the
NetBeans IDE. From the initial phase of the project until its completion we
encountered a number of problems, which were later eliminated. With continuous
effort, the application finally ran successfully, with all the tests passing.
31
CHAPTER 9: FUTURE ENHANCEMENT
32
9.1 Future Enhancement
33
Screenshots
Reference
[1] E. M. Ali, E. S. Ahmed, and A. F. Ali, “Recognition of human iris patterns for
biometric identification,” J. Eng. Appl. Sci., vol. 54, no. 6, pp. 635–651,
2007.
[2] G. Wavelet, “Chapter-2 LITERATURE REVIEW ON IRIS RECOGNITION
SYSTEM,” pp. 14–31.
[3] I. Staff, "Feasibility Study", Investopedia, 2017. [Online]. Available:
http://www.investopedia.com/terms/f/feasibility-study.asp. [Accessed: 08- Jan-
2017].
[4] 2017. [Online]. Available: https://ifs.host.cs.st-
andrews.ac.uk/Books/SE7/Presentations/PDF/ch6.pdf. [Accessed: 04- Aug- 2017]
[5] 2017. [Online]. Available: https://www.embedded-
vision.com/sites/default/files/apress/computervisionmetrics/chapter2/9781430259
299_Ch02.pdf. [Accessed: 04- Aug- 2017]
[6] C. Kanan and G. Cottrell, "Color-to-Grayscale: Does the Method Matter in Image
Recognition?", 2017. .
[7] "Spatial Filters - Median Filter", Homepages.inf.ed.ac.uk, 2017. [Online]. Available:
https://homepages.inf.ed.ac.uk/rbf/HIPR2/median.htm. [Accessed: 04- Aug- 2017]
[8] "Spatial Filters - Mean Filter", Homepages.inf.ed.ac.uk, 2017. [Online]. Available:
https://homepages.inf.ed.ac.uk/rbf/HIPR2/mean.htm. [Accessed: 04- Aug- 2017]
[9] "Canny Edge Detector — OpenCV 2.4.13.3 documentation", Docs.opencv.org, 2017.
[Online]. Available:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.
html. [Accessed: 04- Aug- 2017]
[10] "What are Unit Testing, Integration Testing and Functional Testing? |
CodeUtopia", Codeutopia.net, 2017. [Online]. Available:
https://codeutopia.net/blog/2015/04/11/what-are-unit-testing-integration-testing-and-
functional-testing/. [Accessed: 04- Aug- 2017]
[11]"What is Verification and Validation? — Software Testing
Help", Softwaretestinghelp.com, 2017. [Online]. Available:
http://www.softwaretestinghelp.com/what-is-verification-and-validation/. [Accessed: 04-
Aug- 2017]
132/33KV substation case study Presentation132/33KV substation case study Presentation
132/33KV substation case study Presentation
 
UNLOCKING HEALTHCARE 4.0: NAVIGATING CRITICAL SUCCESS FACTORS FOR EFFECTIVE I...
UNLOCKING HEALTHCARE 4.0: NAVIGATING CRITICAL SUCCESS FACTORS FOR EFFECTIVE I...UNLOCKING HEALTHCARE 4.0: NAVIGATING CRITICAL SUCCESS FACTORS FOR EFFECTIVE I...
UNLOCKING HEALTHCARE 4.0: NAVIGATING CRITICAL SUCCESS FACTORS FOR EFFECTIVE I...
 
Textile Chemical Processing and Dyeing.pdf
Textile Chemical Processing and Dyeing.pdfTextile Chemical Processing and Dyeing.pdf
Textile Chemical Processing and Dyeing.pdf
 
International Conference on NLP, Artificial Intelligence, Machine Learning an...
International Conference on NLP, Artificial Intelligence, Machine Learning an...International Conference on NLP, Artificial Intelligence, Machine Learning an...
International Conference on NLP, Artificial Intelligence, Machine Learning an...
 
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesHarnessing WebAssembly for Real-time Stateless Streaming Pipelines
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines
 
Embedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoringEmbedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoring
 
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
 
spirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptxspirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptx
 
Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...
 
ISPM 15 Heat Treated Wood Stamps and why your shipping must have one
ISPM 15 Heat Treated Wood Stamps and why your shipping must have oneISPM 15 Heat Treated Wood Stamps and why your shipping must have one
ISPM 15 Heat Treated Wood Stamps and why your shipping must have one
 
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptxML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
 
一比一原版(CalArts毕业证)加利福尼亚艺术学院毕业证如何办理
一比一原版(CalArts毕业证)加利福尼亚艺术学院毕业证如何办理一比一原版(CalArts毕业证)加利福尼亚艺术学院毕业证如何办理
一比一原版(CalArts毕业证)加利福尼亚艺术学院毕业证如何办理
 
LLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by Anant
LLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by AnantLLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by Anant
LLM Fine Tuning with QLoRA Cassandra Lunch 4, presented by Anant
 
ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024ACEP Magazine edition 4th launched on 05.06.2024
ACEP Magazine edition 4th launched on 05.06.2024
 
官方认证美国密歇根州立大学毕业证学位证书原版一模一样
官方认证美国密歇根州立大学毕业证学位证书原版一模一样官方认证美国密歇根州立大学毕业证学位证书原版一模一样
官方认证美国密歇根州立大学毕业证学位证书原版一模一样
 
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsKuberTENes Birthday Bash Guadalajara - K8sGPT first impressions
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions
 
Understanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine LearningUnderstanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine Learning
 

Iris recognition system

Table of Contents

Acknowledgment
Abstract
Table of Contents
List of Figures
List of Tables
Abbreviations
CHAPTER 1: INTRODUCTION
  1.1 Background
  1.2 Objective
  1.3 Problem Statement
  1.4 Scope and Application
CHAPTER 2: LITERATURE REVIEW
  2.1 Background
  2.2 Biometric Security
CHAPTER 3: REQUIREMENT ANALYSIS AND FEASIBILITY STUDY
  3.1 Feasibility Analysis
    3.1.1 Technical Feasibility
    3.1.2 Operational Feasibility
    3.1.3 Economic Feasibility
    3.1.4 Schedule Feasibility
  3.2 Requirement Definition
    3.2.1 Functional Requirements
    3.2.2 Non-Functional Requirements
  3.3 Model and Software Process
CHAPTER 4: SYSTEM DESIGN AND ARCHITECTURE
  4.1 Use Case Diagram
CHAPTER 5: METHODOLOGY
  5.1 Image Acquisition
  5.2 Pre-Processing
    5.2.1 Grayscale
    5.2.2 Median Filter
    5.2.3 Mean Filter
  5.3 Segmentation
    5.3.1 Pupil Center Detection
    5.3.2 Canny Edge Detector
    5.3.3 Iris Radius Detection
  5.4 Normalization
  5.5 Matching
CHAPTER 6: TESTING
  6.1 Unit Testing
  6.2 System Testing
  6.3 Performance Testing
  6.4 Verification and Validation
CHAPTER 7: DISCUSSION
CHAPTER 8: CONCLUSION
Screenshots
Reference
List of Figures

Figure 1.1 Human Eye
Figure 5.2 System Architecture
Figure 6.1 Original Gantt chart
Figure 6.2 Current Gantt chart
Abbreviations

App: Application
ATM: Automated Teller Machine
CASIA: Chinese Academy of Sciences Institute of Automation
Dr.: Doctor
IEEE: Institute of Electrical and Electronics Engineers
OpenCV: Open Source Computer Vision
PINs: Personal Identification Numbers
UI: User Interface
CHAPTER 1: INTRODUCTION

1.1 Background

A biometric system is a technological system in which a person is identified by the unique features possessed by an individual (such as voice, fingerprint, facial features, hand geometry or iris). In any biometric system, a sample of the feature is first captured and transformed into a biometric template. This template is later compared with other templates to determine identity.

The iris is a thin circular diaphragm which lies between the cornea and the lens of the human eye. It is perforated close to its center by a circular aperture called the pupil. The average diameter of the iris is 12 mm, and the size of the pupil varies from 10% to 80% of the iris diameter. The unique pattern of the iris is formed during the first year of life; it is random and not determined by genetic factors. Because of this epigenetic nature of iris patterns, identical twins possess uncorrelated iris patterns.

Figure 1.1 Human Eye

Compared to other biometric technologies, such as face, voice and fingerprint recognition, iris recognition can be considered the most reliable form of biometric technology. In addition, the iris has many special optical and physiological characteristics which can be exploited to defend against possible forgery. [1]
1.2 Objective

The main objective of our application is to identify an individual with high efficiency and accuracy by analyzing the random patterns visible within the iris of an eye.

1.3 Problem Statement

Conventionally, passwords, secret codes and PINs have been used for identification, and these can easily be stolen, observed or forgotten. In pattern recognition problems, the key issue is the relation between inter-class and intra-class variability: objects can be reliably classified only if the variability among different instances of a given class is less than the variability between different classes. In face recognition, for example, difficulties arise from the fact that the face is a changeable social organ displaying a variety of expressions, as well as an active 3D object whose image varies with viewing angle, pose, illumination, accoutrements and age. As an alternative, we therefore propose a biometric (iris recognition) system to identify an individual.

1.4 Scope and Application

The proposed system is an iris recognition system that can be used in various fields for identification and authentication. Some of its applications are:
• Computer login: as a password
• Secure access to bank accounts at ATMs
• Premises access control (home, office, laboratory, etc.)
• Forensics: birth certificates; tracing missing or wanted persons
• Credit card authentication
• Secure financial transactions (e-commerce)
• Anti-terrorism (e.g. security screening at airports)
• Any existing use of keys, cards, PINs or passwords
CHAPTER 2: LITERATURE REVIEW

2.1 Background

Research on biometric methods has gained renewed attention in recent years, brought on by an increase in security concerns. The increasing crime rate has influenced people and their governments to take action and be more proactive about security issues. This need for security also extends to the need for individuals to protect, among other things, their working environments, homes, personal possessions and assets. Many biometric techniques have been developed and are being improved, with the most successful being applied in everyday law enforcement and security applications. Biometric methods include several state-of-the-art techniques. Among them, iris recognition is considered to be the most powerful technique for security authentication in the present context. Advances in sensor technology and an increasing demand for biometrics are driving a burgeoning biometric industry to develop new technologies. As commercial incentives increase, many new technologies for person identification are being developed, each with its own strengths and weaknesses and a potential niche market.

2.2 Biometric Security

• The term "Biometrics" is derived from the Greek words "bio" (life) and "metrics" (to measure) (Rood and Hornak, 2008). Automated biometric systems have only become available over the last few decades, due to significant advances in the fields of computing and image processing. Although biometric technology seems to belong to the twenty-first century, the history of biometrics goes back thousands of years; the ancient Egyptians and the Chinese played a large role in its history. Today, the focus is on using biometric face recognition, iris recognition, retina recognition and other identifying characteristics to stop terrorism and improve security measures. This section provides a brief history of biometric security and fingerprint recognition.
• During 1858, the first recorded systematic capture of hand and finger images for identification purposes was made by Sir William Herschel of the Civil Service of India, who recorded a handprint on the back of a contract for each worker to distinguish employees (Komarinski, 2004).

• During 1870, Alphonse Bertillon developed a method of identifying individuals based on detailed records of their body measurements, physical descriptions and photographs. This method was termed "Bertillonage" or anthropometrics, and its usage was abandoned in 1903 when it was discovered that some people share the same measurements and physical characteristics (State University of New York at Canton, 2003).

• Sir Francis Galton, in 1892, developed a classification system for fingerprints using minutiae characteristics that is still used by researchers and educationalists today. Sir Edward Henry, during 1896, paved the way for the success of fingerprint recognition by using Galton's theory to identify prisoners by their fingerprint impressions. He devised a classification system that allowed thousands of fingerprints to be easily filed, searched and traced, he helped establish the first fingerprint bureau in the same year, and his method gained worldwide acceptance for identifying criminals (Scottish Criminal Record Office, 2002).

• The concept of using the iris pattern for identification was first proposed by the ophthalmologist Frank Burch in 1936 (Iradian Technologies, 2003). During 1960, the first semi-automatic face recognition system was developed by Woodrow W. Bledsoe, which used the locations of the eyes, ears, nose and mouth in photographs for recognition purposes. In the same year, the first model of acoustic speech production was created by a Swedish professor, Gunnar Fant. His invention is used in today's speaker recognition systems (Woodward et al, 2003).

• The first automated signature recognition system was developed by North American Aviation during 1965 (Mauceri, 1965). This technique was later, in 1969, used by the Federal Bureau of Investigation (FBI) in its investigations to reduce the man hours invested in the analysis of signatures. The year 1970 introduced face recognition for authentication. Goldstein et al. (1971) used 21 specific markers, such as hair color and lip thickness, to automate the recognition process. The main disadvantage of such a system was that all these features had to be identified and computed manually.

• During the same period, Dr. Joseph Perkell produced the first behavioral components of speech to identify a person (Woodward et al, 2003). The first commercial hand geometry system was made available in 1974 for physical access control, time and attendance, and personal identification. The success of this first automated biometric system motivated several funding agencies, such as the FBI and NIST, to fund the development of scanners and feature extraction technology (Ratha and Bolle, 2004), which would finally lead to the development of a complete human recognizer. This resulted in the first prototype speaker recognition system in 1976, developed by Texas Instruments and tested by the US Air Force and the MITRE Corporation. In 1996, hand geometry was implemented successfully at the Olympic Games, and the system was able to handle the enrollment of over 65,000 people.
• Drs. Leonard Flom and Aran Safir, in 1985, found that no two irises are alike, and their findings were awarded a patent in 1986. In 1988, the first semi-automated facial recognition system was deployed by the Lakewood Division of the Los Angeles County Sheriff's Department for identifying suspects (Angela, 2009). This was followed by several landmark contributions by Sirovich and Kirby (1989), Turk and Pentland (1991), and Phillips et al. (2000) in the field of face recognition.

• The next stage in fingerprint automation occurred at the end of 1994 with the Integrated Automated Fingerprint Identification System (IAFIS) competition. The competition identified and investigated three major challenges: (1) digital fingerprint acquisition, (2) local ridge characteristic extraction and (3) ridge characteristic pattern matching (David et al., 2005).

• The first Automated Fingerprint Identification System (AFIS) was developed by Palm System in 1993. During 1995, the iris biometric was officially released as a commercial authentication tool by the Defense Nuclear Agency and IriScan.

• The year 2000 saw the first Face Recognition Vendor Test (FRVT, 2000), sponsored by US government agencies, and the same year paved the way for the first research paper on the use of vascular patterns for recognition (Im et al., 2001). During 2003, the ICAO (International Civil Aviation Organization) adopted blueprints for the integration of biometric identification information into passports and other Machine Readable Travel Documents (MRTDs). Facial recognition was selected as the globally interoperable biometric for machine-assisted identity confirmation with MRTDs.

• The first statewide automated palm print database was deployed by the US in 2004. The Face Recognition Grand Challenge (FRGC) began in the same year to improve the identification problem. In 2005, "Iris on the Move" was announced at the Biometric Consortium Conference, enabling the collection of iris images from individuals walking through a portal. [2]
CHAPTER 3: REQUIREMENT ANALYSIS AND FEASIBILITY STUDY
3.1 Feasibility Analysis

A feasibility study is a preliminary study which investigates the information needs of prospective users and determines the resource requirements, costs, benefits and feasibility of a proposed system. A feasibility study takes into account the various constraints within which the system should be implemented and operated. In this stage, the resources needed for the implementation, such as computing equipment, manpower and costs, are estimated. These estimates are compared with the available resources and a cost-benefit analysis of the system is made. The feasibility analysis activity involves the analysis of the problem and the collection of all relevant information relating to the project. The main objective of the feasibility study is to determine whether the project would be feasible in terms of economic, technical, operational and schedule feasibility, and to make sure that the input data required for the project are available. Thus we evaluated the feasibility of the system in the following categories:
• Technical feasibility
• Operational feasibility
• Economic feasibility
• Schedule feasibility

3.1.1 Technical Feasibility

Evaluating technical feasibility is the trickiest part of a feasibility study, because at this point in time there is no detailed design of the system, making it difficult to assess issues like performance and costs (on account of the kind of technology to be deployed). A number of issues have to be considered while doing a technical analysis: understand the different technologies involved in the proposed system, be very clear about which technologies are required for the development of the new system, and determine whether the required technology is available. The iris recognition system is technically feasible: all the tools necessary for this system are easily available, and it uses NetBeans for application development. Though all the tools seem to be easily available, there will be other challenges too.
3.1.2 Operational Feasibility

A proposed project is beneficial only if it can be turned into an information system that meets the operating requirements. Simply stated, this test of feasibility asks whether the system will work when it is developed and installed, and whether there are major barriers to implementation. Since the proposed system helps reduce the hardships encountered in the current verification process, the new system was considered operationally feasible. The purpose of any project is that the targeted audience/client actually uses the software; it is therefore necessary for developers to understand the needs of the targeted audience and implement them in the software. The targeted users of our system are any organizations where authentication of individuals plays a vital role. Though the system may not be 100% accurate, it makes authentication easy, and minor errors can be handled after feature extraction.

3.1.3 Economic Feasibility

Economic feasibility attempts to weigh the costs of developing and implementing a new system against the benefits that would accrue from having the new system in place. This feasibility study gives top management the economic justification for the new system. A simple economic analysis which gives an actual comparison of costs and benefits is much more meaningful in this case, and it also proves to be a useful point of reference against which to compare actual costs as the project progresses. There can be various types of intangible benefits on account of automation: increased customer satisfaction, improved product quality, better decision making, timeliness of information, expedited activities, improved accuracy of operations, better documentation and record keeping, faster retrieval of information, and better employee morale. This is a desktop application project, so the required tools can be obtained at an affordable price and the creation of the application is not costly.
3.1.4 Schedule Feasibility

A project will fail if it takes too long to be completed before it is useful. Typically, this means estimating how long the system will take to develop and whether it can be completed in a given period of time, using methods like the payback period. Schedule feasibility is a measure of how reasonable the project timetable is: given our technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines, and it is necessary to determine whether those deadlines are mandatory or desirable. A minor deviation from the original schedule decided at the beginning of the project may be encountered. The application development is feasible in terms of schedule. [3]

3.2 Requirement Definition

After an extensive analysis of the problems in the system, we became familiar with the requirements that the system needs. These requirements are categorized into functional and non-functional requirements, listed below.

3.2.1 Functional Requirements

Functional requirements are the functions or features that must be included in any system to satisfy the business needs and be acceptable to the users. Based on this, the functional requirements of the system are as follows:
• The system should identify the individual on the basis of the iris.
• The system should process the input given by the user only if it is an image file.
3.2.2 Non-Functional Requirements

Non-functional requirements are a description of the features, characteristics and attributes of the system, as well as any constraints that may limit the boundaries of the proposed system. The non-functional requirements are essentially based on performance, information, economy, control and security, efficiency and services. [4] Based on these, the non-functional requirements are as follows:
• The system should be user friendly.
• The system should provide good accuracy.
• The system should perform with efficient throughput and response time.

3.3 Model and Software Process

For the development of software, from the beginning through to the completion of the final product, a specific process model has to be used. In a software development life cycle it is possible to use models such as waterfall, incremental, prototype or agile, based on the project requirements. In this project, the agile model has been used. It is a type of incremental model in which software is developed in incremental, rapid cycles. Whenever new changes have to be made, the agile model allows easy implementation at very little cost, and it minimizes risk by developing software in small iterations. The planning, development and testing phases are used iteratively to implement new changes easily, which reduces the risk of project failure.
CHAPTER 4: SYSTEM DESIGN AND ARCHITECTURE
4.1 Use Case Diagram

(Use case diagram figure not reproduced in this extract.)
CHAPTER 5: METHODOLOGY

The project we have been working on, the "Iris Recognition System", has been completed. We have used Java as our platform. The whole system is divided into the following subsystems.

5.1 Image Acquisition

In this phase we acquire basic information from the user, such as name, phone number, email ID, and an image of the eye. This information is stored in memory. The image first goes through several processing steps and only then is stored in memory.

5.2 Pre-Processing

Pre-processing of an image refers to operations performed on the image at the lowest level of abstraction. A raw image without pre-processing may have a variety of problems and is therefore not likely to produce the best computer vision results. The aim of pre-processing is an improvement of the image data that suppresses unwanted distortions or enhances image features needed for further processing. Image pre-processing can have a dramatic positive effect on the quality of feature extraction and on the results of image analysis. [5]

5.2.1 Grayscale

A grayscale image is a range of shades of gray without apparent color; the darkest possible shade is black and the lightest possible shade is white. Grayscale images use only one channel of color. Converting an image to grayscale is a common pre-processing technique in image processing, because processing operations then only need to be applied to a single plane, which is simpler than working with an RGBA image. There are different methods for converting an image to grayscale. The lightness method averages the most prominent and least prominent colors: (max(R, G, B) + min(R, G, B)) / 2. The average method simply averages the values: (R + G + B) / 3. The luminosity method is a more sophisticated version of the average method: it also averages the values, but it forms a weighted average to account for human perception. The formula for luminosity is 0.21 R + 0.72 G + 0.07 B. [6] We have used the luminosity method, because red, green and blue are not perceived as equally bright, so a weighted average is appropriate.
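As an illustration of the luminosity conversion, the following is a minimal Java sketch (class and method names are ours, not the project's source), assuming the input is held in a standard BufferedImage:

import java.awt.image.BufferedImage;

public class GrayscaleConverter {

    // Convert an RGB image to grayscale using the luminosity method
    // from section 5.2.1: gray = 0.21*R + 0.72*G + 0.07*B.
    public static BufferedImage toGrayscale(BufferedImage src) {
        BufferedImage out = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                int gray = (int) Math.round(0.21 * r + 0.72 * g + 0.07 * b);
                // Store the gray value in all three channels so it can be
                // read back exactly with getRGB.
                out.setRGB(x, y, (gray << 16) | (gray << 8) | gray);
            }
        }
        return out;
    }
}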
5.2.2 Median Filter

The median filter is a non-linear digital filtering technique often used to remove noise from an image; it reduces noise while preserving useful information. The median filter considers each pixel in the image in turn and looks at its nearby neighborhood: all the pixel values from the surrounding neighborhood are sorted, and the pixel being considered is replaced by the middle (median) value. If the neighborhood under consideration contains an even number of pixels, the average of the two middle pixel values is used. [7]
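The worked example in the original report is a figure that is not reproduced here. As a substitute illustration, a minimal 3x3 median filter sketch in Java (our own code, assuming the grayscale image is stored as a 2-D int array; border pixels are simply copied):

import java.util.Arrays;

public class MedianFilter {

    // Apply a 3x3 median filter to a grayscale image stored as int[height][width].
    public static int[][] apply(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            out[y] = Arrays.copyOf(img[y], w); // keep borders unchanged
        }
        int[] window = new int[9];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int k = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        window[k++] = img[y + dy][x + dx];
                    }
                }
                Arrays.sort(window);
                out[y][x] = window[4]; // middle value of the 9 sorted neighbors
            }
        }
        return out;
    }
}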
5.2.3 Mean Filter

The mean filter is a simple sliding-window spatial filter that replaces the center value in the window with the average (mean) of all the pixel values in the window. Mean filtering is usually thought of as a convolution filter: like other convolutions, it is based around a kernel which represents the shape and size of the neighborhood to be sampled when calculating the mean. Often a 3x3 square kernel is used, although larger kernels (e.g. 5x5 squares) can be used for more severe smoothing. The effect of applying a small kernel more than once is similar, but not identical, to a single pass with a larger kernel. [8]
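A corresponding 3x3 mean (box) filter sketch in Java, using the same array representation and border handling as the median filter sketch above (again our own illustration, not project code):

public class MeanFilter {

    // Replace each interior pixel with the average of itself and its
    // eight neighbors; border pixels are left unchanged.
    public static int[][] apply(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            System.arraycopy(img[y], 0, out[y], 0, w);
        }
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        sum += img[y + dy][x + dx];
                    }
                }
                out[y][x] = sum / 9;
            }
        }
        return out;
    }
}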
5.3 Segmentation

5.3.1 Pupil Center Detection

The main objective here is to detect the pupil center. The algorithm scans through the median-filtered image from top left to bottom right and makes no assumption about the position of the pupil. First it finds a pixel that is below a threshold (a combination of the lowest intensity in the current image and the current variance). Then the number of adjacent pixels to its right (the block size) whose intensity is below the threshold is determined, and the center of the detected block is taken as the center of the pupil. If the block is the largest observed so far (i.e. larger than the current maximum block size), the algorithm also checks whether the blocks of pixels running vertically up and down from the center are below the threshold and some variance. If so, the maximum block size is updated and the center of that block becomes the new pupil center.
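To illustrate the principle only, a much-simplified Java sketch follows: it thresholds the darkest pixels of the filtered image and takes their centroid as a pupil-center estimate. This is not the exact block-scanning routine described above, and the threshold is an assumed caller-supplied parameter:

public class PupilCenterEstimator {

    // Simplified pupil-center estimate: the pupil is the darkest region,
    // so take the centroid of all pixels below the intensity threshold.
    public static int[] estimateCenter(int[][] img, int threshold) {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < img.length; y++) {
            for (int x = 0; x < img[0].length; x++) {
                if (img[y][x] < threshold) {
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) {
            return null; // nothing below the threshold; caller should relax it
        }
        return new int[] { (int) (sumX / count), (int) (sumY / count) };
    }
}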
5.3.2 Canny Edge Detector

The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in an image. It detects the edges of the image based on the current threshold and sigma values and generates a binary image that shows the edges. The Canny edge detector aims to satisfy three main criteria:
• Low error rate: good detection of only existing edges.
• Good localization: the distance between detected edge pixels and real edge pixels has to be minimized.
• Minimal response: only one detector response per edge.

The algorithm works in the following steps:
1) Filter out any noise. A Gaussian filter is used for this purpose; for example, a 5x5 Gaussian kernel may be applied.
2) Find the intensity gradient of the image. For this, we follow a procedure analogous to the Sobel operator:
   a) Apply a pair of convolution masks Gx and Gy (in the x and y directions):
      Gx = [ -1 0 +1 ; -2 0 +2 ; -1 0 +1 ]
      Gy = [ -1 -2 -1 ; 0 0 0 ; +1 +2 +1 ]
   b) Find the gradient strength and direction with:
      G = sqrt(Gx^2 + Gy^2)
      theta = arctan(Gy / Gx)
   The direction is rounded to one of four possible angles (namely 0, 45, 90 or 135 degrees).
3) Non-maximum suppression is applied. This removes pixels that are not considered to be part of an edge, so that only thin lines remain.
4) Hysteresis is the final step. Canny uses two thresholds (upper and lower):
   a) If a pixel's gradient is higher than the upper threshold, the pixel is accepted as an edge.
   b) If a pixel's gradient is below the lower threshold, it is rejected.
   c) If the pixel's gradient is between the two thresholds, it is accepted only if it is connected to a pixel that is above the upper threshold. [9]
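The gradient step (2) can be sketched in Java as follows (our own illustration on the 2-D int array representation used above; only the magnitude is returned here, while the direction would be kept alongside for non-maximum suppression):

public class SobelGradient {

    // Compute the gradient magnitude (step 2 of the Canny pipeline)
    // using the Sobel convolution masks Gx and Gy.
    public static double[][] magnitude(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] kx = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
        int[][] ky = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };
        double[][] mag = new double[h][w];
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        gx += kx[dy + 1][dx + 1] * img[y + dy][x + dx];
                        gy += ky[dy + 1][dx + 1] * img[y + dy][x + dx];
                    }
                }
                mag[y][x] = Math.sqrt(gx * gx + gy * gy);
                // direction = Math.atan2(gy, gx), rounded to 0, 45, 90 or 135 degrees
            }
        }
        return mag;
    }
}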
5.3.3 Iris Radius Detection

Now that the pupil center and the edges have been detected, we start from the pupil radius identified during pupil center detection and search for a radius at which the circle in the edge image has at least a certain number of edge (black) pixels on, or nearly on, the circle. If the ratio of the pupil radius to the candidate iris radius lies between two bounds (about 0.30 to 0.40) and the percentage of pixels along the circle defined by the current radius and the pupil center that are edge pixels is greater than a certain percentage, the iris radius is successfully detected.
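A hedged sketch of this radius search in Java (our own illustration; the sampling resolution and thresholds are assumptions, not the project's tuned parameters). Given the binary edge map from the Canny step, the pupil center and the pupil radius, the candidate radius grows until enough edge pixels lie on the circle:

public class IrisRadiusEstimator {

    // Grow a candidate circle around the pupil center and accept the first
    // radius for which the fraction of sampled circle points landing on an
    // edge pixel reaches minEdgeFraction.
    public static int estimateRadius(boolean[][] edges, int cx, int cy,
                                     int pupilRadius, double minEdgeFraction) {
        int maxRadius = Math.min(edges.length, edges[0].length) / 2;
        for (int r = pupilRadius + 1; r < maxRadius; r++) {
            int samples = 360, hits = 0;
            for (int a = 0; a < samples; a++) {
                double theta = Math.toRadians(a);
                int x = cx + (int) Math.round(r * Math.cos(theta));
                int y = cy + (int) Math.round(r * Math.sin(theta));
                if (y >= 0 && y < edges.length && x >= 0 && x < edges[0].length
                        && edges[y][x]) {
                    hits++;
                }
            }
            if ((double) hits / samples >= minEdgeFraction) {
                return r; // enough edge evidence for this circle
            }
        }
        return -1; // no radius satisfied the criterion
    }
}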
5.4 Normalization

The main goal is to define the area between the pupil radius and the iris radius. For this, the circular iris region is transformed into a rectangular one. For each coordinate in the unwrapped image we determine the polar angle, the distance between the iris radius and the pupil radius, and the relative distance of the point from the pupil radius. Using this information we convert each polar coordinate to Cartesian coordinates in each iteration, using the formulas:

X = cos(theta) * r + (x-coordinate of center)
Y = sin(theta) * r + (y-coordinate of center)

where:
X = x Cartesian coordinate
Y = y Cartesian coordinate
r = pupil radius plus the relative distance
theta = angle of the current polar coordinate
center = pupil center

5.5 Matching

An image is stored in the database only after the iris portion has been unwrapped. When a new identity is to be matched, the median filter is first applied, the pupil and iris centers are detected and both radii are found, and the iris region is then unrolled for that identity. The median filter is applied again, and this image is compared to each one in the database by subtracting the intensities of the two images and determining the change for each pixel. The average percentage change per pixel between the subject and the stored identity is then computed. Iteratively, this percentage change (a match probability) is compared for each image in the database to find the best match, and finally the best match is identified.
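A compact Java sketch of both steps (our own illustration; the sampling resolutions and the raw-difference score are assumptions, not the project's exact parameters). The unwrap method applies the polar-to-Cartesian mapping above to build a rectangular template, and averageDifference implements the per-pixel comparison used for matching:

public class IrisNormalizer {

    // Unwrap the annulus between pupilRadius and irisRadius into a
    // radialSteps x angularSteps template using X = cos(theta)*r + cx,
    // Y = sin(theta)*r + cy. Assumes radialSteps >= 2.
    public static int[][] unwrap(int[][] img, int cx, int cy,
                                 int pupilRadius, int irisRadius,
                                 int radialSteps, int angularSteps) {
        int[][] template = new int[radialSteps][angularSteps];
        for (int i = 0; i < radialSteps; i++) {
            double r = pupilRadius
                    + (irisRadius - pupilRadius) * (i / (double) (radialSteps - 1));
            for (int j = 0; j < angularSteps; j++) {
                double theta = 2 * Math.PI * j / angularSteps;
                int x = (int) Math.round(Math.cos(theta) * r + cx);
                int y = (int) Math.round(Math.sin(theta) * r + cy);
                if (y >= 0 && y < img.length && x >= 0 && x < img[0].length) {
                    template[i][j] = img[y][x];
                }
            }
        }
        return template;
    }

    // Average absolute per-pixel intensity difference between two templates
    // of equal size; the enrolled identity with the smallest value is taken
    // as the best match.
    public static double averageDifference(int[][] a, int[][] b) {
        long sum = 0;
        int n = a.length * a[0].length;
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a[0].length; j++) {
                sum += Math.abs(a[i][j] - b[i][j]);
            }
        }
        return (double) sum / n;
    }
}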
CHAPTER 6: TESTING

6.1 Unit Testing

Unit testing is performed to test modules against the detailed design. The inputs to this process are usually compiled modules from the coding process, and modules are assembled into larger units during the unit testing process. Testing has been performed on each phase of project design and coding. We test the module interfaces to ensure the proper flow of information into and out of each program unit, we make sure that temporarily stored data maintains its integrity throughout the algorithm's execution by examining the local data structures, and finally all error-handling paths are also tested. [10] (A minimal example of such a test is sketched below, after the system testing discussion.)

6.2 System Testing

System testing is performed to find errors resulting from unanticipated interactions between sub-systems and system components. Once the source code is generated, the software must be tested to detect and rectify all possible errors before delivering it to the customer. To find errors, a series of test cases must be developed which ultimately uncover all the possibly existing errors. Different software testing techniques can be used for this process; these techniques provide systematic guidance for designing tests that:
• exercise the internal logic of the software components, and
• exercise the input and output domains of a program to uncover errors in program function, behavior and performance.

We test the software using two methods:
White box testing: internal program logic is exercised using this test case design technique.
Black box testing: software requirements are exercised using this test case design technique.
Both techniques help in finding the maximum number of errors with minimal effort and time.
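As an illustration of a unit test, a minimal sketch using JUnit 4 follows (an assumption on our part; the report does not name a test framework). It exercises the hypothetical GrayscaleConverter from the pre-processing sketch above, where the expected value comes from the luminosity formula: round(0.21 * 255) = 54 for a pure red pixel:

import static org.junit.Assert.assertEquals;

import java.awt.image.BufferedImage;
import org.junit.Test;

public class GrayscaleConverterTest {

    // A pure red input pixel should map to gray value 54 under
    // gray = 0.21*R + 0.72*G + 0.07*B.
    @Test
    public void redPixelMapsToExpectedGrayValue() {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFF0000);
        BufferedImage gray = GrayscaleConverter.toGrayscale(img);
        int value = gray.getRGB(0, 0) & 0xFF;
        assertEquals(54, value);
    }
}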
6.3 Performance Testing

Performance testing is done to test the run-time performance of the software within the context of the integrated system. These tests are carried out throughout the testing process; for example, the performance of individual modules is assessed during white box testing under unit testing.

6.4 Verification and Validation

The testing process is part of a broader subject referred to as verification and validation. We have to acknowledge the system specifications and try to meet the customer's requirements, and for this purpose we have to verify and validate the product to make sure everything is in place. Verification and validation are two different things: one is performed to ensure that the software correctly implements a specific functionality, and the other is done to ensure that the customer requirements are properly met by the end product. Verification asks "are we building the product right?" while validation asks "are we building the right product?" [11]
CHAPTER 7: RESULT ANALYSIS AND LIMITATIONS
7.1 Result Analysis

After encountering a number of errors and successfully eliminating them, we have completed our project with continuous effort. At the end of the project, the results can be summarized as:
• A user-friendly desktop application.
• No expertise is required to use the application.
• Organizations can use the application to authenticate individuals.
• A strong method of authentication compared to other traditional mechanisms.

7.2 Limitations

CHAPTER 8: CONCLUSION
We have completed our project using Java as our programming language and the NetBeans IDE. From the initial phase of the project until its completion we encountered a number of problems, which were later eliminated. With continuous effort, the application finally ran successfully, with all tests passing.
CHAPTER 9: FUTURE ENHANCEMENT
Screenshots

(Screenshot figures not reproduced in this extract.)
Reference

[1] E. M. Ali, E. S. Ahmed, and A. F. Ali, "Recognition of human iris patterns for biometric identification," J. Eng. Appl. Sci., vol. 54, no. 6, pp. 635–651, 2007.
[2] G. Wavelet, "Chapter-2 LITERATURE REVIEW ON IRIS RECOGNITION SYTSEM," pp. 14–31.
[3] I. Staff, "Feasibility Study," Investopedia, 2017. [Online]. Available: http://www.investopedia.com/terms/f/feasibility-study.asp. [Accessed: 08-Jan-2017].
[4] 2017. [Online]. Available: https://ifs.host.cs.st-andrews.ac.uk/Books/SE7/Presentations/PDF/ch6.pdf. [Accessed: 04-Aug-2017].
[5] 2017. [Online]. Available: https://www.embedded-vision.com/sites/default/files/apress/computervisionmetrics/chapter2/9781430259299_Ch02.pdf. [Accessed: 04-Aug-2017].
[6] C. Kanan and G. Cottrell, "Color-to-Grayscale: Does the Method Matter in Image Recognition?", 2017.
[7] "Spatial Filters - Median Filter," Homepages.inf.ed.ac.uk, 2017. [Online]. Available: https://homepages.inf.ed.ac.uk/rbf/HIPR2/median.htm. [Accessed: 04-Aug-2017].
[8] "Spatial Filters - Mean Filter," Homepages.inf.ed.ac.uk, 2017. [Online]. Available: https://homepages.inf.ed.ac.uk/rbf/HIPR2/mean.htm. [Accessed: 04-Aug-2017].
[9] "Canny Edge Detector — OpenCV 2.4.13.3 documentation," Docs.opencv.org, 2017. [Online]. Available: http://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html. [Accessed: 04-Aug-2017].
[10] "What are Unit Testing, Integration Testing and Functional Testing? | CodeUtopia," Codeutopia.net, 2017. [Online]. Available: https://codeutopia.net/blog/2015/04/11/what-are-unit-testing-integration-testing-and-functional-testing/. [Accessed: 04-Aug-2017].
[11] "What is Verification and Validation? — Software Testing Help," Softwaretestinghelp.com, 2017. [Online]. Available: http://www.softwaretestinghelp.com/what-is-verification-and-validation/. [Accessed: 04-Aug-2017].
[12]