A

         PROJECT REPORT
               ON

     “IRIS BASED HUMAN
    RECOGNITION SYSTEM”

          SUBMITTED BY:

      MR. DESAI GAURAV S.
       MR. JOSHI SUSHIL D.
      MS. IYER ANANDHI G.
    MR. WAZALWAR DHAWAL S.
            (B.E. EXTC)

            GUIDED BY:
    DR. R.R. MANTHALKAR




SGGS INSTITUTE OF ENGINEERING AND
 TECHNOLOGY, VISHNUPURI, NANDED

   SWAMI RAMANAND TEERTH
MARATHWADA UNIVERSITY, NANDED



CERTIFICATE

  THIS IS TO CERTIFY THAT THE PROJECT REPORT

                      ENTITLED

  “IRIS BASED HUMAN RECOGNITION SYSTEM”

                     SUBMITTED BY

               MR. DESAI GAURAV S.
               MR. JOSHI SUSHIL D.
               MS. IYER ANANDHI G.
             MR. WAZALWAR DHAWAL S.
                   (B.E. EXTC)
  HAS BEEN COMPLETED AS PER THE REQUIREMENT OF

“SWAMI RAMANAND TEERTH MARATHWADA
         UNIVERSITY, NANDED”.

     IN PARTIAL FULFILMENT OF THE DEGREE
   B.E. (Electronics and Telecommunication Engineering)

        FOR THE ACADEMIC YEAR 2006-2007.




GUIDE                                   HEAD OF DEPT.

DR. R.R. MANTHALKAR                 Prof. A.N. KAMTHANE



INDEX

•   ACKNOWLEDGEMENT
•   ABSTRACT
•   LIST OF FIGURES
•   INTRODUCTION TO BIOMETRICS
o       ADVANTAGES OF BIOMETRICS
o       COMMON BIOMETRIC PATTERNS
o       THE HUMAN IRIS

•   PROJECT INTRODUCTION
o       OBJECTIVE
o       BASIC ASSUMPTIONS
o       DATABASE CHARACTERISTICS

•   IMAGE SEGMENTATION
o       AVAILABLE TECHNIQUES
o       OUR APPROACH

•   IMAGE NORMALIZATION
o       BASIC MOTIVATION
o       OUR APPROACH

•   FEATURE EXTRACTION
o       BASIC TECHNIQUE
o       OUR APPROACH

•   MATCHING
o       DIFFERENT DISTANCE METRICS
o       OUR APPROACH

•   ENROLLMENT AND IDENTIFICATION
o       ENROLLMENT PHASE
o       IDENTIFICATION PHASE

•   CONCLUSION
o       SUMMARY OF WORK
o       SUGGESTED IMPROVEMENTS
•   GRAPHICAL USER INTERFACE
•   REFERENCES




    ACKNOWLEDGEMENT
For the accomplishment of any endeavour, the most important
factor besides hard work and dedication is proper guidance. Just as a
lighthouse guides sea voyagers even in the stormiest conditions, the
same role has been played by our guide, Dr. R.R. Manthalkar.
We heartily thank him for his timely support and guidance. His keen
interest and enthusiasm in executing the project have always been a
source of inspiration for us.

              We would like to thank our Head of Department, Prof.
A.N. Kamthane, and also Dr. M.B. Kokare for their moral support and
encouragement. We also take this opportunity to extend our sincere
thanks to all the other professors of our department for their direct
or indirect help, which was a feather in the cap of our efforts.

               We would also like to thank all our friends, who played
an instrumental role in the completion of our project. Last but not
the least, we would like to thank our parents, who always support us
in our efforts.




                                      MR. DESAI GAURAV S.
                                       MR. JOSHI SUSHIL D.
                                      MS. IYER ANANDHI G.
                                  MR. WAZALWAR DHAWAL S.
                                             (B.E. EXTC)




ABSTRACT

             The successful implementation of any system is largely
determined by its reliability, authenticity and the degree of secrecy
it provides. In today's highly technological world, where security and
privacy are concerns of prime importance, critical systems must employ
techniques to achieve them.
            Our project is a small step in this direction. An
iris-based system can cope with a great deal of individual biological
variation and still provide identification with high accuracy and
reliability.
             In this project we have designed a system that recognizes
a person using the iris as the biometric parameter. We first segment
the pupil and iris structure from the original eye image. We then
normalize it to build a feature vector which characterizes each iris
distinctly. This feature vector is then matched against various
templates to identify the individual.
                The work presented in this project is mainly aimed at
providing a recognition system in order to verify the uniqueness of
the human iris and its performance as a biometric. All the algorithms
are implemented on the CASIA database, and the results of the
different implementations and their accuracies are reported here.
              All in all, this report is a sincere effort at
suggesting an efficient system for the implementation of human
identification based on iris recognition.




LIST OF FIGURES
•   Fig no. 1.3.1 Picture of Human Eye
•   Fig no. 3.1.1 Demonstration of Hough Transform
•   Fig no. 3.2.1 Fully Segmented Iris Image
•   Fig no. 4.1.1 Daugman's Rubber Sheet Model
•   Fig no. 4.2.1 Fully and Enhanced Normalized Image
•   Fig no. 5.1.1 2-Level Decomposition of Wavelet
•   Fig no. 5.2.1 Application of db2 Wavelet to Eye Image
•   Fig no. 7.1.1 General Iris Identification System




CHAPTER 1
INTRODUCTION TO BIOMETRICS


In this vastly interconnected society, establishing the true identity
of a person is becoming a critical issue. Questions like “Is she
really who she claims to be?”, “Is this person authorized to use this
facility?” or “Is he on the watch list posted by the government?” are
routinely asked in a variety of scenarios, ranging from issuing a
driver’s license to gaining entry into a country. With the advent of
newer networking technologies, the sharing of vast audio and video
resources has become much easier, but it has also raised concerns over
the security of transactions. The need for reliable user
authentication techniques has increased in the wake of heightened
concerns about security and rapid advances in networking,
communication and mobility.
                     Biometrics, defined as the science of recognizing
an individual based on physiological, biological or behavioral traits,
is beginning to gain acceptance as a legitimate method for determining
an individual’s identity. Biometrics aims to accurately identify each
individual using physiological or behavioral characteristics such as
fingerprints, face, iris, retina, gait, palm prints and hand geometry.
Developments in science and technology have made it possible to use
biometrics in applications where the identity of individuals must be
established or confirmed.
                          Applications such as passenger control at
airports, access control in restricted areas, border control, database
access and financial services are some examples where biometric
technology has been applied for more reliable identification and
verification. In recent years, biometric identity cards and passports
based on iris, fingerprint and face recognition technologies have been
issued in some countries to improve the border control process and
simplify passenger travel at airports. In the field of financial
services, biometric technology has shown great potential for offering
more comfort to customers while increasing their security. Although
there are still some concerns about using biometrics in mass consumer
applications due to information protection issues, it is believed that
the technology will find its way into wide use in many different
applications.


 1.1 ADVANTAGES OF BIOMETRICS:
  1. It links an event to a particular individual, not just to a
     password or token.
  2. It is very convenient from a user-friendliness point of view,
     since there is nothing to remember, unlike passwords or code
     words.
  3. It can’t be guessed, stolen, shared, lost or forgotten.
  4. It prevents impersonation by protecting against identity theft
     and providing a higher degree of non-repudiation.
  5. It enhances privacy by protecting against unauthorized access
     to personal information.
                 A good biometric is characterized by a feature that is
  highly unique, so that the chance of any two people having the same
  characteristic is minimal; stable, so that the feature does not
  change over time; and easily captured, in order to provide
  convenience to the user and prevent misrepresentation of the
  feature.

1.2 COMMON BIOMETRIC PATTERNS

 1. Fingerprint Recognition
    Features:
    a. Measures characteristics associated with the friction ridge
       pattern on the fingertip.
    b. General ease and speed of use.
    c. Supports both 1:1 verification and 1:N applications

   Considerations:
   a. Ridge patterns may be affected by accidents or aging.


   b. Requires physical contact with the sensor.

2. Facial Recognition
   Features:
   a. Analyzes geometry of the face or the relative distance
      between features.
   b. No physical contact required.
   c. Supports both 1:1 verification and 1:N identification
      applications.

 Considerations:
  a. Can be affected by surrounding lighting conditions.
  b. Appearance may change over time.

3. Hand Geometry
   Features:
   a. Measures dimensions of hand, including shape and length of
      finger.
   b. Very low failure to enroll rate.
   c. Rugged.

  Considerations:
  a. Suitable only for 1:1 contexts.

4. Speech Recognition
   Features:
   a. Compares live speech with a previously created speech model of
      the person's voice.
   b. Measures pitch, cadence and tone to create a voice print.

Considerations:
  a. Background Noise can interfere
  b. Suitable only for 1:1 contexts.


1.3 THE HUMAN IRIS

              The human iris is rich in features which can be used to
quantitatively and positively distinguish one eye from another. The
iris contains many collagenous fibers, contraction furrows,
coronas, crypts, color, serpentine vasculature, striations, freckles,
rifts, and pits. Measuring the patterns of these features and their
spatial relationships to each other provides other quantifiable
parameters useful to the identification process. The iris is unique
because of the chaotic morphogenesis of that organ.
 To quote Dr. John Daugman, “An advantage the iris shares with
fingerprints is the chaotic morphogenesis of its minutiae. The iris
texture has chaotic dimension because its details depend on initial
conditions in embryonic genetic expression; yet, the limitation of
partial genetic diffusion (beyond expression of form, function,
color and general textural quality), ensures that even identical
twins have uncorrelated iris minutiae. Thus the uniqueness of
every iris, including the pair possessed by one individual, parallels
the uniqueness of every fingerprint regardless of whether there is a
common genome.” Given this, the statistical probability that two
irises would be exactly the same is estimated at 1 in 10^72.

1.3. a. STABILITY OF THE IRIS

             Notwithstanding its delicate nature, the iris is protected
behind the eyelid, cornea, aqueous humor, and frequently eyeglasses
or contact lenses (which have negligible effect on the recognition
process). The iris is not normally contaminated with foreign material,
and, human instinct being what it is, the iris, or eye, is one of the
most carefully protected organs in the body. In this environment, and
not subject to the deleterious effects of aging, the features of the
iris remain stable and fixed from about one year of age until death.



1.3. b. STRUCTURE OF IRIS




              Fig no. 1.3.1 Picture of Human Eye


            The iris is a thin circular diaphragm, which lies between
the cornea and the lens of the human eye. A front-on view of the iris
is shown in the figure above. The iris is perforated close to its centre by a
circular aperture known as the pupil. The function of the iris is to
control the amount of light entering through the pupil, and this is
done by the sphincter and the dilator muscles, which adjust the size
of the pupil. The average diameter of the iris is 12 mm, and the
pupil size can vary from 10% to 80% of the iris diameter.
            The iris consists of a number of layers; the lowest is the
epithelium layer, which contains dense pigmentation cells. The
stromal layer lies above the epithelium layer, and contains blood
vessels, pigment cells and the two iris muscles. The density of
stromal pigmentation determines the color of the iris. The
externally visible surface of the multi-layered iris contains two
zones, which often differ in color: an outer ciliary zone and an
inner pupillary zone, divided by the collarette, which appears as a
zigzag pattern.

1.3. c. ADVANTAGES OF USING IRIS PATTERN

1. Highly protected internal organ of the eye.
2. Iris patterns possess a high degree of randomness.
3. Patterns are apparently stable throughout life.
4. Comparatively fast matching technique.




CHAPTER 2

PROJECT INTRODUCTION
2.1. OBJECTIVE
              The objective is to implement an open-source iris
      recognition system in order to verify the claimed performance of
      the technology. Our main system consists of several subsystems.
      The important steps involved are as follows:
1. Segmentation:
      Locates the iris region in the eye image by eliminating the
      unwanted parts.
2. Normalization
      Creating a dimensionally consistent representation of iris
      region.
3. Feature Encoding and Matching
      Creating a template containing only the most discriminating
      features of the iris and matching it.
                    So our main aim is to implement the best possible
algorithm for each step and obtain the required degree of accuracy.

2.2 BASIC ASSUMPTIONS
a) Methods for image segmentation and feature extraction will
   assume all patterns have the same rotation angle.
b) The Iris and the pupil regions are assumed to be perfectly
   concentric circles.
c) The pupil region has a constant intensity throughout.
d) Of the available seven images of a subject, four are considered
   as principal images while three are considered as test images.
e) We have resized the normalized image of each iris into a vector
   of constant dimensions irrespective of its original size.



2.3. DATABASE CHARACTERISTICS
            This work is based on the CASIA iris image database. Its
ethnic distribution is mainly Asian. Each iris class is composed of
7 samples taken in two sessions, three in the first session and four
in the second, with an interval of one month between sessions. Images
are 320x280-pixel gray scale, taken by a digital optical sensor
designed by NLPR (National Laboratory of Pattern Recognition, Chinese
Academy of Sciences). There are 108 classes or irises, for a total of
756 iris images. Each iris image is preprocessed to eliminate the
effects of illumination variations and other noise.




CHAPTER 3
IMAGE SEGMENTATION
            The main aim of the segmentation step is to distinguish
the iris texture from the rest of the eye image. Properly detecting
the inner and outer boundaries of the iris texture is significantly
important in all iris recognition systems. Segmentation is a crucial
step, in that any false representation here may corrupt the resulting
template and lead to poor recognition rates.
            The iris is the annular portion between the pupil (inner
boundary) and the sclera (outer boundary). The iris region can be
approximated by two circles, one for the iris/sclera boundary and
another, interior to the first, for the iris/pupil boundary. The
eyelids and eyelashes normally occlude the upper and lower parts of
the iris region. Also, specular reflections can occur within the iris
region, corrupting the iris pattern. However, our project work is
mainly focused on the CASIA iris database, which does not contain
specular reflections owing to the use of near-infrared illumination.
The success of segmentation depends upon the imaging quality of the
eye image: imaging of the iris must acquire sufficient detail for
recognition while being minimally invasive to the operator.

3.1 Available Techniques
  a. Daugman's Integro-Differential Operator
                       Daugman proposed a method making use of the
  first derivatives of image intensity to signal the location of edges
  that correspond to the borders of the iris. The notion is that the
  magnitude of the derivative across an imaged border will show a
  local maximum due to the local change in image intensity. The limbus
  and pupil are modeled with circular contours. The expected
  configuration of model components is used to fine-tune the image
  intensity derivative information.
      Daugman's operator is expressed as

          max(r, x0, y0) | G_σ(r) * (∂/∂r) ∮(r, x0, y0) I(x, y) / (2πr) ds |

where I(x, y) is an image containing an eye. The operator searches
over the image domain (x, y) for the maximum in the blurred partial
derivative, with respect to increasing radius r, of the normalized
contour integral of I(x, y) along a circular arc ds of radius r and
center coordinates (x0, y0); G_σ(r) is a Gaussian smoothing function
of scale σ. The operator thus behaves as a circular edge detector,
blurred at a scale set by σ, which searches iteratively for the
maximum contour integral derivative with increasing radius, at
successively finer scales of analysis, through the three-parameter
space of center coordinates and radius (x0, y0, r) defining a path of
contour integration.
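
For illustration only (this operator is described here as background,
not as the method we implemented), a brute-force MATLAB sketch of the
search follows. The file name, the centre search grid, the radius
range and the Gaussian width are all assumed values, not figures from
this report:

    % Coarse grid search for Daugman's integro-differential operator.
    I  = im2double(imread('eye.bmp'));         % hypothetical CASIA eye image
    th = linspace(0, 2*pi, 64);                % samples along each circle
    rs = 20:60;                                % assumed radius range (pixels)
    g  = exp(-((-3:3).^2)/2);  g = g/sum(g);   % 1-D Gaussian smoother G_sigma
    best = -inf;  fit = [0 0 0];
    for x0 = 80:4:240                          % assumed centre search grid
      for y0 = 80:4:200
        L = zeros(size(rs));
        for k = 1:numel(rs)                    % contour average at radius r
          xs = round(x0 + rs(k)*cos(th));
          ys = round(y0 + rs(k)*sin(th));
          ok = xs >= 1 & xs <= size(I,2) & ys >= 1 & ys <= size(I,1);
          L(k) = mean(I(sub2ind(size(I), ys(ok), xs(ok))));
        end
        D = conv(diff(L), g, 'same');          % blurred radial derivative
        [v, idx] = max(abs(D));
        if v > best, best = v; fit = [x0, y0, rs(idx)]; end
      end
    end
    fprintf('Boundary circle: centre (%d, %d), radius %d\n', ...
            fit(1), fit(2), fit(3));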

b. Hough Transform
         This is a standard computer vision algorithm that can be used
to determine the parameters of simple geometric objects, such as lines
and circles, present in an image. The circular Hough transform is used
to deduce the radius and center coordinates of the pupil and iris
regions, while the parabolic Hough transform is used to detect the
eyelids. The Hough transform can be demonstrated as follows:




  Fig. no. 3.1.1 Demonstration of Hough Transform

These edge maps, along with the determination of appropriate points in
Hough space, give us the required parameters.
Some problems with the use of the Hough transform are:
1. It is difficult to determine the threshold values to be chosen for
edge detection; critical edge points are sometimes removed, resulting
in a failure to detect circles/arcs.
2. The approach is computationally intensive due to its brute-force
nature, and thus may not be suitable for real-time applications.
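
For reference, a circular Hough search is available off the shelf in
modern MATLAB; a hedged one-liner using the Image Processing Toolbox
function imfindcircles (the file name and the pixel radius range are
assumptions) locates the dark pupil circle:

    % Circular Hough transform via imfindcircles (Image Processing
    % Toolbox). The [20 55] px radius range is an assumed value.
    I = imread('eye.bmp');                     % hypothetical file name
    [centers, radii] = imfindcircles(I, [20 55], 'ObjectPolarity', 'dark');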

c. Other Methods for Segmentation
              Iris localization proposed by Tisse et al. is a
combination of the integro-differential operator and the Hough
transform. The Hough transform is used for a quick guess of the pupil
center, and then the integro-differential operator is used to
accurately locate the pupil and limbus using a smaller search space.
               Lim et al. localize the pupil and limbus using an edge
map of the intensity values of the image. The center of the pupil is
chosen using a bisection method that passes perpendicular lines from
every two points on the edge map; the center point is obtained by
voting for the point that has the largest number of line crossovers.
The pupil and limbus boundaries are then selected by increasing the
radius of a virtual circle around the selected center point and
choosing the two radii at which the virtual circle crosses the maximum
number of edge points as the pupil and limbus radii.

3.2 Our Approach

Our approach is mainly divided into four parts as listed below:
 1. Separation of Pupil Region from Eye image.
 2. Determination of Pupil Coordinates
 3. Determination of Iris Edges based on the above two steps
 4. Removing the unwanted part and getting the segmented part




a. Separation of the Pupil Region
             CASIA database images have been preprocessed, and each
      has a constant-intensity pupil region. First we determine this
      constant value so that we can use it as a threshold to separate
      the pupil region from the eye image. For this we employ the
      following steps:
      • First, we obtain the mean of the image (say 'm').
      • Then we scan the image to find regions having the same pixel
         value for 15 consecutive pixels.
      • Sometimes camera effects may produce bright constant-intensity
         regions on the eyelids. To avoid selecting such a region's
         value as the threshold, we compare each region value with the
         obtained mean 'm' and select the value which is less than the
         mean.
      • After getting the threshold value, we stop scanning the image.
      • Sometimes eyelashes also satisfy the threshold condition, but
         they have a much smaller area than the pupil. Using this
         knowledge and the concept of 8-connected pixels, we cycle
         through all regions and apply the following condition:
                        For each region R:
                              if Area(R) < 1300, set all pixels of R to 0.
      • To the finally obtained pupil image, some statistical MATLAB
         functions are applied and its center coordinates and radius
         are determined. A minimal MATLAB sketch of these steps is
         given below.
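
The following sketch assumes the file name and the exact threshold
rule (the report's run-length scan is stood in for by a fixed fraction
of the mean); the 1300-pixel area limit and 8-connectivity come from
the text above:

    % Minimal MATLAB sketch of the pupil-separation step.
    I = im2double(imread('eye.bmp'));   % hypothetical CASIA eye image
    m = mean(I(:));                     % image mean 'm'
    T = 0.5 * m;                        % stand-in threshold, below the mean
    BW = I < T;                         % candidate pupil pixels
    BW = bwareaopen(BW, 1300, 8);       % if Area(R) < 1300, set R to 0
    s  = regionprops(BW, 'Centroid', 'EquivDiameter');  % region statistics
    cx = s(1).Centroid(1);  cy = s(1).Centroid(2);      % pupil centre
    rp = s(1).EquivDiameter / 2;                        % pupil radius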

b. Determination of Pupil Coordinates
        This function is mainly used to determine the edges of the
pupil. Here we take the help of the MATLAB 'find' function to detect
abrupt changes in image intensity. This also helps us verify the
previously determined pupil parameters using the following relation:

    Center coordinates = |Right/Top edge coordinates − Left/Bottom edge coordinates| / 2
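
A small sketch of this check follows (BW, cx and cy are the pupil mask
and centre from the previous sketch): the half-distance between
opposite edges reproduces the radius, and their midpoint the centre
coordinate, verifying the earlier estimate.

    % 'find'-based edge check along the row through the pupil centre.
    row    = BW(round(cy), :);          % binary pupil row
    left   = find(row, 1, 'first');     % abrupt rise: left pupil edge
    right  = find(row, 1, 'last');      % abrupt fall: right pupil edge
    r_est  = abs(right - left) / 2;     % the report's half-distance formula
    cx_est = (left + right) / 2;        % midpoint verifies the centre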


c. Determination of Iris Edges
          This step is used to obtain the contour of the iris. Here we
   assume that the iris and pupil are concentric circular regions. It
   takes into consideration that the areas of the iris to the right
   and left of the pupil are the ones most often visible for data
   extraction. The areas above and below the pupil also carry unique
   information, but it is very common for them to be totally or
   partially occluded by eyelashes or eyelids.
         First, we enhance the iris image so that the edges are more
   prominent and their detection becomes simpler. For this we employ
   histogram equalization, which gives an image with increased dynamic
   range and therefore higher contrast. Then we follow the steps
   listed below to get the iris edges:
      • First, we get the pupil edges from the previously explained
          functions.
      • Then we start the process 30 pixels to the left of the left
          pupil edge (call this point 'a'), so as to avoid abrupt
          intensity changes in the collarette region, which may
          mislead our algorithm. We form a vector containing the image
          intensity at point 'a' and at the 4 pixels above and below
          it, and obtain the mean of this vector (say m1).
      • Another vector is formed containing the image intensity at
          point 'a−1' and the 3 points above and below it, and its
          mean is obtained (say m2).
      • Now we compute m = |m1 − m2|.
      • If m > 0.04, there may be an abrupt change in image intensity.
      • To avoid false detections, which may occur when the iris
          region is corrupted for some reason, we obtain a similar
          vector for 'a−2' as well, calculate its mean, and store it
          in m2.
      • The comparison is repeated, and if the condition is satisfied
          10 times consecutively we conclude that this is the left
          edge of the iris and stop further calculation. We calculate
          this edge's distance from the pupil center, giving the iris
          radius towards the left, i.e. r1.
      • The above steps are repeated for the right part as well,
          except that we start 30 pixels to the right of the rightmost
          pupil edge. This gives the radius towards the right, i.e. r2.
      • We take the larger of r1 and r2, so as to avoid any loss of
          data. A simplified sketch of this scan is given below.
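
The sketch below condenses the consecutive-hit logic, so treat it as
an outline under the variable names of the earlier sketches (I, cx,
cy, left) rather than our exact code:

    % Scan leftwards from 30 px left of the left pupil edge, comparing
    % means of short vertical windows until 10 consecutive hits occur.
    E    = histeq(I);                   % contrast enhancement, as above
    a    = left - 30;                   % starting column, past the collarette
    hits = 0;
    while a > 2 && hits < 10
        m1 = mean(E(round(cy)-4 : round(cy)+4, a));    % 9-px window at 'a'
        m2 = mean(E(round(cy)-3 : round(cy)+3, a-1));  % 7-px window at 'a-1'
        if abs(m1 - m2) > 0.04          % possible abrupt intensity change
            hits = hits + 1;
        else
            hits = 0;                   % require 10 hits in a row
        end
        a = a - 1;
    end
    r1 = abs(cx - a);                   % iris radius towards the left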

   d. Removal of the Unwanted Part of the Image

                  The data obtained so far can now be used to
eliminate portions other than the iris region. This is done by
approximating the iris as a circle centered at the pupil center; any
region outside this circle is deleted, and we finally get the
segmented image as shown:




            Fig no. 3.2.1 Fully Segmented Iris Image




CHAPTER 4
NORMALIZATION
                              The main aim of normalization is to
  transform the iris region into fixed dimensions, in order to
  nullify the effect of dimensional inconsistencies. These
  inconsistencies result from changes in image distance and angular
  position with respect to the camera. Other sources of inconsistency
  include stretching of the iris caused by pupil dilation under
  varying levels of illumination, head tilt, and rotation of the eye
  within the eye socket. Normalization is a very important step, in
  that the success of subsequent steps depends largely on its
  accuracy.

  4.1 BASIC MOTIVATION
   Daugman's Rubber Sheet Model
     This method is based on remapping each point within the iris
    region to a pair of polar coordinates (r, θ), where r lies on the
    interval [0, 1] and θ is an angle in [0, 2π]. The method
    compensates for the unwanted variations due to the distance of the
    eye from the camera (scale) and its position with respect to the
    camera (translation). The Cartesian-to-polar transform is defined
    as

        I(x(r, θ), y(r, θ)) → I(r, θ)

    where

        x(r, θ) = (1 − r) · xp(θ) + r · xi(θ)
        y(r, θ) = (1 − r) · yp(θ) + r · yi(θ)

  Here I(x, y) is the iris region image, (x, y) are the original
Cartesian coordinates, (r, θ) are the corresponding normalized polar
coordinates, and (xp(θ), yp(θ)) and (xi(θ), yi(θ)) are the pupil and
iris boundaries along the θ direction. A pictorial depiction of this
model is shown below.




           Fig no. 4.1.1 Daugman's Rubber Sheet Model

  This rubber sheet model takes into account pupil dilation and size
  inconsistencies, in order to produce a normalized representation
  with constant dimensions. However, it does not compensate for
  rotational inconsistencies. In Daugman's system, rotation is
  accounted for during matching, by shifting the iris template in the
  θ direction until the templates are aligned.


  4.2 OUR APPROACH:

          Our algorithm involves the polar conversion of the iris
  image. The polar conversion accounts for rotational inconsistencies
  in the iris image. In this algorithm we map only the annular iris
  region coordinates into a fixed-dimension vector, thus eliminating
  the unwanted and redundant pupil and other non-iris regions.
  The algorithm can be explained as follows:

      1. Finding the incremental values of the radius and the rotation
         angle.
      2. Creating a pseudo-polar mesh grid of fixed dimensions using
         the incremental values and the radius of the iris region.
      3. Separating only the annular iris ring, thus eliminating the
         pupil and other regions.
      4. Converting each coordinate of the iris region into its
         equivalent polar coordinate.
      5. Mapping each iris coordinate onto the polar grid using linear
         interpolation.
      6. The iris region is thus mapped into a vector of fixed
         dimensions of size 100x360, which can be seen as shown below;
         a code sketch of this mapping follows the figure.




           Fig no. 4.2.1 Fully and Enhanced Normalized Image
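
A minimal MATLAB sketch of this mapping is given below, assuming cx,
cy, rp (pupil centre and radius) and ri (iris radius) from the
segmentation stage, and I as the double-valued eye image; the 100x360
size is the fixed dimension named in step 6:

    % Unwrap the annular iris ring into a fixed 100x360 polar grid.
    nR = 100;  nT = 360;                       % fixed output dimensions
    r  = linspace(rp, ri, nR).';               % radial samples, pupil -> iris
    th = linspace(0, 2*pi, nT);                % angular samples
    [TH, R] = meshgrid(th, r);                 % pseudo-polar mesh grid
    Xq = cx + R .* cos(TH);                    % polar -> Cartesian
    Yq = cy + R .* sin(TH);
    polarIris = interp2(I, Xq, Yq, 'linear');  % linear interpolation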




CHAPTER 5
FEATURE EXTRACTION
      The normalized image is used to extract the unique features of
the iris. Each iris is characterized by its unique collarette and
texture variation patterns; hence it is essential to extract these
features to represent each iris distinctively. Most iris recognition
systems make use of a band-pass decomposition of the iris image to
create a biometric template. This step is responsible for extracting
the patterns of the iris, taking into account the correlation between
adjacent pixels.

5.1 BASIC TECHNIQUE
 a. THEORY OF WAVELETS
                Wavelets are basis functions w_jk(t) in continuous
time. A basis is a set of linearly independent functions that can be
used to produce all admissible functions f(t):

    f(t) = combination of basis functions = Σ_jk b_jk · w_jk(t)

The special feature of wavelets is that all the basis functions are
constructed from a single mother wavelet w(t).
The wavelet transform overcomes the resolution problem of traditional
Fourier transform techniques by using a variable-length window.
Techniques like the short-time Fourier transform divide the signal
into short time domains; the Fourier transform of the signal is then
computed in each domain around a particular center frequency. However,
this leads to a time-resolution problem, which motivated the wavelet
approach. Analysis windows of different lengths are used for different
frequencies:
         Analysis of high frequencies → use narrower windows, for
         better time resolution.
         Analysis of low frequencies → use wider windows, for better
         frequency resolution.


This works well, if the signal to be analyzed mainly consists of
   slowly varying characteristics with occasional short high
   frequency bursts.
   The function used to window the signal is called the wavelet, and
   the continuous wavelet transform is

       CWT_x^ψ(τ, s) = Ψ_x^ψ(τ, s) = (1/√|s|) ∫ x(t) · ψ*((t − τ)/s) dt



[Figure: two-level wavelet filter bank. On the decomposition side, the
signal x[n] passes through half-band high-pass and low-pass analysis
filters, each followed by downsampling by 2, giving
y_high[k] = Σ_n x[n]·g[−n + 2k] and y_low[k] = Σ_n x[n]·h[−n + 2k];
the low-pass branch is decomposed again. On the reconstruction side,
each branch is upsampled by 2, filtered with the synthesis filters G
and H, and summed to recover x[n].]

                Fig no. 5.1.1 2-Level Decomposition of Wavelet

b. Discrete Wavelet Transform
     The DWT analyzes the signal at different frequency bands
with different resolutions by decomposing the signal into coarse
approximation and detail information. DWT employs two sets of
functions, called scaling functions and wavelet functions, which
are associated with low pass and high pass filters, respectively.
The decomposition of signal into different frequency bands is
simply obtained by successive high pass and low pass filtering of
the time domain signal. The original signal x[n] is first passed
through a half band high pass filter g[n] and low pass filter h[n].
 After the filtering, half of the samples can be eliminated according
 to Nyquist's rule, since the signal now has a highest frequency of
 π/2 radians instead of π. The signal can therefore be subsampled by
 discarding every other sample. This constitutes one level of
 decomposition and can be expressed mathematically as follows:

     y_high[k] = Σ_n x[n] · g[2k − n]
     y_low[k]  = Σ_n x[n] · h[2k − n]

 where y_high[k] and y_low[k] are the outputs of the high-pass and
 low-pass filters, respectively, after subsampling by 2.

 This decomposition halves the time resolution, since only half the
 number of samples now characterizes the entire signal. However, it
 doubles the frequency resolution, since the frequency band of the
 signal now spans only half the previous band, effectively reducing
 the uncertainty in frequency by half. The above procedure, also known
 as sub-band coding, can be repeated for further decomposition. At
 every level, the filtering and subsampling result in half the number
 of samples (and hence half the time resolution) and half the
 frequency band spanned (and hence double the frequency resolution).
 One level of this split is sketched below.
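
In MATLAB's Wavelet Toolbox this sub-band split is a single call per
level; a small self-contained sketch on a synthetic signal (the signal
and its length are illustrative choices):

    % One level of sub-band coding: half-band low-pass and high-pass
    % branches, each subsampled by 2 to roughly half the input length.
    x = rand(1, 256);                    % example signal
    [cA, cD]   = dwt(x, 'db2');          % approximation and detail, level 1
    [cA2, cD2] = dwt(cA, 'db2');         % level 2: split the approximation
    xr = idwt(cA, cD, 'db2', numel(x));  % perfect reconstruction check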


c. VARIOUS TYPES OF WAVELETS
1. Haar Wavelet
       This approach uses the wavelet transform to extract features
 from the iris region. Both the Gabor transform and the Haar wavelet
 have been considered as the mother wavelet. From multi-dimensional
 filtering, a feature vector with 87 dimensions is computed. Since
 each dimension has a real value ranging from -1.0 to +1.0, the
 feature vector is sign-quantized so that any positive value is
 represented by 1 and any negative value by 0. This results in a
 compact biometric template consisting of only 87 bits.

2. Daubechies Wavelet
       These are a family of orthogonal wavelets defining a DWT and
 characterized by a maximal number of vanishing moments for a given
 support. With each wavelet of this class there is associated a
 scaling function (also called the father wavelet) which generates an
 orthogonal multiresolution analysis. An orthogonal wavelet is one
 whose associated wavelet transform is orthogonal, i.e. the inverse
 wavelet transform is the adjoint of the wavelet transform.
        In general, the Daubechies wavelets are chosen to have the
 highest number A of vanishing moments (this does not imply the best
 smoothness) for a given support width N = 2A; among the 2^(A-1)
 possible solutions, the one whose scaling filter has extremal phase
 is chosen. These wavelets are widely used in solving a broad range of
 problems, for example detecting the self-similarity of a signal or
 signal discontinuities. The Daubechies orthogonal wavelets D2-D20 are
 commonly used. The index number refers to the number N of
 coefficients; each wavelet has a number of vanishing moments equal to
 half the number of coefficients.


5.2 OUR APPROACH:
        We have applied a 5-level wavelet transform and analyzed the
results for several types of wavelets: Haar, Daubechies and
biorthogonal (bior). We found the best results for Daubechies
wavelets, in particular the db2 type. In each case, the feature size
varied depending upon the type of wavelet applied. Normally the final
template size should not be a function of the wavelet type; but since
our image size is not an exact power of two (it is 100x360),
successive downsampling results in the loss of some pixels. This
causes the template size to depend on the type of wavelet and also on
the number of levels. The following figure shows a 4-level db2 wavelet
applied to the eye image.

Fig no. 5.2.1 Application of db2 Wavelet to Eye image
The results obtained by applying different types of wavelet are shown
separately in the Results section.
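
As a hedged sketch of this encoding step (the report does not spell
out which coefficients make up the final template, so keeping only the
coarsest approximation band is an illustrative choice; polarIris is
the 100x360 normalized image from Chapter 4):

    % 5-level 2-D db2 decomposition of the normalized iris image.
    [C, S]   = wavedec2(polarIris, 5, 'db2');  % Wavelet Toolbox
    A5       = appcoef2(C, S, 'db2', 5);       % coarsest approximation band
    template = A5(:);                          % column feature vector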




CHAPTER 6
MATCHING

           To compare the templates obtained by the feature extraction
process, a number of distance metrics are available. Some of them are
explained below:

6.1 Different Distance Metrics
                       If x and y are two d-dimensional feature
vectors of the database image and the query image respectively, then
the distance metrics are defined as follows:
   a. Euclidean or L2 metric:

       d(x, y) = √( Σ_{i=1..d} (x_i − y_i)² )
 The Euclidean distance is not always the best metric: because the
 distances in each dimension are squared before summation, great
 emphasis is placed on those features for which the dissimilarity is
 large.

 b. Weighted Euclidean distance metric:
        The weighted Euclidean distance (WED) can be used to compare
two templates, especially if the template is composed of integer
values. It gives a measure of how similar a collection of values is
between two templates. This metric is specified as

    WED(k) = Σ_{i=1..N} (f_i − f_i^(k))² / (δ_i^(k))²

 where f_i is the i-th feature of the unknown iris, f_i^(k) is the
 i-th feature of iris template k, and δ_i^(k) is the standard
 deviation of the i-th feature in iris template k. The unknown iris is
 found to match iris template k when WED is a minimum at k.

c. The Manhattan or L1 metric:

       d(x, y) = Σ_{i=1..d} |x_i − y_i|

   The Manhattan distance metric uses the sum of the absolute
   differences in each feature, rather than their squares, as the
   overall measure of dissimilarity. Obviously the distance of an
   image from itself is zero. The distances are then stored in
   increasing order, and the closest sets of patterns are retrieved.
   In the ideal case, all of the top 16 retrievals are from the same
   large image. The performance is measured in terms of the average
   retrieval rate, defined as the average percentage of patterns
   belonging to the same image as the query pattern among the top 16
   matches.

   d. Hamming distance metric:
         The Hamming distance gives a measure of how many bits are the
   same between two bit patterns. Using the Hamming distance of two
   bit patterns, a decision can be made as to whether the two patterns
   were generated from different irises or from the same one. In
   comparing the bit patterns X and Y, the Hamming distance HD is
   defined as the sum of disagreeing bits (the exclusive-OR of X and
   Y) divided by N, the total number of bits in the bit pattern:

       HD = (1/N) Σ_{j=1..N} X_j ⊕ Y_j
Since an individual iris region contains features with high degrees of
freedom, each iris region produces a bit pattern which is independent
of that produced by another iris; on the other hand, two iris codes
produced from the same iris will be highly correlated. If two bit
patterns are completely independent, such as iris templates generated
from different irises, the Hamming distance between the two patterns
should equal 0.5. This occurs because independence implies the two bit
patterns are totally random, so there is a 0.5 chance of any bit being
set to 1, and vice versa; therefore half of the bits will agree and
half will disagree between the two patterns. If two patterns are
derived from the same iris, the Hamming distance between them will be
close to 0.0, since they are highly correlated and the bits should
agree between the two iris codes.
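
For two equal-length logical bit templates X and Y (toy values below,
for illustration only), this is a one-line computation in MATLAB:

    % HD: fraction of disagreeing bits between logical templates.
    X = logical([1 0 1 1 0 1]);  Y = logical([1 1 1 0 0 1]);
    HD = sum(xor(X, Y)) / numel(X);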

e. Canberra distance metric:
       The Canberra distance metric is given by

       d(x, y) = Σ_{i=1..d} |x_i − y_i| / (|x_i| + |y_i|)

         In this metric, the numerator signifies the difference and
the denominator normalizes it. Each term therefore never exceeds one,
being equal to one whenever either of the attributes is zero. This
makes it a good expression to use, as it avoids scaling effects.




6.2 Our Approach:
           Running the experiment with different distance metrics on
the same set of images, we were able to determine which metric gives
the best result. The Canberra distance metric performed exceptionally
well compared with the other distance metrics. This is because, in its
equation, the numerator signifies the difference while the denominator
normalizes it, so each per-feature distance never exceeds one, being
equal to one whenever either of the attributes is zero.
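
A minimal MATLAB sketch of the Canberra distance we used follows; x
and y are equal-length feature vectors, and guarding the 0/0 case with
eps is our own illustrative choice, not something discussed above:

    function d = canberra(x, y)
    % Canberra distance: each term is a normalized absolute
    % difference that never exceeds one.
        num = abs(x - y);
        den = abs(x) + abs(y);
        d   = sum(num ./ max(den, eps));   % eps avoids division by zero
    end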




CHAPTER 7
ENROLLMENT AND IDENTIFICATION
7.1   ENROLLMENT PHASE
a. PROBLEM STATEMENT
    The enrollment phase in any biometric system is required to
validate its users. If an authentic person who is supposed to use the
system does not have his images registered in the database, then every
time the person tries to enter the system, the biometric system
rejects him and refrains from providing access. Hence a person worthy
of access is denied the service. In such cases it becomes necessary to
first enroll the eligible persons into the database for their future
identification. Our enrollment algorithm is aimed at solving this
problem.




 Fig no. 7.1.1 General Iris identification system

b. STEPS:
    The enrollment phase of the project involves checking whether a
subject is already enrolled in the database and, if not, getting the
subject enrolled. Here a basic graphical user interface window is
implemented. Of the available seven images of a subject, three are
considered as principal images and the remaining four are used for
testing the enrollment. The basic steps are highlighted as follows:

1.      A test image of a subject is fed to the algorithm.
2.      The image is subjected to the series of segmentation,
normalization and feature-extraction steps to form a feature vector
template.
3.      This is used to compute the distance between the template and
the combined pattern of each subject, using the Canberra distance
metric.
4.      The distances are stored in a vector.
5.      This vector is then sorted in ascending order.
6.      The minimum distance indicates the class of images to which
the test image belongs.
7.      The test image is then compared once again with the principal
images of that class, and the distance between them is computed.
8.      If this distance is less than the one obtained for the
combined pattern of the class, then we can establish that the image
indeed belongs to that class and hence is already enrolled.
9.      If the distance computed in step 7 is greater than the
combined distance, we conclude that the image does not belong to that
class and is not enrolled.
10.     In case the test image is not enrolled, it is enrolled by
storing the subject's patterns in the database. A sketch of this
decision logic is given below.
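
The sketch below assumes an illustrative data layout: a matrix
'patterns' holding one combined pattern per class, and 'principals'
holding one combined principal template per class; it reuses the
canberra helper sketched in Chapter 6. All names are assumptions, not
our exact code:

    % Nearest-class search followed by the principal-image check.
    d = zeros(size(patterns, 1), 1);
    for k = 1:size(patterns, 1)
        d(k) = canberra(template, patterns(k, :).');  % distance per class
    end
    [dmin, cls] = min(d);                             % candidate class
    dp = canberra(template, principals(cls, :).');    % principal distance
    if dp < dmin
        fprintf('Already enrolled (class %d)\n', cls);
    else
        fprintf('Not enrolled; adding subject to the database\n');
    end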


7.2 IDENTIFICATION PHASE
a. PROBLEM STATEMENT

    This phase verifies the claimed identity of an individual. A
person who wants to gain access to the system claims to be a
particular authentic person; the biometric system checks the claim and
establishes whether he is indeed the person he claims to be or an
impostor. The following algorithm solves this problem.

b. STEPS

   This algorithm is implemented using a graphical user interface.
1.      A test image is input through the graphic interface.
2.      The image is subjected to the processing steps of
segmentation, normalization and feature extraction to retrieve the
feature vector.
3.      This is used to compute the distance between the template and
the combined pattern of each subject, using the Canberra distance
metric.
4.      The distances are stored in a vector.
5.      This vector is then sorted in ascending order.
6.      The minimum distance indicates the class of images to which
the test image belongs.
7.      The test image is then compared once again with the principal
images of that class, and the distance between them is computed.
8.      If this distance is less than the one obtained for the
combined pattern of the class, then we can establish that the image
indeed belongs to that class, and hence the claimed identity of the
person is established.
9.      If the distance computed in step 7 is greater than the
combined distance, we conclude that the image does not belong to that
class, and hence the person is an impostor.




CHAPTER 8

CONCLUSION
8.1 Summary of work

        This project work presents an iris recognition system, tested
on the CASIA iris image database, in order to verify the claimed
performance of iris recognition technology. Analysis of the developed
system has revealed a number of interesting conclusions. The accuracy
of biometric identification systems is specified in terms of the FAR
(False Acceptance Rate) and the FRR (False Rejection Rate). The FAR
measures how often a non-authorized user, who should not be granted
access, is falsely recognized, while the FRR measures how often an
authorized user, who should have been granted access, is not
recognized.


 a. Results for applying different types of Wavelets

     The following table shows the effect of applying various types of
     wavelet:

Wavelet Type   Vector Size   FAR   FRR   No. of Faulty Subjects
Db3            764x1          5    130   74
Db1            418x1         17     87   52
Db4            1020x1        15    133   71
Bior 4.4       1047x1         4     87   52
Bior 1.1       418x1         17     87   52
Db2            477x1          1     73   30

Thus we can conclude that db2 gives the best possible results and so
we have selected this type of Wavelet.

b.     Results for Overall Database

  After analyzing the results over the complete database, we found the
  following:

  False Acceptance Rate = 1
  False Rejection Rate = 73
  Number of Faulty Subjects = 41

           Thus our project has a very small FAR, which is what a
  reliable biometric system should possess. However, our FRR is
  comparatively quite high, and some measures can be taken to reduce
  it.

8.2 Suggested Improvements
1.    Segmentation is a very crucial step in iris recognition systems,
so any increase in its accuracy will improve the overall results. To
improve the segmentation algorithm, a more elaborate eyelid and
eyelash detection system could be implemented.
2.    Presently our template size varies with the type of wavelet
applied; some measures could be taken to avoid this effect.
                    Our project's aim was mainly to achieve a very low
      FAR, which we have achieved to a large extent. Our segmentation
      stage also has very high accuracy and has worked satisfactorily
      over the entire CASIA database. Since we restricted ourselves to
      a software implementation and did not take hardware implications
      into consideration, our efforts have resulted in an efficient
      recognition system.




CHAPTER 9

GUI Using MATLAB

                Since MATLAB provides a very easy way of implementing
     a GUI (Graphical User Interface), we have prepared a GUI which
     gives a general idea of the work we have done. Some of its
     details follow:
1. Starting Window:




     This is the starting window of our GUI, which provides links to
     the various phases of our project. The results for the overall
     database can be obtained by clicking the 1st pushbutton. For
     individual analysis of the database, we have provided an option
     in the form of the 2nd pushbutton. The different steps in pattern
     formation can be seen by clicking the 3rd pushbutton.
     2. Results for Complete Database


                              The above window shows the overall
database results. The update facility is provided so that the code can
be run again over the entire database, in case any changes are made to
the original code.




3. Individual Enrollment

This window allows us to analyze each person individually, by
enrolling that person separately and then checking whether we get
proper results for each image of that person in the database. Here we
have used four images for pattern formation and kept the remaining
three images as test images.

4. Steps in Pattern formation

This window provides a detailed analysis of pattern formation. Here we
have made provisions to view the pupil separation, the iris
segmentation and the normalized image for any image in the database.




CHAPTER 10
REFERENCES
1. JOURNALS AND CONFERENCE PAPERS
   • Daugman, J., How Iris Recognition Works, IEEE Transactions on
     Circuits and Systems for Video Technology, Vol. 14, No. 1,
     January 2004.

  •   Masek, L., Recognition of Human Iris Patterns for Biometric
      Identification,
      [http://www.csse.uwa.edu.au/~pk/studentprojects/libor].

  •   CASIA iris image database, Institute of Automation, Chinese
      Academy of Sciences, [http://www.sinobiometrics.com]

  •   Wildes, R., Iris Recognition: An Emerging Biometric Technology,
      Proceedings of the IEEE, Vol. 85, No. 9, 1997.

  •   Daugman, J., Biometric Personal Identification System Based on
      Iris Analysis, United States Patent No. 5,291,560, March 1994.

  •   Daugman, J., High Confidence Visual Recognition of Persons by a
      Test of Statistical Independence, IEEE Transactions on Pattern
      Analysis and Machine Intelligence, Vol. 15, No. 11, 1993.

  •   Boles, W.W., A Security System Based on Human Iris
      Identification Using Wavelet Transform, Engineering Applications
      of Artificial Intelligence, 11:77-85, 1998.

2. OTHER REFERENCES
   • Digital Image Processing Using MATLAB, by Rafael C. Gonzalez,
     Richard E. Woods and Steven L. Eddins.
   • The Wavelet Tutorial, by Robi Polikar.

                                                                  43
44

Iris based Human Identification

  • 1.
    A PROJECT REPORT ON “IRIS BASED HUMAN RECOGNITION SYSTEM” SUBMITTED BY: MR. DESAI GAURAV S. MR. JOSHI SUSHIL D. MS. IYER ANANDHI G. MR.WAZALWAR DHAWAL S. (BE. EXTC) GUIDED BY: MR. R.R. MANTHALKAR SGGS INSTITUTE OF ENGINEERING AND TECHNOLOGY, VISHNUPURI, NANDED SWAMI RAMANAND TEERTH MARATHWADA UNIVERSITY, NANDED 1
  • 2.
    CERTIFICATE THISIS TO CERTIFY THAT THE PROJECT REPORT ENTITILED “IRIS BASED HUMAN RECOGNITION SYSTEM” SUBMITED BY MR. DESAI GAURAV S. MR. JOSHI SUSHIL D. MS. IYER ANANDHI G. MR.WAZALWAR DHAWAL S. (BE. EXTC) HAS BEEN COMPLETED AS PER THE REQUIREMENT OF “SWAMI RAMANAND TEERTH MARATHWADA UNIVERSITY, NANDED”. IN PARTIAL FULFILMENT OF THE DEGREE B.E. (Electronic and Telecommunication Engineering) FOR THE ACADEMIC YEAR 2006-2007. GUIDE HEAD OF DEPT. DR. R.R MANTHALKAR Prof. A.N .KAMTHANE 2
  • 3.
    INDEX • ACKNOWLEDGEMENT………………………………........4 • ABSTRACT………………………………………………......5 • LIST OF FIGURES……………………………………..........6 • INTRODUCTION TO BIOMETRICS ………………………7 o ADVANTAGES OF BIOMETRICS………………………………8 o COMMON BIOMETRIC PATTERNS……………………………9 o THE HUMAN IRIS………………………………………………..10 • PROJECT INTRODUCTION…………………………...…..14 o OBJECTIVE………………………………………………………14 o BASIC ASSUMPTIONS………………………………………….14 o DATABASE CHARACTERISTICS……………………………...15 • IMAGE SEGMENTATION ………………………….…….16 o AVAILABLE TECHNIQUE…………………………….……….16 o OUR APPROACH……………………………………….……….18 • IMAGE NORMALIZATION ………………………………23 o BASIC MOTIVATION…………………………………..………23 o OUR APPROACH……………………………………… ……….24 • FEATURE EXTRACTION ………………………………...26 o BASIC TECHNIQUE……………………………………………26 o OUR APPROACH…………………………………………….....29 • MATCHING …………………………………….…………31 o DIFFERENT DISTANCE METRICS…………………………..31 o OUR APPROACH………………………………………………34 • ENROLLMENT AND IDENTIFICATION…….…….........35 o ENROLLMENT PHASE………………………………………..35 o IDENTIFICATION PHASE………………………………….....36 3
  • 4.
    CONCLUSION …………………………………………….37  SUMMARY OF WORK………………………………37  SUGGESTED IMPROVEMENTS…………………….38 • GRAPHIC USER INTERFACE……………………...…….39 • REFERENCES …………………………………………......43 ACKNOWLEDGEMENT 4
  • 5.
    For the accomplishmentof any Endeavour the most important factor besides Hard-work & Dedication is the Proper Guidance. Just as a light-house provides guidance to the sea voyagers even in the extreme stormy conditions the same job has been done by our guide Dr. R.R. Manthalkar. We heartily thank him for his timely support & guidance. His keen interest & enthusiasm in executing the project has always been a source of inspiration for us. We would like to thank our Head of department Prof. A.N Kamthane and also Dr. M.B.kokare for their moral support & encouragement. We would also take this opportunity to extend our sincere thanks to all the other professors of our department for their direct or indirect help extended by them which was a cap in the feather of our efforts. We would also like to thank all our friends who also played an instrumental role in the completion of our project. Last but not the least we would like to thank our parents who always support us in our efforts. MR. DESAI GAURAV S. MR. JOSHI SUSHIL D. MS. IYER ANANDHI G. MR.WAZALWAR DHAWAL S. (BE. EXTC) 5
  • 6.
    ABSTRACT The successful implementation of any system is largely determined by its reliability, authenticity & the amount of secrecy it provides. In today’s highly techno world where security & privacy are the concerns of prime importance the crucial systems must employ techniques to achieve this. Our project is just a small step towards this. The Iris based system can cope up with a lot of the individual biological variations & still provide the identification system with much accuracy & reliability. In this project we have designed system which involves recognition of a person using IRIS as biometric parameter. We have first segmented pupil & iris structure from the original eye image. Then we have normalized it to build a feature vector which characterizes each iris distinctly. This feature vector is then used for matching among various templates & identifies the individual. The work provided in this project is mainly aimed at providing a recognition system in order to verify the uniqueness of the human iris and also its performance as a biometric. The paper has implementation of all the algorithms on the CASIA database. The various results of the different implementations and their accuracies have been tested in this paper. All in all this paper is a sincere effort in suggesting an efficient system for the implementation of the Human Identification system based on IRIS recognition. LIST OF FIGURES 6
  • 7.
Fig no. 1.3.1  Picture of Human Eye
Fig no. 3.1.1  Demonstration of Hough Transform
Fig no. 3.2.1  Fully Segmented Iris Image
Fig no. 4.1.1  Daugman's Rubber Sheet Model
Fig no. 4.2.1  Fully and Enhanced Normalized Image
Fig no. 5.1.1  2-Level Decomposition of Wavelet
Fig no. 5.2.1  Application of db2 Wavelet to Eye Image
Fig no. 7.1.1  General Iris Identification System

CHAPTER 1
INTRODUCTION TO BIOMETRICS
In this vastly interconnected society, establishing the true identity of a person is becoming a most critical issue. Questions like "Is she really who she claims to be?", "Is this person authorized to use this facility?" or "Is he on the watch list posted by the government?" are routinely asked in a variety of scenarios, ranging from issuing a driver's license to gaining entry into a country.

With the advent of newer networking technologies, the sharing of vast audio and video resources has become much easier, but this has raised concerns over the security of transactions. The need for reliable user authentication techniques has increased in the wake of heightened concerns about security and of rapid advances in networking, communication and mobility.

Biometrics, defined as the science of recognizing an individual based on his or her physiological, biological or behavioral traits, is beginning to gain acceptance as a legitimate method for determining an individual's identity. Biometrics aims to accurately identify each individual using physiological or behavioral characteristics such as fingerprints, face, iris, retina, gait, palm prints and hand geometry.

Developments in science and technology have made it possible to use biometrics in applications where it is required to establish or confirm the identity of individuals. Applications such as passenger control in airports, access control in restricted areas, border control, database access and financial services are some of the examples where biometric technology has been applied for more reliable identification and verification. In recent years, biometric identity cards and passports based on iris, fingerprint and face recognition technologies have been issued in some countries to improve border-control processes and to simplify passenger travel at airports. In the field of financial services, biometric technology has shown great potential for offering more comfort to customers while increasing their security. Although there are still some concerns about using biometrics in mass consumer applications due to
information-protection issues, it is believed that the technology will find its way into many different applications.

1.1 ADVANTAGES OF BIOMETRICS

1. It links an event to a particular individual, not just to a password or token.
2. It is very convenient from the user's point of view, since there is nothing to remember, unlike passwords or code words.
3. It cannot be guessed, stolen, shared, lost or forgotten.
4. It prevents impersonation by protecting against identity theft and providing a higher degree of non-repudiation.
5. It enhances privacy by protecting against unauthorized access to personal information.

A good biometric is characterized by a feature that is highly unique, so that the chance of any two people having the same characteristic is minimal; stable, so that the feature does not change over time; and easily captured, in order to provide convenience to the user and to prevent misrepresentation of the feature.

1.2 COMMON BIOMETRIC PATTERNS

1. Fingerprint Recognition
Features:
a. Measures characteristics associated with the friction-ridge pattern on the fingertip.
b. General ease and speed of use.
c. Supports both 1:1 verification and 1:N identification applications.
Considerations:
a. Ridge patterns may be affected by accidents or aging.
b. Requires physical contact with the sensor.

2. Facial Recognition
Features:
a. Analyzes the geometry of the face or the relative distances between facial features.
b. No physical contact required.
c. Supports both 1:1 verification and 1:N identification applications.
Considerations:
a. Can be affected by surrounding lighting conditions.
b. Appearance may change over time.

3. Hand Geometry
Features:
a. Measures the dimensions of the hand, including the shape and length of the fingers.
b. Very low failure-to-enroll rate.
c. Rugged.
Considerations:
a. Suitable only for 1:1 contexts.

4. Speech Recognition
Features:
a. Compares live speech with a previously created speech model of the person's voice.
b. Measures pitch, cadence and tone to create a voice print.
Considerations:
a. Background noise can interfere.
b. Suitable only for 1:1 contexts.
1.3 THE HUMAN IRIS

The human iris is rich in features which can be used to quantitatively and positively distinguish one eye from another. The iris contains collagenous fibers, contraction furrows, coronas, crypts, color, serpentine vasculature, striations, freckles, rifts and pits. Measuring the patterns of these features and their spatial relationships to each other provides quantifiable parameters useful to the identification process. The iris is unique because of the chaotic morphogenesis of that organ. To quote Dr. John Daugman:

"An advantage the iris shares with fingerprints is the chaotic morphogenesis of its minutiae. The iris texture has chaotic dimension because its details depend on initial conditions in embryonic genetic expression; yet, the limitation of partial genetic diffusion (beyond expression of form, function, color and general textural quality), ensures that even identical twins have uncorrelated iris minutiae. Thus the uniqueness of every iris, including the pair possessed by one individual, parallels the uniqueness of every fingerprint regardless of whether there is a common genome."

Given this, the statistical probability that two irises would be exactly the same is estimated at 1 in 10^72.

1.3.a STABILITY OF THE IRIS

Notwithstanding its delicate nature, the iris is protected behind the eyelid, cornea and aqueous humor, and frequently by eyeglasses or contact lenses (which have a negligible effect on the recognition process). An iris is not normally contaminated with foreign material, and, human instinct being what it is, the iris, or eye, is one of the most carefully protected organs of the body. In this environment, and not being subject to the deleterious effects of aging, the features of the iris remain stable and fixed from about one year of age until death.
1.3.b STRUCTURE OF THE IRIS

Fig no. 1.3.1 Picture of Human Eye

The iris is a thin circular diaphragm which lies between the cornea and the lens of the human eye. A front-on view of the iris is shown in the figure. The iris is perforated close to its centre by a circular aperture known as the pupil. The function of the iris is to control the amount of light entering through the pupil, and this is done by the sphincter and dilator muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter.

The iris consists of a number of layers; the lowest is the epithelium layer, which contains dense pigmentation cells. The stromal layer lies above the epithelium layer and contains blood vessels, pigment cells and the two iris muscles. The density of stromal pigmentation determines the color of the iris. The externally visible surface of the multi-layered iris contains two zones, which often differ in color: an outer ciliary zone and an
inner pupillary zone. These two zones are divided by the collarette, which appears as a zigzag pattern.

1.3.c ADVANTAGES OF USING THE IRIS PATTERN

1. It is a highly protected internal organ of the eye.
2. Iris patterns possess a high degree of randomness.
3. The patterns are apparently stable throughout life.
4. Matching is comparatively fast.
CHAPTER 2
PROJECT INTRODUCTION

2.1 OBJECTIVE

The objective is to implement an open-source iris recognition system in order to verify the claimed performance of the technology. Our main system consists of several subsystems; the important steps involved are as follows:

1. Segmentation: locates the iris region in the eye image by eliminating the unwanted parts.
2. Normalization: creates a dimensionally consistent representation of the iris region.
3. Feature encoding and matching: creates a template containing only the most discriminating features of the iris and matches it against the stored templates.

Our main aim is to implement the best possible algorithm for each step and to obtain the required degree of accuracy.

2.2 BASIC ASSUMPTIONS

a) The methods for image segmentation and feature extraction assume that all patterns have the same rotation angle.
b) The iris and pupil regions are assumed to be perfectly concentric circles.
c) The pupil region has a constant intensity throughout.
d) Of the seven available images of a subject, four are used as principal images while three are kept as test images.
e) The normalized image of each iris is resized into a vector of constant dimensions, irrespective of its original size.
2.3 DATABASE CHARACTERISTICS

This work is based on the CASIA iris image database, whose ethnic distribution is composed mainly of Asians. Each iris class is composed of 7 samples taken in two sessions, three in the first session and four in the second, with an interval of one month between the sessions. The images are 320x280-pixel gray-scale images taken by a digital optical sensor designed by the NLPR (National Laboratory of Pattern Recognition, Chinese Academy of Sciences). There are 108 classes, or irises, in a total of 756 iris images. Each iris image is preprocessed to eliminate the effects of illumination variations and other noise.
CHAPTER 3
IMAGE SEGMENTATION

The main aim of the segmentation step is to distinguish the iris texture from the rest of the eye image. Properly detecting the inner and outer boundaries of the iris texture is significantly important in all iris recognition systems. Segmentation is a crucial step, in that any false representation here may corrupt the resulting template and lead to poor recognition rates.

The iris is an annular portion between the pupil (inner boundary) and the sclera (outer boundary). The iris region can therefore be approximated by two circles: one for the iris/sclera boundary and another, interior to the first, for the iris/pupil boundary. The eyelids and eyelashes normally occlude the upper and lower parts of the iris region. Specular reflections can also occur within the iris region, corrupting the iris pattern. However, our project work is mainly focused on the CASIA iris database, which does not contain specular reflections owing to the use of near-infrared light for illumination. The success of segmentation depends upon the imaging quality of the eye image: imaging of the iris must acquire sufficient detail for recognition while being minimally invasive to the operator.

3.1 AVAILABLE TECHNIQUES

a. Daugman's Integro-Differential Operator

Daugman proposed a method that makes use of the first derivatives of image intensity to signal the location of edges corresponding to the borders of the iris. The notion is that the magnitude of the derivative across an imaged border will show a local maximum, due to the local change of image intensity. The limbus and pupil are modeled with circular contours. The expected configuration
of model components is used to fine-tune the image-intensity derivative information. With I(x, y) an image containing an eye, Daugman's operator can be written as

    max over (r, x0, y0) of | G_σ(r) * (∂/∂r) ∮(r, x0, y0) I(x, y) / (2πr) ds |

where G_σ(r) is a Gaussian smoothing function of scale σ and * denotes convolution. The operator searches over the image domain (x, y) for the maximum in the blurred partial derivative, with respect to increasing radius r, of the normalized contour integral of I(x, y) along a circular arc ds of radius r and center coordinates (x0, y0). It serves as a circular edge detector, blurred at a scale set by σ, which searches iteratively for the maximum contour-integral derivative with increasing radius, at successively finer scales of analysis, through the three-parameter space of center coordinates and radius (x0, y0, r) defining the path of contour integration.

b. Hough Transform

This is a standard computer-vision algorithm that can be used to determine the parameters of simple geometric objects, such as lines and circles, present in an image. The circular Hough transform is used to deduce the radius and center coordinates of the pupil and iris regions, while the parabolic Hough transform is used to detect the eyelids. The Hough transform can be demonstrated as follows (a minimal sketch of the circle-detection idea follows the figure):

Fig no. 3.1.1 Demonstration of Hough Transform
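As an aside, the circle-detection idea can be illustrated with a minimal MATLAB sketch of a circular Hough accumulator for a single assumed radius r. The file name and the radius are placeholders, and this is not the segmentation method used in this project (which is described in Section 3.2):

    % Minimal circular Hough accumulator for one fixed candidate radius r.
    img   = imread('eye.bmp');                 % placeholder input eye image
    edges = edge(img, 'canny');                % binary edge map
    r     = 50;                                % assumed candidate radius (pixels)
    [h, w] = size(edges);
    acc = zeros(h, w);                         % votes for each candidate center
    [ey, ex] = find(edges);                    % edge-point rows and columns
    theta = 0:pi/180:2*pi;
    for k = 1:numel(ex)
        cx = round(ex(k) - r*cos(theta));      % centers consistent with this point
        cy = round(ey(k) - r*sin(theta));
        ok = cx >= 1 & cx <= w & cy >= 1 & cy <= h;
        idx = unique(sub2ind([h w], cy(ok), cx(ok)));
        acc(idx) = acc(idx) + 1;               % one vote per candidate center pixel
    end
    [~, best] = max(acc(:));
    [y0, x0] = ind2sub([h w], best);           % most-voted center for radius r

A full implementation would sweep r over a range of radii, keep the global maximum over (x0, y0, r), and use a parabolic rather than a circular parameterization for the eyelids.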
These edge maps, along with the determination of appropriate peak points in Hough space, give us the required parameters. Some problems with the use of the Hough transform are:

1. It is difficult to determine the threshold values to be chosen for edge detection; critical edge points are sometimes removed, resulting in a failure to detect the circles or arcs.
2. The approach is computationally intensive due to its brute-force nature, and thus may not be suitable for real-time applications.

c. Other Methods for Segmentation

The iris localization proposed by Tisse et al. is a combination of the integro-differential operator and the Hough transform. The Hough transform is used for a quick guess of the pupil center, and the integro-differential operator is then used to accurately locate the pupil and limbus within a smaller search space.

Lim et al. localize the pupil and limbus starting from an edge map of the intensity values of the image. The center of the pupil is chosen using a bisection method that passes perpendicular lines through every two points on the edge map; the center point is obtained by voting for the point that has the largest number of line crossovers. The pupil and limbus boundaries are then selected by increasing the radius of a virtual circle around the selected center point and choosing, as the pupil and limbus radii, the two radii whose virtual circles cross the maximum number of edge points.

3.2 OUR APPROACH

Our approach is divided into four parts, as listed below:

1. Separation of the pupil region from the eye image.
2. Determination of the pupil coordinates.
3. Determination of the iris edges based on the above two steps.
4. Removal of the unwanted parts, yielding the segmented iris.
a. Separation of the Pupil Region

The CASIA database images have been preprocessed, and each has a constant-intensity pupil region. We first determine this constant value so that we can use it as a threshold to separate the pupil region from the eye image. For this we employ the following steps (a minimal sketch follows this list):

• First, we obtain the mean of the image (say 'm').
• Then we scan the image completely to find regions having the same pixel value 15 times consecutively.
• Camera effects may sometimes produce bright constant-intensity regions on the eyelids. To avoid selecting such a region's value as the threshold, we compare each region's value with the obtained mean 'm' and select only a value which is less than the mean.
• Once the threshold value is found, we stop scanning the image.
• Sometimes eyelashes also satisfy the threshold condition, but they have a much smaller area than the pupil. Using this knowledge and the concept of 8-connected pixels, we cycle through all regions and apply the following condition: for each region R, if Area(R) < 1300, set all pixels of R to 0.
• Finally, statistical MATLAB functions are applied to the resulting pupil image, and its center coordinates and radius are determined.

b. Determination of the Pupil Coordinates

This function is mainly used to determine the edges of the pupil. Here we use the MATLAB 'find' function to detect abrupt changes in image intensity. This function also helps us verify the previously determined pupil parameters with the help of the following formula:

    center coordinate = |right (or top) edge coordinate - left (or bottom) edge coordinate| / 2
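A minimal sketch of this thresholding and area-filtering step is given below. The synthetic image, threshold and pupil values are stand-ins (in practice the threshold comes from the scan described above), while bwlabel and regionprops are standard Image Processing Toolbox calls, and 1300 is the area limit quoted in the text:

    % Sketch: isolate the pupil by thresholding and removing small regions.
    img = 200*ones(280, 320, 'uint8');           % synthetic stand-in eye image
    [xx, yy] = meshgrid(1:320, 1:280);
    img((xx-160).^2 + (yy-140).^2 < 40^2) = 30;  % dark disk as a fake pupil
    thr = 50;                                    % stand-in threshold from the scan
    bw  = img <= thr;                            % candidate pupil pixels
    [lbl, n] = bwlabel(bw, 8);                   % 8-connected regions
    for k = 1:n
        if sum(lbl(:) == k) < 1300               % small regions: eyelashes, noise
            bw(lbl == k) = 0;
        end
    end
    stats = regionprops(bw, 'Centroid', 'EquivDiameter');  % assumes the pupil survives
    pupilCenter = stats(1).Centroid;
    pupilRadius = stats(1).EquivDiameter / 2;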
c. Determination of the Iris Edges

This step obtains the contour of the iris. Here we assume that the iris and pupil are concentric circular regions. It takes into consideration that the areas of the iris to the right and left of the pupil are the ones most often visible for data extraction; the areas above and below the pupil also carry unique information, but they are very commonly occluded, totally or partially, by the eyelashes or eyelids.

First we enhance the iris image so that the edges are more prominent and their detection becomes simpler. For this we employ histogram equalization, which gives an image with an increased dynamic range and hence higher contrast. We then follow the steps listed below to find the iris edges (a minimal sketch of this search is given at the end of this section):

• First, we get the pupil edges from the previously explained functions.
• Then we start the process 30 pixels to the left of the left edge of the pupil (call this point 'a'), so as to avoid the abrupt intensity changes in the collarette region, which could mislead our algorithm. We form a vector containing the image intensity at point 'a' and at the 4 pixels above and below it, and obtain the mean of this vector (say m1).
• Another vector is formed containing the image intensity at point 'a-1' and at the 3 pixels above and below it, and its mean is obtained (say m2).
• Now we compute m = |m1 - m2|.
• If m > 0.04, there may be an abrupt change in image intensity.
• To avoid false detections, which may occur if the iris region is corrupted for some reason, we obtain a similar vector for 'a-2' as well, calculate its mean and store it in m2.
• The previous step is repeated, and if the condition is satisfied 10 times consecutively we conclude that this is the left edge of
the iris, and we stop further calculations. We calculate this edge's distance from the pupil center, which gives the iris radius towards the left, i.e. r1.
• The above steps are repeated for the right side as well, the only difference being that we start 30 pixels to the right of the rightmost pupil edge. This gives the radius towards the right, i.e. r2.
• We take the larger value among r1 and r2, so as to avoid any loss of data.

d. Removal of the Unwanted Parts of the Image

The data obtained so far can now be used to eliminate the portions other than the iris region. This is done by approximating the iris as a circle centered at the pupil center; any region outside this circle is deleted, and we finally get the segmented image as shown:

Fig no. 3.2.1 Fully Segmented Iris Image
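The radial edge search of step (c) can be read as the following sketch, assuming a histogram-equalized double image with intensities in [0, 1] (so that the 0.04 threshold applies) and pupil parameters already found; the file name and the numeric values of yc, xc and xLeft are placeholders:

    % Sketch of the leftward iris-edge search of step (c).
    img = im2double(histeq(imread('eye.bmp')));  % placeholder input, enhanced
    yc = 140; xc = 160; xLeft = 120;             % placeholders from earlier steps
    x = xLeft - 30;                              % start 30 px left of the pupil edge
    hits = 0;                                    % consecutive-detection counter
    while x > 2 && hits < 10
        m1 = mean(img(yc-4:yc+4, x));            % mean over the point and 4 px above/below
        m2 = mean(img(yc-3:yc+3, x-1));          % mean over the next column, 3 px above/below
        if abs(m1 - m2) > 0.04
            hits = hits + 1;                     % candidate edge; require 10 in a row
        else
            hits = 0;
        end
        x = x - 1;
    end
    r1 = xc - x;                                 % iris radius towards the left

The rightward search is the mirror image of this loop, and the larger of the two radii is kept, as described above.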
CHAPTER 4
NORMALIZATION

The main aim of normalization is to transform the iris region into fixed dimensions, in order to nullify the effect of dimensional inconsistencies. These inconsistencies are the result of changes in image distance and angular position with respect to the camera. Other sources of inconsistency include stretching of the iris caused by pupil dilation under varying levels of illumination, head tilt, and rotation of the eye within the eye socket. Normalization is a very important step, in that the success of the subsequent steps depends largely on its accuracy.

4.1 BASIC MOTIVATION

Daugman's Rubber Sheet Model

This method is based on remapping each point within the iris region to a pair of polar coordinates (r, θ), where r lies on the interval [0, 1] and θ is an angle in [0, 2π]. The method is capable of compensating for the unwanted variations due to the distance of the eye from the camera (scale) and its position with respect to the camera (translation). The Cartesian-to-polar transform is defined as

    I(x(r, θ), y(r, θ)) → I(r, θ)

with

    x(r, θ) = (1 - r)·xp(θ) + r·xi(θ)
    y(r, θ) = (1 - r)·yp(θ) + r·yi(θ)

where I(x, y) is the iris region image, (x, y) are the original Cartesian coordinates, (r, θ) are the corresponding normalized polar
coordinates, and (xp(θ), yp(θ)) and (xi(θ), yi(θ)) are the pupil and iris boundary points along the θ direction. A pictorial depiction of this model is shown below.

Fig no. 4.1.1 Daugman's rubber sheet model

The rubber sheet model takes into account pupil dilation and size inconsistencies in order to produce a normalized representation with constant dimensions. However, it does not compensate for rotational inconsistencies; in Daugman's system, rotation is accounted for during matching, by shifting the iris template in the θ direction until the two templates are aligned.

4.2 OUR APPROACH

Our algorithm involves the polar conversion of the iris image, which accounts for the rotational inconsistencies in the iris image. In this algorithm we map only the annular iris-region coordinates into a fixed-dimension vector, thus eliminating the unwanted and redundant pupil and other non-iris regions. The algorithm can be explained as follows:

1. Find the incremental values of the radius and the rotation angle.
2. Create a pseudo-polar mesh grid of fixed dimensions using the incremental values and the radius of the iris region.
3. Separate only the annular iris ring, thus eliminating the pupil and other regions.
4. Convert each coordinate of the iris region into its equivalent polar coordinate using the relations above.
5. Map each iris coordinate onto the polar grid using linear interpolation.
6. The iris region is thus mapped into a vector of fixed dimensions, of size 100x360, as can be seen below (a minimal sketch of the mapping follows the figure).

Fig no. 4.2.1 Fully and enhanced Normalized Image
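Under the concentric-circles assumption of Section 2.2, the unwrapping can be sketched with bilinear interpolation as below; the file name, center and radii are placeholders, and interp2 stands in for the linear-interpolation mapping of step 5:

    % Sketch: unwrap the annular iris ring into a fixed 100x360 polar map.
    img = im2double(imread('eye.bmp'));      % placeholder segmented eye image
    xc = 160; yc = 140;                      % common center (placeholders)
    rp = 40;  ri = 100;                      % pupil and iris radii (placeholders)
    nR = 100; nT = 360;                      % fixed output dimensions
    theta = linspace(0, 2*pi, nT);           % angular samples
    r     = linspace(rp, ri, nR)';           % radial samples across the annulus only
    X = xc + r * cos(theta);                 % 100x360 grid of x-coordinates
    Y = yc + r * sin(theta);                 % 100x360 grid of y-coordinates
    normImg = interp2(img, X, Y, 'linear');  % bilinear sampling of the iris ring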
CHAPTER 5
FEATURE EXTRACTION

The normalized image is used to extract the unique features of the iris. Each iris is characterized by its unique collarette pattern and iris variation patterns, so it is essential to extract these features in order to represent each iris distinctively. Most iris recognition systems make use of a band-pass decomposition of the iris image to create a biometric template. This step is responsible for extracting the patterns of the iris, taking into account the correlation between adjacent pixels.

5.1 BASIC TECHNIQUE

a. THEORY OF WAVELETS

Wavelets are basis functions wjk(t) in continuous time. A basis is a set of linearly independent functions that can be used to produce all admissible functions f(t):

    f(t) = combination of basis functions = Σ bjk · wjk(t)

The special feature of a wavelet basis is that all the functions are constructed from a single mother wavelet w(t). The wavelet transform overcomes the resolution problem of traditional Fourier techniques by using a variable-length window. Techniques like the short-time Fourier transform divide the signal into short time domains, and the Fourier transform of the signal is then computed in each domain around a particular center frequency. However, this leads to a time-resolution problem, which in turn led to the wavelet approach. Analysis windows of different lengths are used for different frequencies:

    Analysis of high frequencies → use narrower windows, for better time resolution.
    Analysis of low frequencies → use wider windows, for better frequency resolution.
This works well if the signal to be analyzed consists mainly of slowly varying characteristics with occasional short high-frequency bursts. The function used to window the signal is called the wavelet ψ, and the continuous wavelet transform of a signal x(t) is

    CWT(τ, s) = (1/√|s|) ∫ x(t) · ψ*((t − τ)/s) dt

where τ is the translation, s is the scale and ψ* is the complex conjugate of the wavelet.

Fig no. 5.1.1 2-Level Decomposition of a Wavelet (decomposition and reconstruction filter banks, with high-pass filters G, low-pass filters H and up/down-sampling by 2)

b. Discrete Wavelet Transform

The DWT analyzes the signal in different frequency bands, with different resolutions, by decomposing the signal into a coarse approximation and detail information. The DWT employs two sets of functions, called scaling functions and wavelet functions, which are associated with low-pass and high-pass filters respectively. The decomposition of the signal into different frequency bands is obtained simply by successive high-pass and low-pass filtering of the time-domain signal. The original signal x[n] is first passed through a half-band high-pass filter g[n] and a low-pass filter h[n]. After the filtering, half of the samples can be eliminated
according to Nyquist's rule, since the signal now has a highest frequency of π/2 radians instead of π; the signal can therefore be subsampled by discarding every other sample. This constitutes one level of decomposition, and can be expressed mathematically as

    Yhigh[k] = Σn x[n] · g[2k − n]
    Ylow[k]  = Σn x[n] · h[2k − n]

where Yhigh[k] and Ylow[k] are the outputs of the high-pass and low-pass filters, respectively, after subsampling by 2.

This decomposition halves the time resolution, since only half the number of samples now characterizes the entire signal. However, the operation doubles the frequency resolution, since the frequency band of the signal now spans only half the previous band, effectively reducing the uncertainty in frequency by half. The above procedure, also known as sub-band coding, can be repeated for further decomposition: at every level, the filtering and subsampling result in half the number of samples (and hence half the time resolution) and half the frequency band spanned (and hence double the frequency resolution).

c. VARIOUS TYPES OF WAVELETS

1. Haar Wavelet

This uses the wavelet transform to extract features from the iris region. Both the Gabor transform and the Haar wavelet are considered as the mother wavelet. From multi-dimensional filtering, a feature vector with 87 dimensions is computed. Since each dimension has a real value ranging from -1.0 to +1.0, the feature vector is sign-quantized so that any positive value is represented by 1 and any negative value by 0. This results in a compact biometric template consisting of only 87 bits.
2. Daubechies Wavelets

These are a family of orthogonal wavelets defining a DWT and characterized by a maximal number of vanishing moments for a given support. With each wavelet of this class there is an associated scaling function (also called the father wavelet) which generates an orthogonal multiresolution analysis. An orthogonal wavelet is one whose associated wavelet transform is orthogonal, i.e. the inverse wavelet transform is the adjoint of the wavelet transform. In general, the Daubechies wavelets are chosen to have the highest number A of vanishing moments (this does not imply the best smoothness) for a given support width N = 2A, and among the 2^(A-1) possible solutions, the one whose scaling filter has extremal phase is chosen. These wavelets are widely used in solving a broad range of problems, for example examining the self-similarity properties of a signal, or signal discontinuities. The Daubechies orthogonal wavelets D2-D20 are commonly used; the index refers to the number N of coefficients, and each wavelet has a number of zero or vanishing moments equal to half its number of coefficients.

5.2 OUR APPROACH

We have applied a 5-level wavelet transform and analyzed the results for several types of wavelet: Haar, Daubechies and biorthogonal (bior). We found the best results for Daubechies wavelets, in particular the db2 type. In each case the feature size varied depending upon the type of wavelet applied. Normally the final template size should not be a function of the wavelet type, but since our image size is not an exact power of two (it is 100x360), the successive downsampling results in the loss of some pixels. This causes the template size to be a function of the wavelet type and also of the number of levels. The following figure shows a 4-level db2 wavelet decomposition applied to the eye image.
Fig no. 5.2.1 Application of db2 Wavelet to Eye image

The results obtained by applying the different types of wavelet are shown separately in the Results section.
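As a hedged illustration of this stage, the sketch below computes a 5-level db2 decomposition of the normalized image using the Wavelet Toolbox and keeps the level-5 approximation coefficients as the feature vector. Since the text above does not spell out which sub-bands went into the 477x1 template, that choice is an assumption made here purely for illustration:

    % Sketch: wavelet feature vector from the 100x360 normalized iris image.
    normImg = rand(100, 360);                % stand-in for the normalized iris
    [C, S] = wavedec2(normImg, 5, 'db2');    % 5-level 2-D db2 decomposition
    feat = appcoef2(C, S, 'db2', 5);         % level-5 approximation sub-band
    feat = feat(:);                          % flatten into a feature vector
    fprintf('Feature vector length: %d\n', numel(feat));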
CHAPTER 6
MATCHING

For comparing the templates obtained by the feature-extraction process, a number of distance metrics are available. Some of them are explained below.

6.1 DIFFERENT DISTANCE METRICS

If x and y are two d-dimensional feature vectors of a database image and a query image respectively, then the distance metrics are defined as follows:

a. Euclidean or L2 metric:

    d(x, y) = √( Σi (xi − yi)² )

The Euclidean distance is not always the best metric: the fact that the distances in each dimension are squared before summation places great emphasis on those features for which the dissimilarity is large.

b. Weighted Euclidean distance metric:

The weighted Euclidean distance (WED) can be used to compare two templates, especially if the template is composed of integer values. It gives a measure of how similar a collection of values is between two templates. The metric is specified as

    WED(k) = Σi ( fi − fi(k) )² / ( δi(k) )²
where fi is the i-th feature of the unknown iris, fi(k) is the i-th feature of iris template k, and δi(k) is the standard deviation of the i-th feature in iris template k. The unknown iris template is found to match iris template k when WED(k) is a minimum over k.

c. The Manhattan or L1 metric:

    d(x, y) = Σi |xi − yi|

The Manhattan distance metric uses the sum of the absolute differences in each feature, rather than their squares, as the overall measure of dissimilarity. It is obvious that the distance of an image from itself is zero. The distances are stored in increasing order and the closest sets of patterns are retrieved. In the ideal case, all of the top 16 retrievals are from the same large image. The performance is measured in terms of the average retrieval rate, defined as the average percentage of patterns belonging to the same image as the query pattern among the top 16 matches.

d. Hamming distance metric:

The Hamming distance gives a measure of how many bits are the same between two bit patterns. Using the Hamming distance of two bit patterns, a decision can be made as to whether the two patterns were generated from different irises or from the same one. In comparing the bit patterns X and Y, the Hamming distance HD is defined as the sum of disagreeing bits (the sum of the exclusive-OR between X and Y) over N, the total number of bits in the pattern:

    HD = (1/N) Σj ( Xj ⊕ Yj )
Since an individual iris region contains features with a high degree of freedom, each iris region will produce a bit pattern which is independent of that produced by another iris; on the other hand, two iris codes produced from the same iris will be highly correlated. If two bit patterns are completely independent, such as iris templates generated from different irises, the Hamming distance between the two patterns should equal 0.5. This occurs because independence implies that the two bit patterns are totally random, so there is a 0.5 probability of any given bit being set to 1, and vice versa; therefore half of the bits will agree and half will disagree between the two patterns. If two patterns are derived from the same iris, the Hamming distance between them will be close to 0.0, since they are highly correlated and the bits should agree between the two iris codes.

e. Canberra distance metric:

The Canberra distance metric is given by

    d(x, y) = Σi |xi − yi| / ( |xi| + |yi| )

In this equation the numerator signifies the difference and the denominator normalizes it. The distance values will therefore never exceed one, being equal to one whenever either of the attributes is zero. It is thus a good expression to use, as it avoids scaling effects.
6.2 OUR APPROACH

Running the experiment with the different distance metrics on the same set of images, we were able to find which metric gives the best result. The Canberra distance metric performed exceptionally well compared to the other metrics. The reason is that in this metric the numerator signifies the difference while the denominator normalizes it, so the distance values never exceed one, being equal to one whenever either of the attributes is zero.
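Written directly from the formulas of Section 6.1, a minimal sketch of the two metrics most relevant here, Canberra for real-valued feature vectors (the metric adopted above) and Hamming for binary templates, might look as follows; the small eps term in the Canberra denominator is an added guard against 0/0, not part of the formula above:

    % Canberra distance between two real-valued feature vectors.
    canberra = @(x, y) sum(abs(x - y) ./ (abs(x) + abs(y) + eps));

    % Hamming distance between two binary templates: the fraction of
    % disagreeing bits, so about 0.5 is expected for independent iris codes.
    hamming = @(X, Y) sum(xor(X, Y)) / numel(X);

    x = rand(477, 1); y = rand(477, 1);           % stand-in feature vectors
    fprintf('Canberra: %.4f\n', canberra(x, y));
    X = rand(87, 1) > 0.5; Y = rand(87, 1) > 0.5; % stand-in bit templates
    fprintf('Hamming:  %.4f\n', hamming(X, Y));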
CHAPTER 7
ENROLLMENT AND IDENTIFICATION

7.1 ENROLLMENT PHASE

a. PROBLEM STATEMENT

The enrollment phase of any biometric system is required to validate its users. If an authentic person who is supposed to use the system does not have his images registered in the database, then every time that person tries to enter the system, the biometric system rejects him and refrains from providing access; a person worthy of access is thus denied the service. In such cases it becomes necessary to first enroll the eligible persons in the database for their future identification. Our enrollment algorithm is aimed at solving this problem.

Fig no. 7.1.1 General Iris identification system

b. STEPS

The enrollment phase of the project involves checking whether a subject is already enrolled in the database and, if not, getting the subject enrolled. A basic graphical user interface window is implemented here. Of the seven available images of a subject, four are used as principal images and the remaining three are used
for testing the enrollment. The basic steps are highlighted as follows (a minimal sketch of the matching decision is given at the end of Section 7.2):

• A test image of a subject is fed to the algorithm.
• The image is subjected to the series of segmentation, normalization and feature-extraction steps to form a feature-vector template.
• This template is used to compute the distance to the combined pattern of each subject, using the Canberra distance metric.
• The distances are stored in a vector.
• The vector is sorted in ascending order.
• The minimum distance indicates the class of images to which the test image belongs.
• The test image is then compared once again with the principal images of that class, and the distance between them is computed.
• If this distance is less than the one obtained for the combined pattern of the class, we can establish that the image indeed belongs to that class, and hence is already enrolled.
• If the distance computed in the previous step is greater than the combined-pattern distance, we conclude that the image does not belong to that class and is not enrolled.
• In case the test image is not enrolled, it is enrolled by storing all the patterns of the subject in the database.

7.2 IDENTIFICATION PHASE

a. PROBLEM STATEMENT

This phase verifies the claimed identity of an individual. Here a person who wants access to the system claims his identity as that of a particular authentic person. The biometric system verifies the claim and establishes whether he is indeed the person
who he claims to be, or an imposter. The following algorithm solves this problem.

b. STEPS

This algorithm is implemented using a graphical user interface.

• A test image is input through the graphical interface.
• The image is subjected to the processing steps of segmentation, normalization and feature extraction to retrieve the feature vector.
• This is used to compute the distance between the template and the combined pattern of each subject, using the Canberra distance metric.
• The distances are stored in a vector.
• The vector is sorted in ascending order.
• The minimum distance indicates the class of images to which the test image belongs.
• The test image is then compared once again with the principal images of that class, and the distance between them is computed.
• If this distance is less than the one obtained for the combined pattern of the class, we can establish that the image indeed belongs to that class, and hence the claimed identity of the person is established.
• If the distance computed in the previous step is greater than the combined-pattern distance, we conclude that the image does not belong to that class, and hence the person is an imposter.
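The decision logic shared by the two phases can be sketched as follows. The cell-array databases, the 477x1 vector size and the single principal pattern per class are stand-ins for the structures implied above; in the real system the test vector would come from the Chapters 3-5 pipeline rather than from rand:

    % Sketch of the enrollment/identification decision with stand-in data.
    canberra = @(a, b) sum(abs(a - b) ./ (abs(a) + abs(b) + eps));
    combinedDB  = {rand(477,1), rand(477,1), rand(477,1)};  % combined patterns
    principalDB = {rand(477,1), rand(477,1), rand(477,1)};  % principal patterns
    feat = rand(477, 1);                              % stand-in test template
    d = cellfun(@(c) canberra(feat, c), combinedDB);  % distance to each class
    [dMin, cBest] = min(d);                           % closest class
    dP = canberra(feat, principalDB{cBest});          % distance to its principal pattern
    if dP < dMin
        fprintf('Belongs to class %d: identity established / already enrolled.\n', cBest);
    else
        fprintf('Not in class %d: imposter / not enrolled.\n', cBest);
    end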
CHAPTER 8
CONCLUSION

8.1 Summary of Work

This project presents an iris recognition system, tested on the CASIA iris image database, in order to verify the claimed performance of iris recognition technology. Analysis of the developed system has revealed a number of interesting conclusions. The accuracy of biometric identification systems is specified in terms of the FAR (False Acceptance Rate) and the FRR (False Rejection Rate). The FAR measures how often a non-authorized user, who should not be granted access, is falsely recognized, while the FRR measures how often an authorized user, who should have been granted access, is not recognized.

a. Results of applying different types of wavelets

The following table shows the effect of applying the various types of wavelet:

    Wavelet Type   Vector Size   FAR   FRR   No. of Faulty Subjects
    db3            764x1         5     130   74
    db1            418x1         17    87    52
    db4            1020x1        15    133   71
    bior4.4        1047x1        4     87    52
    bior1.1        418x1         17    87    52
    db2            477x1         1     73    30

Thus we can conclude that db2 gives the best possible results, and so we have selected this type of wavelet.

b. Results for the Overall Database
After analyzing the results over the complete database, we found the following:

    False Acceptance Rate = 1
    False Rejection Rate = 73
    Number of Faulty Subjects = 41

Thus our project has a very small FAR, which is what a reliable biometric system should possess. However, our FRR is comparatively high, and some measures can be taken to reduce it.

8.2 Suggested Improvements

1. Segmentation is a very crucial step in iris recognition systems, so any increase in its accuracy will yield improved results. To improve the segmentation algorithm, a more elaborate eyelid and eyelash detection system could be implemented.
2. At present our template size varies with the type of wavelet applied, and measures can be taken to avoid this effect.

The main aim of our project was to achieve a very low FAR, which we have accomplished to a large extent. Our segmentation stage also has very high accuracy and has worked satisfactorily over the entire CASIA database. Since we restricted ourselves to a software implementation and did not take hardware implications into much consideration, our efforts have resulted in an efficient recognition system.
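Relating back to the definitions in Section 8.1, the FAR and FRR figures reported above are counts of wrong decisions; they can be computed from raw accept/reject outcomes as in this minimal sketch with stand-in data:

    % Sketch: FAR/FRR counts from decision outcomes (stand-in data).
    genuine  = logical([1 1 0 0 1 0 1 0]);   % ground truth: authorized trials
    accepted = logical([1 0 1 0 1 0 1 0]);   % system decisions
    FAR = sum(accepted & ~genuine);          % non-authorized users falsely accepted
    FRR = sum(~accepted & genuine);          % authorized users falsely rejected
    fprintf('FAR = %d, FRR = %d\n', FAR, FRR);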
CHAPTER 9
GUI USING MATLAB

Since MATLAB provides a very easy way of implementing a GUI (graphical user interface), we have prepared a GUI which gives a general idea of the work we have done. Some of its details follow.

1. Starting Window

This is the starting window of our GUI, which provides links to the various phases of our project. The results for the overall database can be obtained by clicking on the 1st pushbutton. For individual analysis of the database, we have provided an option in the form of the 2nd pushbutton. The different steps in pattern formation can be seen by clicking on the 3rd pushbutton. A minimal sketch of such a window is given below.
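As an illustration only (the project's actual windows were presumably laid out with more care, e.g. in GUIDE), a starting window with the three pushbuttons described above can be sketched as:

    % Minimal sketch of the starting window with three pushbuttons.
    fig = figure('Name', 'Iris Recognition System', 'NumberTitle', 'off', ...
                 'MenuBar', 'none', 'Position', [300 300 320 200]);
    uicontrol(fig, 'Style', 'pushbutton', 'String', 'Overall Database Results', ...
              'Position', [60 140 200 30], 'Callback', @(~,~) disp('overall results'));
    uicontrol(fig, 'Style', 'pushbutton', 'String', 'Individual Enrollment', ...
              'Position', [60 95 200 30], 'Callback', @(~,~) disp('enrollment'));
    uicontrol(fig, 'Style', 'pushbutton', 'String', 'Steps in Pattern Formation', ...
              'Position', [60 50 200 30], 'Callback', @(~,~) disp('pattern steps'));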
2. Results for the Complete Database

This window shows the overall database results. An update facility is provided so that the code can be run again over the entire database in case any changes are made to the original code.

3. Individual Enrollment
This window allows us to analyze each person individually, by enrolling that person separately and then checking that we get proper results against each image of that person in the database. Here we have used four images for pattern formation and kept the remaining three images as test images.

4. Steps in Pattern Formation
This window provides a detailed analysis of pattern formation. Here we have made provisions to see the pupil separation, the iris segmentation and also the normalized image for any image in the database.
CHAPTER 10
REFERENCES

1. JOURNALS AND CONFERENCE PAPERS

• Daugman, J., "How Iris Recognition Works," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, January 2004.
• Masek, L., "Recognition of Human Iris Patterns for Biometric Identification," http://www.csse.uwa.edu.au/~pk/studentprojects/libor
• CASIA Iris Image Database, Institute of Automation, Chinese Academy of Sciences, http://www.sinobiometrics.com
• Wildes, R., "Iris Recognition: An Emerging Biometric Technology," Proceedings of the IEEE, Vol. 85, No. 9, 1997.
• Daugman, J., "Biometric Personal Identification System Based on Iris Analysis," United States Patent No. 5,291,560, March 1994.
• Daugman, J., "High Confidence Visual Recognition of Persons by a Test of Statistical Independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, 1993.
• Boles, W.W., "A Security System Based on Human Iris Identification Using Wavelet Transform," Engineering Applications of Artificial Intelligence, 11:77-85, 1998.

2. OTHER REFERENCES

• Gonzalez, R.C., Woods, R.E., and Eddins, S.L., Digital Image Processing Using MATLAB.
• Polikar, R., The Wavelet Tutorial.