Efficient Iris Recognition through Improvement of Feature Vector and Classifier

Shinyoung Lim, Kwanyong Lee, Okhwan Byeon, and Taiyun Kim

ETRI Journal, Volume 23, Number 2, June 2001
In this paper, we propose an efficient method for personal identification that analyzes iris patterns, which have a high level of stability and distinctiveness. To improve the efficiency and accuracy of the proposed system, we present a new approach to making the feature vector compact and efficient by using a wavelet transform, together with two straightforward but effective mechanisms for a competitive learning method: a weight vector initialization scheme and a winner selection scheme. With these mechanisms, the experimental results show that the proposed system can be used for personal identification in an efficient and effective manner.
Manuscript received May 8, 2000; revised May 2, 2001.
Shinyoung Lim is with the Electronic Payment Team, ETRI, Taejon, Korea. (phone: +82 42 860 5015, e-mail: sylim@econos.etri.re.kr)
Kwanyong Lee is with Yonsei University, Seoul, Korea. (phone: +82 11 550 3683, e-mail: kylee@csai.yonsei.ac.kr)
Okhwan Byeon is with KISTI, Taejon, Korea. (phone: +82 42 869 0570, e-mail: ohbyeon@garam.kreonet.re.kr)
Taiyun Kim is with Korea University, Seoul, Korea. (phone: +82 2 3290 3194, e-mail: tykim@netlab.korea.ac.kr)
I. INTRODUCTION
To control access to secure areas or materials, a reliable personal identification infrastructure is required. Conventional methods of recognizing a person's identity by passwords or cards are not altogether reliable, because passwords can be forgotten and cards can be stolen. Biometric technology, which is based on physical and behavioral features of the human body such as the face, fingerprints, hand shape, eyes, signature, and voice, is now considered an alternative to existing systems in a wide range of application domains, including entrance management for restricted areas and airport security checks.
Among the various physical characteristics, iris patterns have attracted much attention in biometric technology over the last few decades because they provide stable and distinctive features for personal identification. Every iris has fine, unique patterns that remain unchanged from two or three years after birth, so the iris can be regarded as a kind of optical fingerprint [1], [2]. Figure 1 shows an image of a human iris.
Most work on personal identification and verification using iris patterns was done in the 1990s [4]-[8], and it brought substantial progress in iris-based identification systems. Some of this work, however, has limited capability to recognize a person's identity accurately and efficiently, so from a practical viewpoint there is still much room for improving the technologies that affect performance. The main difficulty of human iris recognition is that it is hard to find apparent feature points in the image and to keep their representational power high in an efficient way. In addition, an identification or verification process suited to iris patterns is required to achieve high accuracy.
Fig. 1. An image of a human iris (pupil and iris regions labeled).
In this paper, we propose optimized and robust methods for improving the performance of a human identification system based on iris patterns from a practical viewpoint. To achieve better performance of iris-based identification systems, we conduct the following experiments: a performance evaluation of two popular feature extraction methods, the Gabor transform and the Haar wavelet transform, to select the one better suited to iris patterns; a performance comparison across feature vector dimensions to make the feature vector compact; and a performance comparison of a competitive learning neural network with revised mechanisms for weight vector initialization and winner selection. Through these experiments, we show that the proposed methods can be used for personal identification systems in an efficient way.
The remainder of this paper is organized as follows. Section II briefly reviews related work. Section III gives the details of the proposed method for extracting features and recognizing them. Experimental results and analyses are presented in Section IV, and conclusions are given in Section V.
II. RELATED WORKS
In the process of the iris recognition, it is essential to convert
an acquired iris image into a suitable code that can be easily
manipulated. Thus, we will take a brief look at the process of
feature extraction and representation from the recent remark-
able works.
Daugman [4] developed a feature extraction process based on the outputs of a set of 2-D Gabor filters. He generated a 256-byte code by quantizing the local phase angle according to the real and imaginary parts of the filtered image, compared the percentage of mismatched bits between a pair of iris representations with the XOR operator, and selected a separation point in the space of Hamming distances. A minimal sketch of this kind of bit-wise comparison is given below.
In contrast, the Wildes system used a Laplacian pyramid constructed at four different resolution levels to generate the iris code [7]. It also exploited a normalized correlation based on goodness-of-match values and Fisher's linear discriminant for pattern matching. Both iris recognition systems use bandpass image decompositions to obtain multi-scale information.
Boles [8] implemented a system that operates on a set of 1-D signals composed of normalized iris signatures at a few intermediate resolution levels and obtains the iris representation from the zero-crossings of the dyadic wavelet transform of these signals. It uses two dissimilarity functions to compare a new pattern with the reference patterns.
Boles' approach has the advantage of processing 1-D iris signals rather than the 2-D images used in [4] and [7]. However, [4] and [7] proposed and implemented complete systems for personal identification or verification, including the configuration of the image acquisition device, whereas [8] focused only on iris representation and matching without an image acquisition module.
In this paper, we propose an iris recognition system which
includes a compact representation scheme for iris patterns by
the 2-D wavelet transform, a method of initializing weight vec-
tors, and a method of determining winners for recognition in a
competitive learning method like LVQ.
III. ANALYSIS AND RECOGNITION OF IRIS IMAGE
The overall structure of the proposed system is illustrated in Fig. 2, and its processing flow is as follows. An image of the region surrounding the human eye is acquired at a distance by a CCD camera without any physical contact with the device. In the preprocessing stage, two steps are taken: first, the iris, the portion of the image that is actually processed, is localized; second, the Cartesian coordinate system of the image is converted into a polar coordinate system to facilitate feature extraction. In the feature extraction stage, a 2-D wavelet transform is used to extract a feature vector from the iris image. In the final stage, identification and verification, two revised competitive learning mechanisms for LVQ are exploited to classify the feature vectors and recognize a person's identity. To improve the efficiency of the system, the proposed methods are applied to the feature extraction stage and the identification stage.
Fig. 2. Structure of the proposed iris recognition system: image acquisition (human eye, B/W CCD camera with a 55 mm macro lens, two 50 W lamps, 320x240 eye image), preprocessing (iris localization, polar coordinate transform), feature extraction (2-D wavelet transform, binary representation), and identification/verification (LVQ with uniform distribution of initial vectors and multidimensional winner selection).
1. Image Acquisition
An image of the region surrounding the human eye is acquired at a distance by a CCD camera without any physical contact with the device. Figure 3 shows the device configuration for acquiring human eye images. To acquire clearer images through the CCD camera and to minimize reflections caused by the surrounding illumination, we arrange two halogen lamps as the surrounding light sources, as the figure illustrates. The size of an image acquired under these circumstances is 320x240 pixels.
Fig. 3. Configuration of the proposed image acquisition device (CCD camera, lens, frame grabber, and monitor, with two 50 W halogen lamps; labeled distances of 100 mm and 320 mm).
2. Preprocessing Stage
In this stage, we determine the iris part of the image by localizing the portion that lies inside the limbus (the outer boundary) and outside the pupil (the inner boundary), and we then convert the iris part into a suitable representation.
To localize the iris, we first find the center of the pupil and then determine the inner and outer boundaries. Because there is an obvious difference in intensity around each boundary, an edge detection method can readily be applied to acquire the edge information. For every pair of edge points that may belong to the inner boundary according to prior knowledge of the images, we apply the bisection method: the perpendicular bisector of the line connecting the two points should pass through the center of the inner boundary, which is also used as the reference point for the following processes. In the ideal case the bisectors of all such pairs cross at a single point, but in practice they do not, so we select the most frequently crossed point as the center.
After determining the center point, we find the inner and outer boundaries by extending the radius of a virtual circle from the pupil center and counting the edge points that lie on the corresponding virtual circle, as sketched below. The two virtual circles with the maximum number of edge points within radius ranges determined by prior knowledge are selected as the two boundaries. Figure 4 shows the center of the pupil and the iris part surrounded by the two boundaries.
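The following Python sketch illustrates the boundary search described above. It approximates the pupil center by the centroid of the darkest pixels rather than the bisection method, and all thresholds, names, and parameters are illustrative assumptions rather than the exact procedure used in the paper.

```python
import numpy as np

def localize_iris(eye: np.ndarray, r_range: range):
    """Rough sketch of the boundary search: pick the virtual circle whose
    circumference covers the most edge points within a given radius range."""
    # Simple edge map: gradient magnitude above a fixed (illustrative) threshold.
    gy, gx = np.gradient(eye.astype(float))
    edges = np.hypot(gx, gy) > 20.0

    # Approximate pupil centre (assumption: the pupil is the darkest region).
    dark = eye < np.percentile(eye, 2)
    ys, xs = np.nonzero(dark)
    cy, cx = ys.mean(), xs.mean()

    # Count edge points lying on each virtual circle of radius r.
    yy, xx = np.nonzero(edges)
    dist = np.hypot(yy - cy, xx - cx)
    counts = {r: int(np.sum(np.abs(dist - r) < 1.0)) for r in r_range}
    best_r = max(counts, key=counts.get)
    return (cy, cx), best_r

# Usage: search the inner boundary in one radius range and the outer boundary
# in another, as suggested by prior knowledge of the images.
```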
Fig. 4. Example of results in the preprocessing stage: the portion to be localized and its mapping onto the 450x60 (θ, r) plane.
The localized iris part of the image is transformed into a polar coordinate system to facilitate the next step, feature extraction; a sketch of this unwrapping follows. The pupil is excluded from the conversion because it carries no biological characteristics. The distance between the inner and outer boundaries is normalized into [0, 60] along the radius r, and by increasing the angle θ in steps of 0.8° for each radius r we obtain 450 samples. We can therefore obtain a 450x60 iris image on the (θ, r) plane. Figure 4 also illustrates this conversion of the iris part from Cartesian to polar coordinates.
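A minimal sketch of this unwrapping is given below, assuming the pupil center and the two boundary radii are already known; nearest-neighbor sampling and the function name are illustrative choices, not the authors' implementation.

```python
import numpy as np

def to_polar(eye, center, r_inner, r_outer, n_theta=450, n_r=60):
    """Unwrap the iris ring into an (n_theta x n_r) polar image.

    theta is sampled in steps of 360/450 = 0.8 degrees and the radius is
    normalized into [0, 60] between the inner and outer boundaries."""
    cy, cx = center
    thetas = np.deg2rad(np.arange(n_theta) * (360.0 / n_theta))
    radii = r_inner + (r_outer - r_inner) * np.arange(n_r) / (n_r - 1)
    polar = np.empty((n_theta, n_r), dtype=eye.dtype)
    for i, t in enumerate(thetas):
        ys = np.clip((cy + radii * np.sin(t)).round().astype(int), 0, eye.shape[0] - 1)
        xs = np.clip((cx + radii * np.cos(t)).round().astype(int), 0, eye.shape[1] - 1)
        polar[i] = eye[ys, xs]       # nearest-neighbour samples along one ray
    return polar                     # shape (450, 60): the (theta, r) plane
```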
64 Shinyoung Lim et al. ETRI Journal, Volume 23, Number 2, June 2001
3. Feature Extraction Stage
The Gabor transform and the wavelet transform are typically used for analyzing human iris patterns and extracting feature points from them [4], [9]-[11]. In this paper, a wavelet transform is used to extract features from iris images. Among the mother wavelets, we use the Haar wavelet, illustrated in Fig. 5, as the basis function.
Fig. 5. The Haar mother wavelet.
Figure 6 shows the conceptual process of obtaining feature vectors of the optimized dimension. Here, H and L denote the high-pass and low-pass filters, respectively, and HH indicates that the high-pass filter is applied along both axes. For the 450x60 iris image obtained from the preprocessing stage, we apply the wavelet transform four times to obtain 28x3 sub-images. Finally, we organize a feature vector by combining the 84 values in the HH sub-image of the fourth transform (HH4 in Fig. 6) with the average value of each of the three remaining high-pass sub-images (HH1, HH2, and HH3 in Fig. 6). The dimension of the resulting feature vector is therefore 87.
Fig. 6. Conceptual diagram for organizing a feature vector.
Each of the 87 components is a real value between -1.0 and 1.0. To reduce the space and computation needed to manipulate the feature vector, we quantize each real value into a binary value, converting positive values into 1 and negative values into 0, as in the sketch below. An iris image can therefore be represented with only 87 bits.
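The sketch below illustrates one way to obtain such an 87-bit vector under the stated sizes (450x60 input, four transform levels, an 84-value HH4 band, and the averages of HH1 to HH3). The pairwise averaging/differencing Haar step and the odd-dimension truncation are assumptions chosen so that the sub-image sizes match the text; this is not the authors' implementation.

```python
import numpy as np

def haar_step(img):
    """One level of a 2-D Haar transform via pairwise averages/differences.

    Odd dimensions are truncated by one row/column so that the sizes fall
    out as in the text (450x60 -> ... -> 28x3 after four levels)."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    lo_r = (img[0::2] + img[1::2]) / 2.0        # row-wise low pass
    hi_r = (img[0::2] - img[1::2]) / 2.0        # row-wise high pass
    LL = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0  # low pass on both axes
    HH = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0  # high pass on both axes (diagonal detail)
    return LL, HH                               # only LL and HH are kept in this sketch

def iris_feature(polar_img):
    """87-bit feature: all 84 HH4 values plus the means of HH1..HH3,
    each quantized to 1 (positive) or 0 (negative)."""
    ll, hh_bands = polar_img, []
    for _ in range(4):
        ll, hh = haar_step(ll)
        hh_bands.append(hh)
    features = np.concatenate([hh_bands[3].ravel(),      # 84 values of HH4 (28x3)
                               [hh_bands[0].mean(),      # averages of HH1..HH3
                                hh_bands[1].mean(),
                                hh_bands[2].mean()]])
    return (features > 0).astype(np.uint8)               # 87-bit binary vector
```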
4. Identification and Verification Stage
In general, a competitive learning neural network such as LVQ learns faster than the error backpropagation algorithm, but its performance is easily affected by the initial weight vectors [12], [13].
To address this problem, at least for iris patterns, we propose a new method for initializing the weight vectors effectively. The method generates initial vectors located around the boundary of each class. In the learning process, the usual LVQ learning procedure is carried out after the weight vectors are initialized by the proposed method. In the recognition process, we set an acceptance level and use it to determine whether the final result is accepted or rejected [15], [16].
The process of the proposed initialization algorithm, called the uniform distribution of initial weight vectors, is as follows (see Fig. 7); a code sketch is given after the steps.
Fig. 7. Concept of the uniform distribution of initial weight vectors (an LVQ network with input, hidden, and output layers and weight vectors W1, W2, ..., Wn).
Step 1 Set the first weight vector of each class to the vector of the first learning pattern of that class, and set the other weight vectors to zero:

    W_1^k = X_1^k,  k = 1, 2, ..., M        (1)

where
    X_1^k : the vector of the first learning pattern of the k-th class,
    W_1^k : the first weight vector of the k-th class,
    M : the number of classes.
Step 2 Select another pattern of each class as a new learning
pattern.
Step 3 Calculate the distance d_j between the learning pattern and each weight vector by the following equation:

    d_j^2 = \sum_{i=0}^{N-1} (X_{ip}^k - W_{ij}^k)^2        (2)

where
    X_{ip}^k : the i-th component of the p-th learning pattern of the k-th class,
    W_{ij}^k : the i-th component of the j-th weight vector of the k-th class,
    N : the dimension of a learning pattern.
Step 4 Determine whether the class of the weight vector with
the minimum distance among all dj is equal to the
class of the learning pattern. If the class of the weight
vector is not equal to the class of the learning pattern,
then add the vector of the learning pattern as a new
weight vector.
Step 5 Go to step 2 until all of the learning patterns are used
in the learning process.
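A compact Python sketch of Steps 1 to 5 is given below; the function name and the data layout (a pattern matrix plus a label list) are assumptions for illustration.

```python
import numpy as np

def init_weight_vectors(patterns, labels):
    """Uniform distribution of initial weight vectors (Steps 1-5).

    patterns: (P, N) array of learning patterns; labels: length-P class ids.
    A pattern is promoted to a new weight vector whenever the nearest
    existing weight vector belongs to a different class."""
    weights, w_labels = [], []
    classes = sorted(set(labels))
    # Step 1: the first pattern of each class becomes its first weight vector.
    for c in classes:
        first = next(i for i, l in enumerate(labels) if l == c)
        weights.append(patterns[first].astype(float))
        w_labels.append(c)
    # Steps 2-5: scan the remaining patterns.
    for x, c in zip(patterns, labels):
        d = [np.sum((x - w) ** 2) for w in weights]     # squared distances, as in (2)
        if w_labels[int(np.argmin(d))] != c:            # Step 4: winner has the wrong class
            weights.append(x.astype(float))             # so add a new weight vector
            w_labels.append(c)
    return np.array(weights), np.array(w_labels)
```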
The winner selection method based on Euclidean distance that is generally used in competitive learning neural networks has no problem determining the minimum distance for each class as a whole. However, as the dimension of a feature vector increases, so does the possibility of selecting a wrong winner, because the information carried by each individual dimension is lost in the aggregate distance. To solve this problem, we propose a new winner selection algorithm called the multidimensional winner selection method. The algorithm determines a winner for each dimension, counts how often each class wins a dimension, and selects the class with the largest count as the final winner. Figure 8 shows the conceptual diagram of the proposed winner selection method; a code sketch follows the figure. In the figure, each plate in a neuron indicates one dimension of a feature vector.
Fig. 8. Conceptual diagram of the multi-dimensional winner selection method (input pattern and output neurons).
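The following sketch illustrates the multidimensional winner selection described above; the per-dimension distance is taken as an absolute difference, which is an assumption, and the function name is illustrative.

```python
import numpy as np

def multidimensional_winner(x, weights, w_labels):
    """For every dimension, the weight vector closest to the input in that
    single dimension wins; the class collecting the most per-dimension wins
    is returned as the final winner."""
    per_dim = np.abs(weights - x)                    # shape (J, N): |x_i - w_ji|
    dim_winners = np.argmin(per_dim, axis=0)         # winning weight vector per dimension
    win_classes = np.asarray(w_labels)[dim_winners]  # its class, per dimension
    classes, counts = np.unique(win_classes, return_counts=True)
    return classes[np.argmax(counts)]                # class with the most dimension wins
```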
IV. EXPERIMENTAL RESULTS
To evaluate the performance of the proposed human iris recognition system, we collected 6,000 images over three months, 30 images per person, with the help of 200 volunteer Korean university students in their early twenties. The image acquisition environment is illustrated in Fig. 2. The LVQ parameters, such as the learning rate and the number of iterations, are shown in Table 1. The learning rate decreases from its initial value toward 0 as learning proceeds, and t in Table 1 denotes the iteration number; a sketch of a training loop using this schedule follows the table.
Table 1. Parameters for LVQ.

Initial learning rate     0.1
Update of learning rate   α(t) = α(0)(1 - t / total number of iterations)
Total iterations          300
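For reference, the sketch below plugs the schedule of Table 1 into a standard LVQ1 training loop. The paper specifies only the learning-rate schedule; the move-toward/push-away update rule shown here is the textbook LVQ1 rule, and the random pattern sampling is an assumption.

```python
import numpy as np

def train_lvq1(patterns, labels, weights, w_labels,
               alpha0=0.1, total_iter=300, seed=0):
    """LVQ1 training with the learning-rate schedule of Table 1."""
    rng = np.random.default_rng(seed)
    weights = weights.astype(float).copy()
    for t in range(total_iter):
        alpha = alpha0 * (1.0 - t / total_iter)            # alpha(t) = alpha(0)(1 - t/T)
        i = rng.integers(len(patterns))                    # pick a learning pattern
        x, c = patterns[i].astype(float), labels[i]
        j = np.argmin(np.sum((weights - x) ** 2, axis=1))  # Euclidean winner
        if w_labels[j] == c:
            weights[j] += alpha * (x - weights[j])         # pull toward same-class pattern
        else:
            weights[j] -= alpha * (x - weights[j])         # push away otherwise
    return weights
```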
Under the experimental environment described above, we conducted two kinds of experiments: one examines the performance of each method proposed in this paper, and the other provides two error rates, the false accept rate (FAR) and the false reject rate (FRR).
1. Results on Preprocessing Stage
In the experiments on the preprocessing stage, we checked the accuracy of the detected boundaries by visual inspection and obtained a success rate of 88.2% (5,292 of the 6,000 images). Table 2 shows the causes of preprocessing failure. Notably, failures on data with glasses account for 18.8% of all failures.
Table 2. Analysis of preprocessing failures according to cause.

Data without glasses and with lenses
  (1) Occlusion by eyelids             178    31%
  (2) Inappropriate eye positioning    127    22%
  (3) Shadow of eyelids                121    21%
  (4) Noise within the pupil            34     6%
  (5) Etc.                             115    20%
  Total                                575   100%

Data with glasses
  (6) Noise or dirt on the glasses      49    37%
  (7) Reflection from glasses           28    21%
  (8) Shadow of the rim of glasses      20    15%
  (9) Etc.                              36    27%
  Total                                133   100%
Figure 9 shows examples of failures in the preprocessing stage according to these causes; each number in the figure corresponds to a cause listed in Table 2.

Fig. 9. Examples of failures in the preprocessing stage.

We can also see that the preprocessing success rate for data with glasses is about 10% lower than that for data without glasses or with contact lenses.
2. Performance Comparison of Individual Method
Half of the 5,292 images preprocessed successfully are used as the learning data for LVQ, and the remaining half as the test data. The following subsections describe the results for each stage or method proposed in this paper.
A. Feature Extraction Method
Table 3 shows the recognition rates of the two feature extraction methods, the Gabor transform and the Haar wavelet transform, with the same classifier. The recognition rate of the wavelet transform is higher than that of the Gabor transform by 0.9% on the learning data and 2.1% on the test data. We therefore use the Haar wavelet transform as the feature extraction method in the following experiments.
Table 3. Comparison of two feature extraction methods.

                 Gabor Transform    Wavelet Transform
Learning Data        95.3%               96.2%
Test Data            92.3%               94.4%
B. Weight Vector Initialization Method
Table 4 compares the accuracy of the two initialization methods under the same experimental conditions. The proposed method, the uniform distribution of initial weight vectors, shows better performance on both the learning data and the test data than initialization with random values, which is regarded as the baseline.
Table 4. Comparison of weight vector initialization methods.

                 Initialization with random values    Proposed method
Learning Data                 96.2%                        97.1%
Test Data                     94.4%                        95.9%
C. Winner Selection Method
Table 5 shows the results of the two winner selection methods when the Haar wavelet transform is used for feature extraction and LVQ is trained with the proposed initialization method. The proposed multidimensional method gives better results for human iris features.
Table 5. Comparison of winner selection methods.

                 Euclidean distance method    Multi-dimensional method
Learning Data            97.1%                        97.8%
Test Data                95.9%                        97.2%
D. Size of Feature Vector
From the three experiments above, we selected the most accurate method at each stage to configure a good system for personal identification based on iris patterns: the Haar wavelet transform for feature extraction, the uniform distribution method for initializing weight vectors, and the multidimensional method for winner selection.
With the iris recognition system composed of these methods, we try to minimize the dimension of the feature vector without degrading recognition accuracy. The proposed feature extraction process represents a feature vector with 87 dimensions, each requiring only one bit. Even after the image is transformed four successive times, the input space can still be separated according to the degree of match, as shown in Fig. 10. In Figs. 10 and 11, the x-axis denotes each person and the y-axis the degree of match; black points indicate successful matches and white points indicate failed matches.

Fig. 10. Degree of match by 87 dimensions for a feature vector.

Fig. 11. Degree of match by 18 dimensions for a feature vector.
If we transform the image five times, however, we cannot maintain a recognition threshold, even though the resulting feature vector would be much smaller, as shown in Fig. 11. That is why we choose 87 dimensions, not 18, for each feature vector. Table 6 shows the performance evaluation according to the size of the feature vector.
Table 6. Performance evaluation according to the size of feature vectors.

                 256 dimensions (1 byte/dimension)    87 dimensions (1 bit/dimension)
Learning Data               97.8%                               98.0%
Test Data                   97.2%                               97.2%
For comparison with the proposed scheme for organizing a feature vector, we also used 256-dimensional vectors (1 byte per dimension) as introduced in [4]. Since the proposed feature vector is about one twentieth the size of the 256-dimensional (1 byte per dimension) representation, the recognition and verification process is expected to be correspondingly faster. All of the experimental results on the proposed methods are summarized in Table 7.
Table 7. Performance evaluation on the proposed methods (each column adds the next proposed method, from left to right).

                 Gabor        Wavelet      + Uniform distribution    + Multi-dimensional    + 87-bit
                 transform    transform    of initial weights        winner selection       feature vector
Learning Data      97.8%        98.0%            99.2%                    99.6%                 99.8%
Test Data          92.3%        94.4%            97.6%                    98.7%                 99.3%

(The leftmost configuration uses initialization with random values, Euclidean distance-based winner selection, and a 256-dimension, 1-byte-per-dimension feature vector of 2,048 bits; the rightmost uses the 87-dimension, 1-bit-per-dimension feature vector of 87 bits.)
3. Overall Performance of Proposed System
The performance of biometric systems is usually described
by the two error rates: FAR and FRR. In order to determine a
threshold separating FRR and FAR, we compare the feature
vector of an unknown pattern with the weight vector obtained
from the corresponding output node of LVQ, count the number
of matched bits, and then use the ratio of matched bits over 87
bits as the degree of match. For these experiments, we divide
data into two groups including each 100 person: one is for
LVQ learning and for false reject test (Group 1), and the other
for false accept test (Group 2). We use 5 data per person for
LVQ learning from Group 1.
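A minimal sketch of this degree-of-match computation and the accept/reject decision is given below; binarizing the learned weight vector at 0.5 and the threshold value are illustrative assumptions.

```python
import numpy as np

def degree_of_match(feature_bits, weight_vector):
    """Percentage of the 87 bits that agree between an unknown pattern
    and the (binarized) weight vector of the claimed class."""
    w_bits = (weight_vector > 0.5).astype(np.uint8)   # binarize the learned weights
    matched = np.sum(feature_bits == w_bits)
    return 100.0 * matched / feature_bits.size

def verify(feature_bits, weight_vector, threshold=61.5):
    """Accept the claim if the degree of match exceeds a threshold chosen
    near the crossover of the FRR/FAR curves (Fig. 14)."""
    return degree_of_match(feature_bits, weight_vector) >= threshold
```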
A. Experiment for FRR
For this experiment, we use 20 images per person from the data of Group 1 that were not used in the LVQ learning process. The degree of match between the unknown patterns and the registered (trained) patterns is illustrated in Fig. 12, where the x-axis indicates the data index and the y-axis the degree of match.
Fig. 12. Degree of match for the same persons (authentic).
B. Experiment for FAR
For this experiment, we use 20 images per person from Group 2, which can be regarded as imposters. The degree of match between the imposter patterns and the registered patterns is illustrated in Fig. 13, where the x-axis indicates the data index and the y-axis the degree of match.
Fig. 13. Degree of match for different persons (imposter).
Figure 14 shows how the two error rates change with the degree-of-match threshold. By selecting the intersection point of the two error curves as the threshold, we can minimize both error rates simultaneously. With a threshold of 60.5 or 61.5, the system achieves a performance of about 97.1% to 98.4%. Table 8 lists FAR and FRR according to the degree of match; a sketch of this threshold selection follows the table.
Fig. 14. Change of the two error rates, FRR and FAR (%), according to the degree of match (%).
Because the FAR and FRR experiments were conducted only on successfully preprocessed data, the recognition rate over all data, including the images that failed preprocessing, is about 10% lower. A great deal of effort should therefore be devoted to improving the preprocessing techniques to obtain higher reliability. The processing time from data acquisition to identification/verification is about 2 seconds.
Table 8. FAR and FRR according to the degree of match.
Degree of Match (%) FRR (%) FAR (%)
53.5 0.00 37.15
54.5 0.15 28.45
55.5 0.15 23.40
56.5 0.25 17.40
57.5 0.40 12.90
58.5 0.55 8.40
59.5 0.80 5.15
60.5 1.65 2.90
61.5 1.65 2.90
62.5 3.70 2.15
63.5 5.30 1.55
64.5 7.15 1.30
65.5 8.85 0.55
66.5 11.65 0.15
67.5 14.1 0.15
68.5 17.75 0.00
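The sketch below shows how such a table and the curves of Fig. 14 can be derived from the authentic and imposter match scores, and how a threshold near the crossover of the two curves can be chosen; the score arrays and function names are hypothetical.

```python
import numpy as np

def far_frr_curves(authentic_scores, imposter_scores, thresholds):
    """FRR/FAR (in %) over candidate thresholds on the degree of match.

    authentic_scores: match scores of genuine attempts (Group 1);
    imposter_scores: match scores of imposter attempts (Group 2)."""
    frr = [100.0 * np.mean(np.asarray(authentic_scores) < th) for th in thresholds]
    far = [100.0 * np.mean(np.asarray(imposter_scores) >= th) for th in thresholds]
    return np.array(frr), np.array(far)

# Pick the threshold where the two curves are closest (near-equal error rate).
thresholds = np.arange(53.5, 69.0, 1.0)
# frr, far = far_frr_curves(authentic, imposter, thresholds)  # hypothetical score arrays
# best = thresholds[np.argmin(np.abs(frr - far))]
```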
V. CONCLUSIONS
In this paper, an efficient method for personal identification and verification based on human iris patterns was presented. To process iris patterns more efficiently and effectively than existing methods, the following studies were conducted. First, two widely used feature extraction methods, the Gabor transform and the Haar wavelet transform, were evaluated, and the Haar wavelet transform was found to perform better. Second, the Haar wavelet transform was used to optimize the dimension of the feature vectors in order to reduce processing time and storage; with only 87 bits, an iris pattern could be represented without any negative influence on system performance. Last, the accuracy of the classifier, a competitive learning neural network, was improved by a weight vector initialization method and a new winner selection method designed for iris recognition. With these methods, the recognition performance increased to 98.4%. From the experimental results, we are convinced that the proposed system is practical enough to be applied to various real applications.
REFERENCES
[1] Adler F. H., Physiology of the Eye: Clinical Application, The C. V.
Mosby Company, 1965.
[2] Hallinan P. W., "Recognizing Human Eyes", SPIE Proc. Geomet-
ric Methods in Computer Vision, 1570, 1991, pp. 214-226.
[3] Flom L. and Safir A., “Iris recognition system,” U.S. Patent 4 641
349, 1987.
[4] John G. Daugman, "High Confidence Visual Recognition of Per-
sons by a Test of Statistical Independence", IEEE Trans. on Pat-
tern Analysis and Machine Intelligence, 15(11), 1993, pp. 1148-
1161.
[5] Williams, G.O., "Iris Recognition Technology", IEEE Aerospace
and Electronics Systems Magazine, 12(4), 1997, pp.23-29.
[6] Wildes, R.P., "Iris Recognition: An Emerging Biometric Technol-
ogy", Proc. of the IEEE, 85(9), 1997, pp.1348-1363.
[7] Wildes, R.P., Asmuth, J.C. et al., "A System for Automated Iris
Recognition", Proc. of the Second IEEE Workshop on Applica-
tions of Computer Vision, 1994, pp. 121-128.
[8] Boles, W.W. and Boashash, B., "A Human Identification Tech-
nique Using Images of the Iris and Wavelet Transform", IEEE
Trans. on Signal Processing, 46(4), 1998, pp.1185-1188.
[9] Randy K. Young, Wavelet Theory and Its Application, Kluwer
Academic Publisher, 1992.
[10] Rioul O. and Vetterli M., "Wavelets and Signal Processing", IEEE Signal Processing Magazine, October 1991, pp. 14-38.
[11] Gilbert Strang, Truong Nguyen, Wavelets and Filter Banks,
Wellesley-Cambridge Press, 1996.
[12] Fausett L., Fundamentals of Neural Networks, Prentice Hall, 1994.
[13] Kohonen T., Self-Organization and Associative Memory, Springer-Verlag, 1985.
[14] Ilgu Yun et al., “Extraction of Passive Device Model Parameters
Using Genetic Algorithms”, ETRI J., Vol. 22, No.1, 2000, pp.38-
46.
[15] Sang-Mi Lee, Hee-Jung Bae, and Sung-Hwan Jung, “Efficient
Content-based Image Retrieval Methods using Color and Tex-
ture”, ETRI J., Vol.20, No.3, 1998, pp.272-283.
[16] Young-Sum Kim et al., “Development of Content-based Trade-
mark Retrieval System on the World Wide Web”, ETRI J., Vol.21,
No.1, 1999, pp.40-53.
Shinyoung Lim received B.E. degree in indus-
trial chemistry from Kon-Kuk University, Seoul,
Korea in 1983, and M.S. degrees in chemical
engineering and computer science from Kon-
Kuk University in 1985 and 1992, respectively,
and Ph.D. degree in computer science from Ko-
rea University, Seoul in 2001. He joined the
Systems Engineering Research Institute (SERI), Korea Institute of Science and Technology (KIST) in 1986. Since then, he was a principal member of research
staff in the field of software engineering, data communication and
computer networks, and information security until 1996. He is cur-
rently working in Electronic Commerce Technology of the Electronics and Telecommunications Research Institute (ETRI), as the leader of the
Electronic Payment Team. His current interests include the area of elec-
tronic commerce security, digital contents copyright protection, biomet-
rics, and mobile commerce security. He is a member of the Korean
Electronic Payment Forum, KIPS, and KICS.
Kwanyong Lee received the M.S. and the Ph.D.
degrees in computer science from Yonsei Uni-
versity, Seoul, Korea in 1991 and 1994, respec-
tively. From 1989 to 1999, he was a researcher
in the Research Institute of Natural Science,
Yonsei University. From 1997 to 1999, he
joined the department of information and com-
munication engineering at the University of To-
kyo in Japan as a visiting researcher. In 1999, he
was a senior researcher in the EC/CALS division of the Electronics and
Telecommunications Research Institute in Taejon, Korea. Since August
2000, he has been a researcher in the department of computer science
of Yonsei University and he is responsible for developing and evaluat-
ing various image-processing-based biometric systems. His current re-
search interests include video-based biometrics, pattern recognition,
computer vision, and image processing.
Okhwan Byeon received his B.E. degree in
communication engineering from National
Aviation University, Seoul, Korea in 1979, and
M.S. degree in information engineering from
Inha University in 1985, and Ph.D. degree in in-
formation communication engineering from
Kyeonghee University in 1995, respectively. He
joined the Data Communication Section, Korea
Institute of Science and Technology (KIST) in
1978. Since then, he was a principal member of engineering staff in the
field of Data Communication and Computer Networks until 1996. He
is currently working in the Supercomputing Center of the Korea Institute of Science and Technology Information (KISTI), as head of the High Perform-
ance Networking Lab. His current interests include the area of distrib-
uted computing, internet traffic engineering and security. He is a mem-
ber of the KIPS, KISS, KICS, and KIISC.
Taiyun Kim received B.S. in industrial engi-
neering science from Korea University, 1981.
He received M.S. in computer science from the
Wayne State University, 1983. He received
Ph.D. in computer science from the Auburn
University, 1987. At present, he is a professor at
the department of computer science and engi-
neering in the Korea University. His research in-
terests are computer networks, EDI systems, security, biometrics, ISDN, satellite communication, and computer graphics.
More Related Content

What's hot

An Assimilated Face Recognition System with effective Gender Recognition Rate
An Assimilated Face Recognition System with effective Gender Recognition RateAn Assimilated Face Recognition System with effective Gender Recognition Rate
An Assimilated Face Recognition System with effective Gender Recognition RateIRJET Journal
 
A Survey : Iris Based Recognition Systems
A Survey : Iris Based Recognition SystemsA Survey : Iris Based Recognition Systems
A Survey : Iris Based Recognition SystemsEditor IJMTER
 
An Image Based PCB Fault Detection and Its Classification
An Image Based PCB Fault Detection and Its ClassificationAn Image Based PCB Fault Detection and Its Classification
An Image Based PCB Fault Detection and Its Classificationrahulmonikasharma
 
EDGE DETECTION OF MICROSCOPIC IMAGE
EDGE DETECTION OF MICROSCOPIC IMAGEEDGE DETECTION OF MICROSCOPIC IMAGE
EDGE DETECTION OF MICROSCOPIC IMAGEIAEME Publication
 
Spot Edge Detection in cDNA Microarray Images using Window based Bi-Dimension...
Spot Edge Detection in cDNA Microarray Images using Window based Bi-Dimension...Spot Edge Detection in cDNA Microarray Images using Window based Bi-Dimension...
Spot Edge Detection in cDNA Microarray Images using Window based Bi-Dimension...idescitation
 
Binary operation based hard exudate detection and fuzzy based classification ...
Binary operation based hard exudate detection and fuzzy based classification ...Binary operation based hard exudate detection and fuzzy based classification ...
Binary operation based hard exudate detection and fuzzy based classification ...IJECEIAES
 
Literature Survey on Image Deblurring Techniques
Literature Survey on Image Deblurring TechniquesLiterature Survey on Image Deblurring Techniques
Literature Survey on Image Deblurring TechniquesEditor IJCATR
 
Enhanced Thinning Based Finger Print Recognition
Enhanced Thinning Based Finger Print RecognitionEnhanced Thinning Based Finger Print Recognition
Enhanced Thinning Based Finger Print RecognitionIJCI JOURNAL
 
Rotation Invariant Face Recognition using RLBP, LPQ and CONTOURLET Transform
Rotation Invariant Face Recognition using RLBP, LPQ and CONTOURLET TransformRotation Invariant Face Recognition using RLBP, LPQ and CONTOURLET Transform
Rotation Invariant Face Recognition using RLBP, LPQ and CONTOURLET TransformIRJET Journal
 
Authentication of Degraded Fingerprints Using Robust Enhancement and Matching...
Authentication of Degraded Fingerprints Using Robust Enhancement and Matching...Authentication of Degraded Fingerprints Using Robust Enhancement and Matching...
Authentication of Degraded Fingerprints Using Robust Enhancement and Matching...IDES Editor
 
Image De-noising and Enhancement for Salt and Pepper Noise using Genetic Algo...
Image De-noising and Enhancement for Salt and Pepper Noise using Genetic Algo...Image De-noising and Enhancement for Salt and Pepper Noise using Genetic Algo...
Image De-noising and Enhancement for Salt and Pepper Noise using Genetic Algo...IDES Editor
 
A new technique to fingerprint recognition based on partial window
A new technique to fingerprint recognition based on partial windowA new technique to fingerprint recognition based on partial window
A new technique to fingerprint recognition based on partial windowAlexander Decker
 
Recent developments in iris based biometric authentication systems
Recent developments in iris based biometric authentication systemsRecent developments in iris based biometric authentication systems
Recent developments in iris based biometric authentication systemseSAT Journals
 
An efficient method for recognizing the low quality fingerprint verification ...
An efficient method for recognizing the low quality fingerprint verification ...An efficient method for recognizing the low quality fingerprint verification ...
An efficient method for recognizing the low quality fingerprint verification ...IJCI JOURNAL
 

What's hot (19)

An Assimilated Face Recognition System with effective Gender Recognition Rate
An Assimilated Face Recognition System with effective Gender Recognition RateAn Assimilated Face Recognition System with effective Gender Recognition Rate
An Assimilated Face Recognition System with effective Gender Recognition Rate
 
A Survey : Iris Based Recognition Systems
A Survey : Iris Based Recognition SystemsA Survey : Iris Based Recognition Systems
A Survey : Iris Based Recognition Systems
 
An Image Based PCB Fault Detection and Its Classification
An Image Based PCB Fault Detection and Its ClassificationAn Image Based PCB Fault Detection and Its Classification
An Image Based PCB Fault Detection and Its Classification
 
A05610109
A05610109A05610109
A05610109
 
EDGE DETECTION OF MICROSCOPIC IMAGE
EDGE DETECTION OF MICROSCOPIC IMAGEEDGE DETECTION OF MICROSCOPIC IMAGE
EDGE DETECTION OF MICROSCOPIC IMAGE
 
Spot Edge Detection in cDNA Microarray Images using Window based Bi-Dimension...
Spot Edge Detection in cDNA Microarray Images using Window based Bi-Dimension...Spot Edge Detection in cDNA Microarray Images using Window based Bi-Dimension...
Spot Edge Detection in cDNA Microarray Images using Window based Bi-Dimension...
 
D45012128
D45012128D45012128
D45012128
 
Binary operation based hard exudate detection and fuzzy based classification ...
Binary operation based hard exudate detection and fuzzy based classification ...Binary operation based hard exudate detection and fuzzy based classification ...
Binary operation based hard exudate detection and fuzzy based classification ...
 
Literature Survey on Image Deblurring Techniques
Literature Survey on Image Deblurring TechniquesLiterature Survey on Image Deblurring Techniques
Literature Survey on Image Deblurring Techniques
 
N010226872
N010226872N010226872
N010226872
 
Enhanced Thinning Based Finger Print Recognition
Enhanced Thinning Based Finger Print RecognitionEnhanced Thinning Based Finger Print Recognition
Enhanced Thinning Based Finger Print Recognition
 
Rotation Invariant Face Recognition using RLBP, LPQ and CONTOURLET Transform
Rotation Invariant Face Recognition using RLBP, LPQ and CONTOURLET TransformRotation Invariant Face Recognition using RLBP, LPQ and CONTOURLET Transform
Rotation Invariant Face Recognition using RLBP, LPQ and CONTOURLET Transform
 
K011138084
K011138084K011138084
K011138084
 
Authentication of Degraded Fingerprints Using Robust Enhancement and Matching...
Authentication of Degraded Fingerprints Using Robust Enhancement and Matching...Authentication of Degraded Fingerprints Using Robust Enhancement and Matching...
Authentication of Degraded Fingerprints Using Robust Enhancement and Matching...
 
Image De-noising and Enhancement for Salt and Pepper Noise using Genetic Algo...
Image De-noising and Enhancement for Salt and Pepper Noise using Genetic Algo...Image De-noising and Enhancement for Salt and Pepper Noise using Genetic Algo...
Image De-noising and Enhancement for Salt and Pepper Noise using Genetic Algo...
 
A new technique to fingerprint recognition based on partial window
A new technique to fingerprint recognition based on partial windowA new technique to fingerprint recognition based on partial window
A new technique to fingerprint recognition based on partial window
 
366 369
366 369366 369
366 369
 
Recent developments in iris based biometric authentication systems
Recent developments in iris based biometric authentication systemsRecent developments in iris based biometric authentication systems
Recent developments in iris based biometric authentication systems
 
An efficient method for recognizing the low quality fingerprint verification ...
An efficient method for recognizing the low quality fingerprint verification ...An efficient method for recognizing the low quality fingerprint verification ...
An efficient method for recognizing the low quality fingerprint verification ...
 

Viewers also liked

IRIS Based Human Recognition System
IRIS Based Human Recognition SystemIRIS Based Human Recognition System
IRIS Based Human Recognition SystemCSCJournals
 
Iris Encryption using (2, 2) Visual cryptography & Average Orientation Circul...
Iris Encryption using (2, 2) Visual cryptography & Average Orientation Circul...Iris Encryption using (2, 2) Visual cryptography & Average Orientation Circul...
Iris Encryption using (2, 2) Visual cryptography & Average Orientation Circul...AM Publications
 
Remedy for disease affected iris in iris recognition
Remedy for disease affected iris in iris recognitionRemedy for disease affected iris in iris recognition
Remedy for disease affected iris in iris recognitioneSAT Journals
 
Pattern Recognition #1 - Gulraj
Pattern Recognition #1 - GulrajPattern Recognition #1 - Gulraj
Pattern Recognition #1 - GulrajMuhammad GulRaj
 
Ieeepro techno solutions ieee embedded project secure and robust iris recog...
Ieeepro techno solutions   ieee embedded project secure and robust iris recog...Ieeepro techno solutions   ieee embedded project secure and robust iris recog...
Ieeepro techno solutions ieee embedded project secure and robust iris recog...srinivasanece7
 
The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...
The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...
The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...CSCJournals
 
www1.cs.columbia.edu
www1.cs.columbia.eduwww1.cs.columbia.edu
www1.cs.columbia.edubutest
 
Internation Journal Conference
Internation Journal ConferenceInternation Journal Conference
Internation Journal ConferenceHemanth Kumar
 
A Review on Feature Extraction Techniques and General Approach for Face Recog...
A Review on Feature Extraction Techniques and General Approach for Face Recog...A Review on Feature Extraction Techniques and General Approach for Face Recog...
A Review on Feature Extraction Techniques and General Approach for Face Recog...Editor IJCATR
 
High Security Human Recognition System using Iris Images
High Security Human Recognition System using Iris ImagesHigh Security Human Recognition System using Iris Images
High Security Human Recognition System using Iris ImagesIDES Editor
 
Enhancement of Multi-Modal Biometric Authentication Based on IRIS and Brain N...
Enhancement of Multi-Modal Biometric Authentication Based on IRIS and Brain N...Enhancement of Multi-Modal Biometric Authentication Based on IRIS and Brain N...
Enhancement of Multi-Modal Biometric Authentication Based on IRIS and Brain N...CSCJournals
 
Data Science - Part XII - Ridge Regression, LASSO, and Elastic Nets
Data Science - Part XII - Ridge Regression, LASSO, and Elastic NetsData Science - Part XII - Ridge Regression, LASSO, and Elastic Nets
Data Science - Part XII - Ridge Regression, LASSO, and Elastic NetsDerek Kane
 
Data Science - Part XVI - Fourier Analysis
Data Science - Part XVI - Fourier AnalysisData Science - Part XVI - Fourier Analysis
Data Science - Part XVI - Fourier AnalysisDerek Kane
 

Viewers also liked (20)

Machine Learning 1
Machine Learning 1Machine Learning 1
Machine Learning 1
 
IriCore SW Brochure
IriCore SW BrochureIriCore SW Brochure
IriCore SW Brochure
 
Ijcatr04051013
Ijcatr04051013Ijcatr04051013
Ijcatr04051013
 
IRIS Based Human Recognition System
IRIS Based Human Recognition SystemIRIS Based Human Recognition System
IRIS Based Human Recognition System
 
Iris Encryption using (2, 2) Visual cryptography & Average Orientation Circul...
Iris Encryption using (2, 2) Visual cryptography & Average Orientation Circul...Iris Encryption using (2, 2) Visual cryptography & Average Orientation Circul...
Iris Encryption using (2, 2) Visual cryptography & Average Orientation Circul...
 
Remedy for disease affected iris in iris recognition
Remedy for disease affected iris in iris recognitionRemedy for disease affected iris in iris recognition
Remedy for disease affected iris in iris recognition
 
Pattern Recognition #1 - Gulraj
Pattern Recognition #1 - GulrajPattern Recognition #1 - Gulraj
Pattern Recognition #1 - Gulraj
 
Human Iris Biometry
Human Iris BiometryHuman Iris Biometry
Human Iris Biometry
 
Ijarcce 27
Ijarcce 27Ijarcce 27
Ijarcce 27
 
Ieeepro techno solutions ieee embedded project secure and robust iris recog...
Ieeepro techno solutions   ieee embedded project secure and robust iris recog...Ieeepro techno solutions   ieee embedded project secure and robust iris recog...
Ieeepro techno solutions ieee embedded project secure and robust iris recog...
 
The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...
The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...
The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...
 
E017443136
E017443136E017443136
E017443136
 
www1.cs.columbia.edu
www1.cs.columbia.eduwww1.cs.columbia.edu
www1.cs.columbia.edu
 
Internation Journal Conference
Internation Journal ConferenceInternation Journal Conference
Internation Journal Conference
 
A Review on Feature Extraction Techniques and General Approach for Face Recog...
A Review on Feature Extraction Techniques and General Approach for Face Recog...A Review on Feature Extraction Techniques and General Approach for Face Recog...
A Review on Feature Extraction Techniques and General Approach for Face Recog...
 
High Security Human Recognition System using Iris Images
High Security Human Recognition System using Iris ImagesHigh Security Human Recognition System using Iris Images
High Security Human Recognition System using Iris Images
 
L_3011_62.+1908
L_3011_62.+1908L_3011_62.+1908
L_3011_62.+1908
 
Enhancement of Multi-Modal Biometric Authentication Based on IRIS and Brain N...
Enhancement of Multi-Modal Biometric Authentication Based on IRIS and Brain N...Enhancement of Multi-Modal Biometric Authentication Based on IRIS and Brain N...
Enhancement of Multi-Modal Biometric Authentication Based on IRIS and Brain N...
 
Data Science - Part XII - Ridge Regression, LASSO, and Elastic Nets
Data Science - Part XII - Ridge Regression, LASSO, and Elastic NetsData Science - Part XII - Ridge Regression, LASSO, and Elastic Nets
Data Science - Part XII - Ridge Regression, LASSO, and Elastic Nets
 
Data Science - Part XVI - Fourier Analysis
Data Science - Part XVI - Fourier AnalysisData Science - Part XVI - Fourier Analysis
Data Science - Part XVI - Fourier Analysis
 

Similar to 23-02-03[1]

International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)ijceronline
 
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...IJNSA Journal
 
Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid TechniqueBiometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Techniqueijsc
 
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...IJNSA Journal
 
Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique  Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique ijsc
 
A Literature Review on Iris Segmentation Techniques for Iris Recognition Systems
A Literature Review on Iris Segmentation Techniques for Iris Recognition SystemsA Literature Review on Iris Segmentation Techniques for Iris Recognition Systems
A Literature Review on Iris Segmentation Techniques for Iris Recognition SystemsIOSR Journals
 
IRJET- Survey of Iris Recognition Techniques
IRJET- Survey of Iris Recognition TechniquesIRJET- Survey of Iris Recognition Techniques
IRJET- Survey of Iris Recognition TechniquesIRJET Journal
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATORIRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATORcsitconf
 
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATORIRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATORcscpconf
 
IRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
IRJET - Human Eye Pupil Detection Technique using Center of Gravity MethodIRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
IRJET - Human Eye Pupil Detection Technique using Center of Gravity MethodIRJET Journal
 
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPAN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPIRJET Journal
 
The International Journal of Engineering and Science (IJES)
The International Journal of Engineering and Science (IJES)The International Journal of Engineering and Science (IJES)
The International Journal of Engineering and Science (IJES)theijes
 
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...IOSR Journals
 
Iris recognition based on 2D Gabor filter
Iris recognition based on 2D Gabor filterIris recognition based on 2D Gabor filter
Iris recognition based on 2D Gabor filterIJECEIAES
 
A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an...
A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an...A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an...
A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an...IJTET Journal
 
Quality assessment for online iris
Quality assessment for online irisQuality assessment for online iris
Quality assessment for online iriscsandit
 

Similar to 23-02-03[1] (20)

International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
 
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
 
Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid TechniqueBiometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique
 
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
EFFECTIVENESS OF FEATURE DETECTION OPERATORS ON THE PERFORMANCE OF IRIS BIOME...
 
Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique  Biometric Iris Recognition Based on Hybrid Technique
Biometric Iris Recognition Based on Hybrid Technique
 
G01114650
G01114650G01114650
G01114650
 
A Literature Review on Iris Segmentation Techniques for Iris Recognition Systems
A Literature Review on Iris Segmentation Techniques for Iris Recognition SystemsA Literature Review on Iris Segmentation Techniques for Iris Recognition Systems
A Literature Review on Iris Segmentation Techniques for Iris Recognition Systems
 
IRJET- Survey of Iris Recognition Techniques
IRJET- Survey of Iris Recognition TechniquesIRJET- Survey of Iris Recognition Techniques
IRJET- Survey of Iris Recognition Techniques
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
K0966468
K0966468K0966468
K0966468
 
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATORIRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
 
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATORIRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
IRIS BIOMETRIC RECOGNITION SYSTEM EMPLOYING CANNY OPERATOR
 
IRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
IRJET - Human Eye Pupil Detection Technique using Center of Gravity MethodIRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
IRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
 
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDPAN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
AN EFFICIENT FACE RECOGNITION EMPLOYING SVM AND BU-LDP
 
The International Journal of Engineering and Science (IJES)
The International Journal of Engineering and Science (IJES)The International Journal of Engineering and Science (IJES)
The International Journal of Engineering and Science (IJES)
 
A SURVEY ON IRIS RECOGNITION FOR AUTHENTICATION
A SURVEY ON IRIS RECOGNITION FOR AUTHENTICATIONA SURVEY ON IRIS RECOGNITION FOR AUTHENTICATION
A SURVEY ON IRIS RECOGNITION FOR AUTHENTICATION
 
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
 
Iris recognition based on 2D Gabor filter
Iris recognition based on 2D Gabor filterIris recognition based on 2D Gabor filter
Iris recognition based on 2D Gabor filter
 
A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an...
A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an...A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an...
A Comprehensive Approach for Multi Biometric Recognition Using Sclera Vein an...
 
Quality assessment for online iris
Quality assessment for online irisQuality assessment for online iris
Quality assessment for online iris
 

23-02-03[1]

  • 1. ETRI Journal, Volume 23, Number 2, June 2001 Shinyoung Lim et al. 61 In this paper, we propose an efficient method for per- sonal identification by analyzing iris patterns that have a high level of stability and distinctiveness. To improve the ef- ficiency and accuracy of the proposed system, we present a new approach to making a feature vector compact and ef- ficient by using wavelet transform, and two straightfor- ward but efficient mechanisms for a competitive learning method such as a weight vector initialization and the win- ner selection. With all of these novel mechanisms, the ex- perimental results showed that the proposed system could be used for personal identification in an efficient and effec- tive manner. Manuscriptreceived May8,2000;revisedMay2, 2001. ShinyounLimiswiththeElectronicPaymentTeam,ETRI,Taejon, Korea.(phone:+82428605015,e-mail: sylim@econos.etri.re.kr) Kwanyong LeeiswiththeYonseiUniversity,Seoul,Korea. (phone:+82115503683,e-mail:kylee@csai.yonsei.ac.kr) Okhwan ByeoniswiththeKISTI,Taejon,Korea. (phone:+82428690570,e-mail: ohbyeon@garam.kreonet.re.kr) TaiyunKimiswiththe Korea University, Seoul,Korea. (phone:+82232903194,e-mail:tykim@netlab.korea.ac.kr) I. INTRODUCTION To control the access to secure areas or materials, a reliable personal identification infrastructure is required. Conventional methods of recognizing the identity of a person by using pass- words or cards are not altogether reliable, because they can be forgotten or stolen. Biometric technology, which is based on physical and behavioral features of human body such as face, fingerprints, hand shape, eyes, signature and voice, has now been considered as an alternative to existing systems in a great deal of application domains. Such application domains include entrance management for specified areas, and airport security checking system. Among various physical characteristics, iris patterns have at- tracted a lot of attention for the last few decades in biometric technology because they have stable and distinctive features for personal identification. That is because every iris has fine and unique patterns and does not change over time since two or three years after the birth, so it might be called as a kind of op- tical fingerprint [1], [2]. Figure 1 shows an image of human iris pattern. Most works on personal identification and verification using iris patterns have been done in the 1990s [4]-[8]. Through these works, we could achieve a great deal of progress in iris-based identification systems much more than we expected. Some work, however, has limited capabilities in recognizing the iden- tity of person accurately and efficiently, so there is much room for improvement of some technologies affecting performance in a practical viewpoint. The main difficulty of human iris rec- ognition is that it is hard to find apparent feature points in the image and to keep their representability high in an efficient way. In addition, the identification or verification process suitable for iris patterns is required to get high accuracy. Efficient Iris Recognition through Improvement of Feature Vector and Classifier Shinyoung Lim, Kwanyong Lee, Okhwan Byeon, and Taiyun Kim
To achieve better performance of iris-based identification systems, we conduct the following experiments: a performance evaluation of two popular feature extraction methods, the Gabor transform and the Haar wavelet transform, to select the method better suited to iris patterns; a performance comparison according to the dimension of the feature vector in order to make the feature vector compact; and a performance comparison of a competitive learning neural network with revised mechanisms for the initialization of weight vectors and for winner selection. Through various experiments, we show that the proposed methods can be used for personal identification systems in an efficient way.

The contents of this paper are as follows. In the following section, some related works are briefly reviewed. Section III gives the details of the proposed method for extracting features and recognizing them. Experimental results and analyses are presented in Section IV, and conclusions are given in Section V.

II. RELATED WORKS

In the process of iris recognition, it is essential to convert an acquired iris image into a suitable code that can be easily manipulated. Thus, we take a brief look at the feature extraction and representation processes of the most notable recent works.

Daugman [4] developed a feature extraction process based on the outputs of a set of 2-D Gabor filters. He generated a 256-byte code by quantizing the local phase angle according to the real and imaginary parts of the filtered image, compared the percentage of mismatched bits between a pair of iris representations with the XOR operator, and selected a separation point in the space of Hamming distances (a minimal sketch of this kind of bit-wise comparison is given at the end of this section).

The Wildes system, in contrast, made use of a Laplacian pyramid constructed at four different resolution levels to generate the iris code [7]. It also exploited a normalized correlation based on goodness-of-match values and Fisher's linear discriminant for pattern matching. Both iris recognition systems make use of bandpass image decompositions to obtain multi-scale information.

Boles [8] implemented a system that operates on a set of 1-D signals composed of normalized iris signatures at a few intermediate resolution levels, obtaining the iris representation of these signals from the zero-crossings of the dyadic wavelet transform. It made use of two dissimilarity functions to compare a new pattern with the reference patterns.

Boles' approach has the advantage of processing 1-D iris signals rather than the 2-D images used in [4] and [7]. However, [4] and [7] proposed and implemented whole systems for personal identification or verification, including the configuration of the image acquisition device, whereas [8] focused only on the iris representation and matching algorithm without an image acquisition module.

In this paper, we propose an iris recognition system that includes a compact representation scheme for iris patterns based on the 2-D wavelet transform, a method of initializing weight vectors, and a method of determining winners for recognition in a competitive learning method such as LVQ.
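As a side note on the matching step described for [4], comparing two binary iris codes reduces to counting mismatched bits. The following Python fragment is only a minimal sketch of that general idea, not the implementation of [4]; the code length, the helper name hamming_distance, and the separation point used in the example are illustrative assumptions.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of mismatched bits between two equal-length binary iris codes
    (bit-wise XOR, then average). Illustrative only; the code length is arbitrary."""
    code_a = np.asarray(code_a, dtype=np.uint8)
    code_b = np.asarray(code_b, dtype=np.uint8)
    return float(np.mean(np.bitwise_xor(code_a, code_b)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, size=2048, dtype=np.uint8)  # 256 bytes = 2,048 bits
    b = a.copy()
    b[:100] ^= 1                                       # flip 100 bits of the copy
    # Two codes are declared the same iris when the distance falls below a
    # chosen separation point (0.32 here is an illustrative value only).
    print(hamming_distance(a, b) < 0.32)               # True for similar codes
```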
III. ANALYSIS AND RECOGNITION OF IRIS IMAGE

The overall structure of the proposed system is illustrated in Fig. 2, and its processing flow is as follows. An image of the region surrounding the human eye is obtained at a distance from a CCD camera without any physical contact with the device. In the preprocessing stage, two steps are taken: first, the iris, the portion of the image to be processed, is localized; second, the Cartesian coordinate system of the image is converted into a polar coordinate system so as to facilitate the feature extraction process. In the feature extraction stage, the 2-D wavelet transform is used to extract a feature vector from the iris image. In the final stage, identification and verification, two revised competitive learning methods for LVQ are exploited to classify the feature vectors and recognize the identity of a person. To improve the efficiency of the system, additional methods are applied to the feature extraction stage and the identification stage, as outlined in the sketch below.
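Read as code, the flow above amounts to a short pipeline. The sketch below is only an outline under our own naming: localize_iris, unwrap_iris, iris_code, and the classify method are hypothetical helpers elaborated in the following subsections, not functions from the paper.

```python
def identify(eye_image, lvq):
    """High-level flow of the proposed system (sketch only); every stage
    function here is a hypothetical helper sketched later in this section."""
    cx, cy, r_inner, r_outer = localize_iris(eye_image)       # preprocessing: iris localization
    polar = unwrap_iris(eye_image, cx, cy, r_inner, r_outer)  # polar coordinate transform (450x60)
    code = iris_code(polar)                                   # 2-D Haar wavelet -> 87-bit vector
    return lvq.classify(code)                                 # LVQ identification/verification
```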
Fig. 2. Structure of the proposed iris recognition system: a B/W CCD camera with a 55 mm macro lens and two 50 W lamps captures a 320×240 eye image; preprocessing performs iris localization and the polar coordinate transform; feature extraction applies the 2-D wavelet transform and binary representation; identification/verification uses LVQ with uniform distribution of initial vectors and multidimensional winner selection.

1. Image Acquisition

An image of the region surrounding the human eye is obtained at a distance from a CCD camera without any physical contact with the device. Figure 3 shows the device configuration for acquiring human eye images. To acquire clearer images through the CCD camera and to minimize the effect of reflections caused by the surrounding illumination, we arrange two halogen lamps as the surrounding lights, as the figure illustrates. The size of the image acquired under these circumstances is 320×240.

Fig. 3. Configuration of the proposed image acquisition device (monitor, frame grabber, CCD camera with lens, eye, and two 50 W halogen lamps; labeled distances of 100 mm and 320 mm).

2. Preprocessing Stage

In this stage, we determine the iris part of the image by localizing the portion that lies inside the limbus (the outer boundary) and outside the pupil (the inner boundary), and finally convert the iris part into a suitable representation.

To localize the iris, we first find the center of the pupil and then determine the inner and outer boundaries. Because there is an obvious difference in intensity around each boundary, an edge detection method is easily applied to acquire the edge information. For every pair of edge points that may belong to the inner boundary according to some prior knowledge of the images, we apply the bisection method to determine the center of the inner boundary, which is also used as the reference point for the following processes. Applying the bisection method to every pair of points on the same edge would, in the ideal case, yield a single point crossed by every perpendicular bisector of the line connecting two points; in practice a single point cannot be obtained, so we select the most frequently crossed point as the center.

After determining the center point, we find the inner and outer boundaries by extending the radius of a virtual circle from the center of the pupil and counting the number of edge points on the corresponding virtual circle. The two virtual circles with the maximum number of edge points within each range determined by prior knowledge are selected as the two boundaries we want to find. Figure 4 shows the center of the pupil and the iris part surrounded by the two boundaries.

Fig. 4. Example of results in the preprocessing stage: the localized iris portion and its mapping onto the (θ, r) plane, with θ sampled 450 times and r normalized into [0, 60].

The localized iris part of the image is transformed into the polar coordinate system so as to facilitate the next process, feature extraction. The portion of the pupil is excluded from the conversion because it carries no biological characteristics. The distance between the inner boundary and the outer boundary is normalized into [0, 60] according to the radius r. By increasing the angle θ in steps of 0.8° for an arbitrary radius r, we obtain 450 values. We can therefore get a 450×60 iris image on the (θ, r) plane, as sketched below. Figure 4 shows the process of converting the Cartesian coordinate system into the polar coordinate system for the iris part.
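A rough sketch of the virtual-circle boundary search and the polar conversion follows, assuming an edge map and a pupil-center estimate are already available. The names locate_boundary and unwrap_iris are ours, and the nearest-neighbor sampling is a simplification; the radius ranges stand in for the paper's prior knowledge about plausible boundary sizes.

```python
import numpy as np

def locate_boundary(edge_map, cx, cy, r_min, r_max, n_samples=360):
    """Return the radius whose virtual circle, centered at (cx, cy), passes
    through the most edge pixels. r_min/r_max encode prior knowledge about
    plausible pupil or limbus radii."""
    h, w = edge_map.shape
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    best_r, best_count = r_min, -1
    for r in range(r_min, r_max + 1):
        xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, w - 1)
        ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, h - 1)
        count = int(edge_map[ys, xs].sum())
        if count > best_count:
            best_r, best_count = r, count
    return best_r

def unwrap_iris(gray, cx, cy, r_inner, r_outer, n_theta=450, n_r=60):
    """Map the annular iris region onto a 450x60 (theta, r) image, i.e. theta
    in steps of 0.8 degrees and the radius normalized into [0, 60]."""
    polar = np.zeros((n_r, n_theta), dtype=gray.dtype)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for j, theta in enumerate(thetas):
        for i in range(n_r):
            r = r_inner + (r_outer - r_inner) * i / (n_r - 1)
            y = min(max(int(round(cy + r * np.sin(theta))), 0), gray.shape[0] - 1)
            x = min(max(int(round(cx + r * np.cos(theta))), 0), gray.shape[1] - 1)
            polar[i, j] = gray[y, x]          # nearest-neighbor sampling
    return polar
```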
3. Feature Extraction Stage

The Gabor transform and the wavelet transform are typically used for analyzing human iris patterns and extracting feature points from them [4], [9]-[11]. In this paper, a wavelet transform is used to extract features from iris images. Among the mother wavelets, we use the Haar wavelet illustrated in Fig. 5 as the basis function.

Fig. 5. Haar mother wavelet.

Figure 6 shows the conceptual process of obtaining feature vectors of the optimized dimension. Here, H and L denote the high-pass filter and the low-pass filter, respectively, and HH indicates that the high-pass filter is applied to the signals of both axes. For the 450×60 iris image obtained from the preprocessing stage, we apply the wavelet transform four times in order to get 28×3 sub-images. Finally, we organize a feature vector by combining the 84 coefficients of the high-pass sub-image of the fourth transform (HH4 in Fig. 6) with the average value of each of the three remaining high-pass areas (HH1, HH2, and HH3 in Fig. 6). The dimension of the resulting feature vector is 87.

Fig. 6. Conceptual diagram for organizing a feature vector.

Each of the 87 dimensions has a real value between –1.0 and 1.0. To reduce the space and computational time needed to manipulate the feature vector, we quantize each real value into a binary value by simply converting positive values into 1 and negative values into 0. Therefore, we can represent an iris image with only 87 bits, as in the sketch below.
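The following sketch reproduces the 87-bit construction under one concrete assumption: a plain, unnormalized Haar step (pairwise averages and differences) with odd trailing rows or columns dropped, which is what yields the 450×60 → 28×3 subband sizes quoted above. The function names are ours, and details such as filter normalization may differ from the authors' implementation.

```python
import numpy as np

def haar_step(img):
    """One level of an unnormalized 2-D Haar transform: pairwise averages
    (low-pass) and differences (high-pass) along columns, then rows. Odd
    trailing rows/columns are dropped, matching 450x60 -> ... -> 28x3."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def iris_code(polar_img):
    """87-bit feature: the 28x3 = 84 HH coefficients of the fourth level plus
    the averages of HH1-HH3, binarized by sign (positive -> 1, else 0)."""
    ll, hh_bands = np.asarray(polar_img, dtype=float), []
    for _ in range(4):                       # four successive transforms
        ll, _, _, hh = haar_step(ll)
        hh_bands.append(hh)
    feats = np.concatenate([hh_bands[3].ravel(),
                            [hh_bands[0].mean(), hh_bands[1].mean(), hh_bands[2].mean()]])
    return (feats > 0).astype(np.uint8)      # 84 + 3 = 87 bits
```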
4. Identification and Verification Stage

In general, a competitive learning neural network such as LVQ has a faster learning mechanism than the error backpropagation algorithm, but its performance is easily affected by the initial weight vectors [12], [13]. To address this problem, at least for iris patterns, a new method for initializing the weight vectors in an effective manner is proposed. This method generates initial vectors that are located around the boundary of each class. In the learning process, the common LVQ learning procedure is carried out after the weight vectors have been initialized by the proposed method. In the recognition process, we set an acceptance level and use it to determine whether the final result is accepted or rejected [15], [16].

The proposed initialization algorithm, called the uniform distribution of initial weight vectors, proceeds as follows (see Fig. 7).

Fig. 7. Concept of the uniform distribution of initial weight vectors.

Step 1. Set the initial weight vector of each class to the vector of the first learning pattern of that class, and set all other weight vectors to zero:

  W_1^k = X_1^k for k = 1, 2, ..., M,   (1)

where X_1^k is the vector of the first learning pattern of the k-th class, W_1^k is the first weight vector of the k-th class, and M is the number of classes.

Step 2. Select another pattern of each class as a new learning pattern.

Step 3. Calculate the distance d_j between the learning pattern and each weight vector by

  d_j^2 = \sum_{i=0}^{N-1} (X_{ip}^k - W_{ij}^k)^2,   (2)

where X_{ip}^k is the i-th component of the p-th learning pattern of the k-th class, W_{ij}^k is the i-th component of the j-th weight vector of the k-th class, and N is the dimension of a learning pattern.

Step 4. Determine whether the class of the weight vector with the minimum distance among all d_j is equal to the class of the learning pattern. If it is not, add the vector of the learning pattern as a new weight vector.

Step 5. Go to Step 2 until all of the learning patterns have been used in the learning process.

The winner selection method based on Euclidean distance that is generally used in competitive learning neural networks has no problem determining the minimum distance when the feature vector is treated as a whole. However, as the dimension of a feature vector increases, so does the possibility of selecting a wrong winner, because the information carried by each individual dimension is lost. To solve this problem, a new winner selection algorithm, called the multidimensional winner selection method, is proposed. The proposed algorithm determines the winner of each dimension, counts how often each class becomes the winner, and then selects the class with the largest count as the final winner. Figure 8 shows the conceptual diagram of the proposed winner selection method; each plate in a neuron indicates one dimension of the feature vector. A sketch of both mechanisms is given below.

Fig. 8. Conceptual diagram of the multidimensional winner selection method.
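A compact sketch of the two mechanisms follows, assuming feature vectors as NumPy arrays and integer class labels. Reading the multidimensional winner as a per-dimension nearest-prototype vote is our interpretation of the description above, the function names are ours, and the subsequent LVQ fine-tuning of the weights is omitted.

```python
import numpy as np

def init_weights(patterns, labels):
    """Uniform distribution of initial weight vectors (Steps 1-5): start with
    one prototype per class, then add a training pattern as a new prototype
    whenever its nearest existing prototype belongs to a different class."""
    weights, w_labels = [], []
    for c in sorted(set(labels)):                       # Step 1: first pattern of each class
        first = next(p for p, l in zip(patterns, labels) if l == c)
        weights.append(np.asarray(first, dtype=float))
        w_labels.append(c)
    for p, l in zip(patterns, labels):                  # Steps 2-5: remaining patterns
        p = np.asarray(p, dtype=float)
        d = [np.sum((p - w) ** 2) for w in weights]     # squared distances, eq. (2)
        if w_labels[int(np.argmin(d))] != l:            # Step 4: wrong class -> new prototype
            weights.append(p.copy())
            w_labels.append(l)
    return np.array(weights), w_labels

def multidim_winner(x, weights, w_labels):
    """Multidimensional winner selection: for each dimension, find the prototype
    closest in that single dimension, vote for its class, and return the class
    with the most votes."""
    votes = {}
    for i in range(len(x)):
        j = int(np.argmin(np.abs(weights[:, i] - x[i])))
        votes[w_labels[j]] = votes.get(w_labels[j], 0) + 1
    return max(votes, key=votes.get)
```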
IV. EXPERIMENTAL RESULTS

To evaluate the performance of the proposed human iris recognition system, we collected 6,000 images, 30 images per person, over 3 months with the help of 200 volunteer Korean university students in their early twenties. The environment of image acquisition is illustrated in Fig. 2. The parameters used in LVQ, such as the learning rate and the iteration number, are shown in Table 1; the learning-rate update factor decreases from 1 toward 0 as learning proceeds, and t in Table 1 denotes the iteration index.

Table 1. Parameters for LVQ.
  Initial learning rate: 0.1
  Update of learning rate: α(t) = α(0)(1 − t / total number of iterations)
  Total iterations: 300

Under these experimental conditions, we conducted two kinds of experiments: one to examine the performance of each method proposed in this paper, and the other to measure the two error rates, the false accept rate (FAR) and the false reject rate (FRR).

1. Results on the Preprocessing Stage

In the experiments for the preprocessing stage, we checked the accuracy of the boundaries subjectively and obtained a success rate of 88.2% (5,292 images) on the 6,000 images. Table 2 shows the causes of preprocessing failure. As the table shows, it is notable that failures on data with glasses account for 18.8% of all failures.

Table 2. Analysis of the failure of preprocessing according to the causes.
  Data without glasses and with lenses:
    (1) Occlusion by eyelids: 178 (31%)
    (2) Inappropriate eye positioning: 127 (22%)
    (3) Shadow of eyelids: 121 (21%)
    (4) Noise within the pupil: 34 (6%)
    (5) Etc.: 115 (20%)
    Total: 575 (100%)
  Data with glasses:
    (6) Noise or dirt on the glasses: 49 (37%)
    (7) Reflection of the glasses: 28 (21%)
    (8) Shadow of the rim of the glasses: 20 (15%)
    (9) Etc.: 36 (27%)
    Total: 133 (100%)

Figure 9 shows examples of failures in the preprocessing stage according to these causes; each number in the figure corresponds to a cause in Table 2.

Fig. 9. Examples of the failure in the preprocessing stage.

We can also see that the success rate for data with glasses in the preprocessing stage is about 10% lower than that for data without glasses or with lenses.

2. Performance Comparison of the Individual Methods

Half of the 5,292 images successfully preprocessed are used as the learning data for LVQ, and the remaining half as the test data. The following subsections describe the results for each stage or method proposed in this paper.

A. Feature Extraction Method

Table 3 shows the recognition rate of the two feature extraction methods, the Gabor transform and the Haar wavelet transform, with the same classifier. The recognition rate of the wavelet transform is better than that of the Gabor transform by 0.9% on the learning data and 2.1% on the test data. Therefore, we used the Haar wavelet transform as the basis of the feature extraction method in the following experiments.

Table 3. Comparison of two feature extraction methods.
                  Gabor transform   Wavelet transform
  Learning data   95.3%             96.2%
  Test data       92.3%             94.4%

B. Weight Vector Initialization Method

Table 4 shows the accuracy comparison of the two initialization methods under the same experimental conditions. The proposed method, the uniform distribution of initial weight vectors, showed better performance on both the learning data and the test data than initialization with random values, which is regarded as the baseline initialization method.

Table 4. Comparison of weight vector initialization methods.
                  Initialization with random values   Proposed method
  Learning data   96.2%                                97.1%
  Test data       94.4%                                95.9%

C. Winner Selection Method

Table 5 shows the experimental results for the two winner selection methods when we use the Haar wavelet transform for feature extraction and LVQ with the proposed initialization method. The proposed multidimensional method gives a good result for human iris features.
Table 5. Comparison of winner selection methods.
                  Euclidean distance method   Multidimensional method
  Learning data   97.1%                       97.8%
  Test data       95.9%                       97.2%

D. Size of the Feature Vector

From the three experimental results above, we selected the most accurate method for each stage to configure a good system for personal identification based on iris patterns: the Haar wavelet transform for feature extraction, the uniform distribution method for initializing weight vectors, and the multidimensional method for winner selection.

With the iris recognition system composed of these methods, we try to minimize or optimize the dimension of the feature vector without affecting the recognition accuracy. The proposed feature extraction process can efficiently represent a feature vector with 87 dimensions, requiring only one bit per dimension. With four successive transforms of the image, we can separate the input space according to the degree of match, as shown in Fig. 10; the black points indicate successful matches and the white points failed matches. In Figs. 10 and 11, the x-axis represents each person and the y-axis the degree of match. If we apply the transform five times, however, we cannot maintain a recognition threshold, even though we would obtain a much smaller feature vector, as shown in Fig. 11. This is why we choose 87 dimensions, not 18 dimensions, for each feature vector. Table 6 shows the performance according to the size of the feature vector.

Fig. 10. Degree of match with 87 dimensions for a feature vector.

Fig. 11. Degree of match with 18 dimensions for a feature vector.

Table 6. Performance evaluation according to the size of feature vectors.
                  256 dimensions (1 byte/dimension)   87 dimensions (1 bit/dimension)
  Learning data   97.8%                               98.0%
  Test data       97.2%                               97.2%

For comparison with the proposed scheme for organizing a feature vector, we used the 256-dimensional vector (1 byte per dimension) introduced in [4]. As the feature vector size is one twentieth of the 256-dimensional (1 byte per dimension) representation, the performance of the recognition and verification process is expected to improve. All of the experimental results on the proposed methods are summarized in Table 7.

Table 7. Performance evaluation on the proposed methods.
  Feature extraction: Gabor transform | Wavelet transform
  Initialization: with random values | Uniform distribution of initial weights
  Recognition: Euclidean distance-based winner selection | Multidimensional winner selection
  Size of feature vector: 256 dimensions (1 byte/dimension), 2,048 bits | 87 dimensions (1 bit/dimension), 87 bits
  Learning data: 97.8%, 98.0%, 99.2%, 99.6%, 99.8%
  Test data: 92.3%, 94.4%, 97.6%, 98.7%, 99.3%
3. Overall Performance of the Proposed System

The performance of biometric systems is usually described by two error rates: FAR and FRR. In order to determine a threshold separating FRR and FAR, we compare the feature vector of an unknown pattern with the weight vector obtained from the corresponding output node of LVQ, count the number of matched bits, and use the ratio of matched bits over the 87 bits as the degree of match. For these experiments, we divide the data into two groups of 100 persons each: one for LVQ learning and the false reject test (Group 1), and the other for the false accept test (Group 2). We use 5 images per person from Group 1 for LVQ learning.

A. Experiment for FRR

For this experiment, we use 20 images per person from the data of Group 1 that were not used in the LVQ learning process. The degree of match between the unknown patterns and the registered (trained) patterns is illustrated in Fig. 12, where the x-axis and y-axis indicate the number of data and the degree of match, respectively.

Fig. 12. Degree of match for the same persons (authentic).

B. Experiment for FAR

For this experiment, we use 20 images per person from Group 2, which can be regarded as impostors. The degree of match between the unknown impostor patterns and the registered patterns is illustrated in Fig. 13, where the x-axis and y-axis again indicate the number of data and the degree of match, respectively.

Fig. 13. Degree of match for different persons (impostor).

Figure 14 shows how the two error rates change with the degree of match, which we use to select a proper threshold. By selecting the intersection point of the two error curves as the threshold, we can minimize both error rates simultaneously. With a threshold of 60.5 or 61.5, we obtain a performance of about 97.1% to 98.4%. Table 8 lists FAR and FRR according to the degree of match, and a sketch of this threshold selection is given below.

Fig. 14. Change of the two error rates (FRR and FAR) according to the degree of match.
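As a small illustration of how the threshold can be read off, the sketch below computes the degree of match as the percentage of agreeing bits and picks the degree at which FRR and FAR are closest, using a few rows of Table 8; the function names are ours and the snippet is not the authors' evaluation code.

```python
import numpy as np

def degree_of_match(code, prototype_bits):
    """Percentage of the 87 bits that agree between an unknown iris code and
    the binarized weight vector of the corresponding LVQ output node."""
    return 100.0 * float(np.mean(np.asarray(code) == np.asarray(prototype_bits)))

def crossover_threshold(table):
    """Pick the degree of match where FRR and FAR are closest, i.e. near the
    intersection of the two error curves in Fig. 14. `table` holds
    (degree of match, FRR, FAR) rows as in Table 8."""
    return min(table, key=lambda row: abs(row[1] - row[2]))[0]

# A few rows of Table 8 around the crossover, for illustration:
table8 = [(59.5, 0.80, 5.15), (60.5, 1.65, 2.90), (61.5, 1.65, 2.90), (62.5, 3.70, 2.15)]
print(crossover_threshold(table8))   # -> 60.5
```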
Because the experiments on FAR and FRR were conducted only on the successfully preprocessed data, the recognition rate over all of the data, including the data that failed in the preprocessing stage, is about 10% lower. Therefore, a great deal of effort should be devoted to improving the preprocessing techniques to obtain higher reliability. The processing time from data acquisition to identification/verification is about 2 seconds.

Table 8. FAR and FRR according to the degree of match.
  Degree of match (%)   FRR (%)   FAR (%)
  53.5                   0.00     37.15
  54.5                   0.15     28.45
  55.5                   0.15     23.40
  56.5                   0.25     17.40
  57.5                   0.40     12.90
  58.5                   0.55      8.40
  59.5                   0.80      5.15
  60.5                   1.65      2.90
  61.5                   1.65      2.90
  62.5                   3.70      2.15
  63.5                   5.30      1.55
  64.5                   7.15      1.30
  65.5                   8.85      0.55
  66.5                  11.65      0.15
  67.5                  14.10      0.15
  68.5                  17.75      0.00

V. CONCLUSIONS

In this paper, an efficient method for personal identification and verification by means of human iris patterns is presented. To process iris patterns more efficiently and effectively than existing methods, the following studies were conducted. First, two methods widely used for extracting features, the Gabor transform and the Haar wavelet transform, were evaluated; from this evaluation, we found that the Haar wavelet transform performs better than the Gabor transform. Second, the Haar wavelet transform was used to optimize the dimension of the feature vectors in order to reduce processing time and space; with only 87 bits, we could represent an iris pattern without any negative influence on system performance. Last, we improved the accuracy of the classifier, a competitive learning neural network, by proposing an initialization method for the weight vectors and a new winner selection method designed for iris recognition. With these methods, we increased the recognition performance to 98.4%. From the experimental results, we are convinced that the proposed system is optimized enough to be applied to various real applications.

REFERENCES

[1] F. H. Adler, Physiology of the Eye: Clinical Application, The C. V. Mosby Company, 1965.
[2] P. W. Hallinan, "Recognizing Human Eyes," SPIE Proc. Geometric Methods in Computer Vision, vol. 1570, 1991, pp. 214-226.
[3] L. Flom and A. Safir, "Iris Recognition System," U.S. Patent 4 641 349, 1987.
[4] J. G. Daugman, "High Confidence Visual Recognition of Persons by a Test of Statistical Independence," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, 1993, pp. 1148-1161.
[5] G. O. Williams, "Iris Recognition Technology," IEEE Aerospace and Electronics Systems Magazine, vol. 12, no. 4, 1997, pp. 23-29.
[6] R. P. Wildes, "Iris Recognition: An Emerging Biometric Technology," Proc. of the IEEE, vol. 85, no. 9, 1997, pp. 1348-1363.
[7] R. P. Wildes, J. C. Asmuth et al., "A System for Automated Iris Recognition," Proc. of the Second IEEE Workshop on Applications of Computer Vision, 1994, pp. 121-128.
[8] W. W. Boles and B. Boashash, "A Human Identification Technique Using Images of the Iris and Wavelet Transform," IEEE Trans. on Signal Processing, vol. 46, no. 4, 1998, pp. 1185-1188.
[9] R. K. Young, Wavelet Theory and Its Application, Kluwer Academic Publishers, 1992.
[10] O. Rioul and M. Vetterli, "Wavelets and Signal Processing," IEEE Signal Processing Magazine, Oct. 1991, pp. 14-38.
[11] G. Strang and T. Nguyen, Wavelets and Filter Banks, Wellesley-Cambridge Press, 1996.
[12] L. Fausett, Fundamentals of Neural Networks, Prentice Hall, 1994.
[13] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, 1985.
[14] Ilgu Yun et al., "Extraction of Passive Device Model Parameters Using Genetic Algorithms," ETRI J., vol. 22, no. 1, 2000, pp. 38-46.
[15] Sang-Mi Lee, Hee-Jung Bae, and Sung-Hwan Jung, "Efficient Content-based Image Retrieval Methods Using Color and Texture," ETRI J., vol. 20, no. 3, 1998, pp. 272-283.
[16] Young-Sum Kim et al., "Development of Content-based Trademark Retrieval System on the World Wide Web," ETRI J., vol. 21, no. 1, 1999, pp. 40-53.
Shinyoung Lim received the B.E. degree in industrial chemistry from Kon-Kuk University, Seoul, Korea in 1983, M.S. degrees in chemical engineering and computer science from Kon-Kuk University in 1985 and 1992, respectively, and the Ph.D. degree in computer science from Korea University, Seoul in 2001. He joined the Systems Engineering Research Institute (SERI), Korea Institute of Science and Technology (KIST) in 1986. He was then a principal member of research staff in the fields of software engineering, data communication and computer networks, and information security until 1996. He is currently working in Electronic Commerce Technology of the Electronics and Telecommunications Research Institute (ETRI) as the team leader of the Electronic Payment Team. His current interests include electronic commerce security, digital contents copyright protection, biometrics, and mobile commerce security. He is a member of the Korean Electronic Payment Forum, KIPS, and KICS.

Kwanyong Lee received the M.S. and Ph.D. degrees in computer science from Yonsei University, Seoul, Korea in 1991 and 1994, respectively. From 1989 to 1999, he was a researcher in the Research Institute of Natural Science, Yonsei University. From 1997 to 1999, he was a visiting researcher in the department of information and communication engineering at the University of Tokyo in Japan. In 1999, he was a senior researcher in the EC/CALS division of the Electronics and Telecommunications Research Institute in Taejon, Korea. Since August 2000, he has been a researcher in the department of computer science of Yonsei University, where he is responsible for developing and evaluating various image-processing-based biometric systems. His current research interests include video-based biometrics, pattern recognition, computer vision, and image processing.

Okhwan Byeon received his B.E. degree in communication engineering from National Aviation University, Seoul, Korea in 1979, his M.S. degree in information engineering from Inha University in 1985, and his Ph.D. degree in information communication engineering from Kyeonghee University in 1995. He joined the Data Communication Section, Korea Institute of Science and Technology (KIST) in 1978. He was then a principal member of engineering staff in the field of data communication and computer networks until 1996. He is currently working in the Supercomputing Center of the Korea Institute of Science and Technology Information (KISTI) as the head of the High Performance Networking Lab. His current interests include distributed computing, Internet traffic engineering, and security. He is a member of the KIPS, KISS, KICS, and KIISC.

Taiyun Kim received the B.S. degree in industrial engineering science from Korea University in 1981, the M.S. degree in computer science from Wayne State University in 1983, and the Ph.D. degree in computer science from Auburn University in 1987. At present, he is a professor in the department of computer science and engineering at Korea University. His research interests are computer networks, EDI systems, security, biometrics, ISDN, satellite communication, and computer graphics.