Texture Detection for Segmentation of Iris Images
ASHEER KASAR BACHOO
University of Kwa-Zulu Natal, South Africa
and
JULES-RAYMOND TAPAMO
University of Kwa-Zulu Natal, South Africa
The idea of using the distinct spatial distribution of patterns in the human iris for person authentication is now a widely developing
technology. Current systems rely on a set of basic assumptions in order to improve the accuracy and running time of the recognition
process. The advent of a robust system implies a viable solution to a number of general problems. This paper focuses on a common
yet difficult problem - the segmentation of eyelashes from iris texture. Tests give promising results when using a grey level co-occurrence matrix (GLCM) approach.
Categories and Subject Descriptors: I.4.6. [Image Processing and Computer Vision]: Segmentation - region growing and
partition; I.5.3. [Pattern Recognition]: Clustering - algorithms, similarity measures
General Terms: texture, classification, experimentation
Additional Key Words and Phrases: iris, localization, GLCM, K-Means
1. INTRODUCTION
The iris begins its formation in the 3rd month of gestation [Adler 1965]. By the 8th month, its distinctive pattern is complete. However, pigmentation and even pupil size continue to change up to adolescence [Kronfeld 1968]. The iris has a multilayered texture. This combination of layers and colour provides a highly distinctive pattern. An assortment of texture variations is possible. They include:
—Contractile lines - related to the state of the pupil.
—Crypts - irregular atrophy of the border layer.
—Naevi - small elevations of the border layer.
—Freckles - collections of chromatophores.
—Colour variation - an increase in pigmentation yields darker coloured irides.
Of the utmost importance in a biometric identification system is the stability and uniqueness of the object
being analysed. Ophthalmologists [Flom and Safir 1987] and anatomists [Davson 1990], during the course of
clinical observations, have noted that the irises of individuals are highly distinctive. This extends to the left and
right eye of the same person. Repeated observations over a period of time have highlighted little variation in the
patterns.
Developmental biology has also provided evidence of the particular characteristics of the iris [Kronfeld 1968].
Although the general structure of the iris is genetically determined, the uniqueness of its minutiae is highly
dependent on circumstances. As a result, replication is almost impossible. It has also been noted that, following
adolescence, the iris remains stable and varies little for the remainder of the person’s life. Development is
continuous during the early and adolescent years (pigmentation continues as well as an increase in pupil size)
[Davson 1990; Kronfeld 1968]. Figure 1 shows an iris and the variety of its texture patterns.
In 1936, Frank Burch, an ophthalmologist, proposed the idea of using iris patterns for personal identification. However, this was only documented by James Doggart in 1949. The idea of iris identification for automated recognition was finally patented by Aran Safir and Leonard Flom in 1987. Although they had patented the idea, the two ophthalmologists were unsure how to implement the system in practice. They commissioned John Daugman to develop the fundamental algorithms in 1989. These algorithms were patented by Daugman in 1994.
Asheer Kasar Bachoo, School of Computer Science, University of Kwa-Zulu Natal, Durban, 4041, e-mail - 200266744@ukzn.ac.za
Jules-Raymond Tapamo, School of Computer Science, University of Kwa-Zulu Natal, Durban, 4041, e-mail - tapamoj@ukzn.ac.za
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that
the copies are not made or distributed for profit or commercial advantage, that the copies bear this notice and the full citation on the
first page. Copyrights for components of this work owned by others than SAICSIT or the ACM must be honoured. Abstracting with
credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission
and/or a fee.
© 2005 SAICSIT
Proceedings of SAICSIT 2005, Pages 111–118
Figure 1. The iris, with labelled regions: radial furrow, crypt, ciliary zone, collarette, pupillary zone, pupillary frill, pupil and sclera.
These algorithms now form the basis for all current commercial iris recognition systems. The Daugman algorithms are owned by Iridian Technologies and are licensed to several other companies.
2. RELATED WORK
The fundamental iris algorithm and system was developed by John Daugman. This was presented in his landmark
paper [Daugman 1993] and subsequently updated [Daugman 2003]. The iris is segmented using integro-differential
operators. To capture the rich details of the texture, complex-valued 2D Gabor wavelets are used to extract discriminating information. Recognition is done by means of a test of statistical independence between two iris codes. A failure of the test implies a match. The matching system implements a normalized Hamming distance criterion.
Features can be extracted using an application of Laplacian of Gaussian filters at different resolutions [Wildes
1997]. This proves to be efficient and consistent. The system uses edge maps and a Hough transform to locate
the region of interest. A normalized correlation between the acquired iris representation and the stored one is
employed for pattern matching.
Feature extraction using zero-crossing representations of a 1D wavelet transform [Boles and Boashash 1998],
the Haar wavelet [Ali and Hassanien 2003] and the Haar wavelet with a neural network for identification [K.
Lim, K. Lee, O. Byeon and T. Kim 2001] are well documented. Multi-channel Gabor filtering [K. Lim, Y. Wang
and T. Tan 2002], circular symmetric filters [L. Ma, Y. Wang and T. Tan 2002] and key local variations in 1D
signals [L. Ma, T. Tan, Y. Wang and D. Zhang 2004] have also been shown to be very promising for generating digital
iris codes.
There have also been a number of other interesting ideas for iris recognition. Multiresolution Independent
Component Analysis (MICA) in [K. Bae, S Noh and J. Kim 2003] and application of the Hough transform for
iris identification [W. Zorski, B. Foxon, J. Blackledge and M. Turner 2002] have produced some good results. A
promising avenue has been an analysis of the Fourier spectra of the optical transmission binary models of human
irises [A. Muron, P. Kois and J. Pospisil 2001].
Very often, in real world problems, images do not demonstrate uniform intensities. Observation shows that
these surfaces generally contain intensity variations that follow a particular pattern or periodicity. This is
visual texture [C.H. Chen, L.F. Pau and P.S.P. Wang 1998]. A multitude of definitions are available for this
characteristic of images - each one is particular to its domain and application. As a result, no formal definition
exists.
The human visual system can discriminate easily between different textures - it analyzes spatial, orientation,
phase, frequency and colour information. Texture analysis in machine vision is time and space intensive. It
is also sensitive to scale, resolution and orientation. The techniques available for machine learning consist of
co-occurrence matrices, Markov random fields, wavelets and fractals, each one being adaptable to a particular
situation [B. Jahne, H. Haubecker and P. Geibler 1999].
The fundamental iris recognition papers [Daugman 1993; Wildes 1997; Boles and Boashash 1998], while instrumental in providing rich knowledge for all subsequent endeavors, have not addressed the problem of interference caused by eyelashes. Iris texture is the region of interest (ROI) and it is required that we remove useless artifacts - such as eyelashes and eyelids - from the iris region. Eyelashes can be treated as a texture because they display a particular pattern and periodicity distinct from their surrounding areas. As a result, a texture segmentation approach is undertaken.
An algorithm - void of texture classification - to detect eyelashes for accurate iris segmentation has been
proposed in the literature [Kong and Zhang 2003]. It divides the problem into two possibilities - separable
eyelashes and multiple eyelashes. Separable eyelashes are treated as edges. The image is convolved with a Gabor
filter and thresholded to segment the eyelashes. The Gabor function is defined as
$$G(x, u, \sigma) = \exp\left\{-\frac{x^2}{2\sigma^2}\right\}\cos(2\pi u x) \qquad (1)$$
where u is the frequency of the sinusoidal wave and σ is the standard deviation of the Gaussian. If the resultant
value of a point falls below a threshold, it belongs to an eyelash. This can be stated as:
$$f(x) \ast G(x, u, \sigma) < K_1 \qquad (2)$$
where K1 is a pre-determined threshold and ∗ is the convolution operator. The success of this process depends
on a high intensity difference between iris pixels and eyelash pixels. Multiple eyelashes are modelled using an
intensity variation model - eyelashes overlapping in a small area have a low intensity variation. If the variance
of intensity in the area is below a threshold, the center of the window is labelled an eyelash pixel. This can be
stated as:
$$\frac{\sum_{i=-N}^{N}\sum_{j=-N}^{N}\bigl(f(x+i,\,y+j) - M\bigr)^2}{(2N+1)^2} < K_2 \qquad (3)$$
where M is the mean intensity in the window, (2N+1)² is the window size and K2 is a threshold. A connected
components algorithm is also implemented to avoid misclassification of pixels. K1 and K2 are empirically
determined parameters.
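For illustration, the two tests in equations (2) and (3) can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: the kernel half-width, frequency u, σ, N and the thresholds K1 and K2 are placeholder values rather than the empirically determined parameters of Kong and Zhang, and the row-wise direction of the 1D convolution is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gabor_kernel_1d(u, sigma, half_width):
    """1D Gabor kernel from equation (1): exp(-x^2 / (2*sigma^2)) * cos(2*pi*u*x)."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * u * x)

def separable_eyelash_mask(image, u=0.1, sigma=3.0, half_width=9, k1=-50.0):
    """Equation (2): a pixel is a separable-eyelash pixel if its row-wise
    convolution with the Gabor kernel falls below the threshold K1."""
    kernel = gabor_kernel_1d(u, sigma, half_width)
    response = convolve1d(image.astype(float), kernel, axis=1, mode='nearest')
    return response < k1

def multiple_eyelash_mask(image, n=2, k2=25.0):
    """Equation (3): a pixel is a multiple-eyelash pixel if the intensity
    variance in its (2N+1)x(2N+1) neighbourhood falls below the threshold K2."""
    img = image.astype(float)
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(n, h - n):
        for x in range(n, w - n):
            window = img[y - n:y + n + 1, x - n:x + n + 1]
            # numpy's var() divides by the window size (2N+1)^2, as in equation (3).
            mask[y, x] = window.var() < k2
    return mask
```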
The frequency distribution of iris images has been analyzed to determine occlusions by eyelids and eyelashes
[L. Ma, T. Tan, Y. Wang and D. Zhang 2003]. While effective, the technique does not provide a solution to
removing the eyelashes.
3. IRIS LOCALIZATION
Segmentation of the iris from an image of the eye is sensitive to numerous factors - noise, uneven lighting and
eyelid and eyelash interference. Although the pupil and iris borders can be modelled using two non-concentric
circles, with the larger one forming a closed contour around the smaller one, detection of these two boundaries
is not a simple problem. In a typical case, the outer iris boundary has a very soft gradient that standard edge detectors do not pick up. This boundary is sometimes only partially visible or, possibly, covered by eyelashes. The
details of the algorithm can be found in [Bachoo and Tapamo 2004].
4. EYELASH SEGMENTATION USING GREY LEVEL CO-OCCURRENCE MATRIX (GLCM) FEATURES
The technique presented by Kong and Zhang to segment eyelashes assumes a clear intensity distinction between
iris and eyelash pixels. As a result, edge and variance operators are used for segmentation purposes. However,
edge detectors are not always optimal and will fail when high texture variations are present or when the intensity gradient between edge and iris pixels is gradual rather than a step. There may also be cases of misclassification
of pixels. The technique proposed in this paper uses texture analysis and pattern classification for segmentation.
Figure 2. An iris image
Figure 3. Segmentation results using different features computed from GLCM - (a)-(d) K-means clustering;
(e)-(h) Fuzzy C-Means clustering. The features used for classification are (for each image from left to right)
energy, entropy, correlation and mean.
Co-occurrence features are an effective texture descriptor [R.M. Haralick, K. Shanmugam and I. Dinstein 1973]. They summarize image properties using second-order statistics. A GLCM (Grey Level Co-occurrence Matrix) element pθ,d(i, j) is the joint probability of the grey level pair (i, j) occurring in a given direction θ at a separation of d pixels. G denotes the number of grey levels. For each ROI (Region Of Interest) in this work, the feature computed using symmetrical GLCMs is a new one defined as follows:
$$F = \sum_{i,j=1}^{G} p_{\theta,d}(i, j)\,\frac{i}{j} \qquad (4)$$
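A minimal sketch of this feature, assuming the window has already been quantized to G grey levels; the offset convention for θ, the symmetric accumulation and the 1-based grey-level indexing are common GLCM conventions assumed here, not implementation details given in the paper.

```python
import numpy as np

def symmetric_glcm(window, d, theta_deg, levels):
    """Normalized symmetric GLCM p_{theta,d}(i, j) for one window whose
    values are already quantized to 0..levels-1."""
    # Pixel offset (dy, dx) for the given direction and distance (assumed convention).
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dy, dx = offsets[theta_deg]
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = window.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                i, j = window[y, x], window[y2, x2]
                glcm[i, j] += 1.0
                glcm[j, i] += 1.0   # symmetric: count the pair in both orders
    total = glcm.sum()
    return glcm / total if total > 0 else glcm

def new_glcm_feature(glcm):
    """Equation (4): F = sum over i,j of p(i, j) * (i / j).
    Grey levels are treated as 1..G so the ratio i/j is always defined."""
    levels = glcm.shape[0]
    i = np.arange(1, levels + 1)[:, None]   # row grey level, 1..G
    j = np.arange(1, levels + 1)[None, :]   # column grey level, 1..G
    return float(np.sum(glcm * (i / j)))
```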
The texture features computed from the GLCM are not all useful. A number of test runs have shown that
the most reliable feature for segmenting regions in an iris image is the one defined above. A possible reason for
this is the small size of the ROI - it is a disc with an inner radius of approximately 40 pixels and an outer radius
of about 110 pixels. Features such as entropy, energy and correlation are difficult to capture effectively within
a small region with a large number of grey levels. Selection of features for texture discrimination is application
dependent. A feature set for describing grass will not classify cotton threads although it may produce useful
results [Q. Zhang, J. Wang, P. Gong and P. Shi 2003]. Texture discrimination for eyelash segmentation is, thus,
dependent on a careful analysis of the problem and preliminary implementation and test results.
Using a GLCM to classify textures is sensitive to many factors. Selection of a window size is important for
capturing the relevant statistics - a small window will omit important grey levels and structure; a large window
size will include different texture areas and, thus, add incorrect statistics to the GLCM. The number of grey levels in the image texture also affects the feature extraction process - a large number of grey levels implies an exhaustive computation time and also yields weak relationships between pixels of the same texture, owing to the high randomness present when there are too few pixels to examine. A reduced set of grey levels will improve
the computational burden but destroy certain textural characteristics [R.M. Haralick, K. Shanmugam and I.
Dinstein 1973]. These considerations must be taken into account in order to design an accurate and robust image processing system. While an automatic system is highly desirable, it is difficult to know all the variables in the problem. A good way to approach the problem is to perform a manual analysis of the images, together with a few test runs, to ascertain reliable parameters. These can then be built into the system so that most situations can be dealt
with. Figure 2 shows a test image of an iris followed by Figure 3 with some segmentation results using different
GLCM features.
To produce meaningful results, different parameter values for d and θ must be used. This improves the
statistical details of the texture and helps generalize a texture definition for a particular class, improving the
class uniqueness. However, unnecessary parameters can also influence and degrade the feature class.
5. UNSUPERVISED FEATURE CLASSIFICATION
The segmentation of eyelashes is a problem of unsupervised classification. In essence, we are required to partition
the set of feature vectors into classes that differ from each other as much as possible. Concurrently, the vectors
in the same class must differ as little as possible. The number of distinct texture classes in the region of interest
is unknown. Ideally, we require a two class classifier that separates iris texture and any other textures. The
K-means [MacQueen 1967] and fuzzy C-means [Dunn 1974; Bezdek 1981] algorithms are implemented. There
are a multitude of textures possible when an image of the eye is captured - skin, eyelash, sclera, iris and pupil.
The texture variations are random due to the following factors:
(1) Uneven illumination caused by the image acquisition process.
(2) The uniqueness of every iris pattern.
(3) The presence of multiple, single or no eyelashes.
(4) The pupil dilating or constricting, which changes the density of the iris pattern.
5.1 K-means Clustering
MacQueen's k-means algorithm is a popular clustering algorithm in which the number of clusters (k) is
known. It is an iterative process that assigns patterns to the closest cluster using a distance function (such as
the Euclidean distance measure). The basic algorithm is described below, followed by a short sketch.
(1) Define the number of clusters k.
(2) Initialize (randomly) the cluster prototypes pi (i = 1..k).
(3) For each pattern x, assign x to the nearest cluster pi (i = 1..k).
(4) Recompute pi (i = 1..k).
(5) Repeat steps 3 and 4 until the prototypes do not change.
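The steps above can be sketched as follows for an array of per-window feature vectors; the random initialization from data points and the convergence test are common choices assumed here, not parameters reported in the paper.

```python
import numpy as np

def k_means(features, k, max_iter=100, seed=0):
    """Basic K-means on an (n_samples, n_features) array; returns labels and prototypes."""
    rng = np.random.default_rng(seed)
    x = np.asarray(features, dtype=float)
    # Step 2: initialize the k prototypes from randomly chosen patterns.
    prototypes = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 3: assign every pattern to its nearest prototype (Euclidean distance).
        distances = np.linalg.norm(x[:, None, :] - prototypes[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 4: recompute each prototype as the mean of its assigned patterns.
        new_prototypes = np.array([
            x[labels == i].mean(axis=0) if np.any(labels == i) else prototypes[i]
            for i in range(k)
        ])
        # Step 5: stop once the prototypes no longer change.
        if np.allclose(new_prototypes, prototypes):
            break
        prototypes = new_prototypes
    return labels, prototypes

# e.g. labels, _ = k_means(window_features.reshape(-1, 1), k=3)
```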
5.2 Fuzzy C-means Clustering
Fuzzy clustering allows data to belong to more than one class. This is reflected by their degree of membership
in a particular cluster. It is based on the minimization of the objective function
$$J_m = \sum_{i=1}^{N}\sum_{j=1}^{C} u_{ij}^{m}\,\lVert x_i - v_j\rVert^2, \qquad 1 \le m < \infty \qquad (5)$$
where m is a real number greater than 1 (called the fuzzification factor), uij is the degree of membership of xi in cluster j, with 0 ≤ uij ≤ 1; xi is the i-th vector in a d-dimensional data set, vj is the d-dimensional center of cluster j and ‖·‖ is the Euclidean norm. C denotes the number of clusters and N the number of pattern vectors. Fuzzy partitioning is an iterative optimization process. The degrees of membership uij and the cluster centers vj are computed by the following equations:
$$u_{ij} = \frac{1}{\sum_{k=1}^{C}\left(\dfrac{\lVert x_i - v_j\rVert}{\lVert x_i - v_k\rVert}\right)^{\frac{2}{m-1}}} \qquad (6)$$

$$v_j = \frac{\sum_{i=1}^{N} u_{ij}^{m}\, x_i}{\sum_{i=1}^{N} u_{ij}^{m}} \qquad (7)$$
The stopping criterion is $\lvert u_{ij}^{(k)} - u_{ij}^{(k-1)}\rvert < \varepsilon$, where ε is a threshold between zero and one. The algorithm is as follows (a sketch of the iteration is given after the steps):
Figure 4. 3 test images
Figure 5. Test results using K-Means clustering
Figure 6. Test results using fuzzy C-means clustering
(1) Select the values of ε and c and initialize the counter k to 0 (k ← 0).
(2) Obtain (randomly) the initial membership matrix U(0).
(3) k ← k + 1
(4) Compute the centroids vj (j = 1, ..., c) using equation (7).
(5) Compute the membership matrix U(k) using equation (6).
(6) If ||U(k) − U(k−1)|| < ε then stop, else go to step 3.
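A compact sketch of the iteration defined by equations (6) and (7), assuming the same feature vectors as before; the fuzzification factor m = 2 and the threshold ε below are illustrative values only.

```python
import numpy as np

def fuzzy_c_means(features, c, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Fuzzy C-means: returns the membership matrix U (n x c) and the cluster centers."""
    rng = np.random.default_rng(seed)
    x = np.asarray(features, dtype=float)
    n = len(x)
    # Random initial membership matrix with rows summing to one.
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        u_prev = u
        # Equation (7): centers are membership-weighted means of the patterns.
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]
        # Equation (6): update memberships from distances to the centers.
        dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)                  # guard against division by zero
        ratio = dist[:, :, None] / dist[:, None, :]  # ||x_i - v_j|| / ||x_i - v_k||
        u = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
        # Stop once the change in membership falls below the threshold.
        if np.abs(u - u_prev).max() < eps:
            break
    return u, centers
```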
6. EXPERIMENTAL RESULTS AND DISCUSSION
The number of texture classes in an iris image can vary from 2 to 5; ideally, therefore, the maximum number of classes is 5. A reasonable number of clusters for the clustering algorithms is 3. Input images are of size 320 × 280 pixels with 256 grey levels. 12 symmetrical GLCMs are generated for θ ∈ {0, 45, 90, 135} and d ∈ {1, 3, 5} with a reduced set of 64 grey levels. A window size of 21 × 21 pixels (non-overlapping sub-images) is used. Smaller windows can be used for finer segmentation but classification results may be poorer.
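Putting the pieces together, the per-window feature vectors could be assembled as sketched below under the parameters just described; the function names symmetric_glcm, new_glcm_feature and k_means refer to the earlier sketches and are our own labels, not the authors' implementation.

```python
import numpy as np

THETAS = (0, 45, 90, 135)
DISTANCES = (1, 3, 5)
LEVELS = 64
WINDOW = 21

def window_features(image):
    """Return one 12-dimensional feature vector per 21x21 non-overlapping window."""
    # Reduce 256 grey levels to 64 before building the co-occurrence matrices.
    quantized = (image.astype(np.uint16) * LEVELS // 256).astype(np.uint8)
    h, w = quantized.shape
    vectors, positions = [], []
    for y in range(0, h - WINDOW + 1, WINDOW):
        for x in range(0, w - WINDOW + 1, WINDOW):
            win = quantized[y:y + WINDOW, x:x + WINDOW]
            # One symmetric GLCM per (theta, d) pair gives 12 values of the feature F.
            vec = [new_glcm_feature(symmetric_glcm(win, d, t, LEVELS))
                   for t in THETAS for d in DISTANCES]
            vectors.append(vec)
            positions.append((y, x))
    return np.array(vectors), positions

# e.g. features, positions = window_features(iris_image)
#      labels, _ = k_means(features, k=3)
```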
Test images are taken from the CASIA iris database [CASIA Iris Image Database]. 30 images were tested using the new GLCM feature and the parameters described above. The K-Means and fuzzy C-Means algorithms were both applied to each image for comparison. To test the accuracy of the proposed system, the outputs were manually inspected and poor or unsatisfactory results were discarded. The K-Means algorithm effectively segmented 22 out of 30 images, while the fuzzy C-Means clustering approach performed slightly better, with 24 images being well segmented.
Table I. Clustering results

                     K-Means    Fuzzy C-Means
Number of images     30         30
Segmented            22         24
Rejected             8          6
Accuracy             73.3%      80%
Figure 4 shows three test images, with their segmentation outputs in Figure 5 and Figure 6. Table I summarizes the clustering results.
The sample space for the above experiments comprised images with multiple eyelashes, single eyelashes,
uneven illumination and various texture details. The algorithm fails in the following instances:
—When the texture of the eyelash zone is indiscernible from that of another zone.
—In the presence of uneven lighting.
—When parameters are poorly selected, e.g. the texture window size or the number of clusters for the clustering.
Criteria for manual rejection or acceptance of clustering results were based on over-segmentation and correct classification. Each clustering output is an image in which different zones are assigned different colours. These zones were compared to the original image to check for misclassification: misclassified pixels appeared as similar objects given different colours or as different objects given the same colour.
7. FUTURE DEVELOPMENTS
The new GLCM texture measure for segmenting iris images is a promising technique. As mentioned earlier,
a number of factors affect the performance of the segmentation algorithm. It is hoped that a fully automatic
approach can be designed for establishing parameters for window size, number of grey levels and distinguishing
features for a particular texture class in any application. Our current experiments also involve Gabor filters [P. Kruizinga, N. Petkov and S.E. Grigorescu 1999], the discrete wavelet transform (DWT) [Mallat 1989] and Markov Random Fields (MRF) [Chellappa and Chatterjee 1985] for texture discrimination. Some of these methods will be combined to achieve optimal segmentation. Currently, the system is fully unsupervised. A supervised approach is being implemented by training on iris images in order to compute a feature library that assists pattern recognition. This will also provide statistics for cluster validation and finer segmentation. It is also hoped that image enhancement can be improved in order to address uneven lighting and indiscernible texture boundaries.
8. CONCLUSION
A texture analysis approach to improve iris segmentation for person identification has been presented. A new feature derived from the GLCM has been proposed and shown to be effective for texture classification. The test results show
that the proposed approach is very promising and can help improve the recognition rate of iris systems by
removing noisy artifacts.
ACKNOWLEDGMENTS
Portions of the research in this paper use the CASIA iris image database collected by the Institute of Automation,
Chinese Academy of Sciences. The financial assistance of the National Research Foundation (NRF) towards this
research is hereby acknowledged. Opinions expressed and conclusions arrived at are those of the authors and not
necessarily to be attributed to the NRF. Armscor (SA) is also acknowledged for their financial support towards
this project.
REFERENCES
A. Muron, P. Kois and J. Pospisil. 2001. Identification of persons by means of the Fourier spectra of the optical transmission
binary models of human irises. Optics Communications 192, 161–167.
Adler, F. 1965. Physiology of the eye. MO:Mosby, St. Louis.
Ali, J. and Hassanien, A. 2003. An iris recognition system to enhance e-security environment based on wavelet theory. Advanced
Modelling and Optimization 5, 2.
B. Jahne, H. Haubecker and P. Geibler. 1999. Handbook of Computer Vision and Applications - Volume 2. Academic Press.
Bachoo, A. and Tapamo, J.-R. 2004. A segmentation method to improve iris-based person identification. In Africon 2004. Vol. 1.
403–408.
Bezdek, J. 1981. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press.
Boles, W. and Boashash, B. 1998. A human identification technique using images of the iris and wavelet transform. IEEE Trans.
on Signal Processing 46, 4, 1185–1188.
CASIA Iris Image Database. http://www.sinobiometrics.com.
C.H. Chen, L.F. Pau and P.S.P. Wang. 1998. The Handbook of Pattern Recognition and Computer Vision (2nd Edition). World
Scientific Publishing Co.
Chellappa, R. and Chatterjee, S. 1985. Classification of textures using Gaussian Markov random fields. IEEE Trans. Acous.,
Speech Signal Proc. 33, 4, 959–963.
Daugman, J. 1993. High confidence visual recognition of persons by a test of statistical independence. IEEE Trans. Pattern
Analysis Mach. Intell. 15, 11, 1148–1161.
Daugman, J. 2003. The importance of being random: statistical principles of iris recognition. Pattern Recognition 36, 2, 279–291.
Davson, H. 1990. Davson’s Physiology of the Eye. MacMillan, London.
Dunn, J. 1974. A fuzzy relative of the isodata process and its use in detecting compact well separated clusters. Journal of
Cybernetics 3, 32–57.
Flom, L. and Safir, A. 1987. Iris recognition system.
K. Bae, S. Noh and J. Kim. 2003. Iris feature extraction using independent component analysis. In AVBPA. 838–844.
K. Lim, K. Lee, O. Byeon and T. Kim. 2001. Efficient iris recognition through improvement of feature vector and classifier. ETRI
Journal 23, 2, 61–70.
K. Lim, Y. Wang and T. Tan. 2002. Iris recognition based on multichannel Gabor filtering. In Fifth Asian Conference on Computer
Vision. Vol. 1. 279–283.
Kong, W. and Zhang, D. 2003. Detecting eyelash and reflection for accurate iris segmentation. International Journal of Pattern
Recognition and Artificial Intelligence 17, 6, 1025–1034.
Kronfeld, P. 1968. The gross embryology of the eye. The Eye 1, 1–66.
L. Ma, T. Tan, Y. Wang and D. Zhang. 2003. Personal identification based on iris texture analysis. IEEE Trans. Pattern Analysis
Mach. Intell. 25, 12, 1519–1533.
L. Ma, Y. Wang and T. Tan. 2002. Iris recognition using circular symmetric filters. In ICPR. Vol. 2. 414–417.
L.Ma, T. Tan, Y. Wang and D. Zhang. 2004. Efficient iris recognition by characterizing key local variations. IEEE Transactions
on Image Processing 13, 6, 739–750.
MacQueen, J. 1967. Some methods for classification and analysis of multivariate observations. Proceedings of the 5th Berkeley
Symposium-1, 281–297.
Mallat, S. 1989. A theory of multiresolution signal decomposition. IEEE Trans. on Pattern Analysis and Machine Intelligence 11, 7,
674–693.
P. Kruizinga, N. Petkov and S.E. Grigorescu. 1999. Comparison of texture features based on Gabor filters. In Proceedings of
the 10th International Conference on Image Analysis and Processing. 142–147.
Q. Zhang, J. Wang, P. Gong and P. Shi. 2003. Study of urban spatial patterns from SPOT panchromatic imagery using textural
analysis. Int. J. Remote Sensing 24, 21, 4137–4160.
R.M. Haralick, K. Shanmugam and I. Dinstein. 1973. Texture features for image classification. IEEE Trans. System Man.
Cybernat. 8, 6, 610–621.
W. Zorski, B. Foxon, J. Blackledge and M. Turner. 2002. Fingerprint and iris identification method based on the Hough
transform. In Proceedings of IMA Third Conference on imaging and Digital Image Processing.
Wildes, R. 1997. Iris recognition: an emerging biometric technology. Proceedings of the IEEE 85, 9, 1348–1362.
More Related Content

What's hot

International Journal of Computer Science, Engineering and Information Techno...
International Journal of Computer Science, Engineering and Information Techno...International Journal of Computer Science, Engineering and Information Techno...
International Journal of Computer Science, Engineering and Information Techno...ijcseit
 
Use of Illumination Invariant Feature Descriptor for Face Recognition
 Use of Illumination Invariant Feature Descriptor for Face Recognition Use of Illumination Invariant Feature Descriptor for Face Recognition
Use of Illumination Invariant Feature Descriptor for Face RecognitionIJCSIS Research Publications
 
an approach to identify caries in dental image
an approach to identify caries in dental imagean approach to identify caries in dental image
an approach to identify caries in dental imageINFOGAIN PUBLICATION
 
FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE...
FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE...FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE...
FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE...sipij
 
Automated Diagnosis of Glaucoma using Haralick Texture Features
Automated Diagnosis of Glaucoma using Haralick Texture FeaturesAutomated Diagnosis of Glaucoma using Haralick Texture Features
Automated Diagnosis of Glaucoma using Haralick Texture FeaturesIOSR Journals
 
Extended Fuzzy Hyperline Segment Neural Network for Fingerprint Recognition
Extended Fuzzy Hyperline Segment Neural Network for Fingerprint RecognitionExtended Fuzzy Hyperline Segment Neural Network for Fingerprint Recognition
Extended Fuzzy Hyperline Segment Neural Network for Fingerprint RecognitionCSCJournals
 
3D Riesz-wavelet Based Covariance Descriptors for Texture Classi cation of Lu...
3D Riesz-wavelet Based Covariance Descriptors for Texture Classication of Lu...3D Riesz-wavelet Based Covariance Descriptors for Texture Classication of Lu...
3D Riesz-wavelet Based Covariance Descriptors for Texture Classi cation of Lu...Institute of Information Systems (HES-SO)
 
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...ijsrd.com
 
Shadow Detection and Removal Techniques A Perspective View
Shadow Detection and Removal Techniques A Perspective ViewShadow Detection and Removal Techniques A Perspective View
Shadow Detection and Removal Techniques A Perspective Viewijtsrd
 
Wavelet based histogram method for classification of textu
Wavelet based histogram method for classification of textuWavelet based histogram method for classification of textu
Wavelet based histogram method for classification of textuIAEME Publication
 
Focal Cortical Dysplasia Lesion Analysis with Complex Diffusion Approach
Focal Cortical Dysplasia Lesion Analysis with Complex Diffusion ApproachFocal Cortical Dysplasia Lesion Analysis with Complex Diffusion Approach
Focal Cortical Dysplasia Lesion Analysis with Complex Diffusion ApproachQuEST Global (erstwhile NeST Software)
 
National Flags Recognition Based on Principal Component Analysis
National Flags Recognition Based on Principal Component AnalysisNational Flags Recognition Based on Principal Component Analysis
National Flags Recognition Based on Principal Component Analysisijtsrd
 
Detection of Cranial- Facial Malformations: Towards an Automatic Method
Detection of Cranial- Facial Malformations: Towards an Automatic MethodDetection of Cranial- Facial Malformations: Towards an Automatic Method
Detection of Cranial- Facial Malformations: Towards an Automatic MethodCSCJournals
 

What's hot (20)

International Journal of Computer Science, Engineering and Information Techno...
International Journal of Computer Science, Engineering and Information Techno...International Journal of Computer Science, Engineering and Information Techno...
International Journal of Computer Science, Engineering and Information Techno...
 
Use of Illumination Invariant Feature Descriptor for Face Recognition
 Use of Illumination Invariant Feature Descriptor for Face Recognition Use of Illumination Invariant Feature Descriptor for Face Recognition
Use of Illumination Invariant Feature Descriptor for Face Recognition
 
A010320106
A010320106A010320106
A010320106
 
an approach to identify caries in dental image
an approach to identify caries in dental imagean approach to identify caries in dental image
an approach to identify caries in dental image
 
[IJCT-V3I2P37] Authors: Amritpal Singh, Prithvipal Singh
[IJCT-V3I2P37] Authors: Amritpal Singh, Prithvipal Singh[IJCT-V3I2P37] Authors: Amritpal Singh, Prithvipal Singh
[IJCT-V3I2P37] Authors: Amritpal Singh, Prithvipal Singh
 
FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE...
FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE...FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE...
FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE...
 
Lip recognition
Lip recognition Lip recognition
Lip recognition
 
Automated Diagnosis of Glaucoma using Haralick Texture Features
Automated Diagnosis of Glaucoma using Haralick Texture FeaturesAutomated Diagnosis of Glaucoma using Haralick Texture Features
Automated Diagnosis of Glaucoma using Haralick Texture Features
 
ICPR Workshop ETCHB 2010
ICPR Workshop ETCHB 2010ICPR Workshop ETCHB 2010
ICPR Workshop ETCHB 2010
 
Extended Fuzzy Hyperline Segment Neural Network for Fingerprint Recognition
Extended Fuzzy Hyperline Segment Neural Network for Fingerprint RecognitionExtended Fuzzy Hyperline Segment Neural Network for Fingerprint Recognition
Extended Fuzzy Hyperline Segment Neural Network for Fingerprint Recognition
 
3D Riesz-wavelet Based Covariance Descriptors for Texture Classi cation of Lu...
3D Riesz-wavelet Based Covariance Descriptors for Texture Classication of Lu...3D Riesz-wavelet Based Covariance Descriptors for Texture Classication of Lu...
3D Riesz-wavelet Based Covariance Descriptors for Texture Classi cation of Lu...
 
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
 
Shadow Detection and Removal Techniques A Perspective View
Shadow Detection and Removal Techniques A Perspective ViewShadow Detection and Removal Techniques A Perspective View
Shadow Detection and Removal Techniques A Perspective View
 
Wavelet based histogram method for classification of textu
Wavelet based histogram method for classification of textuWavelet based histogram method for classification of textu
Wavelet based histogram method for classification of textu
 
Focal Cortical Dysplasia Lesion Analysis with Complex Diffusion Approach
Focal Cortical Dysplasia Lesion Analysis with Complex Diffusion ApproachFocal Cortical Dysplasia Lesion Analysis with Complex Diffusion Approach
Focal Cortical Dysplasia Lesion Analysis with Complex Diffusion Approach
 
National Flags Recognition Based on Principal Component Analysis
National Flags Recognition Based on Principal Component AnalysisNational Flags Recognition Based on Principal Component Analysis
National Flags Recognition Based on Principal Component Analysis
 
Kh3418561861
Kh3418561861Kh3418561861
Kh3418561861
 
human face counting
human face countinghuman face counting
human face counting
 
Detection of Cranial- Facial Malformations: Towards an Automatic Method
Detection of Cranial- Facial Malformations: Towards an Automatic MethodDetection of Cranial- Facial Malformations: Towards an Automatic Method
Detection of Cranial- Facial Malformations: Towards an Automatic Method
 
Ai in healthcare
Ai in healthcareAi in healthcare
Ai in healthcare
 

Similar to Noura1

A robust iris recognition method on adverse conditions
A robust iris recognition method on adverse conditionsA robust iris recognition method on adverse conditions
A robust iris recognition method on adverse conditionsijcseit
 
Enhance iris segmentation method for person recognition based on image proces...
Enhance iris segmentation method for person recognition based on image proces...Enhance iris segmentation method for person recognition based on image proces...
Enhance iris segmentation method for person recognition based on image proces...TELKOMNIKA JOURNAL
 
Iris Segmentation: a survey
Iris Segmentation: a surveyIris Segmentation: a survey
Iris Segmentation: a surveyIJMER
 
Low level vision - A tuturial
Low level vision - A tuturialLow level vision - A tuturial
Low level vision - A tuturialpotaters
 
YOLO BASED SHIP IMAGE DETECTION AND CLASSIFICATION
YOLO BASED SHIP IMAGE DETECTION AND CLASSIFICATIONYOLO BASED SHIP IMAGE DETECTION AND CLASSIFICATION
YOLO BASED SHIP IMAGE DETECTION AND CLASSIFICATIONIRJET Journal
 
An improved particle filter tracking
An improved particle filter trackingAn improved particle filter tracking
An improved particle filter trackingijcsit
 
Feature Extraction Techniques for Ear Biometrics: A Survey
Feature Extraction Techniques for Ear Biometrics: A SurveyFeature Extraction Techniques for Ear Biometrics: A Survey
Feature Extraction Techniques for Ear Biometrics: A SurveyShashank Dhariwal
 
An exploration of periocular region with reduced region for authentication re...
An exploration of periocular region with reduced region for authentication re...An exploration of periocular region with reduced region for authentication re...
An exploration of periocular region with reduced region for authentication re...csandit
 
AN EXPLORATION OF PERIOCULAR REGION WITH REDUCED REGION FOR AUTHENTICATION : ...
AN EXPLORATION OF PERIOCULAR REGION WITH REDUCED REGION FOR AUTHENTICATION : ...AN EXPLORATION OF PERIOCULAR REGION WITH REDUCED REGION FOR AUTHENTICATION : ...
AN EXPLORATION OF PERIOCULAR REGION WITH REDUCED REGION FOR AUTHENTICATION : ...cscpconf
 
Powerful processing to three-dimensional facial recognition using triple info...
Powerful processing to three-dimensional facial recognition using triple info...Powerful processing to three-dimensional facial recognition using triple info...
Powerful processing to three-dimensional facial recognition using triple info...IJAAS Team
 
DevFest19 - Early Diagnosis of Chronic Diseases by Smartphone AI
DevFest19 -  Early Diagnosis of Chronic Diseases by Smartphone AIDevFest19 -  Early Diagnosis of Chronic Diseases by Smartphone AI
DevFest19 - Early Diagnosis of Chronic Diseases by Smartphone AIGaurav Kheterpal
 
IRJET- Identification of Missing Person in the Crowd using Pretrained Neu...
IRJET-  	  Identification of Missing Person in the Crowd using Pretrained Neu...IRJET-  	  Identification of Missing Person in the Crowd using Pretrained Neu...
IRJET- Identification of Missing Person in the Crowd using Pretrained Neu...IRJET Journal
 
PERIOCULAR RECOGNITION USING REDUCED FEATURES
PERIOCULAR RECOGNITION USING REDUCED FEATURESPERIOCULAR RECOGNITION USING REDUCED FEATURES
PERIOCULAR RECOGNITION USING REDUCED FEATURESijcsit
 
Ijcse13 05-01-001
Ijcse13 05-01-001Ijcse13 05-01-001
Ijcse13 05-01-001vital vital
 
Ijcse13 05-01-001
Ijcse13 05-01-001Ijcse13 05-01-001
Ijcse13 05-01-001vital vital
 
WAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATION
WAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATIONWAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATION
WAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATIONsipij
 
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITY
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITYDCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITY
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITYIAEME Publication
 

Similar to Noura1 (20)

Ijcatr04051013
Ijcatr04051013Ijcatr04051013
Ijcatr04051013
 
A robust iris recognition method on adverse conditions
A robust iris recognition method on adverse conditionsA robust iris recognition method on adverse conditions
A robust iris recognition method on adverse conditions
 
Enhance iris segmentation method for person recognition based on image proces...
Enhance iris segmentation method for person recognition based on image proces...Enhance iris segmentation method for person recognition based on image proces...
Enhance iris segmentation method for person recognition based on image proces...
 
Iris Segmentation: a survey
Iris Segmentation: a surveyIris Segmentation: a survey
Iris Segmentation: a survey
 
L_3011_62.+1908
L_3011_62.+1908L_3011_62.+1908
L_3011_62.+1908
 
Low level vision - A tuturial
Low level vision - A tuturialLow level vision - A tuturial
Low level vision - A tuturial
 
YOLO BASED SHIP IMAGE DETECTION AND CLASSIFICATION
YOLO BASED SHIP IMAGE DETECTION AND CLASSIFICATIONYOLO BASED SHIP IMAGE DETECTION AND CLASSIFICATION
YOLO BASED SHIP IMAGE DETECTION AND CLASSIFICATION
 
An improved particle filter tracking
An improved particle filter trackingAn improved particle filter tracking
An improved particle filter tracking
 
Feature Extraction Techniques for Ear Biometrics: A Survey
Feature Extraction Techniques for Ear Biometrics: A SurveyFeature Extraction Techniques for Ear Biometrics: A Survey
Feature Extraction Techniques for Ear Biometrics: A Survey
 
An exploration of periocular region with reduced region for authentication re...
An exploration of periocular region with reduced region for authentication re...An exploration of periocular region with reduced region for authentication re...
An exploration of periocular region with reduced region for authentication re...
 
AN EXPLORATION OF PERIOCULAR REGION WITH REDUCED REGION FOR AUTHENTICATION : ...
AN EXPLORATION OF PERIOCULAR REGION WITH REDUCED REGION FOR AUTHENTICATION : ...AN EXPLORATION OF PERIOCULAR REGION WITH REDUCED REGION FOR AUTHENTICATION : ...
AN EXPLORATION OF PERIOCULAR REGION WITH REDUCED REGION FOR AUTHENTICATION : ...
 
Powerful processing to three-dimensional facial recognition using triple info...
Powerful processing to three-dimensional facial recognition using triple info...Powerful processing to three-dimensional facial recognition using triple info...
Powerful processing to three-dimensional facial recognition using triple info...
 
DevFest19 - Early Diagnosis of Chronic Diseases by Smartphone AI
DevFest19 -  Early Diagnosis of Chronic Diseases by Smartphone AIDevFest19 -  Early Diagnosis of Chronic Diseases by Smartphone AI
DevFest19 - Early Diagnosis of Chronic Diseases by Smartphone AI
 
IRJET- Identification of Missing Person in the Crowd using Pretrained Neu...
IRJET-  	  Identification of Missing Person in the Crowd using Pretrained Neu...IRJET-  	  Identification of Missing Person in the Crowd using Pretrained Neu...
IRJET- Identification of Missing Person in the Crowd using Pretrained Neu...
 
PERIOCULAR RECOGNITION USING REDUCED FEATURES
PERIOCULAR RECOGNITION USING REDUCED FEATURESPERIOCULAR RECOGNITION USING REDUCED FEATURES
PERIOCULAR RECOGNITION USING REDUCED FEATURES
 
Ijcse13 05-01-001
Ijcse13 05-01-001Ijcse13 05-01-001
Ijcse13 05-01-001
 
Ijcse13 05-01-001
Ijcse13 05-01-001Ijcse13 05-01-001
Ijcse13 05-01-001
 
2010_JAS_Database
2010_JAS_Database2010_JAS_Database
2010_JAS_Database
 
WAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATION
WAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATIONWAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATION
WAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATION
 
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITY
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITYDCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITY
DCT AND DFT BASED BIOMETRIC RECOGNITION AND MULTIMODAL BIOMETRIC SECURITY
 

More from Dr-mahmoud Algamel

More from Dr-mahmoud Algamel (6)

Noura2
Noura2Noura2
Noura2
 
Published irisrecognitionpaper
Published irisrecognitionpaperPublished irisrecognitionpaper
Published irisrecognitionpaper
 
Zhenan sun
Zhenan sunZhenan sun
Zhenan sun
 
Published irisrecognitionpaper
Published irisrecognitionpaperPublished irisrecognitionpaper
Published irisrecognitionpaper
 
Phd dr mahmoud hassan elgamel
Phd dr mahmoud hassan elgamel Phd dr mahmoud hassan elgamel
Phd dr mahmoud hassan elgamel
 
مهم في برامج تصميم مستودع
مهم في برامج تصميم مستودع مهم في برامج تصميم مستودع
مهم في برامج تصميم مستودع
 

Recently uploaded

US Department of Education FAFSA Week of Action
US Department of Education FAFSA Week of ActionUS Department of Education FAFSA Week of Action
US Department of Education FAFSA Week of ActionMebane Rash
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)dollysharma2066
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptSAURABHKUMAR892774
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...VICTOR MAESTRE RAMIREZ
 
Unit7-DC_Motors nkkjnsdkfnfcdfknfdgfggfg
Unit7-DC_Motors nkkjnsdkfnfcdfknfdgfggfgUnit7-DC_Motors nkkjnsdkfnfcdfknfdgfggfg
Unit7-DC_Motors nkkjnsdkfnfcdfknfdgfggfgsaravananr517913
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.eptoze12
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)Dr SOUNDIRARAJ N
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
 
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catcherssdickerson1
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort servicejennyeacort
 
lifi-technology with integration of IOT.pptx
lifi-technology with integration of IOT.pptxlifi-technology with integration of IOT.pptx
lifi-technology with integration of IOT.pptxsomshekarkn64
 

Recently uploaded (20)

US Department of Education FAFSA Week of Action
US Department of Education FAFSA Week of ActionUS Department of Education FAFSA Week of Action
US Department of Education FAFSA Week of Action
 
Design and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdfDesign and analysis of solar grass cutter.pdf
Design and analysis of solar grass cutter.pdf
 
young call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Serviceyoung call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Service
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
Call Us ≽ 8377877756 ≼ Call Girls In Shastri Nagar (Delhi)
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.ppt
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 
Unit7-DC_Motors nkkjnsdkfnfcdfknfdgfggfg
Unit7-DC_Motors nkkjnsdkfnfcdfknfdgfggfgUnit7-DC_Motors nkkjnsdkfnfcdfknfdgfggfg
Unit7-DC_Motors nkkjnsdkfnfcdfknfdgfggfg
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
 
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
POWER SYSTEMS-1 Complete notes examples
POWER SYSTEMS-1 Complete notes  examplesPOWER SYSTEMS-1 Complete notes  examples
POWER SYSTEMS-1 Complete notes examples
 
lifi-technology with integration of IOT.pptx
lifi-technology with integration of IOT.pptxlifi-technology with integration of IOT.pptx
lifi-technology with integration of IOT.pptx
 

Noura1

  • 1. Texture Detection for Segmentation of Iris Images ASHEER KASAR BACHOO University of Kwa-Zulu Natal, South Africa and JULES-RAYMOND TAPAMO University of Kwa-Zulu Natal, South Africa The idea of using the distinct spatial distribution of patterns in the human iris for person authentication is now a widely developing technology. Current systems rely on a set of basic assumptions in order to improve the accuracy and running time of the recognition process. The advent of a robust system implies a viable solution to a number of general problems. This paper focuses on a common yet difficult problem - the segmentation of eyelashes from iris texture. Tests give promising results when using grey level co-occurrence matrix (GLCM) approach. Categories and Subject Descriptors: I.4.6. [Image Processing and Computer Vision]: Segmentation - region growing and partition; I.5.3. [Pattern Recognition]: Clustering - algorithms, similarity measures General Terms: texture, classification, experimentation Additional Key Words and Phrases: iris, localization, GLCM, K-Means 1. INTRODUCTION The iris begins its formation in the 3rd month of gestation [Adler 1965]. By the 8th month, its distinctive pattern is complete. However, pigmentation and even pupil size increase as far up as adolescence [Kronfeld 1968]. The iris has a multilayered texture. This combination of layers and colour provide a highly distinctive pattern. An assortment of texture variations are possible. They include: —Contractile lines related to the state of the pupil —Crypts - irregular atrophy of the border layer. —Naevi - small elevations of the border layer. —Freckles - collections of chromatophores. —Colour variation - an increase in pigmentation yields darker coloured irides. Of the utmost importance in a biometric identification system is the stability and uniqueness of the object being analysed. Ophthalmologists [Flom and Safir 1987] and anatomists [Davson 1990], during the course of clinical observations, have noted that the irises of individuals are highly distinctive. This extends to the left and right eye of the same person. Repeated observations over a period of time have highlighted little variation in the patterns. Developmental biology has also provided evidence of the particular characteristics of the iris [Kronfeld 1968]. Although the general structure of the iris is genetically determined, the uniqueness of its minutiae is highly dependent on circumstances. As a result, replication is almost impossible. It has also been noted that, following adolescence, the iris remains stable and varies little for the remainder of the person’s life. Development is continuous during the early and adolescent years (pigmentation continues as well as an increase in pupil size) [Davson 1990; Kronfeld 1968]. Figure 1 shows an iris and the variety of its texture patterns. In 1936, Frank Burch, an ophthalmologist, proposed the idea of using iris patterns for personal identification. However, this was only documented by James Doggarts in 1949. The idea of iris identification for automated recognition was finally patented by Aran Safir and Leonard Flom in 1987. Although they had patented the idea, the two ophthalmologists were unsure as to a practical implementation for the system. They commissioned John Daugman to develop the fundamental algorithms in 1989. 
These algorithms were patented by Daugman in 1994 Asheer Kasar Bachoo, School of Computer Science, University of Kwa-Zulu Natal, Durban, 4041, e-mail - 200266744@ukzn.ac.za Jules-Raymond Tapamo, School of Computer Science, University of Kwa-Zulu Natal, Durban, 4041, e-mail - tapamoj@ukzn.ac.za Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, that the copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than SAICSIT or the ACM must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. c 2005 SAICSIT Proceedings of SAICSIT 2005, Pages 111–118
  • 2. 112 • Bachoo and Tapamo Radial furrow - Crypt - Ciliary zone Collarette Pupillary zone Pupillary frillPupil Sclera Iris Figure 1. The iris and now form the basis for all current commercial iris recognition systems. The Daugman algorithms are owned by Iridian Technologies and they are licensed to several other companies. 2. RELATED WORK The fundamental iris algorithm and system was developed by John Daugman. This was presented in his landmark paper [Daugman 1993] and subsequently updated [Daugman 2003]. The iris is segmented using integro-differential operators. To extract the rich details of the texture, he uses complex-valued 2D Gabor wavelets to extract discriminating information. Recognition is done by means of a test of statistical independence for two iris codes. A failure of the test implies a match. The matching system implements a normalized Hamming distance criteria. Features can be extracted using an application of Laplacian of Gaussian filters at different resolutions [Wildes 1997]. This proves to be efficient and consistent. The system uses edge maps and a Hough transform to locate the region of interest. A normalized correlation between the acquired iris representation and the stored one is employed for pattern matching. Feature extraction using zero-crossing representations of a 1D wavelet transform [Boles and Boashash 1998], the Haar wavelet [Ali and Hassanien 2003] and the Haar wavelet with a neural network for identification [K. Lim, K. Lee, O. Byeon and T. Kim 2001] are well documented. Multi-channel Gabor filtering [K. Lim, Y. Wang and T. Tan 2002], circular symmetric filters [L. Ma, Y. Wang and T. Tan 2002] and key local variations in 1D signals [L.Ma, T. Tan, Y. Wang and D. Zhang 2004] have also shown to be very promising for generating digital iris codes. There have also been a number of other interesting ideas for iris recognition. Multiresolution Independent Component Analysis (MICA) in [K. Bae, S Noh and J. Kim 2003] and application of the Hough transform for iris identification [W. Zorski, B. Foxon, J. Blackledge and M. Turner 2002] have produced some good results. A promising avenue has been an analysis of the Fourier spectra of the optical transmission binary models of human irises [A. Muron, P. Kois and J. Pospisil 2001]. Proceedings of SAICSIT 2005
Very often, in real-world problems, images do not exhibit uniform intensities. Observation shows that surfaces generally contain intensity variations that follow a particular pattern or periodicity; this is visual texture [C.H. Chen, L.F. Pau and P.S.P. Wang 1998]. A multitude of definitions are available for this characteristic of images, each particular to its own domain and application, so no formal definition exists. The human visual system discriminates easily between different textures by analysing spatial, orientation, phase, frequency and colour information. Texture analysis in machine vision, by contrast, is time and space intensive, and it is sensitive to scale, resolution and orientation. The available techniques include co-occurrence matrices, Markov random fields, wavelets and fractals, each one adaptable to a particular situation [B. Jähne, H. Haussecker and P. Geissler 1999].

The fundamental iris recognition papers [Daugman 1993; Wildes 1997; Boles and Boashash 1998], while instrumental in providing rich knowledge for all subsequent endeavours, have not addressed the problem of interference caused by eyelashes. Iris texture is the region of interest (ROI), and useless artifacts - such as eyelashes and eyelids - must be removed from the iris region. Eyelashes can be regarded as a texture because they display a particular pattern and periodicity that is distinct from the surrounding areas. As a result, a texture segmentation approach is undertaken.

An algorithm to detect eyelashes for accurate iris segmentation, without resorting to texture classification, has been proposed in the literature [Kong and Zhang 2003]. It divides the problem into two cases - separable eyelashes and multiple eyelashes. Separable eyelashes are treated as edges: the image is convolved with a Gabor filter and thresholded to segment the eyelashes. The Gabor function is defined as

G(x, u, \sigma) = \exp\left\{ -\frac{x^2}{2\sigma^2} \right\} \cos(2\pi u x)    (1)

where u is the frequency of the sinusoidal wave and \sigma is the standard deviation of the Gaussian. If the resultant value at a point falls below a threshold, the point belongs to an eyelash. This can be stated as

f(x) \ast G(x, u, \sigma) < K_1    (2)

where K_1 is a pre-determined threshold and \ast is the convolution operator. The success of this process depends on a high intensity difference between iris pixels and eyelash pixels. Multiple eyelashes are modelled using an intensity variation model - eyelashes overlapping in a small area have a low intensity variation. If the variance of intensity in the area is below a threshold, the center of the window is labelled an eyelash pixel. This can be stated as

\frac{\sum_{i=-N}^{N} \sum_{j=-N}^{N} \left( f(x+i, y+j) - M \right)^2}{(2N+1)^2} < K_2    (3)

where M is the mean intensity in the window, (2N+1)^2 is the window size and K_2 is a threshold. A connected components algorithm is also implemented to avoid misclassification of pixels. K_1 and K_2 are empirically determined parameters.

The frequency distribution of iris images has been analyzed to determine occlusions by eyelids and eyelashes [L. Ma, T. Tan, Y. Wang and D. Zhang 2003]. While effective, that technique does not provide a solution to removing the eyelashes.
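As an illustration of the Kong and Zhang criteria above, the following sketch applies equation (2) as a 1D Gabor convolution along image rows and equation (3) as a thresholded local variance. It assumes a grey-scale image in a NumPy array; the kernel width, u, sigma, N and the thresholds K1 and K2 are illustrative values rather than those of Kong and Zhang, and the connected-components post-processing is omitted.

```python
import numpy as np
from scipy.ndimage import convolve1d, uniform_filter

def gabor_kernel_1d(u, sigma, half_width):
    """Sampled 1D Gabor kernel G(x, u, sigma) = exp(-x^2 / (2 sigma^2)) cos(2 pi u x)."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.cos(2.0 * np.pi * u * x)

def eyelash_mask(image, u=0.1, sigma=3.0, half_width=9, n=5, k1=-40.0, k2=12.0):
    """Label pixels as eyelash using the two thresholded criteria.

    Separable eyelashes: 1D Gabor response below K1 (equation 2).
    Multiple eyelashes: intensity variance in a (2N+1) x (2N+1) window below K2 (equation 3).
    """
    img = image.astype(float)
    # Criterion 1: convolve each row with the Gabor kernel and threshold the response.
    response = convolve1d(img, gabor_kernel_1d(u, sigma, half_width), axis=1, mode='reflect')
    separable = response < k1
    # Criterion 2: local variance from local moments, thresholded.
    size = 2 * n + 1
    local_mean = uniform_filter(img, size=size)
    local_var = uniform_filter(img**2, size=size) - local_mean**2
    multiple = local_var < k2
    return separable | multiple
```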
3. IRIS LOCALIZATION

Segmentation of the iris from an image of the eye is sensitive to numerous factors - noise, uneven lighting, and eyelid and eyelash interference. Although the pupil and iris borders can be modelled using two non-concentric circles, with the larger one forming a closed contour around the smaller one, detection of these two boundaries is not a simple problem. In a typical case, the outer iris boundary is a very soft gradient that standard edge functions do not detect, and this boundary is sometimes only partially visible or covered by eyelashes. The details of the localization algorithm can be found in [Bachoo and Tapamo 2004].

4. EYELASH SEGMENTATION USING GREY LEVEL CO-OCCURRENCE MATRIX (GLCM) FEATURES

The technique presented by Kong and Zhang to segment eyelashes assumes a clear intensity distinction between iris and eyelash pixels; as a result, edge and variance operators are used for segmentation. However, edge detectors are not always optimal and will fail when high texture variations are present or when the intensity gradient between eyelash and iris pixels is gradual rather than a step. There may also be cases of misclassified pixels. The technique proposed in this paper instead uses texture analysis and pattern classification for segmentation.
Figure 2. An iris image.

Figure 3. Segmentation results using different features computed from the GLCM - (a)-(d) K-means clustering; (e)-(h) fuzzy C-means clustering. The features used for classification are (for each image, from left to right) energy, entropy, correlation and mean.

Co-occurrence features are an effective texture descriptor [R.M. Haralick, K. Shanmugam and I. Dinstein 1973]; they capture image properties through second-order statistics. A GLCM (Grey Level Co-occurrence Matrix) element p_{\theta,d}(i, j) is the joint probability of the grey level pair (i, j) occurring in a given direction \theta at a separation of d pixels; G denotes the number of grey levels. For each ROI (Region Of Interest) in this work, the feature computed from symmetrical GLCMs is a new one defined as follows:

F = \sum_{i,j=1}^{G} p_{\theta,d}(i, j) \left( \frac{i}{j} \right)    (4)

Not all of the texture features that can be computed from the GLCM are useful here. A number of test runs have shown that the most reliable feature for segmenting regions in an iris image is the one defined above. A possible reason for this is the small size of the ROI - it is an annulus with an inner radius of approximately 40 pixels and an outer radius of about 110 pixels. Features such as entropy, energy and correlation are difficult to capture effectively within a small region with a large number of grey levels. Selection of features for texture discrimination is application dependent: a feature set for describing grass will not classify cotton threads, although it may produce useful results [Q. Zhang, J. Wang, P. Gong and P. Shi 2003]. Texture discrimination for eyelash segmentation is, thus, dependent on a careful analysis of the problem and on preliminary implementation and test results.

Using a GLCM to classify textures is sensitive to many factors. Selection of the window size is important for capturing the relevant statistics - a small window will omit important grey levels and structure, while a large window will include different texture areas and thus add incorrect statistics to the GLCM. The number of grey levels in the image also affects the feature extraction process - a large number of grey levels implies an exhaustive computational time and provides poor relationships between pixels of the same texture, owing to the high randomness present when there are too few pixels to examine. A reduced set of grey levels will lessen the computational burden but destroy certain textural characteristics [R.M. Haralick, K. Shanmugam and I. Dinstein 1973].
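To make the new feature concrete, the sketch below builds a symmetrical, normalized GLCM for one window and one pixel offset and evaluates equation (4) with grey levels numbered from 1. The quantisation to 64 grey levels and the (theta, d) combinations follow the parameter choices reported in Section 6; the function names and the random test window are our own illustrations.

```python
import numpy as np

def glcm(window, offset, levels=64):
    """Symmetrical grey level co-occurrence matrix for one (row, column) pixel offset."""
    q = (window.astype(int) * levels) // 256          # quantize 0..255 down to `levels` grey levels
    dr, dc = offset
    rows, cols = q.shape
    m = np.zeros((levels, levels), dtype=float)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[q[r, c], q[r2, c2]] += 1.0
                m[q[r2, c2], q[r, c]] += 1.0           # symmetrical counting
    return m / m.sum()

def feature_f(p):
    """Equation (4): F = sum over i, j of p(i, j) * (i / j), grey levels numbered from 1."""
    g = p.shape[0]
    i = np.arange(1, g + 1).reshape(-1, 1)
    j = np.arange(1, g + 1).reshape(1, -1)
    return float((p * (i / j)).sum())

# Offsets for theta in {0, 45, 90, 135} degrees and d in {1, 3, 5}: 12 GLCMs per window.
offsets = [(dr * d, dc * d) for (dr, dc) in [(0, 1), (-1, 1), (-1, 0), (-1, -1)] for d in (1, 3, 5)]

rng = np.random.default_rng(1)
window = rng.integers(0, 256, (21, 21))                # stand-in for a 21 x 21 iris sub-image
feature_vector = [feature_f(glcm(window, off)) for off in offsets]
```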
These considerations must be taken into account in order to design an accurate and robust image processing system. While a fully automatic system is highly desirable, it is difficult to know all the variables in the problem. A good approach is a manual analysis of the images and the problem, together with a few test runs, to ascertain reliable parameters. These can then be built into the system so that most situations can be dealt with. Figure 2 shows a test image of an iris, and Figure 3 shows some segmentation results using different GLCM features. To produce meaningful results, different parameter values for d and \theta must be used. This improves the statistical detail of the texture and helps generalize the texture definition for a particular class, improving class uniqueness. However, unnecessary parameters can also degrade the feature class.

5. UNSUPERVISED FEATURE CLASSIFICATION

The segmentation of eyelashes is a problem of unsupervised classification. In essence, we are required to partition the set of feature vectors into classes that differ from each other as much as possible, while the vectors within the same class differ as little as possible. The number of distinct texture classes in the region of interest is unknown; ideally, we require a two-class classifier that separates iris texture from any other texture. The K-means [MacQueen 1967] and fuzzy C-means [Dunn 1974; Bezdek 1981] algorithms are implemented. There are a multitude of possible textures when an image of the eye is captured - skin, eyelash, sclera, iris and pupil. The texture variations are random due to the following factors:

(1) Uneven illumination caused by the image acquisition process.
(2) The uniqueness of every iris pattern.
(3) The presence of multiple, single or no eyelashes.
(4) The pupil dilating or constricting, which changes the density of the iris pattern.

5.1 K-means Clustering

MacQueen's k-means algorithm is a popular clustering algorithm in which the number of clusters (k) is known. It is an iterative process that assigns patterns to the closest cluster using a distance function (such as the Euclidean distance). The basic algorithm is described below, followed by a small illustrative sketch.

(1) Define the number of clusters k.
(2) Initialize (randomly) the cluster prototypes p_i (i = 1, ..., k).
(3) For each pattern x, assign x to the nearest cluster p_i (i = 1, ..., k).
(4) Recompute p_i (i = 1, ..., k).
(5) Repeat steps 3 and 4 until the prototypes do not change.
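A minimal NumPy sketch of the steps above; the random prototype initialisation from the data points and the Euclidean distance follow the description, while the iteration cap, the empty-cluster guard and the function name are our own additions.

```python
import numpy as np

def k_means(x, k, max_iter=100, seed=0):
    """Cluster the rows of x into k groups following steps 1-5 above."""
    rng = np.random.default_rng(seed)
    prototypes = x[rng.choice(len(x), size=k, replace=False)]       # step 2: random prototypes
    for _ in range(max_iter):
        # Step 3: assign each pattern to the nearest prototype (Euclidean distance).
        distances = np.linalg.norm(x[:, None, :] - prototypes[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Step 4: recompute each prototype as the mean of its assigned patterns.
        new_prototypes = np.array([x[labels == i].mean(axis=0) if np.any(labels == i)
                                   else prototypes[i] for i in range(k)])
        # Step 5: stop once the prototypes no longer change.
        if np.allclose(new_prototypes, prototypes):
            break
        prototypes = new_prototypes
    return labels, prototypes
```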
5.2 Fuzzy C-means Clustering

Fuzzy clustering allows data to belong to more than one class, reflected by a degree of membership in each cluster. It is based on the minimization of the objective function

J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} \, \| x_i - v_j \|^2, \quad 1 \le m < \infty    (5)

where m is a real number greater than 1 (called the fuzzification factor), u_{ij} is the degree of membership of x_i in cluster j with 0 \le u_{ij} \le 1, x_i is the i-th pattern vector in a d-dimensional data set, v_j is the d-dimensional centre of cluster j, and \| \cdot \| is the Euclidean norm. C denotes the number of clusters and N the number of pattern vectors. Fuzzy partitioning is an iterative optimization process. The degrees of membership u_{ij} and the cluster centres v_j are computed by the following equations:

u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \frac{\| x_i - v_j \|}{\| x_i - v_k \|} \right)^{\frac{2}{m-1}}}    (6)

v_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} x_i}{\sum_{i=1}^{N} u_{ij}^{m}}    (7)

The stopping criterion is | u_{ij}^{(k)} - u_{ij}^{(k-1)} | < \epsilon, where \epsilon is a threshold between zero and one and k is the iteration index.
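As an illustration, equations (6) and (7) and this stopping rule translate into the short sketch below; the full iteration is then stated step by step. The settings m = 2 and eps = 1e-4, and the function name, are illustrative choices of ours rather than the values used in the experiments.

```python
import numpy as np

def fuzzy_c_means(x, c, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Fuzzy C-means on the rows of x, following equations (5)-(7)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)            # random initial membership matrix U(0)
    for _ in range(max_iter):
        um = u ** m
        # Equation (7): cluster centres as membership-weighted means of the patterns.
        v = (um.T @ x) / um.sum(axis=0)[:, None]
        # Equation (6): update memberships from the distances to every centre.
        d = np.linalg.norm(x[:, None, :] - v[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # guard against division by zero
        u_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        # Stopping criterion: largest change in any membership value below eps.
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return u, v
```

For a hard segmentation, each feature vector can then be assigned to the cluster in which its membership is largest.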
The algorithm is as follows:

(1) Select the values of \epsilon and c and initialize the counter k to 0 (k ← 0).
(2) Obtain (randomly) the initial membership matrix U^{(0)}.
(3) k ← k + 1.
(4) Compute the centroids v_j (j = 1, ..., c) using equation (7).
(5) Compute the membership matrix U^{(k)} using equation (6).
(6) If \| U^{(k)} - U^{(k-1)} \| < \epsilon then stop, else go to step 3.

6. EXPERIMENTAL RESULTS AND DISCUSSION

The number of texture classes in an iris image can vary from 2 to 5; thus, ideally, the maximum number of classes is 5, and a reasonable setting for the clustering algorithms is 3 classes. Input images are of size 320 × 280 pixels with 256 grey levels. Twelve symmetrical GLCMs are generated per window for \theta \in \{0, 45, 90, 135\} degrees and d \in \{1, 3, 5\}, using a reduced set of 64 grey levels. A window size of 21 × 21 pixels (non-overlapping sub-images) is used. Smaller windows can be used for finer segmentation, but the classification results may be poorer. Test images are taken from the CASIA iris database [CASIA Iris Image Database].
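The per-image pipeline implied by these parameters - non-overlapping 21 × 21 windows, one 12-dimensional feature vector per window built from equation (4) over the twelve (theta, d) offsets, and unsupervised clustering of the vectors - might be organised as in the sketch below. The feature computation is a compact, vectorised variant of the earlier sketch, and SciPy's kmeans2 merely stands in for either clustering algorithm of Section 5; all names here are our own.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def glcm_feature(window, dr, dc, levels=64):
    """Equation (4) for one (direction, distance) offset, from a symmetrical GLCM."""
    q = (window.astype(int) * levels) // 256
    a = q[max(dr, 0):q.shape[0] + min(dr, 0), max(dc, 0):q.shape[1] + min(dc, 0)]
    b = q[max(-dr, 0):q.shape[0] + min(-dr, 0), max(-dc, 0):q.shape[1] + min(-dc, 0)]
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=levels, range=[[0, levels], [0, levels]])
    p = (h + h.T) / (h + h.T).sum()                       # symmetrical, normalized GLCM
    i = np.arange(1, levels + 1)
    return float((p * (i[:, None] / i[None, :])).sum())   # F = sum of p(i, j) * (i / j)

# Twelve offsets: theta in {0, 45, 90, 135} degrees, d in {1, 3, 5}.
OFFSETS = [(dr * d, dc * d) for (dr, dc) in [(0, 1), (-1, 1), (-1, 0), (-1, -1)] for d in (1, 3, 5)]

def segment(image, win=21, classes=3):
    """Non-overlapping windows -> 12-dimensional feature vectors -> one cluster label per window."""
    rows, cols = image.shape[0] // win, image.shape[1] // win
    feats = np.array([[glcm_feature(image[r * win:(r + 1) * win, c * win:(c + 1) * win], dr, dc)
                       for (dr, dc) in OFFSETS]
                      for r in range(rows) for c in range(cols)])
    _, labels = kmeans2(feats, classes, minit='points')   # stand-in for K-means or fuzzy C-means
    return labels.reshape(rows, cols)                     # texture label image, one label per window
```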
30 images were tested using the new GLCM feature and the parameters described above. The K-means and fuzzy C-means algorithms were both applied to each image for comparison. To test the accuracy of the proposed system, the outputs were manually inspected and poor or unsatisfactory results were discarded. The K-means algorithm effectively segmented 22 of the 30 images, while the fuzzy C-means clustering approach performed slightly better, with 24 images being well segmented. Figure 4 shows three test images, with their segmentation outputs in Figure 5 and Figure 6. Table I shows the clustering results.

Figure 4. Three test images.
Figure 5. Test results using K-means clustering.
Figure 6. Test results using fuzzy C-means clustering.

Table I. Clustering results
                     K-Means    Fuzzy C-Means
Number of images     30         30
Segmented            22         24
Rejected             8          6
Accuracy             73.3%      80%

The sample space for the above experiments comprised images with multiple eyelashes, single eyelashes, uneven illumination and various texture details. The algorithm fails in the following instances:

—When the texture of the eyelash zone is indiscernible from that of another zone.
—In the presence of uneven lighting.
—When parameters are poorly selected, e.g. the texture window size or the number of clusters.

Criteria for the manual rejection or acceptance of clustering results were based on over-segmentation and correct classification. Each clustering output is an image with a different colour for each zone; these zones were compared to the original image to check for misclassification. Misclassified pixels appeared as similar objects with different colours or different objects with the same colour.

7. FUTURE DEVELOPMENTS

The new GLCM texture measure for segmenting iris images is a promising technique. As mentioned earlier, a number of factors affect the performance of the segmentation algorithm. It is hoped that a fully automatic approach can be designed for establishing the parameters for window size, number of grey levels and distinguishing features for a particular texture class in any application. Our current experiments also involve Gabor filters [P. Kruizinga, N. Petkov and S.E. Grigorescu 1999], the discrete wavelet transform (DWT) [Mallat 1989] and Markov Random Fields (MRF) [Chellappa and Chatterjee 1985] for texture discrimination. A combination of some of these methods will be investigated for optimal segmentation. Currently, the system is fully unsupervised. A supervised approach is being implemented by training on iris images in order to compute a feature library to assist in pattern recognition. This will also provide statistics for cluster validation and finer segmentation. It is also hoped that image enhancement can be improved in order to address uneven lighting and indiscernible texture boundaries.

8. CONCLUSION

A texture analysis approach to improve iris segmentation for person identification has been presented. A new feature derived from the GLCM has been proposed and shown to be effective for texture classification. The test results show that the proposed approach is very promising and can help improve the recognition rate of iris systems by removing noisy artifacts.

ACKNOWLEDGMENTS

Portions of the research in this paper use the CASIA iris image database collected by the Institute of Automation, Chinese Academy of Sciences. The financial assistance of the National Research Foundation (NRF) towards this research is hereby acknowledged. Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF. Armscor (SA) is also acknowledged for their financial support towards this project.

REFERENCES

A. Muron, P. Kois and J. Pospisil. 2001. Identification of persons by means of the Fourier spectra of the optical transmission binary models of human irises. Optics Communications 192, 161–167.
Adler, F. 1965. Physiology of the Eye. Mosby, St. Louis, MO.
Ali, J. and Hassanien, A. 2003. An iris recognition system to enhance e-security environment based on wavelet theory. Advanced Modelling and Optimization 5, 2.
B. Jähne, H. Haussecker and P. Geissler. 1999. Handbook of Computer Vision and Applications - Volume 2. Academic Press.
Bachoo, A. and Tapamo, J.-R. 2004. A segmentation method to improve iris-based person identification. In AFRICON 2004. Vol. 1. 403–408.
Bezdek, J. 1981. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press.
Boles, W. and Boashash, B. 1998. A human identification technique using images of the iris and wavelet transform. IEEE Trans. on Signal Processing 46, 4, 1185–1188.
CASIA Iris Image Database. http://www.sinobiometrics.com.
C.H. Chen, L.F. Pau and P.S.P. Wang. 1998. The Handbook of Pattern Recognition and Computer Vision (2nd Edition). World Scientific Publishing Co.
Chellappa, R. and Chatterjee, S. 1985. Classification of textures using Gaussian Markov random fields. IEEE Trans. Acoust., Speech, Signal Proc. 33, 4, 959–963.
Daugman, J. 1993. High confidence visual recognition of persons by a test of statistical independence. IEEE Trans. Pattern Analysis Mach. Intell. 15, 11, 1148–1161.
Daugman, J. 2003. The importance of being random: statistical principles of iris recognition. Pattern Recognition 36, 2, 279–291.
Davson, H. 1990. Davson's Physiology of the Eye. MacMillan, London.
Dunn, J. 1974. A fuzzy relative of the ISODATA process and its use in detecting compact well separated clusters. Journal of Cybernetics 3, 32–57.
Flom, L. and Safir, A. 1987. Iris recognition system. U.S. Patent 4,641,349.
K. Bae, S. Noh and J. Kim. 2003. Iris feature extraction using independent component analysis. In AVBPA. 838–844.
K. Lim, K. Lee, O. Byeon and T. Kim. 2001. Efficient iris recognition through improvement of feature vector and classifier. ETRI Journal 23, 2, 61–70.
K. Lim, Y. Wang and T. Tan. 2002. Iris recognition based on multichannel Gabor filtering. In Fifth Asian Conference on Computer Vision. Vol. 1. 279–283.
Kong, W. and Zhang, D. 2003. Detecting eyelash and reflection for accurate iris segmentation. International Journal of Pattern Recognition and Artificial Intelligence 17, 6, 1025–1034.
Kronfeld, P. 1968. The gross embryology of the eye. The Eye 1, 1–66.
L. Ma, T. Tan, Y. Wang and D. Zhang. 2003. Personal identification based on iris texture analysis. IEEE Trans. Pattern Analysis Mach. Intell. 25, 12, 1519–1533.
L. Ma, Y. Wang and T. Tan. 2002. Iris recognition using circular symmetric filters. In ICPR. Vol. 2. 414–417.
L. Ma, T. Tan, Y. Wang and D. Zhang. 2004. Efficient iris recognition by characterizing key local variations. IEEE Transactions on Image Processing 13, 6, 739–750.
MacQueen, J. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. Vol. 1. 281–297.
Mallat, S. 1989. A theory of multiresolution signal decomposition. IEEE Trans. on Pattern Analysis and Machine Intelligence 11, 7, 674–693.
P. Kruizinga, N. Petkov and S.E. Grigorescu. 1999. Comparison of texture features based on Gabor filters. In Proceedings of the 10th International Conference on Image Analysis and Processing. 142–147.
Q. Zhang, J. Wang, P. Gong and P. Shi. 2003. Study of urban spatial patterns from SPOT panchromatic imagery using textural analysis. Int. J. Remote Sensing 24, 21, 4137–4160.
R.M. Haralick, K. Shanmugam and I. Dinstein. 1973. Textural features for image classification. IEEE Trans. Systems, Man, and Cybernetics SMC-3, 6, 610–621.
W. Zorski, B. Foxon, J. Blackledge and M. Turner. 2002. Fingerprint and iris identification method based on the Hough transform. In Proceedings of IMA Third Conference on Imaging and Digital Image Processing.
Wildes, R. 1997. Iris recognition: an emerging biometric technology. Proceedings of the IEEE 85, 9, 1348–1362.