  • 12 IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 1, NO. 1, MARCH 2006

Personal Authentication Using 3-D Finger Geometry
Sotiris Malassiotis, Niki Aifanti, and Michael G. Strintzis, Fellow, IEEE

Abstract—In this paper, a biometric authentication system based on measurements of the user's three-dimensional (3-D) hand geometry is proposed. The system relies on a novel real-time and low-cost 3-D sensor that generates a dense range image of the scene. By exploiting 3-D information we are able to limit the constraints usually posed on the environment and the placement of the hand, and this greatly contributes to the unobtrusiveness of the system. Efficient, close to real-time algorithms for hand segmentation, localization and 3-D feature measurement are described and tested on an image database simulating a variety of working conditions. The performance of the system is shown to be similar to that of state-of-the-art hand geometry authentication techniques, but without sacrificing the convenience of the user.

Index Terms—Biometric authentication, hand geometry, range images.

I. INTRODUCTION

BIOMETRIC authentication, once used for granting access to high-security infrastructures, is gradually finding its place in a wider range of applications. However, until today the requirement for highly reliable authentication has led to compromises with respect to user acceptance. It is clear that reliability and user convenience should coexist in order to achieve a widespread acceptance of biometrics.

The work in this paper is partly motivated by applications where the convenience of the user is the first priority. These applications include personalization of services (home, office, car) and attendance tracking in working environments. A user authentication system based on measurements of three-dimensional (3-D) hand geometry is proposed. Unlike other hand geometry verification techniques, the proposed system is less obtrusive. The user is not obliged to place his/her hand on a surface, and generally there are fewer constraints regarding the placement of the hand (e.g., using pegs) or the environment (e.g., uniform background). To achieve this, a low-cost 3-D sensor is used that captures both an image of the hand as well as its 3-D structure, and novel algorithms for robust estimation of 3-D geometric hand features are proposed. Experimental results demonstrate that the accuracy of the system is comparable with state-of-the-art hand geometry recognition systems.

In the following section, we review the related work in this field, highlighting the novelties of the proposed work. The employed 3-D acquisition setup is briefly described in Section III. Then, hand detection, hand localization and finger localization algorithms are presented in Sections IV, V, and VI respectively. Extraction of features, which are used for measuring the similarity of hand geometry, is explained in Section VII. The performance of the system is evaluated with extensive experiments in Section VIII, while Section IX concludes the paper.

II. PREVIOUS WORK

Hand geometry recognition is one of the most popular biometrics used today for user verification. It works by comparing the 3-D geometry of the hand with a previously enrolled sample. A simple two-dimensional (2-D) camera sensor is commonly used to capture an image of the user's palm, while a lateral view of the hand is captured on the same CCD thanks to a mirror. The user has to put his/her hand on a special platter with knobs or pegs that constrain the placing of the hand on the platter. This greatly simplifies the process of feature extraction, which is performed by analyzing the image contours of the hand views [1], [2]. Various features, such as the width of the fingers, the length of the fingers and the width of the palm, have been proposed. Satisfactory recognition results are obtained (96% for recognition and less than 5% EER for authentication). Instead of using measurements of the hand for verification, [3] uses points on a hand silhouette contour as features, while matching is based on the mean alignment error between two sets of silhouette points. The authentication accuracy of the system for a database of 53 persons was about 2% FAR and 1.5% FRR.

The major limitation of the above approaches is their obtrusiveness, imposed by the use of pegs, which constrain the positioning and posture of the hand. Moreover, correct placement of the hand requires some training, and presents difficulties for specific user groups such as young children and the elderly. Therefore, several researchers have proposed to remove the requirement for pegs and to use a document scanner or back-lit display for the acquisition of hand images. In [4] a feature-based approach is used, and an FRR close to 3% was achieved for an FAR of 1% on a database of 70 people. [5] also extracts hand features from hand silhouettes and employs a hierarchical authentication scheme. For a database of 22 people, an FRR of 12% for an FAR of 2.22% is reported. Finally, in [6], implicit polynomials are fitted on hand contours and geometric invariants are subsequently computed from these polynomials. The invariants are then combined with geometric hand features to perform matching, and an FRR of 1% for an FAR of 1% on a small database (45 images) is reported.

Manuscript received April 28, 2005; revised November 9, 2005. This work was supported by Research Project BioSec IST-2002-001766 (Biometrics and Security, http://www.biosec.org) under the Information Society Technologies (IST) priority of the 6th Framework Programme of the European Community. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Davide Maltoni.
S. Malassiotis is with the Center for Research and Technology Hellas/Informatics and Telematics Institute, Thermi-Thessaloniki 57001, Greece (e-mail: malasiot@iti.gr).
N. Aifanti and M. G. Strintzis are with the Information Processing Laboratory, Electrical and Computer Engineering Department, Aristotle University of Thessaloniki, and the Informatics and Telematics Institute, Thessaloniki 541 24, Greece (e-mail: naif@iti.gr; strintzi@iti.gr).
FAR: false acceptance rate; FRR: false rejection rate; EER: equal error rate.
Digital Object Identifier 10.1109/TIFS.2005.863508
1556-6013/$20.00 © 2006 IEEE
A different approach is adopted in [7] for hand-shape-based authentication. A grating pattern is projected on the back of the user's hand and an image of the distorted pattern is captured with a CCD camera. Then a quad-tree representation of a binarized version of the image is used for similarity measurements. The authors achieved a verification rate of 99.04% using 100 images.

A touch-free technique for extracting hand geometry biometric data is proposed in [8]. It is based on the localization of finger creases on the front side of the palm and the computation of cross ratios among five-point tuples that are invariant to projective transformations. Zero FAR for an FRR of 2.8% is claimed on a small dataset containing hand images from 15 people. A uniform background was assumed in order to facilitate the detection of fingers.

Recently, [9] investigated the utility of 3-D finger shape as a biometric identifier. They use a 3-D scanner to acquire a range image and a corresponding color image of the back of the hand placed on a flat, dark surface. After finger alignment, a shape index is computed for each pixel based on the principal curvatures of the surface points. Matching of finger shapes is then achieved by computing the correlation between corresponding shape index images. They have tested their technique on a database of 132 persons, with five images for each person collected during two recording sessions with a one-week lapse. For images acquired in the same week the recognition rate was 99.4%, but it dropped to 75% when probe and gallery images were acquired with a one-week lapse.

The main difference of our approach in comparison with the above techniques is that fewer constraints are posed on the placement of the hand and on the environment. Working on a combination of color and 3-D information, we present robust algorithms which are capable of withstanding, to some degree, cluttered background, illumination variations, hand pose, finger bending and the appearance of rings. There are certainly limitations on the working conditions under which the system may operate reliably; e.g., working outdoors or under large pose and finger bending conditions may be problematic. However, these constraints are far less restrictive than those imposed by existing systems.

Another novelty of this paper is the exploitation of the 3-D shape of the fingers along with their 2-D silhouette to extract a set of discriminative features. Although 3-D finger shape as a possible biometric has been proposed in [9], the present technique offers several advantages. The work in this paper utilizes a real-time, low-cost 3-D sensor for 3-D image acquisition instead of the high-end range scanner used in [9]. Also, the proposed system facilitates relatively unconstrained hand placement, while in [9] the hand is placed on a flat surface with uniform background. Finally, the use of a limited number of cross-sectional 3-D finger measurements is adopted in this paper, while [9] uses "3-D shape images" to represent each finger. Therefore, biometric templates are only a few bytes long and thus may be efficiently stored in smart-cards.

III. DATA ACQUISITION

The proposed system relies on real-time quasisynchronous color and 3-D image acquisition based on the color structured-light approach [10]. The sensor is based on low-cost devices: an off-the-shelf CCTV color camera and a standard video projector. While a colored pattern is projected on the surface of the object, the color camera captures an image of the object [see Fig. 1(a)]. By analyzing the deformation of this pattern and using the triangulation principle, the 3-D coordinates of every pixel are estimated [Fig. 1(b)]. By switching rapidly between the colored pattern and white light, a color image, which is approximately synchronized with the depth image, may be captured as well. Using this double acquisition mode, a frame rate of about 15 frames/s is achieved. Although the current system prototype uses visible light, which is moderately annoying to the user, a version using invisible light is under development.

Fig. 1. (a) Image captured by the color camera with the color pattern projected. (b) Computed range image. Brighter pixels correspond to points closer to the camera and white pixels correspond to undetermined depth values.

In our experiments the system was optimized for an access control application scenario. For subjects located about one meter from the camera, and an effective working space of 60 cm × 50 cm × 50 cm, the average depth accuracy is about 0.5 mm. The spatial resolution of the range images is equal to the color camera resolution in one direction, while in the other direction it depends on the width of the color stripes of the projected light pattern and the bandwidth of the surface signal. For a low-bandwidth surface such as the back of the hand, the resolution is close to the resolution of the color camera. The image size is 580 × 780. The average size of the hand on the image is 400 × 450.

Due to the 3-D acquisition principle, apart from image noise, the acquired range images contain areas where no depth values are assigned (see Fig. 1). These are mainly areas that cannot be reached by the projected light (e.g., the sides of the fingers) and/or are highly refractive (e.g., painted finger nails and rings).

Using the above setup, a hand image database was compiled and used for conducting the experiments. A group of 73 volunteers participated in the recording. The recording was supervised, mainly because it was difficult for the users to place their hands inside the limited working volume of the sensor without any feedback. In the future we plan to provide such feedback automatically.

Each subject was asked to place his/her hand in front of his/her face with the back of the palm facing the sensor. We have found that this posture is the most convenient for the users, is interpreted unambiguously, and provides the best resolution of the hand in the images. Also, in this way hand geometry authentication may be applied as an additional biometric after 3-D face authentication.
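The triangulation principle behind the sensor can be sketched numerically. This is an illustrative simplification, not the authors' calibration: it assumes a single rectified camera-projector pair with a hypothetical focal length (in pixels) and baseline (in meters), and it models the "undetermined depth" pixels of Fig. 1(b) as entries with no valid disparity.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classical triangulation: depth is inversely proportional to disparity.

    disparity_px : observed shift (pixels) of a projected stripe between
                   the projector and camera views (0 = undetermined).
    focal_px     : camera focal length in pixels (hypothetical value).
    baseline_m   : projector-camera baseline in meters (hypothetical value).
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.nan)   # NaN marks undetermined depth
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth
```

For example, with an 800-pixel focal length and a 0.2 m baseline, a 100-pixel disparity corresponds to a depth of 1.6 m, which is the order of magnitude of the one-meter working distance quoted above.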
We have subsequently acquired several pairs of color and depth images for each subject (10–15 image pairs per person), depicting several variations of the "ideal" hand posture (see Fig. 2). The users were initially asked to keep their fingers straight and afterwards to relax their fingers, introducing some finger bending. Then the users were asked to wear a ring on one of their fingers. Finally, they were asked to vary the orientation of the palm by about 10–15 degrees with respect to the sensor. The recording was repeated after one week under the same conditions.

Fig. 2. Representative images from the recorded hand database depicting hand pose and posture variations and worn rings.

IV. HAND DETECTION

The first step is the segmentation of the hand from the body, which is achieved using the available depth information and exploiting a priori knowledge of the human body's geometric structure and of the authentication scenario.

The distance of the user's hand from his/her face is not guaranteed to be sufficiently large (usually between 5–20 cm). Therefore we may not rely on simple thresholding to separate the hand from the face. Detection and segmentation of the hand is instead based on a more elaborate scheme that relies on statistical modeling of the hand, arm and head-plus-torso points in 3-D space.

Let x be the 3-D coordinates of each pixel computed from the range image. The probability distribution of a 3-D point is modeled as a mixture of three Gaussians,

p(x) = Σ_{k=1}^{3} π_k N(x; μ_k, Σ_k),

where π_k are the prior probabilities of the head/torso, hand and arm blobs respectively, and N(x; μ_k, Σ_k) is a Gaussian with mean μ_k and covariance Σ_k. Maximum-likelihood estimation of the unknown parameters from the 3-D data is achieved by means of the Expectation-Maximization algorithm, which alternates between computing the posterior probabilities

w_k(x) = π_k N(x; μ_k, Σ_k) / Σ_j π_j N(x; μ_j, Σ_j)

of each state given the data and the current model parameters, and re-estimating the parameters as the responsibility-weighted statistics of the N data points:

μ_k = Σ_x w_k(x) x / Σ_x w_k(x),
Σ_k = Σ_x w_k(x) (x − μ_k)(x − μ_k)^T / Σ_x w_k(x),
π_k = (1/N) Σ_x w_k(x).

The convergence of the above iterative procedure relies on good initial parameter values. In our case these may be obtained by exploiting prior knowledge of the body geometry. Initial estimates of the head/torso 3-D blob parameters are obtained by means of an initial rough segmentation of the depth values into foreground/background objects. This is easily obtained by finding the threshold that optimally separates the two modes in the histogram of depth values [11]. Background pixels are assumed to belong to the head/torso blob, and their mean and covariance are readily obtained from the 3-D coordinates of these points. The remaining points, with depth values smaller than the threshold, are initially classified as hand/arm points. The parameters of the two foreground blobs (hand and forearm) are initialized by exploiting prior knowledge of their geometric structure.

Let m be the sample mean of the foreground points and e_1, e_2, e_3 be the eigenvectors of the scatter matrix computed from these foreground data points, ordered according to the magnitude of the corresponding eigenvalues. Initial estimates of the unknown parameters are then obtained from m, from the orthogonal eigenvector matrix of the scatter matrix and the corresponding eigenvalues, and from constants related to the relative size of the hand and forearm with respect to the arm (fixed values of these constants were used in the experiments). The physical interpretation of these initialization equations is illustrated in Fig. 3.
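The EM segmentation step can be sketched as follows. This is a generic Gaussian-mixture EM over 3-D points, not the authors' implementation; in particular, the knowledge-based initialization described above is replaced here by caller-supplied initial values.

```python
import numpy as np

def em_gmm(points, mu0, cov0, pi0, n_iter=20):
    """EM for a Gaussian mixture over d-dimensional points.

    points : (N, d) array of point coordinates (d = 3 for range data).
    mu0    : (K, d) initial means; cov0 : (K, d, d) initial covariances;
    pi0    : (K,) initial priors -- K = 3 blobs (head/torso, hand, forearm)
             in the paper's setting.
    """
    mu, cov, pi = mu0.astype(float), cov0.astype(float), pi0.astype(float)
    n, d = points.shape
    k_comp = len(pi)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each blob for each point
        resp = np.empty((n, k_comp))
        for k in range(k_comp):
            diff = points - mu[k]
            inv = np.linalg.inv(cov[k])
            norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov[k]))
            maha = np.einsum('ni,ij,nj->n', diff, inv, diff)
            resp[:, k] = pi[k] * np.exp(-0.5 * maha) / norm
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted priors, means and covariances
        nk = resp.sum(axis=0)
        pi = nk / n
        for k in range(k_comp):
            mu[k] = resp[:, k] @ points / nk[k]
            diff = points - mu[k]
            cov[k] = (resp[:, k, None] * diff).T @ diff / nk[k] + 1e-6 * np.eye(d)
    return mu, cov, pi
```

As the text notes, the procedure converges to a local maximum of the likelihood, which is why the knowledge-based initialization matters in practice.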
The centers of the blobs corresponding to the hand and forearm are placed along the principal axis of the arm, while the blobs' relative position and size are selected a priori with respect to the position and size of the full arm. Moreover, the orientation of each blob (reflected in its covariance matrix) is aligned to the orientation of the full arm.

Fig. 3. Illustration of the knowledge-based initialization of the 3-D blob distribution parameters. Ellipses represent iso-probability contours of the posterior distributions.

The algorithm converges after a few iterations to a local maximum of the data likelihood function given the parameters. The obtained parameters corresponding to the hand blob provide an estimate of the center and 3-D pose of the hand. In particular, the eigenvector of the hand covariance matrix corresponding to the smallest eigenvalue is approximately perpendicular to the plane defined by the hand surface. Then, it is straightforward to define a cutting plane that separates the hand from the face, where d is the distance of the cutting plane from the hand plane. The choice of the parameter d is a compromise between the allowable closeness of the hand to the face and finger bending. A measure of finger bending is the scattering of the hand points with respect to the estimated hand plane, which is proportional to the smallest eigenvalue of the hand covariance matrix. The points that lie behind the cutting plane are discarded, while the rest of the points (termed foreground points in the sequel of the paper) are used in the subsequent processing steps (see Fig. 4).

Fig. 4. Hand detection and segmentation. (a) Original image and (b) segmented image.

V. HAND LOCALIZATION

In this step, we use the foreground pixel mask, whose values are one for foreground pixels and zero otherwise, in order to localize the center and radius of the palm as well as the fingers. In the following we assume, without loss of generality, that the right hand is used.

Since depth estimation along the boundaries of the fingers is inaccurate, containing holes and gaps, we may not rely on the silhouette of the object; rather, we follow a more elaborate procedure, which is described below.

The projection of the center of the hand blob on the image plane is used as an initial estimate of the palm center, which is subsequently refined as described in the following. The chamfer distance algorithm [12] is used to efficiently compute the distance transform inside a bounding box centered on this projection. Since the distance transform provides the smallest distance of each point to the object boundary, and the structure of the palm is approximately circular, a maximum of the distance transform is expected near the center of the palm; this maximum value is an estimate of the palm radius (see Fig. 5).

Fig. 5. Estimation of the center and radius of the palm. (a) Foreground pixel mask and (b) distance transform with the estimated palm circle centered on the distance transform maximum.

Then, a rough estimate of the hand-forearm axis is obtained by computing the eigenvector corresponding to the largest eigenvalue of the covariance matrix of the 2-D foreground pixel coordinates. The slant of this axis with respect to the vertical image axis gives approximately the orientation angle of the hand.

Given these global hand features, we may proceed with the detection of the fingers. Homocentric circular arcs are drawn around the palm center with increasing radius, excluding the lower part of the circle that corresponds to the wrist, as shown in Fig. 6(b). By scanning the nonzero pixels on each of these arcs, a list of circular segments is obtained. Very small or very large segments, compared to the average finger width, are rejected. The midpoints of the remaining segments are considered candidates for the finger skeletons. Our next goal is to cluster together the skeleton points corresponding to each finger. First, the minimum spanning tree that has this set of points as its vertices is computed. Then, by discarding
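The palm-localization idea above can be sketched as follows. This is an illustrative sketch: it applies a Euclidean distance transform to the full mask, whereas the paper uses the chamfer distance [12] inside a bounding box around the projected hand-blob center.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def palm_center_radius(mask):
    """Estimate the palm center and radius from a binary foreground mask.

    The palm is roughly circular, so the maximum of the distance
    transform of the mask lies near the palm center, and its value
    approximates the palm radius (cf. Fig. 5).
    """
    dist = distance_transform_edt(mask)
    center = np.unravel_index(np.argmax(dist), dist.shape)  # (row, col)
    radius = dist[center]
    return center, radius
```

On a synthetic disk-shaped mask, the recovered center and radius match the disk that generated it, which is the property the refinement step relies on.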
from the minimum spanning tree those edges that are longer than some threshold, an initial clustering is obtained. Then, clusters containing only a few points (fewer than three in our experiments) are discarded, while pairs of clusters that satisfy a proximity criterion are merged together. The proximity of two clusters is measured by the collinearity of the two sets of points: let e1 be the error in fitting a 2-D line on the set of points belonging to the first cluster, e2 the corresponding error for the second cluster, and e12 that for the union of the cluster points; the merging criterion then compares e12 against the individual fitting errors through a constant factor (equal to 1 in our experiments). After the merging step, the four longest remaining clusters are retained. These will normally correspond to the four fingers, excluding the thumb [Fig. 6(c)].

Fig. 6. Finger detection. (a) Original depth image, (b) circular arcs used to detect finger segments, (c) clustering of segment mid-points, and (d) 2-D lines fitted on skeleton points and boundaries.

The correct ordering of the fingers is established by sorting the clusters with respect to the slant of the line segment defined by the furthest point of each cluster and the center of the palm. Then, 2-D line segments are fitted on the points of each cluster, and approximate finger skeletons are obtained. Finally, by searching in a direction perpendicular to the finger skeleton, we locate finger boundary points and fit 2-D line segments corresponding to the left and right finger boundaries [Fig. 6(d)].

The advantage of the approach described above, compared with those based on the silhouette of the hand, is that it works successfully for noisy masks, which may also contain disconnected boundaries (e.g., due to rings worn on one or more fingers, sensor errors or segmentation errors). In addition, it is computationally efficient, since it operates only on a small subset of the image pixels. We believe that this algorithm is also useful when hand segmentation is obtained by other means, such as skin color segmentation.

VI. FINGER BOUNDARY LOCALIZATION

Depth acquisition is not always feasible in the vicinity of finger discontinuities. This is due to the occlusion problem that is inherent in 3-D acquisition techniques employing the triangulation principle. In our case, not all hand points illuminated by the projector are visible by the camera, and not all points visible from the camera are illuminated by the projector. This has the effect of missing depth information along finger boundaries. Therefore, color images are used as well, to localize finger boundaries more accurately. Unfortunately, we have to cope with cluttered background and low contrast, since the hand is in front of the face and they have the same color. Thus, prior knowledge of human hand geometry and of the photometric properties of the object surface is exploited.

First, a 2-D geometric model of each finger is created, using the finger skeleton and boundaries estimated from the foreground pixel mask during the previous step (see Fig. 7).

Fig. 7. The 2-D finger model. Finger end-points (white circles) are the end-points of the projections of the left and right boundary segments on the skeleton axis (dashed line). The widths of the finger model at the top and bottom are obtained by averaging the corresponding boundary widths and adding a constant offset that depends mainly on the distance of the hand from the camera.

Then, we sample points from the model boundary, and for each point a set of local gradient maxima is collected along a line that passes through the point and is perpendicular to the model boundary (a few pixels around it). The optimal set from these candidate points, corresponding to the finger boundary, is subsequently estimated by minimizing a global cost function that penalizes the distance of the line segments defined by consecutive candidate points from the 2-D finger
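The initial skeleton-point clustering described above can be sketched as follows. This is an illustrative version: it performs only the minimum-spanning-tree construction and long-edge removal, omitting the small-cluster pruning and collinearity-based merging.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import pdist, squareform

def mst_cluster(points, edge_thresh):
    """Cluster 2-D skeleton candidate points: build the minimum spanning
    tree of the point set, discard edges longer than a threshold, and
    take the resulting connected components as the initial clusters.
    """
    d = squareform(pdist(points))          # pairwise Euclidean distances
    mst = minimum_spanning_tree(d).toarray()
    mst[mst > edge_thresh] = 0             # discard long edges
    n, labels = connected_components(mst != 0, directed=False)
    return n, labels
```

With a suitable threshold (comparable to the arc-step spacing), points sampled along different fingers fall into different components, even when a finger's mask is noisy.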
model. The optimization of the cost function is efficiently achieved using dynamic programming. From this set of points we may subsequently estimate the finger-tip of each finger (by projection of all points on the symmetry axis of the model) and a sequence of linear segments which are perpendicular to the finger axis and whose end-points lie on the boundary polygon.

Finger boundary estimates obtained using the above model-based scheme are biased with respect to the true occluding boundary, since the profile of the brightness function across finger boundaries is not a step function but rather decays more smoothly. The bias also increases proportionally to the width of the smoothing kernel that is normally used for computing the image gradient. To compensate for this error, the initial estimate of the boundary is subsequently refined by fitting a parametric edge model to the brightness values in the vicinity of the current boundary estimate (a few pixels on either side). The brightness profile across the boundary is approximated with a function whose unknown parameters are estimated by minimizing the mean square error between the model and the image brightness samples using the Levenberg–Marquardt technique. This function has a ramp-like shape within an interval around the boundary and is flat outside this interval, taking constant values on the left and on the right of the ramp; the correction term for the left/right finger boundary is derived from the fitted ramp. An example of boundary refinement is illustrated in Fig. 8.

Fig. 8. Finger boundary segments (a) before and (b) after subpixel correction.

Then, for each finger we define two signature functions, parameterized by the 3-D distance from the finger tip computed along the ridge of the finger. For each such measurement, the distance from the finger-tip is computed from the 3-D coordinates of the segment mid-point. The first function corresponds to the width of the finger in 3-D. This is computed by fitting a 3-D line on the 3-D points corresponding to each segment, projecting the end-points of the segment on this line and computing their Euclidean distance. The second signature corresponds to the mean curvature of the curve that is defined by the 3-D points corresponding to each segment.

VII. FEATURE EXTRACTION AND MATCHING

We use the above signature functions to extract 3-D geometric features of the fingers, and we utilize these features for similarity measurement. Altogether, ninety-six measurements quasi-invariant to hand position, orientation and finger bending are used. The thumb is excluded, since its measurements are unreliable. Twelve width and twelve curvature measurements are computed for each of the remaining four fingers. The length of each finger is estimated by the largest computed distance from the corresponding finger-tip. During the training phase we also compute the average length of each finger. Finger width and curvature measurements are computed by sampling the two signature functions at positions that take equally spaced values from 0.2 to 0.8 of the average finger length. All 96 measurements are subsequently concatenated into a feature vector. Similarity between two feature vectors is estimated by means of the L1 distance.

The resulting similarity score is normalized to the range [0, 1] by applying a normalization function mapping the initial score to a normalized score (see also [13] for a discussion of different normalization schemes). This requires, separately for each enrolled user, the estimation of the mean and variance of the genuine transaction score distribution. This is obtained using the Hampel [14] robust estimator. Specifically, let s_1, ..., s_n be a set of genuine transaction scores obtained from a bootstrap set of images of a given subject. Then, a robust estimate of their mean is obtained by means of the iteratively re-weighted least squares technique, with weights given by the Hampel influence function, whose parameters a, b and c are chosen as the 70th, 85th and 95th percentiles of the absolute residuals of the scores. The mean estimate is initially set equal to the median of the scores. Convergence is usually attained after a few (fewer than ten) iterations. For the standard deviation, the value of c above was shown to be a fairly robust estimate.

VIII. EXPERIMENTAL RESULTS

A database containing several hand appearance variations was recorded as described in Section III and used for the experimental evaluation of the full authentication chain under different conditions. The database contains images from 73 volunteers and is separated into two recording sessions with at least a one-week lapse between the recordings. The first session was used for training and the second one for testing. In order to minimize the training bias, we used for training only images depicting "canonical" cases (hand approximately parallel and close to the camera, extended fingers, no rings worn). Also, we performed several experiments by varying the training set. For each experiment we select a subset of the training images and use it for training the algorithms. Specifically, we randomly select 50 persons (20 such samplings in total), and four image pairs from the images corresponding to each person in the sampling (ten such samplings), leading to 100 experiments. The results presented in the sequel are averages over all experiments. For each training image pair we applied the proposed algorithms and extracted a feature vector containing the 3-D finger shape measurements.

In the following we evaluate the performance of the system under various operating conditions and also describe its limitations and breakdown points.

A. Accuracy

The test set contains all images of the second recording session (2880 image pairs in total). The matching score for a probe image is computed as follows. The hand detection, localization and feature extraction algorithms are applied on the pair of color and depth images, and a feature vector is extracted. Then, the distances between this vector and the four previously estimated gallery feature vectors corresponding to the claimed identity are computed. The matching score is set to the minimum distance and normalized to the range [0–1] as described in Section VII.

We subsequently performed several experiments under different variations of the test set. Each experiment consists of matching every image in the test set with all images in the gallery set. This implies that a matching score is computed for each probe image in the test set and for each claimed identity (1 to 73), giving a table (2880 × 73) of genuine and impostor matching scores. From this table we can subsequently draw Receiver Operating Characteristic curves (false acceptance rate versus verification rate) by varying the authentication threshold from 0 to 1, and Cumulative Match Characteristic curves by computing the rank of the genuine transaction scores (identification).

We have also noticed that the performance of the system may be significantly improved by combining the scores computed on several images of the same individual. The best results were obtained by computing the average of the matching scores. In practice this may be achieved by acquiring a sequence of hand images instead of a single image and then computing matching scores for a subset of the acquired frames. Averaging the scores has the effect of reducing the noise in the finger shape measurements.

The execution time of the algorithm, including 3-D image acquisition, hand detection and feature extraction, is 0.2 s on a Pentium 4, 3-GHz processor. In particular, hand detection and global feature extraction are executed at a rate of 10 frames/s. So, in a practical application, feedback may be provided to the user in real time regarding the quality of the recorded images, using prior knowledge of hand geometry (e.g., palm radius, lengths of the fingers). Thus, low-quality images may be discarded.

Figs. 9 and 10 demonstrate the verification and identification results obtained for various values of M, where the matching scores were computed by randomly generating M-tuples of the probe images corresponding to the same individual.

Fig. 9. Receiver operating characteristic curve showing false acceptance rate (FAR) versus verification rate (VR) for M-tuples of probe images and varying M.
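The evaluation quantities above can be illustrated with a small helper that derives FAR, FRR and the equal error rate from arrays of genuine and impostor scores, such as the columns of the 2880 × 73 score table. This is an illustrative sketch, not the authors' evaluation code; it assumes distance-like scores normalized to [0, 1], where smaller means a better match.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Sweep the authentication threshold over [0, 1] and return the
    operating point where FAR and FRR are closest (the EER), along
    with the corresponding threshold.
    """
    thresholds = np.linspace(0.0, 1.0, 1001)
    frr = np.array([(genuine > t).mean() for t in thresholds])    # rejected genuines
    far = np.array([(impostor <= t).mean() for t in thresholds])  # accepted impostors
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2, thresholds[i]
```

The verification rate plotted in Fig. 9 is simply 1 − FRR at each threshold, so the same two arrays suffice to draw the ROC curve.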
Fig. 10. Cumulative Match Characteristic curve showing identification rate (IR) versus identification rank for M-tuples of probe images and varying M.

From the graphs it is evident that the performance improves as M gets larger; however, beyond a certain M the relative improvement becomes very small. For the smallest value of M the equal error rate is 5.4% and the rank-1 identification rate is 90%. For the largest value of M the equal error rate becomes 3.6%, while the rank-1 identification rate rises to 98%. Of course, the identification performance is a function of the database size. Using the extrapolation formula in [15]

    P(N) = \int_0^1 (1 - FAR(t))^{N-1} \, dFRR(t),

where FRR(t) and FAR(t) are the false rejection rate and false acceptance rate as a function of the threshold t, and N is the gallery size, the extrapolated identification rate is equal to 89% for the smaller gallery size considered and 85% for the larger.

We have also compared our results with those obtained by other authors [1]-[3], [9], and similar performance was obtained. Since the conditions of these experiments vary greatly (e.g., size of the data set, acquisition setup, experimental protocol, image quality, etc.), we have omitted these results.

B. Hand Placement and Pose

The placement of the hand with respect to the camera and the configuration of the fingers have an effect on the classification accuracy. The system is designed to operate with the hand placed in front of the body. The distance of the hand from the body can be as small as 10 cm. For smaller distances there are a few cases (usually exhibiting finger bending) in which the algorithm fails to find a separating plane and crops part of the fingers. Also, the hand should be located inside the working volume of the 3-D sensor. This is a limitation of the current setup, since the 3-D working volume is rather limited (see Section III), and it may be partially lifted by automatically detecting the center of the hand (as described in Section IV) and providing feedback to the user.

The accuracy of the system is obviously a function of the image resolution and the size of the hand on the image, and therefore it depends on the distance of the hand from the camera. We have tested the effect of the distance from the camera by splitting the test set into two subsets. The first subset contained images with the hand close to the camera, and the other set contained images with the hand further from the camera (15 cm on average). The average hand size in the first set is approximately 50% of the image area, while in the second set it is about 38%. The equal error rate corresponding to the first test set was 3.2%, while for the second set it was 3.7%.

The proposed system was shown to work reliably with palm orientations up to 15 degrees with respect to the camera. For larger angles the accuracy was shown to deteriorate rapidly, mainly due to finger occlusion and self-occlusion. The deterioration in verification accuracy between images approximately parallel to the camera plane and those with an orientation angle (with respect to the Y axis) greater than 10 degrees was about 14%. We believe that there is room for improvement of the results with respect to the hand pose, but this relies mostly on the improvement of 3-D acquisition in the vicinity of finger boundaries. Obviously, the accuracy of authentication systems
based on hand geometry relies on the accuracy of hand silhouette extraction, and the best results were obtained using high-resolution document scanners. In our case, cross-sectional finger measurements are computed from the occluding boundaries localized on the color image and the finger orientation obtained from the depth image. This leads to fairly accurate results when the hand is approximately parallel to the 3-D sensor plane, but if there is a slant (e.g., more than 10 degrees) the measurements will be biased. In the future we plan to cope with this problem by using a shape-from-shading approach to reconstruct the 3-D structure of the finger surface over regions where such information is missing. Then it would be possible to obtain more accurate measurements even under relatively large hand pose variations.

C. Hand Posture

The proposed algorithm was designed to operate on hand images depicting a wide open palm with extended fingers, not touching each other. Finger bending violates the assumptions used in various phases of the algorithm, such as the detection of fingers and their image boundaries. However, some finger bending may be tolerated by the algorithm, depending on the orientation of the hand. The worst case is when the hand is mostly slanted with respect to the camera and the fingers appear the least straight when projected on the image plane. In our experiments we have included cases depicting finger bending up to 15 degrees at the knuckle joints. In all cases the algorithm was shown to detect the fingers and extract measurements successfully. However, the classification accuracy was shown to deteriorate. The equal error rate computed on images depicting only straight fingers is 3.1% and rises to 4.2% for fingers exhibiting bending.

D. Background and Illumination

Images used in the experiments were recorded in an office environment with a cluttered background, while the placement of the hand in front of the body implies additional clutter. With the use of depth information and prior knowledge of hand geometry, obstructions from the background are efficiently avoided.

The illumination of the hand is mostly attributed to the video projector light. Therefore, variations in the ambient illumination of the scene (the recording was performed during the day with natural or artificial illumination) were not shown to affect the performance of the algorithm. We have also experimented with images recorded outdoors; however, the quality of the depth images was not sufficient to reliably extract measurements.

IX. CONCLUSION

In this paper, we have proposed a new approach for biometric authentication that is based on measurements of the 3-D hand geometry using a real-time, low-cost 3-D sensor. We have demonstrated the ability of the proposed algorithms to work robustly in relatively unconstrained conditions, while the results obtained on a relatively large database indicate that performance is not sacrificed. Although the error rates achieved are higher than those required in security applications, there are several other emerging applications, such as personalization of services and attendance control, that may benefit from the unobtrusive user authentication achieved by the proposed system. Furthermore, if the proposed system is combined with other authentication modalities such as face recognition, the overall performance of the multimodal system is expected to be superior, since 3-D hand geometry is not affected by variations in illumination, age, obstructions, etc. In particular, the same 3-D sensor may be used to capture face and hand images, and therefore the proposed technique is ideal for fusion with 3-D face biometrics [16], [17]. This is expected to lead to a low-cost solution offering highly reliable authentication without sacrificing user convenience.

REFERENCES

[1] A. Jain, A. Ross, and S. Pankanti, "A prototype hand geometry-based verification system," in Proc. 2nd Int. Conf. Audio- and Video-Based Biometric Person Authentication (AVBPA), Washington, DC, Mar. 1999, pp. 166-171.
[2] R. Sanchez-Reillo, C. Sanchez-Avila, and A. Gonzalez-Marcos, "Biometric identification through hand geometry measurements," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 10, pp. 1168-1171, 2000.
[3] A. K. Jain and N. Duta, "Deformable matching of hand shapes for verification," in Proc. Int. Conf. Image Processing, Kobe, Japan, Oct. 1999, pp. 857-861.
[4] Y. Bulatov, S. Jambawalikar, P. Kumar, and S. Sethia, "Hand recognition using geometric classifiers," in Proc. 1st Int. Conf. Biometric Authentication (ICBA), Hong Kong, China, Jul. 2004, pp. 753-759.
[5] L. Wong and P. Shi, "Peg-free hand geometry recognition using hierarchical geometry and shape matching," in Proc. IAPR Workshop on Machine Vision Applications, Nara, Japan, 2002, pp. 281-284.
[6] C. Oden, A. Ercil, and B. Buke, "Hand recognition using implicit polynomials and geometric features," Pattern Recognit. Lett., vol. 24, no. 13, pp. 2145-2152, 2003.
[7] Y. Lay, "Hand shape recognition," Opt. Laser Technol., vol. 32, pp. 1-5, 2000.
[8] G. Zheng, C. Wang, and T. E. Boult, "Personal identification by cross ratios of finger features," in Proc. Int. Conf. Pattern Recognition, Workshop on Biometrics, Cambridge, MA, Aug. 2004.
[9] D. L. Woodard and P. J. Flynn, "3D finger biometrics," in Proc. Workshop on Biometric Authentication, Prague, May 2004.
[10] F. Forster, M. Lang, and B. Radic, "Real-time 3D and color camera," in Proc. ICAV3D 2001, Mykonos, Greece, May 2001.
[11] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst., Man, Cybern., vol. SMC-9, pp. 62-66, 1979.
[12] G. Borgefors, "Distance transformations in arbitrary dimensions," Comput. Vis., Graph., Image Process., vol. 27, pp. 321-345, 1984.
[13] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognit., vol. 38, no. 12, pp. 2270-2285, Dec. 2005.
[14] F. R. Hampel, P. J. Rousseeuw, E. M. Ronchetti, and W. A. Stahel, Robust Statistics: The Approach Based on Influence Functions. New York: Wiley, 1986.
[15] P. Phillips, P. Grother, R. J. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone, "Face Recognition Vendor Test 2002: Evaluation Report," Nat. Inst. Stand. Technol., Mar. 2003. [Online]. Available: www.itl.nist.gov/iad/894.03/face/face.html
[16] S. Malassiotis and M. G. Strintzis, "Robust face recognition using 2D and 3D data: Pose and illumination compensation," Pattern Recognit., vol. 38, no. 12, pp. 2536-2548, Dec. 2005.
[17] F. Tsalakanidou, S. Malassiotis, and M. G. Strintzis, "Face authentication using color and depth images," IEEE Trans. Image Process., vol. 14, no. 2, pp. 152-168, Feb. 2005.
Sotiris Malassiotis was born in Thessaloniki, Greece, in 1971. He received the B.S. and Ph.D. degrees in electrical engineering from the Aristotle University of Thessaloniki in 1993 and 1998, respectively.
From 1994 to 1997, he was conducting research in the Information Processing Laboratory of the Aristotle University of Thessaloniki. He is currently a Senior Researcher in the Informatics and Telematics Institute, Thessaloniki. He has participated in several European and national research projects. He is the author of more than 20 articles in refereed journals and more than 30 papers in international conferences. His research interests include stereoscopic image analysis, range image analysis, pattern recognition, and computer graphics.

Niki Aifanti received the Diploma in electrical and computer engineering in 1999 from the Electrical and Computer Engineering Department, University of Thessaloniki, Thessaloniki, Greece, and the M.Sc. degree in multimedia signal processing in 2001 from the University of Surrey, U.K. She is currently pursuing the Ph.D. degree in the same department, where she holds research and teaching assistantship positions.
She is a Graduate Research Assistant with the Informatics and Telematics Institute, Thessaloniki, Greece. Her research interests include face and gesture recognition, computer vision, pattern recognition, and image processing.
Ms. Aifanti is a member of the Technical Chamber of Greece.

Michael G. Strintzis (S'68-M'70-SM'80-F'04) received the Diploma in electrical engineering from the National Technical University of Athens, Athens, Greece, in 1967, and the M.A. and Ph.D. degrees in electrical engineering from Princeton University, Princeton, NJ, in 1969 and 1970, respectively.
He then joined the Electrical Engineering Department at the University of Pittsburgh, Pittsburgh, PA, where he served as Assistant (1970-1976) and Associate (1976-1980) Professor. Since 1980, he is Professor of electrical and computer engineering at the University of Thessaloniki and, since 1999, Director of the Informatics and Telematics Research Institute, Thessaloniki. His current research interests include 2-D and 3-D image coding, image processing, pattern recognition, biomedical signal and image processing, and DVD and Internet data authentication and copy protection.
Dr. Strintzis has served as an Associate Editor of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY. In 1984, he was awarded one of the Centennial Medals of the IEEE.