
An Embedded Real-Time Finger-Vein Recognition System for Mobile Devices

Zhi Liu and Shangling Song

(This work was supported in part by the National Natural Science Foundation of China (No. 60902068), the Shandong Provincial Natural Science Foundation (No. 2009ZRB019RX), and the Technology Development Program of Shandong Province (No. 2010GGX10125). Zhi Liu is with the School of Information Science and Engineering, Shandong University, Jinan, 250100, China; e-mail: liuzhi@sdu.edu.cn.)

Abstract: With the development of consumer electronics, the demand for simple, convenient, and high-security authentication systems for protecting private information stored in mobile devices has steadily increased. In consideration of emerging requirements for information protection, biometrics, which uses human physiological or behavioral features for personal identification, has been extensively studied as a solution to security issues. However, most existing biometric systems have high complexity in time or space or both, and are thus not suitable for mobile devices. In this paper, we propose a real-time embedded finger-vein recognition system for authentication on mobile devices. The system is implemented on a DSP platform and equipped with a novel finger-vein recognition algorithm. The proposed system takes only about 0.8 seconds to verify one input finger-vein sample and achieves an equal error rate (EER) of 0.07% on a database of 100 subjects. The experimental results demonstrate that the proposed finger-vein recognition system is qualified for authentication on mobile devices.

Index Terms: finger-vein recognition; biometrics; mobile devices; DSP

I. INTRODUCTION

Private information is traditionally protected by passwords or Personal Identification Numbers (PINs), which are easy to implement but are vulnerable to the risks of exposure and of being forgotten. Biometrics, which uses human physiological or behavioral features for personal identification, has attracted more and more attention and is becoming one of the most popular and promising alternatives to traditional password or PIN based authentication techniques [1]. Moreover, some multimedia content in consumer electronic appliances can be secured by biometrics [2]. There is a long list of available biometric patterns, and many such systems have been developed and implemented, including those for the face, iris, fingerprint, palmprint, hand shape, voice, signature, and gait. Notwithstanding this great and increasing variety of biometric patterns, no biometric has yet been developed that is perfectly reliable or secure. For example, fingerprints and palmprints are usually frayed; voice, signatures, hand shapes, and iris images are easily forged; face recognition can be made difficult by occlusions or face-lifts [3]; and biometrics such as fingerprint, iris, and face recognition are susceptible to spoofing attacks, that is, the biometric identifiers can be copied and used to create artifacts that deceive many currently available biometric devices. The great challenge to biometrics is thus to improve recognition performance in terms of both accuracy and efficiency while remaining maximally resistant to deceptive practices. To this end, many researchers have sought to improve reliability and frustrate spoofers by developing biometrics that are highly individuating, yet at the same time present a highly complex, hopefully insuperable challenge to those who wish to defeat them [4]. For consumer electronics applications in particular, biometric authentication systems also need to be cost-efficient and easy to implement [5].

The finger-vein is a promising biometric pattern for personal identification in terms of both security and convenience [6]. Compared with other biometric traits, the finger-vein has the following advantages [7]: (1) the vein is hidden inside the body and is mostly invisible to human eyes, so it is difficult to forge or steal; (2) the non-invasive and contactless capture of finger-veins ensures both convenience and hygiene for the user, and is thus more acceptable; (3) the finger-vein pattern can only be taken from a live body, and is therefore a natural and convincing proof that the subject whose finger-vein is successfully captured is alive.

In the present study, we designed a special device for acquiring high-quality finger-vein images and propose a DSP-based embedded platform on which the finger-vein recognition system is implemented, in order to achieve better recognition performance at reduced computational cost.

The rest of this paper is organized as follows. An overview of the proposed system is given in Section II. The device for finger-vein image acquisition is introduced in Section III. Our recognition method is described in Section IV. Experimental results are presented in Section V. Finally, concluding remarks are given in Section VI.
II. OVERVIEW OF THE SYSTEM

The proposed system consists of three hardware modules: the image acquisition module, the DSP mainboard, and the human-machine communication module. The structure diagram of the system is shown in Fig. 1. The image acquisition module is used to collect finger-vein images. The DSP mainboard, which includes the DSP chip, memory (flash), and a communication port, is used to execute the finger-vein recognition algorithm and to communicate with the peripheral device. The human-machine communication module (LED or keyboard) is used to display recognition results and to receive inputs from users.

Fig. 1. The hardware diagram of the proposed system.

The proposed finger-vein recognition algorithm contains two stages: the enrollment stage and the verification stage. Both stages start with finger-vein image pre-processing, which includes detection of the region of interest (ROI), image segmentation, alignment, and enhancement. In the enrollment stage, after the pre-processing and feature extraction steps, the finger-vein template database is built. In the verification stage, the input finger-vein image (with its claimed ID) is matched against the corresponding template after its features are extracted, leading to an accept or reject decision. Fig. 2 shows the flow chart of the proposed algorithm. Several methods have been proposed for finger-vein matching. Considering computational complexity, efficiency, and practicability, however, we propose a novel method based on fractal theory, which is introduced in detail in Section IV.

Fig. 2. The flow chart of the proposed recognition algorithm.
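To make the two-stage flow in Fig. 2 concrete, the following is a minimal Python sketch of the enrollment/verification control flow only. The class name, the callable parameters, and the in-memory template store are illustrative assumptions, not the authors' implementation; the concrete pre-processing, feature extraction, and distance steps are those described in Section IV and are passed in as functions.

```python
from typing import Callable, Dict, Tuple
import numpy as np

Features = np.ndarray  # placeholder type for the extracted feature maps

class FingerVeinVerifier:
    """Sketch of the enrollment/verification flow of Fig. 2 (illustrative, not the authors' code)."""

    def __init__(self,
                 preprocess: Callable[[np.ndarray], np.ndarray],   # ROI, alignment, enhancement
                 extract: Callable[[np.ndarray], Features],        # Section IV.C-D features
                 distance: Callable[[Features, Features], Tuple[float, float]],  # (HD, H_Lambda)
                 th1: float, th2: float):
        self.preprocess, self.extract, self.distance = preprocess, extract, distance
        self.th1, self.th2 = th1, th2
        self.templates: Dict[str, Features] = {}

    def enroll(self, subject_id: str, raw_image: np.ndarray) -> None:
        # Enrollment stage: pre-process, extract features, store the template.
        self.templates[subject_id] = self.extract(self.preprocess(raw_image))

    def verify(self, subject_id: str, raw_image: np.ndarray) -> bool:
        # Verification stage: compare the probe against the claimed identity's template.
        probe = self.extract(self.preprocess(raw_image))
        hd, hl = self.distance(probe, self.templates[subject_id])
        return hd < self.th1 and hl < self.th2   # accept only if both distances are small
```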
III. IMAGE ACQUISITION

To obtain high-quality near-infrared (NIR) images, a special device was developed for acquiring images of the finger-vein without being affected by ambient temperature. Generally, finger-vein patterns can be imaged based on the principle of either light reflection or light transmission [8]. We developed a finger-vein imaging device based on light transmission, which gives more distinct images.

Our device mainly includes the following modules: a monochromatic camera with a resolution of 580 × 600 pixels, daylight cut-off filters (light with a wavelength below 800 nm is cut off), a transparent acryl plate (10 mm thick), and the NIR light source. The structure of the device is illustrated in Fig. 3. The transparent acryl serves as the platform for locating the finger and for removing uneven illumination. The NIR light irradiates the back side of the finger. In [9], a light-emitting diode (LED) was used as the illumination source for NIR light. With an LED illumination source, however, the shadow of the finger-vein appears clearly in the captured images. To address this problem, an NIR laser diode (LD) is used in our system. Compared with an LED, an LD has stronger permeability and higher power. In our device, the wavelength of the LD is 808 nm. Fig. 4 shows an example raw finger-vein image captured by our device.

Fig. 3. Illustration of the imaging device.

Fig. 4. An example raw finger-vein image captured by our device.

IV. PROPOSED ALGORITHM

A. Image Segmentation and Alignment

Because the position of the finger usually varies across different finger-vein images, it is necessary to normalize the images before feature extraction and matching. The bone in the finger joint is articular cartilage; unlike other bones, it can be easily penetrated by NIR light. When a finger is irradiated by uniform NIR light, the image of the joint is therefore brighter than that of the other parts. Consequently, in the horizontal projection of a finger-vein image, the peaks of the projection curve correspond to the approximate positions of the joints (see Fig. 5). Since the second joint of the finger is thicker than the first joint, the peak value at the second joint is less prominent. Hence, the position of the first joint is used for determining the position of the finger.

Fig. 5. Horizontal projection of the raw image.

The alignment module includes the following steps, sketched in code below. First, the part between the two joints in the finger-vein image is segmented based on the peak values of the horizontal projection of the image. Second, a Canny operator with a locally adaptive threshold is used to obtain the single-pixel edge of the finger. Third, the midpoints of the finger edges are determined by edge tracing so that the midline can be obtained. Fourth, the image is rotated to make the midline of the finger horizontal. Finally, the ROI of the finger-vein image is segmented according to the midline (see Fig. 6).

Fig. 6. The segmented ROI of the finger-vein image.
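The sketch below approximates the joint localization and alignment steps with OpenCV and NumPy. It assumes the finger lies roughly horizontally in the image, uses fixed Canny thresholds rather than the locally adaptive threshold described above, fits the midline with a least-squares line instead of explicit edge tracing, and the `half_height` crop parameter is a hypothetical choice; it is an approximation of the procedure, not the authors' code.

```python
import cv2
import numpy as np

def joint_columns(img: np.ndarray, k: int = 2) -> np.ndarray:
    """Sum gray levels over each column; the strongest peaks approximate the joint positions."""
    profile = np.convolve(img.sum(axis=0).astype(float), np.ones(21) / 21, mode="same")
    is_peak = (profile[1:-1] >= profile[:-2]) & (profile[1:-1] >= profile[2:])
    peaks = np.where(is_peak)[0] + 1
    return np.sort(peaks[np.argsort(profile[peaks])[-k:]])      # k brightest peaks, left to right

def align_and_crop(img: np.ndarray, half_height: int = 60) -> np.ndarray:
    """Rotate so the finger midline is horizontal, then crop the ROI between the two joints."""
    edges = cv2.Canny(img, 40, 120)                  # fixed thresholds (paper: locally adaptive)
    ys, xs = np.nonzero(edges)
    cols = np.unique(xs)
    mids = np.array([ys[xs == c].mean() for c in cols])          # finger midpoint per column
    slope, intercept = np.polyfit(cols, mids, 1)                 # least-squares midline
    angle = np.degrees(np.arctan(slope))
    h, w = img.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h))
    left, right = joint_columns(rotated)                         # keep the part between the joints
    center = int(np.clip(intercept + slope * w / 2, half_height, h - half_height))
    return rotated[center - half_height:center + half_height, left:right]
```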
B. Image Enhancement

The segmented finger-vein image is then enhanced to improve its contrast, as shown in Fig. 7. The image is first resized to 1/4 of its original size and then enlarged back to the original size. Next, the image is resized to 1/3 of the original size for recognition. Bicubic interpolation is used in these resizing steps. Finally, histogram equalization is applied to enhance the gray-level contrast of the image.

Fig. 7. The procedure of our method for image enhancement.
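A minimal sketch of this enhancement chain, assuming an 8-bit grayscale ROI and OpenCV, is shown below. The down/up-scaling pass acts as a crude smoothing step before the final 1/3 resize and histogram equalization; the choice of bicubic interpolation for every resize is taken from the text.

```python
import cv2

def enhance(roi):
    """Contrast enhancement roughly as described in Section IV.B (8-bit grayscale input)."""
    h, w = roi.shape
    # 1) shrink to 1/4 size and enlarge back (bicubic), smoothing high-frequency noise
    small = cv2.resize(roi, (w // 4, h // 4), interpolation=cv2.INTER_CUBIC)
    smoothed = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)
    # 2) resize to 1/3 of the original size for recognition
    reduced = cv2.resize(smoothed, (w // 3, h // 3), interpolation=cv2.INTER_CUBIC)
    # 3) histogram equalization to stretch the gray-level contrast
    return cv2.equalizeHist(reduced)
```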
C. Feature Extraction

The fractal model developed by Mandelbrot [10] provides an excellent means of representing the ruggedness of natural surfaces, and it has served as a successful image analysis tool for image compression and classification. Since different fractal sets with obviously different textures may share the same fractal dimension [11], the concept of lacunarity is used to discriminate among textures. The basic idea of lacunarity, in most definitions, is to quantify the "gaps or lacunae" present in a given surface, and it is thus used to quantify the denseness of a surface image. In this study, we focus on combining fractal and lacunarity measures to improve finger-vein recognition.

Let $f = \{g(i,j),\ i = 0,1,\ldots,k,\ j = 0,1,\ldots,l\}$, where $f$ denotes an image with $k \times l$ pixels and $g(i,j)$ is the gray-level value at pixel $(i,j)$. The gray-level surface of $g(i,j)$ can be viewed as a fractal [12]. First, for $g(i,j)$, we set $u_0(i,j) = b_0(i,j) = g(i,j)$. Then, for $\varepsilon = 1, 2, 3, \ldots$, the blanket surfaces are defined as follows:

$$u_\varepsilon(i,j) = \max\Big\{ u_{\varepsilon-1}(i,j) + 1,\ \max_{|(m,n)-(i,j)| \le 1} u_{\varepsilon-1}(m,n) \Big\},$$
$$b_\varepsilon(i,j) = \min\Big\{ b_{\varepsilon-1}(i,j) - 1,\ \min_{|(m,n)-(i,j)| \le 1} b_{\varepsilon-1}(m,n) \Big\}, \qquad (1)$$

which ensures that the upper surface $u_\varepsilon$ is above $u_{\varepsilon-1}$ and at a distance of at least 1 from $u_{\varepsilon-1}$ in the vertical direction. The profiles of $u_\varepsilon$ and $b_\varepsilon$ no longer change when $\varepsilon$ increases to $\varepsilon_n$. The volume of the blanket $v_\varepsilon$ can be computed by

$$v_\varepsilon = \sum_{i,j} \big(u_\varepsilon(i,j) - b_\varepsilon(i,j)\big). \qquad (2)$$

The surface area $a_\varepsilon$ measured with radius $\varepsilon$ is calculated by

$$a_\varepsilon = (v_\varepsilon - v_{\varepsilon-1}) / 2. \qquad (3)$$

Let $a(\varepsilon)$ be the surface area of the blanket. Considering the Minkowski dimension [13], if $\varepsilon$ is sufficiently small, we have

$$a(\varepsilon) = F \varepsilon^{2-D}, \qquad (4)$$

where $F$ is a constant and $D$ stands for the fractal dimension (FD) of the image. Two values of $\varepsilon$, namely $\varepsilon_1$ and $\varepsilon_2$, are used to compute the FD, giving $a_{\varepsilon_1} = F \varepsilon_1^{2-D}$ and $a_{\varepsilon_2} = F \varepsilon_2^{2-D}$. Thus, $a_{\varepsilon_1}/a_{\varepsilon_2} = \varepsilon_1^{2-D}/\varepsilon_2^{2-D}$, and taking the logarithm of both sides yields

$$D = 2 - \frac{\log_2 a_{\varepsilon_1} - \log_2 a_{\varepsilon_2}}{\log_2 \varepsilon_1 - \log_2 \varepsilon_2}. \qquad (5)$$

Peleg [14] discussed the factors affecting the shrinking rate. When a high gray level stands for white, the min operator of (1) shrinks the light regions corresponding to the particles, and the rate of this shrinking depends only on the shape properties of the high gray-level object. The max operator of (1), however, shrinks the background regions, and the rate of this shrinking is mainly affected by the distribution of the high gray-level object. In the case of finger-vein images, owing to the directionality of the finger-vein, blanket growth can be performed by directional maximizing (or minimizing) in an asymmetrical neighborhood instead of the symmetrical circular neighborhood. Considering the shape of the finger-vein pattern, we modified (1) as follows, which improves the rate of the shrinking and reveals the directional characteristics of the finger-vein pattern:

$$u_\varepsilon(i,j) = \max\Big\{ u_{\varepsilon-1}(i,j) + 1,\ \max_{|(m,n)-(i,j)| \le 1} \{ u_{\varepsilon-1}(m,n),\ u_{\varepsilon-1}(i+2,j) \} \Big\},$$
$$b_\varepsilon(i,j) = \min\Big\{ b_{\varepsilon-1}(i,j) - 1,\ \min_{|(m,n)-(i,j)| \le 1} \{ b_{\varepsilon-1}(m,n),\ b_{\varepsilon-1}(i+2,j) \} \Big\}. \qquad (6)$$
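The blanket construction of (1)-(5) can be sketched compactly with NumPy as below. This sketch uses the symmetric neighborhood of (1) (implemented by edge-padded shifts up, down, left, and right) rather than the directional variant of (6), and it computes a single global fractal dimension from two illustrative scales $\varepsilon_1 = 2$ and $\varepsilon_2 = 4$; in the proposed method the dimension is computed per local window to obtain the feature maps $D_\varepsilon(i,j)$.

```python
import numpy as np

def _neighbor_max(a):
    """Max over the neighborhood |(m,n)-(i,j)| <= 1, via edge-padded shifts."""
    p = np.pad(a, 1, mode="edge")
    return np.maximum.reduce([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:], a])

def _neighbor_min(a):
    p = np.pad(a, 1, mode="edge")
    return np.minimum.reduce([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:], a])

def blanket_fractal_dimension(img, eps1=2, eps2=4):
    """Fractal dimension D of a gray-level surface, following (1)-(5)."""
    g = img.astype(np.float64)
    u, b = g.copy(), g.copy()
    volumes = [0.0]                                   # v_0 = 0 since u_0 = b_0 = g
    for _ in range(max(eps1, eps2)):
        u = np.maximum(u + 1, _neighbor_max(u))       # eq. (1), upper blanket
        b = np.minimum(b - 1, _neighbor_min(b))       # eq. (1), lower blanket
        volumes.append(np.sum(u - b))                 # eq. (2)
    areas = [(volumes[e] - volumes[e - 1]) / 2.0 for e in range(1, len(volumes))]  # eq. (3)
    a1, a2 = areas[eps1 - 1], areas[eps2 - 1]
    # eq. (5): D = 2 - (log a1 - log a2) / (log eps1 - log eps2)
    return 2 - (np.log2(a1) - np.log2(a2)) / (np.log2(eps1) - np.log2(eps2))
```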
D. Lacunarity Based on the Blanket Technique

Lacunarity is another concept introduced by Mandelbrot to quantify the gaps in texture images; it is a measure of spatial heterogeneity. Visually different images may sometimes have similar values of the fractal dimension, and lacunarity estimation can help distinguish such images.

Lacunarity can be defined quantitatively as the mean-square deviation of the fluctuations of the mass distribution function divided by its squared mean. It is also defined as the width of the mass distribution function of a set of points for a given "box size" [15]. Thus, a higher value of lacunarity implies more heterogeneity, as it means a wider mass distribution function, or a larger number of different mass values, of the set of points [16]. A lacunarity value is assigned to the center pixel of an image window, and the lacunarity value of each pixel in an image can be obtained by moving the $W \times W$ window over the whole image.

In our method, lacunarity is computed based on the blanket method [17]. The image $d_\varepsilon(i,j)$ is obtained according to

$$d_\varepsilon(i,j) = u_\varepsilon(i,j) - b_\varepsilon(i,j). \qquad (7)$$

Let $p(gv)$ be the probability of the intensity points whose gray values are $gv$ on the surface of $d_\varepsilon$. The first and second moments of this distribution are then determined as

$$M_1 = \sum_{i,j} d_\varepsilon(i,j)\, p\big(d_\varepsilon(i,j)\big), \qquad M_2 = \sum_{i,j} \big(d_\varepsilon(i,j)\big)^2\, p\big(d_\varepsilon(i,j)\big). \qquad (8)$$

Thus, lacunarity can be computed by

$$\Lambda_\varepsilon = \frac{M_2 - M_1^2}{M_1^2}. \qquad (9)$$

E. Matching

The blanket dimension distance $HD$ between two finger-vein patterns and the lacunarity distance $H_\Lambda$ are defined as

$$HD = \sum_{i,j} \sum_{\varepsilon=1}^{4} \big( D_\varepsilon^1(i,j) - D_\varepsilon^2(i,j) \big)^2, \qquad (10)$$

$$H_\Lambda = \sum_{i,j} \sum_{\varepsilon=1}^{4} \big( \Lambda_\varepsilon^1(i,j) - \Lambda_\varepsilon^2(i,j) \big)^2. \qquad (11)$$

In our method, the dimension and lacunarity features are combined for finger-vein recognition: if $HD < th_1$ and $H_\Lambda < th_2$ (where $th_1$ and $th_2$ are thresholds), the two finger-vein patterns are considered to be from the same finger; if $HD \ge th_1$ or $H_\Lambda \ge th_2$, they are considered to be from different fingers.
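Under the same assumptions as the previous sketch, the lacunarity of (7)-(9) and the combined decision of (10)-(11) might be coded as below. Here $p(gv)$ is estimated from the histogram of $d_\varepsilon$, the feature arguments are assumed to be stacks of per-scale maps of shape (4, H, W), and the thresholds `th1` and `th2` are illustrative values that would be tuned on a training set.

```python
import numpy as np

def blanket_lacunarity(u_eps, b_eps):
    """Lacunarity of the blanket thickness d_eps = u_eps - b_eps, following (7)-(9)."""
    d = (u_eps - b_eps).ravel()
    values, counts = np.unique(d, return_counts=True)
    p_of_value = dict(zip(values, counts / counts.sum()))   # p(gv) from the histogram of d
    p = np.array([p_of_value[v] for v in d])
    m1 = np.sum(d * p)                                      # first moment, eq. (8)
    m2 = np.sum(d ** 2 * p)                                 # second moment, eq. (8)
    return (m2 - m1 ** 2) / m1 ** 2                         # eq. (9)

def match(features1, features2, th1, th2):
    """Combine blanket-dimension and lacunarity feature maps, as in Section IV.E.

    features* = (D_maps, L_maps), each an array of shape (4, H, W), one map per scale eps.
    """
    d1, l1 = features1
    d2, l2 = features2
    hd = np.sum((d1 - d2) ** 2)       # eq. (10)
    hl = np.sum((l1 - l2) ** 2)       # eq. (11)
    return hd < th1 and hl < th2      # same finger only if both distances are below threshold
```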
V. EXPERIMENTAL RESULTS

A. Dataset

To the best of our knowledge, no public finger-vein image database has yet been introduced. Therefore, we constructed a finger-vein image database for evaluation, which contains finger-vein images from 100 subjects (55% male and 45% female) from a variety of ethnic/racial ancestries. The ages of the subjects were between 21 and 58 years. We collected finger-vein images from the forefinger, middle finger, and ring finger of both hands of each subject. Ten images were captured for each finger at different times (summer and winter). There were therefore a total of 6,000 finger-vein images in the database. Fig. 8 shows some example finger-vein images (after preprocessing) from different fingers.

Fig. 8. Finger-vein images from different fingers after preprocessing.

B. Performance Evaluation

There are two types of errors in the matching results of biometric verification. The first is false rejection, which claims a genuine pair as an impostor, and the second is false acceptance, which claims an impostor pair as genuine. These two types of errors are in a trade-off relationship. In biometrics, the performance of a system is evaluated by the EER (equal error rate). The EER is the error rate at which the FRR (false rejection rate) equals the FAR (false acceptance rate); it is therefore suitable for measuring the overall performance of a biometric system because the FRR and FAR are treated equally.

The FRR and FAR curves were used to evaluate the performance of our proposed method. Fig. 9 shows the FAR and FRR curves corresponding to the two methods based on the blanket dimension and on lacunarity, respectively. From Fig. 9, it can be seen that the EERs of the two methods are 0.155% and 0.146%, which are similar. However, when the two kinds of features are combined, the EER decreases to 0.07%, as shown in Fig. 10.

Fig. 9. The FAR and FRR curves of the methods based on (a) the blanket dimension and (b) lacunarity, respectively.

Fig. 10. The FAR and FRR curves of the method combining the blanket dimension and lacunarity.

Because the proposed finger-vein recognition system is targeted at mobile devices, the energy efficiency of the system is very important [18]. When the proposed system is idle, the power consumption of the DSP is about 42.72 milliwatts (mW), and the power consumption of the whole system is under 70 mW in standby mode. In other words, the system can maintain a standby state for about six days with a typical mobile configuration of four 2,300 mAh batteries. In fully active mode, the power consumption of the system is 1,636.4 mW, and on average the actual power consumption of the proposed system is no more than 1.5 watts. The low power consumption of the proposed system means that it is very efficient and thus well suited to mobile consumer electronic devices.
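As a rough sanity check of the six-day standby figure, and assuming the four 2,300 mAh cells are 1.2 V rechargeable cells connected in series (the cell chemistry and voltage are not stated in the paper), the available energy and standby time work out to approximately

$$E \approx 4 \times 1.2\,\mathrm{V} \times 2.3\,\mathrm{Ah} \approx 11\,\mathrm{Wh}, \qquad t \approx \frac{11\,\mathrm{Wh}}{70\,\mathrm{mW}} \approx 158\,\mathrm{h} \approx 6.6\ \text{days},$$

which is consistent with the standby time claimed above.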
C. Comparison with Previous Methods

Miura et al. [19] used a database that contained 678 infrared images of different fingers. These images were obtained from persons working in their laboratory, aged approximately 20 to 40, about 70% of whom were male. Song's [20] finger-vein image dataset contained 1,125 images collected using an infrared imaging device they built; nine images were taken for each of 125 fingers. Compared with these databases, ours is larger and its data-collection interval is longer, so our database is more challenging. Moreover, our system is implemented on a general-purpose DSP chip. Table 1 shows that the average times required for feature extraction and matching in our system are 343 ms and 13 ms, respectively. For the whole system, including the time for image capture, the time required to authenticate a user is less than 0.8 s. Although the feature extraction in our system is slightly more complicated than that in Song's method, our system achieves an EER of 0.07%, indicating that our method significantly outperforms previous methods.

TABLE 1. RECOGNITION RATE AND RESPONSE TIME

Method                 Samples: #fingers (x #images per finger)   EER (%)   Feature extraction   Matching
Our method             600 (x 10)                                 0.07      343 ms               13 ms
Miura's method [19]    678 (x 2)                                  0.145     450 ms               10 ms
Song's method [20]     125 (x 9)                                  0.25      118 ms               88 ms

VI. CONCLUSION

The present study proposed an end-to-end finger-vein recognition system based on the blanket dimension and lacunarity, implemented on a DSP platform. The proposed system includes a device for capturing finger-vein images, a method for ROI segmentation, and a novel method combining blanket dimension features and lacunarity features for recognition. The images of the 600 fingers in the dataset were taken over a long time interval (i.e., from summer to winter) with a prototype device we built. The experimental results showed that the EER of our method was 0.07%, significantly lower than those of other existing methods. Our system is suitable for application in mobile devices because of its relatively low computational complexity and low power consumption.
REFERENCES

[1] A. K. Jain, S. Pankanti, S. Prabhakar, H. Lin, and A. Ross, "Biometrics: a grand challenge," Proceedings of the 17th International Conference on Pattern Recognition (ICPR), vol. 2, pp. 935-942, 2004.
[2] P. Corcoran and A. Cucos, "Techniques for securing multimedia content in consumer electronic appliances using biometric signatures," IEEE Transactions on Consumer Electronics, vol. 51, no. 2, pp. 545-551, May 2005.
[3] Y. Kim, J. Yoo, and K. Choi, "A motion and similarity-based fake detection method for biometric face recognition systems," IEEE Transactions on Consumer Electronics, vol. 57, no. 2, pp. 756-762, May 2011.
[4] D. Wang, J. Li, and G. Memik, "User identification based on finger-vein patterns for consumer electronics devices," IEEE Transactions on Consumer Electronics, vol. 56, no. 2, pp. 799-804, 2010.
[5] H. Lee, S. Lee, T. Kim, and H. Bahn, "Secure user identification for consumer electronics devices," IEEE Transactions on Consumer Electronics, vol. 54, no. 4, pp. 1798-1802, Nov. 2008.
[6] D. Mulyono and S. J. Horng, "A study of finger vein biometric for personal identification," Proceedings of the International Symposium on Biometrics and Security Technologies, pp. 134-141, 2008.
[7] Z. Liu, Y. Yin, H. Wang, S. Song, and Q. Li, "Finger vein recognition with manifold learning," Journal of Network and Computer Applications, vol. 33, no. 3, pp. 275-282, 2010.
[8] Y. G. Dai and B. N. Huang, "A method for capturing the finger-vein image using nonuniform intensity infrared light," Image and Signal Processing, vol. 4, pp. 27-30, 2008.
[9] X. Sun, C. Lin, M. Li, H. Lin, and Q. Chen, "A DSP-based finger vein authentication system," Proceedings of the Fourth International Conference on Intelligent Computation Technology and Automation, pp. 333-336, 2011.
[10] B. B. Mandelbrot, Fractals: Form, Chance and Dimension, San Francisco, CA: Freeman, 1977.
[11] B. B. Mandelbrot and D. Stauffer, "Antipodal correlations and the texture (fractal lacunarity) in critical percolation clusters," Journal of Physics A: Mathematical and General, vol. 27, pp. 237-242, 1994.
[12] J. Berke, "Using spectral fractal dimension in image classification," Innovations and Advances in Computer Sciences and Engineering, pp. 237-241, 2010.
[13] Z. Feng, "Variation and Minkowski dimension of fractal interpolation surface," Journal of Mathematical Analysis and Applications, vol. 345, no. 1, pp. 322-334, 2008.
[14] S. Peleg and J. Naor, "Multiple resolution texture analysis and classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6, no. 4, pp. 518-523, 1984.
[15] C. Allain and M. Cloitre, "Characterizing the lacunarity of random and deterministic fractal sets," Physical Review A, vol. 44, no. 6, pp. 3552-3558, 1991.
[16] K. I. Kilic and R. H. Abiyev, "Exploiting the synergy between fractal dimension and lacunarity for improved texture recognition," Signal Processing, vol. 91, no. 10, pp. 2332-2344, 2011.
[17] Novianto, Suzuki, and Maeda, "Optimum estimation of local fractal dimension based on the blanket method," Transactions of the Information Processing Society of Japan, vol. 43, no. 3, pp. 825-828, 2002.
[18] D. D. Hwang and I. Verbauwhede, "Design of portable biometric authenticators - energy, performance, and security tradeoffs," IEEE Transactions on Consumer Electronics, vol. 50, no. 4, pp. 1222-1231, Nov. 2004.
[19] N. Miura, A. Nagasaka, and T. Miyatake, "Feature extraction of finger-vein patterns based on repeated line tracking and its application to personal identification," Machine Vision and Applications, vol. 15, no. 4, pp. 194-203, 2004.
[20] W. Song, T. Kim, H. C. Kim, J. H. Choi, H. Kong, and S. Lee, "A finger-vein verification system using mean curvature," Pattern Recognition Letters, vol. 32, no. 11, pp. 1541-1547, 2011.

BIOGRAPHIES

Zhi Liu received the M.Sc. degree in Circuits and Systems from Shandong University, China, in 2004, and the Ph.D. degree in Pattern Recognition and Intelligent Systems from Shanghai Jiao Tong University, China, in 2008. He has been with the School of Information Science and Engineering, Shandong University, since 2008. His current research interests include image processing (texture analysis, image classification, and image segmentation), computer vision, and pattern recognition.

Shangling Song received the B.S. degree in electrical engineering from Zhejiang Gongshang University, Hangzhou, China, in 2001, and the Ph.D. degree from Shandong University, China, in 2010. She was a dual-culture student at Chiba University, Japan, from 2007 to 2008.
