Effects of Illumination Changes on the Performance of Geometrix FaceVision 3D FRS

Eric P. Kukula (kukula@purdue.edu), Industrial Technology, Purdue University, West Lafayette, IN 47906, USA
Stephen J. Elliott, PhD (elliott@purdue.edu), Industrial Technology, Purdue University, West Lafayette, IN 47906, USA
Roman Waupotitsch (romanw@geometrix.com), Vice President of R&D, Geometrix Inc, 1590 The Alameda Ste 200, San Jose, CA 95124, USA
Bastien Pesenti (bastienp@geometrix.com), Research Engineer, Geometrix Inc, 1590 The Alameda Ste 200, San Jose, CA 95124, USA

ABSTRACT

This evaluation examined the effects of four frontal light intensities on the performance of a 3D face recognition algorithm, specifically testing the significance between an unchanging enrollment illumination condition (220-225 lux) and four different illumination levels for verification. The evaluation also analyzed the significance of external artifacts (i.e. glasses) and personal characteristics (i.e. facial hair) on the performance of the face recognition system (FRS).

Collected variables from the volunteer crew included age, gender, ethnicity, facial characteristics, hair covering the forehead, scars on the face, and glasses.

The analysis of the data revealed that there are no statistically significant differences between environmental lighting and 3D FRS performance when a uniform or constant enrollment illumination level is used.

Keywords: biometrics, 3D face recognition, environmental conditions, performance testing

MOTIVATION

As government and private corporations begin to implement biometric technologies in operational settings, such as airports and facility access control, the environment and application must be fully examined before implementation. With regard to face recognition, there are several challenges, including illumination, which may affect the performance of the system. The implementation of biometric systems, including face recognition systems, into legacy environments that may not have ideal environmental conditions indicates that this is an important area of research as deployments of face recognition systems become pervasive. As a result, environmental conditions such as lighting may be inconsistent, consequently affecting the performance of the face recognition system. In previous research by Kukula and Elliott [1], a commercial off-the-shelf (COTS) 2D facial recognition algorithm was assessed, which revealed that 2D face recognition still has significant challenges to overcome with regard to illumination, specifically when the ambient lighting is low, as well as when the light is not held constant.

Recently, three-dimensional face recognition algorithms have started to emerge in the marketplace. According to the manufacturers, 3D face recognition has advantages over 2D face recognition since it compares the 3D shape of the face, which is invariant to lighting conditions and pose, although only light conditions were evaluated in this study.

Over the past ten years, three large-scale independent evaluations of 2D COTS facial recognition systems have shown that performance decreases dramatically when environmental lighting changes [2-4]. Currently, independent testing of 3D systems is sparse, as it is an emerging biometric technology. However, internal testing conducted by Geometrix has reported equal error rates (EER) of less than 2% using image databases from the University of Southern California and the University of Notre Dame. At the time of writing, no independent testing of COTS 3D face recognition had been completed; however, the NIST Face Recognition Grand Challenge (FRGC) is currently underway, with a report set to be released in August of 2005. Further internal studies of the Geometrix FaceVision system commissioned by the Defense Advanced Research Projects Agency (DARPA) concluded that as few as 6 gray values are sufficient for the FaceVision system to perform high-quality 3D reconstruction of faces. However, until now no independent performance assessments using different lighting conditions have been performed.
The evaluation reported here was designed to address exactly this aspect of 3D recognition, namely to perform a system-level test of the Geometrix FaceVision system.

CONCEPT OF THE SYSTEM

The 3D face recognition system used in this evaluation was the Geometrix Human Identification System (HIS). The system's fundamental algorithms were inspired by Chen and Medioni [5]. The sensor used was the FaceVision 200, which captures two images using two stereo-calibrated cameras. The system then processes the images using proprietary and patented algorithms to construct a metrically accurate 3D model of the face.

The 3D face model is then further processed to create a fully textured version of the face that may be used for visual inspection by an operator. Moreover, a 3D face template is extracted from the model [6,7], which is 3 kilobytes for one-to-one verification and less than 200 bytes for one-to-many identification. Verification time in this evaluation averaged 12 seconds on a single processor, while internal testing using dual processors averaged less than 6 seconds.

The 3D face template encodes the salient features of the face with patented Active Fusion(TM) algorithms, which allows a very accurate comparison between the "enrollment" face and the captured "verification" face. Robustness techniques are used to weight different aspects of the face according to their contribution to distinguishing two faces and their robustness to changes in facial shape over time and changes due to facial expression, both of which were outside the scope of this study.

EXPERIMENTAL SETUP

This evaluation took place in the Biometric Standards, Performance, and Assurance Laboratory in the School of Technology at Purdue University. The testing environment, shown in Figure 1, was similar to that of Blackburn, Bone, and Phillips [8] and the setup described by Kukula and Elliott [1,8].

[Figure 1: Testing Environment]

A light-impermeable curtain segregated the testing environment from the educational computer lab. All fluorescent lighting was removed from the testing environment, and the curtain blocked the uncontrolled fluorescent illumination from the educational lab area, resulting in a stable zero-illuminance (lux) environment. The background used was very close to the recommended 18% gray [9-10]. The external lighting used for verification was composed of three JTL Everlight continuous halogen lamps with 500 Watt USHIO halogen bulbs covered by 24 inch softboxes. The lamps were positioned in a manner that created an evenly illuminated face. The Geometrix FaceVision 200 camera system, shown in Figure 2, included a lighting system that remained constant throughout the evaluation (both enrollment and verification). The illumination of the experimental area was monitored with a NIST-certified broad-range lux/fc light meter. The FaceVision 200 camera system included two off-the-shelf USB cameras. The cameras were attached to a Dell OptiPlex GX260 computer through an Orange Micro USB 2.0 PCI card. The computer had a single 2.0 GHz processor, 512 MB RAM, and a 40 GB hard drive. The operating system was Microsoft Windows XP Pro SP1.
[Figure 2: Geometrix FaceVision 200 camera system]

Lighting

This evaluation tested the performance of a 3D face recognition algorithm using one enrollment lighting intensity and four verification lighting intensities. The enrollment lighting intensity used only the Geometrix system LED lights, which were fastened to each side of the camera mount, as can be seen in Figure 2. The illumination defined for enrollment was 220-225 lux. These LED lights remained on throughout testing. Verification occurred at four different light intensities, as described in Table 1 (a short illustrative sketch of these bands appears after the Software subsection below).

Table 1: Definition of lighting conditions

  Use                        Name                Light Intensity
  Enrollment/Verification    Light Condition 1   220-225 lux
  Verification               Light Condition 2   320-325 lux
  Verification               Light Condition 3   650-655 lux
  Verification               Light Condition 4   1020-1140 lux

Hardware

The COTS Geometrix FaceVision FV200 sensor (Figure 2) was used for image acquisition. It is a passive stereo-based sensor incorporating board-level cameras and custom lenses, and it connects to a computer using a USB 2.0 interface. The dimensions of the sensor are approximately 6.5 x 4.3 x 2.5 inches. This sensor was used for both enrollment and verification. The FaceVision 200 sensor incorporates an LED-based lighting unit attached to each side of the system. The lights are dimmable; however, when set at the recommended intensity (220-225 lux), the LED light system provides sufficient illumination for the sensor to operate optimally, even in the darkest environment. For the purpose of this evaluation, the protocol called for the system lights to remain at the recommended level of 220-225 lux throughout the experiment.

The COTS unit is currently optimized to capture faces between 18 inches and 30 inches for enrollment, and 16 inches to 36 inches for verification or identification. The sensor was originally calibrated by Geometrix. On-site color and sensitivity calibration was performed once in the Biometric Standards, Performance, and Assurance Laboratory to optimize the sensor for the environment. It was subsequently inspected each day in accordance with the testing protocol.

Software

Geometrix provided all software used in this evaluation. The 3D model creator was FaceVision 200 Series v5.1. The evaluation also used the Geometrix FaceVision Human Identification System (FaceVision HIS) version 2.3. The system provides an interface for enrollment and verification or identification operations, as well as administrative tools to manage the database of enrolled persons (Figure 3). However, for this evaluation only the enrollment and verification software was used.

[Figure 3: FaceVision HIS, showing the FaceVision HIS GUI, the FaceVision FV200 sensor, and the SQL Server database]

The enrollment mode is designed to enroll new persons, add additional biometric templates for existing persons, and access or edit demographic information. The FaceVision HIS software provides a seamless interface for operating the FaceVision FV200 capture sensor. While the enrollment process is fully automatic, a manual step may be performed to verify the enrollment data. This step was performed during each enrollment as part of the protocol, in order to verify model quality.
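The lux bands in Table 1 do not overlap, so any light-meter reading taken during a session maps to at most one protocol condition. The following Python sketch is purely illustrative: the condition names and bands are copied from Table 1, but the helper itself is not part of the Geometrix software or the original test harness.

```python
# Illustrative only: classify a lux reading into the Table 1 light conditions.
# The bands come from Table 1; anything outside them is flagged for re-measurement.

LIGHT_CONDITIONS = {
    "Light Condition 1": (220, 225),    # enrollment and verification
    "Light Condition 2": (320, 325),    # verification only
    "Light Condition 3": (650, 655),    # verification only
    "Light Condition 4": (1020, 1140),  # verification only
}

def classify_lux(reading: float) -> str:
    """Return the Table 1 condition whose band contains the reading."""
    for name, (low, high) in LIGHT_CONDITIONS.items():
        if low <= reading <= high:
            return name
    return "out of band - recheck lamps and light meter"

if __name__ == "__main__":
    for lux in (223, 322, 651, 1100, 480):
        print(f"{lux:6.1f} lux -> {classify_lux(lux)}")
```

A reading that falls between the bands (for example 480 lux) corresponds to no protocol condition, which is the kind of drift the periodic light-meter readings described under TESTING PROTOCOL would reveal.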
The verification mode is designed to verify the claimed identity of the captured person against the 3D template stored in the database. After a few seconds, the system gives a binary answer, "Access Granted" or "Access Denied." The system also displays a confidence rating for the decision, as well as a list of potential impostors known to the system. However, only the binary response was used for data analysis in this evaluation.

Captured Image Specifications

To eliminate external effects on the experiment and to emphasize the sole effect of lighting on the performance of the system, the subject's position, facial pose, and face-covering artifacts were defined by the test protocol. Specifically, faces were captured with the nose approximately centered in the image. To simplify the process, each participant remained seated during the evaluation, two feet from the ground. To compensate for the varying heights of participants, the camera was attached to a mechanical tripod that could be adjusted in height. The resulting captured image reflects the proposed face recognition data format specification for captured images [7], which can be seen in Figure 4.

[Figure 4: INCITS face recognition data format image requirement (Griffin, 2003)]

This document suggests the image should be centered, meaning the mouth and the middle of the nose should lie on the imaginary line AA (Figure 4). The location of the eyes in the image should fall between 50% and 70% of the distance from the bottom of the image, and the width-of-head ratio should be no less than 7/4 (1.75); a short illustrative sketch of these two checks appears after the next section. Images collected in this study fully conformed to the requirements proposed in [9], as seen in Figure 5. The width-to-head ratio of the image labeled light condition 2 was 1.63/0.625, or 2.608.

[Figure 5: Sample images from the 4 tested light intensities]

EVALUATION CLASSIFICATION

The evaluation was defined as cooperative, overt, unhabituated, attended, and closed [11]. The experimental evaluation is classified as a modified technology evaluation. A traditional technology evaluation is conducted in a laboratory, with a universal sensor, and using the same data, which allows repeatability of samples. In this case, however, data was collected and evaluated on-line, with the specific results and scores presented after the completion of the computation, hence its classification as a modified technology evaluation. The purpose of the evaluation was to assess the effects of four frontal light intensities. Failure to Enroll, Failure to Acquire, and a statistical analysis of the differences in light and performance of the device were assessed.
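The Captured Image Specifications above cite two geometric requirements from the proposed data format: the eye line should sit between 50% and 70% of the image height measured from the bottom, and the width-of-head ratio should be at least 7/4 (1.75). The sketch below is a hypothetical check of those two numbers; it is not part of the FaceVision software, and the example values simply reproduce the ratio quoted for the light condition 2 image (1.63/0.625 = 2.608).

```python
# Hypothetical geometry check for the two image requirements cited from [9]:
#   1) eye row between 50% and 70% of image height, measured from the bottom
#   2) image width / head width >= 7/4 (1.75)

def eyes_in_range(eye_row_from_bottom: float, image_height: float) -> bool:
    """True if the eye line falls between 50% and 70% of the image height."""
    fraction = eye_row_from_bottom / image_height
    return 0.50 <= fraction <= 0.70

def width_to_head_ratio(image_width: float, head_width: float) -> float:
    """Ratio of image width to head width; the cited spec requires at least 1.75."""
    return image_width / head_width

if __name__ == "__main__":
    # Example values: the paper reports 1.63 / 0.625 = 2.608 for the
    # light condition 2 sample image (in whatever units the operator measured).
    ratio = width_to_head_ratio(1.63, 0.625)
    print(f"width-to-head ratio = {ratio:.3f} (requirement: >= 1.75)")
    print("eyes ok:", eyes_in_range(eye_row_from_bottom=288, image_height=480))
```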
Volunteer Crew

This evaluation involved thirty subjects from the School of Technology at Purdue University. Demographic information can be seen in Table 2.

[Table 2: Volunteer crew demographic information. Only partially legible in the scan; recoverable ethnicity counts: Caucasian 24, African American 1, Asian 2, Hispanic 3, Native American count not legible. The age-group and yes/no characteristic counts are not clearly recoverable.]

[Figure 6: Protocol Design]

TESTING PROTOCOL

The protocol used for this evaluation called for calibration of the cameras each day testing occurred. At this time the operator also verified the experimental setup of all the equipment used for the study. The testing protocol consisted of one enrollment light condition and four verification light conditions. The lighting conditions are defined in Table 1. Before data collection began, participants were informed of the testing procedures and given specific instructions, which included:

- Remove eyeglasses, hats, or caps
- Refrain from chewing gum or candy
- Look directly at the sensor (between the two cameras) and maintain a neutral expression
- Stay as still as possible while the music is playing

At this time, the field of view of the camera was checked to ensure captured images resembled Figure 4. The distance between the camera and the test subject's face was also measured to ensure the proper camera depth of field was achieved. The distance used for both enrollment and verification in this evaluation was 28 inches. To monitor the lighting conditions, subjects were asked to hold a light meter sensor in front of their nose periodically throughout the evaluation. These readings were recorded and checked to maintain repeatability throughout the study.

The generalized testing protocol model can be seen in Figure 6. This evaluation was designed to compare the stored 3D face template created in the enrollment lighting condition (220-225 lux) against verification attempts captured at the four different light intensities: 1) enrollment lighting (220-225 lux), 2) light condition 2 (320-325 lux), 3) light condition 3 (650-655 lux), and 4) light condition 4 (1020-1140 lux). The protocol called for 3 verification attempts in each of the four light intensities, for a total of 12 attempts for each subject (a short sketch enumerating this schedule appears after the Enrollment subsection below).

Enrollment

The first testing procedure was enrollment. After the subject was seated and the camera position was verified, the test operator notified the subject that the image capture sequence was beginning. During this sequence music could be heard. After the capture sequence was complete, a 2D image appeared, which was checked for quality (no facial expressions, closed eyes, etc.). The three-dimensional model was then computed, checked for correct nose position and quality, and then stored. An example of a 3D model used in this study is shown in Figure 7.

[Figure 7: Example of a 3D model]
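As the TESTING PROTOCOL section states, each subject was enrolled once under light condition 1 and then made three verification attempts in each of the four conditions, in a fixed order, giving 12 attempts per subject and 360 genuine attempts across 30 subjects. The snippet below merely enumerates that schedule as a sanity check of those counts; it is an illustration of the protocol, not code used in the study.

```python
# Illustrative enumeration of the verification schedule described in the protocol:
# 3 attempts per light condition, conditions presented in fixed order 1 -> 4.

CONDITIONS = ["Light Condition 1", "Light Condition 2",
              "Light Condition 3", "Light Condition 4"]
ATTEMPTS_PER_CONDITION = 3
SUBJECTS = 30

def subject_schedule(subject_id: int):
    """Yield (subject, condition, attempt) tuples in protocol order."""
    for condition in CONDITIONS:
        for attempt in range(1, ATTEMPTS_PER_CONDITION + 1):
            yield subject_id, condition, attempt

if __name__ == "__main__":
    total = sum(1 for s in range(1, SUBJECTS + 1) for _ in subject_schedule(s))
    print("attempts per subject:", ATTEMPTS_PER_CONDITION * len(CONDITIONS))  # 12
    print("total genuine attempts:", total)                                   # 360
```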
Verification

Verification followed the same procedure for each subject. The light conditions followed a structured order and were not randomized. After enrollment was complete, 3 verification attempts were conducted in the same lighting intensity used for enrollment (light condition 1), followed by 3 attempts in each of light conditions 2, 3, and 4. Figure 8 shows the visual display given to the operator after each verification attempt. To ensure data collection was accurate, a screen shot of each attempt was saved using a barcode naming convention, which removed the data collection errors of keying data and reduced the time between attempts.

[Figure 8: Feedback from a verification attempt]

RESULTS

The study consisted of 30 individuals, 30 enrollment attempts, 30 impostor attempts, and 360 genuine verification attempts. At the enrollment stage there were no failures to enroll (FTE = 0%) or failures to acquire (FTA = 0%).

The hypotheses were set up to establish whether there was any significant difference in the performance of the algorithm in the verification mode when presented with the various light levels. A statistical analysis shows that, at an alpha level of 0.01, there was no statistically significant difference in the performance of the algorithm between light level 1 (220-225 lux) and the other levels (320-325 lux; 650-655 lux; 1020-1140 lux).
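The paper does not name the statistical test behind the alpha = 0.01 comparison. One plausible way to test whether the genuine accept/reject outcome is independent of the light condition is a chi-square test on the per-condition outcome counts, sketched below with SciPy; the counts shown are placeholders for illustration only, not results reported by the study.

```python
# A plausible (assumed, not the authors') significance test: chi-square test of
# independence between light condition and genuine verification outcome.
# The counts below are PLACEHOLDERS for illustration, not results from the paper.

from scipy.stats import chi2_contingency

# Rows = light conditions 1-4, columns = [accepted, rejected] genuine attempts.
# Each condition had 90 genuine attempts (30 subjects x 3 attempts).
observed = [
    [90, 0],   # light condition 1 (placeholder counts)
    [89, 1],   # light condition 2 (placeholder counts)
    [90, 0],   # light condition 3 (placeholder counts)
    [88, 2],   # light condition 4 (placeholder counts)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
alpha = 0.01
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p_value:.3f}")
print("significant at alpha=0.01:", p_value < alpha)
```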
CONCLUSION

Because the Geometrix face recognition engine uses a template extracted from 3D, unlike the 2D image-based engines, this study shows that this 3D algorithm appears to have overcome the usual limitations of illumination variation. Unlike a previous study [1], this evaluation has shown that there are no statistically significant differences in performance at any of the tested illumination levels. Further research is underway to evaluate lighting angles and pose, to establish the progress of 3D face recognition algorithms.

REFERENCES

[1] Kukula, E., & Elliott, S. (2003). Securing a Restricted Site: Biometric Authentication at Entry Point. Paper presented at the 37th Annual 2003 International Carnahan Conference on Security Technology (ICCST), pp. 435-439, Taipei, Taiwan, ROC.
[2] Phillips, P., Rauss, P., & Der, S. (1996). FERET (Face Recognition Technology) Recognition Algorithm Development and Test Report (ARL-TR-995). U.S. Army Research Laboratory.
[3] Blackburn, D., Bone, J., & Phillips, P. (2000). Face Recognition Vendor Test (FRVT) 2000 Evaluation Report. DoD, DARPA, NIJ.
[4] Phillips, P., Grother, P., Bone, M., Micheals, R., Blackburn, D., & Tabassi, E. (2003). Face Recognition Vendor Test 2002. DARPA, NIST, DoD, NAVSEA.
[5] Chen, G., & Medioni, G. (2001). Building Human Face Models from Two Images. Journal of VLSI Signal Processing, vol. 27, no. 1/2, pp. 127-140, January 2001. Kluwer Academic Publishers.
[6] Waupotitsch, R., & Medioni, G. (2003). Robust Automated Face Modeling and Recognition Based on 3D Shape. Biometrics Symposium on Research, Crystal City, September 2003.
[7] Waupotitsch, R., & Medioni, G. (2003). Face Modeling and Recognition in 3-D. AMFG 2003, pp. 232-233.
[8] Bone, M., & Blackburn, D. (2002). Face Recognition at a Chokepoint: Scenario Evaluation Results. DoD Counterdrug Technology Development Program Office, Dahlgren. p. 58.
[9] Griffin, P. (2003). Face Recognition Format for Data Interchange (M1/04-0041). INCITS M1.
[10] Rubenfeld, M., & Wilson, C. (1999). Gray Calibration of Digital Cameras To Meet NIST Mugshot Best Practice. NIST IR-6322.
[11] Mansfield, A. J., & Wayman, J. L. (2002). Best Practices in Testing and Reporting Performance of Biometric Devices. Biometric Working Group. p. 32.
