BSPAWP 060709

A Definitional Framework for the Human-Biometric Sensor Interaction Model

Stephen J. Elliott, Ph.D. and Eric P. Kukula, Ph.D., Member, IEEE

BSPAWP 060709 posted June 7, 2009 on www.bspalabs.org/publications. S. J. Elliott, Ph.D. is Director of the Biometric Standards, Performance, & Assurance (BSPA) Laboratory and Associate Professor in the Department of Industrial Technology at Purdue University, West Lafayette, IN 47907 USA (phone: 765-494-1088; fax: 765-496-2700; e-mail: elliott@purdue.edu). E. P. Kukula, Ph.D. is a Visiting Assistant Professor & Senior Biometric Researcher in the BSPA Laboratory in the Department of Industrial Technology at Purdue University, West Lafayette, IN 47907 USA (e-mail: kukula@purdue.edu).

Abstract—Existing definitions for biometric testing and evaluation do not fully explain errors in a biometric system. This paper provides a definitional framework for the Human-Biometric Sensor Interaction (HBSI) model. It proposes six new definitions based around two classifications of presentations, erroneous and correct. The new terms are: defective interaction (DI), concealed interaction (CI), false interaction (FI), failure to detect (FTD), failure to extract (FTX), and successfully acquired sample (SAS). As with all definitions, the new terms require a modification to the general biometric model.

I. INTRODUCTION

Observations made during biometric scenario and operational evaluations involving several thousand individuals across multiple modalities have led the authors to identify and categorize interactions between either the human and sensor, or the samples and the biometric system itself, which have not been previously defined or thoroughly analyzed. As the subfield of Human-Biometric Sensor Interaction matures and peers within the biometric community and the related fields of ergonomics and usability provide input, a terminology for the field can now be presented. In doing so, the authors re-examine current biometric testing and evaluation metrics and definitions and propose additional metrics that align with the HBSI model.

The origin of the Human-Biometric Sensor Interaction (HBSI) sub-field within biometrics resulted from participation in the U.S. national biometric standards committee INCITS M1, as well as the international committee ISO/IEC JTC1 SC37. Participation on these committees, especially during the development of the framework testing and evaluation standards [1-4], has enabled the authors to understand the breadth of the current standards and their related measurements, as well as to isolate areas that have traditionally been out of scope during biometric performance evaluations. The areas outside the traditional metrics include the human-sensor interaction and how this interaction impacts the biometric samples in the system itself. HBSI evaluation concentrates on the minutiae of each interaction to fully understand how users interact with a biometric system and how the biometric system responds to this human-sensor interaction.

II. THE HUMAN-BIOMETRIC SENSOR INTERACTION (HBSI)

The HBSI model is conceptualized in Fig. 1 and derived from separate areas of research, namely ergonomics [5], usability [6], and biometrics [7]. Please see [8-13] for a complete discussion of the HBSI research area. The purpose of the model is to demonstrate how metrics from biometrics (sample quality and system performance), ergonomics (physical and cognitive), and usability (efficiency, effectiveness, and satisfaction) overlap and can be used to evaluate the overall functionality and performance of a biometric system. Including metrics from different disciplines enables evaluators and system designers to better understand what affects biometric system performance. Core research questions the HBSI addresses are:

  • How do users interact with biometric devices?
  • What errors do users make?
  • What are the most common errors or issues that users face?
  • Why do users continually make these interaction errors, and how do we prevent or avoid them from happening?
  • What level of training and experience is necessary to successfully use biometric devices?

Fig. 1. The HBSI conceptual model [8, 11, 12].
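As a purely illustrative aid (not part of the HBSI model or any standard), the following Python sketch shows one way an evaluation team might record a single human-sensor interaction event, combining the system-response observations used later in the framework with the kind of usability and ergonomics annotations the model overlaps in Fig. 1. Every field and name here is our own assumption.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionEvent:
    """One recorded human-biometric sensor interaction (illustrative only)."""
    subject_id: str
    modality: str                      # e.g. "fingerprint", "iris", "face"
    presentation_correct: bool         # as judged by test/analysis personnel
    detected_by_system: bool           # did the system register a presentation?
    flagged_as_error: bool             # did the system report the attempt as erroneous?
    features_extracted: bool           # did signal processing yield biometric features?
    # Optional annotations reflecting the usability/ergonomics overlap in Fig. 1
    task_time_s: Optional[float] = None        # efficiency
    satisfaction_score: Optional[int] = None   # e.g. post-task questionnaire rating

# Example: a correct presentation the system never noticed
event = InteractionEvent("S001", "iris", presentation_correct=True,
                         detected_by_system=False, flagged_as_error=False,
                         features_extracted=False)
```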
III. BIOMETRIC TESTING AND EVALUATION METRICS

Traditional biometric performance testing and evaluation metrics include: failure to enroll (FTE), failure to acquire (FTA), the false match rate (FMR), the false non-match rate (FNMR), the false accept rate (FAR), and the false reject rate (FRR). These metrics are used for the different types of performance evaluations, including technology, scenario, and operational evaluations. A discussion of these metrics and definitions leads to continued dissection of their meaning. Of main concern to the HBSI is FTA, which has been the biometric community's de facto "usability" metric. FTA is defined as the "proportion of verification or identification attempts for which the system fails to capture or locate an image or signal of sufficient quality" [2]. In the authors' testing and evaluation experience, interactions with a biometric system are traditionally viewed as binary: presentations either become successfully acquired samples or result in an acquisition failure (FTA). This binary result has impeded system designers' ability to understand how users interact with a biometric system and how the biometric system responds to this human-sensor interaction, because all acquisition errors have been treated the same.

As the authors dissected this definition of FTA, it became apparent that factors and interactions existed that did not fit within the existing definition or purpose of FTA as noted above. The HBSI framework for biometric interactions examines each recorded human-sensor interaction as an event. Each event is classified as either an erroneous presentation or a correct presentation. Readers may question the difference between FTA (a traditional performance and evaluation metric), traditional usability metrics, and the metrics contained in the HBSI framework for biometric interactions. Recall that failure to acquire is a system-reported metric for which the biometric sub-system "fails to capture or locate an image or signal of sufficient quality" [2]. The HBSI framework for biometric interactions is concerned with each interaction with the biometric device, regardless of result. Summarizing the difference, FTA is concerned with biometric performance, whereas metrics in the HBSI framework evaluate biometric systems from both the system and the user perspective. Including both perspectives differentiates the HBSI framework from traditional usability evaluations: HBSI is not only concerned with improving user efficiency, effectiveness, and satisfaction, but also with the impact the interactions have on the biometric system. More in line with traditional usability testing, [14] discusses an alternate usability taxonomy for biometrics that considers only the user perspective. Work is currently underway with these researchers to map the HBSI framework onto the usability taxonomy.
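To make the binary view concrete, here is a minimal sketch of the traditional FTA computation as quoted from [2]. The helper name, input format, and example figures are illustrative assumptions, not part of ISO/IEC 19795-1.

```python
def fta_rate(acquired_flags):
    """Traditional binary view: each attempt either yields a usable sample or counts toward FTA.

    'acquired_flags' is a list of booleans (True = sample of sufficient quality captured).
    Illustrative only; the helper name and input format are assumptions.
    """
    attempts = len(acquired_flags)
    failures = sum(1 for ok in acquired_flags if not ok)
    return failures / attempts if attempts else 0.0

# Ten attempts, three acquisition failures -> FTA = 0.3
print(fta_rate([True] * 7 + [False] * 3))
```

The single rate says nothing about why the three failed attempts failed, which is exactly the gap the HBSI framework metrics in Section IV are intended to fill.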
IV. THE HBSI FRAMEWORK METRICS

The purpose of the HBSI framework is to understand what common correct and incorrect movements or behaviors occur with biometric devices. Correct presentations are interactions with the sensor that can be classified in one of three ways: failure to detect (FTD), failure to extract (FTX), or successfully acquired sample (SAS). An erroneous presentation is an interaction with the sensor that should be classified in one of three ways: defective interaction (DI), false interaction (FI), or concealed interaction (CI). The sum of these six classifications is equal to all the human-biometric sensor interactions with the system being evaluated. The following subsections discuss the six classifications included in the HBSI framework, which is shown in Fig. 2.

Fig. 2. HBSI framework for biometric interactions.

A. Erroneous Presentation

The first group of measurements within the HBSI framework involves erroneous presentations. These presentations are further classified into three metrics: defective interactions, concealed interactions, and false interactions.

1) Defective Interaction (DI)

A defective interaction (DI) occurs when a bad presentation is made to the biometric sensor and is not detected by the system. A DI results from an individual placing biometric features incorrectly on or at the sensor or, in the case of biometric camera devices (face, iris, etc.), not looking in the appropriate area or direction. An example of a DI is a user looking away from the camera when approaching an access control gate controlled by iris recognition. The biometric system was "correct" in not detecting the presence of the subject because the individual did not present appropriate biometric characteristics to the sensor.

DIs are important to measure because they provide quantifiable data for system design and throughput time, for example. Additionally, DIs provide crucial quantifiable data to facilitate better training materials or policy guides for system implementation.

2) Concealed Interaction (CI)

The next classification of erroneous presentations is the concealed interaction (CI). CIs occur when an erroneous presentation is made to the sensor that is detected by the biometric system, but is not handled or classified correctly as an "error" by the biometric system. CIs are therefore accepted as successfully acquired samples even though they result from erroneous presentations. In other words, concealed interactions are those attempts in which the user presents biometric characteristic(s) to the sensor, but uses the wrong biometric characteristic, yet the sensor records the interaction as a successfully acquired sample. The CI rate is defined as the proportion of presentations containing incorrect biometric characteristics acquired by the sensor that are classified as acceptable by the biometric system. To better explain where the anomaly occurred, CIs are segmented into two categories: user concealed interactions and system concealed interactions.

User CI: User concealed interactions result from presentations where the user is at fault for the erroneous presentation that is recorded as a successfully acquired sample. An example of a user CI with fingerprint recognition would be a user who is instructed to use one finger but chooses, for whatever reason, to use a different one. Even though the interaction with the incorrect finger over the sensor was performed correctly, the system accepted the presentation despite the wrong finger being used.
System CI: The second category of CI is from the system perspective. System CIs are the result of an erroneous presentation that contains unrecognizable features which are subsequently recorded as a successfully acquired sample. An example in a fingerprint system would be a user who inadvertently interacts with the sensor with the fingertip or interphalangeal joint when the expected or desired characteristics were the volar pads of the fingers; the system nevertheless records this inadvertent interaction as a successfully acquired sample. Another example of a system CI, involving face or iris recognition, would be an image of a shadow, a reflection, or other unrecognizable features being captured by the system and stored as a successfully acquired sample.

The CI rate, regardless of whether the interaction is user or system concealed, might have implications for measuring habituation. It should be noted that the commonly used term "habituation" might have to be revisited as we learn more about how people learn to use such devices. For example, with dynamic signature verification, individuals are probably habituated to the act of signing, but not to the interaction with a particular signature pad. Another example would be users who are accustomed to interacting with a biometric sensor one way but not another; for example, in fingerprint recognition, swipe versus slap. The authors will endeavor to define habituation, training, and learning with metrics and experimental data in subsequent articles.

3) False Interaction (FI)

The last classification possible within the erroneous presentation category is the false interaction (FI). An FI occurs when a user presents biometric features that are detected by the biometric system and correctly classified by the system as erroneous, due to a fault or error that originated from an incorrect action, behavior, or movement executed by the user. The FI rate is defined as the proportion of interactions with the sensor that the biometric system detects and correctly classifies as erroneous.

FIs are concerned with presentations in which a user does not interact properly with the sensor and the biometric system correctly recognizes the attempt as a problematic or erroneous presentation. The user presents biometric characteristic(s) to a biometric sensor, and the system both detects and acknowledges the error with a message and/or feedback prompting the user to retry.

The false interaction (FI) rate is proposed to evaluate the effectiveness of sensor design and training materials. Additionally, one would expect a difference in the FI rates of effectively trained versus ineffectively trained or untrained individuals, which has been examined in [15].

B. Correct Presentation

The second category of measurements within the HBSI framework involves correct presentations. These are further divided into three subcategories: failure to detect (FTD), failure to extract (FTX), and successfully acquired samples (SAS).
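With all three erroneous-presentation outcomes defined, the decision between them reduces to two questions about how the system responded to a presentation that test personnel judged incorrect. The following is a minimal sketch of that logic; the function and parameter names are our own assumptions, not HBSI definitions.

```python
def classify_erroneous(detected_by_system: bool, flagged_as_error: bool) -> str:
    """Classify an erroneous presentation (as judged by test personnel) as DI, FI, or CI."""
    if not detected_by_system:
        return "DI"   # bad presentation the system never detected
    if flagged_as_error:
        return "FI"   # detected and correctly reported as erroneous
    return "CI"       # detected but accepted as a successfully acquired sample

# A user presents the wrong finger; the sensor captures and accepts it anyway:
print(classify_erroneous(detected_by_system=True, flagged_as_error=False))  # -> "CI"
```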
1) Failure to Detect (FTD)

The first classification involving a correct presentation to the sensor that is not detected by the biometric system is called a failure to detect (FTD). FTD is defined as the proportion of presentations to the sensor that are observed by test personnel but are not detected by the biometric system. Failure to detect errors can be separated into system errors and external factor (environmental) errors.

System FTD: A system FTD occurs when a user presents their biometric characteristic(s) properly to the sensor (or the presentation appears correct to the data collection/analysis personnel), but the system does not detect that a presentation was made and remains in the same state as before the user interaction took place. The biometric sub-system does not detect the correct interaction of the user. For example, with iris recognition, a user stands in the appropriate position for the camera to collect iris characteristics, but the system fails to respond and/or does not detect the presence of the subject or iris, i.e., the biometric system remains in the same state as before the interaction.

External Factor FTD: An external factor FTD results from an extraneous factor impacting the ability of the biometric system to recognize a user's correct presentation to the sensor. For example, with face recognition, iris recognition, or hand geometry, if a light source is in the field of view of the device, the biometric system will not be able to detect the presence of the biometric characteristic(s). Due to factors beyond the control of both the user and the biometric system, the presentation cannot be detected.

The failure to detect rate provides data to system designers revealing the user interactions the system did not detect, regardless of the cause. The FTD rate exposes user interactions that have typically not been collected during performance evaluations. Understanding these issues will enable system designers to further improve devices and algorithms and reduce the frustration of users who believe they are interacting with the sensor correctly, yet whose features the sensor does not detect as being present.

2) Failure to Extract (FTX)

After a correct presentation is made to the sensor, it is detected by the sensor, and acquisition occurs, the system attempts to create biometric features from the collected sample. In the general biometric model, this occurs in the signal processing module. A failure to extract (FTX) is concerned with samples from the data collection module that cannot be processed completely. This may occur for a number of reasons, such as segmentation, feature extraction, or quality control. The authors debated whether to segment these errors into the individual components described in the general biometric model. However, as these components may or may not be present in a given biometric system, it would be hard to differentiate these errors into failure to segment, failure to feature extract, or failure to determine quality, and therefore a single grouping metric, failure to extract, was chosen. The biometric system would provide this metric in its log. Formally, failure to extract is defined as the proportion of acquired samples from which the system is unable to process or extract biometric features. FTX is a system error. An example of this error would be a biometric system that acquires a fingerprint sample from the data collection module but is unable to process the sample into biometric features and thus returns an error.

3) Successfully Acquired Samples (SAS)

As the name implies, a successfully acquired sample (SAS) occurs if a correct presentation is detected by the system and biometric features are able to be created from the sample. SAS result from presentations where biometric features can be processed from the captured sample and are then passed to the biometric matching system.
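Combining the erroneous and correct branches, the sketch below assigns each recorded interaction exactly one of the six labels and tallies their rates over an event log. The tuple format, function names, and field order are assumptions made for this illustration, not definitions from the framework.

```python
from collections import Counter

def classify_interaction(presentation_correct: bool,
                         detected: bool,
                         flagged_as_error: bool,
                         features_extracted: bool) -> str:
    """Assign one of the six HBSI labels to a single interaction (illustrative sketch)."""
    if presentation_correct:
        if not detected:
            return "FTD"   # correct presentation, never detected
        if not features_extracted:
            return "FTX"   # detected and acquired, but features could not be extracted
        return "SAS"       # detected and processed into biometric features
    if not detected:
        return "DI"        # erroneous presentation, never detected
    if flagged_as_error:
        return "FI"        # detected and correctly reported as erroneous
    return "CI"            # detected but accepted as a successfully acquired sample

def hbsi_rates(events) -> dict:
    """Proportion of all interactions falling into each of the six HBSI categories.

    'events' is an iterable of (presentation_correct, detected, flagged_as_error,
    features_extracted) tuples; the format is an assumption for this sketch.
    """
    events = list(events)
    counts = Counter(classify_interaction(*e) for e in events)
    n = len(events)
    return {label: (counts[label] / n if n else 0.0)
            for label in ("DI", "CI", "FI", "FTD", "FTX", "SAS")}
```

Because each interaction receives exactly one label, the six rates sum to one, consistent with the statement in Section IV that the six classifications together account for all human-biometric sensor interactions with the system under evaluation.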
V. REVISIONS TO THE BIOMETRIC MODEL

The HBSI framework discussed up to this point has depended upon one thing: the involvement of the human during data collection, so that the human-biometric sensor interaction can be thoroughly understood. To align the general biometric model (Fig. 3) with the proposed framework, the authors propose revising the signal processing module, where template creation currently resides, for two reasons. Additionally, the HBSI framework can easily be applied to scenario and operational performance evaluations, but it does not fully align with technology evaluations. Biometric technology evaluations typically use data collected at an earlier point in time to test biometric algorithms. The authors have noticed during some of these evaluations that the data were not originally intended for a biometric system. Thus, the general biometric model should reflect the use of non-biometrically captured (NBC) data. The following sections discuss these two revisions.

Fig. 3. General Biometric Model [2].

A. Template Creation and Enrollment

In the current general biometric model (Fig. 3), the creation of an enrollment template is undertaken in the signal processing module (Fig. 4A). In order to re-align the model with the system errors proposed in this paper, namely the failure to extract (FTX), the enrollment template creation is moved outside of the signal processing module (Fig. 4B). The movement of template creation outside signal processing is proposed to differentiate the Failure to Enroll (FTE) metric from the Failure to Extract (FTX) metric, which, as mentioned earlier, is a grouping metric for failures due to quality, feature extraction, and/or segmentation. In addition to differentiating FTE and FTX, this movement is also proposed to better handle non-biometrically captured (NBC) data.

Fig. 4. General biometric model: (A) current signal processing module and (B) proposed signal processing module with template creation relocated.
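To illustrate why relocating template creation makes FTX and FTE separately observable, here is a minimal two-stage pipeline sketch under our own assumptions; the class names, toy data formats, and policy checks are invented for exposition and are not drawn from the general biometric model or ISO/IEC 19795.

```python
from typing import List

class ExtractionError(Exception):
    """Signals a failure to extract (FTX): segmentation, feature extraction, or quality."""

class EnrollmentError(Exception):
    """Signals a failure to enroll (FTE): a template could not be created."""

def signal_processing(sample: bytes) -> List[float]:
    """Grouped extraction stage (illustrative): returns a feature vector or raises FTX.

    Segmentation, feature extraction, and quality control are deliberately folded
    into one stage, mirroring the grouping argument behind the FTX metric.
    """
    if not sample:                          # stand-in for any segmentation/quality failure
        raise ExtractionError("failure to extract (FTX)")
    return [float(b) for b in sample[:8]]   # toy "features" for illustration only

def create_enrollment_template(features: List[float]) -> List[float]:
    """Template creation, relocated outside signal processing (proposed model, Fig. 4B).

    Failures at this stage are now counted as FTE, separately from FTX.
    """
    if len(features) < 8:                   # stand-in for an enrollment policy check
        raise EnrollmentError("failure to enroll (FTE)")
    return features                         # toy template: the features themselves
```

In this arrangement a segmentation, feature-extraction, or quality failure surfaces as an FTX from the first stage, while a template that cannot be built from otherwise valid features surfaces as an FTE from the second, which is the distinction the proposed relocation is meant to preserve.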
B. Non-Biometrically Captured (NBC) Data

There are some cases in which the signal processing system may accept samples offline, such as in technology testing, which may use data that were not collected by a biometric system. These data are vastly different from data collected with a biometric device and should be labeled accordingly. The authors propose classifying data of this kind as non-biometrically captured (NBC) data. One example involves face recognition: photographs taken by a camera operator with no intention, at the time, of using them for face recognition. The camera has no software that specifically treats the image as a face, so the operator acquires a photograph from which the face recognition system may not be able to extract features. Another example is inked ten-print cards used with fingerprint recognition; at the time the data were collected, there was no intent for them to be scanned and used with a biometric system.

With respect to the biometric model, the dashed line in Fig. 5 stemming from the NBC data storage module indicates the process flow. The dashed line joins the process after signal processing, once biometric features have been extracted for either enrollment or matching. Again, the authors propose the NBC data storage area because some performance metrics associated with online data collection might not be available. The current general biometric model assumes that the process is performed online; this NBC route allows for offline processing, which is in line with technology testing.

Fig. 5. General biometric model with the additions of template creation and Non-Biometrically Captured (NBC) data.

VI. CONCLUSION

The general biometric model and the various definitions wrapped into it have provided the basis for biometric testing and reporting standards. As the biometric community examines and discusses these definitions and continues to test and evaluate systems, additions to the general biometric model and to the testing and reporting standards have to be considered. In our previous work, we explained the development of the concept of the human-biometric sensor interaction; this paper provides a series of terms and definitions for that model.

VII. FUTURE WORK

The authors have a series of future works planned in this area and have already applied the error framework to fingerprint recognition. Additional modalities will be evaluated with this framework, such as hand geometry, iris recognition, and dynamic signature verification. Other terms in use within the biometric community will also be examined within the context of the HBSI model; habituation is one example.

ACKNOWLEDGMENT

The authors would like to thank the members of INCITS M1.5 for their input during a presentation of this HBSI framework [12] in Morgantown, WV, on April 16, 2009. The authors would especially like to recognize Brad Wing, Patrick Grother, and Rick Lazarick for their feedback and contributions.
REFERENCES

[1] International Organization for Standardization, "ISO/IEC JTC1/SC37 Standing Document 2 - Text of WD Standing Document 2 Version 11 (SD2) Harmonized Biometric Vocabulary," ISO/IEC, Geneva, SC37N3068, 2009.
[2] International Organization for Standardization, "ISO/IEC 19795-1: Information technology - Biometric performance testing and reporting - Part 1: Principles and framework," ISO/IEC, Geneva, ISO/IEC 19795-1:2006(E), April 1, 2006.
[3] International Organization for Standardization, "ISO/IEC 19795-2: Information technology - Biometric performance testing and reporting - Part 2: Testing methodologies for technology and scenario evaluation," ISO/IEC, Geneva, ISO/IEC 19795-2:2007(E), February 1, 2007.
[4] International Organization for Standardization, "ISO/IEC TR 19795-3: Information technology - Biometric performance testing and reporting - Part 3: Modality-specific testing," ISO/IEC, Geneva, December 15, 2007.
[5] F. Tayyari and J. Smith, Occupational Ergonomics: Principles and Applications. Norwell: Kluwer Academic Publishers, 2003.
[6] International Organization for Standardization, "ISO 9241: Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability," ISO, Geneva, 1998.
[7] International Organization for Standardization, "Information technology - Biometric performance testing and reporting - Part 1: Principles and framework," ISO/IEC, Geneva, ISO/IEC 19795-1:2006(E), April 1, 2006.
[8] S. Elliott, E. Kukula, and S. Modi, "Issues Involving the Human Biometric Sensor Interface," in Image Pattern Recognition: Synthesis and Analysis in Biometrics, vol. 67, Series in Machine Perception and Artificial Intelligence, S. Yanushkevich, P. Wang, M. Gavrilova, and S. Srihari, Eds. Singapore: World Scientific, 2007, pp. 339-363.
[9] S. Elliott, E. Kukula, and N. Sickler, "The challenges of environment and the human biometric device interaction on biometric system performance," presented at the International Workshop on Biometric Technologies - Special Forum on Modeling and Simulation in Biometric Technology, Calgary, Alberta, Canada, 2004.
[10] E. Kukula, "Understanding the Impact of the Human-Biometric Sensor Interaction and System Design on Biometric Image Quality," presented at the NIST Biometric Quality Workshop II, Gaithersburg, MD, 2007.
[11] E. Kukula, "Design and Evaluation of the Human-Biometric Sensor Interaction Method," Ph.D. dissertation, Department of Industrial Technology, Purdue University, West Lafayette, IN, 2008, 510 pp.
[12] E. Kukula, "Framework for Human, System, and Administrative Errors in Biometric Systems," INCITS, Washington, DC, M1/09-0208, April 15, 2009.
[13] E. Kukula, S. Elliott, and V. Duffy, "The Effects of Human Interaction on Biometric System Performance," presented at the 12th International Conference on Human-Computer Interaction and 1st International Conference on Digital-Human Modeling, Beijing, China, 2007.
[14] R. Micheals, B. Stanton, M. Theofanos, and S. Orandi, "A Taxonomy of Definitions for Usability Studies in Biometrics," NIST, Gaithersburg, MD, NISTIR 7378, November 2006.
[15] E. P. Kukula and R. W. Proctor, "Human-Biometric Sensor Interaction: Impact of Training on Biometric System and User Performance," presented at the 13th International Conference on Human-Computer Interaction, San Diego, CA, 2009.

© Copyright BSPA Laboratory and Purdue University 2009. All rights reserved.
