This document examines the stability of iris recognition over a short period of time. It explains how iris recognition works and why the iris is considered unique and stable. The research analyzed iris image data collected over four weekly visits. The results showed no statistically significant difference in iris matching scores between visits, suggesting the iris is stable over a short time period and supporting its use in biometric identification applications that require stability over time.
In this research, intra-visit match score stability was examined for the human iris. Scores were found to be statistically stable in this short time frame.
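The stability finding above rests on comparing match-score distributions across visits. As an illustrative sketch (the scores below are synthetic, and the study's exact test statistic is not specified here), a one-way ANOVA F-statistic over scores grouped by visit can be computed as follows:

```python
# Hedged sketch: one-way ANOVA F-statistic for match scores grouped by visit.
# The scores below are synthetic illustrations, not data from the study.

def anova_f(groups):
    """Return the one-way ANOVA F-statistic for a list of score groups."""
    k = len(groups)                      # number of visits
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (weighted by group size)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Synthetic genuine match scores (lower Hamming distance = better match)
visits = [
    [0.21, 0.24, 0.22, 0.25],  # visit 1
    [0.23, 0.22, 0.24, 0.21],  # visit 2
    [0.22, 0.25, 0.23, 0.22],  # visit 3
    [0.24, 0.21, 0.23, 0.24],  # visit 4
]
f_stat = anova_f(visits)
# A small F relative to the F(k-1, n-k) critical value suggests
# no statistically significant difference between visits.
```

An F-statistic near zero, as in this synthetic case, is consistent with the "statistically stable" conclusion reported above.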
IT 34500 is an undergraduate course offered to Purdue West Lafayette students. The course introduces biometrics and automatic identification and data capture technologies.
Much of the Center's recent work has focused on topics concerning "time". Iris stability across different time scales has been at the forefront due to work in the undergraduate class IT345, the graduate class IT545, and Ben Petry's thesis. Of course, "time" is an imprecise word: assessing "stability over time" leaves the research question ambiguous, since time may mean milliseconds, months, years, or even the lifetime of the user. Upon examination of the academic literature, the reporting of research duration, collection interval, and the specific time frame of interest is sporadic at best and missing completely at worst. To address this issue, the Center has created the biometric duration scale (BDS) model, with associated suggested best practices for reporting time duration in biometrics.
The BDS model marries the general biometric model with the HBSI model to create a logical flow of five phases: the presentation definition phase, the sample phase, the processing phase, and the enrollment or matching phase. By tracking information through this progression, such as the specific subject presentations made, HBSI errors, and FTE/enrollment scores (to name a few), performance within the general biometric model can be examined. The BDS model goes one step further by defining specific durations over which to report research-specific metrics. With this model, outcomes that affect yearly performance metrics can be examined through monthly performance, daily performance, or even specific user presentations, showing how those subcomponents affect the whole system.
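The decomposition described above can be sketched as a simple rollup of presentation-level outcomes into coarser duration buckets. The record format and field names here are illustrative assumptions, not the Center's actual schema:

```python
# Hedged sketch of the BDS idea of rolling presentation-level outcomes up
# into coarser duration buckets. The records are synthetic illustrations.
from collections import defaultdict
from datetime import date

# Each record: (date of presentation, 1 = successful match, 0 = error)
presentations = [
    (date(2016, 1, 4), 1), (date(2016, 1, 4), 0),
    (date(2016, 1, 11), 1), (date(2016, 2, 1), 1),
]

def rollup(records, key):
    """Group outcomes by a duration key and report the success rate."""
    buckets = defaultdict(list)
    for ts, ok in records:
        buckets[key(ts)].append(ok)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

daily = rollup(presentations, lambda d: d.isoformat())
monthly = rollup(presentations, lambda d: (d.year, d.month))
yearly = rollup(presentations, lambda d: d.year)
# A dip in the yearly figure can now be traced to the month, day,
# or individual presentations that caused it.
```

Because every coarser metric is derived from the same presentation-level records, a yearly number can always be decomposed back into its subcomponents, which is the point of the model.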
Additionally, best practices for the reporting of duration are also included. The reporting methodology stems from ISO 8601 and is in compliance with ISO 21920. In the common reporting structure, the start date, duration, number of visits, collection interval, and time scope of interest for the specific research are given in a logical, readily available format, alongside the precise, detailed ISO 8601 notation. The goal of creating a formal script for reporting research duration is to eliminate ambiguity and create an environment that encourages replication and drawing parallels across studies.
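One way such a reporting structure might be encoded is with ISO 8601 repeating intervals (Rn/start/duration). The dates and visit counts below are hypothetical, not the study's actual schedule:

```python
# Hedged sketch of duration reporting via ISO 8601 repeating intervals.
# The schedule shown (four weekly visits from 2016-01-04) is hypothetical.
from datetime import date, timedelta

def iso8601_repeating(start: date, visits: int, interval_days: int) -> str:
    """Encode a collection schedule as an ISO 8601 repeating interval."""
    return f"R{visits}/{start.isoformat()}/P{interval_days}D"

def visit_dates(start: date, visits: int, interval_days: int):
    """Expand the schedule into the individual visit dates."""
    return [start + timedelta(days=i * interval_days) for i in range(visits)]

schedule = iso8601_repeating(date(2016, 1, 4), 4, 7)
dates = visit_dates(date(2016, 1, 4), 4, 7)
# schedule compactly states start date, number of visits, and interval,
# which is exactly the information the best practices ask to be reported.
```

A single string like this removes the ambiguity the passage describes: a reader can recover the start date, visit count, and collection interval without hunting through the prose.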
Introduction to Bio-metrics and its types (Vinit Varu)
This presentation introduces various biometric modalities, such as signature, voice, fingerprint, face, iris, and retina, and their basic workings. This group presentation was held at the College of Engineering, Pune. The slides contain little text; the images convey most of the content.
A study of iris recognition technology compared with the biometric technologies in use today. The study shows how beneficial iris technology can be in the future. I have put all my effort into this study and have made a simple, easy-to-understand presentation.
Iris recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on video images of one or both irises of an individual's eyes, whose complex patterns are unique, stable, and visible from some distance.
Retinal scanning is a different, ocular-based biometric technology that uses the unique patterns of a person's retinal blood vessels and is often confused with iris recognition. Iris recognition uses video camera technology with subtle near-infrared illumination to acquire images of the detail-rich, intricate structures of the iris, which are visible externally.
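The pattern comparison described above is commonly implemented, for example in Daugman-style matchers, as a fractional Hamming distance between binary iris codes. The following is a toy sketch with 16-bit codes; real iris codes run to thousands of bits and use occlusion masks:

```python
# Hedged sketch: fractional Hamming distance between two binary iris codes.
# The 16-bit codes here are toy examples, far shorter than real iris codes.

def hamming_distance(code_a: int, code_b: int, bits: int) -> float:
    """Fraction of disagreeing bits between two equal-length bit strings."""
    return bin(code_a ^ code_b).count("1") / bits

# Same eye, slight acquisition noise: one bit flipped
same_eye = hamming_distance(0b1011001110001011, 0b1011001010001011, 16)
# Different eyes: roughly half the bits disagree
diff_eye = hamming_distance(0b1011001110001011, 0b0011100111001110, 16)
# Genuine comparisons cluster near 0; impostor comparisons near 0.5,
# so a threshold between the two separates match from non-match.
```

The threshold placed between the genuine and impostor distributions is what turns this distance into an accept/reject decision.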
Managing sensitive data at the Australian Data Archive (ARDC)
Dr Steven McEachern, Director, Australian Data Archive, presenting at the Managing and publishing sensitive data in the Social Sciences webinar on 29/3/17
FULL webinar recording: https://youtu.be/7wxfeHNfKiQ
Webinar description:
1) Dr Steve McEachern (Director, Australian Data Archive) discussed how the Australian Data Archive manages and publishes sensitive social science data.
More about ADA: -- The Australian Data Archive (ADA) provides a national service for the collection and preservation of digital research data, making these data available for secondary analysis by academic researchers and other users. -- The ADA comprises seven sub-archives: Social Science, Historical, Indigenous, Longitudinal, Qualitative, Crime & Justice, and International. -- ADA data is free of charge to all users. -- The archive is managed by the ADA central office, based in the ANU Centre for Social Research and Methods at the Australian National University (ANU). https://www.ada.edu.au/
This research focused on classifying Human-Biometric Sensor Interaction errors in real time. The Kinect 2 was used as a measuring device to track the position and movements of the subject through a simulated border control environment. Knowing, in detail, the state of the subject ensures that the human element of the HBSI model is analyzed accurately. A network connection was established with the iris device to know the state of the sensor and biometric system elements of the model. Information such as detection rate, extraction rate, quality, capture type, and other metrics was available for use in classifying HBSI errors. A Federal Inspection Station (FIS) booth was constructed to simulate a U.S. border control setting in an international airport. The subjects were taken through the process of capturing iris and fingerprint samples in an immigration setting. If errors occurred, the Kinect 2 program would classify the error and save it for further analysis.
The human signature provides a natural, publicly accepted, legally admissible method for providing authentication to a process. Automatic biometric signature systems assess both the drawn image and the temporal aspects of signature construction, providing enhanced verification rates over and above conventional outcome assessment. Capturing these constructional data requires the use of specialist ‘tablet’ devices. In this paper we explore the enrolment performance of a range of common signature capture devices and investigate the reasons behind user preference. The results show that writing feedback and familiarity with conventional ‘paper and pen’ donation configurations are the primary motivations for user preference. These results inform the choice of signature device from both technical performance and user acceptance viewpoints.
More Related Content
Similar to Examining Intra-Visit Iris Stability - Visit 5
The inherent differences between secret-based authentication (such as passwords and PINs) and biometric authentication have left gaps in the credibility of biometrics. These gaps are due, in large part, to the inability to adequately cross-compare the two types of authentication. This paper provides a comparison between the two by equating biometric entropy in the same way the entropy of secrets is represented. Similar to the method used by Ratha, Connell, and Bolle [1], the x and y dimensions of the fingerprints were examined to determine all possible locations of minutiae. These locations were then examined based on the observed probability of minutiae occurring in each of the designated locations. The results of this work show statistically significant differences in the frequencies and probabilities of occurrence for minutiae location, type, and angle, across all possible minutiae locations. These components were applied to Shannon’s Information Theory [2] to determine the entropy of fingerprint biometrics, which was estimated to be equivalent to an 8.3-character, randomly chosen password.
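The entropy-to-password-length mapping described above can be sketched with Shannon's formula. The probability distribution and the 94-character printable-ASCII alphabet below are assumptions of this sketch, not necessarily the paper's parameterization:

```python
# Hedged sketch of converting biometric entropy into an equivalent
# random-password length. The minutiae probabilities are toy values.
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy distribution over 256 equally likely minutiae configurations -> 8 bits
probs = [1 / 256] * 256
h_bits = shannon_entropy(probs)

def equivalent_password_chars(bits, alphabet=94):
    """Length of a random password over `alphabet` symbols with equal entropy."""
    return bits / math.log2(alphabet)

# Under the 94-symbol assumption, ~54 bits of biometric entropy works out
# to roughly an 8.3-character random password.
chars = equivalent_password_chars(54.4)
```

The conversion shows why the comparison is alphabet-dependent: the same number of bits equates to a longer password over a smaller symbol set.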
The stability score index, conceptualized in 2013, was designed to address the weaknesses of the zoo menagerie and other performance metrics by quantifying the relative stability of a user from one condition to another. In this paper, the measure of interoperability is the stability score from enrolling on one sensor and verifying on multiple sensors. The results showed that, like system performance, individual performance was not stable across these sensors. When examining stability by sensor family (capacitance, optical, and thermal), we find that capacitive enrollment sensors were the least stable. When both enrolling and verifying on a thermal sensor, individuals were the most stable of the three family types. With respect to interaction type, enrolling on touch and verifying on swipe was more stable than enrolling on swipe and verifying on swipe, which was an interesting finding. Individuals using the thermal sensor generated the most stable stability scores.
According to a report by Frost and Sullivan in 2007, revenues for non-AFIS fingerprint devices in notebook PCs and wireless devices are anticipated to grow from $148.5 million to $1,588.0 million by 2014, a compound annual growth rate of 40.3% [1]. The AFIS market has a compound annual growth rate of 15.2%, with revenues of $445.0 million in 2007. With the development of mobile applications in a number of different market segments, such as healthcare, retail, and law enforcement, this paper analyzed the performance of fingerprints of different sizes, from different sensors...
This is a preview of the databases we use in the Center. The presentation overviews our data collection GUI, data storage (data warehouse), and our project management database. These databases work together to allow us to run our operations efficiently.
Presented at The 8th International Conference on Information Technology and Applications (ICITA 2013), Sydney Australia, July 1 - July 4 2013.
The purpose of this paper is to illustrate the automatic detection of biometric transaction times using hand geometry as the modality of interest. Video recordings were segmented into individual frames and processed through a program to automatically detect interactions between the user and the system. Results include a mean enrollment time of 15.860 seconds and a mean verification time of 2.915 seconds.
Michael Brockly's M.S. thesis presentation for Purdue University, December 2013.
This study created a framework to quantify and mitigate the amount of error that test administrators introduce to a biometric system during data collection. Prior research has focused only on the subject and the errors they make when interacting with biometric systems, while ignoring the test administrator. This study used a longitudinal data collection, focusing on demographics in government identification forms such as driver’s licenses, fingerprint metadata such as moisture and skin temperature, and face image compliance with an ISO best practice standard. Error was quantified from the first visit and baseline test administrator error rates were measured. Additional training, software development, and error mitigation techniques were introduced before a second visit, in which the error rates were measured again. The new system greatly reduced the amount of test administrator error and improved the integrity of the data collected. Findings from this study show how to measure test administrator error and how to reduce it in future data collections.
The impact of force on acquiring a high-quality fingerprint image has been systematically studied by researchers using single-print optical scanners. A previous study examined force levels ranging from 3 N to 21 N using a single-print optical sensor. A second experiment used force levels ranging from 3 N to 11 N using capacitive and optical single-print sensors. Additional work has looked at smaller increments of force, also using an optical sensor. This paper contributes to the body of knowledge by using an alternative fingerprint sensor and an alternative force level, and compares the results to the auto-capture method.
The purpose of this study was to investigate bacterial recovery and transfer from three biometric sensors and the survivability of bacteria on the devices. The modalities tested were fingerprint, hand geometry, and hand vein recognition, all of which require sensor contact with the hand or fingers to collect the biometric. Each sensor was tested separately with two species of bacteria, Staphylococcus aureus and Escherichia coli. Survivability was investigated by sterilizing the sensor surface, applying a known volume of diluted bacterial culture to the sensor, and allowing it to dry. Bacteria were recovered at 5, 20, 40, and 60 minutes after drying by touching the contaminated device with a sterile finger cot. The finger cot was re-suspended in 5 mL of saline solution, and dilutions were plated to obtain live cell counts from the bacterial recovery. The transferability of bacteria from each device surface was investigated by touching the contaminated device and then touching a plate to transfer the bacteria to growth medium to obtain live cell counts. The time lapse between consecutive touches was one minute, and the number of touches was n = 50. Again, S. aureus and E. coli were used separately as detection organisms. This paper describes the results of the study in terms of survival curves and transfer curves of each bacterial strain for each device.
This paper discusses the implementation issues of installing a commercially available hand geometry system in the current infrastructure of Purdue University's Recreational Sports Center. In addition to a performance analysis of the system, pre- and post-data-collection surveys were distributed to the 129 test subjects, gathering information on perceptions of biometrics, in particular hand geometry, as well as participants' thoughts and feelings during their interaction with the hand geometry device. The results of the survey suggest that participants were accepting of hand geometry. Analyses of the survey responses revealed that 93% liked using hand geometry, 98% thought it was easy to use, and 87% preferred it to the existing card-based system, while nobody thought the device invaded their personal privacy. System performance achieved a 3-try match rate of 99.02% (FRR 0.98%) when "gaming"/potential impostor attempts were removed from analysis. The failure-to-enroll rate was zero. Statistical analyses revealed a significant difference in the scores of attempts made by users with prior hand geometry usage and by subjects who could not straighten their hand on the device. However, there was no statistically significant difference in the effects of rings/no rings, improper/proper hand placements, or gender on hand geometry score.
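The 3-try match rate quoted above counts a genuine user as accepted if any of up to three attempts succeeds. A minimal sketch with synthetic outcomes, not the study's data:

```python
# Hedged sketch: computing a 3-try false reject rate (FRR) from per-user
# attempt outcomes. The outcomes below are synthetic illustrations.

def three_try_frr(attempt_sets):
    """FRR = fraction of genuine users rejected on all of up to 3 tries."""
    rejects = sum(1 for tries in attempt_sets if not any(tries[:3]))
    return rejects / len(attempt_sets)

# Each inner list: success (True/False) of each try for one genuine user
outcomes = [[True], [False, True], [False, False, False], [True]]
frr = three_try_frr(outcomes)   # 1 reject out of 4 users -> 0.25
match_rate = 1 - frr            # 3-try match rate
```

Allowing retries is why the 3-try figure (99.02% in the study) is more forgiving than a single-attempt match rate would be.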
More from International Center for Biometric Research (20)
Examining Intra-Visit Iris Stability - Visit 5
1. EXAMINING INTRA-VISIT
IRIS STABILITY (VISIT 5)
Elizabeth Bartley, Grant Eibling, Tim Kovacic, Tomas
Kratka, Dustin Phillips, Cameron Posey, Shivaadas
Silvadas, Ben Petry, Steve Elliott, Kevin Chan
2. OVERVIEW - INTRODUCTION
• How Do We Identify?
• Verification vs
Identification
• What is Biometrics?
• Physiological vs
Behavioral
• Why Iris Recognition?
• Structure of the Eye
• Structure of the Iris
• What is Stability?
• Iris Stability Over Time
(Aging)
• Template vs Iris Aging
• Stability Research
• So What?
3. • What you have – Token
• Drivers license, passport, Social Security card
• What you know – Knowledge
• Password, keyword, PIN
• What you are – Biometrics
• Iris, Fingerprint, Face, and Voice
HOW DO WE IDENTIFY?
4. • Verification
• I am who I say I am
• Matching an individual to a stated identity
• 1:1
• Identification
• I am not who I say I am not
• Matching an individual to all templates in a database
• 1:N
VERIFICATION VS IDENTIFICATION
5. “Biometrics is defined as any automatically
measurable, robust, and distinctive physical
characteristic or personal trait that can be used to
identify an individual or verify the claimed identity of
an individual” [1]
WHAT IS BIOMETRICS?
6. •Remains with person barring catastrophic physical
damage
•Must be unique person-to-person
•Found in most of population
•Remains stable over time
WHY BIOMETRICS?
7. •Physiological – Measurement of body parts
• Fingerprint
• Iris
• Face
• Palm vein
•Behavioral – Measurement of actions of user
• Keystroke
• Gait
• Signature
PHYSIOLOGICAL VS BEHAVIORAL
BIOMETRICS
8. • Daugman states that the iris is an, “internal
(yet externally visible) organ of the eye, the
iris is well protected from the environment
and stable over time” [2]
WHY IRIS RECOGNITION?
9. • Capture from a distance – no interaction, on the move
• Internal, yet externally visible
• The details of the iris, such as striations, patterns, rings, and
freckles make each one completely unique from any other in
the world.
• An individual has a 1 in 10^78 chance of their iris matching completely
to any other iris in the world, even their own opposite iris [2]
WHY IRIS RECOGNITION?
10. STRUCTURE OF THE EYE
• The iris is the colored
portion of the eye
• Its outer bounds are defined
by the white sclera
• Its inner bounds are defined
by the black pupil
[3]
11. •The iris has a plethora of variation and
complex structures unique to each individual
•This makes the iris particularly well suited
to recognition
STRUCTURE OF THE IRIS
12. •The resiliency to variation of a biometric
modality over a determined time interval or the
resiliency to change given certain
environmental factors
WHAT IS STABILITY?
13. IRIS STABILITY OVER TIME (AGING)
• There is debate as to whether or not the iris changes over
time due to aging
• Iris aging is a definitive change in the iris texture pattern
due to human aging
• Evidence has shown that there is no change in the iris over
time due to aging
14. • A template aging effect occurs when the quality of
the match between an enrolled biometric sample
and a sample to be verified degrades with the
increased elapsed time between the two samples.
• Algorithm to find a match finds a difference causing
the match scores to decrease.
• Iris aging is a definite change in the iris texture
pattern that occurs from human aging.
TEMPLATE VS IRIS AGING
15. • Research determining iris stability over time
• Data collected weekly, over four years [4]
• Data collected bi-annually
• Same result [5]
STABILITY RESEARCH
16. •Given prior research, there are debates about how
stable an iris is over time
•How long is the iris stable for?
•This research asks: during a ten-minute
enrollment period, do the iris match scores prove
to be statistically stable?
RESEARCH QUESTIONS
17. • How to Identify a Person
• Verification
• Identification
• Biometric Authentication
• History of Biometrics
• What is the Iris?
• Why is the Iris Unique?
• Iris Recognition History
• How Iris Recognition Works
• Stability of the Iris
• Ways of Analyzing Biometric
Consistency
• ROC/DET Curves
• Zoo Menagerie
• Advantages/Disadvantages of the
Zoo Menagerie
• Stability Score Index (SSI)
LITERATURE REVIEW - OVERVIEW
18. •There are three ways to identify a person:
• Knowledge (passwords, PINs, etc.)
• Tokens (credit card, student ID card, etc.)
• Biometrics (fingerprint, iris, etc.)
•The challenge now lies in making biometrics a viable
way to provide security for a person.
HOW TO IDENTIFY A PERSON
19. •Biometrics is a way of uniquely identifying a person
through their physical and behavioral traits
• Physical traits include fingerprints, the iris, the face, etc.
• Behavioral traits include speech, signature, etc.
•Because it relies on these characteristics,
biometrics reduces the chances of fraud.
BIOMETRIC AUTHENTICATION
20. • In ancient cultures, like China, Babylon, and Egypt, people were using
biometrics to identify important documents and mark their land.
• By the late 1800’s, police departments started using fingerprints as a
method of identification.
• Concepts such as face and iris recognition came about in the late
1900’s, which gave us more options for security at places like airports
and government facilities. [6]
HISTORY OF BIOMETRICS
21. •The concept of iris recognition has developed
since Adler proposed an image of the iris as a
means of identification [8]
• John Daugman developed the first iris
recognition algorithm [9]
IRIS RECOGNITION HISTORY
22. • Iris recognition requires a person’s iris to be matched against a
template provided when the person enrolled in the system.
• The iris must first be segmented.
• Involves the use of edge detection techniques to eliminate “irrelevant” information like
the pupil, sclera, and eyelid.
• Next, the iris is normalized
• Translates the image into a rectangular image with fixed dimensions.
• Recognition systems will compare a person’s iris code against a
template using the Hamming Distance algorithm.
HOW IRIS RECOGNITION WORKS
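The Hamming Distance comparison on this slide can be sketched in a few lines. This is a minimal illustration, not the study's actual matching code: the function name, the toy 16-bit codes, and the mask handling are ours (real iris codes are typically 2048 bits, with masks marking bits occluded by eyelids or reflections).

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid (unmasked) in both samples."""
    valid = mask_a & mask_b                  # bits usable in both samples
    disagreeing = (code_a ^ code_b) & valid  # valid bits that differ
    return disagreeing.sum() / valid.sum()

# Toy 16-bit codes; a genuine match yields a distance near 0,
# while two unrelated irises land near 0.5.
a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=bool)
b = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=bool)
m = np.ones(16, dtype=bool)                  # no occluded bits in this toy case
print(hamming_distance(a, b, m, m))          # 2 of 16 bits differ -> 0.125
```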
23. •Although the iris is stable over time [10], the
iris template can change.
•Changes that can affect stability include, but are
not limited to:
•The presence of visual aids (like glasses or contacts)
•The occlusion of the iris caused by the eyelids
STABILITY OF THE IRIS
24. •To analyze the performance of the iris, we can
use the following tools:
•Receiver Operating Characteristic (ROC) Curve
•Detection Error Trade-off (DET) Curve
•Zoo menagerie
WAYS OF ANALYZING BIOMETRIC
CONSISTENCY
25. •Receiver Operating Characteristics (ROC)
curves
• Display the tradeoff between exactly confirming a user to a template
against analyzing the wrong person
•Detection Error Trade-Off (DET) curves
• Display the trade-off of the false accept rate and false reject rate.
ROC/DET CURVES
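The trade-off behind both curve types comes from evaluating two error rates at a single decision threshold; sweeping the threshold over all observed scores traces out the ROC or DET curve. A minimal sketch with hypothetical scores (the function name and data are ours, and we assume higher scores mean a better match):

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """False accept rate and false reject rate at one decision threshold."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.mean(impostor >= threshold)  # impostors wrongly accepted
    frr = np.mean(genuine < threshold)    # genuine users wrongly rejected
    return far, frr

genuine = [0.90, 0.85, 0.88, 0.60]   # hypothetical genuine match scores
impostor = [0.20, 0.35, 0.55, 0.10]  # hypothetical impostor match scores
print(far_frr(genuine, impostor, 0.65))  # (0.0, 0.25)
```

Raising the threshold lowers the false accept rate at the cost of a higher false reject rate, which is exactly the trade-off the curves display.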
26. • Plots that show an individual’s matching performance relative to others
• A collection of different animals that are used to describe a subject’s matching
tendencies, which include:
• Sheep: the default population; they match well with themselves and poorly
against others.
• Goats: difficult to recognize; they have low match scores against themselves.
• Lambs: easiest to imitate; they match well with others which can lead to false
accepts.
• Wolves: able to imitate others easily [11]
ZOO MENAGERIE
27. •Advantages
• Helps researchers identify the biggest threats to biometric systems and
how they can protect these systems from creating false matches.
• Identify mistakes in a system algorithm or data capturing
•Disadvantages
• Classifications depend on the calibration of the iris recognition system.
• Dependent on the algorithm used to calculate match scores and the iris
used for comparisons
ADVANTAGES/DISADVANTAGES OF
THE ZOO MENAGERIE
28. •Created by O’Connor [12]
•Used to calculate the stability for each
individual from one menagerie level to another
STABILITY SCORE INDEX
29. • Extract dataruns from the image database housed at ICBR
• Identify Errors
• Clean data
• Exporting required subjects from the database
• Create groupings for each iris for each visit
• Split groupings into their own dataruns
METHODOLOGY - OVERVIEW
30. •Number the images per iris per subject
•Examined for number of images then made
determination:
• Too many images, check why: n > 25
• Usable number: 12 ≤ n ≤ 25
• Unusable number of images: n < 12
IDENTIFYING ERRORS
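The triage rule above can be expressed directly in code. A small sketch using the thresholds from this slide (the function name and labels are ours, not part of the study's tooling):

```python
def triage_image_count(n):
    """Classify an iris grouping by its image count:
    12-25 images is usable, fewer is unusable, more needs investigation."""
    if n > 25:
        return "too many - investigate"
    if n < 12:
        return "unusable"
    return "usable"

print(triage_image_count(30))  # too many - investigate
print(triage_image_count(18))  # usable
print(triage_image_count(5))   # unusable
```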
31. • Each cleaned and segmented file was split into files
for each grouping per visit and given a DatarunID.
• These files were then uploaded into the Database
which used the DatarunID and LocatorNum to create
new dataruns which can be exported to Dataset and
ran through Megamatcher.
• Number of LocatorNum per file = 180.
SPLIT GROUPINGS INTO THEIR OWN
DATARUNS
32. • Images were matched using Neurotechnology’s
Megamatcher 4.0
• Output results as genuine and impostor scores
• Genuine scores indicate a proven match to a given
template
• Impostor scores indicate a non-match to a given
template
IMAGE MATCHING
33. •Yager and Dunstone menageries were created
for the dataruns
•Used because it visually compares the
genuine and impostor scores
MENAGERIE EVALUATIONS
34. •Calculates the distance between any two points of
a zoo menagerie using genuine and impostor
scores
•Used to calculate the difference between the data
runs
• Ranges from 0 (stable) to 1 (unstable)
STABILITY SCORE INDEX (SSI)
35. • Images scored using SSI
• SSI is the Euclidean distance between two
points in menagerie
DATA ANALYSIS
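Since the SSI is the Euclidean distance between an individual's positions in two menagerie plots, it reduces to a one-line computation. A sketch with a hypothetical individual, assuming each position is the pair (mean genuine score, mean impostor score); the coordinates and function name are illustrative, not values from the study:

```python
import math

def stability_score_index(point_a, point_b):
    """Euclidean distance between an individual's positions in two
    zoo-menagerie plots. 0 means perfectly stable; values toward 1
    indicate instability."""
    (g1, i1), (g2, i2) = point_a, point_b
    return math.hypot(g2 - g1, i2 - i1)

# Hypothetical individual occupying nearly the same position in two data runs:
print(stability_score_index((0.82, 0.15), (0.80, 0.17)))  # ~0.028, very stable
```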
40. • Data collection began on 11 June 2010 and lasted for
1 year and 2 days (2010-06-11Z/P1Y0M0W2D).
• The time scope of interest for this report is in the day
range.
• The collection period of interest for this analysis began
on 11 April 2013 and lasted for four weeks and
1 day (2013-04-11Z/P0Y0M4W1D).
COLLECTION PERIOD
41. VISIT 1 N H DF P
Group 1 60 4.69 2 0.096
Group 2 60 4.39 2 0.111
Group 3 60 5.02 2 0.081
Group 4 60 2.26 2 0.324
RESULTS
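The H statistics above come from Kruskal-Wallis tests with 2 degrees of freedom (three groupings per comparison). A minimal hand-rolled version of the H statistic, without the tie-correction term, run on hypothetical score data (this is a sketch of the test, not the study's analysis code):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic for k independent samples,
    using midranks for ties (no tie-correction divisor)."""
    pooled = sorted(x for g in groups for x in g)
    # Assign 1-based ranks; tied values share the average of their ranks.
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    n = len(pooled)
    return 12 / (n * (n + 1)) * sum(
        sum(rank[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

# Hypothetical scores for three pairings; compare H against the
# chi-square critical value 5.99 (df = 2, alpha = 0.05).
groups = [[0.12, 0.14, 0.11], [0.13, 0.16, 0.10], [0.15, 0.12, 0.13]]
print(kruskal_h(groups))
```

An H below the critical value (equivalently, p > 0.05) means we fail to reject the null hypothesis of equal group distributions, which is the pattern reported in the four analyses that follow.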
42. GROUPING 1 - ANALYSIS
There was no statistically significant difference
between the average scores of the groupings (H(2) =
4.69, p = 0.096), with mean scores of 0.12999
for grouping 1-2, 0.11809 for grouping 1-3, and
0.7645 for grouping 1-4
There was no statistically significant difference
between the average scores of the groupings (H(2) =
4.39, p = 0.111), with mean scores of 0.12999
for grouping 2-1, 0.8321 for grouping 2-3, and
0.13582 for grouping 2-4
GROUPING 2 - ANALYSIS
44. GROUP 3 - ANALYSIS
There was no statistically significant difference
between the average scores of the groupings (H(2) =
5.02, p = 0.081), with mean scores of 0.11809
for grouping 3-1, 0.8321 for grouping 3-2, and
0.11038 for grouping 3-4
45. GROUP 4 - ANALYSIS
There was no statistically significant difference
between the average scores of the groupings (H(2) =
2.26, p = 0.324), with mean scores of 0.7645 for
grouping 4-1, 0.13582 for grouping 4-2, and 0.11038
for grouping 4-3
47. During a ten-minute enrollment period, do the
iris match scores prove to be statistically stable?
RESTATING THE HYPOTHESIS
48. •There was no statistically significant difference
between the four data runs, as shown in the
results section.
•All data runs have a p-value greater than the
alpha of 0.05, which is why we fail to reject our
null hypothesis
RESULTS SUMMARIZED
49. •These results support the claim that the iris is
stable, at least over a short time period
REVIEWING STABILITY OF THE IRIS
50. •The results show that the iris is stable over a
short period of time (one visit)
•This can be later expanded to see if the iris is
stable over longer periods of time
CONTRIBUTION TO THE FIELD
51. •Testing the stability of the iris over longer
periods of time (days, weeks, etc.)
•Continued replication with similar data
FUTURE WORK
52. [1] Woodward Jr, J. D., Horn, C., Gatune, J., & Thomas, A. (2003). Biometrics: A look at facial recognition. RAND Corp, Santa Monica, CA.
[2] Daugman, J. (2004). How iris recognition works. Circuits and Systems for Video Technology, IEEE Transactions on, 14(1), 21-30.
[3] Structure of the Eye, http://www.uofmhealth.org/health-library/tp9807
[4] Baker, S. E., Bowyer, K. W., & Flynn, P. J. (2009). Empirical evidence for correct iris match score degradation with increased time-lapse
between gallery and probe matches. In Advances in Biometrics (pp. 1170-1179). Springer Berlin Heidelberg.
[5] Tome-Gonzalez, P., Alonso-Fernandez, F., & Ortega-Garcia, J. (2008, September). On the effects of time variability in iris recognition.
In Biometrics: Theory, Applications and Systems, 2008. BTAS 2008. 2nd IEEE International Conference on (pp. 1-6). IEEE.
[6] History of Biometrics. (n.d.). Retrieved February 20, 2015, from http://www.biometricupdate.com/201501/history-of-biometrics
[7] Iris ID - Iris Recognition Technology : Iris Recognition Technology. (n.d.). Retrieved February 20, 2015, from
http://www.irisid.com/irisrecognitiontechnology
[8] Adler, F.H., Physiology of the Eye (Chapter VI, page 143), Mosby (1953)
[9] Daugman, J. (2004). How iris recognition works. Circuits and Systems for Video Technology, IEEE Transactions on, 14(1), 21-30.
[10] Daugman, J. (2006). Probing the uniqueness and randomness of IrisCodes: Results from 200 billion iris pair comparisons. Proceedings of the
IEEE, 94(11), 1927-1935
[11] Doddington, G., Liggett, W., Martin, A., Przybocki, M., & Reynolds, D. (1998, November). Sheep, goats, lambs and wolves: an analysis of
individual differences in speaker recognition performance. In the International Conference on Spoken Language Processing (ICSLP), Sydney.
[12] O'Connor, K. J. (2013). Examination of stability in fingerprint recognition across force levels, MS. Thesis, Purdue University, West Lafayette,
IN.
BIBLIOGRAPHY