This research focused on classifying Human-Biometric Sensor Interaction (HBSI) errors in real time. The Kinect 2 was used as a measuring device to track the position and movements of the subject through a simulated border control environment. Knowing the state of the subject in detail ensures that the human element of the HBSI model is analyzed accurately. A network connection was established with the iris device to monitor the state of the sensor and biometric system elements of the model. Information such as detection rate, extraction rate, quality, and capture type was available for use in classifying HBSI errors. A Federal Inspection Station (FIS) booth was constructed to simulate a U.S. border control setting in an international airport. Subjects were taken through the process of capturing iris and fingerprint samples in an immigration setting. If errors occurred, the Kinect 2 program would classify each error and save it for further analysis.
5. KINECT BODY TRACKING
• All face points are a built-in feature of the Kinect.
• These track the eyes, nose, and mouth corners.
• 17 upper body points are tracked, not including the face.
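The tracked points above can be represented as a simple per-frame structure. The sketch below is hypothetical Python, not the actual Kinect SDK (which exposes these through its own Body/Joint types); the joint name, capture-zone thresholds, and coordinates are illustrative assumptions, not values from the study.

```python
from dataclasses import dataclass

# Hypothetical container for the body points the Kinect reports each frame.
# The real SDK delivers these through Body/Joint objects; this class only
# illustrates the shape of the data the classifier consumes.
@dataclass
class TrackedBody:
    joints: dict       # joint name -> (x, y, z) in meters, camera space
    face_points: dict  # "left_eye", "right_eye", "nose", "mouth_left", "mouth_right"

def head_in_capture_zone(body: TrackedBody,
                         zone_min_z: float = 0.6,
                         zone_max_z: float = 0.9) -> bool:
    """Illustrative check: is the subject's head at the distance the iris
    device expects? The thresholds are placeholders, not the study's."""
    _, _, z = body.joints["Head"]
    return zone_min_z <= z <= zone_max_z

body = TrackedBody(
    joints={"Head": (0.02, 0.35, 0.75)},
    face_points={"left_eye": (310, 180), "right_eye": (350, 181),
                 "nose": (330, 205), "mouth_left": (318, 230),
                 "mouth_right": (342, 231)},
)
print(head_in_capture_zone(body))  # True for this sample frame
```

A per-frame check like this is the kind of signal that feeds the presentation-correctness decision described later in the deck.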
10. PROTOCOL
• Subject chooses the type of luggage that most closely represents what they usually carry in an airport.
• They can bring their own or choose from a selection.
• Given a mock passport and immigration form.
11. PROTOCOL
• They walk up to the booth and give the forms to the agent (test admin).
• The test admin asks them to provide their 10-print samples.
• Once that is done, they start the iris capture process.
• This is where the Kinect determines any errors.
• They provide one sample, gather their belongings, and walk away from the booth.
12. PROCESS MAP
Research Question: Can the Kinect 2 be used to determine Human-Biometric Sensor Interaction errors automatically in real time?
• Pilot: booth and usability study; proved the Kinect was reliable.
• Ground truth: this thesis; will determine if the Kinect can be used to classify errors automatically.
• Scenario: future work; provide real-time feedback to users to test if the Kinect affects throughput.
13. GROUND TRUTH CLASSIFICATION
• Reviewed the video footage of all 100 subjects.
  • Used to determine if the presentation was correct or incorrect.
• Exported the AOptix logs.
  • Used to determine the HBSI metric.
• All done after the data collection had concluded.
14. KINECT CLASSIFICATION
• Used the body points from the Kinect sensor.
  • This data was used to determine if the presentation was correct or incorrect.
• Monitored the AOptix state changes over the network.
  • Used to determine the HBSI metric.
• All done in real time.
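The two inputs above — presentation correctness from the body points and sensor/system state from the AOptix — jointly determine the HBSI metric. A minimal sketch of that decision, assuming the standard HBSI error matrix; the assignment of CI vs. FI on the incorrect-presentation branch is my assumption, not taken from the author's actual code.

```python
# Illustrative sketch of the real-time decision: combine the presentation
# judgment (from Kinect body points) with the detection/extraction outcome
# reported by the sensor over the network. Mapping follows the HBSI error
# matrix; CI vs. FI placement below is an assumption.
def classify_hbsi(presentation_correct: bool,
                  detected: bool,
                  extracted: bool) -> str:
    if presentation_correct:
        if not detected:
            return "FTD"   # failure to detect
        if not extracted:
            return "FTP"   # failure to process
        return "SPS"       # successfully processed sample
    else:
        if not detected:
            return "DI"    # defective interaction
        if not extracted:
            return "CI"    # concealed interaction (assumed placement)
        return "FI"        # false interaction (assumed placement)

print(classify_hbsi(True, False, False))  # FTD
print(classify_hbsi(True, True, True))    # SPS
```

Running this once per Kinect frame against the latest AOptix state is one way the real-time classification described above could be structured.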
23. EXAMPLE INTERACTION

Subject ID | Ground Truth Classification | Kinect Classification | Correct Classification
066        | FTD                         | FTD                   | Y
066        | FTD                         | FTD                   | Y
066        | FTD                         | FTD                   | Y
066        | FTD                         | FTD                   | Y
066        | SPS                         | SPS                   | Y
25. “NONE” CLASSIFICATION
• Cause:
  • The AOptix device switched states so quickly that the Kinect did not detect the change.
  • The Kinect has a fixed frame refresh rate (30 fps).
  • From the Kinect’s point of view, no error occurred, so it did not classify the presentation.
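The sampling problem can be illustrated numerically: at 30 fps the Kinect observes the AOptix state roughly every 33 ms, so any state that begins and ends between two consecutive frames is never seen. A small sketch, with made-up timings:

```python
# Why a 30 fps loop can miss a state change: the state is sampled once per
# frame, so a transition that appears and disappears between two samples is
# invisible. All timings below are invented for illustration.
FRAME_RATE = 30.0
frame_interval_ms = 1000.0 / FRAME_RATE  # ~33.3 ms between samples

def sampled_states(state_changes, frame_interval):
    """state_changes: list of (timestamp_ms, state). Return the sequence of
    distinct states actually observed when sampling every frame_interval ms."""
    end = max(t for t, _ in state_changes) + frame_interval
    observed, t = [], 0.0
    while t <= end:
        # state in effect at sample time t = last change at or before t
        current = [s for ts, s in state_changes if ts <= t]
        if current and (not observed or observed[-1] != current[-1]):
            observed.append(current[-1])
        t += frame_interval
    return observed

# A detect state lasting only 10 ms falls entirely between two frames:
changes = [(0.0, "IDLE"), (40.0, "DETECTING"), (50.0, "IDLE")]
print(sampled_states(changes, frame_interval_ms))  # ['IDLE'] — DETECTING missed
```

Sampling the same timeline every 5 ms would catch the DETECTING state, which is why increasing the refresh rate (or pushing state changes instead of polling them) appears in the future work.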
27. “NONE” EXAMPLE

Subject ID | Ground Truth Classification | Kinect Classification | Correct Classification
028        | FTD                         | FTD                   | Y
028        | FTD                         | FTD                   | Y
028        | FTD                         | NONE                  | N
028        | FTD                         | NONE                  | N
028        | FTD                         | NONE                  | N
028        | SPS                         | SPS                   | Y
28. HBSI METRICS CLASSIFIED AS “NONE”

HBSI Metrics Classified as “NONE” by Kinect:
Ground Truth Metric | Count
CI                  | 3
DI                  | 1
FTD                 | 52
FTP                 | 13
SPS                 | 1

• 70 instances of “NONE” classification total.
• Of these 70, the ground truth equivalent metric classification is shown.
30. ACCURACY OF KINECT CLASSIFICATIONS

Kinect Classifications Compared to Ground Truth:
Same classification:      62.9%
Different classification: 37.1%
31. ACCURACY BY METRIC

Kinect Classifications Compared to Ground Truth by Metric:
Metric | Same Classification | Different Classification
CI     | 50.0%               | 50.0%
DI     | 51.4%               | 48.6%
FI     | 81.0%               | 19.0%
FTD    | 52.5%               | 47.5%
FTP    | 80.0%               | 20.0%
SPS    | 80.6%               | 19.4%
32. FURTHER QUESTIONS RAISED
• How accurate was the Kinect at determining these errors when it did notice the state change?
• By removing the observations that include “NONE”, does the accuracy improve?
33. REMOVING “NONE” CLASSIFICATIONS

Before (all observations):
Subject ID | Ground Truth Classification | Kinect Classification | Correct Classification
028        | FTD                         | FTD                   | Y
028        | FTD                         | FTD                   | Y
028        | FTD                         | NONE                  | N
028        | FTD                         | NONE                  | N
028        | FTD                         | NONE                  | N
028        | SPS                         | SPS                   | Y

After (“NONE” observations removed):
Subject ID | Ground Truth Classification | Kinect Classification | Correct Classification
028        | FTD                         | FTD                   | Y
028        | FTD                         | FTD                   | Y
028        | SPS                         | SPS                   | Y
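The before/after tables for subject 028 amount to dropping rows whose Kinect classification is “NONE” and recomputing accuracy. A minimal sketch of that filtering, using the example rows above (across the full dataset, the same step raised overall accuracy from 62.9% to 85.7%):

```python
# Recompute accuracy after dropping "NONE" rows, using the subject 028
# example table. Each row: (ground_truth, kinect_classification).
rows = [
    ("FTD", "FTD"), ("FTD", "FTD"),
    ("FTD", "NONE"), ("FTD", "NONE"), ("FTD", "NONE"),
    ("SPS", "SPS"),
]

def accuracy(observations):
    """Fraction of observations where Kinect matched ground truth."""
    return sum(gt == k for gt, k in observations) / len(observations)

kept = [(gt, k) for gt, k in rows if k != "NONE"]
print(f"with NONE:    {accuracy(rows):.1%}")  # 50.0%
print(f"without NONE: {accuracy(kept):.1%}")  # 100.0%
```

For this one subject the jump is extreme (3 of 6 correct becomes 3 of 3) because every removed row was a miss; the dataset-wide effect is smaller but follows the same mechanism.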
35. ACCURACY OF KINECT CLASSIFICATIONS – WITHOUT “NONE”

Kinect Classifications Compared to Ground Truth:
Same classification:      85.7%
Different classification: 14.3%
36. ACCURACY BY METRIC – WITHOUT “NONE”

Kinect Classifications Compared to Ground Truth by Metric:
Metric | Same Classification | Different Classification
CI     | 66.7%               | 33.3%
DI     | 79.2%               | 20.8%
FI     | 81.0%               | 19.0%
FTD    | 91.2%               | 8.8%
FTP    | 88.9%               | 11.1%
SPS    | 84.4%               | 15.6%
38. CONCLUSIONS
• The Kinect can be used to determine HBSI errors in real time.
  • The accuracy depends on the thresholds the Kinect operates under.
• The refresh rate of the Kinect was not high enough to detect all state changes from the AOptix device.
• This research provides a foundation for future work.
39. FUTURE WORK
• Increasing the Kinect refresh rate or using a different sensor.
• Developing real-time feedback to both subject and test administrator.
  • Test the change in throughput and performance.
• Adjusting Kinect thresholds for correct/incorrect presentation classifications.
• Using Kinect gesture recognition for other modalities (e.g., fingerprint).
• Implementing in operational testing.
Most of the “NONE” classifications occurred during FTD and FTP interactions. The five that were not were ground truthed, and it was determined that these were due to the Kinect not tracking the user for part of, or the whole of, the transaction.
Removing the “NONE” observations raised the overall accuracy from 62.9% to 85.7%.