
HBSI Automation Using the Kinect


This research focused on classifying Human-Biometric Sensor Interaction (HBSI) errors in real-time. The Kinect 2 was used as a measuring device to track the position and movements of the subject through a simulated border control environment. Knowing, in detail, the state of the subject ensures that the human element of the HBSI model is analyzed accurately. A network connection was established with the iris device to know the state of the sensor and biometric system elements of the model. Information such as detection rate, extraction rate, quality, capture type, and other metrics was available for use in classifying HBSI errors. A Federal Inspection Station (FIS) booth was constructed to simulate a U.S. border control setting in an international airport. The subjects were taken through the process of capturing iris and fingerprint samples in an immigration setting. If errors occurred, the Kinect 2 program would classify the error and save it for further analysis.



  1. HUMAN-BIOMETRIC SENSOR INTERACTION AUTOMATION USING THE KINECT, ZACH MOORE
  2. RESEARCH QUESTION • Can the Kinect 2 be used to determine Human-Biometric Sensor Interaction errors automatically in real-time?
  3. METHODOLOGY
  4. PHASES • Phase 1: Programming • Phase 2: Construction • Phase 3: Pilot Study • Phase 4: Data Collection
  5. KINECT BODY TRACKING • All face values are a built-in feature of the Kinect. • These track the eyes, nose, and mouth corners. • 17 upper body points are tracked, not including the face.
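The tracking on slide 5 amounts to a per-frame data check: the slides name the tracked face points (eyes, nose, mouth corners) and 17 upper-body joints. A minimal Python sketch of that check, with hypothetical point names and a plain dict standing in for a frame (the actual Kinect 2 SDK is C#/C++ and its joint names differ):

```python
# Hypothetical sketch of the slide's tracked points; not the Kinect SDK.
FACE_POINTS = ["eye_left", "eye_right", "nose", "mouth_left", "mouth_right"]

UPPER_BODY_JOINTS = [  # 17 upper-body points, face excluded
    "head", "neck", "spine_shoulder", "spine_mid", "spine_base",
    "shoulder_left", "elbow_left", "wrist_left",
    "hand_left", "hand_tip_left", "thumb_left",
    "shoulder_right", "elbow_right", "wrist_right",
    "hand_right", "hand_tip_right", "thumb_right",
]

def all_points_tracked(frame: dict) -> bool:
    """True if every face point and upper-body joint has a position this frame."""
    needed = FACE_POINTS + UPPER_BODY_JOINTS
    return all(frame.get(p) is not None for p in needed)
```

A frame that loses any of these points (occlusion by luggage, subject out of view) would fail the check and be unusable for presentation classification.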
  6. KINECT MEASUREMENT CHECKS
  7. CORRECT PRESENTATION
  8. INCORRECT PRESENTATION
  9. CLASSIFYING ERRORS
  10. PROTOCOL • Subject chooses the type of luggage that most closely represents what they usually carry in an airport • They can bring their own or choose from a selection • Given a mock passport and immigration form
  11. PROTOCOL • They walk up to the booth and give the forms to the agent (test admin) • The test admin asks them to provide their 10-print samples • Once that is done, they start the iris capture process • This is where the Kinect determines any errors • They provide one sample, gather their belongings, and walk away from the booth
  12. PROCESS MAP • Research Question: Can the Kinect 2 be used to determine Human-Biometric Sensor Interaction errors automatically in real-time? • Pilot: booth and usability study; proved the Kinect was reliable. • Ground Truth: my thesis; will determine if the Kinect can be used to classify errors automatically. • Scenario: future work; provide real-time feedback to users to test if the Kinect affects throughput.
  13. GROUND TRUTH CLASSIFICATION • Reviewed the video footage of all 100 subjects • Used to determine if the presentation was correct or incorrect • Exported the AOptix logs • Used to determine the HBSI metric • All done after the data collection had concluded
  14. KINECT CLASSIFICATION • Used the body points from the Kinect sensor • This data was used to determine if the presentation was correct or incorrect • Monitored the AOptix state changes over the network • Used to determine the HBSI metric • All done in real-time
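Slides 13 and 14 describe the same two inputs for both pipelines: was the presentation correct (video review or Kinect body points), and what did the device do (AOptix logs or network state changes). The standard HBSI error matrix combines these into a metric; a sketch of that mapping, assuming a simplified detected/processed view of the device outcome (illustrative, not the thesis code):

```python
def classify_hbsi(correct_presentation: bool, detected: bool, processed: bool) -> str:
    """Map a presentation judgment plus device outcome to an HBSI metric.

    Follows the standard HBSI matrix:
      correct presentation   -> FTD / FTP / SPS (system-side outcomes)
      incorrect presentation -> DI / CI / FI
    """
    if correct_presentation:
        if not detected:
            return "FTD"   # Failure to Detect
        if not processed:
            return "FTP"   # Failure to Process
        return "SPS"       # Successfully Processed Sample
    if not detected:
        return "DI"        # Defective Interaction
    if not processed:
        return "CI"        # Concealed Interaction
    return "FI"            # False Interaction
```

The same function serves both the post-hoc ground truth classification and the real-time Kinect classification; only the source of the two inputs differs.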
  15. AOPTIX STATES • [1] [2] [3] [4] [5] [6] [8] [11] [13] [15] [21] [22] [23] [25]
  16. CLASSIFICATION PROCESS
  17. RESULTS
  18. GENDER REPORT • Male: 47 (47.0%) • Female: 53 (53.0%) • Total: 100
  19. AGE BREAKDOWN • 18-25: 41 • 26-32: 31 • 33-40: 7 • 41-48: 10 • 49+: 11
  20. ETHNICITY • Caucasian: 57 • Asian: 23 • African American: 8 • Other: 4 • Asian or Pacific Islander: 2 • Hispanic: 2 • Mixed: 2 • Arab: 1 • Indian: 1
  21. CLASSIFICATION RESULTS
  22. GROUND TRUTH CLASSIFICATIONS
  23. EXAMPLE INTERACTION • Subject 066 (Ground Truth / Kinect / Correct): FTD / FTD / Y • FTD / FTD / Y • FTD / FTD / Y • FTD / FTD / Y • SPS / SPS / Y
  24. GROUND TRUTH COMPARED TO KINECT
  25. "NONE" CLASSIFICATION • Cause: the AOptix device switched states so quickly that the Kinect did not detect the change • The Kinect has a fixed frame refresh rate (30 fps) • From the Kinect's point of view, no error occurred, so it did not classify the presentation
  26. "NONE" CLASSIFICATION (diagram: Kinect refresh frames alongside AOptix state changes)
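The cause on slide 25 is a sampling problem: at 30 fps the Kinect polls the AOptix state roughly every 33 ms, so any state that appears and disappears between two refresh frames is invisible to it. A toy simulation of the diagram on slide 26 (intervals and state names are made up for illustration):

```python
def states_seen(poll_period_ms: int, events: list) -> set:
    """Return which device states a fixed-rate poller observes.

    events: (start_ms, end_ms, state) intervals; a state is seen only if
    some poll tick at t = 0, poll_period_ms, 2*poll_period_ms, ... falls inside it.
    """
    horizon = max(end for _, end, _ in events)
    seen = set()
    for tick in range(0, horizon + 1, poll_period_ms):
        for start, end, state in events:
            if start <= tick < end:
                seen.add(state)
    return seen

# A 20 ms error state between two 33 ms refresh frames is never sampled:
events = [(0, 40, "SEARCHING"), (40, 60, "ERROR"), (60, 200, "CAPTURING")]
```

Polling at 33 ms misses the "ERROR" interval entirely, while a faster poll (or an event-driven push from the device) would catch it; this is the same reasoning behind the "increase refresh rate" item in the future work.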
  27. "NONE" EXAMPLE • Subject 028 (Ground Truth / Kinect / Correct): FTD / FTD / Y • FTD / FTD / Y • FTD / NONE / N • FTD / NONE / N • FTD / NONE / N • SPS / SPS / Y
  28. HBSI METRICS CLASSIFIED AS "NONE" • 70 instances of "NONE" classification in total • Ground truth equivalents of those 70: FTD 52, FTP 13, CI 3, DI 1, SPS 1
  29. PRESENTATION ACCURACY • Ground truth: Correct Presentation 76.1%, Incorrect Presentation 23.9% • Kinect: Correct Presentation 51.7%, Incorrect Presentation 48.3%
  30. ACCURACY OF KINECT CLASSIFICATIONS • Same classification as ground truth: 62.9% • Different classification: 37.1%
  31. ACCURACY BY METRIC (same classification as ground truth) • CI: 50.0% • DI: 51.4% • FI: 81.0% • FTD: 52.5% • FTP: 80.0% • SPS: 80.6%
  32. FURTHER QUESTIONS RAISED • How accurate was the Kinect at determining these errors when it did notice the state change? • By removing the observations that include "NONE", does the accuracy improve?
  33. REMOVING "NONE" CLASSIFICATIONS • Before (Subject 028, Ground Truth / Kinect / Correct): FTD / FTD / Y • FTD / FTD / Y • FTD / NONE / N • FTD / NONE / N • FTD / NONE / N • SPS / SPS / Y • After removal: FTD / FTD / Y • FTD / FTD / Y • SPS / SPS / Y
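Using subject 028's rows from slide 33, dropping the "NONE" rows and recomputing agreement shows why the later accuracy figures shift upward; a sketch of that filtering step:

```python
# (ground_truth, kinect) pairs for subject 028, taken from the slide.
rows = [
    ("FTD", "FTD"), ("FTD", "FTD"), ("FTD", "NONE"),
    ("FTD", "NONE"), ("FTD", "NONE"), ("SPS", "SPS"),
]

def agreement(pairs):
    """Fraction of observations where the Kinect matched the ground truth."""
    return sum(gt == k for gt, k in pairs) / len(pairs)

with_none = agreement(rows)                                    # 3 of 6
without_none = agreement([r for r in rows if r[1] != "NONE"])  # 3 of 3
```

Because a "NONE" row is always a mismatch, removing those rows can only raise (or leave unchanged) the agreement rate among the remaining observations, which is exactly the pattern in the with/without comparisons that follow.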
  34. PRESENTATION ACCURACY – WITHOUT "NONE" • Ground truth: Correct Presentation 74.6%, Incorrect Presentation 25.4% • Kinect: Correct Presentation 70.9%, Incorrect Presentation 29.1%
  35. ACCURACY OF KINECT CLASSIFICATIONS – WITHOUT "NONE" • Same classification as ground truth: 85.7% • Different classification: 14.3%
  36. ACCURACY BY METRIC – WITHOUT "NONE" (same classification as ground truth) • CI: 66.7% • DI: 79.2% • FI: 81.0% • FTD: 91.2% • FTP: 88.9% • SPS: 84.4%
  37. CONCLUSIONS AND FUTURE WORK
  38. CONCLUSIONS • The Kinect can be used to determine HBSI errors in real-time; the accuracy depends on the thresholds the Kinect operates under • The refresh rate of the Kinect was not high enough to detect all state changes from the AOptix device • This research provides a foundation for future work
  39. FUTURE WORK • Increasing the Kinect refresh rate or using a different sensor • Developing real-time feedback for both subject and test administrator • Testing the change in throughput and performance • Adjusting Kinect thresholds for correct/incorrect presentation classifications • Using Kinect gesture recognition for other modalities (fingerprint) • Implementing in operational testing
  40. QUESTIONS?
