Identification and Authentication Using Clavicles
Yohei Kawasaki, Yuta Sugiura
62nd Annual Conference of the Society of Instrument and Control Engineers (SICE), Mie, Japan, 2023
1. Identification and Authentication Using Clavicles
Keio University
Yohei Kawasaki and Yuta Sugiura
The SICE Annual Conference 2023
2.
Background
• Use of neck-mounted devices: wireless earphones [1], wireless speaker [2], THINKLET [3]
3.
Examples of using identification and authentication
Use as a mobile device
- Access to highly confidential information
- Device control
Use as a shared device
- Identification of the wearing user
Prevention of impersonation and interception
- Security of audio information
- Security during remote work
4. Purpose and Proposed Method
• Development of an identification and authentication system for neck-mounted wireless earphones
• Uses differences in the acoustic features of the clavicle
• "Clavicle" is "鎖骨" in Japanese
(Diagram labels: vibrator, piezoelectric microphone)
5.
Related Work: Acoustic Sensing
Passive acoustic sensing (uses only a microphone)
- Touch location estimation [Paradiso, 2002]
- Contact method estimation [Murray-Smith, 2008]
- Teeth gesture recognition [Sun, 2021]
Active acoustic sensing (uses a microphone and a vibrator)
- Grasp posture estimation [Ono, 2013]
- Hand pose estimation [Kato, 2016]
- Touch location estimation on the arm [Mujibiya, 2013]
- Acoustic barcodes [Harrison, 2012]
- Gesture recognition on clothing [Amesaka, 2022]
6. Active Acoustic Sensing
• A method for analyzing the acoustic properties of objects
(1) A signal is generated by the vibrator and propagated through the object
(2) The propagated signal is sensed with a microphone
(3) The sensed signal is analyzed
(Diagram: vibrator → object → microphone)
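The three steps above can be sketched in a few lines of Python. The body's transfer characteristics are stood in for by a hypothetical decaying impulse response, and propagation is modeled as convolution; both are illustrative assumptions, not the paper's measurement setup:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 96_000  # sampling rate used in the paper (96,000 Hz)

# (1) Generate an excitation signal for the vibrator (1 s of white noise)
excitation = rng.standard_normal(fs)

# (2) Propagation through the object, modeled here as convolution with a
#     hypothetical decaying impulse response of the measured body part
impulse_response = np.exp(-np.arange(256) / 32.0) * rng.standard_normal(256)
sensed = np.convolve(excitation, impulse_response, mode="same")

# (3) Analyze the sensed signal: the magnitude spectrum carries the
#     acoustic signature of the propagation path
spectrum = np.abs(np.fft.rfft(sensed))
print(spectrum.shape)  # one magnitude per frequency bin: (fs // 2 + 1,)
```

Because the impulse response differs from person to person, the resulting spectrum differs too, which is what the later identification and authentication steps exploit.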
7. Related Work: Identification and Authentication Using the Acoustic Properties of Bones
• Measuring the acoustic properties of bones with active acoustic sensing
- Acoustic properties of the nasal bone [Isobe, 2021]
- Acoustic properties of the skull [Schneegass, 2016]
- Acoustic properties of the wrist bone [Sehrt, 2022]
8. Related Work: Identification and Authentication in Hearable Devices
• Exploit individual differences in ear acoustics [Akkermans, 2005] [Grabham, 2013] [Gao, 2019]
• Depend on the internal environment of the ear
• Some methods use ultrasound
9.
Types of Authentication
- Knowledge-based authentication: passwords, PINs
- Possession-based authentication: IC cards, keys
- Biometric authentication
  - Physical features: irises, fingerprints
  - Behavioral features: gait, handwriting
10.
Identification and Authentication
Identification (classification)
- Task: whose data is the input data?
- Evaluation: accuracy
Authentication
- Task: is the input data the claimed person's data or not?
- Evaluation: Equal Error Rate (EER)
11. Identification
• The input data is fed into the trained model, which outputs the identification result
(input → trained model → result)
12. Authentication (1 / 2)
• Calculate the score 𝑺 between the input data and the enrolled data set of the user 𝑨 to be authenticated
• 𝑺 below the threshold → the input is user A's data
• 𝑺 above the threshold → the input is not user A's data
13. Authentication (2 / 2)
• False Rejection Rate (FRR): the rate at which the genuine user is rejected as an impostor
• False Acceptance Rate (FAR): the rate at which an impostor is accepted as the genuine user
• Equal Error Rate (EER): the error rate at the threshold where FRR and FAR are equal
• FRR and FAR vary with the threshold; the EER is read off at their crossing point
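Given genuine and impostor score distributions, the EER can be computed by sweeping the threshold and finding where FRR and FAR cross. A minimal sketch, using synthetic toy scores (lower score = closer match, consistent with a distance metric) rather than the paper's data:

```python
import numpy as np

def eer(genuine_scores, impostor_scores):
    """Equal Error Rate for distance-like scores (lower = more similar).

    FRR(t): fraction of genuine scores above threshold t (wrongly rejected).
    FAR(t): fraction of impostor scores at or below t (wrongly accepted).
    The EER is the error rate where the two curves cross.
    """
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    frr = np.array([(genuine_scores > t).mean() for t in thresholds])
    far = np.array([(impostor_scores <= t).mean() for t in thresholds])
    i = np.argmin(np.abs(frr - far))
    return (frr[i] + far[i]) / 2

# Toy example: genuine users yield small distances, impostors large ones
rng = np.random.default_rng(1)
genuine = rng.normal(1.0, 0.3, 500)
impostor = rng.normal(3.0, 0.5, 500)
print(eer(genuine, impostor))
```

The better separated the two distributions are, the lower the EER; a 4.0% EER, as reported later in this deck, means FRR and FAR are both 4.0% at the operating threshold.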
14.
Hardware (1 / 2)
• Vibrations are generated with a vibrator
• The vibrations propagate to the clavicle
• The propagated vibrations are sensed with a piezoelectric microphone
Experimental apparatus: PC, audio interface, microphone amplifier, speaker amplifier, vibrator, piezoelectric microphone
15.
Hardware (2 / 2)
(Photos: the device and how it is worn; labels: piezoelectric microphone, vibrator)
16. Overview of Data Analysis
• Features are extracted from the sensor values
• Identification: Mel spectrum
• Authentication: Mel-Frequency Cepstrum Coefficients (MFCC)
Pipeline: sensor values → frequency spectrum → Mel spectrum → MFCC
17. Data Analysis: Feature Extraction (1 / 4)
• Vibrator: generates the excitation signal
• Piezoelectric microphone: senses at a sampling rate of 96,000 Hz, capturing 2^18 frames
Pipeline: sensor values → frequency spectrum → Mel spectrum → MFCC
(Sensor locations: vibrator, piezoelectric microphone)
18. Data Analysis: Feature Extraction (2 / 4)
• Detrend the sensor values
• Apply a window function
• Fast Fourier Transform (FFT)
• Convert to a frequency spectrum of 2^17 bins
Pipeline: sensor values → frequency spectrum → Mel spectrum → MFCC
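A minimal sketch of this preprocessing chain, using a synthetic signal in place of real sensor values and assuming a Hann window (the deck does not specify which window function is used):

```python
import numpy as np
from scipy.signal import detrend, get_window

fs = 96_000                    # sampling rate (96,000 Hz)
n = 2 ** 18                    # 2^18 frames captured per measurement
rng = np.random.default_rng(0)
sensor_values = rng.standard_normal(n)   # stand-in for real sensor values

x = detrend(sensor_values)               # remove the linear trend
x = x * get_window("hann", n)            # window function (Hann assumed)
spectrum = np.abs(np.fft.rfft(x))[: 2 ** 17]   # keep 2^17 frequency bins

print(spectrum.shape)
```

Slicing the real-FFT output to exactly 2^17 bins matches the bin count stated on this slide for a 2^18-sample input.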
19. Data Analysis: Feature Extraction (3 / 4)
• Convert to a Mel spectrum: 2^17 frequency bins → 100 Mel bins
• Frequency spectrum · Mel filter bank = Mel spectrum
Pipeline: sensor values → frequency spectrum → Mel spectrum → MFCC
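The filter-bank product above is a matrix multiplication. The triangular Mel filter construction below is the standard one; its exact parameters are assumptions, since the deck only states the bin counts (2^17 in, 100 out):

```python
import numpy as np

def mel_filter_bank(n_mels, n_fft_bins, fs):
    """Standard triangular Mel filter bank mapping n_fft_bins -> n_mels."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # n_mels + 2 equally spaced points on the Mel scale, mapped to FFT bins
    mel_points = np.linspace(0.0, hz_to_mel(fs / 2), n_mels + 2)
    bin_points = np.floor(
        mel_to_hz(mel_points) / (fs / 2) * (n_fft_bins - 1)
    ).astype(int)

    fb = np.zeros((n_mels, n_fft_bins))
    for i in range(1, n_mels + 1):
        l, c, r = bin_points[i - 1], bin_points[i], bin_points[i + 1]
        for j in range(l, c):           # rising edge of the triangle
            fb[i - 1, j] = (j - l) / max(c - l, 1)
        for j in range(c, r):           # falling edge of the triangle
            fb[i - 1, j] = (r - j) / max(r - c, 1)
    return fb

# Frequency spectrum (2^17 bins) x Mel filter bank -> Mel spectrum (100 bins)
fb = mel_filter_bank(n_mels=100, n_fft_bins=2 ** 17, fs=96_000)
spectrum = np.ones(2 ** 17)            # placeholder frequency spectrum
mel_spectrum = fb @ spectrum
print(mel_spectrum.shape)
```

Each row of the filter bank averages a band of frequency bins, with bands spaced evenly on the Mel scale, so the 2^17-bin spectrum is compressed to 100 perceptually spaced values.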
21. Data Analysis: Identification and Authentication
• Models are evaluated with 10-fold cross-validation
• Identification
- Classifier: random forest
- Features: Mel spectrum
• Authentication
- Distance calculation: Mahalanobis distance
- Features: MFCC
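Both analyses can be sketched with common libraries, here scikit-learn and SciPy. The feature data is a synthetic stand-in (not the paper's measurements) and the acceptance threshold is hypothetical:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# --- Identification: random forest on Mel-spectrum features ---------------
# Toy stand-in: 5 users x 50 measurements x 100 Mel bins
X = np.vstack([rng.normal(u, 1.0, (50, 100)) for u in range(5)])
y = np.repeat(np.arange(5), 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=10).mean()   # 10-fold cross-validation

# --- Authentication: Mahalanobis distance on MFCC features ----------------
enroll = rng.normal(0.0, 1.0, (50, 12))   # user A's enrolled MFCC vectors
mu = enroll.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(enroll, rowvar=False))

probe = rng.normal(0.0, 1.0, 12)          # a new measurement to verify
score = mahalanobis(probe, mu, cov_inv)   # distance to user A's distribution
accepted = score < 5.0                    # hypothetical threshold
print(acc, score, accepted)
```

The Mahalanobis distance accounts for the covariance of the enrolled feature set, so dimensions that vary a lot for the genuine user are penalized less than tightly clustered ones.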
22. Experiment
• Two types of excitation signals: white noise and pink noise
• 50 measurements per person per signal type
• The device was taken off and put back on for each measurement
• Participants wore the same clothing
Participants: 5 males, aged 22.8 ± 1.2 years
(Photo: how the device is worn)
25.
Limitations and Future Work
Limitations
• Participants remained silent and still during the experiment (no vocalization, no body movement)
• Audible sound from the vibrator
Future Work
• Increase the number of participants
• Improve robustness to various postures, changes over time, and various types of clothing
26.
Summary
• Background: identification and authentication for neck-mounted wireless earphones
• Related work: acoustic sensing; identification and authentication using bones
• Proposal: use the acoustic properties of the clavicle
• Implementation: active acoustic sensing to acquire the acoustic properties
• Results: identification accuracy 98.8%, authentication EER 4.0%
• Limitations: participants remained silent and still; audible vibrator sound
(Diagram labels: vibrator, piezoelectric microphone)
27. References
• [Paradiso, 2002] Joseph A. Paradiso, Che King Leo, Nisha Checka, and Kaijen Hsiao. 2002. Passive acoustic knock tracking for interactive windows. In
CHI '02 Extended Abstracts on Human Factors in Computing Systems (CHI EA '02). Association for Computing Machinery, New York, NY, USA, 732–733.
• [Murray-Smith, 2008] Roderick Murray-Smith, John Williamson, Stephen Hughes, and Torben Quaade. 2008. Stane: synthesized surfaces for tactile input.
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '08). Association for Computing Machinery, New York, NY, USA,
1299–1302.
• [Harrison, 2012] Chris Harrison, Robert Xiao, and Scott Hudson. 2012. Acoustic barcodes: passive, durable and inexpensive notched identification tags.
In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST '12). Association for Computing Machinery, New
York, NY, USA, 563–568.
• [Sun, 2021] Wei Sun, Franklin Mingzhe Li, Benjamin Steeper, Songlin Xu, Feng Tian, and Cheng Zhang. 2021. TeethTap: Recognizing Discrete Teeth
Gestures Using Motion and Acoustic Sensing on an Earpiece. In 26th International Conference on Intelligent User Interfaces (IUI '21). Association for
Computing Machinery, New York, NY, USA, 161–169.
• [Ono, 2013] Makoto Ono, Buntarou Shizuki, and Jiro Tanaka. 2013. Touch & activate: adding interactivity to existing objects using active acoustic sensing.
In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST '13). Association for Computing Machinery, New
York, NY, USA, 31–40.
• [Kato, 2016] Hiroyuki Kato and Kentaro Takemura. 2016. Hand pose estimation based on active bone-conducted sound sensing. In Proceedings of the
2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct (UbiComp '16). Association for Computing Machinery, New
York, NY, USA, 109–112.
• [Mujibiya, 2013] Adiyan Mujibiya, Xiang Cao, Desney S. Tan, Dan Morris, Shwetak N. Patel, and Jun Rekimoto. 2013. The sound of touch: on-body touch
and gesture sensing based on transdermal ultrasound propagation. In Proceedings of the 2013 ACM international conference on Interactive tabletops
and surfaces (ITS '13). Association for Computing Machinery, New York, NY, USA, 189–198.
• [Amesaka, 2022] Takashi Amesaka, Hiroki Watanabe, Masanori Sugimoto, and Buntarou Shizuki. 2022. Gesture Recognition Method Using Acoustic
Sensing on Usual Garment. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 2, Article 41 (July 2022), 27 pages.
• [Schneegass, 2016] Stefan Schneegass, Youssef Oualil, and Andreas Bulling. 2016. SkullConduct: Biometric User Identification on Eyewear Computers
Using Bone Conduction Through the Skull. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). Association
for Computing Machinery, New York, NY, USA, 1379–1384.
28. References (continued)
• [Isobe, 2021] Kaito Isobe and Kazuya Murao. 2021. Person-identification Method using Active Acoustic Sensing Applied to Nose. In 2021 International
Symposium on Wearable Computers (ISWC '21). Association for Computing Machinery, New York, NY, USA, 138–140.
• [Sehrt, 2022] Jessica Sehrt, Feng Yi Lu, Leonard Husske, Anton Roesler, and Valentin Schwind. 2022. WristConduct: Biometric User Authentication Using
Bone Conduction at the Wrist. In Proceedings of Mensch und Computer 2022 (MuC '22). Association for Computing Machinery, New York, NY, USA, 371–375.
• [Akkermans, 2005] A. H. M. Akkermans, T. A. M. Kevenaar and D. W. E. Schobben, "Acoustic ear recognition for person identification," Fourth IEEE
Workshop on Automatic Identification Advanced Technologies (AutoID'05), 2005, pp. 219-223.
• [Grabham, 2013] N. J. Grabham et al., "An Evaluation of Otoacoustic Emissions as a Biometric," in IEEE Transactions on Information Forensics and
Security, vol. 8, no. 1, pp. 174-183, Jan. 2013.
• [Gao, 2019] Yang Gao, Wei Wang, Vir V. Phoha, Wei Sun, and Zhanpeng Jin. 2019. EarEcho: Using Ear Canal Echo for Wearable Authentication. Proc.
ACM Interact. Mob. Wearable Ubiquitous Technol. 3, 3, Article 81 (September 2019), 24 pages.
• [1] +Style, "Shut out ambient noise and create your own music space: dyplay ANC 30 Bluetooth Headphone - +Style Shopping,"
https://plusstyle.jp/shopping/item?id=364 (Accessed 2022/11/10)
• [2] SONY, "SRS-WS1 | Active Speaker / Neck Speaker | Sony," https://www.sony.jp/active-speaker/products/SRS-WS1/ (Accessed
2022/11/24)
• [3] Fairy Devices, "Connected Worker Solution," https://fairydevices.jp/cws (Accessed 2022/11/24)