Jakub Kaluzny - AusCERT 2017
Abusing voice biometrics solutions
 Security Consultant at The Missing Link Security, Sydney
• Previously pentesting AppSec in Europe:
• payment systems, 2-FA, fraud detection, stocks, enterprise software
 Sometimes I have interesting projects:
• Bypassing JS-based malware detection (MiTB)
• RE of proprietary network protocols
• Enterprise solutions: Hadoop, HPaaS
• Voice biometrics
@j_kaluzny
Research idea
 few solutions tested in 2015
 IVR channel, mobile app
 Extremely challenging and interesting, easily underscoped
 2016-2017: more data, more ideas, more tools
 General challenges when testing non-deterministic systems (e.g. WAF, AV)
 Systematic approach to pentest voice biometrics in very limited time
 I am not saying voice biometrics is a bad idea!
Biometric patents
http://digilander.libero.it/p_truth/audio.html
Enrolment:
Say the phrase 3 times
Authentication:
(usually enter customer ID)
Say the phrase
• Should work even if you have a sore throat
• Should work when you get older
• Should not work if you replay a recording
How does it work?
Current state
VOICE AUTHENTICATION: somebody called that “below-one FA”
IVR -> SECURITY QUESTIONS -> AUTH OK
STANDARD PASSWORD -> AUTH OK
GO TO BRANCH -> AUTH OK
“The system is as secure as its weakest link; nobody will attack VB yet”
2015: 2-4%
2016: 1-4%
2017: 1-2%
Biometrics strength:
Basic – FAR 1%
Medium – FAR 0.01%
High – FAR 0.0001%
Equal/crossover error rate
FAR in combination with process security, e.g. asking a secret question
Only 1%? Or 1% of people with a voice similar enough to authenticate as you.
Horizontal brute-force via VoIP FTW!
Nuance Communications
Some public data on FRR/FAR
In an ideal world statisticians would ask for at least 3000 participants,
1500 of whom would make an enrollment and one true user call and
then another 1500 who would make one impostor attempt into one of
the enrolled voiceprints.
In practice it is generally not practical to obtain 3000 participants to make one or two calls
each and a smaller set of participants makes multiple calls instead. 300 participants who can
make 5-7 true user calls and 10-12 impostor calls (one call each against 10-12 different
voiceprints) is generally considered a very good test, although tests with anything more than
100 participants are useful. Generally tests with fewer than 100 participants are considered
anecdotal at best.
How do they calculate it?
(Chart: pitch distributions, high peak pitch vs. low peak pitch, users vs. spoofers)
Real attack scenario
FAR 1%, 3 attempts: ~3% chance of success
FAR 20%, 3 calls: ~50% chance of success
Voice biometrics FAR vs. attack surface
(Charts: constant threshold vs. constant FAR)
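The arithmetic behind those two figures is just independent trials: P(success) = 1 - (1 - FAR)^n. A minimal sketch:

```python
def attack_success_probability(far: float, attempts: int) -> float:
    """Chance that at least one of `attempts` independent trials passes."""
    return 1 - (1 - far) ** attempts

print(attack_success_probability(0.01, 3))   # ~0.03
print(attack_success_probability(0.20, 3))   # ~0.49
```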
https://erikbern.com/2017/02/01/language-pitch.html
Gender does matter
Enrolment attack
Authentication bypass
Information disclosure
Denial of Service
REPLAY from recording:
• Direct (mobile malware / intercept GSM)
• Distant (“good” microphone)
• “Sneakers” attack
SPEECH SYNTHESIS:
• Adobe VoCo, Google WaveNet, Lyrebird
INTEGRATION:
Audio codec exploitation
Mobile API pentest
Abusing the process
VENDOR-approved scenario:
Hiring spoofers
Horizontal brute-force
Threat modelling
Orwell’s 1984 telescreens by Samsung & CIA
a.k.a. Weeping Angel
IoT?
 Mobile app implementation
 IVR implementation
 Both claim replay attack detection and we focus on this scenario
Hacking voice biometrics 101
Voice gathering -> fuzzing -> impersonation
• Voice recording
• Noise reduction / vocal isolation
• Audio files fuzzing
• Thresholds identification
• Requests automation
• Response gathering
Sony ECM-MS907
~$100
Nexus 5
Rode VideoMic
~$70
Aldi microphone
$14.99
Klover MIK 26”
~$3000
Rode NTG-3
~$700
$40/day
How good is a good microphone?
• Voice trial recorded in a medium-loud environment from 5m with a cheap microphone; vocal amplification needed
 Possibly $700 will increase the range to 10-20m, $3000 to 50m.
 Audacity:
• Vocal isolation
• Noise reduction (max 2m without)
• High & low pass filter
• Amplify
• Add legit noise
• Values differ per VB solution
Vocal extraction
Note: some vendors call this attack “cooperative”
Note: some solutions rely on noise (each noise is unique) for replay detection
Note: some IVRs can detect noise change during voice auth vs. the whole call
Samples: 2m (cafe), 5m (cafe)
Voice gathering -> fuzzing -> impersonation
Fuzzing the voice
Biometrics thresholds identification
(Diagram: fuzzed variants of the ORIGINAL RECORDING fall into bands: CORRECT, UNSURE/REPEAT, REPLAY DETECTED, INCORRECT)
Note: after a few days, you’ll be asked by colleagues to work from home
Note: after a week of testing, you’ll answer your phone with “My voice is my password”
• SoX
• Linear transformations: should affect replay detection but not the spectrum and biometrics
• Different level of noise reduction (3 levels)
• Pitch +/- 30% in 5% steps (13 levels)
• Speed +/- 20% in 5% steps (9 levels)
• Random pauses between each word (4 pauses, 3 lengths)
• Add background noise (3 types of noise, 3 volumes)
• 38k recordings -> 2 days (sic!) of audio
• 100-300 -> doable
Voice fuzzing
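A sketch of how such a fuzzing matrix can be generated with SoX driven from Python. The percentage grids come from the slide; the file naming and the percent-to-cents conversion (SoX’s `pitch` effect takes cents) are my assumptions:

```python
import itertools
import math
import subprocess

# Parameter grids from the talk: speed +/-20% and pitch +/-30%, both in 5% steps.
SPEEDS = [round(0.80 + 0.05 * i, 2) for i in range(9)]   # 0.80 .. 1.20 (9 levels)
PITCH_PCT = [-30 + 5 * i for i in range(13)]             # -30 .. +30 % (13 levels)

def pct_to_cents(pct: int) -> float:
    """SoX's pitch effect takes cents; convert a percentage frequency shift."""
    return 1200 * math.log2(1 + pct / 100)

def fuzz(src: str) -> list[str]:
    """Render one fuzzed WAV per (speed, pitch) pair; needs sox on PATH."""
    out_files = []
    for speed, pct in itertools.product(SPEEDS, PITCH_PCT):
        out = f"fuzz_s{speed}_p{pct}.wav"
        subprocess.run(["sox", src, out, "speed", str(speed),
                        "pitch", str(round(pct_to_cents(pct)))], check=True)
        out_files.append(out)
    return out_files
```

Noise reduction, pauses, and background-noise overlays would multiply this grid further, which is how the full matrix balloons to tens of thousands of recordings.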
Replay detection OFF (or re-enroll each time):
• Each of 5 parameters independently (max 50 trials)
• Determine the max parameter value below threshold
• Choose 3 levels for each parameter: 0, max, max/2
Replay detection OFF (or re-enroll each time):
• Combinations of 5 parameters at 3 levels: ~250 trials
• Choose the successful trials: probably <100
Replay detection ON:
• Triage selected thresholds, select 5-10 best values for stability tests
• Re-enroll each time if necessary
Identifying auth and replay thresholds
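The ~250-trial budget above is plain combinatorics: 5 parameters at 3 levels each gives 3^5 = 243 combinations (the parameter names below are the five from the fuzzing slide):

```python
import itertools

LEVELS = ["0", "max/2", "max"]   # the 3 chosen levels per parameter
PARAMS = ["noise_reduction", "pitch", "speed", "pauses", "background_noise"]

trials = list(itertools.product(LEVELS, repeat=len(PARAMS)))
print(len(trials))  # 243
```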
Enrolment:
Authenticate with a standard mechanism
Repeat the phrase 3 times
My name is ABC and my voice is my password
My voice is my password
Sign me in to the XYZ bank
Not unique per client and provider: horizontal brute-force risk, data leak risk
Data leak risk
Authentication in IVR:
Call the infoline from any number
DTMF tones to enter customer id
Say the phrase
Success -> authenticated menu
Incorrect / replay detected -> end of call
Not sure -> Repeat (up to 3 times)
Authentication in mobile app:
Already paired with the account
Say the phrase
Similar rules for “not sure”
Reconnaissance
 Step 1: create account, enrol your voice
 Step 2: authenticate properly
 Step 3: abuse the system in some way
Reconnaissance and first challenge
Challenge 1: “weak” enrolment may affect the results
Challenge 2: machine learning mechanisms – the more proper authentications in step 2, the better the FAR
Hint: try to repeat the whole attack vector many times, BUT:
• The victim may do a “weak” enrolment himself
• An attack that works only once is good enough
One of the vendors detects weak enrolments (30%)
 Step 1: Create an account (A), enrol your voice (A)
 Step 2: Create a second account (B), enrol the same voice (A)
• Question: is step 2 possible?
• Question: does it affect replay detection (cross-user)?
Next challenge
Challenge 3: we need more voices!
Challenge 4: we need even more voices
It’s still a vulnerability
Step 1: Create an account (A), enrol the voice (A)
Step 2: Transform the voice with some f() function
Step 3: Pass f(A) to the system -> incorrect
Step 4: Create an account (B), enrol the voice (B)
Step 5: Transform the voice with the same f() function
Step 6: Pass f(B) to the system -> success
More challenges
Challenge 5: previous authentications affect future ones
Hint: ask to turn off “learning mode” for authentications.
Step 1: Create an account, enrol the voice and record it (A)
Step 2: Transform the recording with a light f() and a strong F() function
Step 3: Pass the light modification f(A) to the system -> replay detected
Step 4: Pass the strong modification F(A) to the system -> replay detected
vs.
Step 1: Create an account, enrol the voice and record it (A)
Step 2: Transform the voice with strong F() function
Step 3: Pass F(A) to the system -> success
Even more challenges
Challenge 6: different thresholds during one call
 Step 1: Create an account, enrol the voice and record it (A)
 Step 2: Transform the recording with some f() function
 Step 3: Pass modification f(A) to the system -> unsure, please repeat
 Step 3a: Pass modification f(A) to the system -> unsure, please repeat
 Step 3b: Pass modification f(A) to the system -> success
Challenge? Or process weakness
Hint: in some solutions it’s worth attacking on the third attempt
Challenge 7: different thresholds during many calls after detecting “brute-force”
 Step 1: Create an account, enrol the voice and record it (A)
 Step 2: Transform the recording with some f() and F() functions
 Step 3: Pass F(A) to the system -> success
 Step 4: Pass f(A) many times -> always failure
 Step 5: Pass F(A) to the system -> incorrect (not replay)
This is the last challenge
Hint: identify thresholds on mule accounts and attack only then
Challenge 8: some solutions detect noise characteristics and include them in replay attack detection
 Step 1: Create an account, enrol the voice, save the recording (A)
 Step 2: Pass the recording (A) -> success
 Step 3: Modify the vocal in A but leave the noise
 Step 4: Pass it to the system -> replay detected
I lied
Hint: generate a new noise each time
• More or less stable replay detection bypass in a standard environment:
• Thresholds set to normal (FRR acceptable for business)
• “Standard” enrolment
• “Medium-loud” environment, 5m distance
• Process hacked – modifications sent in the last step
• Greybox – required turning off learning mode during attacks on mule accounts
• No IDS – multiple mule accounts for testing, no threshold increase
Aftermath
Voice gathering -> fuzzing -> impersonation
Target
VoIP integration
 Skype4Py:
• Not perfect, not fully automated, not stable
• Max one connection at a time
• Good enough, easy to integrate
• Big and variable noise (is it bad?)
 Any other VoIP
 Mobile app + jack-jack cable + audio in/out reprogramming (hdajackretask)
Either write your whole spoofer in Skype4Py
Or bash+python+pavucontrol:
• Turn off physical microphone
• Audio output -> Skype audio input
• Skype audio output -> microphone
• aplay will talk to skype, arecord will collect from skype
Relaying audio IN/OUT
Generating DTMF tones:
sox -n 7.wav synth 0.5 sine 852 sine 1209 channels 1
DTMF tones playback
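The one-liner above renders key 7 (852 Hz row tone + 1209 Hz column tone). Extending it to the whole keypad is a table lookup over the standard DTMF grid; `dtmf_wav` is a hypothetical helper wrapping the same sox call:

```python
import subprocess

# Standard DTMF grid: each key mixes one row tone and one column tone.
ROWS = [697, 770, 852, 941]
COLS = [1209, 1336, 1477]
KEYPAD = ["123", "456", "789", "*0#"]

def dtmf_freqs(key: str) -> tuple[int, int]:
    """Return the (row, column) tone pair for a keypad key."""
    for r, row in enumerate(KEYPAD):
        if key in row:
            return ROWS[r], COLS[row.index(key)]
    raise ValueError(f"not a DTMF key: {key}")

def dtmf_wav(key: str, seconds: float = 0.5) -> str:
    """Render <key>.wav with sox, mirroring the one-liner above."""
    low, high = dtmf_freqs(key)
    out = f"{key}.wav"
    subprocess.run(["sox", "-n", out, "synth", str(seconds),
                    "sine", str(low), "sine", str(high),
                    "channels", "1"], check=True)
    return out
```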
• Perfect: API access
• Free speech recognition: pocketsphinx (EN only)
• Almost free / non-free: Google Cloud speech recognition, Amazon Alexa voice
service
• Response length analysis trick
IVR automated response gathering
Successful authentication,
please press one to hear the
account balance
Please repeat one more time:
my voice is my password.
“Authentication rejected”
Response gathering
sox in.wav -c 1 in2.wav silence 1 0.01 14%
sox in2.wav -n stat 2> temp; cat temp | grep Length | awk '{print $3}'
sox -c 1 in2.wav in2.dat; tail -n +3 in2.dat > in3.dat
gnuplot: plot 'in3.dat'
Response length analysis trick
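Once the trimmed length is extracted, classifying the IVR response can be as simple as comparing durations, since the three prompts differ in length. The cutoffs below are illustrative and would be calibrated against the target IVR’s actual prompts:

```python
# Classify an IVR response by its silence-trimmed duration in seconds.
# Thresholds are hypothetical; measure the real prompts first.
def classify_response(length_s: float) -> str:
    if length_s < 2.0:
        return "rejected"   # short: "Authentication rejected"
    if length_s < 4.0:
        return "repeat"     # medium: "Please repeat one more time..."
    return "success"        # long: success prompt plus authenticated menu
```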
Loop: take the next fuzzed voiceprint -> call the number -> dial DTMF tones to get to the right menu -> play the voiceprint -> collect the response -> end the call -> decide which voiceprint to try next
Hint: ask for the recordings on provider’s side
Attack process
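The loop above can be sketched as a driver that delegates all the VoIP plumbing (Skype4Py call setup, DTMF playback, response capture and classification) to a single callable; `trial` here is a hypothetical stand-in for that layer:

```python
from typing import Callable

def run_campaign(trial: Callable[[str], str],
                 voiceprints: list[str]) -> dict[str, str]:
    """Play each fuzzed voiceprint through one IVR call, collect verdicts.

    `trial` takes a WAV path, performs one full call (dial, DTMF,
    playback, response classification), and returns the verdict string.
    """
    results: dict[str, str] = {}
    for wav in voiceprints:
        results[wav] = trial(wav)
        if results[wav] == "success":
            break   # stop fuzzing once a variant authenticates
    return results
```

Keeping the transport behind one callable makes it easy to swap Skype4Py for any other VoIP stack, or for the jack-cable mobile-app setup.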
Integration is the key element, example vulnerability in one of the systems:
POST /voice_auth HTTP/1.1
(…)
-----boundary12345
Content-disposition: form-data; name=d
Content-type: audio/x-wav
[WAV file]
-----boundary12345
Content-disposition: form-data; name=f
Content-Type: text/plain
44100
• Often, voice biometrics as a separate box
• Web/IVR/mobile code as a third-party lib, SDLC?
• HTTPS requests may have another endpoint (possible MiTM)
• Voice-authenticated sessions should be distinguished from standard ones!
2MB file limit, but f=1 DoSed the system
Mobile applications
 A pentest without threat modelling is bug bounty hunting
 Business assumptions will help design test scenarios, e.g. when the
thresholds are changed
 Turn off “learning mode” and “replay detection” to identify thresholds
 Identify thresholds on one account and attack the other – spoofing/replay
thresholds may be increased for the whole process security
 Do tests with noise cancellation and without - some solutions include noise
analysis in replay detection
 Generally, voice biometrics is a good idea but it should be tested thoroughly
 Solutions integrated in mobile apps require the app to be paired, IVR does not
Summary
OWASP Cheat Sheet
Q&A
 Assumption: non-functional security requirements for voice biometrics
 Cooperation with voice biometrics vendors
 Cooperation with privacy movements
 Draft available here:
 Code soon at https://github.com/jkaluzny
 Twitter: @j_kaluzny
 jkaluzny@themissinglink.com.au
Thank you
Q&A
