
Robust Audio Adversarial Example for a Physical Attack

Published on 2018/11/29 @ 東京大学 猿渡・小山研究室 (Saruwatari & Koyama Laboratory, The University of Tokyo)


  1. Goodfellow, I. J., Shlens, J., & Szegedy, C.: Explaining and harnessing adversarial examples. In Proc. of ICLR. (2015)

  2. Carlini, N., & Wagner, D.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Proc. of Deep Learning and Security Workshop. (2018)

  3. Carlini, N., & Wagner, D.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Proc. of Deep Learning and Security Workshop. (2018)

  4. For a classifier $f : \mathbb{R}^n \to \{1, \dots, k\}$ and an input $x \in \mathbb{R}^n$:
     untargeted adversarial example: find $\tilde{x} \in \mathbb{R}^n$ s.t. $f(x) \neq f(\tilde{x}) \wedge D(x, \tilde{x}) \leq \delta$;
     targeted adversarial example: find $\tilde{x}$ s.t. $f(\tilde{x}) = l \wedge D(x, \tilde{x}) \leq \delta$ for a target label $l \in \{1, \dots, k\}$.
     (Figure: the input $x$ with $f(x) =$ "panda" is perturbed to $\tilde{x}$ with $f(\tilde{x}) =$ "gibbon".)
  5. $\tilde{x} = x + \tilde{v}$, where $\tilde{v} = \arg\min_v \mathrm{Loss}_f(x + v, l) + \epsilon \| v \|$;
     the weight $\epsilon$ on $\| v \|$ keeps the perturbation small so that $D(x, \tilde{x}) \leq \delta$.
     (Figure: the "panda" input. A minimal code sketch of this objective follows below.)
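A minimal sketch of minimizing such a targeted objective by gradient descent. The toy softmax classifier, cross-entropy loss, and step size below are illustrative placeholders standing in for $f$ and $\mathrm{Loss}_f$, not the setup used in the slides.

    import numpy as np

    # Toy linear softmax classifier standing in for f : R^n -> {1, ..., k}.
    rng = np.random.default_rng(0)
    n, k = 32, 5
    W, b = rng.normal(size=(k, n)), rng.normal(size=k)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def loss_and_grad(x_adv, target):
        # Cross-entropy toward the target label and its gradient w.r.t. the input.
        p = softmax(W @ x_adv + b)
        return -np.log(p[target] + 1e-12), W.T @ (p - np.eye(k)[target])

    def targeted_attack(x, target, eps=0.1, lr=0.05, steps=200):
        # Minimize Loss_f(x + v, l) + eps * ||v||_2 over the perturbation v.
        v = np.zeros_like(x)
        for _ in range(steps):
            loss, grad = loss_and_grad(x + v, target)
            reg_grad = eps * v / (np.linalg.norm(v) + 1e-12)  # gradient of eps * ||v||
            v -= lr * (grad + reg_grad)
        return x + v

    x = rng.normal(size=n)
    x_adv = targeted_attack(x, target=3)
    print("f(x) =", np.argmax(W @ x + b), " f(x_adv) =", np.argmax(W @ x_adv + b))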
  6. $\tilde{x} = x + \tilde{v}$, where $\tilde{v} = \arg\min_v \mathrm{Loss}_f(x + v, l) + \epsilon \| v \|$;
     the weight $\epsilon$ on $\| v \|$ keeps the perturbation small so that $D(x, \tilde{x}) \leq \delta$.
     (Figure: the resulting adversarial example is classified as "gibbon".)
  7. Athalye, A., et al.: Synthesizing robust adversarial examples. In Proc. of ICML. (2018)
     Targeted condition: $f(\tilde{x}) = l$
  8. Expectation over Transformation (EOT):
     $\arg\min_v \mathbb{E}_{t \sim T}\!\left[ \mathrm{Loss}_f(t(x + v), l) + \epsilon\, D(t(x), t(x + v)) \right]$
     in place of the baseline objective $\arg\min_v \mathrm{Loss}_f(x + v, l) + \epsilon \| v \|$ (a sketch follows below).
     Athalye, A., et al.: Synthesizing robust adversarial examples. In Proc. of ICML. (2018)
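Expectation over Transformation averages the attack objective over sampled transformations so that the perturbation survives each of them. The sketch below uses random circular shifts as the transformation set $T$, a squared L2 distance as $D$, and the same kind of toy softmax classifier as above; all of these are illustrative choices, not the transformations considered by Athalye et al.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 32, 5
    W, b = rng.normal(size=(k, n)), rng.normal(size=k)    # toy stand-in for f

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def loss_and_grad(x_adv, target):
        p = softmax(W @ x_adv + b)
        return -np.log(p[target] + 1e-12), W.T @ (p - np.eye(k)[target])

    def eot_attack(x, target, eps=0.01, lr=0.05, steps=300, samples=8):
        # argmin_v E_{t~T}[ Loss_f(t(x + v), l) + eps * D(t(x), t(x + v)) ]
        # with T = random circular shifts and D = squared L2 distance.
        v = np.zeros_like(x)
        for _ in range(steps):
            g = np.zeros_like(x)
            for _ in range(samples):
                shift = int(rng.integers(0, n))            # t ~ T
                _, grad = loss_and_grad(np.roll(x + v, shift), target)
                g += np.roll(grad, -shift)                 # chain rule through the shift
                g += eps * 2 * v                           # grad of ||t(x + v) - t(x)||^2 = ||v||^2
            v -= lr * g / samples
        return x + v

    x = rng.normal(size=n)
    x_adv = eot_attack(x, target=2)
    print(np.argmax(W @ x + b), np.argmax(W @ np.roll(x_adv, 7) + b))   # check under an unseen shift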
  9. Yuan, X., et al.: CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition. In Proc. of USENIX Security. (2018)
     Carlini, N., & Wagner, D.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Proc. of Deep Learning and Security Workshop. (2018)
 10. Yuan, X., et al.: CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition. In Proc. of USENIX Security. (2018)
     Carlini, N., & Wagner, D.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Proc. of Deep Learning and Security Workshop. (2018)
 11. Yuan, X., et al.: CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition. In Proc. of USENIX Security. (2018)
     Carlini, N., & Wagner, D.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Proc. of Deep Learning and Security Workshop. (2018)
 12. Audio adversarial example against a speech-to-text model $f$:
     $\arg\min_v \mathrm{Loss}_f(\mathrm{MFCC}(x + v), l) + \epsilon \| v \|$, where $x \in \mathbb{R}^T$ is the waveform and $l \in \Sigma^N$ is the target transcription,
     in analogy with the image-domain objective $\arg\min_v \mathrm{Loss}_f(x + v, l) + \epsilon \| v \|$ (a sketch follows below).
     Carlini, N., & Wagner, D.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Proc. of Deep Learning and Security Workshop. (2018)
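A shape-level sketch of how this objective is assembled: the perturbed waveform goes through an MFCC front end before the recognizer's loss against the target transcription. librosa is my choice of MFCC implementation here, and dummy_ctc_loss is only a placeholder for the actual speech-to-text loss (e.g. CTC), so only the structure of the objective is shown.

    import numpy as np
    import librosa

    def objective(x, v, target_ids, recognizer_loss, eps=0.05, sr=16000):
        # Loss_f(MFCC(x + v), l) + eps * ||v||  (evaluation only, no gradients).
        feats = librosa.feature.mfcc(y=x + v, sr=sr, n_mfcc=40)   # MFCC front end
        return recognizer_loss(feats, target_ids) + eps * np.linalg.norm(v)

    def dummy_ctc_loss(feats, target_ids):
        # Placeholder for the speech-to-text model's loss against the target transcription.
        return float(np.abs(feats).mean())

    x = np.random.default_rng(0).normal(scale=0.01, size=16000)   # 1 s of audio at 16 kHz
    v = np.zeros_like(x)
    print(objective(x, v, target_ids=[7, 3, 9], recognizer_loss=dummy_ctc_loss))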
 13. Carlini, N., & Wagner, D.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Proc. of Deep Learning and Security Workshop. (2018)
 14. Athalye, A., et al.: Synthesizing robust adversarial examples. In Proc. of ICML. (2018)
 15. Band-limit the perturbation before adding it to the audio (a filter sketch follows below):
     $\arg\min_v \mathrm{Loss}_f(\mathrm{MFCC}(x + \mathrm{BPF}_{1000\text{--}4000\,\mathrm{Hz}}(v)), l) + \epsilon \| v \|$
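One way to realize the $\mathrm{BPF}_{1000\text{--}4000\,\mathrm{Hz}}(v)$ term is a zero-phase Butterworth band-pass filter; the scipy implementation and filter order below are my own choices, not specified on the slide.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def band_pass_1k_4k(v, sr=16000, order=4):
        # Restrict the perturbation v to the 1000-4000 Hz band before adding it to x.
        sos = butter(order, [1000, 4000], btype="bandpass", fs=sr, output="sos")
        return sosfiltfilt(sos, v)

    rng = np.random.default_rng(0)
    v = rng.normal(scale=0.01, size=16000)   # candidate perturbation (1 s at 16 kHz)
    v_bp = band_pass_1k_4k(v)
    # x + v_bp would then be fed through MFCC and the recognizer loss as above.
    print(v_bp.shape)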
 16. Convolution of a signal $u(t)$ with an impulse response $h(t)$ (a discrete sketch follows below):
     $u'(t) = \int_{-\infty}^{\infty} u(x)\, h(t - x)\, dx$
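For discrete audio, the integral becomes an ordinary discrete convolution of the waveform with a (room) impulse response. The synthetic exponentially decaying impulse response below is only a placeholder for a measured one.

    import numpy as np
    from scipy.signal import fftconvolve

    def conv_with_impulse_response(u, h):
        # Discrete counterpart of u'(t) = \int u(x) h(t - x) dx, truncated to len(u).
        return fftconvolve(u, h, mode="full")[: len(u)]

    rng = np.random.default_rng(0)
    sr = 16000
    u = rng.normal(scale=0.01, size=sr)                   # 1 s signal
    t = np.arange(int(0.3 * sr)) / sr
    h = rng.normal(size=t.size) * np.exp(-t / 0.05)       # synthetic decaying impulse response
    h /= np.abs(h).sum()
    print(conv_with_impulse_response(u, h).shape)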

 17. Take the expectation over a set $H$ of impulse responses, where $\mathrm{Conv}_h(\cdot)$ denotes convolution with $h$:
     $\arg\min_v \mathbb{E}_{h \sim H}\!\left[ \mathrm{Loss}(\mathrm{MFCC}(\mathrm{Conv}_h(x + \mathrm{BPF}_{1000\text{--}4000\,\mathrm{Hz}}(v))), l) + \epsilon \| v \| \right]$
 18. Additionally add white Gaussian noise $w \sim \mathcal{N}(0, \sigma^2)$ (a sketch of this objective follows below):
     $\arg\min_v \mathbb{E}_{h \sim H,\, w \sim \mathcal{N}(0, \sigma^2)}\!\left[ \mathrm{Loss}(\mathrm{MFCC}(\mathrm{Conv}_h(x + \mathrm{BPF}_{1000\text{--}4000\,\mathrm{Hz}}(v)) + w), l) + \epsilon \| v \| \right]$
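This robust objective (covering both the expectation over impulse responses from the previous slide and the added noise) can be estimated by Monte-Carlo sampling: draw an impulse response $h$ from a set $H$ and Gaussian noise $w$, simulate playback, and average the recognizer loss. The sketch below only evaluates such an estimate; the impulse responses are synthetic, librosa/scipy are my tooling choices, and dummy_loss again stands in for the actual speech-to-text loss.

    import numpy as np
    import librosa
    from scipy.signal import butter, sosfiltfilt, fftconvolve

    def robust_objective(x, v, target_ids, recognizer_loss, irs,
                         sigma=0.01, eps=0.05, sr=16000, samples=8, seed=0):
        # Monte-Carlo estimate of
        # E_{h~H, w~N(0, sigma^2)}[ Loss(MFCC(Conv_h(x + BPF(v)) + w), l) ] + eps * ||v||.
        rng = np.random.default_rng(seed)
        sos = butter(4, [1000, 4000], btype="bandpass", fs=sr, output="sos")
        v_bp = sosfiltfilt(sos, v)                                  # BPF_{1000~4000Hz}(v)
        total = 0.0
        for _ in range(samples):
            h = irs[rng.integers(len(irs))]                         # h ~ H
            y = fftconvolve(x + v_bp, h, mode="full")[: len(x)]     # Conv_h(x + BPF(v))
            y = y + rng.normal(scale=sigma, size=y.shape)           # + w,  w ~ N(0, sigma^2)
            feats = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
            total += recognizer_loss(feats, target_ids)
        return total / samples + eps * np.linalg.norm(v)

    # Placeholder pieces so the sketch runs: a dummy loss and two synthetic impulse responses.
    dummy_loss = lambda feats, target_ids: float(np.abs(feats).mean())
    rng = np.random.default_rng(1)
    sr = 16000
    irs = [rng.normal(size=800) * np.exp(-np.arange(800) / 200) for _ in range(2)]
    x = rng.normal(scale=0.01, size=sr)
    v = np.zeros_like(x)
    print(robust_objective(x, v, [1, 2, 3], dummy_loss, irs))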
 19. Carlini, N., & Wagner, D.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Proc. of Deep Learning and Security Workshop. (2018)
     Hannun, A. Y., et al.: Deep Speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567. (2014)
 20. (Figure: noise level $\sigma$.)
 21. Distortion is measured as the signal-to-noise ratio between the original audio and the perturbation (a numpy version follows below):
     $P_x = \frac{1}{T} \sum_{t=1}^{T} x_t^2, \quad P_v = \frac{1}{T} \sum_{t=1}^{T} v_t^2, \quad \mathrm{SNR} = 10 \log_{10} \frac{P_x}{P_v}$
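A direct numpy transcription of this measurement; the example signal scales are arbitrary.

    import numpy as np

    def snr_db(x, v):
        # 10 * log10(P_x / P_v) with P_x = mean(x_t^2) and P_v = mean(v_t^2):
        # higher values mean a quieter perturbation relative to the original audio.
        p_x = np.mean(np.square(x))
        p_v = np.mean(np.square(v))
        return 10.0 * np.log10(p_x / p_v)

    rng = np.random.default_rng(0)
    x = rng.normal(scale=0.1, size=16000)     # original waveform
    v = rng.normal(scale=0.01, size=16000)    # perturbation
    print(f"SNR: {snr_db(x, v):.1f} dB")      # roughly 20 dB for these scales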
 22.

 23.

 24.

 25.

 26.

 27.
 28. Povey, D., et al.: The Kaldi Speech Recognition Toolkit. In Proc. of ASRU. (2011)
 29. Povey, D., et al.: The Kaldi Speech Recognition Toolkit. In Proc. of ASRU. (2011)
 30. Carlini, N., & Wagner, D.: Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. In Proc. of Deep Learning and Security Workshop. (2018)
     Yang, Z., et al.: Characterizing Audio Adversarial Examples Using Temporal Dependency. arXiv preprint arXiv:1809.10875. (2018)
 31. Yang, Z., et al.: Characterizing Audio Adversarial Examples Using Temporal Dependency. arXiv preprint arXiv:1809.10875. (2018)
 32. Yang, Z., et al.: Characterizing Audio Adversarial Examples Using Temporal Dependency. arXiv preprint arXiv:1809.10875. (2018)
 33.
 34. Schönherr, L., et al.: Adversarial Attacks Against ASR Systems via Psychoacoustic Hiding. In Proc. of NDSS. (2019)
     Yuan, X., et al.: CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition. In Proc. of USENIX Security. (2018)
 35. Taori, R., et al.: Targeted Adversarial Examples for Black Box Audio Systems. arXiv preprint arXiv:1805.07820. (2018)
