2011 IEEE Int’l Conf. on Systems, Man, and Cybernetics (SMC2011)
               Special Session on Machine Learning, 9-12/10/2011, Anchorage, Alaska




      Design of robust classifiers for
       adversarial environments

              Battista Biggio, Giorgio Fumera, Fabio Roli




PRAgroup
Pattern Recognition and Applications Group
Department of Electrical and Electronic Engineering (DIEE)
University of Cagliari, Italy
Outline

• Adversarial classification
      – Pattern classifiers under attack


• Our approach
      – Modelling attacks to improve classifier security


• Application examples
      – Biometric identity verification
      – Spam filtering


• Conclusions and future work

Oct. 10, 2011   Design of robust classifiers for adversarial environments - F. Roli - SMC2011   2
Adversarial classification
•   Pattern recognition in security applications
       – spam filtering, intrusion detection, biometrics

•   Malicious adversaries aim to evade the system


      [Figure: two-dimensional feature space (x1, x2) with a decision
      boundary f(x) separating legitimate from malicious samples; the
      spam message "Buy viagra!" is manipulated into "Buy vi4gr@!" so
      that it crosses the boundary into the legitimate region.]
Open issues

1. Vulnerability identification

2. Security evaluation of pattern classifiers

3. Design of secure pattern classifiers




Our approach
• Rationale
       – to improve classifier security (robustness) by modelling the
         data distribution under attack


• Modelling potential attacks at testing time
       – Probabilistic model of data distribution under attack


• Exploiting the data model for designing more
  robust classifiers




Modelling attacks at test time

     Two-class problem: X is the feature vector; Y is the class label,
     legitimate (L) or malicious (M).

     [Graphical model: Y → X, with the attack acting on the generative model]

     P(X, Y) = P(Y) P(X | Y)



In adversarial scenarios, attacks can influence X and Y



Manipulation attacks against anti-spam filters
• Text classifiers in spam filtering
     – binary features (presence / absence of keywords)
• Common attacks
     – bad word obfuscation (BWO) and good word insertion (GWI)



     Buy viagra!                                  Buy vi4gr4!

                                                  Did you ever play that game
                                                  when you were a kid where the
                                                  little plastic hippo tries to
                                                  gobble up all your marbles?


  x = [0 0 1 0 0 0 0 0 …]                         x’ = [0 0 0 0 1 0 0 1 …]

                                     x' = A(x)
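The manipulation x' = A(x) can be sketched on a binary bag-of-words vector. The feature indices, word lists, and the nMAX budget below are illustrative assumptions, not the actual filter vocabulary:

```python
def attack(x, bad_idx, good_idx, n_max):
    """Return x' = A(x): flip up to n_max binary features,
    bad word obfuscation (BWO) first, then good word insertion (GWI)."""
    xp = list(x)
    budget = n_max
    for i in bad_idx:          # BWO: clear present "bad word" features
        if budget == 0:
            break
        if xp[i] == 1:
            xp[i] = 0
            budget -= 1
    for i in good_idx:         # GWI: set absent "good word" features
        if budget == 0:
            break
        if xp[i] == 0:
            xp[i] = 1
            budget -= 1
    return xp

x = [0, 0, 1, 0, 0, 0, 0, 0]              # "viagra" present at index 2
print(attack(x, bad_idx=[2], good_idx=[4, 7], n_max=3))
# [0, 0, 0, 0, 1, 0, 0, 1]
```

With a budget of nMAX = 3, the attacker obfuscates the one bad word and inserts two good words, matching the x → x' transformation on the slide.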

Modelling attacks at test time

                             [Graphical model: Y → X, under attack]

                             P(X, Y) = P(Y) P(X | Y)

In adversarial scenarios, attacks can influence X and Y

We must model this influence to design robust classifiers




Modelling attacks at test time


  [Graphical model: A → Y → X, with A influencing both Y and X]

            P(X, Y, A) = P(A) P(Y | A) P(X | Y, A)


• A is a random variable indicating whether the sample is
  an attack (True) or not (False)
• Y is the class label: legitimate (L), malicious (M)
• X is the feature vector
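A minimal numeric sketch of this factorisation, with toy probability tables (every number below is an illustrative assumption, and X is reduced to a single binary feature):

```python
# Toy tables for P(A), P(Y|A), P(X|Y,A); all values are illustrative.
p_a = {True: 0.3, False: 0.7}
p_y_given_a = {("M", True): 1.0, ("L", True): 0.0,   # attacks are malicious
               ("M", False): 0.4, ("L", False): 0.6}
p_x_given_ya = {(1, "M", True): 0.5, (0, "M", True): 0.5,
                (1, "M", False): 0.9, (0, "M", False): 0.1,
                (1, "L", False): 0.2, (0, "L", False): 0.8,
                (1, "L", True): 0.0, (0, "L", True): 0.0}

def joint(x, y, a):
    """P(X=x, Y=y, A=a) = P(A=a) P(Y=y | A=a) P(X=x | Y=y, A=a)."""
    return p_a[a] * p_y_given_a[(y, a)] * p_x_given_ya[(x, y, a)]

# Sanity check: the joint sums to one over all (x, y, a) combinations.
total = sum(joint(x, y, a) for x in (0, 1)
            for y in ("L", "M") for a in (True, False))
print(total)   # sums to 1 (up to float rounding)
```

Setting P(Y=M | A=T) = 1 in the table encodes the assumption, made explicit later, that every attack sample is malicious.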


Modelling attacks at test time
Training time: class-conditional densities Ptr(X, Y=L) and Ptr(X, Y=M)

Testing time: Pts(X, Y=L), and for the malicious class

   Pts(X, Y=M) = Pts(X, Y=M | A=T) P(A=T) + Pts(X, Y=M, A=F)

[Figure: densities over x at training and testing time; at testing time
the malicious density includes attacks which were not present in the
training phase]
Modelling attacks at testing time
•   Attack distribution
     – P(X,Y=M, A=T) = P(X|Y=M,A=T)P(Y=M|A=T)P(A=T)

•   Choice of P(Y=M|A=T)
     – We set it to 1, since we assume the adversary only has
       control over malicious samples

•   P(A=T) is thus the proportion of attacks among malicious
    samples
     – It is a parameter that tunes the security/accuracy trade-off
     – The more attacks are simulated during the training phase,
       the more robust (but less accurate in the absence of attacks)
       the classifier is expected to be at testing time
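How P(A=T) blends the two components can be sketched with toy class-conditional tables; the training-time density below and the uniform attack value 0.25 (over two binary features) are illustrative assumptions:

```python
def p_malicious_test(p_attack, p_train, p_attack_density=0.25):
    """Pts(X | Y=M) = p_attack_density * P(A=T) + Ptr(X | Y=M) * (1 - P(A=T)),
    taking P(Y=M | A=T) = 1 as assumed above."""
    return {x: p_attack_density * p_attack + p * (1.0 - p_attack)
            for x, p in p_train.items()}

# Toy training-time malicious density over two binary features.
p_train = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

print(p_malicious_test(0.0, p_train))  # no attacks: training distribution
print(p_malicious_test(1.0, p_train))  # all attacks: uniform, 0.25 everywhere
```

Sliding P(A=T) from 0 to 1 moves the malicious density from the one estimated at training time toward the assumed attack density, which is exactly the security/accuracy knob described above.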



Modelling attacks at testing time

                  Key issue:

                  modelling Pts(X, Y=M | A=T)

[Figure: testing-time densities Pts(X, Y=L) and Pts(X, Y=M, A=F); the
attack component Pts(X, Y=M | A=T) is the unknown to be modelled]
Modelling attacks at testing time
               •     Choice of Pts(X, Y=M | A=T)
                     – Requires application-specific knowledge
                     – Even if knowledge about the attack is available, it is
                       still difficult to model analytically
                     – An agnostic choice is the uniform distribution
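For d binary features, the agnostic uniform choice assigns probability 2^-d to every feature vector. A quick numerical check (d = 4 is an arbitrary choice):

```python
from itertools import product

def uniform_attack_prob(x):
    """Uniform Pts(X | Y=M, A=T) over binary vectors: 2**-len(x)."""
    return 2.0 ** -len(x)

d = 4
total = sum(uniform_attack_prob(x) for x in product((0, 1), repeat=d))
print(total)   # 1.0: the uniform masses sum to one over the feature space
```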


Testing time:

   Pts(X, Y=M) = Pts(X, Y=M | A=T) P(A=T) + Pts(X, Y=M, A=F)

[Figure: testing-time densities over x, with the uniform attack
component blended into the malicious class density]



Experiments
Spoofing attacks against biometric systems
• Multi-modal biometric verification systems
      – Spoofing attacks


     [Figure: multi-modal verification scheme. The claimed identity is
     checked by a face matcher and a fingerprint matcher, whose scores
     s1 and s2 feed a fusion module that outputs Genuine / Impostor.
     Example spoofing attacks: fake fingerprints, and a photo attack
     against the face matcher.]
Experiments
  Multi-modal biometric identity verification

  [Figure: each sensor feeds a matcher (face, fingerprint); the scores
  s1 and s2 are combined by a score fusion rule f(s1, s2) into a single
  score s, which is compared against a decision threshold to output
  genuine (true) or impostor (false)]
• Data set
      – NIST Biometric Score Set 1 (publicly available)
• Fusion rules
      – Likelihood ratio (LLR):   s = p(s1 | G) p(s2 | G) / ( p(s1 | I) p(s2 | I) )
      – Extended LLR
          [Rodrigues et al., Robustness of multimodal biometric fusion methods against spoof attacks,
          JVLC 2009]

      – Our approach (Uniform LLR)
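A sketch of the plain LLR and the Uniform LLR rules, with Gaussian genuine/impostor score models and a uniform attack density on [0, 1]^2 standing in for the fitted NIST models (all parameter values here are illustrative assumptions):

```python
from math import exp, pi, sqrt

def gauss(s, mu, sigma):
    """Univariate Gaussian density, used as a toy score model."""
    return exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def llr(s1, s2):
    # Standard LLR: s = p(s1|G) p(s2|G) / (p(s1|I) p(s2|I))
    num = gauss(s1, 0.8, 0.1) * gauss(s2, 0.8, 0.1)   # genuine models
    den = gauss(s1, 0.2, 0.1) * gauss(s2, 0.2, 0.1)   # impostor models
    return num / den

def uniform_llr(s1, s2, p_attack):
    # Robust variant: mix a uniform attack density (value 1.0 on the
    # unit score square) into the impostor model, weighted by P(A=T).
    num = gauss(s1, 0.8, 0.1) * gauss(s2, 0.8, 0.1)
    den = (p_attack * 1.0
           + (1 - p_attack) * gauss(s1, 0.2, 0.1) * gauss(s2, 0.2, 0.1))
    return num / den

# A spoofed fingerprint pushes s2 toward genuine scores; the robust
# rule yields a lower (more cautious) score than the plain LLR.
print(llr(0.3, 0.9) > uniform_llr(0.3, 0.9, p_attack=0.2))  # True
```

With P(A=T) = 0 the two rules coincide; increasing P(A=T) inflates the impostor density where the attack may lie, which is what trades accuracy for security.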

Remarks on experiments
        • The Extended LLR [Rodrigues et al., 2009] used for
          comparison assumes that the attack distribution is
          equal to the distribution of legitimate patterns:

               Pts (X, Y=L) = Pts (X, Y=M | A=T)

        [Figure: testing-time densities; the attack component coincides
        with the legitimate density Pts (X, Y=L), while
        Pts (X, Y=M, A=F) is unchanged]

        • Our rule, Uniform LLR, assumes a uniform distribution

     Experiments are done assuming that attack patterns
     are exact replicas of legitimate patterns (worst case)
Experiments
  Multi-modal biometric identity verification
• Uniform LLR under fingerprint spoofing attacks
      – Security (FAR) vs accuracy (GAR) for different P(A=T)
        values
      – No attack (solid) / under attack (dashed)




Experiments
  Multi-modal biometric identity verification
• Uniform vs Extended LLR under fingerprint spoofing




Experiments
                                 Spam filtering
• Similar results obtained in spam filtering
      – TREC 2007 public data set
      – Naive Bayes text classifier
      – GWI/BWO attacks with nMAX modified words per spam




  [Figure: ROC curves (TP rate vs FP rate over [0, 0.1]), summarised by
  the AUC10% measure]
Conclusions and future work
• We presented a general generative approach
  for designing robust classifiers against attacks at
  test time

• Reported results show that our approach allows us to
  increase the robustness (i.e., the security) of
  classifiers

• Future work
      – To test Uniform LLR against more realistic spoof attacks
            • Preliminary result: worst-case assumption is too pessimistic!
                Biggio, Akhtar, Fumera, Marcialis, Roli, “Robustness of multimodal biometric
                systems under realistic spoof attacks”, IJCB 2011





More Related Content

More from Pluribus One

On Security and Sparsity of Linear Classifiers for Adversarial Settings
On Security and Sparsity of Linear Classifiers for Adversarial SettingsOn Security and Sparsity of Linear Classifiers for Adversarial Settings
On Security and Sparsity of Linear Classifiers for Adversarial Settings
Pluribus One
 
Secure Kernel Machines against Evasion Attacks
Secure Kernel Machines against Evasion AttacksSecure Kernel Machines against Evasion Attacks
Secure Kernel Machines against Evasion Attacks
Pluribus One
 
Machine Learning under Attack: Vulnerability Exploitation and Security Measures
Machine Learning under Attack: Vulnerability Exploitation and Security MeasuresMachine Learning under Attack: Vulnerability Exploitation and Security Measures
Machine Learning under Attack: Vulnerability Exploitation and Security Measures
Pluribus One
 
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
Pluribus One
 
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
Pluribus One
 
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
Pluribus One
 
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
Pluribus One
 
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware ClusteringBattista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
Pluribus One
 
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
Pluribus One
 
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
Pluribus One
 
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
Pluribus One
 
Zahid Akhtar - Ph.D. Defense Slides
Zahid Akhtar - Ph.D. Defense SlidesZahid Akhtar - Ph.D. Defense Slides
Zahid Akhtar - Ph.D. Defense Slides
Pluribus One
 
Robustness of multimodal biometric verification systems under realistic spoof...
Robustness of multimodal biometric verification systems under realistic spoof...Robustness of multimodal biometric verification systems under realistic spoof...
Robustness of multimodal biometric verification systems under realistic spoof...
Pluribus One
 
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
Pluribus One
 
Amilab IJCB 2011 Poster
Amilab IJCB 2011 PosterAmilab IJCB 2011 Poster
Amilab IJCB 2011 Poster
Pluribus One
 
Ariu - Workshop on Artificial Intelligence and Security - 2011
Ariu - Workshop on Artificial Intelligence and Security - 2011Ariu - Workshop on Artificial Intelligence and Security - 2011
Ariu - Workshop on Artificial Intelligence and Security - 2011
Pluribus One
 
Ariu - Workshop on Applications of Pattern Analysis 2010 - Poster
Ariu - Workshop on Applications of Pattern Analysis 2010 - PosterAriu - Workshop on Applications of Pattern Analysis 2010 - Poster
Ariu - Workshop on Applications of Pattern Analysis 2010 - Poster
Pluribus One
 
Ariu - Workshop on Multiple Classifier Systems - 2011
Ariu - Workshop on Multiple Classifier Systems - 2011Ariu - Workshop on Multiple Classifier Systems - 2011
Ariu - Workshop on Multiple Classifier Systems - 2011
Pluribus One
 
Ariu - Workshop on Applications of Pattern Analysis
Ariu - Workshop on Applications of Pattern AnalysisAriu - Workshop on Applications of Pattern Analysis
Ariu - Workshop on Applications of Pattern Analysis
Pluribus One
 
Ariu - Workshop on Multiple Classifier Systems 2011
Ariu - Workshop on Multiple Classifier Systems 2011Ariu - Workshop on Multiple Classifier Systems 2011
Ariu - Workshop on Multiple Classifier Systems 2011
Pluribus One
 

More from Pluribus One (20)

On Security and Sparsity of Linear Classifiers for Adversarial Settings
On Security and Sparsity of Linear Classifiers for Adversarial SettingsOn Security and Sparsity of Linear Classifiers for Adversarial Settings
On Security and Sparsity of Linear Classifiers for Adversarial Settings
 
Secure Kernel Machines against Evasion Attacks
Secure Kernel Machines against Evasion AttacksSecure Kernel Machines against Evasion Attacks
Secure Kernel Machines against Evasion Attacks
 
Machine Learning under Attack: Vulnerability Exploitation and Security Measures
Machine Learning under Attack: Vulnerability Exploitation and Security MeasuresMachine Learning under Attack: Vulnerability Exploitation and Security Measures
Machine Learning under Attack: Vulnerability Exploitation and Security Measures
 
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...
 
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...
 
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph...
 
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o...
 
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware ClusteringBattista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
Battista Biggio @ AISec 2014 - Poisoning Behavioral Malware Clustering
 
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ...
 
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a...
 
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines"
 
Zahid Akhtar - Ph.D. Defense Slides
Zahid Akhtar - Ph.D. Defense SlidesZahid Akhtar - Ph.D. Defense Slides
Zahid Akhtar - Ph.D. Defense Slides
 
Robustness of multimodal biometric verification systems under realistic spoof...
Robustness of multimodal biometric verification systems under realistic spoof...Robustness of multimodal biometric verification systems under realistic spoof...
Robustness of multimodal biometric verification systems under realistic spoof...
 
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
Support Vector Machines Under Adversarial Label Noise (ACML 2011) - Battista ...
 
Amilab IJCB 2011 Poster
Amilab IJCB 2011 PosterAmilab IJCB 2011 Poster
Amilab IJCB 2011 Poster
 
Ariu - Workshop on Artificial Intelligence and Security - 2011
Ariu - Workshop on Artificial Intelligence and Security - 2011Ariu - Workshop on Artificial Intelligence and Security - 2011
Ariu - Workshop on Artificial Intelligence and Security - 2011
 
Ariu - Workshop on Applications of Pattern Analysis 2010 - Poster
Ariu - Workshop on Applications of Pattern Analysis 2010 - PosterAriu - Workshop on Applications of Pattern Analysis 2010 - Poster
Ariu - Workshop on Applications of Pattern Analysis 2010 - Poster
 
Ariu - Workshop on Multiple Classifier Systems - 2011
Ariu - Workshop on Multiple Classifier Systems - 2011Ariu - Workshop on Multiple Classifier Systems - 2011
Ariu - Workshop on Multiple Classifier Systems - 2011
 
Ariu - Workshop on Applications of Pattern Analysis
Ariu - Workshop on Applications of Pattern AnalysisAriu - Workshop on Applications of Pattern Analysis
Ariu - Workshop on Applications of Pattern Analysis
 
Ariu - Workshop on Multiple Classifier Systems 2011
Ariu - Workshop on Multiple Classifier Systems 2011Ariu - Workshop on Multiple Classifier Systems 2011
Ariu - Workshop on Multiple Classifier Systems 2011
 

Recently uploaded

How to Manage Your Lost Opportunities in Odoo 17 CRM
How to Manage Your Lost Opportunities in Odoo 17 CRMHow to Manage Your Lost Opportunities in Odoo 17 CRM
How to Manage Your Lost Opportunities in Odoo 17 CRM
Celine George
 
Liberal Approach to the Study of Indian Politics.pdf
Liberal Approach to the Study of Indian Politics.pdfLiberal Approach to the Study of Indian Politics.pdf
Liberal Approach to the Study of Indian Politics.pdf
WaniBasim
 
How to Add Chatter in the odoo 17 ERP Module
How to Add Chatter in the odoo 17 ERP ModuleHow to Add Chatter in the odoo 17 ERP Module
How to Add Chatter in the odoo 17 ERP Module
Celine George
 
CACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdfCACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdf
camakaiclarkmusic
 
PCOS corelations and management through Ayurveda.
PCOS corelations and management through Ayurveda.PCOS corelations and management through Ayurveda.
PCOS corelations and management through Ayurveda.
Dr. Shivangi Singh Parihar
 
writing about opinions about Australia the movie
writing about opinions about Australia the moviewriting about opinions about Australia the movie
writing about opinions about Australia the movie
Nicholas Montgomery
 
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdfANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
Priyankaranawat4
 
World environment day ppt For 5 June 2024
World environment day ppt For 5 June 2024World environment day ppt For 5 June 2024
World environment day ppt For 5 June 2024
ak6969907
 
PIMS Job Advertisement 2024.pdf Islamabad
PIMS Job Advertisement 2024.pdf IslamabadPIMS Job Advertisement 2024.pdf Islamabad
PIMS Job Advertisement 2024.pdf Islamabad
AyyanKhan40
 
Your Skill Boost Masterclass: Strategies for Effective Upskilling
Your Skill Boost Masterclass: Strategies for Effective UpskillingYour Skill Boost Masterclass: Strategies for Effective Upskilling
Your Skill Boost Masterclass: Strategies for Effective Upskilling
Excellence Foundation for South Sudan
 
The Diamonds of 2023-2024 in the IGRA collection
The Diamonds of 2023-2024 in the IGRA collectionThe Diamonds of 2023-2024 in the IGRA collection
The Diamonds of 2023-2024 in the IGRA collection
Israel Genealogy Research Association
 
BÀI TẬP BỔ TRỢ TIẾNG ANH 8 CẢ NĂM - GLOBAL SUCCESS - NĂM HỌC 2023-2024 (CÓ FI...
BÀI TẬP BỔ TRỢ TIẾNG ANH 8 CẢ NĂM - GLOBAL SUCCESS - NĂM HỌC 2023-2024 (CÓ FI...BÀI TẬP BỔ TRỢ TIẾNG ANH 8 CẢ NĂM - GLOBAL SUCCESS - NĂM HỌC 2023-2024 (CÓ FI...
BÀI TẬP BỔ TRỢ TIẾNG ANH 8 CẢ NĂM - GLOBAL SUCCESS - NĂM HỌC 2023-2024 (CÓ FI...
Nguyen Thanh Tu Collection
 
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
PECB
 
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdfবাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
eBook.com.bd (প্রয়োজনীয় বাংলা বই)
 
The History of Stoke Newington Street Names
The History of Stoke Newington Street NamesThe History of Stoke Newington Street Names
The History of Stoke Newington Street Names
History of Stoke Newington
 
Natural birth techniques - Mrs.Akanksha Trivedi Rama University
Natural birth techniques - Mrs.Akanksha Trivedi Rama UniversityNatural birth techniques - Mrs.Akanksha Trivedi Rama University
Natural birth techniques - Mrs.Akanksha Trivedi Rama University
Akanksha trivedi rama nursing college kanpur.
 
MARY JANE WILSON, A “BOA MÃE” .
MARY JANE WILSON, A “BOA MÃE”           .MARY JANE WILSON, A “BOA MÃE”           .
MARY JANE WILSON, A “BOA MÃE” .
Colégio Santa Teresinha
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
heathfieldcps1
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
Peter Windle
 
How to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold MethodHow to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold Method
Celine George
 

Recently uploaded (20)

How to Manage Your Lost Opportunities in Odoo 17 CRM
How to Manage Your Lost Opportunities in Odoo 17 CRMHow to Manage Your Lost Opportunities in Odoo 17 CRM
How to Manage Your Lost Opportunities in Odoo 17 CRM
 
Liberal Approach to the Study of Indian Politics.pdf
Liberal Approach to the Study of Indian Politics.pdfLiberal Approach to the Study of Indian Politics.pdf
Liberal Approach to the Study of Indian Politics.pdf
 
How to Add Chatter in the odoo 17 ERP Module
How to Add Chatter in the odoo 17 ERP ModuleHow to Add Chatter in the odoo 17 ERP Module
How to Add Chatter in the odoo 17 ERP Module
 
CACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdfCACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdf
 
PCOS corelations and management through Ayurveda.
PCOS corelations and management through Ayurveda.PCOS corelations and management through Ayurveda.
PCOS corelations and management through Ayurveda.
 
writing about opinions about Australia the movie
writing about opinions about Australia the moviewriting about opinions about Australia the movie
writing about opinions about Australia the movie
 
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdfANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
ANATOMY AND BIOMECHANICS OF HIP JOINT.pdf
 
World environment day ppt For 5 June 2024
World environment day ppt For 5 June 2024World environment day ppt For 5 June 2024
World environment day ppt For 5 June 2024
 
PIMS Job Advertisement 2024.pdf Islamabad
PIMS Job Advertisement 2024.pdf IslamabadPIMS Job Advertisement 2024.pdf Islamabad
PIMS Job Advertisement 2024.pdf Islamabad
 
Your Skill Boost Masterclass: Strategies for Effective Upskilling
Your Skill Boost Masterclass: Strategies for Effective UpskillingYour Skill Boost Masterclass: Strategies for Effective Upskilling
Your Skill Boost Masterclass: Strategies for Effective Upskilling
 
The Diamonds of 2023-2024 in the IGRA collection
The Diamonds of 2023-2024 in the IGRA collectionThe Diamonds of 2023-2024 in the IGRA collection
The Diamonds of 2023-2024 in the IGRA collection
 
BÀI TẬP BỔ TRỢ TIẾNG ANH 8 CẢ NĂM - GLOBAL SUCCESS - NĂM HỌC 2023-2024 (CÓ FI...
BÀI TẬP BỔ TRỢ TIẾNG ANH 8 CẢ NĂM - GLOBAL SUCCESS - NĂM HỌC 2023-2024 (CÓ FI...BÀI TẬP BỔ TRỢ TIẾNG ANH 8 CẢ NĂM - GLOBAL SUCCESS - NĂM HỌC 2023-2024 (CÓ FI...
BÀI TẬP BỔ TRỢ TIẾNG ANH 8 CẢ NĂM - GLOBAL SUCCESS - NĂM HỌC 2023-2024 (CÓ FI...
 
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...
 
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdfবাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
বাংলাদেশ অর্থনৈতিক সমীক্ষা (Economic Review) ২০২৪ UJS App.pdf
 
The History of Stoke Newington Street Names
The History of Stoke Newington Street NamesThe History of Stoke Newington Street Names
The History of Stoke Newington Street Names
 
Natural birth techniques - Mrs.Akanksha Trivedi Rama University
Natural birth techniques - Mrs.Akanksha Trivedi Rama UniversityNatural birth techniques - Mrs.Akanksha Trivedi Rama University
Natural birth techniques - Mrs.Akanksha Trivedi Rama University
 
MARY JANE WILSON, A “BOA MÃE” .
MARY JANE WILSON, A “BOA MÃE”           .MARY JANE WILSON, A “BOA MÃE”           .
MARY JANE WILSON, A “BOA MÃE” .
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
 
How to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold MethodHow to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold Method
 

Design of robust classifiers for adversarial environments - Systems, Man, and Cybernetics (SMC), 2011 IEEE International Conference on

  • 1. 2011 IEEE Int’l Conf. on Systems, Man, and Cybernetics (SMC2011) Special Session on Machine Learning, 9-12/10/2011, Anchorage, Alaska Design of robust classifiers for adversarial environments Battista Biggio, Giorgio Fumera, Fabio Roli PRAgroup Pattern Recognition and Applications Group Department of Electrical and Electronic Engineering (DIEE) University of Cagliari, Italy
  • 2. Outline
    - Adversarial classification: pattern classifiers under attack
    - Our approach: modelling attacks to improve classifier security
    - Application examples: biometric identity verification; spam filtering
    - Conclusions and future work
    Oct. 10, 2011 - Design of robust classifiers for adversarial environments - F. Roli - SMC2011
  • 3. Adversarial classification
    - Pattern recognition in security applications: spam filtering, intrusion detection, biometrics
    - Malicious adversaries aim to evade the system
    [Figure: two-class feature space (x1, x2) with decision boundary f(x); the malicious sample "Buy viagra!" is moved into the legitimate region by rewriting it as "Buy vi4gr@!"]
  • 4. Open issues
    1. Vulnerability identification
    2. Security evaluation of pattern classifiers
    3. Design of secure pattern classifiers
  • 5. Our approach
    - Rationale: improve classifier security (robustness) by modelling the data distribution under attack
    - Modelling potential attacks at test time: a probabilistic model of the data distribution under attack
    - Exploiting the data model to design more robust classifiers
  • 6. Modelling attacks at test time
    - Two-class problem: X is the feature vector; Y is the class label, legitimate (L) or malicious (M)
    - Generative model (graph Y -> X): P(X, Y) = P(Y) P(X | Y)
    - In adversarial scenarios, attacks can influence both X and Y
  • 7. Manipulation attacks against anti-spam filters
    - Text classifiers in spam filtering: binary features (presence/absence of keywords)
    - Common attacks: bad word obfuscation (BWO) and good word insertion (GWI)
    - Example: "Buy viagra!" becomes "Buy vi4gr4! Did you ever play that game when you were a kid where the little plastic hippo tries to gobble up all your marbles?"
    - In feature space: x = [0 0 1 0 0 0 0 0 ...] is mapped to x' = [0 0 0 0 1 0 0 1 ...], i.e., x' = A(x)
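The manipulation x' = A(x) above can be sketched in code. This is a minimal illustration, not the deck's implementation: the feature indices, the greedy order, and the change budget `n_max` are all hypothetical.

```python
import numpy as np

# Hypothetical sketch of the manipulation attack A(x) on a binary
# bag-of-words vector: bad word obfuscation (BWO) clears features the
# filter associates with spam; good word insertion (GWI) sets features
# associated with legitimate mail. At most n_max features are changed.
def manipulate(x, bad_word_idx, good_word_idx, n_max):
    """Return x' = A(x), changing at most n_max features of x."""
    x_prime = x.copy()
    budget = n_max
    for i in bad_word_idx:          # BWO: obfuscate spammy words
        if budget == 0:
            break
        if x_prime[i] == 1:
            x_prime[i] = 0
            budget -= 1
    for i in good_word_idx:         # GWI: insert "good" words
        if budget == 0:
            break
        if x_prime[i] == 0:
            x_prime[i] = 1
            budget -= 1
    return x_prime

x = np.array([0, 0, 1, 0, 0, 0, 0, 0])   # "viagra" present at index 2
x_prime = manipulate(x, bad_word_idx=[2], good_word_idx=[4, 7], n_max=3)
print(x_prime)  # [0 0 0 0 1 0 0 1]
```

With one obfuscated bad word and two inserted good words, the result matches the slide's x' = [0 0 0 0 1 0 0 1 ...].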
  • 8. Modelling attacks at test time
    - P(X, Y) = P(Y) P(X | Y): in adversarial scenarios, attacks can influence both X and Y
    - We must model this influence to design robust classifiers
  • 9. Modelling attacks at test time
    - Extended model (graph A -> Y -> X): P(X, Y, A) = P(A) P(Y | A) P(X | Y, A)
    - A is a random variable indicating whether the sample is an attack (True) or not (False)
    - Y is the class label: legitimate (L), malicious (M)
    - X is the feature vector
  • 10. Modelling attacks at test time
    - Training time: the classifier is fit to P_tr(X, Y=L) and P_tr(X, Y=M)
    - Testing time: P_ts(X, Y=M) = P_ts(X, Y=M | A=T) P(A=T) + P_ts(X, Y=M, A=F)
    - The attack component accounts for attacks that were not present at training time
  • 11. Modelling attacks at testing time
    - Attack distribution: P(X, Y=M, A=T) = P(X | Y=M, A=T) P(Y=M | A=T) P(A=T)
    - Choice of P(Y=M | A=T): we set it to 1, since we assume the adversary only controls malicious samples
    - P(A=T) is thus the fraction of attacks among malicious samples; it is a parameter that tunes the security/accuracy trade-off
    - The more attacks are simulated during training, the more robust (but less accurate in the absence of attacks) the classifier is expected to be at test time
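The mixture above can be sketched for a naive Bayes model with binary features. This is a hypothetical parameterisation, assuming per-feature Bernoulli parameters `theta_m` for the malicious class and the uniform (agnostic) attack component discussed later in the deck:

```python
import numpy as np

# Minimal sketch: the test-time malicious distribution is a mixture of
# the training-time malicious component (A=F) and an attack component
# (A=T), weighted by p_attack = P(A=T). With P(Y=M | A=T) = 1 and a
# uniform attack distribution over d binary features, the attack
# component has density 0.5 ** d at every point of {0, 1}^d.
def p_x_given_malicious_test(x, theta_m, p_attack):
    """P_ts(x | Y=M) for a Bernoulli naive Bayes malicious class."""
    d = len(x)
    p_no_attack = np.prod(theta_m ** x * (1 - theta_m) ** (1 - x))
    p_attack_comp = 0.5 ** d        # uniform over {0, 1}^d
    return (1 - p_attack) * p_no_attack + p_attack * p_attack_comp

theta_m = np.array([0.8, 0.1, 0.9])   # made-up training spam parameters
x = np.array([1, 0, 1])
print(p_x_given_malicious_test(x, theta_m, p_attack=0.0))  # 0.648 (pure training model)
print(p_x_given_malicious_test(x, theta_m, p_attack=0.5))  # 0.3865 (robust mixture)
```

Raising `p_attack` pulls the malicious class-conditional toward the attack component, which is exactly the security/accuracy knob described on the slide.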
  • 12. Modelling attacks at testing time
    - Key issue: modelling P_ts(X, Y=M | A=T)
    [Figure: test-time densities P_ts(X, Y=L) and P_ts(X, Y=M, A=F), with the attack component still to be modelled]
  • 13. Modelling attacks at testing time
    - The choice of P_ts(X, Y=M | A=T) requires application-specific knowledge
    - Even when knowledge about the attack is available, it is still difficult to model analytically
    - An agnostic choice is the uniform distribution
    - Recall: P_ts(X, Y=M) = P_ts(X, Y=M | A=T) P(A=T) + P_ts(X, Y=M, A=F)
  • 14. Experiments: spoofing attacks against biometric systems
    - Multi-modal biometric verification systems are exposed to spoofing attacks, e.g., fake fingerprints and photo attacks against face matchers
    [Figure: for a claimed identity, a face matcher and a fingerprint matcher output scores s1 and s2; a fusion module combines them and decides Genuine / Impostor]
  • 15. Experiments: multi-modal biometric identity verification
    - Architecture: sensors feed a face matcher and a fingerprint matcher; the score fusion rule computes s = f(s1, s2) and compares it to a decision threshold to output true genuine / false impostor
    - Data set: NIST Biometric Score Set 1 (publicly available)
    - Fusion rules:
      - Likelihood ratio (LLR): s = [p(s1 | G) p(s2 | G)] / [p(s1 | I) p(s2 | I)]
      - Extended LLR [Rodrigues et al., Robustness of multimodal biometric fusion methods against spoof attacks, JVLC 2009]
      - Our approach (Uniform LLR)
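The LLR fusion rule can be sketched as follows. This is an illustration only: the Gaussian score densities and all parameters below are made up, not estimated from the NIST Biometric Score Set 1.

```python
import numpy as np

# Sketch of the likelihood-ratio (LLR) score fusion rule
#   s = [p(s1|G) p(s2|G)] / [p(s1|I) p(s2|I)],
# assuming (hypothetically) Gaussian genuine (G) and impostor (I)
# score densities for each matcher.
def gauss_pdf(s, mu, sigma):
    return np.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def llr_fusion(s1, s2, params):
    """Fused score s; accept as genuine when s exceeds a threshold."""
    num = gauss_pdf(s1, *params["face_G"]) * gauss_pdf(s2, *params["fing_G"])
    den = gauss_pdf(s1, *params["face_I"]) * gauss_pdf(s2, *params["fing_I"])
    return num / den

params = {                                   # made-up (mu, sigma) pairs
    "face_G": (0.8, 0.1), "face_I": (0.3, 0.1),
    "fing_G": (0.7, 0.1), "fing_I": (0.2, 0.1),
}
s = llr_fusion(0.75, 0.65, params)
print("genuine" if s > 1.0 else "impostor")  # genuine
```

The Extended LLR and the Uniform LLR modify the impostor-side density in the denominator to account for the spoofing attack component, along the lines of the mixture model above.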
  • 16. Remarks on experiments
    - The Extended LLR [Rodrigues et al., 2009], used for comparison, assumes that the attack distribution equals the distribution of legitimate patterns: P_ts(X, Y=M | A=T) = P_ts(X, Y=L)
    - Our rule, the Uniform LLR, assumes a uniform attack distribution
    - Experiments are run assuming that attack patterns are exact replicas of legitimate patterns (worst case)
  • 17. Experiments: multi-modal biometric identity verification
    - Uniform LLR under fingerprint spoofing attacks
    [Figure: security (FAR) vs accuracy (GAR) for different P(A=T) values; no attack (solid) / under attack (dashed)]
  • 18. Experiments: multi-modal biometric identity verification
    - Uniform vs Extended LLR under fingerprint spoofing
  • 19. Experiments: spam filtering
    - Similar results were obtained in spam filtering
    - TREC 2007 public data set; naive Bayes text classifier
    - GWI/BWO attacks with at most nMAX modified words per spam message
    [Figure: AUC10%, the area under the TP/FP ROC curve restricted to FP in [0, 0.1]]
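A worst-case GWI/BWO attack against a linear (naive Bayes) text classifier can be sketched as a greedy search: flip the nMAX feature changes that lower the spam score the most. The weights and the sample below are made up for illustration; this is not the deck's experimental code.

```python
import numpy as np

# Toy sketch of the GWI/BWO evaluation. For a naive Bayes text
# classifier with log-odds weights w (positive = spammy word), the
# score of a message is x @ w. Flipping feature i changes the score by
# +w[i] (insertion, GWI) or -w[i] (removal, BWO); the worst-case
# attacker greedily makes the n_max flips with the largest score drop.
def attack_spam(x, w, n_max):
    gain = np.where(x == 1, w, -w)      # score drop from flipping feature i
    order = np.argsort(gain)[::-1]      # most helpful flips first
    x_prime = x.copy()
    for i in order[:n_max]:
        if gain[i] <= 0:
            break                       # remaining flips would not help
        x_prime[i] = 1 - x_prime[i]
    return x_prime

w = np.array([2.0, -1.5, 3.0, -0.5])    # made-up log P(x_i|M)/P(x_i|L) weights
x = np.array([1, 0, 1, 0])              # a spam message
x_adv = attack_spam(x, w, n_max=2)
print(x_adv @ w, "vs", x @ w)           # spam score drops after the attack
```

Sweeping `n_max` (the slide's nMAX) and measuring AUC10% on the attacked test set gives a security curve like the one reported for TREC 2007.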
  • 20. Conclusions and future work
    - We presented a general generative approach to designing classifiers that are robust to attacks at test time
    - The reported results show that our approach allows increasing the robustness (i.e., the security) of classifiers
    - Future work: test the Uniform LLR against more realistic spoofing attacks
    - Preliminary result: the worst-case assumption is too pessimistic! [Biggio, Akhtar, Fumera, Marcialis, Roli, "Robustness of multimodal biometric systems under realistic spoof attacks", IJCB 2011]