This document summarizes a presentation on designing robust classifiers for adversarial environments given at the 2011 IEEE International Conference on Systems, Man, and Cybernetics. The presentation introduces an approach to model potential attacks at test time using a probabilistic model of the data distribution under attack. This model is then used to design classifiers that are more robust to attacks. Experimental results on biometric identity verification and spam filtering show that the proposed approach can increase classifier security against attacks while maintaining accuracy.
Design of robust classifiers for adversarial environments - Systems, Man, and Cybernetics (SMC), 2011 IEEE International Conference on
1. 2011 IEEE Int’l Conf. on Systems, Man, and Cybernetics (SMC2011)
Special Session on Machine Learning, 9-12/10/2011, Anchorage, Alaska
Design of robust classifiers for
adversarial environments
Battista Biggio, Giorgio Fumera, Fabio Roli
PRAgroup
Pattern Recognition and Applications Group
Department of Electrical and Electronic Engineering (DIEE)
University of Cagliari, Italy
2. Outline
• Adversarial classification
– Pattern classifiers under attack
• Our approach
– Modelling attacks to improve classifier security
• Application examples
– Biometric identity verification
– Spam filtering
• Conclusions and future work
3. Adversarial classification
• Pattern recognition in security applications
– spam filtering, intrusion detection, biometrics
• Malicious adversaries aim to evade the system
[Figure: two-dimensional feature space (x1, x2) with decision boundary f(x) separating legitimate from malicious samples; the spam "Buy viagra!" is rewritten as "Buy vi4gr@!" to cross into the legitimate region]
4. Open issues
1. Vulnerability identification
2. Security evaluation of pattern classifiers
3. Design of secure pattern classifiers
5. Our approach
• Rationale
– to improve classifier security (robustness) by modelling the data distribution under attack
• Modelling potential attacks at testing time
– Probabilistic model of data distribution under attack
• Exploiting the data model for designing more robust classifiers
6. Modelling attacks at test time
Two-class problem
[Diagram: graphical model with node Y (class label) pointing to node X (feature vector), both subject to an external attack]
X is the feature vector
Y is the class label: legitimate (L) or malicious (M)
P(X, Y) = P(Y) P(X | Y)
In adversarial scenarios, attacks can influence X and Y
7. Manipulation attacks against anti-spam filters
• Text classifiers in spam filtering
– binary features (presence / absence of keywords)
• Common attacks
– bad word obfuscation (BWO) and good word insertion (GWI)
Example: "Buy viagra!" becomes "Buy vi4gr4!", padded with legitimate-looking text ("Did you ever play that game when you were a kid where the little plastic hippo tries to gobble up all your marbles?"); the feature vector changes from x = [0 0 1 0 0 0 0 0 …] to x' = [0 0 0 0 1 0 0 1 …], i.e. x' = A(x) (see the sketch below)
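As an aside, the manipulation x' = A(x) on binary features can be sketched in a few lines of Python (not part of the original slides; the feature indices simply mirror the vectors shown above, and the mapping of words to indices is hypothetical):

    def manipulate(x, bad_word_idx, good_word_idx):
        """A(x): bad word obfuscation (1 -> 0) and good word insertion (0 -> 1)."""
        x_adv = list(x)
        for i in bad_word_idx:
            x_adv[i] = 0   # BWO: e.g. "viagra" rewritten as "vi4gr4", so the feature no longer fires
        for i in good_word_idx:
            x_adv[i] = 1   # GWI: legitimate-looking words added to the message
        return x_adv

    x = [0, 0, 1, 0, 0, 0, 0, 0]
    print(manipulate(x, bad_word_idx=[2], good_word_idx=[4, 7]))   # -> [0, 0, 0, 0, 1, 0, 0, 1]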
8. Modelling attacks at test time
[Diagram: graphical model with node Y pointing to node X, both subject to an external attack]
P(X, Y) = P(Y) P(X | Y)
In adversarial scenarios, attacks can influence X and Y
We must model this influence to design robust classifiers
9. Modelling attacks at test time
[Diagram: graphical model with node A pointing to node Y, and both A and Y pointing to node X]
P(X, Y, A) = P(A) P(Y | A) P(X | Y, A)
• A is a random variable which indicates whether the sample is an attack (True) or not (False)
• Y is the class label: legitimate (L), malicious (M)
• X is the feature vector
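Read generatively, the factorization says: draw the attack indicator A first, then the class label Y given A, then the feature vector X given (Y, A). The following minimal Python sketch (not from the slides) samples from such a model; the Gaussian class-conditionals, the bounded uniform attack component and all parameter values are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample(n, p_attack=0.3, p_malicious=0.5, x_range=(0.0, 1.0)):
        """Draw n samples from P(X, Y, A) = P(A) P(Y | A) P(X | Y, A)."""
        out = []
        for _ in range(n):
            a = rng.random() < p_attack               # A ~ Bernoulli(P(A=T))
            if a:
                y = "M"                               # P(Y=M | A=T) = 1: attacks are malicious
                x = rng.uniform(*x_range)             # agnostic attack model: uniform over X
            else:
                y = "M" if rng.random() < p_malicious else "L"
                mean = 0.7 if y == "M" else 0.3       # illustrative class-conditionals
                x = rng.normal(mean, 0.1)
            out.append((x, y, a))
        return out

    print(sample(5))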
10. Modelling attacks at test time
Training time: class-conditional distributions Ptr(X, Y=L) and Ptr(X, Y=M) over the feature x
Testing time: Pts(X, Y=L) and
Pts(X, Y=M) = Pts(X, Y=M | A=T) P(A=T) + Pts(X, Y=M, A=F)
Attacks which were not present at the training phase!
11. Modelling attacks at testing time
• Attack distribution
– P(X,Y=M, A=T) = P(X|Y=M,A=T)P(Y=M|A=T)P(A=T)
• Choice of P(Y=M|A=T)
– We set it to 1, since we assume the adversary only has control over malicious samples
• P(A=T) is thus the percentage of attacks among malicious samples
– It is a parameter which tunes the security/accuracy trade-off
– The more attacks are simulated during the training phase, the more robust (but less accurate in the absence of attacks) the classifier is expected to be at testing time (see the sketch below)
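To make the role of P(A=T) concrete, the snippet below (an illustration, not the authors' code) builds the testing-time class-conditional of the malicious class as the mixture above; the Gaussian densities standing in for the training-time malicious distribution and for the attack distribution are placeholder choices:

    from scipy.stats import norm

    def p_malicious_test(x, p_attack, p_attack_density, p_train_malicious):
        """p(x | Y=M) at test time = P(A=T) p(x | M, A=T) + (1 - P(A=T)) p(x | M, A=F)."""
        return p_attack * p_attack_density(x) + (1.0 - p_attack) * p_train_malicious(x)

    p_train_malicious = norm(loc=0.3, scale=0.1).pdf    # malicious samples seen at training time
    p_attack_density  = norm(loc=0.8, scale=0.05).pdf   # hypothetical attack concentrated elsewhere

    # Larger P(A=T) weights the attack component more: more robust, less accurate without attacks.
    for p_a in (0.0, 0.1, 0.5):
        print(p_a, p_malicious_test(0.75, p_a, p_attack_density, p_train_malicious))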
12. Modelling attacks at testing time
Key issue: modelling Pts(X, Y=M | A=T)
[Figure: testing-time distributions Pts(X, Y=L) and Pts(X, Y=M, A=F); the attack component Pts(X, Y=M | A=T) still has to be modelled]
13. Modelling attacks at testing time
• Choice of Pts(X, Y=M | A=T)
– Requires application-specific knowledge
– Even if knowledge about the attack is available, it is still difficult to model analytically
– An agnostic choice is the uniform distribution (see the sketch below)
[Figure: testing-time distributions, with the malicious class given by Pts(X, Y=M) = Pts(X, Y=M | A=T) P(A=T) + Pts(X, Y=M, A=F)]
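Assuming features (or matching scores) normalized to [0, 1], the agnostic uniform choice plugs into the mixture as follows (a sketch under that normalization assumption, continuing the previous snippet):

    def uniform_attack_density(x, lo=0.0, hi=1.0):
        """Agnostic attack model: Pts(x | Y=M, A=T) uniform over the feature domain."""
        return 1.0 / (hi - lo) if lo <= x <= hi else 0.0

    def p_malicious_test_uniform(x, p_attack, p_train_malicious):
        return p_attack * uniform_attack_density(x) + (1.0 - p_attack) * p_train_malicious(x)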
14. Experiments
Spoofing attacks against biometric systems
• Multi-modal biometric verification systems
– Spoofing attacks
[Diagram: the claimed identity, face and fingerprint are processed by a face matcher and a fingerprint matcher, producing scores s1 and s2, which a fusion module combines into a genuine/impostor decision; spoofing attacks include photo attacks against the face matcher and fake fingerprints against the fingerprint matcher]
15. Experiments
Multi-modal biometric identity verification
[Diagram: sensors feed the face and fingerprint matchers, producing scores s1 and s2; the score fusion rule s = f(s1, s2) is compared with a decision threshold to output genuine (true) or impostor (false)]
• Data set
– NIST Biometric Score Set 1 (publicly available)
• Fusion rules
– Likelihood ratio (LLR): s = [ p(s1 | G) p(s2 | G) ] / [ p(s1 | I) p(s2 | I) ]
– Extended LLR [Rodrigues et al., Robustness of multimodal biometric fusion methods against spoof attacks, JVLC 2009]
– Our approach (Uniform LLR)
16. Remarks on experiments
• The Extended LLR [Rodrigues et al., 2009] used for comparison assumes that the attack distribution is equal to the distribution of legitimate patterns:
Pts(X, Y=M | A=T) = Pts(X, Y=L)
[Figure: testing-time distributions Pts(X, Y=L) and Pts(X, Y=M, A=F)]
• Our rule, the Uniform LLR, assumes a uniform attack distribution (a code sketch is given below)
• Experiments are done assuming that attack patterns are exact replicas of legitimate patterns (worst case)
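A minimal sketch of the idea behind the Uniform LLR, under the assumption that both scores are normalized in [0, 1] (so the uniform density over [0, 1]^2 equals 1). It applies the mixture to the joint impostor density; the paper's actual formulation may parameterize the attack component differently (e.g. only over the spoofed modality):

    def llr(s1, s2, p_joint_genuine, p_joint_impostor):
        """Standard likelihood-ratio fusion: s = p(s1, s2 | G) / p(s1, s2 | I)."""
        return p_joint_genuine(s1, s2) / p_joint_impostor(s1, s2)

    def uniform_llr(s1, s2, p_joint_genuine, p_joint_impostor, p_attack):
        """Uniform LLR sketch: at test time the impostor density becomes
        P(A=T) * uniform + (1 - P(A=T)) * p_joint_impostor."""
        mixed_impostor = p_attack * 1.0 + (1.0 - p_attack) * p_joint_impostor(s1, s2)
        return p_joint_genuine(s1, s2) / mixed_impostor

With p_attack = 0 the rule reduces to the standard LLR; increasing p_attack trades some accuracy in the absence of attacks for robustness under spoofing.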
17. Experiments
Multi-modal biometric identity verification
• Uniform LLR under fingerprint spoofing attacks
– Security (FAR) vs. accuracy (GAR) for different P(A=T) values
– No attack (solid) / under attack (dashed)
18. Experiments
Multi-modal biometric identity verification
• Uniform vs. Extended LLR under fingerprint spoofing
19. Experiments
Spam filtering
• Similar results obtained in spam filtering
– TREC 2007 public data set
– Naive Bayes text classifier
– GWI/BWO attacks with nMAX modified words per spam
[Plot: true positive rate vs. false positive rate over FP ∈ [0, 0.1]; performance measured as AUC10%]
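The GWI/BWO attack with nMAX modified words per spam can be sketched as a worst-case manipulation of binary features against a (linearized) naive Bayes score: flip the up to nMAX features whose change most decreases the spam score. This is an illustrative implementation under these assumptions, not the exact setup of the reported experiments:

    import numpy as np

    def worst_case_attack(x, w, n_max):
        """Flip up to n_max binary features to minimize a linear spam score w . x + b.

        w[i] is the (linearized) contribution of word i when present; for a Bernoulli
        naive Bayes text classifier this is the log-odds of word i under spam vs.
        legitimate mail (illustrative assumption)."""
        x = np.asarray(x, dtype=int)
        delta = np.where(x == 1, -w, w)      # score change obtained by flipping feature i
        order = np.argsort(delta)            # most score-decreasing flips first
        x_adv = x.copy()
        for i in order[:n_max]:
            if delta[i] >= 0:                # no further useful BWO/GWI flips
                break
            x_adv[i] = 1 - x_adv[i]
        return x_adv

    w = np.array([2.0, -1.5, 3.0, 0.5, -2.0])
    x = np.array([1, 0, 1, 0, 0])
    print(worst_case_attack(x, w, n_max=2))  # flips the two features that most reduce the score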
20. Conclusions and future work
• We presented a general generative approach for designing robust classifiers against attacks at test time
• Reported results show that our approach can increase the robustness (i.e., the security) of classifiers
• Future work
– To test the Uniform LLR against more realistic spoof attacks
• Preliminary result: the worst-case assumption is too pessimistic!
Biggio, Akhtar, Fumera, Marcialis, Roli, "Robustness of multimodal biometric systems under realistic spoof attacks", IJCB 2011