This document evaluates the robustness of multimodal biometric systems against realistic spoof attacks on all traits. It finds that while multimodal systems are more robust than unimodal ones under attack, their performance still degrades significantly, showing they can be defeated by spoofing all traits. The study also finds that the commonly assumed "worst-case scenario" is a poor approximation of realistic attacks, and that a new method is needed to evaluate system robustness under attack without constructing spoofed data sets.
The increasing use of distributed authentication architectures has highlighted interoperability issues in biometric systems. This presentation highlights the ongoing efforts at understanding fingerprint sensor interoperability. BSPA Labs has conducted experiments over the past few years aimed at addressing the challenges related to sensor interoperability. This presentation was given at a research seminar and covered the following: the importance of fingerprint sensor interoperability, sources of sensor interoperability issues, an analysis framework for evaluating sensor interoperability, and a discussion of experimental results and their practical applicability.
Controlled experimentation (A/B testing) allows companies to systematically study the effects of potential product changes or treatments by randomly assigning users to a control group or treatment group. The document discusses how controlled experiments can validate hypotheses with data, determine if a treatment has a causal effect, and provide examples of how A/B testing can be used for website variants, call-to-actions, and personalized recommendations. It also outlines best practices for running controlled experiments such as ensuring identical distributions between control and treatment groups and carefully monitoring each variant.
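To make the A/B testing idea above concrete, here is a minimal sketch of the standard two-proportion z-test used to decide whether a treatment's conversion rate differs from the control's. The function name and the sample counts are illustrative, not taken from the document.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: control (a) vs treatment (b) conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se                            # z statistic

# |z| > 1.96 -> difference is significant at the 5% level (two-sided)
z = ab_test_z(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
```

Randomly assigning users to the two groups, as the summary notes, is what lets this test be read causally: with identical distributions in control and treatment, a significant z can be attributed to the treatment itself.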
Dhananjay Prajapati presented on bar codes. The document discussed the history of bar codes from their invention in 1949 to widespread adoption in the 1970s. It also covered different types of bar codes like linear and 2D codes, as well as bar code scanning technologies and applications in inventory control, shipping, retail, and healthcare. Benefits of bar codes included accuracy, labor savings, and real-time data collection.
This document summarizes a seminar presentation on multimodal biometric systems. It discusses the limitations of unimodal biometric systems and how combining multiple biometric traits can improve accuracy. It covers classification of multimodal systems based on architecture, sources, fusion level, and methodology. Score normalization and different fusion techniques at the sensor, feature, matching score, and decision levels are also summarized. The conclusion states that multimodal biometrics provides higher security than unimodal systems through appropriate normalization and fusion methods.
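The score normalization and matching-score-level fusion mentioned in the summary can be sketched in a few lines. Min-max normalization and a weighted sum are the textbook choices; the weights, score ranges, and threshold below are illustrative assumptions, not values from the seminar.

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] using its known range [lo, hi]."""
    return (score - lo) / (hi - lo)

def weighted_sum_fusion(scores, weights):
    """Matching-score-level fusion: weighted sum of normalized scores."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * s for w, s in zip(weights, scores))

# hypothetical raw scores from a fingerprint matcher and a face matcher
s_finger = min_max_normalize(72, lo=0, hi=100)
s_face   = min_max_normalize(0.61, lo=0.0, hi=1.0)
fused    = weighted_sum_fusion([s_finger, s_face], [0.6, 0.4])
accept   = fused >= 0.5   # decision against a global threshold
```

Normalizing first matters because the matchers output scores on incompatible scales; fusing raw scores would let one modality silently dominate.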
Paper: Multi-modal biometric system using fingerprint, face and speech (Aalaa Khattab)
A unimodal biometric system is often unable to meet the desired performance requirements.
To enable a biometric system to operate effectively across different applications and environments, a multimodal biometric system is preferred.
This paper introduces a multimodal biometric system that integrates fingerprint verification, face recognition, and speaker verification.
This document proposes a multimodal biometric security system that combines fingerprint, speech, and face recognition for authentication. It discusses different biometric techniques including fingerprint, face, and speech recognition and describes the modules involved in a multimodal system, such as the sensor, feature extraction, matching, and decision making modules. Different levels of data fusion are also covered, including sensor, feature, matching score, and decision level fusion. The document concludes that a multimodal system can improve performance over unimodal systems by reducing false acceptance and rejection rates, while increasing security.
Multimodal biometric systems are those that utilize more than one physical or behavioural characteristic for enrolment, verification, or identification.
Fingerprint recognition involves comparing fingerprints to determine if they match. It operates by acquiring fingerprints, extracting minutiae features like ridge endings and bifurcations, and matching minutiae between fingerprints. It has high accuracy but can be affected by dirt or wounds. Applications include banking security, access control, and criminal identification. The presented algorithm accurately and quickly extracts minutiae and identifies corrupted regions for removal.
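The minutiae-matching step described above can be sketched as a greedy pairing of (x, y, angle) triples. A real matcher first aligns the two prints and handles the corrupted-region removal the summary mentions; this sketch assumes pre-aligned minutiae, and the tolerance values are illustrative.

```python
import math

def match_minutiae(set_a, set_b, dist_tol=15.0, angle_tol=0.26):
    """Greedily pair minutiae (x, y, theta) within distance/angle tolerances;
    return the fraction of minutiae paired as a similarity score."""
    paired, used = 0, set()
    for (xa, ya, ta) in set_a:
        for j, (xb, yb, tb) in enumerate(set_b):
            if j in used:
                continue
            if math.hypot(xa - xb, ya - yb) <= dist_tol and abs(ta - tb) <= angle_tol:
                paired += 1
                used.add(j)
                break
    return paired / max(len(set_a), len(set_b))

# one ridge ending matches closely; the second minutia is too far away
score = match_minutiae([(10, 10, 0.1), (50, 50, 1.0)],
                       [(12, 11, 0.15), (80, 80, 2.0)])
```

The score (fraction of paired minutiae) is then thresholded to decide whether the two fingerprints match.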
Biometrics: "Automated measurement of physiological and/or behavioral characteristics to determine or authenticate identity." "Automated measurement" implies no human involvement, and the comparison takes place in real time. By this definition, DNA is not a biometric.
This document discusses using software traceability to analyze the impact of software maintenance. It involves two steps: 1) reconstructing traceability links between requirements, test cases, bug reports and source code units; and 2) analyzing how traceability evolves over multiple versions. Text mining, static and dynamic analysis can be used to establish initial traceability links. The evolution of traceability and source code changes can then be analyzed to understand the impact of changing requirements and reported bugs on the source code.
This document provides an outline and overview of biometrics and biometric systems. It begins with definitions of biometrics and describes the main components of a biometric system, including the sensor, feature extraction, matching, and database modules. It then covers various biometric techniques including fingerprint, iris, retina, face, voice, signature, and hand scans. It discusses identification vs verification modes and types of errors in biometric systems. Application areas are identified as well as limitations of unimodal systems. Finally, it introduces multimodal biometric systems and different levels of fusion.
This document provides an outline for a paper on biometric systems. It begins with an introduction that defines biometrics and discusses identification vs verification systems. It then covers the basic structure of a biometric system including the sensor, feature extraction, matching, and database modules. It discusses various biometric techniques including fingerprint, iris, retina, face, voice, signature, and keystroke recognition. It also covers biometric system errors, vulnerabilities, applications, and the limitations of unimodal systems. Finally, it discusses multimodal biometric systems including different fusion levels and examples of multimodal combinations.
Lecture 10 from a course on Mobile Based Augmented Reality Development taught by Mark Billinghurst and Zi Siang See on November 29th and 30th 2015 at Johor Bahru in Malaysia. This lecture provides an overview of research directions in Mobile AR. Look for the other 9 lectures in the course.
Robustness of multimodal biometric verification systems under realistic spoof... (Pluribus One)
The document presents research on evaluating the robustness of multi-modal biometric verification systems against spoofing attacks. It discusses experiments conducted using fake fingerprints and faces to spoof a system using fingerprint and face matchers. The results show that the common assumption that fake scores follow a worst-case distribution may not always hold, and score fusion rules designed under this assumption could paradoxically reduce a system's robustness against realistic spoofing attacks. More accurate modeling of fake score distributions is needed.
MAGIC: A Motion Gesture Design Tool
Daniel Ashbrook, Georgia Tech and Nokia Research Center Hollywood
Thad Starner, Georgia Tech
http://research.nokia.com/files/2010-Ashbrook-CHI10-MAGIC.pdf
Presented at the 28th Annual ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)
Abstract:
Devices capable of gestural interaction through motion sensing are increasingly becoming available to consumers; however, motion gesture control has yet to appear outside of game consoles. Interaction designers are frequently not expert in pattern recognition, which may be one reason for this lack of availability. Another issue is how to effectively test gestures to ensure that they are not unintentionally activated by a user’s normal movements during everyday usage. We present MAGIC, a gesture design tool that addresses both of these issues, and detail the results of an evaluation.
Symantec Endpoint Protection and Symantec Endpoint Protection Small Business Edition will provide businesses of all sizes with advanced new protection while improving system performance. Complete with advanced features to secure virtual infrastructures and powered by Insight, Symantec’s award-winning community-based reputation technology, Symantec Endpoint Protection 12 will detect sophisticated new threats earlier and more accurately than any other security product. Symantec Endpoint Protection offers comprehensive defense against all types of attacks for both physical and virtual systems. It seamlessly integrates 9 essential security technologies in a single, high performance agent with a single management console.
Register for the public beta program here: http://tinyurl.com/6xslnfn
A Bayesian Approach for Modeling Sensor Influence on Quality, Liveness and Ma... (AjitaRattani)
This document presents a Bayesian approach for modeling the influence of sensors on match scores, quality values, and liveness measures in fingerprint verification. The approach develops a graphical model that accounts for the impact of the sensor on these three variables. The model is evaluated on fingerprint data from two different sensors in the LivDet 2011 database. Experimental results show that existing fusion approaches do not perform well in a multi-sensor environment, while the proposed graphical model effectively operates across different sensors. The graphical model improves fingerprint spoof detection by explicitly modeling the relationship between the sensor and match scores, quality, and liveness values.
Wild Patterns: A Half-day Tutorial on Adversarial Machine Learning - 2019 Int... (Pluribus One)
Slides of the tutorial held by Battista Biggio, University of Cagliari and Pluribus One Srl, during "2019 International Summer School on Machine Learning and Security (MLS)"
WILD PATTERNS - Introduction to Adversarial Machine Learning - ITASEC 2019 (Pluribus One)
1) Adversarial machine learning studies machine learning systems that operate in adversarial settings such as spam filtering, where the data source is non-neutral and can deliberately attempt to reduce classifier performance.
2) Deep learning models were found to be susceptible to adversarial examples, which are imperceptibly perturbed inputs that cause models to make incorrect predictions.
3) Studies have shown that adversarial examples generated in a digital environment can still fool models when inputs are acquired through a physical system like a camera, indicating these attacks pose a real-world threat.
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub... (Pluribus One)
This document discusses research into generating adversarial examples to attack the vision system of the iCub humanoid robot. The researchers were able to craft perturbed images that were misclassified by the robot despite being visually indistinguishable from the originals. They developed gradient-based optimization attacks to target specific misclassifications or induce any misclassification. Potential countermeasures include rejecting inputs that fall in the "blind spots" far from the training data. However, deep learning features are unstable, with small pixel changes mapping to large changes in the deep space. Future work aims to address this instability issue.
Secure Kernel Machines against Evasion Attacks (Pluribus One)
This document summarizes research on developing more secure machine learning classifiers. It discusses how gradient-based and surrogate model approaches can be used to evade existing classifiers. The researchers then propose several techniques for building more robust classifiers, including using infinity-norm regularization, cost-sensitive learning, and modifying kernel parameters. Experiments on handwritten digit and spam filtering datasets show the proposed approaches improve security against evasion attacks compared to standard support vector machines.
Machine Learning under Attack: Vulnerability Exploitation and Security Measures (Pluribus One)
This document summarizes research on machine learning security and adversarial attacks. It describes how machine learning systems are increasingly being used for consumer applications, but this opens them up to new security risks from skilled attackers. The document outlines different types of adversarial attacks against machine learning, including evasion attacks that aim to evade detection and poisoning attacks that aim to compromise a system's availability. It also discusses approaches for systematically evaluating the security of pattern classification systems against bounded adversaries.
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D... (Pluribus One)
This document discusses the security of feature selection algorithms against training data poisoning attacks. It presents a framework to evaluate this, including models of the attacker's goal, knowledge, and capabilities. Experiments show that LASSO feature selection is vulnerable to poisoning attacks, which can significantly affect the selected features. The research aims to better understand these risks and develop more secure feature selection methods.
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ... (Pluribus One)
Pattern classifiers have been widely used in adversarial settings like spam and malware detection, although they were not originally designed to cope with intelligent attackers who manipulate data at test time to evade detection.
While a number of adversary-aware learning algorithms have been proposed, they are computationally demanding and aim to counter specific kinds of adversarial data manipulation.
In this work, we overcome these limitations by proposing a multiple classifier system capable of improving security against evasion attacks at test time by learning a decision function that more tightly encloses the legitimate samples in feature space, without significantly compromising accuracy in the absence of attack. Since we combine a set of one-class and two-class classifiers to this end, we name our approach one-and-a-half-class (1.5C) classification. Our proposal is general and it can be used to improve the security of any classifier against evasion attacks at test time, as shown by the reported experiments on spam and malware detection.
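The 1.5-class idea described above can be sketched as a conjunction of a two-class decision and a one-class envelope around the legitimate samples: a sample is accepted only if both agree. The scoring functions and thresholds below are toy illustrations, not the classifiers from the paper.

```python
def make_1p5_class(two_class_score, one_class_score, t2=0.0, t1=0.0):
    """1.5-class-style combiner (sketch): accept a sample as legitimate only
    if the two-class classifier labels it legitimate AND the one-class model
    of the legitimate distribution does not flag it as an outlier.
    Scores above the threshold mean 'legitimate' for both components."""
    def predict(x):
        return two_class_score(x) > t2 and one_class_score(x) > t1
    return predict

# toy 1-D example: legitimate samples lie near 0
two_class = lambda x: 1.0 - abs(x)   # loose two-class boundary
one_class = lambda x: 0.5 - abs(x)   # tighter one-class envelope
clf = make_1p5_class(two_class, one_class)
clf(0.2)   # both components accept -> legitimate
clf(0.9)   # two-class accepts, one-class rejects -> rejected
```

This captures the security benefit the abstract claims: an evasive sample that crosses the two-class boundary can still be rejected by the tighter envelope enclosing the legitimate region.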
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph... (Pluribus One)
Many modern face verification algorithms use a small set of reference templates to save memory and computational resources. However, both the reference templates and the combination of the corresponding matching scores are heuristically chosen. In this paper, we propose a well-principled approach, named sparse support faces, that can outperform state-of-the-art methods both in terms of recognition accuracy and number of required face templates, by jointly learning an optimal combination of matching scores and the corresponding subset of face templates. For each client, our method learns a support vector machine using the given matching algorithm as the kernel function, and determines a set of reference templates, that we call support faces, corresponding to its support vectors. It then drastically reduces the number of templates, without affecting recognition accuracy, by learning a set of virtual faces as well-principled transformations of the initial support faces. The use of a very small set of support face templates makes the decisions of our approach also easily interpretable for designers and end users of the face verification system.
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o... (Pluribus One)
Learning and recognition of secure patterns is a well-known problem in nature. Mimicry and camouflage are widespread techniques in the arms race between predators and prey. All of the information acquired by our senses is therefore not necessarily secure or reliable. In machine learning and pattern recognition systems, we have started investigating these issues only recently, with the goal of learning to discriminate between secure and hostile patterns. This phenomenon has been especially observed in the context of adversarial settings like biometric recognition, malware detection and spam filtering, in which data can be adversely manipulated by humans to undermine the outcomes of an automatic analysis. As current pattern recognition methods are not natively designed to deal with the intrinsic, adversarial nature of these problems, they exhibit specific vulnerabilities that an adversary may exploit either to mislead learning or to avoid detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on pattern classifiers is one of the main open issues in the novel research field of adversarial machine learning.
In the first part of this talk, I introduce a general framework that encompasses and unifies previous work in the field, allowing one to systematically evaluate classifier security against different, potential attacks. As an example of application of this framework, in the second part of the talk, I discuss evasion attacks, where malicious samples are manipulated at test time to avoid detection. I then show how carefully-designed poisoning attacks can mislead learning of support vector machines by manipulating a small fraction of their training data, and how to poison adaptive biometric verification systems to compromise the biometric templates (face images) of the enrolled clients. Finally, I briefly discuss our ongoing work on attacks against clustering algorithms, and sketch some possible future research directions.
Automated measurement of Physiological and/or behavioral characteristics to determine or authenticate identity”.“Automated measurement”.No human involvement.Comparison takes place in Real-Time.DNA is not a Biometric
This document discusses using software traceability to analyze the impact of software maintenance. It involves two steps: 1) reconstructing traceability links between requirements, test cases, bug reports and source code units; and 2) analyzing how traceability evolves over multiple versions. Text mining, static and dynamic analysis can be used to establish initial traceability links. The evolution of traceability and source code changes can then be analyzed to understand the impact of changing requirements and reported bugs on the source code.
This document provides an outline and overview of biometrics and biometric systems. It begins with definitions of biometrics and describes the main components of a biometric system, including the sensor, feature extraction, matching, and database modules. It then covers various biometric techniques including fingerprint, iris, retina, face, voice, signature, and hand scans. It discusses identification vs verification modes and types of errors in biometric systems. Application areas are identified as well as limitations of unimodal systems. Finally, it introduces multimodal biometric systems and different levels of fusion.
This document provides an outline for a paper on biometric systems. It begins with an introduction that defines biometrics and discusses identification vs verification systems. It then covers the basic structure of a biometric system including the sensor, feature extraction, matching, and database modules. It discusses various biometric techniques including fingerprint, iris, retina, face, voice, signature, and keystroke recognition. It also covers biometric system errors, vulnerabilities, applications, and the limitations of unimodal systems. Finally, it discusses multimodal biometric systems including different fusion levels and examples of multimodal combinations.
Lecture 10 from a course on Mobile Based Augmented Reality Development taught by Mark Billinghurst and Zi Siang See on November 29th and 30th 2015 at Johor Bahru in Malaysia. This lecture provides an overview of research directions in Mobile AR. Look for the other 9 lectures in the course.
Robustness of multimodal biometric verification systems under realistic spoof...Pluribus One
The document presents research on evaluating the robustness of multi-modal biometric verification systems against spoofing attacks. It discusses experiments conducted using fake fingerprints and faces to spoof a system using fingerprint and face matchers. The results show that the common assumption that fake scores follow a worst-case distribution may not always hold, and score fusion rules designed under this assumption could paradoxically reduce a system's robustness against realistic spoofing attacks. More accurate modeling of fake score distributions is needed.
MAGIC: A Motion Gesture Design Tool
Daniel Ashbrook, Georgia Tech and Nokia Research Center Hollywood
Thad Starner, Georgia Tech
http://research.nokia.com/files/2010-Ashbrook-CHI10-MAGIC.pdf
Presented at the 28th Annual ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)
Abstract:
Devices capable of gestural interaction through motion sensing are increasingly becoming available to consumers; however, motion gesture control has yet to appear outside of game consoles. Interaction designers are frequently not expert in pattern recognition, which may be one reason for this lack of availability. Another issue is how to effectively test gestures to ensure that they are not unintentionally activated by a user’s normal movements during everyday usage. We present MAGIC, a gesture design tool that addresses both of these issues, and detail the results of an evaluation.
Symantec Endpoint Protection and Symantec Endpoint Protection Small Business Edition will provide businesses of all sizes with advanced new protection while improving system performance. Complete with advanced features to secure virtual infrastructures and powered by Insight, Symantec’s award-winning community-based reputation technology, Symantec Endpoint Protection 12 will detect sophisticated new threats earlier and more accurately than any other security product. Symantec Endpoint Protection offers comprehensive defense against all types of attacks for both physical and virtual systems. It seamlessly integrates 9 essential security technologies in a single, high performance agent with a single management console.
Register for the public beta program here: http://tinyurl.com/6xslnfn
A Bayesian Approach for Modeling SensorInfluence on Quality, Liveness and Ma...AjitaRattani
This document presents a Bayesian approach for modeling the influence of sensors on match scores, quality values, and liveness measures in fingerprint verification. The approach develops a graphical model that accounts for the impact of the sensor on these three variables. The model is evaluated on fingerprint data from two different sensors in the LivDet 2011 database. Experimental results show that existing fusion approaches do not perform well in a multi-sensor environment, while the proposed graphical model effectively operates across different sensors. The graphical model improves fingerprint spoof detection by explicitly modeling the relationship between the sensor and match scores, quality, and liveness values.
Similar to Robustness of Multimodal Biometric Systems under Realistic Spoof Attacks against All Traits (9)
Wild Patterns: A Half-day Tutorial on Adversarial Machine Learning - 2019 Int...Pluribus One
Slides of the tutorial held by Battista Biggio, University of Cagliari and Pluribus One Srl, during "2019 International Summer School on Machine Learning and Security (MLS)"
WILD PATTERNS - Introduction to Adversarial Machine Learning - ITASEC 2019Pluribus One
1) Adversarial machine learning studies machine learning systems that operate in adversarial settings such as spam filtering, where the data source is non-neutral and can deliberately attempt to reduce classifier performance.
2) Deep learning models were found to be susceptible to adversarial examples, which are imperceptibly perturbed inputs that cause models to make incorrect predictions.
3) Studies have shown that adversarial examples generated in a digital environment can still fool models when inputs are acquired through a physical system like a camera, indicating these attacks pose a real-world threat.
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub...Pluribus One
This document discusses research into generating adversarial examples to attack the vision system of the iCub humanoid robot. The researchers were able to craft perturbed images that were misclassified by the robot despite being visually indistinguishable from the originals. They developed gradient-based optimization attacks to target specific misclassifications or induce any misclassification. Potential countermeasures include rejecting inputs that fall in the "blind spots" far from the training data. However, deep learning features are unstable, with small pixel changes mapping to large changes in the deep space. Future work aims to address this instability issue.
Secure Kernel Machines against Evasion AttacksPluribus One
This document summarizes research on developing more secure machine learning classifiers. It discusses how gradient-based and surrogate model approaches can be used to evade existing classifiers. The researchers then propose several techniques for building more robust classifiers, including using infinity-norm regularization, cost-sensitive learning, and modifying kernel parameters. Experiments on handwritten digit and spam filtering datasets show the proposed approaches improve security against evasion attacks compared to standard support vector machines.
Machine Learning under Attack: Vulnerability Exploitation and Security MeasuresPluribus One
This document summarizes research on machine learning security and adversarial attacks. It describes how machine learning systems are increasingly being used for consumer applications, but this opens them up to new security risks from skilled attackers. The document outlines different types of adversarial attacks against machine learning, including evasion attacks that aim to evade detection and poisoning attacks that aim to compromise a system's availability. It also discusses approaches for systematically evaluating the security of pattern classification systems against bounded adversaries.
Battista Biggio @ ICML 2015 - "Is Feature Selection Secure against Training D...Pluribus One
This document discusses the security of feature selection algorithms against training data poisoning attacks. It presents a framework to evaluate this, including models of the attacker's goal, knowledge, and capabilities. Experiments show that LASSO feature selection is vulnerable to poisoning attacks, which can significantly affect the selected features. The research aims to better understand these risks and develop more secure feature selection methods.
Battista Biggio @ MCS 2015, June 29 - July 1, Guenzburg, Germany: "1.5-class ...Pluribus One
Pattern classifiers have been widely used in adversarial settings like spam and malware detection, although they have not been originally designed to cope with intelligent attackers that manipulate data at test time to evade detection.
While a number of adversary-aware learning algorithms have been proposed, they are computationally demanding and aim to counter specific kinds of adversarial data manipulation.
In this work, we overcome these limitations by proposing a multiple classifier system capable of improving security against evasion attacks at test time by learning a decision function that more tightly encloses the legitimate samples in feature space, without significantly compromising accuracy in the absence of attack. Since we combine a set of one-class and two-class classifiers to this end, we name our approach one-and-a-half-class (1.5C) classification. Our proposal is general and it can be used to improve the security of any classifier against evasion attacks at test time, as shown by the reported experiments on spam and malware detection.
Sparse Support Faces - Battista Biggio - Int'l Conf. Biometrics, ICB 2015, Ph... (Pluribus One)
Many modern face verification algorithms use a small set of reference templates to save memory and computational resources. However, both the reference templates and the combination of the corresponding matching scores are heuristically chosen. In this paper, we propose a well-principled approach, named sparse support faces, that can outperform state-of-the-art methods both in terms of recognition accuracy and number of required face templates, by jointly learning an optimal combination of matching scores and the corresponding subset of face templates. For each client, our method learns a support vector machine using the given matching algorithm as the kernel function, and determines a set of reference templates, that we call support faces, corresponding to its support vectors. It then drastically reduces the number of templates, without affecting recognition accuracy, by learning a set of virtual faces as well-principled transformations of the initial support faces. The use of a very small set of support face templates makes the decisions of our approach also easily interpretable for designers and end users of the face verification system.
Battista Biggio, Invited Keynote @ AISec 2014 - On Learning and Recognition o... (Pluribus One)
Learning and recognition of secure patterns is a well-known problem in nature. Mimicry and camouflage are widely-spread techniques in the arms race between predators and prey. All of the information acquired by our senses is therefore not necessarily secure or reliable. In machine learning and pattern recognition systems, we have started investigating these issues only recently, with the goal of learning to discriminate between secure and hostile patterns. This phenomenon has been especially observed in the context of adversarial settings like biometric recognition, malware detection and spam filtering, in which data can be adversely manipulated by humans to undermine the outcomes of an automatic analysis. As current pattern recognition methods are not natively designed to deal with the intrinsic, adversarial nature of these problems, they exhibit specific vulnerabilities that an adversary may exploit either to mislead learning or to avoid detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on pattern classifiers is one of the main open issues in the novel research field of adversarial machine learning.
In the first part of this talk, I introduce a general framework that encompasses and unifies previous work in the field, allowing one to systematically evaluate classifier security against different, potential attacks. As an example of application of this framework, in the second part of the talk, I discuss evasion attacks, where malicious samples are manipulated at test time to avoid detection. I then show how carefully-designed poisoning attacks can mislead learning of support vector machines by manipulating a small fraction of their training data, and how to poison adaptive biometric verification systems to compromise the biometric templates (face images) of the enrolled clients. Finally, I briefly discuss our ongoing work on attacks against clustering algorithms, and sketch some possible future research directions.
Clustering algorithms have become a popular tool in computer security to analyze the behavior of malware variants, identify novel malware families, and generate signatures for antivirus systems.
However, the suitability of clustering algorithms for security-sensitive settings has been recently questioned by showing that they can be significantly compromised if an attacker can exercise some control over the input data.
In this paper, we revisit this problem by focusing on behavioral malware clustering approaches, and investigate whether and to what extent an attacker may be able to subvert these approaches through a careful injection of samples with poisoning behavior.
To this end, we present a case study on Malheur, an open-source tool for behavioral malware clustering. Our experiments not only demonstrate that this tool is vulnerable to poisoning attacks, but also that it can be significantly compromised even if the attacker can only inject a very small percentage of attacks into the input data. As a remedy, we discuss possible countermeasures and highlight the need for more secure clustering algorithms.
Battista Biggio @ S+SSPR2014, Joensuu, Finland -- Poisoning Complete-Linkage ... (Pluribus One)
The document discusses poisoning attacks against complete-linkage hierarchical clustering. It introduces hierarchical clustering and describes how attackers can add poisoned samples to compromise the clustering output. The paper evaluates different attack strategies on real and artificial datasets, finding that even random attacks can be effective at poisoning the clusters, while extensions of greedy approaches generally perform best. Future work to develop defenses for clustering algorithms against adversarial inputs is discussed.
Battista Biggio @ AISec 2013 - Is Data Clustering in Adversarial Settings Sec... (Pluribus One)
Clustering algorithms have been increasingly adopted in security applications to spot dangerous or illicit activities.
However, they have not been originally devised to deal with deliberate attack attempts that may aim to subvert the clustering process itself. Whether clustering can be safely adopted in such settings remains thus questionable.
In this work we propose a general framework that allows one to identify potential attacks against clustering algorithms, and to evaluate their impact, by making specific assumptions on the adversary's goal, knowledge of the attacked system, and capabilities of manipulating the input data. We show that an attacker may significantly poison the whole clustering process by adding a relatively small percentage of attack samples to the input data, and that some attack samples may be obfuscated to be hidden within some existing clusters.
We present a case study on single-linkage hierarchical clustering, and report experiments on clustering of malware samples and handwritten digits.
Battista Biggio @ ECML PKDD 2013 - Evasion attacks against machine learning a... (Pluribus One)
This document summarizes research on evasion attacks against machine learning systems at test time. The researchers propose a framework for evaluating the security of machine learning algorithms against evasion attacks. They model the adversary's goal, knowledge, capabilities, and attack strategy as an optimization problem. Using this framework, they evaluate gradient-descent evasion attacks against systems like spam filters and malware detectors. They show that machine learning classifiers can be vulnerable, even when the adversary has limited knowledge. The researchers explore techniques like bounding the adversary and adding a "mimicry" component to attacks to improve evasion effectiveness.
Battista Biggio @ ICML2012: "Poisoning attacks against support vector machines" (Pluribus One)
This document discusses poisoning attacks against support vector machines. The goal of poisoning attacks is to mislead machine learning systems by injecting malicious data points into the training set. The paper proposes an approach to maximize classification error on a validation set by calculating the gradient of the hinge loss with respect to the poisoned point. Experiments on MNIST data show that a single poisoned point can significantly increase error rates. The authors note that real attacks may be less effective and discuss how to improve SVM robustness to poisoning attacks.
This PhD thesis by Zahid Akhtar examines the security of multimodal biometric systems against spoof attacks. It aims to evaluate the robustness of these systems to real spoof attacks, validate assumptions about the "worst-case" spoofing scenario, and develop methods to assess security without fabricating fake traits. Experiments are conducted on systems using face and fingerprint biometrics under various spoof attacks, and results show multimodal systems can be compromised by attacking a single trait, while the worst-case scenario does not always reflect real attacks.
Design of robust classifiers for adversarial environments - Systems, Man, and... (Pluribus One)
This document summarizes a presentation on designing robust classifiers for adversarial environments given at the 2011 IEEE International Conference on Systems, Man, and Cybernetics. The presentation introduces an approach to model potential attacks at test time using a probabilistic model of the data distribution under attack. This model is then used to design classifiers that are more robust to attacks. Experimental results on biometric identity verification and spam filtering show that the proposed approach can increase classifier security against attacks while maintaining accuracy.
Robustness of Multimodal Biometric Systems under Realistic Spoof Attacks against All Traits
1. Robustness of Multimodal Biometric Systems under Realistic Spoof Attacks against All Traits
Zahid Akhtar, Battista Biggio, Giorgio Fumera, Gian Luca Marcialis
Pattern Recognition and Applications Group (PRAG)
Department of Electrical and Electronic Engineering
University of Cagliari, Italy
3. Biometric systems
• Unimodal biometric system
  [diagram: Sensor → Feature Extractor → Matcher (against templates in a Database) → Decision; score ≥ threshold → Genuine, score < threshold → Impostor]
• Multimodal biometric system
  [diagram: fingerprint and face channels, each with its own Sensor, Feature Extractor, and Matcher, produce score_fingerprint and score_face; a score fusion rule f(score_fingerprint, score_face) feeds the decision: fused score ≥ threshold → Genuine, fused score < threshold → Impostor]
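The threshold decision and score-fusion step described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the `fuse_and_decide` function, its `rule` parameter, and the example scores and threshold are assumptions for illustration (the deck evaluates Sum and LLR fusion rules).

```python
def fuse_and_decide(score_fp, score_face, threshold, rule="sum"):
    """Fuse a fingerprint and a face match score, then accept or reject.

    "sum": average the two scores (Sum rule).
    "llr": add the scores, assuming each is a log-likelihood ratio
           (LLR fusion; products of likelihood ratios add in log space).
    """
    if rule == "sum":
        fused = (score_fp + score_face) / 2.0
    elif rule == "llr":
        fused = score_fp + score_face
    else:
        raise ValueError(f"unknown fusion rule: {rule!r}")
    # decision threshold: at or above -> genuine, below -> impostor
    return "genuine" if fused >= threshold else "impostor"

# hypothetical scores and threshold, for illustration only
print(fuse_and_decide(0.9, 0.8, threshold=0.7))  # genuine
```

The point of the sketch is that the fusion rule f(·,·) is the only place where the two modalities interact, which is why the choice of rule (Sum vs. LLR) matters for robustness under spoofing.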
4. Spoof attacks
• Spoof attack: an attack at the user interface
• Presentation of a fake biometric trait to the sensor
• Solutions:
  • Liveness detection methods
    • Drawback: increase of the false rejection rate (FRR)
  • Multimodal biometric systems: "intrinsically" robust?
5. Aim of our work
• State of the art:
  • Fabricating fake traits is a cumbersome task
  • Robustness of multimodal systems has been evaluated using simulated attacks1,2
  • Substantial increase of the false acceptance rate (FAR) when only one trait is spoofed
• Hypothesis: worst-case scenario1,2
  • the attacker is able to fabricate an exact replica of the genuine biometric trait
  • the match score distribution of the spoofed trait equals that of the genuine trait
• Need to investigate robustness against realistic (non-worst-case) spoof attacks

1 R. N. Rodrigues, L. L. Ling, V. Govindaraju, "Robustness of multimodal biometric fusion methods against spoof attacks", JVLC, 2009.
2 P. A. Johnson, B. Tan and S. Schuckers, "Multimodal Fusion Vulnerability To Non-Zero Effort (Spoof) Imposters", WIFS, 2010.
6. Aim of our work
• Main goal:
  • Methods for evaluating robustness under spoof attacks in realistic scenarios, without fabricating fake biometric traits
• Aim of this paper:
  • to investigate whether realistic spoof attacks against all modalities can allow the attacker to crack the multimodal system
  • and whether the worst-case assumption is realistic
7. Experimental setting
• Data set:
  • Two separate data sets of faces and fingerprints
  • Chimerical multimodal data set
• Live:
  • No. of clients: 40
  • No. of samples per client: 40
• Spoofed (fake):
  • No. of clients: 40
  • No. of samples per client: 40
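A "chimerical" multimodal data set pairs clients from two independently collected unimodal data sets to form virtual multimodal clients. The following is a minimal sketch of such a pairing; `build_chimerical_dataset`, the random one-to-one assignment, and the integer client ids are assumptions for illustration (the deck does not specify how its pairing was done).

```python
import random

def build_chimerical_dataset(fingerprint_clients, face_clients, seed=0):
    """Pair each fingerprint client with a distinct face client to form
    virtual multimodal clients from two independent unimodal data sets."""
    assert len(fingerprint_clients) == len(face_clients)
    rng = random.Random(seed)
    faces = list(face_clients)
    rng.shuffle(faces)  # random one-to-one assignment of face identities
    return list(zip(fingerprint_clients, faces))

# 40 clients per modality, as in the experimental setting above
virtual_clients = build_chimerical_dataset(range(40), range(40))
```

Chimerical pairing is a common workaround when no truly multimodal data set is available; it relies on the assumption that a person's face and fingerprint traits are statistically independent.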
8. Experimental setting
• Spoofed (fake) trait production
  • Fake fingerprints by the "consensual method"
    • mould: plasticine-like material
    • cast: two-compound mixture of liquid silicone
    [images: live vs. spoofed (fake) fingerprint samples]
  • Fake faces by "photo attack"
    • a photo displayed on a laptop screen is presented to the camera
    [images: live vs. spoofed (fake) face samples]
10. Experimental Results
• Detection Error Trade-off (DET) curves:
  • false rejection rate (FRR) vs. false acceptance rate (FAR)
[figure: DET curves (FRR % vs. FAR %, log-log axes) for the Sum and LLR fusion rules, with curves for fing.+face, fing., and face]
• Under no spoof attack, the multimodal systems outperform the corresponding unimodal systems, with the exception of the Sum rule
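A DET curve is traced by sweeping the decision threshold and recording the two error rates at each point. The sketch below shows this computation; the `det_points` helper and the toy scores are assumptions for illustration, not the evaluation code behind the plots.

```python
import numpy as np

def det_points(genuine_scores, impostor_scores, thresholds):
    """FRR/FAR pairs over a sweep of decision thresholds.

    FRR: fraction of genuine scores rejected  (score <  threshold)
    FAR: fraction of impostor scores accepted (score >= threshold)
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    frr = np.array([(genuine < t).mean() for t in thresholds])
    far = np.array([(impostor >= t).mean() for t in thresholds])
    return frr, far

# toy scores: genuine comparisons score higher than impostor ones
frr, far = det_points([0.8, 0.9, 0.7], [0.2, 0.4, 0.3], thresholds=[0.5])
```

Plotting the resulting (FAR, FRR) pairs on log-log axes gives a DET curve; a spoof attack shifts the impostor score distribution upward, which raises FAR at every threshold and moves the curve to the right.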
11. Experimental Results
[figure: DET curves (FRR % vs. FAR %, log-log axes) for the Sum and LLR fusion rules, with curves for fing.+face, fing.+face spoof, fing., fing. spoof, face, and face spoof]
• Spoof attacks considerably worsen the performance of the individual (unimodal) systems, allowing an attacker to crack them
• Spoof attacks against both traits also worsen the performance of the multimodal systems
• However, under attack, the considered multimodal systems remain more robust than the unimodal ones
12. Experimental Results
[figure: DET curves (FRR % vs. FAR %, log-log axes) for the Sum and LLR fusion rules, showing fing.+face and fing.+face spoof together with the FAR=FRR line]
• The performance of the multimodal systems under attack worsens considerably, confirming that they can be cracked by spoofing all traits
• The worst-case assumption is not a good approximation of realistic attacks
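The FAR=FRR line marks the equal error rate (EER) operating point often used to summarize a DET curve in a single number. A rough way to locate it from raw scores is sketched below; `equal_error_rate` is a hypothetical helper, not the deck's evaluation code, and the threshold sweep over observed scores is only an approximation.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Approximate the FAR = FRR operating point by sweeping the
    threshold over all observed scores and keeping the point where
    |FAR - FRR| is smallest."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    best_gap, best_eer = None, None
    for t in np.unique(np.concatenate([genuine, impostor])):
        frr = (genuine < t).mean()    # genuine users rejected
        far = (impostor >= t).mean()  # impostors accepted
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2.0
    return best_eer
```

Comparing the EER of the system with and without spoofed scores gives a compact measure of how much the attack degrades the operating point.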
13. Conclusions
• State of the art: "worst-case" scenario
• Evidence on two common beliefs about spoof attacks:
  • Multimodal systems can be more robust than unimodal systems
  • Multimodal systems can be cracked by spoofing all the fused traits, even when the attacker cannot fabricate worst-case (exact) replicas
• The worst-case scenario is not suitable for evaluating performance under attack
• Ongoing work:
  • development of methods for evaluating robustness without constructing data sets of spoof attacks
  • development of robust score fusion rules