The 4th session of the AI Trust, Bias, Explainability series by IBM AI.
Date: 8/24, 2020 10am PST
Title: Adversarial Robustness 360 Toolbox For ML
Website: https://learn.xnextcon.com/event/eventdetails/W20082410
Abstract:
Welcome to the "AI Trust, Bias and Explainability" learning series by IBM AI. In collaboration with the IBM team, we host a series of practical introductory sessions on AI trust, bias, and explainability.
This is the 4th session:
Adversarial samples are inputs to Machine Learning models that an adversary has tampered with in order to cause specific misclassifications. It is surprisingly easy to create adversarial samples and surprisingly difficult to defend ML models against them. This poses potential threats to the deployment of ML in security-critical applications.
In this webinar I will review the state of the art on adversarial samples and discuss recent progress in developing ML models that are robust against them. Most of the time will be spent on how to use the Adversarial Robustness Toolbox (ART) open-source project to evaluate the robustness of ML models under various types of threats.
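As a rough preview of the kind of workflow covered, the sketch below uses ART to craft adversarial samples with the Fast Gradient Method and compares clean vs. adversarial accuracy. It is not material from the talk: the scikit-learn model, the digits dataset, and the eps value are illustrative assumptions, and it presumes a recent ART release (pip install adversarial-robustness-toolbox).

# Minimal sketch (not from the talk): evaluate robustness with ART's Fast Gradient Method.
# Assumes a recent ART release plus scikit-learn; model, dataset and eps are illustrative choices.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train a simple classifier on the digits dataset, with features scaled to [0, 1]
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Wrap the model for ART and generate adversarial test samples with FGSM
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_test_adv = attack.generate(x=X_test)

# Compare accuracy on clean vs. adversarial inputs
print("Clean accuracy:      ", model.score(X_test, y_test))
print("Adversarial accuracy:", model.score(X_test_adv, y_test))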
All sessions of the series:
Jul 27th - AI Security Privacy-Preserving Machine Learning by IBM AI. Session 1
Aug 10th - Explainable AI Workflows using Python. Session 2
Aug 17th - Understanding and Removing Unfair Bias in ML. Session 3
Aug 24th - Adversarial Robustness 360 Toolbox For ML. Session 4
Aug 31st - Workshop: Explainable AI Workflows. Session 5
4. But… AI is also surprisingly brittle!
https://art-demo.mybluemix.net/
5. This does not only apply to images…
Example: the original video is classified as "Basketball throw" (72.5%); the adversarial video is classified as "Tennis swing" (49.5%).
https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/notebooks/adversarial_action_recognition.ipynb
7. Adversarial Threats to AI – Scenarios
Financial services:
• Evade fraud detection
Autonomous vehicles:
• Targeted/untargeted attacks on object recognition and image segmentation models (see the sketch after this list)
Cybersecurity:
• Evade spam filters, malware detectors, network intrusion detection etc.
Security:
• Disappearance attacks against CCTV surveillance
Undermine trust in AI
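The targeted/untargeted distinction above can be made concrete with ART: an untargeted attack only needs to push a sample towards any wrong class, while a targeted attack aims at a chosen class by passing target labels to generate(). The sketch below is illustrative (same toy scikit-learn model and digits data as the earlier example), not part of the slides.

# Rough sketch of untargeted vs. targeted evasion with ART's FastGradientMethod.
# Model, dataset, eps and the target class are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
model = LogisticRegression(max_iter=1000).fit(X, y)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Untargeted: push each sample towards any class other than its true one
untargeted = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv_any = untargeted.generate(x=X)

# Targeted: push every sample towards one chosen class (here, digit 0)
targeted = FastGradientMethod(estimator=classifier, eps=0.2, targeted=True)
X_adv_zero = targeted.generate(x=X, y=np.zeros(len(X), dtype=int))

# Success rates on the training data (good enough for a toy illustration)
print("Untargeted success rate:", np.mean(model.predict(X_adv_any) != y))
print("Targeted success rate:  ", np.mean(model.predict(X_adv_zero) == 0))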
8. Such attacks are already happening...
Reports of cybersecurity vulnerabilities due to evasion attacks against AI in anti-malware / anti-virus products.
11. ART – How to contribute?
• Check the Contributing page: https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/CONTRIBUTING.md
• Create GitHub issues for suspected bugs, missing features, ideas for improvements etc.
• Contribute bug fixes, new features etc. via pull requests to the dev branch
• Follow the PEP 8 coding style and provide unit tests
• Sign the DCO (via the ‘-s’ flag) for every commit