To trust a decision made by an algorithm, we need to know that it is reliable and fair, that it can be accounted for, and that it will cause no harm. We need assurance that it cannot be tampered with and that the system itself is secure. We need to understand the rationale behind the algorithmic assessment, recommendation or outcome, and be able to interact with it, probe it – even ask questions. And we need assurance that the values and norms of our societies are also reflected in those outcomes.
Learn how bias can take root in machine learning algorithms and ways to overcome it. From the power of open source to tools built to detect and remove bias in machine learning models, there is a vibrant ecosystem of contributors working to build a digital future that is inclusive and fair. Learn how to achieve AI fairness, robustness, explainability and accountability. You can become part of the solution.
Trusting machines with robust, unbiased and reproducible AI
1. Trusting machines with robust, unbiased and reproducible AI
Dr. Margriet Groenendijk, Data & AI Developer Advocate
Data Science London Meetup, November 25, 2019
13. "AI is the state of the art of computers, but calling it intelligence bothers me"
@stevewoz #GOTOcph
@MargrietGr
14. Machine learning: algorithm selection
Deep learning: neural network design
Natural Language Processing: interactions between computers and human languages
Artificial intelligence: systems architecture
15. AI is used in many decision-making applications
Credit, Employment, Admission, Healthcare, Sentencing
22. What does it take to trust a decision made by a machine? (Other than that it is 99% accurate?)
23. What does it take to trust a decision made by a machine? (Other than that it is 99% accurate?)
Is it fair? Is it accountable? Is it easy to understand? Did anyone tamper with it? (#21, #32, #93)
30. Models: build your own from scratch, use an open source package to build your own, or use pre-trained models
Model Asset Exchange (MAX): https://ibm.biz/model-exchange
Acumos marketplace: https://marketplace.acumos.org
Model Zoo: https://modelzoo.co
Google AI Hub: https://cloud.google.com/ai-hub
TensorFlow: https://www.tensorflow.org/resources/models-datasets
34. What does it take to trust a decision made by a machine?
Is it fair? Is it accountable? Is it easy to understand? Did anyone tamper with it? (#21, #32, #93)
35. Misclassification: adversarial machine learning
Adversarial machine learning can be used to "trick" machine learning models into providing incorrect predictions.
https://www.ibm.com/blogs/research/2018/04/ai-adversarial-robustness-toolbox/
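To make the "trick" concrete, here is a minimal sketch of one classic attack, the Fast Gradient Sign Method (FGSM), which the Adversarial Robustness Toolbox implements among many others. The model (a two-feature logistic regression), its weights and the epsilon are made up for illustration; the point is only that a tiny, targeted perturbation flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; FGSM nudges x by eps in the sign of
    that gradient, i.e. in the direction that increases the loss most.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Toy model and a correctly classified point (made-up numbers)
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.3, 0.1]), 1

x_adv = fgsm(x, y, w, b, eps=0.2)
print(sigmoid(w @ x + b) > 0.5)      # original input: predicted class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: prediction flips to 0
```

The perturbation here is only 0.2 per feature, yet the decision changes; for image models the same idea produces changes invisible to the human eye.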
49. What does it take to trust a decision made by a machine?
Is it fair? Is it accountable? Is it easy to understand? Did anyone tamper with it? (#21, #32, #93)
50. Criminal justice system
Risk scores using Northpointe's COMPAS algorithm: defendants with low risk scores are released on bail.
It falsely flagged black defendants as future criminals, wrongly labeling them this way at almost twice the rate of white defendants.
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
52. AIF360 demo
Disparate impact: computed as the ratio of the rate of favorable outcomes for the unprivileged group to that of the privileged group.
COMPAS (ProPublica recidivism): predict a criminal defendant's likelihood of reoffending.
Protected attributes:
- Sex: privileged = Female, unprivileged = Male
- Race: privileged = Caucasian, unprivileged = Not Caucasian
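AIF360 exposes this metric on its dataset objects (as `BinaryLabelDatasetMetric.disparate_impact()`); the sketch below computes the same ratio from the definition above with plain NumPy, on made-up toy predictions, so the arithmetic is visible.

```python
import numpy as np

def disparate_impact(y_pred, protected, favorable=1, unprivileged=0, privileged=1):
    """Disparate impact as defined on the slide:
    P(favorable outcome | unprivileged) / P(favorable outcome | privileged).

    A value of 1.0 means parity; the common "80% rule" flags values below 0.8.
    """
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unpriv = np.mean(y_pred[protected == unprivileged] == favorable)
    rate_priv = np.mean(y_pred[protected == privileged] == favorable)
    return rate_unpriv / rate_priv

# Toy predictions: 1 = favorable outcome, protected attribute 1 = privileged group
y_pred = [1, 1, 0, 0, 1, 1, 1, 0]
race   = [0, 0, 0, 0, 1, 1, 1, 1]
print(disparate_impact(y_pred, race))  # 0.666...: 0.5 / 0.75, below the 0.8 threshold
```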
53. Reweighing
Weights the examples in each (group, label) combination differently to ensure fairness before classification.
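AIF360 ships this as the `Reweighing` pre-processor (Kamiran & Calders); the weight formula itself is short enough to sketch by hand. Each example gets weight P(group) x P(label) / P(group, label), so that group and label look statistically independent once the weights are applied. The data below is made up.

```python
import numpy as np
from collections import Counter

def reweighing_weights(groups, labels):
    """Per-example reweighing weights:
    w(g, l) = P(group=g) * P(label=l) / P(group=g, label=l).

    Rare (group, label) combinations, e.g. an unprivileged group with the
    favorable label, get weights above 1; over-represented ones get less.
    """
    groups, labels = np.asarray(groups), np.asarray(labels)
    n = len(groups)
    n_g = Counter(groups.tolist())                       # counts per group
    n_l = Counter(labels.tolist())                       # counts per label
    n_gl = Counter(zip(groups.tolist(), labels.tolist()))  # joint counts
    return np.array([n_g[g] * n_l[l] / (n * n_gl[(g, l)])
                     for g, l in zip(groups.tolist(), labels.tolist())])

# Toy data: group 0 rarely gets the favorable label 1
groups = [0, 0, 0, 0, 1, 1, 1, 1]
labels = [1, 0, 0, 0, 1, 1, 1, 0]
print(reweighing_weights(groups, labels))  # [2. 0.667 0.667 0.667 0.667 0.667 0.667 2.]
```

After weighting, the weighted favorable-outcome rate is the same (0.5) in both groups, which is exactly what "fairness before classification" buys you.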
54. Reject Option Based Classification algorithm applied
Changes predictions from a classifier to make them fairer: provides favorable outcomes to unprivileged groups within a confidence band around the decision boundary, where uncertainty is highest.
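AIF360 implements this as the `RejectOptionClassification` post-processor; the sketch below hand-rolls the core rule on made-up scores (the threshold and band width are arbitrary illustration values, not tuned as the real algorithm tunes them).

```python
import numpy as np

def reject_option_classification(scores, groups, threshold=0.5, margin=0.1,
                                 unprivileged=0, privileged=1):
    """Post-processing sketch of Reject Option Based Classification.

    Inside the uncertainty band [threshold - margin, threshold + margin],
    unprivileged examples get the favorable label (1) and privileged
    examples the unfavorable one (0). Outside the band, where the
    classifier is confident, predictions follow the usual threshold rule.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    preds = (scores >= threshold).astype(int)
    in_band = np.abs(scores - threshold) <= margin
    preds[in_band & (groups == unprivileged)] = 1
    preds[in_band & (groups == privileged)] = 0
    return preds

scores = [0.45, 0.55, 0.9, 0.2]   # classifier confidence for the favorable label
groups = [0,    1,    1,   0]     # 0 = unprivileged, 1 = privileged
print(reject_option_classification(scores, groups))  # [1 0 1 0]
```

Only the two borderline scores (0.45 and 0.55) are overridden; the confident predictions (0.9 and 0.2) are left alone.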
55. What does it take to trust a decision made by a machine?
Is it fair? Is it accountable? Is it easy to understand? Did anyone tamper with it? (#21, #32, #93)
56. AI Explainability 360 (AIX360)
https://github.com/IBM/AIX360
http://aix360.mybluemix.net
Toolbox: local post-hoc, global post-hoc and directly interpretable explanations
59. AIX360: different ways to explain
End users/customers (trust):
- Doctors: why did you recommend this treatment?
- Customers: why was my loan denied?
- Teachers: why was my teaching evaluated in this way?
Gov't/regulators (compliance, safety):
- Prove to me that you didn't discriminate.
Developers (quality, "debuggability"):
- Is our system performing well? How can we improve it?
62. ExternalRiskEstimate is an important feature, positively correlated with good credit risk. The jumps in the plot indicate that applicants with an above-average ExternalRiskEstimate (the mean is 72) get an additional boost.
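Curves like the one the slide describes are commonly computed as partial dependence: fix the feature of interest at each grid value for every row and average the model's predictions. The sketch below uses a made-up scoring function with an artificial boost above 72 purely to reproduce the described jump; it is not the actual credit model from the demo.

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """Partial dependence of the model output on one feature: for each
    grid value v, overwrite that feature with v in every row and average
    the model's predictions over the dataset."""
    pd_vals = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        pd_vals.append(model(Xv).mean())
    return np.array(pd_vals)

# Hypothetical scorer: a smooth trend plus a jump above a risk estimate of 72
def model(X):
    return 0.01 * X[:, 0] + 0.3 * (X[:, 0] > 72)

X = np.random.default_rng(0).uniform(40, 100, size=(200, 1))
grid = np.array([60.0, 70.0, 80.0, 90.0])
print(partial_dependence(model, X, 0, grid))  # [0.6 0.7 1.1 1.2]
```

The gap between the 70 and 80 grid points (0.4 instead of the trend's 0.1) is the "additional boost" showing up as a jump in the curve.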
66. What does it take to trust a decision made by a machine? (Other than that it is 99% accurate?)
Is it fair? Is it accountable? Is it easy to understand? Did anyone tamper with it? (#21, #32, #93)
67. Trusted AI lifecycle through open source
Fairness (Is it fair?): AI Fairness 360 (AIF360) – github.com/IBM/AIF360, aif360.mybluemix.net
Robustness (Did anyone tamper with it?): Adversarial Robustness 360 (ART) – github.com/IBM/adversarial-robustness-toolbox, art-demo.mybluemix.net
Explainability (Is it easy to understand?): AI Explainability 360 (AIX360) – github.com/IBM/AIX360, aix360.mybluemix.net
Lineage (Is it accountable?): in the works!
68. Machine learning: algorithm selection
Deep learning: neural network design
Natural Language Processing: interactions between computers and human languages
Artificial intelligence: systems architecture
75. Operate trusted AI with Watson OpenScale
Build: Watson Studio, SPSS Modeler, 3rd-party IDEs & frameworks (Keras, PyTorch, Scikit-learn, Spark ML, Caffe2, ...)
Deploy and run: Watson Machine Learning, plus 3rd-party runtimes (custom Kubernetes etc., Microsoft Azure ML, Amazon Web Services)
Operate: Watson OpenScale – automated anomaly and drift detection, business KPIs, fairness and explainability, accuracy, validation and feedback, inputs for continuous evolution
Free IBM Cloud account: https://ibm.biz/Bdz35F
76. Provision services: Watson Studio, Cloud Object Store, Watson Machine Learning, Watson OpenScale
78. Provision services → set up a project → deploy a pre-trained model
Watson Studio, Jupyter notebooks: SparkML model, deploy as API, test the API
79. Provision services → set up a project → deploy a pre-trained model → configure monitoring
Watson Studio + OpenScale, Jupyter notebooks + UI:
- Set up a datamart
- Subscribe to monitoring of the deployed model: quality and explainability, fairness, drift
89. Recap
Add trust by asking these questions: Did anyone tamper with it? Is it fair? Is it easy to understand? Is it accountable?
AI is a systems architecture with lots of moving parts. It is not magic. It is not intelligent.
Trusted AI is what I will focus on in 2020.
90. Models: build your own, or use pre-trained models
Model Asset Exchange (MAX): https://ibm.biz/model-exchange
Acumos marketplace: https://marketplace.acumos.org
Model Zoo: https://modelzoo.co
Google AI Hub: https://cloud.google.com/ai-hub
TensorFlow: https://www.tensorflow.org/resources/models-datasets
91. How do they work?
https://github.com/EthicalML/awesome-production-machine-learning#explaining-black-box-models-and-datasets
93. Trusted AI: open source!
- EU: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
- Microsoft: https://www.microsoft.com/en-us/ai/our-approach-to-ai
- Google Cloud Explainable AI
- Partnership on AI: https://www.partnershiponai.org
- Linux Foundation AI