• Explainable AI is in the news, and for good reason. Financial services
companies have cited the inability to explain AI-based decisions as one of
the critical roadblocks to further adoption of AI in their industry.
Transparency, accountability, and trustworthiness of data-driven decision
support systems based on AI and machine learning are serious regulatory
mandates in banking, insurance, healthcare, and other industries. From
complying with pertinent regulations to increasing customer trust, data
scientists and business decision makers must be able to show that
AI-based decisions can be explained.
• H2O Driverless AI delivers explainable AI today with its machine learning
interpretability (MLI) module. This capability employs a unique
combination of techniques and methodologies to explain the results of
both Driverless AI models and external models.
Explainable AI
with H2O Driverless AI’s
machine learning interpretability
module
Martin Dvorak
Software Engineer, H2O.ai
martin.dvorak@h2o.ai
H2O.ai Prague Meetup #3 2019/5/16
ABOUT ME
Martin is a passionate software engineer and RESTafarian who is interested in machine
learning, VM construction, enterprise software and knowledge management. He
holds a Master's degree in Computer Science from Charles University in Prague, with
specializations in compilers, operating systems and AI/ML.
Martin is a backend engineer on the MLI project at H2O.ai.
AGENDA
• Intro
– Context and scope.
• Why
– Explainability matters.
• What
– Steps to build human-centered, low-risk models.
• How
– Explaining models using H2O.ai’s solution.
Intro
Terminology, scope and context
Terminology, Scope and Context
• Machine Learning Interpretability
– “[Machine learning interpretability] is the ability to explain or present in
understandable terms to a human.” (https://arxiv.org/pdf/1702.08608.pdf)
• Structured data
– No image, video, or sound data; deep learning is typically not used.
– Tabular data and supervised ML.
• Auto ML
– H2O Driverless AI (DAI) product (not OSS).
• MLI module
– Solution based on MLI module of H2O Driverless AI.
Terminology, Scope and Context
[Diagram: end-to-end pipeline: Data Integration & Quality → Feature
Engineering → Model Training → Model, with Machine Learning
Interpretability spanning the pipeline end to end to yield an explainable model.]
Why explainability matters
Problem statement
Potential Performance and Interpretability Trade-off
White-box model vs. black-box model: feature engineering + algorithm(s). Balance.
• Linear models: exact explanations for approximate models. “For a one-unit
increase in age, the number of purchases increases by 0.8 on average.”
Risk: lost profits, wasted marketing.
• Machine learning models: approximate explanations for exact models.
“Slope begins to decrease here. Act to optimize savings.” “Slope begins to
increase here sharply. Act to optimize profits.”
[Plots: Number of Purchases vs. Age, linear fit vs. machine learning fit]
Sometimes…
Multiplicity of Good Models
• For a given well-understood dataset there is usually one best linear model, but…
• …for a given well-understood dataset there are usually many good ML
models. Which one to choose?
• Same objective metric values, performance, …
• This is often referred to as “the multiplicity of good models.” -- Leo Breiman
Fairness and Social Aspects
• Gender
• Age
• Ethnicity
• Health
• Sexual behavior
• Avoid discriminatory models and remediate disparate impact.
Trust of Model Producers & Consumers
• Dataset vs. real world.
• ML adoption.
• Introspection.
• Sensitivity, OOR.
• Diagnostics.
• “Debugging”.
Source: http://www.vias.org/tmdatanaleng/
Security and Hacking
• Goal: compromise model integrity.
• Attack types:
– Exploratory: a surrogate model is trained to identify vulnerabilities (~ MLI);
trial and error (for a specific class) vs. indiscriminate attacks.
– Causative: models trained with adversarial datasets; local model >
adversarial instance > target model; standard / continuous learning.
– Integrity (compromise system integrity): false-negative instances,
e.g. fraud passes the check.
– Availability (compromise system availability): false-positive instances,
e.g. blocking access for legitimate instances.
Regulated & Controlled Environments
• Legal requirements
– Banking, insurance, healthcare, …
• Prediction explanations
– Decision justification (reason codes*, …).
• Fairness
• Security
• Accuracy first vs. interpretability first
– Competitions vs. the real world.
Explainability Matters
• Balance of performance and interpretability.
• Multiplicity of good models.
• Fairness and social aspects.
• Trust of model producers and consumers.
• Security and hacking.
• Regulated/controlled environments.
What’s needed
Building human-centered, low-risk models
• Big picture.
• Interpretability focused.
• MLI module demo only.
• DAI Auto ML models.*
• MLI use-case coverage by DAI.
• Techniques and algorithms.
• Possible workflow.
• IID and time series (TS).
• Where the MLI module fits end to end.
Building Human-Centered, Low-Risk Models
*) The MLI module is not limited to DAI’s models
Building Human-Centered, Low-Risk Models
Workflow: Load data
Exploratory Data Analysis and Visualization
Workflow: Load data → EDA
Feature Engineering (Manual & Auto ML)
Workflow: Load data → EDA → Feature engineering → black-box model
Workflow: Load data → EDA → Feature engineering → white-box model
Model Choice: Constrained, Simple, Fair
Workflow: Load data → EDA → Feature engineering → Models (black-box vs. white-box)
Holy Grail
GLM (logistic regression), Monotonic GBM (DT), XNN, …
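Monotonicity constraints are what make the GBMs above usable in interpretability-first settings: the fitted response is forced to move in one direction as a feature grows. As a minimal sketch of the idea (not DAI's implementation), the pool-adjacent-violators algorithm fits the best non-decreasing response to a sequence of target values:

```python
def isotonic_fit(y):
    """Pool Adjacent Violators: best non-decreasing fit to y (squared error)."""
    blocks = [[v, 1] for v in y]  # [block mean, block size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:  # monotonicity violated: merge
            m1, w1 = blocks[i]
            m2, w2 = blocks[i + 1]
            blocks[i:i + 2] = [[(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2]]
            i = max(i - 1, 0)  # merged block may now violate its left neighbor
        else:
            i += 1
    fitted = []
    for mean, size in blocks:
        fitted.extend([mean] * size)
    return fitted

print(isotonic_fit([1, 3, 2, 4]))  # the 3, 2 violation is pooled: [1, 2.5, 2.5, 4]
```

A monotonic GBM applies the same constraint inside tree construction: splits that would move the response in the forbidden direction are disallowed.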
Model Choice
Workflow: Load data → EDA → Feature engineering → Models

| Interpretability | Ensemble Level | Target Transformation | Feature Engineering | Feature Pre-Pruning | Monotonicity Constraints |
| 1 - 3 | <= 3 | | | None | Disabled |
| 4 | <= 3 | Inverse | | None | Disabled |
| 5 | <= 3 | Anscombe | Clustering (ID, distance); Truncated SVD | None | Disabled |
| 6 | <= 2 | Logit; Sigmoid | | Feature selection | Disabled |
| 7 | <= 2 | | Frequency Encoding | Feature selection | Enabled |
| 8 | <= 1 | 4th Root | | Feature selection | Enabled |
| 9 | <= 1 | Square; Square Root | Bulk Interactions (add, subtract, multiply, divide); Weight of Evidence | Feature selection | Enabled |
| 10 | 0 | Identity; Unit Box; Log | Date Decompositions; Number Encoding; Target Encoding; Text (TF-IDF, Frequency) | Feature selection | Enabled |
Traditional Model Assessment and Diagnostics
Workflow: Load data → EDA → Feature engineering → Models → Assessment
Experiment summary (document + YAML) + AutoDoc
• Post-hoc model debugging
– What-if, sensitivity analysis (accuracy).
• Post-hoc explanations
– Reason codes.
• Post-hoc bias assessment and remediation
– Disparate impact analysis.
Post-hoc Model Explanations and Debugging
Workflow: Load data → EDA → Feature engineering → Models → Assessment →
Post-hoc explanations, bias remediation and model debugging
Human Review
Workflow: Load data → EDA → Feature engineering → Models → Assessment →
Post-hoc (explanations, bias remediation, model debugging) → Human review (semantics)
Iterative Improvement
Workflow: Load data → EDA → Feature engineering → Models → Assessment →
Post-hoc (explanations, bias remediation, model debugging) → Human review (semantics) → Deployment
Iterate to improve the model.
How
Explaining models - MLI module deep dive
H2O Driverless AI’s MLI module
IID and Time Series
H2O Driverless AI’s MLI module
• Global approximate model behavior/interactions: global surrogate DT
• Global feature importance: surrogate RF, Shapley
• Global feature behavior: PDP
• Reason codes: K-LIME
• Local feature importance: LOCO, Shapley
• Local feature behavior: ICE
• Local approximate model behavior: local surrogate DT
Demo Dataset: Credit Card (IID)
| Column Name | Description |
| ID | ID of each client |
| LIMIT_BAL | Amount of given credit in NT dollars (includes individual and family/supplementary credit) |
| SEX | Gender (1=male, 2=female) |
| EDUCATION | 1=graduate school, 2=university, 3=high school, 4=others, 5=unknown, 6=unknown |
| MARRIAGE | Marital status (1=married, 2=single, 3=others) |
| AGE | Age in years |
| PAY_x {1, …, 6} | Repayment status in August 2005 – April 2005 (-1=paid duly, 1=payment delay for 1 month, …, 8=payment delay for 8 months) |
| BILL_AMTx {1, …, 6} | Amount of bill statement in September 2005 – April 2005 (NT dollar) |
| PAY_AMTx {1, …, 6} | Amount of previous payment in September 2005 – April 2005 (NT dollar) |
| default_payment_next_month | Default payment (1=yes, 0=no) |

Target: Default Payment Next Month (binary)
Features: Education, Marriage, Age, Sex, Repayment Status, Limit Balance, ...
Predictions: probability (0…1)
• Challenge:
– Black-box models
– Original vs. transformed features.
• Solution: Surrogate models
– Pros
– Increases any black-box model’s
interpretability
– Time complexity
– Cons
– Accuracy
Global Approximate Model Behavior/Interaction
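The surrogate recipe is: score the data with the black-box model, then train a simple model on those predictions (not on the labels) and read the simple model instead. A hypothetical sketch with a one-split decision stump standing in for the surrogate tree:

```python
def stump_surrogate(X, yhat):
    """Fit a one-split decision stump to black-box predictions yhat."""
    n, best = len(yhat), None
    for j in range(len(X[0])):                   # candidate feature
        for t in sorted({row[j] for row in X}):  # candidate threshold
            left = [yhat[i] for i in range(n) if X[i][j] <= t]
            right = [yhat[i] for i in range(n) if X[i][j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, j, t, ml, mr)
    return best  # (fidelity SSE, feature, threshold, left mean, right mean)

# Stand-in "black box": predicts 1 when the first feature exceeds 0.5.
X = [(i / 10, (i % 3) / 3) for i in range(10)]
yhat = [1.0 if x0 > 0.5 else 0.0 for x0, _ in X]
print(stump_surrogate(X, yhat))  # recovers the split on feature 0 at 0.5
```

The fidelity term (SSE here; RMSE/R² in later slides) is what tells you whether the surrogate's story about the black box can be trusted, which is the "accuracy" con above.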
Surrogate Models
[Diagram: the original model is trained on the data; surrogate models are
then trained to mimic the original model’s predictions.]
• Challenges:
– Black-box models
– Original vs. transformed features
• Solutions:
– Surrogate model: RF (introspection)
– Pros:
– Original features
– Time complexity
– Cons:
– Accuracy
Global Feature Importance: Random Forest
• Challenges:
– Black-box models
– Original vs. transformed features
• Solutions:
– Original (DAI) Model Introspection
– Pros:
– Accuracy
– Cons:
– Transformed features
– Global only
Global Feature Importance: Original Model
• Challenge
– Black-box models
– Original vs. transformed features
• Solutions:
– Shapley values
– Pros:
– Accuracy
– Math correctness
– Cons:
– Time complexity
– Transformed features
Global Feature Importance: Shapley Values
• Lloyd Shapley
– American mathematician who won the 2012 Nobel Memorial Prize in
Economic Sciences.
– Shapley values originated in his Ph.D. work, written in the 1950s.
• Shapley values:
– Supported by solid mathematical (game) theory.
– Calculation has exponential time complexity (number of coalitions).
– Typically unrealistic to compute exactly in the real world.
– Can be computed in global or local scope.
– Guarantee fair distribution of a prediction among the features in
the instance.
– Do not work well in sparse cases; all features must be used.
– Return a single value per feature, not a model.
Shapley Values
Feature importance: Leave One Covariate Out
• UC:
– Complements other feature importance charts with bias tendency.
• Challenge:
– Black-box models
• Solution:
– LOCO
• Methods
– Surrogate models:
– RF (introspection)
– Leave One Covariate Out (LOCO)
– Original model (introspection)
– Shapley values
Global Feature Importance
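Global LOCO in a nutshell: fit the model with all features, refit with one covariate left out, and report how much the error grows. A self-contained sketch on hypothetical data, using ordinary least squares as the stand-in model:

```python
def ols(X, y):
    """Least squares via normal equations (intercept added), Gaussian elim."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    A = [[sum(r[a] * r[b] for r in rows) for b in range(p)] for a in range(p)]
    c = [sum(rows[i][a] * y[i] for i in range(len(rows))) for a in range(p)]
    for col in range(p):  # forward elimination with partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):  # back substitution
        beta[r] = (c[r] - sum(A[r][k] * beta[k] for k in range(r + 1, p))) / A[r][r]
    return beta

def loco(X, y):
    """Error increase when each covariate is left out (global LOCO)."""
    def mse(Xs):
        b = ols(Xs, y)
        preds = [b[0] + sum(b[j + 1] * r[j] for j in range(len(r))) for r in Xs]
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)
    full = mse(X)
    return [mse([[v for k, v in enumerate(r) if k != j] for r in X]) - full
            for j in range(len(X[0]))]

# Hypothetical data: y depends only on the first feature.
X = [(i, (i * 7) % 5) for i in range(8)]
y = [2 * i for i in range(8)]
print(loco(X, y))  # large importance for feature 0, ~0 for feature 1
```

Local LOCO is the same difference taken per-row on predictions instead of globally on error.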
Global Feature Behavior: Partial Dependence Plot
• Solution: Surrogate model PDP
– Pros
– Time complexity
– Original features
– White/black model interpretability
– Cons
– Accuracy
[Plot: average model prediction vs. Xj]
PDP: Character of the Feature Behavior
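The PDP recipe behind these plots: force feature j to each grid value for every row, score, and average. A sketch with a hypothetical predict function:

```python
def partial_dependence(predict, X, j, grid):
    """Average model prediction with feature j forced to each grid value."""
    out = []
    for v in grid:
        preds = [predict([x if k != j else v for k, x in enumerate(row)])
                 for row in X]
        out.append(sum(preds) / len(preds))
    return out

# Stand-in model: 2 * x0 + x1. The PDP for feature 0 recovers slope 2.
predict = lambda r: 2 * r[0] + r[1]
X = [[5, 0], [5, 2], [5, 4]]
print(partial_dependence(predict, X, 0, [0, 1, 2]))  # [2.0, 4.0, 6.0]
```

Because it averages over rows, PDP can hide heterogeneous feature behavior; that is what ICE (later) disaggregates.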
Reason codes: Local Feature Importance
• UCs:
– Prediction explanations
– Legal
– Debugging
– Drill-down, …
• From global to local scope
• Surrogate methods:
– K-LIME (K-means)
– LIME-SUP (trees)
LIME: Local Interpretable Model-agnostic Explanations
Source: https://github.com/marcotcr/lime
• A weighted linear surrogate model is used to explain a non-linear
decision boundary in a local region.
• Explains a single prediction.
• Example:
– A set of explanatory records is scored using the original model.
– To interpret a decision about another record, the explanatory records
are weighted by their closeness to that record.
– An L1-regularized linear model is trained on this weighted
explanatory set.
– The parameters of the linear model then help explain the prediction
for the selected record.
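A minimal single-feature version of that recipe (with a plain weighted least-squares fit standing in for the L1-regularized model): perturb around the record, weight the samples by a Gaussian proximity kernel, and fit a line. The local slope plays the role of the reason-code weight.

```python
from math import exp

def lime_local_slope(predict, x0, span=1.0, n=50, width=0.3):
    """Weighted linear fit to predict() around x0; returns (slope, intercept)."""
    xs = [x0 - span + 2 * span * i / (n - 1) for i in range(n)]  # perturbations
    ys = [predict(x) for x in xs]                                # black-box scores
    ws = [exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]  # proximity kernel
    W = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / W
    my = sum(w * y for w, y in zip(ws, ys)) / W
    slope = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return slope, my - slope * mx

# Non-linear black box: x^2. Near x0 = 1 the local slope is ~2 (the derivative).
slope, _ = lime_local_slope(lambda x: x * x, x0=1.0)
print(round(slope, 3))  # 2.0
```

K-LIME replaces the proximity kernel with K-means clustering: one linear surrogate per cluster, plus one global GLM.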
K-LIME: Clustered LIME
Reason codes: Local Feature Importance
• UCs:
– Prediction explanations
– Legal
– Debugging
– Drill-down, …
• From global to local scope
• From a global explanatory model to cluster-scoped explanatory models.
Reason codes: Local Feature Importance
• Challenges:
– Black-box models
– Original vs. transformed features
• Solutions:
– Surrogate model: K-LIME
– Pros:
– Original features
– Time complexity
– Cons:
– Accuracy
• UC:
– Explanation of a particular instance.
– Note path segment thickness.
• Challenge:
– Black-box models
• Solution: Surrogate models
– Pros
– Black-box model interpretability
– Time complexity
– Cons
– Accuracy
Local Approximate Model Behavior/Interaction
• Mean absolute value vs.
local contributions
• Challenge
– Black-box models
– Original vs. transformed features
• Solutions:
– Surrogate models:
– RF (introspection)
– Leave One Covariate Out (LOCO)
– Shapley values
Local Feature Importance
Local Feature Behavior: ICE
• Solution: Surrogate model ICE
– Pros
– Time complexity
– Original features
– White/black model interpretability
– Cons
– Accuracy
(dotted line vs. gray dot discrepancy)
ICE: Individual Conditional Expectations
[Plot: model prediction vs. Xj for an individual row]
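ICE is the per-row version of PDP: keep one curve per row instead of averaging, which exposes interactions the average hides; the dotted-line vs. gray-dot discrepancy called out above is exactly such a gap. A sketch with a hypothetical interaction model:

```python
def ice_curves(predict, X, j, grid):
    """One curve per row: sweep feature j over grid, other features fixed."""
    return [[predict([x if k != j else v for k, x in enumerate(row)])
             for v in grid] for row in X]

# Stand-in model with an interaction: x0 * x1.
predict = lambda r: r[0] * r[1]
curves = ice_curves(predict, [[0, 1], [0, -1]], 0, [0, 1])
print(curves)  # [[0, 1], [0, -1]]: opposite slopes per row
avg = [sum(c[i] for c in curves) / len(curves) for i in range(2)]
print(avg)     # [0.0, 0.0]: the PDP average is flat and hides the interaction
```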
AutoDoc
• Time series experiments:
– Test dataset
• Explainability:
– Original model
– Global and per-group
– Forecast horizon
– Feature importance (per-group)
– Local Shapley values
MLI for Time Series
MLI Cheatsheet
https://github.com/h2oai/mli-resources/blob/master/cheatsheet.png
MLI Functional Architecture
[Figure: MLI-2 functional architecture flow diagram (new MLI-2 PDP/ICE calculation vs. old).
– DAI creates the user model MU (a GBM over transformed features FU): fit(X, Y); MU: predict(X) yields ŶU.
– H2O-3 creates surrogate models on the original features: DT.fit(X, ŶU), 1 + k GLM.fit(X, ŶU) (one global, k cluster-local for k-LIME), and RF.fit(X, ŶU).
– Each artifact is annotated with a fidelity test (T) and the explanation it provides (E):
– Global/local surrogate DT (T: RMSE, R2; E: how features influence predictions, plus interactions, via typical paths in the tree).
– Global/local k-LIME (T: RMSE, R2; E: how much the GLM predictions are off the user model MU, Ŷ curve vs. ŶS dots; globally also a quantification of MU linearity).
– Global/local reason codes (T: RMSE, R2; E: how much a feature fi influences predictions via +/- contribution coefficients in the global or local GLMs).
– Global/local LOCO (T: bias (plot, contributions); E: how much fi contributes to predictions, leave-fi-in vs. leave-fi-out differences, via the surrogate RF).
– Global/local Shapley (T: the math behind Shapley; E: how much a transformed feature fUi influences predictions, +/- contribution coefficients via the MU GBM).
– Global/local feature importance (T: N/A; E: importance rather than contribution: depth in RF/GBM trees globally, unsigned LOCO locally).
– PDP and ICE (T: N/A; E: direct/inverse/no proportion (correlation) of fi across all predictions Ŷ, or for a particular ŷi).]
Conclusion
Takeaways
• ML interpretability matters.
• Multiplicity of good models.
• H2O Driverless AI has interpretability.
• Control model interpretability end to end.
• Prefer interpretable models.
• Test both your model and your explanatory software.
• Use synergy of local & global techniques.
• Shapley values.
MLI TEAM
Patrick Navdeep Mateusz
Zac Laco Martin
Thank you!
Resources
Books, articles, links and Git repos
• https://www.h2oai.com/explainable-ai/
• Booklets:
– Machine Learning Interpretability with DAI
– Ideas on Interpreting Machine Learning
• Driverless AI’s MLI module cheatsheet
• MLI presentations:
– MLI walkthrough by Patrick Hall
– Human Friendly Machine Learning by Patrick Hall
• GitHub repositories:
– MLI Resources
– H2O Meetups
AI Orange Belt - Session 4
 
InTTrust -IBM Artificial Intelligence Event
InTTrust -IBM Artificial Intelligence  EventInTTrust -IBM Artificial Intelligence  Event
InTTrust -IBM Artificial Intelligence Event
 
SESE 2021: Where Systems Engineering meets AI/ML
SESE 2021: Where Systems Engineering meets AI/MLSESE 2021: Where Systems Engineering meets AI/ML
SESE 2021: Where Systems Engineering meets AI/ML
 
AI Foundations Course Module 1 - An AI Transformation Journey
AI Foundations Course Module 1 - An AI Transformation JourneyAI Foundations Course Module 1 - An AI Transformation Journey
AI Foundations Course Module 1 - An AI Transformation Journey
 
Test-Driven Machine Learning
Test-Driven Machine LearningTest-Driven Machine Learning
Test-Driven Machine Learning
 
Deep learning
Deep learningDeep learning
Deep learning
 
If You Are Not Embedding Analytics Into Your Day To Day Processes, You Are Do...
If You Are Not Embedding Analytics Into Your Day To Day Processes, You Are Do...If You Are Not Embedding Analytics Into Your Day To Day Processes, You Are Do...
If You Are Not Embedding Analytics Into Your Day To Day Processes, You Are Do...
 

More from Martin Dvorak

How a woman's mind (May) work
How a woman's mind (May) workHow a woman's mind (May) work
How a woman's mind (May) workMartin Dvorak
 
On NASA Space Shuttle Program Hardware and Software
On NASA Space Shuttle Program Hardware and SoftwareOn NASA Space Shuttle Program Hardware and Software
On NASA Space Shuttle Program Hardware and SoftwareMartin Dvorak
 
Google Cluster Innards
Google Cluster InnardsGoogle Cluster Innards
Google Cluster InnardsMartin Dvorak
 

More from Martin Dvorak (6)

How a woman's mind (May) work
How a woman's mind (May) workHow a woman's mind (May) work
How a woman's mind (May) work
 
Doom in SpaceX
Doom in SpaceXDoom in SpaceX
Doom in SpaceX
 
On NASA Space Shuttle Program Hardware and Software
On NASA Space Shuttle Program Hardware and SoftwareOn NASA Space Shuttle Program Hardware and Software
On NASA Space Shuttle Program Hardware and Software
 
Fly Me to the Moon
Fly Me to the MoonFly Me to the Moon
Fly Me to the Moon
 
MindRaider
MindRaiderMindRaider
MindRaider
 
Google Cluster Innards
Google Cluster InnardsGoogle Cluster Innards
Google Cluster Innards
 

Recently uploaded

VTU technical seminar 8Th Sem on Scikit-learn
VTU technical seminar 8Th Sem on Scikit-learnVTU technical seminar 8Th Sem on Scikit-learn
VTU technical seminar 8Th Sem on Scikit-learnAmarnathKambale
 
Direct Style Effect Systems - The Print[A] Example - A Comprehension Aid
Direct Style Effect Systems -The Print[A] Example- A Comprehension AidDirect Style Effect Systems -The Print[A] Example- A Comprehension Aid
Direct Style Effect Systems - The Print[A] Example - A Comprehension AidPhilip Schwarz
 
WSO2Con2024 - From Blueprint to Brilliance: WSO2's Guide to API-First Enginee...
WSO2Con2024 - From Blueprint to Brilliance: WSO2's Guide to API-First Enginee...WSO2Con2024 - From Blueprint to Brilliance: WSO2's Guide to API-First Enginee...
WSO2Con2024 - From Blueprint to Brilliance: WSO2's Guide to API-First Enginee...WSO2
 
WSO2Con2024 - Hello Choreo Presentation - Kanchana
WSO2Con2024 - Hello Choreo Presentation - KanchanaWSO2Con2024 - Hello Choreo Presentation - Kanchana
WSO2Con2024 - Hello Choreo Presentation - KanchanaWSO2
 
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...masabamasaba
 
WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...
WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...
WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...WSO2
 
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...masabamasaba
 
%in Rustenburg+277-882-255-28 abortion pills for sale in Rustenburg
%in Rustenburg+277-882-255-28 abortion pills for sale in Rustenburg%in Rustenburg+277-882-255-28 abortion pills for sale in Rustenburg
%in Rustenburg+277-882-255-28 abortion pills for sale in Rustenburgmasabamasaba
 
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital TransformationWSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital TransformationWSO2
 
Announcing Codolex 2.0 from GDK Software
Announcing Codolex 2.0 from GDK SoftwareAnnouncing Codolex 2.0 from GDK Software
Announcing Codolex 2.0 from GDK SoftwareJim McKeeth
 
tonesoftg
tonesoftgtonesoftg
tonesoftglanshi9
 
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park %in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park masabamasaba
 
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...Shane Coughlan
 
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrainmasabamasaba
 
WSO2CON 2024 - Freedom First—Unleashing Developer Potential with Open Source
WSO2CON 2024 - Freedom First—Unleashing Developer Potential with Open SourceWSO2CON 2024 - Freedom First—Unleashing Developer Potential with Open Source
WSO2CON 2024 - Freedom First—Unleashing Developer Potential with Open SourceWSO2
 
%in Midrand+277-882-255-28 abortion pills for sale in midrand
%in Midrand+277-882-255-28 abortion pills for sale in midrand%in Midrand+277-882-255-28 abortion pills for sale in midrand
%in Midrand+277-882-255-28 abortion pills for sale in midrandmasabamasaba
 
%in Benoni+277-882-255-28 abortion pills for sale in Benoni
%in Benoni+277-882-255-28 abortion pills for sale in Benoni%in Benoni+277-882-255-28 abortion pills for sale in Benoni
%in Benoni+277-882-255-28 abortion pills for sale in Benonimasabamasaba
 
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...masabamasaba
 
WSO2CON 2024 Slides - Unlocking Value with AI
WSO2CON 2024 Slides - Unlocking Value with AIWSO2CON 2024 Slides - Unlocking Value with AI
WSO2CON 2024 Slides - Unlocking Value with AIWSO2
 
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024VictoriaMetrics
 

Recently uploaded (20)

VTU technical seminar 8Th Sem on Scikit-learn
VTU technical seminar 8Th Sem on Scikit-learnVTU technical seminar 8Th Sem on Scikit-learn
VTU technical seminar 8Th Sem on Scikit-learn
 
Direct Style Effect Systems - The Print[A] Example - A Comprehension Aid
Direct Style Effect Systems -The Print[A] Example- A Comprehension AidDirect Style Effect Systems -The Print[A] Example- A Comprehension Aid
Direct Style Effect Systems - The Print[A] Example - A Comprehension Aid
 
WSO2Con2024 - From Blueprint to Brilliance: WSO2's Guide to API-First Enginee...
WSO2Con2024 - From Blueprint to Brilliance: WSO2's Guide to API-First Enginee...WSO2Con2024 - From Blueprint to Brilliance: WSO2's Guide to API-First Enginee...
WSO2Con2024 - From Blueprint to Brilliance: WSO2's Guide to API-First Enginee...
 
WSO2Con2024 - Hello Choreo Presentation - Kanchana
WSO2Con2024 - Hello Choreo Presentation - KanchanaWSO2Con2024 - Hello Choreo Presentation - Kanchana
WSO2Con2024 - Hello Choreo Presentation - Kanchana
 
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Toronto Psychic Readings, Attraction spells,Brin...
 
WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...
WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...
WSO2CON 2024 - Building the API First Enterprise – Running an API Program, fr...
 
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
%+27788225528 love spells in Boston Psychic Readings, Attraction spells,Bring...
 
%in Rustenburg+277-882-255-28 abortion pills for sale in Rustenburg
%in Rustenburg+277-882-255-28 abortion pills for sale in Rustenburg%in Rustenburg+277-882-255-28 abortion pills for sale in Rustenburg
%in Rustenburg+277-882-255-28 abortion pills for sale in Rustenburg
 
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital TransformationWSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
WSO2Con2024 - WSO2's IAM Vision: Identity-Led Digital Transformation
 
Announcing Codolex 2.0 from GDK Software
Announcing Codolex 2.0 from GDK SoftwareAnnouncing Codolex 2.0 from GDK Software
Announcing Codolex 2.0 from GDK Software
 
tonesoftg
tonesoftgtonesoftg
tonesoftg
 
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park %in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
 
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
 
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
 
WSO2CON 2024 - Freedom First—Unleashing Developer Potential with Open Source
WSO2CON 2024 - Freedom First—Unleashing Developer Potential with Open SourceWSO2CON 2024 - Freedom First—Unleashing Developer Potential with Open Source
WSO2CON 2024 - Freedom First—Unleashing Developer Potential with Open Source
 
%in Midrand+277-882-255-28 abortion pills for sale in midrand
%in Midrand+277-882-255-28 abortion pills for sale in midrand%in Midrand+277-882-255-28 abortion pills for sale in midrand
%in Midrand+277-882-255-28 abortion pills for sale in midrand
 
%in Benoni+277-882-255-28 abortion pills for sale in Benoni
%in Benoni+277-882-255-28 abortion pills for sale in Benoni%in Benoni+277-882-255-28 abortion pills for sale in Benoni
%in Benoni+277-882-255-28 abortion pills for sale in Benoni
 
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
%+27788225528 love spells in Atlanta Psychic Readings, Attraction spells,Brin...
 
WSO2CON 2024 Slides - Unlocking Value with AI
WSO2CON 2024 Slides - Unlocking Value with AIWSO2CON 2024 Slides - Unlocking Value with AI
WSO2CON 2024 Slides - Unlocking Value with AI
 
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
 

Explainable AI with H2O Driverless AI's MLI module

  • 6. Terminology, Scope and Context (INTRO)
    – Machine Learning Interpretability: “[Machine learning interpretability] is the ability to explain or present in understandable terms to a human.” – https://arxiv.org/pdf/1702.08608.pdf
    – Structured data: tabular data and supervised ML; no image, video or sound, so deep learning is typically not used.
    – AutoML: the H2O Driverless AI (DAI) product (not OSS).
    – MLI module: the solution is based on the MLI module of H2O Driverless AI.
  • 7. Terminology, Scope and Context (INTRO): end-to-end pipeline diagram, Data Integration & Quality → Feature Engineering → Model Training → Model, with Machine Learning Interpretability spanning the whole flow.
  • 8. Terminology, Scope and Context (INTRO): the same end-to-end pipeline, ending in an explainable model.
  • 10. Potential Performance and Interpretability Trade-off: balancing white-box and black-box models through the choice of feature engineering and algorithm(s).
  • 11. Potential Performance and Interpretability Trade-off (figure).
  • 12. Potential Performance and Interpretability Trade-off (figure).
  • 13. Potential Performance and Interpretability Trade-off, sometimes...
    – Linear models: exact explanations for approximate models (“For a one-unit increase in age, the number of purchases increases by 0.8 on average.”), at the cost of lost profits and wasted marketing where the fit is poor.
    – Machine learning models: approximate explanations for exact models (“The slope begins to decrease here: act to optimize savings.” “The slope begins to increase sharply here: act to optimize profits.”)
    (Figure: number of purchases vs. age under both model families.)
  • 14. Multiplicity of Good Models: for a given well-understood dataset there is usually one best linear model, but...
  • 15. ... for a given well-understood dataset there are usually many good ML models, with the same objective metric values, performance, etc. Which one to choose? This is often referred to as “the multiplicity of good models.” – Leo Breiman
  • 16. Fairness and Social Aspects: protected attributes such as gender, age, ethnicity, health and sexual behavior; avoid discriminatory models and remediate disparate impact.
  • 17. Trust of Model Producers and Consumers: dataset vs. real world, ML adoption, introspection, sensitivity, OOR, diagnostics, “debugging”. (Source: http://www.vias.org/tmdatanaleng/)
  • 18. Security and Hacking: the attacker’s goal is to compromise model integrity. Attack types:
    – Exploratory: a surrogate model is trained to identify vulnerabilities (~ MLI); trial and error for a specific class vs. indiscriminate attacks.
    – Causative: models trained with adversarial datasets; local model → adversarial instance → target model; standard vs. continuous learning.
    – Integrity (compromise system integrity): a false-negative instance, e.g. fraud passes the check.
    – Availability (compromise system availability): a false-positive instance, e.g. legitimate instances are blocked.
  • 19. Regulated and Controlled Environments: legal requirements (banking, insurance, healthcare, ...); prediction explanations and decision justification (reason codes, ...); fairness; security; accuracy first vs. interpretability first (competitions vs. the real world).
  • 20. Explainability Matters: balance performance and interpretability; the multiplicity of good models; fairness and social aspects; trust of model producers and consumers; security and hacking; regulated and controlled environments.
  • 22. Building Human-Centered, Low-Risk Models: the big picture, interpretability focused; MLI module demo only; DAI AutoML models (though the MLI module is not limited to DAI’s models); MLI use-case coverage by DAI; techniques and algorithms; a possible workflow; IID and time-series data; where the MLI module fits in the end-to-end process.
  • 24. Exploratory Data Analysis and Visualization (pipeline: Load data → EDA).
  • 25. Exploratory Data Analysis and Visualization (figure).
  • 26. Exploratory Data Analysis and Visualization (figure).
  • 27. Exploratory Data Analysis and Visualization (figure).
  • 28. Feature Engineering (Manual and AutoML): pipeline Load data → EDA → Feature engineering → Black-box model.
  • 30. Model Choice: Constrained, Simple, Fair: pipeline Load data → EDA → Feature engineering → Models; the holy grail is a white-box model with black-box performance.
  • 31. Model Choice: Constrained, Simple, Fair: interpretable candidates include GLM (logistic regression), monotonic GBM (decision trees), XNN, ...
  • 32. Model Choice: Driverless AI interpretability settings (Interpretability | Ensemble Level | Target Transformation | Feature Engineering | Feature Pre-Pruning | Monotonicity Constraints):
    1–3 | <= 3 | None | | None | Disabled
    4 | <= 3 | Inverse | | None | Disabled
    5 | <= 3 | Anscombe | Clustering (ID, distance), Truncated SVD | None | Disabled
    6 | <= 2 | Logit, Sigmoid | | Feature selection | Disabled
    7 | <= 2 | | Frequency Encoding | Feature selection | Enabled
    8 | <= 1 | 4th Root | | Feature selection | Enabled
    9 | <= 1 | Square, Square Root | Bulk Interactions (add, subtract, multiply, divide), Weight of Evidence | Feature selection | Enabled
    10 | 0 | Identity, Unit Box, Log | Date Decompositions, Number Encoding, Target Encoding, Text (TF-IDF, Frequency) | Feature selection | Enabled
  • 34. Traditional Model Assessment and Diagnostics: pipeline Load data → EDA → Feature engineering → Models → Assessment.
  • 35. Traditional Model Assessment and Diagnostics (figure).
  • 36. Traditional Model Assessment and Diagnostics: experiment summary (document + YAML) plus AutoDoc.
  • 37. Traditional Model Assessment and Diagnostics (figure).
  • 38. Post-hoc Model Explanations and Debugging
    – Post-hoc model debugging: what-if and sensitivity analysis (accuracy).
    – Post-hoc explanations: reason codes.
    – Post-hoc bias assessment and remediation: disparate impact analysis.
    (Pipeline: Load data → EDA → Feature engineering → Models → Assessment → Explanations / Bias remediation / Model debugging.)
  • 39. Human Review: semantics; the pipeline extended with a human-review step.
  • 40. Human Review (figure).
  • 41. Iterative Improvement: iterate over the whole pipeline (Load data → EDA → Feature engineering → Models → Assessment → Human review → Deployment) to improve the model.
  • 42. How: explaining models, a deep dive into the MLI module.
  • 45. H2O Driverless AI’s MLI module: a map of techniques by scope; global approximate model behavior/interactions (surrogate decision tree), global feature importance (surrogate RF, Shapley), global feature behavior (PDP), reason codes (K-LIME), local feature importance, local feature behavior (ICE), and local approximate model behavior.
  • 46. Demo Dataset: Credit Card (IID)
    – ID: ID of each client.
    – LIMIT_BAL: amount of given credit in NT dollars (includes individual and family/supplementary credit).
    – SEX: gender (1=male, 2=female).
    – EDUCATION: 1=graduate school, 2=university, 3=high school, 4=others, 5=unknown, 6=unknown.
    – MARRIAGE: marital status (1=married, 2=single, 3=others).
    – AGE: age in years.
    – PAY_x {1, ..., 6}: repayment status in August 2005 – April 2005 (-1=paid duly, 1=payment delay for 1 month, ..., 8=payment delay for 8 months).
    – BILL_AMTx {1, ..., 6}: amount of bill statement in September 2005 – April 2005 (NT dollar).
    – PAY_AMTx {1, ..., 6}: amount of previous payment in September 2005 – April 2005 (NT dollar).
    – default_payment_next_month: default payment (1=yes, 0=no).
    Features: education, marriage, age, sex, repayment status, limit balance, ... Target: default payment next month (binary). Predictions: probability (0...1).
  • 47. Global Approximate Model Behavior/Interaction
    – Challenge: black-box models; original vs. transformed features.
    – Solution: surrogate models. Pros: increases any black-box model’s interpretability; low time complexity. Cons: accuracy (the surrogate only approximates the explained model).
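The surrogate idea can be sketched in a few lines: fit a simple, interpretable model to the black box's predictions (not to the labels) and read the explanation off the simple model. The toy below is not the MLI implementation; it uses the smallest possible surrogate, a one-split regression stump, and a hypothetical `black_box` function that we pretend we can only call, not inspect.

```python
import numpy as np

def fit_stump(X, y, j):
    """Fit a one-split regression tree ('stump') on feature j: pick the
    threshold that minimizes the squared error of the two leaf means."""
    best_sse, best = np.inf, None
    for t in np.unique(X[:, j])[:-1]:          # [:-1] keeps the right leaf non-empty
        left, right = y[X[:, j] <= t], y[X[:, j] > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (t, left.mean(), right.mean())
    return best  # (threshold, left leaf value, right leaf value)

# Hypothetical black box: a step function on the first feature.
def black_box(X):
    return np.where(X[:, 0] > 0.5, 2.0, -1.0)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))
y_hat = black_box(X)                           # surrogate is fit to predictions
threshold, left, right = fit_stump(X, y_hat, j=0)
```

The stump recovers the black box's decision boundary (threshold near 0.5, leaves near -1 and 2), which illustrates both the pro (interpretability for any black box) and the con (a small surrogate can only approximate a complex model).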
  • 49. Global Feature Importance: Random Forest
    – Challenges: black-box models; original vs. transformed features.
    – Solution: a surrogate RF (introspection). Pros: works on original features; low time complexity. Cons: accuracy.
  • 50. Global Feature Importance: Original Model
    – Solution: introspection of the original (DAI) model. Pros: accuracy. Cons: reports transformed features; global scope only.
  • 51. Global Feature Importance: Shapley Values
    – Solution: Shapley values. Pros: accuracy; mathematical correctness. Cons: time complexity; reports transformed features.
  • 52. Shapley Values
    – Lloyd Shapley: an American mathematician who won the 2012 Nobel Memorial Prize in Economic Sciences; Shapley values come from his Ph.D. work in the 1950s.
    – Supported by solid mathematical (game) theory.
    – Exact calculation has exponential time complexity (in the number of coalitions), so it is typically unrealistic to compute exactly in the real world.
    – Can be computed in global or local scope.
    – Guarantee a fair distribution of the prediction among the features of an instance.
    – Do not work well in sparse cases; all features must be used.
    – Return a single value per feature, not a model.
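To make the exponential cost concrete, here is a minimal exact Shapley computation that enumerates every coalition: 2^n model calls per feature set, which is why practical tools use sampling or tree-specific shortcuts instead. The `predict` function and baseline row are illustrative assumptions, not part of the MLI module.

```python
import itertools
from math import factorial
import numpy as np

def exact_shapley(predict, x, baseline):
    """Exact Shapley values by enumerating every feature coalition.
    Features outside the coalition are set to the baseline (reference)
    row. Cost is exponential in the number of features."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [f for f in range(n) if f != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i, without_i = baseline.copy(), baseline.copy()
                for f in S:                 # coalition features take x's values
                    with_i[f] = x[f]
                    without_i[f] = x[f]
                with_i[i] = x[i]            # marginal contribution of feature i
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Illustrative linear model: for f(z) = 3 z0 - 2 z1 + 0.5 z2, the Shapley
# value of feature i is exactly w_i * (x_i - baseline_i).
def f(z):
    return 3 * z[0] - 2 * z[1] + 0.5 * z[2]

x = np.array([1.0, 2.0, 4.0])
b = np.array([0.0, 1.0, 2.0])
phi = exact_shapley(f, x, b)
```

The fairness ("efficiency") property from the slide is easy to check: the values sum to f(x) - f(baseline), i.e. the whole prediction gap is distributed among the features.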
  • 53. Feature Importance: Leave One Covariate Out (LOCO)
    – Use case: complement the other feature-importance charts with a bias tendency.
    – Challenge: black-box models.
    – Solution: LOCO.
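A sketch of the LOCO idea, with one simplifying assumption: true LOCO refits the model without the covariate, while this cheap per-row variant just replaces the value with the column mean. It keeps the signed, per-feature flavour that makes LOCO a useful complement to unsigned importance charts.

```python
import numpy as np

def loco_importance(predict, X, row):
    """Per-row LOCO approximation: the signed change in the prediction
    when each covariate is 'left out' (replaced by its column mean)."""
    base = predict(row[None, :])[0]
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        masked = row.copy()
        masked[j] = X[:, j].mean()
        scores[j] = base - predict(masked[None, :])[0]
    return scores

# Illustrative black box: a linear model, for which this LOCO variant
# reduces to w_j * (x_j - mean_j), so the result is easy to verify.
w = np.array([2.0, -1.0, 0.0])
def predict(X):
    return X @ w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
row = X[0]
scores = loco_importance(predict, X, row)
```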
  • 55. Global Feature Importance, methods summary: surrogate models (RF introspection, Leave One Covariate Out), original-model introspection, and Shapley values.
  • 56. Global Feature Behavior: Partial Dependence Plot
    – Solution: surrogate-model PDP. Pros: low time complexity; original features; interpretability of both white- and black-box models. Cons: accuracy.
  • 57. Global Feature Behavior: Partial Dependence Plot (figure: model prediction vs. Xj).
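The PDP computation behind the "model prediction vs. Xj" plot is a short loop: fix feature j at each grid value for the whole dataset, average the predictions, and repeat. The quadratic model `f` below is a hypothetical stand-in for any scored model.

```python
import numpy as np

def partial_dependence(predict, X, j, grid):
    """Partial dependence of feature j: for each grid value, overwrite
    column j for the WHOLE dataset and average the model's predictions.
    Averaging over rows is what makes the curve a global summary."""
    curve = np.empty(len(grid))
    for k, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, j] = v
        curve[k] = predict(Xv).mean()
    return curve

# Illustrative model f(X) = X0^2 + X1: the PDP over feature 0 should be
# the parabola grid^2 shifted by the mean of feature 1.
def f(X):
    return X[:, 0] ** 2 + X[:, 1]

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
grid = np.linspace(-2, 2, 9)
pd_curve = partial_dependence(f, X, 0, grid)
```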
  • 58. PDP: Character of the Feature Behavior (figure).
  • 59. Reason Codes: Local Feature Importance: position in the MLI technique map (K-LIME among the global and local methods).
  • 60. Reason Codes: Local Feature Importance
    – Use cases: prediction explanations (legal), debugging, drill-down, ...
    – From global to local scope.
    – Surrogate methods: K-LIME (K-means) and LIME-SUP (trees).
  • 61. LIME: Local Interpretable Model-agnostic Explanations (source: https://github.com/marcotcr/lime)
    – A weighted linear surrogate model is used to explain a non-linear decision boundary in a local region, one prediction at a time.
    – Example:
      1. A set of explainable records is scored using the original model.
      2. To interpret a decision about another record, the explanatory records are weighted by their closeness to that record.
      3. An L1-regularized linear model is trained on this weighted explanatory set.
      4. The parameters of the linear model then help explain the prediction for the selected record.
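The four steps above can be sketched directly, with one stated simplification: real LIME fits an L1-regularized linear model, while this dependency-free version uses plain weighted least squares. The kernel width and the globally-linear `predict` used in the demo are illustrative assumptions.

```python
import numpy as np

def lime_explain(predict, x, X_explain, kernel_width=0.75):
    """Weighted linear surrogate around instance x:
    1. score the explanatory records with the original model,
    2. weight them by an RBF kernel on their distance to x,
    3. fit a weighted linear model on the weighted set
       (weighted least squares here instead of LIME's L1 fit)."""
    y = predict(X_explain)
    dist = np.linalg.norm(X_explain - x, axis=1)
    sw = np.sqrt(np.exp(-(dist ** 2) / kernel_width ** 2))[:, None]
    A = np.hstack([X_explain, np.ones((len(X_explain), 1))])
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:-1], coef[-1]  # local feature weights, intercept

# Sanity-check setup: if the original model is globally linear, the
# local surrogate should recover its coefficients exactly.
true_w = np.array([1.5, -3.0])
def predict(X):
    return X @ true_w + 0.5

rng = np.random.default_rng(3)
X_explain = rng.normal(size=(400, 2))
x = np.array([0.2, -0.1])
w_local, b_local = lime_explain(predict, x, X_explain)
```

The recovered `w_local` are exactly the reason-code style coefficients: signed, per-feature contributions valid in the neighborhood of the explained record.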
  • 63. Reason Codes: Local Feature Importance: use cases are prediction explanations (legal), debugging, drill-down, ...; moving from the global explanatory model to cluster-scoped explanatory models, i.e. from global to local scope.
  • 64. Reason Codes: Local Feature Importance
    – Challenges: black-box models; original vs. transformed features.
    – Solution: a K-LIME surrogate model producing reason codes. Pros: original features; low time complexity. Cons: accuracy.
  • 65. Local Approximate Model Behavior/Interaction
    – Use case: explanation of a particular instance; note the thickness of the path segments in the surrogate tree.
    – Challenge: black-box models.
    – Solution: surrogate models. Pros: black-box model interpretability; low time complexity. Cons: accuracy.
  • 66. Local Feature Importance
    – Mean absolute value vs. local contributions.
    – Challenge: black-box models; original vs. transformed features.
    – Solutions: surrogate models (RF introspection, Leave One Covariate Out) and Shapley values.
  • 67. Local Feature Behavior: ICE
    – Solution: surrogate-model ICE. Pros: low time complexity; original features; interpretability of both white- and black-box models. Cons: accuracy (a discrepancy between the dotted line and the gray dot).
  • 68. ICE: Individual Conditional Expectations (figure: model prediction vs. Xj), with the same pros and cons as above.
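ICE is the per-row counterpart of the PDP loop: instead of averaging over rows, keep one curve per row. The tiny interaction model `f` below is a hypothetical example chosen to show what ICE reveals and the PDP hides.

```python
import numpy as np

def ice_curves(predict, X, j, grid):
    """One curve per row: sweep feature j over the grid while holding
    the row's other features at their observed values. The PDP is the
    row-wise mean of the ICE curves; rows whose curves diverge reveal
    interactions that the averaged PDP hides."""
    curves = np.empty((X.shape[0], len(grid)))
    for k, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, j] = v
        curves[:, k] = predict(Xv)
    return curves

# Illustrative interaction model f(X) = X0 * X1: each row's ICE slope
# for feature 0 equals its own X1 value, so the two rows below produce
# curves of opposite slope while their average (the PDP) is flat.
def f(X):
    return X[:, 0] * X[:, 1]

X = np.array([[0.0, 1.0],
              [0.0, -1.0]])
grid = np.array([0.0, 1.0])
curves = ice_curves(f, X, 0, grid)
```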
  • 70. MLI for Time Series
    – Time-series experiments: test dataset.
    – Explainability: the original model; global and per-group scope; forecast horizon; feature importance (per-group); local Shapley values.
  • 72. Confidential75 MLI Functional Architecture
[Figure: MLI-2 functional architecture flow diagram. The user model M_U (a GBM fit on transformed features) produces predictions Ŷ_U; H2O-3 surrogate models — one decision tree, one random forest, one global GLM, and k local GLMs (k-LIME) — are then fit on (X, Ŷ_U). Each explanation artifact is annotated with how it is tested (T: e.g. RMSE, R²) and what it explains (E): global and local k-LIME plots, global and local reason codes, global and local surrogate decision trees, PDP and ICE plots, global and local feature importance, global and local LOCO contributions, and global and local Shapley values computed via the user model's GBM.]
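The surrogate flow in the architecture diagram can be sketched as follows: fit the complex "user" model, then fit simple surrogates on the user model's *predictions* rather than on the labels, and judge each surrogate by how closely it tracks the user model. This is a minimal sketch with scikit-learn standing in for the H2O-3 surrogates; all data and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)

# 1. "User model" M_U -- the complex model to be explained.
m_user = GradientBoostingRegressor(random_state=0).fit(X, y)
y_hat = m_user.predict(X)  # Ŷ_U

# 2. Global surrogates are fit on (X, Ŷ_U), not on (X, y).
dt = DecisionTreeRegressor(max_depth=3).fit(X, y_hat)      # surrogate DT
rf = RandomForestRegressor(random_state=0).fit(X, y_hat)   # feature importance
glm = LinearRegression().fit(X, y_hat)                     # global GLM

# 3. k-LIME: cluster the rows, fit one local GLM per cluster; each local
#    GLM's coefficients serve as reason codes for rows in that cluster.
k = 4
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
local_glms = {c: LinearRegression().fit(X[clusters == c], y_hat[clusters == c])
              for c in range(k)}

# Surrogate quality is measured against the user model (the T: RMSE, R² boxes).
print("surrogate DT R2 vs. user model:", dt.score(X, y_hat))
```

The key design point the diagram encodes is that surrogates approximate the *model*, not the data, so their fit statistics quantify how trustworthy the explanations are.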
  • 74. TAKEAWAYS • ML interpretability matters. • Beware the multiplicity of good models. • H2O Driverless AI has interpretability built in. • Control model interpretability end to end. • Prefer interpretable models. • Test both your model and your explanatory software. • Combine local and global techniques for synergy. • Use Shapley values.
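The last takeaway deserves a concrete illustration: Shapley values are additive local contributions, so for each row the contributions plus the average prediction reconstruct the model's prediction exactly. A minimal sketch, using the special case of a linear model where the exact Shapley value of feature i is coef_i · (x_i − mean_i) and no sampling is needed; the data is synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
base = model.predict(X).mean()  # average prediction (the "bias" term)

# Exact Shapley values for an additive linear model: coef_i * (x_i - E[x_i]).
phi = model.coef_ * (X - X.mean(axis=0))

# Additivity: bias + per-feature contributions reproduces every prediction.
recon = base + phi.sum(axis=1)
print(np.allclose(recon, model.predict(X)))  # True
```

For nonlinear models the same additivity property holds, but the values must be computed with a dedicated method (e.g. TreeSHAP for tree ensembles, which is what tree-based Shapley implementations typically use).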
  • 75. MLI TEAM Patrick Navdeep Mateusz Zac Laco Martin
  • 78. Confidential81 Confidential81 • https://www.h2oai.com/explainable-ai/ • Booklets: – Machine Learning Interpretability with DAI – Ideas on Interpreting Machine Learning • Driverless AI’s MLI module cheatsheet • MLI presentations: – MLI walkthrough by Patrick Hall – Human Friendly Machine Learning by Patrick Hall • GitHub repositories: – MLI Resources – H2O Meetups Resources