Achieving algorithmic transparency with Shapley Additive Explanations (H2O London AI & DL Meetup)
Contents:
• Trade-off between Explainability & Performance
• Shapley additive explanations
• SHAP: illustrative example
• XAI use case 1: clinical operations
• XAI use case 2: driver genome
• Future developments
All content copyright © 2017 QuantumBlack, a McKinsey company
Explainability vs Performance trade-off

Natural trade-off between these two concepts. (Diagram: models placed on a performance–explainability spectrum, with neural networks and random forests at the high-performance end and linear regression and logistic regression at the high-explainability end.)

• Performance: provide highly accurate solutions for our clients
• Explainability: explain why our models perform as they do. How can we interpret the contribution of drivers in black-box models? How can we get drivers for an individual prediction?
• Local interpretations of black-box models
Explainability vs Performance trade-off: integration

Natural trade-off between these two concepts — can the two be combined? (Diagram: the same performance–explainability spectrum — neural networks, random forests, linear regression, logistic regression — with a question mark where local interpretations of black-box models would deliver both.)

• Performance: provide highly accurate solutions for our clients
• Explainability: explain why our models perform as they do. How can we interpret the contribution of drivers in black-box models? How can we get drivers for an individual prediction?
• Local interpretations of black-box models?
Methods for XAI

LIME (Local Interpretable Model-agnostic Explanations)¹

Figure 3 (from Ribeiro et al.¹): Toy example to present intuition for LIME. The black-box model's complex decision function f (unknown to LIME) is represented by the blue/pink background, which cannot be approximated well by a linear model. The bold red cross is the instance being explained. LIME samples instances, gets predictions using f, and weighs them by their proximity to the instance being explained.

Algorithm 1: Sparse Linear Explanations using LIME
Require: Classifier f, Number of samples N
Require: Instance x, and its interpretable version x′
Require: Similarity kernel π_x, Length of explanation K
  Z ← {}
  for i ∈ {1, 2, 3, ..., N} do
      z′_i ← sample_around(x′)
      Z ← Z ∪ ⟨z′_i, f(z_i), π_x(z_i)⟩
  end for
  w ← K-Lasso(Z, K)        ▷ with z′_i as features, f(z) as target
  return w

1. Ribeiro et al., "Why Should I Trust You?": Explaining the Predictions of Any Classifier, https://arxiv.org/abs/1602.04938
2. Lei et al., Rationalizing Neural Predictions, https://arxiv.org/abs/1606.04155
3. Letham et al., Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model, https://arxiv.org/abs/1511.01644
4. Lundberg and Lee, A unified approach to interpreting model predictions, http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions
5. Ribeiro et al., Anchors: High-Precision Model-Agnostic Explanations, https://homes.cs.washington.edu/~marcotcr/aaai18.pdf
6. All images taken from the respective publications/GitHub repos
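As a concrete illustration of Algorithm 1, here is a minimal sketch using the open-source `lime` package; the dataset, classifier, and K = 5 are placeholders chosen for the example, not taken from the slides.

```python
# Minimal LIME sketch: sample around an instance, fit a K-sparse local
# linear model, and read off the (feature, weight) explanation.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()                      # placeholder dataset
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(data.data,
                                 feature_names=list(data.feature_names),
                                 class_names=list(data.target_names),
                                 mode="classification")

# Explain one instance x with a sparse local linear model of length K.
exp = explainer.explain_instance(data.data[0],
                                 model.predict_proba,
                                 num_features=5)   # K = 5
print(exp.as_list())   # (feature, weight) pairs of the local explanation
```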
Contents:
• Trade-off between Explainability & Performance
• Shapley additive explanations
• SHAP: illustrative example
• XAI use case 1: clinical operations
• XAI use case 2: driver genome
• Future developments
SHAP (Shapley Additive Explanations)

Explanation model¹ (class of additive feature attribution models):

$$g(x') = \varphi_0 + \sum_{i=1}^{M} \varphi_i x'_i$$

Shapley regression values², φ_i ∈ ℝ — a unified measure of additive feature attributions:

$$\varphi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(M - |S| - 1)!}{M!}\,\left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]$$

Three properties:

• Local accuracy: requires the output of the local explanation model g(x′) to match the original model f(x), where x′ is the simplified input, i.e. f(x) = g(x′).

• Missingness: requires features missing in the original input to have no attributed impact, i.e. x′_i = 0 ⟹ φ_i = 0.

• Consistency: stipulates φ_i(f′, x) ≥ φ_i(f, x) for any two models f and f′ whenever the ith feature's contribution in f′ is at least its contribution in f.

1. Lundberg and Lee (2017), "A unified approach to interpreting model predictions", https://arxiv.org/abs/1705.07874
2. Lloyd S. Shapley (1953), "A value for n-person games", in Contributions to the Theory of Games, 2:28, pp. 307–317
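The local-accuracy property can be sanity-checked directly. Below is a minimal sketch, not from the slides, using synthetic data and the open-source `shap` package: the base value φ₀ plus the per-feature SHAP values should reproduce the model's output for every instance.

```python
# Minimal sketch: verify local accuracy, f(x) = phi_0 + sum_i phi_i,
# on synthetic data (an assumption; the slides only state the property).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.randn(500, 5)                            # synthetic tabular features
y = X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.randn(500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
phi = explainer.shap_values(X)                   # one phi_i per feature per row

# phi_0 is the expected model output over the background data.
reconstructed = explainer.expected_value + phi.sum(axis=1)
assert np.allclose(reconstructed, model.predict(X), atol=1e-6)
```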
Computing SHAP values

SHAP values — a unified measure of additive feature attributions, φ_i ∈ ℝ:

$$\varphi_i = \sum_{S \subseteq F \setminus \{i\}} \underbrace{\frac{|S|!\,(M - |S| - 1)!}{M!}}_{\text{weighted average over all subsets } S} \big[\, \underbrace{f_{S \cup \{i\}}(x_{S \cup \{i\}})}_{\text{output with the } i\text{th feature}} - \underbrace{f_S(x_S)}_{\text{output without the } i\text{th feature}} \,\big]$$

where
• F = {all input features}
• S ⊆ F \ {i} = a subset of the input features
• M = |F| = number of input features

Computing SHAP values:
• f_{S∪{i}} is trained with the ith feature present
• f_S is trained without the ith feature
• compute the difference f_{S∪{i}}(x_{S∪{i}}) − f_S(x_S) for the current input
• retrain the models on all feature subsets S ⊆ F \ {i}
• take the weighted average of all possible differences
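The exponential-cost procedure in the bullets above can be written down directly. A minimal sketch, assuming a linear base model, a small M, and retraining per subset; everything here is illustrative, not the slides' implementation (in practice the `shap` package avoids the 2^M retraining cost).

```python
# Exact, exponential-time Shapley values: retrain a model for every feature
# subset S and average the weighted marginal contributions of feature i.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LinearRegression


def shapley_value(X, y, x, i):
    """phi_i for instance x, retraining a LinearRegression per subset S."""
    M = X.shape[1]
    others = [j for j in range(M) if j != i]
    phi = 0.0
    for size in range(M):                        # |S| runs over 0 .. M-1
        for S in combinations(others, size):
            weight = factorial(size) * factorial(M - size - 1) / factorial(M)
            f_without = _subset_prediction(X, y, x, list(S))
            f_with = _subset_prediction(X, y, x, list(S) + [i])
            phi += weight * (f_with - f_without)
    return phi


def _subset_prediction(X, y, x, cols):
    """f_S(x_S): prediction of a model trained only on the features in S."""
    if not cols:
        return float(np.mean(y))                 # empty subset: base rate
    model = LinearRegression().fit(X[:, cols], y)
    return float(model.predict(x[cols].reshape(1, -1))[0])

# Usage on synthetic data: phi_2 = shapley_value(X, y, X[0], i=2)
```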
SHAP in practice

pip install shap
+ beautiful out-of-the-box JS visualisations

Implementations:

Kernel SHAP¹ = LIME + Shapley values,
where the loss function L, weighting kernel π, and regularisation term Ω are chosen so that the LIME objective recovers the Shapley values.

Deep SHAP¹ = DeepLIFT + Shapley values

Linear SHAP¹, Low-Order SHAP¹, Max SHAP¹

Tree SHAP²
• speed-up from O(TL·2^M) to O(TLD²)

1. Lundberg and Lee (2017), "A unified approach to interpreting model predictions", https://arxiv.org/abs/1705.07874
2. Lundberg, Erion and Lee (2018), "Consistent Individualized Feature Attribution for Tree Ensembles", https://arxiv.org/abs/1802.03888
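A short sketch of the two main entry points named above, assuming a generic sklearn model with placeholder data: Kernel SHAP is model-agnostic and needs only a prediction function plus a background sample, while Tree SHAP exploits the tree structure for the quoted speed-up.

```python
# Kernel SHAP (model-agnostic) vs. Tree SHAP (tree-ensemble-specific).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.randn(200, 4)
y = X[:, 0] - X[:, 2] + 0.1 * rng.randn(200)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Kernel SHAP: only a prediction function and a background sample are needed
# to integrate out "missing" features.
kernel_explainer = shap.KernelExplainer(model.predict, shap.sample(X, 50))
phi_kernel = kernel_explainer.shap_values(X[:5])

# Tree SHAP: exact and fast for tree ensembles.
tree_explainer = shap.TreeExplainer(model)
phi_tree = tree_explainer.shap_values(X[:5])
```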
SHAP: illustrative example
Task: given demographic data, classify adult income groups
Dataset: US 1994 Census data1
Base model: sklearn Random Forest classifier
Explanation model: TreeSHAP
1. 1994 Census database, donated by B. Becker: https://archive.ics.uci.edu/ml/datasets/Adult
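A sketch of this setup, assuming the copy of the Adult data bundled with the `shap` package and default hyperparameters; the slides do not show their exact preprocessing.

```python
# Illustrative setup: 1994 Census ("Adult") data, sklearn Random Forest,
# explained with Tree SHAP.
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = shap.datasets.adult()          # features and the >$50k income label
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
# For classifiers, classic shap returns one attribution array per class.
shap_values = explainer.shap_values(X_test)
```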
RF importance scores vs. SHAP's explanations

'Native' RF feature importance scores (bar chart) alongside the SHAP summary plot:

Feature          RF importance
Country          0.002
Race             0.002
Workclass        0.008
Occupation       0.017
Sex              0.022
Age              0.065
Hours per week   0.076
Capital Loss     0.081
Education-Num    0.138
Capital Gain     0.283
Marital Status   0.307
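A comparison like this slide's can be reproduced by putting the RF's impurity-based importances next to the mean absolute SHAP value per feature. The sketch below assumes the `model`, `explainer`, and `shap_values` from the previous snippet, and classic shap's list-per-class output for classifiers.

```python
# Native RF importances vs. mean |SHAP| per feature (positive class).
import numpy as np
import pandas as pd
import shap

native = pd.Series(model.feature_importances_, index=X_test.columns)

# shap_values is assumed to be [class 0, class 1]; average absolute
# attributions over rows for the >$50k class.
mean_abs_shap = pd.Series(np.abs(shap_values[1]).mean(axis=0),
                          index=X_test.columns)

print(pd.DataFrame({"rf_importance": native,
                    "mean_abs_shap": mean_abs_shap}).sort_values("mean_abs_shap"))

# The summary plot shown on the slide:
shap.summary_plot(shap_values[1], X_test)
```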
SHAP: individualised explanations

Typical example.

Legend for categorical values:
Sex: 0 = F, 1 = M | Marital status: 2 = never married, 3 = married, spouse absent | Occupation: 0 = Adm-clerical, 5 = Sales

(Plot 1: low probability of >$50k earnings. Plot 2: high probability of >$50k earnings.)
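An individualised explanation like the ones on this slide can be rendered with shap's JS force plot; the sketch below assumes the `explainer`, `shap_values`, and `X_test` from the earlier snippets, and classic shap's per-class output format.

```python
# Render the individualised explanation for one person.
import shap

shap.initjs()                                   # load the JS visualisation layer
i = 0                                           # any row of the test set
shap.force_plot(explainer.expected_value[1],    # base rate for the >$50k class
                shap_values[1][i, :],           # per-feature attributions, row i
                X_test.iloc[i, :])              # the feature values themselves
```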
SHAP: individualised explanations across the cohort

(Plot: individual explanations stacked across the whole cohort; y-axis: net Shapley value.)
Contents:
• Explainability vs Performance trade-off
• Shapley additive explanations
• SHAP: illustrative example
• Use case 1: clinical operations
• Use case 2: driver genome
• Future developments
Use case: Clinical Operations
Task: given a new drug trial, predict which hospitals will enrol more patients (high enrolment rate).
Hospital A Hospital B
Two questions raised frequently by our client

How can we interpret the predictions given by your black-box model? What drives the direction (high or low) of the predicted enrolment rate?

How can we get personalised explanations? Why does hospital B enrol more patients than hospital A?
Local explanations (enrolment rate)

Hospital A — local prediction = 4.96 pt/month, black-box prediction = 4.99 pt/month (pt = patients)
Top drivers:
• number of trials in city
• prior PI experience
• feature E
• feature F
• feature G

Hospital B — local prediction = 2.07 pt/month, black-box prediction = 2.19 pt/month
Top drivers:
• number of trials in city
• feature B
• feature C
• feature D
• prior activation time
Contents:
• Explainability vs Performance trade-off
• Shapley additive explanations
• Use case 1: clinical operations
• Use case 2: driving safety
• Future developments
Use case: Driving Safety
Journey A Journey B
Two questions raised frequently by our client

How can we interpret the predictions given by your black-box model? What drives the direction (high or low) of the predicted probability of an accident?

Drivers can use personalised explanations to improve their driving behaviour. Also, if their insurance premium changes, they have the right to know the cause.
The proposed solution uses logistic regression and a random forest, plus LIME/SHAP for interpretability

Pipeline (~1500 raw features → ~100 selected features):

• Feature engineering: creation of single and multi-dimensional features based on a hypotheses tree
• Logistic regression: elastic-net regularisation to prevent overfitting to the training set
• Random forest: can discover complex non-linear relationships between features
• LIME/SHAP: personalised interpretations of the RF results

SOURCE: Team analysis
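A minimal sketch of this modelling stack under stated assumptions (synthetic stand-ins for the ~100 engineered features and the accident label; the real features are not public): elastic net guards the linear model against overfitting, the random forest captures non-linear structure, and SHAP explains it per driver.

```python
# Elastic-net logistic regression + random forest, explained with Tree SHAP.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.randn(1000, 100)                       # stand-in for ~100 engineered features
y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(int)   # stand-in accident label

# Interpretable baseline with elastic-net regularisation.
lr = LogisticRegression(penalty="elasticnet", solver="saga",
                        l1_ratio=0.5, C=1.0, max_iter=5000).fit(X, y)

# Higher-capacity model, explained per driver.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(rf).shap_values(X[:10])   # one driver profile each
```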
Example of a driver profile

Top features in the personalised explanation:
• rest time
• driving experience
• duration of driving per month
• feature D
• feature E
• feature F
• feature G
• feature H
• feature I
• feature J
Contents:
• Explainability vs Performance trade-off
• Shapley additive explanations
• SHAP: illustrative example
• Use case 1: clinical operations
• Use case 2: driving safety
• Future developments
Future developments

• Model-agnostic: LIME, Kernel SHAP ← drive development & community interest
• Model-specific: DeepLIFT (NNs), TreeSHAP (XGBoost, RFs) ← drive implementation

SOURCE: Team analysis
Individualised explanations ≠ Transparency

(Diagram, after Ribeiro et al.: the LIME "flu" example — from symptoms such as sneeze, headache, weight, age, and fatigue, the model predicts flu; the explainer surfaces sneeze and headache as evidence for the prediction and "no fatigue" as evidence against it.)

Explaining one instance: Model → Data and prediction → Explainer (LIME) → Explanation → Human makes decision (explain one instance using the explanation model).

Explaining the model: Model → Dataset and predictions → Pick step (pick a number of representative examples from the dataset) → Explanations → Human makes decision.
Explanation model ≠ causality

(Diagram: an illustrative causal graph over nodes such as Age, Customer Tenure, Count of complaints, Number of meetings, Sales meetings for product A, Number of emails, Count of customers p.m., Deals closed, Performance rating, Revenue, and anonymised Features 1–11 — a reminder that feature attributions from an explanation model do not establish causal structure.)

SOURCE: Team analysis
Questions?
