Agenda
1. Introduction - Why do we need Model Interpretability?
2. ELI5 and Permutation Importance
3. LIME - Local Interpretable Model-Agnostic Explanations
4. SHAP - Shapley Additive Explanations
5. Microsoft InterpretML
Why interpretability is important
Data science algorithms are used to make a lot of critical decisions in various industries:
• Credit rating
• Disease diagnosis
• Banks (loan processing)
• Recommendations (products, movies)
• Crime control
Why interpretability is important
• Explain decisions
• Improve models
• Strengthen trust & transparency
• Satisfy regulatory requirements
Fairness & Transparency in ML Models
Example: predicting employee performance in a big company
Input data: past performance reviews of the employees for the last 10 years
Caveat: what if the company promotes more men than women?
Output: the model will learn that bias and predict more favourable results for men
"Models are Opinions Embedded in Mathematics" (Cathy O'Neil)
Some models are easy to interpret
• Linear / Logistic Regression: the rule can be read from the coefficients, e.g. predict one class IF 5X1 + 2X2 > 0, OTHERWISE the other (here X1 = Savings, X2 = Income)
• Decision Trees: the prediction follows readable splits such as X1 > 0.5 and X2 > 0.5
Some models are harder to interpret
Ensemble models (random forest, boosting, etc.)
1. Hard to understand the role of each feature
2. Usually come with feature importance
3. Feature importance doesn't tell us whether a feature affects the decision positively or negatively
Deep Neural Networks
1. No interpretable relationship between the input layer and the output
2. Black-box
Should we use only simple models?
• Use simple models like linear or logistic regression, so that you can explain the results
• Or use complex models together with interpretability techniques to explain the results
Types of Interpretations
● Local: explain how and why a specific prediction was made
○ Why did our model predict a certain outcome for a given customer?
● Global: explain how our model works overall
○ What features are important for our model and how do they impact the prediction?
○ Feature importance provides magnitude, not direction of impact
ELI5 & Permutation Importance
• Can be used to interpret sklearn-like models
• Works for both regression & classification models
• Creates nice visualisations for white-box models
o Can be used for local and global interpretation
• Implements Permutation Importance for black-box models (e.g. boosting)
o Only global interpretation
ELI5 – White-box Models
Explain model globally (feature importance):
import eli5
eli5.show_weights(model)
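A minimal runnable sketch of the global view, assuming eli5 and scikit-learn are installed (the dataset and model choice are illustrative, not from the original slides):

import eli5
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

data = load_breast_cancer()
model = LogisticRegression(max_iter=5000).fit(data.data, data.target)

# Global white-box view: one weight per feature (renders an HTML table in a notebook)
eli5.show_weights(model, feature_names=list(data.feature_names))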
ELI5 – White-box Models
Explain the model locally (single prediction):
import eli5
eli5.show_prediction(model, observation)
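Continuing the sketch above with the same fitted model (the observation index is arbitrary):

# Local view: per-feature contributions to the prediction for a single observation
eli5.show_prediction(model, data.data[0], feature_names=list(data.feature_names))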
ELI5 – Permutation Importance
• Model agnostic
• Provides global interpretation
• For each feature:
o Shuffle its values in the provided dataset
o Generate predictions with the model on the modified dataset
o Compute the decrease in score (e.g. accuracy) compared to before shuffling
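A from-scratch sketch of this procedure for intuition, assuming an already fitted classifier `model` and a pandas DataFrame `X` with labels `y` (ELI5's own implementation differs in details, e.g. it repeats the shuffling several times):

import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance(model, X, y, metric=accuracy_score):
    # Score before shuffling
    baseline = metric(y, model.predict(X))
    importances = {}
    for col in X.columns:
        X_shuffled = X.copy()
        # Shuffle a single feature, breaking its relationship with the target
        X_shuffled[col] = np.random.permutation(X_shuffled[col].values)
        # Importance = drop in score caused by the shuffle
        importances[col] = baseline - metric(y, model.predict(X_shuffled))
    return importances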
ELI5 – Permutation Importance
import eli5
from eli5.sklearn import PermutationImportance

perm = PermutationImportance(model)  # model is assumed to be already fitted
perm.fit(X, y)
eli5.show_weights(perm)
LIME - Local Interpretable Model-Agnostic
Explanations
Local: Explains why a single data point was classified as a
specific class
Model-agnostic: Treats the model as a black-box. Doesn’t need to
know how it makes predictions
Paper “Why should I trust you?” published in August 2016.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Being Model-Agnostic
No assumptions about the internal structure of the model f: the explainer only sees the data going in and the decision coming out (Data → f(x) → Decision).
Explain any existing, or future, model.
Being Local & Model-Agnostic
Source: https://www.youtube.com/watch?v=LAm4QmVaf0E&t=3658s
“Global” explanation is too complicated
The explanation is an interpretable model that is locally accurate.
LIME - How does it work?
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
How LIME Works
1. Permute data - create a new dataset around the observation by sampling from the distribution learnt on the training data (in the perturbed dataset, numerical features are sampled from their distribution and categorical values according to their frequency of occurrence)
2. Calculate the distance between the perturbed samples and the original observation
3. Use the model to predict probabilities on the new points; that's our new y
4. Pick the m features that best describe the complex model's outcome on the permuted data
5. Fit a linear model on the data in those m dimensions, weighted by similarity
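A minimal sketch of these steps with the lime library, assuming a fitted scikit-learn classifier `model` trained on pandas DataFrames `X_train` / `X_test` (all names are illustrative):

from lime.lime_tabular import LimeTabularExplainer

# The explainer learns feature distributions from the training data (used in step 1)
explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X_train.columns),
    mode="classification",
)

# Perturb one observation, query the model, and fit a locally weighted linear model (steps 2-5)
exp = explainer.explain_instance(X_test.values[0], model.predict_proba, num_features=5)
print(exp.as_list())        # (feature, weight) pairs of the local linear model
# exp.show_in_notebook()    # renders the plot described below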
● The central plot shows the importance of the top features for this prediction
○ The value corresponds to the weight in the local linear model
○ The direction corresponds to whether the feature pushes the prediction one way or the other
● Numerical features are discretized into bins
○ Easier to interpret: weight == contribution / sign of weight == direction
How LIME Works
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
LIME - Can be used on images
LIME Available Explainers
LIME supports many types of data:
● Tabular Explainer
● Recurrent Tabular Explainer
● Image Explainer
● Text Explainer
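For reference, a sketch of where these explainers live in the lime package (module paths from recent lime releases; check your installed version):

from lime.lime_tabular import LimeTabularExplainer, RecurrentTabularExplainer
from lime.lime_text import LimeTextExplainer
from lime.lime_image import LimeImageExplainer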
Shapley Additive Explanations (SHAP)
● An explanation model is a simpler model that is a good approximation of our complex model.
● "Additive feature attribution methods": the local explanation is a linear combination of the features.
(Diagram: Dataset → Complex Model → Prediction, with an Explainer producing the Explanation)
Shapley Additive Explanations (SHAP)
● For a given observation, we compute a SHAP value per feature, such that:
sum(SHAP_values_for_obs) = prediction_for_obs - model_expected_value
● The model's expected value is the average prediction made by our model
How much has each feature contributed to the prediction compared to the average prediction?
• House price prediction: 400,000 €
• Average house price prediction for all the apartments: 420,000 €
• Delta in price: -20,000 €
How much has each feature contributed to the prediction compared to the average prediction?
• Park contributed +10k
• Cats contributed -50k
• Secure neighbourhood contributed +30k
• Paint contributed -10k
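Sanity check of the additive property: +10k - 50k + 30k - 10k = -20k, which is exactly the prediction (400,000 €) minus the average prediction (420,000 €).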
Shapley Additive Explanations (SHAP)
For the model-agnostic explainer, SHAP leverages Shapley values from game theory.
To get the importance of feature X_i:
● Get all subsets of features S that do not contain X_i
● Compute the effect on our predictions of adding X_i to each of those subsets
● Aggregate all contributions to compute the marginal contribution of the feature
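Written out, this weighting is the classical Shapley value from cooperative game theory, as used in the SHAP paper (Lundberg & Lee, 2017); here F is the set of all features and f_S is the model evaluated using only the features in subset S:

\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|! \, (|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]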
Shapley Additive Explanations (SHAP)
● TreeExplainer - for tree-based models. Works with scikit-learn, xgboost, lightgbm, catboost
● DeepExplainer - for deep learning models
● KernelExplainer - model-agnostic explainer
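A minimal sketch of the model-agnostic KernelExplainer, assuming a fitted classifier `model` and pandas DataFrames `X_train` / `X_test` (names are illustrative; the background sample keeps the computation tractable):

import shap

# Small background sample, since KernelExplainer is expensive
background = shap.sample(X_train, 100)

# Model-agnostic: only needs a prediction function, not model internals
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X_test[:10])  # for a classifier: one array per class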
SHAP - Local Interpretation
● Base value is the expected value of your model
● Output value is what your model predicted for this observation
● In red are the positive SHAP values, the features that contribute positively to the
outcome
● In blue the negative SHAP values, features that contribute negatively to the
outcome
SHAP - Global Interpretation
Although SHAP values are local, by plotting all of them at once we can learn a
lot about our model globally
SHAP - TreeExplainer API
1. Create a new explainer, with our model as argument
explainer = shap.TreeExplainer(my_tree_model)
2. Calculate shap_values from our model using some observations
shap_values = explainer.shap_values(observations)
3. Use SHAP visualisation functions with our shap_values
shap.force_plot(explainer.expected_value, shap_values[0]) # local explanation
shap.summary_plot(shap_values) # global feature importance
Microsoft InterpretML
Two interpretability approaches:
• Glassbox models
• Blackbox explanations
https://github.com/interpretml/interpret
Glassbox Models
Models designed to be interpretable.
Lossless explainability.
• Linear Regression
• Decision Models
• Rule Lists
• Explainable Boosting Machines (EBM)
• ...
Explainable Boosting Machines (EBM)
1. EBM preserves the interpretability of a linear model
2. Matches the accuracy of powerful blackbox models (XGBoost, Random Forest)
3. Light in deployment
4. Slow to fit on data

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

ebm_global = ebm.explain_global()
show(ebm_global)

ebm_local = ebm.explain_local(X_test[:5], y_test[:5])
show(ebm_local)
Blackbox Explanations
Workflow: perturb the model's inputs, collect its outputs, and analyse them to produce explanations.
Techniques:
• SHAP
• LIME
• Partial Dependence
• Sensitivity Analysis
Questions?