Data Science | Design | Technology
https://www.meetup.com/DSDTmtl
July 21, 2021
2
Data Science | Design | Technology
https://www.meetup.com/DSDTmtl
July 21, 2021
Please don't forget to mute yourself
JL Maréchaux
DSDT Co-Organizer
Houda Kaddioui
Senior AI Scientist
https://www.meetup.com/DSDTmtl
Agenda
3:45 - 4:00 Arrival & Networking
4:00 - 4:15 News & Intro
4:15 - 5:15 Practical XAI: Model explainability for Machine Learning practitioners
5:15 - 5:30 Virtual Snack & Networking
4
DSDT Meetup - July 21, 2021
Virtual Meetups
Until we can do in-person events
again in Montreal…
Past (and future) presentations
available on Slideshare.
slideshare.net/DSDT_MTL
Monthly cadence, on Wednesdays.
Incredible sessions already planned. Contact us with your expectations & ideas.
DSDT meetups in 2021:
- April 28: ML Validation
- May 26: Reinforcement Learning
- July 21: Explainable AI
- Aug 25: Fly brain & NN
- Sept 29: RNN & Time Series
Your ideas, your meetup: http://bit.ly/DSDTsurvey2021
7
Our 2021 campaign to fight against
poverty and social exclusion.
Data Science.
Design.
Technology.
https://centraide-mtl.org/dsdtmtl
8
Leverage the virtual meeting tools for comments and questions:
- Q&A for questions & upvotes
- Chat for comments
- Raise your hand
Practical XAI
Model explainability
for Machine Learning
practitioners
Data Science | Design | Technology 9
Houda Kaddioui
Data Science | Design | Technology 10
Practical XAI
Model explainability for ML practitioners
Houda Kaddioui
Source: https://medium.com/@BonsaiAI/what-do-we-want-from-explainable-ai-5ed12cb36c07
Responsible AI: best practices
[Diagram: Privacy & Security (protecting users and data), Fairness (invariant to demographics), Resistance to attacks, ML Ops, and XAI (?)]
11
Agenda
What is Explainable AI
When do we need XAI
Why would you care
How can you use the XAI tools
Considerations when using XAI tools
12
Image source: https://doi.org/10.1016/j.anndiagpath.2020.151490
These are pathology
slides of breast
tissue
13
Image source: https://doi.org/10.1038/s41746-019-0112-2
Image source: https://doi.org/10.1038/s41746-019-0112-2
14
15
Fox
16
17
Explainable AI: Motivations
Meaningful explanation: providing insight for a particular audience into a chosen domain problem.
Explainability fact sheets (FAccT 2020): Functional, Operational, Usability, Safety, and Validation.
18
Explainable AI: Do we need explanations?
Low-stakes domain:
- Music recommendation
- Online advertising
- Web search
- Classifying Floki as a cat
High-stakes domain:
- Healthcare
- Aerospace
- Autonomous vehicles
- Security
- Law
19
20
Explainable AI: Stakeholders
Developers
Debug
Improve performance
Communicate
Regulators
Audit
Compliance
Laws
Administrators
Risk management
User trust
21
“Complex models are often required, but have low interpretability”

Two routes to explainability:
- Intrinsic explainability: reduce complexity by using decision trees, linear models, or Bayesian models (may reduce performance)
- Post-hoc explainability: derive explanations from the trained model
  - Scope: global (entire model) or local (single prediction)
  - Model agnostic or model specific
22
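To make the intrinsic route concrete, here is a minimal sketch (not from the talk, assuming scikit-learn) that trains a shallow decision tree and prints its rules; the printed rules are themselves a global explanation of the model.

# Minimal sketch of an intrinsically explainable model (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# The learned if/else rules are a complete, global explanation of the model,
# at the likely cost of some accuracy compared to a more complex model.
print(export_text(tree, feature_names=list(data.feature_names)))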
“Understand why a machine learning model makes predictions”
Why did the model make a specific prediction, different from the one we expected?
24
SHAP
25
Shapley Value
26
Shapley Value
$100,000
27
Shapley Value
How do we split this?
28
Shapley Value
Working alone: $60,000 and $20,000
29
Shapley Value
Alone: $60K + $20K = $80K
Synergy: f(working together) = $100K
30
Shapley Value
Gains = $100K - $80K = $20K
Split: $60K + ½ Gains = $70K, and $20K + ½ Gains = $30K
31
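The split above follows from averaging each player's marginal contribution over the possible joining orders. A minimal sketch (not from the talk) that reproduces the $70K / $30K result:

# Shapley values for the two-player example: average each player's marginal
# contribution over all orders in which the coalition can form.
from itertools import permutations

worth = {                       # coalition "worth" in $K, from the slides
    frozenset(): 0,
    frozenset({"A"}): 60,
    frozenset({"B"}): 20,
    frozenset({"A", "B"}): 100,
}
players = ["A", "B"]
orders = list(permutations(players))

shapley = {p: 0.0 for p in players}
for order in orders:
    coalition = set()
    for p in order:
        # Marginal contribution of p when joining the current coalition
        shapley[p] += worth[frozenset(coalition | {p})] - worth[frozenset(coalition)]
        coalition.add(p)

shapley = {p: total / len(orders) for p, total in shapley.items()}
print(shapley)  # {'A': 70.0, 'B': 30.0}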
Shapley Value
Game → Prediction task
Players → Features
Gain → Prediction
32
Shapley Value
33
Limitations: computationally expensive; assumes features are not correlated.
[Diagram: patient features → MODEL → feature importances for "No stroke" vs "Stroke"]
34
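The SHAP snippets that follow assume a trained tree-based model and a feature table X. A hypothetical setup sketch (the file name, column names, and model choice are assumptions, not from the talk):

# Hypothetical setup for the SHAP example: a tree ensemble trained on a
# tabular stroke dataset (file name and column names are assumptions).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("stroke_data.csv")     # hypothetical file
X = df.drop(columns=["stroke"])         # features: age, sex, BMI, glucose, ...
y = df["stroke"]                        # 0 = no stroke, 1 = stroke

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

With a classifier like this, shap.TreeExplainer returns one set of SHAP values per class, which is why the slides index shap_values[1] and expected_value[1] for the "Stroke" class.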
35
SHAP: SHapley Additive exPlanations
import shap

# Assumes `model` is the tree-based classifier and `X` the patient feature
# DataFrame from the stroke-prediction setup sketched above.
patient_id = 10
patient = X.loc[[patient_id]]

# %% Create SHAP explainer
explainer = shap.TreeExplainer(model)

# Calculate Shapley values for this patient
shap_values = explainer.shap_values(patient)

Example patient: Age = 67, Sex = Male, [...], BMI = 36, Gluc = 190.7 → MODEL prediction: Stroke
36
SHAP: SHapley Additive exPlanations
# %% Visualize force plot
shap.initjs()
shap.force_plot(explainer.expected_value[1], shap_values[1], patient)
38
SHAP: SHapley Additive exPlanations
# %% Visualize decision plot
shap.decision_plot(explainer.expected_value[1], shap_values[1], patient)
LIME
39
LIME: Local Interpretable Model-agnostic Explanation
instance to explain
Black-box model’s complex decision function
Source: "Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 [cs.LG]
40
LIME: Local Interpretable Model-agnostic Explanation
Source: "Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 [cs.LG]
Black-box model’s complex decision function
41
LIME: Local Interpretable Model-agnostic Explanation
Source: "Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 [cs.LG]
Surrogate model to explain locally
42
LIME: Local Interpretable Model-agnostic Explanation
- Select the instance of interest
- Perturb the dataset and get a prediction for each new point
- Assign weights to the new samples by their proximity to the instance
- Train a weighted interpretable model on the dataset with variations
- Explain the prediction by interpreting the local model (see the sketch below)
43
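A minimal sketch of these steps with the lime library (not the talk's code; it reuses the hypothetical model and X from the SHAP setup):

# Hypothetical LIME example for tabular data (assumes `model` and `X` from
# the earlier stroke setup, and that the lime package is installed).
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=["No stroke", "Stroke"],
    mode="classification",
)

# LIME perturbs the instance, weights the samples by proximity, fits a
# weighted linear surrogate, and reports its coefficients as the explanation.
exp = explainer.explain_instance(X.values[10], model.predict_proba, num_features=5)
print(exp.as_list())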
LIME: Local Interpretable Model-agnostic Explanation
44
Source: arXiv:1602.04938 [cs.LG]
Explaining image data: LIME
45
Source: arXiv:1602.04938 [cs.LG]
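For images, lime ships an analogous explainer that perturbs superpixels rather than tabular features. A hypothetical sketch (image, predict_fn, and the 0-255 pixel range are assumptions):

# Hypothetical LIME image example (assumes `image` is a NumPy array and
# `predict_fn` maps a batch of images to class probabilities).
from lime import lime_image
from skimage.segmentation import mark_boundaries
import matplotlib.pyplot as plt

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=5, hide_color=0, num_samples=1000
)

# Highlight the superpixels that support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
plt.imshow(mark_boundaries(img / 255.0, mask))  # divide by 255 for 0-255 images
plt.show()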
Integrated gradients and XRAI
46
47
Explaining image data: Integrated gradients
GIF from: https://distill.pub/2020/attribution-baselines/
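For reference (not on the slide): integrated gradients attribute feature i of input x, relative to a baseline x', as IG_i(x) = (x_i - x'_i) * ∫_0^1 ∂F(x' + α(x - x'))/∂x_i dα. The code that follows approximates this integral by averaging gradients at x_steps points along the straight line from the baseline to the input.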
import numpy as np
import saliency.core as saliency

# Construct the saliency object. This alone doesn't do anything.
integrated_gradients = saliency.IntegratedGradients()

# Define your baseline (here: an all-black image with the same shape as `im`)
baseline = np.zeros(im.shape)
48
import numpy as np
import saliency.core as saliency

# Construct the saliency object. This alone doesn't do anything.
integrated_gradients = saliency.IntegratedGradients()

# Set a baseline
baseline = np.zeros(im.shape)

# Compute the IG attribution masks for the preprocessed input image `im`
vanilla_ig_mask = integrated_gradients.GetMask(im, call_model_function,
    call_model_args, x_steps=25, x_baseline=baseline, batch_size=20)
smoothgrad_ig_mask = integrated_gradients.GetSmoothedMask(im,
    call_model_function, call_model_args, x_steps=25, x_baseline=baseline,
    batch_size=20)
50
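The saliency library leaves call_model_function up to the user. A hypothetical sketch for a TF2/Keras classifier, following the pattern in the library's README (the model and the target class index are assumptions):

# Hypothetical call_model_function for a TF2/Keras model: it must return the
# gradients of the target class score with respect to the input images.
import numpy as np
import tensorflow as tf

class_idx_str = 'class_idx_str'

def call_model_function(images, call_model_args=None, expected_keys=None):
    target_class_idx = call_model_args[class_idx_str]
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        output = model(images)[:, target_class_idx]   # assumed Keras classifier
    gradients = tape.gradient(output, images)
    return {saliency.INPUT_OUTPUT_GRADIENTS: gradients.numpy()}

call_model_args = {class_idx_str: 207}  # index of the class to explain (assumption)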
# Call the visualization methods to convert the 3D tensors to 2D grayscale.
vanilla_mask_gray = saliency.VisualizeImageGrayscale(vanilla_ig_mask)
smoothgrad_mask_gray = saliency.VisualizeImageGrayscale(smoothgrad_ig_mask)
52
Explaining image data: XRAI
53
# Construct the saliency object
xrai_object = saliency.XRAI()

# Compute XRAI attributions with default parameters
xrai_attributions = xrai_object.GetMask(im, call_model_function, call_model_args, batch_size=20)

# Show the most salient 30% of the image
mask = xrai_attributions > np.percentile(xrai_attributions, 70)
im_mask = np.array(im_orig)  # im_orig: the original, unpreprocessed image
im_mask[~mask] = 0
ShowImage(im_mask, title='Top 30%', ax=P.subplot(ROWS, COLS, 3))  # plotting helpers from the saliency examples
54
Discussion
55
The interpretability-accuracy tradeoff does not always hold
Source: Machine Learning for 5G/B5G Mobile and Wireless Communications: Potential, Limitations, and Future Directions. September 2019. DOI: 10.1109/ACCESS.2019.2942390
56
Accounting for the human factor and confirmation bias
COVID-Net: a tailored deep convolutional neural network design for
detection of COVID-19 cases from chest X-ray images. Wang, L., Lin, Z.Q. &
Wong, A. (2020)
Was there COVID-19 back in 2012? – Challenge for AI in Diagnosis with Similar Indications. I. Banerjee et al. (2020)
COVID-19 cases detected along with critical regions
57
Accounting for the human factor and mental models
58
Managing expectations
Communicate in a meaningful way
Be clear about what is being explained
Can the user act on the explanations?
59
Calibrating trust
Dataset
- Quantity and quality
- Typos and errors in EHR
Data collection
- Diversity, fairness
The project team
- Design
- Involving domain experts, etc.
Organisational motivation and incentives
60
Pushing for interpretable models
61
Source: https://arxiv.org/abs/1806.10574
Plan for system failures
It’s called the Trust fall ok?
62
Plan for system failures
Define “errors” & “failure”
Allow the opportunity for feedback
Have an option for users to take over control and shut down systems if needed
Illustration of experimental conditions: (left) unassisted, (center) grades only, (right) grades plus heatmap
Learn more: https://health.google/for-clinicians/ophthalmology/ 63
Summary
- Use interpretable models whenever possible
- Flag data issues, and spend time to curate data properly
- Use explanations as part of your iterative development process
- Explain in a meaningful way
- Think about what you expect from displaying explanations to end users
- Communicate the limits of your explanations
- Include domain experts
64
Thank you
http://bit.ly/XAI_code
houda@rubrick.ca
65
Merci / Thank You
@DsdtMtl
Data Science | Design | Technology
(Check for next DSDT meetup at meetup.com/DSDTmtl)
http://bit.ly/dsdtmtl-in
