Interpretability
Interpretability / Explainable AI (XAI)
Xavier Giro-i-Nieto
@DocXavi
xavier.giro@upc.edu
Associate Professor
Universitat Politècnica de Catalunya
[GCED] [Lectures repo]
2
Acknowledgements
Amaia Salvador
amaia.salvador@upc.edu
PhD Candidate
Universitat Politècnica de Catalunya
Eva Mohedano
eva.mohedano@insight-centre.org
Postdoctoral Researcher
Insight-centre for Data Analytics
Dublin City University
3
Videolectures
Amaia Salvador
[UPC DLCV 2016]
Eva Mohedano
[UPC DLCV 2018]
Laura Leal-Taixé
[UPC DLCV 2019]
4
Motivation
5
In many cases, an explanation for a neural network's prediction is required.
Motivation
6
Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of
skin cancer with deep neural networks. Nature, 542(7639), 115.
Motivation
7
Broad picture
8
Ning Xie, Gabrielle Ras, Marcel van Gerven, Derek Doran, “Explainable Deep Learning: A Field Guide for the Uninitiated”
arXiv 2020.
Interpretability
● Visualization
○ Learned Weights
○ Feature Maps
● Attribution
● Feature visualization
9
Visualization of Learned Weights AlexNet
conv1
Only the convolutional filters of the first layer can be directly
“visualized” as color images, because their depth of 3 matches the
three input RGB channels. 10
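As a minimal sketch (not part of the original slides), these conv1 filters can be dumped and displayed with PyTorch and matplotlib, assuming the pretrained torchvision AlexNet:

```python
import torch
import torchvision
import matplotlib.pyplot as plt

# Pretrained AlexNet; features[0] is the first conv layer (64 filters of 3x11x11).
model = torchvision.models.alexnet(weights="IMAGENET1K_V1")
filters = model.features[0].weight.data.clone()        # shape: (64, 3, 11, 11)

# Normalize each filter to [0, 1] so it can be shown as an RGB image.
filters -= filters.amin(dim=(1, 2, 3), keepdim=True)
filters /= filters.amax(dim=(1, 2, 3), keepdim=True)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, f in zip(axes.flat, filters):
    ax.imshow(f.permute(1, 2, 0))                      # (3, 11, 11) -> (11, 11, 3)
    ax.axis("off")
plt.show()
```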
Visualization of Learned Weights
2D Convolutional filters learned in Layer #2 from ConvnetJS 11
Filters with a depth larger than 3 can be visualized by showing a gray-scale image for
each depth slice, one next to the other.
Visualization of Learned Weights
12
Demo: Classify MNIST digits with a Convolutional Neural Network
“ConvNetJS is a Javascript library for
training Deep Learning models (mainly
Neural Networks) entirely in your browser.
Open a tab and you're training. No
software requirements, no compilers, no
installations, no GPUs, no sweat.”
Interpretability
● Visualization
○ Learned Weights
○ Feature Maps
● Attribution
● Feature visualization
13
Visualization of Feature Activations (2D)
14
conv1
Input
image
15
conv5
Input
image
Visualization of Feature Activations (2D)
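As a rough illustration (not code from the slides), 2D feature maps like the conv1 and conv5 activations above can be captured with forward hooks on a pretrained torchvision AlexNet; the input file name is a placeholder:

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()

activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# In torchvision's AlexNet, features[0] is conv1 and features[10] is conv5.
model.features[0].register_forward_hook(save_activation("conv1"))
model.features[10].register_forward_hook(save_activation("conv5"))

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = preprocess(Image.open("input.jpg")).unsqueeze(0)   # placeholder input image

with torch.no_grad():
    model(img)

print(activations["conv1"].shape)   # e.g. torch.Size([1, 64, 55, 55])
print(activations["conv5"].shape)   # e.g. torch.Size([1, 256, 13, 13])
```

Each channel of these tensors can then be shown as a gray-scale image, as in the figures above.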
16
Demo: 3D Visualization of a Convolutional Neural Network
Harley, Adam W. "An Interactive Node-Link Visualization of Convolutional Neural Networks." In Advances in Visual Computing,
pp. 867-877. Springer International Publishing, 2015.
Visualization of Feature Activations (2D)
These visualizations can also be applied to the features of the layer
just before classification.
17
Visualization of Feature Activations (any dimension)
Visualization of Feature Activations (any dimension)
18
#t-sne Maaten, Laurens van der, and Geoffrey Hinton. "Visualizing data using t-SNE." Journal of machine learning research
9, no. Nov (2008): 2579-2605. [Python scikit-learn]
t-SNE:
Embeds high-dimensional data
points (e.g. feature maps) into
2D so that pairwise distances
are preserved within local
neighborhoods.
Example: 10 classes from MNIST dataset
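A minimal scikit-learn sketch of this projection (following the [Python scikit-learn] pointer above), assuming `features` is an (N, D) array of CNN codes and `labels` holds their class ids; the file names are placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

features = np.load("features.npy")     # placeholder: (N, D) CNN codes, e.g. fc7 activations
labels = np.load("labels.npy")         # placeholder: (N,) class ids

tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
codes_2d = tsne.fit_transform(features)   # (N, 2) embedding preserving local structure

plt.scatter(codes_2d[:, 0], codes_2d[:, 1], c=labels, cmap="tab10", s=5)
plt.colorbar()
plt.show()
```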
19
CNN codes:
t-SNE on fc7 features from
AlexNet.
Source: Andrej Karpathy (Stanford University)
Visualization of Feature Activations (any dimension)
#t-sne Maaten, Laurens van der, and Geoffrey Hinton. "Visualizing data using t-SNE." Journal of machine learning research
9, no. Nov (2008): 2579-2605. [Python scikit-learn]
20
Maaten, Laurens van der @ CVPR 2018 Tutorial on Interpretable Machine Learning for Computer Vision.
Interpretability: Attribution
● Visualization
● Attribution
○ Receptive Field of the Highest Activations
○ Activation Changes after Occlusions
○ Fully Connected into Convolutional layers
○ Class Activation Maps (CAMs)
○ Gradient-based
● Feature visualization
21
Attribution
Attribution studies what part of an example is responsible for the network
activating a particular way.
22
Chris Olah, Alexander Mordvintsev, Ludwig Schubert, “Feature Visualization”. Distill.pub (November 2017)
Interpretability: Attribution
● Visualization
● Attribution
○ Receptive Field of the Highest Activations
○ Activation Changes after Occlusions
○ Fully Connected into Convolutional layers
○ Class Activation Maps (CAMs)
○ Gradient-based
● Feature visualization
23
Reminder: Receptive Field
24
Receptive field: Part of the input that is visible to a neuron. It increases as we stack more convolutional
layers (i.e. neurons in deeper layers have larger receptive fields).
André Araujo, Wade Norris, Jack Sim, “Computing Receptive Fields of Convolutional Neural Networks”. Distill.pub
2019.
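As a back-of-the-envelope sketch of the recursion described in that article, the receptive field r grows layer by layer as r_out = r_in + (k - 1) * j_in, where j is the accumulated stride:

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) tuples, from input to output."""
    r, j = 1, 1                      # receptive field and accumulated stride at the input
    for k, s in layers:
        r = r + (k - 1) * j          # each layer adds (k-1) steps of the accumulated stride
        j = j * s
    return r

# AlexNet-like stack: conv11/4, pool3/2, conv5/1, pool3/2, conv3/1, conv3/1, conv3/1
print(receptive_field([(11, 4), (3, 2), (5, 1), (3, 2), (3, 1), (3, 1), (3, 1)]))  # -> 163
```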
Receptive Field of Highest Activations
Girshick et al. Rich feature hierarchies for accurate object detection and semantic segmentation. CVPR 2014 25
“people”
“text”
Visualize the receptive field of a neuron over those images that activate this neuron
the most
“Dot
arrays”
Interpretability: Attribution
● Visualization
● Attribution
○ Receptive Field of the Highest Activations
○ Activation Changes after Occlusions
○ Fully Connected into Convolutional layers
○ Class Activation Maps (CAMs)
○ Guided Backpropagation
● Feature visualization
26
Activation Changes after Occlusions (global scale)
1. Iteratively forward the same image through the network, occluding a
different region each time.
2. Keep track of the probability of the correct class as a function of the position
of the occluder (see the sketch below).
Zeiler and Fergus. Visualizing and Understanding Convolutional Networks. ECCV 2014
27
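A rough PyTorch sketch of this occlusion experiment (a hypothetical helper, not the authors' code); `model` is a classifier and `img` a preprocessed (1, 3, H, W) batch:

```python
import torch

def occlusion_map(model, img, target_class, patch=32, stride=16, fill=0.5):
    """Slide a gray occluder over the image and record p(target_class) at each position."""
    model.eval()
    _, _, H, W = img.shape
    heatmap = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
    with torch.no_grad():
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = img.clone()
                occluded[:, :, y:y + patch, x:x + patch] = fill
                probs = torch.softmax(model(occluded), dim=1)
                heatmap[i, j] = probs[0, target_class]
    return heatmap   # low values = regions whose occlusion hurts the prediction most
```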
Activation Changes after Occlusions (global scale)
28
The changes in activations can be observed at any layer. This allowed identifying
some neurons that behave as weak object detectors, even though the network was trained only with image-level labels.
Zhou, Bolei, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Object detectors emerge in deep scene
CNNs." ICLR 2015.
Activation Changes after Occlusions (pixel scale)
29
Same idea, but label each neuron (unit) by means of an image dataset with pixel-level annotations.
Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. Network dissection: Quantifying interpretability of deep visual
representations. CVPR 2017.
Activation Changes after Occlusions (pixel scale)
30
By measuring the concept that best matches each neuron (unit), Net Dissection can break down the types of
concepts represented in a layer.
Here the 256 units of AlexNet conv5 trained on Places dataset represent many objects and textures, as well
as some scenes, parts, materials, and a color:
Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. Network dissection: Quantifying interpretability of deep visual
representations. CVPR 2017.
Activation Changes after Occlusions (pixel scale)
31
Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. Network dissection: Quantifying interpretability of deep visual
representations. CVPR 2017.
Interpretability: Attribution
● Visualization
● Attribution
○ Receptive Field of the Highest Activations
○ Activation Changes after Occlusions
○ Fully Connected into Convolutional layers
○ Class Activation Maps (CAMs)
○ Guided Backpropagation
● Feature visualization
32
Fully Connected into Convolutional Layers
33
3x2x2 tensor
(RGB image of 2x2)
2 fully connected
neurons
3x2x2 * 2 weights
2 convolutional filters of 3 x 2 x 2
(same size as input tensor)
3x2x2 * 2 weights
By redefining fully connected neurons into convolutional neurons...
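A minimal sketch of this redefinition with the toy sizes of the figure (a 3x2x2 input and 2 output neurons), showing that the converted convolution reproduces the FC output and then slides over a larger input:

```python
import torch
import torch.nn as nn

C, H, W, num_out = 3, 2, 2, 2          # toy sizes from the slide: 3x2x2 input, 2 neurons

fc = nn.Linear(C * H * W, num_out)                    # 2 fully connected neurons
conv = nn.Conv2d(C, num_out, kernel_size=(H, W))      # 2 filters of size 3x2x2

# Copy the FC weights into the convolutional filters (same number of parameters).
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(num_out, C, H, W))
    conv.bias.copy_(fc.bias)

x = torch.randn(1, C, H, W)
print(torch.allclose(fc(x.flatten(1)), conv(x).flatten(1), atol=1e-6))   # True

# On a larger input, `conv` now produces a spatial map of responses instead of one value.
print(conv(torch.randn(1, C, 6, 6)).shape)   # torch.Size([1, 2, 5, 5])
```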
Fully Connected into Convolutional Layers
34
...a model trained for image classification on low-resolution images can provide a localized
response when fed higher-resolution images.
Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully convolutional networks for semantic segmentation." CVPR
2015. (original figure has been modified)
Fully Connected into Convolutional Layers
Campos, V., Jou, B., & Giro-i-Nieto, X. . From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction.
Image and Vision Computing. (2017)
35
The FC to Conv redefinition allows generating heatmaps of the class prediction over
the input images.
Interpretability: Attribution
● Visualization
● Attribution
○ Receptive Field of the Highest Activations
○ Activation Changes after Occlusions
○ Fully Connected into Convolutional layers
○ Class Activation Maps (CAMs)
○ Guided Backpropagation
● Feature visualization
36
Reminder: Global Average Pooling (GAP)
37
#NiN Lin, Min, Qiang Chen, and Shuicheng Yan. "Network in network." ICLR 2014.
Figure:
Alexis Cook
38
Change in a classic CNN+MLP architecture: replace the FC layer after the last conv layer with
Global Average Pooling (GAP), which averages each channel into a single value.
#CAM Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning deep features for
discriminative localization." CVPR 2016
densely connected weights
Class Activation Maps (CAMs)
Class Activation Maps (CAMs)
39
Weighted Fusion of Feature Maps: the classifier weights define the contribution
of each channel of the previous layer.
contribution of the blue
channel in the conv layer to
the prediction of “australian
terrier”
#CAM Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning deep features for
discriminative localization." CVPR 2016
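A minimal sketch of that weighted fusion (hypothetical tensor names, not the authors' code):

```python
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weights, class_idx):
    """
    feature_maps: (C, h, w) activations of the last conv layer for one image.
    fc_weights:   (num_classes, C) weights of the linear classifier placed after GAP.
    Returns an (h, w) map: the weighted sum of channels for the requested class.
    """
    weights = fc_weights[class_idx]                          # (C,) one weight per channel
    cam = torch.einsum("c,chw->hw", weights, feature_maps)   # weighted fusion of channels
    cam = F.relu(cam)                                        # keep only positive evidence
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam                                               # upsample to image size to display
```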
40
#CAM Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning deep features for
discriminative localization." CVPR 2016
41
Different architectures generate different CAMs.
Jimenez, Albert, Jose M. Alvarez, and Xavier Giro-i-Nieto. "Class-weighted convolutional features for visual instance
search." BMVC 2017.
Class Activation Maps (CAMs)
42
Applications of CAMs: Rough object localization using only weak (image-level) labels.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning deep features for discriminative
localization." CVPR 2016
Class Activation Maps (CAMs)
43
Applications of CAMs: Spatial feature weighting for object retrieval.
Class Activation Maps (CAMs)
Jimenez, Albert, Jose M. Alvarez, and Xavier Giro-i-Nieto. "Class-weighted convolutional features for visual instance
search." BMVC 2017.
Interpretability: Attribution
● Visualization
● Attribution
○ Receptive Field of the Highest Activations
○ Activation Changes after Occlusions
○ Fully Connected into Convolutional layers
○ Class Activation Maps (CAMs)
○ Guided Backpropagation
● Feature visualization
44
Guided Backpropagation
Compute the gradient of any neuron w.r.t. the image:
1. Forward the image up to the layer we want to interpret (e.g. conv5).
2. Set all gradients in that layer to 0.
3. Set the gradient of the specific neuron we are interested in to 1.
4. Backpropagate down to the image: the gradient is computed w.r.t. the image pixels, not the network parameters (see the sketch below).
Goal: Visualize the part of an image that most activates one of the neurons.
(Guided backpropagation additionally zeroes out negative gradients when passing backwards through ReLUs, so that only positive contributions to the activation are kept.)
45
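A bare-bones sketch of steps 1-4 for the common case where the chosen neuron is a class logit (this is vanilla backprop saliency; the extra ReLU gradient filtering that makes it guided backpropagation is omitted):

```python
import torch

def input_gradient(model, img, target_class):
    """Gradient of one output neuron (a class logit) w.r.t. the input image."""
    model.eval()
    img = img.clone().requires_grad_(True)      # we want gradients on the input, not the weights
    logits = model(img)
    model.zero_grad()
    logits[0, target_class].backward()          # equivalent to setting that neuron's gradient to 1
    return img.grad.detach()                    # same shape as the image; visualize its magnitude
```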
46
[Figure: toy computation graph with inputs x1, x2, weights w1, w2, bias b and a sigmoid σ, annotated with the forward values and the backward-pass gradients dŷ/dw1, dŷ/dx1, dŷ/dw2, dŷ/dx2, dŷ/db. Example extracted from Andrej Karpathy's notes for CS231n, Stanford University.]
During NN training: normally we are only interested in the gradients w.r.t. the weights (wi) and biases (b), not the inputs (xi), because the weights are the parameters our models learn.
Guided Backpropagation
47
[Figure: the same computation graph, now highlighting the backward-pass gradients w.r.t. the inputs, dŷ/dx1 and dŷ/dx2. Example extracted from Andrej Karpathy's notes for CS231n, Stanford University.]
In this approach: we are instead interested in the gradients w.r.t. the inputs (xi), not in updating the weights (wi) and biases (b), because it is the input that we want to interpret.
Guided Backpropagation
Guided Backpropagation
Springenberg, J. T., Dosovitskiy, A., Brox, T., & Riedmiller, M. Striving for Simplicity: The All Convolutional Net. ICLR 2015
48
Goal: Visualize the part of an image that mostly activates one of the neurons.
Guided Backpropagation
Springenberg, J. T., Dosovitskiy, A., Brox, T., & Riedmiller, M. Striving for Simplicity: The All Convolutional Net. ICLR 2015
49
Grad-CAM
50
#Grad-CAM Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra.
Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. ICCV 2017 [video]
Same idea as Class Activation Maps (CAMs), but:
● No modifications to the network are required
● Each feature map channel is weighted by the (spatially averaged) gradient of the class score w.r.t. that channel (see the sketch below)
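A rough Grad-CAM sketch using forward/backward hooks on a chosen `target_layer` (e.g. the last conv layer); the names are placeholders, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, img, target_layer, target_class):
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    logits = model(img)
    model.zero_grad()
    logits[0, target_class].backward()
    h1.remove(); h2.remove()

    # Weight each channel by the spatially averaged gradient of the class score.
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)        # (1, C, 1, 1)
    cam = F.relu((weights * feats["a"]).sum(dim=1))            # (1, h, w)
    cam = F.interpolate(cam.unsqueeze(1), size=img.shape[-2:],
                        mode="bilinear", align_corners=False)
    return cam.squeeze()
```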
Grad-CAM
51
#Grad-CAM Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra.
Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. ICCV 2017 [video]
52
#Grad-CAM Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra.
Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. ICCV 2017 [video]
Grad-CAM
53
Belén del Río, Marc Boher, Borja Velasco, Toni Serena. Advised by Santi Puch, “FastConvNet”. UPC
School 2020
Interpretability: Attribution
● Visualization
● Attribution
● Feature visualization
54
Feature Visualization
55
Feature visualization answers questions about what a network — or parts of a
network — are looking for by generating examples.
Chris Olah, Alexander Mordvintsev, Ludwig Schubert, “Feature Visualization”. Distill.pub (November 2017)
Feature Visualization
Goal: Generate an image that maximizes the activation of a neuron.
1. Forward random image
2. Set the gradient of the neuron to be [0,0,0…,1,...,0,0]
3. Backprop to get gradient on the image
4. Update image (small step in the gradient direction)
5. Repeat
56
K. Simonyan, A. Vedaldi, A. Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and
Saliency Maps, ICLR 2014
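A bare-bones sketch of the optimization loop above (regularizers such as jitter or decay, discussed later in the deck, are omitted):

```python
import torch

def activation_maximization(model, target_class, steps=200, lr=1.0, size=224):
    """Gradient ascent on the input image to maximize one class score."""
    model.eval()
    img = torch.randn(1, 3, size, size, requires_grad=True)   # 1) start from a random image
    optimizer = torch.optim.SGD([img], lr=lr)
    for _ in range(steps):                                     # 5) repeat
        optimizer.zero_grad()
        score = model(img)[0, target_class]                    # 2) pick the neuron / class logit
        (-score).backward()                                    # 3) gradient on the image
        optimizer.step()                                       # 4) small step in gradient direction
    return img.detach()
```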
57
K. Simonyan, A. Vedaldi, A. Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and
Saliency Maps, ICLR 2014
Feature Visualization
Example: specific case of a neuron from the last layer, which is associated with a
semantic class.
Feature Visualization
58
Chris Olah, Alexander Mordvintsev, Ludwig Schubert, “Feature Visualization”. Distill.pub (2017) [Google Colab]
Feature Visualization
59
Chris Olah, Alexander Mordvintsev, Ludwig Schubert, “Feature Visualization”. Distill.pub (2017) [Google Colab]
Activation Atlases
60
Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, Chris Olah, “Exploring Neural Networks with Activation
Atlases”. Distill.pub (March 2019)
Similar to the t-SNE of CNN codes, but instead of showing input data points, activation atlases
show feature visualizations of the averaged activations that fall in each grid cell.
Activation Atlases
61
Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, Chris Olah, “Exploring Neural Networks with Activation
Atlases”. Distill.pub (March 2019)
For each grid cell, the ImageNet classes of the contributing activations are ranked:
62
Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, Chris Olah, “Exploring Neural Networks with Activation
Atlases”. Distill.pub (March 2019)
Activation Atlas + Links = OpenAI Microscope
63
Ludwig Schubert, Michael Petrov, Shan Carter, Nick Cammarata, Gabriel Goh, Chris Olah, OpenAI Microscope 2020
For each grid cell, the ImageNet classes of the contributing activations are ranked:
Feature Visualization: Regularizations
64
Bonus: Adversarial Attacks (e.g. patch on a sticker)
Brown, Tom B., Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. "Adversarial patch." arXiv preprint
arXiv:1712.09665 (2017). [Google Colab]
65
Goal: Generate a patch P that will make a Neural Network always predict the same
class for any image containing the patch.
Brown, Tom B., Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. "Adversarial patch." arXiv preprint
arXiv:1712.09665 (2017).
Sachin Joglekar, “Adversarial patches for CNNs explained” (2018)
66
How: Backpropagate the gradient of the desired class down to the patch, over images where the
patch has been inserted (see the sketch below).
Bonus: Adversarial Attacks (e.g. patch on a sticker)
Brown, Tom B., Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. "Adversarial patch." arXiv preprint
arXiv:1712.09665 (2017). 67
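A simplified sketch of such a training loop (random patch placement only; the rotation/scale transformations used in the paper are omitted, and `model` / `loader` are assumed to exist):

```python
import torch
import torch.nn.functional as F

def train_patch(model, loader, target_class, patch_size=50, steps=1000, lr=0.05):
    """Optimize a patch so that images containing it are classified as `target_class`."""
    model.eval()
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for step, (imgs, _) in enumerate(loader):
        if step >= steps:
            break
        # Paste the patch at a random location of every image in the batch.
        y = torch.randint(0, imgs.shape[2] - patch_size, (1,)).item()
        x = torch.randint(0, imgs.shape[3] - patch_size, (1,)).item()
        patched = imgs.clone()
        patched[:, :, y:y + patch_size, x:x + patch_size] = patch.clamp(0, 1)
        # Push the prediction of every patched image towards the target class.
        target = torch.full((imgs.shape[0],), target_class, dtype=torch.long)
        loss = F.cross_entropy(model(patched), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)
```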
Xu, K., Zhang, G., Liu, S., Fan, Q., Sun, M., Chen, H., ... & Lin, X. (2019). Evading Real-Time Person Detectors by Adversarial
T-shirt. arXiv preprint arXiv:1910.11099.
68
Bonus: Adversarial Attacks (e.g. T-shirts for privacy)
Feature Visualization: Generative prior
69
Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the preferred inputs for
neurons in neural networks via deep generator networks." NIPS 2016.
1) A Deep Generator Network (DGN) is trained to invert from features to image.
Feature Visualization: Generative prior
70
Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the preferred inputs for
neurons in neural networks via deep generator networks." NIPS 2016.
1) A Deep Generator Network (DGN) is trained to invert from features to image.
2) A DNN to visualize is chosen.
Feature Visualization: Generative prior
71
Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the preferred inputs for
neurons in neural networks via deep generator networks." NIPS 2016.
3) Fixing the parameters of the DGN & DNN, a feature code is optimized with backprop to
maximize a class (e.g. candle).
Feature Visualization: Generative prior
72
Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the preferred inputs for
neurons in neural networks via deep generator networks." NIPS 2016.
4) The image corresponding to the optimized feature is synthesized with the DGN.
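A schematic of steps 3-4, assuming a pretrained generator `G` that maps a feature code to an image and a classifier `dnn` (both placeholders; the code dimension is hypothetical):

```python
import torch

def synthesize_preferred_input(G, dnn, target_class, code_dim=4096, steps=200, lr=0.1):
    """Optimize a feature code through a fixed generator G and a fixed classifier dnn."""
    code = torch.zeros(1, code_dim, requires_grad=True)
    optimizer = torch.optim.Adam([code], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        img = G(code)                              # 4) synthesize an image from the code
        score = dnn(img)[0, target_class]          # 3) class score to maximize (e.g. candle)
        loss = -score + 1e-3 * code.norm()         # small L2 prior keeps the code in range
        loss.backward()
        optimizer.step()
    return G(code).detach()
```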
73
Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the
preferred inputs for neurons in neural networks via deep generator networks." NIPS 2016.
Feature Visualization: Generative prior
Interpretability for PyTorch
74
Software: Captum (multiple examples)
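For instance, Integrated Gradients attributions for one prediction take only a few lines (a sketch with a placeholder input; Captum offers many other attribution methods):

```python
import torch
import torchvision
from captum.attr import IntegratedGradients

model = torchvision.models.alexnet(weights="IMAGENET1K_V1").eval()
ig = IntegratedGradients(model)

input = torch.randn(1, 3, 224, 224)            # placeholder image batch
target = int(model(input).argmax())            # attribute the predicted class
attributions = ig.attribute(input, target=target, n_steps=50)
print(attributions.shape)                       # same shape as the input
```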
Interpretability for PyTorch
75
Software: Lucent, based on Kornia [tweet]
Learn more
76
Additional material
77
Adebayo, Julius, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. "Sanity checks for saliency maps."
NeurIPS 2018
#HINT Selvaraju, Ramprasaath R., Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, and Devi Parikh. "Taking
a hint: Leveraging explanations to make vision and language models more grounded." ICCV 2019. [blog].
#Ablation-CAM Ramaswamy, Harish Guruprasad. "Ablation-CAM: Visual Explanations for Deep Convolutional Network via
Gradient-free Localization." WACV 2020.
Chris Olah et al, “An Overview of Early Vision in InceptionV1”. Distill.pub, 2020.
Rebuffi, S. A., Fong, R., Ji, X., & Vedaldi, A. There and Back Again: Revisiting Backpropagation Saliency Methods. CVPR 2020. [tweet]
Bau, D., Zhu, J. Y., Strobelt, H., Lapedriza, A., Zhou, B., & Torralba, A. Understanding the role of individual units in a deep neural
network. PNAS 2020. [tweet]
Oana-Maria Camburu, “Explaining Deep Neural Networks”. Oxford VGG 2020.
78
Questions ?

More Related Content

What's hot

GAN in medical imaging
GAN in medical imagingGAN in medical imaging
GAN in medical imagingCheng-Bin Jin
 
Introduction to Grad-CAM (complete version)
Introduction to Grad-CAM (complete version)Introduction to Grad-CAM (complete version)
Introduction to Grad-CAM (complete version)Hsing-chuan Hsieh
 
Unified Approach to Interpret Machine Learning Model: SHAP + LIME
Unified Approach to Interpret Machine Learning Model: SHAP + LIMEUnified Approach to Interpret Machine Learning Model: SHAP + LIME
Unified Approach to Interpret Machine Learning Model: SHAP + LIMEDatabricks
 
Artificial Intelligence Machine Learning Deep Learning Ppt Powerpoint Present...
Artificial Intelligence Machine Learning Deep Learning Ppt Powerpoint Present...Artificial Intelligence Machine Learning Deep Learning Ppt Powerpoint Present...
Artificial Intelligence Machine Learning Deep Learning Ppt Powerpoint Present...SlideTeam
 
DC02. Interpretation of predictions
DC02. Interpretation of predictionsDC02. Interpretation of predictions
DC02. Interpretation of predictionsAnton Kulesh
 
Deep Learning - The Past, Present and Future of Artificial Intelligence
Deep Learning - The Past, Present and Future of Artificial IntelligenceDeep Learning - The Past, Present and Future of Artificial Intelligence
Deep Learning - The Past, Present and Future of Artificial IntelligenceLukas Masuch
 
Explainable AI in Industry (KDD 2019 Tutorial)
Explainable AI in Industry (KDD 2019 Tutorial)Explainable AI in Industry (KDD 2019 Tutorial)
Explainable AI in Industry (KDD 2019 Tutorial)Krishnaram Kenthapadi
 
Deep Generative Models
Deep Generative Models Deep Generative Models
Deep Generative Models Chia-Wen Cheng
 
Ai vs machine learning vs deep learning
Ai vs machine learning vs deep learningAi vs machine learning vs deep learning
Ai vs machine learning vs deep learningSanjay Patel
 
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)Krishnaram Kenthapadi
 
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete DeckAI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete DeckSlideTeam
 
Deep learning - A Visual Introduction
Deep learning - A Visual IntroductionDeep learning - A Visual Introduction
Deep learning - A Visual IntroductionLukas Masuch
 
Deep neural networks and tabular data
Deep neural networks and tabular dataDeep neural networks and tabular data
Deep neural networks and tabular dataJimmyLiang20
 
Interpretable machine learning
Interpretable machine learningInterpretable machine learning
Interpretable machine learningSri Ambati
 
Explainable AI (XAI) - A Perspective
Explainable AI (XAI) - A Perspective Explainable AI (XAI) - A Perspective
Explainable AI (XAI) - A Perspective Saurabh Kaushik
 
AI vs Machine Learning vs Deep Learning | Machine Learning Training with Pyth...
AI vs Machine Learning vs Deep Learning | Machine Learning Training with Pyth...AI vs Machine Learning vs Deep Learning | Machine Learning Training with Pyth...
AI vs Machine Learning vs Deep Learning | Machine Learning Training with Pyth...Edureka!
 
Lesson 1 intro to ai
Lesson 1   intro to aiLesson 1   intro to ai
Lesson 1 intro to aiankit_ppt
 
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...Simplilearn
 

What's hot (20)

GAN in medical imaging
GAN in medical imagingGAN in medical imaging
GAN in medical imaging
 
Introduction to Grad-CAM (complete version)
Introduction to Grad-CAM (complete version)Introduction to Grad-CAM (complete version)
Introduction to Grad-CAM (complete version)
 
Unified Approach to Interpret Machine Learning Model: SHAP + LIME
Unified Approach to Interpret Machine Learning Model: SHAP + LIMEUnified Approach to Interpret Machine Learning Model: SHAP + LIME
Unified Approach to Interpret Machine Learning Model: SHAP + LIME
 
Artificial Intelligence Machine Learning Deep Learning Ppt Powerpoint Present...
Artificial Intelligence Machine Learning Deep Learning Ppt Powerpoint Present...Artificial Intelligence Machine Learning Deep Learning Ppt Powerpoint Present...
Artificial Intelligence Machine Learning Deep Learning Ppt Powerpoint Present...
 
DC02. Interpretation of predictions
DC02. Interpretation of predictionsDC02. Interpretation of predictions
DC02. Interpretation of predictions
 
Deep Learning - The Past, Present and Future of Artificial Intelligence
Deep Learning - The Past, Present and Future of Artificial IntelligenceDeep Learning - The Past, Present and Future of Artificial Intelligence
Deep Learning - The Past, Present and Future of Artificial Intelligence
 
Explainable AI in Industry (KDD 2019 Tutorial)
Explainable AI in Industry (KDD 2019 Tutorial)Explainable AI in Industry (KDD 2019 Tutorial)
Explainable AI in Industry (KDD 2019 Tutorial)
 
Deep Generative Models
Deep Generative Models Deep Generative Models
Deep Generative Models
 
Explainable AI (XAI)
Explainable AI (XAI)Explainable AI (XAI)
Explainable AI (XAI)
 
Shap
ShapShap
Shap
 
Ai vs machine learning vs deep learning
Ai vs machine learning vs deep learningAi vs machine learning vs deep learning
Ai vs machine learning vs deep learning
 
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021)
 
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete DeckAI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck
AI Vs ML Vs DL PowerPoint Presentation Slide Templates Complete Deck
 
Deep learning - A Visual Introduction
Deep learning - A Visual IntroductionDeep learning - A Visual Introduction
Deep learning - A Visual Introduction
 
Deep neural networks and tabular data
Deep neural networks and tabular dataDeep neural networks and tabular data
Deep neural networks and tabular data
 
Interpretable machine learning
Interpretable machine learningInterpretable machine learning
Interpretable machine learning
 
Explainable AI (XAI) - A Perspective
Explainable AI (XAI) - A Perspective Explainable AI (XAI) - A Perspective
Explainable AI (XAI) - A Perspective
 
AI vs Machine Learning vs Deep Learning | Machine Learning Training with Pyth...
AI vs Machine Learning vs Deep Learning | Machine Learning Training with Pyth...AI vs Machine Learning vs Deep Learning | Machine Learning Training with Pyth...
AI vs Machine Learning vs Deep Learning | Machine Learning Training with Pyth...
 
Lesson 1 intro to ai
Lesson 1   intro to aiLesson 1   intro to ai
Lesson 1 intro to ai
 
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
 

Similar to Interpretability and Explainable AI (XAI

Interpretability of Convolutional Neural Networks - Xavier Giro - UPC Barcelo...
Interpretability of Convolutional Neural Networks - Xavier Giro - UPC Barcelo...Interpretability of Convolutional Neural Networks - Xavier Giro - UPC Barcelo...
Interpretability of Convolutional Neural Networks - Xavier Giro - UPC Barcelo...Universitat Politècnica de Catalunya
 
Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...
Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...
Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...Universitat Politècnica de Catalunya
 
D1L5 Visualization (D1L2 Insight@DCU Machine Learning Workshop 2017)
D1L5 Visualization (D1L2 Insight@DCU Machine Learning Workshop 2017)D1L5 Visualization (D1L2 Insight@DCU Machine Learning Workshop 2017)
D1L5 Visualization (D1L2 Insight@DCU Machine Learning Workshop 2017)Universitat Politècnica de Catalunya
 
Deep Learning Representations for All - Xavier Giro-i-Nieto - IRI Barcelona 2020
Deep Learning Representations for All - Xavier Giro-i-Nieto - IRI Barcelona 2020Deep Learning Representations for All - Xavier Giro-i-Nieto - IRI Barcelona 2020
Deep Learning Representations for All - Xavier Giro-i-Nieto - IRI Barcelona 2020Universitat Politècnica de Catalunya
 
CNN Structure: From LeNet to ShuffleNet
CNN Structure: From LeNet to ShuffleNetCNN Structure: From LeNet to ShuffleNet
CNN Structure: From LeNet to ShuffleNetDalin Zhang
 
Convolutional Neural Networks (DLAI D5L1 2017 UPC Deep Learning for Artificia...
Convolutional Neural Networks (DLAI D5L1 2017 UPC Deep Learning for Artificia...Convolutional Neural Networks (DLAI D5L1 2017 UPC Deep Learning for Artificia...
Convolutional Neural Networks (DLAI D5L1 2017 UPC Deep Learning for Artificia...Universitat Politècnica de Catalunya
 
Interpretability of Convolutional Neural Networks - Eva Mohedano - UPC Barcel...
Interpretability of Convolutional Neural Networks - Eva Mohedano - UPC Barcel...Interpretability of Convolutional Neural Networks - Eva Mohedano - UPC Barcel...
Interpretability of Convolutional Neural Networks - Eva Mohedano - UPC Barcel...Universitat Politècnica de Catalunya
 
A Simple Introduction to Neural Information Retrieval
A Simple Introduction to Neural Information RetrievalA Simple Introduction to Neural Information Retrieval
A Simple Introduction to Neural Information RetrievalBhaskar Mitra
 
20141003.journal club
20141003.journal club20141003.journal club
20141003.journal clubHayaru SHOUNO
 
Neural Architectures for Still Images - Xavier Giro- UPC Barcelona 2019
Neural Architectures for Still Images - Xavier Giro- UPC Barcelona 2019Neural Architectures for Still Images - Xavier Giro- UPC Barcelona 2019
Neural Architectures for Still Images - Xavier Giro- UPC Barcelona 2019Universitat Politècnica de Catalunya
 
#4 Convolutional Neural Networks for Natural Language Processing
#4 Convolutional Neural Networks for Natural Language Processing#4 Convolutional Neural Networks for Natural Language Processing
#4 Convolutional Neural Networks for Natural Language ProcessingBerlin Language Technology
 
Image classification with Deep Neural Networks
Image classification with Deep Neural NetworksImage classification with Deep Neural Networks
Image classification with Deep Neural NetworksYogendra Tamang
 
Deep convnets for global recognition (Master in Computer Vision Barcelona 2016)
Deep convnets for global recognition (Master in Computer Vision Barcelona 2016)Deep convnets for global recognition (Master in Computer Vision Barcelona 2016)
Deep convnets for global recognition (Master in Computer Vision Barcelona 2016)Universitat Politècnica de Catalunya
 
An Evolutionary-based Neural Network for Distinguishing between Genuine and P...
An Evolutionary-based Neural Network for Distinguishing between Genuine and P...An Evolutionary-based Neural Network for Distinguishing between Genuine and P...
An Evolutionary-based Neural Network for Distinguishing between Genuine and P...Md Rakibul Hasan
 
[OSGeo-KR Tech Workshop] Deep Learning for Single Image Super-Resolution
[OSGeo-KR Tech Workshop] Deep Learning for Single Image Super-Resolution[OSGeo-KR Tech Workshop] Deep Learning for Single Image Super-Resolution
[OSGeo-KR Tech Workshop] Deep Learning for Single Image Super-ResolutionTaegyun Jeon
 
Overview of Convolutional Neural Networks
Overview of Convolutional Neural NetworksOverview of Convolutional Neural Networks
Overview of Convolutional Neural Networksananth
 
Claudio Gallicchio - Deep Reservoir Computing for Structured Data
Claudio Gallicchio - Deep Reservoir Computing for Structured DataClaudio Gallicchio - Deep Reservoir Computing for Structured Data
Claudio Gallicchio - Deep Reservoir Computing for Structured DataMeetupDataScienceRoma
 
imageclassification-160206090009.pdf
imageclassification-160206090009.pdfimageclassification-160206090009.pdf
imageclassification-160206090009.pdfKammetaJoshna
 

Similar to Interpretability and Explainable AI (XAI (20)

Interpretability of Convolutional Neural Networks - Xavier Giro - UPC Barcelo...
Interpretability of Convolutional Neural Networks - Xavier Giro - UPC Barcelo...Interpretability of Convolutional Neural Networks - Xavier Giro - UPC Barcelo...
Interpretability of Convolutional Neural Networks - Xavier Giro - UPC Barcelo...
 
Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...
Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...
Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...
 
D1L5 Visualization (D1L2 Insight@DCU Machine Learning Workshop 2017)
D1L5 Visualization (D1L2 Insight@DCU Machine Learning Workshop 2017)D1L5 Visualization (D1L2 Insight@DCU Machine Learning Workshop 2017)
D1L5 Visualization (D1L2 Insight@DCU Machine Learning Workshop 2017)
 
conv_nets.pptx
conv_nets.pptxconv_nets.pptx
conv_nets.pptx
 
Deep Learning Representations for All - Xavier Giro-i-Nieto - IRI Barcelona 2020
Deep Learning Representations for All - Xavier Giro-i-Nieto - IRI Barcelona 2020Deep Learning Representations for All - Xavier Giro-i-Nieto - IRI Barcelona 2020
Deep Learning Representations for All - Xavier Giro-i-Nieto - IRI Barcelona 2020
 
CNN Structure: From LeNet to ShuffleNet
CNN Structure: From LeNet to ShuffleNetCNN Structure: From LeNet to ShuffleNet
CNN Structure: From LeNet to ShuffleNet
 
Convolutional Neural Networks (DLAI D5L1 2017 UPC Deep Learning for Artificia...
Convolutional Neural Networks (DLAI D5L1 2017 UPC Deep Learning for Artificia...Convolutional Neural Networks (DLAI D5L1 2017 UPC Deep Learning for Artificia...
Convolutional Neural Networks (DLAI D5L1 2017 UPC Deep Learning for Artificia...
 
Interpretability of Convolutional Neural Networks - Eva Mohedano - UPC Barcel...
Interpretability of Convolutional Neural Networks - Eva Mohedano - UPC Barcel...Interpretability of Convolutional Neural Networks - Eva Mohedano - UPC Barcel...
Interpretability of Convolutional Neural Networks - Eva Mohedano - UPC Barcel...
 
A Simple Introduction to Neural Information Retrieval
A Simple Introduction to Neural Information RetrievalA Simple Introduction to Neural Information Retrieval
A Simple Introduction to Neural Information Retrieval
 
20141003.journal club
20141003.journal club20141003.journal club
20141003.journal club
 
Deep Learning
Deep LearningDeep Learning
Deep Learning
 
Neural Architectures for Still Images - Xavier Giro- UPC Barcelona 2019
Neural Architectures for Still Images - Xavier Giro- UPC Barcelona 2019Neural Architectures for Still Images - Xavier Giro- UPC Barcelona 2019
Neural Architectures for Still Images - Xavier Giro- UPC Barcelona 2019
 
#4 Convolutional Neural Networks for Natural Language Processing
#4 Convolutional Neural Networks for Natural Language Processing#4 Convolutional Neural Networks for Natural Language Processing
#4 Convolutional Neural Networks for Natural Language Processing
 
Image classification with Deep Neural Networks
Image classification with Deep Neural NetworksImage classification with Deep Neural Networks
Image classification with Deep Neural Networks
 
Deep convnets for global recognition (Master in Computer Vision Barcelona 2016)
Deep convnets for global recognition (Master in Computer Vision Barcelona 2016)Deep convnets for global recognition (Master in Computer Vision Barcelona 2016)
Deep convnets for global recognition (Master in Computer Vision Barcelona 2016)
 
An Evolutionary-based Neural Network for Distinguishing between Genuine and P...
An Evolutionary-based Neural Network for Distinguishing between Genuine and P...An Evolutionary-based Neural Network for Distinguishing between Genuine and P...
An Evolutionary-based Neural Network for Distinguishing between Genuine and P...
 
[OSGeo-KR Tech Workshop] Deep Learning for Single Image Super-Resolution
[OSGeo-KR Tech Workshop] Deep Learning for Single Image Super-Resolution[OSGeo-KR Tech Workshop] Deep Learning for Single Image Super-Resolution
[OSGeo-KR Tech Workshop] Deep Learning for Single Image Super-Resolution
 
Overview of Convolutional Neural Networks
Overview of Convolutional Neural NetworksOverview of Convolutional Neural Networks
Overview of Convolutional Neural Networks
 
Claudio Gallicchio - Deep Reservoir Computing for Structured Data
Claudio Gallicchio - Deep Reservoir Computing for Structured DataClaudio Gallicchio - Deep Reservoir Computing for Structured Data
Claudio Gallicchio - Deep Reservoir Computing for Structured Data
 
imageclassification-160206090009.pdf
imageclassification-160206090009.pdfimageclassification-160206090009.pdf
imageclassification-160206090009.pdf
 

More from Universitat Politècnica de Catalunya

The Transformer in Vision | Xavier Giro | Master in Computer Vision Barcelona...
The Transformer in Vision | Xavier Giro | Master in Computer Vision Barcelona...The Transformer in Vision | Xavier Giro | Master in Computer Vision Barcelona...
The Transformer in Vision | Xavier Giro | Master in Computer Vision Barcelona...Universitat Politècnica de Catalunya
 
Towards Sign Language Translation & Production | Xavier Giro-i-Nieto
Towards Sign Language Translation & Production | Xavier Giro-i-NietoTowards Sign Language Translation & Production | Xavier Giro-i-Nieto
Towards Sign Language Translation & Production | Xavier Giro-i-NietoUniversitat Politècnica de Catalunya
 
Learning Representations for Sign Language Videos - Xavier Giro - NIST TRECVI...
Learning Representations for Sign Language Videos - Xavier Giro - NIST TRECVI...Learning Representations for Sign Language Videos - Xavier Giro - NIST TRECVI...
Learning Representations for Sign Language Videos - Xavier Giro - NIST TRECVI...Universitat Politècnica de Catalunya
 
Generation of Synthetic Referring Expressions for Object Segmentation in Videos
Generation of Synthetic Referring Expressions for Object Segmentation in VideosGeneration of Synthetic Referring Expressions for Object Segmentation in Videos
Generation of Synthetic Referring Expressions for Object Segmentation in VideosUniversitat Politècnica de Catalunya
 
Learn2Sign : Sign language recognition and translation using human keypoint e...
Learn2Sign : Sign language recognition and translation using human keypoint e...Learn2Sign : Sign language recognition and translation using human keypoint e...
Learn2Sign : Sign language recognition and translation using human keypoint e...Universitat Politècnica de Catalunya
 
Convolutional Neural Networks - Xavier Giro - UPC TelecomBCN Barcelona 2020
Convolutional Neural Networks - Xavier Giro - UPC TelecomBCN Barcelona 2020Convolutional Neural Networks - Xavier Giro - UPC TelecomBCN Barcelona 2020
Convolutional Neural Networks - Xavier Giro - UPC TelecomBCN Barcelona 2020Universitat Politècnica de Catalunya
 
Self-Supervised Audio-Visual Learning - Xavier Giro - UPC TelecomBCN Barcelon...
Self-Supervised Audio-Visual Learning - Xavier Giro - UPC TelecomBCN Barcelon...Self-Supervised Audio-Visual Learning - Xavier Giro - UPC TelecomBCN Barcelon...
Self-Supervised Audio-Visual Learning - Xavier Giro - UPC TelecomBCN Barcelon...Universitat Politècnica de Catalunya
 
Attention for Deep Learning - Xavier Giro - UPC TelecomBCN Barcelona 2020
Attention for Deep Learning - Xavier Giro - UPC TelecomBCN Barcelona 2020Attention for Deep Learning - Xavier Giro - UPC TelecomBCN Barcelona 2020
Attention for Deep Learning - Xavier Giro - UPC TelecomBCN Barcelona 2020Universitat Politècnica de Catalunya
 
Generative Adversarial Networks GAN - Xavier Giro - UPC TelecomBCN Barcelona ...
Generative Adversarial Networks GAN - Xavier Giro - UPC TelecomBCN Barcelona ...Generative Adversarial Networks GAN - Xavier Giro - UPC TelecomBCN Barcelona ...
Generative Adversarial Networks GAN - Xavier Giro - UPC TelecomBCN Barcelona ...Universitat Politècnica de Catalunya
 
Q-Learning with a Neural Network - Xavier Giró - UPC Barcelona 2020
Q-Learning with a Neural Network - Xavier Giró - UPC Barcelona 2020Q-Learning with a Neural Network - Xavier Giró - UPC Barcelona 2020
Q-Learning with a Neural Network - Xavier Giró - UPC Barcelona 2020Universitat Politècnica de Catalunya
 
Language and Vision with Deep Learning - Xavier Giró - ACM ICMR 2020 (Tutorial)
Language and Vision with Deep Learning - Xavier Giró - ACM ICMR 2020 (Tutorial)Language and Vision with Deep Learning - Xavier Giró - ACM ICMR 2020 (Tutorial)
Language and Vision with Deep Learning - Xavier Giró - ACM ICMR 2020 (Tutorial)Universitat Politècnica de Catalunya
 
Transcription-Enriched Joint Embeddings for Spoken Descriptions of Images and...
Transcription-Enriched Joint Embeddings for Spoken Descriptions of Images and...Transcription-Enriched Joint Embeddings for Spoken Descriptions of Images and...
Transcription-Enriched Joint Embeddings for Spoken Descriptions of Images and...Universitat Politècnica de Catalunya
 
Object Detection with Deep Learning - Xavier Giro-i-Nieto - UPC School Barcel...
Object Detection with Deep Learning - Xavier Giro-i-Nieto - UPC School Barcel...Object Detection with Deep Learning - Xavier Giro-i-Nieto - UPC School Barcel...
Object Detection with Deep Learning - Xavier Giro-i-Nieto - UPC School Barcel...Universitat Politècnica de Catalunya
 

More from Universitat Politècnica de Catalunya (20)

Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
 
Deep Generative Learning for All
Deep Generative Learning for AllDeep Generative Learning for All
Deep Generative Learning for All
 
The Transformer in Vision | Xavier Giro | Master in Computer Vision Barcelona...
The Transformer in Vision | Xavier Giro | Master in Computer Vision Barcelona...The Transformer in Vision | Xavier Giro | Master in Computer Vision Barcelona...
The Transformer in Vision | Xavier Giro | Master in Computer Vision Barcelona...
 
Towards Sign Language Translation & Production | Xavier Giro-i-Nieto
Towards Sign Language Translation & Production | Xavier Giro-i-NietoTowards Sign Language Translation & Production | Xavier Giro-i-Nieto
Towards Sign Language Translation & Production | Xavier Giro-i-Nieto
 
The Transformer - Xavier Giró - UPC Barcelona 2021
The Transformer - Xavier Giró - UPC Barcelona 2021The Transformer - Xavier Giró - UPC Barcelona 2021
The Transformer - Xavier Giró - UPC Barcelona 2021
 
Learning Representations for Sign Language Videos - Xavier Giro - NIST TRECVI...
Learning Representations for Sign Language Videos - Xavier Giro - NIST TRECVI...Learning Representations for Sign Language Videos - Xavier Giro - NIST TRECVI...
Learning Representations for Sign Language Videos - Xavier Giro - NIST TRECVI...
 
Open challenges in sign language translation and production
Open challenges in sign language translation and productionOpen challenges in sign language translation and production
Open challenges in sign language translation and production
 
Generation of Synthetic Referring Expressions for Object Segmentation in Videos
Generation of Synthetic Referring Expressions for Object Segmentation in VideosGeneration of Synthetic Referring Expressions for Object Segmentation in Videos
Generation of Synthetic Referring Expressions for Object Segmentation in Videos
 
Discovery and Learning of Navigation Goals from Pixels in Minecraft
Discovery and Learning of Navigation Goals from Pixels in MinecraftDiscovery and Learning of Navigation Goals from Pixels in Minecraft
Discovery and Learning of Navigation Goals from Pixels in Minecraft
 
Learn2Sign : Sign language recognition and translation using human keypoint e...
Learn2Sign : Sign language recognition and translation using human keypoint e...Learn2Sign : Sign language recognition and translation using human keypoint e...
Learn2Sign : Sign language recognition and translation using human keypoint e...
 
Convolutional Neural Networks - Xavier Giro - UPC TelecomBCN Barcelona 2020
Convolutional Neural Networks - Xavier Giro - UPC TelecomBCN Barcelona 2020Convolutional Neural Networks - Xavier Giro - UPC TelecomBCN Barcelona 2020
Convolutional Neural Networks - Xavier Giro - UPC TelecomBCN Barcelona 2020
 
Self-Supervised Audio-Visual Learning - Xavier Giro - UPC TelecomBCN Barcelon...
Self-Supervised Audio-Visual Learning - Xavier Giro - UPC TelecomBCN Barcelon...Self-Supervised Audio-Visual Learning - Xavier Giro - UPC TelecomBCN Barcelon...
Self-Supervised Audio-Visual Learning - Xavier Giro - UPC TelecomBCN Barcelon...
 
Attention for Deep Learning - Xavier Giro - UPC TelecomBCN Barcelona 2020
Attention for Deep Learning - Xavier Giro - UPC TelecomBCN Barcelona 2020Attention for Deep Learning - Xavier Giro - UPC TelecomBCN Barcelona 2020
Attention for Deep Learning - Xavier Giro - UPC TelecomBCN Barcelona 2020
 
Generative Adversarial Networks GAN - Xavier Giro - UPC TelecomBCN Barcelona ...
Generative Adversarial Networks GAN - Xavier Giro - UPC TelecomBCN Barcelona ...Generative Adversarial Networks GAN - Xavier Giro - UPC TelecomBCN Barcelona ...
Generative Adversarial Networks GAN - Xavier Giro - UPC TelecomBCN Barcelona ...
 
Q-Learning with a Neural Network - Xavier Giró - UPC Barcelona 2020
Q-Learning with a Neural Network - Xavier Giró - UPC Barcelona 2020Q-Learning with a Neural Network - Xavier Giró - UPC Barcelona 2020
Q-Learning with a Neural Network - Xavier Giró - UPC Barcelona 2020
 
Language and Vision with Deep Learning - Xavier Giró - ACM ICMR 2020 (Tutorial)
Language and Vision with Deep Learning - Xavier Giró - ACM ICMR 2020 (Tutorial)Language and Vision with Deep Learning - Xavier Giró - ACM ICMR 2020 (Tutorial)
Language and Vision with Deep Learning - Xavier Giró - ACM ICMR 2020 (Tutorial)
 
Curriculum Learning for Recurrent Video Object Segmentation
Curriculum Learning for Recurrent Video Object SegmentationCurriculum Learning for Recurrent Video Object Segmentation
Curriculum Learning for Recurrent Video Object Segmentation
 
Deep Self-supervised Learning for All - Xavier Giro - X-Europe 2020
Deep Self-supervised Learning for All - Xavier Giro - X-Europe 2020Deep Self-supervised Learning for All - Xavier Giro - X-Europe 2020
Deep Self-supervised Learning for All - Xavier Giro - X-Europe 2020
 
Transcription-Enriched Joint Embeddings for Spoken Descriptions of Images and...
Transcription-Enriched Joint Embeddings for Spoken Descriptions of Images and...Transcription-Enriched Joint Embeddings for Spoken Descriptions of Images and...
Transcription-Enriched Joint Embeddings for Spoken Descriptions of Images and...
 
Object Detection with Deep Learning - Xavier Giro-i-Nieto - UPC School Barcel...
Object Detection with Deep Learning - Xavier Giro-i-Nieto - UPC School Barcel...Object Detection with Deep Learning - Xavier Giro-i-Nieto - UPC School Barcel...
Object Detection with Deep Learning - Xavier Giro-i-Nieto - UPC School Barcel...
 

Recently uploaded

04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationshipsccctableauusergroup
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFAAndrei Kaleshka
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)jennyeacort
 
办理(Vancouver毕业证书)加拿大温哥华岛大学毕业证成绩单原版一比一
办理(Vancouver毕业证书)加拿大温哥华岛大学毕业证成绩单原版一比一办理(Vancouver毕业证书)加拿大温哥华岛大学毕业证成绩单原版一比一
办理(Vancouver毕业证书)加拿大温哥华岛大学毕业证成绩单原版一比一F La
 
9654467111 Call Girls In Munirka Hotel And Home Service
9654467111 Call Girls In Munirka Hotel And Home Service9654467111 Call Girls In Munirka Hotel And Home Service
9654467111 Call Girls In Munirka Hotel And Home ServiceSapana Sha
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdfHuman37
 
B2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxB2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxStephen266013
 
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...Boston Institute of Analytics
 
Industrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfIndustrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfLars Albertsson
 
ASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel CanterASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel Cantervoginip
 
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...limedy534
 
Customer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxCustomer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxEmmanuel Dauda
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...dajasot375
 
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptdokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptSonatrach
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一fhwihughh
 
DBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfDBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfJohn Sterrett
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPramod Kumar Srivastava
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfSocial Samosa
 

Recently uploaded (20)

04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFA
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
 
办理(Vancouver毕业证书)加拿大温哥华岛大学毕业证成绩单原版一比一
办理(Vancouver毕业证书)加拿大温哥华岛大学毕业证成绩单原版一比一办理(Vancouver毕业证书)加拿大温哥华岛大学毕业证成绩单原版一比一
办理(Vancouver毕业证书)加拿大温哥华岛大学毕业证成绩单原版一比一
 
9654467111 Call Girls In Munirka Hotel And Home Service
9654467111 Call Girls In Munirka Hotel And Home Service9654467111 Call Girls In Munirka Hotel And Home Service
9654467111 Call Girls In Munirka Hotel And Home Service
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf
 
B2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxB2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docx
 
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
 
Industrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfIndustrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdf
 
ASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel CanterASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel Canter
 
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
 
E-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptxE-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptx
 
Customer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxCustomer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptx
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
 
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptdokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
 
Call Girls in Saket 99530🔝 56974 Escort Service
Call Girls in Saket 99530🔝 56974 Escort ServiceCall Girls in Saket 99530🔝 56974 Escort Service
Call Girls in Saket 99530🔝 56974 Escort Service
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
 
DBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfDBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdf
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
 

Interpretability and Explainable AI (XAI

  • 21. Interpretability: Attribution ● Visualization ● Attribution ○ Receptive Field of the Highest Activations ○ Activation Changes after Occlusions ○ Fully Connected into Convolutional layers ○ Class Activation Maps (CAMs) ○ Gradient-based ● Feature visualization 21
  • 22. Attribution Attribution studies what part of an example is responsible for the network activating a particular way. 22 Chris Olah, Alexander Mordvintsev, Ludwig Schubert, “Feature Visualization”. Distill.pub (November 2017)
  • 23. Interpretability: Attribution ● Visualization ● Attribution ○ Receptive Field of the Highest Activations ○ Activation Changes after Occlusions ○ Fully Connected into Convolutional layers ○ Class Activation Maps (CAMs) ○ Gradient-based ● Feature visualization 23
  • 24. Reminder: Receptive Field 24 Receptive field: Part of the input that is visible to a neuron. It increases as we stack more convolutional layers (i.e. neurons in deeper layers have larger receptive fields). André Araujo, Wade Norris, Jack Sim, “Computing Receptive Fields of Convolutional Neural Networks”. Distill.pub 2019.
  • 25. Receptive Field of Highest Activations Girshick et al. Rich feature hierarchies for accurate object detection and semantic segmentation. CVPR 2014 25 “people” “text” Visualize the receptive field of a neuron over those images that activate this neuron the most “Dot arrays”
  • 26. Interpretability: Attribution ● Visualization ● Attribution ○ Receptive Field of the Highest Activations ○ Activation Changes after Occlusions ○ Fully Connected into Convolutional layers ○ Class Activation Maps (CAMs) ○ Guided Backpropagation ● Feature visualization 26
  • 27. Activation Changes after Occlusions (global scale) 1. Iteratively forward the same image through the network, occluding a different region at a time. 2. Keep track of the probability of the correct class with respect to the position of the occluder. Zeiler and Fergus. Visualizing and Understanding Convolutional Networks. ECCV 2014 27
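A minimal sketch of this occlusion experiment in PyTorch (the names model and image, the patch size and the gray fill value are illustrative assumptions, not taken from the paper):

import torch
import torch.nn.functional as F

def occlusion_map(model, image, target_class, patch=32, stride=16, fill=0.5):
    # Slide a gray square over the image; record how much the probability of the
    # target class drops for every position of the occluder.
    model.eval()
    _, H, W = image.shape
    rows = (H - patch) // stride + 1
    cols = (W - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    with torch.no_grad():
        base = F.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        for i in range(rows):
            for j in range(cols):
                y, x = i * stride, j * stride
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = fill
                prob = F.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
                heat[i, j] = base - prob  # high value = occluding here hurts the class
    return heat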
  • 28. Activation Changes after Occlusions (global scale) 28 The changes in activations can be observed in any layer. This allowed identifying some neurons as weak object detectors, trained with image labels. Zhou, Bolei, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Object detectors emerge in deep scene CNNs." ICLR 2015.
  • 29. Activation Changes after Occlusions (pixel scale) 29 Same idea, but label each neuron (unit) by means of an image dataset with pixel-level annotations. Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. Network dissection: Quantifying interpretability of deep visual representations. CVPR 2017.
  • 30. Activation Changes after Occlusions (pixel scale) 30 By measuring the concept that best matches each neuron (unit), Net Dissection can break down the types of concepts represented in a layer. Here the 256 units of AlexNet conv5 trained on Places dataset represent many objects and textures, as well as some scenes, parts, materials, and a color: Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. Network dissection: Quantifying interpretability of deep visual representations. CVPR 2017.
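A toy sketch of the matching step behind Network Dissection, heavily simplified (in the paper the threshold is chosen per unit over the whole dataset and the IoU is accumulated over many images; here act is assumed to be one unit's 2D activation map and mask a binary concept segmentation of the same image):

import torch
import torch.nn.functional as F

def unit_concept_iou(act, mask, quantile=0.995):
    # Threshold the unit's activation map at a high quantile, upsample it to the
    # annotation resolution and compute IoU with the binary concept mask.
    thresh = torch.quantile(act.flatten(), quantile)
    act_up = F.interpolate(act[None, None], size=mask.shape, mode="bilinear",
                           align_corners=False)[0, 0]
    unit_seg = act_up > thresh
    inter = (unit_seg & mask.bool()).sum().float()
    union = (unit_seg | mask.bool()).sum().float()
    return (inter / union.clamp(min=1)).item()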
  • 31. Activation Changes after Occlusions (pixel scale) 31 Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. Network dissection: Quantifying interpretability of deep visual representations. CVPR 2017.
  • 32. Interpretability: Attribution ● Visualization ● Attribution ○ Receptive Field of the Highest Activations ○ Activation Changes after Occlusions ○ Fully Connected into Convolutional layers ○ Class Activation Maps (CAMs) ○ Guided Backpropagation ● Feature visualization 32
  • 33. Fully Connected into Convolutional Layers 33 [Figure: a 3x2x2 input tensor (an RGB image of 2x2) fed either to 2 fully connected neurons (3x2x2 x 2 weights) or to 2 convolutional filters of size 3x2x2, the same size as the input tensor, hence also 3x2x2 x 2 weights.] By redefining fully connected neurons as convolutional neurons...
  • 34. Fully Connected into Convolutional Layers 34 ...a model trained for image classification on low-resolution images can produce a spatial (local) response when fed higher-resolution images. Long, Jonathan, Evan Shelhamer, and Trevor Darrell. "Fully convolutional networks for semantic segmentation." CVPR 2015. (original figure has been modified)
  • 35. Fully Connected into Convolutional Layers Campos, V., Jou, B., & Giro-i-Nieto, X. . From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction. Image and Vision Computing. (2017) 35 The FC to Conv redefinition allows generating heatmaps of the class prediction over the input images.
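The reinterpretation is easy to verify with a small PyTorch sketch (the layer sizes below are illustrative, not those of a specific model): a fully connected classifier over a C×H×W feature tensor is equivalent to a convolution whose kernel spans that whole tensor, so the trained weights can simply be copied across and then applied to larger inputs.

import torch
import torch.nn as nn

C, H, W, num_classes = 512, 7, 7, 1000              # illustrative sizes
fc = nn.Linear(C * H * W, num_classes)              # classifier head trained on 7x7 features

conv = nn.Conv2d(C, num_classes, kernel_size=(H, W))    # one CxHxW filter per class
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(num_classes, C, H, W))
    conv.bias.copy_(fc.bias)

x = torch.randn(1, C, H, W)
assert torch.allclose(fc(x.flatten(1)), conv(x).flatten(1), atol=1e-4)

# On a larger feature map the same head now yields a coarse spatial map of class scores.
scores = conv(torch.randn(1, C, 14, 14))            # shape (1, num_classes, 8, 8)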
  • 36. Interpretability: Attribution ● Visualization ● Attribution ○ Receptive Field of the Highest Activations ○ Activation Changes after Occlusions ○ Fully Connected into Convolutional layers ○ Class Activation Maps (CAMs) ○ Guided Backpropagation ● Feature visualization 36
  • 37. Reminder: Global Average Pooling (GAP) 37 #NiN Lin, Min, Qiang Chen, and Shuicheng Yan. "Network in network." ICLR 2014. Figure: Alexis Cook
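For reference, GAP is just a per-channel spatial mean; a tiny sketch with illustrative tensor sizes:

import torch
import torch.nn as nn

feats = torch.randn(1, 256, 7, 7)                 # (batch, channels, H, W)
gap = feats.mean(dim=(2, 3))                      # (1, 256): one average per channel
same = nn.AdaptiveAvgPool2d(1)(feats).flatten(1)  # equivalent module form
assert torch.allclose(gap, same, atol=1e-6)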
  • 38. 38 Change in a classic CNN+MLP architecture: Replace FC layer after last conv with Global Average Pooling (GAP), which corresponds to averaging per channel. #CAM Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning deep features for discriminative localization." CVPR 2016 densely connected weights Class Activation Maps (CAMs)
  • 39. Class Activation Maps (CAMs) 39 Weighted Fusion of Feature Maps: the classifier weights define the contribution of each channel of the last conv layer to the class prediction (e.g. the weight of the blue channel towards the prediction of "australian terrier" in the figure). #CAM Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning deep features for discriminative localization." CVPR 2016
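A minimal CAM sketch, assuming feats holds the last conv feature maps of one image (C×h×w) and classifier is the linear layer trained on top of their global average pooling (both names are placeholders, not from the paper's code):

import torch
import torch.nn.functional as F

def class_activation_map(feats, classifier, class_idx, out_size):
    # Weight each channel of the last conv features by the classifier weight that
    # connects it to the chosen class, sum over channels and upsample to image size.
    w = classifier.weight[class_idx]                       # (C,)
    cam = torch.einsum("c,chw->hw", w, feats)              # weighted fusion of feature maps
    cam = F.interpolate(cam[None, None], size=out_size, mode="bilinear",
                        align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize for display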
  • 40. 40 #CAM Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning deep features for discriminative localization." CVPR 2016
  • 41. 41 Different architectures generate different CAMs. Jimenez, Albert, Jose M. Alvarez, and Xavier Giro-i-Nieto. "Class-weighted convolutional features for visual instance search." BMVC 2017. Class Activation Maps (CAMs)
  • 42. 42 Applications of CAMs: Rough object localization using only weak, image-level labels (no bounding-box supervision). Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. "Learning deep features for discriminative localization." CVPR 2016 Class Activation Maps (CAMs)
  • 43. 43 Applications of CAMs: Spatial feature weighting for object retrieval. Class Activation Maps (CAMs) Jimenez, Albert, Jose M. Alvarez, and Xavier Giro-i-Nieto. "Class-weighted convolutional features for visual instance search." BMVC 2017.
  • 44. Interpretability: Attribution ● Visualization ● Attribution ○ Receptive Field of the Highest Activations ○ Activation Changes after Occlusions ○ Fully Connected into Convolutional layers ○ Class Activation Maps (CAMs) ○ Guided Backpropagation ● Feature visualization 44
  • 45. Guided Backpropagation Compute the gradient of any neuron w.r.t. the image 1. Forward the image up to the layer we want to interpret (e.g. conv5) 2. Set all gradients to 0 3. Set the gradient of the specific neuron we are interested in to 1 4. Backpropagate to obtain the gradient on the image (the network parameters are not updated). Goal: Visualize the part of an image that most activates one of the neurons. 45
  • 46. Guided Backpropagation 46 [Figure: toy computational graph ŷ = σ(w1·x1 + w2·x2 + b), with forward values and the backward pass dŷ/dw1, dŷ/dx1, dŷ/dw2, dŷ/dx2, dŷ/db, starting from dŷ/dŷ = 1. Example extracted from Andrej Karpathy's notes for CS231n, Stanford University.] During NN training we are normally interested only in the gradients with respect to the weights (wi) and biases (b), not the inputs (xi): the weights are the parameters our models learn.
  • 47. Guided Backpropagation 47 [Same figure.] In this approach it is the other way round: we are interested in the gradient with respect to the inputs (xi), not in updating the weights (wi) and biases (b), because the input image is what we want to interpret.
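The procedure above can be sketched in PyTorch with backward hooks (model, image and target_class are assumptions; any in-place ReLUs are switched off first). The only difference with a plain gradient-of-class-w.r.t.-image is the guided rule: when backpropagating through a ReLU, negative gradients are zeroed as well.

import torch
import torch.nn as nn

def guided_backprop(model, image, target_class):
    # Gradient of the class score w.r.t. the input image, with the guided rule:
    # only positive gradients are allowed to flow back through each ReLU.
    model.eval()
    hooks = []
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            m.inplace = False
            hooks.append(m.register_full_backward_hook(
                lambda mod, grad_in, grad_out: (torch.clamp(grad_in[0], min=0),)))
    x = image.clone().unsqueeze(0).requires_grad_(True)
    model(x)[0, target_class].backward()
    for h in hooks:
        h.remove()
    return x.grad[0]          # same shape as the image; visualize its magnitude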
  • 48. Guided Backpropagation Springenberg, J. T., Dosovitskiy, A., Brox, T., & Riedmiller, M. Striving for Simplicity: The All Convolutional Net. ICLR 2015 48 Goal: Visualize the part of an image that most activates one of the neurons.
  • 49. Guided Backpropagation Springenberg, J. T., Dosovitskiy, A., Brox, T., & Riedmiller, M. Striving for Simplicity: The All Convolutional Net. ICLR 2015 49
  • 50. Grad-CAM 50 #Grad-CAM Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. ICCV 2017 [video] Same idea as Class Activation Maps (CAMs), but: ● No modifications required to the network ● Each feature map is weighted by the spatially averaged gradient of the class score w.r.t. that channel
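A hedged Grad-CAM sketch using forward and backward hooks (model, image and target_layer, typically the last conv block, are assumptions about the user's setup):

import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    # Capture the target layer's activations and the gradient of the class score
    # w.r.t. them, average the gradient per channel, and use it to weight the maps.
    store = {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))
    model.eval()
    score = model(image.unsqueeze(0))[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)        # GAP of the gradients
    cam = F.relu((weights * store["act"]).sum(dim=1))[0].detach() # (h, w)
    cam = F.interpolate(cam[None, None], size=image.shape[1:], mode="bilinear",
                        align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)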
  • 51. Grad-CAM 51 #Grad-CAM Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. ICCV 2017 [video]
  • 52. 52 #Grad-CAM Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. ICCV 2017 [video]
  • 53. Grad-CAM 53 Belén del Río, Marc Boher, Borja Velasco, Toni Serena. Advised by Santi Puch., “FastConvNet”. UPC School 2020
  • 54. Interpretability: Attribution ● Visualization ● Attribution ● Feature visualization 54
  • 55. Feature Visualization 55 Feature visualization answers questions about what a network — or parts of a network — are looking for by generating examples. Chris Olah, Alexander Mordvintsev, Ludwig Schubert, “Feature Visualization”. Distill.pub (November 2017)
  • 56. Feature Visualization Goal: Generate an image that maximizes the activation of a neuron. 1. Forward a random image 2. Set the gradient of the neuron to be [0,0,0…,1,...,0,0] 3. Backprop to get the gradient on the image 4. Update the image (small step in the gradient direction) 5. Repeat 56 K. Simonyan, A. Vedaldi, A. Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, ICLR 2014
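A bare-bones sketch of this loop (model and class_idx are assumptions; in practice the regularizers, jitter and transformations discussed in the Distill article are needed to obtain recognizable images rather than noise-like patterns):

import torch

def activation_maximization(model, class_idx, steps=200, lr=0.05, img_size=224):
    # Gradient ascent on the input: start from noise and repeatedly move the image
    # in the direction that increases the chosen class score.
    model.eval()
    img = torch.randn(1, 3, img_size, img_size, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr, weight_decay=1e-4)   # weight decay = crude L2 prior
    for _ in range(steps):
        opt.zero_grad()
        score = model(img)[0, class_idx]
        (-score).backward()          # maximize the score by minimizing its negative
        opt.step()
    return img.detach()[0]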
  • 57. 57 K. Simonyan, A. Vedaldi, A. Zisserman, Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, ICLR 2014 Feature Visualization Example: Specific case of a neuron from the last layer, which is associated with a semantic class.
  • 58. Feature Visualization 58 Chris Olah, Alexander Mordvintsev, Ludwig Schubert, “Feature Visualization”. Distill.pub (2017) [Google Colab]
  • 59. Feature Visualization 59 Chris Olah, Alexander Mordvintsev, Ludwig Schubert, “Feature Visualization”. Distill.pub (2017) [Google Colab]
  • 60. Activation Atlases 60 Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, Chris Olah, “Exploring Neural Networks with Activation Atlases”. Distill.pub (March 2019) Similar to t-SNE CNN codes, but instead of showing input data, activation atlases show feature visualizations of averaged activations located in each grid cell.
  • 61. Activation Atlases 61 Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, Chris Olah, “Exploring Neural Networks with Activation Atlases”. Distill.pub (March 2019) For each grid cell, the ImageNet classes of the contributing activations are ranked:
  • 62. 62 Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, Chris Olah, “Exploring Neural Networks with Activation Atlases”. Distill.pub (March 2019)
  • 63. Activation Atlas + Links = OpenAI Microscope 63 Ludwig Schubert, Michael Petrov, Shan Carter, Nick Cammarata, Gabriel Goh, Chris Olah, OpenAI Microscope 2020 For each grid cell, the ImageNet classes of the contributing activations are ranked:
  • 65. Bonus: Adversarial Attacks (eg. patch on sticker) Brown, Tom B., Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. "Adversarial patch." arXiv preprint arXiv:1712.09665 (2017). [Google Colab] 65 Goal: Generate a patch P that will make a Neural Network always predict the same class for any image containing the patch.
  • 66. Brown, Tom B., Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. "Adversarial patch." arXiv preprint arXiv:1712.09665 (2017). Sachin Joglekar, “Adversarial patches for CNNs explained” (2018) 66 How: Backpropagate the gradient from the desired class to images where the patch has been inserted. Bonus: Adversarial Attacks (eg. patch on sticker)
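A simplified training step under those assumptions (model, a batch imgs and target_class are placeholders; the real method also randomizes the patch's scale, rotation and brightness):

import torch
import torch.nn.functional as F

def patch_step(model, patch, imgs, target_class, opt):
    # Paste the (clamped) patch at a random location in every image of the batch
    # and update the patch pixels so the predictions move towards the target class.
    size = patch.shape[-1]
    x = imgs.clone()
    y0 = torch.randint(0, x.shape[2] - size + 1, (1,)).item()
    x0 = torch.randint(0, x.shape[3] - size + 1, (1,)).item()
    x[:, :, y0:y0 + size, x0:x0 + size] = patch.clamp(0, 1)
    target = torch.full((x.shape[0],), target_class, dtype=torch.long)
    loss = F.cross_entropy(model(x), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Typical usage: patch = torch.rand(3, 50, 50, requires_grad=True)
#                opt = torch.optim.Adam([patch], lr=0.1)
#                then call patch_step repeatedly on batches from an image loader.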
  • 67. Brown, Tom B., Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. "Adversarial patch." arXiv preprint arXiv:1712.09665 (2017). 67
  • 68. Xu, K., Zhang, G., Liu, S., Fan, Q., Sun, M., Chen, H., ... & Lin, X. (2019). Evading Real-Time Person Detectors by Adversarial T-shirt. arXiv preprint arXiv:1910.11099. 68 Bonus: Adversarial Attacks (eg. T-shirts for Privacy)
  • 69. Feature Visualization: Generative prior 69 Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks." NIPS 2016. 1) A Deep Generator Network (DGN) is trained to invert from features to image.
  • 70. Feature Visualization: Generative prior 70 Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks." NIPS 2016. 1) A Deep Generator Network (DGN) is trained to invert from features to image. 2) A DNN to visualize is chosen.
  • 71. Feature Visualization: Generative prior 71 Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks." NIPS 2016. 3) Fixing the parameters of both the DGN and the DNN, a feature code is optimized with backprop to maximize a class (e.g. candle).
  • 72. Feature Visualization: Generative prior 72 Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks." NIPS 2016. 4) The image corresponding to the optimized feature code is synthesized with the DGN.
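These four steps can be condensed into a short optimization sketch (generator, classifier and code_dim are assumptions standing in for a pretrained DGN and the DNN under study; both stay frozen and only the feature code is updated):

import torch

def synthesize_preferred_input(generator, classifier, class_idx, code_dim,
                               steps=300, lr=0.05):
    # Optimize a feature code so that the image produced by the frozen generator
    # maximizes the chosen class unit of the frozen classifier.
    generator.eval(); classifier.eval()
    code = torch.randn(1, code_dim, requires_grad=True)
    opt = torch.optim.Adam([code], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        img = generator(code)                         # code -> image (DGN)
        score = classifier(img)[0, class_idx]         # class unit to maximize (DNN)
        (-score + 1e-3 * code.norm()).backward()      # small norm penalty keeps the code in range
        opt.step()
    return generator(code).detach()[0]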
  • 73. 73 Nguyen, Anh, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks." NIPS 2016. Feature Visualization: Generative prior
  • 74. Interpretability for PyTorch 74 Software: Captum (multiple examples)
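As an orientation, a minimal Captum call might look like this (the toy network and random input are placeholders to make the calls concrete; Captum exposes occlusion, CAM-style and gradient-based attribution methods behind the same attribute interface):

import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, Occlusion

# Toy network and input; swap in your own model and preprocessed images.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)).eval()
x = torch.rand(1, 3, 64, 64)
target = model(x).argmax(dim=1)

ig = IntegratedGradients(model)
attr_ig = ig.attribute(x, target=target)              # gradient-based attribution

occ = Occlusion(model)
attr_occ = occ.attribute(x, target=target,
                         sliding_window_shapes=(3, 16, 16), strides=(3, 8, 8))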
  • 75. Interpretability for PyTorch 75 Software: Lucent, based on Kornia [tweet]
  • 77. Additional material 77 Adebayo, Julius, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. "Sanity checks for saliency maps." NeurIPS 2018 #HINT Selvaraju, Ramprasaath R., Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, and Devi Parikh. "Taking a hint: Leveraging explanations to make vision and language models more grounded." ICCV 2019. [blog]. #Ablation-CAM Ramaswamy, Harish Guruprasad. "Ablation-CAM: Visual Explanations for Deep Convolutional Network via Gradient-free Localization." WACV 2020. Chris Olah et al., "An Overview of Early Vision in InceptionV1". Distill.pub, 2020. Rebuffi, S. A., Fong, R., Ji, X., & Vedaldi, A. There and Back Again: Revisiting Backpropagation Saliency Methods. CVPR 2020. [tweet] Bau, D., Zhu, J. Y., Strobelt, H., Lapedriza, A., Zhou, B., & Torralba, A. Understanding the role of individual units in a deep neural network. PNAS 2020. [tweet] Oana-Maria Camburu, "Explaining Deep Neural Networks". Oxford VGG 2020.