[course site]
Visualization
#DLUPC
Amaia Salvador
amaia.salvador@upc.edu
PhD Candidate
Universitat Politècnica de Catalunya
Visualization
● Learned weights
● Activations from data
● Gradient-based
● Optimization-based
● DeepDream
● Neural Style
Visualize Learned Weights
AlexNet
conv1
Filters are only “interpretable” in the first layer
Visualize Learned Weights
layer 2 weights
layer 3 weights
Source: ConvnetJS
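As a rough illustration of how such filter grids are produced, here is a minimal sketch assuming a torchvision AlexNet with pretrained ImageNet weights (not necessarily the exact model behind the slide figures):

# Sketch: dump the first-layer conv filters as an image grid.
import torchvision
from torchvision.utils import make_grid, save_image

model = torchvision.models.alexnet(pretrained=True)   # older torchvision API; newer versions use weights=...
w = model.features[0].weight.data.clone()             # conv1 filters, shape [64, 3, 11, 11]
w = (w - w.min()) / (w.max() - w.min())               # normalize to [0, 1] for display
save_image(make_grid(w, nrow=8, padding=1), 'conv1_filters.png')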
Visualization
● Learned weights
● Activations from data
● Gradient-based
● Optimization-based
● DeepDream
● Neural Style
Activations from data
Example activation maps from conv1 and conv5.
Receptive Field
Girshick et al. Rich feature hierarchies for accurate object detection and semantic segmentation. CVPR 2014
Visualize the receptive field of a neuron on the images that activate it the most.
Reminder: Receptive Field
Receptive field: the part of the input that is visible to a neuron. It increases as we stack more convolutional layers (i.e. neurons in deeper layers have larger receptive fields).
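As a small worked example of this growth, here is a sketch that tracks the receptive field size across a stack of conv layers; the (kernel, stride) pairs are illustrative, not a specific network's configuration:

# Sketch: receptive field of a neuron after a stack of conv layers.
def receptive_field(layers):
    r, jump = 1, 1                    # start from a single input pixel
    for k, s in layers:               # layers = [(kernel_size, stride), ...]
        r += (k - 1) * jump           # the field grows by (k-1) steps of the current jump
        jump *= s                     # jump = distance between neighboring outputs, in input pixels
    return r

print(receptive_field([(11, 4)]))                   # 11 (one layer sees only its own kernel)
print(receptive_field([(11, 4), (5, 1), (3, 2)]))   # 35 (deeper neurons see much more of the input)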
Occlusion experiments
1. Iteratively forward the same image through the network, occluding a different region each time.
2. Keep track of the probability of the correct class w.r.t. the position of the occluder (sketched below).
Zeiler and Fergus. Visualizing and Understanding Convolutional Networks. ECCV 2014
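A minimal sketch of this loop, assuming `model` is any image classifier and `img` an already preprocessed [1, 3, H, W] tensor; patch size, stride and fill value are arbitrary choices:

# Sketch: occlusion experiment; heatmap of correct-class probability vs. occluder position.
import torch

def occlusion_map(model, img, target_class, patch=32, stride=16, fill=0.0):
    model.eval()
    _, _, H, W = img.shape
    rows = (H - patch) // stride + 1
    cols = (W - patch) // stride + 1
    heatmap = torch.zeros(rows, cols)
    with torch.no_grad():
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = img.clone()
                occluded[:, :, y:y+patch, x:x+patch] = fill        # mask out one region
                prob = torch.softmax(model(occluded), dim=1)[0, target_class]
                heatmap[i, j] = prob                               # low prob => region was important
    return heatmap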
Occlusion experiments
Can be applied to any arbitrary layer, with “manual” label assignment (AMT).
Zhou et al. Object detectors emerge in deep scene CNNs. ICLR 2015
Network Dissection
Bau, Zhou et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations. CVPR 2017
Same idea, but with automatic unit labeling using a densely labeled dataset: the thresholded activation of each conv unit in the network is evaluated as a semantic segmentation.
Network Dissection
Bau, Zhou et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations. CVPR 2017
Visualization
● Learned weights
● Activations from data
● Gradient-based
● Optimization-based
● DeepDream
● Neural Style
Gradient-based approach
Compute the gradient of any neuron w.r.t. the image
Visualize the part of an image that most activates a neuron (a code sketch follows the steps):
1. Forward the image up to the desired layer (e.g. conv5)
2. Set all gradients to 0
3. Set the gradient for the neuron we are interested in to 1
4. Backpropagate to get the reconstructed image (the gradient on the input image)
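A minimal sketch of these four steps with plain backpropagation, assuming a torchvision AlexNet and an arbitrarily chosen conv5 unit; guided backprop (Springenberg et al., next slide) would additionally zero out negative gradients at every ReLU, which is omitted here:

# Sketch: gradient of a single conv5 neuron w.r.t. the input image.
import torch
import torchvision

model = torchvision.models.alexnet(pretrained=True).eval()
acts = {}
model.features[10].register_forward_hook(lambda m, i, o: acts.update(out=o))   # conv5 output

img = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a real preprocessed image
model(img)                                              # 1. forward
grad = torch.zeros_like(acts['out'])                    # 2. all gradients set to 0
grad[0, 42, 6, 6] = 1.0                                 # 3. gradient 1 for the chosen neuron (unit 42, position 6,6)
acts['out'].backward(grad)                              # 4. backprop: img.grad is the "reconstruction"
saliency = img.grad.abs().max(dim=1)[0]                 # collapse channels for display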
Gradient-based approach
Springenberg, Dosovitskiy, et al. Striving for Simplicity: The All Convolutional Net. ICLR 2015
Visualization
● Learned weights
● Activations from data
● Gradient-based
● Optimization-based
● DeepDream
● Neural Style
Optimization approach
Simonyan et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2014
Obtain the image that maximizes a class score (or a neuron activation):
1. Forward a random image
2. Set the gradient of the scores vector to [0, 0, 0, …, 1, …, 0, 0]
3. Backprop to get the gradient on the image
4. Update the image (small step in the gradient direction)
5. Repeat (a code sketch follows below)
Optimization approach
Simonyan et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2014
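A minimal sketch of this gradient-ascent loop, assuming a torchvision AlexNet and an arbitrary target class; the image regularization used in the paper (e.g. L2 decay) is left out for brevity:

# Sketch: synthesize an image that maximizes one class score.
import torch
import torchvision

model = torchvision.models.alexnet(pretrained=True).eval()
img = torch.zeros(1, 3, 224, 224, requires_grad=True)    # 1. start from a blank (or random) image
target = 130                                              # arbitrary ImageNet class index

for _ in range(200):                                      # 5. repeat
    scores = model(img)
    scores[0, target].backward()                          # 2-3. one-hot gradient on the scores, backprop to the image
    with torch.no_grad():
        img += 0.5 * img.grad / (img.grad.norm() + 1e-8)  # 4. small step in the gradient direction
        img.grad.zero_()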
Visualization
● Learned weights
● Activations from data
● Gradient-based
● Optimization-based
● DeepDream
● Neural Style
DeepDream
https://github.com/google/deepdream
DeepDream
1. Forward image up to some layer (e.g. conv5)
2. Set the gradients to equal the layer activations
3. Backprop to get the gradient on the image
4. Update the image (small step in the gradient direction)
5. Repeat
At each iteration, the image is updated to boost all the features that activated in that layer in the forward pass (see the sketch below).
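A minimal sketch of this loop, assuming a torchvision VGG-16 and an arbitrary layer choice; `load_preprocessed_image()` is a hypothetical helper standing in for real image loading. Setting the backward gradient equal to the activations is the same as gradient ascent on the squared L2 norm of that layer's response:

# Sketch: a bare-bones DeepDream update loop.
import torch
import torchvision

model = torchvision.models.vgg16(pretrained=True).eval()
acts = {}
model.features[28].register_forward_hook(lambda m, i, o: acts.update(out=o))   # a deep conv layer

img = load_preprocessed_image()            # hypothetical helper returning a plain [1, 3, H, W] tensor
img.requires_grad_(True)

for _ in range(20):                                       # 5. repeat
    model(img)                                            # 1. forward
    acts['out'].backward(acts['out'].detach())            # 2-3. gradient at the layer = its own activations
    with torch.no_grad():
        img += 1.5 * img.grad / (img.grad.abs().mean() + 1e-8)   # 4. boost whatever already fired
        img.grad.zero_()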
DeepDream
More examples here
Visualization
● Learned weights
● Activations from data
● Gradient-based
● Optimization-based
● DeepDream
● Neural Style
Neural Style
Style image Content image Result
Gatys et al. Image Style Transfer Using Convolutional Neural Networks. CVPR 2016
Neural Style
Extract the raw activations at all layers; these activations represent the content of the image.
Gatys et al. Image Style Transfer Using Convolutional Neural Networks. CVPR 2016
Neural Style
● Activations are also extracted from the style image at all layers.
● Instead of the raw activations, Gram matrices (G) are computed at each layer to represent the style. E.g. at conv5, the [13x13x256] activation volume is reshaped into a matrix V of size [169x256], and G = VᵀV.
The Gram matrix G gives the correlations between filter responses (see the sketch below).
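A minimal sketch of this computation, using random activations with the [13x13x256] conv5 shape from the slide:

# Sketch: Gram matrix of one layer's activations.
import torch

act = torch.randn(256, 13, 13)        # stand-in for conv5 activations [channels, H, W]
V = act.reshape(256, -1).t()          # V has shape [169, 256]: spatial positions x filters
G = V.t() @ V                         # G = V^T V, shape [256, 256]: filter-response correlations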
Neural Style
Match the raw activations from the content image (content loss) and the Gram matrices from the style image (style loss); the combined objective is sketched below.
Gatys et al. Image Style Transfer Using Convolutional Neural Networks. CVPR 2016
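A sketch of how the two matching terms are usually combined into a single objective that is minimized over the generated image; the layer names and weights below are illustrative, and the per-layer feature dictionaries are assumed to have been extracted beforehand:

# Sketch: combined content + style loss for the generated image.
import torch.nn.functional as F

def gram(act):
    c = act.shape[0]
    V = act.reshape(c, -1).t()
    return V.t() @ V                                     # as in the previous sketch

def total_loss(gen_feats, content_feats, style_feats,
               content_layers=('conv4_2',),
               style_layers=('conv1_1', 'conv2_1', 'conv3_1'),
               alpha=1.0, beta=1e3):
    content = sum(F.mse_loss(gen_feats[l], content_feats[l]) for l in content_layers)        # match activations
    style = sum(F.mse_loss(gram(gen_feats[l]), gram(style_feats[l])) for l in style_layers)  # match Gram matrices
    return alpha * content + beta * style               # the generated image is updated by gradient descent on this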
Neural Style
Gatys et al. Image Style Transfer Using Convolutional Neural Networks. CVPR 2016
A year later...
Content Image Style Image Result
Luan et al. Deep Photo Style Transfer. arXiv Apr 2017
Visualization
● Learned weights
● Activations from data
● Gradient-based
● Optimization-based
● DeepDream
● Neural Style
Resources
● ConvnetJS
● Deepvis toolbox
● DrawNet from MIT: Visualize strong activations & connections between units
● 3D Visualization of a Convolutional Neural Network
● Latest NeuralStyle in torch
● Keras examples:
○ Optimization-based visualization example in Keras
○ DeepDream in Keras
○ NeuralStyle in Keras
● Picasso: visualization tool (supports Keras models; use it in your projects!)
○ Blog post to convince you here
Questions?
t-SNE
Embed high-dimensional data points (i.e. feature codes) so that pairwise distances are preserved in local neighborhoods.
van der Maaten & Hinton. Visualizing Data using t-SNE. Journal of Machine Learning Research (2008).
t-SNE
Can be used with the features from the layer before the classifier (see the sketch below).
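A minimal sketch using scikit-learn, assuming the feature codes from that layer have already been extracted and saved (the filename is hypothetical):

# Sketch: 2-D t-SNE embedding of penultimate-layer features.
import numpy as np
from sklearn.manifold import TSNE

features = np.load('fc7_features.npy')                   # [N, D] array of feature codes (hypothetical file)
coords = TSNE(n_components=2, perplexity=30).fit_transform(features)
# `coords` can now be scattered, or thumbnails pasted at each 2-D position, to inspect the feature space.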
Optimization approach
Bojarski et al. VisualBackProp: efficient visualization of CNNs. arXiv May 2017
Value-based backpropagation
Course site: https://telecombcn-dl.github.io/2017-dlcv/

Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
