Deep Learning for Computer Vision: Deep Networks (UPC 2016)


http://imatge-upc.github.io/telecombcn-2016-dlcv/

Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks that until now had been addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand-new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or text captioning.

Published in: Engineering

  1. Day 1 Lecture 3: Deep Networks. Elisa Sayrol [course site]
  2. From Neurons to Convolutional Neural Networks. Figures Credit: Hugo Larochelle NN course
  3. From Neurons to Convolutional Neural Networks. Hidden pre-activation, hidden activation, output activation. Activation function g: sigmoid, tanh, or ReLU. Output activation function o: softmax (see the numpy sketch after this list). Figure Credit: Hugo Larochelle NN course
  4. From Neurons to Convolutional Neural Networks. L hidden layers: hidden pre-activation (k > 0), hidden activation (k = 1, …, L), output activation (k = L+1); a forward-pass sketch follows this list. Slide Credit: Hugo Larochelle NN course
  5. From Neurons to Convolutional Neural Networks. What if the input is all the pixels within an image?
  6. From Neurons to Convolutional Neural Networks. For a 200x200 image, we have 4x10^4 neurons, each one with 4x10^4 inputs; that is 16x10^8 parameters, only for one layer! Figure Credit: Ranzato
  7. From Neurons to Convolutional Neural Networks. For a 200x200 image, we have 4x10^4 neurons, each one with 10x10 “local connections” (also called a receptive field) as inputs; that is 4x10^6 parameters. What else can we do to reduce the number of parameters? Figure Credit: Ranzato
  8. From Neurons to Convolutional Neural Networks. Translation invariance: we can use the same parameters to capture a specific “feature” in any area of the image, and we can try different sets of parameters to capture different features. These operations are equivalent to performing convolutions with different filters. Example: with 100 different filters (or feature extractors) of size 10x10, the number of parameters is 10^4 (the counts for slides 6–8 are checked in a snippet after this list). That is why they are called Convolutional Neural Networks (ConvNets or CNNs). Figure Credit: Ranzato
  9. From Neurons to Convolutional Neural Networks. …and don’t forget the activation function! ReLU, PReLU. Figure Credit: Ranzato
  10. From Neurons to Convolutional Neural Networks. Most ConvNets use pooling (or subsampling) to reduce dimensionality and provide invariance to small local changes. Pooling options: max, average, or stochastic pooling (a pooling sketch follows this list). Figure Credit: Ranzato
  11. From Neurons to Convolutional Neural Networks. Padding (P): when computing the convolution at the borders, you may add values around the image so the convolution can still be computed. When the added values are zero, which is quite common, the technique is called zero-padding. When padding is not used, the output size is reduced. FxF = 3x3
  12. From Neurons to Convolutional Neural Networks. Padding (P), continued: the same idea illustrated with a larger filter, FxF = 5x5
  13. From Neurons to Convolutional Neural Networks. Stride (S): when doing the convolution or another operation, like pooling, we may decide to slide not pixel by pixel but every 2 or more pixels. The step size, in pixels, is the value of the stride. It can be used to reduce the dimensionality of the output (the output-size formula is sketched after this list).
  14. From Neurons to Convolutional Neural Networks. Example: most ConvNets contain several convolutional layers, interspersed with pooling layers and followed by a small number of fully connected layers. A layer is characterized by its width, height, and depth (that is, the number of filters used to generate the feature maps); an architecture is characterized by the number of layers. Example: LeNet-5, from LeCun ’98 (a code sketch follows this list).
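
Slide 3 names the usual activation and output functions without their formulas. A minimal numpy sketch of those functions (my own illustrative implementations, not code from the course):

import numpy as np

def sigmoid(a):
    # g(a) = 1 / (1 + exp(-a)); squashes pre-activations into (0, 1)
    return 1.0 / (1.0 + np.exp(-a))

def tanh(a):
    # g(a) = tanh(a); squashes pre-activations into (-1, 1)
    return np.tanh(a)

def relu(a):
    # g(a) = max(0, a)
    return np.maximum(0.0, a)

def softmax(a):
    # o(a)_c = exp(a_c) / sum_i exp(a_i); the max is subtracted for numerical stability
    e = np.exp(a - np.max(a))
    return e / e.sum()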
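
Slide 4 stacks L hidden layers; a minimal numpy sketch of that forward pass (the layer sizes and the ReLU choice for g are illustrative assumptions, not taken from the slide):

import numpy as np

def forward(x, weights, biases):
    # Layer k computes pre-activation a(k) = b(k) + W(k) h(k-1) and activation
    # h(k) = g(a(k)), with h(0) = x; the last layer (k = L+1) uses a softmax output.
    h = x
    for k, (W, b) in enumerate(zip(weights, biases), start=1):
        a = b + W @ h                    # hidden pre-activation
        if k < len(weights):
            h = np.maximum(0.0, a)       # hidden activation (ReLU here)
        else:
            e = np.exp(a - np.max(a))    # output activation (softmax)
            h = e / e.sum()
    return h

# Example: L = 2 hidden layers, 4-dimensional input, 3 output classes
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]
weights = [0.1 * rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(rng.standard_normal(4), weights, biases))   # probabilities summing to 1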
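
The parameter counts on slides 6–8 follow from simple arithmetic; a quick check (biases ignored, as on the slides):

# 200x200 input, one layer, one output neuron per pixel
n_pixels = 200 * 200                      # 4x10^4 input pixels / output neurons
fully_connected = n_pixels * n_pixels     # 16x10^8 weights: every neuron sees every pixel
locally_connected = n_pixels * (10 * 10)  # 4x10^6 weights: each neuron sees a 10x10 receptive field
conv_100_filters = 100 * (10 * 10)        # 10^4 weights: 100 shared 10x10 filters
print(fully_connected, locally_connected, conv_100_filters)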
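
Slide 10 lists the pooling options; a minimal numpy sketch of non-overlapping 2x2 max and average pooling on a single feature map (stochastic pooling omitted; an illustration, not the course code):

import numpy as np

def pool2x2(fmap, mode="max"):
    # Non-overlapping 2x2 pooling; assumes the feature map has even height and width.
    h, w = fmap.shape
    blocks = fmap.reshape(h // 2, 2, w // 2, 2)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))          # average pooling

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(pool2x2(fmap, "max"))                  # 4x4 feature map -> 2x2 output
print(pool2x2(fmap, "average"))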
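
Slides 11–13 describe padding and stride without the output-size formula; the standard relation for an NxN input, FxF filter, padding P and stride S is output = (N - F + 2P)/S + 1 per spatial dimension. A small sketch (the example sizes are assumptions for illustration):

def output_size(n, f, p=0, s=1):
    # (N - F + 2P) / S + 1, assuming square inputs and filters
    assert (n - f + 2 * p) % s == 0, "filter placement does not tile the input evenly"
    return (n - f + 2 * p) // s + 1

print(output_size(32, 5))             # 28: a 5x5 filter with no padding shrinks the output
print(output_size(32, 3, p=1))        # 32: zero-padding P=1 preserves the size for a 3x3 filter
print(output_size(32, 2, p=0, s=2))   # 16: 2x2 pooling with stride 2 halves each dimension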
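
Slide 14 uses LeNet-5 (LeCun ’98) as the example architecture. A minimal PyTorch sketch in that spirit: the filter counts and sizes follow the original paper, but ReLU and max pooling stand in for the original tanh units and trainable subsampling, so treat this as an illustration rather than a faithful reproduction:

import torch
import torch.nn as nn

class LeNetLike(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # 1x32x32 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                   # 6x28x28 -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),   # 6x14x14 -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                   # 16x10x10 -> 16x5x5
        )
        self.classifier = nn.Sequential(       # a small number of fully connected layers
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

print(LeNetLike()(torch.zeros(1, 1, 32, 32)).shape)   # torch.Size([1, 10])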
