
Perceptrons (D1L2 2017 UPC Deep Learning for Computer Vision)

https://telecombcn-dl.github.io/2017-dlcv/

Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks that were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks, and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection, or image captioning.



  1. [course site] #DLUPC Perceptrons, Day 1 Lecture 2. Kevin McGuinness (kevin.mcguinness@dcu.ie), Research Fellow, Insight Centre for Data Analytics, Dublin City University
  2. Outline: 1. Supervised learning: regression/classification. 2. Single neuron models (perceptrons): a. linear regression; b. logistic regression; c. multiple outputs and softmax regression. 3. Multi-layered perceptrons
  3. Types of machine learning. We can categorize three types of learning procedures: 1. Supervised learning: y = ƒ(x). 2. Unsupervised learning: ƒ(x). 3. Reinforcement learning: y = ƒ(x). In supervised learning we have a labeled dataset with pairs (x, y), e.g. classify a signal window as containing speech or not: x1 = [x(1), x(2), …, x(T)], y1 = “no”; x2 = [x(T+1), …, x(2T)], y2 = “yes”; x3 = [x(2T+1), …, x(3T)], y3 = “yes”; … (a dataset-building sketch follows the slide listing)
  4. Supervised learning. Fit a function y = ƒ(x), x ∈ ℝᵐ, given paired training examples {(xi, yi)}. Key point: generalize well to unseen examples. Depending on the type of target we get: Regression: y ∈ ℝᴺ is continuous (e.g. temperatures y = {19º, 23º, 22º}). Classification: y is discrete (e.g. y = {1, 2, 5, 2, 2}). Beware: these are unordered categories, not numerically meaningful outputs, e.g. code[1] = “dog”, code[2] = “cat”, code[5] = “ostrich”, … (a least-squares sketch follows the slide listing)
  5. Black box abstraction of supervised learning
  6. Regression
  7. Regression example
  8. Classification
  9. Classification example
  10. Single neuron model (perceptron)
  11. Biological inspiration
  12. Biological inspiration
  13. Single neuron model (perceptron)
  14. Ingredients
  15. Linear regression
  16. Logistic regression (a single-neuron training sketch follows the slide listing)
  17. Multiple outputs
  18. Multiple classes
  19. Multi-class classification with softmax regression. The softmax activation function is the analogue of the sigmoid for more than two classes. The maximum-likelihood loss function is the categorical cross-entropy. (a numpy sketch follows the slide listing)
  20. Effect of the softmax
  21. Non-linear decision boundaries. Perceptrons can only produce linear decision boundaries, but many interesting problems are not linearly separable. Real-world problems often need non-linear boundaries: images, audio, text. (an XOR sketch follows the slide listing)
  22. Multi-layer perceptrons
  23. Deep learning vs shallow learning. Old-style machine learning: engineer features (by some unspecified method), create a representation (descriptor), train a shallow classifier on the representation. Example: SIFT features (engineered), BoW representation (engineered + unsupervised learning), SVM classifier (convex optimization). Deep learning: learn layers of features, representation, and classifier in one go based on the data alone; primary methodology: deep neural networks (non-convex). [Diagram: shallow pipeline, pixels → low-level features (SIFT, engineered) → representation (BoW/VLAD/Fisher, unsupervised) → classifier (SVM, supervised), versus deep pipeline, pixels → convolutional neural network (supervised end to end)]
  24. Example: feature engineering in computer vision. [Figure: hand-crafted feature extraction ϕ(x)]
  25. Deep nets. Just a neural network with several hidden layers. Often uses different types of layers, especially fully connected, convolution, and pooling layers. The output of each layer can be thought of as a representation of the input: outputs of the lower layers are more closely related to the input features, while outputs of the higher layers contain more abstract features, closer in semantics to the target variable. (a forward-pass sketch follows the slide listing) [Diagram: network with inputs x1–x4 at layer 0, hidden layers 1–3, and outputs y1, y2]
  26. Questions?
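
The supervised example on slide 3 pairs fixed-length signal windows with yes/no labels. A minimal sketch of how such a labeled dataset might be built, with a synthetic signal and made-up labels standing in for real data:

```python
import numpy as np

# Slice a 1-D signal into fixed-length windows and pair each window with
# a speech / no-speech label, as on slide 3. Signal and labels here are
# synthetic stand-ins, not real recordings.
T = 100                                    # window length in samples
signal = np.random.randn(3 * T)            # stand-in for a real waveform
labels = ["no", "yes", "yes"]              # one label per window

dataset = [(signal[i * T:(i + 1) * T], labels[i]) for i in range(len(labels))]
for x_i, y_i in dataset:
    print(x_i.shape, y_i)                  # (100,) no ...
```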
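Slide 4 frames supervised learning as fitting y = ƒ(x) from paired examples {(xi, yi)}. A minimal regression sketch under that framing, fitting a linear model by ordinary least squares; the data, dimensions, and coefficients below are made up for illustration:

```python
import numpy as np

# Fit a linear model y = w.x + b to paired examples {(x_i, y_i)} by
# ordinary least squares; the bias is absorbed via a column of ones.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 3))            # 50 examples, x in R^3
true_w, true_b = np.array([2.0, -1.0, 0.5]), 0.3
y = X @ true_w + true_b + 0.01 * rng.standard_normal(50)

Xb = np.hstack([X, np.ones((50, 1))])           # append bias column
w_hat, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(w_hat)                                    # close to [2.0, -1.0, 0.5, 0.3]
```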
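Slides 15-16 present linear and logistic regression as single-neuron models. A sketch of a sigmoid neuron trained by gradient descent on the binary cross-entropy loss; the synthetic data, learning rate, and iteration count are arbitrary choices for illustration:

```python
import numpy as np

# A single sigmoid neuron (logistic regression): p = sigmoid(w.x + b),
# trained by full-batch gradient descent on binary cross-entropy.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)      # linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    p = sigmoid(X @ w + b)                     # neuron output in (0, 1)
    w -= lr * (X.T @ (p - y)) / len(y)         # cross-entropy gradient step
    b -= lr * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```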
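Slide 19 introduces the softmax activation and the categorical cross-entropy loss. Both written out in plain numpy; the logits below are made up, and the small constant inside the log is a numerical guard against log(0):

```python
import numpy as np

# Softmax: exponentiate and normalize so each row is a distribution.
def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)     # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Categorical cross-entropy: average negative log-probability assigned
# to the true class of each example (targets are integer class indices).
def categorical_cross_entropy(probs, targets):
    n = len(targets)
    return -np.mean(np.log(probs[np.arange(n), targets] + 1e-12))

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.2]])
probs = softmax(logits)
print(probs.sum(axis=1))                      # each row sums to 1
print(categorical_cross_entropy(probs, np.array([0, 1])))
```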
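Slides 21-22 note that a single perceptron can only draw linear boundaries, which motivates multi-layer perceptrons. A sketch of a one-hidden-layer MLP fitting XOR, a classic problem that is not linearly separable; layer sizes, learning rate, seed, and the squared-error loss are arbitrary choices, and convergence depends on the initialization:

```python
import numpy as np

# XOR is not linearly separable, so a single neuron fails on it; a small
# MLP with one hidden layer can fit it. Trained with plain gradient
# descent on squared error, backpropagating through both layers.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)  # hidden layer, 4 units
W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)  # output layer

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)               # hidden representation
    p = sigmoid(h @ W2 + b2)               # network output
    d2 = (p - y) * p * (1 - p)             # output-layer error signal
    d1 = (d2 @ W2.T) * h * (1 - h)         # backpropagated to hidden layer
    W2 -= h.T @ d2; b2 -= d2.sum(0)
    W1 -= X.T @ d1; b1 -= d1.sum(0)

print(np.round(p.ravel(), 2))              # typically close to [0, 1, 1, 0]
```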
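Slide 25 describes each layer's output as a representation of the input. A forward-pass sketch that keeps every intermediate activation; the layer sizes, ReLU activation, and random weights are assumptions for illustration, since the slide's diagram only shows generic layers:

```python
import numpy as np

# Forward pass through three randomly initialised fully connected layers,
# collecting every intermediate activation: layer 0 is the input itself,
# and each later layer is a progressively more abstract representation.
def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(3)
sizes = [4, 8, 8, 2]                       # layer 0 (input) .. layer 3 (output)
weights = [rng.standard_normal((m, n)) * 0.5 for m, n in zip(sizes, sizes[1:])]

x = rng.standard_normal(4)                 # one input vector, x1..x4
activations = [x]
for W in weights:
    activations.append(relu(activations[-1] @ W))

for i, a in enumerate(activations):
    print(f"layer {i} representation, shape {a.shape}")
```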
