
The Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intelligence)


https://telecombcn-dl.github.io/2017-dlai/

Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.



  1. #DLUPC The Perceptron. Day 1 Lecture 2 [course site]. Xavier Giro-i-Nieto, xavier.giro@upc.edu, Associate Professor, Universitat Politecnica de Catalunya (Technical University of Catalonia).
  2. Acknowledgements: Santiago Pascual; Kevin McGuinness, kevin.mcguinness@dcu.ie, Research Fellow, Insight Centre for Data Analytics, Dublin City University.
  3. Video lecture (DLSL 2017).
  4. Outline: 1. Supervised learning: regression/classification. 2. Single neuron models (perceptrons): a. Linear regression; b. Logistic regression; c. Multiple outputs and softmax regression.
  5. Types of machine learning: Yann LeCun’s Black Forest cake.
  6. Types of machine learning. We can categorize three types of learning procedures: 1. Supervised Learning: y = ƒ(x); 2. Unsupervised Learning: ƒ(x); 3. Reinforcement Learning: y = ƒ(x).
  9. Types of machine learning. We can categorize three types of learning procedures: 1. Supervised Learning: y = ƒ(x). We have a labeled dataset with pairs (x, y), e.g. classify a signal window as containing speech or not: x1 = [x(1), x(2), …, x(T)], y1 = “no”; x2 = [x(T+1), …, x(2T)], y2 = “yes”; x3 = [x(2T+1), …, x(3T)], y3 = “yes”; …
  10. Supervised learning. Fit a function y = ƒ(x), x ∈ ℝm, given paired training examples {(xi, yi)}. Key point: generalize well to unseen examples.
  11. Black box abstraction of supervised learning: input x → black box → prediction ŷ.
  12. Regression vs Classification. Depending on the type of target y we get: ● Regression: y ∈ ℝN is continuous (e.g. temperatures = {19º, 23º, 22º}). ● Classification: y is discrete (e.g. y = {1, 2, 5, 2, 2}).
  14. Linear Regression (e.g. 1D input, 1D output).
  15. Linear Regression (e.g. 1D input, 1D output): y = w · x + b. Training a model means learning the parameters w and b from data.
  16. Linear Regression (M-D input). Input data can also be M-dimensional with vector x: y = wT · x + b = w1·x1 + w2·x2 + w3·x3 + … + wM·xM + b. E.g. we want to predict the price of a house (y) based on: x1 = square meters (sqm); x2,3 = location (lat, lon); x4 = year of construction (yoc): y = price = w1·(sqm) + w2·(lat) + w3·(lon) + w4·(yoc) + b.
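
A minimal NumPy sketch of this prediction step; the weights, bias, and input values below are made up purely for illustration, not learned from data:

    import numpy as np

    # Hypothetical parameters for the house-price example:
    # one weight per feature (sqm, lat, lon, yoc), plus a bias b.
    w = np.array([3000.0, -500.0, 200.0, 150.0])  # illustrative values only
    b = -250000.0

    # One input vector x = (sqm, lat, lon, yoc)
    x = np.array([80.0, 41.39, 2.17, 1995.0])

    # y = w^T x + b
    y = np.dot(w, x) + b
    print(f"predicted price: {y:.2f}")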
  18. Binary Classification (e.g. 2D input, 1D output).
  19. Multi-class Classification.
  20. Multi-class Classification. ● Classification: y is discrete (e.g. y = {1, 2, 5, 2, 2}). ○ Beware! These are unordered categories, not numerically meaningful outputs: e.g. code[1] = “dog”, code[2] = “cat”, code[5] = “ostrich”, … ○ Classes are often coded as a one-hot vector (each class corresponds to a different dimension of the output space): [1,0,0] [0,1,0] [0,0,1] (one-hot representation). Perronin, F., CVPR Tutorial on LSVR @ CVPR’14, Output embedding for LSVR.
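
A short sketch of this one-hot coding; the class list reuses the slide’s examples, and the helper function is hypothetical:

    # One-hot coding: each class gets its own dimension of the output space.
    classes = ["dog", "cat", "ostrich"]

    def one_hot(label, classes):
        """Return a one-hot vector for label, e.g. 'cat' -> [0, 1, 0]."""
        vec = [0] * len(classes)
        vec[classes.index(label)] = 1
        return vec

    print(one_hot("dog", classes))      # [1, 0, 0]
    print(one_hot("ostrich", classes))  # [0, 0, 1]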
  21. Single Neuron Model (Perceptron). Both regression and classification problems can be addressed with the perceptron:
  22. Single neuron model (perceptron). The perceptron is seen as an analogy to a biological neuron: biological neurons fire an impulse once the sum of all their inputs exceeds a threshold. The perceptron acts like a switch (learn how in the next slides…).
  23. Single neuron model (perceptron).
  24. Single neuron model (perceptron). The weights and bias are the parameters that define the neuron’s behavior and must be learned.
  25. Single neuron model (perceptron). The output y is derived from the sum of the weighted inputs plus a bias term, a = wᵀx + b, passed through an activation: y = f(a).
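
As a sketch, a single neuron’s forward pass can be written as below; the input, weights, and bias are illustrative values, not taken from the slides:

    import numpy as np

    def perceptron(x, w, b, f):
        """Single neuron: weighted sum of inputs plus bias, then activation f."""
        a = np.dot(w, x) + b  # pre-activation a = w^T x + b
        return f(a)           # output y = f(a)

    # Illustrative values only:
    x = np.array([0.5, -1.2, 3.0])
    w = np.array([0.1, 0.4, -0.2])
    b = 0.05
    print(perceptron(x, w, b, f=lambda a: a))  # identity f(a) = a -> regression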
  26. Single neuron model: Regression. The perceptron can solve regression problems when f(a) = a (identity).
  27. Single neuron model: Binary Classification. The perceptron can solve classification problems when f(a) = σ(a) (sigmoid).
  29. Single neuron model: Binary Classification. The sigmoid function σ(x), also called the logistic curve, maps any input x into the interval (0, 1): σ(x) = 1 / (1 + e−x).
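
A minimal sketch of the logistic function:

    import numpy as np

    def sigmoid(a):
        """Logistic curve: squashes any real input into the interval (0, 1)."""
        return 1.0 / (1.0 + np.exp(-a))

    print(sigmoid(0.0))    # 0.5
    print(sigmoid(10.0))   # close to 1
    print(sigmoid(-10.0))  # close to 0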
  30. Single neuron model: Binary Classification. For classification, the regressed values must be bounded between 0 and 1 so that they can represent probabilities.
  31. Single neuron model: Binary Classification. Setting a threshold (thr) at the output of the perceptron allows solving binary classification problems and estimating probabilities: y > thr → class 1 (e.g. green); y < thr → class 2 (e.g. not green). The raw, pre-sigmoid scores are called logits.
  32. Single neuron model: Binary Classification. Setting a threshold at the output of the perceptron allows solving binary classification problems and estimating probabilities: with an identity output the model is linear regression; with a sigmoid output it becomes logistic regression.
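
A sketch putting the pieces together, assuming the usual 0.5 threshold; the parameter values are invented for illustration:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def logistic_regression(x, w, b, thr=0.5):
        """Binary classifier: p = sigmoid(w^T x + b), then threshold at thr."""
        p = sigmoid(np.dot(w, x) + b)
        return p, 1 if p > thr else 2  # probability and predicted class

    # Illustrative parameters only:
    p, cls = logistic_regression(np.array([1.0, 2.0]), np.array([0.3, -0.1]), b=0.2)
    print(p, cls)  # ~0.57, class 1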
  33. Softmax classifier: Multiclass.
  34. Softmax classifier: Multiclass. Probability estimates for each class can also be obtained by softmax normalization on the outputs of two neurons, one specialised for each class (softmax regression). J. Alammar, “A Visual and Interactive Guide to the Basics of Neural Networks” (2016).
  35. Softmax classifier: Multiclass. The normalization factor ensures that the output probabilities sum up to 1 (softmax regression). J. Alammar, “A Visual and Interactive Guide to the Basics of Neural Networks” (2016).
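
A minimal softmax sketch; subtracting the maximum score before exponentiating is a standard numerical-stability trick, not something the slides mention:

    import numpy as np

    def softmax(z):
        """Normalize raw scores z into positive values that sum to 1."""
        e = np.exp(z - np.max(z))  # subtract max for numerical stability
        return e / e.sum()

    scores = np.array([2.0, 1.0, 0.1])  # illustrative raw outputs (logits)
    p = softmax(scores)
    print(p, p.sum())  # probabilities summing to 1.0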
  36. Softmax classifier: Multiclass (3 classes). TensorFlow, “MNIST for ML beginners”.
  39. Softmax classifier: Multiclass (3 classes). Multiple classes can be predicted by putting many neurons in parallel, each producing its score for one of the N possible classes, e.g. the raw pixels of an unrolled image mapped to 0.3 “dog”, 0.08 “cat”, 0.6 “whatever”. The softmax function supplies the normalization factor; remember, we want a pdf at the output, so all output probabilities must sum up to 1.
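
Running N such neurons in parallel amounts to a weight matrix W (one row per class) and a bias vector; a sketch with hypothetical shapes, e.g. a 784-pixel unrolled image and 3 classes:

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))
        return e / e.sum()

    rng = np.random.default_rng(0)
    x = rng.random(784)            # unrolled image pixels (hypothetical input)
    W = rng.normal(size=(3, 784))  # one row of weights per class
    b = np.zeros(3)

    p = softmax(W @ x + b)         # one probability per class ("dog", "cat", ...)
    print(p, p.sum())              # three probabilities summing to 1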
  40. Effect of the softmax.
  41. Next lecture… Perceptrons can only produce linear decision boundaries, but many interesting problems are not linearly separable. Real-world problems often need non-linear boundaries: ● Images ● Audio ● Text.
  42. Questions?
