
Intro to Deep Learning

A presentation I gave in the Data Science Research Lab covering the basics of Neural Networks and Deep Learning, with a brief introduction to Theano.



  1. Introduction to Neural Networks and Theano. Kushal Arora (karora@cise.ufl.edu)
  2. Outline: 1. Why Care? 2. Why It Works? 3. Logistic Regression to Neural Networks 4. Deep Networks and Issues 5. Autoencoders and Stacked Autoencoders 6. Why Deep Learning Works 7. Theano Overview 8. Code Hands-On
  3. Why Care? ● Has bettered the state of the art on various tasks – Speech Recognition ● Microsoft's Audio Video Indexing Service speech system uses Deep Learning ● Brought the word error rate down by 30% compared to the SOTA GMM system – Natural Language Understanding/Processing ● Neural net language models are the current state of the art in language modeling, sentiment analysis, paraphrase detection and many other NLP tasks ● The SENNA system uses neural embeddings for various NLP tasks like POS tagging, NER, chunking etc. ● SENNA is not only better than SOTA but also much faster – Object Recognition ● The breakthrough started in 2006 with the MNIST dataset; still the best (0.27% error) ● SOTA error rate on the ImageNet dataset brought down to 15.3% compared to 26.1%
  4. Why Care? ● Multi-Task and Transfer Learning – Ability to exploit commonalities of different learning tasks by transferring learned knowledge – Example: Google Image Search ("woman in a red dress in a garden") – Another example is Google's image captioning (http://googleresearch.blogspot.com/2014/11/a-picture-is-worth-thousand-coherent.html)
  5. Why it works? ● Learning representations – time to move beyond hand-crafted features ● Distributed representations – more robust and expressive features ● Learning multiple levels of representation – learning multiple levels of abstraction
  6. #1 Learning Representation • Handcrafting features is inefficient and time-consuming • Must be done again for each task and domain • The features are often incomplete and highly correlated
  7. #2 Distributed Representation ● Features can be non-mutually exclusive, for example in language ● Need to move beyond one-hot representations such as those from clustering algorithms, k-nearest neighbors etc. ● O(N) parameters/examples for O(N) input regions, but we can do better: O(k) parameters/examples for O(2^k) input regions ● Similar to multi-clustering, where multiple clustering algorithms are applied in parallel or the same clustering algorithm is applied to different input regions
  8. #3 Learning multiple levels of representation ● Deep architectures promote reuse of features ● Deep architectures can potentially lead to progressively more abstract features at higher layers ● Good intermediate representations can be shared across tasks ● Insufficient depth can be exponentially inefficient
  9. The Basics ● The building block of the network is a neuron ● The basic terminology – Input – Activation – Output ● The activation function is usually of the form h(w,b) = f(w^T · x + b), where f is typically the sigmoid f(z) = 1 / (1 + e^(-z))
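
A minimal NumPy sketch of the neuron above, assuming the sigmoid activation from the slide; the function names are illustrative, not taken from the talk's code:

    import numpy as np

    def sigmoid(z):
        # f(z) = 1 / (1 + e^(-z))
        return 1.0 / (1.0 + np.exp(-z))

    def neuron_output(w, b, x):
        # h(w,b) = f(w^T . x + b)
        return sigmoid(np.dot(w, x) + b)

    # toy 3-dimensional input
    w = np.array([0.5, -1.0, 0.25])
    b = 0.1
    x = np.array([1.0, 2.0, 3.0])
    print(neuron_output(w, b, x))   # a scalar activation in (0, 1)
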
  10. Logistic Regression ● Logistic regression is a probabilistic, linear classifier ● Parameterized by W and b ● Each output is the probability of the input belonging to class Y_i ● The probability of x being a member of class Y_i is P(Y = Y_i | x) = softmax_i(Wx + b) = e^(W_i^T x + b_i) / Σ_j e^(W_j^T x + b_j), and the prediction is y_pred = argmax_{Y_i} P(Y = Y_i | x)
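
A small NumPy sketch of the softmax classifier just described; the class count, feature size and weights below are made up for illustration:

    import numpy as np

    def softmax(z):
        # numerically stable softmax over the class scores
        e = np.exp(z - np.max(z))
        return e / np.sum(e)

    def predict_proba(W, b, x):
        # P(Y = Y_i | x) = softmax_i(W x + b)
        return softmax(np.dot(W, x) + b)

    rng = np.random.RandomState(0)
    W, b = rng.randn(3, 4), np.zeros(3)   # hypothetical 3 classes, 4 features
    x = rng.randn(4)
    p = predict_proba(W, b, x)
    y_pred = np.argmax(p)                 # y_pred = argmax_i P(Y = Y_i | x)
    print(p, y_pred)
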
  11. Logistic Regression – Training ● The loss function for logistic regression is the negative log likelihood, defined as NLL(θ, D) = −Σ_i log P(Y = y^(i) | x^(i); θ) ● The loss is minimized using stochastic gradient descent or mini-batch stochastic gradient descent ● The function is convex and will always reach the global minimum, which is not true for the other architectures we will discuss ● Cannot solve the famous XOR problem
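
A sketch of what one SGD step on the negative log likelihood looks like for softmax regression, assuming a single training example and a made-up learning rate of 0.1; the gradient used is the standard p − onehot(y) result, not code from the talk:

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))
        return e / np.sum(e)

    def nll(W, b, x, y):
        # negative log likelihood of the true class y for one example
        return -np.log(softmax(np.dot(W, x) + b)[y])

    def sgd_step(W, b, x, y, lr=0.1):
        # gradient of the NLL w.r.t. the scores Wx + b is p - onehot(y)
        p = softmax(np.dot(W, x) + b)
        p[y] -= 1.0
        W -= lr * np.outer(p, x)
        b -= lr * p
        return W, b

    rng = np.random.RandomState(0)
    W, b = rng.randn(3, 4), np.zeros(3)
    x, y = rng.randn(4), 2
    print(nll(W, b, x, y))
    W, b = sgd_step(W, b, x, y)
    print(nll(W, b, x, y))   # the loss typically drops after the step
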
  12. Multilayer Perceptron ● An MLP can be viewed as a logistic regression classifier where the input is first transformed using a learned non-linear transformation ● The non-linear transformation can be a sigmoid or a tanh function ● Due to the addition of the non-linear hidden layer, the loss function is now non-convex ● Though there is no sure way to avoid local minima, certain empirical initializations of the weight matrices help
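
A possible NumPy sketch of this two-stage view, a tanh hidden layer followed by a softmax output; the layer sizes are made up for illustration:

    import numpy as np

    def mlp_predict(x, W1, b1, W2, b2):
        # hidden layer: learned non-linear transformation of the input
        h = np.tanh(np.dot(W1, x) + b1)
        # output layer: softmax classifier on top of the hidden features
        z = np.dot(W2, h) + b2
        e = np.exp(z - np.max(z))
        return e / np.sum(e)

    rng = np.random.RandomState(0)
    W1, b1 = rng.randn(5, 4), np.zeros(5)   # 4 inputs -> 5 hidden units
    W2, b2 = rng.randn(3, 5), np.zeros(3)   # 5 hidden units -> 3 classes
    print(mlp_predict(rng.randn(4), W1, b1, W2, b2))
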
  13. Multilayer Perceptron – Training ● Let D be the size of the input vector and L the number of output labels. The output vector (of size L) is given as f(x) = G(b^(2) + W^(2) · s(b^(1) + W^(1) · x)), where G and s are activation functions ● The hard part of training is calculating the gradient for stochastic gradient descent and, of course, avoiding the local minima ● Back-propagation is used as an optimization technique to avoid recalculating the gradient at each step (it is like dynamic programming for the derivative chain-rule recursion)
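
One way to see back-propagation as "dynamic programming for the chain rule" is that the backward pass reuses the intermediate values stored during the forward pass. A hand-written NumPy sketch for the one-hidden-layer MLP above, using the negative log likelihood loss; names and sizes are illustrative:

    import numpy as np

    def mlp_grads(x, y, W1, b1, W2, b2):
        # forward pass: f(x) = G(b2 + W2 * s(b1 + W1 * x)), s = tanh, G = softmax
        h = np.tanh(np.dot(W1, x) + b1)
        z = np.dot(W2, h) + b2
        e = np.exp(z - np.max(z))
        p = e / np.sum(e)
        # backward pass: reuse the stored intermediates h and p instead of
        # re-deriving the chain rule from scratch for every parameter
        dz = p.copy()
        dz[y] -= 1.0                             # dNLL/dz for true class y
        dW2, db2 = np.outer(dz, h), dz
        da = np.dot(W2.T, dz) * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2
        dW1, db1 = np.outer(da, x), da
        return dW1, db1, dW2, db2

    rng = np.random.RandomState(0)
    W1, b1 = rng.randn(5, 4), np.zeros(5)
    W2, b2 = rng.randn(3, 5), np.zeros(3)
    grads = mlp_grads(rng.randn(4), 1, W1, b1, W2, b2)
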
  14. Deep Neural Nets and Issues ● Generally, networks with more than 2-3 hidden layers are called deep nets ● Pre-2006 they performed worse than shallow networks ● Though the exact cause is hard to analyze, experimental results suggested that gradient-based training for deep nets got stuck in local minima due to gradient diffusion ● It was difficult to get good generalization ● Random initialization, which worked for shallow networks, couldn't be used for deep networks ● The issue was solved using unsupervised pre-training of the hidden layers followed by fine tuning, i.e. supervised training
  15. Auto-Encoders ● Multilayer neural nets with target output = input ● Reconstruction = decoder(encoder(input)), e.g. a = tanh(Wx + b), x' = tanh(W^T · a + c), with loss L = ||x' − x||^2 ● The objective is to minimize the reconstruction error ● PCA can be seen as an auto-encoder with a = Wx and x' = W^T · a ● So an autoencoder can be seen as a non-linear PCA that tries to learn a latent representation of the input
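
A tiny NumPy sketch of the tied-weight auto-encoder equations above; sizes and names are made up for illustration:

    import numpy as np

    def autoencoder_loss(x, W, b, c):
        a = np.tanh(np.dot(W, x) + b)        # encoder: a = tanh(Wx + b)
        x_rec = np.tanh(np.dot(W.T, a) + c)  # tied-weight decoder: x' = tanh(W^T a + c)
        return np.sum((x_rec - x) ** 2)      # reconstruction error L = ||x' - x||^2

    rng = np.random.RandomState(0)
    W = 0.1 * rng.randn(5, 10)               # 10-dim input, 5 hidden units
    b, c = np.zeros(5), np.zeros(10)
    print(autoencoder_loss(rng.randn(10), W, b, c))
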
  16. Auto-Encoders – Training
  17. Stacked Auto-Encoders ● One way of creating deep networks ● Another is Deep Belief Networks, made of stacked RBMs trained greedily ● Training is divided into two steps – Pre-training – Fine tuning ● In pre-training, auto-encoders are recursively trained one at a time ● In the second phase, i.e. fine tuning, the whole network is trained to minimize the negative log likelihood of its predictions ● Pre-training is unsupervised but fine tuning is supervised
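
A rough NumPy sketch of the greedy layer-wise pre-training loop, assuming tied-weight tanh auto-encoders and made-up sizes; fine tuning is only indicated in a comment, since it is the usual supervised back-propagation:

    import numpy as np

    def encode(W, b, x):
        return np.tanh(np.dot(W, x) + b)

    def pretrain_layer(X, n_hidden, lr=0.01, epochs=10, seed=0):
        # unsupervised step: fit one tied-weight auto-encoder to X
        rng = np.random.RandomState(seed)
        n_in = X.shape[1]
        W, b, c = 0.1 * rng.randn(n_hidden, n_in), np.zeros(n_hidden), np.zeros(n_in)
        for _ in range(epochs):
            for x in X:
                a = np.tanh(np.dot(W, x) + b)
                x_rec = np.tanh(np.dot(W.T, a) + c)
                dz = 2 * (x_rec - x) * (1 - x_rec ** 2)        # grad at decoder pre-activation
                da = np.dot(W, dz) * (1 - a ** 2)              # grad at encoder pre-activation
                W -= lr * (np.outer(a, dz) + np.outer(da, x))  # tied weights: two terms
                b -= lr * da
                c -= lr * dz
        return W, b

    # stack two layers: the second auto-encoder is trained on the codes of the first
    X = np.random.RandomState(1).randn(100, 20)        # toy unlabeled data
    W1, b1 = pretrain_layer(X, n_hidden=10)
    H1 = np.array([encode(W1, b1, x) for x in X])
    W2, b2 = pretrain_layer(H1, n_hidden=5)
    # fine tuning: put a softmax layer on top of (W1, W2) and train the whole
    # stack with supervised back-propagation on labeled data
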
  18. Pre-Training
  19. Why Pre-Training Works ● Hard to know exactly, as deep nets are hard to analyze ● Regularization hypothesis – Pre-training acts as a regularization term, leading to better generalization – It can be seen as: a good representation of P(X) leads to a good representation of P(Y|X) ● Optimization hypothesis – Pre-training leads to a weight initialization that restricts the parameter space to a region near better local minima – These minima are not reachable via random initialization
  20. Other Types of Neural Networks ● Recurrent Neural Nets ● Recursive Neural Networks ● Long Short-Term Memory architectures ● Convolutional Neural Networks
  21. Libraries to Use ● Theano/Pylearn2 (U Montreal, Python) ● Caffe (UCB, C++, fast, CUDA based) ● Torch7 (NYU, Lua, supported by Facebook)
  22. Introduction to Theano ● What is Theano? Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently ● Why should it be used? – Tight integration with numpy – Transparent use of the GPU – Efficient symbolic representation – Does a lot of the heavy lifting, like gradient calculation, for you – Uses simple abstractions for tensors and matrices so you don't have to deal with them
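
A minimal example in the style of the standard Theano tutorial, defining a symbolic sigmoid layer and compiling it with theano.function; the variable names are arbitrary:

    import numpy as np
    import theano
    import theano.tensor as T

    # declare symbolic variables (no values yet, just a computation graph)
    x = T.dmatrix('x')
    w = T.dmatrix('w')
    b = T.dvector('b')

    # a symbolic expression: the sigmoid layer from the earlier slides
    h = T.nnet.sigmoid(T.dot(x, w) + b)

    # compile the graph into a callable; this is where Theano optimizes the
    # expression and can transparently run it on the GPU
    layer = theano.function(inputs=[x, w, b], outputs=h)

    rng = np.random.RandomState(0)
    print(layer(rng.randn(2, 3), rng.randn(3, 4), np.zeros(4)))
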
  23. Theano Basics (Tutorial) ● How to: – Declare an expression – Compute a gradient ● How an expression is evaluated (link) ● Debugging
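
A small sketch of declaring an expression and letting T.grad derive its gradient symbolically; the expression itself is just a toy example:

    import theano
    import theano.tensor as T

    x = T.dscalar('x')
    y = x ** 2 + T.sin(x)        # declare an expression

    gy = T.grad(y, x)            # Theano derives dy/dx symbolically
    f = theano.function([x], [y, gy])
    print(f(1.0))                # value and derivative at x = 1
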
  24. Implementations ● Logistic Regression ● Multilayer Perceptron ● Auto-Encoders ● Stacked Auto-Encoders
  25. References ● [Bengio07] Y. Bengio, P. Lamblin, D. Popovici and H. Larochelle, "Greedy Layer-Wise Training of Deep Networks", in Advances in Neural Information Processing Systems 19 (NIPS'06), pages 153-160, MIT Press, 2007. ● [Bengio09] Y. Bengio, "Learning Deep Architectures for AI", Foundations and Trends in Machine Learning 1(2), pages 1-127, 2009. ● [Xavier10] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks", AISTATS 2010. ● Y. Bengio, A. Courville and P. Vincent, "Representation Learning: A Review and New Perspectives", IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8), 2013: 1798-1828. ● Deep Learning for NLP (http://nlp.stanford.edu/courses/NAACL2013/)
  26. Thank You!
