
2010 Deep Learning and Unsupervised Feature Learning



  1. 1. 1 NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning Tutorial on Deep Learning and Applications Honglak Lee University of Michigan Co-organizers: Yoshua Bengio, Geoff Hinton, Yann LeCun, Andrew Ng, and Marc’Aurelio Ranzato * Includes slide material sourced from the co-organizers
  2. 2. 2 Outline • Deep learning – Greedy layer-wise training (for supervised learning) – Deep belief nets – Stacked denoising auto-encoders – Stacked predictive sparse coding – Deep Boltzmann machines • Applications – Vision – Audio – Language
  3. 3. 3 Outline • Deep learning – Greedy layer-wise training (for supervised learning) – Deep belief nets – Stacked denoising auto-encoders – Stacked predictive sparse coding – Deep Boltzmann machines • Applications – Vision – Audio – Language
  4. 4. 4 Motivation: why go deep? • Deep architectures can be representationally efficient – Fewer computational units for the same function • Deep representations might allow for a hierarchy of representations – Allows non-local generalization – Comprehensibility • Multiple levels of latent variables allow combinatorial sharing of statistical strength • Deep architectures work well (vision, audio, NLP, etc.)!
  5. 5. 5 Different Levels of Abstraction • Hierarchical Learning – Natural progression from low level to high level structure as seen in natural complexity – Easier to monitor what is being learnt and to guide the machine to better subspaces – A good lower level representation can be used for many distinct tasks
  6. 6. 6 Generalizable Learning • Shared Low-Level Representations – Multi-Task Learning – Unsupervised Training – [Figure: a shared intermediate representation over the raw input feeds outputs for task 1 … task N; low-level features feed high-level features] • Partial Feature Sharing – Mixed-Mode Learning – Composition of Functions
  7. 7. 7 A Neural Network • Forward Propagation : – Sum inputs, produce activation, feed-forward
  8. 8. 8 A Neural Network • Training: Back Propagation of Error – Calculate the total error at the top – Calculate contributions to the error at each step going backwards
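As an illustration (not from the slides), here is a minimal NumPy sketch of forward propagation and one back-propagation update for a two-layer sigmoid network with a squared-error loss; the layer sizes, learning rate, and variable names are all illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative two-layer network: 3 inputs -> 5 sigmoid hidden units -> 2 sigmoid outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 0.1, (5, 3)), np.zeros(5)
W2, b2 = rng.normal(0.0, 0.1, (2, 5)), np.zeros(2)

def forward(x):
    # Sum inputs, produce activations, feed forward
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    return h, y

def gradients(x, t):
    # Calculate the error at the top, then each layer's contribution going backwards
    h, y = forward(x)
    delta_out = (y - t) * y * (1 - y)              # output-layer error signal
    delta_hid = (W2.T @ delta_out) * h * (1 - h)   # hidden-layer error signal
    return np.outer(delta_out, h), delta_out, np.outer(delta_hid, x), delta_hid

x, t = rng.normal(size=3), np.array([1.0, 0.0])
gW2, gb2, gW1, gb1 = gradients(x, t)
W2 -= 0.1 * gW2; b2 -= 0.1 * gb2                   # one gradient-descent update
W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
```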
  9. 9. 9 Deep Neural Networks • Simple to construct – Sigmoid nonlinearity for hidden layers – Softmax for the output layer • But, backpropagation does not work well (if randomly initialized) – Deep networks trained with backpropagation (without unsupervised pretraining) perform worse than shallow networks (Bengio et al., NIPS 2007)
  10. 10. 10 Problems with Back Propagation • Gradient is progressively getting more dilute – Below top few layers, correction signal is minimal • Gets stuck in local minima – Especially since they start out far from ‘good’ regions (i.e., random initialization) • In usual settings, we can use only labeled data – Almost all data is unlabeled! – The brain can learn from unlabeled data
  11. 11. 12 Deep Network Training (that actually works) • Use unsupervised learning (greedy layer-wise training) – Allows abstraction to develop naturally from one layer to another – Helps the network initialize with good parameters • Perform supervised top-down training as the final step – Refine the features (intermediate layers) so that they become more relevant for the task
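A rough outline of this recipe (my own sketch, not code from the tutorial); `pretrain_layer` is a placeholder for any unsupervised learner with a `transform` method, such as an RBM or a denoising auto-encoder.

```python
def greedy_layerwise_pretrain(X_unlabeled, layer_sizes, pretrain_layer):
    """Greedy layer-wise unsupervised pretraining: train each layer on the
    features produced by the layer below it (pretrain_layer is a placeholder)."""
    layers, H = [], X_unlabeled
    for size in layer_sizes:
        layer = pretrain_layer(H, size)   # unsupervised training of one layer
        layers.append(layer)
        H = layer.transform(H)            # its features feed the next layer
    return layers

# The pretrained stack then initializes a deep network, which is fine-tuned
# top-down with supervised learning (e.g., backpropagation on labeled data).
```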
  12. 12. 13 Outline • Deep learning – Greedy layer-wise training (for supervised learning) – Deep belief nets – Stacked denoising auto-encoders – Stacked predictive sparse coding – Deep Boltzmann machines • Applications – Vision – Audio – Language
  13. 13. 14 Deep Belief Networks (DBNs) • Probabilistic generative model • Deep architecture – multiple layers • Unsupervised pre-training provides a good initialization of the network – maximizing a lower bound on the log-likelihood of the data • Supervised fine-tuning – Generative: up-down algorithm – Discriminative: backpropagation Hinton et al., 2006
  14. 14. 15 DBN structure • Visible layer $v$; hidden layers $h^1, h^2, h^3$ • The top two layers form an RBM; the lower layers form directed belief nets • Joint distribution: $P(v, h^1, \ldots, h^l) = P(v \mid h^1)\,P(h^1 \mid h^2)\cdots P(h^{l-2} \mid h^{l-1})\,P(h^{l-1}, h^l)$ Hinton et al., 2006
  15. 15. 17 DBN Greedy training • First step: – Construct an RBM with an input layer v and a hidden layer h – Train the RBM Hinton et al., 2006
  16. 16. 18 DBN Greedy training • Second step: – Stack another hidden layer on top of the RBM to form a new RBM – Fix $W^1$, and use samples $h^1 \sim Q(h^1 \mid v)$ as input. Train $W^2$ as an RBM. Hinton et al., 2006
  17. 17. 19 DBN Greedy training • Third step: – Continue to stack layers on top of the network; train each new top layer as in the previous step, using samples drawn from $Q(h^2 \mid h^1)$ • And so on… Hinton et al., 2006
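To make the stacking concrete, here is a minimal sketch (assumed details, not the tutorial's code) of training a binary RBM with one step of contrastive divergence (CD-1) and then feeding samples of $Q(h^1 \mid v)$ to a second RBM; hyperparameters and sizes are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden, lr=0.05, epochs=10, rng=np.random.default_rng(0)):
    """Binary-binary RBM trained with CD-1 (illustrative simplification)."""
    n_visible = V.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        ph = sigmoid(V @ W + b_h)                        # positive phase: Q(h|v) on data
        h = (rng.random(ph.shape) < ph).astype(float)
        pv = sigmoid(h @ W.T + b_v)                      # negative phase: one Gibbs step
        ph_neg = sigmoid(pv @ W + b_h)
        W += lr * (V.T @ ph - pv.T @ ph_neg) / len(V)
        b_v += lr * (V - pv).mean(axis=0)
        b_h += lr * (ph - ph_neg).mean(axis=0)
    return W, b_v, b_h

# Greedy stacking: fix W1, propagate Q(h1|v), and train the next RBM on it.
V = (np.random.default_rng(1).random((100, 20)) > 0.5).astype(float)
W1, _, bh1 = train_rbm(V, n_hidden=15)
H1 = sigmoid(V @ W1 + bh1)                               # Q(h1|v), input to the next RBM
W2, _, _ = train_rbm((H1 > 0.5).astype(float), n_hidden=10)
```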
  18. 18. 20 Why does greedy training work? • An RBM specifies P(v,h) via P(v|h) and P(h|v) – Implicitly defines P(v) and P(h) • Key idea of stacking – Keep P(v|h) from the 1st RBM – Replace P(h) by the distribution generated by the 2nd-level RBM Hinton et al., 2006
  19. 19. 21 Why does greedy training work? • Easy approximate inference – $P(h^{k+1} \mid h^k)$ is approximated by the associated RBM – It is an approximation because $P(h^{k+1})$ differs between the RBM and the DBN • Training: – A variational bound justifies greedy layer-wise training of RBMs ($Q(h^1 \mid v)$ and $Q(h^2 \mid h^1)$ are given by the first- and second-layer RBMs) Hinton et al., 2006
  20. 20. 22 Outline • Deep learning – Greedy layer-wise training (for supervised learning) – Deep belief nets – Stacked denoising auto-encoders – Stacked predictive sparse coding – Deep Boltzmann machines • Applications – Vision – Audio – Language
  21. 21. 23 Denoising Auto-Encoder • Corrupt the input (e.g., set 25% of the inputs to 0) • Reconstruct the uncorrupted input • Use the uncorrupted encoding as input to the next level • Loss: KL(reconstruction ‖ raw input) – [Figure: raw input → corrupted input → hidden code (representation) → reconstruction] (Vincent et al., 2008)
  22. 22. 24 Denoising Auto-Encoder • Learns a vector field pointing towards higher-probability regions • Minimizes a variational lower bound on a generative model • Corresponds to regularized score matching on an RBM (Vincent et al., 2008)
  23. 23. 25 Stacked (Denoising) Auto-Encoders • Greedy layer-wise learning – Start with the lowest level and stack upwards – Train each auto-encoder layer on the intermediate code (features) from the layer below – The top layer can have a different output (e.g., a softmax nonlinearity) to provide an output for classification
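As a hedged illustration of one denoising auto-encoder layer (tied weights and a squared-error reconstruction loss rather than the KL/cross-entropy loss above; all names and defaults are my own):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dae_epoch(X, W, b_h, b_v, corruption=0.25, lr=0.01, rng=np.random.default_rng(0)):
    """One epoch of denoising auto-encoder training with tied weights and a
    squared-error reconstruction loss (a simplification of Vincent et al., 2008)."""
    for x in X:
        x_tilde = x * (rng.random(x.shape) >= corruption)  # zero out ~25% of inputs
        h = sigmoid(W @ x_tilde + b_h)                      # encode the corrupted input
        r = sigmoid(W.T @ h + b_v)                          # reconstruct
        delta_r = (r - x) * r * (1 - r)                     # error w.r.t. the *clean* input
        delta_h = (W @ delta_r) * h * (1 - h)
        W -= lr * (np.outer(delta_h, x_tilde) + np.outer(h, delta_r))
        b_v -= lr * delta_r
        b_h -= lr * delta_h
    return W, b_h, b_v

# Stacking: after training, use h = sigmoid(W @ x + b_h) on *uncorrupted* inputs
# as the training data for the next denoising auto-encoder layer.
```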
  24. 24. 30 Denoising Auto-Encoders: Benchmarks Larochelle et al., 2009
  25. 25. 31 Denoising Auto-Encoders: Results • Test errors on the benchmarks Larochelle et al., 2009
  26. 26. 32 Outline • Deep learning – Greedy layer-wise training (for supervised learning) – Deep belief nets – Stacked denoising auto-encoders – Stacked predictive sparse coding – Deep Boltzmann machines • Applications – Vision – Audio – Language
  27. 27. 33 Predictive Sparse Coding • Recall the objective function for sparse coding: • Modify by adding a penalty for prediction error: – Approximate the sparse code with an encoder
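The objectives referenced above are reconstructed here in LaTeX as a hedged sketch, following the PSD formulation of Kavukcuoglu et al. (2008) listed in the references; the symbol names (dictionary B, code z, encoder F, weights λ, α) are assumptions.

```latex
% Sparse coding: reconstruct the input y with dictionary B and sparse code z
\min_{B,\,z}\; \|y - Bz\|_2^2 + \lambda \|z\|_1

% Predictive sparse decomposition (PSD): add a penalty so that a trainable
% encoder F(y; W) approximates the sparse code, enabling fast feed-forward inference
\min_{B,\,z,\,W}\; \|y - Bz\|_2^2 + \lambda \|z\|_1 + \alpha\,\|z - F(y; W)\|_2^2
```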
  28. 28. 34 Encoder-Decoder Model • [Figure: the input Y is mapped by an encoder to a sparse representation Z and reconstructed by a decoder; black arrows denote functions, red arrows denote error terms]
  29. 29. 35 Using PSD to Train a Hierarchy of Features • Phase 1: train first layer using PSD
  30. 30. 36 Using PSD to Train a Hierarchy of Features • Phase 1: train first layer using PSD • Phase 2: use encoder+absolute value as feature extractor
  31. 31. 37 Using PSD to Train a Hierarchy of Features • Phase 1: train first layer using PSD • Phase 2: use encoder+absolute value as feature extractor • Phase 3: train the second layer using PSD
  32. 32. 38 Using PSD to Train a Hierarchy of Features • Phase 1: train first layer using PSD • Phase 2: use encoder+absolute value as feature extractor • Phase 3: train the second layer using PSD • Phase 4: use encoder + absolute value as 2nd feature extractor
  33. 33. 39 Using PSD to Train a Hierarchy of Features • Phase 1: train first layer using PSD • Phase 2: use encoder+absolute value as feature extractor • Phase 3: train the second layer using PSD • Phase 4: use encoder + absolute value as 2nd feature extractor • Phase 5: train a supervised classifier on top • Phase 6: (optional): train the entire system with supervised back-propagation
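A schematic of the six phases above (my own pseudocode-style sketch; `train_psd` and `train_classifier` are placeholders, not real APIs):

```python
def build_psd_feature_hierarchy(X, y, layer_sizes, train_psd, train_classifier):
    """Phases 1-5 of the recipe above: alternately train a PSD layer and use its
    encoder followed by an absolute-value nonlinearity as the next feature extractor."""
    features = X
    encoders = []
    for size in layer_sizes:                      # Phases 1 and 3: train PSD layer by layer
        encoder = train_psd(features, size)
        features = abs(encoder.encode(features))  # Phases 2 and 4: encoder + |.|
        encoders.append(encoder)
    classifier = train_classifier(features, y)    # Phase 5: supervised classifier on top
    return encoders, classifier                   # Phase 6 (optional): backprop through all
```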
  34. 34. 40 Outline • Deep learning – Greedy layer-wise training (for supervised learning) – Deep belief nets – Stacked denoising auto-encoders – Stacked predictive sparse coding – Deep Boltzmann machines • Applications – Vision – Audio – Language
  35. 35. 41 Deep Boltzmann Machines • Undirected connections between all layers (no connections between nodes within the same layer) Salakhutdinov & Hinton, 2009 (slide credit: R. Salakhutdinov)
  36. 36. 42 DBMs vs. DBNs • In a multi-layer model, undirected connections between all the layers make it a complete Boltzmann machine (a DBN keeps undirected connections only between its top two layers). Salakhutdinov & Hinton, 2009
  37. 37. 43 Two-layer DBM example *Assumes no within-layer connections.
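For concreteness, the conditionals of this two-layer binary DBM (weights W^(1), W^(2), biases omitted, no within-layer connections) take the standard form below; this is a reconstruction from Salakhutdinov & Hinton (2009), not text from the slide.

```latex
P\big(h^{(1)}_j = 1 \mid v, h^{(2)}\big)
  = \sigma\Big(\textstyle\sum_i W^{(1)}_{ij} v_i + \sum_k W^{(2)}_{jk} h^{(2)}_k\Big)

P\big(v_i = 1 \mid h^{(1)}\big) = \sigma\Big(\textstyle\sum_j W^{(1)}_{ij} h^{(1)}_j\Big),
\qquad
P\big(h^{(2)}_k = 1 \mid h^{(1)}\big) = \sigma\Big(\textstyle\sum_j W^{(2)}_{jk} h^{(1)}_j\Big)
```

Note that each first-layer unit receives both bottom-up input from v and top-down input from h^(2), which is what distinguishes DBM inference from the purely bottom-up approximation used in a DBN.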
  38. 38. 44 Deep Boltzmann Machines • Pre-training: – Can (must) initialize from stacked RBMs • Generative fine-tuning: – Positive phase: variational approximation (mean-field) – Negative phase: persistent chain (stochastic approximation) • Discriminative fine-tuning: – backpropagation Salakhutdinov & Hinton, 2009
  39. 39. 45 Experiments • MNIST: 2-layer BM – 60,000 training and 10,000 test examples – 0.9 million parameters – Gibbs sampler run for 100,000 steps – After discriminative fine-tuning: 0.95% error rate (compare with DBN: 1.2%, SVM: 1.4%)
  40. 40. 46 Experiments • NORB dataset Slide credit: R. Salakhutdinov
  41. 41. 47 Experiments Slide credit: R. Salakhutdinov
  42. 42. 48 Why Greedy Layer-Wise Training Works • Regularization hypothesis – Pre-training “constrains” the parameters to a region relevant to the unsupervised dataset – Better generalization (representations that better describe unlabeled data are more discriminative for labeled data) • Optimization hypothesis – Unsupervised training initializes the lower-level parameters near better local minima than random initialization does (Bengio 2009, Erhan et al. 2009)
  43. 43. 56 Outline • Deep learning – Greedy layer-wise training (for supervised learning) – Deep belief nets – Stacked denoising auto-encoders – Stacked predictive sparse coding – Deep Boltzmann machines • Applications – Vision – Audio – Language
  44. 44. 57 Convolutional Neural Networks • Local receptive fields • Weight sharing • Pooling (LeCun et al., 1989)
  45. 45. 58 Deep Convolutional Architectures State-of-the-art on MNIST digits, Caltech-101 objects, etc.
  46. 46. 60 Nonlinearities and pooling • Details of the feature-processing stage for PSD: convolution or filtering → rectification → local contrast normalization → max-pooling (Jarrett et al., 2009)
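An illustrative single stage of that pipeline in NumPy/SciPy; the contrast normalization here is a simplified per-map variant rather than the exact operator of Jarrett et al., and the filter values and sizes are arbitrary.

```python
import numpy as np
from scipy.signal import convolve2d

def feature_stage(image, filters, pool=2, eps=1e-6):
    """One feature-extraction stage in the spirit of Jarrett et al. (2009):
    filtering -> rectification (abs) -> simplified contrast normalization -> max pooling."""
    maps = []
    for f in filters:
        m = convolve2d(image, f, mode='valid')       # convolution / filtering
        m = np.abs(m)                                # rectification
        m = (m - m.mean()) / (m.std() + eps)         # simplified contrast normalization
        h, w = m.shape
        m = m[:h - h % pool, :w - w % pool]
        m = m.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))  # max pooling
        maps.append(m)
    return np.stack(maps)

rng = np.random.default_rng(0)
out = feature_stage(rng.random((32, 32)), filters=rng.normal(size=(4, 5, 5)))
```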
  47. 47. 61 Convolutional DBNs (Lee et al., 2009; Desjardins and Bengio, 2008; Norouzi et al., 2009) • Convolutional RBM: generative training of convolutional structures (with probabilistic max-pooling)
  48. 48. 62 Spatial Pyramid Structure • Descriptor Layer: detect and locate features, extract corresponding descriptors (e.g. SIFT) • Code Layer: code the descriptors – Vector Quantization (VQ): each code has only one non-zero element – Soft-VQ: small group of elements can be non-zero • SPM layer: pool codes across subregions and average/normalize into a histogram (Yang et al., 2009)
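A small sketch of the coding and SPM pooling steps (hard VQ only; sparse coding or soft-VQ would replace `vq_code`); the function names and the pyramid levels (1, 2, 4) are assumptions for illustration.

```python
import numpy as np

def vq_code(descriptors, codebook):
    """Hard vector quantization: each descriptor activates exactly one codeword."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = np.zeros((len(descriptors), len(codebook)))
    codes[np.arange(len(descriptors)), d2.argmin(1)] = 1.0
    return codes

def spatial_pyramid_pool(codes, positions, levels=(1, 2, 4)):
    """Pool codes over a pyramid of image subregions (positions in [0,1)^2) and
    concatenate the averaged histograms, a simplified sketch of the SPM layer above."""
    feats = []
    for L in levels:
        cells = (positions * L).astype(int).clip(0, L - 1)
        for i in range(L):
            for j in range(L):
                mask = (cells[:, 0] == i) & (cells[:, 1] == j)
                pooled = codes[mask].mean(0) if mask.any() else np.zeros(codes.shape[1])
                feats.append(pooled)
    return np.concatenate(feats)
```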
  49. 49. 63 Improving the coding step • Classifiers using these features need nonlinear kernels – which increases computational complexity • Modify the coding step to produce feature representations that linear classifiers can use effectively – Sparse coding – Local Coordinate Coding (Yang et al., 2009)
  50. 50. 64 Experimental results • Competitive performance with other state-of-the-art methods using a single type of feature on object recognition benchmarks • E.g., Caltech 101 (30 examples per class): – Using a pixel representation: ~65% accuracy (Jarrett et al., 2009; Lee et al., 2009; and many others) – Using a SIFT representation: 73–75% accuracy (Yang et al., 2009; Jarrett et al., 2009; Boureau et al., 2010; and many others)
  51. 51. 65 Outline • Deep learning – Greedy layer-wise training (for supervised learning) – Deep belief nets – Stacked denoising auto-encoders – Stacked predictive sparse coding – Deep Boltzmann machines • Applications – Vision – Audio – Language
  52. 52. 66 Convolutional DBN for audio • [Figure: spectrogram (time × frequency) input with detection nodes and a max-pooling node] (Lee et al., 2009)
  53. 53. 67 Convolutional DBN for audio • [Figure: spectrogram (time × frequency)]
  54. 54. 68 CDBNs for speech Learned first-layer bases Trained on unlabeled TIMIT corpus
  55. 55. 69 Experimental Results • Speaker identification (TIMIT, accuracy): prior art (Reynolds, 1995) 99.7%; Convolutional DBN 100.0% • Phone classification (TIMIT, accuracy): Clarkson et al. (1999) 77.6%; Gunawardana et al. (2005) 78.3%; Sung et al. (2007) 78.5%; Petrov et al. (2007) 78.6%; Sha & Saul (2006) 78.9%; Yu et al. (2009) 79.2%; Convolutional DBN 80.3% (Lee et al., 2009)
  56. 56. 70 Phone recognition using DBNs • Pre-training RBMs followed by fine-tuning with back propagation (Dahl et al., 2010)
  57. 57. 71 Phone recognition using mcRBM • Mean-covariance RBM (mcRBM) + DBN (Dahl et al., 2010)
  58. 58. 72 Phone recognition results (phone error rate, PER): Stochastic Segmental Models 36.0%; Conditional Random Field 34.8%; Large-Margin GMM 33.0%; CD-HMM 27.3%; Augmented Conditional Random Fields 26.6%; Recurrent Neural Nets 26.1%; Bayesian Triphone HMM 25.6%; Monophone HTMs 24.8%; Heterogeneous Classifiers 24.4%; Deep Belief Networks (DBNs) 23.0%; Triphone HMMs discriminatively trained w/ BMMI 22.7%; Deep Belief Networks with mcRBM feature extraction 20.5% (Dahl et al., 2010)
  59. 59. 73 Outline • Deep learning – Greedy layer-wise training (for supervised learning) – Deep belief nets – Stacked denoising auto-encoders – Stacked predictive sparse coding – Deep Boltzmann machines • Applications – Vision – Audio – Language
  60. 60. 74 Language modeling • Language models – Estimating the probability of the next word w given the preceding words • Baseline approach in NLP – N-gram models (with smoothing) • Deep learning approach – Bengio et al. (2000, 2003): via a neural network – Mnih and Hinton (2007): via RBMs
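As a hedged sketch of the two modeling approaches mentioned above (standard formulations, not transcribed from the slide; C denotes the word-embedding lookup table and g a feed-forward scoring function):

```latex
% n-gram model: truncate the history to the previous n-1 words
% (the counts are smoothed in practice)
P(w_t \mid w_1, \dots, w_{t-1}) \approx P(w_t \mid w_{t-n+1}, \dots, w_{t-1})

% neural language model (Bengio et al., 2003): embed the context words with a
% lookup table C, then score the next word with a softmax over the vocabulary
P(w_t \mid w_{t-n+1}, \dots, w_{t-1}) =
  \mathrm{softmax}\!\big(g\big([\,C(w_{t-n+1}); \dots; C(w_{t-1})\,]\big)\big)_{w_t}
```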
  61. 61. 75 Other NLP tasks • Part-Of-Speech Tagging (POS) – Mark up the words in a text (corpus) as corresponding to a particular tag • E.g., noun, adverb, … • Chunking – Also called shallow parsing – At the phrase level: label phrases as syntactic constituents • E.g., NP (noun phrase), VP (verb phrase), … – At the word level: label each word with its syntactic role in a phrase • E.g., B-NP (beginning of NP), I-VP (inside VP), … (Collobert and Weston, 2009)
  62. 62. 76 Other NLP tasks • Named Entity Recognition (NER) – Given a stream of text, determine which items in the text map to proper names – E.g., label “atomic elements” as “PERSON”, “COMPANY”, “LOCATION” • Semantic Role Labeling (SRL) – At the sentence level: assign a semantic role to each syntactic constituent of a sentence – E.g., [John]ARG0 [ate]REL [the apple]ARG1 (Proposition Bank) • An annotated corpus of semantic roles (Palmer et al.) (Collobert and Weston, 2009)
  63. 63. 77 A unified architecture for NLP • Main idea: a unified architecture for NLP – Deep Neural Network – Trained jointly with different tasks (feature sharing and multi-task learning) – Language model is trained in an unsupervised fashion • Show the generality of the architecture • Improve SRL performance (Collobert and Weston, 2009)
  64. 64. 78 General Deep Architecture for NLP • Basic features (e.g., word, capitalization, relative position) • Embedding via a lookup table • Convolution (i.e., how relevant is each word to its context?) • Max pooling • Supervised learning (Collobert and Weston, 2009)
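A compact sketch of that processing chain (embedding lookup, window convolution, max pooling over time); the shapes and random parameters are illustrative, and the supervised layers on top are omitted — this is not the trained model of Collobert and Weston.

```python
import numpy as np

def sentence_features(word_ids, E, W_conv, window=3):
    """Lookup-table embedding, windowed convolution over the sentence,
    then max pooling over time (illustrative shapes and parameters)."""
    emb = E[word_ids]                                     # (T, d) embedding lookup
    T, d = emb.shape
    feats = []
    for t in range(T - window + 1):                       # convolution over word windows
        x = emb[t:t + window].reshape(-1)                 # concatenate the window
        feats.append(np.tanh(W_conv @ x))
    return np.max(np.stack(feats), axis=0)                # max pooling over time

rng = np.random.default_rng(0)
vocab, d, n_hidden = 1000, 50, 100
E = rng.normal(0, 0.1, (vocab, d))
W_conv = rng.normal(0, 0.1, (n_hidden, 3 * d))
f = sentence_features(np.array([4, 27, 13, 9, 402]), E, W_conv)  # -> (n_hidden,) features
```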
  65. 65. 79 Results • MTL improves SRL’s performance (Collobert and Weston, 2009)
  66. 66. 80 Summary • Training deep architectures – Unsupervised pre-training helps when training deep networks – Deep belief nets, stacked denoising auto-encoders, stacked predictive sparse coding, deep Boltzmann machines • Deep learning algorithms and unsupervised feature learning algorithms show promising results in many applications – vision, audio, natural language processing, etc.
  67. 67. 81 Thank you!
  68. 68. 82 References • B. Olshausen, D. Field. Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images. Nature, 1996. • H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. NIPS, 2007. • R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. ICML, 2007. • H. Lee, R. Raina, A. Teichman, and A. Y. Ng. Exponential Family Sparse Coding with Application to Self-taught Learning. IJCAI, 2009. • J. Yang, K. Yu, Y. Gong, and T. Huang. Linear Spatial Pyramid Matching Using Sparse Coding for Image Classification. CVPR, 2009. • Y. Bengio. Learning Deep Architectures for AI, Foundations and Trends in Machine Learning, 2009. • Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. NIPS, 2007. • P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with denoising autoencoders. ICML, 2008.
  69. 69. 83 References • H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief net model for visual area V2. NIPS, 2008. • Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:541–551, 1989. • H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. ICML, 2009. • H. Lee, Y. Largman, P. Pham, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. NIPS, 2009. • A. R. Mohamed, G. Dahl, and G. E. Hinton, Deep belief networks for phone recognition. NIPS 2009 workshop on deep learning for speech recognition. • G. Dahl, M. Ranzato, A. Mohamed, G. Hinton, Phone Recognition with the Mean-Covariance Restricted Boltzmann Machine, NIPS 2010 • M. Ranzato, A. Krizhevsky, G. E. Hinton, Factored 3-Way Restricted Boltzmann Machines for Modeling Natural Images. AISTATS, 2010.
  70. 70. 84 References • M. Ranzato, G. E. Hinton. Modeling Pixel Means and Covariances Using Factorized Third-Order Boltzmann Machines. CVPR, 2010. • G. Taylor, G. E. Hinton, and S. Roweis. Modeling Human Motion Using Binary Latent Variables. NIPS, 2007. • G. Taylor and G. E. Hinton. Factored Conditional Restricted Boltzmann Machines for Modeling Motion Style. ICML, 2009. • G. Taylor, R. Fergus, Y. LeCun and C. Bregler. Convolutional Learning of Spatio-temporal Features. ECCV, 2010. • K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun, Learning Invariant Features through Topographic Filter Maps. CVPR, 2009. • K. Kavukcuoglu, M. Ranzato, and Y. LeCun, Fast Inference in Sparse Coding Algorithms with Applications to Object Recognition. CBLL-TR-2008-12-01, 2008. • K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, What is the Best Multi-Stage Architecture for Object Recognition? ICML, 2009. • R. Salakhutdinov and I. Murray. On the Quantitative Analysis of Deep Belief Networks. ICML, 2008.
  71. 71. 85 References • R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. AISTATS, 2009. • K. Yu, T. Zhang, and Y. Gong. Nonlinear Learning using Local Coordinate Coding, NIPS, 2009. • J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Learning Locality- constrained Linear Coding for Image Classification. CVPR, 2010. • H. Larochelle, Y. Bengio, J. Louradour and P. Lamblin, Exploring Strategies for Training Deep Neural Networks, JMLR, 2009. • D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent and S. Bengio, Why Does Unsupervised Pre-training Help Deep Learning? JMLR, 2010. • J. Yang, K. Yu, and T. Huang. Supervised Translation-Invariant Sparse Coding. CVPR, 2010. • Y. Boureau, F. Bach, Y. LeCun and J. Ponce: Learning Mid-Level Features for Recognition. CVPR, 2010. • I. J. Goodfellow, Q. V. Le, A. M. Saxe, H. Lee, and A. Y. Ng. Measuring invariances in deep networks. NIPS, 2009.
  72. 72. 86 References • Y. Boureau, J. Ponce, Y. LeCun, A theoretical analysis of feature pooling in vision algorithms. ICML, 2010. • R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. ICML, 2009.
  73. 73. 87 References (1) links to code: • sparse coding – http://www.eecs.umich.edu/~honglak/softwares/nips06-sparsecoding.htm • DBN – http://www.cs.toronto.edu/~hinton/MatlabForSciencePaper.html • DBM – http://web.mit.edu/~rsalakhu/www/DBM.html • convnet – http://cs.nyu.edu/~koray/publis/code/eblearn.tar.gz – http://cs.nyu.edu/~koray/publis/code/randomc101.tar.gz (2) link to general website on deeplearning: – http://deeplearning.net
