
NYAI - A Path To Unsupervised Learning Through Adversarial Networks by Soumith Chintala


A Path To Unsupervised Learning Through Adversarial Networks - (Soumith Chintala, Researcher at Facebook AI Research)

Soumith Chintala is a Researcher at Facebook AI Research, where he works on deep learning, reinforcement learning, generative image models, agents for video games, and large-scale high-performance deep learning. He holds a Master's in CS from NYU and spent time in Yann LeCun's NYU lab building deep learning models for pedestrian detection, natural-image OCR, and depth images, among others.

Soumith will go over generative adversarial networks, a particular way of training neural networks to build high-quality generative models. The talk walks through an easy-to-follow timeline of the research and improvements in adversarial networks, followed by some future directions as well as applications.



  1. A path to unsupervised learning through Adversarial Networks (Soumith Chintala, Facebook AI Research)
  2. Overview of the talk • Unsupervised Learning • Generative Adversarial Networks • Advances • Using the learnt representations • What’s next?
  3. Unsupervised Learning: An introduction
  4. Unsupervised Learning: An introduction (Supervised Learning)
  5. Unsupervised Learning: An introduction (Unsupervised Learning)
  6. Unsupervised Learning: Usefulness
  7. Unsupervised Learning: Reusing representations
  8. Generative Models: An introduction. A model that learns a distribution of images
  9. Generative Models: An introduction. X = P(z), where z controls dogness or catness
  10. Generative Models: An introduction. X = P(z), where z is a latent variable
  11. Generative Models: An introduction. P(z) = a neural network
  12. Generative Adversarial Networks
  13. Generative Adversarial Networks: Alternating optimization. Generator → Sample → Optimizer (loss: "looks real"), alongside Training Data
  14. Generative Adversarial Networks
  15. Generative Adversarial Networks: Alternating optimization. noise → Generator → Sample → Discriminator → Classification Loss; the Discriminator also sees Training Data and acts as a learnt real/fake cost function
  16. Generative Adversarial Networks: Alternating optimization. The Discriminator is trained via gradient descent
  17. Generative Adversarial Networks: Alternating optimization. The Generator is optimized to fool D
  18. Generative Adversarial Networks: Alternating optimization. The Discriminator is optimized to not get fooled by G
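The alternating optimization on slides 13–18 can be sketched as a minimal training loop. This is a toy illustration, not the speaker's code: the generator, discriminator, data distribution, and all sizes below are hypothetical stand-ins.

```python
# Minimal GAN alternating-optimization sketch on toy 2-D data.
# All networks and dimensions are hypothetical; real GANs use conv nets on images.
import torch
import torch.nn as nn

torch.manual_seed(0)
noise_dim, data_dim = 4, 2

G = nn.Sequential(nn.Linear(noise_dim, 16), nn.ReLU(), nn.Linear(16, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.randn(32, data_dim) + 3.0      # stand-in "training data"
    z = torch.randn(32, noise_dim)

    # Discriminator step: push real -> 1, fake -> 0 (trained via gradient descent)
    fake = G(z).detach()                        # don't backprop into G here
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: optimize G so that D labels its samples "real" (fooling D)
    g_loss = bce(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The `.detach()` call is the crux of the alternation: during the discriminator step, gradients stop at the sample, so only D moves; during the generator step, the loss flows back through D into G.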
  19. Generative Adversarial Networks: Optimizes Jensen-Shannon Divergence
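Slide 19's claim can be made precise (Goodfellow et al., 2014). The GAN value function is

```latex
\min_G \max_D V(D, G)
  = \mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right].
```

For a fixed generator, the optimal discriminator is $D^*(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}$, and substituting it back gives

```latex
V(D^*, G) = -\log 4 + 2\,\mathrm{JSD}\!\left(p_{\text{data}} \,\|\, p_g\right),
```

so minimizing over $G$ with an optimal discriminator minimizes the Jensen-Shannon divergence between the data distribution and the model distribution.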
  20. Generative Adversarial Networks: Samples
  21. Class-conditional GANs
  22. Class-conditional GANs: noise + class → Generator → Sample → Discriminator → Classification Loss, with Training Data. Not unsupervised (uses the class label)
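The "noise + class" input on slide 22 is commonly implemented by concatenating a one-hot class vector with the noise vector, so the label picks the class while z still controls within-class variation. A sketch with hypothetical dimensions and an untrained stand-in generator:

```python
# Class-conditional generator input: one-hot label concatenated with noise.
# Dimensions and network are hypothetical stand-ins.
import torch
import torch.nn as nn

noise_dim, n_classes, data_dim = 8, 10, 2
G = nn.Sequential(nn.Linear(noise_dim + n_classes, 32), nn.ReLU(),
                  nn.Linear(32, data_dim))

z = torch.randn(5, noise_dim)
labels = torch.randint(0, n_classes, (5,))
one_hot = torch.nn.functional.one_hot(labels, n_classes).float()

# The generator sees [z | one_hot(class)] as a single input vector.
x_fake = G(torch.cat([z, one_hot], dim=1))
print(x_fake.shape)  # torch.Size([5, 2])
```

The discriminator receives the class label the same way, which is exactly why the slide notes this setup is not unsupervised: it consumes labels at training time.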
  23. Video Prediction GANs
  24. Video Prediction GANs: noise → Generator → Sample → Discriminator → Classification Loss, with Training Data
  25. Video Prediction GANs: the same pipeline with an added MSE Loss on the Generator's output
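The combined objective on slide 25 pairs a pixel-wise MSE loss against the ground-truth next frame with the adversarial loss; the adversarial term counteracts the blur that a pure MSE objective produces. A sketch with hypothetical tensors and a hypothetical weight `lambda_adv`:

```python
# Generator objective for video prediction: MSE + weighted adversarial loss.
# Frames, discriminator logits, and lambda_adv are hypothetical stand-ins.
import torch
import torch.nn as nn

mse = nn.MSELoss()
bce = nn.BCEWithLogitsLoss()

pred_frame = torch.randn(4, 3, 8, 8)    # G's predicted next frames (toy size)
true_frame = torch.randn(4, 3, 8, 8)    # ground-truth next frames
d_logits_on_pred = torch.randn(4, 1)    # D's scores for the predicted frames

lambda_adv = 0.05                       # hypothetical trade-off weight
g_loss = (mse(pred_frame, true_frame)
          + lambda_adv * bce(d_logits_on_pred, torch.ones(4, 1)))
```

MSE anchors the prediction to the actual future frame; the adversarial term only asks that the frame look real, which is what sharpens the output.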
  26. Video Prediction GANs
  27. DCGANs
  28. Latent space arithmetic
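Latent space arithmetic, as demonstrated in the DCGAN work, averages z vectors that produce a concept and combines them, e.g. z(smiling woman) − z(neutral woman) + z(neutral man) decodes to a smiling man. A sketch with an untrained stand-in generator (the averaged vectors here are random placeholders, not real concept vectors):

```python
# Latent-space arithmetic sketch. The generator and z vectors are
# hypothetical stand-ins; in DCGAN the z's come from visually inspected samples.
import torch
import torch.nn as nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(100, 64), nn.Tanh())  # stand-in for a DCGAN generator

# Average z over a few samples per concept to reduce noise in the direction.
z_smiling_woman = torch.randn(3, 100).mean(0)
z_neutral_woman = torch.randn(3, 100).mean(0)
z_neutral_man = torch.randn(3, 100).mean(0)

# Vector arithmetic in latent space, then decode.
z_smiling_man = z_smiling_woman - z_neutral_woman + z_neutral_man
img = G(z_smiling_man.unsqueeze(0))
print(img.shape)  # torch.Size([1, 64])
```

The point of the averaging step is that single z samples are noisy; averaging a few per concept makes the arithmetic direction meaningful.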
  29. Using the GAN feature representation
  30. Using the GAN feature representation
  31. Using the GAN feature representation: needs much less labeled data
  32. Using the GAN feature representation
  33. In-painting GANs
  34. In-painting GANs
  35. In-painting GANs
  36. Disentangling representations
  37. Disentangling representations
  38. Disentangling representations
  39. Disentangling representations
  40. Disentangling representations
  41. Disentangling representations
  42. Stability and Representation Reuse
  43. Stability and Representation Reuse • Feature matching • Minibatch discrimination • Label smoothing • What’s next?
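Of the stability tricks listed on slide 43, one-sided label smoothing (from Salimans et al., "Improved Techniques for Training GANs") is the simplest to show: real targets for the discriminator become 0.9 instead of 1.0, which keeps D from becoming overconfident, while fake targets stay at 0. A sketch with hypothetical logits:

```python
# One-sided label smoothing for the discriminator loss.
# The logits below are random stand-ins for D's outputs.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
d_real_logits = torch.randn(32, 1)        # D's scores on real samples
d_fake_logits = torch.randn(32, 1)        # D's scores on G's samples

real_targets = torch.full((32, 1), 0.9)   # smoothed: 0.9 instead of 1.0
fake_targets = torch.zeros(32, 1)         # one-sided: fakes stay at 0

d_loss = bce(d_real_logits, real_targets) + bce(d_fake_logits, fake_targets)
```

Smoothing only the real side matters: smoothing fake targets too would push D's optimum toward the generator's current (wrong) distribution.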
  44. Stability and Representation Reuse
  45. Stability and Representation Reuse
  46. What’s next? • Planning and forward modeling
  47. Questions • When will adversarial networks take over the world? • Soon.
