
Deep Learning, Beyond the Hype (France IA @ La Paillasse)


A presentation of deep learning, some of its potential, and adversarial examples.

Published in: Data & Analytics


  1. Deep Learning, Beyond the Hype! 3 March 2017, La Paillasse. Sébastien Treguer
  2. Machine Learning vs Deep Learning vs A.I.
  3. What are the different species in the zoo?
  4. Hype or reality?
  5. Performance. In 2012, convolutional neural nets cut the error rate in image classification from 26% to 16%. A similar disruption was observed in speech recognition.
  6. What’s the secret sauce of deep neural nets? Optimizers: SGD, RMSProp, ADAM. Activations: sigmoid, tanh, ReLU, leaky ReLU. Architectures: CNN, RNN, LSTM, GAN, WaveNet, PathNet. And data: the more of it, and the cleaner and better distributed, the better.
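The activation functions listed on the slide are simple element-wise nonlinearities; a minimal NumPy sketch (framework implementations differ in details such as in-place variants):

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs to (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Zeroes out negative inputs, passes positives through.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Unlike ReLU, keeps a small slope for negative inputs,
    # so gradients never vanish entirely on that side.
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))        # [0. 0. 2.]
print(leaky_relu(x))  # [-0.02  0.    2.  ]
print(np.tanh(x))     # squashes to (-1, 1)
```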
  7. How does it work? From details captured in the first layers to higher levels of abstraction in deeper layers.
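The "details in the first layers" idea can be made concrete: early CNN filters typically respond to local structure such as edges. A hand-coded vertical-edge (Sobel) filter applied with a naive 2-D convolution illustrates this (trained first-layer filters often end up resembling such detectors):

```python
import numpy as np

def conv2d(image, kernel):
    # Naive valid-mode 2-D cross-correlation, as used in CNN layers.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: left half dark, right half bright -> one vertical edge.
img = np.zeros((5, 6))
img[:, 3:] = 1.0

sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

response = conv2d(img, sobel_x)
print(response[0])  # [0. 4. 4. 0.] -- strong response only at the edge
```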
  8. Depth and performance
  9. What about creativity? Representation learning; reinforcement learning (AlphaGo learning by playing against itself); music improvisation; logic operations on images.
  10. Input = “A small yellow bird with a black crown and a short black pointed beak”. Which images are real, and which are generated by the algorithm? From classification to generation.
  11. “A small yellow bird with a black crown and a short black pointed beak”. From classification to generation. Answer: none of these images ever existed; they were all generated by the algorithm. StackGAN, Han Zhang et al., paper here
  12. From classification to generation. GAN: Generative Adversarial Networks, Ian J. Goodfellow et al., 2014. StackGAN: Text to Photo-realistic Image Synthesis, Han Zhang et al., 2016.
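The adversarial training loop behind these models can be sketched in one dimension (a hypothetical toy setup, not the StackGAN architecture): the generator is a linear map g(z) = a*z + b, the discriminator a logistic classifier d(x) = sigmoid(w*x + c), and the two are updated in alternation so the generator learns to mimic samples from N(4, 1):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
a, b = 1.0, 0.0            # generator parameters
w, c = 0.1, 0.0            # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # data distribution N(4, 1)
    z = rng.normal(0.0, 1.0, batch)      # noise input
    fake = a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0
    # (hand-derived gradients of the standard GAN cross-entropy loss).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    gw = np.mean(-(1 - d_real) * real + d_fake * fake)
    gc = np.mean(-(1 - d_real) + d_fake)
    w, c = w - lr * gw, c - lr * gc

    # Generator step (non-saturating loss): push d(fake) toward 1.
    d_fake = sigmoid(w * fake + c)
    ga = np.mean(-(1 - d_fake) * w * z)
    gb = np.mean(-(1 - d_fake) * w)
    a, b = a - lr * ga, b - lr * gb

# The generated mean (= b) should drift toward the data mean; exact
# values depend on the seed and learning rate, and training oscillates.
print(round(b, 2))
```

The design point this illustrates is the alternation itself: neither player minimizes a fixed loss, each minimizes a loss defined by the other's current parameters.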
  13. Any limits? Adversarial examples: “panda” (57.7% confidence) plus an imperceptible perturbation (itself classified “nematode”, 8.2% confidence) is classified “gibbon” with 99.9% confidence.
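The panda example comes from the fast gradient sign method (FGSM, Goodfellow et al.): perturb the input by epsilon times the sign of the loss gradient with respect to that input. A minimal sketch on a toy logistic classifier (weights and input values here are made up for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w = np.array([1.0, -2.0, 3.0])   # toy model weights (assumed)
x = np.array([0.5, 0.1, 0.2])    # input correctly seen as class 1
y = 1.0                          # true label

p = sigmoid(w @ x)               # confidence for the true class
# Gradient of the cross-entropy loss w.r.t. the *input* x:
grad_x = (p - y) * w

# FGSM: a small step in the sign of that gradient.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(w @ x))      # ~0.71 before the attack
print(sigmoid(w @ x_adv))  # ~0.35 after -- confidence collapses
```

The key point, as on the slide, is that the perturbation is bounded (max change 0.25 per component here) yet flips the model's decision.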
  14. Any limits? Unsupervised learning. Reinforcement learning → predict a scalar reward → a few bits per sample. Supervised learning = on-sight navigation → predicting human-supplied labels → 10 to 10,000 bits per sample. All the rest is unsupervised learning = offshore navigation → predicting unknown parts from observation → building its own representation → millions of bits per sample. Ex: predicting frames in videos, generating images, music...
  15. The race for power: energy efficiency? Brain = 20 W for an estimated 2.2 billion megaflops, vs supercomputers drawing up to 10 million watts.
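The slide's figures imply a flops-per-watt gap that is easy to work out; the brain estimate is a loose, contested approximation, and the supercomputer numbers below use Sunway TaihuLight (2016's #1 system, ~93 petaflops at ~15 MW) as a concrete example:

```python
# Rough flops-per-watt comparison using the slide's brain figure.
brain_flops = 2.2e9 * 1e6      # 2.2 billion megaflops -> flops/s
brain_watts = 20.0

super_flops = 93e15            # ~93 petaflops (Sunway TaihuLight)
super_watts = 15e6             # ~15 MW power draw

brain_eff = brain_flops / brain_watts   # ~1.1e14 flops per watt
super_eff = super_flops / super_watts   # ~6.2e9 flops per watt

print(f"{brain_eff:.2e}, {super_eff:.2e}, ratio ~{brain_eff / super_eff:.0f}x")
```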
