
Understanding deep learning

Deep learning is one of the most exciting areas of machine learning and AI. This presentation covers the very basics of deep neural networks, from the core concepts to applications, and explains why this technology is so popular in today's business landscape.

This presentation is provided by the Tesseract Academy, which offers executive education on deep technical subjects such as data science and blockchain. For a video of the presentation, please visit https://www.youtube.com/watch?v=RiYGluH_cx0&t=0s&list=PLVce3C5Hi9BBfabvhEzYQTQDYEg2vtuxH&index=2

For an associated blog post about deep learning, also visit http://thedatascientist.com/what-deep-learning-is-and-isnt/


  1. Understanding deep learning: A COMPLETE NOVICE’S PERSPECTIVE
  2. Deep learning overview
  3. Why now? 1. Data deluge 2. Cheaper GPUs 3. New techniques
  4. Why is it popular? Unprecedented performance on many tasks: 1. Machine translation 2. Speech recognition 3. Computer vision 4. Reinforcement learning 5. Natural language processing
  5. Machine translation: Before deep learning Rule-based machine translation (1970s) ◦ Bilingual dictionary and linguistic rules ◦ Interlingua ◦ Find a ‘universal language’ as a middle layer ◦ An impossible task; can’t handle exceptions Example-based machine translation (1980s) ◦ 1984, Makoto Nagao (Kyoto University) ◦ Learn through translations Statistical machine translation (1990s) ◦ Use corpora to extract statistical relationships
  6. Machine translation: Deep learning A 2014 paper from Bengio’s lab ◦ Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation ◦ https://arxiv.org/abs/1406.1078 Basic idea: Recurrent Neural Network Encoder-Decoder
  7. Machine translation: Deep learning 27 September 2016, A Neural Network for Machine Translation, at Production Scale ◦ https://ai.googleblog.com/2016/09/a-neural-network-for-machine.html “A few years ago we started using Recurrent Neural Networks (RNNs) to directly learn the mapping between an input sequence (e.g. a sentence in one language) to an output sequence (that same sentence in another language) [2]. Whereas Phrase-Based Machine Translation (PBMT) breaks an input sentence into words and phrases to be translated largely independently, Neural Machine Translation (NMT) considers the entire input sentence as a unit for translation. The advantage of this approach is that it requires fewer engineering design choices than previous Phrase-Based translation systems. When it first came out, NMT showed equivalent accuracy with existing Phrase-Based translation systems on modest-sized public benchmark data sets.”
  8. Machine translation: Deep learning
  9. Speech recognition
  10. Object recognition
  11. Automatic colouring
  12. Style transfer
  13. Automatic text generation
  14. NLP with deep learning
  15. Word embeddings Turn text into numbers ◦ Word2Vec ◦ Perform operations on them ◦ Based on shallow neural networks (used as input to deep neural networks)
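
A minimal sketch of the Word2Vec idea using the gensim library; the toy corpus and parameter values below are illustrative assumptions, and the classic king - man + woman ≈ queen arithmetic only emerges with a large corpus:

```python
from gensim.models import Word2Vec

# Toy corpus purely for illustration; real embeddings need far more text
sentences = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["man", "and", "woman", "walk"],
]
model = Word2Vec(sentences, vector_size=50, window=5, min_count=1)

vector = model.wv["king"]  # the word as a dense numeric vector
# Arithmetic on embeddings, e.g. king - man + woman
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"]))
```
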
  16. Intuition: Automatic hierarchical feature extraction
  17. Types of neural networks
  18. Simple feedforward neural networks Most common type ◦ Input: 1 vector ◦ Output: probability, real number, or multiple outputs
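
A minimal feedforward sketch in Keras matching this description, assuming a hypothetical 20-feature input vector and a single probability output:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Input: one 20-dimensional vector (the feature count is an assumption)
model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(20,)),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),  # output: a probability
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
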
  19. Recurrent neural networks Like feedforward, but the signal feeds back into itself
  20. Recurrent neural networks
  21. Recurrent neural networks Useful for sequences where the past can affect the future ◦ Natural language ◦ Time series (e.g. finance) Provide ‘memory’ to neural networks LSTM (Long Short-Term Memory) ◦ Handles longer dependencies Gated Recurrent Units (GRU) ◦ A simpler alternative
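
A minimal LSTM sequence-classifier sketch in Keras; the vocabulary size and layer widths are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 10000  # hypothetical vocabulary size

# Token ids in, one probability out; the LSTM carries 'memory' across steps
model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 64),
    layers.LSTM(64),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```
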
  22. RNN: Neural machine translation Seq2Seq model ◦ Deep recurrent architecture ◦ Je suis étudiant -> I am a student
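
A sketch of the encoder-decoder (Seq2Seq) pattern in Keras: the encoder compresses the source sentence into its final LSTM state, and the decoder generates the target sentence starting from that state. Vocabulary sizes and the hidden width are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical vocabulary sizes and hidden width
src_vocab, tgt_vocab, units = 8000, 8000, 256

# Encoder: read the source sentence, keep only its final LSTM state
enc_in = layers.Input(shape=(None,))
enc_emb = layers.Embedding(src_vocab, units)(enc_in)
_, state_h, state_c = layers.LSTM(units, return_state=True)(enc_emb)

# Decoder: generate the target sentence, initialised with the encoder state
dec_in = layers.Input(shape=(None,))
dec_emb = layers.Embedding(tgt_vocab, units)(dec_in)
dec_seq, _, _ = layers.LSTM(units, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
out = layers.Dense(tgt_vocab, activation='softmax')(dec_seq)

model = tf.keras.Model([enc_in, dec_in], out)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```
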
  23. RNN: Text generation Feed a sequence of characters ◦ Predict the next character ◦ Recurrent units keep the context Then feed the output back into itself!
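
A sketch of that feed-the-output-back-in loop; `model`, `seed_ids` and `char_from_id` are hypothetical stand-ins for a trained character-level RNN (returning per-step next-character probabilities) and its vocabulary mapping:

```python
import numpy as np

def generate(model, seed_ids, char_from_id, length=200):
    """Sample text one character at a time, feeding each output back in."""
    ids = list(seed_ids)
    for _ in range(length):
        probs = model.predict(np.array([ids]))[0, -1]    # next-char distribution
        next_id = np.random.choice(len(probs), p=probs)  # sample rather than argmax
        ids.append(next_id)                              # feed the output back in
    return ''.join(char_from_id[i] for i in ids)
```
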
  24. Convolutional neural networks Use a sliding window to capture parts of an image ◦ Then use pooling ◦ E.g. keep only 1 pixel out of 9, or average their values Allows the extraction of higher-level features ◦ By utilising feature locality ◦ And ignoring noise
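
A minimal convolutional sketch in Keras showing the sliding window and pooling steps; the input shape and layer sizes are assumptions, with the 3x3 pooling mirroring the "keep 1 pixel out of 9" example:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumes 28x28 grayscale images and 10 output classes
model = tf.keras.Sequential([
    layers.Conv2D(32, kernel_size=3, activation='relu',
                  input_shape=(28, 28, 1)),  # sliding 3x3 window
    layers.MaxPooling2D(pool_size=3),        # keep 1 value from each 3x3 block
    layers.Conv2D(64, kernel_size=3, activation='relu'),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
```
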
  25. Feature extraction
  26. Image classification VGG (right), Inception module (bottom), AlexNet (middle)
  27. Reinforcement learning Deep Q-learning Approach by Google DeepMind ◦ AI company in London Create AI that can play video games ◦ Goal: extend to real environments Current evolution ◦ Networks play against each other ◦ Managed to beat professional Go players
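
To make the Q-learning idea concrete, here is a tabular sketch of the core update rule; DeepMind's DQN replaces the table with a deep network, but the Bellman-style target is the same. The state/action counts and learning parameters are illustrative assumptions:

```python
import numpy as np

# Toy problem: 5 states, 2 actions; DQN swaps this table for a network
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99  # learning rate and discount factor (assumptions)

def q_update(s, a, reward, s_next):
    """One Q-learning step: move Q[s, a] toward the Bellman target."""
    target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
```
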
  28. Generative Adversarial Network
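
A minimal sketch of the two networks in a GAN, assuming MNIST-sized 28x28 images and a 100-dimensional noise vector. Training alternates between them: the discriminator learns to tell real images from generated ones, while the generator learns to fool it:

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim = 100  # size of the random noise vector (assumption)

# Generator: noise in, fake 28x28 image out
generator = tf.keras.Sequential([
    layers.Dense(128, activation='relu', input_shape=(latent_dim,)),
    layers.Dense(28 * 28, activation='tanh'),
    layers.Reshape((28, 28)),
])

# Discriminator: image in, probability of being real out
discriminator = tf.keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
```
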
  29. Putting it all together
  30. Image captioning Combination of convolutional units and RNNs Same architecture (but with 3D convolution) can be used for video captioning
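
A sketch of this merge-style captioning architecture: CNN image features plus an RNN over the partial caption, combined to predict the next word. The vocabulary size, caption length and 2048-dimensional feature size are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

# All sizes are illustrative assumptions
vocab_size, max_len, feat_dim = 5000, 20, 2048

# Image branch: pre-extracted CNN features (e.g. from a pretrained network)
img_in = layers.Input(shape=(feat_dim,))
img_emb = layers.Dense(256, activation='relu')(img_in)

# Text branch: the caption generated so far
cap_in = layers.Input(shape=(max_len,))
cap_emb = layers.Embedding(vocab_size, 256, mask_zero=True)(cap_in)
cap_feat = layers.LSTM(256)(cap_emb)

# Merge both views and predict the next word of the caption
merged = layers.add([img_emb, cap_feat])
out = layers.Dense(vocab_size, activation='softmax')(merged)
model = tf.keras.Model([img_in, cap_in], out)
```
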
  31. Style transfer Feed a random image through a pretrained network ◦ Dual loss (content and style) ◦ Optimise to combine the two
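
A sketch of the dual loss; the random tensors stand in for activations that would normally come from a pretrained CNN such as VGG, and the alpha/beta weights are illustrative assumptions:

```python
import tensorflow as tf

def gram_matrix(feats):
    """Channel-to-channel correlations of one layer's activations (style)."""
    f = tf.reshape(feats, (-1, feats.shape[-1]))   # (H*W, C)
    return tf.matmul(f, f, transpose_a=True)       # (C, C)

# Random stand-ins for activations of the content, style and generated images
content_act = tf.random.normal((32, 32, 64))
style_act = tf.random.normal((32, 32, 64))
gen_act = tf.random.normal((32, 32, 64))

content_loss = tf.reduce_mean(tf.square(gen_act - content_act))
style_loss = tf.reduce_mean(tf.square(gram_matrix(gen_act) - gram_matrix(style_act)))

alpha, beta = 1.0, 1e-4  # relative weights of the two objectives (assumptions)
total_loss = alpha * content_loss + beta * style_loss
```
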
  32. Image colourisation
  33. Image generation Through GANs (left: real, right: generated)
  34. Image translation through GANs
  35. Tools for deep learning https://en.wikipedia.org/wiki/Comparison_of_deep_learning_software TensorFlow ◦ Google ◦ Very flexible PyTorch ◦ Open source ◦ Developed by Facebook, Nvidia, Twitter and other companies ◦ Useful for research Keras ◦ Higher-level Python interface for TensorFlow Caffe ◦ Berkeley AI Research ◦ Useful for computer vision
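
For comparison, here is the same kind of small feedforward network expressed in PyTorch; the layer sizes are arbitrary:

```python
import torch
from torch import nn

class Net(nn.Module):
    """A small feedforward classifier, in PyTorch."""
    def __init__(self, n_features=20):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.layers(x)

net = Net()
probs = net(torch.randn(8, 20))  # batch of 8 random input vectors
```
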
  36. Commoditised services Google Cloud AI ◦ https://cloud.google.com/products/machine-learning/ ◦ Vision, speech-to-text, text-to-speech, translation, and others IBM ◦ https://www.ibm.com/watson/products-services/ ◦ Visual recognition, translation, sentiment analysis, entity extraction Microsoft Azure ◦ https://azure.microsoft.com/en-gb/solutions/ ◦ Vision, NLP, etc.
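
A sketch of calling one such commoditised service via Google's Cloud Vision Python client; it assumes the google-cloud-vision package is installed, credentials are already configured, and photo.jpg is a placeholder filename:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:              # placeholder filename
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)  # ask for image labels
for label in response.label_annotations:
    print(label.description, label.score)
```
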
  37. So when to use deep learning? Amazing for anything relating to ◦ Audio ◦ Computer vision ◦ NLP Drawbacks ◦ Loads of data ◦ Lots of processing power ◦ 1000s of hyperparameters ◦ Months of training When to use ◦ ML or stats are better for many problems (especially when datasets are smaller) ◦ If you face a computer vision, audio, etc. problem, then deep learning is the best bet ◦ Try using a commoditised service before developing your own ◦ Developing your own solution -> cost-effective in the long run (plus IP)
  38. Learn more Tesseract Academy ◦ http://tesseract.academy ◦ https://www.youtube.com/playlist?list=PLVce3C5Hi9BBfabvhEzYQTQDYEg2vtuxH ◦ Data science, big data and blockchain for executives and managers The Data Scientist ◦ Personal blog ◦ Covers data science, analytics, blockchain, tokenomics and many more subjects ◦ http://thedatascientist.com/what-deep-learning-is-and-isnt/
