
Deep Diving into Machine Learning


"In this session, instead of talking about the various applications of Machine Learning, we will see how these algorithms work. We'll cover major algorithms and learning methods in detail especially Supervised learning and Deep Learning. There will be more technical insight about how data is fed and manipulated to produce results for a layman to understand the small intricacies of basics. No coding abilities required."

Published in: Technology

Deep Diving into Machine Learning

  1. 28 October 2017 - Ritika Nevatia, Rakuten, Inc.
  5. Purrr
  7. Agenda: What is Machine Learning? Components: features, feature selection, feature scaling, models, training. Types: supervised learning, linear regression, unsupervised learning, reinforcement learning, deep learning.
  8. Ritika Nevatia: Bachelor in Computer Engineering, University of Mumbai; @nevatiaritika
  9. Sound, facial expression, posture, time since last meal, last pampered, shape of whiskers, ????
  10. Map the problem to a solution: represent the characteristics. A number for each sound? Rules over those numbers? What happens with the next new sound? What about other characteristics?
  11. Machine Learning
  12. Programming: input + program -> output. Machine learning: input + output -> program.
  14. Training set (see the sketch below for how it could be held in code):
      Instance | Sounds made | Facial expression | Time since last meal | Time since last attention | Time since last destructive act
      1 | (sound as spectrogram) | (image as pixels) | 10 sec | 1 min | 7 min
      2 | (sound as spectrogram) | (image as pixels) | 5 min | 17 sec | 1 hour
      3 | (sound as spectrogram) | (image as pixels) | 2 min | 9 min | 20 sec
      4 | (sound as spectrogram) | (image as pixels) | 90 min | 1.2 min | 19 min
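A minimal sketch of how a training set like the one on slide 14 could be held in memory, assuming Python with NumPy (the deck itself shows no code). The sound and image columns would really be spectrogram and pixel arrays, so placeholder IDs stand in for them here, and the times are converted to seconds so every column is numeric.

```python
import numpy as np

# One row per instance, one column per feature.
# Columns: sound_id, image_id, secs_since_meal, secs_since_attention,
# secs_since_destructive_act. The IDs are placeholders for the real
# spectrogram/pixel data.
X_train = np.array([
    [101, 201,   10,   60,  420],   # Instance 1: 10 sec, 1 min, 7 min
    [102, 202,  300,   17, 3600],   # Instance 2: 5 min, 17 sec, 1 hour
    [103, 203,  120,  540,   20],   # Instance 3: 2 min, 9 min, 20 sec
    [104, 204, 5400,   72, 1140],   # Instance 4: 90 min, 1.2 min, 19 min
], dtype=float)

print(X_train.shape)  # (4, 5): 4 instances, 5 features each
```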
  16. Feature selection: drop irrelevant data; more features mean more work; create features (e.g. a ratio instead of two numbers, as in the sketch below); drop unyielding features.
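A small sketch of the "create features" point above, using hypothetical column names (seconds since meal, seconds since attention) that are not in the deck: two raw numbers are replaced by a single ratio.

```python
# Hypothetical ratio feature: how much hungrier than lonely the cat is.
def hunger_to_loneliness_ratio(secs_since_meal, secs_since_attention):
    # Guard against division by zero with a 1-second floor.
    return secs_since_meal / max(secs_since_attention, 1)

print(hunger_to_loneliness_ratio(300, 60))  # 5.0 -> hunger dominates
```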
  17. Feature scaling: some features are large (1,000,000 to 10,000,000), some are small (0.1 to 1); scale them both onto the same range (100 to 1000), as sketched below.
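A minimal min-max scaling sketch for the numbers on this slide: a feature in the millions and a feature below 1 are both mapped onto the same 100-to-1000 range. The function name and the choice of target range are illustrative.

```python
def minmax_scale(x, lo, hi, new_lo=100.0, new_hi=1000.0):
    """Map x from the range [lo, hi] onto [new_lo, new_hi]."""
    return new_lo + (x - lo) * (new_hi - new_lo) / (hi - lo)

# A "large" feature (1,000,000 to 10,000,000) and a "small" one (0.1 to 1)
# end up on comparable scales.
print(minmax_scale(5_000_000, 1_000_000, 10_000_000))  # 500.0
print(minmax_scale(0.55, 0.1, 1.0))                    # 550.0
```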
  18. A model is the artifact created after training an algorithm; trained models are used to predict a new input's output. However complex, it maps an input set of features (n1, n2, n3, ...) to an output answer ("I was bored").
  19. Training: (1) raw models are blueprints and give wrong solutions at first; (2) tweak the nuts and bolts: look at the training set, compare the model's answer with the correct answer, fix the values, take small steps, repeat over multiple trainings; (3) ready! (A toy version of this loop is sketched below.)
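A toy version of the training loop described on slide 19, assuming the simplest possible model (y = w * x) and a hypothetical learning rate; it is a sketch of the idea, not the deck's actual procedure.

```python
def train(xs, ys, steps=1000, lr=0.01):
    w = 0.0                               # raw model: an arbitrary blueprint
    for _ in range(steps):                # multiple trainings
        for x, y_true in zip(xs, ys):     # look at the training set
            error = w * x - y_true        # compare answer with correct answer
            w -= lr * error * x           # fix the value, in a small step
    return w

print(train([1, 2, 3], [2, 4, 6]))  # converges close to 2.0
```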
  20. Supervised + discrete: classification/categorization. Supervised + continuous: regression. Unsupervised + discrete: clustering. Unsupervised + continuous: dimensionality reduction.
  21. Supervised learning: labelled examples (bored, bored, not bored) -> supervised learning algorithm -> predictive model -> prediction for a new instance ("not bored").
  22. A decision tree: based on the inputs, it gives out one output. Example algorithms: decision trees, logistic regression, random forest. (A minimal classifier sketch follows.)
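A minimal classifier sketch, assuming scikit-learn is available (the deck does not name a library): a decision tree that maps a made-up cat feature vector to "bored" / "not bored".

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up feature vectors: secs since meal, attention, destructive act.
X = [[10, 60, 420], [300, 17, 3600], [120, 540, 20], [5400, 72, 1140]]
y = ["bored", "not bored", "bored", "not bored"]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[200, 30, 600]]))  # prediction for a new instance
```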
  23. • Handwriting recognition (OCR, Optical Character Recognition): used at the post office, for example, to recognise addresses on envelopes. • Spam detection: keeps your inbox clean after seeing a great many examples of spam email. • Object or behavior recognition from images or video: detection and tracking of objects, behaviors or features of interest. • Speech recognition: conversion of a voice recording to a text representation, used in commercial apps like Siri, Apple's voice assistant.
  24. The "Hello World" of ML: input -> x, output -> y. Simple linear regression, fitted with ordinary least squares: y = B0 + B1*x. Extend to multiple inputs: x1, x2, x3, ... (worked sketch below).
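A worked sketch of this "Hello World": fitting y = B0 + B1*x with ordinary least squares in plain Python. The data points are invented for illustration.

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares estimates for y = B0 + B1 * x.
b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
     sum((x - mean_x) ** 2 for x in xs)
b0 = mean_y - b1 * mean_x

print(b0, b1)          # fitted intercept and slope
print(b0 + b1 * 5.0)   # prediction for a new input x = 5
```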
  25. Unsupervised learning: examples (without labels) -> unsupervised learning algorithm -> predictive model -> ("not bored").
  26. How does it learn? Given more data, it comes up with a new representation: patterns, groups, similarities, deviations. E.g. the Apriori algorithm, K-means (sketched below).
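A sketch of the K-means example named on this slide, assuming scikit-learn: with no labels at all, the algorithm groups similar instances and returns a new representation (the cluster centres). The toy 2-D points are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[0.1, 0.2], [0.2, 0.1], [5.0, 5.1], [5.2, 4.9]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)           # which group each instance fell into
print(km.cluster_centers_)  # the "new representation": one centre per group
```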
  27. • Discovering groups: identifying customer segments. • Anomaly detection: internet scrutiny.
  28. Semi-supervised learning: a mix of labelled examples (bored, not bored) and unlabelled examples -> semi-supervised learning algorithm -> predictive model -> prediction ("not bored").
  29. Reinforcement learning: a method for training agents (e.g. robots) that modify their actions based on reward signals. The internal model is a policy: select actions based on perceptions. Performed well? Reward. Performed badly? Punish. Update the policy, then select actions that optimize the reward. A complex policy can be split into sub-problems (ML again?). Trial and error. (A toy reward loop is sketched below.)
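A toy sketch of the reward/punish loop this slide describes, not a specific algorithm from the deck: the agent keeps an internal policy (one value per action), explores by trial and error, and nudges the policy up on reward and down on punishment. The actions and reward function are invented.

```python
import random

actions = ["meow", "scratch", "nap"]
policy = {a: 0.0 for a in actions}       # internal model: the policy

def reward_for(action):                  # stand-in for the environment
    return 1.0 if action == "nap" else -1.0

for _ in range(200):
    if random.random() < 0.1:            # trial and error: explore
        action = random.choice(actions)
    else:                                # select the action optimizing reward
        action = max(policy, key=policy.get)
    r = reward_for(action)               # performed well? reward; badly? punish
    policy[action] += 0.1 * (r - policy[action])   # update the policy in small steps

print(policy)  # "nap" ends up with the highest value
```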
  31. • Robotics: teaching tasks such as walking. • Financial market trading: maximizing profit. • Games: Atari.
  32. Deep learning: faster; based on the structure and functions of the neuron; extracts new features (feature learning) from raw data, e.g. learning the wheels of a car.
  34. What is a neuron?
  35. A neural network is multiple layers of neurons with data passing through them: the output of one layer is the input of another. Input layers take the direct features; hidden layers cannot be manipulated directly. (See the sketch below.)
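A minimal sketch of the point above, assuming NumPy: a tiny network where the output of one layer is the input of the next. The weights here are random; training would adjust them.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(5)              # input layer: the direct features

W1 = rng.random((4, 5))        # weights into the hidden layer
h = np.maximum(0.0, W1 @ x)    # hidden layer output (ReLU)

W2 = rng.random((1, 4))        # weights into the output layer
y = W2 @ h                     # output of one layer feeds the next
print(y)
```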
  36. The weight of each neuron determines how influential it is. The topology of the network: the types of neurons and their connections with each other. See http://www.asimovinstitute.org/neural-network-zoo/
  38. Python, open source, Google (see the sketch below).
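The slide's keywords (Python, open source, Google) most likely point to TensorFlow; that is an assumption, since the transcript does not name the library. A minimal Keras sketch of a bored / not-bored classifier under that assumption, with invented data:

```python
import numpy as np
import tensorflow as tf

# Made-up features (seconds since meal, attention, destructive act) and labels.
X = np.array([[10, 60, 420], [300, 17, 3600], [120, 540, 20]], dtype=float)
y = np.array([1, 0, 1])  # 1 = bored, 0 = not bored

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=10, verbose=0)
print(model.predict(X, verbose=0))  # predicted probability of "bored"
```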
