Using SigOpt to Tune Deep Learning Models with Nervana Cloud

In this talk I'll show how the Bayesian Optimization methods used by SigOpt, coupled with the incredibly scalable deep learning architecture provided by ncloud and neon, allow anyone to easily tune their models and quickly achieve higher accuracy. I'll walk through the techniques and show an explicit example with results.

YouTube version of this presentation: https://youtu.be/O7HN9h36GLA?t=38m6s

USING SIGOPT TO TUNE DEEP LEARNING MODELS WITH NERVANA CLOUD
Scott Clark
Co-founder and CEO of SigOpt
scott@sigopt.com | @DrScottClark
TRIAL AND ERROR WASTES EXPERT TIME
● Deep Learning is extremely powerful
● Tuning Deep Learning systems is extremely non-intuitive
UNRESOLVED PROBLEM IN ML
What is the most important unresolved problem in machine learning?
"...we still don't really know why some configurations of deep neural networks work in some cases and not others, let alone having a more or less automatic approach to determining the architectures and the hyperparameters."
Xavier Amatriain, VP Engineering at Quora (former Director of Research at Netflix)
https://www.quora.com/What-is-the-most-important-unresolved-problem-in-machine-learning-3
TUNING DEEP LEARNING MODELS
[diagram, built over three slides: Big Data and Expertise feed a Deep Learning System with tunable parameters; the system reports metrics against an objective; an optimizer optimally suggests new parameters back to the system, yielding better results]
COMMON APPROACH
1. Random search or grid search
2. Expert-defined grid search near "good" points
3. Refine the domain and repeat: "grad student descent"
Drawbacks:
● Expert intensive
● Computationally intensive
● Finds potentially local optima
● Does not fully exploit useful information
Random Search for Hyper-Parameter Optimization, James Bergstra et al., 2012
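For contrast with the Bayesian approach described later, here is a minimal sketch of this random-search baseline in Python. The helper train_and_score is a hypothetical stand-in for one full training run (e.g., with neon) that returns the metric being optimized; nothing below comes from the talk itself.

    import random

    def random_search(n_trials, bounds, train_and_score):
        """Randomly sample hyperparameters and keep the best result."""
        best_params, best_score = None, float("-inf")
        for _ in range(n_trials):
            # Draw each parameter uniformly from its bounds: integer
            # bounds get randint, everything else a uniform double.
            params = {}
            for name, (lo, hi) in bounds.items():
                if isinstance(lo, int) and isinstance(hi, int):
                    params[name] = random.randint(lo, hi)
                else:
                    params[name] = random.uniform(lo, hi)
            score = train_and_score(params)  # expensive: one full training run
            if score > best_score:
                best_params, best_score = params, score
        return best_params, best_score

    # Hypothetical usage with two of the neon parameters listed later:
    # bounds = {"learning_rate": (1e-4, 1.0), "epochs": (1, 100)}
    # best, score = random_search(20, bounds, train_and_score)

Every sampled point is drawn independently of all previous results, which is exactly the "does not fully exploit useful information" inefficiency the slide is pointing at.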
OPTIMAL LEARNING
"… the challenge of how to collect information as efficiently as possible, primarily for settings where collecting information is time consuming and expensive."
Prof. Warren Powell, Princeton
"What is the most efficient way to collect information?"
Prof. Peter Frazier, Cornell
"How do we make the most money, as fast as possible?"
Me, @DrScottClark
BAYESIAN GLOBAL OPTIMIZATION
● Optimize some Overall Evaluation Criterion (OEC)
○ Loss, Accuracy, Likelihood, Revenue
● Given tunable parameters
○ Hyperparameters, feature parameters
● In an efficient way
○ Sample the function as few times as possible
○ Training on big data is expensive
Details at https://sigopt.com/research
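Stated slightly more formally (the notation here is introduced for this writeup, not taken from the slides): the goal is

    \[
    x^{*} = \operatorname*{arg\,max}_{x \in \mathcal{X}} f(x)
    \]

where \(f(x)\) is the OEC of the model trained with parameters \(x\) and \(\mathcal{X}\) is the box of tunable parameters. Each evaluation of \(f\) costs a full training run on big data, so the optimizer must choose its sample points \(x_1, \dots, x_n\) with \(n\) as small as possible.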
EXAMPLE: TUNING DNN CLASSIFIERS
CIFAR10 Dataset
● Photos of objects
● 10 classes
● Metric: Accuracy
○ Range [0.1, 1.0]
Learning Multiple Layers of Features from Tiny Images, Alex Krizhevsky, 2009.
USE CASE: ALL CONVOLUTIONAL
http://arxiv.org/pdf/1412.6806.pdf
● All convolutional neural network
● Multiple convolutional and dropout layers
● Hyperparameter optimization: mixture of domain expertise and grid search (brute force)
EXAMPLE: NCLOUD/NEON
Many tunable parameters, and the optimal values are non-intuitive:
● epochs: "number of epochs to run fit" - int [1, ∞)
● learning rate: influence of each step on the current value of the weights - double (0, 1]
● momentum coefficient: "the coefficient of momentum" - double (0, 1]
● weight decay: parameter affecting how quickly the weights decay - double (0, 1]
● depth: parameter affecting the number of layers in the net - int [1, 20(?)]
● gaussian scale: standard deviation of the initialization normal distribution - double (0, ∞)
● momentum step change: multiplicative amount to decrease momentum - double (0, 1]
● momentum step schedule start: epoch to start decreasing momentum - int [1, ∞)
● momentum schedule width: epoch stride for decreasing momentum - int [1, ∞)
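As a rough illustration of how a search space like this is handed to the optimizer, here is a hedged sketch using the SigOpt Python client's experiment / suggestion / observation pattern. The parameter names, the finite bounds (the open and unbounded ranges above are clipped to concrete min/max values, which the API requires), the budget of 60 runs, and train_cifar10 are all assumptions made for this example, not details from the talk.

    from sigopt import Connection

    conn = Connection(client_token="YOUR_API_TOKEN")
    experiment = conn.experiments().create(
        name="All-CNN on CIFAR10",
        parameters=[
            dict(name="epochs", type="int", bounds=dict(min=1, max=100)),
            dict(name="learning_rate", type="double", bounds=dict(min=1e-5, max=1.0)),
            dict(name="momentum_coef", type="double", bounds=dict(min=1e-5, max=1.0)),
            dict(name="weight_decay", type="double", bounds=dict(min=1e-5, max=1.0)),
            dict(name="depth", type="int", bounds=dict(min=1, max=20)),
            dict(name="gaussian_scale", type="double", bounds=dict(min=1e-5, max=10.0)),
        ],
    )

    for _ in range(60):  # assumed evaluation budget
        suggestion = conn.experiments(experiment.id).suggestions().create()
        # train_cifar10 is a hypothetical wrapper around an ncloud/neon
        # training run that returns validation accuracy.
        accuracy = train_cifar10(suggestion.assignments)
        conn.experiments(experiment.id).observations().create(
            suggestion=suggestion.id, value=accuracy)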
COMPARATIVE PERFORMANCE
● Expert baseline: 0.8995 (using neon)
● SigOpt best: 0.9011
○ 1.6% reduction in error rate
○ No expert time wasted in tuning
USE CASE: DEEP RESIDUAL
http://arxiv.org/pdf/1512.03385v1.pdf
● Explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions
● Variable depth
● Hyperparameter optimization: mixture of domain expertise and grid search (brute force)
COMPARATIVE PERFORMANCE
Standard Method
● Expert baseline: 0.9339 (from the paper)
● SigOpt best: 0.9343
○ Found after 17 trials
○ 0.61% reduction in error rate
○ No expert time wasted in tuning
Questions?
scott@sigopt.com | @DrScottClark
https://sigopt.com | @SigOpt
TRY OUT SIGOPT FOR FREE
https://sigopt.com/get_started
● Quick example and intro to SigOpt
● No signup required
● Visual and code examples
https://sigopt.com/text-classifier
● Jupyter Notebook
● Use SigOpt to tune feature and model parameters
● Detailed walkthrough with code
MORE EXAMPLES
https://github.com/sigopt/sigopt-examples
Examples of using SigOpt in a variety of languages and contexts:
● Tuning Machine Learning Models (with code): a comparison of different hyperparameter optimization methods
● Using Model Tuning to Beat Vegas (with code): using SigOpt to tune a model for predicting basketball scores
Learn more about the technology behind SigOpt at https://sigopt.com/research
HOW DOES IT WORK?
1. User reports data
2. SigOpt builds a statistical model (Gaussian Process)
3. SigOpt finds the points of highest Expected Improvement
4. SigOpt suggests the best parameters to test next
5. User tests those parameters and reports the results to SigOpt
6. Repeat
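To make steps 2 and 3 concrete, here is a small self-contained sketch of one pass through this loop, using scikit-learn's GaussianProcessRegressor in place of SigOpt's production model. The observed parameter values and accuracies are made up for illustration.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    # 1. Data reported so far: parameter settings X and metric values y.
    X = np.array([[0.10], [0.35], [0.60], [0.90]])  # e.g., learning rate
    y = np.array([0.81, 0.87, 0.84, 0.79])          # e.g., accuracy

    # 2. Fit a Gaussian Process to the observations.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6)
    gp.fit(X, y)

    # 3. Compute Expected Improvement over a grid of candidate points:
    #    EI(x) = (mu - best) * Phi(z) + sigma * phi(z), z = (mu - best) / sigma.
    candidates = np.linspace(0.0, 1.0, 1000).reshape(-1, 1)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

    # 4. Suggest the candidate with the highest EI to test next.
    print("next suggested parameter:", candidates[np.argmax(ei)])

Steps 5 and 6 then append the new (parameters, result) pair to X and y and repeat the fit/suggest cycle.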
GPs: FUNCTIONAL VIEW
[figure slide]
GPs: FITTING THE GP
[figure: three GP fits, labeled overfit, good fit, underfit]
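One quick way to see the overfit / good fit / underfit behavior from this figure yourself: fix the RBF kernel length scale at a few different values and compare the resulting GP's log marginal likelihood. The data and length scales below are illustrative, not from the talk.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Noisy observations of a smooth function.
    X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
    y = np.sin(6 * X).ravel() + 0.1 * np.random.default_rng(0).normal(size=8)

    # Tiny length scale -> wiggly, overfit; huge -> flat, underfit.
    for length_scale in (0.01, 0.2, 5.0):
        kernel = RBF(length_scale=length_scale, length_scale_bounds="fixed")
        gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-4).fit(X, y)
        print(length_scale, gp.log_marginal_likelihood_value_)

In practice the length scale is chosen by maximizing that marginal likelihood rather than being fixed by hand.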
