At StampedeCon 2014, Kilian Q. Weinberger (Washington University) presented "Making Machine Learning work in Practice."
In this talk, he walks through common pitfalls and practical tricks for making machine learning work.
12. 1. Learning Problem
• What is my relevant data?
• What am I trying to learn?
• Can I obtain trustworthy supervision?
QUIZ: What would be some answers for email spam filtering?
13. Example: Spam filtering
• What is my data? → Email content / meta data
• What am I trying to learn? → User's spam/ham labels
• Can I obtain trustworthy supervision? → Employees?
14. 2. Train / Test split
• How much data do I need? (More is more.)
• How do I split into train / test? (Always by time! Otherwise: random.)
• Training data should be just like test data!! (i.i.d.)
[Diagram: data ordered by time, split into Train Data followed by Test Data; real-world data arrives after the test period.]
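A minimal sketch of such a time-based split; the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical email dataset with a timestamp column.
df = pd.read_csv("emails.csv", parse_dates=["sent_time"])
df = df.sort_values("sent_time")

# Split by time: train on the oldest 80%, test on the newest 20%.
# This mimics deployment, where the model always predicts the future.
cutoff = int(len(df) * 0.8)
train, test = df.iloc[:cutoff], df.iloc[cutoff:]
```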
15. Data set overfitting
By evaluating on the same data set over and over, you will overfit!
Overfitting is bounded by $O\big(\sqrt{\log(\#\text{trials}) / \#\text{examples}}\big)$.
Kishore's rule of thumb: subtract 1% accuracy for every time you have tested on a data set.
Ideally: create a second train / test split!
[Diagram: time-ordered data split into Train Data and Test Data (used for many runs), with a second held-out Test Data split (one run!) standing in for real-world data.]
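A quick numeric illustration of the bound; the values are chosen purely for illustration:

```python
import math

# Overfitting bound ~ sqrt(log(#trials) / #examples).
# With a 10,000-example test set that has been evaluated 100 times:
trials, examples = 100, 10_000
bound = math.sqrt(math.log(trials) / examples)
print(f"{bound:.3f}")  # ~0.021, i.e. roughly 2% of optimistic bias
```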
16. 3. Data Representation
The raw data (an email) is mapped to a feature vector, for example:

  "viagra" in email?               0
  "hello" in email?                1
  "cheap" in email?                0
  "$" in email?                    1
  "Microsoft" in email?            1
  ...
  Sender in address book?          0
  IP known?                        1
  Sent time in s since 1/1/1970    2342304222342
  Email size                       12323
  Attachment size                  0
  ...
  Percentile in email length       0.232
  Percentile in token likelihood   0.1
  ...
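A minimal sketch of such a representation; the schema, vocabulary, and helper name are illustrative, not from the talk:

```python
def email_features(email, address_book, known_ips,
                   vocab=("viagra", "hello", "cheap", "$", "Microsoft")):
    """Map a raw email (a dict, hypothetical schema) to a flat feature vector."""
    body = email["body"].lower()
    features = [float(w.lower() in body) for w in vocab]     # bag-of-words flags
    features.append(float(email["sender"] in address_book))  # sender known?
    features.append(float(email["ip"] in known_ips))         # IP known?
    features.append(email["sent_time"])                      # s since 1/1/1970
    features.append(len(body))                               # email size
    features.append(email.get("attachment_size", 0))         # attachment size
    return features
```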
18. Pitfall #2: Feature scaling
1. With linear classifiers / kernels, features should have a similar scale (e.g. range [0,1]).
2. You must use the same scaling constants for the test data!!! (Most likely the test data will not fall in a clean [0,1] interval.)
3. Dense features should be down-weighted when combined with sparse features.
(Scale does not matter for decision trees.)
$f_i \leftarrow (f_i + a_i) \cdot b_i$
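A minimal sketch with scikit-learn (mentioned on the packages slide); the key point is that the scaling constants are fit on training data only:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.array([[1.0, 200.0], [3.0, 400.0], [2.0, 300.0]])  # toy data
X_test = np.array([[4.0, 500.0]])                               # unseen data

scaler = MinMaxScaler()                    # learns (f_i + a_i) * b_i per feature
X_train_s = scaler.fit_transform(X_train)  # fit scaling constants on train ONLY
X_test_s = scaler.transform(X_test)        # reuse the SAME constants on test
print(X_test_s)                            # note: values may fall outside [0, 1]
```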
19. Pitfall #3: Over-condensing of features
Features do not need to be semantically meaningful.
Just add them: redundancy is (generally) not a problem.
Let the learning algorithm decide what's useful!
[Diagram: the long raw feature vector from slide 16 next to a short condensed feature vector (1.2, -23.2, 2.3, 5.3, 12.1); condensing throws information away.]
21. 4. Training Signal
• How reliable is my labeling source? (E.g. in web search, editors agree 33% of the time.)
• Does the signal have high coverage?
• Is the signal derived independently of the features?!
• Could the signal shift after deployment?
22. Quiz: Spam filtering
• "The spammer with IP e.v.i.l has sent 10M spam emails over the last 10 days; use all emails with this IP as spam examples." → not diverse, and the label is potentially in the data (the IP itself is a feature)
• Use users' spam / not-spam votes as the signal → too noisy
• Use WUSTL students' spam / not-spam votes → low coverage
25. Example: Spam filtering
[Diagram: incoming email flows through the old spam filter into the Inbox; a new ML spam filter annotates each email and is trained on the user's SPAM / NOT-SPAM feedback.]
Problem: users only vote when the classifier is wrong.
The new filter learns to exactly invert the old classifier!
Possible solution: occasionally let emails through the filter to avoid this bias (see the sketch below).
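A minimal sketch of that solution, framed here as epsilon-greedy exploration; the epsilon value and function names are hypothetical:

```python
import random

EPSILON = 0.01  # fraction of emails delivered regardless of the filter

def route_email(email, old_filter):
    """Occasionally bypass the filter so feedback is not biased toward its mistakes."""
    if random.random() < EPSILON:
        return "inbox"  # exploration: collect unbiased feedback on this email
    return "spam_folder" if old_filter(email) else "inbox"
```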
26. Example: Trusted votes
Goal: classify email votes as trusted / untrusted.
Signal conjecture: an evil spammer community votes "good" in a coordinated burst against a background of "bad" votes.
[Plot: votes over time; a cluster of "good" votes from the conjectured evil spammer community stands out among "bad" votes.]
27. Searching for signal
The good news: we found that exact pattern A LOT!!
[Plot: a vote timeline matching the conjectured evil-spammer pattern.]

28. Searching for signal
The good news: we found that exact pattern A LOT!!
The bad news: we found other patterns just as often.
[Plot: a vote timeline with a different pattern.]

29. Searching for signal
The good news: we found that exact pattern A LOT!!
The bad news: we found other patterns just as often.
[Plot: vote timelines with "good" and "bad" bursts in every arrangement.]
Moral: given enough data you'll find anything!
You need to be very, very careful that you learn the right thing!
30. 5. Learning Method
• Classification / Regression / Ranking?
• Do you want probabilities?
• How sensitive is a model to label noise?
• Do you have skewed classes / weighted examples?
• Best off-the-shelf: Random Forests, Boosted Trees, SVM
• Generally: try out several algorithms (a comparison sketch follows below)
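A minimal sketch of comparing several off-the-shelf algorithms with scikit-learn; the dataset is a stand-in:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)  # stand-in data

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "boosted trees": GradientBoostingClassifier(random_state=0),
    "svm": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: {scores.mean():.3f}")
```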
31. Method Complexity (KISS)
Common pitfall: using too complicated a learning algorithm.
ALWAYS try the simplest algorithm first!!!
Move to more complex systems only after the simple one works.
Rule of diminishing returns!!
(Scientific papers exaggerate the benefit of complex theory.)
QUIZ: What would you use for spam?
32. Ready-Made Packages
Weka 3
http://www.cs.waikato.ac.nz/~ml/index.html
Vowpal Wabbit (very large scale)
http://hunch.net/~vw/
Machine Learning Open Source Software Project
http://mloss.org/software
MALLET: Machine Learning for Language Toolkit
http://mallet.cs.umass.edu/index.php/Main_Page
scikit-learn (Python)
http://scikit-learn.org/stable/
Large-scale SVM:
http://machinelearning.wustl.edu/pmwiki.php/Main/Wusvm
SVMlin (very fast linear SVM)
http://people.cs.uchicago.edu/~vikass/svmlin.html
LIBSVM (powerful SVM implementation)
http://www.csie.ntu.edu.tw/~cjlin/libsvm/
SVMlight
http://svmlight.joachims.org/svm_struct.html
33. Model Selection
(parameter setting with cross validation)
• Do not trust default hyper-parameters.
• Use grid search / Bayesian Optimization; B.O. is usually better than grid search.
• Most importantly: the learning rate!!
[Diagram: the training data is split again into Train' and Val; pick the hyper-parameters that do best on Val.]
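A minimal grid-search sketch with scikit-learn; the data and parameter grid are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X_train, y_train = make_classification(n_samples=1000, random_state=0)  # stand-in

# Cross-validation splits the training data into Train' / Val internally.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"learning_rate": [0.01, 0.1, 0.3], "n_estimators": [50, 200]},
    cv=5,
)
grid.fit(X_train, y_train)
print(grid.best_params_)  # never trust the defaults
```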
35. Quiz
T/F: Condensing features with domain expertise improves learning. FALSE
T/F: Feature scaling is irrelevant for boosted decision trees. TRUE
T/F: To avoid data set overfitting, benchmark on a second train/test data set. TRUE
T/F: Ideally, derive your signal directly from the features. FALSE
T/F: You cannot create a train/test split when your data changes over time. FALSE
T/F: Always compute aggregate statistics over the entire corpus. FALSE
37. Debugging: Spam filtering
You implemented logistic regression with regularization.
Problem: your test error is too high (12%)!
QUIZ: What can you do to fix it?
38. Fixing attempts:
1. Get more training data
2. Get more features
3. Select fewer features
4. Feature engineering (e.g. meta features, header information)
5. Run gradient descent longer
6. Use Newton’s Method for optimization
7. Change regularization
8. Use SVMs instead of logistic regression
But: which one should we try out?
39. Possible problems
Diagnostics:
1. Underfitting: training error almost as high as test error
2. Overfitting: training error much lower than test error
3. Wrong algorithm: other methods do better
4. Optimizer: the loss function is not minimized
41. Diagnostics: Overfitting
[Learning curve: error vs. training set size; the training error sits well below the desired error, the testing error well above it, with a wide gap between them.]
• test error still decreasing with more data
• large gap between train and test error
Remedies:
- Get more data
- Do bagging
- Feature selection
42. Diagnostics: Underfitting
[Learning curve: error vs. training set size; both training and testing error sit above the desired error, close together.]
• even the training error is too high
• small gap between train and test error
Remedies:
- Add features
- Improve features
- Use a more powerful ML algorithm
- (Boosting)
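These learning curves are easy to produce; a minimal sketch with scikit-learn, where the data and model are stand-ins:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, random_state=0)  # stand-in data

sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y, cv=5,
    train_sizes=np.linspace(0.1, 1.0, 5),
)
for n, tr, te in zip(sizes, train_scores.mean(axis=1), test_scores.mean(axis=1)):
    # Large gap -> overfitting; high training error -> underfitting.
    print(f"n={n:5d}  train_err={1 - tr:.3f}  test_err={1 - te:.3f}  gap={tr - te:.3f}")
```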
43. Problem: You are "too good" on your setup ...
[Plot: error vs. iterations; training and testing error drop below the desired error, but the online error stays high.]
44. Possible Problems
• Is the label included in the data set?
• Does the training set contain test data?
Famous example in 2007: Caltech 101
[Bar chart: Caltech 101 test accuracy (0–90%) for 2005, 2006, and 2007.]
46. Problem: Online error > Test error
[Learning curve: error vs. training set size; training and testing error approach the desired error, but the online error remains well above both.]
47. Analytics
Suspicion: the online data is differently distributed.
Construct a new binary classification problem: online vs. train+test.
If you can learn this (error < 50%), you have a distribution problem!!
(You do not need any labels for this!!)
[Diagram: online examples vs. train/test examples as the two classes.]
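A minimal sketch of this check (often called adversarial validation); the data arrays are stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-ins: feature matrices for train+test data and for online data.
X_traintest = np.random.randn(1000, 20)
X_online = np.random.randn(1000, 20) + 0.5  # shifted -> distribution drift

# Label each example by its origin, not by spam/ham: no true labels needed.
X = np.vstack([X_traintest, X_online])
y = np.array([0] * len(X_traintest) + [1] * len(X_online))

acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
print(f"origin-classification error: {1 - acc:.2f}")  # well below 0.5 => problem
```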
48. Suspicion: Temporal distribution drift
[Diagram: a time-ordered Train/Test split gives 12% error; a shuffled Train/Test split gives 1% error.]
If E(shuffled) < E(time-ordered split), then you have temporal distribution drift.
Cures: retrain frequently / online learning.
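A minimal sketch of this shuffle test; the drifting toy data and the model are stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data, sorted by time, with concept drift: the relationship
# between the feature and the label flips in the most recent fifth.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] > 0).astype(int)
y[1600:] = 1 - y[1600:]  # drift: recent data follows the opposite rule

def split_error(shuffle):
    """Test error with a time-ordered (shuffle=False) or shuffled split."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, shuffle=shuffle, random_state=0
    )
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    return 1 - model.score(X_te, y_te)

# A much lower shuffled error signals temporal distribution drift.
print("time-ordered error:", round(split_error(False), 3))
print("shuffled error:    ", round(split_error(True), 3))
```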
49. Final Quiz
T/F: Increasing your training set size increases the training error. True
T/F: Temporal drift can be detected by shuffling the train/test sets. True
T/F: Increasing your feature set size decreases the training error. True
T/F: More features always decrease the test error. False
T/F: A very low validation error always indicates you are doing well. False
T/F: When an algorithm overfits, there is a big gap between train and test error. True
T/F: Underfitting can be cured with more powerful learners. True
T/F: The test error is (almost) never below the training error. True
50. Summary
"Machine learning is only sexy when it works."
ML algorithms deserve a careful setup.
Debugging ML is just like debugging any other code:
1. Carefully rule out possible causes
2. Apply appropriate fixes
51. Resources
I. H. Witten and E. Frank: Data Mining: Practical Machine Learning Tools and Techniques (Second Edition), Morgan Kaufmann, 2005.
Y. LeCun, L. Bottou, G. Orr and K.-R. Müller: Efficient BackProp, in Orr, G. and Müller, K. (Eds), Neural Networks: Tricks of the Trade, Springer, 1998.
C. M. Bishop: Pattern Recognition and Machine Learning, Springer, 2006.
Andrew Ng's ML course: http://www.youtube.com/watch?v=UzxYlbK2c7E