Recruiting Solutions 
My Three Ex’s: 
A Data Science Approach for 
Applied Machine Learning
Dedicated to three of my favorite ex-coworkers.
First, a disclosure. 
This isn’t a talk about machine learning. 
It’s a talk about applying machine learning. 
What’s the difference? 
Let’s talk about something else for a moment. 
Hash Tables 
What you (need to) know about hash tables. 
Theory
Class HashMap<K,V>
java.lang.Object
  java.util.AbstractMap<K,V>
    java.util.HashMap<K,V>
Type Parameters:
  K - the type of keys maintained by this map
  V - the type of mapped values
All Implemented Interfaces:
  Serializable, Cloneable, Map<K,V>
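Application
In practice, you rarely need the class hierarchy; you need the contract: put and get in expected O(1) time, and keys with consistent equals and hashCode. A minimal usage sketch in Java (the map contents are made up):

import java.util.HashMap;
import java.util.Map;

public class HashMapDemo {
    public static void main(String[] args) {
        // What application work actually relies on: put, get, and a
        // sensible default, each O(1) expected time when keys hash well.
        Map<String, Integer> searchCounts = new HashMap<>();
        searchCounts.put("software engineer", 42);
        searchCounts.put("data scientist", 17);
        searchCounts.merge("data scientist", 1, Integer::sum); // increment
        System.out.println(searchCounts.getOrDefault("recruiter", 0)); // 0
        System.out.println(searchCounts.get("data scientist"));        // 18
    }
}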
Now let’s get back to machine learning! 
Please allow me to introduce my three ex’s. 
Express. 
Explain. 
Experiment. 
Embrace the data science mindset. 
Express: understand your utility and inputs.
Explain: understand your models and metrics.
Experiment: optimize for the speed of learning.
Express. 
How to train your machine learning model. 
1. Define your objective function. 
2. Collect training data. 
3. Build models. 
4. Profit! 
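A sketch of those four steps in code, with an illustrative Model interface and a squared-error objective (neither is from the talk; this is just to make the steps concrete):

import java.util.List;

// Illustrative skeleton, not a real library API.
interface Model {
    double score(double[] features);
}

public class TrainingPipeline {
    // Step 1. Define your objective function: here, mean squared error.
    static double loss(Model m, List<double[]> xs, List<Double> ys) {
        double sum = 0;
        for (int i = 0; i < xs.size(); i++) {
            double err = m.score(xs.get(i)) - ys.get(i);
            sum += err * err;
        }
        return sum / xs.size();
    }

    // Step 2 happens elsewhere: collect (features, label) pairs from logs.
    // Step 3. Build models: keep the candidate that minimizes the objective.
    static Model best(List<Model> candidates, List<double[]> xs, List<Double> ys) {
        Model argmin = candidates.get(0);
        for (Model m : candidates) {
            if (loss(m, xs, ys) < loss(argmin, xs, ys)) argmin = m;
        }
        return argmin;
    }
    // Step 4. Profit! (Ship it behind an experiment; see Experiment below.)
}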
You can only improve what you measure. 
Clicks? 
Actions? 
Outcomes?
Be careful how you define precision… 
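Concretely, precision@k is the fraction of the top k results that are "relevant," and the slippery part is what counts as relevant. A sketch (the relevance sets below are made up) showing how the same ranking scores differently under clicks, actions, and outcomes:

import java.util.List;
import java.util.Set;

public class PrecisionAtK {
    // precision@k = (# of top-k results judged relevant) / k.
    static double precisionAtK(List<String> ranked, Set<String> relevant, int k) {
        long hits = ranked.stream().limit(k).filter(relevant::contains).count();
        return (double) hits / k;
    }

    public static void main(String[] args) {
        List<String> results = List.of("a", "b", "c", "d", "e");
        // Same ranking, three definitions of "relevant":
        Set<String> clicked  = Set.of("a", "b", "c"); // cheap, noisy signal
        Set<String> acted    = Set.of("a", "c");      // e.g., message sent
        Set<String> outcomes = Set.of("c");           // e.g., hire made
        System.out.println(precisionAtK(results, clicked, 3));  // 1.0
        System.out.println(precisionAtK(results, acted, 3));    // ~0.67
        System.out.println(precisionAtK(results, outcomes, 3)); // ~0.33
    }
}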
Account for non-uniform inputs and costs. 
Stratified sampling is your friend. 
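For example, if one segment dominates your logs, sampling uniformly will drown out the minority segments. A sketch of per-segment sampling (the Query record and segment labels are illustrative):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.stream.Collectors;

public class StratifiedSample {
    record Query(String segment, String text) {}

    // Sample up to n queries per segment instead of n overall, so rare
    // segments are represented regardless of traffic skew.
    static List<Query> stratified(List<Query> logs, int perSegment, Random rnd) {
        Map<String, List<Query>> bySegment =
                logs.stream().collect(Collectors.groupingBy(Query::segment));
        List<Query> sample = new ArrayList<>();
        for (List<Query> stratum : bySegment.values()) {
            List<Query> copy = new ArrayList<>(stratum);
            Collections.shuffle(copy, rnd);
            sample.addAll(copy.subList(0, Math.min(perSegment, copy.size())));
        }
        return sample;
    }
}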
An example of segmenting models. 
• Searcher: Recruiter, Query: Person Name
• Searcher: Job Seeker, Query: Person Name
• Searcher: Recruiter, Query: Job Title
• Searcher: Job Seeker, Query: Job Title
Express yourself in your feature vectors. 
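One way to act on the 2x2 grid above: either train one model per (searcher, query type) cell, or train a single model and express the segment as features. A hypothetical sketch of both options (names and feature layout are illustrative):

import java.util.HashMap;
import java.util.Map;

public class SegmentedRanking {
    enum Searcher { RECRUITER, JOB_SEEKER }
    enum QueryType { PERSON_NAME, JOB_TITLE }
    interface Model { double score(double[] features); }

    // Option A: one model per cell of the 2x2 grid.
    private final Map<String, Model> models = new HashMap<>();

    Model modelFor(Searcher s, QueryType q) {
        return models.get(s + "/" + q);
    }

    // Option B: a single model, with the segment one-hot encoded
    // alongside the other features.
    static double[] features(Searcher s, QueryType q, double textMatchScore) {
        return new double[] {
            s == Searcher.RECRUITER ? 1 : 0,
            q == QueryType.PERSON_NAME ? 1 : 0,
            textMatchScore
        };
    }
}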
Express: Summary. 
• Choose an objective function that models utility.
• Be careful how you define precision.
• Account for non-uniform inputs and costs.
• Stratified sampling is your friend.
• Express yourself in your feature vectors.
Explain. 
With apologies to The Little Prince.
Everyone is talking about Deep Learning. 
But accuracy isn’t everything. 
Explainable models, explainable features. 
• Less is more when it comes to explainability.
• Algorithms can protect you from overfitting, but they can't protect you from the biases you introduce.
• Introspection into your models and features makes it easier for you and others to debug them, especially if you don't completely trust your objective function or the representativeness of your training data.
Linear regression? Decision trees? 
• Linear regression and decision trees favor explainability over accuracy, compared to more sophisticated models.
• But size matters: too many features or too deep a decision tree, and you lose explainability.
• You can always upgrade to a more sophisticated model once you trust your objective function and training data.
• Building a machine learning model is an iterative process. Optimize for the speed of your own learning.
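A toy illustration of why small linear models are easy to debug: the learned weights are the explanation. A one-feature least-squares fit (the data is made up):

public class ExplainableLinear {
    public static void main(String[] args) {
        // Made-up data: x = query-document match score, y = observed CTR.
        double[] x = {0.1, 0.4, 0.5, 0.8, 0.9};
        double[] y = {0.02, 0.10, 0.12, 0.20, 0.24};
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        // Closed-form least squares for slope and intercept.
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        // The whole "explanation" is two numbers, easy to sanity-check.
        System.out.printf("ctr = %.3f * match + %.3f%n", slope, intercept);
    }
}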
Explain: Summary. 
• Accuracy isn't everything.
• Less is more when it comes to explainability.
• Don't knock linear models and decision trees!
• Start with simple models, then upgrade.
Experiment. 
Why experiments matter. 
“You have to kiss a lot of frogs to find one prince. So how can you find your prince faster? By finding more frogs and kissing them faster and faster.”
-- Mike Moran
Life in the age of big data. 
Yesterday: Experiments are expensive; choose hypotheses wisely.
Today: Experiments are cheap; do as many as you can!
So should we just test everything? 
Optimize for the speed of learning. 
Be disciplined: test one variable at a time. 
• Autocomplete 
• Entity Tagging 
• Vertical Intent 
• # of Suggestions 
• Suggestion Order 
• Language 
• Query Construction 
• Ranking Model 
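For instance, to test the ranking model in isolation, hold every other lever on the list fixed and compare two buckets that differ only in that one variable. A sketch of the readout, using a two-proportion z-test (the counts are made up):

public class OneVariableTest {
    public static void main(String[] args) {
        // Control and treatment differ in exactly one variable
        // (say, the ranking model); everything else is held fixed.
        long controlUsers = 50_000, controlConv = 2_450;
        long treatUsers   = 50_000, treatConv   = 2_610;

        double p1 = (double) controlConv / controlUsers;
        double p2 = (double) treatConv / treatUsers;
        double pooled = (double) (controlConv + treatConv)
                / (controlUsers + treatUsers);
        double se = Math.sqrt(pooled * (1 - pooled)
                * (1.0 / controlUsers + 1.0 / treatUsers));
        double z = (p2 - p1) / se;

        // |z| > 1.96 is significant at the 5% level (two-sided).
        System.out.printf("p1=%.4f p2=%.4f z=%.2f%n", p1, p2, z);
    }
}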
Experiment: Summary. 
• Kiss lots of frogs: experiments are cheap.
• But test in good faith: don't just flip coins.
• Optimize for the speed of learning.
• Be disciplined: test one variable at a time.
Bringing it all together. 
Express: understand your utility and inputs.
Explain: understand your models and metrics.
Experiment: optimize for the speed of learning.
Daniel Tunkelang 
dtunkelang@linkedin.com 
https://linkedin.com/in/dtunkelang
