Three wonderful researchers gathered a century of work on which hiring practices predict performance on the job. The problem is, they wrote a 75-page paper about it, and that's a barrier. I've summarized their paper into fewer than 30 slides so you can make the case for science-based hiring in your company.
Science-Based Hiring: An Actionable Guide
1. A Century of Hiring Research
Summarizing Schmidt, Oh, and Shaffer’s 2016 Paper
2. The Title:
The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 100 Years of Research Findings
What it Means:
We looked at 100 years of research on how well hiring practices predicted job performance and how useful they are.
The Paper:
https://www.researchgate.net/publication/309203898_The_Validity_and_Utility_of_Selection_Methods_in_Personnel_Psychology_Practical_and_Theoretical_Implications_of_100_Years_of_Research_Findings
3.
The Goal:
Find the strongest key performance indicators (KPIs) of job output among all researched hiring practices
▪ The researchers looked at the relationship between scores on
each hiring practice and performance on the job
▪ The researchers looked at how combinations of hiring
practices do, compared to the gold standard
▪ The paper explores how practically useful each KPI is across
job seniority, skill level, and job selectivity
[Chart: % of job performance explained, showing GMA's prediction plus the extra prediction added by integrity tests and structured interviews]
Productivity gain ($) = ΔKPI strength × selectivity × applicant variability
4.
Do we Even Need Science-Based Hiring?
• If everyone’s performance were the same, we wouldn’t need science-based hiring
• But this isn’t the case: the variability of performance in a job is very high
• It would be even higher if everyone were hired, or people were picked randomly1-3
• So we’re doing better than total randomness – can we improve?
1. Hunter et al., 1990 2. Schmidt & Hunter, 1983 3. Schmidt et al., 1979
5.
Do we Even Need Science-Based Hiring?
• The difference in output between an average employee (50th percentile) and an above-average employee (84th percentile) is worth at least 40% of their salary
• For a salary of $40,000, the difference in output between a below-average (16th percentile) and an above-average (84th percentile) employee is at least $32,000
• This means above-average people’s output is worth almost twice that of below-average people’s output – now that’s a problem for the bottom line
1. Hunter et al., 1990 2. Schmidt & Hunter, 1983 3. Schmidt et al., 1979
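The arithmetic above can be sketched directly (a minimal illustration of the 40%-of-salary rule; the function name and defaults are mine):

```python
def output_difference(salary, sd_pct=0.40, sds_apart=2):
    """Dollar difference in yearly output between two employees
    separated by `sds_apart` standard deviations of performance,
    where one SD of output is worth `sd_pct` of salary."""
    return salary * sd_pct * sds_apart

# Below-average (16th percentile) and above-average (84th percentile)
# employees are 2 SDs apart:
print(round(output_difference(40_000)))  # 32000, for a $40,000 salary
```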
6.
Executive Summary
• General mental ability (GMA) is more predictive of employee performance than we thought; more than any
other single hiring practice
• When combining GMA with integrity tests and structured interviews, there are measurable benefits
• When hiring, one should not rely on age, years of experience, graphology (handwriting analysis), or
references
• Switching to science-based hiring methods has substantial ROI through improved productivity across job
types and company sizes
• These practices have minimal downsides and are best paired with a debiased hiring process for a diverse,
inclusive, and productive workforce
8.
The More Skilled a Job, the More Value Science-Based Hiring Brings
• This average difference in output means that above-average employees, professionals, and managers are
worth much more than their average counterparts
• This translates to bottom-line differences for companies that hire average performers as compared to
companies that hire above-average performers
[Chart: above-average vs. average employee output]
• 19% difference in output for semi-skilled jobs
• 32% difference in output for skilled jobs
• 48% difference in output for high-skilled jobs (managerial, professional)
9.
The ROI of Science-Based Hiring
Assumptions: salary of $60,000 (professional job); KPI strength increase of 0.3
• Small company (50 employees, 5 new hires/year): $43,200 benefit per year
• Medium company (500 employees, 50 new hires/year): $432,000 benefit per year
• Large company (5,000 employees, 500 new hires/year): $4,320,000 benefit per year
Brogden, 1949; Schmidt et al., 1979
• Benefits of science-based hiring are greater for higher salaries, larger companies,
higher hiring rates, and more valid KPIs
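The benefit figures above can be reproduced with a Brogden-style utility calculation (a sketch under my own assumptions: one SD of output is worth 48% of salary, the high-skilled figure from the earlier slide, and the selectivity term is 1.0; the function name is mine):

```python
def hiring_utility(n_hires, delta_validity, salary,
                   sd_pct=0.48, selectivity=1.0):
    """Estimated yearly benefit of a more valid hiring practice:
    hires * gain in KPI strength * selectivity * output variability ($)."""
    sd_y = sd_pct * salary  # one SD of output, in dollars
    return n_hires * delta_validity * selectivity * sd_y

# Salary $60,000, KPI strength increase 0.3, as on the slide:
for n_hires in (5, 50, 500):  # small, medium, large company
    print(round(hiring_utility(n_hires, 0.3, 60_000)))
```

Scaling with hires, validity gain, and salary is why the benefit is greatest for large companies hiring for high-paying jobs.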
11.
Reliability vs. Validity
• Reliable + Valid: scores reflect the truth and are consistent – gold standard
• Reliable + Not Valid: scores do not reflect the truth but are consistent
• Not Reliable + Valid: scores reflect the truth but are not consistent
• Not Reliable + Not Valid: scores do not reflect the truth and are not consistent
12.
Hiring Example
• Reliable + Valid: high performers consistently score highly on the test – gold standard
• Reliable + Not Valid: high performers consistently score poorly on the test
• Not Reliable + Valid: high performers’ scores are inconsistent but high on average
• Not Reliable + Not Valid: high performers’ scores are inconsistent and tend to be low
13.
Important Terms
• Correlation: the strength of the relationship between two measures
• Positive correlation: high scorers on X are high scorers on Y
• Negative correlation: high scorers on X are low scorers on Y
• Meta-analysis: many research studies, with many people each, are combined, removing error and finding stable relationships
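As a concrete illustration of correlation (a minimal sketch in plain Python; the helper name is mine), a correlation can be computed from paired scores:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation: +1 = perfect positive, -1 = perfect negative."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# High scorers on X are high scorers on Y -> positive correlation
print(round(pearson([1, 2, 3, 4], [10, 20, 30, 40]), 6))  # 1.0
# High scorers on X are low scorers on Y -> negative correlation
print(round(pearson([1, 2, 3, 4], [40, 30, 20, 10]), 6))  # -1.0
```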
14.
Previously, Researchers did a Meta-Analysis of 85 Years of Research
KPIs for job performance (% of performance explained):
• Work sample tests: 29% (strong)
• Intelligence tests: 26%
• Structured interviews: 26%
• Integrity tests: 17%
• Conscientiousness tests: 10% (weak)
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-275.
15.
Standard Screening Practices were Weak or Useless Predictors
KPIs for job performance (% of performance explained):
• Structured interviews: 26% (strong)
• Unstructured interviews: 14%
• Reference checks: 7% (weak)
• Work experience (years): 3% (useless)
16.
Why Redo Everything 15 Years Later?
• In short, there were mathematical problems we only recently learned how to solve
• When you predict performance, people’s scores are restricted
• If you don’t just hire randomly (and who does that?), you are limiting the range of scores
• E.g., if every employee’s IQ is over 110, they don’t represent the applicant population well
• New techniques to mathematically correct this restriction mean we can find the KPIs of performance, even if we don’t have the full range of scores
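One such correction can be sketched with the standard Thorndike Case II formula for direct range restriction (variable names and the example numbers are mine): the observed correlation is scaled back up using the ratio of applicant-pool score spread to hired-employee score spread.

```python
from math import sqrt

def correct_range_restriction(r_restricted, u):
    """Thorndike Case II correction for direct range restriction.

    r_restricted: correlation observed among those hired
    u: SD of scores in the applicant pool / SD among those hired (u >= 1)
    """
    return (u * r_restricted) / sqrt((u ** 2 - 1) * r_restricted ** 2 + 1)

# A modest 0.30 correlation among hires, where the applicant pool's
# score spread is twice that of the hires, corrects to about 0.53:
print(round(correct_range_restriction(0.30, 2.0), 2))  # 0.53
```

Intuitively: the more selective the hiring was, the more the observed correlation understates the true one.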
17.
New KPIs
• There are new KPIs that didn’t exist in the older research set, including:
• Situational judgement tests: e.g., “You are working in a call centre. You receive a call; the customer
is upset and talking in a raised voice. What would you do?”
• College and graduate school grade point averages (GPA)
• Phone-based structured interviews: Interviews with the same questions and rating method
conducted over the phone and recorded for transcription
• Emotional intelligence: Skill- and trait-based metrics to look at emotional intelligence (more in the
myth-busting section!)
• Person-job fit: The degree of overlap in traits of the person and of the job
• Person-organization fit: The degree of overlap in characteristics of the org and the person (i.e.,
“culture fit”)
• Personality traits: Applicants’ responses to questions about themselves – reliable and valid (NOT the
MBTI or a Buzzfeed quiz, which are neither valid nor reliable!)
18.
Top Six KPIs for Job Performance
Top 6 KPIs for job performance (correlation):
#1 general mental ability: 0.65
#2 structured interviews: 0.58
#3 unstructured interviews: 0.58*
#4 peer ratings: 0.49
#5 job knowledge tests: 0.48
#6 integrity: 0.46
* Note about unstructured interviews: IF unstructured interviews were reliable (which they are not), then this is how valid they would be. This is a rare best-case scenario.
19.
Middle KPIs for Job Performance
(rank, KPI, correlation strength 0–1)
7 structured phone interview 0.46
8 past achievements description 0.45
9 job shadowing 0.44
10 assessment centers 0.36
11 biographical data 0.35
12 GPA 0.34
13 work sample tests 0.33
14 trait-based emotional intelligence 0.32
15 interests 0.31
16 reference checks 0.26
17 situational judgement test (knowledge) 0.26
18 situational judgement test (behavior) 0.26
19 skill-based emotional intelligence 0.23
20 conscientiousness 0.22
21 person-job fit 0.18
22 job experience (years) 0.16
23 person-organization fit 0.13
24 emotional stability 0.12
25 T&E point method 0.11
26 years of education 0.1
20.
Bottom Five KPIs for Job Performance
Bottom 5 KPIs for job performance (correlation):
#27 extraversion: 0.09
#28 agreeableness: 0.08
#29 openness to experience: 0.04
#30 graphology: 0.02
#31 age: 0
* Extraversion: outgoing, talkative. Agreeableness: accommodating, avoids conflict. Openness to experience: curious, artistic. Graphology: judging traits from handwriting.
21.
Which Practice(s) Should You Use?
• Cost and practicality are important to consider; you may choose one practice for simplicity
• Moving down the list and away from the gold standard across all jobs (i.e., general mental ability) means you are not
optimizing future job performance/output
(Note that your job might not follow this pattern: doing a scientific experiment before a full roll-out manages risk
while encouraging innovation)
• The most cost-efficient, effective, and scalable solution is a general mental ability test given at the earliest stage of
screening
• This approach can also address hiring biases in early screening stages1-4
• You may want to pair this with another practice to get a fuller picture of the candidate
• Next, we will discuss the best pairings with GMA
1. Oreopoulos, 2011 2. Kang et al., 2016 3. Uhlmann & Cohen, 2005 4. Dougherty, Turban, & Callender, 1994
22.
Overlap is Not Helpful in Screening Practices
• Sometimes, we want overlap to ensure our people analytics results make
sense
• Other times, overlap means our practices are redundant
• When pairing a screening process with GMA, one wants to tap into
qualities of an applicant that aren’t covered by the general mental ability
test
• The more overlap, the less useful the add-on practice is
• Next, we will look at the overlap between GMA and the other 30 practices;
which ones take a unique slice of the pie?
The Job Performance Pie: the bigger the slice, the more powerful the screening practice. GMA takes the biggest slice by explaining 42% of job performance (other factors: 58%).
23.
How Much do Other Practices Overlap with GMA?
The Job Performance Pie: GMA + integrity explains 61% of job performance (other factors: 39%), with integrity contributing a 20% slice beyond GMA. The higher the combined percentage, the less the two practices overlap.
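The pie-slice percentages can be reproduced from the correlations on the earlier slides (a sketch; squaring a correlation gives the share of performance it explains, and the integrity benefit of 0.13 comes from the slide-24 table — figures can differ by a point from the slides due to rounding):

```python
def pct_explained(correlation):
    """Share of job performance explained = correlation squared, as a %."""
    return round(correlation ** 2 * 100)

print(pct_explained(0.65))         # GMA alone -> 42 (%)
print(pct_explained(0.65 + 0.13))  # GMA + integrity benefit -> 61 (%)
```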
24.
The Added Benefit of Non-GMA Practices
(ranking; practice; [practice] benefit as a correlation; [practice] benefit as a pie slice, %)
1 integrity 0.13 20
2 structured interviews 0.117 18
3 unstructured interviews 0.087 13
4 interests 0.062 10
5 structured phone interview 0.057 9
6 conscientiousness 0.053 8
7 reference checks 0.05 8
8 openness to experience 0.039 6
9 biographical data 0.036 6
10 job experience (years) 0.032 5
11 trait-based emotional intelligence 0.029 5
12 person-organization fit 0.024 4
13 situational judgement test (knowledge) 0.015 2
14 person-job fit 0.014 2
15 assessment centers 0.013 2
16 T&E point method 0.009 1
17 GPA 0.009 1
18 years of education 0.008 1
19 extraversion 0.006 1
20 peer ratings 0.006 1
21 skill-based emotional intelligence 0.004 0
22 agreeableness 0.002 0
23 work sample tests 0.002 0
24 situational judgement test (behavior) 0.001 0
Zero benefit (correlation or pie slice %):
• emotional stability
• graphology
• job shadowing
• past achievements description
• job knowledge tests
• age
25.
The Best Pairings
• The next best practices with low GMA overlap are integrity
tests and structured interviews
• By adding either of these, you learn more about an applicant
that helps you predict their future performance
• We know there is more to a job than the task at hand and
there is more to an applicant than their mental aptitude
• Therefore, reliably screening for other key qualities is
important
The Job Performance Pies:
• GMA + integrity explains 61% (other: 39%); integrity benefit: 20%
• GMA + structured interviews explains 58% (other: 42%); structured interview benefit: 18%
26.
“What If I Can’t Use GMA?”
• Some reasons you can’t use GMA tests, or they are not ideal, include:
• applicants may not want to complete them (though applicants rate them among the most valid practices1,2)
• managerial applicants and above may see GMA as less relevant
• internal company concerns
• No need to worry – all hope is not lost
• Return to slides 19 and 20, and work your way down from #2 (skip unstructured interviews)
• No matter which practice(s) you use, consistency is key
• Give everyone a fair shot by treating them the same; this is crucial for science-based hiring
1. Anderson, Salgado, & Hulsheger, 2010 2. Hausknecht, Day, & Thomas, 2004
27.
What About Different Groups?
• KPI strength does not differ meaningfully for different subgroups1-4
• With similar scores, different genders and ethnic groups have similar future job performance5
• This holds across many ways of measuring job performance (e.g., objective metrics, supervisor ratings)
• Traits such as integrity and personality show minimal group differences
• For some practices, we don’t know whether there are differences (e.g., reference checks, years of experience)
• Some cognitive tests show group differences, yet interventions can reduce differences in testing6,7
• Ensuring that all other practices are debiased and removing barriers for overlooked groups is better than using weaker practices
1. Hartigan & Wigdor, 1989 2. Hunter, Schmidt, & Hunter, 1979 3. Rothstein & McDaniel, 1992 4. Schmidt, Berner, & Hunter, 1973 5. Schmidt &
Hunter, 1981 6. Baldiga, 2013 7. Harackiewicz, Canning, Tibbetts, Priniski, & Hyde, 2016
28.
Myth-Busting: “Emotional Intelligence” = Personality + IQ
• Emotional intelligence is very popular in HR circles
• But practitioners are not clear on what they mean by emotional intelligence
• It turns out, neither are scientists!
• Research shows that emotional intelligence is not a new, separate trait
• Instead, emotional intelligence contains two clusters of other, already existing traits1,2
• Cluster 1: trait-based measures
• Cluster 2: skill-based measures
• Cluster 1 overlaps almost entirely with personality (but is less reliable)
• Ironically, Cluster 2 completely overlaps with traditional intelligence3,4
• Anyone who says they want EQ, not IQ, doesn’t know what they are talking about: Cluster 2 of EQ IS IQ!
1. Murphy, 2006 2. Matthews, Zeidner, & Roberts, 2002 3. Joseph & Newman, 2010 4. Schmidt, 2011
29.
Limitations and Disclaimer
1. I did not address KPIs for training; however, the patterns are similar to the results here
2. The ROI calculations here are for illustrative purposes only; they are not a guarantee of increased output or cost savings
3. The benefits are greatest for high-skilled jobs with strict selection processes and variation in output
4. These numbers come from research across multiple jobs, locations, and organizations
A. Your mileage may vary: the order of KPIs may be different or all KPIs may be stronger or weaker
B. It is crucial to scientifically experiment with a small pilot test before making substantial changes
5. Just as some conclusions changed from reviewing 85 years of science-based hiring to 100 years, the takeaways may
change again
A. Another reason why experimenting in your own unique context is so important
30.
What We Learned
1. Science-based hiring is a great investment: it can be cheap, practical, and a competitive advantage for your company
2. The top five KPIs when hiring for future job performance are: general mental ability, interviews, peer ratings, job
knowledge tests, and integrity tests
3. When pairing science-based hiring practices with GMA, integrity tests and structured interviews add the most
4. There are minimal group differences in GMA and no differences for other science-based practices
5. Debiasing the recruitment and hiring process will make a bigger impact on diversity and inclusion than using less valid
hiring practices (I build these for clients – contact me on slide 32 for more)
6. Emotional intelligence is not a trait: measure GMA and personality instead for a better combination
31.
Science-Based HR, Innovation, and the Future of Work
• Driving innovation through scientific thinking and behavioral science (webinar with Kelly Peters, CEO and Co-Founder of
BEworks)
• Link (scroll down to past webinars):
http://www.rotman.utoronto.ca/FacultyAndResearch/ResearchCentres/BEAR/Events/BEAR-Webinar-Series
• Augmented Workers (article by Kelly Peters, CEO and Co-Founder, and Nate Barr, Scientific Advisor, BEworks)
• Link: http://behavioralscientist.org/resistance-futile-embracing-era-augmented-worker/
32. Let’s Connect.
Natasha Ouslis, Organizational Psychologist
BEworks / Western University
Toronto, Canada
nouslis@uwo.ca
https://www.linkedin.com/in/natashaouslis/
natashaouslis.com