This document discusses measurement, reliability, and validity in recruitment and selection. It defines measurement as assigning numbers to represent attributes of objects, and reliability as the consistency of scores across administrations. There are three methods for determining reliability: test-retest, equivalent forms, and split-halves. Validity is whether a test measures what it is intended to measure; the two main types are criterion validity, which relates test scores to job performance, and content validity, which relates test content to job tasks. The document outlines the steps to validate a test (analyzing the job, choosing tests, administering them, relating scores to criteria, and cross-validating) and discusses equal employment opportunity, test takers' individual rights, and common types of tests used by employers.
2. Measurement
• Measurement
• The process of assigning numbers to objects to represent quantities of
an attribute of the objects.
• Scores
• The amount of the attribute being assessed
• Correlation between scores
• A statistical measure of the relationship between two sets of scores
3. The Role of Measurement
• Assists the selection manager in understanding an applicant's attributes
• Numbers play an important role throughout the process
• It is essential to the implementation and administration of selection programs
• Sometimes companies make decisions that are off target from their
predictions, for example predicting that someone will be a star performer
who turns out to be the direct opposite.
• Often it is not the decision that was wrong but the data on which it was
based that were faulty.
4. Cont.
Criteria:
• Measures of an employee's success on the job
• E.g., dollars of sales, dollars saved, or targets achieved
Predictors:
• Forms of assessment used to predict the criteria
• Interviews, performance tests, paper-and-pencil tests, or computer tests.
• Background information
• Resumes
• Applications
• Interviews
• Tests:
• Aptitude or Ability
• Achievement
• Personality
5. Reliability
• A reliable test is one that yields consistent scores when a person takes two
alternative forms of the test or when he or she takes the same test on two
or more different occasions.
• Reliability refers to the consistency or stability of responses on a test. For
example, suppose a group takes a cognitive ability test this week and achieves a
mean score of 100.
• If the same test is repeated a week later and the group's mean score is
62, we have to conclude that something is wrong with the test. We would
describe the test as unreliable because it yields inconsistent
measurements.
• Slight variation in test scores is natural, but the fluctuation should not be
large. Tests that produce wide variation cannot be used in a selection
procedure.
6. Cont.
• There are three methods by which the reliability of a test can be
determined. They are:
i. Test- Retest Method
ii. The Equivalent Forms Method
iii. The Split Halves Method
7. Cont.
i. Test-Retest Method:
• This method involves administering a test twice to the same group of
people. The two sets of scores are then correlated to determine their
correspondence. The resulting correlation coefficient is called the
reliability coefficient; a high positive correlation indicates high
reliability.
• The limitations of this method are that it is uneconomical and that
learning or practice effects can influence scores in the second session.
8. Cont.
ii. The Equivalent Forms Method
• This method also uses a test-retest approach, but instead of
administering the same test a second time, an equivalent (parallel)
form of the test is administered.
• The disadvantage of this method is that it is difficult and costly to
develop two separate but equivalent tests.
iii. Split-Halves Method
• In this method, the test is taken only once. The items are divided into
two halves (e.g., odd-numbered versus even-numbered items) and the
two sets of scores are correlated.
9. VALIDITY
• Validity is the determination of whether the test or other selection
device measures what it is intended to measure. Validity often refers
to evidence that the test is job related.
• In employment testing, there are two main ways to demonstrate a
test’s validity.
i. Criterion validity
ii. Content validity
10. Cont.
i. Criterion validity:
• A type of validity based on showing that scores on the test (predictors) are related to job
performance (criterion).
• Demonstrating criterion validity means demonstrating that those who do well on the test
also do well on the job, & those who do poorly on the test do poorly on the job. Thus the
test has validity to the extent that people with higher test scores perform better on
the job.
• E.g. Are test scores in this class related to students’ knowledge of human resource management?
ii. Content validity:
• A test that is content valid is one that contains a fair sample of the tasks and skills
actually needed for the job in question.
• Do the test questions in this course relate to human resource management topics?
• Is taking an HR course the same as doing HR?
11. How to Validate a Test
• Step 1: Analyze the job:
• The first step is to analyze the job & write job descriptions & job
specifications.
• Specify the human traits & skills that are believed to be required for
adequate job performance. E.g., must an applicant be good at
communication? Know MS Office? These requirements become the
predictors: the human traits & skills believed to predict success on
the job.
• E.g. for an assembler’s job predictors might include manual dexterity
& patience. Specific criteria then might include quantity produced per
hour & number of rejects produced per hour.
12. Cont.
• Step 2: Choose the tests:
• Next, choose tests that measure the attributes important for job
success. Employers usually don’t start with just one test; they
choose several tests & combine them into a test battery. The test
battery aims to measure an array of possible predictors, such as
aggressiveness, extroversion, & numerical ability.
• E.g. NTS
• Step 3: Administer the test:
• Next, administer the test. There are two options.
13. Cont.
i. One option is to administer the tests to employees presently on the
job and then compare their test scores with their current performance;
this is called concurrent validation. Advantage: the data are already
available. Disadvantage: current employees may not be
representative of new applicants.
ii. Predictive validation is the second & more dependable way to
validate a test. The test is administered to applicants before hiring
them. After they have been on the job for some time, measure their
performance & compare it to their earlier test scores.
14. Cont.
• Step 4: Relate Test Scores and Criteria:
• The next step is to ascertain whether there is a significant relationship
between test scores (the predictors) & performance (the criterion). This is
done by correlating the scores on the test with job performance using
correlation analysis.
• Step 5: Cross-Validate and Revalidate:
• Before putting the test into use, repeat steps 3 & 4 on a new sample
of employees. At a minimum, an expert should revalidate the test
periodically.
15. Testing Program Guidelines
1. Validate the tests.
2. Monitor your testing/selection program.
3. Keep accurate records.
4. Use a certified psychologist.
5. Manage test conditions.
6. Revalidate periodically.
16. Equal Employment Opportunity (EEO) Aspects
of Testing
• An organization must be able to prove:
• That its tests are related to success or failure on the job (validity)
• That its tests don’t unfairly discriminate against minority or nonminority subgroups
(disparate impact).
• EEO guidelines and laws apply to all selection devices, including interviews,
applications, and references.
• Testing alternatives if a selection device has disparate impact:
• Institute a different, valid selection procedure that does not have an adverse impact.
• Show that the test is valid—in other words, that it is a valid predictor of performance
on the job.
• Monitor the selection test to see if it has disparate impact.
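One common way to monitor a selection device for disparate impact is the EEOC's "four-fifths rule"; the rule and the 80% threshold come from the Uniform Guidelines on Employee Selection Procedures, not from this text, and the numbers below are invented:

```python
def selection_rate(hired, applicants):
    """Fraction of applicants from a group who were selected."""
    return hired / applicants

# Hypothetical applicant pools
majority_rate = selection_rate(60, 100)   # 60% of majority applicants hired
minority_rate = selection_rate(20, 50)    # 40% of minority applicants hired

# Four-fifths rule: adverse impact is indicated when the minority
# group's selection rate falls below 4/5 (80%) of the majority rate
ratio = minority_rate / majority_rate
has_disparate_impact = ratio < 0.8
```

In this example the ratio is 0.40 / 0.60 ≈ 0.67, below the 0.8 threshold, so the device would be flagged for further review.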
18. Test Takers’ Individual Rights and Test Security
• Under the American Psychological Association’s standards for
educational and psychological tests, test takers have the right:
• To privacy and information.
• To the confidentiality of test results.
• To informed consent regarding use of these results.
• To expect that only people qualified to interpret the scores will have access to
them.
• To expect the test is fair to all.
19. Using Tests at Work
• Major types of tests used by employers
• Basic skills tests (45%)
• Drug tests (47%)
• Psychological tests (33%)
• Use of testing
• There is less testing overall now, but more testing is used as specific job
skills and work demands increase.
• Screen out bad or dishonest employees
• Reduce turnover by personality profiling
• Source of tests
• Test publishers