2. INTRODUCTION
▪ Reliability simply means the degree
of dependability, consistency or
stability of scores on a measure
used in selection research.
▪ This can be further simplified as meaning ‘if the
test were to be repeated, the individual being
tested would score something similar.’
▪ Reliability means that the selection methods,
tests and ensuing results are consistent and do
not vary with time, place or different subjects.
3. OBJECTIVES OF THE WORKSHOP
To know the essence of testing for
reliability in selection measures.
The various methods of estimating
reliability in selection measures.
Interpreting Reliability coefficients
How reliable should measures be
and how can Human Resource
managers reduce errors
4. ESSENCE OF RELIABILITY IN SELECTION
MEASURES
▪ Xobtained = Xtrue + Xerror
[Diagram: Selection Measure → Reliability → Best Candidate]
The essence of reliability is to estimate what percentage of an obtained
score is error, so as to reduce it to the barest minimum. A true score is
the score that would be obtained if all conditions were perfect, while an
error score represents the part of the score that is not related to the
characteristic, trait or attribute being measured.
Every score obtained from a selection measure is made up
of a true score and an error score.
Reliability cannot be measured per se; it can only be estimated.
We should therefore speak not of the reliability of a measure
but of the estimate of its reliability.
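The decomposition above can be illustrated with a small simulation (all numbers hypothetical): under this model, reliability is the share of obtained-score variance that comes from true scores rather than error.

```python
import random
import statistics

random.seed(42)

# Hypothetical simulation: each obtained score is a true score plus a
# random error component (the SDs of 10 and 5 are arbitrary assumptions).
true_scores = [random.gauss(50, 10) for _ in range(10_000)]
obtained = [t + random.gauss(0, 5) for t in true_scores]

# Reliability as the proportion of obtained-score variance attributable
# to true scores: expected value is 10^2 / (10^2 + 5^2) = 0.80.
reliability = statistics.variance(true_scores) / statistics.variance(obtained)
print(round(reliability, 2))
```

Shrinking the error SD pushes the ratio toward 1.0, i.e. a more reliable measure.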
5. METHODS OF MEASURING
RELIABILITY IN SELECTION
▪ Test-Retest Reliability
▪ Parallel or Equivalent Forms
▪ Internal Consistency:
a. Split-Half Reliability
b. Kuder-Richardson Reliability
c. Cronbach’s Coefficient Alpha
▪ Interrater Reliability Estimates:
a. Interrater Agreement
b. Interclass Correlation
c. Intraclass Correlation
6. INTERRATER RELIABILITY ESTIMATES
▪ This is done by comparing scores of
different raters of an applicant or
candidate on a particular measure.
▪ There are three categories that most of
the procedures tend to fall into; they
are:
a. Interrater agreement
b. Interclass Correlation
c. Intraclass Correlation
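As a sketch with hypothetical ratings, interrater agreement can be computed as the proportion of identical ratings, and Cohen’s Kappa (listed later among the major coefficients) corrects that agreement for the level expected by chance:

```python
from collections import Counter

# Hypothetical hire/reject ratings of ten candidates by two interviewers.
rater_a = ["hire", "hire", "reject", "hire", "reject",
           "hire", "reject", "reject", "hire", "hire"]
rater_b = ["hire", "reject", "reject", "hire", "reject",
           "hire", "hire", "reject", "hire", "hire"]
n = len(rater_a)

# (a) Interrater agreement: the proportion of candidates rated identically.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's Kappa corrects observed agreement for chance agreement.
count_a, count_b = Counter(rater_a), Counter(rater_b)
chance = sum((count_a[c] / n) * (count_b[c] / n)
             for c in set(rater_a) | set(rater_b))
kappa = (agreement - chance) / (1 - chance)

print(round(agreement, 2), round(kappa, 2))  # → 0.8 0.58
```

Note how the kappa of 0.58 is well below the raw 80% agreement: much of the raw agreement would occur even if both interviewers rated at random.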
7. TEST-RETEST
It is simply repeating a test on the same
subjects at different times and then
comparing the scores obtained by each
candidate in both tests.
It is advised to leave a time gap of at least 8
weeks between tests to offset the effects of
memory and practice.
A test-retest is the most common reliability
estimate.
8. PARALLEL OR EQUIVALENT FORMS
▪ This method of estimating reliability
is similar to the test-retest method
but instead of repeating the same
test a different form of the test is
used when retesting.
▪ One of the major reasons for this
method is to control the effect of
memory on the scores.
▪ The reliability coefficient in this case
is referred to as a coefficient of
equivalence.
▪ It is also important to note that the
two versions of the measure must
be equivalent; if not, we won’t have a
parallel form.
9. INTERNAL CONSISTENCY
RELIABILITY ESTIMATE
▪ To test Internal Consistency the
relationship between similar parts of a
measure is determined.
▪ In the split-half approach, for example,
the test is divided into two halves and
the correlation between both parts is
measured.
▪ It does not require the test to be taken
twice.
▪ The following procedures are mostly
applied: a. Split-Half Reliability
b. Kuder-Richardson Reliability
c. Cronbach’s Coefficient Alpha
10. INTERPRETING RELIABILITY
COEFFICIENTS
▪ A reliability coefficient is an index
that summarizes the relationship
between two or more sets of
measures.
▪ Reliability coefficients are mainly
correlation coefficients, and the
higher the coefficient the more
reliable the measure.
▪ There are some major reliability
coefficients, such as: Pearson’s
Product-Moment Correlation,
Cronbach’s Coefficient Alpha, the
Spearman-Brown formula, Cohen’s
Kappa, etc.
Reliability Coefficient | Interpretation
0.90 and higher | Excellent
0.80 – 0.89 | Good
0.70 – 0.79 | Adequate
Below 0.70 | May have limited applicability
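These bands can be encoded in a small helper (the cut-offs follow the table above; the function name is illustrative, not a standard API):

```python
def interpret_reliability(coefficient: float) -> str:
    """Map a reliability coefficient to an interpretation band."""
    if coefficient >= 0.90:
        return "Excellent"
    if coefficient >= 0.80:
        return "Good"
    if coefficient >= 0.70:
        return "Adequate"
    return "May have limited applicability"

print(interpret_reliability(0.85))  # → Good
```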
11. HOW CAN HUMAN RESOURCE
MANAGERS ENSURE RELIABILITY
▪ To ensure reliability the measure
must be standardized.
▪ A standardized measure keeps the
following factors consistent:
a. Content: the content of the measure
or predictor must be the same for all
applicants.
b. Administration: the information
should be collected under the same
conditions for all applicants.
c. Scoring: the rules for scoring are
specified before administering the
measure and are applied in the
same way.
12. FACTORS INFLUENCING THE RELIABILITY
OF A MEASURE
▪ Method of estimating reliability
▪ Individual Differences among respondents
▪ Sample
▪ Length of a Measure
▪ Test Question difficulty
▪ Homogeneity of a Measure
▪ Response Format
▪ Administration and Scoring of a Measure
13. HOW RELIABLE SHOULD
MEASURES BE?
Selection measures should be as
reliable as possible; if not, qualified
candidates might not be selected.
A measure should assess what it
claims to measure, consistently.
14. CONCLUSION
Errors in selection measurements cannot be
completely eliminated; an obtained score will
most likely contain an error component due
to any of the reasons discussed earlier.
It is the duty of hiring managers to
consistently adopt reliable measures and so
reduce the percentage of error in an
obtained score.