3. OBJECTIVES
DEFINE RELIABILITY
EXPLAIN THE THREE MAIN ATTRIBUTES OF A RELIABLE SCALE
CLASSIFY THE TYPES OF RELIABILITY
IDENTIFY FACTORS INFLUENCING RELIABILITY
DESCRIBE MEASURES TO IMPROVE RELIABILITY
6. • According to Rosenthal (1991), "Reliability is a major concern when a
psychological test is used to measure some attribute or behavior."
• According to Anastasi (1968), "Reliability refers to the consistency of
scores obtained by the same individuals when re-examined with the same
test on different occasions, or with different sets of equivalent items, or
under other variable examining conditions."
11. TEST-RETEST RELIABILITY (CONT.)
• It is a measure of reliability obtained by administering
the same test twice over a period of time to a group of
individuals.
• It assumes there is no change in the underlying trait
between the first and second administration.
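The test-retest estimate is simply the correlation between the two administrations. A minimal sketch, using hypothetical scores and a plain-Python Pearson correlation:

```python
# Test-retest reliability sketch: correlate scores from two
# administrations of the same test. Data are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Scores for five individuals tested on two occasions.
time1 = [12, 15, 11, 18, 14]
time2 = [13, 14, 12, 17, 15]

print(round(pearson(time1, time2), 2))  # a high r suggests a stable trait
```

The closer the coefficient is to 1, the more stable the scores are over time.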
13. PARALLEL-FORMS RELIABILITY (CONT.)
• It is a measure of reliability obtained by administering different
versions of an assessment tool to the same group of individuals.
• The scores from the two versions can then be correlated in order to
evaluate the consistency of results across alternate versions.
• If the correlation between the alternate forms is low, considerable
measurement error may be present, or the two forms may not be
truly equivalent.
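The same correlation computation applies here, this time between scores on the two forms. A minimal sketch with hypothetical Form A and Form B scores (the 0.7 cutoff is an illustrative assumption, not a fixed standard):

```python
# Parallel-forms reliability sketch: correlate scores from two
# equivalent versions of a test. Data are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

formA = [20, 25, 23, 28, 22, 26]
formB = [21, 24, 24, 27, 21, 27]

r = pearson(formA, formB)
print(round(r, 2))
if r < 0.7:  # illustrative threshold, an assumption for this sketch
    print("Low correlation: forms may not be equivalent")
```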
15. INTER-RATER RELIABILITY (CONT.)
• It is a measure of reliability used to assess the degree to which
different judges or raters agree in their assessment decisions.
• It is especially useful when judgments are relatively subjective;
this type of reliability is therefore more likely to be used when
evaluating artwork than when scoring math problems.
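Agreement between two judges can be quantified with Cohen's kappa, which corrects the raw agreement rate for agreement expected by chance. A minimal sketch with hypothetical pass/fail ratings of ten artworks:

```python
from collections import Counter

# Inter-rater reliability sketch: Cohen's kappa for two judges rating
# ten artworks "p" (pass) or "f" (fail). Data are hypothetical.

def cohens_kappa(ratings_a, ratings_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement: probability both raters pick the same category
    # if each rated at random with their own observed category rates.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

judge1 = ["p", "p", "f", "p", "f", "p", "p", "f", "p", "p"]
judge2 = ["p", "p", "f", "p", "p", "p", "f", "f", "p", "p"]

print(round(cohens_kappa(judge1, judge2), 2))
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance.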
17. INTERNAL CONSISTENCY RELIABILITY (CONT.)
• It is the measure of reliability used to evaluate the degree to which
different test items that probe the same construct produce similar
results.
• It indicates the homogeneity of the test. A common way of
assessing it is the odd-even method.
• TWO SUBTYPES:
1) Average inter-item correlation
2) Split-half reliability
18. • It is obtained by taking all of the items on a test that probe the
same construct, determining the correlation coefficient for each pair
of items, and finally taking the average of all of these correlation
coefficients.
• This final step yields the average inter-item correlation.
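The steps above can be sketched in a few lines: correlate every pair of items, then average. The response matrix is hypothetical (rows are respondents, columns are items rated 1-5):

```python
from itertools import combinations

# Average inter-item correlation sketch. Data are hypothetical
# 1-5 ratings; rows = respondents, columns = items probing
# the same construct.

def pearson(x, y):
    """Pearson correlation coefficient between two score sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

responses = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
]
items = list(zip(*responses))  # one tuple of scores per item

# Correlate each pair of items, then average the coefficients.
pairs = [pearson(a, b) for a, b in combinations(items, 2)]
avg_inter_item = sum(pairs) / len(pairs)
print(round(avg_inter_item, 2))
```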
19. • The process of obtaining split-half reliability begins by
splitting in half all items of a test that probe the same construct,
in order to form two "sets" of items.
• The entire test is administered to a group of individuals, the
total score for each set is computed, and finally the split-half
reliability is obtained by determining the correlation between
the two total set scores.
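A minimal sketch of the odd-even split described above, with hypothetical 0/1 item scores. The Spearman-Brown correction is then applied, since the correlation between two half-tests underestimates the reliability of the full-length test:

```python
# Split-half reliability sketch using an odd-even split.
# Data are hypothetical 0/1 item scores for six respondents
# on an eight-item test.

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

scores = [
    [1, 1, 1, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 0],
]
odd_totals  = [sum(row[0::2]) for row in scores]   # items 1, 3, 5, 7
even_totals = [sum(row[1::2]) for row in scores]   # items 2, 4, 6, 8

r_half = pearson(odd_totals, even_totals)
# Spearman-Brown correction: estimated reliability of the
# full-length test from the half-test correlation.
r_full = 2 * r_half / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))
```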
21. FACTORS INFLUENCING RELIABILITY (CONT.)
1) Data collection method
2) Interval between testing occasions
3) Test length
4) Speed of the test
5) Group homogeneity
22. CONT.
6) Difficulty of the items
7) Ambiguous wording
8) Inconsistency in test administration
9) Subjectivity of scoring (objective scoring is more
reliable than subjective scoring)
24. MEASURES TO IMPROVE RELIABILITY (CONT.)
1) Limiting subjectivity of all kinds
2) Ensuring the questions are clear
3) Ensuring that the expected answers are definite and
objective
4) Checking to make sure the time limits are adequate
25. CONT.
5) Giving simple, clear, and unambiguous instructions
6) Keeping choice within a test paper to a minimum
7) Conducting the test under identical and ideal
examination conditions
8) When using less reliable methods, increasing the
number of questions, observations, or examination time
29. The reliability of an assessment measure is truly important.
When choosing participants, the characteristics of the
candidates are a factor and are usually the same for each
assessment administered, as with this particular one. Cut-off
scores could be used if needed but are not inherently
necessary for this assessment measure.