RELIABILITY
PRESENTED BY-
DISHA SINHA
1ST YEAR M.Sc.
NURSING
ROLL- 2088011
OBJECTIVES
DEFINE RELIABILITY
EXPLAIN THREE MAIN ATTRIBUTES OF RELIABLE SCALE
CLASSIFY TYPES OF RELIABILITY
IDENTIFY FACTORS INFLUENCING RELIABILITY
DESCRIBE MEASURES TO IMPROVE RELIABILITY
DEFINITION
• According to Rosenthal (1991), "Reliability is a major concern when a psychological test is used to measure some attribute or behavior."
• According to Anastasi (1968), "Reliability refers to the consistency of scores obtained by the same individuals when re-examined with the same test on different occasions, or with different sets of equivalent items, or under other variable examining conditions."
THREE MAIN ATTRIBUTES OF A RELIABLE SCALE
1. STABILITY
2. HOMOGENEITY
3. EQUIVALENCE
TEST-RETEST RELIABILITY
• It is a measure of reliability obtained by administering the same test twice, over a period of time, to a group of individuals.
• It assumes there is no change in the underlying trait between the first and second administrations.
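The consistency between the two administrations is usually summarised with a correlation coefficient. A minimal sketch in Python; the student scores below are made-up illustration data, not from any real administration:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores of five students on the same test, given twice
first_administration = [70, 65, 80, 90, 55]
second_administration = [72, 63, 78, 92, 57]

test_retest_r = pearson_r(first_administration, second_administration)
```

A coefficient near 1 supports stability; the result is only meaningful if the underlying trait really did not change between the two occasions.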
PARALLEL FORMS RELIABILITY
• It is a measure of reliability obtained by administering different versions of an assessment tool to the same group of individuals.
• The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.
• If the correlation between the alternate forms is low, it could indicate that considerable measurement error is present, because two different scales were used.
INTER-RATER RELIABILITY
• It is a measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions.
• It is especially useful when judgments are relatively subjective; thus this type of reliability is more likely to be used when evaluating artwork than when evaluating math problems.
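Agreement between two judges is often quantified with Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A sketch with hypothetical artwork grades:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: two-rater agreement corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[c] * counts2[c] for c in counts1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical grades two judges assigned to the same eight artworks
judge1 = ["A", "A", "B", "B", "C", "C", "A", "B"]
judge2 = ["A", "A", "B", "C", "C", "C", "A", "B"]

print(round(cohens_kappa(judge1, judge2), 3))  # prints 0.814
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance.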
INTERNAL CONSISTENCY RELIABILITY
• It is a measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.
• It indicates the homogeneity of the test. A common way of assessing it is the odd-even method.
• TWO SUBTYPES:
  1) Average inter-item correlation
  2) Split-half reliability
• It is obtained by taking all of the items on a test that probe the same construct; determining the correlation coefficient for each pair of items; and finally taking the average of all of these correlation coefficients.
• This final step yields the average inter-item correlation.
• The process of obtaining split-half reliability begins by splitting in half all the items of a test that are intended to probe the same construct, in order to form two "sets" of items.
• The entire test is administered to a group of individuals, the total score for each set is computed, and finally the split-half reliability is obtained by determining the correlation between the two total set scores.
FACTORS INFLUENCING RELIABILITY
1) Data collection method
2) Interval between testing occasions
3) Test length
4) Speed of the method
5) Group homogeneity
6) Difficulty of the items
7) Ambiguous wording
8) Inconsistency in test administration
9) Objectivity of scoring (objective scoring is more reliable than subjective scoring)
MEASURES TO IMPROVE RELIABILITY
1) Limiting subjectivity of all kinds
2) Ensuring the questions are clear
3) Ensuring that the expected answers are definite and objective
4) Checking to make sure the time limits are adequate
5) Giving simple, clear and unambiguous instructions
6) Keeping choice within a test paper to a minimum
7) Conducting the test under identical and ideal examination conditions
8) When using less reliable methods, increasing the number of questions, observations, or the examination time
CONCLUSION
Reliability of an assessment measure is truly important. When choosing participants, the characteristics of the candidates are a factor and are usually the same for each assessment administered. Cut-off scores can be used if needed, but they are not inherently necessary for this assessment measure.
What is reliability? Explain inter-rater reliability in detail.
REFERENCES
1. Sodhi Kaur Jaspreet, Comprehensive Textbook of Nursing Education (as per INC syllabus), 1st edition, Jaypee, The Health Sciences Publishers, pp. 199-200.
2. Basheer P. Shabeer, Textbook of Nursing Education, 2nd edition, Emmess Medical Publishers, pp. 229-230.
3. https://www.slideshare.net/gurpreetsinghSIDHU2/reliability-55241451
4. https://www.slideshare.net/tmthatchupeacefeul/reliability-64021229