TYPES OF RELIABILITY
A.JEEVARATHINAM
Assistant Professor
Department of Home Science
V.V.Vanniaperumal College for Women
Virudhunagar
Types of Reliability
Test-retest reliability
Test-retest reliability measures the stability of the scores of a stable construct
obtained from the same person on two or more separate occasions.
How is it measured?
• A group of participants completes a questionnaire designed to measure
personality traits.
• They repeat the questionnaire days, weeks, or months apart.
• Calculate the correlation coefficient between the two sets of scores.
• Interpret the correlation coefficient.
Test-retest reliability
• For example, if 10 students took the test and retest, then N would be 10. Following the N is the Greek letter
sigma (Σ), which means "the sum of". Σxy means we multiply each test score x by its paired retest score y and
then sum those products across all N pairs. This is different from ΣxΣy, the sum of all test scores multiplied by
the sum of all retest scores, which appears as a separate term in the correlation formula.
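The quantities described above are the ingredients of the Pearson product-moment correlation, r = (NΣxy − ΣxΣy) / √[(NΣx² − (Σx)²)(NΣy² − (Σy)²)]. A minimal sketch in Python, using invented test and retest scores for ten students (the data are illustrative only):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson r = (N*Sxy - Sx*Sy) / sqrt((N*Sxx - Sx^2) * (N*Syy - Sy^2))."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))   # sum of the products of each pair
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    return (n * sxy - sx * sy) / sqrt((n * sxx - sx**2) * (n * syy - sy**2))

# Hypothetical test and retest scores for 10 students
test   = [85, 78, 92, 60, 71, 88, 95, 66, 74, 81]
retest = [83, 80, 90, 63, 70, 85, 94, 68, 72, 79]
print(round(pearson_r(test, retest), 3))
```

A coefficient near 1 would indicate that the students' scores were stable between the two occasions.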
Test-retest reliability
How is it interpreted?
• A correlation coefficient of 1 indicates a perfect positive correlation, while -1
indicates a perfect negative correlation
• A correlation coefficient above 0.9 indicates excellent reliability
• A correlation coefficient between 0.8 and 0.9 indicates good reliability
• A correlation coefficient between 0.7 and 0.8 indicates acceptable reliability
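These rule-of-thumb bands can be written as a small helper function. The thresholds follow the bullets above; the "below acceptable" label for coefficients of 0.7 or less is an assumption added here for completeness:

```python
def interpret_reliability(r):
    """Map a correlation coefficient to a rule-of-thumb reliability label."""
    if r > 0.9:
        return "excellent"
    if r > 0.8:
        return "good"
    if r > 0.7:
        return "acceptable"
    return "below acceptable"   # assumed label; not part of the bands above

print(interpret_reliability(0.93))  # excellent
print(interpret_reliability(0.75))  # acceptable
```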
Inter-rater reliability
• In statistics, inter-rater reliability (also called by various
similar names, such as inter-rater agreement, inter-rater
concordance, inter-observer reliability, inter-coder reliability,
and so on.
• Inter-rater reliability measures the agreement between
subjective ratings by multiple raters, inspectors, judges, or
appraisers.
• It measures how likely two or more judges are to give the same
ranking to an individual event or person.
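One widely used statistic for agreement between two raters on categorical judgments is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch, with invented ratings from two hypothetical judges:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), chance-corrected agreement."""
    n = len(rater_a)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # agreement expected if both raters assigned categories independently
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented "yes"/"no" judgments from two raters on eight cases
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes"]
print(round(cohens_kappa(a, b), 3))
```

Kappa is 1 for perfect agreement and near 0 when the raters agree no more often than chance would predict.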
Inter-rater reliability – example
Inter-rater reliability – steps
Inter-rater reliability – inference
Parallel forms reliability
• Parallel forms reliability (also called equivalent forms
reliability) uses one set of questions divided into two equivalent
sets (“forms”), where both sets contain questions that measure
the same construct, knowledge or skill.
• The two sets of questions are given to the same sample of
people within a short period of time and an estimate of reliability
is calculated from the two sets.
Parallel forms reliability - example
• The Sound Recognition Test is a test for a condition known as auditory agnosia, an impairment of a person's
ability to recognize familiar environmental sounds, such as a bell, a whistle, or crowd sounds.
• There are two forms of the test, A and B, with 13 items per test.
• Scoring is based on allowing up to 3 points per item, making 39 the highest possible score.
• A group of normal, five-year-old children was selected and given form A.
• Then, the next day, they were given form B.
• The accompanying table shows the data and the scheme for calculating the reliability coefficient.
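Since the table of scores is not reproduced here, the calculation can be sketched with invented Form A and Form B scores (each out of a possible 39). The parallel-forms reliability coefficient is the Pearson correlation between the two forms:

```python
from math import sqrt

def reliability_coefficient(form_a, form_b):
    """Parallel-forms reliability as the Pearson correlation between two forms."""
    n = len(form_a)
    sa, sb = sum(form_a), sum(form_b)
    sab = sum(x * y for x, y in zip(form_a, form_b))
    saa = sum(x * x for x in form_a)
    sbb = sum(y * y for y in form_b)
    return (n * sab - sa * sb) / sqrt((n * saa - sa**2) * (n * sbb - sb**2))

# Invented scores (0-39 possible) for eight children on Forms A and B
form_a = [35, 28, 31, 22, 38, 26, 30, 33]
form_b = [34, 27, 33, 24, 37, 25, 29, 34]
print(round(reliability_coefficient(form_a, form_b), 3))
```

A high coefficient would indicate that Forms A and B measure the construct equivalently.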
Parallel forms reliability - steps
THANK YOU
