Presented By:
Ramsha Makhdum
 Reliability refers to the extent to which a scale produces consistent results when the
measurements are repeated a number of times.
 Reliability is a measure of the stability or consistency of test scores.
 A measurement procedure is reliable when it yields consistent scores while the phenomenon
being measured is not changing.
 Degree to which scores are free of "measurement error"; the consistency of the
measurement.
 Example: Weighing scale used multiple times in a day by the same individual.
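As a minimal sketch of the weighing-scale example, the hypothetical Python snippet below treats a small spread across repeated readings as the mark of a consistent instrument (the readings themselves are invented for illustration):

```python
import numpy as np

# Hypothetical readings (kg) from one person stepping on a scale five times in a day
readings = np.array([70.1, 70.0, 70.2, 70.1, 70.0])

# A reliable scale yields nearly identical readings while true weight is unchanged:
# a small standard deviation relative to the mean signals consistent measurement
print(readings.mean())           # ~70.08
print(readings.std(ddof=1))      # ~0.08, i.e., very little variation
```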
 Internal consistency reliability
 Test-retest reliability
 Split–half method
 Inter-rater reliability
 Also known as inter-item reliability.
 It is the measure of how well the items on the test measure the same construct or
idea.
 Cronbach's alpha is the statistic most commonly used to measure inter-item reliability, i.e.,
to check whether questionnaires with multiple questions are reliable. Its value should be above 0.7.
 Test-retest reliability is a measure of reliability obtained by administering the same test
twice, over a period of time, to the same group of individuals.
 Test-retest reliability is the degree to which scores are consistent over time.
 Same test, different times.
 Example: Administering the same questionnaire, such as an IQ test, at two different points in time.
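In practice, test-retest reliability is usually quantified as the correlation between scores from the two administrations; below is a minimal sketch with invented scores:

```python
import numpy as np

# Hypothetical scores for the same 6 people on two administrations of a test
time1 = np.array([98, 110, 102, 95, 120, 105])
time2 = np.array([100, 108, 101, 97, 118, 107])

# Test-retest reliability: Pearson correlation between the two occasions
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))  # values near 1 indicate scores are stable over time
```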
 A method of determining the reliability of a test by dividing the whole test into two
halves and scoring the two halves separately.
 Especially appropriate when the test is very long.
 The most commonly used way to split the test into two is the odd-even strategy (odd-numbered items form one half, even-numbered items the other), as in the sketch below.
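A minimal sketch of the odd-even split, including the Spearman-Brown correction commonly applied to estimate full-length reliability from the half-test correlation (the item data are hypothetical):

```python
import numpy as np

def split_half_reliability(scores):
    """Odd-even split-half reliability with the Spearman-Brown correction.
    `scores` is an (n_respondents x n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    odd = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5, ... (odd positions)
    even = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6, ... (even positions)
    r_half = np.corrcoef(odd, even)[0, 1]
    # Spearman-Brown: estimate the reliability of the full-length test
    return 2 * r_half / (1 + r_half)

# Hypothetical data: 5 respondents x 6 right/wrong items
data = [[1, 1, 0, 1, 1, 1],
        [0, 1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0, 0],
        [1, 0, 1, 1, 1, 1]]
print(round(split_half_reliability(data), 2))
```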
 Inter-rater reliability is the extent to which two or more raters (or observers,
coders, examiners) agree.
 Inter-rater reliability is essential when making decisions in research and clinical
settings.
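One common index of inter-rater agreement for categorical judgments is Cohen's kappa, which corrects raw agreement for agreement expected by chance; here is a minimal sketch with two hypothetical raters:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.unique(np.concatenate([a, b]))
    p_observed = np.mean(a == b)  # raw proportion of cases where raters agree
    # Chance agreement: product of each rater's marginal label proportions
    p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in labels)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical example: two clinicians classify 10 cases as 'yes' or 'no'
rater1 = ['yes', 'no', 'yes', 'yes', 'no', 'yes', 'no', 'no', 'yes', 'yes']
rater2 = ['yes', 'no', 'yes', 'no', 'no', 'yes', 'no', 'yes', 'yes', 'yes']
print(round(cohens_kappa(rater1, rater2), 2))  # 1.0 would be perfect agreement
```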
 Neuman, W. L. (2014). Social Research Methods: Qualitative and Quantitative Approaches.
Pearson Education Limited.
