Reliability

Reliability refers to the extent to which a scale produces consistent results when the measurement is repeated a number of times. It is a measure of the stability or consistency of test scores: a measurement procedure is reliable when it yields consistent scores while the phenomenon being measured is not changing. In other words, reliability is the degree to which scores are free of measurement error.
Example: a weighing scale used multiple times in a day by the same individual should give the same reading each time.

Types of reliability
- Internal consistency reliability
- Test-retest reliability
- Split-half method
- Inter-rater reliability
(Computational sketches of the corresponding reliability coefficients are given at the end of these notes.)

Internal consistency reliability
Also known as inter-item reliability, this is a measure of how well the items on a test measure the same construct or idea. Cronbach's alpha is the statistic most commonly used to assess inter-item reliability, for example to check whether a questionnaire with multiple questions is reliable. Its value should be above 0.7 to be considered acceptable.

Test-retest reliability
Test-retest reliability is obtained by administering the same test twice, over a period of time, to the same group of individuals. It is the degree to which scores are consistent over time: same test, different times.
Example: administering the same questionnaire, such as an IQ test, at two different times.

Split-half method
A method of determining the reliability of a test by dividing the whole test into two halves and scoring the two halves separately. It is especially appropriate when the test is very long. The most common way to split the test is the odd-even strategy, in which odd-numbered items form one half and even-numbered items the other.

Inter-rater reliability
Inter-rater reliability is the extent to which two or more raters (observers, coders, examiners) agree. It is essential when making decisions in research and clinical settings.

References
Neuman, W. L. (2014). Social Research Methods: Qualitative and Quantitative Approaches. Pearson Education Limited.
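
Sketches of the reliability coefficients

To make the Cronbach's alpha criterion concrete, here is a minimal Python sketch that computes alpha from a small respondents-by-items score matrix. The data and variable names are hypothetical; as the notes state, values above 0.7 are usually read as acceptable.

```python
# Minimal sketch of Cronbach's alpha; `scores` holds hypothetical data
# (rows = respondents, columns = items).
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # one tuple of scores per item
    item_vars = [pvariance(item) for item in items]
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point responses: 4 respondents x 3 items.
scores = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")  # above 0.7 is usually deemed acceptable
```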
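Test-retest and split-half reliability are typically quantified with a correlation coefficient. The sketch below, using hypothetical score data, correlates two administrations for test-retest reliability and, for the odd-even split, applies the Spearman-Brown correction (a standard adjustment not named in the notes above) to estimate the reliability of the full-length test.

```python
# Sketch of test-retest and split-half reliability; all data are hypothetical.
from statistics import correlation  # Python 3.10+

# Test-retest: the same respondents take the same test on two occasions;
# the Pearson correlation between the two score sets is the reliability estimate.
time1 = [22, 30, 27, 18, 25]
time2 = [24, 29, 28, 17, 26]
test_retest_r = correlation(time1, time2)

# Split-half: total the odd-numbered and even-numbered items separately,
# correlate the two halves, then apply the Spearman-Brown correction.
odd_half = [11, 16, 13, 9, 12]
even_half = [12, 14, 14, 9, 13]
half_r = correlation(odd_half, even_half)
split_half_reliability = (2 * half_r) / (1 + half_r)

print(f"test-retest r = {test_retest_r:.2f}")
print(f"split-half (Spearman-Brown) = {split_half_reliability:.2f}")
```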
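Inter-rater agreement can be summarized as simple percent agreement or, correcting for agreement expected by chance, as Cohen's kappa (added here as an illustration; the notes above do not name a specific statistic). The rating data below are hypothetical.

```python
# Sketch of two inter-rater agreement indices for two raters; hypothetical labels.
def percent_agreement(rater_a, rater_b):
    """Proportion of cases where the two raters assign the same category."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = percent_agreement(rater_a, rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

rater_a = ["yes", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes"]
print(f"percent agreement = {percent_agreement(rater_a, rater_b):.2f}")
print(f"Cohen's kappa = {cohens_kappa(rater_a, rater_b):.2f}")
```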