The document defines reliability as the consistency, or reproducibility, of test scores across testing situations when the same or parallel instruments are used. It then describes four common types of reliability: inter-rater reliability, the consistency of scores between different raters; test-retest reliability, the consistency of scores over time; parallel-forms reliability, the consistency of scores between equivalent versions of a test; and internal consistency reliability, the consistency of scores across items that measure the same construct. Each type is typically measured by computing a correlation between the relevant pairs of scores, whether from different raters, administrations, forms, or items. Reliability matters because it is a prerequisite for validity: a test cannot measure what it is intended to measure unless it first measures consistently.
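As a minimal sketch of the correlation-based methods mentioned above, the following code estimates two of the reliability types from raw scores: test-retest reliability via a Pearson correlation, and internal consistency via Cronbach's alpha. The sample data, function names, and choice of coefficients are illustrative assumptions, not taken from the source document.

```python
def pearson_r(x, y):
    """Pearson correlation between two score lists.

    Used here for test-retest reliability (same people, two occasions);
    the same computation applies to parallel-forms or inter-rater scores.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


def cronbach_alpha(items):
    """Internal consistency reliability (Cronbach's alpha).

    `items` is a list of per-item score lists, aligned so that index i in
    every inner list refers to the same respondent.
    """
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):
        # Sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))


# Hypothetical data: the same 5 people tested on two occasions
time1 = [10, 12, 9, 15, 11]
time2 = [11, 13, 9, 14, 12]
print(round(pearson_r(time1, time2), 2))   # → 0.93

# Hypothetical 3-item scale answered by the same 5 people
items = [[3, 4, 2, 5, 4],
         [2, 4, 3, 5, 3],
         [3, 5, 2, 4, 4]]
print(round(cronbach_alpha(items), 2))     # → 0.87
```

Both coefficients range over roughly the same scale (values near 1 indicating high reliability), which is why correlation-style statistics serve as a common measurement method across the reliability types the document lists.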