This document discusses kappa statistics, which measure interrater reliability beyond the agreement expected by chance. Kappa statistics are useful when multiple raters interpret subjective data, such as radiology images. The kappa statistic is computed from the observed agreement between raters and the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is the proportion of observed agreement and p_e is the proportion of agreement expected by chance. Examples show how to calculate kappa when two raters assess whether a biomarker is present or absent in a set of samples. A 95% confidence interval for kappa is obtained by adding and subtracting 1.96 (the critical value of the standard normal distribution for a 95% confidence level) times the standard error of kappa.
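
The sketch below illustrates the calculation described above for two raters and a binary (present/absent) biomarker rating. The ratings are hypothetical example data, the function names are illustrative, and the standard error used for the 95% confidence interval is one common large-sample approximation, not necessarily the document's own formula.

```python
import math


def cohens_kappa(ratings_a, ratings_b):
    """Return (kappa, p_o, p_e, n) for two equal-length lists of ratings."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("Both raters must rate the same samples.")
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)

    # Observed agreement p_o: proportion of samples on which the raters agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Expected chance agreement p_e: for each category, the product of the two
    # raters' marginal proportions, summed over all categories.
    p_e = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )

    kappa = (p_o - p_e) / (1 - p_e)
    return kappa, p_o, p_e, n


def kappa_confidence_interval(kappa, p_o, p_e, n, z=1.96):
    """Approximate 95% CI using a common large-sample standard error for kappa."""
    se = math.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))
    return kappa - z * se, kappa + z * se


# Hypothetical ratings for ten samples: 1 = biomarker present, 0 = absent.
rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

kappa, p_o, p_e, n = cohens_kappa(rater_a, rater_b)
lo, hi = kappa_confidence_interval(kappa, p_o, p_e, n)
print(f"kappa = {kappa:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

With these example ratings the raters agree on 8 of 10 samples (p_o = 0.80), chance agreement is 0.52, and kappa is about 0.58; the interval is wide because the sample is small, which is typical of kappa estimates from few samples.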