The document surveys quantitative fairness in computational journalism, focusing on bias in algorithmic decisions about criminal justice, lending, and child maltreatment screening. It contrasts experimental with observational measurements of bias, examines the implications of machine learning for fairness, and discusses the difficulty of achieving equitable outcomes across demographic groups. The analysis draws on examples of racial bias in judicial decisions, sentencing disparities, and the trade-offs inherent in machine-driven risk assessment.
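One common quantitative fairness check mentioned in debates over risk assessments is comparing error rates across demographic groups. The sketch below is a hypothetical illustration (the data, groups, and threshold are invented, not drawn from the document): it computes the false positive rate of a risk score for two groups, the kind of disparity at the center of the criminal-justice examples.

```python
def false_positive_rate(scores, labels, threshold):
    """FPR = share of true negatives (label 0) flagged as high risk."""
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    if not negatives:
        return 0.0
    flagged = sum(1 for s in negatives if s >= threshold)
    return flagged / len(negatives)

# Toy data: (risk scores, true outcomes) for two hypothetical groups.
group_a_scores, group_a_labels = [0.9, 0.7, 0.4, 0.2], [1, 0, 0, 0]
group_b_scores, group_b_labels = [0.8, 0.6, 0.5, 0.3], [1, 1, 0, 0]

fpr_a = false_positive_rate(group_a_scores, group_a_labels, threshold=0.5)
fpr_b = false_positive_rate(group_b_scores, group_b_labels, threshold=0.5)
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}")
```

A gap between the two rates is one way such audits operationalize "bias": people who would not reoffend are flagged as high risk at different rates depending on group membership.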