This document discusses fairness in automated decision systems. It outlines challenges such as algorithmic bias arising from training on biased data. It presents fairness metrics, including the disparate impact ratio and statistical parity, that quantify group fairness. Approaches to mitigating bias include pre-processing to learn fair representations, in-processing techniques such as regularization and adversarial debiasing, and post-processing methods such as calibrated equalized odds. A case study describes LinkedIn's fairness architecture, which uses a fairness analyzer and a mitigation trainer to learn from previous outputs and apply post-processing corrections. The summary emphasizes the need to account for bias introduced during data selection, to apply mitigations such as regularization during model development, and to continue evaluating fairness after deployment.
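As a concrete illustration of the group-fairness metrics named above, the sketch below computes the statistical parity difference and the disparate impact ratio for a set of binary decisions split across two groups. The variable names (`y_pred`, `group`) and the toy data are illustrative assumptions, not taken from the source document.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-decision rates between the two groups."""
    rate_ref = y_pred[group == 0].mean()        # reference group
    rate_prot = y_pred[group == 1].mean()       # protected group
    return rate_prot - rate_ref

def disparate_impact_ratio(y_pred, group):
    """Ratio of the protected group's positive rate to the reference group's."""
    rate_ref = y_pred[group == 0].mean()
    rate_prot = y_pred[group == 1].mean()
    return rate_prot / rate_ref

# Toy example: binary decisions for ten individuals in two groups (hypothetical data).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(statistical_parity_difference(y_pred, group))  # -0.2: protected group approved less often
print(disparate_impact_ratio(y_pred, group))         # ~0.67: below the common 0.8 (four-fifths) threshold
```

A disparate impact ratio below 0.8 is the conventional "four-fifths rule" threshold for flagging potential adverse impact, while a statistical parity difference near zero indicates similar positive-decision rates across groups.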