The document surveys ways bias can arise in artificial intelligence systems and machine learning models. It cites examples including facial recognition systems that perform worse on dark-skinned women, sentiment analysis tools that score some religions more favorably than others, and risk assessment algorithms used in criminal justice that exhibit racial disparities. It then turns to definitions of fairness and bias in machine learning, noting that at least 21 definitions of fairness have been proposed and that bias can be introduced not only through training data but also during data handling and model selection.
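One widely used fairness definition among the many the document alludes to, demographic parity, can be sketched in a few lines. The function name and toy data below are illustrative, not taken from the source:

```python
# Illustrative sketch (not from the source): demographic parity asks
# that the rate of positive (favorable) predictions be similar across
# protected groups.

def positive_rate(predictions, group, value):
    """Fraction of positive predictions among members of one group."""
    members = [p for p, g in zip(predictions, group) if g == value]
    return sum(members) / len(members)

# Hypothetical model outputs (1 = favorable decision) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rate_a = positive_rate(preds, groups, "a")  # 3/4 = 0.75
rate_b = positive_rate(preds, groups, "b")  # 1/4 = 0.25
gap = abs(rate_a - rate_b)                  # 0.50: a large disparity
```

Other definitions in the literature (e.g. equalized odds) condition on the true outcome rather than comparing raw positive rates, which is one reason the definitions can conflict with one another.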