This document discusses machine learning algorithms used in criminal risk assessment. It notes that these algorithms can be biased, incorrectly ranking some groups as higher risk: for example, African American defendants are more likely to be scored as high risk, and to receive harsher sentences, than white defendants with similar criminal histories. The document questions how accurately an algorithm can determine how dangerous a person is, and whether such scores should be used to decide outcomes such as incarceration.
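The kind of disparity described above is often audited by comparing error rates across groups, in particular the false positive rate: how often people who do not reoffend are nevertheless labeled high risk. Below is a minimal sketch of that comparison in Python. The group labels, risk scores, threshold, and records are entirely synthetic and illustrative; they are not drawn from any real risk-assessment tool or dataset.

```python
# Sketch: comparing false positive rates across groups.
# All data is synthetic; group names "A"/"B" and the high-risk
# cutoff of 7 are illustrative assumptions, not real values.

def false_positive_rate(records, group, cutoff=7):
    """Share of non-reoffenders in `group` labeled high risk (score >= cutoff)."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for r in non_reoffenders if r["score"] >= cutoff)
    return flagged / len(non_reoffenders)

# Synthetic records: group, risk score (1-10), and observed reoffense.
records = [
    {"group": "A", "score": 8, "reoffended": False},
    {"group": "A", "score": 7, "reoffended": False},
    {"group": "A", "score": 3, "reoffended": False},
    {"group": "A", "score": 9, "reoffended": True},
    {"group": "B", "score": 8, "reoffended": False},
    {"group": "B", "score": 2, "reoffended": False},
    {"group": "B", "score": 4, "reoffended": False},
    {"group": "B", "score": 6, "reoffended": True},
]

for g in ("A", "B"):
    print(g, round(false_positive_rate(records, g), 3))
```

In this toy data, group A's false positive rate is twice group B's (2/3 vs. 1/3): a score with equal overall accuracy can still burden one group with far more wrongful high-risk labels, which is the core of the bias concern.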