This document summarizes a paper reviewing algorithmic bias in education. Algorithms used in educational technology can encode biases from their developers or from the surrounding society, producing discriminatory predictions for some groups of learners. The paper focuses on identifying which groups are affected and on how biases arise from the way variables are operationalized and the data used to train models. It reviews evidence that educational algorithms exhibit biases related to race, gender, nationality, and other attributes. The paper proposes a progression from unknown bias to known bias to fairness, and discusses the work needed to mitigate algorithmic bias in educational technology.
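As a rough illustration of what moving from unknown to known bias can involve in practice, the sketch below audits a hypothetical dropout-risk model by comparing its error rates across demographic groups. This is not a method from the paper; the group labels, scores, outcomes, and the 0.5 decision threshold are all invented for demonstration.

```python
# Illustrative sketch (not from the paper): auditing a hypothetical
# dropout-risk model for group-level error-rate gaps. The group labels,
# predicted scores, and outcomes below are invented for demonstration.

from collections import defaultdict

# (group, true_outcome, predicted_risk) triples for a toy evaluation set;
# true_outcome = 1 means the student actually dropped out.
records = [
    ("group_a", 1, 0.82), ("group_a", 0, 0.35), ("group_a", 0, 0.61),
    ("group_a", 1, 0.74), ("group_b", 1, 0.48), ("group_b", 0, 0.22),
    ("group_b", 1, 0.55), ("group_b", 0, 0.67), ("group_b", 1, 0.31),
]

THRESHOLD = 0.5  # assumed decision threshold for flagging a student

def error_rates(rows):
    """Return (false positive rate, false negative rate) for one group."""
    fp = fn = neg = pos = 0
    for _, y_true, score in rows:
        y_pred = int(score >= THRESHOLD)
        if y_true == 1:
            pos += 1
            fn += int(y_pred == 0)
        else:
            neg += 1
            fp += int(y_pred == 1)
    return (fp / neg if neg else float("nan"),
            fn / pos if pos else float("nan"))

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

# Reporting per-group error rates turns a previously unknown bias into a
# known one: large gaps between groups signal that mitigation is needed.
for group, rows in sorted(by_group.items()):
    fpr, fnr = error_rates(rows)
    print(f"{group}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```

Disaggregating errors by group in this way is one common auditing step; which attributes to disaggregate by, and which fairness criterion to apply afterward, depends on the educational context.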