We aren’t surprised by facial recognition at security checkpoints. But how do you feel about face-scanning toilet roll dispensers? What if they don’t just find criminals but try to detect “criminality”? Laws and policies almost always lag technology, so data scientists and machine learning practitioners form the first line of ethical defense. The argument in this talk is that for a system that classifies human beings to be ethical, it must consider the goals of the people affected by the system, not just the goals of its builders. This is not particularly convenient, but there are concrete ways to put goal-oriented design into practice. Doing so puts us in a better position to act ethically and to address problems of power and the reproduction of inequality.