Bias in algorithms does not only hurt performance or accuracy. We will discuss cases in which machine-learning-based decision support systems learned racial, gender, or other biases implicitly embedded in everyday data, and thus re-created or even amplified unfair, discriminatory, or non-inclusive behavior. These cases show that transparent processes and interpretability are not merely tools for understanding and sanity-checking the inner logic and reasoning of a machine learning system; they are crucial for building trust and for protecting our social infrastructure from the erosion caused by hidden unfair or unethical models. We will review metrics designed to measure algorithmic fairness and discuss their practical and theoretical limitations, reminding ourselves that ethical behavior must be assessed case by case, not only with technical tools but also with human empathy.
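As a concrete illustration of the kind of metric discussed here (the text does not name specific ones), the sketch below computes two widely used group fairness measures, the demographic parity difference and the equalized odds difference, on synthetic data. The function names, the binary group encoding, and the toy data are assumptions for illustration only, not a definitive audit procedure.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : binary predictions (0/1)
    group  : binary protected-attribute membership (0/1)
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates between groups."""
    gaps = []
    for label in (0, 1):  # label == 0 compares FPRs, label == 1 compares TPRs
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)

# Toy example: synthetic labels, group membership, and a noisy classifier.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
y_pred = (y_true + rng.random(1000) > 0.8).astype(int)

print("demographic parity difference:", demographic_parity_difference(y_pred, group))
print("equalized odds difference:   ", equalized_odds_difference(y_true, y_pred, group))
```

The binary group and label keep the sketch minimal; real audits typically involve multiple and intersectional groups, and, as the limitations discussed here suggest, different metrics can disagree on the same model, so no single number settles the question of fairness.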