The rising prevalence of algorithmic systems, such as e-recruitment platforms, raises new questions and concerns about fairness, accountability, transparency, and trust. There is widespread distrust of these systems due to their perceived lack of transparency and accountability, and such attitudes can be a significant bottleneck for their uptake in high-stakes decision-making scenarios. In AI, explanations are a common approach to increasing users' understanding of how algorithmic systems work and to engendering trust in them. In this talk, I presented our work in the ReEnTrust (Rebuilding and Enhancing Trust in Algorithms) project on users' perceptions of algorithmic systems and the role of explanations in trust building and trust repair. Our findings show that explanations provide a helpful foundation for participants to make sense of algorithms; however, explanations alone are not sufficient to overcome negative attitudes and engender trust. We identify important factors that extend our understanding of explanations and algorithmic decision-making systems, and we outline directions for future work on building trust in algorithms.