The reactionary state of the industry means that we quickly identify the ‘root cause’ as ‘human error’, an object to which blame can be attributed and shifted. Hindsight bias often leads us to confuse our personal narrative with the truth, an objective fact that we as investigators can never fully know. The poor state of self-reflection, limited human-factors knowledge, and ever-present resource constraints further incentivize this vicious pattern. The result is unnecessary and unhelpful assignment of blame, isolation of the engineers involved, and ultimately a culture of fear throughout the organization. Mistakes will always happen. Rather than failing fast and encouraging experimentation, the traditional process often discourages creativity and kills innovation.

By simply reacting to failures, the security industry has been overlooking valuable chances to understand and nurture ‘accidents’ and ‘mistakes’ as opportunities to proactively strengthen system resilience: expose the failures, build resilient systems, and develop an "applied security" model that minimizes the impact of failures. In this session we will discuss the role of ‘human error’, root cause, and resilience engineering in our industry, and how new techniques such as Chaos Engineering can make a difference. Security-focused Chaos Engineering proposes that the only way to understand this uncertainty is to confront it objectively by introducing controlled signals. We will cover key concepts in safety and resilience engineering, drawing on Sidney Dekker’s 30 years of research into airline accident investigations, and show how techniques such as Chaos Engineering are improving our ability to learn from incidents proactively, before they become destructive.
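To make "introducing controlled signals" concrete, here is a minimal, hypothetical sketch of a security chaos experiment. It is not taken from the session material; all names (`detect_open_ports`, `run_experiment`) and the toy firewall-config data are illustrative. The pattern is the standard one: verify the steady-state hypothesis, inject a controlled fault, observe whether the detection layer sees it, and roll back.

```python
# Hypothetical security chaos experiment: inject a simulated
# misconfiguration (a controlled signal) and verify that the
# detection layer observes it. All names here are illustrative.

def detect_open_ports(config):
    """Toy 'detection' layer: flag any port exposed to the world."""
    return [port for port, cidr in config.items() if cidr == "0.0.0.0/0"]

def run_experiment(config):
    """Steady state, inject, observe, roll back."""
    # Steady-state hypothesis: no findings before injection.
    assert detect_open_ports(config) == []
    config[22] = "0.0.0.0/0"       # controlled signal: SSH open to the world
    findings = detect_open_ports(config)
    del config[22]                 # always roll back the injected fault
    return findings

findings = run_experiment({443: "10.0.0.0/8"})
print(findings)  # a non-empty result means the injected signal was detected
```

A real experiment would target an actual control plane (cloud security groups, WAF rules, audit logging) and a real alerting pipeline, but the structure stays the same: if the injected signal goes undetected, the experiment has exposed a resilience gap before an attacker or an accident does.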