Millions of test cases executed thousands of times mean nothing when a catastrophic defect surfaces and threatens the value of the product. In software testing, what has not been tested is more important than what has been tested. However, with the continued adoption of agile and automation, the focus has shifted to what has been tested. In an agile environment, where the results of continuous integration are visible to everyone, it is easy to get fooled.
In this talk, I will explore how the validation we seek through testing is affected by fallacies and biases, and why a green build reported by continuous integration might not be good enough. I will demonstrate the limitations of acceptance criteria by showing examples of defects in open-source projects.
I will conclude with a discussion of how a context-driven approach can safeguard us from the inferences we may draw from a green build. If teams are not context-driven, all the benefits we see from adopting agile, automation, and continuous integration can be short-lived. Projects are not judged by the presence of automation or a green build; projects are judged by how they work in production. Remember the case of Ariane 501: the project was progressing well, and the team celebrated the first 36 seconds of the launch. However, the rocket was destroyed in the 37th second and the mission failed.
This paper is based on Nassim Nicholas Taleb's excellent book "The Black Swan" and relates its limitations of validation to software testing.