There is a recent and disturbing trend in the software quality space: people view themselves as "test automation engineers" or similar, and focus on creating large automation suites post hoc. These suites are normally generated verbatim from acceptance criteria and mapped directly onto UI-automation tests. The guiding principle appears to be that no bug shall ever reach production. That goal is noble in theory but destructive in practice, and worse, it distracts us from the realisation that software quality is about much more than testing.
In this talk, Andrew covers a number of other, often-overlooked elements of software quality such as code design itself, monitoring, logging, instrumentation, SRE, synthetic transactions and production verification tests. He looks at production error rates and how to assess what an acceptable error rate is, and covers measures such as mean time to detection (MTTD) and mean time to remediation (MTTR) as key metrics for the overall quality of a solution.
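As a concrete illustration of the two metrics named above, here is a minimal sketch of how MTTD and MTTR could be computed from incident timestamps. The `Incident` structure, its field names, and the choice to measure remediation from detection rather than from occurrence are illustrative assumptions, not definitions from the talk.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Incident:
    occurred_at: datetime   # when the fault actually began in production
    detected_at: datetime   # when monitoring or alerting first flagged it
    resolved_at: datetime   # when a fix or rollback restored service

def mttd_minutes(incidents: list[Incident]) -> float:
    """Mean time to detection: average gap between fault and first alert."""
    return mean((i.detected_at - i.occurred_at).total_seconds() / 60
                for i in incidents)

def mttr_minutes(incidents: list[Incident]) -> float:
    """Mean time to remediation: average gap between detection and fix.
    (Some teams measure from occurrence instead; pick one and be consistent.)"""
    return mean((i.resolved_at - i.detected_at).total_seconds() / 60
                for i in incidents)
```

A low MTTD suggests your monitoring, logging and synthetic transactions are doing their job; a low MTTR suggests your remediation path (rollback, redeploy, feature flags) is healthy. Together they often say more about real-world quality than the size of a pre-release test suite.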
• What does your business care about?
• How does your software enable (or hinder) that?
• How can you objectively measure whether your software is better today than it was yesterday?
• How are you incentivising your people?
• Manual testing
  • Exploratory testing should be a thing.
  • Regression testing should not be a thing.
  • Quality advocacy is a hat, not a role.
• Automated testing
  • This is where your regression suite lives.
  • The people who write the code write the tests (a minimal sketch follows this list).
  • A green master build is production-ready. Ship it.
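To ground "the people who write the code write the tests", here is a minimal sketch of a developer-owned regression test using pytest. The `apply_discount` function and its rules are hypothetical, purely to show the shape: the behaviour and its regression guard live side by side and both run on every build.

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical production code: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_happy_path():
    assert apply_discount(100.0, 25) == 75.0

def test_discount_rejects_out_of_range_percent():
    # A bug found once becomes a permanent, automated regression guard.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

When a suite like this is green on master, shipping becomes a mechanical step rather than a judgement call.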
IF YOU ONLY TAKE ONE THING AWAY

Understand that “quality” and “good enough” will mean different things at different life stages of your product.
• Evolve towards your goals.
  • Decide what to measure.
  • Develop a fitness function (see the sketch after this list).
  • Apply selective pressure.
• Evolve your goals themselves.
  • Update your metrics to reflect your new goals.
  • Update your fitness functions.
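A "fitness function" can be taken quite literally: code that scores the system against the metrics you decided to measure, plus a gate that applies selective pressure when the score drops. The sketch below is a minimal example; the weights, thresholds and metric names are illustrative assumptions and should themselves evolve as the goals do.

```python
def fitness(error_rate: float, mttd_minutes: float, mttr_minutes: float) -> float:
    """Score the system 0..1 (higher is fitter) against chosen metrics.
    The weights encode what the business currently cares about."""
    error_score = max(0.0, 1.0 - error_rate / 0.01)     # a 1% error rate scores 0
    mttd_score = max(0.0, 1.0 - mttd_minutes / 30.0)    # 30 min to detect scores 0
    mttr_score = max(0.0, 1.0 - mttr_minutes / 120.0)   # 2 h to remediate scores 0
    return 0.5 * error_score + 0.2 * mttd_score + 0.3 * mttr_score

def apply_selective_pressure(score: float, threshold: float = 0.7) -> None:
    """Gate a release (or page a team) when fitness drops below the bar."""
    if score < threshold:
        raise SystemExit(f"fitness {score:.2f} is below threshold {threshold}")
```

Evolving the goals then means updating the weights and thresholds, not rewriting the process.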