Andrew Harcourt - Good Enough Software (Evolution)

There exists a recent and disturbing trend in the software quality space for people to view themselves as "test automation engineers" or similar, and to focus on creating large automation suites post-hoc. These suites are normally generated verbatim from acceptance criteria and mapped directly to UI-automation tests. The guiding principle appears to be that no bug shall ever reach production. While this goal is noble in theory, it's destructive in practice. Worse, it distracts us from the realisation that software quality is about much more than testing.

In this talk, Andrew covers a number of other, often-overlooked elements of software quality such as code design itself, monitoring, logging, instrumentation, SRE, synthetic transactions and production verification tests. He looks at production error rates and how to assess what an acceptable error rate is, and covers measures such as mean time to detection (MTTD) and mean time to remediation (MTTR) as key metrics for the overall quality of a solution.
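
By way of illustration (not taken from the talk itself), the sketch below shows one way to compute a production error rate, mean time to detection and mean time to recovery from incident timestamps. The Incident structure, dates and request counts are made-up examples.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

@dataclass
class Incident:
    started: datetime    # when the failure began in production
    detected: datetime   # when monitoring or alerting noticed it
    resolved: datetime   # when normal service was restored

def error_rate(failed_requests: int, total_requests: int) -> float:
    """Share of production requests that failed over some window."""
    return failed_requests / total_requests if total_requests else 0.0

def mean_time_to_detection(incidents: list) -> timedelta:
    """MTTD: average gap between a failure starting and it being detected."""
    return timedelta(seconds=mean((i.detected - i.started).total_seconds() for i in incidents))

def mean_time_to_recovery(incidents: list) -> timedelta:
    """MTTR: average gap between a failure starting and it being resolved."""
    return timedelta(seconds=mean((i.resolved - i.started).total_seconds() for i in incidents))

# Entirely made-up data, purely to show the shape of the calculation.
incidents = [
    Incident(datetime(2017, 7, 1, 9, 0), datetime(2017, 7, 1, 9, 4), datetime(2017, 7, 1, 9, 30)),
    Incident(datetime(2017, 7, 8, 14, 0), datetime(2017, 7, 8, 14, 20), datetime(2017, 7, 8, 16, 0)),
]

print(f"Error rate: {error_rate(42, 100_000):.3%}")
print(f"MTTD: {mean_time_to_detection(incidents)}")
print(f"MTTR: {mean_time_to_recovery(incidents)}")
```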


Andrew Harcourt - Good Enough Software (Evolution)

  1. ANDREW HARCOURT Principal Consultant @uglybugger #EvolutionTW #ThoughtWorks
  2. DEFINING “GOOD ENOUGH”
  3. HOW FIT IS THE SOFTWARE FOR ITS PURPOSE?
  4. WHAT EVEN IS ITS PURPOSE?
  5. WHAT HAPPENS IF IT GOES BANG?
  6. MEASURES OF SOFTWARE QUALITY
  7. ARE WE SOLVING OUR CUSTOMERS’ PROBLEMS?
  8. WHAT IS OUR SOFTWARE’S AMENABILITY TO SCALING OF DEVELOPMENT TEAMS?
  9. HOW WELL IS OUR SOFTWARE PERFORMING IN PRODUCTION?
  10. IS OUR SOFTWARE FAST ENOUGH?
  11. DO WE HAVE AN ACCEPTABLE ERROR RATE?
  12. WHAT EVEN IS OUR ERROR RATE?
  13. WE’VE FOUND A BUG! … NOW WHAT?
  14. WHAT’S OUR MEAN TIME TO DETECTION OF A FAILURE?
  15. WHAT’S OUR MEAN TIME TO RECOVERY FROM A FAILURE?
  16. WHAT IS THE INFRASTRUCTURE COST TO OPERATE IT?
  17. WHAT IS THE HUMAN COST TO OPERATE IT?
  18. HOW FAST CAN WE GET FROM IDEA TO VALUE?
  19. WHAT ARE THE COST, TIME AND RISK TO ADD FUNCTIONALITY?
  20. HOW DO WE HONOUR OUR SOCIAL AND ETHICAL RESPONSIBILITIES?
  21. SOME PRINCIPLES TO (RE)ESTABLISH
  22. QUANTIFYING QUALITY
      • What does your business care about?
      • How does your software enable (or hinder) that?
      • How can you objectively measure whether your software is better today than it was yesterday? (A sketch of one approach follows after this list.)
      • How are you incentivising your people?
  23. ADDRESSING THE TESTING MYTH
      • Manual testing
        • Exploratory testing should be a thing.
        • Regression testing should not be a thing.
        • Quality advocacy is a hat, not a role.
      • Automated testing
        • This is where your regression suite lives.
        • The people who write the code write the tests.
        • A green master build is production-ready. Ship it.
      (A sketch of such tests follows after this list.)
  24. IF YOU ONLY TAKE ONE THING AWAY FROM THIS TALK…
  25. EVOLVE
      Understand that “quality” and “good enough” will mean different things at different life stages of an organisation.
      • Evolve towards your goals.
        • Decide what to measure.
        • Develop a fitness function.
        • Apply selective pressure.
      • Evolve your goals themselves.
        • Update your metrics to reflect your new goals.
        • Update your fitness functions.
      (A fitness-function sketch follows after this list.)
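
As a rough sketch of slide 22's question about objectively measuring whether the software is better today than it was yesterday, the snippet below compares two daily snapshots of a few plausible measures. The metric names and figures are assumptions for illustration, not recommendations from the talk.

```python
# Hypothetical day-over-day comparison. The metric names, numbers and the
# split into "lower is better" versus "higher is better" are illustrative only.
LOWER_IS_BETTER = {"error_rate", "mttr_minutes", "lead_time_days"}

yesterday = {"error_rate": 0.012, "mttr_minutes": 95, "lead_time_days": 9, "deploys_per_week": 3}
today     = {"error_rate": 0.009, "mttr_minutes": 80, "lead_time_days": 8, "deploys_per_week": 4}

def compare(before: dict, after: dict) -> None:
    """Report, metric by metric, whether the software is better today than yesterday."""
    for name in before:
        delta = after[name] - before[name]
        improved = delta < 0 if name in LOWER_IS_BETTER else delta > 0
        verdict = "better" if improved else ("worse" if delta else "unchanged")
        print(f"{name}: {before[name]} -> {after[name]} ({verdict})")

compare(yesterday, today)
```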
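
For slide 23, a minimal sketch of the kinds of automated checks it describes: a regression test written by the same people who wrote the code, and a post-deployment production verification test run as a synthetic transaction. The apply_discount function, the health endpoint and the use of the requests library are hypothetical examples, not from the talk.

```python
import requests  # third-party HTTP client, used here for a simple synthetic transaction

def apply_discount(price: float, percent: float) -> float:
    """Toy piece of production code under test (hypothetical)."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_regression():
    """Lives in the automated suite and runs on every build; a green master means this passed."""
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99

PRODUCTION_URL = "https://example.com"  # placeholder; point this at the real system

def test_production_verification():
    """Post-deployment check: a synthetic transaction against the live health endpoint."""
    response = requests.get(f"{PRODUCTION_URL}/health", timeout=5)
    assert response.status_code == 200
```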
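
For slide 25, a minimal sketch of a fitness function with selective pressure: a weighted score over whatever the organisation currently measures, and a gate that rejects a change that makes the score worse. The weights, measures and threshold are illustrative assumptions; the point of the slide is that they are expected to change as goals evolve.

```python
# Hypothetical fitness function: the weights express what the organisation cares
# about right now and are expected to be revisited as its goals evolve.
WEIGHTS = {
    "deploys_per_week": 1.0,   # higher is better
    "error_rate": -500.0,      # lower is better, hence the negative weight
    "mttr_minutes": -0.05,     # lower is better
}

def fitness(metrics: dict) -> float:
    """Collapse the current measurements into a single comparable score."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items() if name in WEIGHTS)

def passes_selective_pressure(candidate: dict, baseline: dict) -> bool:
    """Selective pressure: only accept a change that does not make the score worse."""
    return fitness(candidate) >= fitness(baseline)

baseline  = {"deploys_per_week": 3, "error_rate": 0.012, "mttr_minutes": 95}
candidate = {"deploys_per_week": 4, "error_rate": 0.009, "mttr_minutes": 80}

print("Ship it" if passes_selective_pressure(candidate, baseline) else "Hold the release")
```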
