
Design Testability



The anonymised slides from an old (but hopefully still relevant) talk making the case for placing a strategic focus on design testability. The material covers the technical, process and organisational considerations arising from such a strategy and is predominantly a summary of the ideas presented in Bret Pettichord's 2001 paper "Design for Testability", available here. The presentation argues that a high level of design testability is a critical success factor in achieving sustained agility.

Published in: Technology, Design

  1. Testability: An Underrated Factor In Long Term Success. By Richard Neeve <Date removed to protect client identity>
  2. Reference material. This presentation is predominantly a summary of the ideas presented in Bret Pettichord’s paper entitled “Design for Testability”. This can be found here.
  3. Outline • Background. • What is testability? • Why is it important? • How can it be achieved? • What are the risks? • What are the people/organisational issues? • Conclusions.
  4. Some background
  5. Why now? • The <removed> department has recently been shifting its focus to new products. • A chance to reflect on where the problems of testing <product names> started. • Biggest causal factor was the failure to properly design testability into the solutions from the outset. • This inhibited manual testing and the scheduling of automation work. • Consequently we incurred large (and recurring) effort costs which then gave rise to significant opportunity costs. • Looking ahead, it’s hard to see how placing so little emphasis on testability is compatible with sustaining an Agile approach. • The testability of new products needs to be addressed now whilst their designs are still open. The longer we leave it the harder it will be.
  6. I am aiming to • Help create a better appreciation for the topic. • Stimulate some discussion. • Promote the importance of testability in automation and agility. • Get testability on (or higher up) the agenda. • Contribute to the avoidance of previous mistakes (e.g. <product name>).
  7. I am not aiming to • Address the testability of documentation (e.g. requirements). This talk is about software testability. • Tell people where testability should be positioned in their overall list of design priorities. • Convince people to retro-fit testability into products for which the die is cast. If there are easy wins for mature products that’s great but my focus is on improving the situation for new products.
  8. What is testability?
  9. Part of a long wish list • Maintainability • Scalability • Extensibility • Flexibility • Correctness • Efficiency • Security • Reliability • Reusability • Portability • Testability • Usability • Accuracy • Consistency • Robustness • Commonality • Auditability • Modularity • Interoperability • Integrity • Completeness • Conciseness _____________________________ • All wholesome aims and there are undoubtedly others. • This list is both long and nested.
  10. Zoom in on ‘Testability’ and you get another wish list: • Controllability: the better we can control it, the more testing can be automated. • Availability: to test it, we have to get at it. • Simplicity: the simpler it is, the less there is to test. • Stability: the fewer the changes, the fewer the disruptions to testing. • Observability: what you see is what can be tested. • Understandability: the more information we have, the smarter we can test. • Operability: the better it works, the more efficiently it can be tested. • Decomposability: by controlling the scope of testing, we can isolate problems efficiently and perform smarter retesting. _____________________________ • Often referred to as ‘The Heuristics of Software Testability’. • Again we can go a level deeper. Let’s look at observability.
  11. Observability. “What you see is what can be tested” • Past system states and variables are visible or queriable. • Distinct output is generated for each input. • System states and variables are visible or queriable during execution. • All factors affecting the output are visible. • Incorrect output is easily identified. • Internal errors are automatically detected and reported through self-testing mechanisms. ___________________________ • Perhaps you have other things you would add. • We could have similarly expanded any one of the items in the previous table [NB: this type of material is available via Google].
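The bullets above can be made concrete with a small sketch. Everything here is hypothetical illustration (the `OrderBook` class, `snapshot` and `_check_invariants` names are mine, not from the talk): internal state is queriable on demand, and an internal error is detected by a self-testing mechanism even before it produces an externally visible failure.

```python
# Hypothetical sketch of observability aids: queriable internal state
# plus a self-checking invariant. Names are illustrative only.

class OrderBook:
    def __init__(self):
        self._orders = []
        self._total = 0          # cached sum, kept in step with _orders

    def add(self, amount):
        self._orders.append(amount)
        self._total += amount
        self._check_invariants()  # self-testing mechanism runs on every change

    def snapshot(self):
        """Expose internal state for tests and diagnostics (observability)."""
        return {"orders": list(self._orders), "total": self._total}

    def _check_invariants(self):
        # An internal error (cache out of step with the real data) is
        # detected here even if no externally visible output is wrong yet.
        assert self._total == sum(self._orders), "cached total corrupt"

book = OrderBook()
book.add(10)
book.add(5)
state = book.snapshot()
```

A test can then assert directly against `snapshot()` instead of inferring internal state from external behaviour.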
  12. Seeing the wood for the trees. These lists are useful because: • It’s important to remember that testability competes with other design aims for consideration. • They offer a feel for the breadth of the issues (limits complacency). • Everything in these lists is a potentially valid consideration. But… • There is a danger of paralysis through the lack of a clear focus. • Looking at the core essence of testability is a good starting point.
  13. The crux of the matter • Ultimately it’s all about control and visibility. Control: can we repeatedly and deterministically place the software in its various known states through the application of pre-determined inputs and other stimuli? Visibility: can all relevant data pertaining to internal state, inputs, outputs, resource usage etc. be obtained in the course of executing a test? • Sort these out and you’re well on the way. • Remember testability is a design issue so it needs to be addressed whilst the design is still open for discussion.
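The control and visibility questions above can be sketched in a few lines. This is an illustrative example of mine, not the talk's code: a test-only hook (`force_state`, a name I have invented) gives deterministic control over state without real I/O, and a transition log gives visibility of everything that happened during the test.

```python
# Hypothetical sketch of the two crux properties:
# control (force_state) and visibility (a transition log).

class Connection:
    STATES = {"closed", "connecting", "open", "error"}

    def __init__(self):
        self.state = "closed"
        self.log = []  # visibility: every state transition is recorded

    def _set_state(self, new_state):
        self.log.append((self.state, new_state))
        self.state = new_state

    def force_state(self, state):
        """Test-only control hook: place the object in a known state
        repeatably, without any real network activity."""
        if state not in self.STATES:
            raise ValueError(f"unknown state: {state}")
        self._set_state(state)

conn = Connection()
conn.force_state("open")
conn.force_state("error")   # e.g. set up an error-path test deterministically
```

A test can now drive the object into any known state on demand and verify the exact transition history afterwards.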
  14. Why is testability so important?
  15. • Can help to detect faults that don’t trigger an observable failure. • Can improve the efficiency of manual test execution, thereby aiding early defect detection. • Can significantly expedite fault investigation and fix verification. • Can increase the chances of getting automation and benchmarking off the ground by improving readiness and therefore ease of scheduling. • Critical to the ongoing productivity of any automation and benchmarking. ___________________________ • All of which contribute directly, substantially and continually towards a product’s agility and viability throughout its life time. • This is the basis of testability’s importance in long term success.
  16. How is testability achieved?
  17. Testability Aids (1) • Testable documentation (e.g. requirements) – Plenty to say. – But not today because: • Battle selection is very important in testing. • I promised not to at the start of this talk. • Scriptable installations/uninstallations – Can speed up environment setup (often a big and resented cost). – Needed to reset the environment in an unattended test run. • Support for different versions to co-exist on the same machine – Things like avoiding resource collisions (e.g. configurable ports). – Aids comparisons made for regression analyses. – Helps us to maximise our CAPEX.
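The "avoiding resource collisions" point can be illustrated with a short sketch. This is an assumption-laden example of mine (the `pick_free_port` and `make_config` helpers are invented for illustration): each installed version is given its own port and data directory so two versions can run side by side in an unattended test environment.

```python
# Hypothetical sketch: configurable ports and per-version data directories
# so two product versions can co-exist on one test machine.

import socket

def pick_free_port():
    """Ask the OS for a currently free TCP port instead of hard-coding one."""
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))   # port 0 = "give me any free port"
        return s.getsockname()[1]

def make_config(version, port):
    # Each installed version gets its own port and its own data directory,
    # so installs never collide on shared resources.
    return {"version": version, "port": port, "data_dir": f"/tmp/app-{version}"}

cfg_old = make_config("1.0", pick_free_port())
cfg_new = make_config("2.0", pick_free_port())
```

A scripted installation can then write these values into each version's configuration file before the test run starts.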
  18. Testability Aids (2) • Diagnostics – Provide a view on the code’s internal workings. – Expose defects that are not externally visible (e.g. corrupt data structures). – Types: Monitors, Assertions, Probes. – If in doubt, err on the side of verbosity (adjustable verbosity is very helpful). – Developers shouldn’t underestimate a tester’s ability/interest in using these. • Fault injection hooks – Particularly important for testing error-handling code. – Useful for efficiently re-creating faults that are difficult/inconvenient/impossible to re-create at will and in a repeatable way (e.g. loss of network connectivity). – Tools like ‘Holodeck’ allow the simulation of conditions like resource starvation. See here for more details. • Test points – Allow data/state to be changed/examined in a system thereby facilitating both diagnostic monitoring and state manipulation.
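A fault injection hook of the kind described above might look like the following minimal sketch (the `NetworkClient` class and `inject_fault` attribute are hypothetical names of mine, not from the talk or any real library): a condition that is hard to trigger for real, such as loss of connectivity, becomes trivially and repeatably triggerable from a test.

```python
# Hypothetical sketch of a fault-injection hook for exercising
# error-handling code on demand and repeatably.

class NetworkClient:
    def __init__(self):
        self.inject_fault = None  # test hook: name of a fault to simulate

    def fetch(self, url):
        if self.inject_fault == "connection_lost":
            # Simulated loss of connectivity: hard to trigger for real,
            # trivial to trigger through the hook.
            raise ConnectionError("injected: network connection lost")
        return f"response from {url}"  # placeholder for a real request

client = NetworkClient()
ok = client.fetch("http://example.test/")

client.inject_fault = "connection_lost"
try:
    client.fetch("http://example.test/")
    handled = False
except ConnectionError:
    handled = True  # the error-handling path was reached on demand
```

The hook costs a few lines in the product but turns an otherwise unschedulable error-path test into a deterministic one.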
  19. Some points to consider • Driving automated tests through programmatic test interfaces is typically (but not always) more productive than going via a GUI. • If you’re going to rely on custom testability aids, you must ensure the absolute correctness of their implementation. • It’s often easier to build test support directly into code upfront rather than trying to erect non-intrusive ‘test scaffolding’ later. • Over time, external test support code can sometimes be merged into the core product but the associated code churn presents risks. • Need to decide whether/how you’re going to mitigate the risks associated with testability aids (see next section). • Need to give careful thought to whether/how you will provide customers with knowledge about any test interfaces.
  20. What are the associated risks?
  21. From the frying pan to the fire? • Testability can play a pivotal role in mitigating technical and project risks. • But it brings some risks of its own. • Sometimes these are cited as a stalling tactic. • The risks are real, but are usually manageable. • They are often (but not always) seen to be more palatable than the risks associated with not progressing a testability strategy. • So what are these risks and are we just swapping one set of problems for another?
  22. Security risks • Testability features can provide back door access for hackers. • Of particular concern to a company like <removed>. • Potential mitigative measures include: – Analysing the scope for exploitation and making adjustments. – Removing test interfaces and hooks from production code (this creates a secondary risk because what you’ve tested and what you’ve released will be different). – Using encryption keys to lock testability features. – Trying to keep testability features secret and hoping they don’t get discovered!
  23. Privacy risks • Some logs may contain sensitive information. • If possible, the customer should approve the log content. • Could ‘cleanse’ logs prior to release.
  24. Performance risks • Assertions and heavy instrumentation can ultimately impact performance. • Need to understand whether any impact is material. • Profiling can help to objectively quantify a suspected impact. • Having configurable levels of instrumentation is a help. • Could strip out the instrumentation prior to release.
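The "configurable levels of instrumentation" point can be sketched with the standard library's logging levels. The `process` function and logger name below are illustrative assumptions of mine: heavy per-item tracing sits at DEBUG and is skipped cheaply in a release configuration, while a test run can dial the same build up to full verbosity.

```python
# Hypothetical sketch: adjustable instrumentation verbosity, so heavy
# diagnostics can be dialled down (or off) without changing the code.

import logging

def process(items, level=logging.WARNING):
    logger = logging.getLogger("product.diagnostics")
    logger.setLevel(level)
    total = 0
    for i, item in enumerate(items):
        logger.debug("step %d: adding %r", i, item)  # heavy per-item trace
        total += item
    logger.info("processed %d items", len(items))    # lightweight summary
    return total

result = process([1, 2, 3])                   # quiet: WARNING and above only
traced = process([1, 2, 3], logging.DEBUG)    # full trace for a test run
```

The behaviour of the code is identical at every level, which keeps the "strip it out before release" risk off the table.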
  25. Test veracity risks • Paradoxically, testability aids can be seen to undermine testing. • Purists argue that test legitimacy is compromised by injecting artificial mechanisms into the code and then using them to drive the system, often as a substitute for a human user (especially if those mechanisms are then removed just prior to release). • Pragmatists argue that the need to alter the product to facilitate efficient testing is almost inevitable and that wise testers will accept this and manage it rather than act as though it can be eliminated. • Level of risk depends on: – How functionally/anatomically intrusive your testability aids are. – The extent to which you rely upon them to make judgments. • I used to be a purist but now I’m a pragmatist.
  26. Maximising test veracity • It’s important to have a sharply defined boundary between the logic layer and the presentation layer by which it’s invoked. • Confidence comes from knowing that any test interfaces call the same internal interfaces as the presentation layer itself. • Avoid 11th hour removal of testability aids from code. • Manual tests should supplement automated tests (more later).
  27. Further Risks • Adding/maintaining/removing testability features causes code churn. • Customers may not like the idea (or may like it too much and abuse the features provided). • May encourage an over-reliance on test automation.
  28. Swapping one set of problems for another? • To a degree, yes. • The effort/risk/reward ratio and its acceptability is circumstantial. • But the benefits are often very compelling so it warrants consideration.
  29. People/Organisational Issues
  30. The role of manual testing • This is not about eliminating manual testing through automation. • Even with the best possible testability and automation, manual black-box testing is still needed because automated tests: – Typically fail to drive the system in a manner that exactly replicates that of a human user. – Cannot provide a substitute for the intuition, critical thinking and spontaneous inspiration enjoyed by a human tester. • In other words “test automation does not provide automatic manual testing”. This is often not appreciated.
  31. Key knowledge requirements • Some testers may have always treated the code as a black box. • To confidently engage in discussions on testability, a tester needs to understand aspects such as: – The system design and the geography of the implementing code. – What interfaces are already available. – Where testing hooks might be placed (for fault injection and monitoring). • Without this perspective a tester won’t be effective in the role of testability consultant so: – Developers need to be ready to provide the required teaching and read access to the code repository. – Testers need to be very specific about what they want.
  32. Team dynamics (need this → to get this): • Good dev/test relationship → co-operation in adding/maintaining testability aids and sharing knowledge of the code. • Strong team-wide commitment to enduring success → acknowledgement of how important testability is. • Testers to drive the discussions from the earliest possible stage → early identification of the testability issues whilst the design is still open. _____________________________ • Same needs as those for automation (unsurprisingly). • Testers should collaborate and trust, but verify.
  33. Some common inhibitors • Some test teams are very inflexible on the veracity issue. • Testers often fail to raise the subject or convey their needs (even if support might be trivial to implement). • Some test managers feel that the test report’s audience won’t understand/accept any qualifications in the results in cases where test veracity has been diluted in the interests of testability. • Co-operation from development teams varies enormously. • There is often a reluctance to incur what are probably new costs, e.g. testing testability features. • Some development teams create testability aids but then don’t tell the testers (worst case scenario).
  34. Conclusions
  35. • Testability can play a key role in achieving long term success. • As with defect detection, earlier => easier => cheaper. • It’s technically straightforward, but is typically overlooked, often at great cost (e.g. <product name>). • Seems like a big missed opportunity that we should consider taking in respect of new products. • Many of the issues are non-technical and we need to overcome those. • It won’t be easy but if we don’t address testability properly, our long term agility is at risk.
  36. Follow-up • I would like to hear other ideas and opinions. • I’m particularly interested in relevant <company name> anecdotes: – Painful examples of where testability is lacking. – Examples of where testability has been improved with noticeable results. – Examples of automation solutions that have been built upon existing testability features. – Examples of where testability strategies have failed. – Etc.
  37. Thanks for listening. Any questions or comments?