Using Test Triggers for Improved Defect Detection


Presentation at the 2003 IEEE Latin America Test Workshop in Natal, Brazil


  1. Using Test Triggers for Improved Defect Detection
     Charles P. Schultz, ASQ CSQE
     Global Software Group, United States - Florida
     IEEE LATW’03
  2. WHY DO YOU TEST?
     - To demonstrate that the device meets its specification? (quality)
     - To find defects (non-conformance to specification) in the device as soon as possible after they are introduced? (cost)
     - To demonstrate that the device is fit to proceed (handoff) to the next stage of its development or evolution? (readiness)
  3. WHAT ARE YOUR RESULTS?
     - Does the specification fail to adequately or fully describe the intended or required device?
     - Do many defects escape notice until very late in the development cycle, when they are the most costly?
     - Do some defects escape the entire product development cycle and reach the customer because testing does not check for them?
  4. SOME POSSIBLE REASONS...
     - Specifications are written mostly from the perspective of “normal” or “intended” use of the device
     - Device capabilities are added incrementally, so finding defects related to interactions gets “postponed” until late in the project
     - Testers will sometimes think of “tricky” or “unusual” scenarios, but have no method for doing this regularly and consistently
  5. SOME ADDITIONAL CLUES...
     - The attention of requirements authors, developers, and testers also tends to concentrate on the operation of the device once it has reached a static operating state
     - Diagramming the possible operating spaces of a device may give some clues as to why some defects are missed and what can be done about it...
  6. DEVICE OPERATING SPACE
     [Diagram: operating-space regions - Initialization, Activate, Post-Init, Deactivate, Pwrdn/Reset]
  7. DEVICE OPERATING SPACE
     [Same diagram, annotated: “Attention is focused here”]
  8. HOW CAN WE DO BETTER?
     - The Orthogonal Defect Classification system defines a set of categories to classify how defects are triggered, including the following:
       - CONFIGURATION
       - STARTUP
       - NORMAL
       - RESTART
  9. WHY IS THIS SIGNIFICANT?
     - These Defect Triggers map to the different regions of the Device Operating Space:
       - CONFIGURATION -> ALL
       - STARTUP -> Initialization
       - NORMAL -> Post-Initialization
       - RESTART -> Powerdown/Reset
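The trigger-to-region mapping can be written down as a simple lookup table. This is an illustrative sketch, not part of the original deck; the region names follow the operating-space diagram, and `regions_for` is a hypothetical helper.

```python
# Sketch: encode the ODC trigger -> operating-space mapping as a lookup table.
# Region names follow the Device Operating Space diagram in the deck.
REGIONS = ["Initialization", "Activate", "Post-Init", "Deactivate", "Pwrdn/Reset"]

TRIGGER_REGIONS = {
    "CONFIGURATION": REGIONS,           # CONFIGURATION applies to all regions
    "STARTUP": ["Initialization"],
    "NORMAL": ["Post-Init"],
    "RESTART": ["Pwrdn/Reset"],
}

def regions_for(trigger):
    """Return the operating-space regions a given defect trigger exercises."""
    return TRIGGER_REGIONS[trigger]
```

A table like this lets a test-management script tag each test with a trigger and derive which regions of the operating space it can exercise.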
  10. HOW DOES THIS APPLY TO ME?
     - Each test can be classified by which Defect Trigger(s) it uses.
     - Mapping the test “coverage” of each trigger will reveal opportunities to improve the test set’s:
       - coverage of the Operating Space
       - ability to find more defects
     - Defining these additional tests may also reveal missing or ambiguous requirements
  11. TRIGGER COVERAGE MAP EXAMPLE
     [Diagram: feature/trigger coverage map]
     - 33% untested (12 of 36 possibilities)
     - Boxes with low numbers may be undertested
  12. TEST EXAMPLE - CONFIGURATION
     SERIAL I/O
     - Device works correctly at default Baud Rate
     - Device cannot properly decode received data when operating at the lowest possible Baud Rate configuration
     - Is the device also being tested at high Baud Rates, or Baud Rates that use “unusual” clock multipliers?
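A CONFIGURATION-trigger test set for this example would sweep the full range of baud rate settings rather than only the default. The sketch below is a hypothetical harness shape (the `decode` stub stands in for the device's receive path; names and rates are illustrative, not from the slides).

```python
# Hypothetical CONFIGURATION-trigger harness: exercise the serial decode path
# at every supported baud rate, including the lowest and highest extremes.
DEFAULT_BAUD = 9600
BAUD_RATES = [300, 1200, 9600, 115200]  # sweep extremes, not just the default

def decode(byte, baud):
    # Stand-in for the device's receive/decode path; a real harness would
    # configure the hardware UART at `baud` and loop the byte back.
    return byte

def test_all_baud_rates():
    for baud in BAUD_RATES:
        received = decode(0x55, baud)
        assert received == 0x55, f"decode failed at {baud} baud"

test_all_baud_rates()
```

Sweeping the configuration range is what turns the single "works at default" check into a CONFIGURATION-trigger test that could have caught the lowest-baud-rate defect.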
  13. IS THAT ALL THERE IS?
     - Further examination of device behavior and Defect Trigger test results reveals that some faults cannot be revealed by test cases using one trigger
     - These faults may be masked by “repairs” that occur in different parts of the operating space
     - Such a fault can only be detected by operating in the space where the defect is revealed, and without the presence of the repair
  14. FAULT MASKING EXAMPLE
     - In order to detect Fault F, the device must go through a Restart, and then be operated during Initialization, before Repair R occurs
     [Diagram: F and R placed in the operating-space regions - Initialization, Activate, Post-Init, Deactivate, Pwrdn/Reset]
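The masking pattern can be shown with a toy device model (illustrative only, not from the slides): a restart corrupts a parameter (Fault F), and initialization silently restores it (Repair R). A single-trigger test that never restarts sees nothing; only the RESTART-then-STARTUP trigger pair observes the corruption before the repair runs.

```python
# Toy model of fault masking: Fault F appears on restart, Repair R runs
# during initialization, so only a RESTART + STARTUP trigger pair sees F.
class Device:
    def __init__(self):
        self.param = "good"

    def restart(self):
        self.param = "corrupt"   # Fault F: restart corrupts the parameter

    def initialize(self):
        observed = self.param    # what a STARTUP-trigger test can observe
        self.param = "good"      # Repair R: init silently restores the value
        return observed

d = Device()
# Single-trigger STARTUP test from a clean start: fault never present.
single_trigger_sees_fault = d.initialize() == "corrupt"

# Trigger pair: RESTART, then observe during STARTUP, before Repair R.
d.restart()
pair_sees_fault = d.initialize() == "corrupt"
```

The single-trigger check passes (the fault is masked), while the trigger-pair sequence exposes it.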
  15. WHY ELSE USE TRIGGER PAIRS?
     - Some faults are not masked, but their effects are not detectable until an operation occurs in a subsequent operating space
     - Such a fault can only be detected by operating in the space that causes the defect’s effect, and then observing behavior in the space where the effect can be detected
  16. DELAYED FAULT EFFECT EXAMPLE
     - Fault F creates Effect E, which corrupts an Initialization parameter. The device must be used normally and then operated during Initialization to trigger E
     [Diagram: F and E placed in the operating-space regions - Initialization, Activate, Post-Init, Deactivate, Pwrdn/Reset]
  17. ARE THESE DEFECTS SIGNIFICANT?
     - Non-Normal, single-trigger tests revealed 20% of the defects found in one set of features
     - Trigger pairs detected an additional 23% of defects, which went undetected by the single-trigger tests
     - Under-tested Feature/Trigger pairs represent an additional opportunity to find more defects and/or provide more confidence in the product’s quality
  18. ARE THESE DEFECTS SIGNIFICANT?
     - Non-Normal single-trigger tests were over twice as efficient as Normal tests at finding defects (0.22 defects/test versus 0.10)
     - Trigger-pair test efficiency was similar, finding 0.26 defects/test
     - A high density of defects indicates that more defects may still be present but undetected
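The efficiency figures above are simply defects found divided by tests run. The raw counts below are hypothetical; only the resulting ratios echo the slide's values.

```python
# Test efficiency = defects found / tests run. Counts here are hypothetical;
# the ratios match the 0.10 / 0.22 / 0.26 defects-per-test figures above.
def efficiency(defects_found, tests_run):
    return defects_found / tests_run

normal = efficiency(10, 100)       # Normal single-trigger tests: 0.10
non_normal = efficiency(11, 50)    # Non-Normal single-trigger tests: 0.22
pairs = efficiency(13, 50)         # Trigger-pair tests: 0.26
```

On these figures, Non-Normal tests find defects at more than twice the rate of Normal tests, which is the "over twice as efficient" claim on the slide.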
  19. RECIPE FOR DEFECT TRIGGER TESTS
     - Map tests to triggers and add tests to improve the coverage of under-utilized triggers for each functional area of the device
     - Do the same for trigger pairs
     - Create and use tests for new devices and features with the goal of providing good trigger coverage
     - Start where you can make the biggest impact
  20. RECIPE FOR DEFECT PREVENTION
     - Account for all triggers and trigger pairs in requirements and design in order to create more robust and higher-quality devices
     - Create design inspection checklists that incorporate each trigger and trigger pair
     - Also design and inspect for the possibility of masked behaviors and delayed effects in the different operating spaces
  21. WHEN CAN I START?
     - NOW!
  22. QUESTIONS?