Using Test Triggers for Improved Defect Detection

Presentation at the 2003 IEEE Latin America Test Workshop in Natal, Brazil

Transcript

  • 1. Using Test Triggers for Improved Defect Detection. Charles P. Schultz, ASQ CSQE, Global Software Group, United States - Florida. IEEE LATW’03
  • 2. WHY DO YOU TEST?
    • To demonstrate that the device meets its specification? - quality
    • To find defects (non-conformance to specification) in the device as soon as possible after they are introduced? - cost
    • To demonstrate that the device is fit to proceed (handoff) to the next stage of its development or evolution? - readiness
  • 3. WHAT ARE YOUR RESULTS?
    • Does the specification fail to adequately or fully describe the intended or required device?
    • Do many defects escape notice until very late in the development cycle, when they are the most costly?
    • Do some defects escape the entire product development cycle and reach the customer because they are not checked for by testing?
  • 4. SOME POSSIBLE REASONS...
    • Specifications are written mostly from the perspective of “normal” or “intended” use of the device
    • Device capabilities are added incrementally so finding defects related to interactions gets “postponed” until late in the project
    • Testers will sometimes think of “tricky” or “unusual” scenarios, but do not have a method for doing this regularly and consistently
  • 5. SOME ADDITIONAL CLUES...
    • The attention of requirements authors, developers, and testers also tends to concentrate on the operation of the device once it has reached a static operating state
    • Diagramming the possible operating spaces of a device may give some clues as to why some defects are missed and what can be done about it...
  • 6. DEVICE OPERATING SPACE [Diagram: operating-space regions Initialization, Activate, Post-Init, Deactivate, Pwrdn/Reset]
  • 7. DEVICE OPERATING SPACE [Same diagram, annotated “Attention is focused here”]
  • 8. HOW CAN WE DO BETTER?
    • The Orthogonal Defect Classification system defines a set of categories to classify how defects are triggered, which includes the following:
      • CONFIGURATION
      • STARTUP
      • NORMAL
      • RESTART
  • 9. WHY IS THIS SIGNIFICANT?
    • These Defect Triggers map to the different regions of the Device Operating Space (a code sketch of this mapping follows this slide):
      • CONFIGURATION → ALL regions
      • STARTUP → Initialization
      • NORMAL → Post-Initialization
      • RESTART → Powerdown/Reset
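The mapping above can be written down directly. Below is a minimal sketch in Python; the enum and dictionary names are invented for illustration and are not taken from the presentation.

```python
from enum import Enum, auto

class Region(Enum):
    INITIALIZATION = auto()
    ACTIVATE = auto()
    POST_INIT = auto()
    DEACTIVATE = auto()
    PWRDN_RESET = auto()

class Trigger(Enum):
    CONFIGURATION = auto()
    STARTUP = auto()
    NORMAL = auto()
    RESTART = auto()

# CONFIGURATION exercises every region; the other triggers map to one region each.
TRIGGER_REGIONS = {
    Trigger.CONFIGURATION: set(Region),
    Trigger.STARTUP: {Region.INITIALIZATION},
    Trigger.NORMAL: {Region.POST_INIT},
    Trigger.RESTART: {Region.PWRDN_RESET},
}
```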
  • 10. HOW DOES THIS APPLY TO ME?
    • Each test can be classified by which Defect Trigger(s) it uses (a tagging sketch follows this slide).
    • Mapping the test “coverage” of each trigger will reveal opportunities to improve the test set
      • coverage of the Operating Space
      • ability to find more defects
    • Defining these additional tests may also reveal missing or ambiguous requirements
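One lightweight way to do this classification is to tag each test with its trigger(s) and tally the tags. A minimal sketch, assuming Python test functions; the decorator, test names, and tags below are invented for illustration.

```python
from collections import Counter

TEST_TRIGGERS = {}                       # test name -> set of trigger names

def uses_triggers(*triggers):
    """Decorator recording which Defect Trigger(s) a test exercises."""
    def tag(test_func):
        TEST_TRIGGERS[test_func.__name__] = set(triggers)
        return test_func
    return tag

@uses_triggers("NORMAL")
def test_serial_echo_at_default_baud():
    ...                                  # test body omitted

@uses_triggers("STARTUP", "CONFIGURATION")
def test_boot_with_lowest_baud_configured():
    ...                                  # test body omitted

# Tally tests per trigger; a zero count flags a trigger no test exercises.
per_trigger = Counter(t for triggers in TEST_TRIGGERS.values() for t in triggers)
for trigger in ("CONFIGURATION", "STARTUP", "NORMAL", "RESTART"):
    print(f"{trigger:<14} {per_trigger[trigger]} test(s)")
```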
  • 11. TRIGGER COVERAGE MAP EXAMPLE
    • 33% untested (12 of 36 possibilities; the arithmetic is sketched after this slide)
    • Boxes with low numbers may be undertested
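As a worked illustration of the coverage map: the slide gives only the totals, so the nine functional areas and the particular counts below are assumptions, but 12 empty cells out of 9 features × 4 triggers = 36 reproduces the quoted 33%.

```python
TRIGGERS = ["CONFIGURATION", "STARTUP", "NORMAL", "RESTART"]
FEATURES = [f"area_{i}" for i in range(1, 10)]        # 9 functional areas (assumed)

# (feature, trigger) -> number of tests exercising that combination.
test_counts = {(f, t): 0 for f in FEATURES for t in TRIGGERS}
for f in FEATURES:
    test_counts[(f, "NORMAL")] = 5                    # "normal" use is well covered
for f in FEATURES[:5]:                                # non-Normal triggers: only 5 areas covered
    for t in ("CONFIGURATION", "STARTUP", "RESTART"):
        test_counts[(f, t)] = 1                       # low counts: possibly undertested

untested = [cell for cell, n in test_counts.items() if n == 0]
pct = 100 * len(untested) / len(test_counts)
print(f"{len(untested)} of {len(test_counts)} cells untested ({pct:.0f}%)")  # 12 of 36 (33%)
```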
  • 12. TEST EXAMPLE - CONFIGURATION
    • SERIAL I/O
    • Device works correctly at default Baud Rate
    • Device cannot properly decode received data when operating at the lowest possible Baud Rate configuration
    • Is the device also being tested at high Baud Rates, or at Baud Rates that use “unusual” clock multipliers? (A test sketch follows this slide.)
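A minimal sketch of what such CONFIGURATION-trigger tests might look like; the serial-device API used here (configure, send, receive) and the fake device standing in for real hardware are purely illustrative assumptions.

```python
STANDARD_RATES = [300, 1200, 9600, 115200]     # lowest, low, default, highest
UNUSUAL_RATES = [14400, 28800, 76800]          # rates needing "unusual" clock multipliers

class FakeSerialDevice:
    """Stand-in for the device under test; models the defect described on this slide."""
    def __init__(self):
        self.baud = 9600
        self._buf = b""
    def configure(self, baud):
        self.baud = baud
    def send(self, data):
        # Decoding fails at the lowest rate, as in the defect described above.
        self._buf = b"" if self.baud == 300 else data
    def receive(self):
        return self._buf

def decodes_correctly_at(device, baud):
    device.configure(baud=baud)
    device.send(b"\x55\xaa")                   # alternating-bit test pattern
    return device.receive() == b"\x55\xaa"

device = FakeSerialDevice()
failing = [b for b in STANDARD_RATES + UNUSUAL_RATES
           if not decodes_correctly_at(device, b)]
print("Baud rates with decode failures:", failing)   # [300]
```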
  • 13. IS THAT ALL THERE IS?
    • Further examination of device behavior and Defect Trigger test results reveals that some faults cannot be exposed by test cases that use only one trigger
    • These faults may be masked by “repairs” that occur in different parts of the operating space
    • Such a fault can only be detected by operating in the space where the defect is revealed, and without the presence of the repair
  • 14. FAULT MASKING EXAMPLE
    • In order to detect Fault F, the device must go through a Restart, and then be operated during Initialization, before Repair R occurs (a toy model of this masking pattern follows this slide)
    [Diagram: operating space showing Fault F and Repair R occurring in different regions]
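A runnable toy model of the masking pattern; the ToyDevice class, its parameter, and its phase methods are invented for illustration and are not taken from the presentation.

```python
class ToyDevice:
    """Toy model: a reset corrupts a stored parameter; the Activate phase silently repairs it."""
    def __init__(self):
        self.param = 42                 # some persistent configuration value
    def powerdown_reset(self):
        self.param = 0                  # Fault F: value corrupted on reset
    def initialize(self):
        pass                            # the corrupted value is live during Initialization
    def activate(self):
        self.param = 42                 # Repair R: defaults reloaded, masking Fault F

dut = ToyDevice()

# Single-trigger NORMAL test: by Post-Init the repair has already run, so F is masked.
dut.powerdown_reset(); dut.initialize(); dut.activate()
assert dut.param == 42                  # looks healthy; defect goes unnoticed

# RESTART + STARTUP trigger pair: observe during Initialization, before Repair R.
dut.powerdown_reset()
dut.initialize()
print("param during Initialization:", dut.param)   # 0 -> Fault F detected
```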
  • 15. WHY ELSE USE TRIGGER PAIRS?
    • Some faults are not masked, but their effects are not detectable until an operation occurs in a subsequent operating space
    • Such a fault can only be detected by operating in the space that causes the defect’s effect, and then observing behavior in the space where the effect can be detected
  • 16. DELAYED FAULT EFFECT EXAMPLE
    • Fault F creates Effect E, which corrupts an Initialization parameter. The device must be used normally and then operated during Initialization to trigger E (a toy model of this delayed-effect pattern follows this slide)
    [Diagram: operating space showing Fault F during normal operation and its Effect E appearing during Initialization]
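A companion toy model for the delayed-effect pattern; again, the class, the init_timeout_ms parameter, and the method names are invented purely for illustration.

```python
class ToyDevice2:
    """Toy model: normal operation corrupts an Initialization parameter (Effect E)."""
    def __init__(self):
        self.init_timeout_ms = 100
        self.ready = False
    def operate_normally(self):
        self.init_timeout_ms = 0        # Fault F: init parameter silently corrupted
    def initialize(self):
        # Effect E: with a zero timeout, Initialization cannot complete
        self.ready = self.init_timeout_ms > 0

dut = ToyDevice2()
dut.initialize()
assert dut.ready                        # NORMAL-only testing: everything looks fine

dut.operate_normally()                  # NORMAL trigger plants the corruption...
dut.initialize()                        # ...STARTUP trigger then exposes Effect E
print("device ready after re-initialization:", dut.ready)   # False -> defect detected
```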
  • 17. ARE THESE DEFECTS SIGNIFICANT?
    • Non-Normal, single trigger tests revealed 20% of the defects found in one set of features
    • Trigger pairs detected an additional 23% of defects, which went undetected by the single trigger tests
    • Under-tested Feature/Trigger pairs represent an additional opportunity to find more defects and/or provide more confidence in the product’s quality
  • 18. ARE THESE DEFECTS SIGNIFICANT?
    • Non-Normal single trigger tests were over twice as efficient as Normal tests at finding defects (0.22 defects/test versus 0.10)
    • Trigger pair test efficiency was similar, finding 0.26 defects/test
    • A high density of defects indicates that more defects may still be present but undetected
  • 19. RECIPE FOR DEFECT TRIGGER TESTS
    • Map tests to triggers and add tests to improve the coverage of under-utilized triggers for each functional area of the device
    • Do the same for trigger pairs (a pair-coverage sketch follows this slide)
    • Create and use tests for new devices and features with the goal to provide good trigger coverage
    • Start where you can make the biggest impact
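Extending the single-trigger coverage map to trigger pairs can be as simple as enumerating the ordered pairs and checking which ones have at least one test. A minimal sketch, assuming order matters as in the masking and delayed-effect examples; the exercised pairs listed here are invented.

```python
from itertools import permutations

TRIGGERS = ["CONFIGURATION", "STARTUP", "NORMAL", "RESTART"]

# Ordered (first trigger, second trigger) sequences already exercised by the test set.
exercised_pairs = {("RESTART", "STARTUP"), ("NORMAL", "STARTUP")}

all_pairs = list(permutations(TRIGGERS, 2))
missing = [pair for pair in all_pairs if pair not in exercised_pairs]

print(f"{len(missing)} of {len(all_pairs)} trigger pairs have no test yet:")
for first, second in missing:
    print(f"  {first} -> {second}")
```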
  • 20. RECIPE FOR DEFECT PREVENTION
    • Account for all triggers and trigger pairs in requirements and design in order to create more robust and higher quality devices
    • Create design inspection checklists that incorporate each trigger and trigger pair
    • Also design and inspect for the possibility of masked behaviors and delayed effects in the different operating spaces
  • 21. WHEN CAN I START?
    • NOW!
  • 22. QUESTIONS?