Testing Techniques and Strategy



1. Lecture 14: Testing Techniques and Strategies (SACHIN)
2. Today's Topics
   - Chapters 17 & 18 in SEPA 5/e
   - Testing Principles & Testability
   - Test Characteristics
   - Black-Box vs. White-Box Testing
   - Flow Graphs & Basis Path Testing
   - Testing & Integration Strategies
3. Software Testing
   - Opportunities for human error
     - Specifications, design, coding
     - Communication
   - "Testing is the ultimate review"
   - Can take 30-40% of total effort
   - For critical applications, can be 3 to 5 times all other efforts combined!
4. Testing Objectives
   - Execute a program with the intent of finding errors
   - Good tests have a high probability of discovering errors
   - Successful tests uncover errors
   - 'No errors found': not a good test!
   - Verifying functionality is a secondary goal
5. Testing Principles
   - Tests traceable to requirements
   - Tests planned before testing
   - Pareto principle: the majority of errors can be traced to a minority of components
   - Component testing first, then integrated testing
   - Exhaustive testing is not possible
   - Independent tests are more effective
6. Software Testability
   Characteristics that lead to testable software:
   - Operability
   - Observability
   - Controllability
   - Decomposability
   - Simplicity
   - Stability
   - Understandability
7. Operability
   - System has few bugs
   - No bugs block execution of tests
   - Product evolves in functional stages
   "The better it works, the more efficiently it can be tested."
8. Observability
   - Distinct output for each input
   - States & variables may be queried
   - Past states are logged
   - Factors affecting output are visible
   - Incorrect output easily identified
   - Internal errors reported
   - Source code accessible
   "What you see is what you test."
9. Controllability
   - All possible outputs can be generated by some input
   - All code executable by some input
   - States and variables directly controlled
   - Input/output consistent, structured
   - Tests can be specified, automated, and reproduced
   "The better we can control the software, the more the testing can be automated."
10. Decomposability
   - Independent modules
   - Modules can be tested separately
   "By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting."
11. Simplicity
   - Minimum feature set
   - Minimal architecture
   - Code simplicity
   "The less there is to test, the more quickly we can test it."
12. Stability
   - Changes made to the system:
     - are infrequent
     - are controlled
     - don't invalidate existing tests
   - Software recovers from failure
   "The fewer the changes, the fewer the disruptions to testing."
13. Understandability
   - Design is well understood
   - Dependencies are well understood
   - Design changes are communicated
   - Documentation is:
     - accessible
     - well organized
     - specific, detailed, and accurate
   "The more information we have, the smarter we will test."
14. Test Characteristics
   - A good test has a high probability of finding an error
   - A good test is not redundant
   - A good test should be "best of breed"
   - A good test is neither too simple nor too complex
15. Test Case Design
   - 'Black Box' Testing
     - Consider only inputs and outputs
   - 'White Box' or 'Glass Box' Testing
     - Also consider internal logic paths, program states, intermediate data structures, etc.
16. White-Box Testing
   - Guarantee that all independent paths have been tested
   - Exercise all conditions for 'true' and 'false'
   - Execute all loops at their boundary conditions
   - Exercise internal data structures
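As a sketch of what "exercise all conditions for true and false" means in practice, consider a small hypothetical function and a white-box test set that drives each condition both ways and runs the loop zero, one, and many times (the function and its test values are illustrative, not from the slides):

```python
def clamp_sum(values, limit):
    """Sum the non-negative values, clamping the result at `limit`."""
    total = 0
    for v in values:          # loop: exercise 0, 1, and many iterations
        if v < 0:             # condition A: needs a true and a false case
            continue
        total += v
    if total > limit:         # condition B: needs a true and a false case
        return limit
    return total

# White-box test set: every condition exercised both ways,
# loop exercised at its boundary iteration counts.
assert clamp_sum([], 10) == 0          # loop body never runs
assert clamp_sum([4], 10) == 4         # one iteration; A false, B false
assert clamp_sum([-2, 5], 10) == 5     # A true (negative value skipped)
assert clamp_sum([6, 7], 10) == 10     # many iterations; B true (clamped)
```

Note that a single "happy path" input would execute most of these lines but would leave the A-true and B-true branches untested, which is exactly the special-case code the next slide warns about.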
17. Why White-Box Testing?
   - More errors occur in 'special case' code that is infrequently executed
   - Control flow can't be predicted accurately in black-box testing
   - Typo errors can happen anywhere!
18. Basis Path Testing
   - White-box method [McCabe '76]
   - Analyze the procedural design
   - Define a basis set of execution paths
   - Test cases for the basis set execute every program statement at least once
19. Basis Path Testing [2]
   [Figure: Flow Graph: Representation of Structured Programming Constructs. From SEPA 5/e]
20. Cyclomatic Complexity
   V(G) = E - N + 2 = 4
   Independent paths:
   1: 1, 11
   2: 1, 2, 3, 4, 5, 10, 1, 11
   3: 1, 2, 3, 6, 8, 9, 10, 1, 11
   4: 1, 2, 3, 6, 7, 9, 10, 1, 11
   V(G) is an upper bound on the number of tests needed to ensure all code has been executed.
   [From SEPA 5/e]
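The formula V(G) = E - N + 2 can be checked mechanically. Since the flow-graph figure itself is not reproduced here, the edge list below is reconstructed from the four independent paths listed on this slide (an assumption, though any graph consistent with those paths gives the same count):

```python
# Flow-graph edges inferred from the four independent paths on the slide.
edges = {
    (1, 2), (1, 11),
    (2, 3),
    (3, 4), (3, 6),
    (4, 5), (5, 10),
    (6, 7), (6, 8),
    (7, 9), (8, 9),
    (9, 10), (10, 1),
}
nodes = {n for edge in edges for n in edge}   # nodes 1..11

def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a single connected flow graph."""
    return len(edges) - len(nodes) + 2

assert len(edges) == 13 and len(nodes) == 11
assert cyclomatic_complexity(edges, nodes) == 4   # matches the slide
```

With 13 edges and 11 nodes, V(G) = 13 - 11 + 2 = 4, agreeing with the four basis paths enumerated above.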
21. Black Box Testing
   - Focus on functional requirements
   - Incorrect / missing functions
   - Interface errors
   - Errors in external data access
   - Performance errors
   - Initialization and termination errors
22. Black Box Testing [2]
   - How is functional validity tested?
   - What classes of input will make good test cases?
   - Is the system sensitive to certain inputs?
   - How are data boundaries isolated?
23. Black Box Testing [3]
   - What data rates and volume can the system tolerate?
   - What effect will specific combinations of data have on system operation?
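The questions above about input classes and data boundaries are typically answered with equivalence partitioning and boundary-value analysis. A minimal sketch, using a hypothetical validator whose spec (integers 0..100 inclusive) is invented for illustration:

```python
def is_valid_percentage(x):
    """Hypothetical spec: accept only integers in the closed range 0..100."""
    return isinstance(x, int) and 0 <= x <= 100

# Equivalence classes: below range, in range, above range, wrong type.
# Boundary values: test at and just beyond each edge of the valid range.
assert not is_valid_percentage(-1)     # just below the lower boundary
assert is_valid_percentage(0)          # lower boundary
assert is_valid_percentage(50)         # interior of the valid class
assert is_valid_percentage(100)        # upper boundary
assert not is_valid_percentage(101)    # just above the upper boundary
assert not is_valid_percentage("50")   # invalid-type class
```

One representative per equivalence class, plus the values straddling each boundary, gives high error-finding probability without redundant cases, which is exactly the "good test" criterion from slide 14.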
24. Comparison Testing
   - Compare software versions
   - "Regression testing": finding the outputs that changed
   - Improvements vs. degradations
   - Net effect depends on the frequency and impact of degradations
   - When the error rate is low, a large corpus can be used
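The comparison-testing idea above can be sketched as running two versions over a shared corpus and flagging the inputs whose outputs changed; the two `version_*` functions here are placeholders standing in for real builds:

```python
def version_old(x):
    """Placeholder for the previous release."""
    return x * x

def version_new(x):
    """Placeholder for the candidate release (behavior changed for x < 0)."""
    return x * x if x >= 0 else -(x * x)

corpus = [-2, -1, 0, 1, 2, 3]

# The diff only tells us *where* outputs changed; each change must then be
# judged as an improvement or a degradation.
changed = [x for x in corpus if version_old(x) != version_new(x)]
assert changed == [-2, -1]
```

With a low error rate, the same loop scales to a large corpus cheaply, since only the changed outputs need human review.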
25. Generic Testing Strategies
   - Testing starts at the module level and moves "outward"
   - Different testing techniques are used at different times
   - Testing by developer(s) and by independent testers
   - Testing and debugging are separate activities
26. Verification and Validation
   - Verification: "Are we building the product right?"
   - Validation: "Are we building the right product?"
   - Achieved by life-cycle SQA activities, assessed by testing
   - "You can't create quality by testing."
27. Organization of Testing
   [Figure from SEPA 5/e]
28. Logarithmic Poisson Execution-Time Model
   With a sufficient fit, the model predicts the testing time required to reach an acceptable failure rate.
   [From SEPA 5/e]
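The slide does not reproduce the formula itself; the usual statement of the logarithmic Poisson execution-time model (the Musa-Okumoto form, as given in SEPA 5/e) expresses the cumulative number of failures expected after t units of execution time as:

```latex
f(t) = \frac{1}{p}\,\ln\!\left(l_0\, p\, t + 1\right)
```

where l_0 is the initial software failure intensity (failures per unit time) and p is the exponential reduction in failure intensity per failure encountered. Fitting l_0 and p to failure data collected during testing is what lets the model predict the testing time needed to reach a target failure rate.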
29. [Figure from SEPA 5/e]
30. Top-Down Integration
   PRO: Higher-level (logic) modules tested early
   CON: Lower-level (reusable) modules tested late
   [From SEPA 5/e]
31. Bottom-Up Integration
   PRO: Lower-level (reusable) modules tested early
   CON: Higher-level (logic) modules tested late
   [From SEPA 5/e]
32. Hybrid Approaches
   - Sandwich integration: a combination of top-down and bottom-up
   - Critical modules:
     - address several requirements
     - have a high level of control
     - are complex or error-prone
     - have definite performance requirements
   - Test critical modules ASAP!