Testing Techniques and Strategies

Transcript

  • 1. Lecture 14: Testing Techniques and Strategies SACHIN
  • 2. Today’s Topics
    • Chapters 17 & 18 in SEPA 5/e
    • Testing Principles & Testability
    • Test Characteristics
    • Black-Box vs. White-Box Testing
    • Flow Graphs & Basis Path Testing
    • Testing & Integration Strategies
  • 3. Software Testing
    • Opportunities for human error
      • Specifications, design, coding
      • Communication
    • “Testing is the ultimate review”
    • Can take 30-40% of total effort
    • For critical apps, can be 3 to 5 times all other efforts combined!
  • 4. Testing Objectives
    • Execute a program with the intent of finding errors
    • Good tests have a high probability of discovering errors
    • Successful tests uncover errors
    • ‘No errors found’: not a good test!
    • Verifying functionality is a secondary goal
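A quick illustration of these objectives (the function and its bug are hypothetical): a test aimed at the special case most likely to hide an error is a good test precisely because it uncovers the error, while a test that merely confirms the common case reveals nothing.

```python
# Hypothetical buggy function: it ignores the 400-year leap rule.
def is_leap_year(year):
    return year % 4 == 0 and year % 100 != 0  # bug: years like 2000 come out False

# A good test probes the special case with a high probability of error:
print(is_leap_year(2000))  # prints False, but the correct answer is True
```

A test of 2004 alone would pass and tell us nothing; the 2000 case is the "successful" test because it finds the defect.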
  • 5. Testing Principles
    • Tests traceable to requirements
    • Tests planned before testing
    • Pareto principle: majority of errors traced to minority of components
    • Component testing first, then integrated testing
    • Exhaustive testing is not possible
    • Independent tests: more effective
  • 6. Software Testability
    Characteristics that lead to testable software:
    • Operability
    • Observability
    • Controllability
    • Decomposability
    • Simplicity
    • Stability
    • Understandability
  • 7. Operability
    • System has few bugs
    • No bugs block execution of tests
    • Product evolves in functional stages
    The better it works, the more efficiently it can be tested
  • 8. Observability
    • Distinct output for each input
    • States & variables may be queried
    • Past states are logged
    • Factors affecting output are visible
    • Incorrect output easily identified
    • Internal errors reported
    • Source code accessible
    What you see is what you test
  • 9. Controllability
    • All possible outputs can be generated by some input
    • All code executable by some input
    • States, variables directly controlled
    • Input/output consistent, structured
    • Tests are specified, automated, and reproduced
    The better we can control the software, the more the testing can be automated
  • 10. Decomposability
    • Independent modules
    • Modules can be tested separately
    By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting
  • 11. Simplicity
    • Minimum feature set
    • Minimal architecture
    • Code simplicity
    The less there is to test, the more quickly we can test it
  • 12. Stability
    • Changes made to system:
      • are infrequent
      • are controlled
      • don’t invalidate existing tests
    • Software recovers from failure
    The fewer the changes, the fewer the disruptions to testing
  • 13. Understandability
    • Design is well-understood
    • Dependencies are well understood
    • Design changes are communicated
    • Documentation is:
      • accessible
      • well-organized
      • specific, detailed and accurate
    The more information we have, the smarter we will test
  • 14. Test Characteristics
    • A good test has a high probability of finding an error
    • A good test is not redundant
    • A good test should be “best of breed”
    • A good test is neither too simple nor too complex
  • 15. Test Case Design
    • ‘Black Box’ Testing
      • Consider only inputs and outputs
    • ‘White Box’ or ‘Glass Box’ Testing
      • Also consider internal logic paths, program states, intermediate data structures, etc.
  • 16. White-Box Testing
    • Guarantee that all independent paths have been tested
    • Exercise all conditions for ‘true’ and ‘false’
    • Execute all loops for boundary conditions
    • Exercise internal data structures
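A minimal white-box sketch (function and test values are illustrative): the tests are derived from the code's structure, exercising each condition for both true and false outcomes and hitting the boundary values.

```python
# Illustrative white-box test selection driven by the code's branches.
def clamp(x, lo, hi):
    if x < lo:        # condition 1
        return lo
    if x > hi:        # condition 2
        return hi
    return x

assert clamp(-5, 0, 10) == 0    # condition 1 true
assert clamp(15, 0, 10) == 10   # condition 1 false, condition 2 true
assert clamp(5, 0, 10) == 5     # both conditions false
assert clamp(0, 0, 10) == 0     # boundary: x == lo
assert clamp(10, 0, 10) == 10   # boundary: x == hi
```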
  • 17. Why White-Box Testing?
    • More errors in ‘special case’ code which is infrequently executed
    • Control flow can’t be predicted accurately in black-box testing
    • Typo errors can happen anywhere!
  • 18. Basis Path Testing
    • White-box method [McCabe ‘76]
    • Analyze procedural design
    • Define basis set of execution paths
    • Test cases for basis set execute every program statement at least once
  • 19. Basis Path Testing [2] Flow Graph: Representation of Structured Programming Constructs [From SEPA 5/e]
  • 20. Cyclomatic Complexity
    • V(G) = E - N + 2 = 4
    • Independent paths:
      • 1: 1, 11
      • 2: 1, 2, 3, 4, 5, 10, 1, 11
      • 3: 1, 2, 3, 6, 8, 9, 10, 1, 11
      • 4: 1, 2, 3, 6, 7, 9, 10, 1, 11
    • V(G): upper bound on the number of tests to ensure all code has been executed
    [From SEPA 5/e]
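The V(G) = E - N + 2 computation can be checked directly. The edge list below is reconstructed from the four independent paths listed on the slide, so it is an inference from the transcript rather than the original figure:

```python
# Flow-graph edges reconstructed from the slide's four independent paths.
edges = {(1, 2), (1, 11), (2, 3), (3, 4), (3, 6), (4, 5), (5, 10),
         (6, 7), (6, 8), (7, 9), (8, 9), (9, 10), (10, 1)}
nodes = {n for edge in edges for n in edge}

# Cyclomatic complexity: edges minus nodes plus 2.
v_g = len(edges) - len(nodes) + 2
print(v_g)  # 4, matching the slide: at most 4 basis-path tests needed
```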
  • 21. Black Box Testing
    • Focus on functional requirements
    • Incorrect / missing functions
    • Interface errors
    • Errors in external data access
    • Performance errors
    • Initialization and termination errors
  • 22. Black Box Testing [2]
    • How is functional validity tested?
    • What classes of input will make good test cases?
    • Is the system sensitive to certain inputs?
    • How are data boundaries isolated?
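The standard black-box answers to these questions are equivalence partitioning and boundary-value analysis. A sketch for a hypothetical input with a valid range of 0 to 100: pick one representative per equivalence class, plus the values on and just beyond each boundary.

```python
# Hypothetical validator for a percentage input (valid range 0..100).
def is_valid_percentage(p):
    return 0 <= p <= 100

# One representative per equivalence class, plus boundary values:
cases = {
    -1: False,    # below-range class (lower boundary - 1)
    0: True,      # lower boundary
    50: True,     # interior of the valid class
    100: True,    # upper boundary
    101: False,   # above-range class (upper boundary + 1)
}
for value, expected in cases.items():
    assert is_valid_percentage(value) == expected
```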
  • 23. Black Box Testing [3]
    • What data rates and volume can the system tolerate?
    • What effect will specific combinations of data have on system operation?
  • 24. Comparison Testing
    • Compare software versions
    • “Regression testing”: finding the outputs that changed
    • Improvements vs. degradations
    • Net effect depends on frequency and impact of degradations
    • When error rate is low, a large corpus can be used
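A minimal comparison-testing sketch (both versions and the corpus are hypothetical): run the same inputs through the old and new versions and collect every input whose output changed, for later classification as improvement or degradation.

```python
# Old version: plain ascending sort.
def sort_v1(xs):
    return sorted(xs)

# "New" version with a changed sort key, to make the comparison visible.
def sort_v2(xs):
    return sorted(xs, key=abs)

corpus = [[3, 1, 2], [], [5, 5], [-2, 1]]
changed = [xs for xs in corpus if sort_v1(xs) != sort_v2(xs)]
print(changed)  # [[-2, 1]]: the one input whose output differs between versions
```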
  • 25. Generic Testing Strategies
    • Testing starts at module level and moves “outward”
    • Different testing techniques used at different times
    • Testing by developer(s) and independent testers
    • Testing and debugging are separate activities
  • 26. Verification and Validation
    • Verification
      • “Are we building the product right?”
    • Validation
      • “Are we building the right product?”
    • Achieved by life-cycle SQA activities, assessed by testing
    • “You can’t create quality by testing”
  • 27. Organization of Testing [From SEPA 5/e]
  • 28. Logarithmic Poisson execution-time model: with a sufficient fit, the model predicts the testing time required to reach an acceptable failure rate [From SEPA 5/e]
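In its Musa-Okumoto form, the model gives the failure intensity as l(t) = l0 / (l0 * p * t + 1), which can be inverted to estimate the execution time needed to reach a target failure rate. The parameter values below are assumed for illustration, not taken from the slide:

```python
# Logarithmic Poisson execution-time model (Musa-Okumoto form).
l0 = 10.0   # assumed initial failure intensity (failures per CPU-hour)
p = 0.05    # assumed intensity-reduction parameter

def failure_intensity(t):
    # Failure intensity after t units of execution time.
    return l0 / (l0 * p * t + 1)

# Invert the model: execution time needed to reach a target failure rate.
target = 0.5
t_needed = (l0 / target - 1) / (l0 * p)
print(t_needed)  # 38.0 CPU-hours under these assumed parameters
```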
  • 29. [From SEPA 5/e]
  • 30. Top-Down Integration. PRO: Higher-level (logic) modules tested early. CON: Lower-level (reusable) modules tested late [From SEPA 5/e]
  • 31. Bottom-Up Integration. PRO: Lower-level (reusable) modules tested early. CON: Higher-level (logic) modules tested late [From SEPA 5/e]
  • 32. Hybrid Approaches
    • Sandwich Integration: combination of top-down and bottom-up
    • Critical Modules
      • address several requirements
      • high level of control
      • complex or error prone
      • definite performance requirements
    • Test Critical Modules ASAP!
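The mechanism that lets higher-level (and critical) modules be tested before their subordinates exist is the stub: a stand-in that returns canned answers. A sketch with hypothetical module names:

```python
# Stub standing in for a not-yet-integrated lower-level lookup module.
def tax_rate_stub(region):
    return 0.10  # canned answer; the real module would consult rate tables

# Higher-level control logic under test.
def price_with_tax(amount, region, rate_lookup):
    return round(amount * (1 + rate_lookup(region)), 2)

# The critical control path is exercised before the real lookup exists:
assert price_with_tax(100.0, "EU", tax_rate_stub) == 110.0
```

Bottom-up integration uses the mirror-image device, a driver that calls a finished lower-level module in place of its not-yet-written caller.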
