Testing Techniques and Strategy
    Testing Techniques and Strategy: Presentation Transcript

    • Lecture 14: Testing Techniques and Strategies SACHIN
    • Today’s Topics
      • Chapters 17 & 18 in SEPA 5/e
      • Testing Principles & Testability
      • Test Characteristics
      • Black-Box vs. White-Box Testing
      • Flow Graphs & Basis Path Testing
      • Testing & Integration Strategies
    • Software Testing
      • Opportunities for human error
        • Specifications, design, coding
        • Communication
      • “Testing is the ultimate review”
      • Can take 30-40% of total effort
      • For critical apps, can be 3 to 5 times all other efforts combined!
    • Testing Objectives
      • Execute a program with the intent of finding errors
      • Good tests have a high probability of discovering errors
      • Successful tests uncover errors
      • ‘No errors found’: not a good test!
      • Verifying functionality is a secondary goal
    • Testing Principles
      • Tests traceable to requirements
      • Tests planned before testing
      • Pareto principle: majority of errors traced to minority of components
      • Component testing first, then integrated testing
      • Exhaustive testing is not possible
      • Independent tests: more effective
    • Software Testability
      Characteristics that lead to testable software:
      • Operability
      • Observability
      • Controllability
      • Decomposability
      • Simplicity
      • Stability
      • Understandability
    • Operability
      • System has few bugs
      • No bugs block execution of tests
      • Product evolves in functional stages
      The better it works, the more efficiently it can be tested
    • Observability
      • Distinct output for each input
      • States & variables may be queried
      • Past states are logged
      • Factors affecting output are visible
      • Incorrect output easily identified
      • Internal errors reported
      • Source code accessible
      What you see is what you test
    • Controllability
      • All possible outputs can be generated by some input
      • All code executable by some input
      • States, variables directly controlled
      • Input/output consistent, structured
      • Tests are specified, automated, and reproduced
      The better we can control the software, the more the testing can be automated
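A minimal sketch of the “specified, automated, and reproduced” idea, in the style of a pytest test file. The function under test (`discount`) is hypothetical, invented here for illustration:

```python
def discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Specified: each case names its intent.  Automated and reproducible:
# running the suite (e.g. with pytest) repeats exactly the same checks.
def test_typical_discount():
    assert discount(200.0, 25) == 150.0

def test_zero_discount_is_identity():
    assert discount(99.0, 0) == 99.0

def test_invalid_percent_is_rejected():
    try:
        discount(100.0, 150)
    except ValueError:
        pass  # the controlled input drove the error path, as specified
    else:
        assert False, "expected ValueError"
```

Because the inputs fully determine the paths taken, the same suite can be rerun unchanged after every modification.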
    • Decomposability
      • Independent modules
      • Modules can be tested separately
      By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting
    • Simplicity
      • Minimum feature set
      • Minimal architecture
      • Code simplicity
      The less there is to test, the more quickly we can test it
    • Stability
      • Changes made to system:
        • are infrequent
        • are controlled
        • don’t invalidate existing tests
      • Software recovers from failure
      The fewer the changes, the fewer the disruptions to testing
    • Understandability
      • Design is well-understood
      • Dependencies are well understood
      • Design changes are communicated
      • Documentation is:
        • accessible
        • well-organized
        • specific, detailed and accurate
      The more information we have, the smarter we will test
    • Test Characteristics
      • Good test has a high probability of finding an error
      • Good test is not redundant
      • A good test should be “best of breed”
      • A good test is neither too simple nor too complex
    • Test Case Design
      • ‘Black Box’ Testing
        • Consider only inputs and outputs
      • ‘White Box’ or ‘Glass Box’ Testing
        • Also consider internal logic paths, program states, intermediate data structures, etc.
    • White-Box Testing
      • Guarantee that all independent paths have been tested
      • Exercise all conditions for ‘true’ and ‘false’
      • Execute all loops for boundary conditions
      • Exercise internal data structures
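The white-box goals above can be made concrete with a tiny example; the function is hypothetical, and the cases are chosen from the code's structure rather than its specification:

```python
def first_negative_index(values):
    """Hypothetical function: index of the first negative value, or -1."""
    for i, v in enumerate(values):
        if v < 0:                 # exercise this condition both true and false
            return i
    return -1

# Cases derived from the code's internal structure:
assert first_negative_index([]) == -1          # loop boundary: zero iterations
assert first_negative_index([-5]) == 0         # loop boundary: one iteration, condition true
assert first_negative_index([3, 7]) == -1      # condition false on every pass; fall-through path
assert first_negative_index([3, -1, -2]) == 1  # true after false; early-return path
```

Note that the empty-list and single-element cases target loop boundaries, while the others force each branch outcome, covering every independent path through the function.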
    • Why White-Box Testing?
      • Errors cluster in ‘special-case’ code that is executed infrequently
      • Control flow cannot be predicted accurately by black-box testing alone
      • Typographical errors can happen anywhere!
    • Basis Path Testing
      • White-box method [McCabe ‘76]
      • Analyze procedural design
      • Define basis set of execution paths
      • Test cases for basis set execute every program statement at least once
    • Basis Path Testing [2]
      [Figure: flow-graph representations of the structured programming constructs, from SEPA 5/e]
    • Cyclomatic Complexity
      • V(G) = E - N + 2 = 4
      • Independent paths:
        • 1: 1, 11
        • 2: 1, 2, 3, 4, 5, 10, 1, 11
        • 3: 1, 2, 3, 6, 8, 9, 10, 1, 11
        • 4: 1, 2, 3, 6, 7, 9, 10, 1, 11
      • V(G): upper bound on number of tests to ensure all code has been executed [From SEPA 5/e]
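The V(G) = E - N + 2 computation can be checked mechanically. A small sketch, with the edge list reconstructed from the four independent paths listed above:

```python
def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a single connected flow graph."""
    nodes = {n for edge in edges for n in edge}   # N = distinct nodes
    return len(edges) - len(nodes) + 2            # E = len(edges)

# Edges recovered from the slide's four independent paths
# (node numbers as in the flow graph above):
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 10), (10, 1), (1, 11),
         (3, 6), (6, 8), (8, 9), (9, 10), (6, 7), (7, 9)]
print(cyclomatic_complexity(edges))  # prints 4: at most four basis-path tests needed
```

With 13 edges and 11 nodes, V(G) = 13 - 11 + 2 = 4, matching the four independent paths on the slide.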
    • Black Box Testing
      • Focus on functional requirements
      • Incorrect / missing functions
      • Interface errors
      • Errors in external data access
      • Performance errors
      • Initialization and termination errors
    • Black Box Testing [2]
      • How is functional validity tested?
      • What classes of input will make good test cases?
      • Is the system sensitive to certain inputs?
      • How are data boundaries isolated?
    • Black Box Testing [3]
      • What data rates and volume can the system tolerate?
      • What effect will specific combinations of data have on system operation?
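The “classes of input” and “data boundaries” questions above are usually answered with equivalence partitioning plus boundary-value analysis. A sketch against a hypothetical specification (the `grade` function and its ranges are invented for illustration):

```python
def grade(score):
    """Hypothetical spec: 0-59 fail, 60-79 pass, 80-100 distinction."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score >= 80:
        return "distinction"
    if score >= 60:
        return "pass"
    return "fail"

# One representative per equivalence class of input...
assert grade(30) == "fail"
assert grade(70) == "pass"
assert grade(90) == "distinction"

# ...plus values sitting exactly on each data boundary, where errors cluster:
for score, expected in [(0, "fail"), (59, "fail"), (60, "pass"),
                        (79, "pass"), (80, "distinction"), (100, "distinction")]:
    assert grade(score) == expected
```

Nine cases cover three classes and six boundaries, purely from the specification and without looking at the code, which is the black-box discipline.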
    • Comparison Testing
      • Compare software versions
      • “Regression testing”: finding the outputs that changed
      • Improvements vs. degradations
      • Net effect depends on frequency and impact of degradations
      • When error rate is low, a large corpus can be used
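Comparison testing as described above amounts to diffing two versions over a shared corpus. A sketch, with hypothetical old/new versions that differ in rounding behaviour:

```python
def compare_versions(corpus, old_version, new_version):
    """Run both versions on the same corpus; return inputs whose output changed."""
    return [(x, old_version(x), new_version(x))
            for x in corpus
            if old_version(x) != new_version(x)]

# Illustrative versions: v2 changes the rounding rule for half-values.
old = lambda x: round(x)        # Python's round-half-to-even
new = lambda x: int(x + 0.5)    # round-half-up (for non-negative x)

changed = compare_versions([0.5, 1.5, 2.3, 2.5], old, new)
# Each changed output must then be classified by hand (or by an oracle)
# as an improvement or a degradation; the net effect depends on how
# frequent and how severe the degradations are.
```

When the per-item change rate is low, a large corpus stays cheap to triage, since only the changed outputs need human review.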
    • Generic Testing Strategies
      • Testing starts at module level and moves “outward”
      • Different testing techniques used at different times
      • Testing by developer(s) and independent testers
      • Testing and debugging are separate activities
    • Verification and Validation
      • Verification
        • “Are we building the product right?”
      • Validation
        • “Are we building the right product?”
      • Achieved by life-cycle SQA activities, assessed by testing
      • “You can’t create quality by testing”
    • Organization of Testing [From SEPA 5/e]
    • Logarithmic Poisson execution-time model: given a sufficient fit, the model predicts the testing time required to reach an acceptable failure rate [From SEPA 5/e]
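Assuming the standard logarithmic Poisson (Musa-Okumoto) form, with cumulative failures f(t) = (1/p)·ln(l0·p·t + 1) and failure intensity l(t) = l0/(l0·p·t + 1), the slide's prediction can be sketched as below. The parameter values are illustrative, not taken from the slide:

```python
import math

def expected_failures(t, l0, p):
    """Cumulative expected failures by execution time t: (1/p) * ln(l0*p*t + 1)."""
    return (1.0 / p) * math.log(l0 * p * t + 1.0)

def time_to_reach(target_intensity, l0, p):
    """Execution time until intensity l0/(l0*p*t + 1) falls to the target.

    Solving l0/(l0*p*t + 1) = target for t.
    """
    return (l0 / target_intensity - 1.0) / (l0 * p)

# Illustrative (assumed) parameters: initial intensity l0 = 10 failures
# per CPU-hour, exponential reduction parameter p = 0.05.
t = time_to_reach(0.5, l0=10.0, p=0.05)  # testing hours to reach 0.5 failures/hr
```

Fitting l0 and p to failure data observed during testing is what "sufficient fit" refers to; once fitted, the inverse formula gives the remaining testing time.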
    • Top-Down Integration [From SEPA 5/e]
      • PRO: Higher-level (logic) modules tested early
      • CON: Lower-level (reusable) modules tested late
    • Bottom-Up Integration [From SEPA 5/e]
      • PRO: Lower-level (reusable) modules tested early
      • CON: Higher-level (logic) modules tested late
    • Hybrid Approaches
      • Sandwich Integration: combination of top-down and bottom-up
      • Critical Modules
        • address several requirements
        • high level of control
        • complex or error prone
        • definite performance requirements
      • Test Critical Modules ASAP!