Software Testing

  • Speaker notes
  • Testing a "big-bang" integration is about as hopeless as doing one. ADTs or object classes are ideal subsystems to accumulate. Input sequences throw the software into various "modes" in which its behavior may differ radically. Interface checking is difficult and requires detailed knowledge of the system; it isn't usually attempted.
  • System testing might as well be done by third parties -- the understanding has been left with the developers (who may be gone). Fault-based tests are like looking for a needle in a haystack. Coverage measurement may be impossible; only the simplest (statement coverage?) is feasible. The problem is only partly tool limitations: people don't have the analysis time.
  • The UNIX `profile' command does module coverage. The current buzzword for "mode-covering sequence" is "use case." Using an environment simulation coupled to the system under test is dangerous, because the two may "accommodate" each other in a way that the real world will not do for the system. Test scripts ("automated testing") provide the bookkeeping.
  • Special values testing is perhaps the best at failure finding, but of little use in gaining confidence. The usage weighting may be different for each user!
  • A practical histogram may have about 100 input classes.
  • In practice, nothing like a real profile is ever available. But the concept of a "usage spike" explains why failures go unobserved during test, then appear magically after release.
  • Users can supply rough histogram information, which may be inaccurate and may differ between users. The "novice" vs. "expert" user distinction is particularly important; their profiles differ (and the expert profile is typically the one used for test), which explains why novices so easily `break' software.
  • "Systematic" in measurement means errors that do not cancel on average; random errors do cancel. Pseudorandom number generators are often really lousy, particularly when a few bits are extracted (e.g., to model a coin toss). Don't just use the one in your programming language library if you care. What, for example, would constitute a "random C program"?
  • The example is unrealistic in many ways: there is a user profile, the input is numeric, and there is an effective oracle. With this test, the confidence in an MTTF of 100000 runs is 1%; the confidence in an MTTF of 1000 runs is 63%.
  • The work on comparison of systematic and random testing is almost all theoretical. How could an experiment be done? The stopping rule is simply that the reliability goal has been reached with high enough confidence.
  • The regression testing problem is much easier in principle than the general testing problem, if it is assumed that the existing testset is adequate. It usually isn't. A regression testset (selected from the existing one) is called safe if it includes all the tests whose outcome might differ. Finding new tests could use any of the usual methods.
  • Dependency analysis has other uses that have provided some of the technology: optimization, parallelization, etc. Dynamic methods are more precise than static, but also more expensive.
  • Example: Buying a coverage analyzer. The closest one gets to a specification may be a user's manual. Examples of standards with acceptance tests: protocols; the POSIX standard.
  • Object libraries are one of the main features of O-O languages, and have given hope for actual reuse. COTS is still little more than a buzzword. ("Reuse" is another.) Specification seems to require some kind of formal language. Component quality is mostly process-defined. More about quality (in terms of reliability) later.
  • Transcript

    • 1. Software Testing OMSE 535 Lecture 8 Manny Gatlin
    • 2. Overview
      • Integration testing
      • System testing
      • Operational testing
      • Regression testing
      • Component/package testing
    • 3. Integration Testing
      • A Definition
        • Testing of functional interactions at interfaces
        • Occurs between unit and system testing
      • Purpose
        • Ensure components work together in a subsystem/system
    • 4. When to Integration Test
      • Unit testing is at 0% integration
      • System testing is at 100% integration
      • General approach
        • Start integration testing after unit testing
        • Integration testing should be a continuous approach
          • “Big-Bang” integration and testing is a futile exercise
    • 5. Integration Testing Approaches
      • Subsystems
        • Accumulate slowly into natural units
        • Tests consisting of input sequences are needed
      • Check interfaces
        • Local instrumentation is needed
        • Assertions
          • Modules check inputs
          • Callers check returned values
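      The interface checks above can be written as executable assertions. A minimal Python sketch, assuming a hypothetical cube_root module (the names and tolerances are illustrative, not from the lecture):

        import math

        def cube_root(x):
            # The module checks its own inputs ("modules check inputs").
            assert -10000.0 <= x <= 10000.0, "cube_root: input out of range"
            return math.copysign(abs(x) ** (1.0 / 3.0), x)

        def caller(x):
            r = cube_root(x)
            # The caller checks the returned value ("callers check returned values").
            assert abs(r ** 3 - x) <= 0.00001, "cube_root: result violates interface contract"
            return r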
    • 6. System Testing
      • Driven by functional requirements
        • Use requirements document
        • User manual
      • Similar to Acceptance testing, except
        • Acceptance testing performed by customer/users/proxy
        • Acceptance testing demonstrates fitness rather than detecting defects
    • 7. Nature of System Testing
      • Similar to Unit Testing approach, but:
        • Performed on integrated system
        • Black-box only
        • Intellectual control has been lost
      • Test planning critical
        • Plan should be developed in conjunction with functional specification
    • 8. System Testing Mechanisms
      • Mechanisms
        • Module coverage
        • Sequences (Use-cases)
        • Environment simulation
      • Automatic bookkeeping required
        • Test scripts help
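      One way to get the automatic bookkeeping this slide asks for is a small driver script that runs the system-level cases and records outcomes for later comparison. A sketch, with hypothetical case names and placeholder bodies:

        import json, time

        def case_startup_default():
            return True        # placeholder: drive the integrated system through start-up

        def case_shutdown_while_busy():
            return True        # placeholder: drive a busy-shutdown scenario

        CASES = {"startup-default": case_startup_default,
                 "shutdown-while-busy": case_shutdown_while_busy}

        def main():
            results = []
            for name, case in CASES.items():
                started = time.time()
                try:
                    passed = bool(case())
                except Exception:
                    passed = False
                results.append({"case": name, "passed": passed,
                                "seconds": round(time.time() - started, 3)})
            with open("system_test_log.json", "w") as f:
                json.dump(results, f, indent=2)    # the bookkeeping record

        if __name__ == "__main__":
            main()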
    • 9. Operational Testing
      • Operational Testing
        • Execute test cases with the same statistical properties as those found in real operational use
      • Finding failures → confidence in none
        • Since operations used frequently by users are the testing focus, failures are not distributed
      • Structural coverage?
        • No
      • Functional coverage?
        • Almost, needs weighting by usage
    • 10. Operational Profile
      • Operational Profile
        • A set of operations that a program may perform and their probabilities of occurrence in actual operation
        • There may be many operational profiles
      • (In theory) Probability density of input
      • (In practice) Histogram of function usage
    • 11. A Theoretical Operational Profile
    • 12. Histogram Profile by System Function
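      The figures for slides 11 and 12 are not preserved in this transcript. As a stand-in, here is a rough sketch of what a histogram profile amounts to in practice: each system function gets an estimated probability of use, and test cases are drawn in those proportions (the functions and percentages are invented):

        import random

        PROFILE = {                 # function -> estimated probability in real use
            "enter-order":  0.55,
            "query-status": 0.30,
            "cancel-order": 0.10,
            "admin-report": 0.05,
        }

        def draw_operations(n, rng=random):
            funcs = list(PROFILE)
            return rng.choices(funcs, weights=[PROFILE[f] for f in funcs], k=n)

        print(draw_operations(10))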
    • 13. Random Testing (Inputs)
      • Not “random” methodology
        • Opposite of systematic
        • Pseudorandom number generator
        • Non-numerical inputs?
      • Effective oracle essential
        • Too many (not nice) inputs
    • 14. Cube-root Subroutine
      • Range [-10000, 10000]
      • Accuracy 1 in 100000
      • Profile: (1000 test points)
        • [-1.1, 1.1] 80% (800)
        • (1.1, 100] 10% (100)
        • (100, 10000] 5% (50)
        • [-10000, -1.1) 5% (50)
        • (Uniform pseudorandom with subdomains)
      • Oracle: cube the result and compare with the input (within ±0.00001)
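      A sketch of this test in Python: the 1000 points are drawn uniformly within each subdomain in the stated proportions, and the cube-back oracle is applied to each result (cube_root here merely stands in for the real subroutine under test):

        import random

        def cube_root(x):
            return (abs(x) ** (1.0 / 3.0)) * (1 if x >= 0 else -1)

        # (low, high, number of points), per the profile above
        SUBDOMAINS = [(-1.1, 1.1, 800), (1.1, 100.0, 100),
                      (100.0, 10000.0, 50), (-10000.0, -1.1, 50)]

        def run_random_test(seed=0):
            rng = random.Random(seed)
            failures = []
            for low, high, count in SUBDOMAINS:
                for _ in range(count):
                    x = rng.uniform(low, high)
                    y = cube_root(x)
                    if abs(y ** 3 - x) > 0.00001:    # the oracle
                        failures.append(x)
            return failures

        print(f"failures: {len(run_random_test())} out of 1000 runs")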
    • 15. Random Testing in Practice
      • Surprisingly good at failure-finding
        • Measures: probability of finding at least one failure
        • Expected value of delivered reliability after test
      • Failures are found in the order users will see them
      • Good stopping rule
        • Reliability goals have been reached with acceptable confidence
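      One standard way to make the stopping rule concrete: after N failure-free tests drawn from the operational profile, the confidence that the long-run failure rate is no worse than a bound p is 1 - (1 - p)**N. This reproduces the figures given in the speaker note for the cube-root example:

        def confidence(n_failure_free_tests, failure_rate_bound):
            return 1.0 - (1.0 - failure_rate_bound) ** n_failure_free_tests

        print(confidence(1000, 1 / 1000))    # about 0.63: confidence in an MTTF of 1000 runs
        print(confidence(1000, 1 / 100000))  # about 0.01: confidence in an MTTF of 100000 runs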
    • 16. Regression Testing
      • Pick from existing testset
        • Cases needed because of changes
        • Cases to alter because of changes
        • Relies on existing testset being good
      • New tests for changes
      • Dependency analysis is the answer
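      A minimal, coverage-based sketch of picking from the existing testset: keep every case whose recorded coverage touches a changed module (the testset and coverage map are hypothetical):

        CHANGED_MODULES = {"parser"}

        TEST_COVERAGE = {                  # test name -> modules it exercised last run
            "t1-load-small-file": {"io", "parser"},
            "t2-report-summary":  {"report"},
            "t3-bad-input":       {"io", "parser", "report"},
        }

        def select_regression_tests(changed, coverage):
            return [name for name, modules in coverage.items() if modules & changed]

        print(select_regression_tests(CHANGED_MODULES, TEST_COVERAGE))
        # -> ['t1-load-small-file', 't3-bad-input']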
    • 17. Dependency Analysis
      • Dependence
        • Exists between statements when the order of their execution affects the results of the program.
      • Static methods
        • Worst-case flowgraph traversal
        • Very similar to dataflow coverage
      • Dynamic methods
        • Relies on given test data
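      A tiny illustration of the definition: the first pair of statements has no dependence (reordering cannot change the result), while the second pair does:

        def independent():
            a = 1          # these two statements touch different variables,
            b = 2          # so swapping them cannot change the result
            return a + b

        def dependent():
            x = 1          # swapping these two statements changes the value returned,
            x = x + 1      # so the second statement depends on the first
            return x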
    • 18. Package Testing (Systems)
      • Requirements?
        • (Yes, but neglected.)
      • Specification, design, code?
        • (No.)
      • Black-box system tests are dictated
      • There may be an acceptance test based on a standard
    • 19. Component Testing (Modules)
      • Libraries
        • Language (Java standard class library; GNU C lib, etc.)
        • User groups (search the net)
      • Commercial Off-The-Shelf components (COTS)
      • Problems
        • What is the specification of the component?
        • Is the quality good enough?
    • 20. Conclusions
      • Integration testing is crucial to finding failures at interfaces
      • System test planning necessary for credible results
      • Operational testing important for finding failures likely to occur in use
