Chapter 18: Software Testing Strategies

These slides are based in part on those provided by the textbook publisher, the textbook author, and Dr. Ralph D. Meeker. They are copyright by various people, including Roger S. Pressman & Associates, Inc., McGraw-Hill Companies, Inc., Dr. Ralph D. Meeker, and Dr. John Kevin Doyle.
Testing begins at the module level and works outward toward the integration of the entire computer-based system.
Different testing techniques are appropriate at different points in time.
Testing is conducted by the developer of the software and (for large projects) an independent test group.
Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
Verification and Validation
Verification – are we building the product correctly?
refers to the set of activities that ensure that software correctly implements a specific function.
Validation – are we building the correct product?
refers to the set of activities that ensure that the software that has been built is traceable to customer requirements.
Testing Strategy: unit test → integration test → validation test → system test.
Unit Testing: the software engineer develops test cases for the module to be tested and evaluates the results.
Unit testing focuses on the module's interface, local data structures, boundary conditions, independent paths, and error-handling paths.
Unit Testing Environment: because the module is not a standalone program, a driver is written to invoke it with test cases and collect the results, and stubs stand in for any modules it calls.
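The driver-and-stub arrangement can be sketched with Python's unittest: the test class plays the role of the driver, and a mock object plays the role of a stub for a module the unit calls. The names here (compute_invoice, the tax-service collaborator) are hypothetical illustrations, not from the slides.

```python
import unittest
from unittest import mock

def compute_invoice(amount, tax_service):
    """Module under test: adds tax obtained from a collaborating module."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 + tax_service.rate_for(amount)), 2)

class ComputeInvoiceTest(unittest.TestCase):
    """The test class acts as the driver; the mock acts as the stub."""

    def setUp(self):
        # Stub: stands in for the real tax service the module would call.
        self.tax_stub = mock.Mock()
        self.tax_stub.rate_for.return_value = 0.10

    def test_nominal_path(self):
        self.assertEqual(compute_invoice(100.0, self.tax_stub), 110.0)

    def test_boundary_condition(self):
        # Boundary: zero is the smallest legal amount.
        self.assertEqual(compute_invoice(0.0, self.tax_stub), 0.0)

    def test_error_handling_path(self):
        with self.assertRaises(ValueError):
            compute_invoice(-1.0, self.tax_stub)

# Run the suite explicitly (the driver exercising the module).
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ComputeInvoiceTest))
```

Note how the three tests map onto the slide's checklist: an independent path, a boundary condition, and an error-handling path, all exercised through the module's interface.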
Integration Testing Strategies
The big bang approach – slap everything together, and hope it works. If not, debug it.
An incremental construction strategy – add modules one at a time, or in small groups, and debug problems incrementally.
Top-Down Integration: the top module is tested with stubs; stubs are replaced one at a time, “depth first”; as new modules are integrated, some subset of tests is re-run.
Bottom-Up Integration: worker modules are grouped into builds (clusters) and integrated, each exercised by a driver; drivers are removed one at a time as integration moves upward.
Sandwich Testing combines both approaches: top modules are tested with stubs while worker modules are grouped into builds (clusters) and integrated bottom-up.
Each time a new module is added as part of integration testing, or a bug is fixed in the software, the software changes.
These changes may cause problems with functions that previously worked.
Regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not generated unintended side effects.
The regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
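A regression suite in this spirit can be sketched as a small set of named checks that previously passed and are re-run after every change. The function under change (slugify) and the individual checks are hypothetical illustrations.

```python
def slugify(title):
    """Changed module: a bug fix or new feature lands here."""
    return "-".join(title.lower().split())

# Regression tests: a representative sample of tests that already
# passed, re-executed after each change to catch unintended side effects.
REGRESSION_SUITE = [
    ("single word", lambda: slugify("Hello") == "hello"),
    ("spaces collapse", lambda: slugify("A  B") == "a-b"),
    ("already slugified", lambda: slugify("a-b") == "a-b"),
]

def run_regression():
    """Return the names of any checks that now fail."""
    return [name for name, check in REGRESSION_SUITE if not check()]

print(run_regression())  # []
```

An empty result means the change generated no regressions in the sampled behaviors; a non-empty result names the function classes that broke.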
Validation Testing
Verifies conformance with requirements.
Answers the question “Did we build the correct product?”
Alpha and Beta Testing
Testing by the customer of a near-final version of the product.
System Testing: testing of the entire software system, focused on end-to-end qualities.
Other Specialized Testing
Validation succeeds when the software under test functions in a manner that can reasonably be expected by the customer.
Validation is achieved through a series of black-box tests that demonstrate conformity with requirements.
The test plan should outline the classes of tests to be conducted and define specific test cases that will be used in an attempt to uncover errors in conformity with requirements.
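Such black-box validation cases can be written directly from a requirement, without reference to the code's internals. The requirement text and the function under validation (apply_discount) are hypothetical illustrations.

```python
def apply_discount(total):
    # Implementation under validation; its internals are irrelevant to
    # the tests below -- only behavior against the requirement matters.
    return total * 0.9 if total >= 100 else total

# Hypothetical requirement R1: "Orders of 100 or more receive a 10%
# discount; smaller orders are unchanged."  Each case maps an input
# class to the output the requirement demands.
validation_cases = [
    (99.99, 99.99),    # just below the threshold: no discount
    (100.00, 90.00),   # at the threshold: discount applies
    (200.00, 180.00),  # above the threshold: discount applies
]

for total, expected in validation_cases:
    assert abs(apply_discount(total) - expected) < 1e-9, (total, expected)
print("all validation cases pass")
```

The cases deliberately probe the requirement's boundary (just below, at, and above the threshold), a common pattern when deriving black-box tests from stated requirements.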
Alpha and Beta Testing: in an alpha test, the customer tests the software at the developer's site, with the developer observing; in a beta test, the customer tests the software at the customer's own site, and the developer reviews the reported problems.
Recovery testing – forces the software to fail in a variety of ways and verifies that recovery is properly performed.
Security testing – attempts to verify that protection mechanisms built into a system will in fact protect it from improper access.
Stress testing – executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
Performance testing – tests run-time performance of software within the context of an integrated system.
End-to-end testing – test user scenarios, from end to end of the system.
Stability testing – test system over time (e.g., 72 hours) with normal load.
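A stress test along these lines can be sketched by driving a component with an abnormal volume of requests and checking that it still honors its invariants. The component (a bounded cache), its capacity, and the request volume are hypothetical illustrations.

```python
from collections import OrderedDict
import time

class BoundedCache:
    """A cache that must never hold more than `capacity` entries."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the oldest entry

def stress(cache, n):
    """Hammer the cache with n puts and time the run."""
    start = time.perf_counter()
    for i in range(n):
        cache.put(i, i)
    return time.perf_counter() - start, len(cache.data)

cache = BoundedCache(capacity=1000)
elapsed, size = stress(cache, 1_000_000)
# Under abnormal volume the cache must still respect its bound.
assert size <= 1000
print(f"1,000,000 puts in {elapsed:.2f}s, final cache size {size}")
```

The point of the stress run is not the timing number itself but that the invariant (the capacity bound) survives a workload far beyond normal use.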
Debugging: A Diagnostic Process
The Debugging Process: execution of test cases yields results; when the results differ from expectations, debugging begins; suspected causes are narrowed to identified causes, corrections are made, regression tests are run, and new test cases may be added.
Debugging Effort: the time required to diagnose the symptom and determine the cause usually dominates; the time required to correct the error and conduct regression tests is comparatively small.
Symptoms and Causes
Symptom and cause may be geographically separated.
Symptom may disappear when another problem is fixed.
Cause may be due to a combination of non-errors.
Cause may be due to a system or compiler error.
Cause may be due to assumptions that everyone believes.
Symptom may be intermittent.
Consequences of Bugs: the damage caused by a bug ranges along a spectrum from mild, annoying, disturbing, serious, extreme, and catastrophic to infectious, depending on the bug type. Bug categories include function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
Brute force – print out values of “all” the variables, at “every” step, look through the mass of data, and find the error.
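The brute-force approach amounts to instrumenting the code with trace output at every step and scanning the flood of data for the moment things go wrong. The buggy function here (running_average) is a hypothetical illustration.

```python
def running_average(values, debug=False):
    total, count = 0, 0
    for v in values:
        total += v
        count += 1
        if debug:  # brute force: dump "all" variables at "every" step
            print(f"v={v} total={total} count={count}")
    return total / count  # latent bug: ZeroDivisionError if values is empty

print(running_average([2, 4, 6], debug=True))  # 4.0
```

Scanning the trace works, but as the slide implies it scales poorly; the induction and deduction strategies below replace the mass of data with targeted hypotheses.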
Backtracking – begin with the symptom of the error and trace backwards from there (by hand) to the error.
Induction – moving from particular to general:
Run error-detecting test with lots of different input.
Based on all the data generated, hypothesize an error cause.
Run another test to validate the hypothesis.
If hypothesis is not validated, generate a new hypothesis and repeat.
Deduction – moving from general to particular:
Consider all possible error causes.
Generate tests to eliminate each (we hope/expect all but one will succeed).
We may then be able to use further tests to further refine the error cause.
First, think about the symptom you are seeing – don’t run off half-cocked.
Use tools (e.g., dynamic debuggers) to gain more insight, and to make the steps reproducible.
If at an impasse, get help from someone else.
Be absolutely sure to conduct regression tests when you do fix the bug.