Concepts of Testing

Collaborative Software Development Laboratory
Information and Computer Sciences
University of Hawaii
Honolulu HI 96822

Goals:
•Understand the terms and concepts in software testing.
•Be able to apply these concepts to improve the quality of your own testing efforts during your project.
Why do we care about testing?
•Verification and validation
•Fault, failure, error
•White and black box testing
•Heuristics for test case design
•Take-aways for your project
First National Bank (1996)
•An error at First National Bank of Chicago resulted in the balances of 800 customers being inflated by a total of $763 billion.
Reason:
•Inadequate testing. The bank updated ATM transaction software with new message codes. The message codes were not tested on all ATM protocols, resulting in some ATMs interpreting them as huge increases to customer balances.

Therac-25 (1985-1987)
•Six people were overexposed during radiation treatments for cancer by the Therac-25 radiation therapy machine. Three of them are believed to have died from the overdoses.
Reason:
•Inadequate testing. Hardware safety locks were removed and replaced by software safety locks, which could be overcome by technician “type ahead”.
Ariane 5 (1996)
•The Ariane 5 rocket exploded on its maiden flight.
Reason:
•Inadequate testing. The navigation package was inherited from Ariane 4 without proper testing. The new rocket flew faster, resulting in larger values of some variables, resulting in an attempt to convert a 64-bit floating point number into a 16-bit integer. The overflow was caught, and the action taken was to shut down the navigation system.

Validation and Verification
Validation:
•Establishing the fitness of a software product for its use.
•“Are we building the right product?”
•Requires interaction with customers.
Verification:
•Establishing the correspondence between the software and its specification.
•“Are we building the product right?”
•Requires interaction with software.
Static vs. Dynamic V&V
Static V&V:
•Software inspections
•Static analysis of source code
 - Control/data flow analysis
Dynamic V&V:
•Defect testing
 - Looks for errors in functionality
•Load testing
 - Looks for errors in scalability, performance, etc.

Failure, faults, and errors
Failure:
•Occurs when a program misbehaves.
Fault:
•A problem that exists in the source code.
Error:
•A human action that results in software containing a fault.

An error leads to the inclusion of a fault, which may or may not lead to a failure.
Testing cannot prove that a program never fails, only that it contains faults.
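To make the fault/failure distinction concrete, here is a small hypothetical sketch (the function and inputs are invented for illustration):

```python
def is_leap_year(year):
    # Fault: the century rule (years divisible by 100 but not by 400
    # are not leap years) is missing. A human error put this fault in
    # the code, and it sits there whether or not it is ever executed.
    return year % 4 == 0

# Many inputs never trigger a failure, so the fault stays latent:
assert is_leap_year(2024) is True
assert is_leap_year(2023) is False

# This input turns the fault into a failure: the actual behavior
# (True) deviates from the expected behavior (False).
print(is_leap_year(1900))  # prints True; 1900 was not a leap year
```

The fault existed all along; the failure only appears for inputs that exercise the missing rule, which is why testing compares actual to expected behavior over many inputs.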
Testing, defect classification, and debugging
Software testing:
•The process of executing a program and comparing the actual behavior with the expected behavior.
•Deviations between actual and expected behavior indicate defects in the program whose removal improves the quality of the software.
Defect classification:
•The process of identifying the type of deviation between actual and expected (design error, specification error, testing error, etc.)
Debugging:
•The process of tracking down the source of the defect and removing it.

Discussion questions:
•Is it “testing” if you simply run the program to see what it does?
•Why do defect classification? Isn’t testing followed by debugging enough?
•Is validation testing done best with static or dynamic testing?
•Is verification testing done best with static or dynamic testing?
Why is testing hard?
Exhaustive testing:
•Execute the program on all possible inputs and compare actual to expected behavior.
•Could “prove” program correctness.
•Not practical for any non-trivial program.
Practical testing:
•Select a tiny % of all possible tests.
•Goal: executing the tiny % of tests will uncover a large % of the defects present!
•A “testing method” is essentially a way to decide which tiny % to pick.

Basics of test case selection
White box testing (also “clear” or “glass” box):
•Assumes knowledge of the code
•Coverage-based testing
•Automated tools available to assess coverage
Black box testing:
•Doesn’t assume knowledge of the code
•Specification-based testing
•Automated tools available (requires a formal specification)
Other methods exist:
•Mutation testing, etc.
Statement coverage
For a test case to uncover a defect, the statement containing the defect must be executed.
Therefore, a set of test cases which guarantees all statements are executed might uncover a large number of the defects present.
Whether or not the defects are actually uncovered depends upon the program state at the time each statement is executed.

Control flow coverage
Control flow coverage adds conditions to statement coverage to raise the odds of discovering defects:
•Every conditional is evaluated as both true and false during testing.
•Every loop must be executed both 0 times and more than 1 time.
Path coverage:
•All permutations of paths through the program are executed.
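As a sketch of how control-flow criteria demand more test cases than statement coverage, consider this hypothetical function (the names and values are invented):

```python
def total_price(prices, discount):
    """Sum a list of prices, then apply a fractional discount."""
    total = 0
    for p in prices:          # loop: should be tested 0 times and > 1 time
        total += p
    if discount > 0:          # conditional: should evaluate both true and false
        total = total * (1 - discount)
    return round(total, 2)

# This single test case executes every statement (full statement coverage):
assert total_price([10, 20], 0.5) == 15.0

# Control-flow coverage demands further cases, which can expose defects
# the single case above would never reach:
assert total_price([], 0.5) == 0        # loop executed 0 times
assert total_price([10, 20], 0) == 30   # conditional evaluated false
```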
Tool support for coverage testing
How do you know whether you’ve executed all statements or not?

Typical tool support:
•The coverage tool instruments the source code, which is then compiled.
•You run your instrumented system through your chosen set of test cases.
•The coverage tool generates a log file, which identifies paths not taken.
•You add new test cases until all paths are covered.
•Examples: GCT for C, Jcover for Java

Problems with code coverage
Coverage can catch bugs in code that has been written.
It cannot catch bugs in code that has not been written!
•Errors of omission: code that ought to have been written but wasn’t
 - Missing boolean conditions in IF statements
 - Missing exception handlers
To catch bugs in code that has not been written, you must compare the behavior of the program to its specification.
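In the same spirit as the tools above (though far simpler than GCT or Jcover), a toy line-coverage tracer can be sketched with Python's standard `sys.settrace` hook; the `classify` function is invented for illustration:

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record each (function, line) the interpreter executes.
    if event == "line":
        executed.add((frame.f_code.co_name, frame.f_lineno))
    return tracer

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

sys.settrace(tracer)      # "instrument" the program
classify(5)               # run the chosen test case
sys.settrace(None)

lines_run = {line for func, line in executed if func == "classify"}
# The line returning "negative" never appears in the log, which tells
# us to add a test case such as classify(-1).
```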
Specification based testing
Specification:
•A mapping between the possible “inputs” to the program and the corresponding expected “outputs”
•Design a set of test cases to see if inputs actually map to outputs.
•Does not require access to source code.

Differences with white box (coverage) testing:
•Can catch errors of omission.
•Effectiveness depends upon a high quality specification.

Equivalence classes
Goal: Divide the possible inputs into categories such that testing one point in each category is equivalent to testing all points in that category. Provide one test case for each category.

Equivalence class definition is usually an iterative process and goes on throughout development.

Use heuristics to get you started designing your test cases.
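As a sketch, here is an equivalence-class partition for a hypothetical `shipping_fee` function (the specification and numbers are invented):

```python
def shipping_fee(weight_kg):
    """Hypothetical spec: weight in (0, 2] costs 5; weight in (2, 20]
    costs 12; any other weight is invalid and raises ValueError."""
    if weight_kg <= 0 or weight_kg > 20:
        raise ValueError("invalid weight")
    return 5 if weight_kg <= 2 else 12

# One representative point per equivalence class:
assert shipping_fee(1) == 5        # class: light parcels, (0, 2]
assert shipping_fee(10) == 12      # class: heavy parcels, (2, 20]
for bad in (0, -3, 25):            # classes: illegal inputs
    try:
        shipping_fee(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Five test cases stand in for the infinitely many possible weights, on the assumption that the program treats all points within a class alike.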
Unit test design heuristics
If input is a value:
•maximum value
•minimum value
•empty value
•typical value
•illegal value
If input is a sequence:
•single element
•empty sequence
•max and min element values
•sequences of different lengths
•illegal elements
If the I/O specification contains conditions:
•true
•false
If the I/O specification contains iterations:
•zero times
•1 time
•> 1 time

Web app design heuristics
Every page is retrieved at least once.
•Prevent 404 errors.
Every link is followed at least once.
•Prevent 404 errors.
All form input fields are tested with:
•normal values
•erroneous values
•maximum/minimum values
Always check the response for appropriateness.
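The sequence heuristics above can be applied directly when writing unit tests; here is a sketch in Python's `unittest` (the course itself uses JUnit, and the `longest` function is invented for illustration):

```python
import unittest

def longest(words):
    # Hypothetical function under test: returns the longest string in
    # a sequence, or "" for an empty sequence.
    best = ""
    for w in words:
        if len(w) > len(best):
            best = w
    return best

class TestLongest(unittest.TestCase):
    def test_empty_sequence(self):          # heuristic: empty sequence
        self.assertEqual(longest([]), "")

    def test_single_element(self):          # heuristic: single element
        self.assertEqual(longest(["a"]), "a")

    def test_typical_sequence(self):        # heuristic: typical value
        self.assertEqual(longest(["to", "testing", "be"]), "testing")

    def test_max_min_elements(self):        # heuristic: max/min element values
        self.assertEqual(longest(["x" * 100, ""]), "x" * 100)

if __name__ == "__main__":
    unittest.main(exit=False)
```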
Industry heuristics
From “How to design practical test cases,” Tsuneo Yamaura, IEEE Software, Nov. 1998:
•Proper test case density is one test case per 10-15 LOC.
•Test case type percentages:
 - Basic and normal tests < 60%
 - Boundary and limit tests > 10%
 - Error tests > 15%
•Run a 48 hour continuous operation test (reiterating basic functions) for memory leaks, deadlock, time-outs, etc.
•The tendency is to write too many tests for well understood functions, and too few for poorly understood ones. Use density to uncover this.

Automated Specification Testing
Automation of specification-based testing requires a formal (i.e., machine-interpretable) specification.
Also known as “formal verification”.
The tool:
•Reads the specification
•Generates the test cases
•Runs the program
•Checks the results
Issues:
•Must have a formal specification language.
•Typically used for software requiring very high quality (life critical, etc.)
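The read / generate / run / check loop above can be sketched in miniature; here the “specification” is just an executable predicate and the generator is random (all names are invented, and real tools use much richer specification languages):

```python
import random
from collections import Counter

# The "specification": output must be in non-decreasing order and be a
# permutation of the input.
def meets_spec(inputs, output):
    ordered = all(output[i] <= output[i + 1] for i in range(len(output) - 1))
    return ordered and Counter(output) == Counter(inputs)

# The program under test: a hand-written insertion sort.
def program(xs):
    result = []
    for x in xs:
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    return result

# The "tool": generate test cases, run the program, check the results.
random.seed(0)
for _ in range(100):
    case = [random.randint(-50, 50) for _ in range(random.randint(0, 8))]
    assert meets_spec(case, program(case)), f"spec violated for {case}"
```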
JOSSE Testing Requirements
You need to both validate and verify your system.
•Verify with JUnit/HttpUnit.
•Validate with customers.
You need to ensure that your JUnit and HttpUnit tests are sufficiently comprehensive.
•Use the unit test and web app design heuristics to help guide the process.

Resources
•“How to misuse code coverage”