Concepts of Testing



Objectives
• Understand the terms and concepts in software testing.
• Be able to apply these concepts to improve the quality of your own testing efforts during software development.

Philip Johnson
Collaborative Software Development Laboratory
Information and Computer Sciences
University of Hawaii, Honolulu HI 96822

Outline
• Why do we care about testing?
• Terminology: verification and validation; fault, failure, and error
• White box and black box testing
• Heuristics for test case design
• Take-aways for your project

First National Bank (1996)
Problem:
• An error at First National Bank of Chicago resulted in the balances of 800 customers being inflated by a total of $763 billion.
Reason:
• Inadequate testing. The bank updated its ATM transaction software with new message codes. The message codes were not tested on all ATM protocols, so some ATMs interpreted them as huge increases to customer balances.

Therac-25 (1985-1987)
Problem:
• Six people were overexposed during radiation treatments for cancer by the Therac-25 radiation therapy machine. Three people are believed to have died from the overdoses.
Reason:
• Inadequate testing. Hardware safety locks were removed and replaced by software safety locks, which could be overcome by technician "type ahead".
Ariane 5 (1996)
Problem:
• The Ariane 5 rocket exploded on its maiden flight.
Reason:
• Inadequate testing. The navigation package was inherited from Ariane 4 without proper testing. The new rocket flew faster, resulting in larger values for some variables and, ultimately, an attempt to convert a 64-bit floating point number into a 16-bit integer. The error was caught, and the action taken was to shut down the navigation system.

Overall goals: Validation and Verification
Validation:
• Establishing the fitness of a software product for its use.
• "Are we building the right product?"
• Requires interaction with customers.
Verification:
• Establishing the correspondence between the software and its specification.
• "Are we building the product right?"
• Requires interaction with the software.

Static vs. Dynamic V&V
Static V&V:
• Software inspections
• Static analysis of source code (control/data flow analysis)
Dynamic V&V:
• Defect testing: looks for errors in functionality
• Load testing: looks for errors in scalability, performance, and reliability

Failure, faults, and errors
Failure:
• Occurs when a program misbehaves.
Fault:
• A problem that exists in the source code.
Error:
• A human action that results in software containing a fault.
An error leads to the inclusion of a fault, which may or may not lead to a failure. Testing cannot prove that a program never fails, only that it contains faults.

Testing, defect classification, and debugging
Software testing:
• The process of executing a program and comparing the actual behavior with the expected behavior.
• Deviations between actual and expected behavior should indicate defects in the program, whose removal improves the quality of the software.

Issues
• Is it "testing" if you simply run the program to see what it does?
• Why do defect classification? Isn't testing followed by debugging enough?
Defect classification:
• The process of identifying the type of deviation between actual and expected behavior (design error, specification error, testing error, etc.)
Debugging:
• The process of tracking down the source of the defect and removing it.

Issues (continued)
• Is validation testing done best with static or dynamic testing?
• Is verification testing done best with static or dynamic testing?
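The failure/fault/error distinction, together with the Ariane 5 conversion bug, can be illustrated with a short sketch. The function below is hypothetical, not actual flight code: the fault is a silent 16-bit wraparound; the error was the human decision to reuse a conversion routine without re-testing it; a failure appears only when an input falls outside the 16-bit range.

```python
def to_int16(x: float) -> int:
    """Faulty narrowing conversion: silently wraps to a signed
    16-bit value instead of rejecting out-of-range inputs."""
    n = int(x)
    return ((n + 32768) % 65536) - 32768  # two's-complement wrap

# Typical Ariane 4-era values never expose the fault...
print(to_int16(1234.5))   # 1234: fault present, no failure
# ...but larger Ariane 5 values turn the fault into a failure:
print(to_int16(40000.0))  # -25536: garbage, not 40000
```

The lesson matches the slide: the fault was latent all along, and testing can reveal it only if some test case drives the program into the failing state.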
Why is testing hard?
Exhaustive testing:
• Execute the program on all possible inputs and compare actual to expected behavior.
• Could "prove" program correctness.
• Not practical for any non-trivial program.
Practical testing:
• Select a tiny percentage of all possible tests.
• Goal: executing that tiny percentage of tests will uncover a large percentage of the defects present!
• A "testing method" is essentially a way to decide which tiny percentage to pick.

Basics of test case selection
White box testing (also called "clear box" or "glass box" testing):
• Assumes knowledge of the code.
• Coverage-based testing.
• Automated tools are available to assess coverage.
Black box testing:
• Does not assume knowledge of the code.
• Specification-based testing.
• Automated tools are available (requires a formal specification).
Other methods exist:
• Mutation testing, etc.

Statement coverage
For a test case to uncover a defect, the statement containing the defect must be executed. Therefore, a set of test cases which guarantees that all statements are executed might uncover a large number of the defects present. Whether or not the defects are actually uncovered depends upon the program state at the time each statement is executed.

Control flow coverage
Control flow coverage adds conditions to statement coverage to raise the odds of discovering defects.
Branch coverage:
• Every conditional is evaluated as both true and false during testing.
Loop coverage:
• Every loop is executed both 0 times and more than 1 time.
Path coverage:
• All permutations of paths through the program are taken.

Problems with code coverage
Coverage can catch bugs in code that has been written. It cannot catch bugs in code that has not been written! These are errors of omission: code that ought to have been written but wasn't.

Tool support for coverage testing
How do you know whether you've executed all statements or not? Typical tool support:
• The coverage tool instruments the source code, which is then compiled.
• You run your instrumented system through your chosen set of test cases.
• The coverage tool generates a log file, which identifies paths not taken.
• You add new test cases until all paths are taken.
• Example tools: GCT for C, JCover for Java.

Examples of errors of omission:
• Missing boolean conditions in IF statements
• Missing exception handlers
To catch bugs in code that has not been written, you must compare the behavior of the program to its specification.
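A minimal sketch of the difference between statement and branch coverage, using a hypothetical `safe_ratio` function:

```python
def safe_ratio(a: float, b: float) -> float:
    """Return a / b, or 0.0 when b is zero."""
    result = 0.0
    if b != 0:
        result = a / b
    return result

# This single test executes every statement (100% statement coverage):
assert safe_ratio(6, 3) == 2.0
# Branch coverage additionally demands the false branch of "if b != 0",
# which statement coverage alone never forces us to exercise:
assert safe_ratio(6, 0) == 0.0
```

Note that even 100% branch coverage cannot reveal an error of omission: if the specification required raising an exception for b == 0 and that code was simply never written, no coverage metric will flag it.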
Specification-based testing
Specification:
• A mapping between the possible "inputs" to the program and the corresponding expected "outputs".
Specification-based testing:
• Design a set of test cases to see if inputs actually map to the expected outputs.
• Does not require access to the source code.
Differences from white box (coverage) testing:
• Can catch errors of omission.
• Effectiveness depends upon a high-quality specification.

Equivalence classes
• Goal: divide the possible inputs into categories such that testing one point in each category is equivalent to testing all points in the category.
• Provide one test case for each category.
• Equivalence class definition is usually an iterative process that goes on throughout development.
• Use heuristics to get started designing your test cases.

Unit test design heuristics
If the input is a value, test:
• the maximum value
• the minimum value
• an empty value
• a typical value
• an illegal value
If the input is a sequence, test:
• a single element
• an empty sequence
• maximum and minimum element values
• sequences of different sizes
• illegal elements
If the I/O specification contains conditions, test:
• the true case
• the false case
If the I/O specification contains iterations, test:
• zero iterations
• exactly 1 iteration
• more than 1 iteration

Web app design heuristics
• Every page is retrieved at least once (prevents 404 errors).
• Every link is followed at least once (prevents 404 errors).
• All form input fields are tested with normal values, erroneous values, and maximum/minimum values.
• Always check the response for appropriateness.

Industry heuristics
From "How to design practical test cases," Tsuneo Yamaura, IEEE Software, Nov. 1998:
• Proper test case density is one test case per 10-15 LOC.

Automated specification testing
• Automation of specification-based testing requires a formal (i.e., machine-interpretable) specification.
• Also known as "formal verification".
The automated specification testing tool:
• Reads the specification
• Generates the test cases
• Runs the program
• Checks the results
Issues:
• Requires a formal specification language.
• Typically used for software requiring very high quality (life-critical systems, etc.)

More industry heuristics (Yamaura):
• Test case type percentages: basic and normal tests < 60%; boundary and limit tests > 10%; error tests > 15%.
• Run a 48-hour continuous operation test (reiterating basic functions) to check for memory leaks, deadlock, time-outs, etc.
• The tendency is to write too many tests for well-understood functions and too few for poorly understood ones. Use test case density to uncover this.
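The unit test design heuristics above translate directly into test cases. A sketch, assuming a hypothetical `grade` function whose specification maps scores 0-100 to pass/fail:

```python
def grade(score: int) -> str:
    """Return 'pass' for scores 70-100 and 'fail' for 0-69;
    anything outside 0-100 is illegal."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in 0..100")
    return "pass" if score >= 70 else "fail"

# One test per equivalence class, plus the boundary values:
assert grade(0) == "fail"      # minimum value
assert grade(69) == "fail"     # just below the cutoff
assert grade(70) == "pass"     # the cutoff itself
assert grade(100) == "pass"    # maximum value
for illegal in (-1, 101):      # illegal values
    try:
        grade(illegal)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```

Here the equivalence classes are {0..69}, {70..100}, and the illegal inputs; boundary values sit on each side of every class edge.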
JOSSE testing requirements
• You need to both validate and verify your software.
  - Verify with JUnit/HttpUnit.
  - Validate with customers.
• You need to ensure that your JUnit and HttpUnit tests are sufficiently comprehensive.
• Use the unit test and web app design heuristics to help guide the process.

Resources
• "How to misuse code coverage"
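For readers not working in Java, the JUnit style of unit testing looks like the following Python `unittest` sketch; the `Stack` class and its tests are hypothetical, chosen to exercise the "empty sequence" and "single element" heuristics from the earlier slide:

```python
import unittest

class Stack:
    """Hypothetical class under test."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_pop_empty_fails(self):        # heuristic: empty sequence
        with self.assertRaises(IndexError):
            Stack().pop()

    def test_push_then_pop(self):          # heuristic: single element
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)
```

Run with `python -m unittest <file>`; JUnit/HttpUnit tests follow the same arrange-act-assert shape.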