Introduction to Testing

Notes
  • The second half of OMSE 535 is devoted to program testing. Testing can have a strong process orientation (and this is the view taken by Hetzel's text), but we will be concerned primarily with the technical aspects, which come into play only when there is code to execute.
  • In one study, 25% of the failures found were not seen by the testers. The essence of technical testing is choosing the test points, the inputs that will be tried. Hetzel has a consulting company whose slogan is "TEST, THEN CODE." (Aside: why is it that selling consulting services about software products has been singularly unsuccessful, while the same companies can be successful consulting about process?)
  • The great virtue in the current "find defects" viewpoint is that when one has been found, everyone knows what to do: fix it! The other views are much more problematic. For safety-critical software, quality assessment is an important part of the testing effort. However, there is no compelling reason why most software should be of high quality, and if quality doesn't matter, assessing it is a waste of time... The analogy to the automobile industry is of interest: the Japanese are credited with the introduction of quality control in the 1970s/80s -- is that why their cars were so successful?
  • Everyone slips all the time and says "errors" when the much more precise "failures" is what's meant. The trouble with talking about "defects," etc., is that it leads into the fantasy mindset that there was a perfect program into which the programmers inserted problems. Counting failures is difficult, because two may be the same in that a code change would eliminate them both ("the same bug"). But counting faults is worse: there could always be just one: "wrong code written." In studies that tried to count faults (really fixes) by type, the most common is always "omitted code."
  • The discrepancy between what is done in practice and what would enhance quality is striking, but the latter is very subjective.
  • Some examples of process decisions: one week for system test in a two-year development; no unit testing except informally by coders (?); 75% statement coverage required. In probably no other phase of software development is it possible for bad process management to cause so much wasted effort. Observing the letter of the testing (process) law takes time away from finding good test cases.
  • When the hard work of devising test cases is not done in the appropriate phase, most of the crucial cases will have been forgotten by the time they are needed.
  • The creative work of testing is finding the right inputs to realize the purpose of the work (finding failures). But without an oracle, failure doesn't exist. Random testing is unique in its ability to assess program quality; we will talk about it later under the heading of "reliability." The state of the art today is system-level functional subdomains with a human oracle.
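
To illustrate the role of an executable oracle in random testing, here is a minimal Python sketch (not from the lecture); the unit under test `my_sort`, the input profile, and the oracle are all hypothetical.

```python
import random

def my_sort(xs):
    """Hypothetical unit under test."""
    return sorted(xs)

def oracle_ok(inp, out):
    """Executable oracle: the output must be a sorted permutation of the input."""
    return out == sorted(inp)

def random_test(trials=1000, seed=0):
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        # Crude "user profile": short lists of small integers (an assumption).
        inp = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        out = my_sort(list(inp))
        if not oracle_ok(inp, out):
            failures.append(inp)  # record the failing test point
    return failures

if __name__ == "__main__":
    print("failures:", random_test())
```

Without the `oracle_ok` check, the loop would merely execute the program; it is the oracle that turns an execution into a test.
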
  • To make the method feasible, there have to be only a few subdomains. One point in each is a common choice. Minimal caution dictates a new point following a fix; even this is not always done. The conclusion in step 5 (that the whole subdomain is OK) is obviously false.
  • The question is: what is "the same"? Usually it is defined by a dubious criterion, then the tester confuses this with his goal. For example, "path coverage" defines "same" as: "input executes the same path." Any input that does this is the same as any other, so only one need be tried to find any failure... The "sameness" has slopped over into wishful thinking.
  • A combination of black-box and white-box testing is recommended.
  • Other names are black-box testing and specification-based testing. Applying it at the unit level requires good unit specifications, which are often lacking.
  • Example of a scenario: in a bank-teller support system, running through a complete transaction that a customer might request, instead of functions like "add to account" in isolation. The goals of finding failures and of proving that simple usage will not fail are easily mixed up here. Fault-based tests are tricky: their purpose is to show that some difficulty is not present; hence, unlike all other failure-seeking tests, it is hoped they will succeed.
  • Example: statement testing. The structural element is the program statement; a subdomain is all inputs that cause that statement to be executed. The holy grail has not been found, and won't be. Perhaps the best structural method is "special-values testing" (also called boundary testing). An example is a design boundary, say where a data structure is just filled or a limit just exceeded.
  • There is no solid justification for this idea, except the doubtful one that uncovered code must have some functional purpose. It is difficult not to start analyzing structure to find additional tests. This is a bad idea because the analysis usually turns up trivial values that find no failures.
  • Automatic testing usually founders on the oracle. Unless there is an effective oracle, no matter how clever the structural coverage may be, the test means little, because it is unknown whether or not it failed. Unit testing is a powerful failure-finding technique, but it is not systematically used because it is labor-intensive (i.e., costly), and units lack specifications.
  • Test harnesses are a good idea, and their creation can be automated. Stubs are intrinsically a bad idea, because they distort functionality and trivialize tests. Test scripts are difficult and time-consuming to prepare. They come into their own in regression testing (retesting after debugging or after maintenance). But too often, code changes require test changes, and the script must be maintained along with the code, making it a dead loss.

    1. Introduction To Testing (OMSE 535, Lecture 6) -- Manny Gatlin
    2. Overview
       • What is software testing?
       • Purpose of testing
       • Terminology
       • Testing models
       • Sequence of test activities
       • The Test Plan
       • Test Technology
       • Test Techniques, Approaches
       • Functional testing
       • Structural testing
       • Unit testing
       • Mechanism for unit test
    3. What Is Software Testing?
       • Orthodox view:
         ◦ Executing a program and comparing expected behavior with actual behavior
       • Hetzel (et al.): any defect-seeking activity
         ◦ Perhaps specific cases need to be involved
    4. What Software Testing Is NOT!
       • It is not debugging
         ◦ Debugging is the process of finding the cause of a software problem
         ◦ Software testing is the process of finding defects (i.e., incorrect behavior as defined by predetermined criteria or specifications)
    5. Purpose of Testing
       • Testing objectives
         ◦ To help clearly describe system behavior
         ◦ To find defects in requirements, design, documentation, and code as early as possible
         ◦ To prevent low-quality software from reaching the user/customer
       • Evolution of the purpose of testing:
         ◦ 1960s: Demonstration (how it works)
         ◦ Mid-1970s: Detection (find defects)
         ◦ 1990s: Prevention (manage quality)
       • Source: The Little Book of Testing, Vol. 1, Software Program Managers Network, 1998
    6. Terminology
       • IEEE standard glossary
         ◦ Failure: an execution with the wrong result (i.e., symptoms)
         ◦ Fault: code that is the source of a failure (i.e., causes)
         ◦ Defect, error, bug, etc. are equivalent to fault
         ◦ "The fault" usually means "what was fixed"
    7. More Terminology
       • Software Test Plan
         ◦ A document that describes the objectives, scope, approach, and focus of a software testing effort
         ◦ Primary work done during the requirements phase
    8. Types of Testing
       • Unit
         ◦ Testing to find faults in each software unit (module)
       • Integration
         ◦ Focused on finding failures due to software unit interactions
       • Subsystem
         ◦ Testing of a subsystem to verify that it meets its functional, quality, and operational requirements
       • System
         ◦ Verifies that the software system satisfies all of its functional, quality, and operational requirements
       • Acceptance
         ◦ Formal tests conducted to determine whether or not a system satisfies its acceptance criteria
    9. Testing Models
       • Traditional view
         ◦ Testing role is late in the development process
         ◦ Narrow scope, focused on finding failures in code
         ◦ Diagram: Requirements → Design → Coding → Testing → Operation
       • Planned Testing view
         ◦ Test planning starts early in the development process
         ◦ Focus on test planning helps quality of specifications
         ◦ Diagram: Test Planning runs ahead of Requirements → Design → Coding → Testing → Operation
    10. Testing Models Continued
       • Enlightened view
         ◦ Emphasis is on defect prevention
         ◦ Reviews and inspections used extensively
         ◦ Focus on building quality in from the start
         ◦ Diagram: a Quality Management Process spanning Test Planning, Requirements, Design, Coding, Testing, and Operation
    11. Sequence of Test Activities
       • Quality approach: a planned sequence from the Test Plan through Unit, Integration, Subsystem, System, and Acceptance Tests
       • General practice (sequence may vary):
         ◦ 1. System Test: execute the code and find any failures (random testing)
         ◦ 2. Integration Test: "big-bang" integration is the norm, as opposed to continuous or frequent integration
         ◦ 3. Acceptance Test: not typically user/customer defined
         ◦ 4. Unit Test: from a developer perspective, the unit works
         ◦ 5. Test Plan: "We should have done this from the beginning" is a common lament
         ◦ 6. Subsystem Tests: assuming the software design supports this type of testing
    12. Test Planning
       • Why plan testing?
         ◦ Demonstrates functional coverage with traceability
         ◦ Helps ensure testing is not just an afterthought
         ◦ Makes the specific requirements for testing visible to the project
         ◦ Establishes a testing sequence that makes sense
    13. Test Planning Expectations
       • Process / Management
         ◦ What will be tested, and when
         ◦ Resources to be used
         ◦ General techniques/tools to be used
         ◦ Documentation standards
       • Technical
         ◦ Substance of the actual test cases
         ◦ Criteria for successful completion of tests
    14. The Test Plan
       • Typically includes:
         ◦ Testing priorities
         ◦ Scope and limitations of testing
         ◦ Test environment and test tools
         ◦ Problem tracking and resolution
         ◦ Reporting requirements and testing deliverables
       • Captures test cases
         ◦ During requirements: system tests
         ◦ During design: "broken box" tests; unit tests
         ◦ During coding: programmer concerns
         ◦ During testing: explore problems
    15. Test Cases
       • Test Case
         ◦ An executable test with a unique set of input values and an expected result
       • Test Case Specification
         ◦ Input data values
         ◦ System environment, configuration
         ◦ Initial state of system required for test
         ◦ Expected result(s)
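
As a concrete illustration of such a specification, a test case can be captured as plain data; this is a minimal sketch, not from the deck, and the field names and the bank-withdrawal example are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One executable test: unique inputs plus the expected result."""
    name: str
    inputs: dict                  # input data values
    expected: object              # expected result(s)
    environment: dict = field(default_factory=dict)    # system configuration needed
    initial_state: dict = field(default_factory=dict)  # required starting state

# Example specification for a hypothetical withdraw() operation.
tc = TestCase(
    name="withdraw_over_balance_rejected",
    inputs={"account": "A-100", "amount": 150.00},
    initial_state={"A-100": {"balance": 100.00}},
    expected="INSUFFICIENT_FUNDS",
)
```
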
    16. Test Technology
       • Subdomains (coverage)
         ◦ A subdomain is a class of equivalent inputs
         ◦ Testing one input from a subdomain is adequate
       • Random (statistical) testing
         ◦ User profiles
    17. Subdomain Testing Method
       1. Isolate a collection of input points (the subdomain)
       2. Select a few test points from the subdomain
       3. Execute the test points
       4. If any fail, fix and repeat
       5. Conclude that the whole subdomain is OK
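
A minimal Python sketch of the five steps above, using a hypothetical `classify_triangle` unit whose specification suggests four obvious subdomains; one representative point per subdomain is the common (and, as the notes warn, fallible) choice.

```python
def classify_triangle(a, b, c):
    """Hypothetical unit under test: classify a triangle by its side lengths."""
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Step 1: isolate subdomains from the specification (inputs assumed to "act the same").
# Step 2: select one representative point per subdomain.
subdomain_points = {
    "equilateral":    ((3, 3, 3), "equilateral"),
    "isosceles":      ((3, 3, 5), "isosceles"),
    "scalene":        ((3, 4, 5), "scalene"),
    "not a triangle": ((1, 2, 9), "not a triangle"),
}

# Steps 3-4: execute the points; any failure means fix and repeat.
for name, (point, expected) in subdomain_points.items():
    actual = classify_triangle(*point)
    assert actual == expected, f"{name}: {point} -> {actual}, expected {expected}"

# Step 5 (the dubious conclusion): declare each whole subdomain "OK"
# on the strength of its single representative point.
print("all subdomain representatives passed")
```
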
    18. Subdomains/Coverage
       • All input elements "act the same"
       • Arbitrary refinement is usually possible
         ◦ Terminology: subdomain homogeneous for failure
       • Failures are not detected because:
         ◦ Subdomains are not homogeneous
         ◦ Wrong points selected
    19. Test Techniques
       • Black-box techniques
         ◦ Specification-based testing
         ◦ External view of software testing
           ▪ Inputs and outputs, hence "black-box"
       • White-box techniques
         ◦ Code-based testing
         ◦ Internal view of software testing
           ▪ Testing based on the internal structure of the system
    20. Test Approaches
       • Exhaustive testing
         ◦ Exercise every testable condition (infeasible)
       • Equivalence testing
         ◦ Small subset of testable conditions
         ◦ Tests equivalent conditions
       • Coverage testing
         ◦ Specification-based
         ◦ Common approach
       • Statistical testing
         ◦ User profiling
         ◦ Random testing
    21. Test Execution Process
       • General steps
         ◦ Understand the defined test case
         ◦ Check the initial conditions required
         ◦ Provide input test data
         ◦ Capture test results
         ◦ Evaluate test results (pass/fail)
       • Questions
         ◦ How do you know when to start testing?
         ◦ How do you know the test results are correct?
         ◦ When should you suspend testing?
    22. Functional Testing
       • Requirements-based testing
         ◦ True partition at the system level
         ◦ Specification determines pass/fail
       • Code- (or design-) level functions
         ◦ Overlapping coverage at the system level
         ◦ Results hard to observe except at the unit level
    23. Functional Coverage
       • Mode scenarios ("use cases")
         ◦ Test sequences
       • Normal cases
         ◦ Natural subdomain (e.g., printer functions)
         ◦ Ranges, random selection
       • Exception cases
         ◦ Often fall out of the natural subdomain
       • Fault-based tests
    24. Structural Testing
       • Exercise program elements
       • Example
         ◦ Statement testing
       • Boundary testing, a specific method
         ◦ Concept: errors congregate at the boundaries
           ▪ Upper/lower limits of input value ranges
           ▪ Positive and negative tests needed
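
A small Python sketch of boundary (special-values) testing against a hypothetical fixed-capacity buffer; the design boundary is the point where the data structure is just filled or the limit just exceeded.

```python
class BoundedBuffer:
    """Hypothetical unit under test: a buffer with a hard capacity limit."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("buffer full")
        self.items.append(item)

def test_boundaries():
    buf = BoundedBuffer(capacity=3)
    # Positive tests: filling the buffer exactly to its design boundary must succeed.
    for i in range(3):
        buf.push(i)
    assert len(buf.items) == 3
    # Negative test: one push past the boundary must be rejected.
    try:
        buf.push(99)
        assert False, "push beyond capacity should have raised"
    except OverflowError:
        pass

test_boundaries()
```
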
    25. Measure Testing Adequacy
       • Use a structural criterion to judge the effectiveness of a functional criterion
       • Procedure:
         ◦ Devise a functional test (from specifications)
         ◦ Measure the structural coverage of the functional test
         ◦ If deficient, find the uncovered function, add to the test, and repeat
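
One way to carry out this procedure in Python, assuming the third-party coverage.py package is installed; `run_functional_tests()` and the 75% threshold are placeholders, not part of the slides.

```python
import coverage  # third-party package "coverage" (coverage.py); an assumption

def run_functional_tests():
    # Placeholder: execute the specification-derived (black-box) test cases here.
    ...

cov = coverage.Coverage()
cov.start()
run_functional_tests()      # step 1: run the functional tests
cov.stop()
cov.save()
percent = cov.report()      # step 2: measure structural (statement) coverage
if percent < 75.0:          # step 3: if deficient, study the uncovered code,
    print("coverage deficient: find the uncovered function, add tests, repeat")
```
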
    26. Unit Testing
       • Usually structural or fault-based
       • Unit specification too often deficient
       • Done haphazardly by programmers
       • Coverage tools very helpful
         ◦ "Automatic" ideal
         ◦ Let it "run all night"
    27. Unit Test Specification
       • Unit inspection (static testing)
         ◦ Not often practiced
         ◦ Compare specification to implementation
         ◦ Find defects
       • Unit test cases (dynamic testing)
         ◦ Check initial conditions for each test case
         ◦ Input data for each test case
         ◦ Capture test results for each test case
         ◦ Evaluate pass/fail for each test case
    28. Mechanism for Unit Test
       • Test harness
       • Stubs
       • Comparators
       • Test scripts ("automated" testing)
         ◦ Pull test points from an input file
         ◦ Send through the harness with stubs
         ◦ Compare results with an output file
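
A minimal Python sketch of these mechanisms; the `quote()` unit, the price-service stub, and the JSON input/output files are hypothetical. Note how the stub's canned answer flattens the behavior being exercised, which is one reason the notes call stubs a bad idea.

```python
import json

# Hypothetical unit under test: looks up a price via an external service.
def quote(item, price_service):
    return round(price_service.lookup(item) * 1.2, 2)

class PriceServiceStub:
    """Stub standing in for the real service during unit test."""
    def lookup(self, item):
        return 10.0  # canned answer for every item

def run_script(points_path, expected_path):
    """Script-driven harness: test points in, comparator against expected results."""
    # Pull test points from an input file ...
    with open(points_path) as f:
        points = json.load(f)        # e.g. ["widget", "gadget"]
    with open(expected_path) as f:
        expected = json.load(f)      # e.g. [12.0, 12.0]
    # ... send them through the harness with the stub ...
    results = [quote(item, PriceServiceStub()) for item in points]
    # ... and compare captured results with the expected output file.
    return [(p, r, e) for p, r, e in zip(points, results, expected) if r != e]
```
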
    29. Conclusions
       • Testing finds failures
         ◦ That's what it's supposed to do
       • Test planning helps ensure successful testing
       • There are many approaches to testing
         ◦ No single approach is adequate
         ◦ Different approaches are warranted at different phases in the testing process
