COM3082C2 - Systems Development slide 6-1- 1




  1. Software Testing (Dr Z He, University of Ulster)
  2. Lecture 6: Software Testing
     - Objectives
       - Issues of software testing
         - software testing, software testing principles, need for testing
       - Verification and validation
       - Testing stages
         - unit, integration, system, acceptance
       - Test planning
         - test plan
       - Techniques for testing
         - static: reading, walkthroughs and inspections, correctness proofs, stepwise abstraction
         - dynamic: functional, structural
       - Testing coverage
       - Further testing stages
  3. Software Testing
     - Once source code has been generated, software must be tested to uncover (and correct) as many errors as possible before delivery to the customer.
     - The goal is to design a series of test cases that have a high likelihood of finding errors.
     - Software testing techniques provide systematic guidance for designing tests that
       - exercise the internal logic of software components, and
       - exercise the input and output domains of the program to uncover errors in program function, behaviour and performance.
     - Testing is a process of executing a program with the intent of finding an error.
     - A good test case is one that has a high probability of finding an as-yet undiscovered error.
     - A successful test is one that uncovers an undiscovered error.
     - Testing cannot show the absence of errors and defects; it can only show that errors and defects are present.
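As an illustrative sketch of a "good test case" in the sense above (the function and its bug are hypothetical, not from the slides), a boundary value has a far higher probability of exposing an off-by-one defect than an interior value:

```python
def in_range(x, low, high):
    """Intended specification: True iff low <= x <= high (inclusive)."""
    return low < x <= high  # bug: '<' should be '<=' on the lower bound

# An interior value exercises the code but misses the defect:
print(in_range(5, 1, 10))   # True, as the specification requires
# A boundary value has a high probability of exposing it:
print(in_range(1, 1, 10))   # False, though the specification requires True
```

The second call is the "successful" test: it uncovers a previously undiscovered error, while the first, though it passes, tells us nothing.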
  4. Software Testing Principles
     - All tests should be traceable to customer requirements.
     - Tests should be planned before testing begins. Test planning can begin as soon as the requirements model is complete; detailed definition of test cases can begin as soon as the design model has been solidified.
     - The Pareto principle applies to software testing: 80% of all errors uncovered during testing are likely to be traceable to 20% of all program components.
     - Testing should begin "in the small" (individual components) and progress toward testing "in the large" (ultimately the entire system).
     - Exhaustive testing is not possible.
     - To be most effective, testing should be conducted by an independent third party.
  5. Need for Testing
     - 30-85 errors per 1000 lines of code (Boehm 1981).
     - After testing, 0.5-3 errors per 1000 lines of code (Myers 1986).
     - Errors are expensive.
     - The later errors are found, the more expensive they are to fix.
     [Figure: cost to fix an error vs. phase, rising from Requirements through to Operation]
  6. Need for Testing
     - Testing shows the presence of errors but cannot prove their absence.
     - In most cases it is impossible to test exhaustively.
     - E.g. the loop
         for i from 1 to 100 do
           print (if a[i] = true then 1 else 0 endif);
       has 2^100 different outcomes - approx. 3x10^14 years to test exhaustively, given a machine that can execute 10 million print instructions per second.
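The arithmetic behind the slide's estimate is easy to check directly. The exact figure depends on the accounting (the sketch below optimistically charges only one print instruction per outcome, which is the most generous assumption); the slide quotes roughly 3x10^14 years, and however the assumptions are varied the conclusion stands: exhaustive testing of even this tiny loop is astronomically infeasible.

```python
outcomes = 2 ** 100              # each of the 100 array cells is true or false
prints_per_second = 10_000_000   # machine speed assumed by the slide
seconds_per_year = 365 * 24 * 3600

# Optimistically charge a single print instruction per outcome:
years = outcomes / prints_per_second / seconds_per_year
print(f"{outcomes:.3e} outcomes, about {years:.1e} years")
```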
  7. Verification and Validation
     - Verification: 'Are we building the product right?'
     - Validation: 'Are we building the right product?'
     - 1. At the requirements analysis stage
       - cheapest stage at which to correct errors
       - Test criteria:
         - completeness - are all the requirements gathered?
         - consistency - no contradictions within the system or with external components
         - feasibility - benefits should outweigh costs
         - testability - requirements should be unambiguous, otherwise they cannot be tested
  8. Verification and Validation
     - 2. At the design stage
       - criteria as for the requirements stage
       - elements from the requirements should be traceable to the design
       - simulation, walkthroughs and inspections to test quality
  9. Verification and Validation
     - 3. At the implementation stage
       - programs can be checked by reading through them (by someone other than the author)
       - stepwise abstraction - determine the function of the code from a number of its steps
       - tools are available to support the testing of code:
         - static analysis - test the code without running it, e.g. have all the variables been declared?
         - dynamic analysis - test by executing the code
  10. Testing Stages
     - Unit testing
       - testing of individual components
     - Integration testing
       - testing collections of modules integrated into sub-systems
     - System testing
       - testing the complete system prior to delivery
     - Acceptance testing
       - testing by users to check that the system satisfies requirements; sometimes called alpha testing
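A minimal sketch of the first stage, unit testing, using Python's standard `unittest` module. The component under test (`discount`) is hypothetical, invented here for illustration; the point is that a unit test exercises one component in isolation, including its boundaries and error paths:

```python
import unittest

def discount(price, rate):
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= rate <= 100:
        raise ValueError("rate must be between 0 and 100")
    return price * (100 - rate) / 100

class DiscountUnitTest(unittest.TestCase):
    """Unit testing: the component is exercised in isolation."""

    def test_typical_value(self):
        self.assertEqual(discount(200, 25), 150.0)

    def test_boundaries(self):
        self.assertEqual(discount(80, 0), 80.0)
        self.assertEqual(discount(80, 100), 0.0)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            discount(80, 101)

if __name__ == "__main__":
    unittest.main(exit=False)
```

At the later stages the same discipline widens in scope: integration tests would combine `discount` with the modules that call it, and system tests would exercise the delivered application end to end.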
  11. Testing Stages
     [Figure: testing pipeline - unit testing -> sub-system testing -> system testing -> acceptance testing]
  12. Test Planning
     - Test plan contents:
       - the testing process
       - requirements traceability
       - tested items
       - testing schedule
       - test recording procedures
       - hardware and software requirements
       - constraints
  13. Techniques for Testing - Static
     - 1. Reading, walkthroughs and inspections
     - All involve someone other than the author looking at the code.
     - Look for:
       - inappropriate use of data, e.g. uninitialised variables, array indices out of bounds, dangling pointers
       - errors in declarations, e.g. undeclared variables, repeated names
       - faults in computations, e.g. division by zero, overflow, type mismatches, erroneous operator order (*, /, +, - etc.)
       - faults in logical operators, e.g. < instead of >
       - faults in control, e.g. infinite loops, a loop executing the wrong number of times
       - faults in interfaces, e.g. wrong number of parameters
     - Many of these are detected by the compiler.
     - Can be applied to documents.
     - Can be applied to all stages of the life cycle.
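Two of the fault classes above, made concrete in a hypothetical snippet of the kind a reviewer might scrutinise during a walkthrough (the function and its names are illustrative, not from the slides):

```python
def mean_of_positives(values):
    """Return the mean of the strictly positive elements, or 0.0 if none."""
    total = 0
    count = 0
    for v in values:
        if v > 0:          # fault class 'logical operators': writing '>=' here
            total += v     # would wrongly admit zeros into the mean
            count += 1
    if count == 0:
        return 0.0         # guards the 'division by zero' fault class
    return total / count

print(mean_of_positives([2, -1, 4, 0]))   # 3.0
print(mean_of_positives([-5, 0]))         # 0.0
```

A reader tracing this code would check exactly the items on the slide's list: is `total` initialised, is the comparison operator the intended one, can the divisor be zero, does the loop visit each element exactly once.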
  14. Techniques for Testing - Static
     - 2. Correctness proofs
       - Does a program meet its specification?
       - If the specification is expressed formally, the program can be proved correct against it.
       - Difficult to use.
       - Cannot prove every aspect.
       - Validation can only be done by testing.
     - 3. Stepwise abstraction
       - a bottom-up process
       - start with the code and derive its functions
       - check these against the requirements
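A small sketch of stepwise abstraction (the code and the derived specification are hypothetical): working bottom-up, the reader summarises each fragment by the function it computes, until the whole body is captured by one statement that can be checked against the requirements.

```python
def mystery(xs):
    r = 0
    for x in xs:
        if x % 2 == 0:   # fragment 1: select the even elements
            r = r + x    # fragment 2: accumulate the selection into r
    return r             # derived abstraction: 'returns the sum of the
                         # even elements of xs'

# The derived abstraction is then checked against the requirement:
expected = sum(x for x in [1, 2, 3, 4] if x % 2 == 0)
print(mystery([1, 2, 3, 4]) == expected)   # True
```

Note the direction of travel: unlike a correctness proof, which starts from a formal specification and works down, stepwise abstraction starts from the code and works up towards a specification.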