Coward-p421.ppt
Transcript

  • 1. A Review of Software Testing - P. David Coward. Reprinted: Information and Software Technology, Vol. 30, No. 3, April 1988; Software Engineering: The Development Process, Vol. 1, Chapter 7. Presented by: Andrew Diemer, Software Engineering II - EEL 6883
  • 2. Aim of paper
    - There is no guarantee that software meets its functional requirements.
    - The paper introduces software testing techniques.
  • 3. Needs
    - Software is expected to be correct. What does this mean? It often means that the program matches its specification.
    - Problem: the specification itself could be wrong.
  • 4. Needs
    - If the specification is wrong, correctness is instead measured by whether the software meets the user's requirements.
  • 5. Needs
    - Why test? The tests run so far may not have been adequate.
    - Testing assesses how well the software performs its tasks.
  • 6. Terminology
    - Verification vs. validation.
    - Verification: ensures correctness from phase to phase of the software life-cycle process; includes formal proofs of correctness.
  • 7. Terminology
    - Validation: checks the software against its requirements by executing it with test data.
    - The author uses the terms "testing" and "checking" instead of verification and validation.
  • 8. Categories of Testing
    - Two categories of testing: functional and non-functional.
  • 9. Functional Testing
    - Functional testing checks whether the program produces the correct output.
    - It is normally used when testing a new or modified program.
  • 10. Functional Testing
    - Regression testing: testing performed after a modification.
    - It checks whether the functions that should be unchanged have in fact changed.
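The regression idea on this slide can be sketched in a few lines: record baseline outputs from the original version, then re-run the same inputs after a modification and compare. The `discount` function and its inputs are hypothetical examples, not from the paper.

```python
def discount(total):          # original version
    return total * 0.9 if total >= 100 else total

# Step 1: capture baseline results for a fixed suite of inputs.
suite = [50, 100, 250]
baseline = {x: discount(x) for x in suite}

def discount_v2(total):       # modified version (adds a new tier)
    if total >= 500:
        return total * 0.8
    return total * 0.9 if total >= 100 else total

# Step 2: after the change, check that untouched behaviour is unchanged.
regressions = {x for x in suite if discount_v2(x) != baseline[x]}
```

Here `regressions` is empty because the new tier does not disturb the old inputs; any non-empty result would flag a function that was supposed to be unchanged but changed.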
  • 11. Non-functional Requirements
    - Style
    - Documentation standards
    - Response times
    - Legal obligations
  • 12. Situation testing
    - Two situations testing can fall under: testing which finds faults in the software, and testing which does NOT find faults in the software.
  • 13. Situation testing
    - Finding faults: a destructive, more probing process.
    - Not finding faults: testing that is too gentle may miss inherent faults.
  • 14. Questions
    - How much testing is needed?
    - How much confidence does the testing give?
    - Can some faults be ignored? Which ones are important?
    - Are there more faults?
    - What is the purpose of this testing?
  • 15. Testing Strategies
    - Functional vs. structural.
    - Static vs. dynamic analysis.
  • 16. Strategy starting points
    - Specification: it makes the required functions known; testing assesses whether they are provided. This is functional testing.
  • 17. Strategy starting points
    - Software: tests the structure of the system; this is structural testing.
    - It can reveal functions that are included in the system but NOT required, e.g. accessing a database when the user has not asked for it.
  • 18. Functional testing
    - Identify the functions the software is expected to perform.
    - Create test data that checks whether these functions are performed by the software.
    - How the program performs these functions does NOT matter.
  • 19. Functional testing
    - Rules may be applied to uncover the functions.
    - Functional testing methods use formal documentation that describes the design features and the classes of fault that correlate with each part of the design.
  • 20. Functional testing
    - The particular properties of each function should be isolated.
    - Fault-class associations.
    - A black-box approach: testers understand what the output should be.
  • 21. Functional testing
    - Oracle: an expert on what the outcome of the program will be for a particular test.
    - When might the oracle approach not work? In simulation testing, which only provides a "range of values".
  • 22. Structural testing
    - Testing is based on the detailed design rather than on the functions required of the program.
  • 23. Structural testing
    - Two approaches: the first, and most common, is to execute the program with test cases; the second is symbolic execution, in which the functions computed by the program are compared with the required functions for congruency.
  • 24. Structural testing
    - May require single-path or percentage coverage.
    - Research has been conducted into the minimum amount of testing needed to ensure a given degree of reliability.
  • 25. Structural testing
    - Measures of reliability:
      - All statements should be executed at least once.
      - All branches should be executed at least once.
      - All linear code sequences and jumps (LCSAJs) in the program should be executed at least once.
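The difference between the first two coverage measures can be shown with a tiny example: for an `if` with no `else`, one test can execute every statement yet exercise only one of the two branch outcomes. The function and inputs are illustrative, not from the paper.

```python
def absval(x, branches):
    branches.add("x<0: " + str(x < 0))   # log this decision's outcome
    if x < 0:
        x = -x                            # the branch body's only statement
    return x

branches = set()
result = absval(-3, branches)             # executes every statement of absval
stmt_covered_outcomes = set(branches)     # ...yet records only one outcome
absval(4, branches)                       # the false outcome completes branch coverage
```

Statement coverage is satisfied by the single call `absval(-3, ...)`; branch coverage additionally demands an input that makes the condition false.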
  • 26. Structural testing
    - Measures of reliability (cont.): the strongest approach is exhaustive testing, in which every path is tested.
  • 27. Structural testing
    - Problems with the exhaustive approach:
      - An enormous number of paths.
      - Combinations of multiple conditions multiply the number of cases.
      - Infeasible paths: contradictions between the predicates at conditional statements.
  • 28. Structural testing
    - Path issues: a loop gives rise to separate paths for executing zero times, once, and many times; control loops therefore determine the number of paths.
  • 29. Structural testing
    - Path issues: "level-i" paths and island code.
    - Island code: a series of lines of code, following a program termination, that is not the destination of a transfer of control from anywhere else in the program.
  • 30. Structural testing
    - When does island code occur? Typically when redundant code is not deleted after maintenance.
  • 31. Static analysis
    - Does NOT involve executing the software with data; instead, mathematical constraints on the input and output data sets are applied to software components.
    - Examples of static analysis: program proving and symbolic execution.
  • 32. Static analysis
    - Symbolic execution: use symbolic values for variables instead of numeric or string values.
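A deliberately tiny sketch of the symbolic idea: instead of running a max-of-two program on numbers, propagate symbolic names (plain strings here) and collect one path condition and one result expression per path. Real symbolic executors build these by traversing the program; this hand-written version only illustrates the shape of the output.

```python
def symbolic_max(x_sym, y_sym):
    """Return (path_condition, result_expression) for each path through
    the program `if x > y: return x else: return y`, executed symbolically."""
    paths = []
    # Path 1: the true branch of `if x > y`
    paths.append((f"{x_sym} > {y_sym}", x_sym))
    # Path 2: the false branch
    paths.append((f"not ({x_sym} > {y_sym})", y_sym))
    return paths

paths = symbolic_max("x", "y")
```

The result is one expression per output, each guarded by the predicate that selects its path, which is exactly the "set of expressions, one per output variable" described on slide 39.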
  • 33. Dynamic analysis
    - Relies on program statements that call analysis routines, which record the frequency of execution of elements of the program.
  • 34. Dynamic analysis
    - Acts as a bridge between functional and structural testing.
    - Test cases are monitored dynamically, then examined structurally to see what code is left unexercised by the previous tests.
    - Shows the functions the program should perform.
  • 35. Classification of Techniques: there are three classifications.
    - Static-structural: symbolic execution, partition analysis, program proving, anomaly analysis.
  • 36. Classification of Techniques
    - Dynamic-functional: domain testing, random testing, adaptive perturbation, cause-effect graphing.
  • 37. Classification of Techniques
    - Dynamic-structural: domain and computational testing, automatic test data generation, mutation analysis.
  • 39. Symbolic execution
    - A non-traditional approach: traditionally, execution requires that a selection of paths through the program be exercised by a set of test cases.
    - Symbolic execution instead produces a set of expressions, one per output variable.
  • 40. Symbolic execution
    - Usually a program executes on actual input data values, and the output is a set of actual values.
    - Flow-graphs are used: a directed graph whose decision points give rise to branches, and branch predicates are produced.
  • 41. Symbolic execution
    - A top-down traversal is used; during it, each input variable is given a symbol in place of an actual value.
    - A problem lies in iteration: as mentioned before, a loop may execute zero times, once, or many times.
  • 42. Partition analysis
    - Uses symbolic execution to find subdomains of the input domain; path conditions are used to find them.
    - Inputs that cannot be allocated to a subdomain indicate a fault.
  • 43. Partition analysis
    - Specifications need to be written at a higher level.
  • 44. Program proving
    - Mathematical assertions are placed at the beginning and end of each procedure.
    - Similar to symbolic execution: neither executes actual data, and both examine the source code.
  • 45. Program proving
    - Tries to construct a proof that covers all possible iterations.
    - Program proving steps: construct the program; examine it and insert mathematical assertions at the beginning and end of procedures.
  • 46. Program proving
    - Steps (cont.): determine whether the code between each pair of start and end assertions will achieve the end assertion given the start assertion; if it does, the block has been proven.
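The placement of the start and end assertions can be sketched with runtime `assert` statements around an integer-division routine. Checking assertions at run time is weaker than the proof the slides describe, but it shows where the assertions sit and what they claim; the function is an illustrative example, not from the paper.

```python
def int_divide(a, b):
    # Start assertion (precondition): inputs lie in the assumed domain.
    assert a >= 0 and b > 0
    q, r = 0, a
    while r >= b:             # r strictly decreases, so the loop terminates
        r -= b
        q += 1
    # End assertion (postcondition): the defining property of
    # quotient and remainder.
    assert a == q * b + r and 0 <= r < b
    return q, r
```

A proof would show, once and for all, that the end assertion follows from the start assertion for the code between them; total correctness would additionally require the termination argument noted in the loop comment.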
  • 47. Program proving
    - DeMillo argues that proofs can only be declared acceptable, not correct.
    - Acceptance is determined by a gathering of people who cannot find fault with the proof.
  • 48. Program proving
    - The larger the audience, the more confidence in the software.
    - Total correctness additionally requires that loops terminate.
  • 49. Anomaly analysis
    - Two levels of anomaly: those caught by the compiler (language syntax), and constructs that are not necessarily wrong in the programming language but are suspicious.
  • 50. Anomaly analysis
    - Some anomalies: unexecutable code; array-boundary violations; wrongly initialized variables; unused labels and variables; jumps into and out of loops.
  • 51. Anomaly analysis
    - Produce flow-graphs and determine infeasible paths.
    - Some techniques use data-flow analysis, which follows input values as they become intermediate values and, finally, output values.
  • 52. Anomaly analysis
    - Some data-flow anomalies: assigning values to variables that are never used; using variables that have not previously been assigned a value; re-assigning a variable without making use of its previous value.
    - These indicate possible faults.
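The first anomaly on this slide can be detected mechanically. The sketch below uses Python's `ast` module to flag variables that are assigned but never read in a code fragment. A real analyser tracks definitions and uses along paths; this one merely counts name occurrences, as a deliberately small illustration.

```python
import ast

def assigned_but_unused(source):
    """Return names that appear as assignment targets but are never read."""
    tree = ast.parse(source)
    stored, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            (stored if isinstance(node.ctx, ast.Store) else loaded).add(node.id)
    return stored - loaded

snippet = "a = 1\nb = 2\nprint(a)\n"
```

Running `assigned_but_unused(snippet)` flags `b`: it is assigned but never used, which indicates a possible fault (dead code, or a typo in a later use).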
  • 53. Domain testing
    - Based upon informal classifications of the requirements.
    - Test cases are executed and the results compared against the expected results to determine whether faults have been detected.
  • 54. Random testing
    - Produces test data without reference to the code or the specification; random number generation is used.
    - The main problem is that there is no guarantee of complete coverage.
  • 55. Random testing
    - The key is to examine small subsets of the input domain.
    - Done well, this gives a high branch-coverage success rate.
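A minimal random-testing sketch: generate inputs at random and record which branch outcomes of a small function are exercised. The function and input range are illustrative; the point is that coverage comes from luck and volume, not from a guarantee.

```python
import random

def classify(n):
    return "even" if n % 2 == 0 else "odd"

random.seed(0)                      # fixed seed for repeatability
covered = set()
for _ in range(100):
    covered.add(classify(random.randint(-1000, 1000)))
```

Both branches are almost certain to be hit here, but a rare case (say, an exact boundary value that triggers special handling) could easily never be generated, which is the coverage problem slide 54 raises.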
  • 56. Adaptive perturbation testing
    - Concerned with assessing the effectiveness of sets of test cases.
    - Used to generate further, more effective test cases.
  • 57. Adaptive perturbation testing
    - Optimization routines find the best value to replace a discarded value so that the number of violated assertions is maximized.
    - The process is repeated until the number of violated assertions reaches its limit.
  • 58. Cause-effect graphing
    - Its power comes from combining inputs with logical operators (AND, OR, NOT, etc.).
  • 59. Cause-effect graphing: five steps.
    - Divide the specification into workable pieces.
    - Identify causes and effects.
    - Build a graph linking causes to effects (the semantics).
  • 60. Cause-effect graphing
    - Five steps (cont.): annotate the graph to show impossible combinations of causes and effects; convert the graph into a limited-entry decision table.
    - It helps identify small sets of test cases.
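The final step above, converting the graph into a limited-entry decision table, can be sketched by enumerating every combination of boolean causes. The rule `effect = (A AND B) OR C` is a hypothetical example, not taken from the paper.

```python
from itertools import product

causes = ("A", "B", "C")
table = []
for a, b, c in product([False, True], repeat=3):
    effect = (a and b) or c          # the cause-effect logic for this rule
    table.append({"A": a, "B": b, "C": c, "effect": effect})
```

Each row is a candidate test case; in practice the annotated graph prunes impossible combinations, and further reduction rules shrink the table to a small set of tests.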
  • 61. Domain and computational testing
    - Based upon selecting test cases.
    - Assignment statements determine the computation along a path.
  • 62. Domain and computational testing
    - Paths considered:
      - Path computation: a set of algebraic expressions, one for each output variable, in terms of the input variables and constants.
      - Path condition: the conjunction of the constraints along a path.
  • 63. Domain and computational testing
    - Paths considered (cont.):
      - Path domain: the set of input values that satisfy the path condition.
      - Empty path domain: the path is infeasible and cannot execute.
    - An error in the computation performed along a path is a computation error.
  • 64. Automatic test data generation
    - Coverage metrics guide the generation of test data.
    - Paths with contradictory predicates are infeasible.
    - A detailed specification is needed to achieve this kind of testing; formal specifications may provide fundamental help.
  • 65. Mutation analysis
    - Concerns the quality of sets of test data.
    - Uses the program to test the test data: the original program and a mutant program are run on the same test data.
  • 66. Mutation analysis
    - The two outputs are compared.
    - If the mutant's output differs from the original's, the mutant has been detected and is of no further value.
    - If the two outputs are the same, the problem is that a change has gone undetected.
  • 67. Mutation analysis
    - Such a mutant is said to be live.
    - The ratio of dead to live mutants is then computed.
    - A high proportion of live mutants means poor test data; if this happens, more tests must be run until the ratio improves.
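The whole mutation loop fits in a short sketch: derive mutants from the original source by swapping one operator, run original and mutant on the same test data, and see which mutants survive. The function, mutations, and test data are hypothetical examples.

```python
ORIGINAL = "def f(x, y):\n    return x + y\n"
MUTATIONS = [("+", "-"), ("+", "*")]     # one operator swap per mutant

def run(source, cases):
    ns = {}
    exec(source, ns)                     # compile and load the program text
    return [ns["f"](x, y) for x, y in cases]

cases = [(2, 2)]                         # a deliberately weak test set
expected = run(ORIGINAL, cases)
live = []
for old, new in MUTATIONS:
    mutant = ORIGINAL.replace(old, new, 1)
    if run(mutant, cases) == expected:   # same output: the change went undetected
        live.append((old, new))

# A live mutant signals weak data: (2, 2) cannot tell + from *.
# Adding (3, 1) kills it.
stronger = cases + [(3, 1)]
still_live = [m for m in live
              if run(ORIGINAL.replace(m[0], m[1], 1), stronger)
              == run(ORIGINAL, stronger)]
```

With the single case `(2, 2)` the `*` mutant survives (a high live ratio, hence poor test data); extending the set kills it, exactly the improvement loop slide 67 describes.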
  • 68. Conclusion
    - I thought this paper was thorough.
    - It gave good, well-compartmentalized examples.
    - I thought the paper was a little out of date.
