    Coward-p421.ppt Presentation Transcript

    • A Review of Software Testing - P David Coward Reprinted: Information and Software Technology; Vol. 30, No. 3 April 1988 Software Engineering: The Development Process, Vol 1, Chapter 7 Presented By: Andrew Diemer Software Engineering II – EEL 6883
    • Aim of paper
      • No guarantee that software meets functional requirements
      • Introduces software testing techniques
    • Needs
      • Software is to be correct
          • What does this mean?
          • It often means the program matches the specifications.
      • Problem with specification
        • Specification could be wrong
    • Needs
      • If this happens, correctness is measured by whether the software meets the user requirements
    • Needs
      • Testing
        • Why test?
          • Previous tests may not have been adequate
      • Assess how well the software performs its tasks
    • Terminology
      • Verification –vs- Validation
      • Verification
        • Ensures correctness from phase to phase of the software life cycle process
        • Formal proofs of correctness
    • Terminology
      • Validation
        • Checks software against requirements
          • Executes software with test data
      • Author uses testing and checking instead of verification and validation
    • Categories of Testing
      • Two categories of testing:
        • Functional
        • Non-functional
    • Functional Testing
      • Functional
        • Checks whether the program produces the correct output
          • Normally used when testing a new or modified program
    • Functional Testing
      • Regression Testing
        • Tests following modification
        • Checks that functions which should not have changed have in fact not changed
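
Regression testing is only named on the slides above; as a minimal, hypothetical sketch (the function and suite below are my own illustration, not from the paper), a small unittest module captures existing behaviour so it can be re-run after every modification:

```python
import unittest

def discount(price: float, rate: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1.0 - rate), 2)

class RegressionSuite(unittest.TestCase):
    # These cases capture behaviour that must NOT change after maintenance.
    def test_existing_behaviour(self):
        self.assertEqual(discount(100.0, 0.10), 90.0)
        self.assertEqual(discount(80.0, 0.25), 60.0)

    def test_no_discount(self):
        self.assertEqual(discount(50.0, 0.0), 50.0)

if __name__ == "__main__":
    unittest.main()   # re-run this suite after every modification
```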
    • Non-functional Requirements
      • Style
      • Documentation standards
      • Response times
      • Legal obligations
    • Situation testing
      • Two situations testing can fall under:
        • Testing which finds faults in the software
        • Testing which does NOT find faults in the software
    • Situation testing
      • Finding faults
        • Destructive process
          • more probing
      • Not finding faults
        • miss inherent faults
          • too gentle
    • Questions
      • How much testing is needed?
      • Confidence in testing?
      • Ignore faults?
        • Which ones are important?
      • Are there more faults?
      • What is the purpose of this testing?
    • Testing Strategies
      • Functional -vs- Structural
      • Static -vs- Dynamic analysis
    • Strategy starting points
      • Specification
        • It makes known the required functions
        • Assess whether they are provided
        • Functional testing
    • Strategy starting points
      • Software
        • Tests the structure of the system
        • Structural testing
        • Can reveal functions that are included in the system but are NOT required
        • Example: accessing a database when the user has not asked for it
    • Functional testing
      • Identify the functions which the software is expected to perform
      • Creating test data that will check to see if these functions are performed by the software
      • Does NOT matter how the program performs these functions
    • Functional testing
      • Rules may be applied to uncover the functions
      • Some functional testing methods work from formal documentation that describes each part of the design, its features, and the fault classes associated with it
    • Functional testing
      • Isolation of these particular properties of each function should take place
      • Fault class associations
      • Black box approach
      • Testers have an understanding of what the output should be
    • Functional testing
      • Oracle
        • An expert on what the outcome of a program will be for a particular test
      • When might the oracle approach not work?
        • Simulation testing
          • Only provides a “range of values”
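
In this sense an oracle is anything that can pronounce on the correct output for a given input. A minimal sketch (my own illustration, not the paper's): compare the program under test against a trusted reference implementation acting as the oracle.

```python
import math

def my_sqrt(x: float) -> float:
    """Program under test: Newton's method square root (hypothetical)."""
    guess = x if x > 1 else 1.0
    for _ in range(50):
        guess = 0.5 * (guess + x / guess)
    return guess

def oracle(x: float) -> float:
    """Trusted reference implementation used as the oracle."""
    return math.sqrt(x)

for value in [0.25, 1.0, 2.0, 9.0, 1e6]:
    expected = oracle(value)
    actual = my_sqrt(value)
    assert abs(actual - expected) < 1e-6, (value, actual, expected)
print("all oracle checks passed")
```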
    • Structural testing
      • Testing is based on the detailed design rather than the functions required by the program
    • Structural testing
      • Two approaches for this testing
        • First and most common is to execute the program with test cases
        • Second is symbolic execution
          • Functions of the program are compared to the required functions for congruency
    • Structural testing
      • May require testing a single path or a given percentage of paths
      • Research has been conducted to find out what the minimum amount of testing would be to ensure a degree of reliability
    • Structural testing
      • Measure of reliability
        • All statements should be executed at least once
        • All branches should be executed at least once
        • All linear code sequences and jumps (LCSAJs) in the program should be executed at least once
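
For a concrete (hypothetical) reading of the first two measures, the small function below needs three inputs before every statement and every branch outcome has been executed at least once:

```python
def classify(x: int) -> str:
    # Decision 1: x < 0 can be true or false.
    if x < 0:
        return "negative"
    # Decision 2: x == 0 can be true or false.
    if x == 0:
        return "zero"
    return "positive"

# Statement coverage needs inputs reaching each return: -1, 0, 1.
# Branch coverage additionally requires both outcomes of every decision;
# the same three inputs happen to achieve it here:
#   x < 0  : true for -1, false for 0 and 1
#   x == 0 : true for 0,  false for 1
for case, expected in [(-1, "negative"), (0, "zero"), (1, "positive")]:
    assert classify(case) == expected
```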
    • Structural testing
      • Measure of reliability (cont.)
        • Best approach would be the exhaustive approach in which every path is tested
    • Structural testing
      • Problems with the exhaustive approach
        • An extremely large number of paths
        • Multiple conditions give rise to multiple combinations of paths
        • Infeasible paths
          • Contradictions of predicates at conditional statements
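
A short (hypothetical) example of an infeasible path: the path that takes the true branch of both conditions below requires x > 10 and x < 5 simultaneously, a contradiction, so no test data can ever exercise it.

```python
def infeasible_example(x: int) -> int:
    result = 0
    if x > 10:          # predicate A
        result += 1
    if x < 5:           # predicate B
        result += 2
    # Four syntactic paths exist (A / not A combined with B / not B),
    # but the path where both A and B are true is infeasible:
    # x > 10 and x < 5 cannot both hold, so result can never be 3.
    return result

assert all(infeasible_example(x) != 3 for x in range(-100, 100))
```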
    • Structural testing
      • Path issues
        • There is a path for a loop not executing, executing once, and executing multiple times
        • Loop controls determine the number of paths
    • Structural testing
      • Path issues
        • Known as the “level-i” path or island code
        • Island code
          • A series of lines of code, following a program termination, which is not the destination of a transfer of control from anywhere else in the program
    • Structural testing
      • Path issues
        • When does island code occur?
          • When failing to delete redundant code after maintenance
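
A tiny (hypothetical) example of island code: the statements after the unconditional return can never execute and are not the target of any jump, typically the residue of an incomplete edit during maintenance.

```python
def compute_total(prices):
    total = sum(prices)
    return total
    # Island code: unreachable after the return above and never jumped to.
    # It was probably left behind when a discount feature was removed.
    discount = total * 0.1
    return total - discount
```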
    • Static analysis
      • Does NOT involve executing the software with data; instead it applies mathematical reasoning to the constraints between the input and output data sets of software components
      • Examples of static analysis would be program proving and symbolic execution
    • Static analysis
      • Symbolic execution
        • Use symbolic values for variables instead of numeric or string values
    • Dynamic analysis
      • Relies on program statements inserted to call analysis routines
      • These routines record how often each element of the program is executed
    • Dynamic analysis
      • Acts as a bridge between functional and structural testing
      • Test cases are monitored dynamically, then examined structurally to see which code previous tests left unexercised
      • Shows which functions the program should perform
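
The "calls to analysis routines" can be pictured as simple counters inserted into the program. A minimal sketch (the probe/hits names are my own, not the paper's) of an execution-frequency profile:

```python
from collections import Counter

hits = Counter()          # the analysis routine's data store

def probe(label: str) -> None:
    """Analysis routine: record that a program element was executed."""
    hits[label] += 1

def grade(score: int) -> str:
    probe("grade:entry")
    if score >= 50:
        probe("grade:pass-branch")
        return "pass"
    probe("grade:fail-branch")
    return "fail"

for s in [10, 55, 70, 40]:
    grade(s)

print(dict(hits))
# {'grade:entry': 4, 'grade:fail-branch': 2, 'grade:pass-branch': 2}
# Elements with a count of zero are idle code the test set never reached.
```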
    • Classification of Techniques
      • There are three classifications:
      • Static – Structural
        • Symbolic execution
        • Partition analysis
        • Program proving
        • Anomaly analysis
    • Classification of Techniques
      • Dynamic - Functional
        • Domain testing
        • Random testing
        • Adaptive perturbation
        • Cause-effect graphing
    • Classification of Techniques
      • Dynamic - Structural
        • Domain and computational testing
        • Automatic test data generation
        • Mutation analysis
    • Symbolic execution
      • Non-traditional approach
        • The traditional approach requires that a selection of paths through the program be exercised by a set of test cases
      • Produces a set of expressions, one per output variable
    • Symbolic execution
      • Usually a program executes using input data values with the output resulting in a set of actual values
      • Use of flow-graphs
        • Branches occur at decision points (directed graph)
        • Branch predicates are produced
    • Symbolic execution
      • Use of top down approach
      • During the top down traversal, the input variable is given a symbol in place of an actual value
      • A problem arises with iteration
      • As mentioned before, a loop may execute zero times, once, or multiple times
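
A worked sketch of symbolic execution on a two-branch function (illustrative only): each input is given a symbol, and traversing the flow-graph top-down yields one output expression and one path condition per path.

```python
def f(a, b):
    c = a + b          # symbolically: c = A + B
    if c > 10:         # decision point in the flow-graph
        d = c * 2      # path 1
    else:
        d = c - 1      # path 2
    return d

# Symbolic execution with a = A, b = B produces one expression per output
# variable on each path:
#   Path 1: path condition  A + B > 10    output  d = 2 * (A + B)
#   Path 2: path condition  A + B <= 10   output  d = (A + B) - 1
# Loops are the difficulty: zero, one, and many iterations each add paths.
```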
    • Partition analysis
      • Uses symbolic execution to find sub domains of the input domain
      • Path conditions are used to find them
      • Inputs that cannot be allocated to a sub-domain suggest a fault
    • Partition analysis
      • Specifications need to be written at a higher level
    • Program proving
      • Mathematical assertions are placed at the beginning and end of each procedure
      • Similar to symbolic execution
      • Neither of them execute actual data and both examine source code
    • Program proving
      • Tries to come up with a proof that encompasses all possible iterations
      • Program proving steps:
        • Construct a program
        • Examine the program and insert mathematical assertions at the beginning and end of procedures
    • Program proving
      • Program proving steps (cont):
        • Determine whether the code between each pair of start and end assertions will achieve the end assertion given the start assertion
        • If the code reaches the end assertion then the block has been proven
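
The start/end assertions can be pictured as a precondition and postcondition around a block. A minimal sketch (illustrative, using runtime asserts rather than a formal proof tool):

```python
def integer_divide(dividend: int, divisor: int):
    # Start assertion (precondition).
    assert dividend >= 0 and divisor > 0

    quotient, remainder = 0, dividend
    while remainder >= divisor:          # total correctness also demands
        remainder -= divisor             # a proof that this loop terminates
        quotient += 1

    # End assertion (postcondition): the block is "proven" for this spec
    # if the code between the assertions always carries the start assertion
    # to the end assertion.
    assert dividend == quotient * divisor + remainder and 0 <= remainder < divisor
    return quotient, remainder

assert integer_divide(17, 5) == (3, 2)
```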
    • Program proving
      • DeMillo argues that proofs can only be described as acceptable, not correct
      • Acceptance is determined by a gathering of people who cannot find fault with the proof
    • Program proving
      • The larger the audience, the more confidence in the software
      • Total correctness means loops will terminate
    • Anomaly analysis
      • Two levels of anomalies:
        • those detected by the compiler (language syntax)
        • constructs that are not necessarily illegal in the programming language but are nevertheless suspect
    • Anomaly analysis
      • Some anomalies are:
        • Unexecutable code
        • Array boundary violations
        • Incorrectly initialized variables
        • Unused labels and variables
        • Jumps into and out of loops
    • Anomaly analysis
      • Produce flow-graphs
      • Determine infeasible paths
      • Some use data-flow analysis
        • Traces how input values become intermediate values and then output values
    • Anomaly analysis
      • Some data-flow anomalies:
        • Assigning values to variables that are never used
        • Using variables that have not previously been assigned a value
        • Re-assigning a variable without using its previous value
        • These indicate possible faults
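
The three anomaly patterns look like this in a short hypothetical fragment; a data-flow analyser flags them as possible faults even though the code is syntactically legal:

```python
def anomalies(order_total):
    tax = order_total * 0.2         # (1) assigned but never used afterwards
    total = order_total + shipping  # (2) 'shipping' used before being
                                    #     assigned (NameError if executed)
    total = order_total * 1.2       # (3) re-assigned without using the
                                    #     previous value of 'total'
    return total
```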
    • Domain testing
      • Based upon informal classifications of the requirements
      • Test cases are executed and the results compared against the expected output to determine whether faults have been detected
    • Random testing
      • Produces data without reference to code or specification
      • Random number generation is used
      • Main problem is there is no complete coverage guarantee
    • Random testing
      • The key is to examine small subsets of the input
      • If followed, this gives a high branch-coverage success rate
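
A minimal sketch of random testing (my own illustration): generate inputs with a random number generator, check a simple property of the result, and accept that coverage of rare branches is not guaranteed.

```python
import random

def absolute_value(x: int) -> int:
    """Program under test (hypothetical)."""
    return -x if x < 0 else x

random.seed(42)                            # reproducible run
for _ in range(1000):
    x = random.randint(-10_000, 10_000)    # random data, generated without
    result = absolute_value(x)             # reference to code or specification
    assert result >= 0 and result in (x, -x)
# Note: nothing guarantees that boundary values or rare branches were hit.
```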
    • Adaptive perturbation testing
      • Concerned with assessing the effectiveness of sets of test cases
      • Used to generate further, more effective test cases
    • Adaptive perturbation testing
      • Optimization routines find the best value to replace a discarded test value so that the number of violated assertions is maximized
      • The process is repeated until the number of violated assertions reaches its limit
    • Cause-effect graphing
      • Its power comes from combining inputs using logic (AND, OR, NOT, etc.)
    • Cause-effect graphing
      • Five steps:
        • Divide the specification into workable pieces
        • Identify causes and effects
        • Build a graph linking causes to effects
    • Cause-effect graphing
      • Five steps (cont):
        • Annotate the graph to show impossible combinations of causes and effects
        • Convert the graph into a limited-entry decision table
      • It helps identify a small set of test cases
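
A toy, hypothetical illustration of the last two steps: causes and effects expressed as Boolean logic, then enumerated into a limited-entry decision table whose columns suggest test cases.

```python
from itertools import product

# Causes (hypothetical login example)
#   c1: password is correct
#   c2: account is locked
# Effects
#   e1: grant access   = c1 AND NOT c2
#   e2: show lock page = c2

print(f"{'c1':<7}{'c2':<7}| {'grant':<7}{'lock'}")
for c1, c2 in product([True, False], repeat=2):
    grant = c1 and not c2
    lock = c2
    print(f"{str(c1):<7}{str(c2):<7}| {str(grant):<7}{str(lock)}")
# Each column of the resulting decision table suggests one small test case.
```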
    • Domain and computational testing
      • Based upon selecting test cases for particular paths
      • Assignment statements determine the computation along a path
    • Domain and computational testing
      • Paths considered:
        • Path computation
          • A set of algebraic expressions, one for each output variable, in terms of the input variables and constants
        • Path condition
          • The conjunction of the constraints along the path
    • Domain and computational testing
      • Paths considered (cont):
        • Path domain
          • Set of input values that satisfy the path condition
        • Empty path domain
          • The path is infeasible and cannot be executed
      • An error in the computation performed along a path is a computation error
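
For a concrete, illustrative reading of these terms, take a one-decision function: each path has a path condition, a path computation in terms of the input, and a path domain of inputs satisfying the condition.

```python
def fee(hours: float) -> float:
    if hours <= 2:
        return 5.0                       # Path 1
    return 5.0 + 3.0 * (hours - 2)       # Path 2

# Path 1: condition  hours <= 2   computation  fee = 5
#         domain     {hours | hours <= 2}
# Path 2: condition  hours > 2    computation  fee = 5 + 3*(hours - 2)
#         domain     {hours | hours > 2}
# A wrong predicate (e.g. hours < 2) shifts the domains (a domain error);
# a wrong expression on a correctly chosen path (e.g. 4*(hours - 2)) is a
# computation error.
```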
    • Automatic test data generation
      • Test data generation is guided by coverage metrics
      • Paths with contradictory predicates are infeasible
      • Needs detailed specification to achieve this testing
      • Formal specifications may provide fundamental help
    • Mutation analysis
      • Concerns the quality of sets of test data
      • Uses the program to test the test data
      • This means the original and mutant program are tested using the same test data
    • Mutation analysis
      • The two outputs are compared
      • If the mutant's output differs from the original's, the mutant has been detected and is of no further interest
      • If the two outputs are the same, the test data has failed to detect the change, which is a problem
    • Mutation analysis
      • The mutant is then said to be live
      • Ratios are then taken (dead/alive)
      • High ratio of live mutants equals poor test data
      • If this happens then more tests need to be run until the ratio goes down
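
A minimal sketch of mutation analysis (illustrative; a hand-written mutant rather than one produced by a mutation tool): run the original and the mutant on the same test data and see whether the outputs differ.

```python
def average(xs):
    return sum(xs) / len(xs)            # original program

def average_mutant(xs):
    return sum(xs) / (len(xs) + 1)      # mutant: denominator altered

def is_killed(test_case):
    """The mutant is killed when its output differs from the original's."""
    return average(test_case) != average_mutant(test_case)

weak_tests = [[0.0], [0.0, 0.0]]             # all-zero data cannot tell them apart
print([is_killed(t) for t in weak_tests])    # [False, False] -> mutant stays live

better_tests = weak_tests + [[2.0, 4.0]]     # 3.0 vs 2.0 kills the mutant
print([is_killed(t) for t in better_tests])  # [False, False, True]
# A high proportion of live mutants signals poor test data; extending the
# test set until mutants are killed drives the live ratio down.
```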
    • Conclusion
      • I thought this paper was thorough
      • This paper gave good examples (compartmentalized)
      • I thought this paper was a little out of date