introduction_to_sw_t.. Presentation Transcript

    • An Introduction to Software Testing CMSC 445 Shon Vick
    • Summary
      • We are going to have an introduction to software testing
      • Chapters 16 and 17 of the Pressman deal with testing
        • We will use this as a reference
        • A set of other references can be found at the end of this lecture
    • Basic Ideas
      • What is software testing?
      • Why test?
      • Verification versus validation.
      • How hard is program verification?
      • Testing versus proofs of correctness.
    • What is Testing?
      • Most Fundamentally
        • A phase of the software life-cycle that includes three stages: unit testing, integration testing, and system testing
      • It's about
        • Finding errors
        • Exercising code across a range of inputs and conditions
      • An activity aimed at evaluating an attribute or capability of a program/system to see whether it meets its required results
    • What is Testing?
      • The solution assessment stage of the software process.
      • Delivers partial answer to the question:
      • "How good is this software?''
      • Any attempt to ensure that modules have been coded correctly.
      • An (oftentimes ad hoc) activity to generate a set of inputs, called test data, which when applied to a program increases the confidence in the correctness of the program.
      • The purpose of testing is not to demonstrate that software performs correctly. Rather, testing demonstrates the existence of an error. Hence, success is finding an error; failure is not finding one.
      • Contrast testing with debugging , which resolves the error after its existence is known.
    • Testing and Debugging Are Not the Same Thing
      • Testing and debugging are not synonymous.
        • Testing: Establish the existence of defects .
        • Debugging : Locating and fixing these defects.
      • A typical debugging process follows.
        • Locate the defect in the code.
        • Repair the defect.
        • Re-test.
    • Verification versus Validation
      • Finding an error in software means that the software is incorrect .
      • Two senses of incorrect .
        • Validation - subjective
        • Verification - objective
      The benefit of defining the two V & V components separately is that the exercise demonstrates the narrow scope of objective correctness and the broad role of subjective judgment and experience. Sommerville
    • Validation
      • Continual checking that the system meets the expectations of the user/procurer .
      • Validation often is a subjective decision by application user using domain knowledge.
      • This subjective evaluation of design decisions lasts throughout the software process.
    • Verification
      • Involves checking that the program conforms to its specification .
      • Verification is more objective
      • Given an objective specification of correctness, one can always answer true or false.
        • Finding an error with respect to this specification proves an implementation incorrect
        • Not finding an error does not prove the opposite
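The asymmetry above can be made concrete with a small sketch. This is my own hypothetical example, not from the lecture: a function whose small test set passes, even though the program still violates its specification on an untested input.

```java
public class VerifyDemo {
    // Hypothetical function under test (my example, not from the lecture).
    // Specification: return the absolute value of x.
    // Latent defect: -Integer.MIN_VALUE overflows back to Integer.MIN_VALUE.
    static int abs(int x) {
        return x < 0 ? -x : x;
    }

    public static void main(String[] args) {
        // A small test set that the program passes...
        System.out.println(abs(5) == 5 && abs(-5) == 5 && abs(0) == 0); // true
        // ...yet the program is still incorrect: not finding an error
        // proved nothing about the inputs we did not try.
        System.out.println(abs(Integer.MIN_VALUE) >= 0); // false
    }
}
```

Every passing test only rules out an error on that one input; a single failing input is enough to prove the implementation incorrect.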
    • Verification – The Bad News
      • Program correctness is not an absolute property, but a relative one.
      • Important theoretical results in program testing
        • No general purpose testing and analysis procedure can be used to prove program correctness.
        • Given any program P and its specification S no technique can determine if P is equivalent to S for all P
        • Equivalent to the Halting Problem .
        • Therefore there is no practical, systematic procedure for selecting test data that can be used to prove the correctness of a program.
        • It is possible to prove the correctness of a particular program P, but it may not be cost-effective.
    • Testing and a Proof of Correctness
      • Testing is different from a proof of absolute correctness
      • Correctness
        • A proof that one description of a function is equivalent to another.
        • Usually one is a state description in some formal language and the other is an algorithm described in a programming language.
        • Proving that a program is correct is a lofty goal. Much research has been done using automated tools to prove program correctness
      • Testing
        • An activity to generate a set of input test data for a program/system which when used by a program/system increases the confidence that a program/system is correct
    • Finding Problems in a System
      • Many errors in a software system are introduced early in the development process
        • Errors may persist until the implementation phase
      • The cost to fix the errors grows as the life-cycle proceeds.
    • The Relative Costs (relative cost of fixing an error, by stage)
      • Requirements: 0.1–0.2
      • Design: 0.5
      • Coding: 1.0
      • Unit Test: 2.0
      • Acceptance Test: 5.0
      • Maintenance: 20.0
    • Methods of Software Testing
      • White-box testing
      • Black-box testing
      • Exhaustive testing
    • White-Box Versus Black-Box Testing
      • White-Box testing
        • Assumes that the internal structure and logic of the module is known.
        • Tester can analyze the code and use knowledge of program structure to derive a test case.
        • Also called structural testing or glass-box testing .
      • Black-box testing
        • Relies only on input/output pairs to determine the existence of errors; no knowledge of the internal workings of the system/program is used
        • Only the external behavior of the program is being tested (acceptance testing).
        • Also called functional testing .
      Functional Testing
      • With functional testing, the component is treated as a black box whose behavior can be determined only by studying its inputs and the related outputs.
      • Identify inputs which cause anomalous behavior in the outputs, revealing the existence of defects.
    • Functional Testing Concept (diagram: an input drawn from the input set is fed to the component; an anomalous output reveals a defect)
      White-Box Testing
      • Structural or white-box testing is complementary to black-box testing.
      • Here, the tester has knowledge of the actual code and the structure of the component to derive test cases.
        • With black-box testing, the tester could only use the specification.
      • Structural testing has certain advantages.
        • Test cases can be systematically derived.
        • Test coverage can be measured.
      • Structural testing can be used with black-box testing to refine the selection of test data.
      Structural Testing Example
      // Java example
      boolean contains(Enumeration e, Object key) {
          try {
              while (e.hasMoreElements()) {
                  Object element = e.nextElement();
                  if (element.equals(key)) return true;
              }
          } catch (NoSuchElementException ex) {
              return false;
          }
          return false;
      }
    • Exhaustive Testing On States
      • Is it reasonable to do exhaustive testing?
        • Usually not!
      • Testing on possible variable states
      • Suppose that our program has n variables, each with m values; then the number of tests required is m^n
      • General formula
        • If each variable v_i has m_i states, then ∏_i m_i tests are required
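The product formula above can be worked through with a small helper. This is my own sketch, not from the lecture; the method name is mine.

```java
public class StateCount {
    // Sketch (my helper, not from the lecture): the number of exhaustive
    // tests is the product of the per-variable state counts m_i.
    static long exhaustiveTests(int[] stateCounts) {
        long product = 1;
        for (int m : stateCounts) {
            product *= m; // running product of the m_i
        }
        return product;
    }

    public static void main(String[] args) {
        // Two variables with 3 and 4 states: 3 * 4 = 12 tests.
        System.out.println(exhaustiveTests(new int[]{3, 4})); // 12
        // Five variables with only 10 states each already need 10^5 tests.
        System.out.println(exhaustiveTests(new int[]{10, 10, 10, 10, 10})); // 100000
    }
}
```

Even with tiny per-variable domains, the product explodes quickly, which is why exhaustive state testing is usually untenable.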
    • Exhaustive Path Testing
      • Exhaustive Testing on paths also is usually not tenable
      • There is a combinatorial explosion there as well
      • We’ll discuss this in more detail later
    • Tests to perform
      • Test data based on structure.
        • Inputs where the enumeration has a single element.
        • Inputs where numbers of elements in the enumeration is very large.
        • Inputs where the enumeration has no elements.
        • Inputs where the key is the first/last element of the enumeration.
        • Inputs where the enumeration contains heterogeneous data
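The structural test data listed above can be turned into a small driver. This is my own sketch: it uses a corrected version of the lecture's enumeration-search example, and the helper `enumOf` is mine, introduced only to build test inputs.

```java
import java.util.Enumeration;
import java.util.Vector;

public class ContainsTests {
    // A corrected version of the lecture's contains example.
    static boolean contains(Enumeration<?> e, Object key) {
        while (e.hasMoreElements()) {
            if (e.nextElement().equals(key)) return true;
        }
        return false;
    }

    // Hypothetical helper (mine): build an Enumeration from varargs.
    static Enumeration<Object> enumOf(Object... items) {
        Vector<Object> v = new Vector<>();
        for (Object o : items) v.add(o);
        return v.elements();
    }

    public static void main(String[] args) {
        System.out.println(contains(enumOf(), "a"));                 // no elements: false
        System.out.println(contains(enumOf("a"), "a"));              // single element: true
        System.out.println(contains(enumOf("a", "b", "c"), "a"));    // key is first: true
        System.out.println(contains(enumOf("a", "b", "c"), "c"));    // key is last: true
        System.out.println(contains(enumOf(1, "two", 3.0), "two"));  // heterogeneous data: true
    }
}
```

Each call corresponds to one of the structurally derived inputs above; a very large enumeration would be generated in a loop rather than written out.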
    • Types of Testing
      • Defect testing
        • Tests designed to reveal the presence of defects in a system.
      • Statistical testing
        • Tests are selected so that if a program P and its specification S agree on the tests, then P and S are probabilistically equivalent
        • Tests are designed to reflect the frequency of actual user inputs. After running tests, estimate of operational reliability of system is made (Sommerville)
        • Assumes that there is an input/output oracle.
    • Defect Testing Objectives
      • Program testing has 2 objectives.
        • Show that the system meets its requirements. (Validation)
        • Exercise the system to expose latent defects. (Verification)
      • These objectives are distinct.
        • Final system testing emphasizes validation.
        • Earlier components and integration testing emphasizes verification.
    • Test Cases
      • Test cases and test data are not the same thing.
        • Test data are inputs used to test the system.
        • Test cases are input and output specifications plus a statement of the function under test.
      • In principle defect testing could be exhaustive.
        • Exercise every statement with every possible path combination.
        • This is generally impossible.
        • Instead, a set of test cases are developed using heuristics.
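The distinction between a test case and test data can be sketched in code. This is my own illustration; the class and field names are mine, not the lecture's.

```java
public class TestCaseDemo {
    // Sketch (names are mine): a test case bundles the test data with
    // an output specification and a statement of the function under test.
    static class TestCase {
        final String functionUnderTest; // statement of the function under test
        final int input;                // the test data
        final int expected;             // the output specification

        TestCase(String functionUnderTest, int input, int expected) {
            this.functionUnderTest = functionUnderTest;
            this.input = input;
            this.expected = expected;
        }
    }

    // Hypothetical function under test.
    static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) {
        TestCase[] cases = {
            new TestCase("square", 0, 0),
            new TestCase("square", 3, 9),
            new TestCase("square", -4, 16),
        };
        for (TestCase tc : cases) {
            boolean pass = square(tc.input) == tc.expected;
            System.out.println(tc.functionUnderTest + "(" + tc.input + ") -> "
                    + (pass ? "PASS" : "FAIL"));
        }
    }
}
```

The integers alone (0, 3, -4) are the test data; only together with the expected outputs and the named function do they become test cases.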
    • Considerations in choosing Test Cases
      • Emphasize Capabilities.
        • Testing a system’s capabilities is as important as testing its components.
        • For example, screen corruption is less critical than defects which cause a loss of data or program termination.
      • Old Capabilities.
        • Testing old capabilities may be more important than testing new ones.
        • With a new revision, users are more concerned that the features they already depend on still work properly.
        • Regression testing may help identify such defects
    • More Considerations
      • Boundary Conditions
        • Always Test Boundary cases
        • Errors are likely there
        • Boundary conditions may be rare
      • Don’t Neglect Typical Situations.
        • Testing typical situations may be more important than testing boundary value cases.
        • It is more important that the system works under normal conditions than in occasional situations.
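Boundary-condition testing can be sketched with a small example. This is my own hypothetical function, not from the lecture, chosen because its specification has two explicit edges.

```java
public class BoundaryDemo {
    // Hypothetical function under test: clamp a percentage into [0, 100].
    static int clampPercent(int p) {
        if (p < 0) return 0;
        if (p > 100) return 100;
        return p;
    }

    public static void main(String[] args) {
        // Boundary cases: values at and just beyond each edge of the range,
        // where off-by-one errors are most likely to hide.
        int[] boundaryInputs = {-1, 0, 1, 99, 100, 101};
        for (int p : boundaryInputs) {
            System.out.println(p + " -> " + clampPercent(p));
        }
        // A typical mid-range case matters just as much as the boundaries.
        System.out.println(clampPercent(50)); // 50
    }
}
```

Picking inputs at, just inside, and just outside each edge covers the cases where a mistaken `<` versus `<=` would go unnoticed by mid-range test data.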
    • Text References
      • Software Engineering, A Holistic View , Bruce I. Blum, 1992.
      • Software Engineering , 4th ed., Ian Sommerville, 1992 .
      • Software Engineering, The Production of Quality Software, 2nd ed., Shari Lawrence Pfleeger, 1991.
      • Functional Program Testing and Analysis, William E. Howden, 1987.
    • Journal References
      • Software Requirements: Analysis and Specification, Alan Davis, 1990.
      • The Software Life Cycle, 2nd ed. , Darrel Ince and Derek Andrews, 1990.
      • "Design Complexity Measurement and Testing," Thomas J. McCabe and Charles W. Butler, Communications of the ACM, 32:12 (December 1989), 1415-1425.
    • More Journal References
      • "Hints on Test Data Selection: Help for the Practicing Programmer," Richard A. DeMillo, Richard J. Lipton, and Frederick G. Sayward, IEEE Computer 11:4, pp. 34-43, 1978.
      • "Comparing the Effectiveness of Software Testing Strategies," Victor R. Basili and Richard W. Selby, IEEE Transactions on Software Engineering SE-13:12 (December 1987), 1278-1296.
    • Online Resources
      • Newsgroup: comp.software.testing
      • www.enteract.com/~bradapp/links/sw-test-links.html#Sw_Test_Tut_Pub
      • www.med.osakau.ac.jp/image/zhang/homepage/software_tests.html