Basic Concepts of Software Testing
Presentation transcript

  • TQS - Teste e Qualidade de Software (Software Testing and Quality): Software Testing Concepts. João Pascoal Faria, [email_address], www.fe.up.pt/~jpf
  • Software testing
    • Software testing consists of the dynamic (1) verification of the behavior of a program on a finite (2) set of test cases, suitably selected (3) from the usually infinite executions domain, against the specified expected (4) behavior [source: SWEBOK]
      • (1) testing always implies executing the program on some inputs
      • (2) even for simple programs, so many test cases are theoretically possible that exhaustive testing is infeasible
        • trade-off between limited resources and schedules and inherently unlimited test requirements
      • (3) see test case design techniques later on how to select the test cases
      • (4) it must be possible to decide whether the observed outcomes of the program are acceptable or not
        • the pass/fail decision is commonly referred to as the oracle problem
  • Purpose of software testing
    • "Program testing can be used to show the presence of bugs, but never to show their absence!” [Dijkstra, 1972]
      • Because exhaustive testing is usually impossible
    • "The goal of a software tester is to find bugs, find them as early as possible, and make sure that they get fixed“ (Ron Patton)
    • A secondary goal is to assess software quality
      • Defect testing – find defects, using test data and test cases that have a higher probability of finding defects
      • Statistical testing – estimate the value of software quality metrics, using representative test cases and test data
  • Test cases
    • Test case - inputs to test the system and the expected outputs (or predicted results) from these inputs, if the system operates correctly under specified execution conditions
      • Inputs may include an initial state of the system
      • Outputs may include a final state of the system
      • When test cases are executed, the system is provided with the specified inputs and the actual outputs are compared with the expected outputs
    • Example for a calculator (coded in the sketch after this list):
      • 3 + 5 (input) should give 8 (output)
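    A minimal sketch, assuming Python and the standard unittest module, of how this calculator test case could be coded; the Calculator class and its add method are hypothetical stand-ins for the system under test:

      import unittest

      class Calculator:
          """Hypothetical system under test."""
          def add(self, a, b):
              return a + b

      class CalculatorTest(unittest.TestCase):
          def test_add_3_plus_5(self):
              calc = Calculator()                   # the specified inputs may include an initial state
              self.assertEqual(calc.add(3, 5), 8)   # actual output compared with the expected output (8)

      if __name__ == "__main__":
          unittest.main()

    Running the file executes the test case and reports pass/fail, which is exactly the comparison of actual and expected outputs described above.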
  • Test types, organised along three dimensions
    • Level or phase: unit, integration, system
    • Accessibility (test case design strategy/technique): white box (or structural), black box (or functional)
    • Quality attributes: functional behaviour (the focus here), performance, robustness, usability, reliability, security
  • Test levels or phases (1)
    • Unit testing
      • Testing of individual program units or components
      • Usually the responsibility of the component developer (except sometimes for critical systems)
      • Tests are based on experience, specifications and code
      • A principal goal is to detect functional and structural defects in the unit
  • Test levels or phases (2)
    • Integration testing
      • Testing of groups of components integrated to create a sub-system
      • Usually the responsibility of an independent testing team (except sometimes in small projects)
      • Tests are based on a system specification (technical specifications, designs)
      • A principal goal is to detect defects that occur at the interfaces of units and in their combined behavior
  • Test levels or phases (3)
    • System testing
      • Testing the system as a whole
      • Usually the responsibility of an independent testing team
      • Tests are usually based on a requirements document (functional requirements/specifications and quality requirements)
      • A principal goal is to evaluate attributes such as usability, reliability and performance (assuming unit and integration testing have been performed)
    (source: I. Sommerville)
  • Test levels or phases (4)
    • Acceptance testing
      • Testing the system as a whole
      • Usually the responsibility of the customer
      • Tests are based on a requirements specification or a user manual
      • A principal goal is to check if the product meets customer requirements and expectations
    • Regression testing
      • Repetition of tests at any level after a software change
    (source: I. Sommerville)
  • Test levels and the extended V-model of software development (source: I. Burnstein, pg. 15)
    • Development branch: specify requirements (requirements review), design (design review), code (code reviews)
    • In parallel, for each level, specify/design and code the tests: system/acceptance test plan & test cases (review/audit), integration test plan & test cases (review/audit), unit test plan & test cases (review/audit)
    • Execution branch: execute unit tests, execute integration tests, execute system tests, execute acceptance tests
  • What is a good test case?
    • Capability to find defects
      • Particularly defects with higher risk
      • Risk = frequency of failure (manifestation to users) * impact of failure (see the worked sketch after this list)
      • Cost (of post-release failure) ≈ risk
    • Capability to exercise multiple aspects of the system under test
      • Reduces the number of test cases required and the overall cost
    • Low cost
      • Development: specify, design, code
      • Execution
      • Result analysis: pass/fail analysis, defect localization
    • Easy to maintain
      • Reduce whole life-cycle cost
      • Maintenance cost ≈ size of test artefacts
    (See also an article with this title by Cem Kaner)
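    A small worked sketch of the risk formula above, in Python; the features and the 1-10 frequency/impact estimates are invented for illustration only:

      # Risk = frequency of failure * impact of failure (hypothetical estimates on a 1-10 scale)
      features = {
          "login":         {"frequency": 8, "impact": 9},
          "report_export": {"frequency": 3, "impact": 4},
          "profile_photo": {"frequency": 6, "impact": 2},
      }

      by_risk = sorted(features.items(),
                       key=lambda item: item[1]["frequency"] * item[1]["impact"],
                       reverse=True)
      for name, f in by_risk:
          print(f"{name}: risk = {f['frequency']} * {f['impact']} = {f['frequency'] * f['impact']}")
      # login: risk = 8 * 9 = 72 -> highest risk, so its test cases deserve the most effort

    Test cases that target the highest-risk features give the biggest payoff per unit of testing effort.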
  • The importance of good test cases
    • High product quality, high test suite quality: very few bugs found
    • High product quality, low test suite quality: some bugs found
    • Low product quality, high test suite quality: many bugs found
    • Low product quality, low test suite quality: some bugs found
    • With a low-quality test suite, "some bugs found" is all you see either way: you may be in the low-product-quality cell and think you are in the high-product-quality one
  • Test selection/adequacy/coverage/stop criteria
    • “A selection criterion can be used for selecting the test cases or for checking whether or not a selected test suite is adequate, that is, to decide whether or not the testing can be stopped” [SWEBOK 2004]
    • Adequacy criteria - criteria to decide if a given test suite is adequate, i.e., to give us “enough” confidence that “most” of the defects are revealed
      • In practice, reduced to coverage criteria
    • Coverage criteria
      • Requirements/specification coverage
        • At least one test case for each requirement
        • Cover all statements in a formal specification
      • Model coverage
        • State-transition coverage
        • Use-case and scenario coverage
      • Code coverage
        • Statement coverage (illustrated in the sketch after this list)
        • Data flow coverage, ...
      • Fault coverage
    “Although it is common in current software testing practice that the test processes at both the higher and lower levels stop when money or time runs out, there is a tendency towards the use of systematic testing methods with the application of test adequacy criteria.” (Software Unit Test Coverage and Adequacy, Hong Zhu et al., ACM Computing Surveys, December 1997)
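    A minimal sketch of statement coverage on a toy function (the function and tests are illustrative, not from the slides): the first test alone already covers every statement, while branch coverage additionally requires the second test, which exercises the case where the condition is false:

      import unittest

      def absolute(x):
          if x < 0:        # statement 1
              x = -x       # statement 2
          return x         # statement 3

      class AbsoluteTest(unittest.TestCase):
          def test_negative(self):
              # Executes statements 1, 2 and 3: 100% statement coverage by itself
              self.assertEqual(absolute(-4), 4)

          def test_non_negative(self):
              # Needed for branch (decision) coverage: the 'if' evaluates to false
              self.assertEqual(absolute(4), 4)

      if __name__ == "__main__":
          unittest.main()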
  • Test case design strategies and techniques (adapted from: I. Burnstein, pg. 65)
    • Black-box testing (not code-based; sometimes called functional testing)
      • Tester's view: only the inputs and outputs of the system under test
      • Knowledge sources: requirements document, specifications, user manual, models, domain knowledge, defect analysis data, intuition, experience
      • Techniques/methods: equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing, random testing, state-transition testing, scenario-based testing (two of these are illustrated in the sketch after this list)
    • White-box testing (also called code-based or structural testing)
      • Knowledge sources: program code, control flow graphs, data flow graphs, cyclomatic complexity, high-level design, detailed design
      • Techniques/methods: control flow testing/coverage (statement coverage, branch or decision coverage, condition coverage, branch and condition coverage, modified condition/decision coverage, multiple condition coverage, independent path coverage, path coverage), data flow testing/coverage, class testing/coverage, mutation testing
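    A small sketch of two of the black-box techniques above, equivalence class partitioning and boundary value analysis, applied to a hypothetical grading function that accepts scores from 0 to 100:

      import unittest

      def classify(score):
          """Hypothetical function under test: scores 0-100 are valid, 50 or more is a pass."""
          if not 0 <= score <= 100:
              raise ValueError("score out of range")
          return "pass" if score >= 50 else "fail"

      class ClassifyTest(unittest.TestCase):
          def test_equivalence_classes(self):
              # One representative per partition: invalid (<0), fail (0-49), pass (50-100), invalid (>100)
              self.assertRaises(ValueError, classify, -10)
              self.assertEqual(classify(25), "fail")
              self.assertEqual(classify(75), "pass")
              self.assertRaises(ValueError, classify, 150)

          def test_boundary_values(self):
              # Values at and just outside the partition boundaries
              self.assertRaises(ValueError, classify, -1)
              self.assertEqual(classify(0), "fail")
              self.assertEqual(classify(49), "fail")
              self.assertEqual(classify(50), "pass")
              self.assertEqual(classify(100), "pass")
              self.assertRaises(ValueError, classify, 101)

      if __name__ == "__main__":
          unittest.main()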
  • Test iterations: test to pass and test to fail
    • First test iterations: Test-to-pass
      • check if the software fundamentally works
      • with valid inputs
      • without stressing the system
    • Subsequent test iterations: Test-to-fail (see the sketch after this list)
      • try to "break" the system
      • with valid inputs but at the operational limits
      • with invalid inputs
    (source: Ron Patton)
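    A small sketch contrasting test-to-pass and test-to-fail; the withdraw function and its rules are assumptions made for illustration:

      import unittest

      def withdraw(balance, amount):
          """Hypothetical function under test: returns the new balance."""
          if amount <= 0 or amount > balance:
              raise ValueError("invalid amount")
          return balance - amount

      class WithdrawTest(unittest.TestCase):
          def test_to_pass(self):
              # Valid input, nothing extreme: does the software fundamentally work?
              self.assertEqual(withdraw(100, 30), 70)

          def test_to_fail_operational_limit(self):
              # Still valid input, but at the operational limit (withdraw the full balance)
              self.assertEqual(withdraw(100, 100), 0)

          def test_to_fail_invalid_inputs(self):
              # Invalid inputs: try to "break" the system
              self.assertRaises(ValueError, withdraw, 100, -5)
              self.assertRaises(ValueError, withdraw, 100, 200)

      if __name__ == "__main__":
          unittest.main()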
  • Test automation
    • Automatic test case execution
      • Requires that test cases are written in some executable language
      • Increases test development costs (coding) but practically eliminates test (re)execution costs, which is particularly important in regression testing
      • Unit testing frameworks and tools for API testing
      • Capture/replay tools for GUI testing
    • Automatic test case generation
      • Automatic generation of test inputs is easier than automatic generation of expected test outputs (the latter usually requires a formal specification)
      • Reduces test development costs
      • Usually, lower capability to find defects per test case, but the overall capability may be higher because many more test cases can be generated than manually (see the sketch after this list)
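    A minimal sketch of automatic test input generation by random testing; since expected outputs are not precomputed, the pass/fail decision relies on a property check (the output of sorted() must be ordered and a permutation of the input) rather than on exact expected values:

      import random
      from collections import Counter

      def random_sort_tests(runs=1000, seed=42):
          """Generate random inputs for sorted() and check properties of each output."""
          rng = random.Random(seed)
          for _ in range(runs):
              data = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
              result = sorted(data)
              # Property-based oracle: output is ordered ...
              assert all(result[i] <= result[i + 1] for i in range(len(result) - 1))
              # ... and is a permutation of the input
              assert Counter(result) == Counter(data)

      if __name__ == "__main__":
          random_sort_tests()
          print("all randomly generated test cases passed")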
  • Some good practices
    • Test as early as possible
    • Write the test cases before the software to be tested
      • applies to any level: unit, integration or system
      • helps gain insight into the requirements
    • Code the test cases
      • because of the frequent need for regression testing (repetition of testing each time the software is modified)
    • The more critical the system, the more independent the tester should be
      • colleague, other department, other company
    • Be conscious about cost
    • Derive expected test outputs from specification (formal/informal, explicit/implicit), not from code
  • References and further reading
    • Practical Software Testing, Ilene Burnstein, Springer-Verlag, 2003
    • Software Testing, Ron Patton, SAMS, 2001
    • Testing Computer Software, 2nd Edition, Cem Kaner, Jack Falk, Hung Nguyen, John Wiley & Sons, 1999
    • Guide to the Software Engineering Body of Knowledge (SWEBOK), IEEE Computer Society, http://www.swebok.org/
    • Software Engineering, Ian Sommerville, 6th Edition, Addison-Wesley, 2000
    • What Is a Good Test Case?, Cem Kaner, Florida Institute of Technology, 2003
    • Software Unit Test Coverage and Adequacy, Hong Zhu et al., ACM Computing Surveys, December 1997