Basic concepts of software testing (Conceitos básicos de teste de software)



  1. TQS - Teste e Qualidade de Software (Software Testing and Quality). Software Testing Concepts. João Pascoal Faria, [email_address] / ~jpf
  2. Software testing
     - Software testing consists of the dynamic (1) verification of the behavior of a program on a finite (2) set of test cases, suitably selected (3) from the usually infinite executions domain, against the specified expected (4) behavior [source: SWEBOK]
       - (1) testing always implies executing the program on some inputs
       - (2) even for simple programs, so many test cases are theoretically possible that exhaustive testing is infeasible
         - trade-off between limited resources and schedules and inherently unlimited test requirements
       - (3) see the test case design techniques later for how to select test cases
       - (4) it must be possible to decide whether the observed outcomes of the program are acceptable or not
         - the pass/fail decision is commonly referred to as the oracle problem
  3. Purpose of software testing
     - "Program testing can be used to show the presence of bugs, but never to show their absence!" [Dijkstra, 1972]
       - because exhaustive testing is usually impossible
     - "The goal of a software tester is to find bugs, find them as early as possible, and make sure that they get fixed" (Ron Patton)
     - A secondary goal is to assess software quality
       - Defect testing: find defects, using test data and test cases with a higher probability of finding defects
       - Statistical testing: estimate the value of software quality metrics, using representative test cases and test data
  4. Test cases
     - Test case: inputs to test the system and the expected outputs (or predicted results) from these inputs if the system operates correctly under specified execution conditions
       - Inputs may include an initial state of the system
       - Outputs may include a final state of the system
       - When test cases are executed, the system is provided with the specified inputs and the actual outputs are compared with the expected outputs
     - Example for a calculator:
       - 3 + 5 (input) should give 8 (output)
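The calculator example above can be sketched as a minimal executable test case. This is a sketch only: `add` is a hypothetical stand-in for the system under test, and `run_test_case` is an assumed name for a trivial test harness.

```python
def add(a, b):
    """Hypothetical system under test (stands in for the calculator)."""
    return a + b

def run_test_case(inputs, expected):
    """Feed the specified inputs to the system and compare the
    actual output with the expected output (the pass/fail decision)."""
    actual = add(*inputs)
    return "pass" if actual == expected else "fail"

# The slide's test case: input 3 + 5 should give expected output 8.
print(run_test_case((3, 5), 8))  # prints "pass"
```

Note that the test case itself is just the pair (inputs, expected output); the harness only mechanizes the comparison.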
  5. Test types, organized along three dimensions
     - Level or phase: unit, integration, system
     - Accessibility (test case design strategy/technique): white box (or structural), black box (or functional)
     - Quality attributes: functional behaviour (the focus here), performance, robustness, usability, reliability, security
  6. Test levels or phases (1)
     - Unit testing
       - Testing of individual program units or components
       - Usually the responsibility of the component developer (except sometimes for critical systems)
       - Tests are based on experience, specifications and code
       - A principal goal is to detect functional and structural defects in the unit
  7. Test levels or phases (2)
     - Integration testing
       - Testing of groups of components integrated to create a sub-system
       - Usually the responsibility of an independent testing team (except sometimes in small projects)
       - Tests are based on a system specification (technical specifications, designs)
       - A principal goal is to detect defects that occur on the interfaces of units and in their common behavior
  8. Test levels or phases (3) (source: I. Sommerville)
     - System testing
       - Testing the system as a whole
       - Usually the responsibility of an independent testing team
       - Tests are usually based on a requirements document (functional requirements/specifications and quality requirements)
       - A principal goal is to evaluate attributes such as usability, reliability and performance (assuming unit and integration testing have been performed)
  9. Test levels or phases (4) (source: I. Sommerville)
     - Acceptance testing
       - Testing the system as a whole
       - Usually the responsibility of the customer
       - Tests are based on a requirements specification or a user manual
       - A principal goal is to check if the product meets customer requirements and expectations
     - Regression testing
       - Repetition of tests at any level after a software change
  10. Test levels and the extended V-model of software development (source: I. Burnstein, pg. 15)
     - Development phases, each with its own review: specify requirements (requirements review), design (design review), code (code reviews)
     - In parallel with each phase, the corresponding tests are specified/designed and coded: system/acceptance tests from the requirements, integration tests from the design, unit tests from the code; each test plan & test cases undergo review/audit
     - Ascending the V, the tests are then executed in order: unit tests, integration tests, system tests, acceptance tests
  11. What is a good test case? (see also an article with this title by Cem Kaner)
     - Capability to find defects
       - particularly defects with higher risk
       - Risk = frequency of failure (manifestation to users) * impact of failure
       - Cost (of post-release failure) ≈ risk
     - Capability to exercise multiple aspects of the system under test
       - reduces the number of test cases required and the overall cost
     - Low cost
       - development: specify, design, code
       - execution
       - result analysis: pass/fail analysis, defect localization
     - Easy to maintain
       - reduces whole life-cycle cost
       - maintenance cost ≈ size of test artefacts
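The risk formula above can be made concrete with a small sketch. The numbers are purely illustrative assumptions, not from the slides:

```python
def risk(failure_frequency, failure_impact):
    """Risk = frequency of failure (manifestation to users) * impact of failure."""
    return failure_frequency * failure_impact

# Illustrative numbers only: a frequent, moderate-impact defect can carry
# more risk (and so deserve a test case sooner) than a rare, severe one.
frequent_moderate = risk(0.30, 40)   # -> 12.0
rare_severe = risk(0.01, 100)        # -> 1.0
```

Prioritizing test cases by this product, rather than by impact alone, is what "particularly defects with higher risk" suggests.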
  12. The importance of good test cases: a 2x2 matrix of product quality (high/low) versus test suite quality (high/low)
     - Low product quality, high test suite quality: many bugs found
     - High product quality, high test suite quality: very few bugs found
     - Low test suite quality (with either product quality): only some bugs found
     - The trap: with a weak test suite, a low-quality product also shows few bugs ("you may be here ... and think you are here"), i.e., it can be mistaken for a high-quality product
  13. Test selection/adequacy/coverage/stop criteria
     - "A selection criterion can be used for selecting the test cases or for checking whether or not a selected test suite is adequate, that is, to decide whether or not the testing can be stopped" [SWEBOK 2004]
     - Adequacy criteria: criteria to decide if a given test suite is adequate, i.e., gives us "enough" confidence that "most" of the defects are revealed
       - in practice, reduced to coverage criteria
     - Coverage criteria
       - Requirements/specification coverage
         - at least one test case for each requirement
         - cover all statements in a formal specification
       - Model coverage
         - state-transition coverage
         - use-case and scenario coverage
       - Code coverage
         - statement coverage
         - data flow coverage, ...
       - Fault coverage
     - "Although it is common in current software testing practice that the test processes at both the higher and lower levels stop when money or time runs out, there is a tendency towards the use of systematic testing methods with the application of test adequacy criteria." (Software Unit Test Coverage and Adequacy, Hong Zhu et al., ACM Computing Surveys, December 1997)
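Code coverage criteria differ in strength. A hypothetical function (an assumption for illustration) shows why statement coverage is weaker than branch coverage:

```python
def classify(x):
    """Hypothetical function under test; the `if` has no `else` branch."""
    label = "non-negative"
    if x < 0:
        label = "negative"
    return label

# The single test input classify(-1) executes every statement
# (100% statement coverage), yet never exercises the False outcome
# of `x < 0`. Adding classify(1) is needed for 100% branch coverage.
```

This is why an adequacy criterion must name *which* coverage it requires: a suite can be adequate under one criterion and inadequate under a stronger one.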
  14. Test case design strategies and techniques (tester's view; adapted from I. Burnstein, pg. 65)
     - Black-box testing (not code-based; sometimes called functional testing)
       - Knowledge sources (inputs): requirements document, specifications, user manual, models, domain knowledge, defect analysis data, intuition, experience
       - Techniques/methods (outputs): equivalence class partitioning, boundary value analysis, cause-effect graphing, error guessing, random testing, state-transition testing, scenario-based testing
     - White-box testing (also called code-based or structural testing)
       - Knowledge sources (inputs): program code, control flow graphs, data flow graphs, cyclomatic complexity, high-level design, detailed design
       - Techniques/methods (outputs): control flow testing/coverage (statement coverage, branch (or decision) coverage, condition coverage, branch and condition coverage, modified condition/decision coverage, multiple condition coverage, independent path coverage, path coverage), data flow testing/coverage, class testing/coverage, mutation testing
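Two of the black-box techniques listed above can be sketched for a hypothetical function that accepts ages 0 to 120; the function and its limits are assumptions chosen for illustration:

```python
def valid_age(age):
    """Hypothetical system under test: accepts ages in the range 0..120."""
    return 0 <= age <= 120

# Equivalence class partitioning: one representative input per class,
# since any member of a class should behave like the others.
representatives = {"below range": -5, "in range": 30, "above range": 500}

# Boundary value analysis: values at and just beyond each boundary,
# where off-by-one defects concentrate.
boundary_inputs = [-1, 0, 1, 119, 120, 121]
expected = [False, True, True, True, True, False]
actual = [valid_age(x) for x in boundary_inputs]
```

Partitioning keeps the suite small; boundary analysis then aims the few extra cases at the most defect-prone inputs.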
  15. Test iterations: test to pass and test to fail (source: Ron Patton)
     - First test iterations: test-to-pass
       - check if the software fundamentally works
       - with valid inputs
       - without stressing the system
     - Subsequent test iterations: test-to-fail
       - try to "break" the system
       - with valid inputs but at the operational limits
       - with invalid inputs
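Both iteration styles can be sketched against a hypothetical `divide` function (an assumption for illustration):

```python
def divide(a, b):
    """Hypothetical system under test."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

# Test-to-pass: valid input, nominal conditions, no stress.
assert divide(10, 2) == 5.0

# Test-to-fail: invalid input; the system should fail in a
# controlled way (a defined error) rather than crash or mislead.
try:
    divide(1, 0)
    controlled_failure = False
except ValueError:
    controlled_failure = True
assert controlled_failure
```

The ordering matters: test-to-fail results are hard to interpret while the software does not yet pass its nominal cases.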
  16. Test automation
     - Automatic test case execution
       - requires that test cases are written in some executable language
       - increases test development costs (coding) but practically eliminates test (re)execution costs, which is particularly important in regression testing
       - unit testing frameworks and tools for API testing
       - capture/replay tools for GUI testing
     - Automatic test case generation
       - automatic generation of test inputs is easier than automatic generation of test outputs (which usually requires a formal specification)
       - reduces test development costs
       - usually, inferior capability to find defects per test case, but the overall capability may be higher because many more test cases can be generated than manually
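As an example of the unit testing frameworks mentioned above, Python's standard-library `unittest` module (an xUnit-style framework) turns the calculator test case into code that can be re-executed at essentially no cost after every change, which is exactly what regression testing needs:

```python
import unittest

class CalculatorTest(unittest.TestCase):
    """Automated version of the calculator test case from slide 4."""

    def test_addition(self):
        # input: 3 + 5; expected output: 8
        self.assertEqual(3 + 5, 8)

if __name__ == "__main__":
    unittest.main()  # discovers and runs the tests, reports pass/fail
```

The framework supplies the oracle mechanics (assertions) and the pass/fail report; the tester still has to supply the inputs and expected outputs.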
  17. Some good practices
     - Test as early as possible
     - Write the test cases before the software to be tested
       - applies to any level: unit, integration or system
       - helps in gaining insight into the requirements
     - Code the test cases
       - because of the frequent need for regression testing (repetition of testing each time the software is modified)
     - The more critical the system, the more independent the tester should be
       - colleague, other department, other company
     - Be conscious of cost
     - Derive expected test outputs from the specification (formal/informal, explicit/implicit), not from the code
  18. References and further reading
     - Practical Software Testing, Ilene Burnstein, Springer-Verlag, 2003
     - Software Testing, Ron Patton, SAMS, 2001
     - Testing Computer Software, 2nd Edition, Cem Kaner, Jack Falk, Hung Nguyen, John Wiley & Sons, 1999
     - Guide to the Software Engineering Body of Knowledge (SWEBOK), IEEE Computer Society
     - Software Engineering, Ian Sommerville, 6th Edition, Addison-Wesley, 2000
     - What Is a Good Test Case?, Cem Kaner, Florida Institute of Technology, 2003
     - Software Unit Test Coverage and Adequacy, Hong Zhu et al., ACM Computing Surveys, December 1997