
Ingegneria del Software II



  1. Course: Ingegneria del Software II, academic year 2004-2005. Course web-site: [www.di.univaq.it/ingegneria2/]. An Introduction to Testing. Lecturers: Henry Muccini and Vittorio Cortellessa, Computer Science Department, University of L'Aquila, Italy. muccini@di.univaq.it – cortelle@di.univaq.it, [www.di.univaq.it/muccini] – [www.di.univaq.it/cortelle]
  2. Copyright Notice » The material in these slides may be freely reproduced and distributed, partially or totally, as long as an explicit reference or acknowledgement to the material's author is preserved. Henry Muccini, SEA Group. © 2005 by H. Muccini and V. Cortellessa / Ingegneria del Software II
  3. Acknowledgment » With very special thanks to Antonia Bertolino and Debra J. Richardson, who collaborated on previous versions of these lecture notes.
  4. Agenda » Testing Basics - Definition of Software Testing » Testing Glossary » Testing Approaches
  5. Testing basics » Definition of sw testing » Fault vs. failure » Test selection - Coverage testing - Reliability testing - Specification-based testing » Traceability and test execution » Testing from LTS
  6. Testing » Testing is a word with many interpretations - Inspection/review/analysis - Debugging - Conformance testing - Usability testing - Performance testing - …
  7. What testing is not (citation from Hamlet, 1994): » I've searched hard for defects in this program, found a lot of them, and repaired them. I can't find any more, so I'm confident there aren't any. The fishing analogy: » I've caught a lot of fish in this lake, but I fished all day today without a bite, so there aren't any more. » NB: Quantitatively, a day's fishing probes far more of a large lake than a year's testing does of the input domain of even a trivial program.
  8. An all-inclusive definition: Software testing consists of: » the dynamic verification of the behavior of a program » on a finite set of test cases » suitably selected from the (in practice infinite) input domain » against the specified expected behavior
  9. Testing vs. Other Approaches » Where is the boundary between testing and other (analysis/verification) methods? What distinguishes software testing from “other” approaches is execution. This requires the ability to: - launch the tests on some executable version (traceability problem), and - analyse (observe and evaluate) the results (not granted for embedded systems) » I am not saying testing is superior and we should disregard other approaches: on the contrary, testing and other methods should complement and support each other
  10. Test purpose » The software is tested in order to: - Find bugs: “debug testing”, e.g., by: > Conformance testing > Robustness testing > Coverage testing - Measure its reliability: by statistical inference - Measure its performance - Release it to the customer (acceptance test) - … » Of course different approaches are taken for these goals, which are not mutually exclusive but rather complementary
  11. The high cost of testing » Any software test campaign involves a trade-off between - Limiting resource utilization (time, effort): as few tests as possible - Augmenting our confidence: as many tests as possible » Two research challenges: - Determining a feasible and meaningful stopping rule - Evaluating test effectiveness (reliability, “coverage” notions, ...): a very tough problem
  12. Testing Glossary
  13. Failures, Errors, Faults » Failure: incorrect/unexpected behavior or output - an incident is the symptom revealed by execution - failures are usually classified » Potential Failure: incorrect internal state - sometimes also called an error, state error, or internal error » Fault: anomaly in source code - may or may not produce a failure - a “bug” » Error: inappropriate development action - the action that introduced a fault
  14. THE FAULT-FAILURE MODEL: Fault (in the code; must be activated; latent or manifest) -> Error (the sw internal state is corrupted) -> Failure (affects the delivered service). A fault will manifest itself as a failure if all of the 3 following conditions hold: 1) EXECUTION: the faulty piece of code is executed 2) INFECTION: the internal state of the program is corrupted: error 3) PROPAGATION: the error propagates to program output. PIE MODEL (Voas, IEEE TSE Aug. 1992)
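The three PIE conditions can be illustrated with a small sketch (Python is used for illustration; the `abs_diff` function and its seeded fault are hypothetical, not from the slides):

```python
# Hypothetical faulty function: the specification is abs_diff(a, b) = |a - b|,
# but the second branch uses 'a + b' instead of the intended 'b - a'.
def abs_diff(a, b):
    if a >= b:
        return a - b
    return a + b  # FAULT: stays latent until this line runs

# No EXECUTION of the faulty line: the fault stays latent, no failure.
print(abs_diff(5, 3))  # 2, correct

# EXECUTION without INFECTION: for a == 0 the faulty expression
# coincidentally yields the specified value, so no error arises.
print(abs_diff(0, 5))  # 5, correct by coincidence

# EXECUTION + INFECTION + PROPAGATION: the corrupted state reaches
# the output, and the fault finally manifests as a failure.
print(abs_diff(3, 5))  # 8, but the specification expects 2
```

The last call shows why debug testing must select inputs that make all three conditions hold at once: the first two calls execute a faulty program yet reveal nothing.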
  15. The ambiguous notion of a “fault” » What a tester or a user observes is a failure. What, then, is a fault? - It is a useful and intuitive notion, but very difficult to formalize. » In practice we often characterise a fault by the modification made to fix it, e.g. “wrong operator”. In such a way, - the concept of a fault becomes meaningful only after it has been fixed. - Moreover, different people may react to the same failure with different fixes (and clever programmers find minimal fixes) » A solution is to avoid the ambiguous notion of a fault and reason instead in terms of failure regions, i.e. collections of input points that give rise to failures. - A failure region is characterised by a size and a pattern
  16. Fundamental Testing Questions » Test Case: How is a test described/recorded? » Test Criteria: What should we test? » Test Oracle: Is the test correct? » Test Adequacy: How much is enough? » Test Process: Is testing complete and effective? How to make the most of limited resources?
  17. Test Case » Specification of - identifier - test items - input and output specs - environmental requirements - procedural requirements » Augmented with history of - actual results - evaluation status - contribution to coverage
  18. Test Criterion » A test criterion provides the guidelines, rules, and strategy by which test cases are selected: requirements on test data -> conditions on test data -> actual test data » Equivalence partitioning is hoped for - a test of any value in a given class is equivalent to a test of any other value in that class - if a test case in a class reveals a failure, then any other test case in that class should reveal the failure - some approaches limit conclusions to some chosen class of faults and/or failures
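The partitioning hypothesis can be sketched concretely (a minimal illustration; the `grade` function and its classes are assumptions, not from the slides):

```python
# Hypothetical program under test: pass/fail classification of a score.
def grade(score):
    if score < 0 or score > 100:
        return "invalid"
    return "pass" if score >= 60 else "fail"

# Requirements on test data -> conditions -> actual test data:
# one representative value is drawn from each equivalence class,
# trusting that it stands for every other value in its class.
representatives = {
    "invalid": [-5, 150],  # the two out-of-range classes
    "fail":    [30],       # 0 <= score < 60
    "pass":    [75],       # 60 <= score <= 100
}

for expected, inputs in representatives.items():
    for score in inputs:
        assert grade(score) == expected, (score, expected)
print("each representative behaves as its class predicts")
```

If `grade(30)` failed, the hypothesis says every score in [0, 60) would fail the same way, which is exactly what makes a handful of test cases stand in for an infinite domain.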
  19. Theory of Testing Let P = program under test, S = specification of P, D = input domain of S and P, T = subset of D used as test set for P, C = test criterion » P is incorrect if it is inconsistent with S on some element of D » T is unsuccessful if there exists an element of T on which P is incorrect
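The two definitions above can be phrased executably (a sketch; this S, P and T are illustrative stand-ins, not from the slides):

```python
# S: the specification, taken here as a reference function over D;
# P: the program under test, inconsistent with S on negative inputs.
S = lambda x: x * x
P = lambda x: x * x if x >= 0 else 0

def is_unsuccessful(T):
    """T is unsuccessful iff P is inconsistent with S on some element of T."""
    return any(P(t) != S(t) for t in T)

# A test set that happens to miss the inconsistency...
print(is_unsuccessful([0, 1, 5]))   # False
# ...and one that exposes it, thereby proving P incorrect.
print(is_unsuccessful([-2, 3]))     # True
```

Note the asymmetry this makes explicit: an unsuccessful T proves P incorrect, while a successful T proves nothing about D \ T.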
  20. Subdomain-based Test Criterion » A test criterion C is subdomain-based if it induces one or more subsets, or subdomains, of the input domain » A subdomain-based criterion typically does not partition the input domain (into a set of non-overlapping subdomains whose union is the entire domain)
  21. Test Oracle » A test oracle is a mechanism for verifying the behavior of test executions - manual verification is extremely costly and error prone - oracle design is a critical part of test planning » Sources of oracles - input/outcome oracle - tester decision - regression test suites - standardized test suites and oracles - gold or existing program - formal specification
  22. The oracle problem » Given input d, the expected output is f*(d); the oracle answers YES iff f(d) = f*(d) » In some cases easier (e.g., an existing version, an existing formal specification), but generally very difficult (e.g., operational testing) » An insufficiently emphasized research problem?
  23. Test Adequacy » Theoretical notions of test adequacy are usually defined in terms of adequacy criteria - Coverage metrics (a sufficient percentage of the program structure has been exercised) - Empirical assurance (the failures/test curve flattens out) - Error seeding (the percentage of seeded faults found is proportional to the percentage of real faults found) - Independent testing (faults found in common are representative of the total population of faults) » Adequacy criteria are evaluated with respect to a test suite and a program under test
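A coverage metric of the first kind can be measured directly. The sketch below (a hypothetical helper built on the standard `sys.settrace` hook) counts which statements of a small function a test suite exercises:

```python
import sys

def covered_lines(func, *args):
    """Run func on one input and record the relative line numbers executed."""
    hit, code = set(), func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            hit.add(frame.f_lineno - code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hit

def classify(x):          # relative line 0 (the def itself)
    if x > 0:             # line 1
        return "positive" # line 2
    return "non-positive" # line 3

statements = {1, 2, 3}
suite = [(5,), (-5,)]     # a two-case test suite
hit = set().union(*(covered_lines(classify, *t) for t in suite))
coverage = len(hit & statements) / len(statements)
print(coverage)  # 1.0: the suite is all-statements adequate for classify
```

The same scaffolding, applied to edges or def-use pairs instead of lines, gives the other structural metrics discussed later; production tools (e.g., coverage.py) do exactly this bookkeeping at scale.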
  24. Testing Approaches
  25. Unit, Integration and System Testing [BE90, ch1] » Unit Testing: - A unit is the smallest testable piece of software > A unit may be a procedure, a piece of code, one component (in object-oriented systems), or one system as a whole. - The purpose of unit testing is to ensure that the unit satisfies its functional specification and/or that its implemented structure matches the intended design structure - Unit tests can also be applied to test interfaces or local data structures.
  26. Integration Testing » Integration testing is specifically aimed at exposing the problems that arise from the combination of components - Assuming that components have been unit tested - In particular, the communication interfaces among integrated components need to be tested to avoid communication errors. » Different approaches: - non-incremental: big-bang - incremental: top-down, bottom-up, mixed - In object-oriented distributed systems: new techniques, based on dependence analysis or on the dynamic relations of inheritance and polymorphism.
  27. System Testing » Involves the whole system embedded in its actual hardware environment » It attempts to reveal bugs that depend on the environment (bugs that cannot be attributed to components) » Recovery testing, security testing, stress testing and performance testing
  28. System Testing (cont.) » Stress testing: designed to exercise the software in abnormal situations. - Stress testing attempts to find the limits at which the system will fail, through an abnormal quantity or frequency of inputs. - The test is expected to succeed when the system is stressed with higher rates of inputs, maximum use of memory, or system resources. » Performance testing is usually applied to real-time, embedded systems, in which low performance may have a serious impact on normal execution. - Performance testing checks the run-time performance of the system and may be coupled with stress testing. - Performance is not strictly related to functional requirements: functional tests may fail while performance ones succeed.
  29. Henry Muccini's metaphor » The metaphor I like to use to explain how unit, integration and system testing differ considers - a unit as a “musician” playing his/her instrument. Unit testing consists in verifying that the musician produces high-quality music. - Putting together musicians, each one playing good music, is not enough to guarantee that the group (i.e., the integration of units) produces good music. They may play different pieces, or the same piece with different rhythms. - The group may then play in different environments (e.g., theater, stadium, arena) with different results (i.e., system testing).
  30. Testing Planning throughout the Process (V-model, by level of abstraction; development activities on the left — plan and validate/verify — feed the testing activities on the right — integrate, test, and analyze — as time flows): - User Requirements ↔ Acceptance Testing - Software Requirements Specification ↔ System Testing - Architecture Design Specification ↔ Integration Testing - Component Specifications ↔ Component Testing - Unit Implementations ↔ Unit Testing
  31. Operational Testing » A completely different approach: - We cannot realistically presume to find and remove the last failure. - Then, we should invest test resources in finding out the “biggest” ones. » “Goodness” here means minimising the faults perceived by the user, i.e. - Try to find earlier those bugs that are more frequently encountered (neglect “small” bugs) - Not suitable for safety-critical applications
  32. Structural and Functional Testing » Structural Testing - Test cases selected based on the structure of the code - Views the program/component as a white box (also called glass-box testing) » Functional Testing - Test cases selected based on the specification - Views the program/component as a black box » One can do black-box testing of a program by white-box testing of its specification
  33. Black Box vs. White Box Testing [Figure] “Black box” testing: selected inputs are fed to the software and the resultant outputs are compared with the desired outputs. “White box” testing: in addition, the internal software behavior and design are visible and guide the selection of inputs.
  34. Structural (White-Box) Test Criteria » Criteria based on - control flow - data flow - expected faults » Defined formally in terms of flow graphs » Metric: percentage of coverage achieved » Adequacy based on metric requirements for criteria » Objective: cover the software structure
  35. Flow Graphs » Control Flow - The partial order of statement execution, as defined by the semantics of the language » Data Flow - The flow of values from definitions of a variable to its uses » Flow graphs are a graph representation of control flow and data flow relationships
  36. A Sample Program to Test
      1 function P return INTEGER is
      2    X, Y: INTEGER;
      3 begin
      4    READ(X); READ(Y);
      5    while (X > 10) loop
      6       X := X - 10;
      7       exit when X = 10;
      8    end loop;
      9    if (Y < 20 and then X mod 2 = 0) then
     10       Y := Y + 20;
     11    else
     12       Y := Y - 20;
     13    end if;
     14    return 2*X + Y;
     15 end P;
  37. P’s Control Flow Graph (CFG) [Figure: nodes 2,3,4; 5; 6; 7; 9 (split into 9a and 9b for the short-circuit condition); 10; 12; 14, with T/F edges for the loop test at 5, the exit at 7, and the conditional at 9]
  38. All-Statements Coverage of P » At least 2 test cases needed » Example all-statements-adequate test set: - (X = 20, Y = 10): path <2,3,4,5,6,7,9,10,14> - (X = 20, Y = 30): path <2,3,4,5,6,7,9,12,14>
  39. All-Edges Coverage of P » At least 3 test cases needed » Example all-edges-adequate test set: - (X = 20, Y = 10): path <2,3,4,5,6,7,9a,9b,10,14> - (X = 5, Y = 30): path <2,3,4,5,9a,12,14> - (X = 21, Y = 10): path <2,3,4,5,6,7,5,6,7,5,9a,9b,12,14>
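To check these test sets concretely, here is a Python transcription of program P from slide 36 (an assumed faithful port of the Ada code), executed on both adequate test sets:

```python
# Python port of the Ada program P (slide 36): the while loop with an
# early exit, then the short-circuit conditional, then the result.
def P(x, y):
    while x > 10:
        x -= 10
        if x == 10:      # "exit when X = 10"
            break
    if y < 20 and x % 2 == 0:
        y += 20
    else:
        y -= 20
    return 2 * x + y

# All-statements-adequate set (slide 38):
print(P(20, 10))  # 50  (then-branch: 9,10,14)
print(P(20, 30))  # 30  (else-branch: 9,12,14)
# Extra cases needed for all-edges adequacy (slide 39):
print(P(5, 30))   # 20  (loop never entered: false edge at 5)
print(P(21, 10))  # -8  (loop body twice via the false exit at 7)
```

Running the suite shows why all-edges is stricter: the two all-statements cases never take the false branch of the loop test or of the exit condition, so two extra inputs are needed to traverse every edge.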
  40. P’s Control and Data Flow Graph [Figure: the CFG of slide 37 annotated with the definitions and uses of X and Y along its nodes and edges]
  41. Functional Testing » The internal structure and behavior of the program are not considered » It is assumed that a (formal or informal) specification of the system under test is available; the objective is to analyze whether the system behaves correctly with respect to that specification » This analysis technique is particularly useful when the source code is not available » A functional test consists of analyzing how the system reacts to external inputs
  42. References » [BE90] B. Beizer. Software Testing Techniques. Van Nostrand Reinhold, 2nd edition, 1990. » [M03] E. Marchetti. Software Testing in the XXI Century: Methods, Tools and New Approaches to Manage, Control and Evaluate This Critical Phase. Ph.D. Thesis, Dipartimento di Informatica, Università di Pisa, 2003.
