Software Testing and Validation


Juha Taina
University of Helsinki, Department of Computer Science

1. Introduction

• The software engineering approach, as described in the literature and in software engineering standards, works toward a single goal: to produce high-quality software.
  – Although the users of current software products are surprisingly tolerant of software problems, low-quality software always becomes expensive in the long run.
• While this looks clean enough, we still have to answer the fundamental questions:
  – What kind of software fulfils the high-quality software requirements?
  – How can we ensure that we create high-quality software?
• Views of high-quality software: it
  – fulfils customers' needs,
  – follows its requirements specification,
  – gives plenty of functionality,
  – is the best one can buy at this price.
• For our purposes, software quality is defined as:
  – Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software (Pressman, Software Engineering, 1997).
• The definition gives us the following points:
  – Requirements specifications define the basis for high-quality software. A software product that does not follow its specifications is not a high-quality product.
  – The theory of software engineering defines the framework within which high-quality software must be developed. If the software engineering methods used are not within the framework, the result is not a high-quality program.
  – High-quality software follows both implicit and explicit software requirements.
• One of the implicit software requirements is that the software is well tested.
• We cannot test a program completely, but we can create enough test cases to achieve good test coverage for the product.
• Why can't we test a program completely?
  – The number of possible test inputs and execution paths through the program can easily exceed our computing capacity.
  – Even if exhaustive testing were possible, each bug fix would require a rerun of the test cases.
• Let's take an example. Myers uses the following program as a self-assessment of his readers' ability to specify adequate testing (Myers, The Art of Software Testing, Wiley, 1979):
  – A program reads three integer values. The three values are interpreted as representing the lengths of the sides of a triangle. The program prints a message that states whether the triangle is scalene, isosceles, or equilateral. Develop a set of test cases that you feel will adequately test this program.
• Try to think of the possible test cases before peeking at the next slides.
• Although the problem looks simple at first sight, a significant number of test cases must be developed:
  – All input-related problems:
    • too few or too many parameters,
    • parameters that are not integers,
    • negative or zero parameter values,
    • parameters with a +/- sign and/or spaces,
    • integer overflow problems,
    • etc.
  – Considering that the input values are correct integers:
    • test that the three parameters actually form a triangle,
    • test that the program reports a scalene triangle if and only if none of the parameters are equal,
    • test that the program reports an isosceles triangle if and only if exactly two of the parameters are equal,
    • test that the program reports an equilateral triangle if and only if all three parameters are equal.
  – Output-related problems:
    • test that the output message is correct (no spelling errors in any of the messages, etc.),
    • test indirect output (Unix pipes, for instance),
    • test that the output is correct in localised environments (Chinese characters, etc.),
    • etc.
• Conclusion: even in a program as small as this, the number of test cases grows very large.
• Corollary: testing is not as straightforward as it might first sound.
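A minimal sketch of such a triangle classifier, with a handful of Myers-style test cases, might look like the following. The function name and message strings are illustrative assumptions, not Myers' original program.

```python
def classify_triangle(a, b, c):
    """Classify side lengths a, b, c. Illustrative sketch, not Myers' original."""
    # Input-related checks: the sides must be positive integers.
    for side in (a, b, c):
        if not isinstance(side, int) or side <= 0:
            return "invalid input"
    # The three values must actually form a triangle (triangle inequality).
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or a == c or b == c:
        return "isosceles"
    return "scalene"

# A few of the test cases discussed above.
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(2, 2, 3) == "isosceles"
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(1, 2, 3) == "not a triangle"   # degenerate: 1 + 2 == 3
assert classify_triangle(0, 1, 1) == "invalid input"    # zero-valued parameter
```

Even this sketch shows how quickly the case list grows: each input-, logic-, and output-related bullet above needs at least one concrete test of its own.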
1.1. Testing as a verification and validation tool

• We define software testing as follows:
  – Testing is a managed activity that, systematically and by using theoretically accepted testing methods, tries to reduce the number of defects in a software product.
• Thus, we consider testing to be
  – theoretically sound,
  – managed, and
  – systematic.
• Testing is a tool for assuring that the tested product fulfils the high-quality software requirements. It is used in quality assurance:
  – Quality assurance: (1) A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements. (2) A set of activities designed to evaluate the process by which products are developed or manufactured.
• The quality assurance activities can be categorised into verification and validation.
  – Verification refers to the set of activities that ensure that the software correctly implements a specific function.
  – Validation encompasses the set of activities that ensure that the software is traceable to customers' requirements and users' needs.
• Usually this is stated as follows:
  – Verification: Are we building the product right?
  – Validation: Are we building the right product?
• Testing is not the only tool for verification and validation.
  – Inspections are used to verify a small piece of documentation or software code. An inspection group gathers in an inspection meeting to review the inspected component. All problems found are written down, but their possible solutions are not discussed in the meeting.
  – Simulation is used especially in validation. The finished software is put into a simulated environment to check for possible defects. This often saves both time and money.
• More verification and validation tools:
  – Modelling belongs to all high-quality software creation. Several models at various abstraction levels can be used in validation. The most common example of this is the use of the Unified Modeling Language (UML).
  – Formal methods are used whenever we have a mathematical model of the product. With mathematical formalism it is possible both to automatically create a frame for the software product and to prove its correctness. Unfortunately, this approach is suitable for only a small subset of software products.

1.2. Principles of software testing

• Why testing? Why systematic testing?
  – According to Beizer in "Software Testing Techniques", a professionally produced commercial software system contains 1-3 defects per 100 executable statements when the software engineering group has declared it error-free.
  – Beizer also reported, in "Software System Testing and Quality Assurance", his private bug rate (how many mistakes he made in designing and coding a program) as 1.5 errors per executable statement. This includes all mistakes, including typing errors.
  – While most programmers catch and fix more than 99% of their mistakes before releasing a program for testing, there is still the missing 1% left.
  – Human work is needed in test case design, implementation, management, and analysis.
  – Common estimates of the cost of finding and fixing errors in programs range from 40% to 80% of the total development cost.
  – Since the average debugging effort is about 12 hours of working time for a single defect, testing is one of the most costly phases in a software engineering process.
• Corollary: companies don't spend fortunes on testing and maintenance to "verify that a program works." They spend the money because the program does not work: it has bugs which need to be found.
• Testing is a process of executing or analysing a software program, its structure, or its documentation with the intent of finding an error. Thus, testing is not restricted to the software code.
• A good test case is one that has a high probability of finding an as-yet undiscovered error. Thus, test cases must cover the complete product.
• A successful test is one that uncovers an as-yet undiscovered error. Thus, testing is destructive behaviour, since its intent is to demolish an existing product.
• Testing cannot show the absence of defects. It can only show that software errors are present. Thus, testing cannot prove that a program is correct.
• Testing is by itself a process that must be systematically managed. Thus, testing is part of software process management.
• Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing:
  – All tests should be traceable to implicit and explicit customer requirements.
  – Tests should be planned before testing begins.
  – 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules.
  – Testing should begin "in the small" and progress toward testing "in the large".
  – Exhaustive testing is not possible.
  – To be most effective, testing should be conducted by an independent third party.
• A software engineer should design a computer program that is easy to test.
  – Fortunately, a program that is designed using generally accepted software design and analysis methods is also easy to test.

1.3. The V-model of testing

• As with the other steps in software management processes, testing needs to be planned and conducted in steps.
• Usually the overall organisation of testing follows the V-model.
• The model integrates testing and construction by showing how the testing levels verify the constructive phases.
• The "V" model of software testing (Paakki, Software Testing, 2000) pairs each specification/planning phase with a testing level:

  Specification/planning phase    Testing level
  Requirements elicitation        Acceptance testing
  Requirements analysis           System testing
  Architecture design             Integration testing
  Module design                   Module testing
  Coding                          (bottom of the V)
• The testing process is organised in levels according to the V-model.
• The levels reflect the different viewpoints of testing at different levels of detail.
• Using the V-model integrates testing better into the complete software engineering process. As such, it helps to manage testing.
• The presented V-model is not the only one. In fact, most companies have their own variations of the model.
• The presented V-model testing levels are module testing, integration testing, system testing, and acceptance testing.
  – In module testing, both module structure and functionality are tested. Structure testing is often called white-box testing, since the source code is visible in it. Functionality testing, on the other hand, is called black-box testing, since in it the internal structure of the module is not visible; only its definition is known.
    • Module testing is mostly white-box testing.
  – In integration testing, the objective is to verify that modules co-operate correctly. This is achieved by testing both interfaces and global relationships between integrated modules.
    • Integration testing is mostly black-box testing, although white-box testing can also be used to verify module relationships (for instance, global variables).
  – At the system testing level, the whole system, including hardware, software, and external programs, is tested under the assumption that the individual elements have already been tested.
    • System testing is completely black-box testing. No source code should be visible anywhere.
  – Acceptance testing is the final testing stage before taking the system into full operation. In acceptance testing, customers and users check that the system fulfils their actual needs. Acceptance testing is first conducted in laboratory conditions (so-called alpha testing) and finally in a real environment (so-called beta testing).
    • Acceptance testing is completely black-box testing, as is system testing. Since these testing phases are very similar, they are often combined.

2. Software quality and errors

• The primary task of testing is to find and report errors.
• The result of testing is, hopefully, an improvement in product quality.
• A program's quality depends on:
  – the features that make the customer want to use the program, and
  – the flaws that make the customer wish he'd bought something else.
• Since this material is about testing, we will concentrate on the flaw part of quality: how will we know a flaw when we find it?
  – Thus, what constitutes an error?
• Is a software error a mismatch between the program and its specification?
  – Not really. A bad specification leads to bad programs, and implicit requirements exist outside the specification.
• Is a software error present when the program does not do what its end user reasonably expects it to do?
  – Better. However, since different users have different expectations, what constitutes a software error also varies. One man's cheese is another man's rotten milk.
• Thus, there can never be an absolute definition of software errors. The extent to which a program has faults is measured by the extent to which it fails to be useful.
• Kaner et al. describe 13 major software error categories in "Testing Computer Software":
  – User interface errors:
    • missing functionality,
    • missing (on-line) documentation,
    • bad command structure,
    • missing user commands,
    • bad performance,
    • badly designed output.
  – Error handling:
    • failure to anticipate the possibility of errors and protect against them,
    • failure to notice error conditions,
    • failure to deal with a detected error in a reasonable way.
  – Boundary-related errors:
    • numeric boundaries: very small numbers, very large numbers, numbers near zero;
    • program use boundaries: the first time, the second time, a very large number of times, after a new document, etc.;
    • memory configuration errors: too little memory, too much memory (yes, it's possible).
  – Calculation errors:
    • formula errors,
    • precision errors,
    • rounding and truncation errors,
    • calculation algorithm errors.
  – Initial and later states:
    • the program fails in the first execution round,
    • the program fails in the second execution round,
    • back-up and undo processes are flawed.
  – Control flow errors:
    • wrong execution order,
    • wrong condition branch.
  – Errors in handling or interpreting data:
    • misinterpreted data,
    • missed latest changes during data exchange.
  – Race conditions:
    • the program expects two events A and B to occur in order AB, while order BA is also possible (although very rare).
  – Load conditions:
    • the program fails under a high volume (much work over a long period),
    • the program fails under high stress (maximum load at one time).
  – Hardware:
    • the program sends bad data to devices,
    • the program ignores device status information,
    • the program doesn't recognise hardware failures.
  – Source and version control:
    • old, incompatible versions of modules or libraries used in a new program,
    • incorrect copyright messages, sign-on screens, or version numbers,
    • incorrect platform information.
  – Documentation:
    • poor documentation,
    • incomplete documentation.
  – Testing errors:
    • errors in test code,
    • errors in test data,
    • misunderstood program functionality in tests,
    • incomplete test coverage.
• Recognising these error classes gives us a first basis for error detection. A more detailed analysis is of course needed for each tested product.
• Errors do vary. Some are inconvenient. Others cause loss of human lives.

3. Testing management

• Testing can be seen as a process by itself. It needs to be planned, executed according to the plan, documented, and reported.
• A well-planned and well-managed testing process simplifies test procedures, helps to create high-quality software, and also helps to repeat tests when necessary.
• The following documents can be used for managing the testing process (source: IEEE 829 Standard for Software Test Documentation). The most important documents are in bold.
  – Test plan: prescribes the scope, approach, resources, and schedule of the testing activities.
  – Test design specification: specifies the refinements of the test approach and identifies the features to be tested by the design and its associated tests.
  – Test-case specification: defines a test case identified by a test design specification.
  – Test-procedure specification: specifies the steps for executing a set of test cases.
  – Test-item transmittal report: identifies the test items being transmitted for testing, including the person responsible for each item, its physical location, and its status.
  – Test log: provides a chronological record of relevant details about the execution of tests.
  – Test-incident report (or bug report): documents any event that occurs during the testing process which requires investigation.
  – Test-summary report: summarises the results of the designated testing activities and provides evaluations based on these results.
• The list must be seen as a guideline from which each organisation can specify and tailor its testing documentation practices.
• The document relationships form a chain from planning down to execution:

  Project Doc / Item Doc / Test Item
      -> Test Plan
      -> Test Design Spec(s)
      -> Test Case Spec
      -> Test Proc Spec + Test Item Transmittal Report
      -> Test Execution
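As an illustration of how such documents can be given a machine-readable form, here is a sketch of a test-incident (bug) report record. The field names are assumptions loosely inspired by IEEE 829, not a normative schema.

```python
from dataclasses import dataclass

@dataclass
class TestIncidentReport:
    # Field names are illustrative, loosely following IEEE 829; not normative.
    incident_id: str
    summary: str
    test_case_id: str          # links back to the test-case specification
    observed: str              # what actually happened
    expected: str              # what the specification says should happen
    severity: str = "minor"    # e.g. minor / major / critical
    status: str = "open"       # lifecycle: open -> investigated -> resolved

report = TestIncidentReport(
    incident_id="IR-042",
    summary="Equilateral triangle reported as isosceles",
    test_case_id="TC-007",
    observed="program printed 'isosceles' for sides 2, 2, 2",
    expected="program prints 'equilateral' when all three sides are equal",
    severity="major",
)
```

Structuring incident reports this way makes the chronological test log and the final test-summary report easy to generate mechanically.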
• From test execution onward, the documents chain as follows:

  Test Proc Spec + Test Item Transmittal Report
      -> Test Execution
      -> Test Log(s)
      -> Test Incident Report(s)
      -> Test Summary Report

• The test process model may vary between testing groups.
• The V-model defines the general steps in testing. Each V-model test phase has a private process structure that may be linear or iterative.
• One possible test process model, as presented by Paakki, is minimal in the sense that it includes only the most necessary steps: requirements engineering yields the requirements, which feed test planning; the test plan (after inspection) drives the management of testing; test-case specification produces test cases, which are scripted into test scripts (again inspected); test execution runs the scripts and yields results and bug reports; and the evaluation of results produces the test-summary report.
• Next to the test process management, the testing staff must also be managed. There are four major ways to share testing responsibilities (Paakki, 2000):
  – 1. Testing is each person's responsibility: the product developers are also responsible for testing their own code. Drawback: testing must be done independently; programmers are too biased by their own solutions and blind to their own errors.
  – 2. Testing is each group's responsibility: the product developers within the project group test each other's code (Microsoft: "joint ownership and responsibility"). Drawback: development efforts usually end up dominating over testing efforts.
  – 3. Testing is performed by a dedicated resource: there is a person in the project group who concentrates only on testing. Drawback: a person might be nominated who is less effective in development (and probably in testing as well).
  – 4. Testing is performed by a test organization: testing is a component of quality assurance with a dedicated team. Drawback: additional organizations and processes.
• Some testing tasks are among the most mechanical tasks in a software engineering process (test execution from someone else's scripts, for instance). It is important to ensure that the testing staff does not wear out in testing.
4. White-box testing

• White-box (or glass-box) testing is a testing phase where the source code is visible.
• In white-box testing, the testing group uses their understanding of, and access to, the source code to develop test cases.
• White-box testing is the kind of testing that is best suited to be done once a module or a module integration is complete.
• In white-box testing we test the program structure. We traverse the possible paths and test conditions, loops, and module parameters.
• In the other testing type, black-box testing, we test program functionality. Program functionality is tested from the outside, which is how the customer will work with the program.
• Both white-box and black-box testing are needed. White-box testing finds low-level bugs; black-box testing finds high-level bugs.
• White-box testing provides the following benefits:
  – Focused testing: white-box testing is especially suitable for module-level testing. It is much easier to test an individual module with white-box methods than with black-box methods.
  – Testing coverage: the tester can find out which parts of the program are exercised by any test. White-box testing can tell which lines of code, which branches, or which paths haven't yet been tested, and the tester can add tests that she knows will cover areas not yet touched.
  – Control flow: the programmer can add test-specific code so that the program constantly reports what it is doing. The tester can use this code, or run a debugger, to track the order in which lines of code are executed. Trouble points can be examined and reported immediately.
  – Data integrity: the tester can see which parts of the program modify any item of data. By tracking a data item through the system, she can spot data manipulation by inappropriate modules.
  – Internal boundaries: the tester can see internal boundaries in the code that are completely invisible to black-box testers. For instance, some programs store data in a temporary buffer if too much comes in too quickly. This is visible only in white-box testing.
  – Algorithm-specific testing: some test cases can be derived directly from the algorithms used. For example, there are many ways to invert a matrix, and well-understood ways to miscalculate the result. The programmer can apply standard numerical analysis techniques to predict the results.
• White-box testing methods should always be present in the programming process. The earlier white-box testing is exercised, the earlier low-level bugs are found.
• In the optimal case, the testers in the testing process could concentrate on black-box testing. In practice this is not possible, and testers should at least be familiar with white-box testing methods.
• White-box testing can be at least partially automated with appropriate testing tools.
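The "testing coverage" benefit above can be demonstrated with Python's standard `sys.settrace` hook. This is a minimal sketch of a statement-coverage tracker, not a production coverage tool; the function names are illustrative.

```python
import sys

def trace_lines(func, *args):
    """Run func and record which of its source lines execute (minimal sketch)."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            # Record line offsets relative to the function definition.
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def safe_divide(x, z):
    # The conditional from the coverage discussion: y = z/x only when
    # x != 0 and z >= 0.
    y = None
    if x != 0 and z >= 0:
        y = z / x
    return y

full = trace_lines(safe_divide, 1, 0)     # takes the true branch
partial = trace_lines(safe_divide, 0, 0)  # skips the assignment line
# The true-branch run executes strictly more lines than the false-branch run,
# so the tester sees exactly which line is still uncovered.
assert partial < full
```

A real project would use a dedicated coverage tool, but the principle (compare the set of executed lines against the set of all lines) is the same.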
• The principle in white-box testing is to select test cases such that they force the program to move along different kinds of paths during its execution.
• A path is one flow of control logic from the beginning of the tested entity to the end of the entity during execution.
• When the tested entity is a program module, a path starts from the module start and ends at one of the exit points of the module.
• The criterion for selecting the execution paths depends on the chosen white-box method. However, all methods are based on an analysis of the control flow of the program.
• The control flow is usually presented as a flow graph. The nodes in a flow graph represent program statements, and the edges represent control transfer between the statements.
• The basic flow-graph structures correspond to the basic control structures:
  – (a) Statement sequence: s1; s2; ...; sn. A chain of nodes s1 -> s2 -> ... -> sn.
  – (b) Conditional (if) statement: if (p) then s1 else s2. Node p has a yes-edge to s1 and a no-edge to s2; both continue to the next statement.
  – (c) Loop (while-do) statement: while (p) do s. Node p has a yes-edge to s, an edge from s back to p, and a no-edge out of the loop.
  – (d) Loop (do-while) statement: do s while (p). Node s leads to p, which has a yes-edge back to s and a no-edge out of the loop.
  – (e) Iterative (for) statement: for (s1; p; s2) do s. Node s1 leads to p; p has a yes-edge to s, followed by s2 and back to p, and a no-edge out of the loop.
  – (f) Switch (case) statement: switch (e) {case e1: s1; case e2: s2; ...; case en: sn}. Node e has one edge to each of s1, ..., sn.

4.1. Coverage

• A white-box testing strategy is designed and measured in terms of coverage. Coverage defines how well a module is tested with respect to the chosen test policy.
• For the sake of simplicity, we consider the tested unit to be a module.
• Coverage is defined with a coverage criterion. It indicates how extensively the module has been tested with the given test cases.
• Coverage of a program is quantitative: we can calculate the percentage of coverage in a module. The percentage depends on the chosen coverage criterion.
• Before we cover the various coverage criteria, we need an informal definition of an execution path:
  – An execution path in a flow graph is a sequence of nodes starting from the beginning of the module and ending at a module exit point.
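A flow graph and its execution paths can be represented concretely. The following illustrative sketch uses a plain adjacency dictionary; the graph encodes `if (p) then s1 else s2` followed by `s3`, and back edges are skipped so loops do not cause infinite recursion.

```python
def execution_paths(graph, start, end, path=None):
    """Enumerate all acyclic execution paths from start to end in a flow graph."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for node in graph.get(start, []):
        if node not in path:            # skip back edges to avoid infinite loops
            paths += execution_paths(graph, node, end, path)
    return paths

# Flow graph for: if (p) then s1 else s2; s3
flow_graph = {
    "begin": ["p"],
    "p": ["s1", "s2"],     # yes-branch and no-branch
    "s1": ["s3"],
    "s2": ["s3"],
    "s3": ["end"],
}

paths = execution_paths(flow_graph, "begin", "end")
assert len(paths) == 2   # one path per branch of the conditional
```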
• The definition of the execution path is important for all white-box testing.
• Thus, in white-box testing we assume that a flow graph of the tested module is available.
• Although we assume that the tested unit is a module, the methods can be extended to any program unit for which source code is present.
• At least the following coverage criteria have been defined:
  – Statement coverage: A set P of execution paths satisfies the statement coverage criterion if and only if for every node n in the flow graph there is at least one path p in P such that p contains the node n.
    • In other words, the statement coverage criterion is fulfilled if the test cases force each statement of the tested module to be executed at least once.
    • If a set of execution paths satisfies the statement coverage criterion, we are said to have achieved complete statement coverage.
  – Branch coverage: A set P of execution paths satisfies the branch coverage criterion if and only if for every edge e in the flow graph there is at least one path p in P such that p contains the edge e.
    • In other words, the test cases must cover each branch for both the true and the false value.
    • The branch coverage criterion is a stronger criterion than the statement coverage criterion. For instance, the test "if (a==1) do_something;" needs one test case for statement coverage (a=1) but two test cases for branch coverage (a=1, a=0).
  – Condition coverage: A set P of execution paths satisfies the condition coverage criterion if and only if for every control node in the flow graph consisting of atomic predicates (c1, ..., cn), each ci yields true when evaluated within some path p1 in P and yields false when evaluated within some path p2 in P, i = 1, ..., n.
    • In other words, each atomic test (e.g. a==1) must be evaluated at least once to true and at least once to false in the test cases.
    • If a branch node has n atomic predicates (e.g. a<1 || a>2 || b==0), as many test cases are needed to fulfil the condition coverage criterion (here tff, ftf, fft).
  – Multicondition coverage: A set P of execution paths satisfies the multicondition coverage criterion if and only if for every control node in the flow graph consisting of atomic predicates (c1, ..., cn), each possible combination of their truth values (true, false) is evaluated within at least one path p in P.
    • In other words, when a branch node has n atomic predicates (e.g. a<1 || a>2 || b==0), 2^n test cases are needed to fulfil the multicondition coverage criterion (fff, fft, ftf, ftt, tff, tft, ttf, ttt).
    • This is the most complete coverage criterion of practical relevance when testing is driven through the exploration of control predicates.
• The essential differences between statement coverage, branch coverage, condition coverage, and multicondition coverage can be summarised with the following conditional statement and its flow graph:

  if ((x != 0) && (z >= 0)) y = z/x;

  (In the flow graph, the predicate node "(x != 0) && (z >= 0)" has a yes-edge to "y = z/x" and a no-edge that bypasses the assignment; both edges meet at the exit.)
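The 2^n growth of multicondition coverage can be seen by simply enumerating the combinations. A small sketch:

```python
from itertools import product

def multicondition_cases(n):
    """All 2**n truth-value combinations for n atomic predicates."""
    return list(product([True, False], repeat=n))

# For the two-predicate statement if ((x != 0) && (z >= 0)) y = z/x;
cases = multicondition_cases(2)
assert len(cases) == 4          # tt, tf, ft, ff

# For three atomic predicates (e.g. a<1 || a>2 || b==0):
assert len(multicondition_cases(3)) == 8
```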
• For the conditional statement if ((x != 0) && (z >= 0)) y = z/x; above:
  – Complete statement coverage over the module can be reached with a single test: (x=1, z=0).
  – Complete branch coverage calls for at least two tests: (x=1, z=0) for the yes-branch and (x=1, z=-1) for the no-branch.
  – Complete condition coverage needs at least two tests: (x=0, z=0) for the atomic combination (false, true) over the control predicate, and (x=1, z=-1) for the atomic combination (true, false).
  – Complete multicondition coverage needs four tests: (x=1, z=0), (x=1, z=-1), (x=0, z=0), and (x=0, z=-1), i.e. the combinations (tt, tf, ft, ff).
• Next to the branch-based coverage criteria, we can define path-based coverage criteria. Here we define two criteria: path coverage and independent path coverage.
  – Path coverage: A set P of execution paths satisfies the path coverage criterion if and only if P contains all execution paths from the begin-node to the end-node in the flow graph.
    • This is the ultimate white-box coverage criterion.
    • Unfortunately, loops make it practically impossible to reach full path coverage even for relatively simple programs.
  – Independent path coverage: A set P of execution paths satisfies the independent path coverage criterion if and only if P contains all of the p+1 independent paths over the flow graph, where p is the number of predicate nodes in the graph.
    • A set of paths is called independent if none of them is a linear combination of the others. In other words, an independent path cannot be combined from the other paths.
    • Independent paths collapse loops into single iterations and therefore dramatically reduce the number of test cases needed to explore the module.
• Independent path coverage is a good criterion for path-based testing. It can be made more complete by breaking compound tests into atomic predicates with the following simple rules:

  if (A && B) s1; else s2;   ==>   if (A) { if (B) s1; else s2; } else s2;
  if (A || B) s1; else s2;   ==>   if (A) s1; else { if (B) s1; else s2; }

  – This way the independent path coverage criterion comes closer to the multicondition coverage criterion.
  – Similarly, multipredicate branches are broken into atomic predicates. (The corresponding flow graphs simply chain the predicate nodes A and B, each with its own yes/no edges.)
• The set of independent paths can be constructed by starting from the shortest execution path and incrementally introducing a new path by adding at least one edge that is not yet contained in any path in the set.
  – Sometimes this leads to impossible execution paths. Some fine tuning is then necessary.
• If the tested module has several exit points, a new artificial exit node can be introduced to simplify the test path generation.
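The p+1 bound on independent paths can be computed mechanically. The following hedged sketch counts predicate nodes in an adjacency-dictionary flow graph, treating any node with more than one successor as a predicate; the graph shown encodes the conditional above with its predicates broken into atomic form.

```python
def independent_path_count(graph):
    """Upper bound on independent paths: p + 1, where p is the number of
    predicate nodes (nodes with more than one outgoing edge)."""
    p = sum(1 for succs in graph.values() if len(succs) > 1)
    return p + 1

# Flow graph of: if ((x != 0) && (z >= 0)) y = z/x;  with atomic predicates,
# so x != 0 and z >= 0 appear as separate predicate nodes.
graph = {
    "begin": ["x != 0"],
    "x != 0": ["z >= 0", "end"],   # the false branch skips the assignment
    "z >= 0": ["y = z/x", "end"],
    "y = z/x": ["end"],
    "end": [],
}
assert independent_path_count(graph) == 3   # two predicate nodes -> 3 paths
```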
• When a particular coverage criterion has been chosen, the test cases that satisfy the criterion must be selected. In principle this is straightforward:
– 1. Construct the flow graph of the tested module.
– 2. Choose the minimum number of execution paths that satisfy the criterion, and
– 3. For each selected path, prepare a test case that activates the execution of the path.
• These tasks can be partially automated.
• Example: Create test cases for the following module using independent path coverage.

PROCEDURE average
* This procedure computes the average of 100 or fewer numbers that lie between bounding values. It also computes the sum and the total number valid.
INTERFACE RETURNS average, total.input, total.valid
INTERFACE ACCEPTS value, minimum, maximum
TYPE value [1:100] IS SCALAR ARRAY
TYPE average, total.input, total.valid, minimum, maximum, sum IS SCALAR
TYPE i IS INTEGER

i = 1
total.input = total.valid = 0
sum = 0
DO WHILE value[i] <> -999 AND total.input < 100
  increment total.input by 1
  IF value[i] >= minimum AND value[i] <= maximum THEN
    increment total.valid by 1
    sum = sum + value[i]
  ENDIF
  increment i by 1
ENDDO
IF total.valid > 0 THEN
  average = sum / total.valid
ELSE
  average = -999
ENDIF
END-PROCEDURE average

(Figure: flow graph of the module without atomic predicates; it yields 4 independent paths.)
• Flow-graph nodes are as follows:
1: i=1, total.input = total.valid = 0, sum = 0
2: value[i] <> -999 AND total.input < 100
3: increment total.input by 1
4: value[i] >= minimum AND value[i] <= maximum
5: increment total.valid by 1, sum = sum + value[i]
6: increment i by 1
7: total.valid > 0
8: average = -999
9: average = sum / total.valid
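The pseudocode can be turned into a rough, runnable Python sketch (illustrative only: 0-based list indexing replaces the slide's 1-based array, and names are adapted):

```python
def average_module(values, minimum, maximum):
    """Rough translation of PROCEDURE average: averages the valid numbers
    in values (those within [minimum, maximum]) up to a sentinel of -999
    or 100 inputs, returning (average, total_input, total_valid)."""
    i = 0
    total_input = total_valid = 0
    total_sum = 0
    # The count guard is checked first so that a full 100-item list
    # needs no sentinel element at index 100.
    while total_input < 100 and values[i] != -999:
        total_input += 1
        if minimum <= values[i] <= maximum:
            total_valid += 1
            total_sum += values[i]
        i += 1
    if total_valid > 0:
        avg = total_sum / total_valid
    else:
        avg = -999
    return avg, total_input, total_valid

# Two of the independent-path test cases discussed on the next slides:
print(average_module([-999], 1, 100))     # (-999, 0, 0)
print(average_module([1, -999], 1, 100))  # (1.0, 1, 1)
```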
• The four independent paths are:
1) B,1,2,7,8,E
2) B,1,2,7,9,E
3) B,1,2,3,4,6,2,...,7,8,E
4) B,1,2,3,4,5,6,2,...,7,9,E
(Path 4 is a better alternative to path B,1,2,3,4,5,6,2,...,7,8,E, which is also independent of paths 1-3.)
• The test cases could be:
1) value[0] = -999
2) Path cannot be tested standalone.
3) minimum = 1, value[0] = 0, value[1] = -999
4) minimum = 1, value[0] = 1, value[1] = -999
• The original independent path coverage gives rather poor white-box test cases.
– The maximum number of data items is never tested.
– The test against a maximum input value is also never tested.
• To get better test cases we must break the non-atomic predicates in branches into atomic predicates. This is done in the next slide.
(Figure: flow graph of the module with atomic predicates; it yields 6 independent paths.)
• Flow-graph nodes are as follows:
1: Same as earlier
2.1: value[i] <> -999
2.2: total.input < 100
3: Same as earlier
4.1: value[i] >= minimum
4.2: value[i] <= maximum
5-9: Same as earlier
• It is important to note that some independent paths cannot be tested in standalone fashion.
– The combination of data required to traverse the path cannot be achieved in the normal flow of the program.
• In such cases, these paths are tested as part of another path test.
• Now the six independent paths are:
1) B,1,2.1,7,8,E
2) B,1,2.1,7,9,E
3) B,1,2.1,2.2,...,7,9,E
4) B,1,2.1,2.2,3,4.1,6,2,...,7,9,E
5) B,1,2.1,2.2,3,4.1,4.2,6,2,...,7,9,E
6) B,1,2.1,2.2,3,4.1,4.2,5,6,2,...,7,9,E
• The test cases could be:
1) value[0] = -999
2) Cannot be tested standalone.
3) minimum = 1, maximum = 2, value[0]..value[99] = 1
4) minimum = 1, maximum = 2, value[0] = 1, value[1] = 0, value[2] = -999
5) minimum = 1, maximum = 2, value[0] = 1, value[1] = 3, value[2] = -999
6) minimum = 1, maximum = 2, value[0] = 1, value[1] = 2, value[2] = -999
• Sometimes an independent path must be expanded to achieve the end condition. Path 3 earlier is a typical example of this.
4.2. Loop testing
• One problem of the independent path coverage testing is that it collapses loops into a single iteration when possible. Thus, many loop-related bugs are not discovered by the conventional coverage methods.
• Yet loops are a cornerstone of a vast majority of algorithms. They must be tested with more than the smallest number of test cases that span a particular coverage.
• Loop testing is a white-box technique that focuses on the validity of looping constructs with respect to the control flow of the program.
• Four different classes of loops can be defined, each with a testing strategy of its own:
– simple loops,
– nested loops,
– concatenated loops, and
– unstructured loops.
(Figure: (a) simple loops, (b) nested loops, (c) concatenated loops, (d) unstructured loops.)
• Typically loop bugs lurk in corners and attack at boundaries. Thus interesting loop iterations are:
– 0 rounds (skip the loop entirely)
– 1 round
– 2 rounds
– m rounds (a typical value)
– max-1 rounds
– max rounds
– max+1 rounds
• If it is known that the loop has internal boundaries, the boundaries are treated the same way as maximum values:
– A loop reads a physical file that is made of blocks of 1024 bytes; then we do 0, 1, 2, typical size, 1023, 1024, 1025, max-1, max, and max+1 iterations.
– A dynamic array grows physically after 256 added data items: 0, 1, 2, typical size, 255, 256, 257, max-1, max, and max+1 iterations.
• Simple loops:
– Each simple loop is analysed with the previously explained passes (0, 1, 2, typical, etc.)
– If the number of iterations is fixed or restricted, we may be able to test just a few of the iteration patterns given earlier.
  FOR i=0 to 10 DO ....
• Nested loops:
– If we extend the simple loop strategy to nested
loops, the number of necessary tests will grow geometrically with the level of nesting.
– Sometimes the max value is hardware-dependent, and sometimes it is difficult to calculate.
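The interesting iteration counts listed earlier, including internal boundaries, can be generated by a small sketch (illustrative; names are invented):

```python
def loop_iteration_tests(typical, maximum, internal=()):
    """Interesting iteration counts for a simple loop: 0, 1, 2, a typical
    value, max-1, max, max+1, plus b-1, b, b+1 for every known internal
    boundary b (e.g. a 1024-byte file block size)."""
    counts = [0, 1, 2, typical]
    for b in internal:
        counts += [b - 1, b, b + 1]
    counts += [maximum - 1, maximum, maximum + 1]
    # deduplicate and keep ascending order
    return sorted(set(counts))

# A loop over a file of 1024-byte blocks with at most 5000 iterations:
print(loop_iteration_tests(typical=10, maximum=5000, internal=(1024,)))
# [0, 1, 2, 10, 1023, 1024, 1025, 4999, 5000, 5001]
```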
– The geometrical growth soon creates too many test cases to be practical. A two-level or at most a three-level nesting is the maximum for this kind of complete testing. For instance, when we have a nested loop with a known inside boundary (like a file block size) we get the following cases:
• Two-level nesting needs 10*9+1 = 91 tests.
• Three-level nesting needs 10*9*9+1 = 811 tests, etc.
• The extra one case comes from the zero case when all loops are skipped.
• Combinations (0,0,any) and (0,any,any) are not possible.
– Since the number of test cases soon grows to impractical numbers, the following optimised strategy may be applied:
1) Start at the innermost loop. Set all other loops to minimum values.
2) Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter (e.g. loop counter) values. Add other tests for out-of-range or excluded values.
3) Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to "typical" values.
4) Continue until all loops have been tested.
• Concatenated loops:
– Concatenated loops can be tested using the approach defined for simple loops if each of the loops is independent of the others.
– If two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.
– When the loops are not independent, the approach applied to nested loops is recommended.
• Unstructured loops:
– Whenever possible, this class of loops should be redesigned to reflect the use of well-behaving loops (simple, nested, and concatenated loops).
– Such restructuring is always possible since nested loops along with condition branches allow any combination of loop iterations.
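The optimised nested-loop strategy above can be sketched as a test-configuration generator. This is a simplification of the slide's steps (every non-varied loop is held at a minimum of 1 iteration, and "typical" values for inner loops are omitted):

```python
def nested_loop_configs(levels, counts, minimum=1):
    # One loop level varies at a time (innermost first); all other
    # levels are held at the minimum iteration count.
    configs = []
    for level in range(levels):
        for c in counts:
            config = [minimum] * levels
            config[level] = c
            configs.append(tuple(config))
    return configs

# Three nesting levels tested with counts 0, 1, 2 need only 9
# configurations, linear in the nesting depth instead of geometric:
configs = nested_loop_configs(3, [0, 1, 2])
print(len(configs))  # 9
```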
4.3. Data-flow testing
• The white-box testing methods presented earlier are based on studying the control flow over the program under test.
• Although various path testing coverage criteria are highly effective, they are not sufficient alone.
• The path testing coverage criteria are based on paths. Yet some bugs lurk in data.
• Even if every path is executed once, the program can still have bugs. Here is an example:
  a = b/c;
– This works if c is nonzero, but what if c is 0? The difference between the two cases is a difference in data, not in paths.
• A path is called error-sensitive if an error in it can be detected when it is exercised. A path is error-revealing if the error always appears when the path is taken.
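The a = b/c example as a runnable sketch: the very same path gives a correct result or a failure depending only on the data.

```python
def divide(b, c):
    return b / c  # a single path, but error-sensitive to the value of c

print(divide(6, 3))  # 2.0: this data passes through the path cleanly
try:
    divide(6, 0)     # same path, different data: now the error appears
except ZeroDivisionError:
    print("error revealed only by c == 0")
```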
• Any path that includes the division b/c is error-sensitive, since the error depends on the value of c.
• Due to this, a still more detailed category of methods, data-flow testing, additionally takes into account the things that might happen to data when the module is executed.
• In data-flow testing, the central events related to variables are value assignments (value writes) and value uses (value reads).
• The value use may be in a computation or in a control predicate. Both cases are evaluated in data-flow testing.
• Actually, the value assignment may be in a control predicate as well, for instance in the C language and in other similar languages.
• A special flow graph called a data flow graph is generated for each analysed variable. It is then used to define variable-specific test cases.
• A data flow graph is similar to a traditional flow graph, except that it has added syntax for variable presentation. The presentations vary between models, but mostly they show variable definitions and uses.
• An example of a data flow graph is in the next slide.
(Figure: an example data flow graph: read(i); the predicate (i < 0) || (i > 100) branching to error(); sum = 0; x = 0; the loop predicate x < i with body x = x + 1; the predicate (i == 1) || (i == 10) selecting sum = 1 or sum = sum + x; and print(sum). Nodes are annotated with definition (d) and use (c, p) marks.)
• The data-flow methods are usually complex, relatively difficult to use, and not very good at detecting data errors.
• The data-flow analysis is usually based on a single variable. Thus, each variable needs its own analysis and test cases. This can lead to a very complex testing procedure.

5. Black-box testing
• Black-box testing focuses on the functional requirements of the software. The program is considered a black box without structure.
• Program logic may be present in black-box analysis, but implementation details are not.
• The internal details of modules and subsystems are hidden and cannot be studied from the outside.
• While lately some progress has been made in data-flow testing, value-dependent error detection is still mostly left for black-box testing.
• Black-box testing concentrates on the interfaces of modules and subsystems. It is used to test externally observable functionality and input-output behaviour. It is often called functional testing, while white-box testing is procedural testing.
• Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary approach that is likely to uncover a different class of errors than white-box methods.
• Unlike white-box testing, which is performed early in the testing process, black-box testing tends to be applied during the later stages of testing.
• The theory of black-box testing is not as mature as the theory of white-box testing, due to the lack of a mathematical background for functionality.
• Yet black-box testing is the kind of testing that most people consider "real testing".
• Still, black-box testing need not be magic.
• Black-box testing is based on input values. If we could test the program with all possible input values and evaluate the results of the tests, then we could say that we have complete black-box testing.
• Unfortunately this would imply
– all possible valid inputs,
– all possible invalid inputs,
– all possible edited inputs,
– all possible combinations of inputs, and
– all possible input timings.
• Even with a small input value set, the number of possible test cases would grow too large. Due to this we need to prune the input value set.
• The solution for the pruning lies in equivalence classes.
– An equivalence class is a set of input values that cause an equal execution path in the program.
– Thus, any value in the class represents the behaviour of all values in the class.
• Hence, when we divide the input set into a set of equivalence classes and pick one input value from each class, we get complete black-box testing.
• Unfortunately (again), the chosen equivalence classes depend on the analyser. Two people analysing a program will come up with different lists of equivalence classes.
• Here are a few recommendations for looking for equivalence classes:
– Look for invalid inputs.
• If all invalid inputs are processed similarly, they create one equivalence class. If some cause different program execution (not just a different error message), they create several equivalence classes.
– Look for ranges of numbers.
• A range creates several equivalence classes. Let's say that we accept values 1-99. Then we have
– a class of values 1-99,
– a class of values < 1,
– a class of values > 99, and
– a class of non-numbers.
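The 1-99 range above can be turned into a small classifier sketch (illustrative; the class names are invented):

```python
def equivalence_class(raw):
    # The four classes for a program that accepts values 1-99.
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return "non-number"
    if value < 1:
        return "below 1"
    if value > 99:
        return "above 99"
    return "valid 1-99"

# One representative input per class:
print([equivalence_class(x) for x in ["50", "0", "100", "abc"]])
# ['valid 1-99', 'below 1', 'above 99', 'non-number']
```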
– Look for membership in a group.
• If an input must belong to a group, one equivalence class includes all members of the group. Another includes everything else.
• Sometimes it is reasonable to subdivide the classes into smaller subclasses.
– Analyse responses to lists and menus.
• When the program responds differently to each alternative of a list or a menu, the alternatives are equivalence classes.
• If several alternatives create the same response (a menu entry and a hot key, for instance), they form a single equivalence class.
– Create time-determined equivalence classes.
• When timing matters to input values, we get several time-related equivalence classes. Let's say that time t is a critical input time. Then we get the following equivalence classes:
– input values long before time t,
– input values shortly before time t,
– input values at time t,
– input values shortly after time t, and
– input values long after time t.
• Naturally this kind of partitioning is relevant only when time plays a critical role in the system.
– Look for variable groups that must calculate to a certain value or range.
• Sometimes a function determines the equivalence classes. Let's assume that we have a program that accepts the three angles of a triangle. Thus we have a function a1+a2+a3 = 180, where a1, a2, and a3 are the given angles. Then we have four equivalence classes:
– The sum of the input values is 180.
– The sum of the input values is < 180.
– The sum of the input values is > 180.
– One or more of the input values is not a valid number.
– Look for equivalent output events.
• The equivalence class may be determined by an output event. For instance, if a program drives a plotter that can draw lines up to four inches long, a line might be within the valid range, there might be no line, the program might try to plot a line longer than four inches, or it might try to plot something else altogether.
• Some difficulty lies in determining what inputs to feed the program to generate these different outputs.
• Sometimes many classes of inputs should have the same effect, and unless we know for certain that they are treated similarly in the program, they should be separated into different equivalence classes.
– Look for equivalent operating environments.
• If a program has expected or known hardware limitations, they create equivalence classes. For instance, if a program is stated to work with memory sizes of 64M - 256M, the three equivalence classes include a memory < 64M, a memory in 64M - 256M, and a memory > 256M.
• Sometimes hardware expectations cause severe failures. For instance, if the program has a race condition, it is possible that a slower (or a faster) CPU triggers it and the program fails.
• All hardware-specific limitations create equivalence classes (where the input value is hardware instead of data).
• Usually we are not dealing with scalar inputs. Instead, we need an input vector that defines several input parameters to the program.
• In that case the input domain of the system in a mathematical sense is a cartesian product:
D = D1 x D2 x ... x Dn, where D1, D2, ..., Dn are the input domains of the vector elements.
• Then the equivalence classes for the whole input can be designed as combinations of the classes for the elementary domains.
• For instance, let's assume that a program accepts two integers between 100 and 200.
• For each of the input integers we get the following equivalence classes:
– valid numbers 100...200
– integers < 100
– integers > 200
– non-integers
• Altogether we get 16 equivalence classes:
– (x valid, y valid),
– (x < 100, y valid),
– (x > 200, y valid),
– (x non-integer, y valid),
– (x valid, y < 100),
– (x valid, y > 200),
– (x valid, y non-integer),
– ...
– (x non-integer, y non-integer).
• As can be noticed, a complete cartesian product even over relatively simple domains soon leads to an explosion in the number of equivalence classes.
• It is also important to notice that almost all of the cases are illegal combinations.
• One way to prune the equivalence class set is to take a few cases for closer inspection.

5.1. Class representative selection
• In an optimal case the equivalence class partition is sound and complete. No matter which representative we select, the result of the corresponding test remains the same.
• This sort of leads to equivalence classes of equivalence classes.
• In the formal sense, "identical" behaviour is not a solvable problem. So, without executing the program it is usually not possible to know whether two different input values are actually processed in the same way.
• Since we cannot ensure that our partition is perfect, it is bad testing practice to pick just a single value from each equivalence class.
• Instead we take two or three values from each class and run the tests with them.
• If the chosen input values produce different test results, then we know that the equivalence class is too large: each member of an equivalence class should generate the same test response.
• In such a case the equivalence class is partitioned into two smaller classes such that the differing inputs are no longer in the same class.
• When a testing strategy is based on equivalence classes, the best input values reside close to class boundaries.
• The values most likely to reveal bugs in the tested software are those that reside at a boundary line between several equivalence classes.
• For instance, if our program accepts integers between 10...100, the boundary values 10 and 100 reveal lower and higher relational operation bugs.
• If we had picked any other values from the equivalence class, the relational operators would not have been tested.
• Also, when a program fails a test that uses any value from an equivalence class, it will fail the same test when using a boundary value. The reverse is not true, as stated earlier.
• A special case in the boundary selection is the value 0, which should be selected no matter where it resides in its equivalence class.
• Actually, quite often an equivalence class including the value 0 is divided into three equivalence classes: values < 0, value 0, and values > 0.
• If we are not at all certain of the equivalence classes, it is often better to consider the input domain as a single class and pick random input values from there.
• This leads to better testing coverage than picking random equivalence classes and their representatives.

5.2. Control and data flow testing
• So far we have assumed that we have absolutely no knowledge of the design of the tested program.
• Usually, however, we have information on the control flow in data flow diagrams, state transition diagrams, and modeling language presentations.
• This information should be used in equivalence class generation.
• When the analysis includes a data flow diagram, we have information about the control logic of the program.
• The data flow analysis has the following steps:
– 1.
Examine the requirements and compare them to the data flow model generated. The model and the requirements should be equal: the model is a lower abstraction of the requirements.
– 2. If the model and the requirements are not equal, the model must be updated to be a presentation of the requirements. However, if the model represents only a subset of the requirements, then the model need not be updated. The test cases will just reflect the modelled subsystem.
• Often this information includes at least defined logical processes, data flowing between the processes, and conditions presenting when data flows to the processes.
• This information is already analysed and presents some or all of the requirements for the program.
– 3. Number the nodes uniquely. The numbers will be used as node names later.
– 4. For each of the data flow branches, identify the branch condition and add it to the model.
– 5. Create a start node from where all incoming data flows start, and an end node to where all outgoing data flows end.
– 6. Break all composite predicates into atomic predicates. Add new processes and flows to reflect the changed control flow logic.
– 7. Now you should have a black-box based control flow graph of the tested system. Generate test cases based on the independent path coverage criterion, as presented in white-box testing. The test inputs are the incoming flows. The test outputs are the outgoing flows.
– 8. Each generated test case represents an equivalence class. Identify the class boundaries and add new classes when necessary.
• As an example, let's take a look at the problem of opening a door which may or may not be locked:
(Figure: a data flow diagram with processes Open door, Step in, Close door, Check lock, Use key, and Call help, connected by flows labelled Did open, Didn't open, Locked, Not locked, and Key.)
• In the data flow diagram the door is opened. If the opening is successful, the opener steps in and closes the door behind her. If something goes wrong, an attempt is first made to unlock the door with a key, and if this does not work, help is called to open the door.
• In the generated black-box control-flow graph we have numbered the nodes, added an entry and an exit node, and identified the branch tests p1..p6.
Independent paths: B,1,2,3,E (door not locked and opens) • This can be achieved with help from B,1,4,5,E (door not locked but doesn’t open) B,6,2,3,E (door is locked and opens with a key) requirements specifications and user B,6,5,E (door is locked and doesn’t open with a key) manuals. Software Testing and Validation, page -131- Software Testing and Validation, page -132- 22
5.3. Outcome prediction
• Now that we have chosen our equivalence classes and run our tests, we still need to analyse the outcome of the tests. This is not as straightforward as it first sounds. We need an oracle.
• The first idea that comes to mind is to play a manual computer and try to predict the outcome of a specific input. It seldom works.
• First of all, it can be very hard work if the control logic of the program is complex.
• Second, while simulation is a good way for validation, humans are not very good at simulating computers. The result is extremely error-prone.
• Third, when we do manual prediction, we add a considerable new source of errors to the testing procedure. We are likelier to make an error in the manual outcome than the programmer is in programming.
• Beizer lists in his book "Black-Box Testing" (Beizer, 1995) several more attractive alternatives:
– Existing tests: Most testers and programmers work on modifications to a base of existing software. When most tests do not change from release to release, they can and should be re-used.
– Old program: A major rewrite need not imply that its outputs are completely different. The old program is an excellent oracle when it is available and its specification is valid.
– Previous version: Even if the tested code has been rewritten, the previous version of the program will often have the correct outcome for most test paths.
– Prototypes: A prototype is an excellent oracle. Good prototypes do not lack functionality. The reason that they are not operationally useful is that they may be too slow, too big, not maintainable enough, or unable to run in the target environment.
– Model programs: A model program is a simplified prototype that includes only functionality that is directly related to the application. Such a program is considerably easier to build than a complete program with error checking, user interfaces, and data communication.
– Forced easy cases: Sometimes it is possible to select input values from an equivalence class such that the outcome is trivial to calculate. Such values are excellent input candidates since they simplify output analysis.
– The actual program: The actual program can be used as an oracle as long as the program requirements are well understood and the program output is carefully verified. Usually it is easier to verify the correctness of an outcome than it is to calculate the outcome by manually simulating the computer. When the program includes information about its intermediate states, the verification procedure becomes significantly easier.
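The "old program" and "previous version" oracles boil down to back-to-back testing: run both versions on the same inputs and compare the outcomes. A minimal sketch (both functions are invented stand-ins):

```python
def old_average(values):
    # previous release: trusted as the oracle
    return sum(values) / len(values)

def new_average(values):
    # rewritten code under test
    total = 0.0
    for v in values:
        total += v
    return total / len(values)

# Back-to-back comparison over the chosen equivalence-class representatives:
for case in ([1, 2, 3], [10], [2.5, 7.5]):
    assert new_average(case) == old_average(case)
print("back-to-back checks passed")
```

The point of the technique is that nobody has to predict the outcome by hand; the old implementation supplies it.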
5.4. Main differences between black-box and white-box testing
• white-box: source code is needed,
• black-box: test cases are obtained directly from the design,
• white-box: test cases are obtained by separate code analysis,
• black-box: less systematic and "formal", more ad hoc and based on intuition ==> technically easier but more risky,
• white-box: strong theoretical background, automation possibilities (coverage),
• black-box: no standard notion of internal testing quality (coverage), and
• black-box: more common in industry.

6. Module testing
• As mentioned in the V-model, we have at least the following steps in a testing process:
– module testing, where each unit is tested separately,
– integration testing, where units are integrated and unit interfaces and dependencies are tested,
– system testing, where the whole system, including hardware and other software components, is tested together, and
Number of arguments transmitted to called are required before any other test is initiated. modules equal to number of parameters? • If data do not enter and exit properly, all – 5. Attributes of arguments transmitted to called other tests are useless. modules equal to attributes of parameters? • Myers presents in his book (Myers, The Art – 6. Unit system of arguments transmitted to of Software Testing, 1979) a checklist for called modules equal to unit system of interface tests. parameters? – 1. Number of input parameters equal to number of arguments? Software Testing and Validation, page -143- Software Testing and Validation, page -144- 24
  • 25. – 7. Number attributes and order of arguments to – 1. File attributes correct? built-in functions correct? – 2. OPEN/CLOSE statements correct? – 8. Any references to parameters not associated – 3. Format specification matches I/O statement? with current point of entry? – 4. Buffer size matches record size? – 9. Input-only arguments altered? – 5. Files opened before use? – 10. Global variable definitions consistent across – 6. End-of-file conditions handled? modules? – 7. I/O errors handled? – 11. Constraints passed as arguments? – 8. Any textual errors in output information? • When a module performs external I/O, • When all these steps are properly executed, additional interface tests must be conducted. unit integration becomes much simpler and Again, Myers: easier to manage. Software Testing and Validation, page -145- Software Testing and Validation, page -146- – 4. inconsistent data types. 6.2 Local data structures – 5. underflow, overflow, and addressing exceptions. • The local data structures for a module are a common source of errors. According to • In addition to local data structures, the Pressman (1997), test cases should be impact of global data on a module should be designed to uncover errors in the following ascertained during module testing. categories: • If the effects of accessing global data from a – 1. improper or inconsistent typing. module cannot be predicted, it may be a – 2. erroneous initialisation or default values. sign of bad design, and it is absolutely a risk – 3. incorrect (misspelled or truncated) variable for nasty unpredictable errors. names. Software Testing and Validation, page -147- Software Testing and Validation, page -148- • Whenever possible, path and loop testing 6.3. Path and loop testing should include test cases for testing error- • Selective testing of execution paths is an sensitive paths (that is, paths where an error essential task during module testing. depends on the value of the test input). 
• Independent path coverage and loop testing • Data flow coverage methods can be used for are effective techniques for uncovering a error-sensitive path testing. However, when broad array of path errors. modules are independent, well designed, and small enough, the error-sensitive • Path and loop testing are good at finding sentences are easy to isolate from the code. errors in predicates since a predicate implies a branch in software code. • Then new test cases can be created to especially test the error-sensitive paths. Software Testing and Validation, page -149- Software Testing and Validation, page -150- 25
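As a concrete illustration of predicates implying branches, each branch outcome — including the error-sensitive one — gets its own test case. The function below is illustrative, not from the course material:

```python
def classify(balance, limit):
    """Illustrative code with two predicates, i.e. three execution paths."""
    if balance < 0:           # error-sensitive path: depends on input value
        return "overdrawn"
    if balance > limit:
        return "over limit"
    return "ok"

# One test case per path through the predicates.
cases = [(-1, 100, "overdrawn"), (150, 100, "over limit"), (50, 100, "ok")]
results = [classify(b, l) == expected for b, l, expected in cases]
```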
6.4. Error condition testing
• Error conditions are often the least tested areas in software. Yet high-quality software should include clear and well-covered error handling, including
– error detection,
– error recovery,
– clean exit (if recovery is not possible), and
– easily testable error condition paths.
• Among the potential errors that should be tested when error handling is evaluated are:
– 1. Error description is unintelligible.
– 2. Error noted does not correspond to error encountered.
– 3. Error condition causes system intervention prior to error handling.
– 4. Exception-condition processing is incorrect.
– 5. Error description does not provide enough information to assist in locating the cause of the error.

6.5. Boundary testing
• Boundary testing is the last task of the module testing step.
• Since errors lurk in boundaries, this step must be executed with extreme care.
• We have already covered various boundary conditions in loop testing and black-box equivalence classes.
• At least the following must be tested:
– 1. Parameter boundaries.
• The minimum and maximum values of a parameter are good test inputs. The value zero is also a good one. This is black-box testing.
– 2. Loop boundaries.
• As mentioned in loop testing, 0, 1, n-1, n, and n+1 iterations must be tested for each loop. Here n is the maximum number of passes in the loop.
– 3. Data structure boundaries.
• Data structures are fruitful in detecting boundary errors. For instance, a dynamic data structure should be tested with values that make the data structure grow (or shrink). This is mostly black-box testing.

6.6. Module test procedures
• Module testing is normally considered a step in program coding.
• This is true in the sense that module testing should be done as soon as the module is finished.
• Nevertheless, it is still a testing step which needs both white-box and black-box testing methods.
• Because a module is not a standalone program, driver and/or stub software must be developed for each module test.
• In most applications a driver is a procedure that accepts test case data, passes the data to the tested module, gathers result information, and forwards the result information to an analysing procedure, which returns information on the outcome of the test.
• Stubs serve to replace modules that are called by the module to be tested.
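A driver/stub pair for such a module test might look like the following Python sketch; all names are illustrative, and `fetch_rate` stands in for a called module that is replaced by a stub:

```python
calls = []

def fetch_rate_stub(currency):
    """Stub: records a verification of entry, does minimal work, returns."""
    calls.append(currency)
    return 1.25                      # canned result

def convert(amount, currency, fetch_rate):
    """Module under test; in the real program fetch_rate is a real module."""
    return round(amount * fetch_rate(currency), 2)

def driver(test_cases):
    """Driver: accepts test case data, passes it to the tested module,
    gathers the results, and reports the outcome of each test."""
    outcomes = []
    for amount, currency, expected in test_cases:
        got = convert(amount, currency, fetch_rate_stub)
        outcomes.append((got == expected, got))
    return outcomes

results = driver([(100, "EUR", 125.0), (8, "EUR", 10.0)])
```

Passing the stub in as a parameter is one way to splice it in; in languages without first-class functions, the stub is typically linked in place of the real module.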
• A stub uses the caller module’s interface, does minimal data manipulation, prints a verification of entry, and returns.
• Drivers and stubs represent overhead. They must be present in testing, but they are not included in the final product.
• Because of this, drivers should be designed so that they can be used on several modules.
• Stubs should be designed so that they do the absolute minimum of tasks necessary.

7. Integration testing
• The next step after module testing is integration testing. In it the modules are systematically added into an evolving system.
• Integration testing tests module interfaces and co-operation. Two standard criteria of modularity have a great effect on these areas: module cohesion and module coupling.
• Module cohesion describes how well a module performs a single task within a software procedure, requiring little interaction with procedures being performed in other parts of a program.
• In other words, a highly cohesive module does just one thing well and with minimum interaction with other modules.
• The exact cohesion level of a module is not important. Rather, it is important to strive for high cohesion and recognise low cohesion.
• Module coupling is a measure of interconnection among modules in a program structure.
• Coupling depends on the interface complexity between modules, what data pass across the interface, and how global software entities are shared between them.
• The lower the coupling between two modules, the less they interact with each other, and the easier they are to integrate.
• The various coupling levels from lowest to highest are as follows:
– No coupling: two modules are totally unrelated.
– Data coupling: module interface uses variables.
– Stamp coupling: module interface uses a portion of a data structure.
– Control coupling: module interface passes a control flag (a variable that controls decisions in the accessed module).
– External coupling: modules are tied to an environment external to the software (for example, one module reads a file, another writes it).
– Common coupling: modules are tied to global variables.
– Content coupling: one module makes use of data or control information maintained within the boundary of another module, or branches are made into the middle of a module.
• All coupling levels excluding content coupling have their uses. The important thing is to strive for the lowest possible coupling.
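The difference between data coupling and control coupling is easy to see in code; a small sketch with illustrative functions:

```python
# Data coupling: the interface passes only the data the module needs.
def net_price(price, vat_rate):
    return price * (1 + vat_rate)

# Control coupling: a flag passed through the interface steers decisions
# inside the called module, so the caller must know the callee's internals.
def format_price(price, as_cents):
    if as_cents:
        return int(round(price * 100))
    return round(price, 2)

plain = net_price(100, 0.25)
cents = format_price(1.5, True)
```

The control-coupled version is harder to integrate and test, because every caller must be exercised with both flag values.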
• When all the modules are linked and tested as a whole, we are doing big-bang integration. This is the most common integration policy in small programs.
• Big-bang integration is a simple solution which requires no extra effort, because neither drivers nor stubs are needed.
• The problems arise when failures occur. The testers probably have no idea in which module the actual error might lurk, since the tested unit is a monolithic program.
• Two more elegant approaches to integration testing are top-down integration and bottom-up integration.
• In top-down integration the highest-level control module (main loop) is integrated with its called modules. The other modules in the next level are replaced with stubs.
• When co-operation between these modules has been tested, the stubs are replaced with the next-level modules, whose called modules are again replaced with stubs.
• The previous procedure is repeated until all modules are integrated into the system; thus the name top-down integration.
• Top-down integration is suitable for situations where program logic needs first-level testing. It is also suitable for situations where we want to have an immediately executable program.
• The drawback of top-down integration is the need for stubs. Some of the stubs may become very complex.
• In bottom-up integration the lowest-level modules are integrated into clusters. Each cluster fulfils one subtask of the program.
• Clusters are tested separately. Thus, each cluster needs a driver that feeds it test cases.
• When all clusters are tested, their drivers are replaced with next-level control modules. This generates new clusters that are a little larger than the earlier ones.
• The procedure is repeated until all clusters are combined into a single program.
• Bottom-up testing is suitable for software where program functionality needs more testing than program logic. Problems in logic reflect down to the processing clusters, which may nullify the results of already-made tests.
• Regression testing is needed to ensure that earlier integration testing is still relevant.
• The good side of bottom-up testing is the lack of stubs. Apart from drivers, all needed code is directly related to the program.
• Bottom-up pros and cons (Paakki, 2000):
+ Drivers are easier to write than stubs.
+ The approach provides a greater opportunity for parallelism than the other approaches; that is, several teams can test different subsystems at the same time.
+ The approach is effective for detecting detailed design or coding errors early enough.
+ The approach detects critical performance flaws that are generally associated with low-level modules.
+ The approach supports the modern software engineering paradigms based on classes and objects, or reusable stand-alone components.
− A prototype of the system is not available for (user) testing until the very end.
− The approach is not as effective for detecting structural design flaws of large (top-level) scale.
7.1. Regression testing
• Each time a new module is added as part of integration testing, the software changes:
– new data flow paths are established,
– new I/O may occur, and
– new control logic is invoked.
• The changes may cause problems with functions that previously worked flawlessly.
• Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
• In a broader sense, regression testing is the activity that helps to ensure that changes in software (due to testing or maintenance) do not introduce unintended behaviour or additional errors.
• The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
– A representative sample of tests that will exercise all software functions.
– Additional tests that focus on software functions that are likely to be affected by the change.
– Tests that focus on the software components that have been changed.
• As integration testing proceeds, the number of regression tests can grow quite large.
• The regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
• It is impractical and inefficient to re-execute every test for every program function once a change has occurred.
• Automated tests are especially useful in regression testing.

8. Other testing phases
• When module and integration testing are finished, the software is completely assembled as a package.
• The next step in testing is then system testing. It is a series of different tests whose primary purpose is to fully exercise the computer-based system, including all software and hardware components.
• System testing may include at least the following testing series:
– Recovery testing. A system test that forces the software to fail in various ways and verifies that recovery is properly performed.
– Security testing. A system test that attempts to verify that protection mechanisms built into a system will in fact protect it from improper penetration.
– Stress testing. A system test that executes the system in a manner that demands resources in abnormal quantity, frequency, or volume.
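The three classes of regression test cases can be selected mechanically if each test is tagged with the program functions it exercises; a sketch (test names and tags are illustrative):

```python
# Each test name is tagged with the program functions it exercises.
suite = {
    "test_login_ok":      {"login"},
    "test_login_bad_pw":  {"login"},
    "test_report_totals": {"report"},
    "test_export_csv":    {"export", "report"},
}

def regression_subset(changed_functions, representative_sample):
    """Select tests affected by the change plus a representative
    sample, instead of re-executing the whole suite."""
    selected = set(representative_sample)
    for name, functions in suite.items():
        if functions & changed_functions:
            selected.add(name)
    return sorted(selected)

# A change touched the "report" function.
subset = regression_subset({"report"}, ["test_login_ok"])
```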
• When system testing is over, the final step in the testing process is acceptance testing. In it, customers and users verify that the product fulfils their actual needs and follows the requirements.
• Acceptance testing includes at least the following testing steps:
– Validation testing. Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements.
– Configuration review. The review ensures that all elements of the software configuration have been properly developed, are catalogued, and have the necessary detail to support the maintenance phase of the software life cycle.
– Alpha testing. The software is used by end-users in a natural setting in laboratory conditions. The development team records errors and usage problems.
– Beta testing. The software is used by end-users in an environment that cannot be controlled by the development team.

9. Testing for specialised environments
• As computer software has become more complex, the need for specialised testing approaches has also grown.
• In this section we will briefly discuss user-interface testing, testing client-server architectures, testing documentation and help facilities, and testing for real-time systems.

9.1. User interface testing
• Since modern graphical user interfaces (GUIs) have the same look and feel, a series of standard tests can be derived.
• The following areas can serve as a guideline for creating a series of generic tests for GUIs (Pressman, 1997):
– windows-specific tests,
– pull-down menu and mouse-specific tests, and
– data entry tests.
• Here is a checklist for windows:
– Will the window open properly based on related typed or menu-based commands?
– Can the window be resized, moved, and scrolled?
– Is all data content contained within the window properly addressable with a mouse, function keys, directional arrows, and keyboard?
– Does the window properly regenerate when it is overwritten or resized, and then recalled?
– Are window-related functions available when needed?
• A checklist for pull-down menus:
– Is the appropriate menu bar displayed in the appropriate context?
– Does the application menu bar display system-related features?
– Do pull-down operations work properly?
– Do breakaway menus, palettes, and tool bars work properly?
– Are all menu functions and pull-down subfunctions properly listed?
– Is it possible to invoke each menu function using its alternative text-based command?
• A checklist for mouse operations:
– Are all menu functions properly addressable by the mouse?
– Are mouse operations properly recognised throughout the interactive context?
– If multiple clicks are required, are they properly recognised in context?
– Do the cursor, processing indicator, and pointer properly change as different operations are invoked?
– Is help available, and is it context sensitive?
• A checklist for data entry:
– Is alphanumeric data entry properly echoed and input to the system?
– Do graphical modes of data entry work properly?
– Is invalid data properly recognised?
– Are data input messages intelligible?
• In addition to the above guidelines, finite state modelling may be used to derive a series of tests that address specific data and program objects that are relevant to the GUI.

9.2. Testing documentation
• Errors in documentation can be as devastating to the acceptance of the program as errors in data or source code.
• For this reason, documentation should be a meaningful part of every software test plan.
• Documentation testing can be approached in two phases: via inspections and in a live test.
• Inspections may be used to test editorial clarity:
– Software functionality is fully covered in the manual.
– No typing errors are present.
– Etc.
• Live tests use the documentation in conjunction with the actual program.
– Program usage is tracked through the documentation.
• The following checklist may be used in documentation testing:
– Does the documentation accurately describe how to accomplish each mode of use?
– Is the description of each interaction sequence accurate?
– Are examples accurate?
– Are terminology, menu descriptions, and system responses consistent with the actual program?
– Is it relatively easy to locate guidance in the documentation?
– Can troubleshooting be accomplished easily with the documentation?
– Are the document table of contents and index accurate and complete?
– Is the design of the document conducive to understanding and quick assimilation of information?
– Are all error messages displayed for the user described in more detail in the document?
– If hypertext links are used (in online documentation), are they accurate and complete?
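The data-entry items above ("is invalid data properly recognised", "are data input messages intelligible") translate directly into automated checks against the input-validation layer; a sketch with an illustrative validator:

```python
def parse_quantity(text):
    """Illustrative validation behind a data-entry field (0-999 expected)."""
    if not text.strip().isdigit():
        return None, f"'{text}' is not a whole number between 0 and 999"
    value = int(text)
    if value > 999:
        return None, f"{value} is larger than the maximum of 999"
    return value, ""

# Valid entry is echoed into the system; invalid entry is recognised
# and produces an intelligible message.
ok_value, _ = parse_quantity("42")
bad_value, message = parse_quantity("4x")
```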
9.3. Testing for real-time systems
• The time-dependent nature of many real-time applications adds a new and potentially difficult element to testing: time.
• Next to white-box and black-box testing, the test designers have to consider internal and external event handling, temporal data, transaction deadlines, possible race conditions, and parallelism.
• Comprehensive test case design methods for real-time systems have yet to evolve. However, Pressman presents a four-step strategy (Pressman, 1997):
– 1. Task testing.
• The first step in the testing of real-time software is to test each task independently. Task testing uncovers errors in logic and function, but will not uncover timing or behavioural errors.
• A task here is a separate program that accepts input and produces output. A task may be composed of many modules and can be tested using white-box and black-box methods.
– 2. Behavioural testing.
• Using system models, the real-time system behaviour is simulated as a consequence of external events.
• These analysis activities can serve as the basis for the design of test cases that are conducted when the real-time software has been built.
• The test cases can follow black-box input class partitioning to categorise events and interrupts for testing purposes.
• Each event is tested individually. The system is examined to detect event processing errors.
• Once each class has been tested, events are presented to the system in random order and frequency.
– 3. Intertask testing.
• Once errors in individual tasks and external event behaviour have been isolated, testing shifts to time-related errors. Asynchronous tasks are tested with different data rates and processing loads.
• Tasks that communicate via a message queue or data store are tested to uncover errors in the sizing of these data storage areas.
– 4. System testing.
• Software and hardware are integrated, and a full range of system tests are conducted in an attempt to uncover errors at the software/hardware interface.
• Real-time systems are deeply related to interrupts. Thus, interrupt code testing is essential to any real-time system testing:
– Are interrupt priorities correct?
– Is interrupt execution correct?
– Do interrupts cause priority inversion (a low-priority process blocks a high-priority process)?
– Does a high volume of interrupts at critical times create problems in function or performance?
– Are transaction deadlines met at critical times with a high volume of interrupts?

10. Object-oriented testing
• Current perspectives on object-oriented testing can be characterised in three ways:
– optimistic: technology and software processes in the object-oriented world greatly reduce the need for testing,
– minimalist: testing remains necessary for object-oriented implementations, but it will be easier and less costly, and
– expansive: typical features of object-oriented systems require new approaches to attain adequate testing.
• All three perspectives are correct in their own ways:
– Object-oriented design and component reuse can reduce the amount of newly written code. This in turn reduces tests (optimistic view).
– The use of object-oriented design does not change any basic testing principles. What does change is the granularity of the units tested (minimalist view).
– Encapsulation, inheritance, polymorphism, message passing, exceptions, etc. need new testing approaches (expansive view).
• Since the optimistic view is more a matter of object-oriented design, and the minimalist view has already been covered, we will concentrate on the expansive view.
• We consider an object a uniquely identifiable entity that has a state and a behaviour.
• The state is defined with a vector of attributes, each of which may be of a scalar or composite type.
• An attribute may be an object. In such a case the object containing the object attribute is a composite object.
• The behaviour of an object is defined with a set of interfaces. An interface consists of a set of public methods that are accessible outside the object.
• Next to the public methods in interfaces, an object may have a set of private methods that are accessible only from inside the object.
• Objects communicate with each other via messages. A message has a sender, a receiver, and a service request. The message is sent to one of the object interfaces, where the object resolves the request to one of the public methods.
• When the request is resolved, the response is sent back to the sender.
• The object state is reachable only via public methods. Thus, attributes cannot be directly addressed.
• Object-oriented testing differs from conventional system testing. Four main concerns are often listed in the literature:
– Fault hypothesis. Are some facets of object-oriented software more likely than others to contain faults?
• What kind of faults are likely?
• How can revealing test cases be constructed?
• What effect does the scope of a test have on the chance of revealing a fault?
• To what extent are fault models specific to applications, programming languages, and development environments?
– Test case design. What techniques may be used to construct test cases from models, specifications, and implementations?
• How much testing is sufficient for methods, classes, class clusters, application systems, and components of reusable libraries?
• To what extent should inherited features be retested?
• How should abstract and generic classes be tested?
• What models for understanding collaboration patterns are useful for testing?
• How can models of the abstract and concrete state of an object, class, or system under test be used?
• How can collection and container classes be tested?
• How can state-dependent behaviour be verified when state control is often distributed over an entire object-oriented application?
• How can clients and servers linked by dynamic binding be tested?
• How can regression tests be formulated and conducted to ensure the integrity of classes?
• How can information in representations and implementations be used to develop or automatically produce the expected results for a test case?
– Test automation issues.
• How should test cases be represented?
• How should test cases be applied? What is the proper role of stubs and drivers?
• How can test results be represented and evaluated?
• When can reports about the state of an untested object be trusted?
• What is an effective strategy for integration testing?
• What role can built-in tests play?
• Should built-in tests be provided by application components, programming languages, and/or development tools?
– Test process issues.
• When should testing begin?
• Who should perform testing?
• How can testing techniques help in preventing errors?
• How are testing activities integrated into the software process model?
• How can testing facilitate reuse?
• To what extent should reusable test suites be considered a necessary part of a reusable component library?
• What is an effective test and integration strategy given the iterative approach preferred for object-oriented development?
• Source: R. Binder, Testing Object-Oriented Software: a Survey. Software Testing, Verification and Reliability, Vol. 6, pp. 125-252 (1996).

10.1. Fault hypothesis
• Two basic issues frame all testing techniques:
– What is likely to go wrong?
– To what extent should testing be performed?
• For practical purposes, testing must be based on a fault hypothesis: an assumption about where faults are likely to be found.
• Object-oriented programming has several unique features: encapsulation, inheritance, and polymorphism. These features are both beneficial and hazardous.
• Thus they are good candidates for a fault hypothesis.
• Inheritance hazards:
– Inheritance is an excellent source of bugs:
• 1) it weakens encapsulation;
• 2) fault hazards are introduced which are similar to those associated with global data in conventional languages;
• 3) it can be used as an unusually powerful macro substitution for programmer convenience;
• 4) deep and wide inheritance hierarchies can defy comprehension, leading to errors and reduced testability;
• 5) semantic mismatches are easily generated, since subclass methods have different semantics and require different tests;
• 6) multiple inheritance allows superclass features with the same names, providing an opportunity for misnaming;
• 7) changes to superclasses may trigger bugs in inherited subclasses.
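Hazard 5, the semantic mismatch between superclass and subclass methods, is the classic reason why inherited features need retesting. A sketch with illustrative classes, where the same test gives a different outcome when re-run on the subclass:

```python
class Account:
    def __init__(self, balance=0):
        self.balance = balance
    def withdraw(self, amount):
        self.balance -= amount

class LimitedAccount(Account):
    """Subclass with different semantics: overdrafts are rejected."""
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        super().withdraw(amount)

def overdraft_behaviour(account_cls):
    """Same test case run against superclass and subclass."""
    acc = account_cls(10)
    try:
        acc.withdraw(25)
        return acc.balance          # Account: silently goes negative
    except ValueError:
        return "rejected"           # LimitedAccount: different semantics

base = overdraft_behaviour(Account)
sub = overdraft_behaviour(LimitedAccount)
```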
• Encapsulation hazards:
– When properly used, encapsulation is a safe object-oriented programming feature. In fact, the biggest errors occur when encapsulation is overridden:
• 1) direct access to object attributes causes high coupling between objects;
• 2) lack of use of object interfaces may cause situations where changing any object method will break code in a message-sending object.
– The only negative issue in encapsulation that I can think of is:
• 3) object encapsulation sometimes makes test case creation difficult by hiding too much.
• Polymorphism hazards:
– Polymorphism replaces explicit compile-time binding and static type-checking with implicit runtime binding and type-checking. It is a source of many hazards:
• 1) the deeper the hierarchy tree and the wider the tested application, the higher the likelihood that a dynamically bound method is used incorrectly;
• 2) the combination of polymorphism and method inheritance makes it very difficult to understand the behaviour of subclasses and how they work;
• 3) polymorphism with late binding can easily result in messages being sent to the wrong object, since it may be difficult to identify and exercise all such bindings;
• 4) since binding is done at runtime, the type-checking responsibility is with the server (instead of the compiler), and misuse of a message, an interface, or a method becomes extremely simple;
• 5) complex polymorphic relations can serve to confuse the testers and developers, for instance by obscuring the point of origin of a bug;
• 6) subtle naming errors are easily introduced and difficult to find.
• While the lists on the earlier slides are impressive, they do not exclude proper use of the object-oriented tools.
• Rather, they show possible error sources that need extra care when generating test cases.
• Fortunately, thousands of design patterns have been defined to give “recipes” for common object-oriented problem solutions.
• When correctly used, design patterns reduce the number of needed tests.

10.2. Unit testing
• Unit testing is a more general term for what we earlier referred to as module testing. It describes the smallest entity that is worth a test.
• The smallest distinguishable entity in an object-oriented program is an object. Thus it is a good candidate for unit testing.
• Since objects have their behaviour in methods, a method is also a candidate for unit testing.
• Usually an object is chosen as the unit testing entity:
– most methods are usually very small and trivial to test,
– object state should be included in unit testing, and
– the object interface is a perfect entry point for drivers.
• Naturally, methods are first tested separately and then preliminarily integrated together.
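With an object as the unit under test, the driver exercises the public interface and observes state only through public methods; a minimal sketch (the class is illustrative):

```python
class Counter:
    """Unit under test: state is private and reachable only via methods."""
    def __init__(self):
        self._count = 0
    def increment(self):
        self._count += 1
    def reset(self):
        self._count = 0
    def value(self):
        return self._count

# Driver: feeds operations through the interface and checks the
# resulting state transitions via the public value() method.
unit = Counter()
unit.increment()
unit.increment()
after_two = unit.value()
unit.reset()
after_reset = unit.value()
```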
• Similar white-box coverage methods may be used in object-oriented unit testing as in traditional testing.
• When methods are small, each public method can be used as an entry point for white-box coverage test analysis.
• When methods are large, each method can first be used as an entry point, and integration can then be done within the object.
• Also, new object-oriented-specific coverage criteria must be considered.
• Next to the traditional coverage criteria, we can define at least the following object-oriented coverage:
– 1) Message coverage. All possible different types of messages that an object can accept must be included. Equivalence classes may be used on message parameters to define suitable message-specific test cases.
– 2) State coverage. Each possible object state must be verified. All states must be accessible and must not cause an object deadlock (a situation where the object can no longer change its state).
– 3) Exception coverage. When object methods support exceptions (as in C++), each possible exception case must be covered. Traditional coverage criteria may be used in the exception code.
– 4) Interface coverage. All public methods in all interfaces must be tested. Traditional equivalence class partitioning along with boundary analysis may be used in the test case generation.
– 5) Method coverage. All methods must be reachable and fully white-box tested.
• Black-box unit testing is similar to traditional black-box testing. A driver feeds test cases to a public method in one of the interfaces and receives responses, or the driver creates messages to be sent to the object.
• The main difference between traditional black-box module testing and black-box object-oriented testing is that an object has several entry points that are tested separately.

10.3. Object-oriented integration testing
• If we are lucky, objects form a reference hierarchy in the program. In such a case we can use traditional top-down and bottom-up integration methods.
• Unfortunately, this is seldom the case. Object-oriented programs are usually built from a set of subsystems, each of which has an interface and an implementation.
• The subsystem interface is more like a message-passing interface than an object interface.
• The subsystem implementation consists of a set of interface objects that implement the subsystem interface, and a set of local objects that implement the subsystem functionality.
• Thus, object-oriented integration goes in two steps:
– 1) integrate each subsystem, and
– 2) integrate the subsystems together.
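State coverage (criterion 2 above) can be checked by driving an object through its state machine and recording the states reached; a sketch with an illustrative three-state object:

```python
class Connection:
    """Illustrative object with states closed -> open -> busy -> open."""
    def __init__(self):
        self.state = "closed"
    def open(self):
        self.state = "open"
    def send(self):
        if self.state != "open":
            raise RuntimeError("cannot send on a non-open connection")
        self.state = "busy"
    def done(self):
        self.state = "open"
    def close(self):
        self.state = "closed"

# Drive the object through a sequence that should reach every state;
# an unreachable or deadlocked state shows up as a missing entry.
visited = set()
conn = Connection()
for action in (conn.open, conn.send, conn.done, conn.close):
    action()
    visited.add(conn.state)
all_states_reached = visited == {"closed", "open", "busy"}
```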
• Subsystem integration may be implemented in two ways: thread-based and use-based.
• Thread-based integration integrates the set of classes required to respond to one input or event for the subsystem.
• Each thread is integrated and tested individually.
• Regression testing is applied to ensure that no side effects occur.
• Use-based integration begins the construction of the subsystem by testing those low-coupling classes that use very few (if any) services from other classes.
• Once the low-coupling classes are tested, the next layer of classes that use the services of the low-coupling classes is added.
• The procedure continues until the full subsystem is integrated.
• When each subsystem has been integrated, regression tests are executed to ensure that the integration does not break object behaviour.
• After that, the subsystems are integrated together. Depending on the approach, either top-down or bottom-up integration may be used.
• In top-down integration, the main subsystem is first integrated with the next-layer subsystems. The procedure is then repeated with the next layer of subsystems until all subsystems are integrated.
• In bottom-up integration, the lowest-level subsystems that only offer services are integrated first with the next-lowest-level subsystems. The procedure is continued with the next-lowest level until all subsystems are integrated.
• In subsystem integration, extra care must be taken with message and message parameter tests. All possible messages and responses to a subsystem must be tested.

10.4. Later object-oriented testing phases
• When integration testing is finished, we have a complete program.
• At this level the implementation details are no longer visible, and thus traditional black-box methods may be used in system and acceptance testing.
• Thus, the earlier covered methods may be used in object-oriented testing as well.

11. Conclusion
• Testing is quite a nasty way to validate high-quality software:
– we can’t test all inputs,
– we can’t test all possible paths,
– we often can’t even reach all written code,
– we can’t verify that software is free of errors,
– in fact we can’t even find all errors, and
– as a matter of fact we can’t even define for certain that something is an error.
• Then why do we test at all?
– Because testing is the best that we have got.
• Fortunately, a great number of defects, bugs, and errors are found in earlier-stage inspections and reviews.
• If a project is well managed, most defects have been found and removed by the time testing begins.
• The managed testing phases and methods help us to find some more defects.
• But no matter how much we test, we can never find that last bug in a nontrivial program.
• When should we then stop testing? Alternatives:
– when all planned tests are executed,
– when the required white-box coverage has been reached,
– when all black-box equivalence classes are tested,
– when a required percentage of the estimated total number of errors has been found,
– when a required percentage of seeded errors has been found,
– when we have reached our reasonable testing budget, or
– when the mean time to failure is greater than a required threshold time.
• Conclusion: try to minimise the needed testing. However, when you have to test, do it carefully and with focus.
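The seeded-errors stopping criterion rests on a simple estimate: if testing has revealed s of the S seeded errors and n real errors, the estimated total number of real errors is N ≈ n·S/s. A sketch of the arithmetic (the figures are illustrative):

```python
def estimated_total_errors(real_found, seeded_total, seeded_found):
    """Error-seeding estimate: real errors are assumed to be found
    at the same rate as seeded errors."""
    return real_found * seeded_total / seeded_found

# 20 of 25 seeded errors found, alongside 40 real errors:
# the estimate is 50 real errors in total, so about 10 remain.
estimate = estimated_total_errors(40, 25, 20)
remaining = estimate - 40
```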