CHAPTER 5
SOFTWARE TESTING

ACRONYMS
TDD Test-Driven Development
XP  Extreme Programming

INTRODUCTION

Testing is performed to evaluate and improve product quality by identifying defects and problems.

Software testing consists of the dynamic verification of a program's behavior on a finite set of test cases, suitably selected from the usually infinite executions domain, against the expected behavior.

In the above definition, italicized words correspond to key issues in identifying the Knowledge Area of Software Testing. In particular:

Dynamic: This term means that testing always implies executing the program on (valued) inputs. To be precise, the input value alone is not always sufficient to determine a test, since a complex, nondeterministic system might react to the same input with different behaviors, depending on the system state. In this KA, though, the term "input" will be maintained, with the implied convention that its meaning also includes a specified input state in those cases in which it is needed. Different from (dynamic) testing and complementary to it are static techniques, as described in the Software Quality KA.

Finite: Even in simple programs, so many test cases are theoretically possible that exhaustive testing could require months or years to execute. This is why, in practice, the whole test set can generally be considered infinite. Testing always implies a tradeoff between limited resources and schedules on the one hand and inherently unlimited test requirements on the other.

Selected: The many proposed test techniques differ essentially in how they select the test set, and software engineers must be aware that different selection criteria may yield vastly different degrees of effectiveness. How to identify the most suitable selection criterion under given conditions is a complex problem; in practice, risk analysis techniques and test engineering expertise are applied.

Expected: It must be possible, although not always easy, to decide whether the observed outcomes of program execution are acceptable or not, otherwise the testing effort would be useless. The observed behavior may be checked against user expectations (commonly referred to as testing for validation), against a specification (testing for verification), or, finally, against the anticipated behavior from implicit requirements or reasonable expectations. (See "Acceptance Tests" in the Software Requirements KA.)

In recent years, the view of software testing has matured into a constructive one. Testing is no longer seen as an activity that starts only after the coding phase is complete with the limited purpose of detecting failures. Software testing is now seen as an activity that should encompass the whole development and maintenance process and is itself an important part of the actual product construction. Indeed, planning for testing should start with the early stages of the requirement process, and test plans and procedures must be systematically and continuously developed—and possibly refined—as development proceeds.
These test planning and designing activities provide useful input for designers in highlighting potential weaknesses (like design oversights or contradictions and omissions or ambiguities in the documentation).

Currently, the right attitude towards quality is considered one of prevention: it is obviously much better to avoid problems than to correct them. Testing must be seen, then, primarily as a means not only for checking whether the prevention has been effective, but also for identifying faults in those cases where, for some reason, it has not been effective. It is perhaps obvious but worth recognizing that, even after successful completion of an extensive testing campaign, the software could still contain faults. The remedy for software failures experienced after delivery is provided by corrective maintenance actions. Software maintenance topics are covered in the Software Maintenance KA.

In the Software Quality KA (see "Software Quality Management Techniques"), software quality management techniques are notably categorized into static techniques (no code execution) and dynamic techniques (code execution). Both categories are useful. This KA focuses on dynamic techniques.

Software testing is also related to software construction (see "Construction Testing" in that KA). In particular, unit and integration testing are intimately related to software construction, if not part of it.
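To make the definition above concrete, here is a minimal illustrative sketch in Python (not part of the Guide; the classify_triangle function and its test values are hypothetical): a small, selected test set drawn from a practically unbounded input domain, where each test case pairs an input with its expected behavior and is checked by actually executing the program.

```python
# Illustrative sketch only: a finite, selected test set for a program whose
# input domain (all integer triples) is far too large to test exhaustively.

def classify_triangle(a: int, b: int, c: int) -> str:
    """Program under test: classify a triangle by its side lengths."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Each test case = (input, expected behavior); the expected values encode
# what counts as an acceptable outcome for these selected inputs.
test_cases = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 5), "isosceles"),
    ((3, 4, 5), "scalene"),
    ((1, 2, 3), "invalid"),   # degenerate: violates the triangle inequality
]

for inputs, expected in test_cases:
    actual = classify_triangle(*inputs)     # dynamic: the program is executed
    verdict = "pass" if actual == expected else "fail"
    print(inputs, actual, verdict)
```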
BREAKDOWN OF TOPICS FOR SOFTWARE TESTING

Figure 1: Breakdown of Topics for the Software Testing KA

The breakdown of topics for the Software Testing KA is shown in Figure 1. A more detailed breakdown is provided in Tables 1-A to 1-F.

The first subarea describes Software Testing Fundamentals. It covers the basic definitions in the field of software testing, the basic terminology and key issues, and software testing's relationship with other activities.

The second subarea, Test Levels, consists of two (orthogonal) topics: 2.1 lists the levels in which the testing of large software is traditionally subdivided, and 2.2 considers testing for specific conditions or properties and is referred to as objectives of testing. Not all types of testing apply to every software product, nor has every possible type been listed.

The test target and test objective together determine how the test set is identified, both with regard to its
consistency—how much testing is enough for achieving the stated objective—and its composition—which test cases should be selected for achieving the stated objective (although usually the "for achieving the stated objective" part is left implicit and only the first part of the two italicized questions above is posed). Criteria for addressing the first question are referred to as test adequacy criteria, while those addressing the second question are the test selection criteria.

Several Test Techniques have been developed in the past few decades, and new ones are still being proposed. Generally accepted techniques are covered in subarea 3.

Test-Related Measures are dealt with in subarea 4, while the issues relative to Test Process are covered in subarea 5. Finally, Software Testing Tools are presented in subarea 6.

Table 1-A: Breakdown for Software Testing Fundamentals

1. Software Testing Fundamentals
  1.1 Testing-related terminology
    Definitions of testing and related terminology
    Faults vs. Failures
  1.2 Key Issues
    Test selection criteria/Test adequacy criteria (or stopping rules)
    Testing effectiveness/Objectives for testing
    Testing for defect identification
    The oracle problem
    Theoretical and practical limitations of testing
    The problem of infeasible paths
    Testability
  1.3 Relationship of testing to other activities
    Testing vs. Static Software Quality Management Techniques
    Testing vs. Correctness Proofs and Formal Verification
    Testing vs. Debugging
    Testing vs. Programming

Table 1-B: Breakdown for Test Levels

2. Test Levels
  2.1 The target of the test
    Unit testing
    Integration testing
    System testing
  2.2 Objectives of testing
    Acceptance/qualification testing
    Installation testing
    Alpha and Beta testing
    Reliability achievement and evaluation
    Regression testing
    Performance testing
    Security testing
    Stress testing
    Back-to-back testing
    Recovery testing
    Configuration testing
    Usability and human computer interaction testing
    Test-driven development
Table 1-C: Breakdown for Test Techniques

3. Test Techniques
  3.1 Based on the software engineer's intuition and experience
    Ad hoc
    Exploratory testing
  3.2 Input domain-based techniques
    Equivalence partitioning
    Pairwise testing
    Boundary-value analysis
    Random testing
  3.3 Code-based techniques
    Control-flow-based criteria
    Data flow-based criteria
    Reference models for code-based testing (flowgraph, call graph)
  3.4 Fault-based techniques
    Error guessing
    Mutation testing
  3.5 Usage-based techniques
    Operational profile
    User observation heuristics
  3.6 Model-based testing techniques
    Decision table
    Finite-state machine-based
    Testing from formal specifications
  3.7 Techniques based on the nature of the application
  3.8 Selecting and combining techniques
    Functional and structural
    Deterministic vs. random

Table 1-D: Breakdown for Test-Related Measures

4. Test-Related Measures
  4.1 Evaluation of the program under test
    Program measurements to aid in planning and designing testing
    Fault types, classification, and statistics
    Fault density
    Life test, reliability evaluation
    Reliability growth models
  4.2 Evaluation of the tests performed
    Coverage/thoroughness measures
    Fault seeding
    Mutation score
    Comparison and relative effectiveness of different techniques
Table 1-E: Breakdown for Test Process

5. Test Process
  5.1 Practical considerations
    Attitudes/Egoless programming
    Test guides
    Test process management
    Test documentation and work products
    Internal vs. independent test team
    Cost/effort estimation and other process measures
    Termination
    Test reuse and patterns
  5.2 Test activities
    Planning
    Test-case generation
    Test environment development
    Execution
    Test results evaluation
    Problem reporting/Test log
    Defect tracking

Table 1-F: Breakdown for Software Testing Tools

6. Software Testing Tools
  6.1 Testing tool support
    Selecting tools
  6.2 Categories of tools
    Test harness
    Test generators
    Capture/Replay tools
    Oracle/file comparators/assertion checking
    Coverage analyzer/Instrumenter
    Tracers
    Regression testing tools
    Reliability evaluation tools
1. Software Testing Fundamentals

1.1. Testing-related terminology

Definitions of testing and related terminology [1*, c1, c2, 2*, c8]

A comprehensive introduction to the Software Testing KA is provided in the recommended references.

Faults vs. Failures [1*, c1s5, 2*, c11]

Many terms are used in the software engineering literature to describe a malfunction: notably fault, failure, and error, among others. This terminology is precisely defined in [3] and [4]. It is essential to clearly distinguish between the cause of a malfunction (for which the term fault or defect will be used here) and an undesired effect observed in the system's delivered service (which will be called a failure). Testing can reveal failures, but it is the faults that can and must be removed [5].

However, it should be recognized that the cause of a failure cannot always be unequivocally identified. No theoretical criteria exist to definitively determine what fault caused the observed failure. It might be said that it was the fault that had to be modified to remove the problem, but other modifications could have worked just as well. To avoid ambiguity, one could refer to failure-causing inputs instead of faults—that is, those sets of inputs that cause a failure to appear.

1.2. Key issues

Test selection criteria/Test adequacy criteria (or stopping rules) [1*, c1s14, c6s6, c12s7]

A test selection criterion is a means of deciding what a suitable set of test cases should be. A selection criterion can be used for selecting the test cases or for checking whether a selected test suite is adequate—that is, to decide whether the testing can be stopped [6]. See also the sub-topic Termination, under topic 5.1 Practical considerations.

Testing effectiveness/Objectives for testing [1*, c13s11, c11s4]

Testing is the observation of a sample of program executions. Sample selection can be guided by different objectives: it is only in light of the objective pursued that the effectiveness of the test set can be evaluated.

Testing for defect identification [1*, c1s14]

In testing for defect identification, a successful test is one that causes the system to fail. This is quite different from testing to demonstrate that the software meets its specifications or other desired properties, in which case testing is successful if no (significant) failures are observed.

The oracle problem [1*, c1s9, c9s7]

An oracle is any (human or mechanical) agent that decides whether a program behaved correctly in a given test and accordingly produces a verdict of "pass" or "fail." There exist many different kinds of oracles, and oracle automation can be very difficult and expensive.
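As one concrete illustration of automated oracles (a hedged sketch, not from the Guide; the function names and values are hypothetical), the fragment below contrasts a full expected-value oracle with a partial, property-based oracle that judges correctness without knowing the exact expected output, which is one common way of coping when that output is hard or expensive to compute.

```python
from collections import Counter

# Illustrative sketch of two automated oracles for a sorting routine.

def sort_under_test(xs):
    return sorted(xs)          # stand-in for the implementation being tested

def expected_value_oracle(actual, expected):
    """Full oracle: compares the observed output with a precomputed expected value."""
    return "pass" if actual == expected else "fail"

def property_oracle(inp, actual):
    """Partial oracle: checks properties of the output (it is ordered and is a
    permutation of the input) without knowing the exact expected value."""
    ordered = all(actual[i] <= actual[i + 1] for i in range(len(actual) - 1))
    permutation = Counter(inp) == Counter(actual)
    return "pass" if ordered and permutation else "fail"

inp = [3, 1, 2]
actual = sort_under_test(inp)
print(expected_value_oracle(actual, expected=[1, 2, 3]))   # -> pass
print(property_oracle(inp, actual))                        # -> pass
```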
Theoretical and practical limitations of testing [1*, c2s7]

Testing theory warns against ascribing an unjustified level of confidence to a series of passed tests. Unfortunately, most established results of testing theory are negative ones, in that they state what testing can never achieve as opposed to what it actually achieved. The most famous quotation in this regard is the Dijkstra aphorism that "program testing can be used to show the presence of bugs, but never to show their absence" [7]. The obvious reason for this is that complete testing is not feasible in real software. Because of this, testing must be driven based on risk and could be seen as a risk management strategy.

The problem of infeasible paths [1*, c4s7]

Infeasible paths, the control flow paths that cannot be exercised by any input data, are a significant problem in path-oriented testing—particularly in the automated derivation of test inputs for code-based testing techniques.

Testability [1*, c17s2]

The term "software testability" has two related but different meanings: on the one hand, it refers to the degree to which it is easy for software to fulfill a given test coverage criterion; on the other hand, it is defined as the likelihood, possibly measured statistically, that the software will expose a failure under testing if it is faulty. Both meanings are important.

1.3. Relationship of testing to other activities

Software testing is related to, but different from, static software quality management techniques, proofs of correctness, debugging, and programming. However, it is informative to consider testing from the point of view of software quality analysts and of certifiers.

Testing vs. Static Software Quality Management Techniques. See also Software Quality Management Processes in the Software Quality KA [1*, c12].

Testing vs. Correctness Proofs and Formal Verification. See also the Software Engineering Models and Methods KA [1*, c17s2].

Testing vs. Debugging. See also Construction Testing in the Software Construction KA and Debugging Tools and Techniques in the Computing Foundations KA [1*, c3s6].

Testing vs. Programming. See also Construction Testing in the Software Construction KA [1*, c3s2].
2. Test Levels

2.1. The target of the test [1*, c1s13, 2*, c8s1]

Software testing is usually performed at different levels along the development and maintenance processes. That is to say, the target of the test can vary: a single module, a group of such modules (related by purpose, use, behavior, or structure), or a whole system. Three test stages can be conceptually distinguished—namely, Unit, Integration, and System. No process model is implied, nor is any of those three stages assumed to have greater importance than the other two.

Unit testing [1*, c3, 2*, c8]

Unit testing verifies the functioning in isolation of software pieces that are separately testable. Depending on the context, these could be the individual subprograms or a larger component made of tightly related units. Typically, unit testing occurs with access to the code being tested and with the support of debugging tools; it might involve the programmers who wrote the code.

Integration testing [1*, c7, 2*, c8]

Integration testing is the process of verifying the interaction between software components. Classical integration-testing strategies, such as top-down or bottom-up, are used with traditional, hierarchically structured software. Modern, systematic integration strategies are rather architecture-driven, which implies integrating the software components or subsystems based on identified functional threads. Integration testing is a continuous activity at each stage of which software engineers must abstract away lower-level perspectives and concentrate on the perspectives of the level they are integrating. Except for small, simple software, systematic, incremental integration testing strategies are usually preferred to putting all the components together at once—which is pictorially called "big bang" testing.

System testing [1*, c8, 2*, c8]

System testing is concerned with the behavior of a whole system. The majority of functional failures should already have been identified during unit and integration testing. System testing is usually considered appropriate for comparing the system to the nonfunctional system requirements—such as security, speed, accuracy, and reliability (see Functional and Nonfunctional Requirements in the Software Requirements KA). External interfaces to other applications, utilities, hardware devices, or the operating environment are also evaluated at this level.

2.2. Objectives of testing [1*, c1s7]

Testing is conducted in view of a specific objective, which is stated more or less explicitly, and with varying degrees of precision.
Stating the objective in precise, quantitative terms allows control to be established over the test process.

Testing can be aimed at verifying different properties. Test cases can be designed to check that the functional specifications are correctly implemented, which is variously referred to in the literature as conformance testing, correctness testing, or functional testing. However, several other nonfunctional properties may be tested as well—including performance, reliability, and usability, among many others.

Other important objectives for testing include (but are not limited to) reliability measurement, usability evaluation, and acceptance, with these different purposes being addressed at different levels of testing.

The sub-topics listed below are those most often cited in the literature. Note that some kinds of testing are more appropriate for custom-made software packages—installation testing, for example—and others for generic products, like beta testing.

Acceptance/qualification testing [1*, c1s7, 2*, c8s4]

Acceptance testing checks the system behavior against the customer's requirements, however these may have been expressed; the customers undertake, or specify, typical tasks to check that their requirements have been met or that the organization has identified these for the software's target market. This testing activity may or may not involve the system's developers.

Installation testing [1*, c12s2]

Usually after completion of system and acceptance testing, the software can be verified upon installation in the target environment. Installation testing can be viewed as system testing conducted once again according to hardware configuration requirements. Installation procedures may also be verified.

Alpha and beta testing [1*, c13s7, c16s6, 2*, c8s4]

Before the software is released, it is sometimes given to a small, representative set of potential users for trial use, either in-house (alpha testing) or external (beta testing). These users report problems with the product. Alpha and beta use is often uncontrolled and is not always referred to in a test plan.

Reliability achievement and evaluation [1*, c15, 2*, c15s2]

In helping to identify faults, testing is a means to improve reliability. By contrast, by randomly generating test cases according to the operational profile, statistical measures of reliability can be derived. Using reliability growth models, both objectives can be pursued together [5] (see also sub-topic Life test, reliability evaluation under 4.1 Evaluation of the program under test).
Regression testing [1*, c8s11, c13s3]

According to IEEE/ISO/IEC 24765:2010 Systems and Software Engineering Vocabulary [3], regression testing is the "selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements." In practice, the idea is to show that software that previously passed the tests still does (in fact, it is also referred to as non-regression testing). Specifically for incremental development, the purpose is to show that the software's behavior is unchanged, except insofar as required. Obviously, a tradeoff must be made between the assurance given by regression testing every time a change is made and the resources required to do that. Regression test selection refers to techniques for selecting, minimizing, and/or prioritizing a subset of the test cases in an existing test suite [8]. Regression testing can be conducted at each of the test levels described in topic 2.1 The target of the test and may apply to functional and nonfunctional testing.

Performance testing [1*, c8s6]

This is specifically aimed at verifying that the software meets the specified performance requirements—for instance, capacity and response time.

Security testing [1*, c8s3, 2*, c11s4]

This is focused on the verification that the software is protected from external attacks. In particular, security testing verifies the confidentiality, integrity, and availability of the system and its data. Usually, security testing includes verification of misuse and abuse of the software or system (negative testing).

Stress testing [1*, c8s8]

Stress testing exercises software at the maximum design load, as well as beyond it.

Back-to-back testing [3]

IEEE/ISO/IEC Standard 24765 defines back-to-back testing as "testing in which two or more variants of a program are executed with the same inputs, the outputs are compared, and errors are analyzed in case of discrepancies."

Recovery testing [1*, c14s2]

Recovery testing is aimed at verifying software restart capabilities after a "disaster."

Configuration testing [1*, c8s5]

In cases where software is built to serve different users, configuration testing analyzes the software under various specified configurations.

Usability and human computer interaction testing [9*, c6]

The main task of usability testing is to evaluate how easy it is for end users to use and learn the software. In general, it may involve the user documentation, the software functions in supporting user tasks, and the ability to recover from user errors. Specific attention is devoted to validating the software interface (human-computer interaction testing) (see User Interface Design in the Software Design KA).

Test-driven development [1*, c1s16]

Test-driven development (TDD) originated as one of the core XP (extreme programming) practices and essentially consists of writing automated unit tests prior to the code under test (see also Agile Methods in the Software Engineering Models and Methods KA). In this way, TDD promotes the use of tests as a surrogate for a requirements specification document rather than as an independent check that the software has correctly implemented the requirements. TDD is more a specification and programming practice than a testing strategy.
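As an illustration of the test-first rhythm described above (a hedged sketch, not from the Guide; the leap_year function and its requirement are hypothetical), a developer practicing TDD would first write failing unit tests that encode the requirement and only then write the minimal code that makes them pass:

```python
import unittest

# Step 1 (written first): the tests encode the requirement; they initially fail
# because leap_year() does not exist yet or returns the wrong result.
class TestLeapYear(unittest.TestCase):
    def test_century_years_divisible_by_400_are_leap(self):
        self.assertTrue(leap_year(2000))

    def test_other_century_years_are_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_ordinary_years_divisible_by_4_are_leap(self):
        self.assertTrue(leap_year(2024))

# Step 2 (written second): the minimal implementation that makes the tests pass.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```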
3. Test Techniques

One of the aims of testing is to reveal as much potential for failure as possible, and many techniques have been developed to do this. These techniques attempt to "break" the program by running one or more tests drawn from identified classes of executions deemed equivalent. The leading principle underlying such techniques is to be as systematic as possible in identifying a representative set of program behaviors; for instance, considering subclasses of the input domain, scenarios, states, and dataflow.

It is difficult to find a homogeneous basis for classifying all techniques, and the one used here must be seen as a compromise. The classification is based on how tests are generated: from the software engineer's intuition and experience, the specifications, the code structure, the (real or artificial) faults to be discovered, the field usage, or, finally, the nature of the application. Sometimes these techniques are classified as white-box (also called glass-box) if the tests rely on information about how the software has been designed or coded, or as black-box if the test cases rely only on the input/output behavior. One last category deals with the combined use of two or more techniques. Obviously, these techniques are not used equally often by all practitioners. Included in the list are those that a software engineer should know.

3.1. Based on the software engineer's intuition and experience

Ad hoc

Perhaps the most widely practiced technique remains ad hoc testing: tests are derived relying on the software engineer's skill, intuition, and experience with similar programs. Ad hoc testing might be useful for identifying special tests, those not easily captured by formalized techniques.

Exploratory testing

Exploratory testing is defined as simultaneous learning, test design, and test execution; that is, the tests are not defined in advance in an established test plan, but are dynamically designed, executed, and modified. The effectiveness of exploratory testing relies on the software engineer's knowledge, which can be derived from various sources: observed product behavior during testing, familiarity with the application, the platform, the failure process, the type of possible faults and failures, the risk associated with a particular product, and so on.

3.2. Input domain-based techniques

Equivalence partitioning [1*, c9s4]

The input domain is subdivided into a collection of subsets (or equivalence classes), which are deemed equivalent according to a specified relation. A representative set of tests (sometimes only one) is taken from each subset (or class).

Pairwise testing [1*, c9s3]

Test cases are derived by combining interesting values for every pair of a set of input variables instead of considering all possible combinations. Pairwise testing belongs to combinatorial testing, which in general also includes higher-level combinations than pairs: these techniques are referred to as t-wise, whereby every possible combination of t input variables is considered.

Boundary-value analysis [1*, c9s5]

Test cases are chosen on and near the boundaries of the input domain of variables, with the underlying rationale that many faults tend to concentrate near the extreme values of inputs. An extension of this technique is robustness testing, wherein test cases are also chosen outside the input domain of variables to test program robustness to unexpected or erroneous inputs.

Random testing [1*, c9s7]

Tests are generated purely at random (not to be confused with statistical testing from the operational profile, as described in sub-topic 3.5 Operational profile). This form of testing falls under the heading of input domain-based techniques since the input domain (at least) must be known in order to be able to pick random points within it. Random testing provides a relatively simple approach to test automation; recently, enhanced forms have been proposed in which the random test sampling is directed by other input selection criteria [10].
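The sketch below (illustrative only, not from the Guide; the input domain and its limits are hypothetical) shows how equivalence partitioning and boundary-value analysis might be combined for a single integer input, say an age field valid in the range 0 to 120: one representative value per equivalence class plus values on and just beyond each boundary.

```python
# Hypothetical input domain: an "age" field, valid when 0 <= age <= 120.
# Equivalence classes: below range (invalid), in range (valid), above range (invalid).
representatives = {
    "below_range": -5,    # one representative per equivalence class
    "in_range": 35,
    "above_range": 200,
}

# Boundary-value analysis: values on and immediately around each boundary.
boundaries = [-1, 0, 1, 119, 120, 121]

def accepts_age(age: int) -> bool:
    """Program under test (stand-in implementation)."""
    return 0 <= age <= 120

test_inputs = list(representatives.values()) + boundaries
for age in test_inputs:
    expected = 0 <= age <= 120          # expected behavior for each selected input
    assert accepts_age(age) == expected, f"unexpected result for age={age}"
print(f"{len(test_inputs)} selected test cases passed")
```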
3.3. Code-based techniques

Control-flow-based criteria [1*, c4]

Control-flow-based coverage criteria are aimed at covering all the statements, blocks of statements, or specified combinations of statements in a program. Several coverage criteria have been proposed, such as condition/decision coverage. The strongest of the control-flow-based criteria is path testing, which aims to execute all entry-to-exit control flow paths in the flowgraph. Since path testing is generally not feasible because of loops, other less stringent criteria tend to be used in practice—such as statement, branch, and condition/decision testing. The adequacy of such tests is measured in percentages; for example, when all branches have been executed at least once by the tests, 100% branch coverage is said to have been achieved.

Data-flow-based criteria [1*, c5]

In data-flow-based testing, the control flowgraph is annotated with information about how the program variables are defined, used, and killed (undefined). The strongest criterion, all definition-use paths, requires that, for each variable, every control-flow path segment from a definition of that variable to a use of that definition is executed. In order to reduce the number of paths required, weaker strategies such as all-definitions and all-uses are employed.

Reference models for code-based testing (flowgraph, call graph) [1*, c4]

Although not a technique in itself, the control structure of a program is graphically represented using a flowgraph in code-based testing techniques. A flowgraph is a directed graph the nodes and arcs of which correspond to program elements (see Graphs and Trees in the Mathematical Foundations KA). For instance, nodes may represent statements or uninterrupted sequences of statements, and arcs may represent the transfer of control between nodes.
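To illustrate how branch coverage is measured in practice (an illustrative sketch, not from the Guide; real projects would normally rely on an instrumenting coverage tool rather than the manual probes shown here, and the function is hypothetical), the fragment below instruments a small function with one decision point and reports the fraction of branches exercised by a test set.

```python
# Manual branch instrumentation for a function with a single decision point.
executed_branches = set()

def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        executed_branches.add("if_true")     # probe inserted for coverage measurement
        return price * 0.9
    else:
        executed_branches.add("if_false")    # probe for the other branch
        return price

all_branches = {"if_true", "if_false"}

# A test set exercising both outcomes of the decision achieves 100% branch coverage.
assert apply_discount(100.0, True) == 90.0
assert apply_discount(100.0, False) == 100.0

coverage = len(executed_branches) / len(all_branches)
print(f"branch coverage: {coverage:.0%}")    # -> branch coverage: 100%
```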
3.4. Fault-based techniques [1*, c1s14]

With different degrees of formalization, fault-based testing techniques devise test cases specifically aimed at revealing categories of likely or predefined faults. To better focus the test case generation or selection, a fault model could be introduced that classifies the different types of faults.

Error guessing [1*, c9s8]

In error guessing, test cases are specifically designed by software engineers trying to figure out the most plausible faults in a given program. A good source of information is the history of faults discovered in earlier projects, as well as the software engineer's expertise.

Mutation testing [1*, c3s5]

A mutant is a slightly modified version of the program under test, differing from it by a small, syntactic change. Every test case exercises both the original and all generated mutants: if a test case is successful in identifying the difference between the program and a mutant, the latter is said to be "killed." Originally conceived as a technique to evaluate a test set (see sub-topic 4.2 Evaluation of the tests performed), mutation testing is also a testing criterion in itself: either tests are randomly generated until enough mutants have been killed, or tests are specifically designed to kill surviving mutants. In the latter case, mutation testing can also be categorized as a code-based technique. The underlying assumption of mutation testing, the coupling effect, is that by looking for simple syntactic faults, more complex but real faults will be found. For the technique to be effective, a large number of mutants must be automatically derived in a systematic way [11].
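The following sketch (illustrative, not from the Guide; in practice mutants are generated automatically by a mutation tool, and the function here is hypothetical) shows a single hand-written mutant obtained by changing one relational operator, together with a test case that kills it and one that does not.

```python
# Original program under test.
def max_of_two(a, b):
    return a if a >= b else b

# Mutant: one small syntactic change (>= replaced by <=).
def max_of_two_mutant(a, b):
    return a if a <= b else b

def test_kills_mutant(a, b, expected):
    """A test case kills the mutant if it passes on the original program
    but fails on the mutant."""
    passes_original = max_of_two(a, b) == expected
    passes_mutant = max_of_two_mutant(a, b) == expected
    return passes_original and not passes_mutant

print(test_kills_mutant(2, 2, 2))   # False: this test cannot tell the two apart
print(test_kills_mutant(5, 1, 5))   # True: the outputs differ, so the mutant is killed
```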
3.5. Usage-based techniques

Operational profile [1*, c15s5]

In testing for reliability evaluation, the test environment must reproduce the operational environment of the software as closely as possible. The idea is to infer, from the observed test results, the future reliability of the software when in actual use. To do this, inputs are assigned a probability distribution, or profile, according to their frequency of occurrence in actual operation. Operational profiles can be used during the system test for designing and guiding test case derivation. The purpose is to meet the reliability objectives and exercise relative usage and criticality of different functions in the field [5].

User observation heuristics [9*, c5, c7]

Usability principles can be used as a guideline for checking and discovering a good proportion of problems in the user interface design [9*, c1s4] (see User Interface Design in the Software Design KA). Specialized heuristics, also called usability inspection methods, are applied for the systematic observation of system usage under controlled conditions in order to determine how people can use the system and its interfaces. Usability heuristics include cognitive walkthroughs, claims analysis, field observations, thinking-aloud, and even indirect approaches such as user questionnaires and interviews.

3.6. Model-based testing techniques

Model-based testing refers to an abstract (formal) representation of the software under test or of its requirements (see Modeling in the Software Engineering Models and Methods KA). This model is used for validating requirements, checking their consistency, and generating test cases focused on the behavioral aspect of the software. The key components of these techniques are [12]: the notation used for representing the model of the software; the test strategy, or algorithm, for test case generation; and the supporting infrastructure for the test execution, including the evaluation of the expected outputs. Due to the complexity of the adopted techniques, model-based testing approaches are often used in conjunction with test automation harnesses. The main techniques are listed below.

Decision table [1*, c9s6]

Decision tables represent logical relationships between conditions (roughly, inputs) and actions (roughly, outputs). Test cases are systematically derived by considering every possible combination of conditions and actions. A related technique is cause-effect graphing [1*, c13s6].

Finite-state machine-based [1*, c10]

By modeling a program as a finite state machine, tests can be selected in order to cover its states and transitions.

Testing from formal specifications [1*, c10s11, 2*, c15]

Giving the specifications in a formal language (see also Formal Methods in the Software Engineering Models and Methods KA) allows for automatic derivation of functional test cases and, at the same time, provides an oracle for checking test results. TTCN-3 (Testing and Test Control Notation version 3) is a language specifically developed for writing test cases. The notation was conceived for the specific needs of testing telecommunication systems, so it is particularly suitable for testing complex communication protocols.
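A minimal illustration of finite-state machine-based test selection follows (not from the Guide; the two-state door controller model and its events are hypothetical): the test cases are input sequences chosen so that, together, they exercise every transition of the model at least once.

```python
# Hypothetical FSM model: a door controller with states "closed" and "open".
transitions = {
    ("closed", "open_cmd"): "open",
    ("closed", "close_cmd"): "closed",
    ("open", "close_cmd"): "closed",
    ("open", "open_cmd"): "open",
}

def run(model, initial_state, events):
    """Execute an event sequence on the model, returning the transitions taken."""
    state, taken = initial_state, []
    for event in events:
        next_state = model[(state, event)]
        taken.append((state, event, next_state))
        state = next_state
    return taken

# Test selection: event sequences chosen so that, together, all transitions are covered.
test_sequences = [
    ["open_cmd", "open_cmd", "close_cmd", "close_cmd"],
]

covered = set()
for seq in test_sequences:
    covered.update((s, e) for s, e, _ in run(transitions, "closed", seq))

print(f"transition coverage: {len(covered)}/{len(transitions)}")  # -> 4/4
```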
3.7. Techniques based on the nature of the application

The above techniques apply to all types of software. However, for some kinds of applications, some additional know-how is required for test derivation. A list of a few specialized testing fields is provided here, based on the nature of the application under test:

  Object-oriented testing
  Component-based testing
  Web-based testing
  Testing of concurrent programs
  Protocol conformance testing
  Testing of real-time systems
  Testing of safety-critical systems
  Testing of service-oriented systems
  Testing of open-source systems
  Testing of embedded systems

3.8. Selecting and combining techniques

Functional and structural [1*, c9]

Model-based and code-based test techniques are often contrasted as functional vs. structural testing. These two approaches to test selection are not to be seen as alternatives but rather as complementary; in fact, they use different sources of information and have proved to highlight different kinds of problems. They could be used in combination, depending on budgetary considerations.

Deterministic vs. random [1*, c9s6]

Test cases can be selected in a deterministic way, according to one of the various techniques listed, or randomly drawn from some distribution of inputs, such as is usually done in reliability testing. Several analytical and empirical comparisons have been conducted to analyze the conditions that make one approach more effective than the other.
4. Test-Related Measures

Sometimes test techniques are confused with test objectives. Test techniques are to be viewed as aids that help to ensure the achievement of test objectives. For instance, branch coverage is a popular test technique. Achieving a specified branch coverage measure should not be considered the objective of testing per se: it is a means to improve the chances of finding failures by systematically exercising every program branch out of a decision point. To avoid such misunderstandings, a clear distinction should be made between test-related measures that provide an evaluation of the program under test, based on the observed test outputs, and those that evaluate the thoroughness of the test set. (See Software Engineering Measurement in the Software Engineering Management KA for information on measurement programs. See Process and Product Measurement in the Software Engineering Process KA for information on measures.)

Measurement is usually considered instrumental to quality analysis. Measurement may also be used to optimize the planning and execution of the tests. Test management can use several process measures to monitor progress. Measures relative to the test process for management purposes are considered in topic 5.1 Practical considerations.

4.1. Evaluation of the program under test

Program measurements to aid in planning and designing testing [13*, c11]

Measures based on program size (for example, source lines of code or function points; see Measuring Requirements in the Software Requirements KA) or on program structure (like complexity) are used to guide testing. Structural measures can also include measurements among program modules in terms of the frequency with which modules call each other.

Fault types, classification, and statistics [13*, c4]

The testing literature is rich in classifications and taxonomies of faults. To make testing more effective, it is important to know which types of faults could be found in the software under test and the relative frequency with which these faults have occurred in the past. This information can be very useful in making quality predictions as well as in process improvement (see Defect Characterization in the Software Quality KA).

Fault density [1*, c13s4, 13*, c4]

A program under test can be assessed by counting and classifying the discovered faults by their types. For each fault class, fault density is measured as the ratio between the number of faults found and the size of the program.

Life test, reliability evaluation [1*, c15, 13*, c3]

A statistical estimate of software reliability, which can be obtained by reliability achievement and evaluation (see sub-topic 2.2), can be used to evaluate a product and decide whether or not testing can be stopped.

Reliability growth models [1*, c15, 13*, c8]

Reliability growth models provide a prediction of reliability based on failures. They assume, in general, that when the faults that caused the observed failures have been fixed (although some models also accept imperfect fixes), the estimated product's reliability exhibits, on average, an increasing trend. There now exist dozens of published models. Many are laid down on some common assumptions while others differ. Notably, these models are divided into failure-count and time-between-failure models.
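As a worked illustration of two of the measures above (a hedged sketch, not from the Guide; the counts, sizes, and parameters are invented), the fragment below computes a fault density per KLOC and evaluates the mean value function of a simple exponential reliability growth model of the Goel-Okumoto type, mu(t) = a * (1 - exp(-b * t)), where a is the expected total number of faults and b a per-fault detection rate.

```python
import math

# Fault density: faults found per unit of program size (here, per KLOC).
faults_found = 18          # hypothetical count of discovered faults
size_kloc = 12.5           # hypothetical program size in thousands of lines of code
fault_density = faults_found / size_kloc
print(f"fault density: {fault_density:.2f} faults/KLOC")

# Goel-Okumoto style reliability growth model (illustrative fitted parameters):
# mu(t) is the expected cumulative number of failures observed by testing time t.
a, b = 25.0, 0.05          # hypothetical parameters
for t in (10, 40, 80):     # testing time, e.g. in hours
    mu = a * (1 - math.exp(-b * t))
    remaining = a - mu
    print(f"t={t:>3}: expected failures so far {mu:5.1f}, expected remaining {remaining:5.1f}")
```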
4.2. Evaluation of the tests performed

Coverage/thoroughness measures [13*, c11]

Several test adequacy criteria require that the test cases systematically exercise a set of elements identified in the program or in the specifications (see subarea 3 Test Techniques). To evaluate the thoroughness of the executed tests, testers can monitor the elements covered so that they can dynamically measure the ratio between covered elements and their total number. For example, it is possible to measure the percentage of covered branches in the program flowgraph or that of the functional requirements exercised among those listed in the specifications document. Code-based adequacy criteria require appropriate instrumentation of the program under test.

Fault seeding [1*, c2s5, 13*, c6]

Some faults are artificially introduced into the program before testing. When the tests are executed, some of these seeded faults will be revealed as well as, possibly, some faults that were already present. In theory, depending on which and how many of the artificial faults are discovered, testing effectiveness can be evaluated and the remaining number of genuine faults can be estimated. In practice, statisticians question the distribution and representativeness of seeded faults relative to genuine faults and the small sample size on which any extrapolations are based. Some also argue that this technique should be used with great care since inserting faults into software involves the obvious risk of leaving them there.

Mutation score [1*, c3s5]

In mutation testing (see sub-topic 3.4 Fault-based techniques), the ratio of killed mutants to the total number of generated mutants can be a measure of the effectiveness of the executed test set.

Comparison and relative effectiveness of different techniques

Several studies have been conducted to compare the relative effectiveness of different test techniques. It is important to be precise as to the property against which the techniques are being assessed; what, for instance, is the exact meaning given to the term "effectiveness"? Possible interpretations include the number of tests needed to find the first failure, the ratio of the number of faults found through testing to all the faults found during and after testing, and how much reliability was improved. Analytical and empirical comparisons between different techniques have been conducted according to each of the notions of effectiveness specified above.
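The sketch below (illustrative only, not from the Guide; the counts are invented, and the seeding estimator shown is the usual capture-recapture style ratio often associated with fault seeding, valid only under the strong assumption that seeded and genuine faults are equally easy to find) computes a mutation score and an estimate of the total number of genuine faults from seeding results.

```python
# Mutation score: killed mutants / generated mutants.
mutants_generated = 200
mutants_killed = 164
mutation_score = mutants_killed / mutants_generated
print(f"mutation score: {mutation_score:.2f}")     # -> 0.82

# Fault seeding estimate (capture-recapture style ratio, hypothetical counts):
# if the test campaign finds the same proportion of seeded and genuine faults,
# total_genuine ~= genuine_found * seeded_total / seeded_found.
seeded_total = 25
seeded_found = 20
genuine_found = 36
estimated_total_genuine = genuine_found * seeded_total / seeded_found
estimated_remaining = estimated_total_genuine - genuine_found
print(f"estimated genuine faults: {estimated_total_genuine:.0f} "
      f"(~{estimated_remaining:.0f} still undetected)")
```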
5. Test Process

Testing concepts, strategies, techniques, and measures need to be integrated into a defined and controlled process that is run by people. The test process supports testing activities and provides guidance to testing teams, from test planning to test output evaluation, in such a way as to provide justified assurance that the test objectives will be met in a cost-effective way.

5.1. Practical considerations

Attitudes/Egoless programming [1*, c16, 13*, c15]

A very important component of successful testing is a collaborative attitude towards testing and quality assurance activities. Managers have a key role in fostering a generally favorable reception towards failure discovery during development and maintenance; for instance, by preventing a mindset of code ownership among programmers, so that they will not feel responsible for failures revealed by their code.

Test guides [1*, c12s1, 13*, c15s1]

The testing phases could be guided by various aims—for example, risk-based testing uses the product risks to prioritize and focus the test strategy, and scenario-based testing defines test cases based on specified software scenarios.

Test process management [1*, c12, 13*, c15]

Test activities conducted at different levels (see subarea 2 Test Levels) must be organized—together with people, tools, policies, and measurements—into a well-defined process that is an integral part of the life cycle.

Test documentation and work products [1*, c8s12, 13*, c4s5]

Documentation is an integral part of the formalization of the test process. Test documents may include, among others, Test Plan, Test Design Specification, Test Procedure Specification, Test Case Specification, Test Log, and Test Incident or Problem Report. The software under test is documented as the Test Item. Test documentation should be produced and continually updated to the same level of quality as other types of documentation in software engineering.

Internal vs. independent test team [1*, c16]

Formalization of the test process may involve formalizing the test team organization as well.
The test866 team can be composed of internal members (that is, on867 the project team, involved or not in software868 construction), of external members (in the hope of869 bringing an unbiased, independent perspective), or,870 finally, of both internal and external members.871 Considerations of cost, schedule, maturity levels of the872 involved organizations, and criticality of the873 application may determine the decision.874 Cost/effort estimation and other process875 measures [1*, c18s3, 13*, c5s7]876 Several measures related to the resources spent on877 testing, as well as to the relative fault-finding878 effectiveness of the various test phases, are used by879 managers to control and improve the test process.880 These test measures may cover such aspects as number881 of test cases specified, number of test cases executed,882 number of test cases passed, and number of test cases883 failed, among others.884 Evaluation of test phase reports can be combined with885 root-cause analysis to evaluate test-process886 effectiveness in finding faults as early as possible. Such887 an evaluation could be associated with the analysis of888 risks. Moreover, the resources that are worth spending889 on testing should be commensurate with the890 use/criticality of the application: different techniques891 have different costs and yield different levels of892 confidence in product reliability.893 Termination [13*, c10s4]894 A decision must be made as to how much testing is895 enough and when a test stage can be terminated.896 Thoroughness measures, such as achieved code897 coverage or functional completeness, as well as898 estimates of fault density or of operational reliability,899 provide useful support but are not sufficient in900 themselves. The decision also involves considerations901 about the costs and risks incurred by possible902 remaining failures, as opposed to the costs incurred by903 continuing to test. (See “Test selection criteria/Test904 adequacy criteria” in 1.2 Key issues).905 Test reuse and test patterns [13*, c2s5]906 To carry out testing or maintenance in an organized907 and cost-effective way, the means used to test each part908 of the software should be reused systematically. This909 repository of test materials must be under the control of910 software configuration management so that changes to911 software requirements or design can be reflected in912 changes to the tests conducted.913 The test solutions adopted for testing some application914 types under certain circumstances, with the motivations915 behind the decisions taken, form a test pattern that can916 itself be documented for later reuse in similar projects.917
5.2. Test activities

Under this topic, a brief overview of test activities is given; as often implied by the following description, successful management of test activities strongly depends on the software configuration management process (see the Software Configuration Management KA).

Planning [1*, c12s1, c12s8]

Like any other aspect of project management, testing activities must be planned. Key aspects of test planning include coordination of personnel, management of available test facilities and equipment (which may include test plans and procedures), and planning for possible undesirable outcomes. If more than one baseline of the software is being maintained, then a major planning consideration is the time and effort needed to ensure that the test environment is set to the proper configuration.

Test-case generation [1*, c12s1, c12s3]

Generation of test cases is based on the level of testing to be performed and the particular testing techniques. Test cases should be under the control of software configuration management and include the expected results for each test.

Test environment development [1*, c12s6]

The environment used for testing should be compatible with the other adopted software engineering tools. It should facilitate development and control of test cases, as well as logging and recovery of expected results, scripts, and other testing materials.

Execution [1*, c12s7]

Execution of tests should embody a basic principle of scientific experimentation: everything done during testing should be performed and documented clearly enough that another person could replicate the results. Hence, testing should be performed in accordance with documented procedures using a clearly defined version of the software under test.

Test results evaluation [13*, c15]

The results of testing must be evaluated to determine whether or not the test has been successful. In most cases, "successful" means that the software performed as expected and did not have any major unexpected outcomes. Not all unexpected outcomes are necessarily faults, however, but could be judged as simply noise. Before a fault can be removed, an analysis and debugging effort is needed to isolate, identify, and describe it. When test results are particularly important, a formal review board may be convened to evaluate them.

Problem reporting/Test log [1*, c13s9]

Testing activities can be entered into a test log to identify when a test was conducted, who performed the test, what software configuration was the basis for testing, and other relevant identification information. Unexpected or incorrect test results can be recorded in a problem-reporting system, the data of which form the basis for later debugging and fixing the problems that were observed as failures during testing. Also, anomalies not classified as faults could be documented in case they later turn out to be more serious than first thought. Test reports are also an input to the change-management request process (see Software Configuration Control in the Software Configuration Management KA).

Defect tracking [13*, c9]

Failures observed during testing are most often due to faults or defects in the software.
Such defects can be analyzed to determine when they were introduced into the software, what kind of error caused them to be created (for example, poorly defined requirements, incorrect variable declaration, memory leak, programming syntax error), and when they could have been first observed in the software. Defect-tracking information is used to determine what aspects of software engineering need improvement and how effective previous analyses and testing have been.

6. Software Testing Tools

6.1. Testing tool support [1*, c12s11, 13*, c5]

Testing requires fulfilling many labor-intensive tasks, running numerous executions, and handling a great amount of information. Appropriate tools can alleviate the burden of clerical, tedious operations and make them less error-prone. Sophisticated tools can support test design, making it more effective.

Selecting tools [1*, c12s11]

Guidance to managers and testers on how to select those tools that will be most useful to their organization and processes is a very important topic, as tool selection greatly affects testing efficiency and effectiveness. Tool selection depends on diverse evidence, such as development choices, evaluation objectives, execution facilities, and so on. In general, there may not be a unique tool satisfying all needs, and a suite of tools could be the most appropriate choice.

6.2. Categories of tools

We categorize the available tools according to their functionality. In particular:

  Test harnesses (drivers, stubs) [1*, c3s9] provide a controlled environment in which tests can be launched and the test outputs can be logged. In order to execute parts of a software system, drivers and stubs are provided to simulate caller and called modules, respectively (see the sketch after this list).

  Test generators [1*, c12s11] provide assistance in the generation of tests. The generation can be random, pathwise (based on the flowgraph), model-based, or a mix thereof.

  Capture/Replay tools [1*, c12s11] automatically re-execute, or replay, previously run tests, which have recorded inputs and outputs (e.g., screens).
  Oracle/File comparators/Assertion checking tools [1*, c9s7] assist in deciding whether a test outcome is successful or faulty.

  Coverage analyzers and instrumenters [1*, c4] work together. Coverage analyzers assess which and how many entities of the program flowgraph have been exercised amongst all those required by the selected coverage-testing criterion. The analysis can be done thanks to program instrumenters, which insert probes into the code.

  Tracers [1*, c1s7] trace the history of a program's execution.

  Regression testing tools [1*, c12s16] support the re-execution of a test suite after the software has been modified. They can also help to select a subset of test cases according to the change.

  Reliability evaluation tools [13*, c8] support test results analysis and graphical visualization in order to assess reliability-related measures according to selected models.
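As a minimal sketch of the driver/stub idea behind test harnesses (not from the Guide; the module and function names are hypothetical, and unittest.mock from the Python standard library is used only as one possible way to build a stub), the fragment below drives a unit in isolation while stubbing out the module it would normally call:

```python
from unittest.mock import Mock

# Unit under test: normally calls a separate "rates" module (the called module).
def convert(amount, currency, rates_service):
    rate = rates_service.get_rate(currency)   # dependency on another module
    return round(amount * rate, 2)

# Stub: simulates the called module so the unit can be exercised in isolation,
# returning a fixed, controlled response instead of doing a real lookup.
rates_stub = Mock()
rates_stub.get_rate.return_value = 0.85

# Driver: launches the test and logs the outcome (the harness role).
result = convert(100, "EUR", rates_stub)
print("observed:", result, "expected:", 85.0, "->", "pass" if result == 85.0 else "fail")
rates_stub.get_rate.assert_called_once_with("EUR")
```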
MATRIX OF TOPICS VS. REFERENCE MATERIAL

References: [1*] Naik and Tripathy, 2008; [2*] Sommerville, 2011; [9*] Nielsen, 1993; [13*] Kan, 2003.

1. Software Testing Fundamentals
  1.1 Testing-Related Terminology
    Definitions of testing and related terminology: [1*] c1, c2; [2*] c8
    Faults vs. failures: [1*] c1s5; [2*] c11
  1.2 Key Issues
    Test selection criteria/Test adequacy criteria (or stopping rules): [1*] c1s14, c6s6, c12s7
    Testing effectiveness/Objectives for testing: [1*] c13s11, c11s4
    Testing for defect identification: [1*] c1s14
    The oracle problem: [1*] c1s9, c9s7
    Theoretical and practical limitations of testing: [1*] c2s7
    The problem of infeasible paths: [1*] c4s7
    Testability: [1*] c17s2
  1.3 Relationship of testing to other activities
    Testing vs. Static Software Quality Management Techniques: [1*] c12
    Testing vs. Correctness Proofs and Formal Verification: [1*] c17s2
    Testing vs. Debugging: [1*] c3s6
    Testing vs. Programming: [1*] c3s2

2. Test Levels: [1*] c1s13; [2*] c8s1
  2.1 The Target of the Test: [1*] c1s13; [2*] c8s1
    Unit testing: [1*] c3; [2*] c8
    Integration testing: [1*] c7; [2*] c8
    System testing: [1*] c8; [2*] c8
  2.2 Objectives of Testing: [1*] c1s7
    Acceptance/qualification testing: [1*] c1s7; [2*] c8s4
    Installation testing: [1*] c12s2
    Alpha and Beta testing: [1*] c13s7, c16s6; [2*] c8s4
    Reliability achievement and evaluation: [1*] c15; [2*] c15s2
    Regression testing: [1*] c8s11, c13s3
    Performance testing: [1*] c8s6
    Security testing: [1*] c8s3; [2*] c11s4
    Stress testing: [1*] c8s8
    Back-to-back testing
    Recovery testing: [1*] c14s2
    Configuration testing: [1*] c8s5
    Usability and human computer interaction testing: [9*] c6
    Test-driven development: [1*] c1s16

3. Test Techniques
  3.1 Based on the software engineer's intuition and experience
    Ad hoc
    Exploratory testing
  3.2 Input domain-based techniques
    Equivalence partitioning: [1*] c9s4
    Pairwise testing: [1*] c9s3
    Boundary-value analysis: [1*] c9s5
    Random testing: [1*] c9s7
  3.3 Code-based techniques
    Control-flow-based criteria: [1*] c4
    Data flow-based criteria: [1*] c5
    Reference models for code-based testing (flowgraph, call graph): [1*] c4
  3.4 Fault-based techniques: [1*] c1s14
    Error guessing: [1*] c9s8
    Mutation testing: [1*] c3s5
  3.5 Usage-based techniques
    Operational profile: [1*] c15s5
    User observation heuristics: [9*] c5, c7
  3.6 Model-based testing techniques
    Decision table: [1*] c9s6
    Finite-state machine-based: [1*] c10
    Testing from formal specifications: [1*] c10s11; [2*] c15
  3.7 Techniques based on the nature of the application
  3.8 Selecting and combining techniques
    Functional and structural: [1*] c9
    Deterministic vs. random: [1*] c9s6

4. Test-Related Measures
  4.1 Evaluation of the program under test
    Program measurements to aid in planning and designing testing: [1*] c12s8; [13*] c11
    Fault types, classification, and statistics: [13*] c4
    Fault density: [1*] c13s3; [13*] c4
    Life test, reliability evaluation: [1*] c15; [13*] c3
    Reliability growth models: [1*] c15; [13*] c8
  4.2 Evaluation of the tests performed
    Coverage/thoroughness measures: [13*] c11
    Fault seeding: [1*] c2s5; [13*] c6
    Mutation score: [1*] c3s5
    Comparison and relative effectiveness of different techniques

5. Test Process
  5.1 Practical considerations
    Attitudes/Egoless programming: [1*] c16; [13*] c15
    Test guides: [1*] c12s1; [13*] c15s1
    Test process management: [1*] c12; [13*] c15
    Test documentation and work products: [1*] c8s12; [13*] c4s5
    Internal vs. independent test team: [1*] c16
    Cost/effort estimation and other process measures: [1*] c18s3; [13*] c5s7
    Termination: [13*] c10s4
    Test reuse and patterns: [13*] c2s5
  5.2 Test Activities
    Planning: [1*] c12s1, c12s8
    Test-case generation: [1*] c12s1, c12s3
    Test environment development: [1*] c12s6
    Execution: [1*] c12s7
    Test results evaluation: [13*] c15
    Problem reporting/Test log: [1*] c13s9
    Defect tracking: [13*] c9

6. Software Testing Tools
  6.1 Testing tool support: [1*] c12s11; [13*] c5
    Selecting tools: [1*] c12s11
  6.2 Categories of Tools
    Test harness: [1*] c3s9
    Test generators: [1*] c12s11
    Capture/Replay: [1*] c12s11
    Oracle/file comparators/assertion checking: [1*] c9s7
    Coverage analyzer/Instrumenter: [1*] c4
    Tracers: [1*] c1s7
    Regression testing tools: [1*] c12s16
    Reliability evaluation tools: [13*] c8
REFERENCES

[1*] S. Naik and P. Tripathy, Software Testing and Quality Assurance: Theory and Practice. Wiley, 2008.
[2*] I. Sommerville, Software Engineering, 9th ed. New York: Addison-Wesley, 2010.
[3] IEEE/ISO/IEC, "IEEE/ISO/IEC 24765: Systems and Software Engineering - Vocabulary," 1st ed., 2010.
[4] ISO/IEC/IEEE, "Draft Standard P29119-1/DIS for Software and Systems Engineering - Software Testing - Part 1: Concepts and Definitions," 2012.
[5] M. R. Lyu, Ed., Handbook of Software Reliability Engineering. IEEE Computer Society Press, McGraw-Hill, 1996.
[6] H. Zhu et al., "Software unit test coverage and adequacy," ACM Computing Surveys, vol. 29, pp. 366-427, Dec. 1997.
[7] E. W. Dijkstra, "Notes on Structured Programming," Technological University, Eindhoven, 1970.
[8] S. Yoo and M. Harman, "Regression testing minimization, selection and prioritization: a survey," Software Testing, Verification & Reliability, vol. 22, pp. 67-120, Mar. 2012.
[9*] J. Nielsen, Usability Engineering, 1st ed. Boston: Morgan Kaufmann, 1993.
[10] T. Y. Chen et al., "Adaptive Random Testing: The ART of test case diversity," Journal of Systems and Software, vol. 83, pp. 60-66, Jan. 2010.
[11] Y. Jia and M. Harman, "An Analysis and Survey of the Development of Mutation Testing," IEEE Transactions on Software Engineering, vol. 37, pp. 649-678, Sep.-Oct. 2011.
[12] M. Utting and B. Legeard, Practical Model-Based Testing: A Tools Approach. Morgan Kaufmann, 2007.
[13*] S. H. Kan, Metrics and Models in Software Quality Engineering, 2nd ed. Boston: Addison-Wesley, 2002.
