This document introduces a new real-time combinatorial coverage measurement tool called CCM Command Line. It summarizes the key limitations of an earlier tool, CCM, and describes new capabilities of the command line tool, including the ability to measure coverage incrementally and from various sources in real time. The document also discusses applications of the new tool and acknowledges those involved in its development.
Measuring the Combinatorial Coverage of Software in Real Time
1. Measuring the Combinatorial Coverage of Software in Real Time
Zachary Ratliff
Computer Security: Security Components & Mechanisms
August 4, 2016
2. What is Combinatorial Testing?
Design of Experiments (D.O.E.) applied to software testing
Can significantly reduce testing time and costs without sacrificing effectiveness
Offers a partial solution for showing that a particular program will work for all given inputs
3. The Intractable Nature of Software Testing
The input domain of software grows exponentially with the number of input parameters:
• 10 binary inputs: 2^10 = 1,024 configurations
• 20 binary inputs: 2^20 = 1,048,576 configurations
Folding a piece of 0.01 cm thick paper in half 42 times would reach the moon: 0.01 cm × 2^42 ≈ 439,804 km
*Note: you can only fold paper in half about seven times…
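The exponential growth above is easy to check with a few lines of Python (a standalone illustration, not part of the CCM tool; the function name is invented):

```python
def exhaustive_configs(num_params: int, values_per_param: int = 2) -> int:
    """Number of configurations when every combination must be tested."""
    return values_per_param ** num_params

print(exhaustive_configs(10))   # 1024
print(exhaustive_configs(20))   # 1048576

# Paper-folding illustration: a 0.01 cm sheet doubled 42 times, converted to km.
print(int(0.01 * 2**42 / 100_000))  # 439804
```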
5. Efficiency of Covering Arrays
The total number of t-way variable-value configurations for a given system is:
v^t × C(n, t)
where n = number of parameters and t = level of t-way coverage.
For mixed-level variable configurations:
Σ v_i1 × ⋯ × v_it, summed over all C(n, t) parameter combinations
In practice, covering arrays grow exponentially with t and logarithmically with n:
Number of tests ≈ v^t log n
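The counting formulas above translate directly into code. This sketch uses Python's standard library; the function names are assumptions for illustration:

```python
from itertools import combinations
from math import comb, prod

def total_t_way_settings(v: int, n: int, t: int) -> int:
    """v^t value settings for each of the C(n, t) parameter combinations."""
    return v**t * comb(n, t)

def mixed_level_settings(values_per_param, t: int) -> int:
    """Sum of v_i1 * ... * v_it over every t-subset of the parameters."""
    return sum(prod(vs) for vs in combinations(values_per_param, t))

# 10 binary parameters, pairwise: 2^2 * C(10, 2) = 4 * 45
print(total_t_way_settings(v=2, n=10, t=2))  # 180

# Mixed levels (2, 2, 3, and 4 values), pairwise: 4+6+8+6+8+12
print(mixed_level_settings([2, 2, 3, 4], t=2))  # 44
```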
6. The Interaction Rule
Most failures are induced by one or two factors, with progressively fewer faults induced by more than two factors.
No failure involving more than six factors has been reported.
Covering all 4-way to 6-way combinations therefore provides strong testing.
7. The Problem
Most organizations do not fully understand the benefits of switching to combinatorial testing methods.
Time, money, and other resources may not be available to alter testing practices.
Few combinatorial testing tools and little training are available.
8. CCM: Combinatorial Coverage Measurement Tool
Cross-platform tool written in Java
Measures combinatorial coverage of static .csv files
Features:
o Generates missing combinations
o Constraint support
o Displays invalid combinations
*Created by Itzel Mendoza while working as a guest researcher at NIST.
9. Limitations of CCM
Could only accept .csv files for test case input
o No ability to hook in other tools
o Had to be run on a local machine
Limited to static analysis of data
o Very inefficient when measuring repeatedly as new data is added
10. Interest was generated in various industries for a new combinatorial measurement tool with the capability to measure coverage in real time.
12. New Capabilities
Can read multiple file types:
o .csv test case files
o .txt test case files
o ACTS .xml configuration files
o ACTS .txt configuration files
Added support for equivalence classes and groups within ACTS configuration files:
o Ranges and boundary values defined by interval notation
• (*,5],[6,*) creates two range classes: −∞ to 5 and 6 to ∞
o Groups are specified in braces
• {"Debian", "Ubuntu", "Red Hat"}, {"Windows XP", "Windows 7"}
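The interval notation above can be pictured as a mapping from raw values to range-class indices. This is a hypothetical sketch, not CCM's actual parser; the function name and boundary handling are assumptions:

```python
def range_class(value: float, uppers=(5,)) -> int:
    """Map a value to its equivalence-class index.

    Mirrors the notation (*,5],[6,*): class 0 covers -inf..5,
    class 1 covers 6..inf (integer-valued boundaries assumed).
    """
    for i, upper in enumerate(uppers):
        if value <= upper:
            return i
    return len(uppers)

print(range_class(3))   # 0 -> falls in (*,5]
print(range_class(42))  # 1 -> falls in [6,*)
```

Coverage can then be measured over the small set of class indices instead of the unbounded raw values.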
13. Real-Time Measurement Functionality
o Incrementally measures combinatorial coverage as new test cases are added to the data set
Accepts input from various sources:
o Files
o Standard input
o External programs
o Internet / TCP
More robust constraint definitions:
o !employee => !grant_permission
*The older version of CCM had issues processing constraints in this notation.
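The incremental idea can be sketched as follows: keep the set of already-covered t-way settings, and when a new test case arrives, add only that test's combinations, so no re-scan of earlier tests is needed. This is a simplified illustration, not CCMCL's actual implementation; the class and method names are invented:

```python
from itertools import combinations
from math import prod

class IncrementalCoverage:
    """Simplified sketch of incremental t-way coverage measurement."""

    def __init__(self, values_per_param, t=2):
        self.t = t
        self.covered = set()
        n = len(values_per_param)
        # Total possible t-way settings (mixed-level count from slide 5).
        self.total = sum(prod(values_per_param[i] for i in idx)
                         for idx in combinations(range(n), t))

    def add_test(self, test):
        # Examine only the new test's combinations -- no re-scan of old tests.
        for idx in combinations(range(len(test)), self.t):
            self.covered.add((idx, tuple(test[i] for i in idx)))

    def coverage(self):
        return len(self.covered) / self.total

cov = IncrementalCoverage([2, 2, 2], t=2)
cov.add_test((0, 0, 0))
cov.add_test((1, 1, 1))
print(cov.coverage())  # 6 of the 12 pairwise settings covered -> 0.5
```

Because each test contributes only C(n, t) combinations, the per-test cost is independent of how many tests were processed before, which is what makes real-time measurement from a stream (stdin, TCP) practical.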
14. Time Complexity
The time complexity of the initial measurement of a static test case file remains the same:
Θ(C(n, t) · v^t + m)
Incremental measurement while adding test cases:
Θ(C(n, t) · v^t)
In both static and real-time measurement, the algorithm is tractable in real-world situations.
15. Applications of CCMCL
Product readiness
o Determining whether a pre-release version has been tested enough by beta users
Monitoring IV&V performance
o Is the IV&V company providing quality tests that meet software assurance standards?
Measuring current test suite implementations
o Do current test suites already provide significant combinatorial coverage?
Internet of Things reliability
o Measuring how reliable a system of interconnected components is likely to be
16. Acknowledgements
Rick Kuhn, National Institute of Standards & Technology
Raghu Kacker, National Institute of Standards & Technology
Dylan Yaga, National Institute of Standards & Technology
Itzel Mendoza, Centro Nacional de Metrología
SURF Undergraduate Research Program, National Institute of Standards & Technology
17. References
D.R. Kuhn, R.N. Kacker, Y. Lei, and J. Hunter, "Combinatorial Software Testing," IEEE Computer Society, August 2009.
D.R. Kuhn, D.R. Wallace, and A.M. Gallo, Jr., "Software Fault Interactions and Implications for Software Testing," IEEE Transactions on Software Engineering, June 2004.
D.R. Kuhn, R.N. Kacker, and Y. Lei, Introduction to Combinatorial Testing, CRC Press, 2013.
Editor's Notes
Introduction to Design of Experiments for software testing: save money, time, and effort by choosing only the more probable test cases, those likely to trigger the most faults.
This slide represents the intractable nature of software testing; the example shows how fast an exponential problem can grow.
Short overview of covering arrays and how they can be applied to software testing.
Current covering array algorithms follow the bottom rule and take a greedy approach.
The interaction rule is the empirical justification for combinatorial testing.
Interest from software engineers in telecommunications and engineering companies was established.
This slide introduces the project I worked on. CCMCL is a command line version of the CCM tool, but with much more functionality.
n is the number of parameters, m is the number of tests, v is the average number of values per parameter, and t is the level of t-way measurement.