The document presents a new approach called Regression Test Suite Minimization Using Dynamic Interaction Patterns with Improved FDE that aims to reduce the size of regression test suites while maintaining fault detection ability. It does this by constructing a dynamic dependence graph to identify interaction patterns between transitions in an Extended Finite State Machine model, allowing it to consider repeated dependencies. An example of applying the approach to a banking system model is provided to motivate the method.
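The summary above does not reproduce the paper's exact construction, but the core idea can be sketched: record the variables each executed EFSM transition defines and uses, and draw a dependence edge from every use back to its most recent definition in the execution trace, so that repeated executions of the same transition yield repeated dependencies. All names below (`Transition`, `build_dependence_graph`, the banking-style trace) are hypothetical, not the paper's API.

```python
# Minimal sketch of a dynamic dependence graph over executed EFSM
# transitions. Assumption: transition t_j depends on t_i when t_j uses a
# variable whose most recent definition in the trace was made by t_i.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    name: str
    defines: frozenset  # variables written by this transition
    uses: frozenset     # variables read by this transition

def build_dependence_graph(trace):
    """Map each executed transition index to the indices it depends on."""
    graph = {i: set() for i in range(len(trace))}
    last_def = {}  # variable -> index of its most recent definition
    for i, t in enumerate(trace):
        for v in t.uses:
            if v in last_def:
                graph[i].add(last_def[v])  # dynamic def-use dependence
        for v in t.defines:
            last_def[v] = i
    return graph

# A toy banking-style trace: each deposit/withdraw reads and rewrites
# 'balance', so repeating a transition produces a fresh dependence.
t_open = Transition("open", frozenset({"balance"}), frozenset())
t_dep = Transition("deposit", frozenset({"balance"}), frozenset({"balance"}))
t_wd = Transition("withdraw", frozenset({"balance"}), frozenset({"balance"}))
trace = [t_open, t_dep, t_wd, t_dep]
print(build_dependence_graph(trace))  # {0: set(), 1: {0}, 2: {1}, 3: {2}}
```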
Model based software testing presentation (Ankit Sambyal)
Model Based Testing involves generating test cases from design and analysis models like UML diagrams and finite state machines. As software grows in complexity and size, testing requires more time and effort, so test case generation is automated. UML based testing uses genetic algorithms to generate test data from UML state diagrams before coding begins. The genetic algorithm represents test data as sequences of triggers that fire transitions. It aims to find high quality test data with the best transition coverage through selection, crossover and mutation of sequences over multiple iterations.
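As a hedged sketch of the genetic search this summary describes, the toy below evolves trigger sequences against an invented three-state machine, scoring each sequence by the distinct transitions it fires; the machine, operators, and parameters are illustrative, not taken from the presentation.

```python
# GA over trigger sequences: fitness = number of distinct transitions
# covered when the sequence is fired against a toy state machine.
import random

# (state, trigger) -> next state; an illustrative 3-state machine
FSM = {("s0", "a"): "s1", ("s1", "b"): "s2",
       ("s2", "c"): "s0", ("s1", "a"): "s1"}
TRIGGERS = ["a", "b", "c"]

def fitness(seq, start="s0"):
    state, covered = start, set()
    for trig in seq:
        nxt = FSM.get((state, trig))
        if nxt is not None:
            covered.add((state, trig))  # this transition fired
            state = nxt
    return len(covered)

def evolve(pop_size=30, seq_len=8, gens=50, mut_rate=0.1):
    pop = [[random.choice(TRIGGERS) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, seq_len)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [random.choice(TRIGGERS) if random.random() < mut_rate
                     else t for t in child]      # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, "covers", fitness(best), "transitions")
```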
Prioritizing Test Cases for Regression Testing: A Model Based Approach (IJTET Journal)
The document summarizes a model-based approach to prioritizing regression test cases. It involves generating test cases from UML models, prioritizing them based on the number of states and transitions covered, and clustering them by severity using a dendrogram approach. This helps decrease the time and cost of regression testing by focusing testing efforts on the most important and affected areas first. The proposed approach constructs models from requirements, identifies states, prioritizes flows, generates test cases, and prioritizes the test cases based on severity to improve regression testing efficiency.
Calibration and validation model (Simulation) (Rajan Kandel)
This document discusses calibration and validation of models. Calibration is an iterative process of comparing a model to the real system and adjusting model parameters to better match observed real data. Validation checks that the model's output matches real data and ensures the model is useful. Key aspects of calibration discussed include comparing model output to measured data at different time granularities, and additional data needs. Validation ensures the model assumptions and programming are sound. Steps in validation include building a model with face validity, validating assumptions, and comparing model input-output transformations to the real system.
DYNAMUT: A MUTATION TESTING TOOL FOR INDUSTRY-LEVEL EMBEDDED SYSTEM APPLICATIONS (ijesajournal)
The document describes DynaMut, a tool developed to automate mutation testing for embedded system applications written in C++. DynaMut inserts conditional mutations into the code during compilation rather than requiring multiple recompilations. This reduces the time needed for mutation testing by 48-67% compared to traditional methods. The document also evaluates different sampling techniques for reducing the number of mutations tested while maintaining representative results, finding that dithered sampling is most effective.
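DynaMut itself instruments C++ at compile time, but the conditional-mutation idea can be sketched in Python: every mutant is guarded by a runtime mutant ID, so the program is built once and each mutant is selected at run time. The `grade` function and its mutants are invented for illustration.

```python
# Conditional mutations: all mutants coexist in one build, selected by a
# runtime ID. MUTANT=0 means the original, unmutated behavior.
import os

MUTANT = int(os.environ.get("MUTANT", "0"))

def grade(score):
    # Original: score >= 60 passes. Mutant 1 tightens '>=' to '>'.
    passing = (score > 60) if MUTANT == 1 else (score >= 60)
    # Original bonus threshold: 90. Mutant 2 relaxes it to 80.
    bonus_cut = 80 if MUTANT == 2 else 90
    return ("pass" if passing else "fail", score >= bonus_cut)

def test_grade():
    assert grade(60) == ("pass", False)   # boundary case kills mutant 1
    assert grade(85) == ("pass", False)   # kills mutant 2
    assert grade(59) == ("fail", False)

if __name__ == "__main__":
    test_grade()
    print("original OK" if MUTANT == 0 else f"mutant {MUTANT} survived")
```

With this scheme the suite is executed once per mutant (`MUTANT=1 python grade_mutants.py`, then `MUTANT=2 ...`), and an assertion failure means the mutant is killed; the single build replaces the per-mutant recompilation that DynaMut reports as the main cost in C++.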
Need of Quality Engineering and Failure Analysis Techniques (Greeshma S)
The document discusses quality engineering (QE) and failure analysis (FA) techniques. It defines QE as engineering work related to ensuring a product meets specifications. QE aims to design robust products using techniques like Taguchi's quality loss functions and robust design methods. FA is defined as analyzing failed components to determine the root cause of failure. Several FA techniques are discussed along with the importance of FA in preventing failures in electronics. Classifying tests and understanding failure mechanisms and rates are important aspects of both QE and FA.
Determination of Software Release Instant of Three-Tier Client Server Softwar... (Waqas Tariq)
The quality of any software system depends mainly on how much time is spent on testing, the testing methodologies used, the complexity of the software, the effort invested by developers, and the testing environment, all subject to cost and time constraints. The more time developers spend on testing, the more errors can be removed, yielding more reliable software, but testing cost also rises. Conversely, if testing time is too short, software cost can be reduced, provided customers accept the risk of buying unreliable software; this, however, raises cost during the operational phase, since fixing an error in operation is more expensive than fixing it during testing. It is therefore essential to decide, based on cost and reliability assessment, when to stop testing and release the software to customers. In this paper we present a mechanism for deciding when to stop the testing process and release the software to end users, by developing a software cost model with a risk factor. Based on the proposed method, we specifically address how to decide when to stop testing and release software built on a three-tier client-server architecture, which helps software developers ensure on-time delivery of a product that achieves a predefined level of reliability at minimal cost. A numerical example illustrates the experimental results, showing significant improvements over conventional statistical models based on NHPP.
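The paper's own cost model (with its risk factor and three-tier specifics) is not reproduced in this summary, but the classic NHPP-based trade-off it builds on can be sketched as follows, using the standard Goel-Okumoto mean value function m(T) = a(1 - e^(-bT)); all parameter values are purely illustrative.

```python
# Release-time trade-off: fixing faults in testing is cheap, fixing
# field-escaped faults is expensive, and testing time itself costs money.
import math

a, b = 100.0, 0.05          # expected total faults, fault detection rate
c_test, c_field = 1.0, 5.0  # cost per fault fixed in testing vs. operation
c_time = 0.5                # testing cost per unit time

def m(T):
    """Expected number of faults detected by testing time T."""
    return a * (1.0 - math.exp(-b * T))

def total_cost(T):
    detected = m(T)
    latent = a - detected   # faults escaping to the field
    return c_test * detected + c_field * latent + c_time * T

# Scan candidate release instants for the cost minimum (analytically,
# the optimum here is near T = ln(40)/b, roughly 74 time units).
best_T = min((T * 0.5 for T in range(0, 400)), key=total_cost)
print(f"release at T~{best_T}, expected cost {total_cost(best_T):.1f}")
```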
Testing embedded system through optimal mining technique (OMT) based on multi... (IJECEIAES)
Embedded systems must be tested carefully, particularly in their critical regions. Inputs to an embedded system can arrive in many orders, and many relationships can exist among the input sequences; these sequences and their relationships must be tested to establish the expected behavior of the system. Combinatorial approaches, in turn, help determine a smaller set of test cases that is still sufficient to test embedded systems thoroughly. In this paper, an Optimal Mining Technique for multi-input domains, built on combinatorial approaches, is presented. The method exploits multi-input sequences and the relationships among multi-input vectors, and has been used to test an embedded system that monitors and controls the temperature within nuclear reactors.
Genetic fuzzy process metric measurement system for an operating system (ijcseit)
The operating system (OS) is the most essential software of a computer system; without it, the computer is unusable. It is the front line for allocating computer resources, and its performance strongly affects the user's overall experience of the system. The related literature has tried various methods and techniques to measure the process-metric performance of operating systems, but none has incorporated genetic algorithms and fuzzy logic, which makes this a novel approach. Extending the work of Michalis, this research focuses on measuring the process-metric performance of an operating system using a set of operating-system criteria, fusing fuzzy logic to handle imprecision and a genetic algorithm for process optimization.
Experimental comparison of ranking techniques (jleyvlop)
The document describes an experimental comparison of ranking techniques for multi-criteria decision analysis (MCDA). It conducted a simulation experiment to compare the performance of two MCDA ranking methods: the NFR (Normalized Frequency Ratio) method and a new MOEA (Multi-Objective Evolutionary Algorithm) procedure. The experiment varied the ranking procedure and size of ranking problems. It found that the MOEA procedure significantly outperformed NFR across all conditions, producing lower error rates and more accurate rankings. The MOEA procedure was also less affected by increases in problem size.
This document summarizes a knowledge engineering approach using analytic hierarchy process (AHP) to resolve conflicts between experts in risk-related decision making. It proposes using a modified version of AHP to increase transparency in the analysis procedure. This allows identification of major causes of inter-expert discrepancy, which are differences in unstated assumptions and subjective weightings of risk factors. The document demonstrates how AHP can systematically decompose complex decision problems, evaluate alternatives based on multiple criteria, and aggregate results to provide an overall evaluation that incorporates differing expert opinions in a consistent manner.
The document discusses optimization techniques for drug formulation development. It states that the traditional approach of changing one variable at a time (COST) is time-consuming, uneconomical, and unable to reveal interactions. It introduces response surface methodology (RSM) as a better approach that uses statistical experimental designs, mathematical models, and graphical analysis to optimize formulations with fewer experiments. RSM allows understanding the effects of independent formulation variables on dependent quality responses to identify the best formulation.
Requirements & system modelling for verification (Johan Hoberg)
This document discusses how generating a model of a system-under-test can help testers better understand the system before testing. The model acts as a partial representation of the system's desired behavior. With this model in place, testers can derive different types of test cases. Creating the model facilitates scope setting and reduces gaps in test coverage. The model also provides an easy way for testers to communicate their understanding of the system to stakeholders.
TEST CASE PRIORITIZATION FOR OPTIMIZING A REGRESSION TEST (ijfcstjournal)
Regression testing ensures that upgrading software, whether to add new features or to fix bugs, does not break previously working functionality. Whenever software is upgraded or modified, a set of test cases is run on each of its functions to confirm that the change does not affect other parts of the software that previously ran flawlessly. Achieving this requires running all existing test cases, and new test cases may also need to be created. Re-executing every test case for all functions of a given piece of software is not feasible, because running a large number of test cases demands a great deal of time and effort. This problem can be addressed by prioritizing test cases: test case prioritization reorders the sequence in which test cases are executed so that high-priority test cases run first and uncover as many faults as possible early on. In this paper we propose an optimized test case prioritization technique using Ant Colony Optimization (ACO) to reduce the cost, effort, and time taken to perform regression testing while uncovering the maximum number of faults. A comparison of techniques such as Retest All, Test Case Minimization, Test Case Prioritization, Random Test Case Selection, and Test Case Prioritization using ACO is also presented.
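The abstract does not spell out the ACO formulation, so the following is a minimal sketch assuming per-test pheromone, a known historical fault matrix, and APFD (Average Percentage of Faults Detected) as the objective; all three are our assumptions, not the paper's.

```python
# ACO-style prioritization: ants build orderings guided by pheromone;
# orderings that expose faults earlier (higher APFD) reinforce the
# pheromone of their early tests. The fault matrix is illustrative.
import random

faults = [{1, 3}, {2}, {1, 2, 4}, {5}, {3, 5}]  # faults each test detects
n = len(faults)
all_faults = set().union(*faults)

def apfd(order):
    first_pos = {}
    for pos, t in enumerate(order, start=1):
        for f in faults[t]:
            first_pos.setdefault(f, pos)  # earliest test exposing fault f
    m = len(all_faults)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)

def ant_order(pheromone):
    remaining, order = list(range(n)), []
    while remaining:
        t = random.choices(remaining,
                           weights=[pheromone[x] for x in remaining])[0]
        order.append(t)
        remaining.remove(t)
    return order

def aco_prioritize(ants=20, iters=50, rho=0.1):
    pheromone = [1.0] * n
    best, best_score = None, -1.0
    for _ in range(iters):
        for _ in range(ants):
            order = ant_order(pheromone)
            score = apfd(order)
            if score > best_score:
                best, best_score = order, score
        pheromone = [p * (1 - rho) for p in pheromone]  # evaporation
        for rank, t in enumerate(best):                 # reinforcement:
            pheromone[t] += best_score / (rank + 1)     # earlier gains more
    return best, best_score

order, score = aco_prioritize()
print("priority order:", order, "APFD:", round(score, 3))
```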
This document discusses software reliability and fault discovery probability analysis. It begins by defining software reliability as consisting of error prevention, fault discovery and removal, and reliability measurements. A beta distribution model is proposed to analyze the probability of discovering faults during software testing. The document evaluates different parameter estimation methods for the beta distribution model like variance, sum of squares, and maximum likelihood estimation. It analyzes the performance of these parameter estimation methods using sample programs. The document concludes that estimating failure rates from different faults under different testing measures can provide a prior evaluation of a model's parameters and predict testing effort required to achieve quality goals.
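As a sketch of one estimation route such a comparison covers, the method-of-moments estimator for a Beta(alpha, beta) model can be written directly from the sample mean and variance; the discovery-rate data below is invented, not from the document.

```python
# Method-of-moments estimates for Beta(alpha, beta) fault-discovery
# probabilities, one of the simpler alternatives to MLE.
from statistics import mean, variance

def beta_mom(samples):
    """Return (alpha, beta) matching the sample mean and variance."""
    mu, var = mean(samples), variance(samples)
    common = mu * (1 - mu) / var - 1  # valid when var < mu * (1 - mu)
    return mu * common, (1 - mu) * common

# Per-run fractions of seeded faults discovered (hypothetical data).
discovery_rates = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.73, 0.51]
alpha, beta = beta_mom(discovery_rates)
print(f"alpha = {alpha:.2f}, beta = {beta:.2f}")
```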
The document discusses validation of economic capital models from a regulatory perspective. It outlines a range of qualitative and quantitative validation approaches used in practice to assess different properties of economic capital models, from integrity of implementation to predictive ability. While individual tests have limitations, a layered approach using multiple validation techniques can provide more robust evidence of a model's fitness for its intended purposes. Key challenges include validating conceptual soundness and assumptions given many are untestable, as well as assessing accuracy, particularly in tail distributions where data is scarce.
This document provides an overview of software testing techniques and their maturation over time. It examines the major research results that have contributed to the growth of testing as an area. The document defines testing goals and categories, including functional vs structural testing and static vs dynamic analysis. It also discusses testing at different stages of the software lifecycle from unit to system level. The technology maturation model and research paradigms framework are used to analyze how testing techniques have evolved from initial ideas to broader solutions and changes in research questions and strategies over time.
by Andrew Rowland
Management of aging electronic systems is a problem faced by many industries, and managing these systems requires some understanding of their reliability performance. In the United States commercial nuclear industry, several approaches are being taken in an attempt to understand the reliability performance of plant systems. This article describes one approach being used. The method is non-parametric and requires no specialized data analysis software.
Software testing involves testing at different levels from the component level up to integration testing of the entire system. Different testing techniques are used at each stage including unit testing, integration testing, validation, acceptance, and performance testing. Thorough documentation of testing requirements, test cases, expected and actual results is needed to guide the testing process.
This document outlines two options for quantifying the range of uncertainty in vibration measurement for aircraft engines. Option A involves testing one engine repeatedly across three test cells to estimate uncertainty limits with minimal disruption. Option B tests multiple engines to also evaluate measurement repeatability relative to engine variability and identify improvement opportunities, but with greater time and cost. Factors affecting measurement precision are categorized, and a process is described to control variables and record observations to statistically analyze uncertainty from multiple sources.
TRANSFORMING SOFTWARE REQUIREMENTS INTO TEST CASES VIA MODEL TRANSFORMATION (ijseajournal)
Executable test cases originate at the onset of testing as abstract requirements that represent system behavior. Their manual development is time-consuming, susceptible to errors, and expensive. Translating system requirements into behavioral models and then transforming them into a scripting language has the potential to automate their conversion into executable tests. Ideally, an effective testing process should start as early as possible, refine the use cases with ample details, and facilitate the creation of test cases. We propose a methodology that enables automation in converting functional requirements into executable test cases via model transformation. The proposed testing process starts with capturing system behavior in the form of visual use cases using a domain-specific language, defining transformation rules, and ultimately transforming the use cases into executable tests.
EXTRACTING THE MINIMIZED TEST SUITE FOR REVISED SIMULINK/STATEFLOW MODEL (ijaia)
Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that as the model evolves, test suites tend to grow in size, making it too costly to execute entire suites. This paper proposes a practical approach to reducing the size of test suites for a modified Simulink/Stateflow (SL/SF) model, a notation widely used for modeling software behavior in industries such as automobile manufacturing. The model describing a system is frequently revised until it stabilizes. The proposed technique extracts a minimized test suite, in terms of test coverage, by taking into account both the modified and the affected portions of the revised SL/SF model. Two real models for ECUs deployed in a commercial car are used in an empirical study.
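The paper's SL/SF-specific algorithm is not given in the summary, but the coverage-preserving reduction it performs can be sketched as a greedy set-cover pass over the affected portion of the model; the suite and coverage data below are illustrative.

```python
# Greedy suite minimization: keep the smallest set of tests that still
# covers every coverage item reachable from the modified/affected part.
def minimize(suite, affected):
    """suite: {test: set(covered_items)}; affected: items to keep covered."""
    remaining = set(affected)
    kept = []
    while remaining:
        # Pick the test covering the most still-uncovered affected items.
        best = max(suite, key=lambda t: len(suite[t] & remaining))
        gain = suite[best] & remaining
        if not gain:
            break  # some affected items are covered by no test at all
        kept.append(best)
        remaining -= gain
    return kept

suite = {"t1": {"b1", "b2"}, "t2": {"b2", "b3", "b4"},
         "t3": {"b4"}, "t4": {"b1", "b5"}}
affected = {"b2", "b4", "b5"}   # items touched by the model revision
print(minimize(suite, affected))  # -> ['t2', 't4']
```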
Software testing is an important activity of the software development process and its most effort-consuming phase. One would like to minimize effort while maximizing the number of faults detected, and automated test case generation helps reduce cost and time. Test case generation may therefore be treated as an optimization problem. In this paper we use a genetic algorithm to optimize test cases generated by applying condition coverage to source code. The test data generated and optimized by the genetic algorithm outperforms test cases generated by random testing.
Configuration Navigation Analysis Model for Regression Test Case Prioritization (ijsrd.com)
Regression testing has been receiving increasing attention, and numerous regression testing strategies have been proposed. Most of them take into account metrics such as cost and the ability to find faults quickly, thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed, which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
Real-time implementation of a software system requires versatility. In the maintenance phase, regression testing of the modified system must ensure that the existing system remains defect-free. Test case prioritization techniques for regression testing include both code-based and model-based methods. System-model-based test case prioritization can detect severe faults earlier than code-based prioritization, yet cost-effective, requirement-driven model-based prioritization has received little study so far. Model-based testing tests the functionality of the software system against its requirements. An effective model-based approach is defined for prioritizing test cases and generating an effective test sequence: test cases are rescheduled based on requirement analysis and user-view analysis, a weighted approach estimates the overall cost of testing the functionality of the model elements, and a genetic approach is applied to generate efficient test paths. Under this model-based prioritization approach, the regression cost in terms of effort is reduced.
Abstract: Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. To date, most research in combinatorial testing proposes approaches that generate test suites of minimum size that still cover all pairwise, triple, or n-way combinations of factors. Since this problem is NP-hard, existing approaches are designed to generate optimal or near-optimal combinatorial test suites in polynomial time. In this paper, we apply particle swarm optimization (PSO), a meta-heuristic search technique, to pairwise testing (a special case of combinatorial testing that covers all pairwise combinations). To systematically build pairwise test suites, we propose two PSO-based algorithms: one based on a one-test-at-a-time strategy and the other on an IPO-like strategy. In both algorithms, PSO constructs each single test; to make PSO cover as many uncovered pairwise combinations as possible during this construction, we describe in detail how to formulate the search space, define the fitness function, and choose heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare it to other well-known approaches; the results show the effectiveness and efficiency of our approach.
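As a hedged sketch of the one-test-at-a-time strategy described above, the code below runs a basic real-valued PSO per test row, decoding particle positions to factor levels by rounding; the decode scheme, constants, and the seeding fallback are our assumptions, not the paper's formulation.

```python
# One-test-at-a-time pairwise suite construction: each PSO run builds one
# row maximizing the number of still-uncovered pairs it covers.
import itertools, random

levels = [3, 3, 2, 2]   # number of values for each of 4 factors

def all_pairs():
    pairs = set()
    for (i, li), (j, lj) in itertools.combinations(enumerate(levels), 2):
        for a in range(li):
            for b in range(lj):
                pairs.add((i, a, j, b))
    return pairs

def pairs_of(test):
    return {(i, test[i], j, test[j])
            for i, j in itertools.combinations(range(len(test)), 2)}

def decode(pos):
    return [int(pos[d]) % levels[d] for d in range(len(levels))]

def pso_one_test(uncovered, particles=15, iters=40):
    dim = len(levels)
    xs = [[random.uniform(0, levels[d]) for d in range(dim)]
          for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [list(x) for x in xs]
    def fit(p):
        return len(pairs_of(decode(p)) & uncovered)
    gbest = list(max(xs, key=fit))
    for _ in range(iters):
        for k in range(particles):
            for d in range(dim):
                vs[k][d] = (0.7 * vs[k][d]
                            + 1.5 * random.random() * (pbest[k][d] - xs[k][d])
                            + 1.5 * random.random() * (gbest[d] - xs[k][d]))
                xs[k][d] += vs[k][d]
            if fit(xs[k]) > fit(pbest[k]):
                pbest[k] = list(xs[k])
            if fit(xs[k]) > fit(gbest):
                gbest = list(xs[k])
    return decode(gbest)

uncovered, suite = all_pairs(), []
while uncovered:
    test = pso_one_test(uncovered)
    if not (pairs_of(test) & uncovered):      # rare PSO miss: seed a row
        i, a, j, b = next(iter(uncovered))    # from one uncovered pair
        test = [a if d == i else b if d == j else 0
                for d in range(len(levels))]
    suite.append(test)
    uncovered -= pairs_of(test)
print(len(suite), "tests:", suite)
```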
Specification based or black box techniques (muhammad afif)
The document discusses various specification-based or black-box testing techniques including equivalence partitioning, boundary value analysis, decision tables, state transition testing, and use case testing. It provides definitions and explanations of each technique, how they are used to design test cases, and their benefits in testing software specifications and identifying bugs.
1. Write test cases from given software models using the following test design techniques (K3); the first two are sketched in code below:
   a. equivalence partitioning;
   b. boundary value analysis;
   c. decision tables;
   d. state transition testing.
2. Understand the main purpose of each of the four techniques, what level and type of testing could use the technique, and how coverage may be measured. (K2)
3. Understand the concept of use case testing and its benefits.
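As an illustration of objectives 1a and 1b, the sketch below derives partition representatives and boundary values for a field assumed to accept integers from 1 to 100; the range and the `accepts` stub are invented for the example.

```python
# Equivalence partitioning and boundary value analysis for an input
# field assumed to accept integers in [LO, HI].
LO, HI = 1, 100

# Equivalence partitioning: one representative value per partition.
partitions = {
    "below-range (invalid)": LO - 5,
    "in-range (valid)": 50,
    "above-range (invalid)": HI + 5,
}

# Boundary value analysis: values at and just beyond each boundary.
boundaries = [LO - 1, LO, LO + 1, HI - 1, HI, HI + 1]

def accepts(x):
    """System under test (stub): accepts integers in [LO, HI]."""
    return LO <= x <= HI

for label, v in partitions.items():
    print(f"{label:>22}: input {v:>4} -> accepted={accepts(v)}")
for v in boundaries:
    print(f"{'boundary':>22}: input {v:>4} -> accepted={accepts(v)}")
```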
Specification Based or Black Box Techniques (RakhesLeoPutra)
This document defines and describes several specification-based black-box testing techniques:
1) Equivalence partitioning divides conditions into groups that should be handled equivalently, and tests one condition from each group.
2) Decision tables aid in systematically selecting test cases to test combinations of inputs and states.
3) State transition testing models systems whose outputs depend on prior states, using state diagrams (a code sketch follows this list).
4) Use case testing exercises end-to-end transactions by deriving tests from descriptions of how actors use the system.
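As a small illustration of technique 3), here is a sketch of state transition testing over an invented ATM-style state table; the states, events, and `run` helper are hypothetical, not from the document.

```python
# State transition testing: walk event sequences through a state table
# and check the resulting state, including a negative (invalid) case.
TABLE = {
    ("idle", "insert_card"): "wait_pin",
    ("wait_pin", "good_pin"): "menu",
    ("wait_pin", "bad_pin"): "wait_pin",
    ("menu", "eject"): "idle",
}

def run(events, state="idle"):
    for e in events:
        key = (state, e)
        if key not in TABLE:
            return "REJECTED"     # invalid event for this state
        state = TABLE[key]
    return state

# 0-switch coverage: every single transition exercised at least once.
tests = [
    (["insert_card"], "wait_pin"),
    (["insert_card", "bad_pin"], "wait_pin"),
    (["insert_card", "good_pin"], "menu"),
    (["insert_card", "good_pin", "eject"], "idle"),
    (["good_pin"], "REJECTED"),   # negative test: event in wrong state
]
for events, expected in tests:
    assert run(events) == expected
print("all state-transition tests passed")
```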
A model for run time software architecture adaptation (ijseajournal)
Since global demand for software systems is increasing and their environments change constantly, the adaptability of software systems is of significant importance. Because the architecture of a software system is a high-level view of the system and makes modification possible at an overall level, changing the architecture configuration is an effective approach to adapting software systems. In this study, the architecture configuration is modified through xADL, a highly flexible software architecture description language. Software architecture reconfiguration is driven by the rules of a rule-based system, written with respect to three strategies: load balancing, fixed bandwidth, and fixed latency. The proposed model is simulated on samples of a client-server system, a video conferencing system, and a students' grading system. The proposed model can be used with all types of architecture, including Client-Server Architecture and Service-Oriented Architecture.
MAINTENANCE POLICY AND ITS IMPACT ON THE PERFORMABILITY EVALUATION OF EFT SYS... (IJCSEA Journal)
In Electronic Funds Transfer (EFT) systems, faults can severely degrade performance, so modeling the performance of an EFT system without considering dependability aspects can produce inaccurate results. This paper presents a stochastic model for evaluating the performance of the processing and storage infrastructures of the EFT system, along with a model for evaluating the effects of the proposed preventive maintenance policy and of different service level agreements (SLAs) on the dependability of the EFT system infrastructure. The paper then combines both models (dependability and performance) to evaluate the impact of dependability issues on EFT system performance. Finally, case studies considering EFT system infrastructures demonstrate the applicability of the adopted approach, and their results are presented, stressing aspects of dependability and performance important for EFT system planning.
Software Cost Estimation Using Clustering and Ranking Scheme (Editor IJMTER)
Software cost estimation is an important task in the software design and development process. Planning and budgeting are carried out with reference to the estimated cost values, and a variety of software properties, including hardware, product, technology, and methodology factors, feed into the estimation process. The quality of a software cost estimate is measured by its accuracy.

Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models, and machine learning models. Each category comprises a set of estimation techniques; eleven cost estimation techniques across the three categories are used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product property values and serves as the main input to the system.

The proposed system is designed to cluster and rank software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism, improving the accuracy of the clustering and ranking process and producing efficient rankings of software cost estimation methods.
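A minimal sketch of such a clustering-and-ranking pipeline, assuming methods are clustered by a single accuracy score (MMRE, where lower is better) with a simple 1-D k-means; the scores and method names are invented, and the paper's optimal centroid mechanism is not reproduced here.

```python
# Cluster estimation methods by accuracy score, then rank within clusters.
mmre = {"OLS": 0.42, "Stepwise": 0.39, "Analogy-1NN": 0.31,
        "Analogy-3NN": 0.29, "CART": 0.35, "MLP": 0.27, "SVR": 0.25}

def kmeans_1d(points, k=3, iters=20):
    names = list(points)
    xs = [points[n] for n in names]
    lo, hi = min(xs), max(xs)
    # Spread initial centroids evenly across the score range.
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for n in names:
            c = min(range(k), key=lambda i: abs(points[n] - centroids[i]))
            clusters[c].append(n)
        centroids = [sum(points[n] for n in cl) / len(cl) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return clusters

for cluster in kmeans_1d(mmre):
    ranked = sorted(cluster, key=mmre.get)  # best (lowest MMRE) first
    if ranked:
        print(ranked)
```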
Specification based or black box techniques (Irvan Febry)
- The document discusses four specification-based black-box testing techniques: equivalence partitioning, boundary value analysis, decision tables, and state transition testing.
- It provides details on each technique, including how equivalence partitioning divides test conditions into groups that should be treated equivalently, how decision tables deal with combinations of inputs and conditions, and how state transition testing is used for systems that can be described as finite state machines.
- The document also briefly discusses use case testing and how use cases describe interactions between actors and the system to achieve tasks from start to finish.
Specification based or black box techniques 3 (alex swandi)
Alex Swandi
Undergraduate (S1) Information Systems Program
Faculty of Science and Technology
Universitas Islam Negeri Sultan Syarif Kasim Riau
The document discusses four specification-based black-box testing techniques: equivalence partitioning, boundary value analysis, decision tables, and state transition testing. It provides definitions and explanations of each technique. For example, it explains that equivalence partitioning involves dividing test conditions into groups that should be handled equivalently by the system, and then testing one condition from each group. It also discusses use case testing and how use cases can help uncover integration defects.
A software fault localization technique based on program mutations (Tao He)
The document describes a new software fault localization technique called Muffler that uses program mutation analysis. Muffler aims to address the problem of coincidental correctness in existing coverage-based fault localization methods. It combines the Naish ranking function with a new metric called mutation impact, which measures the average number of test results that change from passed to failed when each statement is mutated. An empirical evaluation on seven programs shows that Muffler reduces the average code examination needed to find faults by 50% compared to the Naish technique.
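The summary above describes mutation impact as the average number of passed-to-failed flips per statement; a minimal sketch of just that metric follows, with invented outcome data (`baseline` and `results` are hypothetical), and the combination with the Naish score left out for brevity.

```python
# Mutation impact per statement: mutate the statement, count how many
# previously passing tests now fail, and average over its mutants.
baseline = [True, True, True, False, True]  # True = test passed originally

# results[stmt] = per-mutant lists of test outcomes after mutation
results = {
    "s1": [[True, False, True, False, True],    # mutant 1 of s1
           [False, False, True, False, True]],  # mutant 2 of s1
    "s2": [[True, True, True, False, True]],    # mutant with no effect
}

def mutation_impact(stmt):
    impacts = []
    for outcome in results[stmt]:
        flipped = sum(1 for before, after in zip(baseline, outcome)
                      if before and not after)  # passed -> failed flips
        impacts.append(flipped)
    return sum(impacts) / len(impacts)

for stmt in results:
    print(stmt, "impact:", mutation_impact(stmt))
# Higher impact marks statements that strongly influence passing tests,
# which is how Muffler counteracts coincidental correctness.
```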
Lander shutdown in 1999). According to another report, the U.S. Department of Defense alone loses over four billion dollars a year due to software failures [10]. Testing activity consumes about 50% of software development resources, so any technique that reduces software testing effort is likely to have a positive effect on overall development cost.
To ensure correctness, developers write unit tests for particular sections of the code. Each time new functionality is added to the project, the new tests are run in addition to the old ones in order to check for regressions. As software systems grow in size, the size of the test suite also increases. Regression testing is the process of validating modified software to increase our confidence that the changed parts of the software behave as intended and that the unchanged parts have not been adversely affected by the modifications. It is vital to the correctness of the software, yet a developer cannot afford to wait long for a test suite to run.
There are two types of regression testing: code-based and specification-based. It has been shown that code-based testing and specification-based testing complement each other. Most regression testing techniques are code-based, i.e., they select test cases using the source code of the original and modified programs. There is limited research on specification-based regression testing techniques, and most of these select regression tests using only the modified system specification. Greater attention has been paid to regression test selection, where code-based techniques have been effective at unit-level testing. Model-based testing is one of the techniques that can be applied at the system level: when a system model is changed, one can apply model-based testing techniques to the modified model and partially test the system under test with respect to chosen requirements. However, the resulting test suites may be very large even for relatively small systems. Moreover, model-based testing techniques construct a test suite so as to fulfill some coverage criterion, which can further enlarge it.
In this paper, we present a novel model-based regression test minimization approach, the Dynamic Dependence Graph, which uses EFSM model dependence analysis to reduce a given regression test suite. The approach achieves better fault detection than the Static Dependence Graph approach because it considers all interaction patterns, rather than ignoring repetitions of the same dependencies between transitions that occur when the model is traversed iteratively. Our initial experience shows that this approach may significantly reduce the size of regression test suites while also improving fault detection capability.
In the next section, we present related work. Section 3 presents test suite minimization using Dynamic Interaction Patterns and motivates the need for the method. Section 4 presents the procedure for EFSM model specification and discusses the dynamic interaction patterns that identify the reduced test cases providing the improved FDE and a minimized cost for executing the deliverables. Section 5 presents the experimental study and the empirical results that evaluate the proposed method. Finally, Section 6 presents conclusions and discusses future work.
2. Related Work
In the literature, almost all approaches to test case generation consider how to avoid generating redundant test cases [3, 18]. On the other hand, much effort has also gone into reducing the size of a previously acquired test suite while maintaining its effectiveness. Typical test suite minimization (also called test suite reduction) techniques include heuristic approaches [14, 16], a genetic-algorithm-based approach [12], and approaches based on integer linear programming (ILP) [6]. In [17], requirement-based test suites are reduced using EFSM dependence analysis. Requirement-based automated test case generation is a model-based technique for generating test suites related to individual requirements. Such techniques may significantly reduce the number of test cases with respect to a requirement under test, as opposed to complete system testing; however, the number of test cases may still be very large, especially for large systems. Different types of dependencies are identified between the elements of the EFSM system model. These dependencies capture potential
interactions between the elements of the model and are used to determine parts of the model that affect
a requirement under test. This information is used to reduce the test suite by identifying repetitive tests,
i.e., tests that exhibit the same pattern of interactions with respect to the requirement under test. The work presented in [7] uses EFSM model dependence analysis to reduce regression test suites for model-based regression testing. Model-based testing is a system testing technique used to test software systems modeled by formal description languages, e.g., an Extended Finite State Machine (EFSM). System models are frequently changed because of specification changes, and selective test generation techniques are used to test the modified parts of the model; however, the size of regression test suites may still be very large. The approach in [7] automatically identifies the difference between the original model and the modified model as a set of elementary modifications (EMs). For each EM, regression test minimization strategies are used to reduce the regression test suite based on EFSM dependence analysis.
In [19], interaction patterns on the EFSM are identified for each type of EM, i.e., adding, deleting, and changing transitions in the EFSM. These interaction patterns capture the effects of the model on the EMs, the effects of the EMs on the model, and the side effects of the EMs. The method proposed in [20] considers an SDL model representing the requirements of a system under test and a set of modifications on this model; it applies dependence analysis to identify interaction patterns related to each type of modification, i.e., adding, deleting, and changing transitions in the SDL model, and reduces the size of a given regression test suite by examining the interaction patterns covered by each test case in the suite. The work in [13] presents tool support and evaluation for a state-based selective regression testing methodology for evolving state-based systems. START is an Eclipse-based tool for state-based regression testing compliant with UML 2.1 semantics; it relates the dependencies of state machines to class diagrams in order to handle change propagation.
3. Test Suite Minimization Using Dynamic Interaction Patterns
Our proposed method reduces the size of a given regression test suite (RTS) by examining the interaction patterns covered by each test case in the suite. Our work differs from that of Vaysburg et al. [17], Korel et al. [7], and Yanping et al. [19] in its fault detection efficiency. Our approach to test suite minimization using dynamic interaction patterns identifies the reduced test cases that provide the best coverage of the requirements and a minimized cost for executing the deliverables. We use the concept of dynamic EFSM dependence: the data and control dependencies are identified for each test case, the Dynamic Dependence Graph is constructed, and the test cases are reduced by eliminating redundancies. The remaining test cases are then prioritized based on the most efficient order of execution.
3.1. Motivational Example
We initially experimented with programs developed by students as academic projects, such as a global banking system. We use this program as a running example throughout the paper to motivate our idea of test suite minimization using dynamic interaction patterns, which identifies reduced test cases with improved FDE that provide a minimized cost for executing the deliverables.
3.1.1. Modular Design
Modular software design refers to a design strategy in which a system is composed of relatively small, autonomous routines that fit together. The basic idea is to organize a complex system as a set of distinct components that can be developed independently and then plugged together. The proposed system follows this strategy: each module is developed separately and then combined. There are four modules. The first is Application Specification Selection, the second is EFSM Model Generation, the third is Test Case Identification, and the fourth is Test Case Reduction using the dynamic dependency approach on the EFSM model.
(i). Application Specification Selection Module
The desired application specification is selected for which test cases are to be generated. From the
selected application specification the properties are extracted and positioned in the property list.
Considering the global banking application, there are two modules whose properties and specifications
are listed as follows.
Administrator module:
• View pending A/C opening applications
• View details of a particular account
• Edit press releases
• Edit jobs listing
• Change user name and password

Account Holder module:
• View balance
• Transfer funds
• View previous transaction details
• Change user name and password
• Request help
(ii). EFSM Model Generation
In the EFSM Model Generation module, an EFSM model is generated from the application requirements and specifications listed above.
(iii). Test Case Identification
In Test Case Identification, the various possible test cases are identified from the application by traversing the model along all possible paths.
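For illustration, this identification step can be sketched as a bounded path enumeration from the start state to the exit state. The following Python sketch is ours, not the authors' implementation; it uses a toy fragment of the model (only t1, t2, and t15) and caps how often a transition may repeat so that loops such as the retry transition yield finitely many paths.

    # Enumerate transition paths from start to exit, bounding repetitions
    # so that loops (e.g., the retry transition t2) terminate.
    def enumerate_tests(out_edges, start, exit_state, max_repeat=3):
        tests = []
        def walk(state, path, counts):
            if state == exit_state:
                tests.append(path)
                return
            for (t, nxt) in out_edges.get(state, []):
                if counts.get(t, 0) < max_repeat:
                    walk(nxt, path + [t], {**counts, t: counts.get(t, 0) + 1})
        walk(start, [], {})
        return tests

    # Toy fragment: t1 reaches the login state S1, t2 retries, t15 exits.
    out_edges = {"start": [("t1", "S1")],
                 "S1": [("t2", "S1"), ("t15", "exit")]}
    for test in enumerate_tests(out_edges, "start", "exit"):
        print(test)   # ['t1', 't15'], ['t1', 't2', 't15'], ...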
(iv). Test Case Reduction Using Dynamic Dependencies
The test suite obtained might contain redundancies. This module removes the redundant test cases by applying the dynamic dependency approach to the EFSM model.
4. Procedure for EFSM Model Specification
Fig. 1 shows the Procedure for Test suite minimization by interaction pattern identification.
Figure 1: Procedure for Test suite minimization by interaction pattern identification.
Step 1: Identify the Test cases
Step 2: Find Data and Control Dependencies for the Test cases from the EFSM model
Step 3: Construct the Dynamic Dependency Graphs
Step 4: Identify the Interaction Patterns
Step 5: Compare all the Interaction Patterns and remove the Redundant Patterns
Step 6: Retain the remaining Interaction Patterns to form a Reduced Test Suite
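Steps 4 through 6 amount to keeping one representative test per distinct interaction pattern. The minimal Python sketch below shows that dedup step; the pattern_of argument stands in for Steps 2-4 (dependence analysis and pattern extraction), and the stand-in used in the demo (the set of traversed transitions) is an illustrative assumption, not the authors' pattern definition.

    # Keep one test per distinct interaction pattern (Steps 4-6).
    def reduce_suite(suite, pattern_of):
        seen, reduced = set(), []
        for test in suite:
            p = pattern_of(tuple(test))
            if p not in seen:          # Step 5: redundant pattern -> discard
                seen.add(p)
                reduced.append(test)   # Step 6: retain a representative
        return reduced

    suite = [["t1", "t3", "t5", "t14"], ["t1", "t3", "t5", "t14", "t16"]]
    print(reduce_suite(suite, pattern_of=frozenset))  # both kept: t16 differs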
4.1. The EFSM Model
The model-based regression testing techniques of [7] use only a modified system model, in which the modified elements (states and transitions) are tested using selective test generation techniques, i.e., each regression test case contains a modified model element. In this section we present an approach to model-based regression testing that can be used for any modification of the EFSM system model. An EFSM is a 5-tuple <S, I, O, V, T> where
• S is a nonempty finite set of states with two states designated as Start and Exit states of the
EFSM.
• I is a nonempty finite set of input interactions, each with a (possibly empty) set of input
interaction parameters.
• O is a nonempty finite set of output interactions, each with a (possibly empty) set of output
interaction parameters.
• V is the nonempty finite set of all variables, which is the union of the set of all local variables and the set of all interaction parameters.
• T is a nonempty finite set of transitions.
An EFSM model consists of states and the transitions between them. There are start (initial) and exit (final) nodes, and every other node lies on a path between them. A transition is triggered when an event occurs and the condition associated with the event is satisfied. When a transition is triggered, actions may be performed that read input, manipulate variables, or produce output. EFSM models are graphically represented with states as nodes and transitions as directed edges between states. A transition has the following elements:
1) an event,
2) a condition, and
3) a sequence of actions.
Figure 2: EFSM Transition
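To make the transition structure of Fig. 2 concrete, the sketch below shows one possible encoding of a transition with its event, guard condition, and action sequence. The representation and the retry example are our illustration, modeled on transition t2 of the banking system described below; they are not taken from the authors' tooling.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    Env = Dict[str, int]   # current valuation of the EFSM variables V

    @dataclass
    class Transition:
        name: str
        source: str
        target: str
        event: str
        condition: Callable[[Env], bool]            # guard over variables
        actions: List[Callable[[Env], None]] = field(default_factory=list)

        def fire(self, env: Env) -> bool:
            """Run the actions iff the event's guard condition holds."""
            if not self.condition(env):
                return False
            for act in self.actions:
                act(env)
            return True

    # A t2-style retry transition: on an invalid login, increment att.
    t2 = Transition("t2", "S1", "S1", "Retry",
                    condition=lambda env: env["att"] <= 3,
                    actions=[lambda env: env.update(att=env["att"] + 1)])
    env = {"att": 0}
    t2.fire(env)
    print(env["att"])   # 1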
A simplified EFSM model of a global banking system is shown in Fig. 3. For detailed information on the generation of the EFSM, refer to Figs. 3.A, 3.B, and 3.C in the Appendix.
Figure 3: The EFSM Model for a Global Banking System
The banking system supports two types of login, administrator and account holder. The user must choose the login type and then enter a user name and password, which are verified against those stored in the bank database. The user is allowed a maximum of three attempts to enter a valid user name and password. Once logged in, users can perform the operations listed in Fig. 3. For example, transition t2 is triggered if the user enters an invalid user name or password, in which case the value of the variable att is incremented by one. Similarly, various transitions are invoked as users perform various operations. Since transitions represent the active elements of the EFSM model, we concentrate on modifications to transitions rather than to states. In an EFSM model, data and control dependences may exist between transitions; these dependencies are identified using dependence analysis.
• Data Dependence [17] captures the notion that one transition defines a value for a variable and the same or some other transition may potentially use this value. For example, in Fig. 4 there is a data dependence between transitions t1 and t2 because transition t1 assigns a value to the variable att and transition t2 uses that variable.
• Control Dependence in the EFSM [17] exists between transitions and captures the notion that one transition may affect the traversal of another transition. Control dependence between transitions can be defined in terms of post-dominance. Let Y and Z be two states (nodes) and t be an outgoing transition (arc) from Y. State Z post-dominates state Y iff Z is on every path from Y to the exit state; state Z post-dominates transition t iff Z is on every path from Y to the exit state through transition t. For example, in Fig. 4, transition t3 has a control dependence on transition t5 because state S2 does not post-dominate state S1 and state S2 post-dominates transition t4.
Figure 4: Data and Control Dependence
• Static Dependence Graph: The Static Dependence Graph (SDG) graphically represents the Data Dependencies (DDs) and Control Dependencies (CDs) in an EFSM (Fig. 5). The nodes represent the transitions and the directed arcs represent the data and control dependencies.
Figure 5: Static Dependence Graph
Figure 6: Dynamic Dependence Graph
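Since control dependence rests on post-dominance, a small sketch of the underlying computation may help. It uses the textbook iterative equations, pdom(exit) = {exit} and pdom(s) = {s} ∪ ⋂ pdom over the successors of s, applied to a made-up two-state graph, not to the model of Fig. 4.

    # Z post-dominates Y iff Z is on every path from Y to the exit state.
    def post_dominators(succ, exit_state):
        states = set(succ) | {exit_state}
        for targets in succ.values():
            states |= set(targets)
        pdom = {s: set(states) for s in states}   # optimistic start
        pdom[exit_state] = {exit_state}
        changed = True
        while changed:
            changed = False
            for s in states - {exit_state}:
                new = {s} | set.intersection(*(pdom[t] for t in succ[s]))
                if new != pdom[s]:
                    pdom[s], changed = new, True
        return pdom

    # Hypothetical states: S1 may go to S2 or straight to exit.
    succ = {"S1": ["S2", "exit"], "S2": ["S2", "exit"]}
    print(post_dominators(succ, "exit")["S1"])   # {'S1', 'exit'}: S2 absent,
                                                 # so S2 does not post-dominate S1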
Test minimization using static interaction patterns is appropriate in the initial stages of testing, when a relatively small number of "high quality" tests are to be used. However, it ignores repetitions of the same dependencies (interactions) between transitions. We therefore present test minimization using more sophisticated interaction patterns that take repetition of the same interactions into account. This leads to our approach of Dynamic Interaction Patterns.
4.2. Dynamic EFSM Dependencies
Our approach is, in principle, similar to the approach described in the previous section, except that during the traversal of a test (sequence) each traversed transition is represented as a separate node in the dependence graph. We refer to this graph as the Dynamic EFSM Dependence Graph. The approach works as follows. Given a test (a sequence of transitions), each transition traversed is represented as a node in the dynamic EFSM dependence graph, and each identified data or control dependence is represented by an arc between the corresponding transitions. In the next step, all the dependencies in the graph that influence the transition(s) under test are identified by traversing backwards from the transition(s) under test and marking all traversed dependencies; all unmarked dependencies are then removed from the dynamic dependence graph. The resulting dynamic EFSM dependence sub-graph is referred to as a Dynamic Interaction Pattern, in which data and control dependencies represent the interactions between transitions. An example of a Dynamic Dependence Graph is shown in Fig. 6.
Test 1: t1 t2 t2 t2 t3 t5 t14 t6 t14
Test 2: t1 t2 t2 t2 t2 t3 t5 t14 t6 t14
Figure 7: Static Dependence Graph for the above tests
Figure 8: Dynamic Dependence Graph for the above tests (one panel per test)
Fig. 7 shows the static dependence graph for the tests Test 1 and Test 2. Both tests produce the same interaction pattern even though the test sequences differ, so one of them is removed under the Static Dependence Graph (SDG) approach. Fig. 8 shows the dynamic dependence graph for the same tests: here the interaction patterns differ from each other and hence neither test should be discarded. This is the weakness of the SDG, and it is overcome by the DDG. Thus the dynamic dependence graph yields improved fault detection ability.
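This distinction can be reproduced in a few lines of Python. The def/use sets below follow the banking example (t1 defines att, t2 uses and redefines it) plus the assumption that t3's guard att>3 reads att; everything else is simplified away. Collapsing the occurrence indices recovers the static pattern, while keeping them preserves the dynamic one.

    defs = {"t1": {"att"}, "t2": {"att"}}
    uses = {"t2": {"att"}, "t3": {"att"}}   # t3's guard att>3 (assumed)

    def dyn_edges(test):
        """Occurrence-indexed data-dependence edges along one test."""
        last_def, edges = {}, []
        for i, t in enumerate(test):
            for v in uses.get(t, ()):
                if v in last_def:
                    edges.append((last_def[v], (i, t), v))
            for v in defs.get(t, ()):
                last_def[v] = (i, t)
        return edges

    test1 = ["t1", "t2", "t2", "t2", "t3", "t5", "t14", "t6", "t14"]
    test2 = ["t1", "t2", "t2", "t2", "t2", "t3", "t5", "t14", "t6", "t14"]

    static = lambda test: {(s[1], d[1], v) for s, d, v in dyn_edges(test)}
    print(static(test1) == static(test2))        # True: SDG drops one test
    print(dyn_edges(test1) == dyn_edges(test2))  # False: DDG keeps both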
4.2.1. Test Case Minimization Using Dynamic Dependencies
For the Global Banking application, 209 test cases were generated. Some test cases from the original test suite are considered below as examples to show how we reduce them (Fig. 9). A summary of the test cases for this application is given in the tables in the Appendix. Fig. 10 and Fig. 11 show the dynamic dependence graphs for the corresponding tests.
Figure 9: The representative test cases for the Global banking system
Test case A: t1 t2 t2 t4 t10 t14a
Test case B: t1 t2 t2 t4 t10 t14a t11
Test case C: t1 t2 t4 t12 t14a
Test case D: t1 t2 t4 t12 t14a t17
Test case E: t1 t2 t4 t11 t14a
Test case F: t1 t2 t3 t5 t14
Test case G: t1 t2 t3 t5 t14 t16
Test case H: t1 t2 t2 t3 t5 t14 t6 t14
Test case I : t1 t2 t2 t3 t5 t14 t6 t14 t16
Figure 10: Dynamic Dependence Graph for the above tests
9. 340 S. Selvakumar and N. Ramaraj
Figure 11: Dynamic Dependence Graph for the above tests
REDUCED TESTS: A C E F H
REDUNDANT TESTS: B D G I
The redundant test cases are removed and the reduced test cases are retained. Likewise, Dynamic Dependence Graphs were constructed for the entire test suite of 209 test cases.
4.2.2. Test Case Prioritization
Test case prioritization techniques schedule test cases for execution in an order that attempts to maximize some objective function. A variety of objective functions are applicable; one such function involves the rate of fault detection, a measure of how quickly faults are detected within the testing process. An improved rate of fault detection during regression testing provides faster feedback on the system under regression test and lets debuggers begin their work earlier than might otherwise be possible.
Test case prioritization can address a wide variety of objectives, including the following:
1) Testers may wish to increase the rate of fault detection, that is, the likelihood of revealing faults
earlier in a run of regression tests.
2) Testers may wish to increase the rate of detection of high-risk faults, locating those faults
earlier in the testing process.
3) Testers may wish to increase the likelihood of revealing regression errors related to specific
code changes earlier in the regression testing process.
4) Testers may wish to increase their coverage of coverable code in the system under test at a
faster rate.
5) Testers may wish to increase their confidence in the reliability of the system under test.
Optimal prioritization: To measure the effects of prioritization techniques on the rate of fault detection, our empirical study uses programs that contain known faults. For any test suite, it can then be determined which test cases expose which faults, and thus an optimal ordering of the test cases for maximizing that suite's rate of fault detection. Table 1 shows the reduced prioritized test suite.
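The paper does not name its rate-of-fault-detection measure; a common choice in the prioritization literature is APFD (Average Percentage of Faults Detected), sketched below with an invented fault matrix. This is an illustration of the idea, not necessarily the authors' exact metric.

    # APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n), where TF_i is
    # the position of the first test in the order that exposes fault i.
    def apfd(order, faults_of):
        faults = sorted(set().union(*faults_of.values()))
        n, m = len(order), len(faults)
        tf = [next(pos for pos, t in enumerate(order, 1) if f in faults_of[t])
              for f in faults]
        return 1 - sum(tf) / (n * m) + 1 / (2 * n)

    # Invented fault-exposure data for the reduced tests A, C, E, F, H.
    faults_of = {"A": {1, 2}, "C": {2}, "E": {3}, "F": {1, 2, 3}, "H": {4}}
    print(apfd(["F", "H", "A", "C", "E"], faults_of))   # 0.85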
Table 1: Reduced prioritized test suite
admin module    admin module iteration    account holder module    account holder module iteration
t1 t3 t5 t14 t16 t1 t3 t5 t14 t5 t14 t16 t1 t4 t9 t14a t17 t1 t4 t9 t14a t9 t14a t17
t1 t3 t6 t14 t16 t1 t3 t5 t14 t6 t14 t16 t1 t4 t10 t14a t17 t1 t4 t9 t14a t10 t14a t17
t1 t3 t7 t14 t16 t1 t3 t5 t14 t7 t14 t16 t1 t4 t11 t14a t17 t1 t4 t9 t14a t11 t14a t17
t1 t3 t8 t14 t16 t1 t3 t5 t14 t8 t14 t16 t1 t4 t12 t14a t17 t1 t4 t9 t14a t12 t14a t17
t1 t2 t2 t2 t3 t6 t14 t16 t1 t3 t6 t14 t5 t14 t16 t1 t4 t13 t14a t17 t1 t4 t9 t14a t13 t14a t17
t1 t2 t2 t2 t3 t7 t14 t16 t1 t3 t8 t14 t7 t14 t16 t1 t2 t4 t9 t14a t17 t1 t4 t10 t14a t9 t14a t17
t1 t2 t2 t2 t3 t8 t14 t16 t1 t3 t8 t14 t8 t14 t16 t1 t2 t2 t4 t10 t14a t17 t1 t4 t10 t14a t10 t14a t17
exit without any operation t1 t2 t3 t5 t14 t5 t14 t16 t1 t2 t2 t4 t11 t14a t17 t1 t4 t10 t14a t11 t14a t17
t1 t15 t1 t2 t3 t5 t14 t6 t14 t16 t1 t2 t2 t4 t12 t14a t17 t1 t4 t10 t14a t12 t14a t17
t1 t3 t16 t1 t2 t3 t5 t14 t7 t14 t16 t1 t2 t2 t4 t13 t14a t17 t1 t4 t10 t14a t13 t14a t17
t1 t4 t17 t1 t2 t3 t5 t14 t8 t14 t16 t1 t2 t2 t2 t4 t9 t14a t17 t1 t4 t13 t14a t9 t14a t17
t1 t2 t3 t16 t1 t2 t3 t6 t14 t5 t14 t16 t1 t2 t2 t2 t4 t10 t14a t17 t1 t4 t13 t14a t10 t14a t17
t1 t2 t4 t17 t1 t2 t3 t6 t14 t6 t14 t16 t1 t2 t2 t2 t4 t12 t14a t17 t1 t2 t4 t10 t14a t13 t14a t17
t1 t2 t2 t3 t16 t1 t2 t3 t6 t14 t7 t14 t16 t1 t2 t2 t2 t4 t13 t14a t17 t1 t2 t4 t11 t14a t9 t14a t17
t1 t2 t2 t4 t17 t1 t2 t3 t8 t14 t7 t14 t16 t1 t2 t4 t12 t14a t13 t14a t17
t1 t2 t2 t2 t3 t16 t1 t2 t3 t8 t14 t8 t14 t16 t1 t2 t4 t13 t14a t9 t14a t17
t1 t2 t2 t2 t4 t17 t1 t2 t2 t3 t5 t14 t5 t14 t16 t1 t2 t4 t13 t14a t10 t14a t17
t1 t2 t2 t3 t5 t14 t6 t14 t16 t1 t2 t4 t13 t14a t11 t14a t17
t1 t2 t2 t3 t5 t14 t7 t14 t16 t1 t2 t4 t13 t14a t12 t14a t17
t1 t2 t2 t3 t5 t14 t8 t14 t16 t1 t2 t4 t13 t14a t13 t14a t17
t1 t2 t2 t3 t6 t14 t5 t14 t16 t1 t2 t2 t4 t10 t14a t9 t14a t17
t1 t2 t2 t3 t6 t14 t6 t14 t16 t1 t2 t2 t4 t10 t14a t10 t14a t17
t1 t2 t2 t3 t6 t14 t7 t14 t16 t1 t2 t2 t4 t10 t14a t11 t14a t17
t1 t2 t2 t3 t6 t14 t8 t14 t16 t1 t2 t2 t4 t10 t14a t12 t14a t17
t1 t2 t2 t3 t7 t14 t5 t14 t16 t1 t2 t2 t4 t10 t14a t13 t14a t17
t1 t2 t2 t3 t7 t14 t6 t14 t16 t1 t2 t2 t4 t11 t14a t10 t14a t17
t1 t2 t2 t2 t3 t8 t14 t5 t14 t16 t1 t2 t2 t4 t11 t14a t11 t14a t17
t1 t2 t2 t4 t11 t14a t12 t14a t17
t1 t2 t2 t4 t11 t14a t13 t14a t17
t1 t2 t2 t4 t12 t14a t9 t14a t17
t1 t2 t2 t4 t12 t14a t10 t14a t17
t1 t2 t2 t4 t12 t14a t11 t14a t17
t1 t2 t2 t2 t4 t12 t14a t9 t14a t17
t1 t2 t2 t2 t4 t12 t14a t10 t14a t17
5. Experimental Study
5.1. Subject Programs, Faulty Versions, and Test Case Pools
An experimental setup similar to that used by Rothermel et al. [4] and Jeffrey [2] was followed. The Siemens programs described in Table 2 were used as the subject programs. All programs, faulty versions, and test pools used in our experiments were assembled [8, 9, 1] by researchers from Siemens Corporation, and were obtained from [1]. The types of errors introduced in the faulty versions of each subject program were examined, and six distinct categories of seeded errors were identified: (1) changing the operator in an expression, (2) changing an operand in an expression, (3) changing the value of a constant, (4) removing code, (5) adding code, and (6) changing the logical behavior of the code (usually involving several of the other error categories simultaneously in one faulty version). We also experimented with other objects retrieved from the software infrastructure repository [1] and with the programs developed by students as academic projects, described in Table 3 and Table 4.
Environment setting: Pentium IV 2.8 GHz, 512 MB RAM, Windows XP operating system.
Table 2: Siemens suite subject programs
Name Lines of code Faulty version count Test pool size Program Description
tcas 162 41 1608 Altitude separation
totinfo 346 23 1052 Information measure
schedule 299 9 2650 Priority scheduler
schedule2 287 10 2710 Priority scheduler
printtokens 378 7 4130 Lexical analyzer
printtokens2 366 10 4115 Lexical analyzer
replace 514 32 5542 Pattern replacement
Space 9127 38 13,585 Array definition language interpreter
Table 3: Academic subject programs
Name Lines of code No. of Classes
Triangle 123 2
Sample 66 1
Average 131 1
Greatest number 186 1
Gcd 142 2
Table 4: Objects from Software Infrastructure Repository [1]
Name Lines of code No. of Classes
Binary-Search-Tree 130 3
Array-Partition 13 1
Doubly-Linked-List 277 1
Sorting 130 1
Vector 254 1
Binary-Heap 72 2
Disjoint-Set 35 1
Stack 114 5
Elevator 934 12
OrdSet 229 2
deadlock 24 4
accountsubtype 89 6
account 66 3
Producer-consumer 99 8
Alarm-clock 125 6
linkedlist 121 5
5.2. Measures
In this paper, the following criteria are used to judge the performance of the proposed approach.
(i) The percentage of suite size reduction (SSR) is defined as

    SSR = (|T| − |Tred|) / |T| × 100%

where |T| is the number of test cases in the original suite and |Tred| is the number of test cases in the minimized/reduced suite.
A higher SSR means a better reduction capability.
(ii) The percentage of fault detection effectiveness loss (FDE Loss) is defined as

    FDE Loss = (|F| − |Fred|) / |F| × 100%

where |F| is the number of distinct faults exposed by the original suite and |Fred| is the number of distinct faults exposed by the minimized/reduced suite.
For the subject programs, the fault-exposing information of each test case is provided. Several test cases of a suite may expose the same faults, but a fault exposed by more than one test case of a suite is counted only once. The closer the FDE Loss is to zero, the better the fault-revealing capability.
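Both measures are straightforward to compute. The sketch below applies SSR to the Table 5 totals (209 original tests versus 97 DDG-reduced tests) and uses invented fault counts for the FDE Loss example.

    def ssr(T, T_red):
        """Percentage suite size reduction; higher is better."""
        return (T - T_red) / T * 100.0

    def fde_loss(F, F_red):
        """Percentage FDE loss; closer to zero is better."""
        return (F - F_red) / F * 100.0

    print(ssr(209, 97))       # ~53.6% of the suite removed (DDG, Table 5)
    print(fde_loss(100, 84))  # 16.0, with invented fault counts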
5.3. Experiment SDG versus DDG
The results for this experiment are shown in the columns labeled SDG and DDG in Table 5.
Table 5: Experimental Results for Global Banking Application
Size of the original test suite    Reduced suite size (SDG)    Reduced suite size (DDG)    Fault detection ability % (SDG)    Fault detection ability % (DDG)
10 6 6 60 60
20 11 11 55 55
40 23 24 62 64
60 31 34 65 68
80 38 43 49 62
100 49 56 53 57
120 53 62 56 63
140 59 69 68 74
160 64 74 73 86
180 71 82 81 89
200 80 93 74 77
209 85 97 71 84
Figure 12: Test Suite Minimization for Global Banking Application (plot of the original, SDG-reduced, and DDG-reduced suite sizes over the 12 iterations)
Figure 13: Fault Detection Ability for Global Banking Application (plot of the original suite size and the SDG and DDG fault detection abilities over the 12 iterations)
The results in Fig. 12 and Fig. 13 show that the reduced test suite produced by the DDG approach is slightly larger than that of the SDG, but exhibits an improved fault detection ability.
5.4. Comparison with Random, Greedy, Heuristic, Delgreedy, 2-Optimal, and SDA
This section summarizes some existing test suite minimization methods, viz. Random, Greedy, Heuristic, Delgreedy, 2-Optimal, and SDA, alongside our DDA. Fig. 14 shows the sizes of the representative sets produced by these algorithms.
Random
• All the tests that satisfy the whole set of requirements are considered.
• From these, a reduced test suite that satisfies more requirements is chosen randomly.
Greedy Algorithm
• Tests covering more requirements than other tests are desired.
• Choose the tests that cover the most requirements.
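As an illustration of this greedy choice, the following sketch repeatedly keeps the test covering the most still-uncovered requirements; the coverage data is made up.

    def greedy_reduce(covers):
        """covers: test name -> set of requirements it satisfies."""
        uncovered = set().union(*covers.values())
        kept = []
        while uncovered:
            # pick the test covering the most still-uncovered requirements
            best = max(covers, key=lambda t: len(covers[t] & uncovered))
            kept.append(best)
            uncovered -= covers[best]
        return kept

    covers = {"T1": {"r1", "r2"}, "T2": {"r2", "r3"}, "T3": {"r3"}}
    print(greedy_reduce(covers))   # ['T1', 'T2']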
Heuristic Algorithm
• Every requirement must be covered in order to maintain coverage.
• For requirements that are covered by the fewest tests, those tests have a high probability of being chosen.
• Tests that are more likely to be chosen are selected first, followed by the less likely ones.
Delgreedy Algorithm
• A test whose set of covered requirements is a subset of another test's set of covered requirements need not be considered.
• A requirement whose set of covering tests is a superset of another requirement's set of covering tests need not be considered.
• If a requirement is covered by only one test, that test must be chosen.
• If no requirement is covered by only one test, a test is chosen greedily.
2-Optimal Algorithm
• 2-Optimal is a step towards brute-force search.
• Every pair of tests is compared with every other pair.
• The approach generalizes to the k-way case.
Static Dependence Analysis
Nodes represent EFSM transitions and directed edges represent DDs and CDs. Let D and C be the sets of all DDs and CDs in an EFSM, respectively. That is, D = {(tj, tk, v) | (tj, tk, v) is a DD from tj to tk w.r.t. v} and C = {(tj, tk) | (tj, tk) is a CD from tj to tk}. The SDG of a given EFSM is constructed as a directed graph G(N, E) as follows, where tj, tk ∈ T and v ∈ V of the EFSM:

    E ← ∅; N ← {ni | one node ni for each ti ∈ T}
    For each (tj, tk) ∈ C, E ← E ∪ {a dashed edge from tj to tk}.
    For each (tj, tk, v) ∈ D, E ← E ∪ {a solid edge from tj to tk}.
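This construction translates almost line by line into code. In the sketch below the dashed/solid distinction is encoded as an edge label, and the example dependence sets are hypothetical.

    def build_sdg(transitions, D, C):
        """D: set of (tj, tk, v) data deps; C: set of (tj, tk) control deps."""
        nodes = set(transitions)               # N <- one node per transition
        edges = set()                          # E <- empty
        for (tj, tk) in C:
            edges.add((tj, tk, "control"))     # dashed edge in the figures
        for (tj, tk, v) in D:
            edges.add((tj, tk, "data:" + v))   # solid edge, labelled with v
        return nodes, edges

    D = {("t1", "t2", "att"), ("t2", "t2", "att")}   # hypothetical DDs
    C = {("t3", "t5")}                               # hypothetical CD
    print(build_sdg(["t1", "t2", "t3", "t5"], D, C))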
Figure 14: Size of representative sets for Random, Greedy, Heuristic, Delgreedy, 2-Optimal, SDA, and DDA (bar chart of original suite size, reduced suite size, and redundancy per method)
5.5. Experiments Using DDA to Reduce Test Suites Generated from Specifications of SIR Programs
5.5.1. Experimental Results, Analysis, and Discussion
For each subject program, branch-coverage-adequate test suites were created for six suite ranges, named Br, Br+0.1, Br+0.2, Br+0.3, Br+0.4, and Br+0.5. For each range, X × LOC test cases were initially selected at random from the test pool and added to the test suite, where X is 0, 0.1, 0.2, 0.3, 0.4, or 0.5, respectively, and LOC is the number of lines of code of the program. Then, further randomly selected test cases were added to the suite, each only if it increased the cumulative branch coverage of the suite, until the suite became adequate with respect to branch coverage. In this way, the resulting test suites exhibit varying types and levels of redundancy. For each program, 1000 such branch-coverage-adequate test suites were created in each suite size range. To gather branch coverage information for the test cases, all programs were hand-instrumented. Both the SDG and the DDG approaches were applied to the generated suites with branch coverage as the testing criterion. The results of this experiment are shown in the columns labeled SDA and DDA in Table 6. The values in each row are averages over the 1000 suites in that range. In this table, |T| denotes the original suite size, |F| the number of faults exposed by the original suite, |Tred| the reduced suite size, |Fred| the number of faults exposed by the reduced suite, %Size Reduction the percentage suite size reduction, and %Fault Loss the percentage fault detection loss.
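The suite-construction procedure just described can be sketched as follows: seed with X × LOC randomly drawn tests, then keep drawing and retain a test only if it raises cumulative branch coverage, until the suite is branch adequate. The pool below is a toy stand-in for the Siemens test pools.

    import random

    def build_suite(pool, x, loc, all_branches, rng=random.Random(0)):
        """pool: test name -> set of branches that the test covers."""
        names = list(pool)
        # seed with X * LOC randomly chosen tests
        suite = rng.sample(names, min(int(x * loc), len(names)))
        covered = set().union(*(pool[t] for t in suite))
        while covered != all_branches:
            t = rng.choice(names)
            if not pool[t] <= covered:   # keep only coverage-increasing tests
                suite.append(t)
                covered |= pool[t]
        return suite

    pool = {"a": {1}, "b": {2}, "c": {1, 3}}
    print(build_suite(pool, x=0.1, loc=10, all_branches={1, 2, 3}))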
Table 6: Experimental Results for Experiments SDA and DDA
Program / suite size range    |T|    |F|    |Tred| (SDA, DDA)    |Fred| (SDA, DDA)    %Size Reduction (SDA, DDA)    %Fault Loss (SDA, DDA)
tcas Br 5.71 7.47 5.37 5.36 6.81 6.92 12.06 9.91 7.83 6.39
tcas Br+0.1 9.56 9.15 5.72 6.41 6.97 7.46 35.55 30.18 20.82 16.53
tcas Br+0.2 15.2 11.73 6.73 7.04 7 7.83 50.9 45.54 34.97 28.56
tcas Br+0.3 21.39 14.02 6.86 7.86 7.11 8.25 60.34 55.23 42.96 35.62
tcas Br+0.4 29.07 16.29 6.94 7.87 7.21 8.56 67.47 62.95 49.53 41.79
tcas Br+0.5 35.63 17.76 7.83 7.92 7.05 8.59 71.74 67.57 54.06 46.13
totinfo Br 7.3 12.49 6.41 6.38 11.83 11.87 24.7 23.06 5.08 4.77
Totinfo Br+0.1 18.68 14.62 6.63 5.61 12.43 12.63 63.26 60.04 14.13 12.85
totinfo Br+0.2 35.61 16.73 6.23 6.22 12.79 13.11 76.71 73.54 22.35 20.48
totinfo Br+0.3 52.07 17.7 6.21 6.19 13.01 13.19 81.99 79.21 25.09 24.05
totinfo Br+0.4 69.62 18.55 6.02 5.86 13.2 13.27 86.15 83.82 27.51 27.07
totinfo Br+0.5 87.73 19.16 6.73 5.54 13.18 13.15 88.67 86.62 30.04 30.15
sched Br 7.31 3.38 6.64 6.42 3.09 3.09 22.9 21.99 8.02 7.76
Sched Br+0.1 18.44 4.58 6.81 7.21 3.21 3.25 62.8 60.77 28.21 27.16
Sched Br+0.2 32.09 5.18 6.84 7.44 3.16 3.23 74.39 72.57 38.22 36.79
Sched Br+0.3 47.91 5.61 6.92 7.88 3.21 3.33 80.66 79.12 42.01 39.81
Sched Br+0.4 58.83 5.77 6.93 7.91 3.24 3.37 82.65 81.28 42.88 40.62
Sched Br+0.5 74.94 5.96 6.97 7.96 3.16 3.27 85.79 84.51 45.93 44.46
Sched2 Br 8.01 2.21 5.73 5.79 1.98 1.98 27.04 26.38 8.65 8.46
Sched2 Br+0.1 18.61 2.57 5.77 6.12 2.05 2.08 62.62 60.8 16.99 15.99
Sched2 Br+0.2 33.19 3.23 5.75 6.23 2.05 2.13 75.02 73.53 32.17 30.37
Sched2 Br+0.3 47.44 3.77 5.77 6.38 2.08 2.15 81.11 79.74 39.55 38.27
Sched2 Br+0.4 61.6 4.35 5.84 6.54 2.28 2.42 84.04 82.8 43.14 40.05
Sched2 Br+0.5 76.34 4.73 5.86 6.71 2.25 2.44 86.6 85.36 46.67 43.15
ptok Br 15.76 3.38 7.51 7.63 2.99 3.03 51.15 50.39 9.9 9.19
ptok Br+0.1 27.64 3.64 7.56 7.76 3.05 3.06 69.34 68.62 14.5 14.21
ptok Br+0.2 46.03 3.96 7.44 7.75 3.06 3.11 78.95 78.26 20.62 19.53
ptok Br+0.3 63.84 4.28 7.36 7.76 3.09 3.15 82.77 82.16 25.16 24.07
ptok Br+0.4 83.44 4.54 7.32 7.8 3.12 3.19 85.89 85.27 28.65 27.36
ptok Br+0.5 101.87 4.75 7.23 7.73 3.15 3.22 87.91 87.38 30.73 29.46
ptok2 Br 11.77 7.36 8.78 9.04 7.25 7.25 23.96 21.96 1.49 1.45
ptok2 Br+0.1 27.56 7.8 10.05 11.79 7.45 7.49 55.54 50.02 4.24 3.82
ptok2 Br+0.2 49.74 8.17 10.05 12.76 7.63 7.63 70.35 65.06 6.38 6.34
ptok2 Br+0.3 75.01 8.45 9.92 13.22 7.79 7.86 78.56 73.68 7.58 6.78
ptok2 Br+0.4 100.34 8.58 9.9 13.41 7.84 7.89 82.59 78.57 8.4 7.82
ptok2 Br+0.5 121.73 8.6 9.89 13.51 7.85 7.94 84.43 80.71 8.52 7.52
replace Br 18.63 11.13 14.53 14.92 10.32 10.42 21.5 19.43 7.11 6.2
replace Br+0.1 34.59 14.1 15.86 17.49 11.61 12 48.83 44.46 16.61 13.97
replace Br+0.2 56.67 16.8 16.31 19.13 12.52 13.12 63.14 58.45 23.9 20.49
replace Br+0.3 82.49 19.01 16.7 20.54 12.98 13.82 71.45 66.84 29.6 25.54
replace Br+0.4 105.06 19.96 16.8 21.27 13.28 14.11 74.96 70.63 31.27 27.34
replace Br+0.5 134.59 21.43 16.95 22.39 13.52 14.53 80.48 76.1 34.97 30.38
Space Br 154.75 31.12 126.41 127.12 30.88 30.89 18.27 17.81 0.77 0.74
Space Br+0.1 363.32 31.68 128.28 132.89 31.77 31.14 57.64 56.49 1.79 1.69
Space Br+0.2 650.35 32.09 127.25 135.84 31.07 31.1 72.58 71.33 3.15 3.03
Space Br+0.3 959.4 32.48 126.41 137.58 31.1 31.2 78.49 77.31 4.19 3.89
Space Br+0.4 1243.22 32.77 125.92 138.46 31.12 31.24 82.66 81.57 4.97 4.61
Space Br+0.5 1559.31 32.93 125.16 138.84 31.21 31.26 84.9 83.89 5.15 5
Fig. 15 and Fig. 16 depict the sizes of the representative sets generated by the test suite reduction techniques for the academic subject programs and for the programs retrieved from the software infrastructure repository, respectively. The horizontal axis denotes the subject programs, and the vertical axis denotes the size of the representative set generated by the dependence graphs. From Fig. 15 and Fig. 16, it can be seen that the DDA method significantly reduces the sizes of the test suites. Fig. 17 shows the fault detection loss of the reduced suites for the SDA and DDA experiments. For most ranges, the SDA experiment resulted in a higher percentage fault detection loss on average than the DDA experiment.
Figure 15: Sizes of the reduced test suite versus the sizes of the original test suite for Academic projects.
Figure 16: Sizes of the reduced test suite versus the sizes of the original test suite for SIR objects.
Figure 17: Number of faults detected by reduced test suites versus range X
5.5.2. Experiment SDA versus DDA
Both the SDA and the DDA techniques were applied to the generated suites with branch coverage as the testing criterion. The results show that both techniques reduce the suite to a certain extent, which indicates the effectiveness of the proposed system in identifying redundant test cases. The suite size reduction increases for larger suites, since a higher number of test cases gives the DDA more opportunities to select among them. The results also show that the average fault detection loss is improved for all programs except printtokens2. In addition, the amount of fault loss for the tcas and printtokens2 programs is relatively high compared with the other programs. For tcas, this may be due to the simplicity of the program: test cases that satisfy the same branches are treated as redundant, yet they may exercise unique execution paths with respect to other testing criteria, so using different or finer-grained criteria could significantly reduce the fault loss. For printtokens2, the fault loss is high simply because of the low number of faults.
To determine whether the difference in fault detection observed between the DDA and the SDA is statistically significant, a hypothesis test for the difference of two means was conducted [5]. The samples are the numbers of distinct faults exposed by each of the 1000 reduced test suites in suite size range Br+0.5, using the SDA technique and the proposed technique respectively. The null hypothesis is that there is no difference in the mean number of faults exposed by the two techniques. Table 7 shows the resulting p-values computed for the hypothesis test, along with the percentage confidence with which the null hypothesis may be rejected. The difference in the mean number of faults exposed by the SDA and the DDA is statistically significant.
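The paper does not spell out which difference-of-means test was used; the sketch below uses SciPy's standard two-sample t-test on made-up fault counts, simply to show the shape of such a check.

    from scipy.stats import ttest_ind

    # Distinct faults exposed per reduced suite (invented sample data).
    faults_sda = [6, 7, 7, 8, 6, 7]
    faults_dda = [8, 8, 9, 8, 7, 9]

    stat, p_value = ttest_ind(faults_dda, faults_sda)
    print(p_value)   # reject the null of equal means when p is small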
Table 7: Computed p-value and the corresponding percentage of confidence for rejecting the null hypothesis for each program
Program name    Computed p-value    Confidence for rejecting the null hypothesis
tcas 1.32 >84.99%
totinfo 7.12 >99.99%
schedule 2.34 >97.8%
schedule2 0.89 >80.23%
printtokens 2.36 >99.99%
printtokens2 0.23 >70.82%
replace 3.11 >99.99%
Space 6.46 >98.5%
The observations made from the analysis are as follows. In all subject programs except printtokens2 and schedule2, there are far more suites with increased fault detection than with decreased fault detection when going from the SDA technique to the DDA technique. This shows that the DDA technique, while not always improving the fault detection of a suite, is much more likely to increase fault detection effectiveness than to decrease it. A large number of suites remained unaltered in size for the programs tcas, totinfo, schedule, schedule2, and printtokens, for both SDA and DDA. For printtokens2 about half of the suites, and for replace about one third, remained unchanged or were reduced more by SDA than by DDA. The programs schedule, schedule2, printtokens, and printtokens2 have a relatively large number of suites whose fault detection effectiveness remained unchanged under both techniques. This is most likely because these four programs have the fewest faulty versions available, so there are fewer opportunities for detecting new distinct faults.
5.5.3. Threats to Validity
Q1: Does DDA test suite minimization perform differently on different types of applications?
The subject programs we used are of moderate size; larger programs might have different characteristics. They were chosen because they are well understood from previous research and because we had no access to other programs with human-generated tests and faulty versions. We suspect that these programs differ from large programs less than the machine-generated tests and seeded faults differ from real ones.
Q2: How do the size and fault detection effectiveness of the DDA test suites compare to those of suites reduced by existing minimization techniques?
We intended to compare the DDA technique directly with different types of test suite minimization techniques. However, the percentage fault loss measure assumes a simple cost model that treats all faults as equally severe, whereas in practice fault severity varies over a wide range.
Q3: How does the fault detection effectiveness of the DDA-reduced test suites compare to suites of the same size created using other approaches?
The experimental investigation suggests that reduced suites created using a given technique may show better fault detection effectiveness simply because the technique selects more test cases on average than other techniques do. We therefore wished to investigate whether test suites created by the DDA preserve more fault-detecting ability than other reduced suites of the same size, including heuristic-based and greedy reduced suites augmented to that size.
6. Conclusion and Future Work
A new approach to regression test suite minimization using a Dynamic Dependency Graph with improved FDE for the accurate performance evaluation of components has been presented in this work. RTS minimization using a Static Dependence Graph has the problem of ignoring necessary test cases and their interaction patterns during iterative traversal of the system. Using dynamic dependencies, all possible test cases are considered together with their interaction patterns. Each test case is then analyzed, and test cases whose interaction patterns are identical to those of another test case are removed. Finally, a reduced test suite is formed from the remaining test cases, and the various application objects were tested with our reduced test suite. The performance of the new approach has been evaluated experimentally; our results indicate that the proposed method has higher fault detection ability than the previous methods.
There are several interesting directions for future work. The first is to apply the approach to additional sets of components and to evaluate different non-functional attributes so as to strengthen the validation of the approach. We also plan to enhance our prioritization methods to obtain the prioritized test suite in a cost-effective and efficient manner. Furthermore, our methodology can be extended by identifying the test suites automatically and building a test suite consisting of efficient test cases; the resulting test cases are expected to be less redundant than those produced by previous methods.
Acknowledgments
We thank Dr. Gregg Rothermel, Dept. of Computer Science, University of Nebraska, for providing the
Siemens Suite of programs and other experimental objects.
References
[1] http://www.cse.unl.edu/~galileo/sir
[2] D. Jeffrey, N. Gupta, 2007. “Improving Fault Detection Capability by Selectively Retaining Test Cases during Test Suite Reduction”, IEEE Transactions on Software Engineering 33, pp. 108-123.
[3] E. Gizdarski, H. Fujiwara, 2000. “Spirit: Satisfiability problem implementation for redundancy
identification and test generation”, Proceedings of Ninth Asian Test Symposium, Taiwan, IEEE
Computer Society, pp. 171.
[4] G. Rothermel, M. J. Harrold, J. Ostrin, and C. Hong, 1998. "An empirical study of the effects of minimization on the fault detection capabilities of test suites", International Conference on Software Maintenance, pp. 34–43.
[5] James E. Gentle, 2010. A Companion for Mathematical Statistics, http://mason.gmu.edu/~jgentle/books/books_index.htm
[6] J. Black, E. Melachrinoudis, D. Kaeli, 2004. “Bi-criteria models for all-uses test suite
reduction”, Proceedings of 26th International Conference on Software Engineering, IEEE
Computer Society, Washington,DC, USA, pp. 106–115.
[7] Korel, B., Tahat, L.H., and Vaysburg, B., 2002. "Model-based regression test reduction using
dependence analysis", Proc. of IEEE ICSM’02.
[8] M. Balcer, W. Hasling, and T. Ostrand. 1989. “Automatic generation of test scripts from formal
test specifications”, Proceedings of the 3rd Symposium on Software Testing, Analysis, and
Verification, pp. 210–218.
[9] M. Hutchins, H. Foster, T. Goradia, and T. Ostrand, May 1994. “Experiments on the Effectiveness of Dataflow- and Controlflow-based Test Adequacy Criteria”, 16th International Conference on Software Engineering, pp. 191–200.
[10] Mark Last, Menahem Friedman, Abraham Kandel, 2003. “The data mining approach to automated software testing”, Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 24-27, Washington, D.C.
[11] National Institute of Standards & Technology, May 2002. “The Economic Impacts of
Inadequate Infrastructure for Software Testing”, Planning Report 02-3.
[12] N. Mansour, K. El-Fakih, 1999. “Simulated annealing and genetic algorithms for optimal
regression testing”, Journal of Software Maintenance 11 (1) 19–34.
[13] Qurat-ul-ann Farooq, Muhammad Zohaib Z. Iqbal, Zafar I. Malik, Matthias Riebisch, 2010. "A
Model-Based Regression Testing Approach for Evolving Software Systems with Flexible Tool
Support," Engineering of Computer-Based Systems, IEEE International Conference on the
Engineering of Computer-Based Systems, pp. 41-49.
[14] S. Sampath, V. Mihaylov, A. Souter, and L. Polloc, 2005. "An Empirical Comparison of Test
Suite Reduction Techniques for User-Session-Based Testing of Web Applications",
Proceedings of the 21st IEEE International Conference on Software Maintenance.
[15] T. Ostrand and M. Balcer. , June 1988. “The category-partition method for specifying and
generating functional tests”, Communications of the ACM, 31(6):676–686.
[16] T.Y. Chen, M. Lau, 1998. “A new heuristic for test suite reduction”, Information and Software
Technology 40 (5-6), 347–354.
[17] Vaysburg, B., Tahat, L., Korel, B., 2002. "Dependence Analysis in Reduction of Requirement
Based Test Suites", Proceedings of IEEE International Symposium on Software Testing and
Analysis (ISSTA).
[18] V. Rusu, L. du Bousquet, T. Jeron, 2000. “An approach to symbolic test generation”,
Proceedings of Second International Conference on Integrated Formal Methods, Springer-
Verlag, London, UK, pp. 338–357.
[19] Yanping Chen, Robert L. Probert, Hasan Ural, 2007. "Regression test suite reduction using
extended dependence analysis", Fourth international workshop on Software quality assurance,
pp. 62-69.
[20] Yanping Chen, Robert L. Probert, Hasan Ural, 2009. "Regression test suite reduction based on SDL models of system requirements", Journal of Software Maintenance and Evolution: Research and Practice, John Wiley & Sons, Vol. 21, no. 6, pp. 379-405, Nov.
Appendix
Figure 3.A: EFSM model of a Global Banking System
(Diagram: states start, S1, S2, S3, S4, and exit. Transition t1 is labeled Check(uname,pw,Ltype) with att=0 and disp login page; the retry transition carries Retry()/att=att+1. Other labels shown include (Ltype=Admin)/display menu, (Ltype=accholder)/display menu, att>3, Vpendin acc(), Vparticular acc(), Vbalance(), Vprev trans(), Transferfund(), Chuname(), Chpassword(), Continue/disp menu, and logoff(), attached to transitions t2-t17 and t14a.)
Figure 3.B: EFSM model of the Banking System with added transition
(Diagram: the same model as Figure 3.A with an added transition t18 labeled balance/disp().)