Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that, as the model evolves, test suites tend to grow in size, making it too costly to execute them in full. This paper proposes a practical approach to reducing the size of test suites for a modified Simulink/Stateflow (SL/SF) model, a notation widely used for modeling software behavior in industries such as automobile manufacturing. The model describing a system is frequently modified before it is finalized. The proposed technique extracts a minimal-sized test suite with respect to test coverage by taking into account both the modified and the affected portions of the revised SL/SF model. Two real models for ECUs deployed in a commercial car are used for an empirical study.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
The document discusses test case selection and adequacy, and test execution. It covers key topics like:
- The difficulty in proving that a set of test cases is truly "adequate" to ensure correctness.
- Common criteria used to identify inadequacies in test suites, like lack of coverage for certain statements.
- The distinction between test cases, test specifications, and test obligations/adequacy criteria.
- How test cases are generated from more abstract test specifications to allow for automated execution.
- The role of scaffolding code in facilitating testing by providing controllability and observability.
Code coverage based test case selection and prioritization - ijseajournal
This document describes an approach for code coverage based test case selection and prioritization. It presents algorithms for test case selection (TCS) and test case prioritization (TCP) that aim to select a minimal set of test cases that achieve maximum code coverage. The TCS algorithm analyzes test cases and statements to cluster them into outdated, required and surplus test cases. The TCP algorithm then prioritizes the required test cases based on their individual statement coverage to achieve full code coverage with as few test cases as possible. An example application of the algorithms on a sample program is provided, demonstrating a reduction in test suite size.
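The selection step described above can be sketched as a greedy set-cover over statement coverage. This is a minimal illustration, assuming each test case is summarized by the set of statement ids it executes; the function and variable names are illustrative, not taken from the paper, and the "outdated" category (tests covering removed statements) is omitted for brevity.

```python
# Greedy coverage-based test case selection: keep a minimal set of tests
# that reaches the full coverage of the suite; leftovers are surplus.

def select_minimal_suite(coverage):
    """Greedy set-cover: repeatedly pick the test that covers the most
    still-uncovered statements; the rest add no new coverage."""
    uncovered = set().union(*coverage.values())
    remaining = dict(coverage)
    selected = []
    while uncovered and remaining:
        best = max(remaining, key=lambda t: len(remaining[t] & uncovered))
        if not remaining[best] & uncovered:
            break  # no remaining test adds coverage
        selected.append(best)
        uncovered -= remaining.pop(best)
    return selected, sorted(remaining)  # (required, surplus)

tests = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {1, 2},   # subsumed by t1, so it ends up surplus
    "t4": {5},
}
required, surplus = select_minimal_suite(tests)
print(required)  # ['t1', 't2', 't4']
print(surplus)   # ['t3']
```

The greedy heuristic does not guarantee the absolute minimum (set cover is NP-hard), but it is the standard practical choice for this kind of reduction.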
Prioritizing Test Cases for Regression Testing: A Model Based Approach - IJTET Journal
The document summarizes a model-based approach to prioritizing regression test cases. It involves generating test cases from UML models, prioritizing them based on the number of states and transitions covered, and clustering them by severity using a dendrogram approach. This helps decrease the time and cost of regression testing by focusing testing efforts on the most important and affected areas first. The proposed approach constructs models from requirements, identifies states, prioritizes flows, generates test cases, and prioritizes the test cases based on severity to improve regression testing efficiency.
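The "number of states and transitions covered" criterion can be made concrete with a small sketch. This assumes each model-derived test case is summarized by the states and transitions it covers; the names are illustrative, not the paper's notation.

```python
# Model-based prioritization: tests touching more model elements run first.

def prioritize_by_model_coverage(tests):
    # tests: {name: (set of states covered, set of transitions covered)}
    return sorted(tests,
                  key=lambda n: len(tests[n][0]) + len(tests[n][1]),
                  reverse=True)

model_tests = {
    "tc1": ({"S0", "S1"}, {"S0->S1"}),
    "tc2": ({"S0", "S1", "S2"}, {"S0->S1", "S1->S2"}),
    "tc3": ({"S0"}, set()),
}
print(prioritize_by_model_coverage(model_tests))  # ['tc2', 'tc1', 'tc3']
```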
Test Case Optimization and Redundancy Reduction Using GA and Neural Networks - IJECEIAES
More than 50% of the effort in a typical software development project is spent in the testing phase. Test case design as well as execution consume a lot of time; hence, automated generation of test cases is highly desirable. A novel methodology is presented for testing object-oriented software based on UML state chart diagrams. In this approach, a function minimization technique is applied to generate test cases automatically from UML state chart diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure the conformity of an application to its specification, a test “oracle” is needed to determine whether a given test case exposes a fault. An automated oracle to support the activities of human testers can reduce the actual cost of the testing process and the related maintenance costs. The paper presents a new concept that uses a UML state chart diagram and tables for test case generation, with an artificial neural network as an optimization tool for reducing redundancy in the test cases generated by the genetic algorithm. The neural network is trained by the backpropagation algorithm on a set of test cases applied to the original version of the system.
This document discusses the theory of software testing. It covers several key topics:
1) It identifies five common problems in software testing like limitations of testing teams and issues with manual testing.
2) It describes different testing processes like verification, validation, white-box testing and black-box testing.
3) It outlines three main phases of software testing - preliminary testing, testing, and user acceptance testing - to evaluate a new software system and identify any issues.
Regression testing concentrates on finding defects after a major code change has occurred. Specifically, it exposes software regressions, or old bugs that have reappeared. It is an expensive testing process that has been estimated to account for almost half of the cost of software maintenance. To improve the regression testing process, test case prioritization techniques organize the execution order of test cases. Further, prioritization improves the rate of fault identification when test suites cannot run to completion.
This document provides an overview of software testing techniques and best practices covered in a course on the topic. It discusses the purpose of software testing, including verification, error detection, and validation. It then surveys common software testing methodologies like white box testing, black box testing, and unit testing. The document also includes two case studies, one on test automation and one on testing an intranet system. Finally, it provides a template for a software test plan and discusses several papers on software testing methods and techniques.
The document discusses test automation, including its objectives, benefits, misconceptions, and what is required for effective implementation. It outlines the key steps in planning and designing a test automation strategy, including choosing the right tests to automate, selecting tools, defining requirements, designing architecture, and ensuring maintainability through standards and processes.
This document describes a collaboration between Aalborg University and NOVO Nordisk to develop an automatic model-based testing tool for GUI testing on embedded medical devices. The tool takes UML state machine models of the GUI as input, generates test cases to satisfy coverage criteria like edge coverage, and converts the tests to a scripting language to run on the target system. The tool significantly reduced test construction time from 30 days to 3 days, decreased the number of tests while improving coverage, and uncovered bugs. Both organizations found the collaboration and use of the testing tool to be very successful and beneficial.
Introduction to specification based test design techniques - Yogindernath Gupta
Specification-based test techniques involve deriving test cases from requirements specifications rather than source code. These techniques include equivalence partitioning, boundary value analysis, decision tables, state transition testing, all-pairs testing, classification trees, and use case testing. Coverage criteria measure things like the percentage of partitions or boundaries covered. Specification-based techniques help ensure requirements are thoroughly tested before code is written.
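Boundary value analysis, mentioned above, can be sketched in a few lines. This assumes a single integer input specified as a closed range; the function name is illustrative.

```python
# Classic boundary value analysis for an input specified as [lo, hi]:
# test just outside, at, and just inside each boundary, plus a nominal value.

def boundary_values(lo, hi):
    nominal = (lo + hi) // 2
    return [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]

# e.g. an input specified as 1..100
print(boundary_values(1, 100))  # [0, 1, 2, 50, 99, 100, 101]
```

The two out-of-range values (lo - 1 and hi + 1) are invalid inputs that should be rejected by the system under test, which is exactly where off-by-one defects tend to hide.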
This document summarizes a research paper that applies cause-effect graphs to model the college placement process. It begins by introducing cause-effect graphs and their basic notations. Then, it outlines the college placement process and identifies various factors (causes) and outcomes (effects) to develop a cause-effect graph and decision table. Finally, it generates test cases from the decision table to test different combinations of causes and effects in an automated manner, concluding that cause-effect graphs provide an effective way to systematically represent and test the college placement process.
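The cause-effect-graph-to-decision-table step above can be illustrated with a tiny hypothetical version of the placement example. The cause names and the placement rule here are invented for illustration and are not taken from the paper; each enumerated cause combination corresponds to one column (rule) of the decision table and yields one test case.

```python
# Decision-table expansion: every combination of boolean causes becomes a
# rule, and the effect function gives the expected outcome for that rule.
from itertools import product

causes = ["eligible_cgpa", "cleared_aptitude"]  # hypothetical causes

def placed(eligible_cgpa, cleared_aptitude):
    # hypothetical effect: placement requires both causes to hold
    return eligible_cgpa and cleared_aptitude

test_cases = [
    (dict(zip(causes, rule)), placed(*rule))
    for rule in product([True, False], repeat=len(causes))
]
for inputs, expected in test_cases:
    print(inputs, "->", "placed" if expected else "not placed")
```

With n boolean causes the full table has 2^n rules; in practice the constraints in the cause-effect graph prune combinations that cannot occur.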
Real time implementation of the software system requires being more versatile. In the maintenance phase, the modified system under regression testing must assure that the existing system remains defect free. Test case prioritization technique of regression testing includes code as well as model based methods of prioritizing the test cases. System model based test case prioritization can detect the severe faults early as compare to the code based test case prioritization. Model based prioritization techniques based on requirements in a cost effective manner has not been taken for study so far. Model based testing used to test the functionality of the software system based on requirement. An effective model based approach is defined for prioritizing test cases and to generate the effective test sequence. The test cases are rescheduled based on requirement analysis and user view analysis. With the use of weighted approach the overall cost is estimated to test the functionality of the model elements. Here, the genetic approach has been applied to generate efficient test path. The regression cost in terms of effort has been reduced under model based prioritization approach.
Testing and test case generation by using fuzzy logic and n… - IAEME Publication
The document discusses using fuzzy logic and natural language processing techniques for software testing and test case generation. Specifically, it proposes using these techniques to enable search-based testing, automatic test case generation, and automatic fault finding. Fuzzy logic would be used to generate test cases from natural language requirements and specifications. This approach aims to make testing more efficient. The document provides background on different types of software testing (e.g. black box vs. white box) and discusses gray-box testing and test automation frameworks. It also outlines exploratory testing and functional vs. non-functional testing. The key idea is applying fuzzy logic and NLP to requirements to automatically generate test cases and detect faults.
A NOVEL APPROACH FOR TEST CASE PRIORITIZATION - IJCSEA Journal
This paper proposes a novel approach to test case prioritization that calculates the product of a test case's statement coverage and number of function calls to determine priority. The test cases are ordered based on this product metric, with the highest product value first. An algorithm is presented and evaluations show the approach improves fault detection rates over non-prioritized test cases, as measured by the APFD metric. The approach addresses potential ambiguities when multiple test cases have the same product value by further prioritizing based on number of function calls or execution order. The results demonstrate the effectiveness of the proposed prioritization formula and algorithm.
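The product metric and its evaluation can be sketched briefly. This assumes each test case is summarized by a statement coverage count and a number of function calls, as the summary above describes; the data values are invented, and the APFD helper follows the standard formula APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n).

```python
# Product-metric prioritization: priority = coverage x calls, highest first,
# ties broken by number of function calls.

def prioritize(test_cases):
    # test_cases: list of (name, stmt_coverage, func_calls)
    return sorted(test_cases, key=lambda t: (t[1] * t[2], t[2]), reverse=True)

def apfd(order, faults):
    # order: prioritized test names; faults: {fault_id: tests detecting it}
    n, m = len(order), len(faults)
    pos = {name: i + 1 for i, name in enumerate(order)}
    tf = sum(min(pos[t] for t in detectors) for detectors in faults.values())
    return 1 - tf / (n * m) + 1 / (2 * n)

suite = [("t1", 10, 2), ("t2", 6, 5), ("t3", 5, 4), ("t4", 10, 3)]
order = [name for name, _, _ in prioritize(suite)]
print(order)  # ['t2', 't4', 't3', 't1']  (products 30, 30, 20, 20)
print(apfd(order, {"f1": {"t1", "t2"}, "f2": {"t3"}}))  # 0.625
```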
The document discusses various techniques for test generation and test selection from requirements specifications. It describes equivalence partitioning, boundary value analysis, and category partition testing. Equivalence partitioning involves dividing the input domain into equivalence classes and selecting one representative test per class. Boundary value analysis focuses on boundary values in addition to representative values. Category partition testing is a systematic approach that involves analyzing requirements to identify testable units, input categories and choices for each category, and constraints to derive a test specification.
The document discusses various structural testing criteria used to evaluate test suites, including:
- Statement testing covers each statement in the program.
- Branch testing covers each branch or condition in the program.
- Condition testing covers each condition and combination of conditions.
- Path testing covers each path through the program, but this is impractical for programs with loops.
- Procedure call testing focuses on interfaces between program units through calls and entry/exit points.
More rigorous criteria like condition and path coverage are difficult to achieve due to complexity, so practical criteria partition paths into representative subsets. Overall, structural criteria complement functional testing by systematically covering the program structure.
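The gap between statement and branch coverage described above can be made concrete with a toy function (the example is illustrative, not from the document):

```python
# One test can reach full statement coverage while missing a branch.

def sign_flag(x):
    y = 0
    if x > 0:
        y = 1
    return y

# The single test sign_flag(1) executes every statement (full statement
# coverage) but only the true branch of the condition; adding sign_flag(-1)
# also exercises the false branch, achieving branch coverage.
print(sign_flag(1), sign_flag(-1))  # 1 0
```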
A study on the efficiency of a test analysis method utilizing test-categories... - Tsuyoshi Yumoto
This document describes a study on improving the efficiency of test analysis through utilizing test categories based on application under test (AUT) knowledge and known faults. The study proposes a method for defining test categories based on logical structures of features to guide test condition determination. A verification experiment was conducted and showed measurable improvement in test coverage when using the proposed method. The method aims to minimize variability in test analysis results by providing a standardized process for testers to follow.
A Productive Method for Improving Test Effectiveness - Shradha Singh
This document proposes a new approach for test suite selection and prioritization that aims to improve test effectiveness. The approach has three components: 1) A predictive component that prioritizes test cases based on their historical failure rates, with tests that failed more often run more frequently. 2) A coverage component that selects test cases based on code coverage data to target parts of the code affected by changes. 3) A decision component that prioritizes important test cases using an algorithm. The approach is intended to select a subset of test cases from large test suites to reduce validation time and improve resource utilization while still effectively testing software. Experiments showed the approach decreased validation cycles and reduced time to market.
Model based test case prioritization using neural network classification - cseij
Model-based testing of real-life software systems often requires a large number of tests, not all of which can be run exhaustively due to time and cost constraints. Thus, it is necessary to prioritize the test cases according to the importance the tester perceives. In this paper, the problem is addressed by improving our previous study: a classification approach is applied to its results, and a functional relationship is established between test case prioritization group membership and two attributes, the importance index and the frequency, for all events belonging to a given group. For classification, a neural network (NN) is preferred as one of the most advanced techniques, and a data set obtained from our study for all test cases is classified using a multilayer perceptron (MLP) NN. The classification results for a commercial test prioritization application show high classification accuracies of about 96%, and acceptable test prioritization performance is achieved.
The Current State of the Art of Regression Testing - John Reese
The document surveys 159 papers on test suite minimization, regression test selection, and test case prioritization techniques. It finds that the majority of studies used small codebases with under 10,000 lines of code and fewer than 1,000 test cases. Graph-walking is identified as the most predominant regression test selection technique. Prioritization approaches focus on coverage-based and history-based methods. Future work opportunities include integrating regression testing with test data generation, considering other domains beyond white-box testing, and providing more tool support.
Software testing effort estimation with cobb douglas function a practical app... - eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document discusses a technique for minimizing regression test suites by identifying redundant test cases based on multiple coverage criteria. It proposes a novel approach that considers criteria like function coverage, function call stack coverage, line coverage, and branch coverage to identify redundancy in test cases. This technique aims to reduce the size of test suites while maintaining the same level of coverage and fault detection as the original test suite. It also seeks to improve on existing minimization techniques by handling large test suites more efficiently and avoiding reductions in coverage or fault detection abilities.
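The multi-criteria redundancy idea above can be sketched as follows. This assumes each test is summarized by the functions, lines, and branches it covers, and keeps a test only if it adds new coverage under at least one criterion; the names and the greedy ordering are illustrative assumptions, not the paper's algorithm (call-stack coverage is omitted for brevity).

```python
# Multi-criteria test suite minimization: a test is redundant only if the
# kept tests already cover everything it covers under every criterion.

def minimize(tests):
    # tests: {name: {"function": set, "line": set, "branch": set}}
    covered = {"function": set(), "line": set(), "branch": set()}
    kept = []
    # consider tests with the largest total contribution first
    for name in sorted(tests, key=lambda t: -sum(map(len, tests[t].values()))):
        if any(tests[name][c] - covered[c] for c in covered):
            kept.append(name)
            for c in covered:
                covered[c] |= tests[name][c]
    return kept

regression_suite = {
    "t1": {"function": {"f"}, "line": {1, 2}, "branch": {"b1"}},
    "t2": {"function": {"f"}, "line": {1},    "branch": {"b1"}},  # redundant
    "t3": {"function": {"f"}, "line": {3},    "branch": {"b2"}},
}
print(minimize(regression_suite))  # ['t1', 't3']: t2 adds nothing new
```

Requiring redundancy under every criterion simultaneously is what lets the reduced suite keep the original coverage and, in the paper's claim, its fault detection ability.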
Configuration Navigation Analysis Model for Regression Test Case Prioritization - ijsrd.com
Regression testing has been receiving increasing attention nowadays. Numerous regression testing strategies have been proposed. Most of them take into account various metrics like cost as well as the ability to find faults quickly thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
LusRegTes: A Regression Testing Tool for Lustre Programs - IJECEIAES
This document summarizes a tool called LusRegTes that was developed to automate regression testing of Lustre programs. Lustre is a synchronous data-flow language used for safety-critical applications. The tool represents Lustre programs as operator networks and generates regression test cases by comparing paths between program versions. It aims to select an optimal minimum set of tests to validate changes while reducing testing costs. The approach was implemented in a tool to automate the regression testing process for Lustre programs.
Regression testing concentrates on finding defects after a major code change has occurred. Specifically, it
exposes software regressions or old bugs that have reappeared. It is an expensive testing process that has
been estimated to account for almost half of the cost of software maintenance. To improve the regression
testing process, test case prioritization techniques organizes the execution level of test cases. Further, it
gives an improved rate of fault identification, when test suites cannot run to completion.
This document provides an overview of software testing techniques and best practices covered in a course on the topic. It discusses the purpose of software testing, including verification, error detection, and validation. It then surveys common software testing methodologies like white box testing, black box testing, and unit testing. The document also includes two case studies, one on test automation and one on testing an intranet system. Finally, it provides a template for a software test plan and discusses several papers on software testing methods and techniques.
The document discusses test automation, including its objectives, benefits, misconceptions, and what is required for effective implementation. It outlines the key steps in planning and designing a test automation strategy, including choosing the right tests to automate, selecting tools, defining requirements, designing architecture, and ensuring maintainability through standards and processes.
This document describes a collaboration between Aalborg University and NOVO Nordisk to develop an automatic model-based testing tool for GUI testing on embedded medical devices. The tool takes UML state machine models of the GUI as input, generates test cases to satisfy coverage criteria like edge coverage, and converts the tests to a scripting language to run on the target system. The tool significantly reduced test construction time from 30 days to 3 days, decreased the number of tests while improving coverage, and uncovered bugs. Both organizations found the collaboration and use of the testing tool to be very successful and beneficial.
Introduction to specification based test design techniquesYogindernath Gupta
Specification-based test techniques involve deriving test cases from requirements specifications rather than source code. These techniques include equivalence partitioning, boundary value analysis, decision tables, state transition testing, all-pairs testing, classification trees, and use case testing. Coverage criteria measure things like the percentage of partitions or boundaries covered. Specification-based techniques help ensure requirements are thoroughly tested before code is written.
This document summarizes a research paper that applies cause-effect graphs to model the college placement process. It begins by introducing cause-effect graphs and their basic notations. Then, it outlines the college placement process and identifies various factors (causes) and outcomes (effects) to develop a cause-effect graph and decision table. Finally, it generates test cases from the decision table to test different combinations of causes and effects in an automated manner, concluding that cause-effect graphs provide an effective way to systematically represent and test the college placement process.
Real time implementation of the software system requires being more versatile. In the maintenance phase, the modified system under regression testing must assure that the existing system remains defect free. Test case prioritization technique of regression testing includes code as well as model based methods of prioritizing the test cases. System model based test case prioritization can detect the severe faults early as compare to the code based test case prioritization. Model based prioritization techniques based on requirements in a cost effective manner has not been taken for study so far. Model based testing used to test the functionality of the software system based on requirement. An effective model based approach is defined for prioritizing test cases and to generate the effective test sequence. The test cases are rescheduled based on requirement analysis and user view analysis. With the use of weighted approach the overall cost is estimated to test the functionality of the model elements. Here, the genetic approach has been applied to generate efficient test path. The regression cost in terms of effort has been reduced under model based prioritization approach.
Testing and test case generation by using fuzzy logic and nIAEME Publication
The document discusses using fuzzy logic and natural language processing techniques for software testing and test case generation. Specifically, it proposes using these techniques to enable search-based testing, automatic test case generation, and automatic fault finding. Fuzzy logic would be used to generate test cases from natural language requirements and specifications. This approach aims to make testing more efficient. The document provides background on different types of software testing (e.g. black box vs. white box) and discusses gray-box testing and test automation frameworks. It also outlines exploratory testing and functional vs. non-functional testing. The key idea is applying fuzzy logic and NLP to requirements to automatically generate test cases and detect faults.
A NOVEL APPROACH FOR TEST CASE PRIORITIZATION - IJCSEA Journal
This paper proposes a novel approach to test case prioritization that calculates the product of a test case's statement coverage and number of function calls to determine priority. The test cases are ordered based on this product metric, with the highest product value first. An algorithm is presented and evaluations show the approach improves fault detection rates over non-prioritized test cases, as measured by the APFD metric. The approach addresses potential ambiguities when multiple test cases have the same product value by further prioritizing based on number of function calls or execution order. The results demonstrate the effectiveness of the proposed prioritization formula and algorithm.
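The APFD metric used in the evaluation above has a standard closed form: for an ordering of n tests revealing m faults, APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the position of the first test that detects fault i. A minimal sketch; the `faults_detected` mapping is a hypothetical input format, not the paper's data model:

```python
def apfd(ordering, faults_detected, n_faults):
    """Average Percentage of Faults Detected for a test ordering.

    ordering: list of test IDs in execution order.
    faults_detected: dict mapping test ID -> set of fault IDs it detects.
    n_faults: total number of distinct faults.
    """
    n = len(ordering)
    # TF_i: 1-based position of the first test that detects fault i
    first_pos = {}
    for pos, test in enumerate(ordering, start=1):
        for fault in faults_detected.get(test, ()):
            first_pos.setdefault(fault, pos)
    tf_sum = sum(first_pos.values())
    return 1 - tf_sum / (n * n_faults) + 1 / (2 * n)
```

An ordering that detects every fault earlier yields a higher APFD, which is why prioritization approaches report it.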
The document discusses various techniques for test generation and test selection from requirements specifications. It describes equivalence partitioning, boundary value analysis, and category partition testing. Equivalence partitioning involves dividing the input domain into equivalence classes and selecting one representative test per class. Boundary value analysis focuses on boundary values in addition to representative values. Category partition testing is a systematic approach that involves analyzing requirements to identify testable units, input categories and choices for each category, and constraints to derive a test specification.
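The boundary value analysis described above can be mechanized for a simple integer range. A minimal sketch using the classic "boundaries, their neighbours, and a nominal value" selection; the age-range example is invented for illustration:

```python
def boundary_values(lo, hi):
    """Boundary value analysis picks for an integer range [lo, hi]:
    both boundaries, their immediate neighbours, and a nominal mid value."""
    return sorted({lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1})

# Hypothetical example: a field that accepts ages 18..65
tests = boundary_values(18, 65)
```

The values just outside the range exercise the invalid equivalence classes; the mid value stands in for the valid partition's representative.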
The document discusses various structural testing criteria used to evaluate test suites, including:
- Statement testing covers each statement in the program.
- Branch testing covers each branch or condition in the program.
- Condition testing covers each condition and combination of conditions.
- Path testing covers each path through the program, but this is impractical for programs with loops.
- Procedure call testing focuses on interfaces between program units through calls and entry/exit points.
More rigorous criteria like condition and path coverage are difficult to achieve due to complexity, so practical criteria partition paths into representative subsets. Overall, structural criteria complement functional testing by systematically covering the program structure.
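Once instrumentation has produced execution traces, the statement and branch criteria above reduce to simple set arithmetic. A sketch under the assumption that the executed-lines and taken-branch sets are already available (both inputs are hypothetical):

```python
def statement_coverage(executable_lines, executed_lines):
    """Fraction of executable statements exercised by the test suite."""
    executable = set(executable_lines)
    return len(executable & set(executed_lines)) / len(executable)

def branch_coverage(branches, taken):
    """branches: all (branch_id, outcome) pairs the program contains;
    taken: the (branch_id, outcome) pairs observed during the test runs."""
    all_outcomes = set(branches)
    return len(all_outcomes & set(taken)) / len(all_outcomes)
```

Branch coverage subsumes statement coverage because every statement lies on some branch outcome, which is why the criteria form the hierarchy described above.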
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A study on the efficiency of a test analysis method utilizing test-categories... - Tsuyoshi Yumoto
This document describes a study on improving the efficiency of test analysis through utilizing test categories based on application under test (AUT) knowledge and known faults. The study proposes a method for defining test categories based on logical structures of features to guide test condition determination. A verification experiment was conducted and showed measurable improvement in test coverage when using the proposed method. The method aims to minimize variability in test analysis results by providing a standardized process for testers to follow.
A Productive Method for Improving Test Effectiveness - Shradha Singh
This document proposes a new approach for test suite selection and prioritization that aims to improve test effectiveness. The approach has three components: 1) A predictive component that prioritizes test cases based on their historical failure rates, with tests that failed more often run more frequently. 2) A coverage component that selects test cases based on code coverage data to target parts of the code affected by changes. 3) A decision component that prioritizes important test cases using an algorithm. The approach is intended to select a subset of test cases from large test suites to reduce validation time and improve resource utilization while still effectively testing software. Experiments showed the approach decreased validation cycles and reduced time to market.
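The predictive component described above, running historically failure-prone tests first, can be sketched as a simple sort. The outcome-history format is an assumption for illustration, not the paper's actual data model:

```python
def prioritize_by_history(history):
    """Order tests by historical failure rate, highest first.

    history: dict mapping test ID -> list of past outcomes (True = failed).
    """
    def failure_rate(runs):
        return sum(runs) / len(runs) if runs else 0.0
    return sorted(history, key=lambda t: failure_rate(history[t]), reverse=True)
```

In the full approach this ordering would then be combined with the coverage and decision components before a subset is selected.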
Model based test case prioritization using neural network classification - cseij
Model-based testing of real-life software systems often requires a large number of tests, not all of which can be run exhaustively under time and cost constraints. It is therefore necessary to prioritize the test cases according to the importance the tester perceives. In this paper, the problem is addressed by improving our previous study: a classification approach is applied to its results, establishing a functional relationship between test case prioritization group membership and two attributes, the importance index and the frequency, of all events belonging to a given group. For classification, a neural network (NN) is used, and a data set obtained from our study for all test cases is classified with a multilayer perceptron (MLP) NN. The classification results for a commercial test prioritization application show high classification accuracy of about 96%, and acceptable test prioritization performance is achieved.
The Current State of the Art of Regression Testing - John Reese
The document surveys 159 papers on test suite minimization, regression test selection, and test case prioritization techniques. It finds that the majority of studies used small codebases with under 10,000 lines of code and fewer than 1,000 test cases. Graph-walking is identified as the most predominant regression test selection technique. Prioritization approaches focus on coverage-based and history-based methods. Future work opportunities include integrating regression testing with test data generation, considering other domains beyond white-box testing, and providing more tool support.
Software testing effort estimation with Cobb-Douglas function: a practical app... - eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This document discusses a technique for minimizing regression test suites by identifying redundant test cases based on multiple coverage criteria. It proposes a novel approach that considers criteria like function coverage, function call stack coverage, line coverage, and branch coverage to identify redundancy in test cases. This technique aims to reduce the size of test suites while maintaining the same level of coverage and fault detection as the original test suite. It also seeks to improve on existing minimization techniques by handling large test suites more efficiently and avoiding reductions in coverage or fault detection abilities.
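Redundancy identification against a coverage criterion is often approximated with a greedy set-cover heuristic. This is a generic sketch of that idea, not the paper's specific multi-criteria technique; the coverage mapping is a hypothetical input:

```python
def minimize_suite(coverage):
    """Greedy set-cover style reduction: repeatedly keep the test that adds
    the most not-yet-covered requirements, until coverage stops growing.

    coverage: dict mapping test ID -> set of covered items (lines, branches, ...).
    Tests never selected are the redundant ones.
    """
    remaining = set().union(*coverage.values())
    kept = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gain = coverage[best] & remaining
        if not gain:
            break
        kept.append(best)
        remaining -= gain
    return kept
```

A multi-criteria variant would intersect against several such requirement sets (function, call-stack, line, branch) before declaring a test redundant.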
Configuration Navigation Analysis Model for Regression Test Case Prioritization - ijsrd.com
Regression testing has been receiving increasing attention nowadays. Numerous regression testing strategies have been proposed. Most of them take into account various metrics like cost as well as the ability to find faults quickly thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
LusRegTes: A Regression Testing Tool for Lustre Programs - IJECEIAES
This document summarizes a tool called LusRegTes that was developed to automate regression testing of Lustre programs. Lustre is a synchronous data-flow language used for safety-critical applications. The tool represents Lustre programs as operator networks and generates regression test cases by comparing paths between program versions. It aims to select an optimal minimum set of tests to validate changes while reducing testing costs. The approach was implemented in a tool to automate the regression testing process for Lustre programs.
TEST CASE PRIORITIZATION FOR OPTIMIZING A REGRESSION TEST - ijfcstjournal
Regression testing makes sure that upgrading software, whether to add new features or to fix bugs, does not break previously working functionality. Whenever software is upgraded or modified, a set of test cases is run on each of its functions to ensure that the change does not affect other parts of the software that were previously running flawlessly. To achieve this, all existing test cases need to run, and new test cases may need to be created. It is not feasible to re-execute every test case for all the functions of a given software system: with a large number of test cases, a lot of time and effort would be required. This problem can be addressed by prioritizing test cases. Test case prioritization reorders the sequence in which test cases are executed, so that high-priority test cases run first and uncover the maximum number of faults early. In this paper we propose an optimized test case prioritization technique using Ant Colony Optimization (ACO) to reduce the cost, effort, and time taken to perform regression testing while uncovering the maximum number of faults. A comparison of techniques such as Retest All, Test Case Minimization, Test Case Prioritization, Random Test Case Selection, and Test Case Prioritization using ACO is also presented.
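A full ACO implementation is beyond a short sketch, but the simpler "additional greedy" coverage-based prioritization, one of the baselines such techniques are compared against, illustrates the reordering idea. The coverage mapping is a hypothetical input:

```python
def additional_greedy_prioritization(coverage):
    """'Additional' greedy prioritization: repeatedly schedule the test that
    covers the most still-uncovered elements; once everything is covered,
    reset the uncovered set and continue with the leftover tests.

    coverage: dict mapping test ID -> set of covered code elements.
    """
    ordered, pool = [], dict(coverage)
    remaining = set().union(*coverage.values())
    while pool:
        best = max(pool, key=lambda t: len(pool[t] & remaining))
        ordered.append(best)
        remaining -= pool.pop(best)
        if not remaining and pool:  # full coverage reached: reset
            remaining = set().union(*pool.values())
    return ordered
```

ACO replaces the deterministic `max` step with pheromone-guided probabilistic choices, searching for orderings that reveal faults even earlier.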
TRANSFORMING SOFTWARE REQUIREMENTS INTO TEST CASES VIA MODEL TRANSFORMATION - ijseajournal
Executable test cases originate at the onset of testing as abstract requirements that represent system behavior. Their manual development is time-consuming, susceptible to errors, and expensive. Translating system requirements into behavioral models and then transforming them into a scripting language has the potential to automate their conversion into executable tests. Ideally, an effective testing process should start as early as possible, refine the use cases with ample detail, and facilitate the creation of test cases. We propose a methodology that enables automation in converting functional requirements into executable test cases via model transformation. The proposed testing process starts with capturing system behavior in the form of visual use cases using a domain-specific language, defining transformation rules, and ultimately transforming the use cases into executable tests.
Object-oriented system analysis and design - wekineheshete
The document discusses software testing and maintenance. It defines key testing concepts like test cases, stubs, and drivers. It also describes different types of testing like unit testing, integration testing, and system testing. It discusses techniques for each type of testing. The document also defines software maintenance and its objectives to correct faults, adapt to new requirements, improve code quality, and inspect code. It describes four main types of maintenance: corrective, adaptive, perfective, and inspection.
The document presents a new approach called Regression Test Suite Minimization Using Dynamic Interaction Patterns with Improved FDE that aims to reduce the size of regression test suites while maintaining fault detection ability. It does this by constructing a dynamic dependence graph to identify interaction patterns between transitions in an Extended Finite State Machine model, allowing it to consider repeated dependencies. An example of applying the approach to a banking system model is provided to motivate the method.
Software testing is an important activity of the software development process and its most effort-consuming phase. One would like to minimize the effort while maximizing the number of faults detected, and automated test case generation helps reduce cost and time. Test case generation may therefore be treated as an optimization problem. In this paper we use a genetic algorithm to optimize test cases generated by applying condition coverage to the source code. Test data generated and optimized with the genetic algorithm outperform the test cases generated by random testing.
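A toy version of GA-based test data generation: fitness is the number of not-yet-covered condition outcomes an input exercises. Everything here (the instrumented program, the input ranges, the GA parameters) is invented for illustration and is not the paper's setup:

```python
import random

# Outcomes of the two conditions in the toy program below
ALL_OUTCOMES = {("x>10", True), ("x>10", False), ("y==0", True), ("y==0", False)}

def branches_hit(x, y):
    """Instrumented toy program: which condition outcomes does (x, y) exercise?"""
    return {("x>10", x > 10), ("y==0", y == 0)}

def evolve(pop_size=20, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [(rng.randint(-50, 50), rng.randint(-5, 5)) for _ in range(pop_size)]
    covered, suite = set(), []
    for _ in range(generations):
        # Fitness: how many still-uncovered outcomes an individual exercises
        pop.sort(key=lambda ind: len(branches_hit(*ind) - covered), reverse=True)
        gain = branches_hit(*pop[0]) - covered
        if gain:
            suite.append(pop[0])
            covered |= gain
        if covered == ALL_OUTCOMES:
            break
        # Crossover-by-recombination plus small mutations on the fitter half
        parents = pop[: pop_size // 2]
        pop = [(rng.choice(parents)[0] + rng.randint(-3, 3),
                rng.choice(parents)[1] + rng.randint(-1, 1))
               for _ in range(pop_size)]
    return suite, covered
```

The same skeleton scales to real condition coverage once `branches_hit` is replaced by actual instrumentation of the program under test.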
A Complexity Based Regression Test Selection Strategy - CSEIJJournal
Software is unequivocally the foremost and indispensable entity in this technologically driven world; therefore quality assurance, and in particular software testing, is a crucial step in the software development cycle. This paper presents an effective test selection strategy that uses a Spectrum of Complexity Metrics (SCM). Our aim is to increase the efficiency of the testing process by significantly reducing the number of test cases without a significant drop in test effectiveness. The strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class, method, statement) and its characteristics. We use a series of experiments based on three applications with a significant number of mutants to demonstrate the effectiveness of our selection strategy. For further evaluation, we compare our approach to boundary value analysis. The results show the capability of our approach to detect mutants as well as the seeded errors.
This document summarizes a research paper on mutation testing for C# programs. Mutation testing involves making small changes to a program to generate mutant versions, then testing if test cases can detect the changes. The paper proposes using mutation operators that model common programming errors specific to object-oriented features in C#, like access control, inheritance, and polymorphism. It presents a framework for mutation testing of C# programs and results showing the proposed approach improves accuracy and speed over traditional methods.
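The mutant-generation-and-killing loop can be sketched in a few lines. Real mutation tools, including the C# framework described above, operate on the syntax tree with language-specific operators; the naive string replacement below is only for illustration:

```python
def make_mutants(src):
    """Generate simple mutants by swapping arithmetic/relational operators.
    (Real tools mutate the AST; string replacement is only a sketch.)"""
    ops = [("+", "-"), ("<", ">="), ("==", "!=")]
    mutants = []
    for a, b in ops:
        if a in src:
            mutants.append(src.replace(a, b, 1))
    return mutants

def mutation_score(src, func_name, tests):
    """Fraction of mutants 'killed': a mutant dies when some test
    observes a result different from the expected one.

    tests: list of (args, expected) pairs.
    """
    mutants = make_mutants(src)
    killed = 0
    for mutant in mutants:
        env = {}
        exec(mutant, env)  # compile and load the mutated function
        f = env[func_name]
        if any(f(*args) != expected for args, expected in tests):
            killed += 1
    return killed / len(mutants) if mutants else 1.0

SRC = "def add(a, b):\n    return a + b\n"
```

A low score signals weak tests: the suite cannot distinguish the program from its seeded faulty versions.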
The Maestro framework implemented by the validation group at Cirrus Logic provides GUI-based test automation and management for mixed signal validation. It leads to a 66% reduction in testing time through a modular structure with configuration files, a MATLAB GUI, and reusable validation scripts. Key benefits include abstracted test development and execution, standardized methodologies, and a system for monitoring and logging test results.
DYNAMUT: A MUTATION TESTING TOOL FOR INDUSTRY-LEVEL EMBEDDED SYSTEM APPLICATIONS - ijesajournal
The document describes DynaMut, a tool developed to automate mutation testing for embedded system applications written in C++. DynaMut inserts conditional mutations into the code during compilation rather than requiring multiple recompilations. This reduces the time needed for mutation testing by 48-67% compared to traditional methods. The document also evaluates different sampling techniques for reducing the number of mutations tested while maintaining representative results, finding that dithered sampling is most effective.
Software testing effort estimation with Cobb-Douglas function: a practical ap... - eSAT Journals
Abstract: Effort estimation is one of the critical challenges in the Software Testing Life Cycle (STLC). It is the basis for a project's effort estimation, planning, scheduling, and budget planning. This paper illustrates a model intended to depict the accuracy and bias variation of an organization's estimates of software testing effort through the Cobb-Douglas function (CDF). The data variables selected for building the model were believed to be vital and to have a significant impact on the accuracy of the estimates. Data were gathered for completed projects in the organization covering about 13 releases. All variables in the model were statistically significant at the p < 0.05 and p < 0.01 levels. The Cobb-Douglas function was selected and used for software testing effort estimation. The results achieved with the CDF were compared with the estimates provided by the area expert; the model's estimates are more accurate than expert judgment. The CDF is thus an appropriate technique for estimating software testing effort, with a model accuracy of 93.42%.
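The Cobb-Douglas model is typically fitted by ordinary least squares on its log-linear form. A single-factor sketch (the paper's model uses several release-level variables, which are not reproduced here):

```python
import math

def fit_cobb_douglas(sizes, efforts):
    """Fit effort = A * size**b by ordinary least squares on the log-log
    form log(effort) = log(A) + b*log(size).  Single-factor sketch; a
    multi-factor CDF adds one slope per explanatory variable."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    A = math.exp(my - b * mx)
    return A, b
```

On data generated exactly by effort = 2 * sqrt(size), the fit recovers A = 2 and b = 0.5, since the log-log relationship is perfectly linear.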
- The document discusses using model-based testing to generate test cases for software installers. It describes two case studies where an extended finite state machine (EFSM) model was used to model installer software and generate test suites.
- The modeling approach worked well for installer test development, generating test suites that effectively found defects. The document discusses lessons learned and challenges that remain in applying model-based testing.
This document discusses unit testing ILE procedures in IBM i. It introduces unit testing as a way to identify bugs early and test code as it is written. It outlines how to create a test script in RPG that calls the procedures being tested and produces a report of the inputs, expected outputs, and actual results. The document provides terminology for different types of testing and guidelines for compiling test scripts separately from production code. It emphasizes that unit testing should be integrated into the development process.
Abstract: Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. To date, most research work in combinatorial testing proposes novel approaches for generating test suites of minimum size that still cover all the pairwise, triple, or n-way combinations of factors. Since this problem is NP-hard, existing approaches are designed to generate optimal or near-optimal combinatorial test suites in polynomial time. In this paper, we apply particle swarm optimization (PSO), a meta-heuristic search technique, to pairwise testing (a special case of combinatorial testing that aims to cover all pairwise combinations). To systematically build pairwise test suites, we propose two different PSO-based algorithms: one based on a one-test-at-a-time strategy and the other on an IPO-like strategy. In both algorithms, PSO is used to construct a single test. To apply PSO successfully so that each construction covers more uncovered pairwise combinations, we describe in detail how to formulate the search space, define the fitness function, and choose heuristic settings. To verify the effectiveness of our approach, we implemented these algorithms and chose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare it to other well-known approaches. The empirical results show the effectiveness and efficiency of our approach.
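The fitness function at the heart of such pairwise approaches is simply the number of still-uncovered pairwise combinations a candidate test would cover. A sketch of the underlying bookkeeping (the binary three-factor model is an invented example):

```python
from itertools import combinations, product

def pairwise_combinations(factors):
    """All (factor-index pair, value pair) combinations that a pairwise
    test suite must cover.  factors: list of lists of levels."""
    pairs = set()
    for (i, fi), (j, fj) in combinations(enumerate(factors), 2):
        for vi, vj in product(fi, fj):
            pairs.add(((i, j), (vi, vj)))
    return pairs

def covered_pairs(test, factors):
    """Pairwise combinations exercised by one concrete test
    (one level per factor)."""
    return {((i, j), (test[i], test[j]))
            for (i, j) in combinations(range(len(factors)), 2)}

def pairwise_coverage(suite, factors):
    """Fraction of required pairwise combinations covered by the suite."""
    required = pairwise_combinations(factors)
    hit = set().union(*(covered_pairs(t, factors) for t in suite)) & required
    return len(hit) / len(required)
```

A PSO (or greedy) generator would score each candidate test by `len(covered_pairs(test, factors) - already_covered)` and keep the best particle per construction step.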
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Large Language Model (LLM) and it’s Geospatial Applications
International Journal of Artificial Intelligence and Applications (IJAIA), Vol.8, No.4, July 2017
DOI: 10.5121/ijaia.2017.8404
EXTRACTING THE MINIMIZED TEST SUITE FOR REVISED SIMULINK/STATEFLOW MODEL
Han Gon Park1, Geon Gu Park1, Kihyun Chung1 and Kyunghee Choi2
1 Department of Electronic Engineering, Ajou University, Suwon, Republic of Korea
2 Department of Computer Engineering, Ajou University, Suwon, Republic of Korea
ABSTRACT
Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that as the model evolves, test suites tend to grow in size, making it too costly to execute entire test suites. This paper proposes a practical approach to reducing the size of test suites for a modified Simulink/Stateflow (SL/SF) model, a formalism widely used for modeling software behavior in many industries, such as automobile manufacturing. The model describing a system is frequently modified until it is fixed. The proposed technique extracts a minimum-sized test suite, in terms of test coverage, by taking into account both the modified and the affected portions of the revised SL/SF model. Two real models for ECUs deployed in a commercial car are used for an empirical study.
KEYWORDS
Test Suite Reduction, Simulink/Stateflow Model, Test Case Generation, Model Based Testing
1. INTRODUCTION
Software grows with time. The specification is modified for uses beyond the original ones, the number of functions increases, the lines of code grow longer, and many other complications come with the growth of software. Some old portions are modified, some obsolete portions are deleted, and some new portions are added. Due to such inevitable maintenance activities, more than 70% of the time spent on software maintenance goes into modifying and retesting the software. To reduce the cost of testing the revised software, efficient testing techniques are clearly needed. The activity of revalidating modified software is called regression testing. Reference [1] identifies two types of regression testing: progressive regression testing and corrective regression testing. Progressive regression testing involves a modified specification, while corrective regression testing involves specification correction rather than specification change. Most techniques focus on progressive testing, as do we in this paper.
Regression testing aims to ensure that no new faults are introduced into the previously validated software by the modification. Regression testing also needs to confirm that the newly added requirements are properly integrated into the revised software. This is highly expensive, time-consuming, and laborious work. One main factor that makes regression testing expensive is having to execute the whole set of test cases generated by the test generation method. A research focus of regression testing techniques is therefore to extract an effectively minimized subset of test cases from the whole test suite.
The test suite minimization problem is to select a subset of the entire test suite that can detect all the faults. For the minimization, test minimization techniques not only remove the test cases that have become redundant with respect to a particular criterion or system output, but also need to add test cases that validate the newly added features. Coverage techniques
[2] select test cases that cause a modified program to satisfy a criterion (depending on the coverage criterion) that is different from that of the original program, while safe techniques [3] select test cases that produce different output.
Since model-based regression testing of software has several advantages [4], our work presents a Simulink/Stateflow (SL/SF) model-based regression testing technique. We propose a test case minimization for modified programs whose test cases are generated from an SL/SF model [5]. SL/SF has been widely utilized in model-based development in many industries, such as automobile manufacturing. They use SL/SF for modeling system behaviors, describing specifications, producing auto-generated code [6], generating test cases [5], and testing systems with the model [7]. Extracting the minimized test suite from the test cases generated from an SL/SF model of the modified program is the focus of this study. The proposed technique belongs to the coverage-technique category.
Many previous coverage-based test case selection techniques focus on removing redundant test cases from a test suite. The reduced test suite should still contain the test cases for all the obsolete requirements and for the portions affected by the obsolete requirements. If the specification is changed with some additional requirements, the model has to be modified to include them, and the test suite generated from the revised model includes the requirements for the added parts. In addition to the new test cases, the modified software also needs to be tested by the test cases for the portions affected by the added requirements.
To find the test cases for the obsolete, added, and affected portions of an SL/SF model, we simulate the old model and the revised model with the test suites generated from both models. When simulating SL/SF, it is possible to know whether each transition evaluates to TRUE or FALSE, and which state is active. Most test items in SL/SF models consist of transition and state evaluation information. The two test suites generated from the original and modified models are applied to each model, and every test case that covers a specific test item (for example, a state or transition condition) of one model, but does not cover the corresponding test item of the other model, belongs to the minimized test suite. The minimized test suite consists of the test cases found during the simulation.
To confirm the effectiveness of the proposed technique, we conducted an experiment. As systems-under-test, we used models for the Intelligent Management System (IMS) and the Driver Door Module (DDM) of a commercial passenger car, provided by a major world-class automobile manufacturer. The results demonstrate that the proposed technique can find the minimized test suite for a revised SL/SF model that would otherwise be tested with the entire test suite.
In the remainder of the paper, Section 2 reviews the relevant literature, Section 3 addresses the proposed technique in detail, Section 4 describes the experimental results, and the Conclusion summarizes the research.
2. RELATED WORK
Many approaches relating to regression test suite minimization have been published. Researchers
categorized regression test minimization techniques in different ways. One possible
categorization is test case minimization, test suite prioritization, and test suite selection.
The survey in Ref. [8] covered various minimization, selection, and prioritization techniques, and discussed their open problems. Reference [9] used a combination of static slicing and delta debugging that automatically minimizes the sequence of failure-inducing method calls, and
showed in a case study that the strategy could minimize failing unit test cases by 96% on average. Reference [10] used an additional criterion to break ties in the minimization process when there was more than one test case of equal importance to a suite.
Reference [11] proposed a new metric to assess the rate of fault detection of prioritized test cases that incorporates varying test costs and fault severities; they also presented the results of a case study illustrating the application of the metric. Reference [12] presented results from an empirical study of the application of several greedy, meta-heuristic, and evolutionary search algorithms that belong to the prioritization-technique category.
Reference [13] presented an approach to regression testing that handles both the selection of the test cases from the existing test suite that should be rerun, and the identification of the portions of the code that must be covered by test cases. Both tasks were performed by traversing graphs for the program and its modified version. Rothermel et al. [14] suggested a regression test selection technique for use with object-oriented software; the technique constructs graph representations of the software, and uses these graphs to select test cases. Reference [15] presented a methodology and a tool to support test selection from regression test suites based on change analysis in object-oriented designs. Reference [16] proposed a technique to select only a fraction of the test cases from the entire test suite to revalidate an object-oriented software system. Some researchers have proposed model-based regression approaches. Reference [17] proposed a test case generation technique for UML designs using the symbolic execution method. Reference [18] presented a safe regression technique that used various UML diagrams. Chen et al. [19] proposed a regression test selection technique using a UML activity diagram. Reference [20] presented a method to select the Test Dependency Graph subset that contains a modified portion. Reference [21] shows that the size of specification-based test suites can be dramatically reduced, but that the fault detection of the reduced test suites is adversely affected.
Beydeda et al. [22] suggested a technique for class-level regression testing based on specification and implementation information; regression test cases were selected by comparing two different versions of a model described by a class specification implementation graph (CSIG). Reference [23] reduced test cases for the EFSM (Extended Finite State Machine) model based on dependence analysis, and identified the differences between the original and modified models based on the elementary modifications.
3. THE PROPOSED TECHNIQUE
This section presents a minimum test suite extraction technique for a revised model in SL/SF model-based test case generation. Let Mo be the model of a specification So, and Mm be the model for the specification Sm modified from So. In coverage-based test case generation, the test suites TCo and TCm are generated from Mo and Mm, respectively, and the test suites are partially or fully adequate with respect to a coverage criterion, crt. The test coverage is defined as N1/N2, where N1 is the number of test items the test suite can cover, and N2 is the number of test items in a model. A test item is a specific condition to be evaluated as TRUE or FALSE to satisfy crt. For example, when the test criterion crt is transition coverage, all transitions in an SL/SF model become the test items.
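As an illustration, the coverage definition above can be written down directly. The following Python sketch (function and variable names are ours, not part of the paper) computes N1/N2 for a suite whose covered items are known:

```python
def coverage(covered_items: set, model_items: set) -> float:
    """Test coverage N1/N2: N1 = test items the suite covers,
    N2 = test items in the model (here, the model's transitions)."""
    if not model_items:
        return 1.0  # an empty model is trivially covered
    n1 = len(covered_items & model_items)  # N1
    n2 = len(model_items)                  # N2
    return n1 / n2

# With transition coverage as crt, the transitions R1..R9 become the
# test items; a suite covering only R1..R3 is 1/3 adequate.
items = {f"R{i}" for i in range(1, 10)}
print(coverage({"R1", "R2", "R3"}, items))
```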
Sm consists of the modified portion (MP), the affected portion (AP), and the unchanged portion (UP), compared with So. MP is the portion changed directly by the modification; it may be introduced by modification, deletion, and/or addition of specification. AP is the portion that is not changed directly, but is affected by MP; it may or may not appear, depending on MP. UP is neither AP nor MP. Mm is modified from Mo to adopt the modified specification.
For the system with the modified specification, the minimum test suite TCmin includes the test cases to cover only MP and AP. Those for UP should not be included in TCmin, since the test items for UP have already been tested during the testing of Mo with TCo. With TCmin, the test coverage of Mm is preserved. Additionally, the items deleted from Mo and the test items they affect should be covered. In coverage-based test case generation, it is usually not hard to find the test cases for MP, but it is not easy to extract the test cases for AP. The proposed technique aims to find TCmin for the SL/SF model.
In SL/SF, when states and transitions are added, deleted, and/or modified, Mo and Mm behave differently. It is possible to extract the test items adequate to a certain criterion crt in the SL/SF model, where the test items are constructed from states and/or transitions. Let the test item set for Mi be ITMi. Figure 1 shows the basic idea of our technique, which follows. Mm and Mo are simulated with both TCo and TCm. When simulating the models, there are several cases we need to consider.
1) If a test item itmi appears in Mo but not in Mm, itmi is one of the test items that was deleted during the modelling of Mm to adopt the modification. The test case ti for itmi is needed to check whether Mm has in fact accurately deleted itmi from Mo. Thus, if ti is in TCo, ti is included in TCmin. If ti is not in TCo, there is no way to include ti in TCmin; this is the case where TCo is not fully adequate to crt, and ti was not generated by the test case generation method.
2) A test item itmi may appear in both Mo and Mm. If ti in TCo covers itmi in Mo, but does not cover the test item in Mm, itmi is an item either modified or affected by the modification. Thus ti is included in TCmin.
3) If a test item itmj appears in Mm, but not in Mo, itmj is one of the test items added during the modelling of Mm. The test case tj for itmj is needed to check whether Mm has accurately added itmj to Mo; if tj is in TCm, tj is included in TCmin.
4) itmj may appear in both Mo and Mm. In this case, if tj in TCm covers itmj in Mm, but does not cover the test item in Mo, itmj is an item that is either modified or affected by the addition. If tj is not already in TCmin, tj is included in TCmin.
find_minimum_test_suite (Mo, Mm, ITMo, ITMm, TCo, TCm)
    TCmin = empty
    for each itmi in ITMo
        if (itmi is not in ITMm)
            TCmin = TCmin ∪ {ti}
        else if ((ti covers itmi in Mo) and (ti does not cover itmi in Mm))
            TCmin = TCmin ∪ {ti}
    for each itmj in ITMm
        if (itmj is not in ITMo)
            TCmin = TCmin ∪ {tj}
        else if ((tj covers itmj in Mm) and (tj does not cover itmj in Mo))
            if (tj is not in TCmin)
                TCmin = TCmin ∪ {tj}
    return TCmin
Figure 1. The proposed technique in pseudo code
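The pseudo code of Figure 1 can be turned into a small executable sketch. We assume, as the paper does, that one test case ti is generated per test item itmi, and we stand in for SL/SF simulation with a `covers` callback that reports which items a test case evaluates to TRUE on a given model; all function and variable names here are ours, not the paper's:

```python
def find_minimum_test_suite(itm_o, itm_m, tc_o, tc_m, covers):
    """itm_o / itm_m: ordered test items of Mo / Mm.
    tc_o / tc_m: dicts mapping a test item to its generated test case.
    covers(model, t): set of items test case t covers when run on model."""
    tc_min = []
    for itm in itm_o:                     # cases 1) and 2): scan ITMo with TCo
        t = tc_o.get(itm)
        if t is None:                     # TCo not fully adequate to crt
            continue
        if itm not in itm_m:              # 1) itm was deleted in Mm
            tc_min.append(t)
        elif itm in covers("Mo", t) and itm not in covers("Mm", t):
            tc_min.append(t)              # 2) itm modified or affected
    for itm in itm_m:                     # cases 3) and 4): scan ITMm with TCm
        t = tc_m.get(itm)
        if t is None:
            continue
        if itm not in itm_o:              # 3) itm was added in Mm
            if t not in tc_min:
                tc_min.append(t)
        elif itm in covers("Mm", t) and itm not in covers("Mo", t):
            if t not in tc_min:           # 4) itm affected by the addition
                tc_min.append(t)
    return tc_min
```

Replaying the worked example of Figures 2 and 3 with a hand-coded `covers` relation (the outcomes of Tables 3 and 4) yields the set {t6,o, t7,o, t8,o, t9,o, t10,m, t11,m}, the same TCmin derived in the text.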
Figures 2 and 3 show the SL/SF models used to explain the proposed technique in detail. Figure 2 shows the original model, and Figure 3 its modified version. The models have two inputs, binary A and B. State “Active_3” and the two transitions R8 and R9 between “Active_3” and “Active_1” are
deleted from Mo; state “Active_4” is added, with two new transitions R10 and R11 between “Active_4” and “UnActive”; and R5 is modified.
Figure 2. The original model Mo.
Figure 3. The modified model Mm.
If the test cases are generated with respect to transition coverage, all transitions in the models become the test items. A test case ti is an input sequence that evaluates the transition Ri as TRUE. With ti, the model reaches the destination state of Ri from the initial state, OFF. We assume each test case starts from the initial state, as in many actual applications. The inputs are initially set to ‘0’, that is, (A, B) = (0,0). Tables 1 and 2 show possible TCo and TCm, respectively, that are fully adequate to transition coverage.
In real test case generation, the test cases are optimized after or during generation. By ‘optimization’ we mean that if a test case tp in a test suite is a subset of another test case tq in the same suite, tp is excluded from the suite. After optimization, the test cases for R1, R3, R5, R6,
and R8 need to be excluded, and four test cases should remain in the test suite. But in this example and the following empirical study, we do not optimize test suites, since the effect of the test minimization would not be clearly visible after optimization. Thus we assume that one test case is generated for each test item.
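The ‘optimization’ mentioned above can be sketched as follows: drop any test case that is a proper prefix, and hence a subset in the paper's sense, of another case in the same suite. This is a hypothetical helper of ours, not part of the paper's tooling:

```python
def optimize(suite: dict) -> set:
    """suite maps a test item to its input sequence (a list of (A, B) pairs).
    Returns the items whose test cases survive optimization, i.e. are not
    a proper prefix of another case in the suite."""
    def is_prefix(p, q):
        # p is a proper prefix of q
        return len(p) < len(q) and q[:len(p)] == p
    return {item for item, case in suite.items()
            if not any(is_prefix(case, other) for other in suite.values())}
```

Applied to the TCo of Table 1, this leaves exactly the four cases the text predicts: those for R2, R4, R7, and R9.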
Table 1. Test cases in TCo.

Test Item (Transition)   Test case
R1    {(0,0),(1,0)}
R2    {(0,0),(1,0),(0,0)}
R3    {(0,0),(1,0),(1,1)}
R4    {(0,0),(1,0),(1,1),(1,0)}
R5    {(0,0),(1,0),(1,1),(1,1)}
R6    {(0,0),(1,0),(1,1),(1,1),(1,2)}
R7    {(0,0),(1,0),(1,1),(1,1),(1,2),(1,0)}
R8    {(0,0),(1,0),(1,1),(1,3)}
R9    {(0,0),(1,0),(1,1),(1,3),(1,0)}
Table 2. Test cases in TCm.

Test Item (Transition)   Test case
R1    {(0,0),(1,0)}
R2    {(0,0),(1,0),(0,0)}
R3    {(0,0),(1,0),(1,1)}
R4    {(0,0),(1,0),(1,1),(1,0)}
R5    {(0,0),(1,0),(1,1),(1,1)}
R6    {(0,0),(1,0),(1,1),(1,1),(1,1),(1,2)}
R7    {(0,0),(1,0),(1,1),(1,1),(1,1),(1,2),(1,0)}
R10   {(0,0),(1,0),(1,2)}
R11   {(0,0),(1,0),(1,2),(1,0)}
First, the test items in ITMo are compared with those in ITMm. R8 and R9 in ITMo are not in ITMm. Thus t8,o and t9,o, the test cases for R8 and R9 in Mo, are inserted into TCmin (ti,j denotes the test case for test item Ri generated from model Mj). These two cases are needed to check the correctness of the deletion. R1 through R7 appear in both Mo and Mm. By simulating the models with TCo, R1 through R5 are found to be covered by t1,o through t5,o in both models. But R6 and R7 in Mm are not evaluated as TRUE with t6,o and t7,o, while those in Mo are covered. This means that R6 and R7 contain one or more modifications and/or portions affected by the deletion of R8 and R9; in this case, there is an effect. Thus t6,o and t7,o are put in TCmin. Table 3 shows the simulation result of Mm with TCo. The simulation result of Mo is not summarized, since all items are covered.
Now compare the test items in ITMm with those in ITMo. R10 and R11 in ITMm do not appear in ITMo. Thus t10,m and t11,m, which are needed to check the specification addition, are inserted into TCmin. The result of simulating the models with TCm reveals that all test items in both models are satisfied, so no extra test cases are inserted into TCmin. Finally, TCmin for the model in Figure 3 is determined as {t8,o, t9,o, t6,o, t7,o, t10,m, t11,m}. Table 4 shows the result of simulating Mo with TCm. The simulation result of Mm is not included.
Table 3. Result of simulating Mm with TCo.

ITMo   ITMm   Test case                               ti,j   Criterion satisfaction
R1     R1     {(0,0),(1,0)}                           t1,o   O
R2     R2     {(0,0),(1,0),(0,0)}                     t2,o   O
R3     R3     {(0,0),(1,0),(1,1)}                     t3,o   O
R4     R4     {(0,0),(1,0),(1,1),(1,0)}               t4,o   O
R5     R5     {(0,0),(1,0),(1,1),(1,1)}               t5,o   O
R6     R6     {(0,0),(1,0),(1,1),(1,1),(1,2)}         t6,o   X
R7     R7     {(0,0),(1,0),(1,1),(1,1),(1,2),(1,0)}   t7,o   X
R8     -      {(0,0),(1,0),(1,1),(1,3)}               t8,o   NA
R9     -      {(0,0),(1,0),(1,1),(1,3),(1,0)}         t9,o   NA
Table 4. Result of simulating Mo with TCm.

ITMo   ITMm   Test case                                     ti,j    Criterion satisfaction
R1     R1     {(0,0),(1,0)}                                 t1,m    O
R2     R2     {(0,0),(1,0),(0,0)}                           t2,m    O
R3     R3     {(0,0),(1,0),(1,1)}                           t3,m    O
R4     R4     {(0,0),(1,0),(1,1),(1,0)}                     t4,m    O
R5     R5     {(0,0),(1,0),(1,1),(1,1)}                     t5,m    O
R6     R6     {(0,0),(1,0),(1,1),(1,1),(1,1),(1,2)}         t6,m    O
R7     R7     {(0,0),(1,0),(1,1),(1,1),(1,1),(1,2),(1,0)}   t7,m    O
-      R10    {(0,0),(1,0),(1,2)}                           t10,m   NA
-      R11    {(0,0),(1,0),(1,2),(1,0)}                     t11,m   NA
4. EMPIRICAL STUDY
The proposed technique is verified using two ECU (Electronic Control Unit) SL/SF models for a commercial vehicle provided by a world-class automobile manufacturer. One ECU is the Intelligent Management System (IMS), which is used for managing the ECUs for doors, mirrors, and wipers. The other is the driver-side mirror ECU (DDM). The IMS model is simple, with 34 states and 51 transitions; the mirror model is relatively complicated, with 49 states and 135 transitions.
For the study, we use models modified according to three different modification types: transition modification, state/transition addition, and state/transition deletion. The modifications were not made arbitrarily, but were made while keeping to the precise specifications of the car. First, the test suites for the original IMS (TCOI), modified IMS (TCMI), original DDM (TCOD), and modified DDM (TCMD) models were generated with a commercial test case generator, TCG [24]. The generated test suites were only partially adequate to transition coverage, for various reasons, such as an incomplete model, poor performance of TCG, and unknown causes. We performed an empirical study with these test suites. Since test case generation is beyond the scope of this paper, we do not discuss the generation of test cases from the model. Using the original and modified models and their test suites, the proposed technique tries to find the minimized test suites. The test cases in TCmin are compared with the minimum test cases determined by inspection, with respect to the number of test cases for the modification itself (Direct in Table 5) and for the affected portion (Affected in Table 5).
Table 5 is a summary of the study. In the IMS model, one transition out of 51 is modified in the Modification type. Our technique finds that TCmin has just one Direct test case and no Affected test cases. This indicates that no portion is affected by the modification; we confirmed this by inspecting the model. One state and three related transitions are added in the Addition type. TCmin is found to include the three Direct test cases needed to check the modified transitions. As in the Modification type, no extra Affected test cases are needed. In the Deletion type experiment, one state and its three connected transitions are deleted from IMSo. As in the previous two experiments, the test suite includes only the three Direct test cases needed to check the deleted transitions. For IMS, TCmin consists of only Direct test cases in all three modification types. We confirmed that the test suites were minimal and sufficient to cover the model changes. Since the IMS model is small and consists of seven almost independent sub-models that do not affect each other, the modifications do not affect other parts, and TCmin contains only the test cases covering the modified transitions.
The DDM model is relatively complicated, and some parts are closely related to each other. In the Modification type experiment, two out of 135 transitions are modified. These modifications affect 36 transitions. The proposed technique finds two Direct and 36 Affected test cases for TCmin. In the Addition type, three transitions are added, and the added transitions affect three other transitions; in this case, the technique finds a minimized test suite that includes six test cases. In the Deletion type, three transitions are deleted along with one state; in this case, TCmin consists of three Direct and three Affected test cases. TCmin is minimal in all three modification types.
Table 5. Summary of the empirical study.

Model   Modification   No. of modified   No. of minimum test cases (No. of found test cases)
        Type           test items        Direct    Affected
IMS     Modification   1                 1 (1)     0
IMS     Addition       3                 3 (3)     0
IMS     Deletion       3                 3 (3)     0
DDM     Modification   2                 2 (2)     36 (36)
DDM     Addition       3                 3 (3)     3 (3)
DDM     Deletion       3                 3 (3)     3 (3)
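As a quick arithmetic check on the table, the size of TCmin in each experiment is simply Direct + Affected. The sketch below transcribes the Table 5 data; variable names are ours:

```python
# |TCmin| = Direct + Affected for each (model, modification type) in Table 5.
table5 = {
    ("IMS", "Modification"): (1, 0),
    ("IMS", "Addition"): (3, 0),
    ("IMS", "Deletion"): (3, 0),
    ("DDM", "Modification"): (2, 36),
    ("DDM", "Addition"): (3, 3),
    ("DDM", "Deletion"): (3, 3),
}
tcmin_size = {k: direct + affected for k, (direct, affected) in table5.items()}
print(tcmin_size[("DDM", "Modification")])  # 38: the largest minimized suite
```

These sizes (1, 3, 3, 38, 6, 6) are the minimized suite sizes that Table 6 reports against the full TCG-generated suites.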
The empirical study is limited in terms of the number of models and the complexity of the modifications. But we believe that it demonstrates the effectiveness of the proposed technique, even though it is not sufficient to fully verify its performance.
Sometimes test engineers are forced to test a modified system with test cases only for the modified portion because of limited test time, losing coverage. Without the reduced test suite, the modified model inevitably has to be tested with the full set of test cases generated by the test case generator in order to preserve the original coverage. Even then, it is not possible to test a deleted portion that does not appear in the modified model. Table 6 summarizes the statistics of the test cases generated by TCG and the minimized test cases found by the proposed technique. Even though the test case reduction surely varies with the model and the modification, the statistics demonstrate that the reduction in the number of test cases may be significant in some applications, while preserving the coverage of the full test suite.
Table 6. Test case statistics.

Model   Modification   No. of test cases   No. of minimized test cases
        Type           (TCG)               (the proposed technique)
IMS     Modification   49                  1
IMS     Addition       52                  3
IMS     Deletion       46                  3
DDM     Modification   102                 38
DDM     Addition       105                 6
DDM     Deletion       99                  6
5. CONCLUSIONS
We have described a minimized test case extraction technique for a modified SL/SF model. The proposed technique extracts a minimum-sized test suite by taking into account both the modified and the affected portions of the revised model. For the empirical study, we used two models for ECUs deployed in a commercial vehicle. The results show that the proposed technique achieved a significant reduction in the number of test cases: it extracts just the test cases needed to test the portion modified from the original model and the portions affected by the modification. The cost of testing was dramatically reduced for the models used in this study, even though the performance and cost reduction surely depend on the models and modifications. Although we cannot broadly generalize our results, and further studies are needed, the experiment indicates that the proposed technique may be an effective means of reducing testing effort.
ACKNOWLEDGEMENTS
This research was supported by the Defense Acquisition Program Administration (UD150042AD).
REFERENCES
[1] H. Leung and L. White, (1990) “A Study of Integration Testing and Software Regression at the Integration Level”, Proceedings of the Conference on Software Maintenance, DOI: 10.1109/ICSM.1990.131377, pp. 290-301.
[2] D. Nardo, N. Alshahwan and L. Briand, (2013) “Coverage-Based Test Case Prioritisation: An Industrial Case Study”, Proceedings of the IEEE Sixth International Conference on Software Testing.
[3] H. Agrawal, J.R. Horgan, E.W. Krauser and S. London, (1993) “Incremental Regression Testing”, Proceedings of the IEEE Conference on Software Maintenance, DOI: 10.1109/ICSM.1993.366927, pp. 348-357.
[4] D. Deng, P.C.Y. Sheu and T. Wang, (2004) “Model-based Testing and Maintenance”, Proceedings of the IEEE Sixth International Symposium on Multimedia Software Engineering, DOI: 10.1109/MMSE.2004.51, pp. 278-285.
[5] C.S. Pasareanu, J. Schumann, P. Mehlitz, M. Lowry, G. Karsai, H. Nine and S. Neema, (2009) “Model Based Analysis and Test Generation for Flight Software”, Proceedings of the IEEE Third International Conference on Space Mission Challenges for Information Technology, DOI: 10.1109/SMC-IT.2009.18, pp. 83-90.
[6] L. Köster, T. Thomsen and R. Stracke, (2001) “Connecting Simulink to OSEK: Automatic Code Generation for Real-Time Operating Systems with TargetLink”, SAE Technical Paper.
[7] Reactis, <http://www.reactive-systems.com/papers/bcsf.pdf>.
[8] S. Yoo and M. Harman, (2012) “Regression Testing Minimization, Selection and Prioritization: A Survey”, Software Testing, Verification & Reliability, Vol. 22, Issue 2, DOI: 10.1002/stvr.430, pp. 67-120.
[9] A. Leitner, M. Oriol and A. Zeller, (2007) “Efficient Test Case Minimization”, Proceedings of the 22nd IEEE/ACM International Conference on Automated Software Engineering, DOI: 10.1145/1321631.1321698, pp. 417-420.
[10] J. Lin, C. Huang and C. Lin, (2008) “Test suite reduction analysis with enhanced tie-breaking techniques”, Proceedings of the IEEE 4th International Conference on Management of Innovation and Technology, DOI: 10.1109/ICMIT.2008.4654545, pp. 1228-1233.
[11] S. Elbaum, A. Malishevsky and G. Rothermel, (2001) “Incorporating Varying Test Costs and Fault Severities into Test Case Prioritization”, Proceedings of the 23rd International Conference on Software Engineering, DOI: 10.1109/ICSE.2001.919106, pp. 329-338.
[12] Z. Li, M. Harman and R. Hierons, (2007) “Search Algorithms for Regression Test Case Prioritization”, IEEE Transactions on Software Engineering, Vol. 33, Issue 4, DOI: 10.1109/TSE.2007.38, pp. 225-237.
[13] G. Rothermel and M. Harrold, (1994) “Selecting tests and identifying test coverage requirements for modified software”, Proceedings of the 1994 ACM SIGSOFT International Symposium on Software Testing and Analysis, DOI: 10.1145/186258.187171, pp. 169-184.
[14] G. Rothermel, M. Harrold and J. Dedhia, (2000) “Regression test selection for C++ software”, Software Testing, Verification and Reliability, Volume 10, Issue 2, DOI: 10.1002/1099-1689(200006)10:2<77::AID-STVR197>3.0.CO;2-E, pp. 77-109.
[15] L.C. Briand, Y. Labiche and S. He, (2009) “Automating regression test selection based on UML designs”, Information and Software Technology, Volume 51, Issue 1, DOI: 10.1016/j.infsof.2008.09.010, pp. 16-30.
[16] P. Hsia, X. Li, D. Kung, C. Hsu, L. Li, Y. Toyoshima and C. Chen, (1997) “A technique for the selective revalidation of object-oriented software”, Journal of Software Maintenance: Research and Practice, Volume 9, Issue 4, DOI: 10.1002/(SICI)1096-908X(199707/08)9:4<217::AID-SMR152>3.0.CO;2-2, pp. 217-233.
[17] L. Briand, Y. Labiche and T. Yue, (2009) “Automated Traceability Analysis for UML Model Refinements”, Information and Software Technology, Volume 51, Issue 2, DOI: 10.1016/j.infsof.2008.06.002, pp. 512-527.
[18] L. Briand, Y. Labiche and G. Soccar, (2002) “Automating Impact Analysis and Regression Test Selection Based on UML Designs”, Proceedings of the International Conference on Software Maintenance, DOI: 10.1109/ICSM.2002.1167775, pp. 252-261.
[19] Y. Chen, R.L. Probert and D.P. Sims, (2002) “Specification-based Regression Test Selection with Risk Analysis”, Proceedings of the 2002 Conference of the Centre for Advanced Studies on Collaborative Research, DOI: 10.1145/782115.782116, p. 1.
[20] Y. Traon, T. Jeron, J.M. Jezequel and P. Morel, (2000) “Efficient object-oriented integration and regression testing”, IEEE Transactions on Reliability, Vol. 49, Issue 1, DOI: 10.1109/24.855533, pp. 12-25.
[21] M. Heimdahl and D. George, (2004) “Test-Suite Reduction for Model Based Tests: Effects on Test Quality and Implications for Testing”, Proceedings of the 19th IEEE International Conference on Automated Software Engineering, DOI: 10.1109/ASE.2004.67, pp. 176-185.
[22] S. Beydeda and V. Gruhn, (2001) “Integrating White- and Black-Box Techniques for Class-Level Regression Testing”, Proceedings of the 25th Annual International Computer Software and Applications Conference (COMPSAC 2001), DOI: 10.1109/CMPSAC.2001.960639, pp. 357-362.
[23] B. Korel, L.H. Tahat and B. Vaysburg, (2002) “Model Based Regression Test Reduction Using Dependence Analysis”, Proceedings of the International Conference on Software Maintenance, DOI: 10.1109/ICSM.2002.1167768, pp. 214-223.
[24] Btstech, <http://www.btstech.co.kr/page_vaHk97>.
AUTHORS
Han Gon Park holds a Master's degree (M.S.) in Electronic Engineering from Ajou University, Republic of Korea. His research interests include embedded system testing and model-based testing. He is currently working at LG Chem.
Geon Gu Park is a Master's student in Electronic Engineering at Ajou University, Republic of Korea. His research interests include embedded system testing and model-based testing.
Kihyun Chung holds a Doctoral degree (Ph.D.) in Electronic Engineering from Purdue University, USA. His research interests include computer architecture, VLSI, and real-time/multimedia systems. At present he is working as a Professor of Electronic Engineering at Ajou University, Republic of Korea.
Kyunghee Choi holds a Doctoral degree (Ph.D.) in Computer Engineering from Paul Sabatier University, France. His research interests include operating systems and real-time/multimedia systems. At present he is working as a Professor of Computer Engineering at Ajou University, Republic of Korea.