The document describes DynaMut, a tool developed to automate mutation testing for embedded system applications written in C++. DynaMut inserts conditional mutations into the code during compilation rather than requiring multiple recompilations. This reduces the time needed for mutation testing by 48-67% compared to traditional methods. The document also evaluates different sampling techniques for reducing the number of mutations tested while maintaining representative results, finding that dithered sampling is most effective.
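The summary above says DynaMut compiles all mutants once and selects them conditionally rather than rebuilding per mutant. As a minimal sketch of how such conditional mutation (often called mutant schemata) can look in C++, consider the following; the MUTANT_ID variable, mutantEnabled helper, and clamp_add function are hypothetical illustrations, not DynaMut's actual API.

```cpp
#include <cstdlib>

// Active mutant id, read once from the environment (0 = original program).
static const int kActiveMutant =
    std::getenv("MUTANT_ID") ? std::atoi(std::getenv("MUTANT_ID")) : 0;

// Each mutation site is guarded by a runtime check instead of a separate build.
inline bool mutantEnabled(int id) { return kActiveMutant == id; }

int clamp_add(int a, int b, int limit) {
    // Mutation site 1: relational operator replacement ('>' vs '>=').
    if (mutantEnabled(1) ? (a + b >= limit) : (a + b > limit))
        return limit;
    // Mutation site 2: arithmetic operator replacement ('+' vs '-').
    return mutantEnabled(2) ? (a - b) : (a + b);
}
```

Running the same binary repeatedly with different MUTANT_ID values then exercises one mutant per run, which is what removes the per-mutant recompilation cost.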
A mutation testing analysis and regression testing (ijfcstjournal)
Software testing is an activity conducted to provide stakeholders with information about the quality of the product under test. It can also provide an objective, independent view of the software that allows the business to appreciate and understand the risks of software implementation. In this paper we focus on two main software testing techniques: mutation testing and regression testing. Mutation testing is a structural testing method, i.e. the structure of the code is used to guide the tests. A mutation is a small change to a program; such changes model the low-level defects that arise in the process of coding systems. Mutation testing is a process in which the code is modified and the mutated code is then run against the test suites. The mutations applied to the source code are designed to mimic common programming errors, and a good unit test typically detects a mutation by failing on the mutated program. Mutation testing is used on many different platforms, including Java, C++, C# and Ruby. Regression testing is a type of software testing that seeks to uncover new software bugs, or regressions, in existing functional and non-functional areas of a system after changes such as enhancements, patches or configuration changes have been made to it. When a defect found during testing is fixed, that part of the software works as needed, but the fix may introduce or uncover a different defect elsewhere in the software. Regression testing is the way to detect these unexpected bugs and fix them. Its main focus is to verify that changes to the software have not had adverse side effects and that the software still meets its requirements. Regression tests are run whenever the software is changed, because of the modified functions.
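To make the mutation idea concrete, here is a minimal illustration (not taken from the paper above): a single arithmetic-operator mutant and a unit check that kills it. The function name and values are hypothetical.

```cpp
#include <cassert>

// Function under test (hypothetical). The commented-out line is the mutant.
int total_price(int unit, int qty) {
    return unit * qty;      // original statement
    // return unit + qty;   // mutant: '*' replaced by '+'
}

int main() {
    // Passes on the original (2 * 3 == 6) but fails on the mutant (2 + 3 == 5),
    // so this test "kills" the mutant.
    assert(total_price(2, 3) == 6);
    return 0;
}
```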
LusRegTes: A Regression Testing Tool for Lustre Programs (IJECEIAES)
Lustre is a synchronous data-flow declarative language widely used for safety-critical applications (avionics, energy, transport...). In such applications, testing to detect system errors plays a crucial role. During development and maintenance, Lustre programs often evolve, so regression testing should be performed to detect bugs. In this paper, we present a tool for automatic regression testing of Lustre programs. We define an approach to generating test cases for regression testing of Lustre programs: a Lustre program is represented by an operator network, the set of paths is identified, and the path activation conditions are symbolically computed for each version. Regression test cases are then generated by comparing paths between versions. The approach is implemented in a tool called LusRegTes, which automates the test process for Lustre programs.
TEST CASE PRIORITIZATION FOR OPTIMIZING A REGRESSION TEST (ijfcstjournal)
Regression testing makes sure that upgrading software, whether to add new features or to fix bugs, does not break previously working functionality. Whenever software is upgraded or modified, a set of test cases is run on each of its functions to ensure that the change does not affect other parts of the software that previously ran flawlessly. To achieve this, all existing test cases need to run, and new test cases may need to be created. It is not feasible to re-execute every test case for all the functions of a given piece of software: if there is a large number of test cases to run, a lot of time and effort is required. This problem can be addressed by prioritizing test cases. Test case prioritization reorders the sequence in which test cases are executed, in an attempt to ensure that the high-priority test cases executed first uncover the maximum number of faults early on. In this paper we propose an optimized test case prioritization technique using Ant Colony Optimization (ACO) to reduce the cost, effort and time taken to perform regression testing while uncovering the maximum number of faults. A comparison of different techniques, namely Retest All, Test Case Minimization, Test Case Prioritization, Random Test Case Selection and Test Case Prioritization using ACO, is also presented.
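As a hedged illustration of what prioritization means in practice, the sketch below implements a simple greedy "additional coverage" ordering, a common baseline in this literature; it is not the ACO technique proposed in the paper, and the TestCase structure is a hypothetical simplification.

```cpp
#include <algorithm>
#include <set>
#include <string>
#include <vector>

struct TestCase {
    std::string name;
    std::set<int> covered;  // ids of functions/branches the test exercises
};

// Repeatedly pick the test that adds the most not-yet-covered elements,
// so coverage (and hopefully fault exposure) accumulates as early as possible.
std::vector<TestCase> prioritize(std::vector<TestCase> tests) {
    std::vector<TestCase> ordered;
    std::set<int> seen;
    while (!tests.empty()) {
        auto best = std::max_element(
            tests.begin(), tests.end(),
            [&](const TestCase& a, const TestCase& b) {
                auto gain = [&](const TestCase& t) {
                    int g = 0;
                    for (int id : t.covered) g += !seen.count(id);
                    return g;
                };
                return gain(a) < gain(b);
            });
        seen.insert(best->covered.begin(), best->covered.end());
        ordered.push_back(*best);
        tests.erase(best);
    }
    return ordered;
}
```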
Testing embedded system through optimal mining technique (OMT) based on multi... (IJECEIAES)
Testing embedded systems must be done carefully, particularly in the critical regions of the system. Inputs to an embedded system can arrive in many different orders, and many relationships can exist among the input sequences. Considering these sequences and the relationships among them is essential for verifying the expected behavior of the embedded system. Combinatorial approaches, in turn, help determine a smaller set of test cases that is still sufficient to test the embedded system exhaustively. In this paper, an Optimal Mining Technique (OMT) that considers a multi-input domain and is based on built-in combinatorial approaches is presented. The method exploits multi-input sequences and the relationships that exist among multi-input vectors. The technique has been used for testing an embedded system that monitors and controls the temperature within nuclear reactors.
Software testing is an important activity of the software development process and its most effort-consuming phase. One would like to minimize the effort and maximize the number of faults detected, and automated test case generation contributes to reducing cost and time. Hence test case generation may be treated as an optimization problem. In this paper we use a genetic algorithm to optimize the test cases generated by applying conditional coverage to the source code. The test data generated automatically and optimized by the genetic algorithm outperform test cases generated by random testing.
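A minimal sketch of the optimization loop such an approach relies on is shown below, assuming a toy function under test and integer test inputs; the fitness function, population size, and mutation step are illustrative placeholders, not the paper's actual setup.

```cpp
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Toy function under test: fitness of an input = number of branches it reaches.
int branchesReached(int x) {
    int hits = 0;
    if (x > 100) ++hits;                 // branch 1
    if (x % 7 == 0) ++hits;              // branch 2
    if (x > 100 && x % 7 == 0) ++hits;   // branch 3: rarely hit at random
    return hits;
}

int main() {
    std::vector<int> pop(20);
    for (int& g : pop) g = std::rand() % 1000;   // random initial test inputs

    for (int gen = 0; gen < 50; ++gen) {
        // Selection: sort by coverage fitness, best first.
        std::sort(pop.begin(), pop.end(), [](int a, int b) {
            return branchesReached(a) > branchesReached(b);
        });
        // Reproduction: replace the worse half with mutated copies of survivors.
        for (std::size_t i = pop.size() / 2; i < pop.size(); ++i)
            pop[i] = pop[i - pop.size() / 2] + (std::rand() % 21 - 10);
    }
    std::printf("best input %d reaches %d branches\n",
                pop[0], branchesReached(pop[0]));
    return 0;
}
```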
EXPERIMENTAL EVALUATION AND RESULT DISCUSSION OF METAMORPHIC TESTING AUTOMATI... (IAEME Publication)
Metamorphic testing is a testing technique based on relations between attributes, used to mitigate the test oracle problem when testing complex non-testable programs. MTAF, the Metamorphic Testing Automation Framework, was introduced to eliminate human intervention in creating test cases, mapping the relations, executing the statements, and identifying the errors in input programs. MTAF is specifically designed to address the test oracle problem in two of the most popular non-testable program domains: Multi Precision Arithmetic (MPA) and Graph Theory (GT) applications. In this paper, the researcher explains the results of the conducted experiments and the bug information identified with MTAF. Several hidden bugs related to Multi Precision Arithmetic and Graph Theory are discussed to show the performance of MTAF.
Real-time implementations of software systems need to be versatile. In the maintenance phase, the modified system under regression testing must assure that the existing system remains defect free. Test case prioritization in regression testing includes both code-based and model-based methods. System-model-based test case prioritization can detect severe faults earlier than code-based prioritization, yet cost-effective model-based prioritization techniques driven by requirements have not been studied so far. Model-based testing tests the functionality of the software system based on its requirements. An effective model-based approach is defined for prioritizing test cases and generating an effective test sequence. The test cases are rescheduled based on requirement analysis and user-view analysis. Using a weighted approach, the overall cost of testing the functionality of the model elements is estimated. A genetic approach is applied to generate efficient test paths. Under this model-based prioritization approach, the regression cost in terms of effort is reduced.
Software testing aims to cut errors, reduce maintenance, and lower the cost of software development. Many software development and testing methods have been used over the years to improve software quality and reliability. A major problem in the field of software testing is finding the best test cases with which to test the software, and many kinds of testing methods are used to construct good test cases. Testing is an important part of the software development cycle. The testing process is not limited to detecting errors in software; it also builds confidence in proper functioning and helps identify functional and non-functional characteristics. Testing activities support the overall progress of the software.
Benchmark methods to analyze embedded processors and systems (XMOS)
xCORE multicore microcontrollers are 100x more responsive than traditional microcontrollers. The unparalleled responsiveness of the xCORE I/O ports is rooted in some fundamental features:
- Single cycle instruction execution
- No interrupts
- No cache
- Multiple cores allow concurrent independent task execution
- Hardware scheduler performs 'RTOS-like' functions
One of the obstacles that hinder the adoption of mutation testing is its impracticality; two main contributors are the large number of mutants and the large number of test cases involved in the process. Researchers usually address this problem by optimizing the mutants and the test cases separately. In this research, we tackle mutant optimization and test case optimization simultaneously using a coevolution optimization method. Coevolution is chosen for the mutation testing problem because it works by optimizing multiple populations of solutions. This research found that coevolution is better suited to multi-problem optimization than single-population methods (e.g. a genetic algorithm), and we also propose a new indicator to determine the optimal coevolution cycle. Experiments were performed on an artificial case, a laboratory case, and a real case.
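As a rough, assumption-laden sketch of the coevolution idea (two populations scored against each other, unlike a single-population GA), consider the following; the kill matrix, fitness definitions, and selection step are hypothetical simplifications of what such a method needs.

```cpp
#include <algorithm>
#include <vector>

// kills[m][t] == true if test t kills mutant m (a precomputed placeholder).
using KillMatrix = std::vector<std::vector<bool>>;

// One coevolution cycle: each population is scored against the other, then
// only the stronger half of each survives (crossover/mutation omitted).
void coevolutionCycle(std::vector<int>& mutants, std::vector<int>& tests,
                      const KillMatrix& kills) {
    auto mutantFitness = [&](int m) {   // a mutant is fit if it survives tests
        int survived = 0;
        for (int t : tests) survived += !kills[m][t];
        return survived;
    };
    auto testFitness = [&](int t) {     // a test is fit if it kills mutants
        int killed = 0;
        for (int m : mutants) killed += kills[m][t];
        return killed;
    };
    std::sort(mutants.begin(), mutants.end(),
              [&](int a, int b) { return mutantFitness(a) > mutantFitness(b); });
    std::sort(tests.begin(), tests.end(),
              [&](int a, int b) { return testFitness(a) > testFitness(b); });
    if (mutants.size() > 1) mutants.resize(mutants.size() / 2);
    if (tests.size() > 1) tests.resize(tests.size() / 2);
}
```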
Automated Test Equipment (ATE) refers to integrated systems that automate the process of testing modules, systems, devices, or products. Test equipment is generally used to monitor and control the operation of a process or device, verify compliance with standards, and detect and mitigate risks.
EXTRACTING THE MINIMIZED TEST SUITE FOR REVISED SIMULINK/STATEFLOW MODEL (ijaia)
Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that as the model evolves, test suites tend to grow in size, making it too costly to execute them in their entirety. This paper proposes a practical approach to reducing the size of test suites for a modified Simulink/Stateflow (SL/SF) model, a formalism widely used for modeling software behavior in industries such as automobile manufacturing. The model describing a system is frequently modified until it is fixed. The proposed technique extracts a minimized test suite, in terms of test coverage, by taking into account both the modified and the affected portions of the revised SL/SF model. Two real models for ECUs deployed in a commercial car are used in an empirical study.
Implementation of reducing features to improve code change based bug predicti... (eSAT Journals)
Today we encounter plenty of bugs in software because of variations in software and hardware technologies. Bugs are software faults, and they pose a severe challenge to system reliability and dependability. Bug prediction is a convenient approach to identifying bugs in software, and machine-learning classifier approaches have recently been developed to predict the presence of a bug in a source code file. Because of the huge number of machine-learned features, current classifier-based bug prediction has two major problems: i) inadequate precision for practical usage and ii) slow prediction time. In this paper we use two techniques: first, the cos-triage algorithm, which attempts to enhance the accuracy and lower the cost of bug prediction, and second, feature selection methods, which eliminate less significant features. Reducing features improves the quality of the knowledge extracted and also boosts the speed of computation. Keywords: Efficiency, Bug Prediction, Classification, Feature Selection, Accuracy
Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. To date, most research work in combinatorial testing aims to propose novel approaches that generate test suites of minimum size while still covering all pairwise, triple, or n-way combinations of factors. Since this problem is demonstrably NP-hard, existing approaches have been designed to generate optimal or near-optimal combinatorial test suites in polynomial time. In this paper, we apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (i.e. the special case of combinatorial testing that aims to cover all pairwise combinations). To systematically build pairwise test suites, we propose two different PSO-based algorithms: one based on a one-test-at-a-time strategy and the other on an IPO-like strategy. In both algorithms, we use PSO to complete the construction of a single test. To successfully apply PSO to cover more uncovered pairwise combinations in this construction process, we describe in detail how to formulate the search space, define the fitness function, and choose heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare it to other well-known approaches. The final empirical results show the effectiveness and efficiency of our approach.
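To illustrate the one-test-at-a-time objective, here is a hedged sketch of a fitness function that scores a candidate test by the number of still-uncovered pairwise combinations it would cover; the PSO machinery (velocity and position updates) is omitted, and the data types are assumptions for the sketch.

```cpp
#include <cstddef>
#include <set>
#include <tuple>
#include <vector>

// A test assigns one value (index) to each factor.
using Test = std::vector<int>;
// (factor_i, value_i, factor_j, value_j) pairs already covered by the suite.
using PairSet = std::set<std::tuple<int, int, int, int>>;

// Fitness of a candidate test = number of new pairwise combinations it covers.
int fitness(const Test& t, const PairSet& covered) {
    int gain = 0;
    for (std::size_t i = 0; i < t.size(); ++i)
        for (std::size_t j = i + 1; j < t.size(); ++j)
            if (!covered.count({(int)i, t[i], (int)j, t[j]}))
                ++gain;  // this factor-value pair is not yet covered
    return gain;
}
```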
Unit Testing to Support Reusable for Component-Based Software Engineering (ijtsrd)
Unit testing is a practical approach to improving the quality and reliability of software. It is usually performed by programmers and is the base for all other tests, such as integration testing and system testing. Unit testing can be done manually or automatically. Automated unit tests are written by the developers after the functionality coding is complete, and the number of defects is reduced when automated unit tests are written iteratively, as in test-driven development. This framework proved that significant portions of a Windows application can be tested automatically without manual intervention, which reduces the manpower involved in testing each unit of the application and increases the quality of the software product. Khin Moe Sam, "Unit Testing to Support Reusable for Component-Based Software Engineering", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-2, February 2019, URL: https://www.ijtsrd.com/papers/ijtsrd21458.pdf
Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/21458/unit-testing-to-support-reusable-for-component-based-software-engineering/khin-moe-sam
DESIGN OF AN EMBEDDED SYSTEM: BEDSIDE PATIENT MONITOR (ijesajournal)
Embedded systems, ranging from tiny microcontroller-based sensor devices to mobile smartphones, have a vast variety of applications. However, the literature contains no up-to-date system-level design of embedded hardware and software; academic publications mainly focus on improving specific features of embedded software/hardware or on embedded system designs for specific applications. Moreover, commercially available embedded systems are not disclosed to researchers in the literature. Therefore, in this paper we first present how to design a state-of-the-art embedded system using emerging hardware and software technologies. Bedside patient monitor devices used in the intensive care units of hospitals are also classified as embedded systems and run sophisticated software and algorithms for better diagnosis of diseases. We reveal the architecture of our commercially available bedside patient monitor to provide a design example of embedded systems built on these emerging technologies.
PIP-MPU: FORMAL VERIFICATION OF AN MPU-BASED SEPARATION KERNEL FOR CONSTRAINED... (ijesajournal)
Pip-MPU is a minimalist separation kernel for constrained devices (scarce memory and power resources). In this work, we demonstrate high assurance of Pip-MPU's isolation property through formal verification. Pip-MPU offers user-defined, on-demand, multiple isolation levels guarded by the Memory Protection Unit (MPU). Pip-MPU derives from the Pip protokernel, with a full code refactoring to adapt to the constrained environment, and targets equivalent security properties. The proofs verify that the memory blocks loaded in the MPU adhere to the global partition tree model. We provide the basis of the MPU formalisation and demonstrate the formal verification strategy on two representative kernel services. The publicly released proofs have been implemented and checked using the Coq Proof Assistant for three kernel services, representing around 10,000 lines of proof. To our knowledge, this is the first formal verification of an MPU-based separation kernel. The verification process helped discover a critical isolation-related bug.
PERFORMING AN EXPERIMENTAL PLATFORM TO OPTIMIZE DATA MULTIPLEXING (ijesajournal)
This article is based on preliminary work on the OSI model management layers to optimize industrial wired data transfer over low-data-rate wireless technology. Our previous contribution dealt with the development of a demonstrator carrying CAN bus frames (1 Mbps) over a low-rate wireless channel provided by Zigbee technology. In order to be compatible with all the other industrial protocols, we describe in this paper our contribution to the design of an innovative Wireless Device (WD) and a software tool that aims to determine the best architecture (hardware/software) and wireless technology to use, taking into account the wired protocol requirements. To validate the proper functioning of this WD, we develop an experimental platform to test the different strategies provided by our software tool. We can consequently establish which configuration (hardware/software) is best by supplying as inputs the required parameters of the wired protocol (load, bit rate, acknowledge timeout) and analyzing the proposed WD architecture characteristics as outputs (delay introduced by the system, buffer size needed, CPU speed, power consumption) against the input requirements. It will be important to know whether the gain comes from a hardware strategy, e.g. with a hardware accelerator, or a software strategy with a more perf
GENERIC SOPC PLATFORM FOR VIDEO INTERACTIVE SYSTEM WITH MPMC CONTROLLER (ijesajournal)
Today, a significant number of embedded systems focus on multimedia applications, with almost insatiable demand for low-cost, high-performance, low-power hardware. In this paper, we present a reconfigurable and generic hardware platform for image and video processing. The proposed platform uses the benefits offered by the Field Programmable Gate Array (FPGA) to attain this goal. In this context, a prototype system is developed based on the Xilinx Virtex-5 FPGA with the integration of embedded processors, embedded memory, DDR, interface technologies, Digital Clock Managers (DCM), and an MPMC. The MPMC is an essential component for design performance tuning and real-time video processing, and we demonstrate the important role of this interface in multi-video applications; successful deployment of DRAM requires a flexible and scalable interface. Our system introduces diverse modules, such as video cut detection and video zoom in/out, so the architecture can serve as a universal video processing platform for different application requirements. This platform facilitates the development of video and image processing applications.
This paper presents an inverting buck-boost DC-DC converter design. A negative supply voltage is needed in a variety of applications, but only a few such DC-DC converters are available on the market; one example application is OLED, a new display type especially suited to small digital camera or mobile phone displays. Design challenges that arise when negative voltages have to be handled on chip are discussed, such as continuous/discontinuous mode transition problems, negative voltage feedback, and negative over-voltage protection. Both devices operate in a fixed-frequency PWM mode or, alternatively, in PFM mode. The single-inductor topology is called an inverting buck-boost converter, or simply an inverter. The proposed converter has been implemented in a TSMC 0.13-um 2P4M CMOS process, and the chip area is 325 x 300 um2.
A Case Study: Task Scheduling Methodologies for High Speed Computing Systems (ijesajournal)
High-speed computing meets ever-increasing real-time computational demands by leveraging flexibility and parallelism. Flexibility is achieved when the computing platform is designed with heterogeneous resources to support the multifarious tasks of an application, whereas task scheduling brings parallel processing. Efficient task scheduling is critical to obtaining optimized performance in heterogeneous computing systems (HCS). In this paper, we review various application scheduling models that provide parallelism for homogeneous and heterogeneous computing systems, survey scheduling methodologies targeted at high-speed computing systems, and prepare a summary chart. The comparative study of scheduling methodologies for high-speed computing systems is carried out based on the attributes of both platform and application: execution time, nature of the task, task handling capability, and type of host and computing platform. The summary chart demonstrates the need to develop scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS), an emerging high-speed computing platform for real-time applications.
A NOVEL METHODOLOGY FOR TASK DISTRIBUTION IN HETEROGENEOUS RECONFIGURABLE COM... (ijesajournal)
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems (HRCS), in which reconfigurable hardware, i.e. Field Programmable Gate Arrays (FPGAs), and soft-core processors act as computing elements. An efficient task distribution methodology is therefore essential for obtaining high performance in modern embedded systems. In this paper, we present a novel task distribution methodology called the Minimum Laxity First (MLF) algorithm, which takes advantage of the runtime reconfiguration of FPGAs to effectively utilize the available resources. The MLF algorithm is a list-based dynamic scheduling algorithm that uses attributes of both tasks and computing resources as its cost function to distribute the tasks of an application across the HRCS. In this paper, an on-chip HRCS computing platform is configured on a Virtex-5 FPGA using Xilinx EDK. The real-time applications JPEG and OFDM transmitter are represented as task graphs, and the tasks are distributed, both statically and dynamically, to the HRCS platform in order to evaluate the performance of the designed task distribution model. Finally, the performance of the MLF algorithm is compared with existing static scheduling algorithms. The comparison shows that MLF achieves more efficient utilization of on-chip resources and also speeds up application execution.
Payment industry is largely aligned in their desire to create embedded payment systems ready for the
modern digital age. The trend to embed payments into a software platform is often regarded as first step
towards a broader trend of embedded finance based on digital representation of fiat currencies. Since it
became clear to our research team that there are no technologies and protocols that are protected against
attacks of quantum computing, and that enable automatic embedded payments, online or offline with no
fear of counterfeit, P2P or device-to-device to be made in real time without intermediaries, in any
denomination, even continuous payments per time or service, while preserving the privacy of all parties,
without enabling illicit activities, we decided to utilize the Generic Innovation Engine [1] that is based on
the Artificial Intelligence Assistance Innovation acceleration methodologies and tools in order to boost the
progress of innovation of the necessary solutions. These methodologies accelerate innovation across the
board. It proposes a framework for natural and artificial intelligence collaboration in pursuit of an
innovative (R&D) objective The outcome of deploying these Artificial Innovation Assistant (AIA)
methodologies was tens of patents that yield solutions, that a few of them are described in this paper. We
argue that a promising avenue for automated embedded payment systems to fulfil people’s desire for
privacy when conducting payments, and national security agencies demand for quantum-safe security,
could be based on DeFi and digital currencies platforms that does not suffer from flaws of DLT-based
solutions, while introducing real advantages, in all aspects, including being quantum-resilient, enabling
users to decide with whom, if at all, to share information, identity, transactions details, etc., all without
trade-offs, complying with AML measures, and accommodating the potential for high transaction volumes.
It is not legacy bank accounts, and it is not peer-dependent, nor a self-organizing network.
International Journal of Embedded Systems and Applications (IJESA), Vol. 8, No. 1/2/3, September 2018
DOI: 10.5121/ijesa.2018.8302
DYNAMUT: A MUTATION TESTING TOOL FOR
INDUSTRY-LEVEL EMBEDDED SYSTEM
APPLICATIONS
Darin Weffenstette and Kristen R. Walcott
Department of Computer Science, University of Colorado, Colorado Springs, USA
ABSTRACT
Test suite evaluation is important when developing quality software. Mutation testing, in particular, can be
helpful in determining the ability of a test suite to find defects in code. Because of challenges incurred
developing on complex embedded systems, test suite evaluation on these systems is very difficult and costly.
We developed and implemented a tool called DynaMut to insert conditional mutations into the software
under test for embedded applications. We then demonstrate how the tool can be used to automate the
collection of data using an existing proprietary embedded test suite in a runtime testing environment.
Conditional mutation is used to reduce the time and effort needed to perform test quality evaluation, completing in 48% to 67% less time than a more traditional mutate-compile-test methodology. We also analyze whether testing time can be further reduced while maintaining quality by sampling
the mutations tested.
KEYWORDS
Test Development, Embedded Test Suites, Test Case Sampling, Mutation Testing
1. INTRODUCTION
When engineering a software solution, testing is essential. To ensure a test suite is effective at
finding defects, it is important to evaluate the test suite with regard to quality. While code
coverage metrics, such as statement or branch coverage, are useful in determining how to improve
a test suite, mutation testing has been shown to be a better indicator of the ability of a test suite to
find faults in code [17, 21]. Many tools have been created to automate test suite evaluation for
unit tests (e.g. [1, 2, 3, 4]). Unfortunately, on embedded systems in industry, functional testing of
the whole system is much more common than unit testing [11]. This, combined with uncommon build and runtime environments, the time overhead inherent to the embedded platform, and a lack of applicable tools, makes automated test suite evaluation challenging on embedded systems.
Mutation testing is a fault-based technique that measures the fault-finding effectiveness of test
suites on the basis of induced faults [13, 15]. Mutation testing evaluates the quality of test suites
by seeding faults into the program under test. Each altered version containing a seeded fault is
called a mutant. Mutants of the original program are obtained by applying mutation operators. For
example, a conditional statement such as if (a < b) results in multiple mutants by replacing the
relational operator < with valid alternatives such as <= or !=. A test suite kills a mutant if a test
within the test suite fails. After running the test suite on each mutant, a mutation score can be
calculated; the mutation score is the ratio of killed mutants to generated mutants. Prior studies
have used mutation adequacy to gauge the effectiveness of testing strategies [7, 8, 14, 20].
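To make this concrete, the following minimal sketch (our illustration, not output from any tool discussed here) builds three mutants of a single relational expression by hand and computes the score that a one-assertion test earns against them:

    #include <iostream>

    // Original predicate under test.
    bool original(int a, int b) { return a < b; }

    // Three hand-written mutants obtained by replacing '<' with valid
    // alternatives (relational operator replacement).
    bool mutant1(int a, int b) { return a <= b; }
    bool mutant2(int a, int b) { return a != b; }
    bool mutant3(int a, int b) { return a > b; }

    // A toy test asserting one expected behavior: original(2, 2) is false.
    // A mutant that disagrees on this input is killed.
    bool testKills(bool (*variant)(int, int)) {
        return variant(2, 2) != original(2, 2);
    }

    int main() {
        bool (*mutants[])(int, int) = {mutant1, mutant2, mutant3};
        int killed = 0;
        for (auto m : mutants)
            if (testKills(m)) ++killed;
        // Mutation score = killed mutants / generated mutants.
        std::cout << "score = " << killed * 100.0 / 3 << "%\n";  // 33.3%
        return 0;
    }

Only the <= mutant behaves differently on the input (2, 2), so this weak single-input test kills one of three mutants; adding inputs that separate != and > from < would raise the score.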
Many tools have been developed to help support mutation testing. Some of these tools (e.g. Jester
[1], MuJava [26]) focus on source code mutation. Yet, modifying source code can lead to many
incompilable mutants and introduces a large re-compilation cost toward the creation of all
mutants. Other tools focus on bytecode mutation (e.g. Javalanche [31], Jumble [6], and PITest
[4]). Bytecode mutation is favorable because changes can be made on-the-fly without
recompilation. Bytecode is also simpler to mutate than source code. However, mutating bytecode generates mutants that
could have never been introduced into the source code due to the use of syntactic sugar, and
generated mutants cannot be mapped back to the source code, which hampers manual inspection
of mutants. More advanced tools such as MAJOR [22, 20] take a compiler-integrated approach, using abstract syntax trees to introduce mutations for easy and fast fault seeding, with a domain-specific language to configure the mutation process for JUnit tests [20]. These tools help more at
the static and runtime levels.
While tools like MAJOR and PITest have been shown to be effective, they are not practical for all
applications in industry, and they do not relate to the application during runtime. Particularly in
embedded systems, these tools currently have no analogue. Engineering software for embedded
systems presents challenges the current tools have not yet overcome. Because most embedded
systems have limited memory and processing power in comparison to traditional computers,
interpreted languages such as Java are generally not used. Although mutation testing tools exist
for C, like MiLu [18], they do not account for the penalties incurred by compilation for an
embedded system. Tools like MAJOR and PITest mutate, build and run code all on the same
machine, something that is not always possible on embedded systems. Performing all these tasks
on one machine allows these tools to run quickly, but when developing on embedded systems, it
may take minutes to recompile and deploy code before the test suite can be run. This increased
time overhead makes the methods used by current tools inefficient and excessively time-consuming.
We utilize conditional mutation testing to reduce the costs of evaluating an embedded system and
its test suite. Instead of injecting one mutation, compiling, deploying, testing, and repeating,
conditional mutation injects all the mutations into the code and selectively activates one at a time
as executed. With this strategy, multiple mutations can be tested without restarting the software
under test (SUT), saving a significant amount of time. We also show how an existing proprietary
test suite can be automated for mutation analysis. Finally, we demonstrate a method of reducing
the number of mutations needed to obtain representative results.
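The mechanics can be pictured with a small sketch (the names mutLess and g_activeMutation are our own; they stand in for whatever the instrumented code actually uses): every mutant of the site if (a < b) is compiled in exactly once, and a runtime ID activates at most one of them.

    #include <iostream>

    static int g_activeMutation = -1;  // -1: no mutant active, run original code

    // All mutants of one "a < b" site are compiled in; site IDs are
    // assigned statically at instrumentation time.
    inline bool mutLess(int siteBase, int a, int b) {
        switch (g_activeMutation - siteBase) {
            case 0:  return a <= b;   // relational-operator mutant
            case 1:  return a != b;   // relational-operator mutant
            case 2:  return false;    // "false" mutant
            default: return a < b;    // original expression
        }
    }

    int main() {
        // Before instrumentation:  if (a < b) ...
        // After instrumentation:   if (mutLess(/*siteBase=*/17, a, b)) ...
        for (g_activeMutation = 16; g_activeMutation <= 20; ++g_activeMutation)
            std::cout << "mutation " << g_activeMutation << ": "
                      << mutLess(17, 2, 3) << "\n";
        return 0;
    }

Switching g_activeMutation between test runs is all that is needed to step through the mutants, which is what removes the recompile-redeploy cycle.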
In this research, we develop a tool called DynaMut, which statically injects conditional mutations
into C++ code. This tool replaces defined mutation operators with macros, and the macros contain
conditional code to select mutants during runtime. DynaMut employs runtime-based conditional
mutation so that the software under test only needs to be compiled once, saving overheads
incurred during compilation and deployment to an embedded system. In order to allow for greater
time saving in mutation testing, this work also analyzes mutation sampling techniques. Simple,
evenly-spaced sampling, random sampling, and dithered sampling, a novel form of sampling
inspired by electronic test and measurement equipment, are applied to the runtime mutation data
gathered.
DynaMut was used to inject mutations into the embedded application, and specific tests from the
larger proprietary test suite were chosen for the mutation analysis. The selected tests were
automated and data was collected for the generated mutants. Our results show that conditional
mutation allowed for time savings between 48% and 67% when compared with a standard
mutate-compile-test methodology. Using the gathered mutation data, three sampling methods
were then used to reduce the number of mutations with the goal of keeping the mutation score
representative across analyses. The dithered sampling technique is shown to be more effective
and efficient than either a random sampling or a simple sampling when decimating the data at
ratios between one third and one sixth of the original set.
In summary, the main contributions of this paper are:
• Development of DynaMut, a static tool to insert runtime-based conditional mutations into
C++ code
• A description of how to alter an embedded application and test suite to perform runtime
mutation testing analysis
• An evaluation of the time overheads incurred by using conditional mutation rather than
mutate-compile-deploy-based mutations
• A comparison of three mutation sampling techniques for use in a conditional mutation
environment
2. MUTATIONS AND SAMPLING
In this section, we discuss work related to mutation analysis and sampling techniques as they
relate to our work.
2.1. Mutation Analysis
Mutation analysis is a method of test suite evaluation first implemented in 1980 [10]. To perform
mutation analysis, faults are seeded into the System Under Test (SUT). For each fault or mutant,
the test suite is run. If the test suite fails, it is said to have killed the mutant. If the test suite
succeeds, it did not detect the mutant. A test suite is given a mutation score that is the percentage
of mutants killed out of the total mutants seeded. A mutant analyzer seeds faults systematically in
order to ensure that the faults are introduced in an unbiased manner.
Many different types of code mutations have been proposed and tested. Unfortunately, using all
variations, especially in a large SUT, can be prohibitively expensive due to the time it would take
to test each mutation. Offutt et al. researched different mutation types in [25] and determined a
subset of operators which are effective in mutation testing and do not lose significant data in
comparison with larger sets of mutations. Based on the work by Offutt et al. [25], DynaMut
focuses on implementing these same mutation operators.
Just et al. [23] perform further research to reduce the mutations needed for Operator Replacement
Binary (ORB) operators. Their work notes the importance of keeping a mutant’s impact on the
code minimal. Trivial mutations, mutations that cause wrong output for all possible input values,
should be avoided to reduce runtime of the analysis. Redundant mutations should also be avoided
to reduce analysis time and also to prevent skew in the overall mutation score. The work by Just
et al. [23] considers Conditional Operator Replacement (COR) and Relational Operator
Replacement (ROR). For each COR operator, it was found that four mutation types are sufficient
to test for non-trivial and non-redundant mutations for any one operator. Given this, each ROR
operator can be replaced by only three mutants, instead of the seven that were proposed.
The case studies presented by Just et al. [23] showed that, compared to replacing all operators
with all valid replacements, replacing COR and ROR operators with the sufficient set was able to
reduce the total number of mutants generated by 16.9% to 32.3%, depending on the ratio of COR
and ROR to all other mutant types. This resulted in improved mutation analysis runtime of
between 10% and 34%. They also showed decreases in overall mutation scores of 2% to 8%,
leading to more accurate assessment [24, 23]. Because of these works, this paper will limit the
mutations of ROR and COR operators to those in past work [24, 23]. Apart from the normal
operators, the mutations include: true, false, rhs, and lhs. Rhs stands for right-hand side, meaning
the right-hand side of the operator is always returned. Lhs stands for left-hand side, meaning the
left-hand side of the operator is always returned.
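As an illustration of the lhs and rhs operators (a sketch of the idea only; the precise sufficient sets are specified in [23, 24]), consider the conditional expression a && b:

    #include <iostream>

    // Candidate replacement set for "a && b" following the description
    // above; (void) casts silence unused-parameter warnings.
    bool corOriginal(bool a, bool b) { return a && b; }
    bool corLhs(bool a, bool b)      { (void)b; return a; }      // lhs
    bool corRhs(bool a, bool b)      { (void)a; return b; }      // rhs
    bool corTrue(bool a, bool b)     { (void)a; (void)b; return true;  }
    bool corFalse(bool a, bool b)    { (void)a; (void)b; return false; }

    int main() {
        // Each mutant disagrees with "a && b" on at least one input but
        // not all (non-trivial), and no two mutants share a truth table
        // (non-redundant).
        for (bool a : {false, true})
            for (bool b : {false, true})
                std::cout << a << b << ": "
                          << corOriginal(a, b) << " " << corLhs(a, b)
                          << corRhs(a, b) << corTrue(a, b)
                          << corFalse(a, b) << "\n";
        return 0;
    }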
Figure 1(a): Example of a simple sampling and a random sampling technique
Figure 1(b): Example of a dithered sampling technique
2.2. Sampling Techniques
Given the large number of mutants that can be created using the operators discussed, one can also
consider only using subsets of the created mutations. The subsets can be generated using
sampling techniques.
Sampling techniques are used in many software engineering fields that gather large amounts of
data, including profiling (e.g., [12]) and testing (e.g., [28]).
There are many sampling techniques including simple even sampling, random sampling, and
dithered sampling. When sampling, we attempt to represent the full set of data, keeping a high
level of quality while gathering less information. We hypothesize that these sampling techniques
can be applied to other kinds of data sets, in this case, mutation testing, to reduce the cost of such
testing. This work applies sampling techniques to reduce the amount of data needed to achieve
representative results.
In Figure 1a, a set of data is represented by the blue diamonds, where each diamond is a data
point. This data could be a typical sine wave as acquired by test and measurement equipment. If
one wanted to decimate that data, one option would be to select every twenty-first point.
Decimated data is represented by the red squares. As can be seen, this greatly misrepresents the
actual data. If this data were presented, a user might think the signal was a sine wave at 1/21st of the frequency. A better technique would, for every 21 samples, pick one sample at random. With this sampling technique, called dithered sampling, the signal would look like noise. However, noise can be a better representation of the data and would likely allow some measurements to be made with greater accuracy than simple sampling.
Figure 1b shows an example of the data using a dithered sample technique. The green triangles
represent this new set of data. As can be seen, it looks like noise. However, unlike the evenly
sampled data, one could measure the amplitude with decent accuracy. Measurements of frequency
may be incorrect, but the results may still be more accurate than those of the evenly-spaced
samples. The amount of decimation in this example is extreme. Clearly, it is desirable to preserve
as much of the data as possible to reconstruct the true data, but it is a good example of how
sampling can affect a measurement.
Other software engineering works [32, 33] have used random sampling to reduce the number of
mutations needed. Figure 1a shows an example random sampling represented by the purple
circles. In this case, seven data points are sampled, and five of them are clustered. This cluster
represents one part of the signal well, but as random sampling makes no attempts to spread out
samples, entire sections of the data are missed. In this case, the repeated pattern of the sine wave
is not represented well; much of the signal presents as a constant value.
The sequence of mutations seeded by DynaMut exhibits a repeating pattern. For each source file,
different kinds of mutations are seeded from the top of the file to the bottom. This same pattern
repeats across the many files. One might think of the gathered data as a kind of sine wave through
the code, although it would not be as clean as the waves in Figures 1a and 1b. Unlike random
sampling, dithered and simple sampling will ensure all areas of the code are represented in the
mutation score. In addition, dithered sampling can ensure that the sampled data is not
misrepresenting data based on recurring patterns. For these reasons, dithered sampling may
provide better mutation testing data than either simple or random sampling.
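The three strategies are easy to state in code. The sketch below is our own illustration (not DynaMut's implementation): data holds one pass/fail result per covered mutation, in seeded order, and k is the decimation factor.

    #include <cstddef>
    #include <iostream>
    #include <random>
    #include <vector>

    // Every k-th point: a fixed stride can alias against repeating patterns.
    std::vector<int> simpleSample(const std::vector<int>& data, std::size_t k) {
        std::vector<int> out;
        for (std::size_t i = 0; i < data.size(); i += k)
            out.push_back(data[i]);
        return out;
    }

    // n/k points drawn uniformly: clusters and uncovered gaps are possible.
    std::vector<int> randomSample(const std::vector<int>& data, std::size_t k,
                                  std::mt19937& rng) {
        std::vector<int> out;
        std::uniform_int_distribution<std::size_t> pick(0, data.size() - 1);
        for (std::size_t i = 0; i < data.size() / k; ++i)
            out.push_back(data[pick(rng)]);
        return out;
    }

    // One random point from every window of k consecutive points.
    std::vector<int> ditheredSample(const std::vector<int>& data, std::size_t k,
                                    std::mt19937& rng) {
        std::vector<int> out;
        std::uniform_int_distribution<std::size_t> offset(0, k - 1);
        for (std::size_t base = 0; base + k <= data.size(); base += k)
            out.push_back(data[base + offset(rng)]);
        return out;
    }

    int main() {
        std::vector<int> results(120);
        for (std::size_t i = 0; i < results.size(); ++i)
            results[i] = (i % 7 == 0);          // synthetic repeating pattern
        std::mt19937 rng(42);
        auto score = [](const std::vector<int>& v) {
            int killed = 0;
            for (int r : v) killed += r;
            return 100.0 * killed / v.size();   // "mutation score" of the sample
        };
        std::cout << "full:     " << score(results) << "%\n";
        std::cout << "simple:   " << score(simpleSample(results, 6)) << "%\n";
        std::cout << "random:   " << score(randomSample(results, 6, rng)) << "%\n";
        std::cout << "dithered: " << score(ditheredSample(results, 6, rng)) << "%\n";
        return 0;
    }

Because dithered sampling draws one point from every window of k consecutive mutations, every region of every file stays represented, yet no fixed stride can alias against the repeating seeding pattern described above.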
3. IMPLEMENTATION
In order to create a tool that can perform automated mutation testing on embedded device
applications, we created DynaMut, a conditional mutation testing tool with varying sampling
rates. Firstly, DynaMut includes a static tool to insert calls to centralized functions or macros
from all mutation sites in the code. DynaMut is configurable for different software projects, and it
can be easily extended for other programming languages. In this section, we explain how projects
can be revised and configured to work with DynaMut along with examples.
To reduce the cost of performing mutation testing on embedded software and mutation data
gathering, a dynamic/conditional mutation approach is taken to assist with mutation analysis.
While other tools are available to help in mutation testing and mutation test analysis in general,
they are unable to work with C++ programs. For example, tools such as Nester, Major and PiTest [3, 2, 4] cannot be easily adapted to work with C or C++ code due to their design. When working with C++ or C code, common in embedded systems, tools such as MAJOR and PITest, which both mutate Java bytecode, are unsuitable for most embedded applications. Nester does alter source code with function calls at the mutation sites, but it has not been actively developed, and it is not as configurable as this research requires. This led to our development of DynaMut, a dynamic mutation testing tool for embedded system applications.
Figure 2: Example of ProjectConfig.xml file
DynaMut is highly configurable. In this way, it is usable on different systems with varied programming languages. Two configuration files are used to control it. First, DynaMut imports
all the code files that will be mutated. Figure 2 shows the contents of a sample configuration file.
With just four rules including IncludeAbsoluteDirectory, IncludeFileExtension,
ExcludeDirectory, and ExcludeFile, any complicated directory structure can be navigated. One or
more IncludeAbsoluteDirectory rules must be set, and DynaMut will search all children folders.
One or more IncludeFileExtension rules must be set to define what types of files may be included.
The remaining two rules are optional, and can be used to exclude directories and files.
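A minimal sketch of how the four rules might be applied during traversal (our own illustration; the paper does not reproduce DynaMut's traversal code, and the substring matching used here is an assumption):

    #include <filesystem>
    #include <iostream>
    #include <string>
    #include <vector>

    namespace fs = std::filesystem;

    struct FileRules {
        std::vector<std::string> includeDirs;   // IncludeAbsoluteDirectory
        std::vector<std::string> includeExts;   // IncludeFileExtension
        std::vector<std::string> excludeDirs;   // ExcludeDirectory
        std::vector<std::string> excludeFiles;  // ExcludeFile
    };

    static bool matchesAny(const std::vector<std::string>& rules,
                           const std::string& s) {
        for (const auto& r : rules)
            if (s.find(r) != std::string::npos) return true;
        return false;
    }

    std::vector<fs::path> selectFiles(const FileRules& r) {
        std::vector<fs::path> out;
        for (const auto& root : r.includeDirs)   // search all children folders
            for (const auto& e : fs::recursive_directory_iterator(root)) {
                if (!e.is_regular_file()) continue;
                const fs::path& p = e.path();
                if (matchesAny(r.excludeDirs, p.parent_path().string())) continue;
                if (matchesAny(r.excludeFiles, p.filename().string())) continue;
                if (matchesAny(r.includeExts, p.extension().string()))
                    out.push_back(p);
            }
        return out;
    }

    int main() {
        // Assumes a ./src tree exists; paths here are placeholders.
        FileRules rules{{"src"}, {".cpp", ".h"}, {"thirdparty"}, {"legacy.cpp"}};
        for (const auto& p : selectFiles(rules)) std::cout << p << "\n";
        return 0;
    }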
Figure 3: Example of code before and after DynaMut mutation
Next, DynaMut configures how the code is mutated. Each mutation group can be one of three types: OperatorReplacementUnaryGroup, OperatorReplacementBinaryGroup, or LiteralValueReplacementGroup. For each mutation group, three things must be specified: RegularExpression, NumberOfMembers, and the GroupMember variations. The RegularExpression should contain a regular expression to match the operator(s) and operand(s). The NumberOfMembers specifies how many operators the regular expression matches. Each GroupMember specifies three things: the Operator, NumberOfMutations, and the ReplacementFunction text. The Operator should contain the operator so DynaMut can detect which
operator in the group is matched. The NumberOfMutations should specify how many variations
the conditional code will use to mutate a given operator. This is used by DynaMut to space out
the constants placed in the function calls. The ReplacementFunction contains the function call
being used.
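A minimal sketch of the kind of textual rewrite such a group drives (the pattern, the MUT_LT macro name, and the replacement text are our assumptions; the paper does not reproduce its regular expressions):

    #include <iostream>
    #include <regex>
    #include <string>

    int main() {
        // Assumed binary-group pattern: a non-greedy lhs capture so the
        // left-most operator of equal precedence is matched first.
        std::regex lessThan(R"((\w+?)\s*<\s*(\w+))");

        std::string line = "if (a < b) { total = total + 1; }";

        // ReplacementFunction-style text: wrap the match in a macro call
        // carrying a per-site mutation index.
        std::string mutated = std::regex_replace(
            line, lessThan, "MUT_LT(MUTATION_INDEX + 0, $1, $2)");

        std::cout << mutated << "\n";
        // Prints: if (MUT_LT(MUTATION_INDEX + 0, a, b)) { total = total + 1; }
        return 0;
    }

Note the non-greedy (\w+?) left-hand capture, the same trick discussed below for making the left-most operator match first.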
Because of the amount of text parsing performed by DynaMut, it can be extremely resource-intensive, depending on the application. To assist in reducing the time overhead of analysis,
DynaMut is implemented in a way that allows for multithreading. Each task can run in an
independent thread. A task thread is created for each code file, and each is placed in a Thread
Pool. The number of threads running at one time can be controlled by the WorkerThreadCount in
the ProjectConfig.xml file, as can be seen in Figure 2. Because each file is altered individually,
each file’s index mutation starts at zero, but these values are placed in a macro, as can be seen in
Figure 3. After every file has finished being seeded, DynaMut defines the MUTATION_INDEX
macro, which contains an offset to make sure that each mutation has a unique ID across the entire
software project.
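A compact sketch of the per-file worker scheme (assumed structure; the paper does not show DynaMut's thread-pool code):

    #include <atomic>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <thread>
    #include <vector>

    // Each file is seeded independently with mutation indices starting at
    // zero; a global offset is assigned afterwards so IDs are unique
    // across the whole project.
    struct FileResult { std::string file; std::size_t mutationCount; };

    std::size_t seedFile(const std::string& file) {
        // Placeholder for the real per-file mutation pass; returns how
        // many mutation sites were instrumented in this file.
        return file.size() % 5 + 1;
    }

    int main() {
        std::vector<std::string> files = {"alpha.cpp", "beta.cpp", "gamma.cpp"};
        std::vector<FileResult> results(files.size());

        // WorkerThreadCount-style bound: files handed out by atomic index.
        std::atomic<std::size_t> next{0};
        auto worker = [&] {
            for (std::size_t i; (i = next.fetch_add(1)) < files.size(); )
                results[i] = {files[i], seedFile(files[i])};
        };
        std::vector<std::thread> pool;
        for (int t = 0; t < 2; ++t) pool.emplace_back(worker);  // 2 workers
        for (auto& th : pool) th.join();

        // After all files finish, assign each file's MUTATION_INDEX offset.
        std::size_t offset = 0;
        for (const auto& r : results) {
            std::cout << r.file << ": MUTATION_INDEX = " << offset << "\n";
            offset += r.mutationCount;
        }
        return 0;
    }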
When adding mutations, it is important that the functionality of the original code is not changed. Operators are placed in groups with operators that possess the same level of priority in the target language’s order of operations. This ensures that the order of operations does not change code
functionality unintentionally. Another consideration is how the regular expression gets matched.
For the groups which use left-to-right precedence, the ‘lhs’ capturing group ends with a question
mark. This tells the regular expressions parser to match the fewest number of characters, ensuring
the left-most operator gets captured first.
Rules were also added to the DynaMut code to skip regular expressions matched in certain
conditions. For example, DynaMut has logic to detect if the match is in a comment or a string. If
in a string declaration, nothing is changed. If the match occurred in a comment, the operator is
removed to make matching faster on the next iteration. DynaMut also includes rules to detect
addition of strings (strings can be added but not subtracted) and subtraction of pointers (pointers
can be subtracted but not added). This aids in helping the applications under test to build
successfully following mutation.
4. EVALUATION
The primary goal of this paper’s research is to demonstrate that mutation testing can be performed
on complicated embedded systems in industry. In the evaluation of DynaMut, we will:
• Discuss the criteria used to select the tests used
• Analyze the run-time data gathered and estimate how much time was saved with runtime
conditional mutation testing versus mutate-compile-deploy testing
• Explore ways of reducing cost of testing through sampling of mutations
• Discuss these results and how they can be applied to reduce the cost of mutation testing in an
embedded system environment
Figure 4: Example of macros used in Keysightapp to define conditional mutation behavior
Within the evaluation, we examine the time overhead of conditional mutation testing in an
embedded environment versus the traditional mutate-compile-deploy approach and evaluate
how sampling techniques can be applied to reduce the number of test runs without reducing
effectiveness.
4.1. Case Study
DynaMut was evaluated on Keysightapp, a proprietary, industry-level embedded application of about 1 million LOC. The development team of Keysightapp maintains a manually-developed test suite, Keysightsuite, that uses remote commands to perform tests; this suite can take several hours to run, depending on the mode in which it is run.
DynaMut was run on Keysightapp across 492 code files. This yielded approximately 121,000
mutations. In order to use Keysightapp with DynaMut, a number of issues were identified that
could cause DynaMut to make improper/uncompilable mutations.
After adding these new rules into DynaMut, Keysightapp was compiled in release mode. However,
the application would not start up fully, encountering errors. These were likely caused by DynaMut mutations that, although not incorrect enough to cause compile errors, changed behavior in ways that proved fatal. Because of the scope of this project, it was decided to limit mutations to a single
subsystem of Keysightapp, consisting of 49 code files, or 10% of the total number of files. With
this limitation, the application runs normally when mutations are not in effect. While this section
of code represents only a portion of Keysightapp, it is an important behavioral subsystem that is
tested by the majority of the tests in Keysightsuite.
4.2. Modifications to Keysightapp
In addition to the code changes made by DynaMut, a small amount of code was added to the
application to enable conditional mutation operation. Macros were created to control the
mutations. Figure 4 shows two examples of the macros added to Keysightapp. As can be seen, the
macros make calls to the static function cDynaMut::CheckMutation. CheckMutation returns true
if mutation is enabled and the mutationId parameter matches the static variable containing the
currently-active mutation. In this way, only one mutation is in effect at any point in time. In
addition to controlling the active mutation, the cDynaMut class was also designed with the ability
to track mutation coverage.
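Based on the description of Figure 4, the added hook plausibly looks like the following sketch (only cDynaMut::CheckMutation is named in the paper; the macro, the fields, and the Activate call are our assumptions):

    #include <iostream>

    class cDynaMut {
    public:
        static bool CheckMutation(int mutationId) {
            // True only when mutation is enabled and this site holds the
            // currently-active mutation, so one mutant is live at a time.
            return s_enabled && mutationId == s_activeMutation;
        }
        static void Activate(int mutationId) {
            s_enabled = true;
            s_activeMutation = mutationId;
        }
    private:
        static bool s_enabled;
        static int  s_activeMutation;
    };
    bool cDynaMut::s_enabled = false;
    int  cDynaMut::s_activeMutation = -1;

    // Relational-operator site: mutates "a < b" when one of its IDs is active.
    #define MUT_LT(id, a, b)                                   \
        (cDynaMut::CheckMutation((id) + 0) ? ((a) <= (b)) :    \
         cDynaMut::CheckMutation((id) + 1) ? ((a) != (b)) :    \
         ((a) < (b)))

    int main() {
        int a = 2, b = 2;
        std::cout << MUT_LT(0, a, b) << "\n";   // original: 2 < 2 -> 0
        cDynaMut::Activate(0);
        std::cout << MUT_LT(0, a, b) << "\n";   // mutant <=: 2 <= 2 -> 1
        return 0;
    }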
4.3. Automating Keysightsuite for Data Collection
The full test suite, Keysightsuite, takes several hours to run all tests on Keysightapp. For this reason,
it was decided to only use small tests from the larger suite for evaluation. MutationTestRunner
performs the following tasks to automate mutation testing data collection (a sketch of this loop follows the list):
• Imports the CSV file of covered mutations, gathering information only for the tests being run.
• Communicates with the remote device running Keysightapp to control the mutation.
• Communicates with a remotely-controlled power strip to reboot the remote device when
necessary.
• Runs Keysightsuite command line utility and captures output.
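A sketch of that automation loop under our assumptions about the helpers (the real MutationTestRunner, remote commands, and power-strip control are proprietary):

    #include <iostream>
    #include <string>
    #include <vector>

    // Stand-ins for the proprietary pieces (assumed signatures).
    bool runTest(const std::string& test) {
        (void)test;  // would invoke the Keysightsuite CLI and parse its output
        return true;
    }
    void setActiveMutation(int id) { (void)id; }  // remote command to the SUT
    void rebootDevice() {}  // cycle the remote power strip; boot takes ~59 s

    int main() {
        std::vector<int> coveredMutations = {101, 102, 103};  // from coverage file
        const std::string test = "Test2";
        int killed = 0;

        for (int id : coveredMutations) {
            setActiveMutation(id);        // ~5 s of command overhead per run
            if (runTest(test)) continue;  // pass: next mutation, no reboot
            ++killed;                     // fail: the mutant was killed...
            rebootDevice();               // ...and the device must be restarted
        }
        std::cout << "mutation score: "
                  << 100.0 * killed / coveredMutations.size() << "%\n";
        return 0;
    }

The reboot is deliberately placed on the failure path only; as Section 4.5 quantifies, avoiding the roughly one-minute reboot after every passing run is where most of the savings come from.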
Figure 5: Mutations covered by each test and # of mutations covered by multiple tests
All changes were made to the chosen subsystem of Keysightapp [5], the key behavioral subsystem implementing the main functions of the application. All testing is
performed on the most recent accepted build of the code. For each test subsuite, a training run
was performed with the coverage tracking feature enabled. This produced a coverage file
enumerating the indices of all the mutations covered by a given test subsuite. The
MutationTestRunner utility was then used to automate running tests in Keysightsuite on the
mutations specified by the coverage file.
4.4. Test Selection for Test Suite and Mutation Scores
Because of the size of the test suite, Keysightsuite, evaluation was performed on a limited number of
tests from the entire suite. To select the three test subsuites used in this evaluation, tests were
selected that 1) could be run one time in under 30 seconds, 2) that could cover at least 1,000
mutations, and 3) tests where the mutations covered by the chosen tests overlap as little as
possible in order to provide differing data.
First, Keysightsuite was run to determine the time each test would take to execute. Then for the
tests that completed reliably in under 30 seconds, training runs were performed to gather the
mutation coverage information for each test. With the tests that covered roughly 1,000 or more
mutations, the coverage data was analyzed for overlapping coverage – that is, mutations that are
covered by two or more tests. Based on this data, three test sets were chosen. They will be
referred to as Test 1, Test 2, and Test 3.
With the complete sets of covered mutations, the test suites achieved the mutation scores of
20.9%, 13.3%, and 22.8% for Tests 1, 2, and 3 respectively. While these mutation scores are low,
we observe that DynaMut can be useful in identifying parts of code that have not yet been tested.
With approximately 121,000 mutations being added to the application, 13-22% can be identified
using the existing, provided test suites.
Figure 5 shows the number of mutations covered by each test. It was observed that Test 1 covers
995 mutations and executes in about 23 seconds. Test 2 covers 1,149 mutations and executes in
18 seconds. Test 3 covers 2,560 mutations and executes in 11 seconds. For each test, the mutations are categorized by how many tests cover them and by which tests they are covered. A significant number of mutations are covered by more than one test because these tests exercise the same SUT.
Figure 6a: Comparison of testing time with conditional mutation and compiled mutation
Figure 6b: Correlation of sampled data to actual test values
4.5. Time Overhead Reduction
The time overhead was analyzed from the test runs. This analysis focuses on the data collected
from Test 2 and Test 3 due to their larger mutation coverage. Because of the nature of testing on
an embedded system, there is time overhead not normally associated with mutation analysis.
When the SUT is running on a host, it can be killed and started quickly, depending on the startup
time of the SUT. Because of this, mutation testing is often performed by running a new instance
of the application for each mutation. On the embedded system for Keysightapp, the system must be
rebooted to restart the SUT. The boot process, including the time it takes to fully start Keysightapp,
takes about 59 seconds, which is far longer than traditional applications being tested with
mutation analysis.
Due to this extra overhead, MutationTestRunner was designed to only restart the embedded
system after a test failure. If the test passes with a given mutation, the conditional mutation ID is
changed to that of the next mutation, and the test is run again without restarting the embedded
system, thus providing significant savings in testing time. Even if the system does not need to be
rebooted, about 5 seconds of overhead occurs before every test. This time is incurred when
sending the remote commands to tell Keysightapp which mutation to enable. Even though these
commands are small in size, the steps necessary to ensure reliable operation cause this step to
consume 5 seconds. To estimate the time it would take to perform these mutation analyses with a
traditional mutate-compile-test methodology, the additional overhead of compiling Keysightapp
(with only minor changes) and deploying it to the embedded system is estimated to be 15
seconds. This method would also require the system to be rebooted after every test. Because no
tools exist that can easily be used with embedded products, these estimates are based on manual testing of Keysightapp.
As seen in Figure 6a, MutationTestRunner completed the full mutation analysis of Test 2 in 16
hours, 5 minutes and 27 seconds (16:05:27). Of this time, 5:43:49 was spent performing the
actual test, and 1:35:45 was spent in the unavoidable overhead described above. 54.47% of the
time (8:45:53) was spent rebooting after failures. With a mutate-compile-test method, it is
estimated that Test 2 would take 30:56:40, 92.31% more than the conditional mutation method,
given the estimates described. Test 3 was performed in a total of 21:10:42. Of this time, 45.26%
or 9:35:04 was spent rebooting. The compiled mutation method on Test 3 would take an
estimated 64:20:49, or 203.83% more than the implemented conditional mutation method. The
estimates of mutate-compile-test times are necessary because there are no tools that can easily
perform these actions on embedded applications.
These time estimations assume that the same coverage data would be available for the mutate-compile-deploy method, which would require more static analysis to be performed. Even with
this consideration, the conditional mutation method implemented in this work saves an estimated
48.00% of the time to evaluate Test 2 and 67.09% of the time needed to evaluate Test 3.
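As a quick arithmetic check, the savings follow directly from the reported wall-clock times once converted to seconds (16:05:27 = 57,927 s, 30:56:40 = 111,400 s, 21:10:42 = 76,242 s, 64:20:49 = 231,649 s):

\[
1 - \frac{57927}{111400} \approx 0.4800,
\qquad
1 - \frac{76242}{231649} \approx 0.6709 .
\]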
4.6. Mutation Sampling
To further reduce testing cost, this work evaluates methods of reducing the number of mutations
tested. To perform this evaluation, the full results from each Test are analyzed. Each covered
mutation either passes or fails. This data was imported into a spreadsheet, where the simple
sampling, random sampling, and dithered sampling methods were applied to the data of each test
across a variety of decimation factors. For the simply-sampled sets, all possible sets of evenly-spaced samples were determined for each decimation proportion and test (for example, at 1/2
decimation, there are only 2 possible sets for each of the 3 tests). For the dithered and random
sample sets, data was gathered for 10 samples of each decimation proportion and test. The data
was then used to correlate each decimation proportion and sampling type to the score obtained
from the full set of data. Correlation was calculated using [29] to obtain Kendall’s τ. The
decimation proportions used are: 1/2, 1/3, 1/4, 1/6, 1/8, and 1/10 the total number of covered
mutations.
Kendall’s τ is a measure of how one set of data correlates to another. It ranges from −1 to 1, where 0 means there is no correlation, 1 means absolute positive correlation, and −1
means there is absolute inverse correlation. For this work, closer to 1 is more desirable. Figure 6b
shows the results of the correlation analysis. At 1/2 decimation simple sampling correlates better
to the actual data; however, this might be misleading because the dithered and random data each
have 10 data points per test compared to the simple sampling’s 2 points per test. At 1/3, 1/4 and
1/6 decimation, the dithered sampling provides better correlation than both the simple sampling
and the random sampling. At 1/8 and 1/10 decimation, simple sampling provides better
correlation than dithered sampling, although neither provides very good correlation; random
sampling manages to provide the best result at these decimation ratios.
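For reference, the reported statistic is the standard Kendall rank correlation (a textbook definition, not specific to this paper): for n paired observations with n_c concordant and n_d discordant pairs,

\[
\tau = \frac{n_c - n_d}{n(n-1)/2},
\]

so τ = 1 when the sampled scores order the observations exactly as the full data set does, and τ = −1 when the ordering is exactly reversed.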
Sampling can also be used to reduce the number of mutations tested. Dithered sampling offers
better correlation to the true values; however, as the data is decimated further, the risk of
gathering non-representative samples increases. This risk must be balanced against the effectiveness and efficiency of testing that is needed.
4.7. Discussion
While the sampling methods are correlated to the actual values from the full set of covered
mutations, we did not determine how much time the sampling techniques would save versus other
techniques due to a lack of tools that can perform similar tasks. Because of the overheads
present, it cannot be assumed that testing 1/2 of the mutations would save 50% of the testing time.
Assuming the decimated set of mutations exhibits a similar pass/fail rate to the whole set, the time overhead scales predictably based on our preliminary tests.
In addition, although the correlation of dithered samples remains relatively constant between 1/2
and 1/6 decimation, that does not make them equally good options for testing. The chances of
obtaining an outlier or biased result increase as the sample size decreases, so 1/6 decimation would
not be as accurate a method as 1/2 decimation.
Overall, we learn that DynaMut can perform analysis of an embedded application and that it can
be adapted to work on other applications and languages. While every embedded program has its
own specifications, DynaMut provides options to configure programs to match the tool and to make the modifications needed to work with it. Sampling can also be tuned: the user can select among multiple sampling types and rates. DynaMut thus provides a method for mutation-level testing of embedded programs, with sampling options matched to testing efficiency needs.
5. THREATS TO VALIDITY
This work tested a small portion of the tests in Keysightsuite. Even with a limited number of tests from the suite and mutations in only 10% of Keysightapp, testing still took between 16 and 21 hours per test subset. There is room for improvement. However, the tests selected in
this work covered the majority of the functions specified by users and main program functions.
Also, the research is based on only one industry-level application. The selected application is a
large, proprietary embedded application, where testing was focused on the primary functions of
the application. We believe that the results can be extended to other C and C++ based embedded
applications given the additional modifications that were incorporated into DynaMut. However,
these need to be tested and evaluated. More applications representing embedded software are
needed.
This work evaluates conditional mutation testing and sampling techniques on one embedded
system and one software system. These results may not translate to other embedded systems or
software packages. We tried to mitigate this possibility by using a variety of tests without regard
to the system under test. While we only focus on patterns inherent in Keysightapp and Keysightsuite,
these were not the focus when designing the tool. Keysightapp was used as a learning and evaluation tool, but general application designs were considered during DynaMut’s
implementation.
6. RELATED WORK
This paper deals with efforts to reduce the costs of mutation analysis to make it practical for
testing an embedded system in industry. Much of this saving is needed in the runtime of the
mutation analysis due to the size of the SUT and test suite. Consideration has been put into
reducing the overall number of mutations used. This work is based on the studies performed in
[25, 23, 24] to reduce the number of mutations seeded in the code. In our tool, fewer mutations
being seeded results in time savings during mutation testing.
Another way to save time during mutation analysis is to reduce the compilation time. Just et al.
propose a method of increasing the efficiency of mutation analysis in [22]. In their work, they
manage to save compilation time by introducing conditional mutation. In this method, the
compiler inserts conditional code at each mutation site for all possible mutations, and a global
state variable controls which mutation is in effect. The conditional code added at each mutation site introduced a large amount of code overhead. On the applications tested, the
instrumented code compiled to a size between 18% and 66% larger than the original program.
Nester [3], a mutation testing tool for C# ported from Jester [1], takes a slightly different
approach. It replaces various operators with calls to a set of central functions, instead of placing
the conditional code at each location. These functions contain the conditional code to allow one
mutation to occur at a time, but should incur far less code overhead. This is more important in an
embedded system where memory is constrained. Our work uses conditional mutation similar to
that used in [22]; however, this work uses macros to introduce the conditional code. Because the
mutations were limited to a subsystem of Keysightapp, and a significant portion of the compiled
binary is devoted to GUI-related non-code data, the memory overhead and time overhead
introduced were not evaluated.
For embedded systems, reducing the frequency of compilation has an added bonus, which differs
from systems such as Just et al. [22]. The SUT is compiled on a workstation, and then it is
deployed to the embedded system. Combined with software startup time, and the time necessary
to reboot the embedded system between code runs (although, in theory the embedded system
could be rebooted during compilation), every deployment can add approximately one minute to
the time overhead of mutation analysis. Being able to compile the SUT and deploy it only once
therefore can yield much more benefit in this case than on systems where the SUT is run on the
same computer on which it is compiled.
This paper also proposes a method of sampling mutations to reduce the cost of mutation testing.
Dithered sampling has been used for a long time in analog and digital test and measurement
equipment [30, 16]. This equipment can generate large data sets, and decimating that data can be
useful for improving performance of measurements, analysis or visualization. Dithered sampling
ensures that this decimation does not inadvertently misrepresent the original data. This paper
shows that application of this dithered sampling can provide more representative samples than
either a random sampling or an evenly-spaced simple sampling. Other works have studied
sampling techniques applied to mutation testing. In [33], Zhang et al. compare random mutation
sampling to techniques of reducing mutation by reducing the set of operators used. They found
that random mutation sampling can be just as effective. In later work, (a different) Zhang et al.
combine random sampling of mutations with reduction of operators used, and show that the
combination of techniques yields precise results with far fewer mutations [32]. This second work
evaluates eight different random sampling techniques. Their baseline strategy is equivalent to
the random sampling of this paper. The other strategies select a certain percentage of mutants
from each set of mutants: generated from a single operator, generated inside a given program
element (e.g. class or function), or a combination of the previous. In practice, the dithered
sampling in this paper may behave similarly to the technique of selecting a percentage of
mutations within a given program element; however, dithered sampling requires no extra code
analysis to perform, making it easier to implement with a simple tool like DynaMut.
Embedded systems are often tested using model-based approaches. Tan et al. demonstrate an
integrated framework for developing self-testing model-based code [27], and Bringmann and Krämer
introduce a tool for model-based testing of automotive embedded systems [9]. While those embedded
systems are amenable to model-based testing, not all embedded systems are. Like the in-industry
case study of [19], the embedded application Keysightapp is a large, highly configurable piece of
software, and the size of such applications makes model-based testing or development impractical.
These applications have often evolved over a series of product iterations, during which tests were
added to a proprietary test suite. This work adapts test suite evaluation methods to work with one
such test suite.
7. CONCLUSION AND FUTURE WORK
This paper demonstrates that mutation testing can be performed on embedded systems in
industry. DynaMut inserts runtime conditional mutations into a SUT, and we demonstrated how to
automate data collection using an existing proprietary test suite. Conditional mutation was used
to reduce the time and effort needed to perform this testing. The mutation testing was performed
on three tests chosen from a larger suite of tests. We estimate that the conditional mutation
technique saves between 48% and 67% of the time it would take to perform the testing with a more
traditional mutate-compile-test methodology.
The data was further analyzed to determine whether testing time could be reduced by sampling the
mutations tested rather than testing all of the covered mutations. Dithered sampling proved better
than either simple evenly-spaced sampling or random sampling in both efficiency and effectiveness.
The techniques used in this paper could be enhanced to further reduce testing costs. In future
work, we would use multiple test fixtures to test mutations in parallel, which would likely be an
effective way to reduce testing time. It would also be interesting to apply the dithered sampling
algorithm to larger data sets and to more applications to ascertain its relative effectiveness.
8. REFERENCES
[1] http://jester.sourceforge.net, September 2014.
[2] http://mutation-testing.org/, September 2014.
[3] http://nester.sourceforge.net, September 2014.
[4] http://pitest.org/, September 2014.
[5] http://www.keysight.com, September 2014.
[6] http://jumble.sourceforge.net/, January 2015.
[7] J. H. Andrews, L. C. Briand, and Y. Labiche. Is mutation an appropriate tool for testing experiments?
[software testing]. In Software Engineering, 2005. ICSE 2005. Proceedings. 27th International
Conference on, pages 402–411. IEEE, 2005.
[8] J. H. Andrews, L. C. Briand, Y. Labiche, and A. S. Namin. Using mutation analysis for assessing and
comparing testing coverage criteria. Software Engineering, IEEE Transactions on, 32(8):608–624,
2006.
[9] E. Bringmann and A. Krämer. Model-based testing of automotive systems. In Software Testing, Verification, and Validation, 2008 1st International Conference on, pages 485–493. IEEE, 2008.
[10] T. A. Budd. Mutation analysis of program test data. 1980.
[11] A. Causevic, D. Sundmark, and S. Punnekkat. An industrial survey on contemporary aspects of software testing. In Software Testing, Verification and Validation (ICST), 2010 Third International Conference on, pages 393–401. IEEE, 2010.
[12] D. Chen, N. Vachharajani, R. Hundt, S.-w. Liao, V. Ramasamy, P. Yuan, W. Chen, and W. Zheng. Taming hardware event samples for FDO compilation. In CGO ’10: Proceedings of the 8th annual IEEE/ACM international symposium on Code generation and optimization, pages 42–52, New York, NY, USA, 2010. ACM.
[13] R. A. DeMillo, R. J. Lipton, and F. G. Sayward. Hints on test data selection: Help for the practicing programmer. Computer, 11(4):34–41, 1978.
[14] H. Do and G. Rothermel. On the use of mutation faults in empirical assessments of test case prioritization techniques. Software Engineering, IEEE Transactions on, 32(9):733–752, 2006.
[15] R. G. Hamlet. Testing programs with the aid of a compiler. Software Engineering, IEEE Transactions on, (4):279–290, 1977.
[16] M. Holcomb. Anti-aliasing dithering method and apparatus for low frequency signal sampling, May 19 1992. US Patent 5,115,189.
[17] L. Inozemtseva and R. Holmes. Coverage is not strongly correlated with test suite effectiveness. In Proceedings of the 36th International Conference on Software Engineering, ICSE 2014, pages 435–445, New York, NY, USA, 2014. ACM.
[18] Y. Jia and M. Harman. Milu: A customizable, runtime-optimized higher order mutation testing tool for the full C language. In Practice and Research Techniques, 2008. TAIC PART’08. Testing: Academic & Industrial Conference, pages 94–98. IEEE, 2008.
[19] D. Jin, X. Qu, M. B. Cohen, and B. Robinson. Configurations everywhere: implications for testing and debugging in practice. In Companion Proceedings of the 36th International Conference on Software Engineering, pages 215–224. ACM, 2014.
[20] R. Just. The Major mutation framework: Efficient and scalable mutation analysis for Java. In Proceedings of the 2014 International Symposium on Software Testing and Analysis, pages 433–436. ACM, 2014.
[21] R. Just, D. Jalali, L. Inozemtseva, M. D. Ernst, R. Holmes, and G. Fraser. Are mutants a valid substitute for real faults in software testing? In Proceedings of the Symposium on the Foundations of Software Engineering (FSE), Hong Kong, November 18–20 2014.
[22] R. Just, G. M. Kapfhammer, and F. Schweiggert. Using conditional mutation to increase the efficiency of mutation analysis. In Proceedings of the International Workshop on Automation of Software Test (AST), pages 50–56, May 23–24 2011.
[23] R. Just, G. M. Kapfhammer, and F. Schweiggert. Using non-redundant mutation operators and test suite prioritization to achieve efficient and scalable mutation analysis. In Proceedings of the International Symposium on Software Reliability Engineering (ISSRE), pages 11–20, November 28–30 2012.
[24] G. Kaminski, P. Ammann, and J. Offutt. Better predicate testing. In Proceedings of the 6th International Workshop on Automation of Software Test, pages 57–63. ACM, 2011.
[25] A. J. Offutt, A. Lee, G. Rothermel, R. H. Untch, and C. Zapf. An experimental determination of sufficient mutant operators. ACM Transactions on Software Engineering and Methodology (TOSEM), 5(2):99–118, 1996.
[26] J. Offutt and N. Li. http://cs.gmu.edu/~offutt/mujava/, January 2015.
[27] L. Tan, J. Kim, O. Sokolsky, and I. Lee. Model-based testing and monitoring for hybrid embedded systems. In Information Reuse and Integration, 2004. IRI 2004. Proceedings of the 2004 IEEE International Conference on, pages 487–492. IEEE, 2004.
[28] K. Walcott-Justice, J. Mars, and M. L. Soffa. THeME: A system for testing by hardware monitoring events. In Proceedings of the 2012 International Symposium on Software Testing and Analysis, pages 12–22. ACM, 2012.
[29] Wessa. Kendall tau rank correlation (v1.0.11) in free statistics software (v1.1.23-r7). http://www.wessa.net/rwasp_kendall.wasp/, 2012.
[30] B. Widrow. Statistical analysis of amplitude-quantized sampled-data systems. American Institute of Electrical Engineers, Part II: Applications and Industry, Transactions of the, 79(6):555–568, 1961.
[31] A. Zeller. https://www.st.cs.uni-saarland.de/mutation/, January 2015.
[32] L. Zhang, M. Gligoric, D. Marinov, and S. Khurshid. Operator-based and random mutant selection: Better together. In Automated Software Engineering (ASE), 2013 IEEE/ACM 28th International Conference on, pages 92–102. IEEE, 2013.
[33] L. Zhang, S.-S. Hou, J.-J. Hu, T. Xie, and H. Mei. Is operator-based mutant selection superior to random mutant selection? In Proceedings of the 32nd ACM/IEEE International Conference on Software Engineering - Volume 1, pages 435–444. ACM, 2010.