Research Inventy : International Journal of Engineering and Science is published by a group of young academic and industrial researchers, with 12 issues per year. It is an open access journal, available online and in print, that provides rapid (monthly) publication of articles in all areas of the subject, such as civil, mechanical, chemical, electronic and computer engineering, as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers are published within 20 days of acceptance, and the peer review process takes only 7 days. All articles published in Research Inventy are peer-reviewed.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nanotechnology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
EXTRACTING THE MINIMIZED TEST SUITE FOR REVISED SIMULINK/STATEFLOW MODEL - ijaia
Test case generation techniques are successfully employed to generate test cases from a formal model. A problem is that as the model evolves, test suites tend to grow in size, making it too costly to execute entire test suites. This paper proposes a practical approach to reduce the size of test suites for a modified Simulink/Stateflow (SL/SF) model, a notation widely used for modeling software behavior in industries such as automobile manufacturing. The model describing a system is frequently modified until it is fixed. The proposed technique extracts a minimized test suite, in terms of test coverage, by taking into account both the modified and the affected portions of the revised SL/SF model. Two real models for ECUs deployed in a commercial car are used for an empirical study.
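The abstract does not spell out the selection algorithm; as a rough illustration of coverage-based suite minimization over the affected part of a model, here is a greedy set-cover sketch in Python (all names and data are hypothetical, not the paper's method):

```python
# Sketch: greedy coverage-based test suite minimization (illustrative only).
# Assumes each test case maps to the set of model elements it covers, and
# that 'affected' holds the blocks/states impacted by the model revision.

def minimize_suite(coverage, affected):
    """coverage: dict test_id -> set of covered model elements.
    affected: set of elements changed or impacted by the revision.
    Returns a small subset of tests that still covers every affected element
    reachable by the original suite (greedy set cover)."""
    remaining = set(affected) & set().union(*coverage.values())
    selected = []
    while remaining:
        # Pick the test that covers the most still-uncovered affected elements.
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break
        selected.append(best)
        remaining -= gained
    return selected

if __name__ == "__main__":
    coverage = {"t1": {"A", "B"}, "t2": {"B", "C"}, "t3": {"C", "D"}}
    print(minimize_suite(coverage, affected={"B", "C", "D"}))  # e.g. ['t2', 't3']
```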
Regression testing concentrates on finding defects after a major code change has occurred. Specifically, it exposes software regressions, or old bugs that have reappeared. It is an expensive testing process that has been estimated to account for almost half of the cost of software maintenance. To improve the regression testing process, test case prioritization techniques organize the execution order of test cases. This yields an improved rate of fault identification when test suites cannot run to completion.
A NOVEL APPROACH FOR TEST CASE PRIORITIZATION - IJCSEA Journal
This paper proposes a novel approach to test case prioritization that calculates the product of a test case's statement coverage and number of function calls to determine priority. The test cases are ordered based on this product metric, with the highest product value first. An algorithm is presented and evaluations show the approach improves fault detection rates over non-prioritized test cases, as measured by the APFD metric. The approach addresses potential ambiguities when multiple test cases have the same product value by further prioritizing based on number of function calls or execution order. The results demonstrate the effectiveness of the proposed prioritization formula and algorithm.
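A minimal sketch of the product-metric ordering described above (statement coverage times number of function calls, with function calls as the tie-breaker); the data values below are hypothetical, not taken from the paper:

```python
# Sketch: prioritize test cases by coverage * function-call count (illustrative).
# Each tuple: (test_id, statement_coverage_fraction, num_function_calls).
tests = [("t1", 0.60, 5), ("t2", 0.80, 3), ("t3", 0.60, 4), ("t4", 0.80, 4)]

def priority_key(t):
    _, coverage, calls = t
    # Higher product first; on ties, more function calls first (the paper's
    # tie-breaking rule); original execution order is preserved by stable sort.
    return (coverage * calls, calls)

prioritized = sorted(tests, key=priority_key, reverse=True)
print([t[0] for t in prioritized])  # ['t4', 't1', 't3', 't2']
```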
Software testing is an important activity of the software development process and the most effort-consuming phase of software development. One would like to minimize the effort and maximize the number of faults detected, and automated test case generation helps reduce cost and time. Hence, test case generation may be treated as an optimization problem. In this paper we use a genetic algorithm to optimize the test cases that are generated by applying condition coverage to the source code. Test case data generated automatically using the genetic algorithm are optimized and outperform the test cases generated by random testing.
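The abstract does not give the algorithm's details; the following is a small, self-contained sketch of the general idea, evolving integer inputs toward full condition coverage of a toy function (the fitness, operators and program under test are all assumptions for illustration, not the authors' setup):

```python
import random

# Toy program under test: record which condition outcomes were exercised.
def program_under_test(x, y, hits):
    if x > 10:
        hits.add("c1_true")
    else:
        hits.add("c1_false")
    if y % 2 == 0:
        hits.add("c2_true")
    else:
        hits.add("c2_false")
    if x > 10 and y < 0:
        hits.add("c3_true")
    else:
        hits.add("c3_false")

ALL_OUTCOMES = {"c1_true", "c1_false", "c2_true", "c2_false", "c3_true", "c3_false"}

def fitness(suite):
    """Fitness = number of condition outcomes covered by the whole test suite."""
    hits = set()
    for x, y in suite:
        program_under_test(x, y, hits)
    return len(hits)

def evolve(pop_size=30, suite_size=4, generations=50):
    pop = [[(random.randint(-20, 20), random.randint(-20, 20)) for _ in range(suite_size)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, suite_size - 1)
            child = a[:cut] + b[cut:]                      # one-point crossover
            if random.random() < 0.3:                      # mutation
                i = random.randrange(suite_size)
                child[i] = (random.randint(-20, 20), random.randint(-20, 20))
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    suite, score = evolve()
    print(score, "/", len(ALL_OUTCOMES), suite)
```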
Model based test case prioritization using neural network classification - cseij
Model-based testing of real-life software systems often requires a large number of tests, not all of which can be run exhaustively due to time and cost constraints. Thus, it is necessary to prioritize the test cases in accordance with the importance the tester perceives. In this paper, this problem is addressed by improving our previous study: a classification approach is applied to its results, and a functional relationship is established between test case prioritization group membership and two attributes, the importance index and the frequency, for all events belonging to a given group. For classification purposes, a neural network (NN), one of the most advanced approaches, is preferred, and a data set obtained from our study for all test cases is classified using a multilayer perceptron (MLP) NN. The classification results for a commercial test prioritization application show high classification accuracies of about 96%, and acceptable test prioritization performance is achieved.
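As a rough illustration of the classification step described above (not the authors' data or network configuration), an MLP can be trained on the two attributes, importance index and frequency, to predict the priority group; the data values below are made up:

```python
# Sketch: classify test cases into priority groups from two attributes
# (importance index, frequency) with a multilayer perceptron. Data is synthetic.
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical training data: [importance_index, frequency] -> priority group.
X = [[0.9, 12], [0.8, 10], [0.7, 9], [0.5, 6], [0.4, 5], [0.3, 4], [0.2, 2], [0.1, 1]]
y = ["high", "high", "high", "medium", "medium", "medium", "low", "low"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("new test case:", clf.predict([[0.85, 11]]))
```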
The paper presents a new language called UDITA for describing tests. UDITA is a Java-based language that includes non-deterministic choice operators and an interface for generating linked data structures. This allows for more efficient and effective test generation compared to previous approaches. The language aims to make test specification easier while generating tests that are faster, of higher quality, and less complex than traditional manually written or randomly generated tests.
Real-time implementation of a software system requires it to be versatile. In the maintenance phase, the modified system under regression testing must ensure that the existing system remains defect free. Test case prioritization in regression testing includes both code-based and model-based methods of prioritizing test cases. System-model-based test case prioritization can detect severe faults earlier than code-based test case prioritization. Model-based prioritization techniques based on requirements, applied in a cost-effective manner, have not been studied so far. Model-based testing is used to test the functionality of the software system based on requirements. An effective model-based approach is defined for prioritizing test cases and generating an effective test sequence. The test cases are rescheduled based on requirement analysis and user view analysis. Using a weighted approach, the overall cost of testing the functionality of the model elements is estimated. Here, a genetic approach is applied to generate efficient test paths. The regression cost, in terms of effort, is reduced under the model-based prioritization approach.
Prioritizing Test Cases for Regression Testing: A Model Based Approach - IJTET Journal
The document summarizes a model-based approach to prioritizing regression test cases. It involves generating test cases from UML models, prioritizing them based on the number of states and transitions covered, and clustering them by severity using a dendrogram approach. This helps decrease the time and cost of regression testing by focusing testing efforts on the most important and affected areas first. The proposed approach constructs models from requirements, identifies states, prioritizes flows, generates test cases, and prioritizes the test cases based on severity to improve regression testing efficiency.
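A minimal sketch of the coverage-count ordering described above (the dendrogram-based severity clustering is not reproduced); the model elements and test data are hypothetical:

```python
# Sketch: order test cases by how many UML states and transitions they cover.
tests = {
    "t1": {"states": {"Idle", "Auth"}, "transitions": {"login"}},
    "t2": {"states": {"Idle", "Auth", "Home"}, "transitions": {"login", "open"}},
    "t3": {"states": {"Idle"}, "transitions": set()},
}

def coverage_score(t):
    return len(tests[t]["states"]) + len(tests[t]["transitions"])

prioritized = sorted(tests, key=coverage_score, reverse=True)
print(prioritized)  # ['t2', 't1', 't3']
```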
Cause-Effect Graphing: Rigorous Test Case Design - TechWell
A tester’s toolbox today contains a number of test case design techniques—classification trees, pairwise testing, design of experiments-based methods, and combinatorial testing. Each of these methods is supported by automated tools. Tools provide consistency in test case design, which can increase the all-important test coverage in software testing. Cause-effect graphing, another test design technique, is superior from a test coverage perspective, reducing the number of test cases needed to provide excellent coverage. Gary Mogyorodi describes these black box test case design techniques, summarizes the advantages and disadvantages of each technique, and provides a comparison of the features of the tools that support them. Using an example problem, he compares the number of test cases derived and the test coverage obtained using each technique, highlighting the advantages of cause-effect graphing. Join Gary to see what new techniques you might want to add to your toolbox.
Guidelines to Understanding Design of Experiment and Reliability Prediction - ijsrd.com
This paper focuses on how to plan experiments effectively and how to analyse data correctly. Practical and correct methods for analysing data from life testing are also provided. The paper gives an extensive overview of reliability issues, definitions and prediction methods currently used in industry. It describes different methods and the correlations between them, in order to make it easier to compare reliability statements from different manufacturers, who may use different prediction methods and failure-rate databases. The paper finds, however, that such comparison is very difficult and risky unless the conditions behind the reliability statements are scrutinized and analysed in detail.
The document describes a regression test suite prioritization technique using the Hill Climbing algorithm. It aims to reorder test cases in a test suite to maximize fault detection within a given time limit. The technique records execution times of individual test cases and uses the Hill Climbing algorithm with parameters like the test suite, all possible permutations of test cases, time limit, and functions to calculate time and fitness. The algorithm iteratively makes small improvements to the ordering to find a local optimum that detects the most faults within the time limit. The technique intelligently considers both test execution time and fault detection information to effectively prioritize test cases.
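A compact sketch of hill climbing over test orderings under a time budget; the fitness below uses a historical fault matrix, which is an assumption for illustration, and the real technique's fitness and neighbourhood definition may differ:

```python
import random

# Sketch: hill climbing to reorder a test suite so that the number of faults
# detected within a time limit is maximized. Data values are hypothetical.
exec_time = {"t1": 3, "t2": 5, "t3": 2, "t4": 4}
detects   = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": {"f1", "f4"}, "t4": {"f5"}}
TIME_LIMIT = 8

def fitness(order):
    """Distinct faults detected by the tests that fit inside the time limit."""
    elapsed, found = 0, set()
    for t in order:
        if elapsed + exec_time[t] > TIME_LIMIT:
            break
        elapsed += exec_time[t]
        found |= detects[t]
    return len(found)

def hill_climb(order, iterations=200):
    best = list(order)
    for _ in range(iterations):
        i, j = random.sample(range(len(best)), 2)
        neighbour = list(best)
        neighbour[i], neighbour[j] = neighbour[j], neighbour[i]  # swap two tests
        if fitness(neighbour) > fitness(best):                   # keep improvements only
            best = neighbour
    return best

if __name__ == "__main__":
    result = hill_climb(list(exec_time))
    print(result, "faults within limit:", fitness(result))
```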
Test Case Optimization and Redundancy Reduction Using GA and Neural Networks - IJECEIAES
More than 50% of software development effort is spent in the testing phase of a typical software development project. Test case design as well as execution consume a lot of time, so automated generation of test cases is highly desirable. Here, a novel testing methodology is presented to test object-oriented software based on UML state chart diagrams. In this approach, a function minimization technique is applied to generate test cases automatically from UML state chart diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure the conformity of an application to its specification, a test “oracle” is needed to determine whether a given test case exposes a fault or not. An automated oracle to support the activities of human testers can reduce the actual cost of the testing process and the related maintenance costs. In this paper, a new concept is presented using a UML state chart diagram and tables for test case generation, with an artificial neural network as an optimization tool for reducing the redundancy in the test cases generated using the genetic algorithm. A neural network is trained by the backpropagation algorithm on a set of test cases applied to the original version of the system.
A software fault localization technique based on program mutations - Tao He
The document describes a new software fault localization technique called Muffler that uses program mutation analysis. Muffler aims to address the problem of coincidental correctness in existing coverage-based fault localization methods. It combines the Naish ranking function with a new metric called mutation impact, which measures the average number of test results that change from passed to failed when each statement is mutated. An empirical evaluation on seven programs shows that Muffler reduces the average code examination needed to find faults by 50% compared to the Naish technique.
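The abstract defines mutation impact as the average number of tests that flip from passing to failing when a statement is mutated; below is a toy computation of just that metric (the combination with the Naish ranking function is not reproduced, and all data are invented):

```python
# Sketch: compute mutation impact per statement from hypothetical mutant runs.
# For each statement, each mutant run records how many previously passing tests
# now fail; mutation impact is the mean of those counts over the statement's mutants.
mutant_flips = {
    "s1": [3, 2, 4],   # three mutants of statement s1 flipped 3, 2 and 4 tests
    "s2": [0, 1],
    "s3": [5],
}

def mutation_impact(flips_per_mutant):
    return sum(flips_per_mutant) / len(flips_per_mutant)

ranking = sorted(mutant_flips, key=lambda s: mutation_impact(mutant_flips[s]), reverse=True)
for s in ranking:
    print(s, round(mutation_impact(mutant_flips[s]), 2))
# s3 5.0, s1 3.0, s2 0.5 -- statements with higher impact are examined first
```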
Software testing effort estimation with Cobb-Douglas function a practical app... - eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
A Review on Parameter Estimation Techniques of Software Reliability Growth Mo... - Editor IJCATR
Software reliability is considered a quantifiable metric, defined as the probability that software will operate without failure for a specified period of time in a specific environment. Various software reliability growth models have been proposed to predict the reliability of software. These models help vendors predict the behaviour of the software before shipment. The reliability is predicted by estimating the parameters of the software reliability growth models. However, the model parameters are generally in nonlinear relationships, which creates many problems in finding the optimal parameters using traditional techniques such as Maximum Likelihood and Least Squares Estimation. Various stochastic search algorithms have been introduced that make the task of parameter estimation more reliable and computationally easier. Parameter estimation of NHPP-based reliability models, using MLE and using an evolutionary search algorithm called Particle Swarm Optimization, is explored in the paper.
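As a rough, self-contained sketch of the idea (not the paper's setup), a tiny particle swarm can fit the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to cumulative failure counts by minimizing squared error; the data, bounds and swarm settings below are made up:

```python
import math, random

# Hypothetical cumulative failure data: (time, cumulative failures observed).
data = [(1, 8), (2, 15), (3, 20), (4, 24), (5, 27), (6, 29)]

def mean_value(a, b, t):
    """Goel-Okumoto mean value function m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def sse(params):
    a, b = params
    return sum((mean_value(a, b, t) - n) ** 2 for t, n in data)

def pso(num_particles=20, iterations=200):
    # Particles search over (a, b); the bounds below are assumptions for toy data.
    pos = [[random.uniform(1, 100), random.uniform(0.01, 2)] for _ in range(num_particles)]
    vel = [[0.0, 0.0] for _ in range(num_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sse)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iterations):
        for i in range(num_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i][0] = min(max(pos[i][0], 1.0), 1000.0)   # keep a in a sane range
            pos[i][1] = min(max(pos[i][1], 1e-4), 5.0)     # keep b positive and bounded
            if sse(pos[i]) < sse(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=sse)
    return gbest

if __name__ == "__main__":
    a, b = pso()
    print("a =", round(a, 2), "b =", round(b, 3), "SSE =", round(sse((a, b)), 2))
```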
A study on the efficiency of a test analysis method utilizing test-categories... - Tsuyoshi Yumoto
This document describes a study on improving the efficiency of test analysis through utilizing test categories based on application under test (AUT) knowledge and known faults. The study proposes a method for defining test categories based on logical structures of features to guide test condition determination. A verification experiment was conducted and showed measurable improvement in test coverage when using the proposed method. The method aims to minimize variability in test analysis results by providing a standardized process for testers to follow.
In this quality assurance training, you will learn Test Case Design and Technique. Topics covered in this session are:
• Test Case Design and Techniques
• Black-box: Three major approaches
• Steps for drawing a Cause-Effect Diagram
• Behavior Testing
• Random Testing
• White Box Techniques
• Path Testing
• Statement Coverage
• Data Flow Testing
For more information, visit this link: https://www.mindsmapped.com/courses/quality-assurance/software-testing-training-beginners-and-intermediate-level/
In this quality assurance training session, you will learn Test case design. Topics covered in this course are:
• Test Case Design and Techniques
• Black-box: Three major approaches
• Steps for drawing a Cause-Effect Diagram
• Behavior Testing
• Random Testing
• White Box Techniques
• Path Testing
• Statement Coverage
• Data Flow Testing
To know more, visit this link: https://www.mindsmapped.com/courses/quality-assurance/software-testing-quality-assurance-qa-training-with-hands-on-exercises/
In this session you will learn:
Test Case Design and Techniques
Black-box: Three major approaches
Steps for drawing a Cause-Effect Diagram
Behavior Testing
Random Testing
White Box Techniques
Path Testing
Statement Coverage
Data Flow Testing
For more information: https://www.mindsmapped.com/courses/quality-assurance/qa-software-testing-training-for-beginners/
Software testing defect prediction model a practical approach - eSAT Journals
Abstract: Software defect prediction aims to reduce software testing effort by guiding testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality and testing, and better plan resources to meet timelines. Applying a statistical software testing defect prediction model in a real-life setting is extremely difficult because it requires a large number of data variables and metrics, as well as historical defect data, to predict the next releases or new projects of a similar type. This paper explains our statistical model and how it accurately predicts the defects for upcoming software releases or projects. We used 20 past release data points of a software project and 5 parameters, and built a model by applying descriptive statistics, correlation and multiple linear regression with 95% confidence intervals (CI). In this multiple linear regression model, the R-squared value was 0.91 and its standard error was 5.90%. The software testing defect prediction model is now being used to predict defects for various testing projects and operational releases. We found 90.76% precision between actual and predicted defects.
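This is not the authors' data or choice of variables, but a minimal sketch of fitting such a multiple linear regression defect predictor and reporting R-squared in Python:

```python
# Sketch: multiple linear regression defect prediction (synthetic release data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Hypothetical per-release predictors, e.g. size (KLOC), churn, test cases,
# review hours, team size -- five parameters as in the paper, values invented.
X = rng.uniform(0, 100, size=(20, 5))
true_coeffs = np.array([0.8, 0.3, -0.2, -0.1, 0.5])
y = X @ true_coeffs + 10 + rng.normal(0, 5, size=20)   # "actual" defect counts

model = LinearRegression().fit(X, y)
predicted = model.predict(X)
print("R-squared:", round(r2_score(y, predicted), 3))
print("predicted defects for a new release:",
      round(model.predict([[50, 20, 30, 10, 6]])[0], 1))
```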
A Test Analysis Method for Black Box Testing Using AUT and Fault Knowledge - Tsuyoshi Yumoto
With the rapid increase in the size and complexity of software today, the scope of software testing is also expanding. The efficiency of software testing needs to be improved in order to meet the delivery deadline and cost of software development. To improve the efficiency of software testing, the test needs to be designed so that the number of test cases is sufficient and appropriate in quantity. Test analysis is the activity of refining the Application Under Test (AUT) into pieces of a proper size to which test design techniques can be applied; it is essential for designing the test properly. However, what counts as a proper size depends on individuals' own judgment. This paper proposes a test analysis method for black box testing using test categories, a classification based on fault and AUT knowledge.
Software Quality Assurance (SQA) teams play a critical role in the software development process to ensure the absence of software defects. It is not feasible to perform exhaustive SQA tasks (i.e., software testing and code review) on a large software product given the limited SQA resources that are available. Thus, the prioritization of SQA effort is an essential step in all SQA activities. Defect prediction models are used to prioritize risky software modules and understand the impact of software metrics on the defect-proneness of software modules. The predictions and insights that are derived from defect prediction models can help software teams allocate their limited SQA resources to the modules that are most likely to be defective and avoid common past pitfalls that are associated with the defective modules of the past. However, the predictions and insights that are derived from defect prediction models may be inaccurate and unreliable if practitioners do not control for the impact of experimental components (e.g., datasets, metrics, and classifiers) on defect prediction models, which could lead to erroneous decision-making in practice. In this thesis, we investigate the impact of experimental components on the performance and interpretation of defect prediction models. More specifically, we investigate the impact that three often overlooked experimental components (i.e., issue report mislabelling, parameter optimization of classification techniques, and model validation techniques) have on defect prediction models. Through case studies of systems that span both proprietary and open-source domains, we demonstrate that (1) issue report mislabelling does not impact the precision of defect prediction models, suggesting that researchers can rely on the predictions of defect prediction models that were trained using noisy defect datasets; (2) automated parameter optimization for classification techniques substantially improves the performance and stability of defect prediction models and changes their interpretation, suggesting that researchers should no longer shy away from applying parameter optimization to their models; and (3) the out-of-sample bootstrap validation technique produces a good balance between bias and variance of performance estimates, suggesting that the single holdout and cross-validation families that are commonly used nowadays should be avoided.
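For the third finding, here is a short sketch of out-of-sample bootstrap validation of a defect prediction model; the classifier choice, metrics and data are placeholders, not those of the thesis:

```python
# Sketch: out-of-sample bootstrap validation -- train on a bootstrap sample,
# evaluate on the rows not drawn into that sample, and average over repetitions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                                    # hypothetical module metrics
y = (X[:, 0] + X[:, 1] + rng.normal(size=200) > 0).astype(int)   # defective or not

aucs = []
for _ in range(100):
    idx = rng.integers(0, len(X), size=len(X))     # bootstrap sample (with replacement)
    oob = np.setdiff1d(np.arange(len(X)), idx)     # out-of-sample (out-of-bag) rows
    if len(oob) == 0 or len(np.unique(y[oob])) < 2:
        continue
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y[oob], clf.predict_proba(X[oob])[:, 1]))

print("mean out-of-sample AUC:", round(float(np.mean(aucs)), 3))
```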
This document outlines 11 principles of software testing:
1. Testing is the process of executing software with test cases to reveal defects and evaluate quality. Testers detect defects before software is operational.
2. A good test case has a high probability of revealing undetected defects when the objective is defect detection.
3. Testers must meticulously inspect and interpret test results to avoid overlooking failures or suspecting failures that don't exist.
Parameter Estimation of Software Reliability Growth Models Using Simulated An... - Editor IJCATR
The parameter estimation of the Goel-Okumoto model is performed using simulated annealing. The Goel-Okumoto model is based on the exponential model and is a simple non-homogeneous Poisson process (NHPP) model. Simulated annealing is a heuristic optimization technique that provides a way to escape local optima. The data set is optimized using the simulated annealing technique. SA is a stochastic algorithm with better performance than the Genetic Algorithm (GA), and it depends on the specification of the neighbourhood structure of a state space and the parameter settings for its cooling schedule.
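A minimal simulated annealing sketch for the same Goel-Okumoto fit; the neighbourhood, starting point and cooling schedule below are arbitrary choices for illustration (reusing the mean value function and hypothetical data style from the PSO sketch above), not the paper's configuration:

```python
import math, random

data = [(1, 8), (2, 15), (3, 20), (4, 24), (5, 27), (6, 29)]   # hypothetical

def sse(a, b):
    return sum((a * (1 - math.exp(-b * t)) - n) ** 2 for t, n in data)

def simulated_annealing(iterations=5000, temp=100.0, cooling=0.999):
    a, b = 30.0, 0.5                                  # arbitrary starting point
    best = (a, b, sse(a, b))
    for _ in range(iterations):
        # Neighbourhood: small random perturbation of the current parameters.
        na = max(1e-3, a + random.gauss(0, 1.0))
        nb = max(1e-3, b + random.gauss(0, 0.05))
        delta = sse(na, nb) - sse(a, b)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            a, b = na, nb                             # accept (possibly worse) move
            if sse(a, b) < best[2]:
                best = (a, b, sse(a, b))
        temp *= cooling                               # cooling schedule
    return best

if __name__ == "__main__":
    a, b, err = simulated_annealing()
    print("a =", round(a, 2), "b =", round(b, 3), "SSE =", round(err, 2))
```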
This document discusses black box testing techniques. It defines black box testing as testing that ignores internal mechanisms and focuses on inputs and outputs. Six common black box testing techniques are described: equivalence partitioning, boundary value analysis, cause-effect graphing, decision table-based testing, orthogonal array testing, and syntax-driven testing. The document provides examples of how to use these techniques to design test cases to uncover faults.
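For instance, equivalence partitioning and boundary value analysis can be sketched mechanically for a numeric input field; the valid range below is a made-up example, not taken from the document:

```python
# Sketch: derive test values for a numeric field via equivalence partitioning
# and boundary value analysis (valid range is a hypothetical 1..100).
LOW, HIGH = 1, 100

equivalence_classes = {
    "below_valid": LOW - 10,       # representative of the invalid "too small" class
    "valid": (LOW + HIGH) // 2,    # representative of the valid class
    "above_valid": HIGH + 10,      # representative of the invalid "too large" class
}

boundary_values = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

print("equivalence class representatives:", equivalence_classes)
print("boundary values:", boundary_values)
```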
Research Inventy : International Journal of Engineering and Science - inventy
The document analytically represents the torque characteristics of a repulsion motor based on basic magnetism theory. It develops mathematical expressions to calculate torque by considering the forces of attraction and repulsion between the stator and rotor poles. The torque is modeled as a function of the angle between the stator field axis and rotor brush axis. Three models of increasing complexity are presented to predict torque values for different angular displacements. Analytical results show good agreement with practical motor torque curves. The paper establishes a method to determine torque production in repulsion motors using fundamental magnetic principles.
The document summarizes research on an improved interleaved buck converter that aims to achieve low switching losses and a high step-down conversion ratio. It proposes a new converter topology that uses two active switches connected in series with a coupling capacitor. This reduces the voltage stress across the switches, lowering capacitive discharging and switching losses. Analysis shows the new topology lowers losses when the operating duty cycle is below 50%. Experimental results from prototypes validating the design are also presented.
Some new exact solutions for the nonlinear Schrödinger equation - inventy
Probabilistic seismic hazard assessment in the vicinity of MBT and MCT in wes...inventy
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Design and Cost Optimization of Plate Heat Exchangerinventy
Research Inventy : International Journal of Engineering and Science is published by the group of young academic and industrial researchers with 12 Issues per year. It is an online as well as print version open access journal that provides rapid publication (monthly) of articles in all areas of the subject such as: civil, mechanical, chemical, electronic and computer engineering as well as production and information technology. The Journal welcomes the submission of manuscripts that meet the general criteria of significance and scientific excellence. Papers will be published by rapid process within 20 days after acceptance and peer review process takes only 7 days. All articles published in Research Inventy will be peer-reviewed.
Configuration Navigation Analysis Model for Regression Test Case Prioritizationijsrd.com
Regression testing has been receiving increasing attention nowadays. Numerous regression testing strategies have been proposed. Most of them take into account various metrics like cost as well as the ability to find faults quickly thereby saving overall testing time. In this paper, a new model called the Configuration Navigation Analysis Model is proposed which tries to consider all stakeholders and various testing aspects while prioritizing regression test cases.
Abstract—Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. To date, most research work in combinatorial testing has aimed to propose novel approaches for generating test suites of minimum size that still cover all the pairwise, triple, or n-way combinations of factors. Since this problem has been shown to be NP-hard, existing approaches are designed to generate optimal or near-optimal combinatorial test suites in polynomial time. In this paper, we apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (a special case of combinatorial testing that aims to cover all pairwise combinations). To systematically build pairwise test suites, we propose two different PSO-based algorithms: one based on a one-test-at-a-time strategy and the other on an IPO-like strategy. In both algorithms, PSO is used to construct a single test. To apply PSO successfully so that each constructed test covers more uncovered pairwise combinations, we give a detailed description of how to formulate the search space, define the fitness function and choose several heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and select some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare it to other well-known approaches. The empirical results show the effectiveness and efficiency of our approach.
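As a rough illustration of the one-test-at-a-time idea described above, the following Python sketch uses a simplified, discrete PSO-style update to build a single test that covers as many still-uncovered pairwise combinations as possible. It is a toy formulation under assumed names (levels, uncovered, pso_build_test), not the authors' exact algorithm.

import random

def uncovered_pairs_covered(test, uncovered):
    # Count how many still-uncovered pairs (factor_i, value_i, factor_j, value_j)
    # this candidate test would cover.
    return sum(1 for (i, vi, j, vj) in uncovered if test[i] == vi and test[j] == vj)

def pso_build_test(levels, uncovered, particles=20, iterations=50):
    # levels[i] is the number of possible values for factor i.
    k = len(levels)
    swarm = [[random.randrange(levels[i]) for i in range(k)] for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    pbest_fit = [uncovered_pairs_covered(p, uncovered) for p in swarm]
    g = max(range(particles), key=lambda n: pbest_fit[n])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iterations):
        for n, p in enumerate(swarm):
            for i in range(k):
                r = random.random()
                if r < 0.4:
                    p[i] = pbest[n][i]                 # pull toward personal best
                elif r < 0.8:
                    p[i] = gbest[i]                    # pull toward global best
                elif r < 0.9:
                    p[i] = random.randrange(levels[i]) # random exploration
            fit = uncovered_pairs_covered(p, uncovered)
            if fit > pbest_fit[n]:
                pbest[n], pbest_fit[n] = p[:], fit
                if fit > gbest_fit:
                    gbest, gbest_fit = p[:], fit
    return gbest

In a one-test-at-a-time loop one would call pso_build_test repeatedly, removing the pairs covered by each returned test from uncovered, until no pairs remain.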
International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document discusses test case prioritization, which aims to execute test cases in an order that increases their effectiveness at detecting faults. It describes regression testing and different prioritization techniques, including genetic algorithms. Genetic algorithms represent test cases as chromosomes and use selection, crossover and mutation to evolve solutions. The document concludes that prioritization techniques can improve fault detection rates but their effectiveness varies across programs and test suites, so practitioners must choose techniques carefully for different testing scenarios.
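To make the genetic-algorithm mechanics mentioned above concrete, here is a minimal, hypothetical Python sketch that evolves test orderings with order crossover and swap mutation; the fitness callback (for example, APFD computed from a known fault matrix) is supplied by the caller. The names evolve_order and order_crossover are my own and this is an illustration of the general idea, not the specific technique summarized in the document.

import random

def order_crossover(a, b):
    # Classic OX: copy a slice from parent a, fill the rest in parent b's order.
    i, j = sorted(random.sample(range(len(a)), 2))
    slice_ = a[i:j]
    rest = [t for t in b if t not in slice_]
    return rest[:i] + slice_ + rest[i:]

def evolve_order(tests, fitness, pop_size=30, generations=100, p_mut=0.2):
    # Chromosomes are permutations of test ids; assumes pop_size >= 10 and len(tests) >= 2.
    pop = [random.sample(tests, len(tests)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                          # elitism: keep the two best orderings
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)  # mate among the fitter individuals
            child = order_crossover(a, b)
            if random.random() < p_mut:        # swap mutation
                i, j = random.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)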
TEST CASE PRIORITIZATION FOR OPTIMIZING A REGRESSION TESTijfcstjournal
Regression testing makes sure that upgrading software, whether to add new features or to fix bugs, does not hamper previously working functionalities. Whenever software is upgraded or modified, a set of test cases is run on each of its functions to assure that the change to that function does not affect other parts of the software that were previously running flawlessly. To achieve this, all existing test cases need to be run, and new test cases might also have to be created. It is not feasible to re-execute every test case for all the functions of a given piece of software, because running a large number of test cases requires a lot of time and effort. This problem can be addressed by prioritizing test cases. A test case prioritization technique reorders the sequence in which test cases are executed, in an attempt to ensure that the maximum number of faults is uncovered early by the high-priority test cases executed first. In this paper we propose an optimized test case prioritization technique using Ant Colony Optimization (ACO) to reduce the cost, effort and time taken to perform regression testing while uncovering the maximum number of faults. A comparison of techniques such as Retest All, Test Case Minimization, Test Case Prioritization, Random Test Case Selection and Test Case Prioritization using ACO is also presented.
Code coverage based test case selection and prioritizationijseajournal
This document describes an approach for code coverage based test case selection and prioritization. It presents algorithms for test case selection (TCS) and test case prioritization (TCP) that aim to select a minimal set of test cases that achieve maximum code coverage. The TCS algorithm analyzes test cases and statements to cluster them into outdated, required and surplus test cases. The TCP algorithm then prioritizes the required test cases based on their individual statement coverage to achieve full code coverage with as few test cases as possible. An example application of the algorithms on a sample program is provided, demonstrating a reduction in test suite size.
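A minimal greedy sketch of this kind of coverage-based selection and prioritization (my own simplification, not the paper's exact TCS/TCP algorithms): repeatedly pick the test case that covers the most statements not yet covered, until full coverage is reached; the remaining tests add no new coverage and are treated as surplus.

def greedy_select_and_prioritize(coverage):
    # coverage: dict test_id -> set of covered statement ids
    remaining = set().union(*coverage.values()) if coverage else set()
    order, pool = [], dict(coverage)
    while remaining and pool:
        best = max(pool, key=lambda t: len(pool[t] & remaining))
        if not pool[best] & remaining:
            break                      # no remaining test adds new coverage
        order.append(best)
        remaining -= pool.pop(best)
    surplus = list(pool)               # tests adding no new coverage; candidates to drop
    return order, surplus

The greedy heuristic does not guarantee a minimum-size suite, but it is a common baseline for coverage-driven selection.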
A Complexity Based Regression Test Selection StrategyCSEIJJournal
Software is unequivocally the foremost and indispensable entity in this technologically driven world. Therefore quality assurance, and in particular software testing, is a crucial step in the software development cycle. This paper presents an effective test selection strategy that uses a Spectrum of Complexity Metrics (SCM). Our aim is to increase the efficiency of the testing process by significantly reducing the number of test cases without a significant drop in test effectiveness. The strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class, method, statement) and its characteristics. We use a series of experiments based on three applications with a significant number of mutants to demonstrate the effectiveness of our selection strategy. For further evaluation, we compare our approach to boundary value analysis. The results show the capability of our approach to detect mutants as well as the seeded errors.
LusRegTes: A Regression Testing Tool for Lustre Programs IJECEIAES
This document summarizes a tool called LusRegTes that was developed to automate regression testing of Lustre programs. Lustre is a synchronous data-flow language used for safety-critical applications. The tool represents Lustre programs as operator networks and generates regression test cases by comparing paths between program versions. It aims to select an optimal minimum set of tests to validate changes while reducing testing costs. The approach was implemented in a tool to automate the regression testing process for Lustre programs.
One of the obstacles that hinder the adoption of mutation testing is its impracticality; two main contributors to this are the large number of mutants and the large number of test cases involved in the process. Researchers usually try to address this problem by optimizing the mutants and the test cases separately. In this research, we tackle the optimization of mutants and test cases simultaneously using a coevolution optimization method. Coevolution is chosen for the mutation testing problem because it works by optimizing multiple collections (populations) of solutions. This research found that coevolution is better suited to multi-problem optimization than other single-population methods (e.g. a Genetic Algorithm), and we also propose a new indicator to determine the optimal coevolution cycle. The experiments cover an artificial case, a laboratory case, and a real case.
Software Cost Estimation Using Clustering and Ranking SchemeEditor IJMTER
Software cost estimation is an important task in the software design and development process. Planning and budgeting tasks are carried out with reference to the software cost values. A variety of software properties are used in the cost estimation process, including hardware, product, technology and methodology factors. The quality of software cost estimation is measured with reference to its accuracy levels.
Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models and machine learning models. Each model has a set of techniques for the software cost estimation process; 11 cost estimation techniques under these 3 categories are used in the system. The Attribute Relational File Format (ARFF) is used to maintain the software product property values, and the ARFF file is the main input for the system.
The proposed system is designed to perform clustering and ranking of software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results on software cost estimation methods.
QUALITY METRICS OF TEST SUITES IN TESTDRIVEN DESIGNED APPLICATIONSijseajournal
New techniques for writing and developing software have evolved in recent years. One is Test-Driven Development (TDD), in which tests are written before code: no code should be written without first having a test to execute it. Thus, in terms of code coverage, the quality of test suites written using TDD should be high.
In this work, we analyze applications written using TDD and traditional techniques. Specifically, we examine the quality of the associated test suites based on two quality metrics: 1) a structure-based criterion, and 2) a fault-based criterion. We find that test suites with high branch test coverage also have high mutation scores, and this holds especially for the TDD applications. We found that Test-Driven Development is an effective approach that improves the quality of the test suite, covering more of the source code and also revealing more faults.
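For reference, the mutation-score metric mentioned above is simply the fraction of generated mutants killed by the test suite. A minimal sketch, assuming a run_suite callback (a hypothetical name) that returns True when some test fails on a mutant, i.e. kills it:

def mutation_score(mutants, run_suite):
    # mutants: iterable of mutant programs; run_suite(m) -> True if the suite kills m
    killed = sum(1 for m in mutants if run_suite(m))
    return killed / len(mutants) if mutants else 0.0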
This document discusses the theory of software testing. It covers several key topics:
1) It identifies five common problems in software testing like limitations of testing teams and issues with manual testing.
2) It describes different testing processes like verification, validation, white-box testing and black-box testing.
3) It outlines three main phases of software testing - preliminary testing, testing, and user acceptance testing - to evaluate a new software system and identify any issues.
Towards formulating dynamic model for predicting defects in system testing us...Journal Papers
This document discusses developing a dynamic model for predicting defects in system testing using metrics collected from prior phases. It begins with background on the waterfall and V-model software development processes. It then reviews previous research on software defect prediction, noting that limited work has focused specifically on predicting defects in system testing. The proposed model would analyze metrics collected during the requirements, design, coding, and testing phases to determine which metrics best predict defects found in system testing. A case study is discussed that would apply statistical analysis to historical metrics data to formulate a mathematical equation for defect prediction. The model would then be verified by applying it to new projects and comparing predicted defects to actual defects found during system testing. The goal is to select a prediction model that estimates defects in system testing as accurately as possible.
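One simple way to formulate such an equation from historical phase metrics is ordinary least squares; the sketch below (using numpy, with hypothetical function names, and not necessarily the statistical treatment used in the study) fits a linear model mapping phase metrics to defects found in system testing.

import numpy as np

def fit_defect_model(X, y):
    # X: 2-D array, one row of phase metrics per historical project
    # y: 1-D array, defects found in system testing for each project
    X1 = np.hstack([np.ones((X.shape[0], 1)), X])   # add an intercept column
    coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coeffs                                    # [intercept, w1, w2, ...]

def predict_defects(coeffs, x_new):
    # x_new: 1-D array of phase metrics for a new project
    return float(coeffs[0] + np.dot(coeffs[1:], x_new))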
The document presents a new approach called Regression Test Suite Minimization Using Dynamic Interaction Patterns with Improved FDE that aims to reduce the size of regression test suites while maintaining fault detection ability. It does this by constructing a dynamic dependence graph to identify interaction patterns between transitions in an Extended Finite State Machine model, allowing it to consider repeated dependencies. An example of applying the approach to a banking system model is provided to motivate the method.
This study empirically evaluated the use of mutation testing to improve test quality for safety-critical software. Mutation testing was applied to two airborne software systems developed using rigorous development processes and guidelines. The study identified an effective subset of mutant types, categorized the root causes of test failures, and examined relationships between program characteristics and mutation survival. The results showed how mutation testing could be useful for finding issues missed by traditional structural coverage analysis and manual peer review. Industry feedback also provided perspectives on integrating mutation testing into typical verification lifecycles for airborne software.
This document presents a method for prioritizing test cases for event-driven software using a genetic algorithm. It proposes a single abstract model that can test both graphical user interface (GUI) and web applications. Test cases are executed and assigned a fitness value, then stored in a training database. When test cases have equal fitness values, prioritization criteria like fault detection, time, and code coverage are applied using the genetic algorithm to determine the optimal testing order. The approach was experimentally tested on small GUI and web apps and showed a reduction in latency time compared to other techniques. Future work could involve applying the algorithm to larger real-world software.
Software Engineering and Project Management - Software Testing + Agile Method...Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object-Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Research Inventy: International Journal of Engineering And Science
Vol. 4, Issue 7 (July 2014), PP 54-64
ISSN (e): 2278-4721, ISSN (p): 2319-6483, www.researchinventy.com
A Regression Test Case Selection and Prioritization for Object-Oriented Programs using Dependency Graph and Genetic Algorithm
Samaila Musa
ABSTRACT: Regression testing is a very important activity in software testing, but re-executing all test cases during regression testing is costly. Effective and efficient selection of test cases from the existing test suite therefore becomes a critical issue in regression testing. This paper presents an evolutionary regression test case prioritization approach for object-oriented software, based on dependence graph model analysis of the affected program using a Genetic Algorithm (GA). The approach optimizes the test cases selected from test suite T. The goal is to identify changes in a method's body due to data dependence, control dependence and dependences arising from object relations such as inheritance and polymorphism, select test cases based on the affected statements, and order them by fitness using the GA. The number of affected statements a test case exercises determines how fit it is for regression testing. A case study is reported to provide evidence of the feasibility of the approach and its benefits in increasing the rate of fault detection and reducing regression testing effort. The goodness of the resulting ordering is measured with the Average Percentage of Faults Detected (APFD) metric to evaluate the effectiveness and efficiency of the approach. The results show that the proposed approach is efficient and effective in regression testing.
KEYWORDS: Regression testing, regression test case, evolutionary algorithm, genetic algorithm, regression test case prioritization, system dependence graph.
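For reference, APFD is conventionally computed from the position at which each fault is first detected in the prioritized order, as APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where TFi is the index of the first test revealing fault i, n is the number of tests and m the number of faults. The following Python sketch implements this standard formula; it is a generic illustration, not tied to this paper's tooling, and assumes every fault is detected by some test in the ordering.

def apfd(order, detects):
    # order:   prioritized list of test ids
    # detects: dict test_id -> set of fault ids that test detects
    # Assumes at least one fault exists and every fault is found by a test in 'order'.
    faults = set().union(*detects.values())
    n, m = len(order), len(faults)
    first_pos = {}
    for pos, t in enumerate(order, start=1):
        for f in detects.get(t, ()):
            first_pos.setdefault(f, pos)
    tf_sum = sum(first_pos[f] for f in faults)
    return 1.0 - tf_sum / (n * m) + 1.0 / (2 * n)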
I. INTRODUCTION
Software maintenance is an expensive phase, accounting for nearly 60% of the total cost of software production [17]. Regression testing is an important part of software maintenance: it ensures that modifications made for debugging or improvement affect neither the existing functionalities nor the initial requirements of the design [18], and it consumes almost 80% of the overall testing budget and up to 50% of the cost of software maintenance [19]. Regression testing is normally conducted after software is changed, and it helps to confirm both that the existing functionalities still work and that the initial requirements of the design are still satisfied. Regression test selection is the activity of selecting, from an existing test suite, the test cases that need to be rerun to ensure that the modified parts behave as intended and that the modifications have not introduced unintended faults. Prioritizing the test cases used to test the modified program further reduces the cost associated with regression testing.
The main objective of regression test selection is to identify the test cases that exercise the modified parts of the software. The challenge in regression testing is then to prioritize the selected test cases by identifying the best test cases among them; selecting good test cases reduces execution time and maximizes fault-detection coverage. A regression test selection technique helps in selecting such a subset of test cases from the test suite. The easiest strategy is for the tester to simply execute all of the existing test cases to ensure that the new changes are harmless; this is referred to as the retest-all method [10]. It is the safest technique, but it is practical only if the test suite is small. Test cases can instead be selected at random to reduce the size of the test suite, but randomly selected test cases may exercise only small parts of the modified software, or may have no relation to the modification at all. Regression test selection techniques are an alternative approach.
Problem definition:
Let P be a certified program tested with test suite T, and P’ be a modified program of P. During
regression testing of P’, T and information about the testing of P with T are available for use in testing P’.
To solve the above problem, Rothermel and Harrold [19] have outlined a typical selective retest technique that:
- Identifies changes made to P by creating a mapping of the changes between P and P'.
- Uses the result of the above step to select a set T' ⊆ T that may reveal change-related faults in P'.
- Uses T' to test P', to establish the correctness of P' with respect to T'.
- Identifies any parts of the system that have not been tested adequately and generates a new set of test cases T''.
- Uses T'' to test P', to establish the correctness of P' with respect to T''.
- Creates T''', a new test suite and test history for P', from T, T', and T''.
Regression testing approaches can be based on source code (code-based) or on design (design-based), and many of both kinds have been proposed. The safer and easier approaches are those that generate the model directly from the source code of the software. Researchers have proposed many code-based approaches [9,19,20] that identify modifications at the source code level. Other researchers [1,2,3,6,7,8,10,12] address the issues of object-oriented programming, but further research is needed that uses the basic object-oriented features (such as inheritance and polymorphism) as a basis for identifying changes. Various approaches [5,13,14,15,16] address prioritization using heuristics that may not produce optimal solutions. An approach for test case prioritization based on the analysis of a dependence model was presented in [13,14]. The authors construct a dependence model of a program from its source code, and when the program is modified, the model is updated to reflect the changes. The affected nodes are determined by performing forward slices using the changed nodes as the slicing criterion, and the test cases that cover the affected nodes are selected for regression testing. The selected test cases are then prioritized by assigning initial weights, and the weights are used as the basis for prioritization; this may result in the selection of test cases that are not very relevant, which increases regression testing time. In [15], a requirement-based, system-level, factor-oriented test case prioritization using a GA (PFRevSevere) was proposed in order to reveal more severe faults at an earlier stage. The approaches proposed in [5,16] prioritize based on the rate of faults detected and the impact of the faults. A test suite reduction approach that selects a subset of test cases exercising the changed requirements was proposed in [6]; its performance was evaluated using the percentage of requirement coverage, and the results showed an improvement over the state of the art.
In this paper we present an evolutionary prioritization approach that selects the best test cases from the existing test suite T used to test the original program P, using a dependence graph [11] as an intermediate representation to identify the changes in P at statement level. Identifying changes with this kind of graph leads to precise detection of changes. The changed statements are used to identify the affected statements, and the test cases that execute the affected statements are selected for regression testing. The selected test cases are then prioritized using a genetic algorithm in order to achieve a higher rate of fault detection. This approach reduces the cost of regression testing by increasing the rate of fault detection and reducing the number of test cases used to test the modified program. The rest of this paper is organized as follows. In the next section, we describe the Extended System Dependence Graph (ESDG). Section 3 introduces the GA. Section 4 describes the Average Percentage of Faults Detected (APFD) metric. In Section 5, we present our test case selection and prioritization technique. Section 6 presents the results and discussion. We conclude the paper in Section 7.
II. EXTENDED SYSTEM DEPENDENCE GRAPH
In this section, we describe the dependency graph based on the approach presented in [11]. The ESDG is used to model object-oriented programs and is an extension of the System Dependence Graph (SDG) [8] used to model procedural programs.
The Extended System Dependence Graph [11] represents control and data dependencies as well as information pertaining to the various dependencies arising from object relations such as association, inheritance, and polymorphism. Statement-level analysis with the ESDG model helps in identifying changes at the level of basic simple statements, simple method call statements, and polymorphic method calls.
The ESDG is a directed, connected graph G = (V, E) that consists of a set V of vertices and a set E of edges.
ESDG Vertices: A vertex v is one of four types: statement, entry, parameter, or polymorphic choice vertex.
Statement Vertex: Represents a program statement in a method's body.
Parameter Vertex: Represents parameter passing between a caller and a callee method. Parameter vertices are of four types: formal-in, formal-out, actual-in, and actual-out. Actual-in and actual-out vertices are created for each call vertex, and formal-in and formal-out vertices are created for each method entry vertex.
Entry Vertex: Methods and classes have entry vertices that represent the entry of the method or class, respectively.
Polymorphic Choice Vertex: Represents the dynamic choice among the possible bindings of a polymorphic call.
ESDG Edges: An edge e is one of six types: control dependence, data dependence, parameter dependence, method call, summary, or class member edge.
Control Dependence Edge: Represents a control dependence relation between two statement vertices.
Data Dependence Edge: Represents a data dependence relation between statement vertices.
Call Edge: Connects a calling statement to a method entry vertex; it also connects the possible polymorphic method call vertices to a polymorphic choice vertex.
Parameter Dependence Edge: Used for passing values between actual and formal parameters in a method call. It is of two types: parameter-in and parameter-out edges.
Summary Edge: Represents the transitive dependence between actual-in and actual-out vertices.
Class Member Edge: Represents the membership relation between a class and its methods.
Figure 1a shows the graphical symbols used for the different types of edges, Figure 1b shows a simple class that computes the sum of the numbers 1 to 9, and Figure 1c shows a partial ESDG of the code in Figure 1b.
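The ESDG definition above maps naturally onto a labelled directed graph. The following Python sketch is our own illustration, not code from the paper; class and variable names are assumptions, and only a small fragment of the Figure 1b graph is built.

```python
# A small, hand-rolled representation of an ESDG as a labelled directed graph.
# Vertex and edge kind names follow the definitions in this section.
from collections import defaultdict

VERTEX_KINDS = {"statement", "entry", "parameter", "polymorphic_choice"}
EDGE_KINDS = {"control", "data", "call", "parameter", "summary", "class_member"}

class ESDG:
    def __init__(self):
        self.vertex_kind = {}                 # vertex id -> kind
        self.successors = defaultdict(list)   # vertex id -> [(target id, edge kind)]

    def add_vertex(self, vid, kind):
        assert kind in VERTEX_KINDS
        self.vertex_kind[vid] = kind

    def add_edge(self, src, dst, kind):
        assert kind in EDGE_KINDS
        self.successors[src].append((dst, kind))

# Fragment of the graph for the Tsum class of Figure 1b:
g = ESDG()
g.add_vertex("CE1", "entry")       # class entry
g.add_vertex("E7", "entry")        # calculate() entry
g.add_vertex("S8", "statement")    # while (i < 10)
g.add_vertex("S9", "statement")    # sum = add(sum, i)
g.add_edge("CE1", "E7", "class_member")
g.add_edge("E7", "S8", "control")
g.add_edge("S8", "S9", "control")
```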
III. GENETIC ALGORITHM
Many real-life problems have been solved using evolutionary algorithms, and the GA is one such algorithm. Some of the real-life problems to which GAs have been applied are:
- the railway scheduling problem,
- the travelling salesman problem,
- the vehicle routing problem, and
- many problems in software engineering.
The Genetic Algorithm has emerged as an optimization technique and search method. A problem solved with a GA is represented by a population of chromosomes, each encoding a candidate solution. A chromosome can be a string of binary digits, integers, reals, or characters, and each element of the string is called a gene. The initial population can be generated completely at random or created using processes such as heuristic techniques.
Figure 1a. Graphical symbols of edges (control dependence edge, data dependence edge, call edge, parameter dependence edge, summary edge, class member edge)

CE1 public class Tsum {
S2    public static int i;
S3    public static int sum;
E4    public Tsum() { // constructor
S5      sum = 0;
S6      i = 1;
      }
E7    public void calculate() {
S8      while (i < 10) {
S9        sum = add(sum, i);
S10       i = add(i, 1);
        }
S11     System.out.println("sum = " + sum);
S12     System.out.println("i = " + i);
      }
E13   static int add(int a, int b) {
S14     return (a + b);
      }
    }
Figure 1b. A simple class
Figure 1c. Partial ESDG of the class in Figure 1b.
IV. AVERAGE PERCENTAGE OF FAULTS DETECTED (APFD)
To evaluate the performance of regression test case prioritization, researchers [5,13,14,15,16] have used the APFD metric. The APFD metric, widely used to gauge the performance of a test suite T on a program P, is given as:
APFD(T, P) = 1 - (TF1 + TF2 + TF3 + ... + TFm) / (n * m) + 1 / (2n)     (1)
where
m is the number of faults,
n is the number of test cases, and
TF1, TF2, ..., TFm are the positions of the first test case in T that exposes faults 1, 2, ..., m, respectively.
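For concreteness, equation (1) can be evaluated directly from a prioritized order and a mapping of faults to the test cases that expose them. The helper below is our own illustration, not code from the paper; the names and data layout are assumptions.

```python
def apfd(order, faults, n):
    """APFD of a prioritized test order (equation 1).

    order  -- list of test case ids in execution order
    faults -- dict: fault id -> set of test case ids that expose it
    n      -- size of the original test suite T
    """
    m = len(faults)
    position = {t: i + 1 for i, t in enumerate(order)}   # 1-based positions
    tf_sum = sum(min(position[t] for t in exposing if t in position)
                 for exposing in faults.values())
    return 1 - tf_sum / (n * m) + 1 / (2 * n)
```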
V. PROPOSED REGRESSION TEST CASE SELECTION & PRIORITIZATION
FRAMEWORK
This paper presents an approach that selects test cases T' from the test suite T to be used in testing the modified program P', and prioritizes the selected test cases in an order that increases their rate of fault detection.
Figure 2 illustrates the various activities of our proposed test case selection and prioritization framework.
Identify Changes: The changes between P and the modified program P' are identified in this step via semantic analysis of the source code. A file named "changed" is used to store the identified statement-level differences; it is the output of the Identify Changes phase in Figure 2.
The scope of the changes considered in our approach is the addition and deletion of objects.
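The paper performs this step by semantic analysis of the source code and does not prescribe a particular tool. Purely as an illustration, a statement-level textual diff could be produced with Python's difflib and written to the changed file; the file names and labelling scheme below are our assumptions.

```python
# Sketch: record statement-level additions/deletions between P and P' in "changed".
import difflib

def identify_changes(old_path, new_path, out_path="changed"):
    with open(old_path) as f_old, open(new_path) as f_new:
        old_lines, new_lines = f_old.readlines(), f_new.readlines()
    with open(out_path, "w") as out:
        for line in difflib.ndiff(old_lines, new_lines):
            # ndiff prefixes deleted lines with "- " and added lines with "+ ".
            if line.startswith(("- ", "+ ")):
                out.write(line)
```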
Adding an Object: The addition of an object is detected in the Identify Changes phase. In object-oriented programming, adding an object can mean adding method call statements or simple statements such as conditionals, loops, and assignments. Figure 3ai and Figure 3aii show a program P and its modified version P', and Figure 3bi and Figure 3bii show the corresponding ESDG and the updated ESDG for the addition of a simple statement. In Figure 3ai, statements (vertices) S2, S3, S4 and S5 are control dependent on E1 (the method entry vertex); vertices S3, S4 and S5 are data dependent on S2, and S5 is data dependent on S2, S3 and S4, as shown in Figure 3bi. In Figure 3aii, statements S4 and S5 are no longer data dependent on S2 but on the added statement S3a, and S3a is data dependent on S2, as shown in Figure 3bii. Statement S3a is identified as the change between P and P' and is saved as a changed node.
E1 void m1() {
S2   int x = 1;
S3   int y = x + 2;
S4   int p = x * 6;
S5   System.out.println(x + ", " + y + ", " + p);
   }
Figure 3ai. A sample method

E1 void m1() {
S2   int x = 1;
S3   int y = x + 2;
S3a  x = x + 1; // added statement
S4   int p = x * 6;
S5   System.out.println(x + ", " + y + ", " + p);
   }
Figure 3aii. Modified code with a simple statement added
Figure 3bi. ESDG of fig. 3ai.
Figure 3bii. Updated ESDG of fig. 3bi.
CE1 class A {
E2    public int x, y;
E3    A() {
S4      x = 5;
S5      y = 7; }
E6    void increment(int y) {
S6a     sum(y, 1); } // added method call statement
E7    void sum(int x, int y) {
S8      x += y;
      }
    } // class
Figure 4a. Addition of a simple method call
Deleting an Object: An example of the deletion of a simple statement is presented in Figure 5a; for the deletion of a method call statement we use the code and ESDG in Figure 4a and Figure 4b, respectively. In Figure 5b, the deleted node is S4, marked by a dashed line. The statement vertices S5 and S6 are data dependent on S4, so before deleting S4, nodes S5 and S6 are identified as changed nodes. The edges from the deleted node to nodes S5 and S6 are removed and the identified nodes are saved as changed nodes; the data dependence edge from node S2 to the deleted node is also removed. For the deletion of a method call statement, assume from Figure 4a that the deleted statement is S6a. To update the model by deleting node S6a in the ESDG, we first identify the changed nodes, i.e., the nodes that are control dependent, data dependent, or dependent through object relations such as inheritance and polymorphism, and save these nodes in the file named "changed" for later use. Secondly, all the parameter edges, the call edge, and the control dependence edge are removed. Then the vertex is deleted.
Figure 4b. Updated ESDG for fig. 4a.
E1 void m1() {
S2   int x = 1;
S3   int y = x + 2;
S4   x = x + 1; // deleted statement
S5   int p = x * 6;
S6   System.out.println(x + ", " + y + ", " + p);
   }
Figure 5a. Deletion of simple statement
Figure 5b. ESDG for deletion of simple statement
Test Coverage Generation: Program P is instrumented at the statement level. The instrumented program is executed with the original test suite T, and a trace is written for each test case recording which statements that test case executes. The information generated in this stage is saved in a file named "coverageInfo" for later use.
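The paper does not fix a particular instrumentation mechanism. The sketch below illustrates the idea with Python's built-in sys.settrace; the file format and function names are our assumptions.

```python
# Sketch: record which source lines a test executes and append them to "coverageInfo".
import sys

def run_with_trace(test_fn, test_id, coverage_file="coverageInfo"):
    executed = set()

    def tracer(frame, event, arg):
        if event == "line":
            executed.add((frame.f_code.co_filename, frame.f_lineno))
        return tracer

    sys.settrace(tracer)
    try:
        test_fn()                       # execute one test case against P
    finally:
        sys.settrace(None)

    with open(coverage_file, "a") as out:
        lines = sorted(ln for _, ln in executed)
        out.write(f"{test_id}: {lines}\n")
```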
ESDG Model Constructor: The ESDG model of the original program P is constructed using a technique similar to [11], as described in Section 2. Graphviz [4] is used to render the constructed graphs.
ESDG Model Updates: The model constructed for P is updated during each regression testing cycle using the information in the changed file, so that it corresponds to the modified program P'. The updated ESDG model is denoted M'.
Affected Statements Identification: To identify the affected statements, a forward slice is constructed on the updated model M' using the information from the changed file. Each changed node in the changed file is used as the slicing criterion to determine the affected nodes; this is performed by lines 10-13 of Algorithm 1. The affected statements are those affected directly by the modifications or indirectly through control dependence, data dependence, or dependence due to object relations such as inheritance and polymorphism on the changed nodes in the updated model M'; they are denoted "affectedStat".
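Conceptually, the forward slice is a reachability walk from each changed node along the dependence edges of M'. The sketch below is our simplified illustration of what lines 10-13 of Algorithm 1 compute, using a plain successor map and made-up node names.

```python
# Sketch: collect all nodes reachable from the changed nodes along dependence edges.
from collections import deque

def affected_statements(successors, changed):
    """successors: dict node -> list of dependent nodes; changed: iterable of nodes."""
    affected, queue = set(), deque(changed)
    while queue:
        node = queue.popleft()
        for dep in successors.get(node, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# Example with hypothetical nodes: S3a was added, S4 and S5 depend on it.
deps = {"S3a": ["S4", "S5"], "S4": ["S5"]}
print(affected_statements(deps, ["S3a"]))   # {'S4', 'S5'}
```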
Test Case Selection: We use the EvolRegTCasePrior procedure presented in Algorithm 1 to select the test cases that execute the affected statements in the updated model M' for regression testing; the selected set is denoted T'. The selection is performed by lines 14-21 of Algorithm 1.
Consider the following test suite:
T = {t1, t2, t3, t4, t5, t6, t7, t8}
The test coverage generator produces the following coverage information, saved as "coverageInfo":
t1 = {n1, n2, n6},
t2 = {n2, n7, n9, n10},
t3 = {n3, n7, n8},
t4 = {n4, n6, n7},
t5 = {n3, n4, n5, n9, n10},
t6 = {n3, n4, n6, n8, n10},
t7 = {n6, n7, n8},
t8 = {n8, n9, n10}.
The affected nodes, identified by performing forward slicing with the changed nodes as the slicing criterion, are:
n1, n2, n3, n4, n5.
The selected test cases, with the affected nodes each one covers, are:
t1 = {n1, n2}
t2 = {n2}
t3 = {n3}
t4 = {n4}
t5 = {n3, n4, n5}
t6 = {n3, n4}
That is, the selected test suite is:
T' = {t1, t2, t3, t4, t5, t6}
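The selection step amounts to intersecting each test case's coverage with the affected nodes. The snippet below is our own illustration and simply reproduces the worked example above.

```python
# Reproduce the worked selection example: keep test cases covering affected nodes.
coverage = {
    "t1": {"n1", "n2", "n6"}, "t2": {"n2", "n7", "n9", "n10"},
    "t3": {"n3", "n7", "n8"}, "t4": {"n4", "n6", "n7"},
    "t5": {"n3", "n4", "n5", "n9", "n10"}, "t6": {"n3", "n4", "n6", "n8", "n10"},
    "t7": {"n6", "n7", "n8"}, "t8": {"n8", "n9", "n10"},
}
affected = {"n1", "n2", "n3", "n4", "n5"}

selected = {t: cov & affected for t, cov in coverage.items() if cov & affected}
print(sorted(selected))       # ['t1', 't2', 't3', 't4', 't5', 't6']
```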
Test Case Prioritization: The selected test cases need to be ordered so that faults are detected as early as possible while running them. A test case prioritization technique needs some guidance in order to improve its efficiency, so we propose an evolutionary approach that uses a GA to optimize the order of the selected test cases. The selected test cases T' are prioritized using the evolutionary procedure presented in Algorithm 2, denoted "GAPriorTCase".
Algorithm 1
EvolRegTCasePrior(M', T, CoveraInfo, Changed, affectedNodes, T', EvaPrioTcase)
{
1  Changed: the set of changed objects
2  M': the updated extended system dependence graph
3  T: the set of all test cases
4  CoveraInfo: the set of nodes covered by each test case
5  affectedNodes: the set of affected nodes
   // changed objects and dependent nodes
6  T': the set of selected test cases
7  EvaPrioTcase: the set of prioritized selected test cases
8  affectedNodes = { }
9  T' = { }
10 For (node n : Changed) {
11   Find the nodes (nDep) that are dependent on node n
12   affectedNodes = affectedNodes U nDep
13 }
14 While (affectedNodes != empty) DO
15 {
16   For (node a : affectedNodes)
17   {
18     Find all test cases that cover node a (tcase)
19     T' = T' U tcase
20   }
21 }
22 GAPriorTCase(CoveraInfo, T', EvaPrioTcase)
23 }
Algorithm 2
GAPriorTCase(CoveraInfo, T', EvaPrioTcase)
{
1  Encode the selected test cases (T')
2  T' = { i1, i2, ..., ip }
3  // where i1, i2, ..., ip are the positions of test cases
   // t1, t2, ..., tp in the test suite T
4  Generate two populations P1 and P2 from T' to be the initial population
5  P1 is the set T' and
6  P2 is the reversed order of T'
7  Evaluate the fitness of each parent using:
8  Ft(Pi) = ∑ n(tj) · pij, for j = 1 to k
9  REPEAT {
10   child = ordered crossover (P1, P2)
11   child = swap mutation (child)
12   evaluate the fitness of the child:
13   Ft(Pi) = ∑ n(tj) · pij, for j = 1 to k
14   add the child to the population
15   remove the worst chromosome from the population
16 } UNTIL maximum number of iterations
17 return the best chromosome (EvaPrioTcase)
18 }
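Algorithm 2 is stated at the pseudocode level. The following is one possible concrete reading of it in Python, with ordered crossover and swap mutation; the fitness function simply rewards placing tests that cover more affected statements earlier, which is our interpretation of the weight n(tj), not a verbatim re-implementation of the paper's formula.

```python
# Hedged GA sketch for prioritizing the selected test cases T'.
import random

def fitness(order, selected):
    # Weight each test by the number of affected statements it covers,
    # giving earlier positions a larger multiplier (our reading of Ft).
    k = len(order)
    return sum((k - pos) * len(selected[t]) for pos, t in enumerate(order))

def ordered_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [t for t in p2 if t not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def swap_mutation(order):
    i, j = random.sample(range(len(order)), 2)
    order = order[:]
    order[i], order[j] = order[j], order[i]
    return order

def ga_prioritize(selected, iterations=200):
    population = [list(selected), list(reversed(list(selected)))]   # P1 and P2
    for _ in range(iterations):
        child = swap_mutation(ordered_crossover(*random.sample(population, 2)))
        population.append(child)
        population.sort(key=lambda o: fitness(o, selected), reverse=True)
        population.pop()                       # drop the worst chromosome
    return population[0]

# Coverage of affected nodes per selected test case, from the worked example above.
print(ga_prioritize({"t1": {"n1", "n2"}, "t2": {"n2"}, "t3": {"n3"},
                     "t4": {"n4"}, "t5": {"n3", "n4", "n5"}, "t6": {"n3", "n4"}}))
```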
VI. RESULTS AND DISCUSSION
The empirical procedure for our proposed approach is as follows: design a test suite (T) for a program using a white-box testing technique; generate and save the coverage information for each test case; construct the extended system dependence graph for the program; introduce changes into the program; reflect the changes in the extended system dependence graph; obtain all the affected statements/nodes that depend on the changes (the affected statements are those affected directly by the modifications, or indirectly through control dependence, data dependence, or dependence due to object relations such as inheritance and polymorphism); determine which test cases in T are modification-revealing with respect to the affected statements; prioritize the selected test cases using the GA; and compute the APFD of the prioritized test cases using equation (1). We used the sample test suite T from PFRevSevere [15], which has eight test cases, i.e., {t1, t2, t3, t4, t5, t6, t7, t8}, and five faults, i.e., {f1, f2, f3, f4, f5}. Our proposed approach EvolRegTCasePrior selects and prioritizes the test cases as {t5, t1, t6, t3, t2, t4}, while PFRevSevere [15] prioritizes them as {t5, t6, t1, t4, t2, t3, t7, t8}. The test cases and the faults detected by each test case are presented in Table 1.
Table 1. Faults detected by test cases
Test case
Faults
T1 T2 T3 T4 T5 T6 T7 T8
F1 X
F2 X X
F3 X X X
F4 X X X
F5 X
The APFD value of our approach, using equation (1) with the test sequence {t5, t1, t6, t3, t2, t4}, is:
APFD(T, P) = 1 - (2 + 2 + 1 + 1 + 1) / (5 * 8) + 1 / (2 * 8) = 0.8875
The APFD value for PFRevSevere [15], using equation (1) with the test sequence {t5, t6, t1, t4, t2, t3, t7, t8}, is:
APFD(T, P) = 1 - (3 + 3 + 1 + 1 + 1) / (5 * 8) + 1 / (2 * 8) = 0.8375
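These two values can be checked with a few lines of arithmetic; this is our own verification using the first-exposure positions listed in the calculations above.

```python
# Check the two APFD values reported above (n = 8 test cases, m = 5 faults).
n, m = 8, 5
ours = 1 - (2 + 2 + 1 + 1 + 1) / (n * m) + 1 / (2 * n)    # EvolRegTCasePrior
pfrev = 1 - (3 + 3 + 1 + 1 + 1) / (n * m) + 1 / (2 * n)   # PFRevSevere [15]
print(round(ours, 4), round(pfrev, 4))                     # 0.8875 0.8375
```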
The APFD results above are presented in Figure 6 for easy comparison.
The comparison is between our proposed approach and the approach PFRevSevere [15] from the literature. From Table 1 and Figure 6, it can be seen that our approach identifies the faults at an earlier stage, and its APFD is higher than that of PFRevSevere. The results also show that our proposed approach EvolRegTCasePrior is more effective than PFRevSevere in terms of rate of fault detection when all test costs and fault severities are uniform. In Figure 7, we plot the percentage of test cases executed against the percentage of faults detected. The plot shows that our approach needs fewer test cases than PFRevSevere [15] to reveal all the faults: EvolRegTCasePrior needs 20% of the test cases to find all the faults, whereas PFRevSevere [15] needs 30%.
From Figure 7, we also observe that the test case order produced by EvolRegTCasePrior detects all the faults at an earlier stage than the order produced by PFRevSevere [15]. Our proposed test case prioritization process will therefore reduce regression testing time.
VII. CONCLUSION
This paper has proposed a regression test case selection and prioritization approach that orders the test cases T' selected from the test suite T to be rerun during regression testing. The approach uses the extended system dependence graph to identify changes at the statement level of the source code, stores the changes in a file named changed, and generates coverage information for each test case from the source code. The change information is used to identify the affected statements, and the test cases to be rerun in regression testing are identified from the affected statements. The selected test cases are prioritized using a genetic algorithm in order to increase their rate of fault detection. The technique covers the main issues that regression testing strategies need to address: change identification, test selection, test execution, and test suite maintenance. The effectiveness and efficiency of the proposed approach were evaluated using the APFD metric on a sample project taken from the literature. The proposed approach provides better results for the rate of fault detection: EvolRegTCasePrior needs only 20% of the test cases to detect all the faults, compared with the 30% needed by PFRevSevere to detect all the severe faults. Based on the measured performance, the proposed approach selects and prioritizes test cases more efficiently and effectively than PFRevSevere, which reduces the cost of regression testing. We note that our approach assumes that the ESDG is updated in a timely way every time changes are introduced into the programs; this assumption is a limitation of our approach. Another limitation is that we assume all test costs and fault severities are uniform. The proposed framework can be used in software maintenance, especially when the changes are at the statement level of object-oriented programs, such as simple statements, method calls, and polymorphic calls.
REFERENCES
[1] Beszedes A., Gergely T., Schrettner L., Jasz J., Lango L., Gyimothy T., “Code coverage-based regression test selection and
prioritization in webkit,” 28th IEEE International Conference on Software Maintenance, Trento, pp. 46-55, 2012.
[2] El-hamid W. S. A., El-etriby S. S., and Hadhoud M. M., “Regression Test selection technique for multi-programming language,”
Faculty of Computer and Information, Menofia University, Shebin-Elkom, 32511, Egypt 2009.
[3] Frechette N., Badri L., and Badri M., “Regression Test reduction for object-oriented software: A control call graph based technique
and associated tool,” International Scholarly Research Network Software engineering (ISRN Software Engineering), vol. 2013, ID
420394, pp. 1-10, 2013.
[4] Graphviz, http://www.graphviz.org/, last visited: March 21, 2014.
[5] Gupta R., and Yadav A. K., “Study of Test Case Prioritization Technique Using APFD,” International Journal of Computers &
Technology, vol. 10, no. 3, pp. 1475-1481, Aug-2013.
[6] Harris P., and Raju N., “A greedy approach for coverage-based test suite reduction,” IAJIT, vol. 12, no. 1, 2013.
[7] Harrold M.J., Jones J.A., Li T., Liang D., Orso A., Pennings M., Sinha S., Spoon S.A., and Gujarathi A., “Regression test
selection for java software,” ACM, vol. 36, no. 11, pp. 312-326, 2001.
[8] Horwitz S., Reps T., and Binkley D., “Interprocedural slicing using dependence graphs,” ACM Transactions on Programming
Languages and Systems, vol. 12, no. 1, pp. 26–60, 1990.
[9] Jang Y., Munro M., and Kwon Y.R., “An improved method of selecting regression tests for C++ programs,” Journal of Software
Maintenance: Research and Practice, vol. 13, no. 5, pp. 331–350, 2001.
[10] Jin W., Orso A., and Xie T., “Automated Behavioral regression testing,” Third International Conference on Software Testing,
Verification and Validation, Washington DC, USA, pp. 137-146, 2010.
[11] Larsen L., and Harrold M. J., “Slicing object-oriented software,” In Proceedings of 18th IEEE International Conference on Software
Engineering, Berlin, pp. 495-505, 1996.
[12] Li Z., Harman M., and Hierons R. M., “Search algorithms for regression test case prioritization,” IEEE Trans. On Software
Engineering, vol. 33, no. 4, pp. 225-237, April 2007.
[13] Panigrahi C., and Mall R., “An approach to prioritize regression test cases of object-oriented programs”. JCSI Trans on ICT
(Springer), doi:10.1007/s40012-013-0011-7, pp. 1-15, 2013.
[14] Panigrahi C., and Mall R., “A heuristic-based regression test case prioritization approach for object-oriented programs,” Innovations
in Systems and Software Engineering (Springer), doi: 10.1007/s11334-013-0221-z, pp. 1-9, 2013.
[15] Shanmugam R., and Uma G.V., “Factors Oriented test case prioritization Technique in Regression testing using Genetic
Algorithm,” European Journal of Scientific Research, vol. 74, no. 3, pp. 389-402, 2012.
[16] Raperia H., and Srivastava S., “An Empirical Approach for Test Case Prioritization,” International Journal of Scientific &
Engineering Research, vol. 4, no. 3, pp. 1-5, March 2013.
[17] Roger S. P., Software Engineering: A Practitioner's Approach, 5th ed., New York: McGraw-Hill, 2001.
[18] Rothermel G. and Harrold M. J., “Selecting regression tests for object-oriented software,” International Conference on Software
Maintenance, Victoria, CA, pp. 14–25, 1994.
[19] Rothermel G., and Harrold M. J., “A safe, efficient regression test selection technique,” ACM Transactions on Software
Engineering and Methodology, vol. 6, no. 2, pp. 173–210, 1997.
[20] Rothermel G., Harrold M. J, and Dedhia J., “Regression test selection for C++ software,” Software Testing, Verification and
Reliability, vol. 10, no. 2, pp. 77–109, 2000.