Testing has been an essential part of the software development life cycle, and automatic test case and test data generation have attracted many researchers in the recent past. Test suite generation, which keeps multiple objectives in mind and ensures code coverage, has received particular attention. The test cases thus generated can have dependencies, such as open and closed dependencies. When dependencies exist, the order in which test cases execute clearly affects the percentage of flaws detected in the software under test. Test case prioritization is therefore another important research area that complements automatic test suite generation in object-oriented systems. Prior research on test case prioritization focused on dependency structures; in this paper, we automate the extraction of those structures. We propose a methodology that covers both automatic test suite generation and test case prioritization for effective testing of object-oriented software, and we built a tool as a proof of concept. An empirical study with 20 case studies revealed that the proposed tool and its underlying methods can have a significant impact on the software industry and its clientele.
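The dependency-aware ordering described above can be pictured as a topological sort over the dependency structure, breaking ties by a priority score. The sketch below is illustrative only; the names, the priority function, and the dependency encoding are assumptions, not the paper's actual method.

```python
from collections import defaultdict

def prioritize(test_cases, dependencies, priority):
    """Order test cases so every dependency runs before its dependents,
    breaking ties by a priority score (e.g. expected fault-detection rate)."""
    indegree = {t: 0 for t in test_cases}
    dependents = defaultdict(list)
    for before, after in dependencies:        # 'after' depends on 'before'
        dependents[before].append(after)
        indegree[after] += 1
    ready = [t for t in test_cases if indegree[t] == 0]
    order = []
    while ready:
        ready.sort(key=priority, reverse=True)  # highest-priority first
        t = ready.pop(0)
        order.append(t)
        for d in dependents[t]:               # releasing t may unblock others
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    return order
```

For example, with `t3` depending on `t1`, the scheduler will always emit `t1` before `t3`, even if `t3` carries the highest priority score.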
Software test-case generation is the process of identifying a set of test cases. It is necessary to generate a test sequence that satisfies the testing criteria, a difficult problem on which a great deal of research has been done in the past. The length of the test sequence plays an important role in software testing: it determines whether sufficient testing has been carried out. Many existing test sequence generation techniques use a genetic algorithm for test-case generation. The Genetic Algorithm (GA) is a heuristic optimization technique driven by evolution and a fitness function; it generates new test cases from the existing test sequence. To improve on existing techniques, this paper proposes a new technique that combines the tabu search algorithm with the genetic algorithm. The hybrid combines the strengths of the two meta-heuristic methods and produces efficient test-case sequences.
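One way such a GA/tabu hybrid can work is to run an ordinary GA loop but reject offspring that appear on a recent-visits tabu list, forcing the search away from already explored points. The sketch below is a minimal illustration under those assumptions; the genome encoding, operators, and parameters are invented for the example, not taken from the paper.

```python
import random

def hybrid_search(fitness, genome_len, pop_size=20, generations=50, tabu_size=30):
    """GA loop with a tabu list: offspring seen recently are rejected,
    pushing the search into unexplored regions of the space."""
    random.seed(1)
    pop = [[random.randint(0, 9) for _ in range(genome_len)]
           for _ in range(pop_size)]
    tabu = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                          # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)  # pick parents from the top half
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.2:          # occasional mutation
                child[random.randrange(genome_len)] = random.randint(0, 9)
            if child in tabu:                  # tabu move: skip revisited points
                continue
            tabu.append(child)
            tabu = tabu[-tabu_size:]           # bounded tabu memory
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

With `sum` as a stand-in fitness, the loop reliably evolves genomes toward the high-fitness corner of the space while the tabu list keeps duplicates out of each generation.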
This paper presents a set of methods that use a genetic algorithm for automatic test-data generation in software testing. Over the years researchers have proposed several methods for generating test data, each with its own drawbacks. In this paper, we present various Genetic Algorithm (GA) based test methods with different parameters that automate structure-oriented test data generation based on the internal program structure. The factors discovered are used in evaluating the fitness function of the genetic algorithm to select the best possible test method. These methods take test populations as input and then evaluate the test cases for the program. This integration helps improve the overall performance of the genetic algorithm in search-space exploration and exploitation, with a better convergence rate.
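A structure-based fitness function of the kind described above typically scores a candidate test suite by how much of the program's internal structure (e.g. branches) it exercises. The toy program, branch labels, and fitness definition below are illustrative assumptions, not the paper's actual instrumentation:

```python
def triangle_type(a, b, c, trace):
    """Toy program under test; records which branches it takes."""
    if a == b and b == c:
        trace.add("B1")
        return "equilateral"
    if a == b or b == c or a == c:
        trace.add("B2")
        return "isosceles"
    trace.add("B3")
    return "scalene"

ALL_BRANCHES = {"B1", "B2", "B3"}

def fitness(test_suite):
    """Fraction of branches covered by the suite: the quantity a GA
    would maximise when evolving test data."""
    covered = set()
    for a, b, c in test_suite:
        triangle_type(a, b, c, covered)
    return len(covered) / len(ALL_BRANCHES)
```

A suite of one equilateral input scores 1/3; adding an isosceles and a scalene input raises the fitness to 1.0, which is exactly the gradient the GA follows.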
Dynamic Radius Species Conserving Genetic Algorithm for Test Generation for S...ijseajournal
This paper critically examines nine different software models for modeling and developing multi-agent based systems. The study reveals that the models examined each have their own advantages, disadvantages, and unique characteristics with respect to the practical development and deployment of the multi-agent based information system in question. The agentology model is one methodology that can be easily adopted or adapted for the development of any multi-agent based software prototype, because it was inspired by the best practices and good ideas contained in other agent-oriented methodologies such as CoMoMas, MASE, GAIA, Prometheus, HIM, MASim and Tropos.
AUTOMATIC GENERATION AND OPTIMIZATION OF TEST DATA USING HARMONY SEARCH ALGOR...csandit
Software testing is a primary phase of software development, carried out by executing a sequence of test inputs and comparing the results against expected outputs. The Harmony Search (HS) algorithm is based on the improvisation process of music. Compared with other algorithms, HS has gained popularity in the field of evolutionary computation. When musicians compose a harmony they try different possible combinations of pitches; the pitches are stored in harmony memory, and optimization proceeds by adjusting the input pitches to generate the perfect harmony. The test case generation process identifies test cases together with their resources and critical domain requirements. In this paper, the Harmony Search meta-heuristic is analyzed for generating random test data and then optimizing that data. Test data are generated and optimized for a case study, a withdrawal task in a bank ATM, using Harmony Search. It is observed that the algorithm generates suitable test cases and test data; the paper also gives a brief overview of the Harmony Search method as used for test data generation and optimization.
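The harmony-memory mechanics described above can be sketched for a single decision variable. Everything concrete below is an assumption for illustration: the HMCR/PAR parameter values, the integer search range, and the objective (which treats withdrawal amounts near an assumed daily limit as the most fault-revealing inputs).

```python
import random

def harmony_search(objective, low, high, memory_size=10, iterations=200,
                   hmcr=0.9, par=0.3):
    """Minimise `objective` over integers in [low, high] using the
    Harmony Search metaphor: reuse a pitch from memory (HMCR), possibly
    pitch-adjust it (PAR), or improvise a fresh random value."""
    random.seed(7)
    memory = [random.randint(low, high) for _ in range(memory_size)]
    for _ in range(iterations):
        if random.random() < hmcr:
            new = random.choice(memory)          # reuse a stored pitch
            if random.random() < par:            # small pitch adjustment
                new = min(high, max(low, new + random.choice([-1, 1])))
        else:
            new = random.randint(low, high)      # brand-new improvisation
        worst = max(memory, key=objective)
        if objective(new) < objective(worst):    # keep only improvements
            memory[memory.index(worst)] = new
    return min(memory, key=objective)

# Illustrative objective: inputs close to a hypothetical 10000 daily
# withdrawal limit are assumed to be the most fault-revealing.
boundary_distance = lambda amount: abs(10000 - amount)
```

Each iteration can only replace the worst harmony with something better, so the memory's best value improves monotonically toward the boundary region.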
Automatic Generation of Multiple Choice Questions using Surface-based Semanti...CSCJournals
Multiple Choice Questions (MCQs) are a popular large-scale assessment tool. MCQs make it much easier for test-takers to take tests and for examiners to interpret their results; however, they are very expensive to compile manually, and they often need to be produced on a large scale and within short iterative cycles. We examine the problem of automated MCQ generation with the help of unsupervised Relation Extraction, a technique used in a number of related Natural Language Processing problems. Unsupervised Relation Extraction aims to identify the most important named entities and terminology in a document and then recognize semantic relations between them, without any prior knowledge of the semantic types of the relations or their specific linguistic realization. We investigated a number of relation extraction patterns and tested a number of assumptions about the linguistic expression of semantic relations between named entities. Our findings indicate that an optimized configuration of our MCQ generation system is capable of achieving high precision rates, which are much more important than recall in the automatic generation of MCQs. Its enhancement with linguistic knowledge further helps to produce significantly better patterns. We furthermore carried out a user-centric evaluation of the system, in which subject domain experts from the biomedical domain evaluated automatically generated MCQ items in terms of readability, usefulness of semantic relations, relevance, acceptability of questions and distractors, and overall MCQ usability. The results of this evaluation allow us to draw conclusions about the utility of the approach in practical e-Learning applications.
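The pipeline above (entities, then a surface relation between them, then a question stem plus distractors) can be illustrated with a deliberately tiny toy. The regex pattern, verb list, and sentence are fabricated for the example; a real system would use proper NER and learned patterns rather than capitalised tokens.

```python
import re

# One illustrative surface pattern: ENTITY verb ENTITY.  "Entities" here
# are just capitalised tokens; real systems run NER first.
PATTERN = re.compile(r"([A-Z][a-z]+)\s+(inhibits|activates|binds)\s+([A-Z][a-z]+)")

def extract_relations(text):
    """Return (subject, relation, object) triples matching the pattern."""
    return [(m.group(1), m.group(2), m.group(3)) for m in PATTERN.finditer(text)]

def make_mcq(triple, distractors):
    """Turn one triple into a question stem plus answer options,
    with the true object first."""
    subj, rel, obj = triple
    stem = f"Which protein does {subj} {rel[:-1]}?"  # 'inhibits' -> 'inhibit'
    return stem, [obj] + distractors
```

In practice the correct option would of course be shuffled among the distractors; it is kept first here only so the example stays checkable.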
With the rise of software systems ranging from personal assistants to national infrastructure, software defects have become a critical concern: they can cost millions of dollars and impact human lives. Yet, at the breakneck pace of rapid software development settings (such as the DevOps paradigm), today's Quality Assurance (QA) practices are still time-consuming. Continuous analytics for software quality (i.e., defect prediction models) can help development teams prioritize their QA resources and chart better quality improvement plans, avoiding past pitfalls that lead to future software defects. Because specialists are needed to design and configure a large number of settings (e.g., data quality, data preprocessing, classification techniques, interpretation techniques), a set of practical guidelines for developing accurate and interpretable defect models has not yet been well developed.
The ultimate goal of my research is to (1) provide practical guidelines on how to develop accurate and interpretable defect models for non-specialists; (2) develop an intelligible defect model that offers suggestions on how to improve both software quality and processes; and (3) integrate defect models into the real-world practice of rapid development cycles such as CI/CD settings. My research project is expected to provide significant benefits, including reduced software defects and operating costs, while accelerating development productivity for building software systems in many of Australia's critical domains such as Smart Cities and e-Health.
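At its core, a defect prediction model maps software metrics of a module to a probability of defectiveness. A minimal sketch is logistic regression trained by gradient descent; the metric names, the toy data, and the training setup below are assumptions for illustration, not any particular study's configuration.

```python
import math

def train_logistic(rows, labels, lr=0.1, epochs=2000):
    """Logistic regression via plain gradient descent: rows are metric
    vectors (e.g. [lines_of_code, churn]), labels are 1 = defective."""
    w = [0.0] * (len(rows[0]) + 1)            # bias + one weight per metric
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))    # predicted defect probability
            err = y - p                       # gradient of the log-likelihood
            w[0] += lr * err
            for i, xi in enumerate(x):
                w[i + 1] += lr * err * xi
    return w

def predict(w, x):
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))
```

On a toy dataset where high-metric modules are defective, the model learns to rank risky modules above clean ones, which is exactly the prioritization signal QA teams would consume.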
Instance Space Analysis for Search Based Software EngineeringAldeida Aleti
Search-Based Software Engineering is now a mature area with numerous techniques developed to tackle some of the most challenging software engineering problems, from requirements to design, testing, fault localisation, and automated program repair. SBSE techniques have shown promising results, giving us hope that one day it will be possible for the tedious and labour-intensive parts of software development to be completely automated, or at least semi-automated. In this talk, I will focus on the problem of objective performance evaluation of SBSE techniques. To this end, I will introduce Instance Space Analysis (ISA), an approach that identifies features of SBSE problems which explain why a particular instance is difficult for an SBSE technique. ISA can be used to examine the diversity and quality of the benchmark datasets used by most researchers, and to analyse the strengths and weaknesses of existing SBSE techniques. The instance space is constructed to reveal areas of hard and easy problems, enabling the strengths and weaknesses of the different SBSE techniques to be identified. I will present how ISA enabled us to identify the strengths and weaknesses of SBSE techniques in two areas: Search-Based Software Testing and Automated Program Repair. Finally, I will end my talk with future directions for the objective assessment of SBSE techniques.
In this presentation I show a set of important topics in empirical software engineering studies that can be useful for increasing the quality of your thesis and monographs in general. Reading it should help you think about how to conduct good experimentation: defining objectives, choosing validation methods, formulating questions and expected answers, and defining and measuring metrics. I also show how researchers can select data so as to avoid biased case studies, using the GQM methodology to organize the study in a simpler view.
Search-based testing of procedural programs:iterative single-target or multi-...Vrije Universiteit Brussel
In the context of testing Object-Oriented (OO) software systems, researchers have recently proposed search-based approaches that automatically generate whole test suites by simultaneously considering all targets (e.g., branches) defined by the coverage criterion (the multi-target approach). The goal of whole suite approaches is to overcome the wasted search budget that iterative single-target approaches (which generate test cases for one target at a time) can incur on infeasible targets. However, whole suite approaches had not previously been implemented and evaluated in the context of procedural programs. In this paper we present OCELOT (Optimal Coverage sEarch-based tooL for sOftware Testing), a test data generation tool for C programs which implements both a state-of-the-art whole suite approach and an iterative single-target approach designed for parsimonious use of the search budget. We also present an empirical study conducted on 35 open-source C programs to compare the two approaches implemented in OCELOT. The results indicate that the iterative single-target approach is more efficient while achieving the same or an even higher level of coverage than the whole suite approach.
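The budget-parsimony idea behind the single-target approach can be sketched as a loop that caps the attempts spent per target and skips targets that were already covered collaterally by earlier searches. The structure below is an illustrative simplification, not OCELOT's actual algorithm; `try_cover` stands in for one search attempt and is an assumed interface.

```python
def single_target_generation(targets, try_cover, total_budget, per_target_budget):
    """Spend at most `per_target_budget` attempts per target so an
    infeasible target cannot drain the whole search budget."""
    covered, spent, suite = set(), 0, []
    for t in targets:
        if t in covered:
            continue                      # collaterally covered: skip entirely
        attempts = 0
        while attempts < per_target_budget and spent < total_budget:
            attempts += 1
            spent += 1
            hit = try_cover(t)            # set of covered targets, or None
            if hit and t in hit:
                covered |= hit            # keep collateral coverage too
                suite.append(t)
                break
    return suite, covered, spent
```

With a simulated infeasible branch in the target list, the loop gives up on it after the per-target cap and still reaches full coverage of the feasible targets with a small total spend.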
Software Quality Assurance (SQA) teams play a critical role in the software development process, ensuring the absence of software defects. It is not feasible to perform exhaustive SQA tasks (i.e., software testing and code review) on a large software product given the limited SQA resources available, so prioritization is an essential step in all SQA efforts. Defect prediction models are used to prioritize risky software modules and to understand the impact of software metrics on the defect-proneness of software modules. The predictions and insights derived from defect prediction models can help software teams allocate their limited SQA resources to the modules most likely to be defective and avoid common pitfalls associated with defective modules of the past. However, these predictions and insights may be inaccurate and unreliable if practitioners do not control for the impact of experimental components (e.g., datasets, metrics, and classifiers) on defect prediction models, which could lead to erroneous decision-making in practice. In this thesis, we investigate the impact of experimental components on the performance and interpretation of defect prediction models. More specifically, we investigate the impact that three often overlooked experimental components (i.e., issue report mislabelling, parameter optimization of classification techniques, and model validation techniques) have on defect prediction models.
Through case studies of systems spanning both proprietary and open-source domains, we demonstrate that (1) issue report mislabelling does not impact the precision of defect prediction models, suggesting that researchers can rely on the predictions of models trained on noisy defect datasets; (2) automated parameter optimization of classification techniques substantially improves the performance and stability of defect prediction models and changes their interpretation, suggesting that researchers should no longer shy away from applying parameter optimization to their models; and (3) the out-of-sample bootstrap validation technique produces a good balance between the bias and variance of performance estimates, suggesting that the single-holdout and cross-validation families that are commonly used nowadays should be avoided.
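The out-of-sample bootstrap mentioned in point (3) trains on a sample drawn with replacement and evaluates on the rows that were never drawn, averaging over many repetitions. The sketch below illustrates the mechanic with a trivial majority-class "model"; the data, model, and scoring function are placeholders, not the thesis's actual setup.

```python
import random

def out_of_sample_bootstrap(data, fit, score, repetitions=100):
    """Each repetition: train on a bootstrap sample (with replacement),
    evaluate on the never-drawn rows, and average the scores."""
    random.seed(0)
    n, scores = len(data), []
    for _ in range(repetitions):
        idx = [random.randrange(n) for _ in range(n)]
        drawn = set(idx)
        train = [data[i] for i in idx]
        test = [data[i] for i in range(n) if i not in drawn]
        if not test:                      # rare: every row was drawn
            continue
        model = fit(train)
        scores.append(score(model, test))
    return sum(scores) / len(scores)
```

On average about 36.8% of rows land in each out-of-sample set, so every repetition tests on genuinely unseen data, which is what gives the estimator its favourable bias/variance trade-off.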
Software analytics focuses on analyzing and modeling a rich source of software data using well-established data analytics techniques in order to glean actionable insights for improving development practices, productivity, and software quality. However, if care is not taken when analyzing and modeling software data, the predictions and insights that are derived from analytical models may be inaccurate and unreliable. The goal of this hands-on tutorial is to guide participants on how to (1) analyze software data using statistical techniques like correlation analysis, hypothesis testing, effect size analysis, and multiple comparisons, (2) develop accurate, reliable, and reproducible analytical models, (3) interpret the models to uncover relationships and insights, and (4) discuss pitfalls associated with analytical techniques including hands-on examples with real software data. R will be the primary programming language. Code samples will be available in a public GitHub repository. Participants will do exercises via either RStudio or Jupyter Notebook through Binder.
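Of the statistical techniques listed in (1), correlation analysis is the easiest to make concrete. Below is a from-scratch Spearman rank correlation (rank-transform, then Pearson on the ranks), written in plain Python rather than the tutorial's R; the function names and the tie-handling convention (average ranks) are standard but chosen here for illustration.

```python
def rank(values):
    """Ranks starting at 1, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                        # extend over the tied block
        avg = (i + j) / 2 + 1             # average rank of positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it works on ranks, the coefficient captures any monotone relationship between two software metrics, not just a linear one, which is why it is a common first check before modeling.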
In today's increasingly digitalised world, software defects are enormously expensive. In 2018, the Consortium for IT Software Quality reported that software defects cost the global economy $2.84 trillion and affected more than 4 billion people. Software defects cost Australian businesses an average of A$29 billion per year. Failure to eliminate defects in safety-critical systems could result in serious injury, threats to life, death, and disasters. Traditionally, software quality assurance activities like testing and code review are widely adopted to discover software defects in a software product. However, ultra-large-scale systems such as Google's can consist of more than two billion lines of code, so exhaustively reviewing and testing every single line isn't feasible with limited time and resources. This project aims to create technologies that enable software engineers to produce the highest quality software systems at the lowest operational cost. To achieve this, the project will build an end-to-end explainable AI platform to (1) understand the nature of critical defects; (2) predict and locate defects; (3) explain and visualise the characteristics of defects; (4) suggest potential patches to automatically fix defects; and (5) integrate the platform as a GitHub bot plugin.
A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN...IJCI JOURNAL
Programmers have always struggled to identify errors when executing a program, be they syntactical or logical. This struggle has led to research into the identification of syntactical and logical errors. This paper surveys research work that can be used to identify errors, and proposes a new model based on machine learning and data mining that detects logical and syntactical errors, correcting them or providing suggestions. The proposed work uses hashtags to identify each correct program uniquely; these can then be compared with a logically incorrect program in order to identify errors.
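The "hashtag per correct program" idea can be sketched as hashing a normalized form of the source, so trivially different but equivalent versions of a known-correct C program map to the same tag. The normalization rules below (strip comments, collapse whitespace) are illustrative assumptions; the paper's actual canonicalization may differ.

```python
import hashlib
import re

def normalize(source):
    """Strip comments and collapse whitespace so trivially different
    versions of the same C program hash identically."""
    source = re.sub(r"//[^\n]*", "", source)               # line comments
    source = re.sub(r"/\*.*?\*/", "", source, flags=re.S)  # block comments
    return re.sub(r"\s+", " ", source).strip()

def program_tag(source):
    """Short, stable identifier for a (normalized) program."""
    return hashlib.sha256(normalize(source).encode()).hexdigest()[:12]
```

A submission whose tag matches a bank of known-correct tags can be accepted immediately; a mismatch triggers the finer-grained comparison the paper proposes.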
Regression testing concentrates on finding defects after a major code change has occurred. Specifically, it exposes software regressions, i.e., old bugs that have reappeared. It is an expensive process, estimated to account for almost half the cost of software maintenance. To improve regression testing, test case prioritization techniques organize the execution order of test cases, giving an improved rate of fault identification when test suites cannot run to completion.
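A common prioritization heuristic of this kind is the greedy "additional coverage" strategy: repeatedly schedule the test that covers the most not-yet-covered elements, so faults tend to surface early even if the suite is cut short. The coverage data below is fabricated for illustration.

```python
def greedy_prioritize(coverage):
    """Repeatedly pick the test covering the most not-yet-covered
    elements (the classic 'additional' strategy); tests contributing
    nothing new are appended in their original order."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            order.extend(remaining)       # nothing new left to gain
            break
        covered |= remaining.pop(best)
        order.append(best)
    return order
```

In the example below, the broad test `t2` runs first, `t3` adds the one remaining element, and the redundant `t1` is pushed to the end, maximising coverage per test executed.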
incorrect program in order to identify errors.
Regression testing concentrates on finding defects after a major code change has occurred. Specifically, it
exposes software regressions or old bugs that have reappeared. It is an expensive testing process that has
been estimated to account for almost half of the cost of software maintenance. To improve the regression
testing process, test case prioritization techniques organizes the execution level of test cases. Further, it
gives an improved rate of fault identification, when test suites cannot run to completion.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Machine Learning approaches are good in solving problems that have less information. In most cases, the
software domain problems characterize as a process of learning that depend on the various circumstances
and changes accordingly. A predictive model is constructed by using machine learning approaches and
classified them into defective and non-defective modules. Machine learning techniques help developers to
retrieve useful information after the classification and enable them to analyse data from different
perspectives. Machine learning techniques are proven to be useful in terms of software bug prediction. This
study used public available data sets of software modules and provides comparative performance analysis
of different machine learning techniques for software bug prediction. Results showed most of the machine
learning methods performed well on software bug datasets.
A Complexity Based Regression Test Selection StrategyCSEIJJournal
Software is unequivocally the foremost and indispensable entity in this technologically driven world.
Therefore quality assurance, and in particular, software testing is a crucial step in the software
development cycle. This paper presents an effective test selection strategy that uses a Spectrum of
Complexity Metrics (SCM). Our aim in this paper is to increase the efficiency of the testing process by
significantly reducing the number of test cases without having a significant drop in test effectiveness. The
strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class,
method, statement) and its characteristics.We use a series of experiments based on three applications with
a significant number of mutants to demonstrate the effectiveness of our selection strategy.For further
evaluation, we compareour approach to boundary value analysis. The results show the capability of our
approach to detect mutants as well as the seeded errors.
Software Cost Estimation Using Clustering and Ranking SchemeEditor IJMTER
Software cost estimation is an important task in the software design and development process.
Planning and budgeting tasks are carried out with reference to the software cost values. A variety of
software properties are used in the cost estimation process. Hardware, products, technology and
methodology factors are used in the cost estimation process. The software cost estimation quality is
measured with reference to the accuracy levels.
Software cost estimation is carried out using three types of techniques. They are regression based
model, anology based model and machine learning model. Each model has a set of technique for the
software cost estimation process. 11 cost estimation techniques fewer than 3 different categories are
used in the system. The Attribute Relational File Format (ARFF) is used maintain the software product
property values. The ARFF file is used as the main input for the system.
The proposed system is designed to perform the clustering and ranking of software cost
estimation methods. Non overlapped clustering technique is enhanced with optimal centroid estimation
mechanism. The system improves the clustering and ranking process accuracy. The system produces
efficient ranking results on software cost estimation methods.
A Software Measurement Using Artificial Neural Network and Support Vector Mac...ijseajournal
Today, Software measurement are based on various techniques such that neural network, Genetic
algorithm, Fuzzy Logic etc. This study involves the efficiency of applying support vector machine using
Gaussian Radial Basis kernel function to software measurement problem to increase the performance and
accuracy. Support vector machines (SVM) are innovative approach to constructing learning machines that
Minimize generalization error. There is a close relationship between SVMs and the Radial Basis Function
(RBF) classifiers. Both have found numerous applications such as in optical character recognition, object
detection, face verification, text categorization, and so on. The result demonstrated that the accuracy and
generalization performance of SVM Gaussian Radial Basis kernel function is better than RBFN. We also
examine and summarize the several superior points of the SVM compared with RBFN.
DETECTION AND REFACTORING OF BAD SMELL CAUSED BY LARGE SCALEijseajournal
Bad smells are signs of potential problems in code. Detecting bad smells, however, remains time
consuming for software engineers despite proposals on bad smell detection and refactoring tools. Large
Class is a kind of bad smells caused by large scale, and the detection is hard to achieve automatically. In
this paper, a Large Class bad smell detection approach based on class length distribution model and
cohesion metrics is proposed. In programs, the lengths of classes are confirmed according to the certain
distributions. The class length distribution model is generalized to detect programs after grouping.
Meanwhile, cohesion metrics are analyzed for bad smell detection. The bad smell detection experiments of
open source programs show that Large Class bad smell can be detected effectively and accurately with this
approach, and refactoring scheme can be proposed for design quality improvements of programs.
From previous year researches, it is concluded that testing is playing a vital role in the development of the software product. As, software testing is a single approach to assure the quality of the software so most of the development efforts are put on the software testing. But software testing is an expensive process and consumes a lot of time. So, testing should be start as early as possible in the development to control the money and time problems. Even, testing should be performed at every step in the software development life cycle (SDLC) which is a structured approach used in the development of the software product. Software testing is a tradeoff between budget, time and quality. Now a day, testing becomes a very important activity in terms of exposure, security, performance and usability. Hence, software testing faces a collection of challenges.
Comparative Analysis of Model Based Testing and Formal Based Testing - A ReviewIJERA Editor
Software testing is one of the most important steps in the process of Software Development. Testing provides
the glimpse of the proper functioning of the system under different conditions. It makes it a necessary step to
choose the best testing method for the software system to be successful and accepted by a large number of
people as the market is really competitive these days and only error free systems can survive for a longer period
of time. This paper gives the comparative analysis of two major methods of testing : Formal Specifications
Based Software Testing and Model Based Software Testing, which are used widely in the process of software
development process. It brings out how these two methods of testing can provide reliability to software system
including the major uses, advantages, and disadvantages of both the testing methods. It briefly gives the detailed
comparative analysis of these two methods of software testing. It also brings out the situations where formal
specifications based testing is more effective and efficient while model based testing being effective in others.
This comparative analysis will help one in deciding on a better testing technique, depending upon the situation,
and requirements of software, for the software to be successful in long run
Software Defect Prediction Using Radial Basis and Probabilistic Neural NetworksEditor IJCATR
Defects in modules of software systems is a major problem in software development. There are a variety of data mining
techniques used to predict software defects such as regression, association rules, clustering, and classification. This paper is concerned
with classification based software defect prediction. This paper investigates the effectiveness of using a radial basis function neural
network and a probabilistic neural network on prediction accuracy and defect prediction. The conclusions to be drawn from this work is
that the neural networks used in here provide an acceptable level of accuracy but a poor defect prediction ability. Probabilistic neural
networks perform consistently better with respect to the two performance measures used across all datasets. It may be advisable to use
a range of software defect prediction models to complement each other rather than relying on a single technique.
Software Defect Prediction Using Local and Global AnalysisEditor IJMTER
The software defect factors are used to measure the quality of the software. The software
effort estimation is used to measure the effort required for the software development process. The defect
factor makes an impact on the software development effort. Software development and cost factors are
also decided with reference to the defect and effort factors. The software defects are predicted with
reference to the module information. Module link information are used in the effort estimation process.
Data mining techniques are used in the software analysis process. Clustering techniques are used
in the property grouping process. Rule mining methods are used to learn rules from clustered data
values. The “WHERE” clustering scheme and “WHICH” rule mining scheme are used in the defect
prediction and effort estimation process. The system uses the module information for the defect
prediction and effort estimation process.
The proposed system is designed to improve the defect prediction and effort estimation process.
The Single Objective Genetic Algorithm (SOGA) is used in the clustering process. The rule learning
operations are carried out sing the Apriori algorithm. The system improves the cluster accuracy levels.
The defect prediction and effort estimation accuracy is also improved by the system. The system is
developed using the Java language and Oracle relation database environment.
Characterization of Open-Source Applications and Test Suites ijseajournal
Software systems that meet the stakeholders needs and expectations is the ultimate objective of the software
provider. Software testing is a critical phase in the software development lifecycle that is used to evaluate
the software. Tests can be written by the testers or the automatic test generators in many different ways and
with different goals. Yet, there is a lack of well-defined guidelines or a methodology to direct the testers to
write tests
We want to understand how tests are written and why they may have been written that way. This work is a characterization study aimed at recognizing the factors that may have influenced the development of the test suite. We found that increasing the coverage of the test suites for applications with at least 500 test
cases can make the test suites more costly. The correlation coeffieicent obtained was 0.543. The study also found that there is a positive correlation between the mutation score and the coverage score.
Software testing is an important activity of the software development process. Software testing is most
efforts consuming phase in software development. One would like to minimize the effort and maximize the
number of faults detected and automated test case generation contributes to reduce cost and time effort.
Hence test case generation may be treated as an optimization problem In this paper we have used genetic
algorithm to optimize the test case that are generated applying conditional coverage on source code. Test
case data is generated automatically using genetic algorithm are optimized and outperforms the test cases
generated by random testing.
Abstract—Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. By now most of research work in combinatorial testing aims to propose novel approaches trying to generate test suites with minimum size that still cover all the pairwise, triple, or n-way combinations of factors. Since the difficulty of solving this problem is demonstrated to be NP-hard, existing approaches have been designed to generate optimal or near optimal combinatorial test suites in polynomial time. In this paper, we try to apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (i.e. a special case of combinatorial testing aiming to cover all the pairwise combinations). To systematically build pairwise test suites, we propose two different PSO based algorithms. One algorithm is based on one-test-at-a-time strategy and the other is based on IPO-like strategy. In these two different algorithms, we use PSO to complete the construction of a single test. To successfully apply PSO to cover more uncovered pairwise combinations in this construction process, we provide a detailed description on how to formulate the search space, define the fitness function and set some heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare our approach to other well-known approaches. Final empirical results show the effectiveness and efficiency of our approach.
Machine learning techniques can be used to analyse data from different perspectives and enable developers to retrieve useful information. Machine learning techniques are proven to be useful
in terms of software bug prediction. In this paper, a comparative performance analysis of
different machine learning techniques is explored for software bug prediction on public
available data sets. Results showed most of the machine learning methods performed well on
software bug datasets.
Comparative Performance Analysis of Machine Learning Techniques for Software ...csandit
Machine learning techniques can be used to analyse data from different perspectives and enable
developers to retrieve useful information. Machine learning techniques are proven to be useful
in terms of software bug prediction. In this paper, a comparative performance analysis of
different machine learning techniques is explored for software bug prediction on public
available data sets. Results showed most of the machine learning methods performed well on
software bug datasets.
Comprehensive Testing Tool for Automatic Test Suite Generation, Prioritization and Testing of Object Oriented Software Products
C. Prakasa Rao & P. Govindarajulu
International Journal of Software Engineering (IJSE), Volume (7) : Issue (1) : 2016 1
C. Prakasa Rao prakashrao.pec@gmail.com
Research Scholar
Sri Venkateswara University,
Tirupathi, Chittoor(Dt) AP, India
P. Govindarajulu pgovindarajulu@yahoo.com
Retd Professor, Dept. of Computer Science,
Sri Venkateswara University,
Tirupathi, Chittoor (Dt) AP, India
Abstract
Testing has been an essential part of the software development life cycle. Automatic test case and
test data generation has attracted many researchers in the recent past. Test suite generation,
which keeps multiple objectives in mind and ensures code coverage, has received particular
attention. The test cases thus generated can have dependencies, such as open dependencies
and closed dependencies. When dependencies exist, the order in which test cases are executed
can affect the percentage of flaws detected in the software under test. Test case prioritization is
therefore another important research area that complements automatic test suite generation in
object oriented systems. Prior research on test case prioritization focused on dependency
structures; in this paper, we automate the extraction of those structures. We propose a
methodology that handles both automatic test suite generation and test case prioritization for
effective testing of object oriented software, and we built a tool to demonstrate the proof of
concept. An empirical study with 20 case studies revealed that the proposed tool and underlying
methods can have a significant impact on the software industry and its clientele.
Keywords: Software Engineering, Testing, Test Case Prioritization, Dependency Structures,
Branch Coverage.
1. INTRODUCTION
Software testing can improve the quality of the Software Under Test (SUT). Unit testing [3] is one
of the important kinds of testing performed. Generating test cases automatically has long been
studied in research circles; nevertheless, it remains a complex and challenging task [5]. Other
approaches to test case generation are based on search [2], [4]. The recent focus is on achieving
high coverage while generating test cases automatically. To address this, automatic test suite
generation came into existence. A test suite can reduce the number of test cases, as it provides
representative test cases with high coverage. Test suite generation targets many goals at a time
and generates optimal test suites. The problem with considering a single goal is that it cannot
guarantee high coverage and may produce infeasible test cases. In this paper we propose a
mechanism for automatic test suite generation based on a Genetic Algorithm (GA). The test
suites thus generated are used for mutation testing, a widely used technique [2], [4]. The GA
provides an evolutionary approach to generating test suites that best represent high code coverage.
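The GA-based suite generation described above can be illustrated with a minimal sketch. The branch ranges, integer test inputs, and GA parameters below are toy assumptions for illustration only, not the authors' actual implementation, which operates on object oriented code:

```python
import random

# Toy stand-in for branch instrumentation: a "test case" is an integer input,
# and a branch is covered when some test in the suite falls in its range.
BRANCHES = [(0, 10), (11, 50), (51, 100), (101, 200)]

def coverage(suite):
    """Fraction of branches covered by at least one test in the suite."""
    covered = {i for i, (lo, hi) in enumerate(BRANCHES)
               for t in suite if lo <= t <= hi}
    return len(covered) / len(BRANCHES)

def evolve_suite(pop_size=20, suite_len=4, generations=50, seed=1):
    """Evolve a fixed-size test suite towards maximum branch coverage."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 200) for _ in range(suite_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=coverage, reverse=True)
        survivors = pop[:pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, suite_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # mutate one test input
                child[rng.randrange(suite_len)] = rng.randint(0, 200)
            children.append(child)
        pop = survivors + children
    return max(pop, key=coverage)

best_suite = evolve_suite()
print(coverage(best_suite))
```

Note that the fitness is computed at the suite level, mirroring the whole-test-suite idea: the GA optimizes the suite as a unit rather than pursuing one coverage goal at a time.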
Test suites are used to detect faults. However, some test cases depend on others; this opens the
way for further research on dependencies and on improving fault detection
rates. In order to unearth hidden faults, it is essential to consider dependencies; to achieve this,
dependency structures must be discovered. Haidry and Miller [8] focused on test case
prioritization, but they did not discover dependency structures automatically. In this paper, we
focus on the automatic discovery of dependencies. Figure 1 shows both open and closed
dependencies.
FIGURE 1 : Sample Dependency Structure [1].
As can be seen in Figure 1, the root nodes are I1 and I2, which are independent of other nodes.
The other nodes have dependencies, which are classified into two categories: open and closed.
D6 is indirectly dependent on I1 and directly dependent on D3; therefore, it has a closed
dependency on D3 and an open dependency on I1 (no immediate dependency). When both kinds
of dependencies are considered, the quality of test case prioritization improves. DSP volume and
DSP height are the two measures used in test case prioritization: DSP height indicates depth,
while DSP volume indicates the count of dependencies. In [6] a different measure is used, known
as the Average Percentage of Faults Detected (APFD); the APFD value is directly proportional to
the fault detection rate.
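To make these measures concrete, the sketch below computes closed (direct) and open (transitive-only) dependencies, DSP height and volume for a small graph shaped like Figure 1, along with the standard APFD formula. The node names and the fault-to-test mapping in the example are illustrative assumptions, not data from the paper:

```python
# Illustrative dependency graph in the shape of Figure 1: each test maps to
# the tests it directly depends on. Node names are hypothetical.
DEPS = {"I1": [], "I2": [], "D3": ["I1"], "D6": ["D3"]}

def ancestors(node, deps):
    """All tests reachable through dependency edges, direct and indirect."""
    seen, stack = set(), list(deps[node])
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(deps[n])
    return seen

def closed_and_open(node, deps):
    """Closed = direct dependencies; open = transitive-only ancestors."""
    direct = set(deps[node])
    return direct, ancestors(node, deps) - direct

def dsp_volume(node, deps):
    """DSP volume: total count of dependencies (closed + open)."""
    return len(ancestors(node, deps))

def dsp_height(node, deps):
    """DSP height: length of the longest dependency chain below the node."""
    return 0 if not deps[node] else 1 + max(dsp_height(p, deps)
                                            for p in deps[node])

def apfd(order, faults):
    """Average Percentage of Faults Detected for a test execution order.

    order: list of test names in execution order.
    faults: fault name -> set of tests that reveal it.
    APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2n), where TFi is the
    position of the first test that reveals fault i.
    """
    n, m = len(order), len(faults)
    pos = {t: i + 1 for i, t in enumerate(order)}
    tf_sum = sum(min(pos[t] for t in revealing)
                 for revealing in faults.values())
    return 1 - tf_sum / (n * m) + 1 / (2 * n)

closed, open_ = closed_and_open("D6", DEPS)
print(closed, open_)   # D6 has a closed dependency on D3, an open one on I1
```

The higher a test's DSP volume and height, the more other tests its outcome can influence, which is why these measures feed into prioritization.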
Many kinds of test case prioritization techniques are found in the literature: knowledge based
[18], history based [11] and model based [25]. More on these techniques is covered in the
Related Works section of this paper. Our main contribution in this paper is the development of a
comprehensive tool that supports test suite generation, automatic discovery of dependency
structures and test case prioritization. The tool employs the methods we proposed in our earlier
work to demonstrate the proof of concept. The remainder of the paper is structured as follows.
Section 2 reviews literature related to test case generation and test case prioritization. Section 3
focuses on the comprehensive testing tool for automatic test suite generation and prioritization.
Section 4 presents experimental results, and the final section concludes the paper and outlines
future work.
2. RELATED WORKS
This section reviews literature pertaining to whole test suite generation, automatic extraction of
dependency structures, and prioritization of test cases for improving the bug detection rate.
2.1 Whole Test Suite Generation
Many researchers have tried to generate test suites that reduce the number of test cases by
providing representative ones; moreover, test suites can provide maximum coverage. Towards
this end, Rodolph [13] focused on genetic algorithms, while Arcuri and Briand [26] examined
adaptive random testing. The bacteriologic algorithm [27], a hybrid approach with static and
dynamic methods [28], and Directed Automated Random Testing (DART) [29] are other
approaches found in the literature. PathFinder [30], evolutionary programming [21], integrated
evolutionary programming [31], mock classes and test cases [32], and exploring the length of test cases
for code coverage [33], and JCrasher [34] are other approaches found for testing Java applications.
Malburg and Fraser [2] combined search-based and constraint-based approaches to test
software. Mutation-driven generation [1], parameter tuning [3] and mutation testing [4], [7]
are other useful lines of research found in the literature.
2.2 Automatic Generation of Dependency Structures
Ryser and Glinz [9] focused on dependency charts that reflect dependencies among various
scenarios in order to improve testing. They used natural language processing techniques to map
system requirements and descriptions, and state charts to refine the representations. This gives a
clear picture of the dependency structures, based on which test cases are extracted using the
dependency charts. With respect to dependencies, Kim and Bae [10] observe that a system has
two kinds of features, accidental and essential; the former depend on the latter, producing
different levels in the system. These researchers employed an approach to align the features of
the two classes based on the dependencies. The accidental features were found to be dynamic
and to change often, while the essential features do not change frequently. However, they only
focused on the dependency structures and did not consider test case prioritization.
2.3 Test Case Prioritization
This section covers the test case prioritization categories that are closest to our approach in this
paper: history-based, knowledge-based and model-based approaches.
2.3.1 History-Based Prioritization
Rothermel et al. [11] used execution information to improve the rate of fault detection. Their
techniques are based on code coverage, probability of fault finding, untested code and so on. A
greedy algorithm is used to identify the tests that have the highest coverage. Li et al. [12] focused
on several non-greedy algorithms for test case prioritization, including genetic algorithms and hill
climbing, and achieved a high fault detection rate. Jeffrey and Gupta [14] also depended on the
paths covered while running the code; they identified the affected paths and considered them
relevant, with higher relevance demanding higher priority. Wong et al. [15] focused on the
changes in the source code since previous runs in order to perform test case prioritization for
regression testing. Li [16] followed a code-coverage-based technique for test case prioritization:
he found the relationships among code blocks and prioritized test cases based on code coverage,
coupling the solution with symbolic execution for more accuracy. Later, in [17], the technique was
improved further, throwing light on test suite generation as well as the priority in the execution of
test cases.
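The greedy, coverage-driven selection used by these history-based techniques can be sketched as follows. This is an illustrative sketch of the "additional coverage" greedy strategy; the test names and branch sets are invented, not taken from the cited studies.

```python
def greedy_prioritize(coverage):
    """Order tests so that each pick adds the most not-yet-covered code.

    coverage: dict mapping test name -> set of covered branch ids.
    Implements the 'additional' greedy strategy: repeatedly pick the
    test covering the largest number of still-uncovered branches.
    """
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test contributing the most new branches.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

tests = {
    "t1": {"b1", "b2"},
    "t2": {"b2", "b3", "b4"},
    "t3": {"b5"},
}
print(greedy_prioritize(tests))  # t2 comes first: it covers the most branches
```

Ties are broken by insertion order here; real implementations may break ties by cost or history.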
2.3.2 Knowledge-Based Prioritization
Ma and Zhao [18] focused on program structure analysis in order to find faults in the software.
They ranked modules to indicate their testing importance, based on module importance and fault
proneness. Their approach improved the fault detection rate when compared with random
prioritization. Krishnamoorthi and Sahaaya Arul Mary [19] used software requirements as the
basis for test case prioritization, considering fault impact, traceability, completeness, complexity,
implementation, requirements change and customer priority for automatic test case prioritization.
In [20], research was done along similar lines. In [21], a case-based ranking approach, a machine
learning mechanism that depends on the user's knowledge of the program, was employed for test
case prioritization. Zhang et al. [28] used a data mining approach, namely clustering, for test case
prioritization, ranking the test cases based on the clustering results. In [22], combinatorial
interaction testing is employed to define prioritization: based on the interactions and a benefit
analysis, test cases are prioritized. In [23], UML models are used to generate test case priorities
based on the underlying interaction levels. Adaptive Random Testing [24] focused on utilizing
categorized knowledge to prioritize test cases, and proved to be inexpensive when compared
with other techniques.
2.3.3 Model-Based Prioritization
Kundu et al. [25] explored test case generation and prioritization for object oriented programs.
Their approach takes UML sequence diagrams as input, generates sequence graphs, and
prioritizes the test cases. It makes use of weights such as message weight and edge weight, on
which certain metrics are based: the sum of message weights of a path, the sum of message
weights for all edges, and the weighted average edge weight. The results reveal the interactions
between the components of the system and also provide a kind of trace that can be used to
prioritize test cases. In this paper, we propose mechanisms for whole test suite generation and
test case prioritization, besides testing them using a tool that we built in the Java programming
language.
3. COMPREHENSIVE TESTING TOOL FOR AUTOMATIC TEST SUITE
GENERATION AND PRIORITIZATION
We built a tool that facilitates automatic testing of object oriented applications. The tool includes
features such as automatic test suite generation, automatic discovery of dependency structures,
and prioritization of test cases. The following sub-sections provide insights into the underlying
features of the tool. This work extends our previous work [45], [46], and [47].
3.1 Automatic Test Suite Generation
This section provides details of our approach for automatic test suite generation. It is a search-
based approach that generates a whole test suite based on testing goals while also maximizing
the mutation score. GAs are widely used to derive test data for search-based testing: they are
meta-heuristic in nature and very popular for this purpose. A GA starts from an initial population
and uses operators like mutation and crossover to produce the next generation; it is thus
considered an evolutionary algorithm that can provide optimal solutions. In this paper, we
consider the generation of test suites for object oriented source code developed in Java. Let t be
a test case t = {s1, s2, s3, …, sl} with a certain length l. In traditional approaches, a single goal is
used to generate test suites. In this paper, however, we use multiple goals at a time and generate
a whole test suite that is representative of all tests with high coverage. Branch coverage,
expressed through a fitness function, is used to guide the test suite generation process. The
methodology is shown in Figure 2.
FIGURE 2: Methodology for proposed whole test suite generation.
As shown in Figure 2, the proposed solution starts with source code configuration. The framework
takes Java source code and generates mutants in iterative fashion, performing chromosome
operations along the way. JUnit provides the API required to generate test cases, along with its
assertion mechanism. The generated test cases are evaluated and some of the mutants are
eliminated, or killed. The mutation operators are applied at the Java bytecode level, and the GA
operators are used to perform the various evolutionary operations; applying the mutation
operators at the code level yields the different mutants. Finally, the test suite is generated.
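The evolutionary loop described above can be sketched in miniature: a whole test suite is one chromosome, fitness counts the branches the suite covers, and crossover and mutation produce the next generation. The following toy Python model is only an illustration of that loop, not the actual tool (which operates on Java bytecode with JUnit); the coverage function, population size and rates are all assumptions.

```python
import random

random.seed(1)

BRANCHES = set(range(10))            # toy program with 10 branches

def covered(test):
    """Toy stand-in for executing one test: each test (an int seed)
    deterministically covers a few branches."""
    rng = random.Random(test)
    return set(rng.sample(sorted(BRANCHES), 3))

def fitness(suite):
    """Whole-suite fitness: number of branches covered by the suite."""
    hit = set()
    for t in suite:
        hit |= covered(t)
    return len(hit)

def evolve(pop_size=20, suite_len=5, generations=30):
    # Initial population: random test suites (chromosomes).
    pop = [[random.randrange(1000) for _ in range(suite_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, suite_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:             # mutation
                child[random.randrange(suite_len)] = random.randrange(1000)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", len(BRANCHES), "branches covered")
```

Because the fitness rewards the suite as a whole rather than one goal at a time, redundant test cases contribute nothing and tend to be bred out, which is the intuition behind whole test suite generation.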
3.2 Automatic Discovery of Dependency Structures and Test Case Prioritization
There are many approaches in the literature for test case prioritization, including adaptive
random test case prioritization [39], program structure analysis [44], combinatorial interaction
testing [43], a use-case-based ranking method [41], Prioritization of Requirements for Test
(PORT) [37], cost prioritization [38], function-level and statement-level techniques [42],
regression testing [36], test costs and severity of faults [42], and scenario-based testing [40]. The
approach we follow in this paper is close to that of [8], differing in the approach to test case
prioritization. An overview of our architecture for test case prioritization is shown in Figure 3.
FIGURE 3: Proposed methodology for test case prioritization.
As shown in Figure 3, program execution traces and the program are given as input. Method
discovery is the process of finding the methods in the program using the reflection API; the
resulting list of methods is further used to prioritize test cases. Call processing is the process of
identifying the method calls in the traces and determining the order in which methods are to be
tested in order to achieve a high fault detection rate. The metadata associated with the calls can
help in making well-informed decisions. The test case prioritization module is responsible for
actually prioritizing the test cases supplied to it in the form of a test suite and for providing the
results, which contain the prioritized test cases. Using this priority improves the percentage of
faults detected.
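A minimal sketch of the two steps above, method discovery and call processing, using Python's introspection as a stand-in for the Java reflection API that the tool actually uses. The Account class, its methods and the trace are invented for illustration only.

```python
import inspect

class Account:
    """Hypothetical SUT class used only for illustration."""
    def deposit(self, amount): ...
    def withdraw(self, amount): ...
    def balance(self): ...

def discover_methods(cls):
    """Method discovery: enumerate public methods via introspection
    (the counterpart of discovery through Java's reflection API)."""
    return [name for name, member in inspect.getmembers(cls, inspect.isfunction)
            if not name.startswith("_")]

def order_by_trace(methods, trace):
    """Call processing: order discovered methods by how often they appear
    in recorded execution traces; untraced methods fall to the end."""
    calls = {m: trace.count(m) for m in methods}
    return sorted(methods, key=lambda m: -calls[m])

methods = discover_methods(Account)
trace = ["deposit", "balance", "deposit", "withdraw", "deposit"]
print(order_by_trace(methods, trace))  # deposit first: called most often
```

Call frequency is just one plausible piece of metadata; the tool could equally weight calls by position in the trace or by the dependencies they expose.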
3.3 Test Case Prioritization Algorithm
This sub-section provides the pseudo code for test case prioritization, implemented in our tool to
demonstrate the proof of concept.
Pseudo Code for Test Case Prioritization
Input    : Execution Traces (ET), Program (P) and Test Suite (TS)
Output   : Prioritized Test Cases (PT)
Step 1   : Initialize a vector (M)
Step 2   : Initialize another vector (MM)
Step 3   : Discover methods from P and populate M
Step 4   : for each method m in M
Step 4.1 :     scan TS
Step 4.2 :     associate meta data with calls
Step 4.3 :     add method m to vector MM
Step 5   : end for
Step 6   : for each mm in MM
Step 6.1 :     analyze TS
Step 6.2 :     correlate with mm
Step 6.3 :     add corresponding m to PT
Step 7   : end for
Step 8   : return PT
As seen above, the pseudo code implements test case prioritization. The algorithm takes
execution traces, the program and a test suite as input and, after processing, generates
prioritized test cases. First it discovers the methods and creates a collection object to hold them.
For each method, the test suite is scanned and metadata is associated with the calls; this
iterative process ends with a vector, or collection, containing the associated methods. The test
case prioritization step then considers the dependencies and the available test suites to produce
a list of test cases in order of priority. The prioritized test cases are the final result of the
algorithm.
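The pseudo code above can be turned into a small runnable sketch. The trace format and the scoring rule (tests that touch heavily executed methods come first) are assumptions for illustration, since the paper leaves the correlation step abstract.

```python
def prioritize(execution_traces, methods, test_suite):
    """Steps 1-8 of the pseudo code: associate trace metadata with each
    discovered method, then order test cases by how strongly they
    correlate with frequently executed methods.

    execution_traces: list of method names in observed call order (ET).
    methods: method names discovered from the program (M).
    test_suite: dict test name -> set of methods it exercises (TS).
    """
    # Steps 1-5: metadata = call frequency per discovered method (MM).
    meta = {m: execution_traces.count(m) for m in methods}

    # Steps 6-8: score each test by the total call weight of its methods.
    def score(test):
        return sum(meta.get(m, 0) for m in test_suite[test])

    return sorted(test_suite, key=score, reverse=True)

traces = ["save", "load", "save", "save", "validate"]
suite = {"t1": {"load"}, "t2": {"save", "validate"}, "t3": {"save"}}
print(prioritize(traces, ["save", "load", "validate"], suite))  # t2, t3, t1
```

Here t2 ranks first because it exercises both the most-called method and another traced one; any richer metadata (open/closed dependencies, call order) would slot into the same scoring hook.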
4. EXPERIMENTAL RESULTS
The tool we built was tested with 20 real-world applications. The details of the applications in
terms of number of classes, number of branches and LOC are shown in Figure 4. The tool
demonstrates the proof of concept and discovers dependency structures from a given program. It
can distinguish between open and closed dependencies as described earlier in this paper, and
supports both automatic test suite generation and test case prioritization with automated
discovery of dependency structures.
FIGURE 4: Details of case studies (horizontal axis: case studies; vertical axis: number of classes,
number of branches and LOC).

As can be seen in Figure 4, the case studies considered for the experiments include Colt,
Commons CLI, Commons Codec, Commons Collections, Commons Math, Commons Primitives,
Google Collections, Industrial Case Study, Java Collections, JDom, JGraphT, Joda Time,
NanoXML, Numerical Case Study, Java Regular Expressions, String Case Study, GNU Trove,
Xmlenc, XML Object Model and Java ZIP Utils. The experiments were made with the proposed
approach and the similar approach EvoSuite, besides a single objective approach.

FIGURE 5: Comparison of the proposed approach with other approaches.
As shown in Figure 5, the proposed approach is compared with the other approaches. The
proposed approach is better than the single objective approach, because generating a test suite
with multiple objectives in mind can reduce the number of test cases while still covering all
branches. With respect to average branch coverage, EvoSuite and the proposed method
outperform the single branch approach.
FIGURE 6: Average branch coverage comparison for all case studies.

As shown in Figure 6, the average branch coverage of the proposed system and EvoSuite is
better than that of the single branch strategy. The reason is that when a test suite is generated
with multiple objectives in mind, fewer test cases are generated while still being able to cover all
possible branches. The horizontal axis presents the case studies and the vertical axis the
average branch coverage.
FIGURE 7: Average test suite length for all case studies.

As shown in Figure 7, the average test suite length of the proposed system and EvoSuite is
better than that of the single branch strategy. The reason is that when a test suite is generated
with multiple objectives in mind, fewer test cases are generated while still being able to cover all
possible branches. The horizontal axis presents the case studies and the vertical axis the
average test suite length.

FIGURE 8: Case study applications used for test case prioritization.

As shown in Figure 8, the case study applications and their statistics in terms of lines of code,
number of functions and number of dependencies are presented:

Application      Elite 1  Elite 2  GSM 1  GSM 2  CRM 1  CRM 2  MET    CZT
Lines of Code    2487     2487     385    975    1875   1875   10674  27246
Functions        54       54       14     60     33     33     270    756
Dependencies     64       64       65     65     30     30     80     314

These case studies are used to apply the proposed test case prioritization approach with
automated discovery of dependencies.
The results revealed that the prioritization can help increase the number of faults detected, which
is the ultimate aim of the tool proposed in this paper.

FIGURE 9: Fault detection dynamics with order t1, t2, t3, t4 (a) and t4, t1, t2, t3 (b).

As shown in Figure 9, the percentage of test suites executed is presented on the horizontal axis
while the vertical axis presents the percentage of faults detected. Test case prioritization can
have an impact on fault detection; this is the important hypothesis that has been tested with the
proposed tool and underlying methodologies. As the results reveal, the prioritized test cases
were able to detect more faults, so the hypothesis has been tested and proved positive.
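The dynamics in Figure 9 are commonly summarized by the APFD (Average Percentage of Faults Detected) metric from the regression testing literature of Rothermel et al. [11]. A small sketch with an invented fault matrix shows why running t4 first pays off:

```python
def apfd(order, faults):
    """Average Percentage of Faults Detected for one test ordering.

    order: test names in execution order (n tests).
    faults: dict fault id -> set of tests that detect it (m faults).
    APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n)
    """
    n, m = len(order), len(faults)
    position = {t: i + 1 for i, t in enumerate(order)}
    # For each fault, find the position of the first test that detects it.
    total = sum(min(position[t] for t in detecting)
                for detecting in faults.values())
    return 1 - total / (n * m) + 1 / (2 * n)

# Invented fault matrix: t4 detects two faults, so it should run early.
faults = {"f1": {"t4"}, "f2": {"t2", "t4"}, "f3": {"t3"}}
print(apfd(["t1", "t2", "t3", "t4"], faults))  # 0.375 for the baseline order
print(apfd(["t4", "t1", "t2", "t3"], faults))  # 0.625 with t4 promoted
```

A higher APFD means faults are exposed earlier in the run, which is exactly the improvement Figure 9(b) depicts.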
5. DISCUSSION
In this paper, our research work is focused on three aspects: 1. automatic test suite generation,
2. automatic discovery of dependency structures, and 3. test case prioritization. We followed the
automatic test suite approach to generate chromosomes from the given input program. The
whole test suite generation, with the proposed methodology illustrated in Figure 2, could produce
results comparable with EvoSuite. Further, the usage of GA operators identifies the test cases to
be included in the whole test suite, which is representative and demonstrates high coverage of
the SUT. The mutation operators are used at the Java bytecode level to identify possible test
cases. The results were obtained with several case study projects; Figure 4 shows the statistics
of the case studies. The results presented in Figure 5 reveal the average number of missed
infeasible targets for the proposed approach, EvoSuite and the single objective approach. The
proposed approach outperforms the single objective approach, as the whole test suite is
representative of multiple objectives and ensures high coverage; it is comparable with EvoSuite,
with little difference in performance.
Apart from the above results, the average branch coverage for all case studies was observed
with our method and compared with EvoSuite and the single objective approach. The rationale is
that the proposed approach and EvoSuite follow the whole test suite generation concept, which
represents multiple objectives so as to reduce the number of test cases besides ensuring high
branch coverage. The results presented in Figure 6 reveal these outcomes and show that the
proposed approach differs only slightly in performance from EvoSuite.
Another significant contribution of this paper is the automatic discovery of dependency structures.
In the literature [7], [8] and [9], it is mentioned that dependency structures were used to prioritize
test cases. As prioritization can improve the fault detection ratio, the dependencies of function
calls, in terms of open and closed dependencies, can influence the results. In this regard, we
proposed a methodology and an algorithm that extract dependency structures automatically. In
[8], the dependencies are extracted manually; our methodology automates the extraction
procedure. Since manual prioritization is very time-consuming, this is a significant improvement
in our research. Our algorithm ensures that the dependencies are automatically extracted and
used to prioritize test cases. Prioritization is non-trivial and hard to achieve in case studies with
thousands of branches, and the extraction of dependencies is not simple for very complex
projects. Therefore the proposed methodology, which extracts dependencies automatically,
paves the way for significant speed and performance gains in industrial software testing. The
proposed methodology for test case prioritization combines the extraction of dependencies with
the reflection API for method identification. By using execution traces, call processing provides
insights into the runtime behaviour of SUTs; this knowledge paves the way for automatic
prioritization of test cases and, in turn, improves the fault detection ratio. The methodology is
shown in Figure 3 and the results that show the performance improvement are presented in
Figure 9; Figure 9(b) clearly shows that test case prioritization improves performance
significantly.
6. COMPARATIVE RESULTS
In this section, the results of the two approaches are compared on the TRO case study. The
number of faults detected with the automatic discovery of dependency structures in the proposed
methodology is compared with the manual identification of dependency structures in [8]. Figure
10 shows the comparison results.

FIGURE 10: Comparison between the existing and proposed methodology on the TRO project
(horizontal axis: percentage of executed test suites; vertical axis: number of faults detected).

As shown in Figure 10, the performance of the two approaches for the given SUT is recorded.
The results reveal that the proposed methodology, which determines the dependency structures
automatically, is comparable with that of [8], where dependency structures are manually
identified before prioritization is performed. The increased fault detection count is due to the fact
that test case prioritization helps run the test cases in the proper order, which is reflected in an
increased fault detection ratio.
7. CONCLUSIONS AND FUTURE WORK
In this paper our focus is on three aspects of software testing. First, we proposed and
implemented a mechanism for automatic test suite generation with multiple
objectives besides achieving complete coverage. Second, we focused on the automatic
extraction of dependency structures from the SUT so as to improve the prioritization of test
cases, and proposed and implemented a methodology for this purpose. Third, we built a tool that
demonstrates automated test suite generation, automatic discovery of dependency structures
and test case prioritization for improving the percentage of flaws detected. The integrated
functionality has been tested, and our empirical results reveal a significant improvement in the
percentage of flaws detected with the new approach. In future, this research can be extended to
adapt our approaches to Software Product Lines (SPLs). The software product line is a modern
approach to software development that manages a set of products with core and custom assets;
new products are derived based on variabilities in the requirements. The SPL approach improves
software development from different angles, especially when software is delivered as a product to
clients. SPLs and their configuration management are complex in nature, and testing such
applications needs improved mechanisms that can leverage the characteristics of SPLs for
improving quality.
8. REFERENCES
[1] G. Fraser and A. Zeller, "Mutation-Driven Generation of Unit Tests and Oracles," IEEE
Trans. Software Eng., vol. 38, no. 2, pp. 1-15, 2012.
[2] J. Malburg and G. Fraser, “Combining Search-Based and Constraint-Based Testing,” Proc.
IEEE/ACM 26th Int’l Conf. Automated Software Eng., 2011.
[3] G. Fraser and A. Arcuri, "Whole Test Suite Generation," IEEE Trans. Software Eng., vol. 39,
no. 2, pp. 276-291, 2013.
[4] A. Baars, M. Harman, Y. Hassoun, K. Lakhotia, P. McMinn, P. Tonella, and T. Vos,
“Symbolic Search-Based Testing,” Proc. IEEE/ ACM 26th Int’l Conf. Automated Software
Eng., 2011.
[5] S. Anand, E.K. Burke, T.Y. Chen, J. Clark, M.B. Cohen, W. Grieskamp, M. Harman, M.J.
Harrold, and P. McMinn, "An Orchestrated Survey of Methodologies for Automated Software
Test Case Generation," J. Systems and Software, pp. 1979-2001, 2013.
[6] Richard Baker and Ibrahim Habli. (2010). An Empirical Evaluation of Mutation Testing For
Improving the Test Quality of Safety- Critical Software. IEEE. p1-32.
[7] A. Arcuri and L. Briand, “A Practical Guide for Using Statistical Tests to Assess
Randomized Algorithms in Software Engineering,” Proc. 33rd Int’l Conf. Software Eng., pp.
1-10. 2011.
[8] Shifa-e-Zehra Haidry and Tim Miller. (2013). Using Dependency Structures for Prioritization
of Functional Test Suites. IEEE. 39 p258-275.
[9] J. Ryser and M. Glinz, “Using Dependency Charts to Improve Scenario-Based Testing,”
Proc 17th Int’l Conf. Testing Computer Software, 2000.
[10] J. Kim and D. Bae, “An Approach to Feature Based Modeling by Dependency Alignment for
the Maintenance of the Trustworthy System,” Proc. 28th Ann. Int’l Computer Software and
Applications Conf., pp. 416-423, 2004.
[11] G. Rothermel, R.H. Untch, C. Chu, and M.J. Harrold, “Prioritizing Test Cases for Regression
Testing,” IEEE Trans. Software Eng., vol. 27, no. 10, pp. 929-948, Sept. 2001.
[12] Z. Li, M. Harman, and R. Hierons, “Search Algorithms for Regression Test Case
Prioritization,”IEEE Trans. Software Eng., vol. 33, no. 4, pp. 225-237, Apr. 2007.
[13] G. Rudolph, “Convergence Analysis of Canonical Genetic Algorithms,” IEEE Trans. Neural
Networks, vol. 5, no. 1, pp. 96-101, Jan. 1994.
[14] D. Jeffrey and N. Gupta, “Experiments with Test Case Prioritization Using Relevant Slices,”
J.Systems and Software, vol. 81, no. 2, pp. 196-221, 2008.
[15] Wong, J.R. Horgan, S. London, and H. Agrawal, “A Study of Effective Regression Testing in
Practice,” Proc. Eighth Int’l Symp. Software Reliability Eng., pp. 230-238, 1997.
[16] J. Li, “Prioritize Code for Testing to Improve Code Coverage of Complex Software,” Proc.
16th IEEE Int’l Symp. Software Reliability Eng., pp. 75-84, 2005.
[17] J. Li, D. Weiss, and H. Yee, “Code-Coverage Guided Prioritized Test Generation,” J.
Information and Software Technology, vol. 48, no. 12, pp. 1187-1198, 2006.
[18] Z. Ma and J. Zhao, “Test Case Prioritization Based on Analysis of Program Structure,” Proc.
15th Asia-Pacific Software Eng. Conf., pp. 471-478, 2008.
[19] R. Krishnamoorthi and S.A. Sahaaya Arul Mary, “Factor Oriented Requirement Coverage
Based System Test Case Prioritization ofNew and Regression Test Cases,” Information and
Software Technology, vol. 51, no. 4, pp. 799-808, 2009.
[20] H. Srikanth, L. Williams, and J. Osborne, “System Test Case Prioritization of New and
Regression Test Cases,” Proc. Fourth Int’l Symp. Empirical Software Eng., pp. 62-71, 2005.
[21] P. Tonella, P. Avesani, and A. Susi, “Using the Case-Based Ranking Methodology for Test
Case Prioritization,” Proc. 22nd IEEE Int’l Conf. Software Maintenance, pp. 123-133, 2006.
[22] X. Qu, M.B. Cohen, and K.M. Woolf, “Combinatorial Interaction Regression Testing: A Study
of Test Case Generation and Prioritization,” Proc. IEEE Int’l Conf. Software Maintenance,
pp. 255-264, 2007.
[23] F. Basanieri, A. Bertolino, and E. Marchetti, “The Cow_Suite Approach to Planning and
Deriving Test Suites in UML Projects,” Proc. Fifth Int’l Conf. Unified Modeling Language, pp.
275-303, 2002.
[24] B. Jiang, Z. Zhang, W. Chan, and T. Tse, “Adaptive Random Test Case Prioritization,” Proc.
IEEE/ACM Int’l Conf. Automated Software Eng., pp. 233-244, 2009.
[25] D. Kundu, M. Sarma, D. Samanta, and R. Mall, “System Testing for Object-Oriented
Systems with Test Case Prioritization,” Software Testing, Verification, and Reliability, vol.
19, no. 4, pp.97- 333, 2009.
[26] A. Arcuri, M.Z. Iqbal, and L. Briand, "Random Testing: Theoretical Results and Practical
Implications," IEEE Trans. Software Eng., vol. 38, pp. 258-277, 2012.
[27] B. Baudry, F. Fleurey, J.-M. Je´ze´quel, and Y. Le Traon, “Automatic Test Cases
Optimization: A Bacteriologic Algorithm,” IEEE Software, vol. 22, no. 2, pp. 76-82, Mar./Apr.
2005.
[28] S. Zhang, D. Saff, Y. Bu, and M. Ernst, “Combined Static and Dynamic Automated Test
Generation,” Proc. ACM Int’l Symp. Software Testing and Analysis, 2011.
[29] P. Godefroid, N. Klarlund, and K. Sen, “DART: Directed Automated Random Testing,” Proc.
ACM SIGPLAN Conf. Programming Language Design and Implementation, pp. 213-223,
2005.
[30] W. Visser, C.S. Pasareanu, and S. Khurshid, “Test Input Generation with Java PathFinder,”
Proc. ACM SIGSOFT Int’l Symp. Software Testing and Analysis, pp. 97-107, 2004.
[31] K. Inkumsah and T. Xie, “Improving Structural Testing of Object-Oriented Programs via
Integrating Evolutionary Testing and Symbolic Execution,” Proc. 23rd IEEE/ACM Int’l Conf.
Automated Software Eng., pp. 297-306, 2008.
[32] M. Islam and C. Csallner, “Dsc+Mock: A Test Case + Mock Class Generator in Support of
Coding against Interfaces,” Proc. Eighth Int’l Workshop Dynamic Analysis, pp. 26-31, 2010.
[33] A. Arcuri, “A Theoretical and Empirical Analysis of the Role of Test Sequence Length in
Software Testing for Structural Coverage,” IEEE Trans. Software Eng., vol. 38, no. 3,
pp.497-519, May/ June 2011.
[34] C. Csallner and Y. Smaragdakis, “JCrasher: An Automatic Robustness Tester for Java,”
Software Practice and Experience, vol. 34, pp. 1025-1050, 2004.
[35] W.E. Wong, J.R. Horgan, S. London, and H. Agrawal, "A Study of Effective Regression
Testing in Practice," Proc. Eighth Int'l Symp. Software Reliability Eng., pp. 264-274, 1997.
[36] Greegg Rothermel, Sebastian Elbaum, Alexey Malishevsky, Praveen Kallakrui and Brian
Davia (2011), The impact of test suite granularity on the cost-effectiveness of Regression
Testing, University of Nebraska – Lincoln, p1-12.
[37] Hema Srikanth, Laurie Williams, Jason Osborne. (2000). System Test Case Prioritization of
New and Regression Test Cases. Department of Computer Science. 2 (4), p1-23.
[38] J. Jenny Li. (2001). Prioritize Code for Testing to Improve Code Coverage of Complex
Software. CID. p1-18.
[39] Jiang, B; Zhang, Z; Chan, WK; Tse, TH. (2009). Adaptive random test case prioritization.
International Conference On Automated Software Engineering. 4 (24), p233-244.
[40] Johannes Ryser. (2000). Using Dependency Charts to Improve Scenario-Based Testing.
International Conference on Testing Computer Software TCS. 18 (3), p1-10.
[41] Paolo Tonella, Paolo Avesani, Angelo Susi. (1997). Using the Case-Based Ranking
Methodology for Test Case Prioritization. IEEE.p1-10.
[42] Sebastian Elbaum. (2001). Incorporating Varying Test Costs and Fault Severities into Test
Case Prioritization. Proceedings of the 23rd International Conference on Software
Engineering. 23 (5), p1-10.
[43] Xiao Qu, Myra B. Cohen, Katherine M. Woolf. (2006). Combinatorial Interaction Regression
Testing: A Study of Test Case Generation and Prioritization. Department of Computer
Science and Engineering, p149-170.
[44] Zengkai Ma. (2001). Test Case Prioritization based on Analysis of Program Structure.
Department of Computer Science, p149-170.
[45] C. Prakasa Rao and P. Govindarajulu, "Automatic Discovery of Dependency Structures for
Test Case Prioritization," IJCSNS, vol. 15, pp. 52-57, 2015.
[46] C. Prakasa Rao and P. Govindarajulu, "A Comprehensive Survey of Recent Developments
in Software Testing Methodologies," IJSR, vol. 3, pp. 1889-1895, 2014.
[47] C. Prakasa Rao and P. Govindarajulu, "Genetic Algorithm for Automatic Generation of
Representative Test Suite for Mutation Testing," IJCSNS, vol. 15, pp. 11-17, 2015.