This document summarizes an algorithm for automatically generating test data using metaheuristic search techniques. The algorithm aims to generate test data that satisfies test adequacy criteria such as statement, branch, and condition coverage. It takes an evolutionary approach in which the test adequacy criterion defines the fitness function and the program's input domain forms the search space. Candidate test data are encoded as individuals that are evaluated and evolved over generations toward inputs that maximize coverage under the fitness function. The algorithm is evaluated on 50 real-world C programs and is found to generate test data faster, and with better coverage, than random testing.
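The evolutionary loop described above can be sketched in a few lines of Python. The program under test, its target branch (`x == 2 * y`), and every parameter value here are hypothetical illustrations, and a unit-step local polish is appended so the sketch reliably ends with the branch covered (a memetic variant, not necessarily the paper's exact algorithm):

```python
import random

def branch_distance(x, y):
    # Fitness for a hypothetical target branch `if (x == 2 * y)`:
    # 0 means the branch is taken; larger means farther away.
    return abs(x - 2 * y)

def evolve(pop_size=40, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [(rng.randint(-100, 100), rng.randint(-100, 100)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*pop[0]) == 0:        # branch already covered
            break
        children = [pop[0]]                      # elitism: keep the best
        parents = pop[:pop_size // 2]
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [a[0], b[1]]                 # crossover: mix genes
            if rng.random() < 0.5:               # mutation: nudge x
                child[0] += rng.randint(-5, 5)
            children.append(tuple(child))
        pop = children
    # Memetic polish: unit steps on x always reach the branch exactly,
    # since |x - 2y| shrinks by 1 per step.
    x, y = min(pop, key=lambda ind: branch_distance(*ind))
    while branch_distance(x, y) > 0:
        x += 1 if x < 2 * y else -1
    return x, y

x, y = evolve()
```

The same skeleton carries over to real coverage criteria by swapping in an instrumented branch-distance measurement for `branch_distance`.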
Software test-case generation is the process of identifying a set of test cases, and it is necessary to generate test sequences that satisfy the testing criteria. A great deal of past research has addressed this difficult problem. The length of the test sequence plays an important role in software testing, since it determines whether sufficient testing has been carried out. Many existing test sequence generation techniques use a genetic algorithm for test-case generation. The Genetic Algorithm (GA) is a heuristic optimization technique driven by evolution operators and a fitness function; it generates new test cases from the existing test sequence. To improve on existing techniques, this paper proposes a new technique that combines tabu search with the genetic algorithm. The hybrid combines the strengths of the two meta-heuristics and produces efficient test-case sequences.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & Science, Power Electronics, Electronics & Communication Engineering, Computational Mathematics, Image Processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design, etc.
Application of Genetic Algorithm in Software Engineering: A Review (IRJES)
Abstract. Software engineering is a comparatively new and constantly changing field. The challenge of meeting strict project schedules while delivering high-quality software requires that software engineering be automated to a large extent and that human intervention be reduced to an optimal level. To this end, researchers have explored the potential of machine learning approaches, which are adaptable and capable of learning. In this paper, we look at how the genetic algorithm (GA) can be used to build tools for software development and maintenance tasks.
Abstract—Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. To date, most research in combinatorial testing has aimed to propose novel approaches that generate minimum-size test suites which still cover all the pairwise, triple, or n-way combinations of factors. Since this problem is NP-hard, existing approaches are designed to generate optimal or near-optimal combinatorial test suites in polynomial time. In this paper, we apply particle swarm optimization (PSO), a meta-heuristic search technique, to pairwise testing (a special case of combinatorial testing that aims to cover all pairwise combinations). To systematically build pairwise test suites, we propose two PSO-based algorithms: one based on the one-test-at-a-time strategy and the other on an IPO-like strategy. In both algorithms, PSO completes the construction of a single test. To make PSO cover as many uncovered pairwise combinations as possible during this construction, we describe in detail how to formulate the search space, define the fitness function, and set several heuristic parameters. To verify the effectiveness of the approach, we implement the algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare it to other well-known approaches; the final empirical results show its effectiveness and efficiency.
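As a rough illustration of the one-test-at-a-time strategy, the sketch below uses a small continuous PSO (positions are floored to factor levels) to pick each next test, with a greedy fallback so construction always terminates. The factor counts, swarm settings, and the fallback are assumptions for illustration, not the paper's exact formulation:

```python
import itertools
import random

def pso_next_test(levels, uncovered, swarm=20, iters=40, seed=0):
    # PSO searches for one test covering many still-uncovered pairs.
    rng = random.Random(seed)
    k = len(levels)

    def to_test(p):
        return [min(int(p[i]), levels[i] - 1) for i in range(k)]

    def score(p):
        test = to_test(p)
        return sum(1 for (i, vi), (j, vj) in uncovered
                   if test[i] == vi and test[j] == vj)

    pos = [[rng.uniform(0, levels[i]) for i in range(k)] for _ in range(swarm)]
    vel = [[0.0] * k for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=score)[:]
    for _ in range(iters):
        for s in range(swarm):
            for i in range(k):
                vel[s][i] = (0.72 * vel[s][i]
                             + 1.49 * rng.random() * (pbest[s][i] - pos[s][i])
                             + 1.49 * rng.random() * (gbest[i] - pos[s][i]))
                pos[s][i] = min(max(pos[s][i] + vel[s][i], 0.0), levels[i] - 1e-9)
            if score(pos[s]) > score(pbest[s]):
                pbest[s] = pos[s][:]
                if score(pbest[s]) > score(gbest):
                    gbest = pbest[s][:]
    return to_test(gbest)

def pairwise_suite(levels):
    uncovered = {((i, vi), (j, vj))
                 for i, j in itertools.combinations(range(len(levels)), 2)
                 for vi in range(levels[i]) for vj in range(levels[j])}

    def hits(test):
        return {p for p in uncovered
                if test[p[0][0]] == p[0][1] and test[p[1][0]] == p[1][1]}

    suite = []
    while uncovered:
        test = pso_next_test(levels, uncovered, seed=len(suite))
        newly = hits(test)
        if not newly:                       # greedy fallback guarantees progress
            (i, vi), (j, vj) = next(iter(uncovered))
            test = [0] * len(levels)
            test[i], test[j] = vi, vj
            newly = hits(test)
        uncovered -= newly
        suite.append(test)
    return suite

suite = pairwise_suite([2, 2, 2])   # three hypothetical two-level factors
```

The suite is guaranteed to cover every pairwise combination; how close it comes to minimum size depends on how well each PSO run does.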
Software Testing Defect Prediction Model: A Practical Approach (eSAT Journals)
Abstract. Software defect prediction aims to reduce software testing effort by guiding testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict defects in order to save time, improve quality and testing, and plan resources better to meet deadlines. Applying a statistical defect prediction model in a real-life setting is extremely difficult because it requires many data variables and metrics, as well as historical defect data, to predict the next releases or new, similar projects. This paper explains our statistical model and how it accurately predicts defects for upcoming software releases or projects. We used 20 past release data points of a software project and 5 parameters, and built a model by applying descriptive statistics, correlation, and multiple linear regression with 95% confidence intervals (CI). In this multiple linear regression model, the R-squared value was 0.91 with a standard error of 5.90%. The model is now being used to predict defects in various testing projects and operational releases, and we have found 90.76% precision between actual and predicted defects.
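A multiple linear regression of this kind can be fitted from scratch via the normal equations. The release history below is synthetic (defects generated as an exact linear function of two made-up predictors, "size" and "churn"), so the sketch illustrates the mechanics rather than the paper's 20-release, 5-parameter dataset:

```python
def fit_linear(X, y):
    # Ordinary least squares via the normal equations (X'X) b = (X'y),
    # solved with Gaussian elimination and partial pivoting.
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in reversed(range(k)):              # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, k))) / A[r][r]
    return coef

# Hypothetical release history: defects = 2 + 3*size + 5*churn, exactly.
X = [[1, s, c] for s, c in [(1, 0), (2, 1), (3, 1), (4, 2), (5, 3), (6, 2)]]
y = [2 + 3 * s + 5 * c for _, s, c in X]
coef = fit_linear(X, y)   # recovers [intercept, size, churn] coefficients
```

With noise-free data the fitted coefficients recover the generating values; real defect data would of course leave residual error.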
Software Defect Trend Forecasting In Open Source Projects using A Univariate ... (CSCJournals)
Our objective in this research is to provide a framework that allows project managers, business owners, and developers to forecast the trend in software defects within a software project in real time. With a mechanism for forecasting defects, these stakeholders can allocate the necessary resources at the right time to remove defects before they accumulate and ultimately lead to software failure. We show not only general trends in several open-source projects but also trends in daily, monthly, and yearly activity. Our results show that this forecasting method works up to 6 months out with an MSE of only 0.019. In this paper, we present our technique and methodology for developing the inputs to the proposed model and the results of testing on seven open-source projects. Further, we discuss the prediction models, their performance, and the implementation using the FBProphet framework and the ARIMA model.
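As a deliberately simple stand-in for the FBProphet/ARIMA models, a least-squares trend line already shows the shape of such a forecast. The monthly defect counts below are invented:

```python
def linear_trend_forecast(series, steps_ahead=1):
    # Fit y = a + b*t by least squares over t = 0..n-1, then extrapolate.
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    b = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
         / sum((t - t_mean) ** 2 for t in range(n)))
    a = y_mean - b * t_mean
    return a + b * (n - 1 + steps_ahead)

# Invented monthly open-defect counts with a clear downward trend.
defects = [120, 112, 106, 99, 91, 85, 78]
next_month = linear_trend_forecast(defects)
```

Real defect series have seasonality and changepoints, which is exactly what the heavier models in the paper are for; a trend line is only the baseline.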
In the present paper, the applicability and capability of AI techniques for effort estimation have been investigated. Neuro-fuzzy models prove very robust: they are characterized by fast computation and can handle distorted data. Given the non-linearity present in the data, such a model is an efficient quantitative tool for predicting effort. A one-hidden-layer network, named OHLANFIS, has been developed in the MATLAB simulation environment.
The initial parameters of OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership functions are optimally determined using a hybrid learning algorithm. The analysis shows that the effort estimation model developed with the OHLANFIS technique outperforms the standard ANFIS model.
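The Gaussian membership functions at the core of such a model are easy to sketch; the fuzzy sets and their centers and widths below are invented for illustration:

```python
import math

def gaussian_mf(x, c, sigma):
    # Gaussian membership function: mu(x) = exp(-(x - c)^2 / (2 * sigma^2))
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Invented fuzzy sets for "project size" in KLOC: (center, width).
sets = {"small": (5.0, 3.0), "medium": (20.0, 6.0), "large": (50.0, 12.0)}

# Fuzzify a hypothetical 18 KLOC project against each set.
memberships = {name: gaussian_mf(18.0, c, s) for name, (c, s) in sets.items()}
strongest = max(memberships, key=memberships.get)
```

In ANFIS-style training, the centers and widths above would not be fixed by hand but tuned by the hybrid learning algorithm the paragraph describes.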
Regression testing concentrates on finding defects after a major code change has occurred; specifically, it exposes software regressions, i.e. old bugs that have reappeared. It is an expensive testing process that has been estimated to account for almost half the cost of software maintenance. To improve the regression testing process, test case prioritization techniques organize the execution order of test cases, yielding an improved rate of fault detection when test suites cannot run to completion.
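A common concrete instance of such prioritization is the greedy "additional" strategy: repeatedly run the test that covers the most not-yet-covered statements, resetting once full coverage is reached. The per-test coverage data below is hypothetical:

```python
def prioritize(coverage):
    # Greedy "additional" test case prioritization.
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        gains = {t: len(stmts - covered) for t, stmts in remaining.items()}
        if max(gains.values()) == 0:
            if not covered:                  # tests that add nothing at all
                order.extend(sorted(remaining))
                break
            covered = set()                  # full coverage reached: reset
            continue
        best = max(sorted(remaining), key=lambda t: gains[t])
        order.append(best)
        covered |= remaining.pop(best)
    return order

# Hypothetical statement coverage per test case.
cov = {"t1": {1, 2, 3}, "t2": {3, 4}, "t3": {1, 2, 3, 4, 5}, "t4": {5}}
order = prioritize(cov)
```

Front-loading the widest-covering tests is what raises the rate of fault detection when the suite is cut off early.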
A Review on Parameter Estimation Techniques of Software Reliability Growth Mo... (IJCATR)
Software reliability is a quantifiable metric, defined as the probability that a piece of software operates without failure for a specified period of time in a specific environment. Various software reliability growth models have been proposed to predict the reliability of software; these models help vendors predict its behaviour before shipment. Reliability is predicted by estimating the parameters of the growth models, but the model parameters generally stand in nonlinear relationships, which causes many problems when searching for optimal parameters with traditional techniques such as Maximum Likelihood and Least Squares Estimation. Various stochastic search algorithms have been introduced that make parameter estimation more reliable and computationally easier. This paper explores parameter estimation of NHPP-based reliability models using MLE and using an evolutionary search algorithm called Particle Swarm Optimization.
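The PSO side of this can be sketched against the Goel-Okumoto NHPP mean value function m(t) = a(1 - e^(-bt)). For simplicity the objective below is least squares on synthetic, noise-free data rather than the full likelihood, so it illustrates the search itself, not the paper's MLE formulation:

```python
import math
import random

def go_mean_value(a, b, t):
    # Goel-Okumoto NHPP mean value function m(t) = a * (1 - e^(-b*t)).
    return a * (1.0 - math.exp(-b * t))

def pso_fit(times, counts, swarm=30, iters=200, seed=3):
    # PSO over (a, b), minimizing squared error against observed counts.
    rng = random.Random(seed)
    lo, hi = (1.0, 1e-3), (500.0, 1.0)       # assumed search box for (a, b)

    def sse(p):
        return sum((go_mean_value(p[0], p[1], t) - c) ** 2
                   for t, c in zip(times, counts))

    pos = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(swarm)]
    vel = [[0.0, 0.0] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sse)[:]
    for _ in range(iters):
        for s in range(swarm):
            for d in range(2):
                vel[s][d] = (0.72 * vel[s][d]
                             + 1.49 * rng.random() * (pbest[s][d] - pos[s][d])
                             + 1.49 * rng.random() * (gbest[d] - pos[s][d]))
                pos[s][d] = min(max(pos[s][d] + vel[s][d], lo[d]), hi[d])
            if sse(pos[s]) < sse(pbest[s]):
                pbest[s] = pos[s][:]
                if sse(pbest[s]) < sse(gbest):
                    gbest = pbest[s][:]
    return gbest

# Synthetic failure data generated from a = 100, b = 0.2 (noise-free).
times = [1, 2, 4, 6, 8, 10, 15, 20]
counts = [go_mean_value(100, 0.2, t) for t in times]
a_hat, b_hat = pso_fit(times, counts)
```

The appeal over Newton-style MLE solvers is visible here: the swarm needs only objective evaluations, no gradients of the nonlinear model.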
TOWARDS PREDICTING SOFTWARE DEFECTS WITH CLUSTERING TECHNIQUES (IJAIA)
The purpose of software defect prediction is to improve the quality of a software project by building a predictive model that decides whether a software module is fault prone or not. In recent years, much research has applied machine learning techniques to this topic. Our aim was to evaluate the performance of clustering techniques combined with feature selection schemes for software defect prediction. We analysed the National Aeronautics and Space Administration (NASA) dataset benchmarks using three clustering algorithms: (1) Farthest First, (2) X-Means, and (3) self-organizing map (SOM). To evaluate different feature selection algorithms, this article presents a comparative analysis of software defect prediction based on Bat, Cuckoo, Grey Wolf Optimizer (GWO), and particle swarm optimizer (PSO). The results obtained with the proposed clustering models enabled us to build an efficient predictive model with a satisfactory detection rate and an acceptable number of features.
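Of the three, Farthest First is the simplest to sketch: a farthest-first traversal that picks each new center as the point farthest from those already chosen. The two-metric module data below is invented:

```python
import math

def farthest_first(points, k):
    # Farthest-first traversal (a 2-approximation for the k-center problem).
    centers = [points[0]]                     # seed with the first point
    while len(centers) < k:
        # Add the point farthest from its nearest chosen center.
        centers.append(max(points,
                           key=lambda p: min(math.dist(p, c) for c in centers)))
    labels = [min(range(k), key=lambda i: math.dist(p, centers[i]))
              for p in points]
    return centers, labels

# Hypothetical per-module metrics (e.g. LOC, complexity), two obvious groups.
modules = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1),     # dense low-metric group
           (8.0, 9.0), (8.2, 9.1), (7.9, 8.8)]     # dense high-metric group
centers, labels = farthest_first(modules, 2)
```

In the defect prediction setting, the resulting clusters would then be labeled fault prone or not, e.g. by comparing cluster metric profiles against a threshold.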
Machine learning approaches are good at solving problems for which little information is available. In most cases, software domain problems can be characterized as learning processes that depend on varying circumstances and change accordingly. A predictive model is constructed using machine learning approaches to classify modules as defective or non-defective. Machine learning techniques help developers retrieve useful information after classification and enable them to analyse data from different perspectives, and they have proven useful for software bug prediction. This study used publicly available data sets of software modules and provides a comparative performance analysis of different machine learning techniques for software bug prediction. The results show that most of the machine learning methods performed well on software bug datasets.
A NOVEL APPROACH TO ERROR DETECTION AND CORRECTION OF C PROGRAMS USING MACHIN... (IJCI Journal)
Programmers have always struggled to identify errors while executing a program, be they syntactical or logical. This struggle has led to research on identifying syntactical and logical errors. This paper surveys research works that can be used to identify errors, and it also proposes a new model, based on machine learning and data mining, that detects logical and syntactical errors and either corrects them or provides suggestions. The proposed approach uses hashtags to identify each correct program uniquely; a logically incorrect program can then be compared against it in order to identify errors.
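One minimal reading of this idea is to hash each normalized source line of a known-correct program and flag where a submission's hashes diverge. The normalization (strip `//` comments, collapse whitespace) and the C snippets are assumptions for illustration, not the paper's actual scheme:

```python
import hashlib
import re

def normalize(line):
    # Rough normalization: drop // comments, collapse runs of whitespace.
    line = re.sub(r"//.*", "", line)
    return re.sub(r"\s+", " ", line).strip()

def line_hashes(source):
    return [hashlib.sha256(norm.encode()).hexdigest()
            for norm in map(normalize, source.splitlines()) if norm]

def diff_lines(reference, submission):
    # Indices (into the non-blank normalized lines) where hashes diverge.
    return [i for i, (a, b) in enumerate(zip(line_hashes(reference),
                                             line_hashes(submission)))
            if a != b]

correct = ("int main() {\n"
           "  int s = 0;\n"
           "  for (int i = 0; i < n; i++) s += i;\n"
           "  return s;\n"
           "}")
buggy = ("int main() {\n"
         "  int s = 0;\n"
         "  for (int i = 0; i <= n; i++) s += i;  // off-by-one\n"
         "  return s;\n"
         "}")
suspects = diff_lines(correct, buggy)
```

Because hashes only witness exact (normalized) equality, this localizes where a submission differs; deciding whether the difference is actually an error is the job of the learned model.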
The End User Requirement for Project Management Software Accuracy (IJECE, IAES)
This research explains the relationship between end-user requirements and the accuracy of PMS (Project Management Software). The research aims to analyze PMS accuracy and to measure the probability of the PMS achieving the end users' ±1% accuracy requirement. A bias statistical method, based on hypothesis testing, is used to assess the PMS accuracy. The results indicate that the PMS is still accurate enough to be implemented in projects in the Aceh, Indonesia area that use the SNI (the National Indonesia Standard, the current method), with an accuracy index of ±7.5%. The probability of reaching the end-user requirement is still low, at ±21.77%. The low achievement of the end-user requirement is caused not only by the low accuracy of the PMS but also by the amount of variability error, which is influenced by the variation in project activity. In this study, we confirm that it is necessary to reconcile PMS accuracy with the end-user requirements.
A Comparative Study of Software Requirement, Elicitation, Prioritization and ... (IJERA)
The failure of many software systems is mainly due to a lack of requirements engineering, as software requirements play a vital role in software engineering. The main tasks of requirements engineering are eliciting the requirements from the customer and prioritizing those requirements to support decisions in software design. Prioritization assigns a priority within the set of requirements, and it becomes especially important under strict schedule and resource constraints, when the software engineer must decide which requirements to neglect and which to prioritize for inclusion in the project so that it succeeds. This paper provides a framework for comparing various techniques and proposes the most competent method among them.
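One of the simplest techniques in this family is cost-value prioritization: rank requirements by their value-to-cost ratio. The scores below are hypothetical stakeholder estimates:

```python
def cost_value_rank(requirements):
    # Cost-value prioritization: highest value-to-cost ratio first.
    # A simple proxy for the more elaborate pairwise-comparison methods.
    return sorted(requirements, key=lambda r: r["value"] / r["cost"],
                  reverse=True)

# Hypothetical stakeholder scores (value and cost on a 1-10 scale).
reqs = [
    {"id": "R1", "value": 9, "cost": 3},
    {"id": "R2", "value": 4, "cost": 4},
    {"id": "R3", "value": 8, "cost": 2},
    {"id": "R4", "value": 5, "cost": 5},
]
ranked = [r["id"] for r in cost_value_rank(reqs)]
```

Under a tight budget, the engineer would implement requirements from the top of this ranking until the schedule or resource limit is reached.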
A Complexity Based Regression Test Selection Strategy (CSEIJ)
Software is unequivocally the foremost and indispensable entity in this technologically driven world; therefore quality assurance, and in particular software testing, is a crucial step in the software development cycle. This paper presents an effective test selection strategy that uses a Spectrum of Complexity Metrics (SCM). Our aim is to increase the efficiency of the testing process by significantly reducing the number of test cases without a significant drop in test effectiveness. The strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class, method, statement) and its characteristics. We use a series of experiments, based on three applications with a significant number of mutants, to demonstrate the effectiveness of our selection strategy. For further evaluation, we compare our approach to boundary value analysis. The results show the capability of our approach to detect mutants as well as the seeded errors.
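A minimal version of metric-driven selection keeps only the tests that touch at least one unit whose complexity reaches a threshold. The method names, cyclomatic-complexity scores, and threshold below are invented, and a single metric stands in for the paper's full spectrum:

```python
def select_tests(tests, complexity, threshold):
    # Keep tests touching at least one unit at or above the threshold;
    # the rest are deemed low-risk and skipped for this regression run.
    return [name for name, units in tests.items()
            if any(complexity.get(u, 0) >= threshold for u in units)]

# Hypothetical cyclomatic-complexity scores per method.
complexity = {"parse": 14, "render": 3, "validate": 9, "log": 1}
tests = {
    "test_parse_ok":   ["parse", "log"],
    "test_render":     ["render"],
    "test_validation": ["validate", "log"],
}
selected = select_tests(tests, complexity, threshold=9)
```

Tuning the threshold trades suite size against the risk of skipping a fault-revealing test, which is precisely the trade-off the paper's experiments measure.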
A Review on Software Fault Detection and Prevention Mechanism in Software Dev... (IOSR-JCE)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Software Cost Estimation Using Clustering and Ranking Scheme (IJMTER)
Software cost estimation is an important task in the software design and development process. Planning and budgeting are carried out with reference to the estimated cost values, and a variety of software properties feed into the estimation process: hardware, product, technology, and methodology factors. The quality of a cost estimate is measured with reference to its accuracy.
Software cost estimation is carried out using three types of techniques: regression-based models, analogy-based models, and machine learning models, each comprising a set of estimation techniques. Eleven cost estimation techniques under these 3 categories are used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product property values, and the ARFF file serves as the main input to the system.
The proposed system performs clustering and ranking of software cost estimation methods. The non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism, which improves the accuracy of the clustering and ranking process, and the system produces efficient rankings of software cost estimation methods.
Test Case Optimization and Redundancy Reduction Using GA and Neural Networks (IJECE, IAES)
More than 50% of the development effort in a typical software project is spent in the testing phase, and test case design as well as execution consume a lot of time; automated generation of test cases is therefore highly desirable. Here a novel methodology is presented for testing object-oriented software based on UML state chart diagrams: a function minimization technique is applied to generate test cases automatically from the state chart diagrams. Software testing forms an integral part of the software development life cycle. Since the objective of testing is to ensure an application's conformity to its specification, a test "oracle" is needed to determine whether a given test case exposes a fault. An automated oracle to support the activities of human testers can reduce the actual cost of the testing process and the related maintenance costs. In this paper, a new concept is presented that uses a UML state chart diagram and tables for test case generation, with an artificial neural network as an optimization tool for reducing the redundancy in the test cases generated by the genetic algorithm. The neural network is trained by the backpropagation algorithm on a set of test cases applied to the original version of the system.
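The redundancy-reduction step can be approximated without a neural network by a greedy set-cover filter over the state-chart transitions each GA-generated test covers. This is a stand-in for the paper's trained-network approach, and the test cases and transitions below are hypothetical:

```python
def reduce_redundancy(tests):
    # Greedy filter: drop any test whose covered transitions are already
    # covered by the tests kept so far (larger tests considered first).
    kept, covered = [], set()
    for name, transitions in sorted(tests.items(),
                                    key=lambda kv: len(kv[1]), reverse=True):
        if not set(transitions) <= covered:
            kept.append(name)
            covered |= set(transitions)
    return sorted(kept)

# Hypothetical transitions covered by GA-generated test cases.
tests = {
    "tc1": {"idle->run", "run->pause"},
    "tc2": {"idle->run"},                   # subset of tc1: redundant
    "tc3": {"run->stop"},
    "tc4": {"run->pause", "run->stop"},     # together with tc1, covers tc3
}
kept = reduce_redundancy(tests)
```

The filter preserves total transition coverage while discarding tests that add nothing, which is the same goal the paper pursues with its trained network.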
Development of software defect prediction system using artificial neural networkIJAAS Team
Ā
Software testing is an activity to enable a system is bug free during execution process. The software bug prediction is one of the most encouraging exercises of the testing phase of the software improvement life cycle. In any case, in this paper, a framework was created to anticipate the modules that deformity inclined in order to be utilized to all the more likely organize software quality affirmation exertion. Genetic Algorithm was used to extract relevant features from the acquired datasets to eliminate the possibility of overfitting and the relevant features were classified to defective or otherwise modules using the Artificial Neural Network. The system was executed in MATLAB (R2018a) Runtime environment utilizing a statistical toolkit and the performance of the system was assessed dependent on the accuracy, precision, recall, and the f-score to check the effectiveness of the system. In the finish of the led explores, the outcome indicated that ECLIPSE JDT CORE, ECLIPSE PDE UI, EQUINOX FRAMEWORK and LUCENE has the accuracy, precision, recall and the f-score of 86.93, 53.49, 79.31 and 63.89% respectively, 83.28, 31.91, 45.45 and 37.50% respectively, 83.43, 57.69, 45.45 and 50.84% respectively and 91.30, 33.33, 50.00 and 40.00% respectively. This paper presents an improved software predictive system for the software defect detections.
Software Testing: Issues and Challenges of Artificial Intelligence & Machine ...gerogepatton
Ā
The history of Artificial Intelligence and Machine Learning dates back to 1950ās. In recent years, there has been an increase in popularity for applications that implement AI and ML technology. As with traditional development, software testing is a critical component of an efficient AI/ML application. However, the approach to development methodology used in AI/ML varies significantly from traditional development. Owing to these variations, numerous software testing challenges occur. This paper aims to recognize and to explain some of the biggest challenges that software testers face in dealing with AI/ML applications. For future research, this study has key implications. Each of the challenges outlined in this paper is ideal for further investigation and has great potential to shed light on the way to more productive software testing strategies and methodologies that can be applied to AI/ML applications.
Software Testing: Issues and Challenges of Artificial Intelligence & Machine ...gerogepatton
Ā
The history of Artificial Intelligence and Machine Learning dates back to 1950ās. In recent years, there has been an increase in popularity for applications that implement AI and ML technology. As with traditional development, software testing is a critical component of an efficient AI/ML application. However, the approach to development methodology used in AI/ML varies significantly from traditional development. Owing to these variations, numerous software testing challenges occur. This paper aims to recognize and to explain some of the biggest challenges that software testers face in dealing with AI/ML applications. For
future research, this study has key implications. Each of the challenges outlined in this paper is ideal for further investigation and has great potential to shed light on the way to more productive software testing strategies and methodologies that can be applied to AI/ML applications.
SOFTWARE TESTING: ISSUES AND CHALLENGES OF ARTIFICIAL INTELLIGENCE & MACHINE ...ijaia
Ā
The history of Artificial Intelligence and Machine Learning dates back to 1950ās. In recent years, there has
been an increase in popularity for applications that implement AI and ML technology. As with traditional
development, software testing is a critical component of an efficient AI/ML application. However, the
approach to development methodology used in AI/ML varies significantly from traditional development.
Owing to these variations, numerous software testing challenges occur. This paper aims to recognize and
to explain some of the biggest challenges that software testers face in dealing with AI/ML applications. For
future research, this study has key implications. Each of the challenges outlined in this paper is ideal for
further investigation and has great potential to shed light on the way to more productive software testing
strategies and methodologies that can be applied to AI/ML applications.
Review on Algorithmic and Non Algorithmic Software Cost Estimation Techniquesijtsrd
Ā
Effective software cost estimation is the most challenging and important activities in software development. Developers want a simple and accurate method of efforts estimation. Estimation of the cost before starting of work is a prediction and prediction always not accurate. Software effort estimation is a very critical task in the software engineering and to control quality and efficiency a suitable estimation technique is crucial. This paper gives a review of various available software effort estimation methods, mainly focus on the algorithmic model and non algorithmic model. These existing methods for software cost estimation are illustrated and their aspect will be discussed. No single technique is best for all situations, and thus a careful comparison of the results of several approaches is most likely to produce realistic estimation. This paper provides a detailed overview of existing software cost estimation models and techniques. This paper presents the strength and weakness of various cost estimation methods. This paper focuses on some of the relevant reasons that cause inaccurate estimation. Pa Pa Win | War War Myint | Hlaing Phyu Phyu Mon | Seint Wint Thu "Review on Algorithmic and Non-Algorithmic Software Cost Estimation Techniques" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5 , August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26511.pdfPaper URL: https://www.ijtsrd.com/engineering/-/26511/review-on-algorithmic-and-non-algorithmic-software-cost-estimation-techniques/pa-pa-win
Insights on Research Techniques towards Cost Estimation in Software Design IJECEIAES
Ā
Software cost estimation is of the most challenging task in project management in order to ensuring smoother development operation and target achievement. There has been evolution of various standards tools and techniques for cost estimation practiced in the industry at present times. However, it was never investigated about the overall picturization of effectiveness of such techniques till date. This paper initiates its contribution by presenting taxonomies of conventional cost-estimation techniques and then investigates the research trends towards frequently addressed problems in it. The paper also reviews the existing techniques in well-structured manner in order to highlight the problems addressed, techniques used, advantages associated and limitation explored from literatures. Finally, we also brief the explored open research issues as an added contribution to this manuscript.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
Ā
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
Ā
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
Ā
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Ā
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Ā
Clients donāt know what they donāt know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clientsā needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Ā
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as āpredictable inferenceā.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Ā
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
Ā
As AI technology is pushing into IT I was wondering myself, as an āinfrastructure container kubernetes guyā, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefitās both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
Ā
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 17, Issue 6, Ver. IV (Nov. - Dec. 2015), PP 20-26
www.iosrjournals.org
DOI: 10.9790/0661-17642026 www.iosrjournals.org 20 | Page
Generation of Search Based Test Data on Acceptability Testing Principle
Dr. I. Surya Prabha, Dr. Chinthagunta Mukundha
Professor, Institute of Aeronautical Engineering, Dundugal, Hyd-500043, India.
Associate Professor, Sreenidhi Institute of Science and Technology, Ghatakesar, Hyd-501301, India
Abstract: The main objective of this paper is to present the basic concepts of automated search based test data generation. The use of metaheuristic search techniques for the automatic generation of search based test data has been a burgeoning interest for many researchers in recent years. Metaheuristic search techniques are well suited to these problems: they are high-level tools which utilize heuristics to seek solutions to combinatorial problems at a considerable computational cost. Metaheuristic search techniques have been applied to automate search based test data generation for structural and functional testing. Evolutionary testing designates the use of metaheuristic search methods for test case generation.
The search space is the input domain of the test object, with each individual, or potential solution, being an encoded set of inputs to that test object. The fitness function is tailored to find search based test data for the type of testing that is being considered. Evolutionary Testing (ET) uses optimizing search techniques such as evolutionary algorithms to generate search based test data. The efficiency of the GA-based testing system is compared with that of a random testing system. For simple programs both testing systems work fine, but as the complexity of the program or the complexity of the input domain grows, the GA-based testing system significantly outperforms random testing. The results suggest that our acceptability based algorithm is better than the reliability based path testing and condition testing techniques in both of these categories. Thus this algorithm may significantly reduce the time of search based test data generation.
Keywords: Automated, Metaheuristic, Framework, Potential, Random.
I. Introduction
The use of metaheuristic search techniques for the automatic generation of search based test data has attracted burgeoning interest from researchers in recent years. In industry, search based test data selection is generally a manual process, the responsibility for which usually falls on the tester.
The bugs in software can cause major losses to an IT organization if they are not removed before delivery. Software testing is an important part of developing software that is free from bugs and defects. Software testing is performed to support quality assurance. Good quality software can be made by using an efficient test method. Statistics say that 50% of the total cost of software development is devoted to software testing, and the proportion is even higher for critical software. Depending on timing, scale and method, we can classify testing as unit testing, integration testing, system testing, alpha testing, beta testing, acceptance testing, regression testing, mutation testing, performance testing, stress testing, etc.
Finding a set of search based test data to achieve identified coverage criteria is typically a labour-intensive activity consuming a good part of the resources of the software development process. Automation of this process can greatly reduce the cost of testing and hence the overall cost of the system. Many automated search based test data generation techniques have been proposed by researchers. We can broadly classify these techniques into three categories: random, static and dynamic. Random approaches generate test input vectors with elements randomly chosen from appropriate domains. Input vectors are generated until some identified criterion has been satisfied. Random testing may be an effective means of gaining an adequate test set, but may simply fail to generate appropriate data in any reasonable time frame for more complex software.
Recently Search Based Software Engineering has evolved as a major research field in the software
engineering community. Search Based Software Engineering has been applied successfully to many software
engineering activities ranging from requirements engineering to software maintenance and quality assessment.
One major area where Search Based Software Engineering has seen intense activity is software testing. Active
research is underway to improve the existing search based test data generation techniques and propose novel
approaches to solve the test generation problem. However, despite much research, there are still limitations that
have hampered the wide acceptance of these techniques. Also many areas are under-explored, and there are
distinct possibilities for the successful use of search based approaches.
We present a search based test data generation algorithm that generates test data using adequacy based testing criteria and genetic algorithms. In this paper, we mainly focus on providing the algorithm in a
formalized manner and on evaluating the algorithm by comparing it with other search based test data generation
techniques. The main aim is to prove the effectiveness of our proposed algorithm based on adequacy based
testing criteria. Our algorithm applies mutation analysis to generate an adequate search based test data set.
Search Based Software Engineering research has attracted much attention in recent years as part of a general interest in search based approaches to software engineering. The growing interest in search based software testing can be attributed to the need for automatic generation of test data, since it is well known that exhaustive testing is infeasible and that software test data generation is considered an NP-hard problem.
II. Literature Review
Software has become an intrinsic part of human life and it is important that it should perform its
intended function. Otherwise it can cause frustration, loss of resources and even loss of life. The main activity
that attempts to prevent this and verify software quality and reliability is software testing. Testing is a dynamic
activity, as it requires execution of program on some finite set of input data. Nevertheless there are other
methods such as static analysis and formal proof of correctness. However, only testing can be used to gain
confidence in the correct functioning of the software in its intended environment. We cannot perform exhaustive testing because the domain of program inputs is usually too large and there are too many possible input paths. Therefore, the software is tested against suitably selected test cases.
Evolutionary testing makes use of meta-heuristic search techniques for test case generation. Evolutionary Testing is a sub-field of Search Based Testing in which Evolutionary Algorithms are used to guide the search. The test aim is transformed into an optimization problem: the input domain of the test object forms the search space, which is searched for test data that fulfils the respective test aim. A numeric representation of the test aim is necessary for this search, and it is used to define objective functions suitable for evaluating the generated search based test data. Depending on the test aim pursued, different heuristic functions emerge for test data evaluation. Due to the non-linearity of software, the conversion of test aims into optimization problems mostly leads to complex, discontinuous and non-linear search spaces. Therefore neighborhood search methods are not recommended. Instead, meta-heuristic search methods are employed, e.g. evolutionary algorithms, simulated annealing or tabu search. Evolutionary Algorithms have proved to be powerful optimization methods for the successful solution of software testing problems.
The main activities in software testing are test case generation, executing the program using these generated test cases, and evaluating the results. A test case is a set of test input data and the expected results. The test data is a set of input values to the program, which may be generated from the code or, more usually, derived from program specifications. Program specifications also help in determining the expected results.
Meta-heuristic techniques have also been applied to testing problems in a field known as Search Based Software Testing, a sub-area of Search Based Software Engineering. Evolutionary algorithms are among the most popular meta-heuristic search algorithms and are widely used to solve a variety of problems.
The local search techniques generally used are:
i. Hill Climbing
ii. Simulated Annealing
iii. Tabu Search
Hill Climbing
In hill climbing, the search proceeds from a randomly chosen point by considering the neighbors of that point. Once a neighbor is found to be fitter, it becomes the current point in the search space and the process is repeated. If there is no fitter neighbor, the search terminates and a (possibly local) maximum has been found. HC is a simple technique which is easy to implement, and it has proved robust in software engineering applications such as modularization and cost estimation.
Simulated Annealing
Simulated annealing is a local search method which, unlike hill climbing, can escape local optima by occasionally accepting moves to worse solutions. In simulated annealing a candidate value x1 is chosen for the solution x, and the candidate that minimizes the cost function E is preferred. Cost functions define the relative desirability of particular solutions. An objective function that is being minimized is usually referred to as a cost function, whereas one that is being maximized is usually referred to as a fitness function.
Tabu Search
Tabu search is a metaheuristic algorithm that can be used for solving combinatorial optimization
problems, such as the travelling salesman problem. Tabu search uses a local or neighbourhood search procedure
to iteratively move from a solution x to a solution x' in the neighbourhood of x, until some stopping criterion has
been satisfied. To explore regions of the search space that would be left unexplored by the local search
procedure, tabu search modifies the neighbourhood structure of each solution as the search progresses.
Evolutionary Search Using Genetic Algorithms
Genetic Algorithms form a method of adaptive search in the sense that they modify the data in order to optimize a fitness function. A search space is defined, and the Genetic Algorithm probes it for the global optimum. A Genetic Algorithm starts with guesses and attempts to improve the guesses by evolution. A Genetic Algorithm will typically have five parts: (1) a representation of a guess, called a chromosome; (2) an initial pool of chromosomes; (3) a fitness function; (4) a selection function; and (5) a crossover operator and a mutation operator. A chromosome can be a binary string or a more elaborate data structure. The initial pool of chromosomes can be randomly produced or manually created. The fitness function measures the suitability of a chromosome to meet a specified objective: for coverage based ATG, a chromosome is fitter if it corresponds to greater coverage. The selection function decides which chromosomes will participate in the evolution stage of the genetic algorithm, made up of the crossover and mutation operators. The crossover operator exchanges genes from two chromosomes and creates two new chromosomes. The mutation operator changes a gene in a chromosome and creates one new chromosome.
III. Generation of Search Based Test Data
Genetic programming results in a program which gives the solution of a particular problem. The fitness function is defined in terms of how close the program comes to solving the problem. The operators for mutation and mating are defined in terms of the program's abstract syntax tree. Because these operators are applied to trees rather than sequences, their definitions are typically less straightforward than those used in Genetic Algorithms. GP can be used to find fits to software engineering data, such as project estimation data.
In order to apply metaheuristics to software engineering problems the following steps should therefore
be considered:
i. Ask: is this a suitable problem? That is, "is the search space sufficiently large to make exhaustive search impractical?"
ii. Define a representation for the possible solutions.
iii. Define the fitness function.
iv. Select an appropriate metaheuristic technique for the problem.
v. Start with a simple local search and then consider other evolutionary approaches.
The testing requirements satisfied by the generated test data are measured as coverage, in terms of statements, conditions, paths, branches, decisions, etc.
Statement coverage
Statement coverage measures the number of executable statements in the code that are executed by a
test suite. 100% statement coverage is achieved when every statement in the code is executed.
Decision coverage
Decision coverage, also known as branch coverage, measures the extent to which all outcomes of
branch statements are covered by test cases. To achieve decision coverage, two test data I1 and I2 need to be
generated corresponding to each decision di in the program such that di evaluates to true when the code is
executed with input I1 and evaluates to false when code is executed with input I2. For example, to cover the
decision at line 70 in Fig. 1, we require two test data such that the "if" condition evaluates to true in one case and false in the other.
10: int inp1, inp2; // inputs given
20: int test() // function under test
30: {
40: int lVar = 0, retVal = 0;
50: if (inp1 > 15)
60: lVar = 1;
70: if (lVar && inp2)
80: retVal = 1;
90: return retVal;
100: }
Fig. 1: Sample C code
Condition coverage
Condition coverage is similar to decision coverage with the only difference being that for condition
coverage, two test data I1 and I2 are needed for each condition in a decision.
3.1 Automated test data generation (ATDG)
Most of the work on Software Testing has concerned the problem of generating inputs that provide a
test suite that meets a test adequacy criterion. The schematic representation is presented in Fig. 2. Often this problem of generating search based test inputs is called "Automated Test Data Generation (ATDG)", though, strictly speaking, without an oracle only the input is generated. Fig. 2 illustrates the generic form of the most common approach in the literature, in which search based test inputs are generated according to a test adequacy criterion. The test adequacy criterion is the human input to the process; it determines the goal of testing.
The adequacy criterion can be almost any form of testing goal that can be defined and assessed numerically. For instance, it can be structural, functional, temporal, etc. This generic nature of Search-Based Testing has been a considerable advantage, and it is one of the reasons why many authors have been able to adapt the Search-Based Testing approach to different problem formulations.
Figure 2: A generic search-based test input generation scheme
3.2 Evolutionary Algorithms
Evolutionary Algorithms use simulated evolution as a search strategy to evolve candidate solutions,
using operators inspired by genetics and natural selection. For Genetic Algorithms, the search is primarily
driven by the use of recombination - a mechanism of exchange of information between solutions to "breed" new ones - whereas Evolution Strategies principally use mutation - a process of randomly modifying solutions.
Select a starting solution s ∈ S
Select an initial temperature t > 0
Repeat
    it ← 0
    Repeat
        Select s′ ∈ N(s) at random
        ∆e ← obj(s′) − obj(s)
        If ∆e < 0
            s ← s′
        Else
            Generate random number r, 0 ≤ r < 1
            If r < e^(−∆e/t) then s ← s′
        End If
        it ← it + 1
    Until it = num_solns
    Decrease t according to cooling schedule
Until stopping condition reached
Fig. 3: Simulated annealing search algorithm
IV. Pragmatic Data Collection
Evaluating the performance of any technique requires selecting certain subject programs, which form the basis for evaluation. To evaluate the performance of our proposed algorithm and to compare it with other techniques, we have selected fifty real-world programs written in the C language. The subject programs we have chosen are described in Table 1. The programs range from 35 to 350 lines of source code.
We have selected a large program base that Contains programs ranging from very basic such as
computing the grade of student, finding the biggest of three numbers to very complex such as implementing the
binary search tree and finding the intersection of two linked lists. We have chosen a diversified range of
programs including mathematical problems such as finding roots of quadratic equation, triangle classification
problem, computing the median of the triangle; general
logical problems such as checking for the Armstrong number, magic number, palindrome number; business
problem such as payroll system, commission problem, credit risk analysis; data structures such as linked list,
sorting (insertion sort, selection sort, bubble sort, merge sort, heap sort, quick sort, shell sort), searching (linear
search, binary search) etc. All the programs are written in standard C language that makes it easier to work with
these programs.
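One of the simpler subject-program styles listed above, the classic triangle classification problem, can be written in standard C as follows. This is an illustrative version, not the exact program from our benchmark:

```c
/* Classify a triangle from its three integer side lengths. */
typedef enum { NOT_A_TRIANGLE, SCALENE, ISOSCELES, EQUILATERAL } Tri;

Tri classify(int a, int b, int c) {
    /* Sides must be positive and satisfy the triangle inequality. */
    if (a <= 0 || b <= 0 || c <= 0) return NOT_A_TRIANGLE;
    if (a + b <= c || b + c <= a || a + c <= b) return NOT_A_TRIANGLE;
    if (a == b && b == c) return EQUILATERAL;
    if (a == b || b == c || a == c) return ISOSCELES;
    return SCALENE;
}
```

Programs of this shape are attractive benchmarks because branches such as `a == b && b == c` are hard to cover by random inputs but yield a clear numeric distance for a search to minimize.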
Authors Profile
Dr. I. Surya Prabha, Professor, Department of Information Technology, Institute of
Aeronautical Engineering, Hyderabad-500043, AP, India.
E-mail: ipsurya17@gmail.com
Dr. Chinthagunta Mukundha, Associate Professor, Department of Information Technology,
Sreenidhi Institute of Science and Technology, Hyderabad-501301, AP, India.
E-mail: mukundhach@gmail.com