This document presents a multi-objective hyper-heuristic evolutionary algorithm (MHypEA) for scheduling and inspection planning in software development projects. MHypEA incorporates twelve low-level heuristics based on the selection, crossover, and mutation operations of evolutionary algorithms, and selects among them using reinforcement learning with adaptive weights. An experiment on randomly generated test problems found that MHypEA explores and exploits the search space thoroughly to find high-quality solutions, achieving better results than other multi-objective evolutionary algorithms in half the time.
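As a rough illustration of the selection mechanism described above, the sketch below shows reinforcement-learning-style heuristic selection with adaptive weights; the heuristic names, reward values, and update rule are illustrative assumptions, not MHypEA's actual parameters.

```python
import random

# Hypothetical illustration of reinforcement-learning-based heuristic
# selection with adaptive weights; names and constants are assumptions.

heuristics = [f"llh_{i}" for i in range(12)]   # twelve low-level heuristics
weights = {h: 1.0 for h in heuristics}         # start with equal weights

def select_heuristic():
    """Roulette-wheel selection proportional to the current weights."""
    total = sum(weights.values())
    pick = random.uniform(0, total)
    acc = 0.0
    for h, w in weights.items():
        acc += w
        if pick <= acc:
            return h
    return heuristics[-1]

def update_weight(h, improved, reward=1.0, penalty=0.5, floor=0.1):
    """Reinforce heuristics that improved the solution, penalize the rest."""
    weights[h] = max(floor, weights[h] + reward if improved else weights[h] - penalty)

# One illustrative step: apply the chosen heuristic, then adapt its weight.
h = select_heuristic()
improved = random.random() < 0.5   # stand-in for an actual fitness comparison
update_weight(h, improved)
```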
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM... (cscpconf)
The ability to compute software complexity in the requirement analysis phase of the software development life cycle (SDLC) would be an enormous benefit for estimating the development and testing effort required for yet-to-be-developed software. A relationship between source code and the difficulty of developing it is also examined in order to estimate the complexity of the proposed software for cost estimation, manpower build-up, and code and developer evaluation. This paper therefore presents a systematic and integrated approach for estimating software development and testing effort on the basis of the improved requirement based complexity (IRBC) of the proposed software, obtained from its software requirement specification (SRS). The IRBC measure serves as the basis for estimating these software development activities, enabling developers and practitioners to predict critical information about software development intricacies. For validation, the proposed measures are compared with various established and prevalent practices proposed in the past. The results validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, and early-alarming, and compare well with other measures proposed in the past.
A Model To Compare The Degree Of Refactoring Opportunities Of Three Projects ... (acijjournal)
Refactoring is applied to software artifacts to improve their internal structure while preserving their external behavior. Refactoring is an uncertain process, and it is difficult to give it units of measurement; the amount of refactoring that can be applied to source code depends on the skills of the developer. In this research, we treat refactoring as a quantity on an ordinal scale of measurement and propose a model for determining the degree of refactoring opportunities in given source code. The model is applied to three projects collected from a company. UML diagrams are drawn for each project, and the values of source-code metrics that are useful in determining code quality are calculated for each UML diagram. Based on the nominal values of the metrics, each relevant UML diagram is placed on an ordinal scale. The machine learning tool Weka is used to analyze the dataset produced by the three projects, imported as an ARFF file.
Application of Genetic Algorithm in Software Engineering: A Review (IRJESJOURNAL)
Abstract. Software engineering is a comparatively new and constantly changing field. The challenge of meeting strict project schedules with high-quality software requires that the field be automated to a large extent and that human intervention be minimized to an optimum level. To achieve this goal, researchers have explored the potential of machine learning approaches, which are adaptable and capable of learning. In this paper, we look at how genetic algorithms (GAs) can be used to build tools for software development and maintenance tasks.
COMPARATIVE STUDY OF SOFTWARE ESTIMATION TECHNIQUES (ijseajournal)
Many information technology firms, among other organizations, have been working on how to estimate resources such as funding during software development. Software development life cycles involve many activities and skills, and to avoid risk the best-suited estimation technique should be employed. This research therefore conducts a comparative study that considers the accuracy, usage, and suitability of existing methods, intended to help project managers and consultants throughout the software project development process. Both algorithmic and non-algorithmic techniques are covered: model-based, composite, and regression techniques underlie COCOMO, COCOMO II, SLIM, and linear multiple regression respectively, while expertise-based and rule-based approaches are applied in non-algorithmic methods. These techniques still need refinement to reduce the errors experienced during software development. Accordingly, this paper proposes a model of software estimation techniques that can help information technology firms, researchers, and other organizations that use information technology in processes such as budgeting and decision-making.
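For concreteness, basic COCOMO, one of the algorithmic models compared here, estimates effort as a power function of size. The sketch below uses the standard published basic-COCOMO coefficients; the 32 KLOC input is only an example.

```python
# Basic COCOMO: Effort (person-months) = a * KLOC^b,
# development time (months) = c * Effort^d.
# Coefficients are the standard basic-COCOMO values per mode.

MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b      # person-months
    time = c * effort ** d      # months
    return effort, time

effort, months = basic_cocomo(32, "organic")
print(f"32 KLOC organic project: {effort:.1f} person-months, {months:.1f} months")
```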
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity, cost, and quality, it is unclear what we have learned beyond the scope of individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 data sets drawn from 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes; the Chidamber & Kemerer metric suite is the most frequently used, but not all of its metrics are good indicators of quality attributes. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The results have implications for building quality models across project types.
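The abstract does not spell out its "tailored" aggregation procedure, but a common way to pool correlation coefficients across data sets is Fisher's z-transform with sample-size weights. The sketch below works under that assumption, with invented (r, n) pairs.

```python
import math

# Pool correlations via Fisher's z (a common meta-analysis technique,
# not necessarily the authors' exact "tailored" procedure): transform
# each r, average with weights n - 3, then back-transform.

def pool_correlations(rs_and_ns):
    """rs_and_ns: iterable of (r, n) pairs, one per data set."""
    num, den = 0.0, 0.0
    for r, n in rs_and_ns:
        z = math.atanh(r)          # Fisher z-transform
        w = n - 3                  # inverse-variance weight for z
        num += w * z
        den += w
    return math.tanh(num / den)    # back-transform to a pooled r

# Illustrative per-data-set correlations and sample sizes (invented).
print(pool_correlations([(0.45, 120), (0.30, 80), (0.55, 200)]))
```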
EARLY STAGE SOFTWARE DEVELOPMENT EFFORT ESTIMATIONS – MAMDANI FIS VS NEURAL N... (cscpconf)
Accurately estimating software size, cost, effort, and schedule is probably the biggest challenge facing software developers today. It has major implications for the management of software development, because both overestimates and underestimates can directly damage software companies. Many models have been proposed over the years for effort estimation, and studies of early-stage estimation underline its importance. New paradigms offer alternatives for estimating software development effort, in particular Computational Intelligence (CI), which exploits mechanisms of interaction between humans and processes domain knowledge with the intention of building intelligent systems (IS). Among IS techniques, artificial neural networks and fuzzy logic are the two most popular soft computing approaches to software development effort estimation. In this paper, neural network models and a Mamdani FIS model are used to predict early-stage effort on a student dataset. Mamdani FIS was found to predict early-stage effort more accurately than the neural network models.
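A minimal single-input Mamdani FIS sketch may help readers unfamiliar with the technique; the rule base, membership functions, and universes below are invented for illustration and are not the paper's actual model.

```python
# Toy Mamdani FIS: triangular membership functions, min implication,
# max aggregation, centroid defuzzification, mapping "size" to "effort".

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_effort(size, steps=200):
    # Rule 1: IF size is small THEN effort is low
    # Rule 2: IF size is large THEN effort is high
    fire_small = tri(size, 0, 20, 70)     # "small" premise
    fire_large = tri(size, 30, 80, 130)   # "large" premise

    num = den = 0.0
    for i in range(steps + 1):
        e = 100.0 * i / steps             # effort universe [0, 100]
        mu = max(min(fire_small, tri(e, 0, 20, 50)),    # clipped "low"
                 min(fire_large, tri(e, 50, 80, 100)))  # clipped "high"
        num += e * mu
        den += mu
    return num / den if den else 0.0

print(mamdani_effort(60.0))   # crisp effort estimate for a size-60 input
```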
The effectiveness of test-driven development approach on software projects: A... (journalBEEI)
Over recent years, software teams and companies have tried to achieve higher productivity and efficiency, and greater success in a competitive market, by employing proper software methods and practices. Test-driven development (TDD) is one such practice, and the literature suggests it can improve the software development process; however, existing empirical studies of TDD report differing conclusions about its effects on quality and productivity. The present study briefly reports the results of a comparative multiple-case study of two software development projects in which the effect of TDD was examined within an industrial environment. Method: we conducted an experiment in an industrial setting with 18 professionals, measuring TDD effectiveness in terms of team productivity and code quality, and also recording a mood metric and cyclomatic complexity to compare our results with the literature. We found that test cases written for a TDD task have higher defect detection ability than test cases written for an incremental non-TDD development task; additionally, discovering and fixing bugs became easier. The results show that TDD developers produce higher-quality code and are more productive than non-TDD developers.
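For readers unfamiliar with the practice being evaluated, here is a minimal test-first illustration of the TDD cycle; the discount() function and its tests are hypothetical, not taken from the study.

```python
import unittest

# TDD in miniature: the tests below are written before discount() exists,
# fail first ("red"), and the minimal implementation makes them pass ("green").

def discount(price, percent):
    """Minimal implementation written only after the tests existed."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be within 0..100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```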
Research Activities: past, present, and future. (Marco Torchiano)
Public seminar for Professor Position at Politecnico di Torino
- past research products
- current research activities
- future outlook
October 19, 2018
A NEW HYBRID FOR SOFTWARE COST ESTIMATION USING PARTICLE SWARM OPTIMIZATION A... (ieijjournal)
Software Cost Estimation (SCE) is one of the most important areas of software engineering, with a strong influence on the management of cost and effort. The two factors of cost and effort determine the success or failure of software projects: a project completed within a given time and workforce is a successful one and yields good returns for project managers. Most SCE techniques use algorithmic models such as COCOMO. The COCOMO model cannot closely approximate actual cost because it is linear in form, so models are needed that, alongside the number of lines of code (LOC), can estimate effort factors fairly and accurately. Metaheuristic algorithms are good candidates for SCE because of their local and global search abilities. In this paper, we use a hybrid of Particle Swarm Optimization (PSO) and Differential Evolution (DE) for SCE. Test results on the NASA60 software dataset show that the hybrid model reduces the Mean Magnitude of Relative Error (MMRE) to about 9.55% in comparison with the COCOMO model.
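The evaluation metric here is standard: MMRE averages the magnitudes of the relative errors over all projects. A minimal sketch, with illustrative effort values rather than the NASA60 data:

```python
# Mean Magnitude of Relative Error:
#   MMRE = (1/n) * sum(|actual_i - estimated_i| / actual_i)

def mmre(actual, estimated):
    return sum(abs(a - e) / a for a, e in zip(actual, estimated)) / len(actual)

# Illustrative effort values (person-months); not the NASA60 data.
print(mmre([24.0, 62.0, 11.0], [20.5, 70.0, 12.3]))
```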
A DECISION SUPPORT SYSTEM FOR ESTIMATING COST OF SOFTWARE PROJECTS USING A HY... (ijfcstjournal)
One of the major challenges in software today is cost estimation: estimating the cost of all activities, including software development, design, supervision, maintenance, and so on. Accurate cost estimation of software projects optimizes internal and external processes and lets staff work, effort, and overheads be coordinated with one another. In managing software projects, estimation must be taken into account in order to reduce costs, delays, and risks and to avoid project failure. In this paper, a decision support system combining a multi-layer artificial neural network with a decision tree is proposed to estimate the cost of software projects. In the model included in the proposed system, the normalization of factors, which is vital in evaluating effort and cost estimates, is carried out using a C4.5 decision tree, while training and testing of the factors are performed by a multi-layer artificial neural network that assigns them the most optimal values. Experimental results and evaluations on the NASA60 dataset show that the proposed system has a lower total average relative error than the COCOMO model.
A Novel Optimization towards Higher Reliability in Predictive Modelling towar... (IJECEIAES)
Although the area of software engineering has made remarkable progress in the last decade, there has been less attention to the concept of code reusability. Code reusability is a subset of software reusability, which is one of the signature topics in software engineering. We review existing systems and find that no standard research approach toward code reusability has been introduced in the last decade. Hence, this paper introduces a predictive framework for optimizing the performance of code reusability. For this purpose, we introduce a case study posing a near real-time challenge and involve it in our modelling. We apply a neural network and the damped least-squares algorithm to perform optimization, with the sole target of computing and ensuring the highest possible reliability. The study outcome of our model exhibits higher reliability and better computational response time.
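The "damped least-squares algorithm" named above is commonly known as Levenberg-Marquardt. The sketch below shows one damped-least-squares iteration scheme on a toy line-fitting problem, not the paper's reusability model.

```python
import numpy as np

# Minimal damped least-squares (Levenberg-Marquardt) loop; the model
# and data are illustrative, not the paper's case study.

def levenberg_marquardt(residual, jacobian, p0, lam=1e-2, iters=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        # Damped normal equations: (J^T J + lam * I) dp = -J^T r
        A = J.T @ J + lam * np.eye(p.size)
        dp = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(p + dp) ** 2) < np.sum(r ** 2):
            p, lam = p + dp, lam * 0.5   # accept step, trust model more
        else:
            lam *= 2.0                   # reject step, increase damping
    return p

# Fit y = a*x + b to noisy points.
x = np.linspace(0, 1, 20)
y = 3.0 * x + 1.0 + 0.05 * np.random.randn(20)
res = lambda p: p[0] * x + p[1] - y
jac = lambda p: np.column_stack([x, np.ones_like(x)])
print(levenberg_marquardt(res, jac, [0.0, 0.0]))
```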
TOWARDS PREDICTING SOFTWARE DEFECTS WITH CLUSTERING TECHNIQUESijaia
The purpose of software defect prediction is to improve the quality of a software project by building a
predictive model to decide whether a software module is or is not fault prone. In recent years, much
research in using machine learning techniques in this topic has been performed. Our aim was to evaluate
the performance of clustering techniques with feature selection schemes to address the problem of software
defect prediction problem. We analysed the National Aeronautics and Space Administration (NASA)
dataset benchmarks using three clustering algorithms: (1) Farthest First, (2) X-Means, and (3) selforganizing map (SOM). In order to evaluate different feature selection algorithms, this article presents a
comparative analysis involving software defects prediction based on Bat, Cuckoo, Grey Wolf Optimizer
(GWO), and particle swarm optimizer (PSO). The results obtained with the proposed clustering models
enabled us to build an efficient predictive model with a satisfactory detection rate and acceptable number
of features.
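As a rough sketch of the clustering-then-labeling idea, using scikit-learn's k-means as a stand-in for the Farthest First, X-Means, and SOM algorithms the paper actually uses, with synthetic module metrics and an assumed labeling heuristic:

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster module metrics, then label whole clusters as fault prone when
# their centers' mean metric value exceeds a threshold. Data, algorithm
# choice and threshold are illustrative assumptions.

rng = np.random.default_rng(0)
metrics = rng.random((100, 4))        # e.g. LOC, complexity, coupling, churn

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(metrics)
centers = km.cluster_centers_

# Heuristic: a cluster is "fault prone" if its center's mean (normalized)
# metric value exceeds the overall mean.
fault_prone = centers.mean(axis=1) > metrics.mean()
predictions = fault_prone[km.labels_]  # per-module boolean prediction
print(predictions[:10])
```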
Insights on Research Techniques towards Cost Estimation in Software Design (IJECEIAES)
Software cost estimation is one of the most challenging tasks in project management, needed to ensure smooth development operations and target achievement. Various standard tools and techniques for cost estimation are practiced in industry at present; however, the overall effectiveness of these techniques has not previously been investigated. This paper begins by presenting taxonomies of conventional cost estimation techniques and then investigates research trends around the problems they most frequently address. The paper also reviews the existing techniques in a well-structured manner, highlighting the problems addressed, the techniques used, the advantages associated with them, and the limitations explored in the literature. Finally, we brief the open research issues uncovered, as an added contribution of this manuscript.
In the present paper, the applicability and capability of AI techniques for effort estimation prediction are investigated. Neuro-fuzzy models are found to be very robust, characterized by fast computation and capable of handling distorted data; given the non-linearity of the data, they are an efficient quantitative tool for effort estimation. A one-hidden-layer network, named OHLANFIS, was developed in the MATLAB simulation environment. The initial parameters of OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership functions are optimally determined using the hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed with OHLANFIS outperforms the standard ANFIS model.
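The Gaussian membership function whose center and width OHLANFIS tunes is simple to state; the parameter values below are illustrative assumptions, not the paper's.

```python
import math

# Gaussian membership function with center c and width sigma:
#   mu(x) = exp(-(x - c)^2 / (2 * sigma^2))

def gaussian_mf(x, c, sigma):
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Illustrative: membership of a size-45 input in a fuzzy set centered
# at 50 with width 10 (assumed values).
print(gaussian_mf(45.0, c=50.0, sigma=10.0))
```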
Software Defect Trend Forecasting In Open Source Projects using A Univariate ... (CSCJournals)
Our objective in this research is to provide a framework that allows project managers, business owners, and developers to forecast the trend in software defects within a software project in real time. With a mechanism for forecasting defects, these stakeholders can provide the necessary resources at the right time to remove defects before they accumulate and ultimately lead to software failure. We show not only general trends in several open-source projects but also trends in daily, monthly, and yearly activity. Our research shows that this forecasting method can be used up to six months out with an MSE of only 0.019. In this paper, we present our technique and methodology for developing the inputs to the proposed model and the results of testing on seven open source projects, and we discuss the prediction models, their performance, and the implementation using the FBProphet framework and the ARIMA model.
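A hedged sketch of a univariate defect-trend forecast in the spirit of the paper's ARIMA model, using statsmodels with an invented monthly defect series and an assumed (p, d, q) order:

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Fit an ARIMA model to a monthly defect-count series and forecast six
# months out; the series and order are illustrative, not the paper's.

defects = pd.Series(
    [30, 28, 35, 40, 38, 45, 50, 47, 55, 60, 58, 65],
    index=pd.date_range("2020-01-01", periods=12, freq="MS"),
)

model = ARIMA(defects, order=(1, 1, 1)).fit()
print(model.forecast(steps=6))   # forecast the next six months
```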
A new model for software cost estimation (ijfcstjournal)
Accurate and realistic estimation has always been a great challenge in the software industry. Software Cost Estimation (SCE) is a standard practice in managing software projects: the estimate determined in the initial stages of a project drives the planning of its other activities. In practice, estimation faces a number of uncertainties and barriers, and assessing previous projects is essential to overcoming them. Several models have been developed for the analysis of software projects. The classical reference method is the COCOMO model; other methods, such as Function Points (FP) and Lines of Code (LOC), are also applied, and experts' opinions matter in this regard. In recent years, the growth of meta-heuristic algorithms and their combination with high-accuracy methods have brought great achievements to software engineering. Meta-heuristic algorithms, which can analyze data in multiple dimensions and identify the optimum solution among them, are analytical tools for data analysis. In this paper, we use the Harmony Search (HS) algorithm for SCE. The proposed model has been assessed on a collection of 60 standard projects from the NASA60 dataset. The experimental results show that the HS algorithm is a good way to determine the weights of the similarity measures for software effort and to reduce the MRE.
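For readers unfamiliar with Harmony Search, the sketch below is a minimal HS loop on a toy objective; the memory size, HMCR, PAR, and bandwidth values are common textbook defaults, not the paper's settings.

```python
import random

# Minimal Harmony Search on a toy objective (sphere function).

def harmony_search(obj, dim=3, lo=-5.0, hi=5.0,
                   hms=10, hmcr=0.9, par=0.3, bw=0.2, iters=2000):
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:             # memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:          # pitch adjustment
                    v = min(hi, max(lo, v + random.uniform(-bw, bw)))
            else:                                  # random selection
                v = random.uniform(lo, hi)
            new.append(v)
        worst = max(range(hms), key=lambda i: obj(memory[i]))
        if obj(new) < obj(memory[worst]):          # replace worst harmony
            memory[worst] = new
    return min(memory, key=obj)

print(harmony_search(lambda x: sum(v * v for v in x)))
```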
Applicability of Hooke’s and Jeeves Direct Search Solution Method to Metal c... (ijiert bestjournal)
The role of optimization in engineering design has become prominent with the advent of computers, and optimization is now part of computer-aided design activities. It is primarily used in design activities whose goal is not just a feasible design but a design objective; in most engineering design activities, the objective may simply be to minimize the cost of production or to maximize the efficiency of production. An optimization algorithm is a procedure executed iteratively, comparing solutions until an optimum or satisfactory solution is found. In many industrial design activities, optimization is achieved indirectly by comparing a few chosen design solutions and accepting the best, but this simplistic approach never guarantees the true optimum; optimization algorithms instead begin with one or more design solutions supplied by the user and then iteratively generate and check new designs. Two distinct types of optimization algorithms are in use today: first, deterministic algorithms with specific rules for moving from one solution to another, and second, algorithms that use stochastic transition rules.
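A compact sketch of the Hooke and Jeeves pattern search itself, with the metal-cutting objective replaced by a toy quadratic:

```python
# Hooke-Jeeves direct search: exploratory moves around a base point
# followed by pattern moves, shrinking the step when nothing improves.

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    def explore(point, s):
        x = list(point)
        for i in range(len(x)):
            for delta in (s, -s):          # try +s then -s per axis
                trial = list(x)
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = list(x0)
    while step > tol:
        x = explore(base, step)            # exploratory move
        while f(x) < f(base):
            # Pattern move: jump along the improving direction, then explore.
            pattern = [2 * a - b for a, b in zip(x, base)]
            base, x = x, explore(pattern, step)
        step *= shrink                     # no further gain: refine the step
    return base

print(hooke_jeeves(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0]))
```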
STATISTICAL ANALYSIS FOR PERFORMANCE COMPARISON (ijseajournal)
Performance, responsiveness, and scalability are make-or-break qualities for software, and nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during the Pre Examination Process Automation System (PEPAS), implemented in Java, along with the challenges faced during the project life cycle and the mitigation actions performed. It compares three Java technologies and shows how improvements in the application's response time were achieved through statistical analysis. The paper concludes with an analysis of the results.
Integrating profiling into mde compilers (ijseajournal)
Scientific computation requires ever more performance from its algorithms, and new massively parallel architectures suit these algorithms well, offering high performance and power efficiency. Unfortunately, since parallel programming for these architectures requires a complex distribution of tasks and data, developers find it difficult to implement their applications effectively. Although source-to-source approaches aim to provide a low learning curve for parallel programming and to exploit architecture features to create optimized applications, programming remains difficult for neophytes. This work aims to improve performance by feeding back to the high-level models specific execution data from a profiling tool, enhanced by smart advice computed by an analysis engine. To keep the link between execution and model, the process is based on a traceability mechanism. Once the model is automatically annotated, it can be refactored to obtain better performance in the regenerated code. Hence, this work keeps model and code coherent while still harnessing the power of parallel architectures. To illustrate and clarify the key points of this approach, we provide an experimental example in the context of GPUs, using a transformation chain from UML-MARTE models to OpenCL code.
THE UNIFIED APPROACH FOR ORGANIZATIONAL NETWORK VULNERABILITY ASSESSMENT (ijseajournal)
Today's business network infrastructure changes quickly, with new servers, services, connections, and ports added often, at times daily, and with an uncontrolled inflow of laptops, storage media, and wireless networks. With the growing number of vulnerabilities and exploits, coupled with the continual evolution of IT infrastructure, organizations now require more frequent vulnerability assessments. This paper proposes a new approach, the Unified Process for Network Vulnerability Assessment (hereafter called unified NVA), derived from the Unified Software Development Process (Unified Process), a popular iterative and incremental software development process framework.
Automatically inferring structure correlated variable set for concurrent atom... (ijseajournal)
The atomicity of correlated variables is quite tedious and error prone for programmers to explicitly infer and specify in multi-threaded programs. Researchers have studied the automatic discovery of the atomic sets programmers intended, such as by frequent itemset mining and rules-based filtering. However, due to the lack of inspection of inner structure, some implicitly semantics-independent variables intended by the user are mistakenly classified as correlated. In this paper, we present a novel simplification method for the program dependency graph and a corresponding graph mining approach to detect the set of variables correlated by logical structures in the source code. This approach formulates the automatic inference of the correlated variables as mining frequent subgraphs of the simplified data and control flow dependency graph. The presented simplified graph representation of program dependency is not only robust to varieties of coding style, but also essential for recognizing the logical correlation. We implemented our method and compared it with previous methods on open source program repositories. The experimental results show that our method has a lower false positive rate than previous methods in the initial stage of development. It is concluded that our presented method can provide programmers during development with a sufficiently precise correlated variable set for checking atomicity.
Program analysis is useful for the debugging, testing, and maintenance of software systems because it yields information about the structure and relationships of a program's modules. In general, program analysis is performed on either a control flow graph or a dependence graph. However, in the case of aspect-oriented programming (AOP), a control flow graph (CFG) or dependence graph (DG) alone cannot model the properties of aspect-oriented (AO) programs. Although AOP supports modular representation of crosscutting concerns, a suitable program-analysis model is required to gather information on an AO program's structure in order to minimize maintenance effort. In this paper, the Aspect Oriented Dependence Flow Graph (AODFG) is proposed as an intermediate representation of the structure of aspect-oriented programs. AODFG is formed by merging the CFG and DG, so that more information is gathered about the dependencies between join points, advice, aspects, and their associated constructs, together with the flow of control from one statement to another. We discuss the performance of AODFG by analysing examples of AspectJ programs taken from the AspectJ Development Tools (AJDT).
DETECTION AND REFACTORING OF BAD SMELL CAUSED BY LARGE SCALE (ijseajournal)
Bad smells are signs of potential problems in code. Detecting bad smells, however, remains time consuming for software engineers despite many proposed detection and refactoring tools. Large Class is a bad smell caused by large scale, and its detection is hard to automate. In this paper, a Large Class detection approach based on a class-length distribution model and cohesion metrics is proposed. In programs, the lengths of classes conform to certain distributions; the class-length distribution model is generalized to detect programs after grouping, while cohesion metrics are analyzed for bad smell detection. Detection experiments on open source programs show that the Large Class bad smell can be detected effectively and accurately with this approach, and a refactoring scheme can be proposed to improve the programs' design quality.
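As a sketch of the length side of such detection (the distribution model and cohesion metrics are not reproduced here), one can flag classes whose line span exceeds a threshold; the threshold and the sample source are illustrative assumptions.

```python
import ast

# Flag classes whose line span exceeds a threshold; a crude stand-in
# for the paper's class-length distribution model.

SOURCE = '''
class Small:
    def a(self): return 1

class Big:
''' + "\n".join(f"    def m{i}(self): return {i}" for i in range(30)) + "\n"

def large_classes(source, max_lines=25):
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            span = node.end_lineno - node.lineno + 1   # Python 3.8+
            if span > max_lines:
                flagged.append((node.name, span))
    return flagged

print(large_classes(SOURCE))   # -> [('Big', 31)]
```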
MANAGING AND ANALYSING SOFTWARE PRODUCT LINE REQUIREMENTS (ijseajournal)
Modelling software product line (SPL) features plays a crucial role in the successful development of an SPL. The feature diagram is one of the most widely used notations for modelling SPL variants; however, there is a lack of precisely defined formal notations for representing and verifying such models. This paper presents an approach in which we model SPL variants using UML and subsequently verify them using first-order logic. UML provides an overall modelling view of the system, while first-order logic provides a precise and rigorous interpretation of the feature diagrams. We model variants and their dependencies using propositional connectives, build logical expressions, and validate these expressions with the Alloy verification tool. The analysis and verification process is illustrated using a Computer Aided Dispatch (CAD) system.
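As a toy illustration of encoding feature-diagram dependencies as propositional constraints and enumerating valid variants (the features and constraints below are invented, not the CAD system's):

```python
from itertools import product

# A tiny propositional rendering of feature-diagram dependencies of the
# kind the paper verifies with Alloy; features and rules are invented.

FEATURES = ["dispatch", "gps", "radio", "sms"]

def valid(cfg):
    d, g, r, s = (cfg[f] for f in FEATURES)
    return (d                      # root feature is mandatory
            and (not g or d)       # gps requires dispatch
            and (r or s)           # at least one channel: radio OR sms
            and not (r and s))     # channels are alternatives (xor)

# Enumerate all valid product configurations.
for bits in product([False, True], repeat=len(FEATURES)):
    cfg = dict(zip(FEATURES, bits))
    if valid(cfg):
        print({f for f, on in cfg.items() if on})
```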
Development of an expert system for reducing medical errors (ijseajournal)
Recent advances in patient safety have been hampered by the difficulty of developing a uniform classification of patient safety concepts in a systematic way. Many therefore believe that medical expert systems have great potential to improve health care. A framework for computer-based diagnosis of medical errors stemming from primary systems' deficiencies is presented. The results of this research assisted in developing the hierarchical structure of the medical errors expert system, which was written and compiled in CLIPS. It has 225 rules, 52 parameters, and 830 conditional paragraphs. The system prompts the user for responses with suggested input formats and checks the user input for consistency within the given limits. In addition, the system was validated through numerous consultations with experts in the field. The benefits gained from such expert systems include eliminating the fear of dealing with personal mistakes, providing up-to-date information, and helping medical staff as a learning tool.
EA-MDA MODEL TO RESOLVE IS CHARACTERISTIC PROBLEMS IN EDUCATIONAL INSTITUTIONS (ijseajournal)
Higher education institutions require a proper standard and model that can be implemented to improve the alignment between business strategy and existing information technologies. Developing such a model is a complex task, and a combination of the EA, MDA, and SOA concepts can be one solution to the complexity of building an information technology architecture specific to higher education institutions. EA allows for a comprehensive understanding of the institution's main business process while defining the information system that will help optimize it; EA essentially focuses on strategy and integration. MDA relies on models as its main element and focuses on efficiency and quality, while SOA uses services as its principal element and focuses on flexibility and reuse. This paper seeks to formulate an information technology architecture that provides clear guidelines on inputs and outputs for EA development activities within a given higher education institution. The proposed model specifically emphasizes WIS development, to ensure that WIS in higher education institutions has coherent planning, implementation, and control processes consistent with the enterprise's business strategy. The model is then applied to support WIS development and implementation at the University of Lampung (Unila) as a case study.
Distributed Graphical User Interfaces to Class Diagram: Reverse Engineering A... (ijseajournal)
The graphical user interfaces of software programs are used by researchers in the software engineering field to measure functionality, usability, durability, accessibility, and performance. This paper describes a reverse engineering approach that transforms captured images of distributed GUIs into a class diagram. The processed distributed GUIs come from different, separate client computers. From the distributed GUIs, the interfaces are captured as images; attributes and functions are extracted and processed through a pattern recognition mechanism and stored in several temporary tables, one per client GUI. These tables are then analyzed and merged into one integrated, normalized table, eliminating attribute redundancies. Finally, the normalized integrated table is used to create a class diagram.
A MODEL-DRIVEN ENGINEERING APPROACH TO DEVELOP A COOPERATIVE INFORMATION SYSTEM (ijseajournal)
Reusing one or more existing systems to develop a complex system is common practice in software engineering. This approach is justified by the fact that it is often difficult for a single Information System (IS) to accomplish all the requested tasks, so one solution is to combine several different ISs and make them collaborate to realize these tasks. We proposed an approach named AspeCiS (An Aspect-oriented Approach to Develop a Cooperative Information System) for developing a Cooperative Information System from existing ISs by using their artifacts, such as existing requirements and designs. AspeCiS covers three phases: (i) discovery and analysis of Cooperative Requirements, (ii) design of Cooperative Requirement models, and (iii) preparation of the implementation phase. The main issue in AspeCiS is the definition of Cooperative Requirements from Existing Requirements and Additional Requirements, which must be composed with Aspectual Requirements. We have previously studied how to elicit Cooperative Requirements in AspeCiS (the discovery and analysis phase). Here we study the second phase, the design of Cooperative Requirement models, by way of a model weaving process. This process uses the AspeCiS Weaving Metamodel to weave Existing and Additional Requirement models into Cooperative Requirement models.
TILTED WINDOW DETECTION FOR GONDOLA-TYPED FACADE ROBOT (ijctcm 030301)
Working on a building facade is dangerous; many accidents have occurred, and many workers have lost their lives. We have researched applying robot systems to prevent these accidents. In this paper, we propose an algorithm for the sensor system of a gondola-typed building facade maintenance robot. The robot system moves on the building facade autonomously or by remote control, so window-area detection is important for the gondola system's painting work. For robust and accurate detection, the proposed algorithm is designed to handle tilted windows.
Example of a business review that covers all components of the business model and comments on progress toward the (strategic) goals.
This presentation was made by the BBMP during the first multi-stakeholder workshop conducted for Strategic Action Planning for Revival of Bangalore Lakes
This is a brief introduction to Windows RT development, delivered in a very casual and unique style. The presentation was given as a Tech-Talk Lunch at Clearpoint Technology on April 4, 2013.
Efficient Indicators to Evaluate the Status of Software Development Effort Es... (IJMIT JOURNAL)
Development effort estimation is an undeniable part of project management that considerably influences the success of a project: inaccurate and unreliable effort estimates can easily lead to project failure. Given its special characteristics, accurate effort estimation in software projects is a vital management activity that must be done carefully to avoid unforeseen results. Although numerous effort estimation methods have been proposed, the accuracy of estimates is still not satisfying, and attempts to improve the performance of estimation methods continue. Prior research in this area has focused on numerical and quantitative approaches, and few works investigate the root problems and issues behind inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate an organization's situation with respect to effort estimation. The proposed framework includes various indicators covering the critical issues in software development effort estimation. Since organizations differ in their capabilities and shortcomings for effort estimation, the proposed indicators support a systematic approach in which the strengths and weaknesses of an organization's effort estimation are discovered.
The analytic hierarchy process (AHP) has been applied in many fields, especially to complex engineering problems and applications. The AHP is capable of structuring decision problems and finding mathematically determined judgments built on knowledge and experience. This suggests that the AHP should prove useful in agile software development, where complex decisions occur routinely. In this paper, the AHP is used to rank the refactoring techniques based on the internal code quality attributes. XP encourages applying refactoring where the code smells bad. However, refactoring may consume considerable time and effort. So, to maximize the benefits of refactoring in less time and with less effort, the AHP has been applied to achieve this purpose. It was found that ranking the refactoring techniques helped the XP team to focus on the techniques that improve the code and the XP development process in general.
From previous years' research, it is concluded that testing plays a vital role in the development of a software product. As software testing is the primary approach to assuring the quality of software, most of the development effort is put into software testing. But software testing is an expensive process and consumes a lot of time, so testing should start as early as possible in the development to control money and time problems. Indeed, testing should be performed at every step of the software development life cycle (SDLC), the structured approach used in the development of a software product. Software testing is a tradeoff between budget, time and quality. Nowadays, testing has become a very important activity in terms of exposure, security, performance and usability. Hence, software testing faces a collection of challenges.
A Complexity Based Regression Test Selection StrategyCSEIJJournal
Software is unequivocally the foremost and indispensable entity in this technologically driven world. Therefore quality assurance, and in particular, software testing is a crucial step in the software development cycle. This paper presents an effective test selection strategy that uses a Spectrum of Complexity Metrics (SCM). Our aim in this paper is to increase the efficiency of the testing process by significantly reducing the number of test cases without having a significant drop in test effectiveness. The strategy makes use of a comprehensive taxonomy of complexity metrics based on the product level (class, method, statement) and its characteristics. We use a series of experiments based on three applications with a significant number of mutants to demonstrate the effectiveness of our selection strategy. For further evaluation, we compare our approach to boundary value analysis. The results show the capability of our approach to detect mutants as well as the seeded errors.
Ranking The Refactoring Techniques Based on The External Quality AttributesIJRES Journal
The selection of appropriate decisions is a significant issue that might lead to more satisfactory results. The difficulty comes when there are several alternatives and all of them have the same chance of being selected. It is important, therefore, to find priorities among all of these alternatives in order to choose the most appropriate one. The analytic hierarchy process (AHP) is capable of structuring decision problems and finding mathematically determined judgments built on knowledge and experience. This suggests that the AHP should prove useful in agile software development, where complex decisions occur routinely. This paper presents an example of using the AHP to rank the refactoring techniques based on the external code quality attributes. XP encourages applying refactoring where the code smells bad. However, refactoring may consume considerable time and effort. Therefore, to maximize the benefits of refactoring in less time and with less effort, the AHP has been applied to achieve this purpose. It was found that ranking the refactoring techniques helped the XP team to focus on the techniques that improve the code and the XP development process in general.
One of the core quality assurance features, which combines fault prevention and fault detection, is often known as the testability approach. Many assessment techniques and quantification methods have evolved for software testability prediction, which identify testability weaknesses or factors to help reduce test effort. This paper examines the measurement techniques that have been proposed for software testability assessment at the various phases of the object-oriented software development life cycle. The aim is to find the metrics suite best suited for software quality improvement through software testability support. The ultimate objective is to establish the groundwork for finding ways to reduce the testing effort by improving software testability and its assessment, using well-planned guidelines for object-oriented software development with the help of suitable metrics.
A Review of Agile Software Effort Estimation MethodsEditor IJCATR
Software cost estimation is an essential aspect of software project management, and the success or failure of a software project therefore depends on accuracy in estimating effort, time and cost. Software cost estimation is a scientific activity that requires knowledge of a number of relevant attributes that determine which estimation method to use in a given situation. Over the years, various studies have been done to evaluate software effort estimation methods; however, due to the introduction of new software development methods, these reviews have not captured the new methods. Agile software development is one of the recent popular methods that were not taken into account in previous cost estimation reviews. The main aim of this paper is to review existing software effort estimation methods exhaustively by exploring estimation methods suitable for new software development methods.
Software Cost Estimation Using Clustering and Ranking SchemeEditor IJMTER
Software cost estimation is an important task in the software design and development process. Planning and budgeting tasks are carried out with reference to the software cost values. A variety of software properties are used in the cost estimation process: hardware, product, technology and methodology factors. The quality of a software cost estimate is measured with reference to its accuracy level.
Software cost estimation is carried out using three types of techniques: regression based models, analogy based models and machine learning models. Each model has a set of techniques for the software cost estimation process. Eleven cost estimation techniques under these three categories are used in the system. The Attribute-Relation File Format (ARFF) is used to maintain the software product property values, and the ARFF file is used as the main input for the system.
The proposed system is designed to perform the clustering and ranking of software cost estimation methods. A non-overlapping clustering technique is enhanced with an optimal centroid estimation mechanism. The system improves the accuracy of the clustering and ranking process and produces efficient ranking results on software cost estimation methods.
SOFTWARE COST ESTIMATION USING FUZZY NUMBER AND PARTICLE SWARM OPTIMIZATIONIJCI JOURNAL
Software cost estimation is a process to calculate the effort, time and cost of a project, and it assists in better decision making about the feasibility or viability of the project. Accurate cost prediction is required to effectively organize the project development tasks and to carry out economical and strategic planning and project management. Several known and unknown factors affect this process, so cost estimation is a very difficult process. Software size is a very important factor that impacts the process of cost estimation, and the accuracy of cost estimation is directly proportional to the accuracy of size estimation. Failure of software projects has always been an important area of focus for the software industry. The implementation phase is not the only phase in which software projects fail; rather, the planning and estimation steps are the most crucial ones leading to failure. More than 50% of projects fail by going beyond the estimated time and cost. The Standish Group's CHAOS report cites a failure rate of 70% for software projects. This paper presents the existing algorithms for software estimation and the relevant concepts of fuzzy theory and PSO, and explains the proposed algorithm with experimental results.
SCHEDULING AND INSPECTION PLANNING IN SOFTWARE DEVELOPMENT PROJECTS USING MULTI-OBJECTIVE HYPER-HEURISTIC EVOLUTIONARY ALGORITHM
International Journal of Software Engineering & Applications (IJSEA), Vol.4, No.3, May 2013
DOI : 10.5121/ijsea.2013.4304
SCHEDULING AND INSPECTION PLANNING IN
SOFTWARE DEVELOPMENT PROJECTS USING
MULTI-OBJECTIVE HYPER-HEURISTIC
EVOLUTIONARY ALGORITHM
A. Charan Kumari¹ and K. Srinivas²
¹ Department of Physics & Computer Science, Dayalbagh Educational Institute, Dayalbagh, Agra, India
charankumari@yahoo.co.in
² Department of Electrical Engineering, Dayalbagh Educational Institute, Dayalbagh, Agra, India
ksri12@gmail.com
ABSTRACT
This paper presents a Multi-objective Hyper-heuristic Evolutionary Algorithm (MHypEA) for the solution of Scheduling and Inspection Planning in Software Development Projects. Scheduling and inspection planning is a vital problem in software engineering whose main objective is to assign persons to the various activities in the software development process, such as coding, inspection, testing and rework, in such a way that the quality of the software product is maximized while the project make span and the cost of the project are minimized. The problem becomes challenging when the size of the project is huge. MHypEA is an effective metaheuristic search technique for suggesting scheduling and inspection plans. It incorporates twelve low-level heuristics which are based on different methods of selection, crossover and mutation operations of Evolutionary Algorithms. The mechanism to select a low-level heuristic is based on reinforcement learning with adaptive weights. The efficacy of the algorithm has been studied on a randomly generated test problem.
KEYWORDS
Scheduling and Inspection planning, Software project development, Multi-objective optimization, Hyper-heuristics, Evolutionary Algorithm
1. INTRODUCTION
The planning of software projects involves various intricacies, such as the consideration of resources, the scheduling of members, dependencies among the various activities, project deadlines and many other restrictions. Besides these restrictions, waiting times and other external uncertainties also complicate the planning process. Due to the increase in the size and complexity of software projects, it is becoming difficult for project managers to keep the development costs under control and to meet the deadlines of the projects. These two factors in turn depend on the efficient scheduling of members to the various activities involved in the software development process, such as coding, inspection and testing. Thus, the main goal of the scheduling and inspection planning process is to achieve a high quality software product with minimum development cost and project make span.
Search-based approaches to software project management have been investigated by many researchers. In 2001, Carl K. Chang et al. [1] developed a new technique based on genetic algorithms
(GA) that automatically determines, using a programmable goal function, a near-optimal
allocation of resources and resulting schedule that satisfies a given task structure and resource
pool. In their method, they assumed that the estimated effort for each task is known a priori and
can be obtained from any known estimation method such as COCOMO. Based on the results of
these algorithms, the software manager will be able to assign tasks to staff in an optimal manner
and predict the corresponding future status of the project, including an extensive analysis on the
time and cost variations in the solution space. The results of the GA algorithm were evaluated
using exhaustive search for five test cases. In these tests the proposed GA showed strong
scalability and simplicity.
In 2005, Thomas Hanne and Stefan Nickel [2] modelled the problem of planning inspections and other operations within a software development (SD) project, with respect to the objectives quality (number of defects), project make span, and cost, as a multi-objective optimization problem. Their model of SD processes comprises the phases coding, inspection, test, and rework, and includes the assignment of operations to persons and the generation of a project schedule. They developed a multi-objective evolutionary algorithm and studied the results of its application to sample problems.
E. Alba and J. F. Chicano [3] tackled the general project scheduling problem with genetic algorithms. In their approach, they combined the duration and cost objectives into a single fitness function using weights, in such a way that the fitness weights can be adjusted to represent a particular real-world project. They developed an automated tool based on genetic algorithms that can be used to assign people to the project tasks in a nearly optimal way, trying different configurations concerning the relative importance of the cost and duration of the project. They performed an in-depth analysis with an instance generator, solved 48 different project scenarios, and performed 100 independent runs for each test to get statistically meaningful solutions. Their experiments concluded that instances with more tasks are more difficult to solve and their solutions are more expensive, while projects with a larger number of employees are easier to tackle and can be driven to a successful end in a shorter time.
Leandro L. Minku et al. [4] presented novel theoretical insight into the performance of Evolutionary Algorithms (EAs) for the project scheduling problem. Their theory inspired improvements in the design of EAs, including normalisation of dedication values, a tailored mutation operator, and fitness functions with a strong gradient towards feasible solutions. According to their findings, normalisation removes the problem of overwork, allows an EA to focus on solution quality, facilitates finding the right balance between dedication values for different tasks, and allows employees to adapt their workload whenever other tasks are started or finished. Their empirical study concludes that normalisation is very effective in improving the hit rate and the solution quality, and in making the EA more robust.
This paper presents a hyper-heuristic based multi-objective evolutionary algorithm [5] for the solution of scheduling and inspection planning in the software development process, based on the model suggested by Thomas Hanne and Stefan Nickel [2]. We implemented twelve low-level heuristics based on various methods of selection, crossover and mutation operators in evolutionary algorithms. The designed selection mechanism selects one of the low-level heuristics based on reinforcement learning with adaptive weights. The efficacy of the algorithm has been studied on a randomly generated test problem [2]. Through our experimentation we found that MHypEA is able to explore and exploit the search space thoroughly, to find high quality
solutions. Moreover, the proposed algorithm is able to achieve better results in half the amount of time expended by the MOEA reported in the literature.
The rest of the paper is organised as follows. Section 2 describes scheduling and inspection planning as a multi-objective search problem. The basic concepts of hyper-heuristics are presented in Section 3. The description of the proposed approach is provided in Section 4. Section 5 presents the experiments, and the results obtained are presented and analysed in Section 6. Concluding remarks are given in Section 7.
2. A MULTI-OBJECTIVE SCHEDULING AND INSPECTION PLANNING
PROBLEM
This section briefly describes scheduling and inspection planning as a multi-objective search-based problem as proposed by Thomas Hanne et al.; the detailed description can be found in [2]. The basic activities and their sequence in the model are given in Figure 1.
The basic assumption of the model is that there are 'n' modules to develop, each with known size and complexity. All the activities involved in the process are done by a team of 'm' developers. The first activity in the model is coding. Each module is coded by one developer, called its author. After a module is coded, it is inspected by a team of developers known as the inspection team. Based on the outcome of the inspection, the author reworks the module. Thereafter, the module undergoes testing by a developer called the tester, and the author again reworks the module based on the findings of the testing process. All the activities are performed in the sequence shown in Figure 1. As per the model, inspections are assumed to be done in a pre-empting way, forcing the developers to interrupt their coding or testing process to inspect a module as soon as it is coded by its author.
As per this assumption, each module is coded by exactly one developer, called its author. Every module is inspected by an inspection team of size 0 to m-1, with the restriction that the author of a module cannot be one of its inspectors. Similarly, each module is tested by a tester different from its author. For each module, priority values are assigned for the coding and testing activities to determine the temporal sequence for scheduling the tasks. The three objectives [2] considered in the model are described below.
2.1. First objective – Quality of the product
The first objective is the quality of the product, measured in terms of total number of defects
produced and is calculated by
$$td = \sum_{i=1}^{n} d_i \qquad (1)$$

where

$$d_i = pd_{ik} - fd^1_{iK} + rd^1_{iK} - fd^2_{ik} + rd^2_{ik} \qquad (2)$$

[Figure 1. Sequence of activities in software development: coding → inspection → rework → testing → rework]

$pd_{ik}$ denotes the defects produced during coding of a module i by an author k and is assumed to be

$$pd_{ik} = size_i \cdot cplx_i \cdot mdd / cqs_k \qquad (3)$$

where $size_i$ and $cplx_i$ are the size and complexity of a module i, mdd denotes the minimum defect density and $cqs_k$ is the coding quality skill of the author k.

$fd^1_{iK}$ determines the defects found in a module i by the inspection team K and is given by

$$fd^1_{iK} = pd_{ik} \cdot itf \cdot \Big( 1 - \prod_{k \in K} (1 - dds_k) \Big) \qquad (4)$$

where itf represents the inspection technique factor and $dds_k$ represents the defect detection skill of the inspector k.

It is assumed that although the rework of a module by its author removes the defects found by the inspection team, it may introduce new defects in proportion to the found defects, given by

$$rd^1_{iK} = rdf \cdot fd^1_{iK} \qquad (5)$$

where rdf represents the rework defects factor.

$fd^2_{ik}$ denotes the defects found in a module i when it is tested by a tester k, and is given by

$$fd^2_{ik} = d'_i \cdot \big( 1 - e^{-dfr \cdot tqs_k \cdot tt_i} \big) \qquad (6)$$

where $d'_i$ represents the defects remaining in a module i after coding, inspection and rework, dfr denotes the defect find rate, $tqs_k$ is the testing quality skill of the tester k, and $tt_i$ represents the test time of a module i, determined as

$$tt_i = ti_i \cdot cplx_i \cdot size_i \qquad (7)$$

where $ti_i$ is the test intensity.

Similarly, it is assumed that although the rework of a module by its author removes the defects found by the tester, it may introduce new defects in proportion to the found defects, given by

$$rd^2_{ik} = rdf \cdot fd^2_{ik} \qquad (8)$$
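To make the defect model concrete, here is a minimal Python sketch of Eqs. (1)-(8); the function names and the dictionary-based module description are illustrative assumptions, not the authors' MATLAB implementation.

```python
import math

def module_defects(size, cplx, mdd, itf, rdf, dfr, ti,
                   cqs_author, dds_inspectors, tqs_tester):
    """Defects left in one module after coding, inspection, test and rework,
    following Eqs. (2)-(8). dds_inspectors is the list of defect-detection
    skills of the inspection team (possibly empty)."""
    # Eq. (3): defects produced while coding.
    pd = size * cplx * mdd / cqs_author
    # Eq. (4): defects found by the inspection team.
    team_factor = 1.0
    for dds in dds_inspectors:
        team_factor *= (1.0 - dds)
    fd1 = pd * itf * (1.0 - team_factor)
    # Eq. (5): defects re-introduced by rework after inspection.
    rd1 = rdf * fd1
    # Defects remaining before testing (d' in Eq. (6)).
    d_prime = pd - fd1 + rd1
    # Eq. (7): test time; Eq. (6): defects found by the tester.
    tt = ti * cplx * size
    fd2 = d_prime * (1.0 - math.exp(-dfr * tqs_tester * tt))
    # Eq. (8): defects re-introduced by rework after testing.
    rd2 = rdf * fd2
    # Eq. (2): defects remaining in the module.
    return pd - fd1 + rd1 - fd2 + rd2

def total_defects(modules):
    """Eq. (1): the total number of defects is the sum over all modules."""
    return sum(module_defects(**m) for m in modules)
```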
2.2. Second objective – Project make span
In order to compute the project make span, a complete schedule for the software development
process is to be made. For each module, the specific time of each activity along with its waiting
times are to be calculated and the maximum time among all the modules determines the project
make span. The basic assumptions made in the model are that there are no specific dependencies
among the coding operations and all the inspections are carried out without any waiting times.
The coding time for a module is calculated as

$$ct_i = size_i \cdot cplx_i / (mcp \cdot cps_k) \qquad (9)$$

where mcp corresponds to the maximum coding productivity and $cps_k$ corresponds to the coding productivity skill of the author k.

The inspection time for a module i by the k-th inspector is calculated as

$$it_{ik} = size_i \cdot cplx_i / (mip \cdot ips_k) \qquad (10)$$

where mip corresponds to the maximum inspection productivity and $ips_k$ corresponds to the inspection productivity skill of the inspector k. The inspection time for a module is taken as the maximum inspection time among the members of the inspection team.

The rework times are calculated as

$$rt^1_i = fd^1_i \cdot cplx_i \cdot ads / (mcp \cdot cps_k) \qquad (11)$$

$$rt^2_i = fd^2_i \cdot cplx_i \cdot ads / (mcp \cdot cps_k) \qquad (12)$$

where $rt^1_i$ and $rt^2_i$ represent the rework times after inspection and testing respectively, and ads corresponds to the average defect size.
The waiting time of the activities depends on the temporal sequence of the modules based on
coding and testing priorities along with the availability of the developers to carry out the specific
activity.
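The activity-time formulas translate directly into code. A brief sketch under the same naming assumptions as above; the choice to return 0 for an uninspected module is ours:

```python
def coding_time(size, cplx, mcp, cps_author):
    """Eq. (9): coding time of a module."""
    return size * cplx / (mcp * cps_author)

def inspection_time(size, cplx, mip, ips_team):
    """Eq. (10), taking the maximum over the inspection team as in the model.
    ips_team is the list of inspection productivity skills; an empty team
    means the module is not inspected."""
    if not ips_team:
        return 0.0
    return max(size * cplx / (mip * ips) for ips in ips_team)

def rework_time(found_defects, cplx, ads, mcp, cps_author):
    """Eqs. (11)/(12): rework time after inspection (fd1) or testing (fd2)."""
    return found_defects * cplx * ads / (mcp * cps_author)
```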
2.3. Third objective – Cost
The project costs are assumed to be proportional to the effort, which is measured as the total time taken for each activity. Thus the cost is calculated as

$$tc = c \cdot \Big( \sum_{i} ct_i + \sum_{i,k} it_{ik} + \sum_{i} \big( rt^1_i + tt_i + rt^2_i \big) \Big) \qquad (13)$$

where c represents the unit cost of effort.
Thus, the Multi-objective scheduling and inspection planning problem is to schedule the
developers to various activities of different modules, in such a way that the number of defects,
project make span and cost are minimum.
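Eq. (13) then aggregates the activity times into a single cost figure. A small sketch, assuming per-module lists of times (the argument layout is illustrative):

```python
def project_cost(c, ct, it, rt1, tt, rt2):
    """Eq. (13): cost is the unit cost of effort times the total activity time.
    ct, rt1, tt, rt2 are per-module lists; it is a per-module list of
    per-inspector inspection times."""
    effort = (sum(ct)
              + sum(sum(times) for times in it)
              + sum(a + b + d for a, b, d in zip(rt1, tt, rt2)))
    return c * effort
```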
3. HYPER-HEURISTICS
Hyper-heuristics are often defined as "heuristics to choose heuristics" [6]. A heuristic is considered a rule of thumb that reduces the search required to find a solution. A meta-heuristic operates directly on the problem search space with the goal of finding optimal or near-optimal solutions, whereas a hyper-heuristic operates on the heuristic search space, which consists of all the heuristics that can be used to solve a target problem. Thus, hyper-heuristics are search algorithms that do not directly solve problems but instead search the space of heuristics that can then solve the problem. Therefore hyper-heuristics operate at a higher level of abstraction than metaheuristics. The framework of the proposed hyper-heuristic approach is shown in Figure 2 [5].
The term hyper-heuristics was coined by Cowling et al., who described it as follows: "The hyper-heuristics manage the choice of which lower-level heuristic method should be applied at any given time, depending upon the characteristics of the heuristics and the region of the solution space currently under exploration" [7]. So, hyper-heuristics are broadly concerned with intelligently choosing the right heuristic. The main objective of hyper-heuristics is to evolve more general systems that are able
to handle a wide range of problem domains. A general framework of a hyper-heuristic is presented in Algorithm 1 [6].
Algorithm 1 Hyper-heuristic algorithm
1: Start with a set H of heuristics, each of which is applicable to a problem state and transforms it
to a new problem state.
2: Let the initial problem state be S0.
3: If the problem state is Si then find the heuristic that is most suitable to transform the problem
state to Si+1.
4: If the problem is solved, stop. Otherwise go to step 3.
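As an illustration only, Algorithm 1 can be phrased as a short Python loop; the suitability measure `score` is deliberately left abstract, since step 3 leaves it to the designer, and all names here are assumptions:

```python
def hyper_heuristic(state, heuristics, is_solved, score):
    """Generic hyper-heuristic loop (Algorithm 1): repeatedly pick the
    heuristic judged most suitable for the current problem state and apply
    it, until the problem is solved. score(h, state) is whatever measure
    the designer uses to judge suitability."""
    while not is_solved(state):
        best = max(heuristics, key=lambda h: score(h, state))
        state = best(state)  # transform state S_i into S_{i+1}
    return state
```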
4. PROPOSED APPROACH
This section explains our proposed approach [5]. The designed format of the low-level heuristics is EA/selection/crossover/mutation. The selection process involves identifying the parents for generating the offspring. Two types of selection are proposed – rand and rand-to-best. In rand, both parents are selected randomly from the population, while in rand-to-best, one parent is selected randomly from the population and the other parent is an elite (the best one) selected from the set of elites. Three types of crossover operators are used to generate the offspring. The first operator is uniform crossover, wherein the offspring is generated by randomly selecting each gene from either of the parents. The second operator is hybrid crossover1 (hc1), defined by hybridizing single-point crossover with uniform crossover. The third operator is hybrid crossover2 (hc2), framed by hybridizing two-point crossover with uniform crossover. Two types of mutation are proposed – copy and exchange. In the first mutation operator, two genes are selected randomly and the second gene is copied into the first one. In the second mutation operator, two randomly selected genes exchange their positions. Based on the above selection, recombination and mutation operators, twelve low-level heuristics are proposed for the hyper-heuristic and are shown in Table 1 [5].
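One plausible way to realize the EA/selection/crossover/mutation format is to enumerate the twelve combinations programmatically. A sketch matching the ordering of Table 1; the tuple representation and zero-based indexing are our assumptions:

```python
SELECTIONS = ("rand", "rand-to-best")
CROSSOVERS = ("uniform", "hc1", "hc2")
MUTATIONS = ("copy", "exchange")

# h1..h12 in the order of Table 1: the copy group first, then exchange;
# within each group, crossover varies slowest and selection fastest.
LOW_LEVEL_HEURISTICS = [
    (sel, xover, mut)
    for mut in MUTATIONS
    for xover in CROSSOVERS
    for sel in SELECTIONS
]
# LOW_LEVEL_HEURISTICS[0]  == ("rand", "uniform", "copy")            # h1
# LOW_LEVEL_HEURISTICS[11] == ("rand-to-best", "hc2", "exchange")    # h12
```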
Figure 2. Framework of the proposed Hyper-heuristic (a domain barrier separates the hyper-heuristic from the twelve low-level heuristics h1, ..., h12 and the objective functions; CR = U(0.0, 1.0) selects the mutation group, with hb = 1, he = 6 when CR ≤ 0.5 and hb = 7, he = 12 when CR > 0.5, and hselect = roulette(hb, he) picks a heuristic within the group)
The proposed hyper-heuristic selects a promising low-level heuristic in each iteration based on information about the efficacy of each low-level heuristic accrued during the preceding iterations. This is executed through the principle of reinforcement learning [8]. The basic idea is to "reward" improving low-level heuristics in each iteration of the search by gradually increasing their weights, and to "punish" poorly performing ones by decreasing their weights. The weights of the low-level heuristics are changed adaptively during the search process, and at any point they reflect the effectiveness of the low-level heuristics.
Table 1 Set of low-level heuristics used by the proposed hyper-heuristic

Group with copy mutation            Group with exchange mutation
h1 : EA/rand/uniform/copy           h7 : EA/rand/uniform/exchange
h2 : EA/rand-to-best/uniform/copy   h8 : EA/rand-to-best/uniform/exchange
h3 : EA/rand/hc1/copy               h9 : EA/rand/hc1/exchange
h4 : EA/rand-to-best/hc1/copy       h10 : EA/rand-to-best/hc1/exchange
h5 : EA/rand/hc2/copy               h11 : EA/rand/hc2/exchange
h6 : EA/rand-to-best/hc2/copy       h12 : EA/rand-to-best/hc2/exchange
In the beginning, all the low-level heuristics are assigned equal weights. The weight of a heuristic is changed as soon as the heuristic is called and its performance is evaluated. If the called heuristic leads to an improvement in the values of the objective functions, its weight is increased; otherwise it is decreased. All the weights are bounded from above and below. Thus the current values of the weights carry the information about the past experience of using the corresponding heuristics. The roulette-wheel approach is used to select a heuristic randomly with probability proportional to its weight [9].
The proposed hyper-heuristic works in two phases. The first phase selects the type of mutation to be adopted (copy or exchange). This is done randomly, with both groups having an equal probability of being selected. The values of hb and he denote the indices of the first and last low-level heuristics of the selected group. The second phase selects a specific low-level heuristic within the selected group. This phase uses the reinforcement learning approach with adaptive weights, using the roulette wheel to select a particular model of EA. The framework of the proposed hyper-heuristic is shown in Figure 2 [5]. Here, CR is a random number drawn from a uniform distribution on the unit interval, used to select an EA model with either copy or exchange mutation with equal probability, as sketched below.
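A minimal sketch of this two-phase selection with adaptive weights; the initial weight, the bounds and the reward/penalty step size are illustrative values, as the paper does not state them:

```python
import random

N_HEURISTICS = 12
W_MIN, W_MAX, STEP = 1.0, 10.0, 1.0   # illustrative bounds and step size
weights = [5.0] * N_HEURISTICS        # equal weights at the start

def select_heuristic():
    """Phase 1: pick the mutation group with equal probability.
    Phase 2: roulette-wheel selection inside the group, with probability
    proportional to the adaptive weights (indices are zero-based)."""
    cr = random.random()
    hb, he = (0, 5) if cr <= 0.5 else (6, 11)  # copy group / exchange group
    group = list(range(hb, he + 1))
    total = sum(weights[h] for h in group)
    pick = random.uniform(0.0, total)
    acc = 0.0
    for h in group:
        acc += weights[h]
        if pick <= acc:
            return h
    return group[-1]

def update_weight(h, improved):
    """Reinforcement-learning style update: reward an improving heuristic,
    punish a non-improving one, keeping the weight within its bounds."""
    delta = STEP if improved else -STEP
    weights[h] = min(W_MAX, max(W_MIN, weights[h] + delta))
```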
Based on the selected low-level heuristic, an offspring population is generated and its fitness is evaluated. Thereafter, the parent and offspring populations are combined, and non-dominated sorting [10] is performed on the combined population to classify the solutions into different Pareto fronts based on their goodness. The population for the next iteration is taken from the best fronts. The pseudo code of MHypEA is given in Algorithm 2.
Algorithm 2 Multi-objective Hyper-heuristic Evolutionary Algorithm (MHypEA) [5]
1: Initialize parent population
2: Evaluate the fitness of parent population
3: While (not termination-condition) do
4: Select a low-level heuristic based on the selection mechanism
5: Apply the selected low-level heuristic on the parent population and obtain offspring
population
6: Evaluate the offspring population
7: Combine parent and offspring populations
8: Perform non dominated sorting on the combined population and select the individuals
from the best fronts for the next iteration
9: end while
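A compact Python rendering of Algorithm 2 follows; non-dominated sorting is implemented here by simple Pareto-front peeling rather than the faster NSGA-II procedure of [10], and all helper names are assumptions:

```python
def dominates(a, b):
    """a dominates b (minimization) if it is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated_sort(population, fitness):
    """Peel off successive Pareto fronts of a population."""
    remaining = list(population)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(fitness(q), fitness(p))
                            for q in remaining if q is not p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

def mhypea(init_population, fitness, select_heuristic, apply_heuristic,
           pop_size, max_iters):
    """Main loop of MHypEA (Algorithm 2)."""
    parents = init_population(pop_size)                # steps 1-2
    for _ in range(max_iters):                         # step 3
        h = select_heuristic()                         # step 4
        offspring = apply_heuristic(h, parents)        # steps 5-6
        combined = parents + offspring                 # step 7
        next_pop = []                                  # step 8: best fronts
        for front in non_dominated_sort(combined, fitness):
            for p in front:
                if len(next_pop) < pop_size:
                    next_pop.append(p)
        parents = next_pop
    return parents
```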
5. EXPERIMENTS
This section describes the methodology applied to compute the values of the objective functions and the test problem considered, along with the problem and algorithmic parameter settings.
5.1 Methodology
In this subsection we describe the initialization of the decision variables, the methods adopted for the crossover and mutation operations, and the procedure for the evaluation of the objective functions.
5.1.1 Initialization of decision variables
The decision variables taken are (author, no_inspectors, inspector, tester, coding_priority, testing_priority). Initially, developers are assigned randomly as authors to all the modules, with the condition that each module is coded by exactly one author and an author may be assigned to code more than one module. The number of inspectors for a module characterizes the inspection team size of that module, which is initially assigned randomly with a value in the range 0 to (m-1), where m signifies the number of developers involved in the project; the upper limit of (m-1) reflects that the author of a module cannot be one of its inspectors, and the lower limit of 0 indicates that the module is not subject to inspection. In the current scenario, the maximum number of inspectors for each module is taken as 3. The inspectors are randomly assigned for each module as dictated by the inspection team size. Similarly, developers are assigned as testers for testing the modules, with the constraint that the author of a module cannot be its tester. Further, the coding_priority and testing_priority of the modules are taken as decision variables and are assigned values in the range [0, 1] to determine the temporal sequence for scheduling operations, with the intention that a module with a higher priority value is scheduled before a module with a lower priority value.
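A sketch of this initialization for a single individual, under the stated constraints (exactly one author per module, at most 3 inspectors, author neither inspector nor tester); the dictionary layout is illustrative:

```python
import random

def init_schedule(n_modules, m_developers, max_inspectors=3):
    """Randomly initialize the decision variables for every module."""
    schedule = []
    for _ in range(n_modules):
        author = random.randrange(m_developers)
        others = [d for d in range(m_developers) if d != author]
        n_insp = random.randint(0, max_inspectors)  # 0 means no inspection
        inspectors = random.sample(others, n_insp)  # author never inspects
        tester = random.choice(others)              # author never tests
        schedule.append({
            "author": author,
            "inspectors": inspectors,
            "tester": tester,
            "coding_priority": random.random(),     # in [0, 1]
            "testing_priority": random.random(),    # in [0, 1]
        })
    return schedule
```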
5.1.2 Crossover operator
Parents are selected for the crossover operation in two ways – rand and rand-to-best. Then, for each module in the offspring, the author, inspectors, tester, and the priorities of the coding and testing operations are assigned the values of a randomly chosen module from either of the parents, in accordance with the three crossover methods discussed in Section 4. In this way the offspring are generated from the parents, leading to different scheduling permutations.
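For the uniform variant, the module-wise crossover can be sketched as follows; we read the description as copying each offspring module's whole assignment from the corresponding module of a randomly chosen parent, and hc1/hc2 would additionally bias this choice by one or two crossover points (not shown):

```python
import copy
import random

def uniform_crossover(parent_a, parent_b):
    """For each module position, copy the complete scheduling assignment
    (author, inspectors, tester, priorities) from one parent at random."""
    return [copy.deepcopy(random.choice((ma, mb)))
            for ma, mb in zip(parent_a, parent_b)]
```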
5.1.3 Mutation
A simple strategy is adopted for mutation. In every offspring generated by the crossover operation, two modules are selected randomly; the scheduling assignment of one is copied into the other in the copy variant of mutation, and the scheduling assignments are exchanged between the selected modules in the exchange variant.
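Both mutation variants fit in a few lines; a sketch using the same schedule representation as above:

```python
import random

def mutate(schedule, variant="copy"):
    """Pick two random modules; copy one assignment over the other ('copy')
    or swap the two assignments ('exchange')."""
    i, j = random.sample(range(len(schedule)), 2)
    if variant == "copy":
        schedule[i] = dict(schedule[j])
    else:
        schedule[i], schedule[j] = schedule[j], schedule[i]
    return schedule
```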
5.1.4 Evaluation of objective functions
The number of defects, indicating the quality of the product, and the cost objective can be calculated straightforwardly using the formulae given in Section 2. But the calculation of the project make span is complicated, as one needs to frame the complete schedule of the software development process. The approach used in this paper for the computation of the project make span is as follows. As a first step, the time for each activity (depicted in Figure 1) is calculated for each module. In the next step, the waiting times before each activity starts are calculated for each module, based on the given assignment of persons to modules and activities, scheduling the modules for one person according to the priorities: coding_priority for coding and rework, testing_priority for tests. Inspections are assumed to be performed as soon as the coding finishes, without any waiting time. Since the inspection activity has a higher priority than other activities, it is assumed that inspections interrupt the coding activity of an inspector. All the activities are scheduled by considering the restrictions on their precedence. Finally, the make span of each module is computed based on the actual activity times and their corresponding waiting times. The maximum make span among the modules is taken as the project make span.
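The schedule builder itself is involved; the following heavily simplified sketch shows only the final step, under the assumption that per-module activity and waiting times have already been computed elsewhere:

```python
def project_make_span(activity_times, waiting_times):
    """Make span = max over modules of the total of that module's activity
    times and waiting times along its coding-inspection-rework-testing-rework
    path. Both arguments are per-module lists of per-activity durations."""
    return max(sum(acts) + sum(waits)
               for acts, waits in zip(activity_times, waiting_times))
```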
5.2 Test Problem
In order to evaluate the efficiency of the proposed MHypEA, the test problem and the problem parameters are taken from the literature as recommended in [2]. The following technical parameters are used:
• number of modules: n = 100
• number of developers: m = 20
• maximum coding productivity: mcp = 25 [loc/h]
• minimum defect density: mdd = 0.02 [defects/loc]
• maximum inspection productivity: mip = 175 [loc/h]
• inspection technique factor: itf = 0.45
• test intensity: ti = 0.05
• defect find rate: dfr = 0.1
• rework defects factor rdf = 0.1
• average defect size: ads = 8 [loc]
• unit costs : c = 150[EUR/h]
For all the skill attributes, a normal distribution with expected value 0.5 and variance 0.1 is assumed (while ensuring that the values lie in [0, 1]), with the assumption that a person on average reaches 50% of the optimum skill values. For the module size, a lognormal distribution with expected value 300 [loc] and variance 120 is applied. For the module complexity, a normal distribution with expected value 1 and variance 0.1 is assumed.
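A sketch of how such an instance could be sampled; deriving the lognormal parameters from the stated mean and variance is our addition, and clipping the skills into [0, 1] is one of several possible truncation choices:

```python
import math
import random

def sample_instance(n=100, m=20):
    """Sample module sizes, complexities and developer skills with the
    distributions stated above."""
    # Lognormal with mean 300 and variance 120: derive mu and sigma from
    # mean = exp(mu + sigma^2/2) and var = (exp(sigma^2) - 1) * mean^2.
    mean, var = 300.0, 120.0
    sigma2 = math.log(1.0 + var / mean**2)
    mu = math.log(mean) - sigma2 / 2.0
    sizes = [random.lognormvariate(mu, math.sqrt(sigma2)) for _ in range(n)]
    # Complexity ~ N(1, 0.1); random.gauss takes the standard deviation.
    cplx = [random.gauss(1.0, math.sqrt(0.1)) for _ in range(n)]
    # Skills ~ N(0.5, 0.1), clipped into [0, 1].
    def skill():
        return min(1.0, max(0.0, random.gauss(0.5, math.sqrt(0.1))))
    skills = {k: {"cqs": skill(), "dds": skill(), "tqs": skill(),
                  "cps": skill(), "ips": skill()} for k in range(m)}
    return sizes, cplx, skills
```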
The population size is taken as 30, and the test problem was run for a maximum of 500 iterations. The algorithm was implemented in MATLAB 7.6.0 on an Intel® Core™ 2 Duo CPU T6600 @ 2.20 GHz with 3 GB RAM, running Windows 7.
6. RESULTS
The results obtained by MHypEA on the test problem described above are presented in this section. Figures 3-5 show box plots for selected generations, visualizing the distribution of the three objective functions.
The box plots in Figures 3 and 4 show the most considerable improvement in the best values for the defects objective within the first 150 generations and for the costs objective within the first 250 generations of MHypEA. With respect to the duration objective there is only a negligible improvement in the best values found. The conflicting nature of the objectives is evident from the three box plots.
Figure 3. Defects of solutions based on generation number
Figure 4. Costs of solutions based on generation number
Figure 5. Duration of solutions based on generation number
Figure 6. Costs and Defects in solutions of several generations (generations 100, 200, 300, 400, 500)
Figure 7. Costs and Duration in solutions of several generations (generations 100, 200, 300, 400, 500)
Figure 8. Duration and Defects in solutions of several generations (generations 100, 200, 300, 400, 500)
Figures 6-8 show the solutions obtained by MHypEA at selected generations for each combination of two objectives. It is apparent from Figures 6-8 that there is continuous progress towards the minimization of the objectives and the diversity of the solutions. For example, in the population of generation 500 there is a significant improvement in the number of defects and the costs in comparison with the population of generation 100. As the objectives are highly conflicting, in some instances one objective decreases only at the expense of an increase in another. The comparison of the results of MHypEA with the Multi-objective Evolutionary Algorithm (MOEA) reported in the literature [2] shows that MHypEA is able to achieve better values for all three objective functions in half the number of generations. This is perceptible from the corresponding box plots and two-dimensional plots of MOEA [2] and MHypEA.
7. CONCLUSION
This paper presents a Multi-objective Hyper-heuristic Evolutionary Algorithm for the solution of scheduling and inspection planning in software development projects. Scheduling and inspection planning is an important problem in the software development process, as it directly influences the quality of the end product. The scheduling of personnel is considered for the coding, inspection, testing and rework phases of the development process. Three highly conflicting objectives are evaluated: the number of defects (indicating the quality of the product), the costs, and the duration of the project. A hyper-heuristic based multi-objective evolutionary algorithm is proposed for the purpose and its results are assessed. The obtained results show that the proposed algorithm is able to improve the solutions significantly, with better diversity in the solutions. The comparison of the results with the MOEA shows that MHypEA is able to achieve high quality solutions in half the number of iterations.
ACKNOWLEDGEMENTS
The authors are extremely grateful to the Revered Prof. P. S. Satsangi, Chairman, Advisory Committee on Education, Dayalbagh, for his continued guidance and support.
REFERENCES
[1] C. K. Chang, M. J. Christensen & T. Zhang, (2001) "Genetic algorithms for project management", Annals of Software Engineering, Vol. 11, pp 107-139.
[2] T. Hanne & S. Nickel, (2005) "A multiobjective evolutionary algorithm for scheduling and inspection planning in software development projects", European Journal of Operational Research, Vol. 167, pp 663-678.
[3] E. Alba & J. F. Chicano, (2007) "Software project management with GAs", Information Sciences, Vol. 177, pp 2380-2401.
[4] L. L. Minku, D. Sudholt & X. Yao, (2012) "Evolutionary algorithms for the project scheduling problem: runtime analysis and improved design", Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computation, pp 1221-1228.
[5] A. Charan Kumari & K. Srinivas, (2013) "Software module clustering using a fast multi-objective hyper-heuristic evolutionary algorithm", International Journal of Applied Information Systems, Vol. 5, pp 12-18.
[6] E. Burke, G. Kendall, J. Newall, E. Hart, P. Ross & S. Schulenburg, (2003) "Hyper-heuristics: an emerging direction in modern search technology", Handbook of Metaheuristics, pp 457-474, Kluwer Academic Publishers.
[7] P. I. Cowling, G. Kendall & E. Soubeiga, (2001) "A hyperheuristic approach to scheduling a sales summit", Proceedings of the Third International Conference on the Practice and Theory of Automated Timetabling, Vol. 2079, pp 176-190.
[8] L. P. Kaelbling, M. L. Littman & A. W. Moore, (1996) "Reinforcement learning: a survey", Journal of Artificial Intelligence Research, Vol. 4, pp 237-285.
[9] A. Nareyek, (2003) "Choosing search heuristics by non-stationary reinforcement learning", in Metaheuristics: Computer Decision-Making, pp 523-544, Kluwer Academic Publishers.
[10] K. Deb, A. Pratap, S. Agarwal & T. Meyarivan, (2002) "A fast and elitist multiobjective genetic algorithm: NSGA-II", IEEE Transactions on Evolutionary Computation, Vol. 6, pp 182-197.
Authors
A. Charan Kumari received the Master of Computer Applications degree from Andhra University, India, in 1996, standing first in the district, and the M.Phil. degree in Computer Science from Bharathidasan University, Tiruchirappalli, India. She has fifteen years of teaching experience at various esteemed institutions and has received several accolades, including the best teacher award from the management of D. L. Reddy College, Andhra Pradesh. She is working towards the Ph.D. degree in Computer Science at Dayalbagh Educational Institute, Agra, India, in collaboration with IIT Delhi, India, under their MoU. Her current research interests include search-based software engineering, evolutionary computation and soft computing techniques. She has published papers in international conferences and journals. She is a life member of the Systems Society of India (SSI).
K. Srinivas received the B.E. in Computer Science & Technology, the M.Tech. in Engineering Systems and the Ph.D. in Evolutionary Algorithms. He is currently working as Assistant Professor in the Electrical Engineering Department, Faculty of Engineering, Dayalbagh Educational Institute, Agra, India. His research interests include soft computing, optimization using metaheuristic search techniques, search-based software engineering, mobile telecommunication networks, systems engineering, and e-learning technologies. He received the Director's medal for securing the highest marks in the M.Tech. programme at Dayalbagh Educational Institute in 1999. He is an active researcher who guides research, has published papers in journals of national and international repute, and is involved in R&D projects. He is a member of IEEE and a life member of the Systems Society of India (SSI). He is also the Publication Manager of Literary Paritantra (Systems), an international journal on literature and theory.