Industrial reports show massive cost overruns in software projects, and the cost of software rework constitutes a large portion of the overall cost, making cost control a substantial challenge. Earned value management (EVM) is the most widely recognized model for project cost control, but it has notable limitations in forecasting software project cost; chief among them is its inability to forecast the cost of software rework. This research investigates the factors behind this limitation and proposes an enhanced EVM model. Its significant contribution is the incorporation of software-related factors into the EVM model: we introduce the software rework index (SRI), which is incorporated into the traditional EVM model to improve its prediction of software project cost at completion, including rework cost. We define the SRI in terms of two factors: product functional complexity and team competency. Finally, we evaluated the proposed model using a dataset drawn from five actual projects; the results showed a significant improvement in forecasting accuracy.
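The abstract names the EVM quantities involved but does not publish the SRI formula. As a rough illustration only, here is a minimal Python sketch of the classic CPI-based estimate at completion next to a hypothetical SRI-adjusted variant; the adjustment rule (SRI inflating the remaining-work forecast) and all numbers are assumptions, not the paper's model.

```python
# Traditional EVM forecast vs. a hypothetical rework-index adjustment.
# The SRI usage below is an illustrative assumption, not the paper's formula.

def cpi(earned_value, actual_cost):
    """Cost performance index: value earned per unit of cost spent."""
    return earned_value / actual_cost

def eac_traditional(bac, earned_value, actual_cost):
    """Classic EVM estimate at completion: EAC = BAC / CPI."""
    return bac / cpi(earned_value, actual_cost)

def eac_with_sri(bac, earned_value, actual_cost, sri):
    """Assumed SRI-adjusted forecast: the remaining-work cost is
    inflated by the expected rework fraction sri in [0, 1]."""
    remaining = (bac - earned_value) / cpi(earned_value, actual_cost)
    return actual_cost + remaining * (1.0 + sri)

# Example: 40% of a $100k budget earned for $50k spent, SRI = 0.2
print(round(eac_traditional(100_000, 40_000, 50_000)))
print(round(eac_with_sri(100_000, 40_000, 50_000, 0.2)))
```

With CPI = 0.8, the traditional forecast is $125k; the assumed rework adjustment pushes the completion forecast to $140k.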
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in science and technology and to motivate researchers working in its challenging areas.
Enhancing the Software Effort Prediction Accuracy using Reduced Number of Cos... (IRJET Journal)
This document presents research on modifying the COCOMO II software cost estimation model to improve prediction accuracy. The researchers reduced the number of cost estimation factors (called cost drivers) from 17 to 13 by adjusting the definitions and impact levels to better reflect current industry situations. They estimated effort for software projects using the modified model and found lower percentage errors compared to the original COCOMO II model, demonstrating improved estimation efficiency. The goal of the research was to analyze cost drivers and their impact on effort estimation in COCOMO II and enhance the model for more accurate predictions.
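For readers unfamiliar with the model being modified, here is a minimal sketch of the standard COCOMO II post-architecture effort equation, using the published COCOMO II.2000 constants (A = 2.94, B = 0.91); the paper's reduced 13-driver recalibration is not reproduced here, and the scale-factor values in the example are illustrative.

```python
# COCOMO II post-architecture effort sketch: PM = A * Size^E * prod(EM),
# with E = B + 0.01 * sum(scale factors); size in KSLOC, effort in
# person-months. Constants are the COCOMO II.2000 values.
import math

def cocomo2_effort(ksloc, scale_factors, effort_multipliers):
    A, B = 2.94, 0.91
    E = B + 0.01 * sum(scale_factors)
    return A * ksloc ** E * math.prod(effort_multipliers)

# Nominal case: all 17 multipliers at 1.0, illustrative scale factors.
effort = cocomo2_effort(10.0, [3.72, 3.04, 4.24, 3.29, 4.68], [1.0] * 17)
print(round(effort, 1))
```

Reducing the driver set, as the paper does, amounts to shrinking the `effort_multipliers` list and re-fitting the multiplier levels.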
Efficient Indicators to Evaluate the Status of Software Development Effort Es... (IJMIT JOURNAL)
Development effort is an undeniable part of project management that considerably influences project success, and inaccurate, unreliable effort estimation can easily lead to project failure. Because of the special characteristics of software projects, accurate effort estimation is a vital management activity that must be done carefully to avoid unforeseen results. Although numerous effort estimation methods have been proposed in this field, the accuracy of estimates is not satisfying, and attempts to improve the performance of estimation methods continue. Prior research in this area has focused on numerical and quantitative approaches, and few works investigate the root problems and issues behind inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate an organization's situation in terms of effort estimation. The proposed framework includes various indicators that cover the critical issues in the field of software development effort estimation. Since organizations differ in their estimation capabilities and shortcomings, the proposed indicators enable a systematic approach in which an organization's strengths and weaknesses in effort estimation are discovered.
A New Model for Software Cost Estimation Using Harmony Search (ijfcstjournal)
Accurate and realistic estimation has always been a great challenge in the software industry. Software Cost Estimation (SCE) is the standard practice used to manage software projects, and the estimates produced in a project's initial stages drive the planning of its other activities. In fact, estimation is confronted with a number of uncertainties and barriers, so assessing previous projects is essential to address this problem. Several models have been developed for the analysis of software projects: the classical reference method is the COCOMO model, other methods such as Function Point (FP) analysis and Lines of Code (LOC) are also applied, and expert opinion matters in this regard as well. In recent years, the growth of meta-heuristic algorithms, and their combination with high-accuracy methods, has brought about great achievements in software engineering. Meta-heuristic algorithms, which can analyze data across multiple dimensions and identify the optimum solution among them, are analytical tools for data analysis. In this paper, we use the Harmony Search (HS) algorithm for SCE. The proposed model has been assessed on a collection of 60 standard projects from the NASA60 dataset. The experimental results show that the HS algorithm is a good way to determine the weights of the similarity-measure factors of software effort and to reduce the MRE error.
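As a hedged illustration of the approach, here is a toy Harmony Search that tunes a single productivity weight so that predicted effort = w * size minimizes the mean magnitude of relative error (MMRE) on a few made-up (size, effort) points. The parameter names follow the standard HS vocabulary (harmony memory, HMCR, PAR); the data and setup are illustrative, not the paper's NASA60 configuration.

```python
# Toy Harmony Search minimizing MMRE for a one-parameter effort model.
import random

def mmre(w, data):
    """Mean magnitude of relative error of predicted = w * size."""
    return sum(abs(e - w * s) / e for s, e in data) / len(data)

def harmony_search(data, iters=2000, hms=10, hmcr=0.9, par=0.3, bw=0.05):
    random.seed(42)
    memory = [random.uniform(0.1, 10.0) for _ in range(hms)]  # harmony memory
    for _ in range(iters):
        if random.random() < hmcr:            # draw from memory...
            w = random.choice(memory)
            if random.random() < par:         # ...and maybe pitch-adjust it
                w += random.uniform(-bw, bw)
        else:                                 # or improvise a fresh value
            w = random.uniform(0.1, 10.0)
        worst = max(range(hms), key=lambda i: mmre(memory[i], data))
        if mmre(w, data) < mmre(memory[worst], data):
            memory[worst] = w                 # replace the worst harmony
    return min(memory, key=lambda w: mmre(w, data))

data = [(10, 24), (25, 62), (40, 95), (60, 150)]  # (KLOC, person-months)
best = harmony_search(data)
print(round(best, 2), round(mmre(best, data), 3))
```

The same loop generalizes to a vector of similarity-measure weights by improvising each component independently.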
COMPARATIVE STUDY OF SOFTWARE ESTIMATION TECHNIQUES (ijseajournal)
Many information technology firms, among other organizations, have been working on how to estimate resources such as funds during software development processes. Software development life cycles involve many activities and skills to avoid risks, so the best software estimation technique should be employed. Therefore, this research conducted a comparative study that considers the accuracy, usage, and suitability of existing methods, intended for project managers and project consultants throughout the software project development process. Both algorithmic and non-algorithmic techniques are covered: model-based, composite, and regression techniques underlie COCOMO, COCOMO II, SLIM, and multiple linear regression, respectively, while expertise-based and rule-based approaches are applied in the non-algorithmic methods. However, these techniques need further advancement to reduce the errors experienced during the software development process. This paper therefore proposes a model for software estimation that can be helpful to information technology firms, researchers, and other organizations that use information technology in processes such as budgeting and decision making.
A survey of predicting software reliability using machine learning methods (IAES IJAI)
In light of technical and technological progress, software has become an urgent need in every aspect of human life, including the medical sector and industrial control; it is therefore imperative that software always works flawlessly. The information technology sector has expanded rapidly in recent years: software companies can no longer rely only on cost advantages to stay competitive in the market, and programmers must deliver reliable, high-quality software. To support estimating and predicting software reliability using machine learning and deep learning, this survey introduces a brief overview of the important scientific contributions on software reliability and of the highly efficient methods and techniques researchers have found for predicting it.
A simplified predictive framework for cost evaluation to fault assessment usi... (IJECE, IAES)
Software engineering is an integral part of any software development scheme, and development frequently encounters bugs, errors, and faults. Predictive evaluation of software faults mitigates this challenge to a large extent; however, no benchmarked framework has been reported for it yet. Therefore, this paper introduces a computational cost-evaluation framework to facilitate better predictive assessment of software faults. Based on lines of code, the proposed scheme adopts a machine-learning approach to perform predictive analysis of faults. It presents an analytical framework of a correlation-based cost model integrated with multiple standard machine learning (ML) models, e.g., linear regression, support vector regression, and artificial neural networks (ANN). These learning models are trained to predict software faults with higher accuracy. The study assesses the outcomes in detail using error-based performance metrics to determine how well each model performs and how accurately it learns, and it also examines the factors contributing to the training loss of the neural networks. The validation results demonstrate that, compared to logistic regression and support vector regression, the neural network achieves a significantly lower error score for software fault prediction.
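The simplest model family in that comparison can be sketched directly: a linear regression from lines of code to fault count, fit in closed form by least squares. The data points below are made up for illustration.

```python
# Closed-form simple linear regression: predict faults from LOC.
# Data is illustrative, not from the paper.

def fit_linear(xs, ys):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

loc    = [1200, 3400, 5600, 8100, 12000]   # module size in lines of code
faults = [3, 9, 14, 21, 31]                # defects found post-release

slope, intercept = fit_linear(loc, faults)
predict = lambda x: slope * x + intercept
print(round(predict(7000)))                # expected faults for a 7k-LOC module
```

The ANN and SVR models in the paper replace this linear form with more flexible function families, at the price of training loss to monitor.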
FACTORS ON SOFTWARE EFFORT ESTIMATION (ijseajournal)
Software effort estimation is an important process in the system development life cycle, as inaccurate estimates by project designers may affect the success of software projects. In the past few decades, various effort prediction models have been proposed by academicians and practitioners. Traditional estimation techniques include Lines of Code (LOC), the Function Point Analysis (FPA) method, and Mark II Function Points (Mark II FP), which have proven unsatisfactory for predicting the effort of all types of software. In this study, the author proposes a regression model to predict the effort required to design small and medium scale application software. To develop the model, the author used 60 completed software projects developed by a software company in Macau, extracted factors from these projects, and applied them to a regression model. A prediction model for software effort with an accuracy of MMRE = 8% was constructed.
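The MMRE accuracy measure quoted here is simply the average of |actual - predicted| / actual over the evaluated projects; a short sketch with illustrative numbers:

```python
# MMRE: mean magnitude of relative error of an effort predictor.

def mmre(actual, predicted):
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual    = [100, 250, 80, 400]    # person-hours (illustrative)
predicted = [92, 260, 86, 388]
print(round(mmre(actual, predicted), 3))
```

An MMRE of 0.08, as reported, means the model's estimates are off by 8% of the actual effort on average.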
For years, software projects have mostly exceeded their budgets, been delivered late, and failed to meet customer satisfaction. In the past, traditional development models like waterfall, spiral, iterative, and prototyping methods were used to build software systems. In recent years, agile models have been widely used in developing software products, mainly because of their simplicity, their ability to incorporate requirement changes at any time, their light-weight approach, and their early delivery of a working product in short iterations. Whatever development model is used, it remains a challenge for software engineers to accurately estimate the size, effort, and time required to develop a software system. This survey focuses on the existing estimation models used in traditional as well as agile software development.
The performance of an algorithm can be improved using a parallel computing programming approach. In this study, the bubble sort algorithm was run on computers of various specifications. Experimental results show that parallel computing programming can save significant time, performing 61%-65% faster than serial computing programming.
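The study's code is not given. As a sketch of how bubble sort is typically restructured for parallel execution, here is odd-even transposition sort: within each phase the compared pairs do not overlap, so the swaps are data-independent and can run concurrently (shown serially here).

```python
# Odd-even transposition sort, the parallelizable bubble-sort variant.

def odd_even_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):       # n phases guarantee a sorted result
        start = phase % 2        # even phase: pairs (0,1),(2,3)...; odd: (1,2),(3,4)...
        for i in range(start, n - 1, 2):   # pairs are independent -> parallel-friendly
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_sort([5, 1, 4, 2, 8, 0, 3]))
```

In an actual parallel implementation each phase's inner loop is distributed across threads or processes, with a barrier between phases.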
Insights on Research Techniques towards Cost Estimation in Software Design (IJECE, IAES)
This document summarizes research on techniques for cost estimation in software design. It begins by describing common cost estimation techniques like Constructive Cost Modeling (COCOMO) and Function Point Analysis. It then analyzes research trends in cost estimation, effort estimation, and fault prediction based on literature from 2010 to present. Fewer than 50 papers were found related to overall cost estimation, less than 25 for effort estimation, and only 9 for fault prediction. The document then reviews existing research addressing general cost estimation, enhancement of Function Point Analysis, statistical modeling approaches, cost estimation for embedded systems, and estimation for fourth generation languages and NASA projects. Most techniques use COCOMO or extend existing models with techniques like fuzzy logic, neural networks, or statistical
Analysis of software cost estimation using... (ijfcstjournal)
The growing application of software, and the resource constraints in software project development, require more accurate estimates of cost and effort, because of their importance in program planning, coordinated scheduling, and resource management, including staffing and software design with modern modeling tools and methods. Effective control of investment in software development is achieved through accurate cost estimation. Accurate Software Cost Estimation (SCE) is very difficult in the early stages of software development, because many of the input parameters that affect software effort are vague and uncertain at that point. SCE, as the basis of software project development planning, must be highly accurate: if the estimate is lower than the actual value, the confidence factor is reduced, which implies a possibility of project failure; conversely, if the project is estimated above the actual value, the result is unhelpful investment and a waste of resources. Deterministic methods are commonly used in the evaluation of software projects, but the software world is far from linear, and nowadays nonlinear and non-probabilistic methods should be used for performance and estimation. In this paper, we study SCE using Fuzzy Logic (FL) and compare it with the COCOMO model. The results of our investigations show that FL performs well as a model for SCE.
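The fuzzy-logic ingredient such models rely on can be sketched in a few lines: a triangular membership function, and the degrees to which a given project size belongs to "small", "medium", and "large". The breakpoint values here are assumptions for illustration, not the paper's calibration.

```python
# Triangular membership and fuzzification of project size (assumed breakpoints).

def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_size(ksloc):
    return {
        "small":  tri(ksloc, -1, 0, 20),
        "medium": tri(ksloc, 10, 30, 50),
        "large":  tri(ksloc, 40, 60, 200),
    }

print(fuzzify_size(15))   # partially small AND partially medium
```

A full fuzzy SCE model then applies if-then rules over such memberships and defuzzifies the result into an effort estimate.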
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi... (ijseajournal)
Early prediction of software quality is important for better software planning and control. In early development phases, design complexity metrics are considered useful indicators of software testing effort and of some quality attributes. Although many studies investigate the relationship between design complexity, cost, and quality, it is unclear what we have learned beyond the scope of the individual studies. This paper presents a systematic review of the influence of software complexity metrics on quality attributes. We aggregated Spearman correlation coefficients from 59 different datasets in 57 primary studies using a tailored meta-analysis approach. We found that fault proneness and maintainability are the most frequently investigated attributes, and that the Chidamber & Kemerer metric suite is the most frequently used, although not all of its metrics are good quality-attribute indicators. Moreover, the impact of these metrics does not differ between proprietary and open source projects. The result provides some implications for building quality models across project types.
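The statistic the review aggregates is straightforward to sketch: Spearman's coefficient is Pearson correlation applied to the ranks of the two samples (with tied values sharing their average rank). The metric/fault data below is illustrative.

```python
# Spearman rank correlation from scratch.

def rank(xs):
    """Ranks starting at 1; ties receive their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a run of ties
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Illustrative: a complexity metric (WMC) vs. fault count per class
wmc    = [3, 7, 12, 20, 25]
faults = [0, 1, 2, 5, 9]
print(round(spearman(wmc, faults), 3))
```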
EARLY STAGE SOFTWARE DEVELOPMENT EFFORT ESTIMATIONS – MAMDANI FIS VS NEURAL N... (cscpconf)
Accurately estimating software size, cost, effort, and schedule is probably the biggest challenge facing software developers today. It has major implications for the management of software development, because both overestimates and underestimates directly damage software companies. Many models have been proposed over the years by various researchers for carrying out effort estimation, and some studies of early stage effort estimation point to the importance of early estimates. New paradigms offer alternatives for estimating software development effort, in particular Computational Intelligence (CI), which exploits mechanisms of interaction between humans and processes domain knowledge with the intention of building intelligent systems (IS). Among IS techniques, Artificial Neural Networks and Fuzzy Logic are the two most popular soft computing approaches for software development effort estimation. In this paper, neural network models and a Mamdani FIS model are used to predict early stage effort on a student dataset. It was found that the Mamdani FIS predicted early stage effort more efficiently than the neural network based models.
The document describes a proposed tool called the Class Breakpoint Analyzer (CBA) that evaluates software quality at the class level. The CBA extracts metrics like weighted methods per class (WMC), depth of inheritance tree (DIT), number of children (NOC), and lack of cohesion in methods (LCOM) based on the Chidamber and Kemerer (CK) metrics suite. Threshold values are set for each metric to determine if a class is overloaded. The CBA then generates a scorecard for each class to identify classes that need to be refactored to improve quality and reusability. The goal is to help evaluate code quality, identify areas for improvement, and make off-the-shelf
Class quality evaluation using class quality scorecards (IAEME Publication)
The document describes a Class Breakpoint Analyzer tool that evaluates software quality using metrics. The tool extracts metrics like Weighted Methods per Class (WMC), Depth of Inheritance Tree (DIT), Number of Children (NOC), and Lack of Cohesion in Methods (LCOM) from source code. Threshold values for each metric indicate if a class needs restructuring. The tool generates a scorecard to determine if a class is overloaded or saturated. This helps improve reusability of existing software and evaluate code quality for junior programmers. The tool uses metrics from the Chidamber and Kemerer (CK) suite to analyze classes and suggest where to break classes for better design.
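The scorecard idea described above reduces to comparing each class's CK metric values against thresholds and flagging the class when any is exceeded. The threshold numbers below are illustrative assumptions, not the tool's published values.

```python
# Sketch of a CK-metrics scorecard with assumed thresholds.

THRESHOLDS = {"WMC": 20, "DIT": 5, "NOC": 7, "LCOM": 0.8}

def scorecard(metrics):
    """Per-metric pass/fail flags plus an overall 'overloaded' verdict."""
    flags = {name: metrics[name] > limit for name, limit in THRESHOLDS.items()}
    return {"flags": flags, "overloaded": any(flags.values())}

report = scorecard({"WMC": 34, "DIT": 3, "NOC": 2, "LCOM": 0.9})
print(report["overloaded"],
      [m for m, bad in report["flags"].items() if bad])  # which metrics exceeded
```

A flagged class (here, by WMC and LCOM) is a candidate for the kind of class-breaking refactoring the analyzer suggests.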
The article proposes a new model for optimizing software effort and cost estimation based on code reusability. The model compares new projects to previously completed, similar projects stored in a code repository. By searching for and retrieving reusable code, functions, and methods from old projects, the model aims to reduce effort and cost estimates for new software development. The model is described as being based on the concept of estimation by analogy and using innovative search and retrieval techniques to achieve code reuse and thus decreased cost and effort estimates.
A METRICS-BASED MODEL FOR ESTIMATING THE MAINTENANCE EFFORT OF PYTHON SOFTWARE (ijseajournal)
Estimating software maintenance effort is a substantial area within software project management, and good estimates improve the overall performance and efficiency of software. The Constructive Cost Model (COCOMO) and other effort estimation models are described in the literature but are inappropriate for the Python programming language. This research aimed to modify COCOMO II by considering a range of factors influencing Python maintenance effort, and incorporated size and complexity metrics to estimate maintenance effort. A within-subjects experimental design was adopted, and an experiment questionnaire was administered to forty subjects who rated the maintainability of twenty Python programs. Data collected from the questionnaire was analyzed using descriptive statistics, and metric values were collected using a purpose-built metric tool. The subjects' maintainability ratings were correlated with the developed model's maintenance effort; a strong correlation of 0.610 was reported, indicating that the model is valid.
IRJET - Analysis of Software Cost Estimation Techniques (IRJET Journal)
This document analyzes and compares different software cost estimation techniques using machine learning algorithms. It uses the COCOMO and function point estimation models on NASA project datasets to test the performance of the ZeroR and M5Rules classifiers. The M5Rules classifier produced more accurate results with lower mean absolute errors and root mean squared errors compared to COCOMO, function points, and the ZeroR classifier. Therefore, the study suggests using M5Rules techniques to build models for more precise software effort estimation.
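The ZeroR baseline in this comparison is simple enough to sketch exactly: it ignores every feature and always predicts the mean of the training target, which is why any useful learner (here, M5Rules) should beat it.

```python
# ZeroR baseline for effort estimation: a constant mean predictor.

def zero_r_fit(efforts):
    mean = sum(efforts) / len(efforts)
    return lambda _features: mean      # same answer regardless of input

train_effort = [120, 80, 200, 160]     # person-hours (illustrative)
model = zero_r_fit(train_effort)
print(model({"kloc": 42}))             # the training mean, for any project
```

Comparing a candidate model's mean absolute error against ZeroR's, as the study does, shows how much signal the model actually extracts from the features.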
A DECISION SUPPORT SYSTEM FOR ESTIMATING COST OF SOFTWARE PROJECTS USING A HY... (ijfcstjournal)
One of the major challenges in software today is cost estimation: estimating the cost of all activities including software development, design, supervision, maintenance, and so on. Accurate cost estimation of software projects lets internal and external processes, staff work, effort, and overheads be coordinated with one another. In managing software projects, estimation must be taken into account to reduce costs, delays, and possible risks and so avoid project failure. In this paper, a decision support system combining a multi-layer artificial neural network with a decision tree is proposed to estimate the cost of software projects. In the model included in the proposed system, the normalization of factors, which is vital in evaluating effort and cost estimates, is carried out using a C4.5 decision tree; training and testing of the factors are done by the multi-layer artificial neural network, which allocates the most optimal values to them. The experimental results and evaluations on the NASA60 dataset show that the proposed system has a lower total average relative error than the COCOMO model.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM...cscpconf
Computing software complexity in the requirement analysis phase of the software
development life cycle (SDLC) would be of enormous benefit for estimating the
development and testing effort of yet-to-be-developed software. A relationship between
source code and the difficulty of developing it is also examined in order to estimate the
complexity of the proposed software for cost estimation, manpower build-up, and code and
developer evaluation. This paper therefore presents a systematic, integrated approach
for estimating software development and testing effort on the basis of the improved
requirement based complexity (IRBC) of the proposed software, obtained from its software
requirement specification (SRS). The IRBC measure serves as the basis for estimating these
development activities, enabling developers and practitioners to predict critical
information about the intricacies of software development. For validation, the proposed
measures are compared with various established and prevalent practices proposed in the past.
Finally, the results validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, early-warning, and compare well with other measures proposed in the past.
Insights of effectivity analysis of learning-based approaches towards softwar...IJECEIAES
Software defect prediction is an essential part of mitigating risk in software development and is known to contribute to enhancing software quality. Various methodologies have evolved to address this problem, with learning-based methods the dominant contributors. The problem identified is that many questions remain unanswered about the practical viability of adopting such learning-based approaches in software quality management. This paper contributes to addressing that challenge by presenting a simplified, compact, and crisp analysis of the effectiveness of learning-based schemes. It reports the major findings of an effectivity analysis of machine learning, deep learning, hybrid, and other miscellaneous approaches deployed for fault prediction, followed by a discussion of research trends. The major findings indicate that feature selection, data imbalance, interpretability, and inadequate consideration of context are the prime gaps in existing methods. The paper also identifies research gaps and the essential learning outcomes of the review.
Contributors to Reduce Maintainability Cost at the Software Implementation PhaseWaqas Tariq
This document discusses factors that can reduce software maintenance costs during the implementation phase. It identifies that maintenance costs are highest during software development phases. The objective is to define criteria to assess software quality characteristics and assist during implementation. This will help reduce maintenance costs by creating criteria groups to support writing standard code, developing a model to apply criteria, and increasing understandability. Student groups will study code standardization, write programs, and test software maintenance on programs to validate the model and proposed criteria.
The document discusses software cost estimation and planning. It describes several models for software cost estimation including COCOMO and Putnam models. COCOMO uses staff months and lines of code to initially estimate effort which is then adjusted based on cost drivers. Putnam uses a Rayleigh curve staffing model based on volume, difficulty, and time constraints. Thorough planning is important to software projects and factors like life cycle, quality assurance, and risk management should be considered. Historical data and validated models can help produce more accurate cost and schedule estimates.
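The COCOMO flow summarized above (a nominal effort from size, then adjustment by cost drivers) can be sketched in a few lines. The coefficients below are the classic organic-mode values (a = 2.4, b = 1.05); the driver multipliers in the usage example are hypothetical:

```python
def cocomo_effort(kloc, a=2.4, b=1.05, cost_drivers=()):
    """Nominal effort in staff-months from size (KLOC), scaled by the
    effort adjustment factor: the product of the cost-driver multipliers."""
    eaf = 1.0
    for multiplier in cost_drivers:
        eaf *= multiplier
    return a * (kloc ** b) * eaf

# Hypothetical drivers: high complexity (1.15) but experienced team (0.9)
adjusted = cocomo_effort(32, cost_drivers=[1.15, 0.9])
```

The same two-step shape (size-based nominal estimate, multiplicative adjustment) underlies most COCOMO variants mentioned in this listing.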
Development of depth map from stereo images using sum of absolute differences...nooriasukmaningtyas
This article proposes a framework for depth map reconstruction using stereo images. Fundamentally, this map provides important information commonly used in essential applications such as autonomous vehicle navigation, drone navigation, and 3D surface reconstruction. To develop an accurate depth map, the framework must be robust against the challenging regions of low texture, plain color, and repetitive pattern in the input stereo image. The development of this map requires several stages, starting with matching cost calculation, followed by cost aggregation, optimization, and refinement. Hence, this work develops a framework with the sum of absolute differences (SAD) and a combination of two edge-preserving filters to increase robustness against the challenging regions. The SAD is computed with a block matching technique to increase the efficiency of the matching process in low-texture and plain-color regions, while the two edge-preserving filters increase accuracy in repetitive-pattern regions. The results, reported on the standard Middlebury dataset, show that the proposed method is accurate and capable of handling the challenging regions. The framework is also efficient and can be applied to 3D surface reconstruction. Moreover, this work is highly competitive with previously available methods.
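The matching-cost stage described above can be sketched with a plain winner-takes-all SAD block matcher. Everything here (window size, disparity range, the toy images) is illustrative, not the paper's implementation:

```python
import numpy as np

def sad_disparity(left, right, max_disp=4, block=3):
    """Winner-takes-all disparity via sum of absolute differences (SAD).
    Convention: a left-image pixel at column x matches right column x - d."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            best_cost, best_d = None, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(int)
                cost = int(np.abs(patch - cand).sum())
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Toy stereo pair: the right image is the left image shifted by disparity 2.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(10, 14))
right = np.roll(left, -2, axis=1)
disparity = sad_disparity(left, right)
```

The paper's contribution is in the later stages (aggregation with edge-preserving filters); this sketch only shows the raw cost computation that feeds them.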
Model predictive controller for a retrofitted heat exchanger temperature cont...nooriasukmaningtyas
This paper aims to demonstrate the practical aspects of process control theory for undergraduate students at the Department of Chemical Engineering at the University of Bahrain. Both the ubiquitous proportional-integral-derivative (PID) controller and model predictive control (MPC), together with their auxiliaries, were designed and implemented in a real-time framework. The latter was realized by retrofitting an existing plate-and-frame heat exchanger unit that had been operated using an analog PID temperature controller. The upgraded control system consists of a personal computer (PC), a low-cost signal conditioning circuit, a National Instruments USB 6008 data acquisition card, and LabVIEW software. LabVIEW control design and simulation modules were used to design and implement the PID and MPC controllers. The performance of the designed controllers was evaluated while controlling the outlet temperature of the retrofitted plate-and-frame heat exchanger. The distinguished feature of the MPC controller in handling input and output constraints was observed in real-time. From a pedagogical point of view, realizing the theory of process control through practical implementation substantially enhanced the students' learning and the instructor's teaching experience.
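As a toy illustration of the classical loop discussed above, a discrete PID controller driving a first-order plant (all gains and plant constants hypothetical, not the heat exchanger's) can be simulated in a few lines:

```python
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.0, dt=0.1):
    """One update of a textbook discrete PID controller.
    state carries (integral, previous_error) between calls."""
    integral, prev = state
    integral += error * dt
    derivative = (error - prev) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Toy first-order plant dT/dt = (u - T) / tau, simulated with Euler steps.
T, tau, dt, setpoint = 20.0, 2.0, 0.1, 60.0
state = (0.0, setpoint - T)   # seed previous error to avoid a derivative kick
for _ in range(2000):
    u, state = pid_step(setpoint - T, state, dt=dt)
    T += dt * (u - T) / tau
```

The integral term drives the steady-state error to zero, which is why the simulated temperature settles at the setpoint; MPC replaces this fixed law with an optimization that can also honor input/output constraints.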
Similar to Enhancing software development cost control by forecasting the cost of rework: preliminary study
FACTORS ON SOFTWARE EFFORT ESTIMATION ijseajournal
Software effort estimation is an important process in the system development life cycle, as it may
affect the success of software projects if project designers estimate them inaccurately. In the past few
decades, various effort prediction models have been proposed by academicians and practitioners.
Traditional estimation techniques include Lines of Code (LOC), the Function Point Analysis (FPA) method,
and Mark II Function Points (Mark II FP) which have proven unsatisfactory for predicting effort of all
types of software. In this study, the author proposed a regression model to predict the effort required to
design small and medium scale application software. To develop such a model, the author used 60
completed software projects developed by a software company in Macau. From the projects, the author
extracted factors and applied them to a regression model. A prediction model for software effort with
an accuracy of MMRE = 8% was constructed.
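MMRE, the accuracy figure quoted above, is simply the mean magnitude of relative error between actual and predicted efforts; a minimal sketch:

```python
def mmre(actual, predicted):
    """Mean magnitude of relative error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)
```

An MMRE of 0.08 means predictions deviate from actual effort by 8% on average, which is why it is a common headline figure for estimation models.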
For years, software projects have mostly exceeded budget, been delivered late, and failed to satisfy the customer. In the past, many traditional development models such as waterfall, spiral, iterative, and prototyping methods were used to build software systems. In recent years, agile models have been widely used in developing software products. The major reasons are simplicity, accommodating requirement changes at any time, a lightweight approach, and delivering a working product early and in short iterations. Whatever development model is used, it remains a challenge for software engineers to accurately estimate the size, effort, and time required to develop a software system. This survey focuses on existing estimation models used in traditional as well as agile software development.
The performance of an algorithm can be improved using a parallel programming approach. In this study, the bubble sort algorithm was benchmarked on computers with various specifications. Experimental results show that parallel programming can reduce execution time by 61%-65% compared with serial programming.
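The usual way to parallelize a comparison sort like the one above is to partition the input, sort each chunk in a worker, and merge the results. A minimal sketch (the thread pool shows the structure only: with CPython's GIL, the real speedups reported above require process-based workers):

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def bubble_sort(items):
    """Plain O(n^2) bubble sort on a copy of the input."""
    a = list(items)
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def parallel_bubble_sort(data, workers=4):
    """Partition the data, sort each chunk in a worker, then k-way merge."""
    step = max(1, len(data) // workers)
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        sorted_chunks = list(pool.map(bubble_sort, chunks))
    return list(heapq.merge(*sorted_chunks))
```

Because bubble sort is quadratic, sorting k chunks of n/k elements each already costs roughly 1/k of the serial work even before parallel execution, which is part of why such large savings are achievable.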
Insights on Research Techniques towards Cost Estimation in Software Design IJECEIAES
This document summarizes research on techniques for cost estimation in software design. It begins by describing common cost estimation techniques like Constructive Cost Modeling (COCOMO) and Function Point Analysis. It then analyzes research trends in cost estimation, effort estimation, and fault prediction based on literature from 2010 to the present. Fewer than 50 papers were found related to overall cost estimation, fewer than 25 for effort estimation, and only 9 for fault prediction. The document then reviews existing research addressing general cost estimation, enhancement of Function Point Analysis, statistical modeling approaches, cost estimation for embedded systems, and estimation for fourth generation languages and NASA projects. Most techniques use COCOMO or extend existing models with techniques like fuzzy logic, neural networks, or statistical methods.
Analysis of software cost estimation usingijfcstjournal
The growing application of software, and the resource constraints in software project development,
call for more accurate estimates of cost and effort, given their importance in program planning,
coordinated scheduling, and resource management, including the number of programmers and the design
tools and modern modeling methods used. Effective control of investment in software development is
achieved by accurate cost estimation. Accurate software cost estimation (SCE) is very difficult in the
early stages of software development because many of the input parameters that affect software effort
are vague and uncertain at that point. SCE, as the basis of software project development planning,
must be highly accurate: if the estimate is lower than the actual value, the confidence factor is
reduced, implying a possibility of project failure; conversely, if the project is estimated at more
than its actual value, the result is unhelpful investment and a waste of resources. Deterministic
methods are commonly used in the evaluation of software projects, but the software world is far from
linear, and nonlinear, non-probabilistic methods should now be used for performance and estimation.
In this paper, we study SCE using fuzzy logic (FL) and compare it with the COCOMO model. The results
of our investigations show that FL is a performant model for SCE.
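Fuzzy-logic estimators of this kind replace a crisp size input with overlapping membership functions and blend the rule outputs. A minimal sketch with a hypothetical rule base (the set boundaries and effort values are illustrative, not the paper's):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_effort(kloc):
    """Weighted-average (Sugeno-style) effort from fuzzy size categories.
    Hypothetical rules: small/medium/large size -> effort in staff-months."""
    rules = [((0, 10, 30), 25), ((10, 30, 60), 90), ((30, 60, 120), 240)]
    num = den = 0.0
    for (a, b, c), effort in rules:
        mu = tri(kloc, a, b, c)
        num += mu * effort
        den += mu
    return num / den if den else 0.0
```

A size of 20 KLOC belongs half to "small" and half to "medium", so the estimate interpolates smoothly between the two rule outputs rather than jumping at a category boundary, which is the appeal of FL for vague early-stage inputs.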
The Impact of Software Complexity on Cost and Quality - A Comparative Analysi...ijseajournal
Early prediction of software quality is important for better software planning and controlling. In early
development phases, design complexity metrics are considered as useful indicators of software testing
effort and some quality attributes. Although many studies investigate the relationship between design
complexity and cost and quality, it is unclear what we have learned beyond the scope of individual studies.
This paper presented a systematic review on the influence of software complexity metrics on quality
attributes. We aggregated Spearman correlation coefficients from 59 different data sets from 57 primary
studies by a tailored meta-analysis approach. We found that fault proneness and maintainability are most
frequently investigated attributes. The Chidamber & Kemerer metric suite is most frequently used, but
not all of its metrics are good quality attribute indicators. Moreover, the impact of these metrics
does not differ between proprietary and open source projects. The results provide some implications
for building quality models across project types.
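The paper aggregates Spearman coefficients with a tailored meta-analysis approach. One standard way to pool correlations across data sets (shown here only as an illustration, not the authors' exact procedure) is fixed-effect weighting on Fisher's z scale:

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of correlation coefficients across data sets:
    transform to Fisher's z, weight by n - 3 (inverse variance for Pearson;
    Spearman's variance is slightly larger), then transform back."""
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)
```

Pooling on the z scale keeps the estimate inside (-1, 1) and gives larger data sets proportionally more influence.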
EARLY STAGE SOFTWARE DEVELOPMENT EFFORT ESTIMATIONS – MAMDANI FIS VS NEURAL N...cscpconf
Accurately estimating the software size, cost, effort and schedule is probably the biggest
challenge facing software developers today. It has major implications for the management of
software development because both the overestimates and underestimates have direct impact for
causing damage to software companies. Many models have been proposed over the years by
various researchers for carrying out effort estimation, and studies of early-stage
effort estimation suggest the importance of early estimates. New paradigms offer alternatives
for estimating software development effort, in particular Computational Intelligence (CI),
which exploits mechanisms of interaction between humans and domain
knowledge with the intention of building intelligent systems (IS). Among IS,
artificial neural networks and fuzzy logic are the two most popular soft computing techniques
for software development effort estimation. In this paper, neural network models and a Mamdani
FIS model have been used to predict early-stage effort estimates using a student dataset.
It was found that the Mamdani FIS predicted early-stage effort more efficiently than
the neural network based models.
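A Mamdani FIS of the kind compared above clips each rule's output set by its firing strength (min implication), aggregates the clipped sets by max, and defuzzifies by centroid. A single-input sketch with hypothetical size/effort sets (not the paper's rule base):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rules over a 0..100 "size" input and 0..100 "effort" output:
#   IF size is small THEN effort is low;  IF size is large THEN effort is high
IN_SETS = {"small": (-1.0, 0.0, 50.0), "large": (0.0, 50.0, 101.0)}
RULES = [("small", (0.0, 20.0, 40.0)), ("large", (30.0, 60.0, 90.0))]

def mamdani_effort(size):
    """Min implication, max aggregation, centroid defuzzification."""
    ys = [i * 0.5 for i in range(201)]          # discretized output universe
    agg = []
    for y in ys:
        mu = 0.0
        for label, out_set in RULES:
            firing = tri(size, *IN_SETS[label])
            mu = max(mu, min(firing, tri(y, *out_set)))
        agg.append(mu)
    total = sum(agg)
    return sum(y * m for y, m in zip(ys, agg)) / total if total else 0.0
```

At the extremes only one rule fires, so the output sits at that rule's set centroid (20 or 60 here); in between, the centroid of the blended sets interpolates smoothly.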
Class quality evaluation using class quality scorecardsIAEME Publication
The document describes a Class Breakpoint Analyzer tool that evaluates software quality using metrics. The tool extracts metrics like Weighted Methods per Class (WMC), Depth of Inheritance Tree (DIT), Number of Children (NOC), and Lack of Cohesion in Methods (LCOM) from source code. Threshold values for each metric indicate if a class needs restructuring. The tool generates a scorecard to determine if a class is overloaded or saturated. This helps improve reusability of existing software and evaluate code quality for junior programmers. The tool uses metrics from the Chidamber and Kemerer (CK) suite to analyze classes and suggest where to break classes for better design.
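The scorecard idea described above (compare each CK metric to a threshold and flag overloaded classes) can be sketched directly; the threshold values below are hypothetical, since the tool sets its own per-metric limits:

```python
# Hypothetical threshold values; the described tool defines its own per metric.
THRESHOLDS = {"WMC": 20, "DIT": 5, "NOC": 10, "LCOM": 0.8}

def class_scorecard(metrics):
    """Flag each CK metric that exceeds its threshold; a class with any
    flagged metric is considered overloaded and a refactoring candidate."""
    flags = {name: value > THRESHOLDS[name] for name, value in metrics.items()}
    return {"flags": flags, "overloaded": any(flags.values())}

card = class_scorecard({"WMC": 25, "DIT": 2, "NOC": 1, "LCOM": 0.5})
```

Here a class with 25 weighted methods trips the WMC threshold and is reported as overloaded even though its other metrics are healthy.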
The article proposes a new model for optimizing software effort and cost estimation based on code reusability. The model compares new projects to previously completed, similar projects stored in a code repository. By searching for and retrieving reusable code, functions, and methods from old projects, the model aims to reduce effort and cost estimates for new software development. The model is described as being based on the concept of estimation by analogy and using innovative search and retrieval techniques to achieve code reuse and thus decreased cost and effort estimates.
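Estimation by analogy, the concept named above, amounts to retrieving the most similar past project and adjusting its recorded effort; a minimal sketch where the similarity measure and the reuse discount are simplifying assumptions, not the article's exact technique:

```python
import math

def estimate_by_analogy(new_features, repository, reuse_fraction=0.0):
    """Pick the most similar past project (Euclidean distance on numeric
    feature vectors) and discount its effort by the reusable-code fraction."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(repository, key=lambda p: dist(new_features, p["features"]))
    return nearest["effort"] * (1.0 - reuse_fraction)

# Hypothetical repository: feature vectors are e.g. (KLOC, team size)
repo = [{"features": [10, 3], "effort": 100},
        {"features": [50, 8], "effort": 400}]
```

Finding that a quarter of the nearest project's code can be reused cuts the estimate by the same fraction, which is the article's central cost-reduction mechanism.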
A METRICS -BASED MODEL FOR ESTIMATING THE MAINTENANCE EFFORT OF PYTHON SOFTWAREijseajournal
Software project management includes a substantial area for estimating software maintenance effort.
Estimation of software maintenance effort improves the overall performance and efficiency of software.
The Constructive Cost Model (COCOMO) and other effort estimation models are mentioned in literature
but are inappropriate for the Python programming language. This research aimed to modify the Constructive
Cost Model (COCOMO II) by considering a range of Python maintenance effort influencing factors to get
estimations and incorporated size and complexity metrics to estimate maintenance effort. A within-subjects
experimental design was adopted and an experiment questionnaire was administered to forty subjects
aiming to rate the maintainability of twenty Python programs. Data collected from the experiment
questionnaire was analyzed using descriptive statistics. Metric values were collected using a developed
metric tool. The subject ratings of software maintainability were correlated with the developed model’s
maintenance effort; a strong correlation of 0.610 was reported, indicating that the model is valid.
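The validation step above correlates ordinal subject ratings with model estimates; a rank correlation (Spearman, without tie correction, shown purely as an illustration of the kind of statistic involved) can be computed from scratch:

```python
def spearman(xs, ys):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2-1)).
    No tie handling, for illustration only."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A perfectly monotonic relationship yields 1.0 (or -1.0 when reversed); a value such as 0.610 indicates a clearly positive but imperfect agreement between raters and model.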
Control of a servo-hydraulic system utilizing an extended wavelet functional ...nooriasukmaningtyas
Servo-hydraulic systems have been extensively employed in various industrial applications. However, these systems are characterized by their highly complex and nonlinear dynamics, which complicates the control design stage of such systems. In this paper, an extended wavelet functional link neural network (EWFLNN) is proposed to control the displacement response of the servo-hydraulic system. To optimize the controller's parameters, a recently developed optimization technique, which is called the modified sine cosine algorithm (M-SCA), is exploited as the training method. The proposed controller has achieved remarkable results in terms of tracking two different displacement signals and handling external disturbances. From a comparative study, the proposed EWFLNN controller has attained the best control precision compared with those of other controllers, namely, a proportional-integral-derivative (PID) controller, an artificial neural network (ANN) controller, a wavelet neural network (WNN) controller, and the original wavelet functional link neural network (WFLNN) controller. Moreover, compared to the genetic algorithm (GA) and the original sine cosine algorithm (SCA), the M-SCA has shown better optimization results in finding the optimal values of the controller's parameters.
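The original sine cosine algorithm that the M-SCA above modifies moves each candidate toward (or past) the best solution found so far, with the step amplitude shrinking over iterations. A minimal, simplified sketch of the standard SCA update on a toy objective (not the paper's modified variant or its controller-tuning setup):

```python
import math, random

def sca_minimize(f, dim=2, agents=30, iters=300, bound=5.0, seed=1):
    """Minimal sine cosine algorithm: each coordinate steps by
    r1*sin(r2)*|r3*best - x| or the cosine analogue, with r1 decaying to 0."""
    rnd = random.Random(seed)
    X = [[rnd.uniform(-bound, bound) for _ in range(dim)] for _ in range(agents)]
    best = min(X, key=f)[:]
    a = 2.0
    for t in range(iters):
        r1 = a - t * a / iters            # exploration shrinks over time
        for x in X:
            for j in range(dim):
                r2 = rnd.uniform(0.0, 2.0 * math.pi)
                r3 = rnd.uniform(0.0, 2.0)
                step = r1 * (math.sin(r2) if rnd.random() < 0.5 else math.cos(r2))
                x[j] += step * abs(r3 * best[j] - x[j])
                x[j] = max(-bound, min(bound, x[j]))
            if f(x) < f(best):
                best = x[:]
    return best, f(best)

sphere = lambda v: sum(c * c for c in v)   # toy objective, minimum 0 at origin
best, val = sca_minimize(sphere)
```

In the paper this optimizer searches the controller's parameter space, with the tracking error playing the role of the objective function.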
Decentralised optimal deployment of mobile underwater sensors for covering la...nooriasukmaningtyas
This paper presents the problem of sensing coverage of layers of the ocean in three dimensional underwater environments. We propose distributed control laws to drive mobile underwater sensors to optimally cover a given confined layer of the ocean. By applying this algorithm at first the mobile underwater sensors adjust their depth to the specified depth. Then, they make a triangular grid across a given area. Afterwards, they randomly move to spread across the given grid. These control laws only rely on local information also they are easily implemented and computationally effective as they use some easy consensus rules. The feature of exchanging information just among neighbouring mobile sensors keeps the information exchange minimum in the whole networks and makes this algorithm practicable option for undersea. The efficiency of the presented control laws is confirmed via mathematical proof and numerical simulations.
Evaluation quality of service for internet of things based on fuzzy logic: a ...nooriasukmaningtyas
The development of the internet of thing (IoT) technology has become a major concern in sustainability of quality of service (SQoS) in terms of efficiency, measurement, and evaluation of services, such as our smart home case study. Based on several ambiguous linguistic and standard criteria, this article deals with quality of service (QoS). We used fuzzy logic to select the most appropriate and efficient services. For this reason, we have introduced a new paradigmatic approach to assess QoS. In this regard, to measure SQoS, linguistic terms were collected for identification of ambiguous criteria. This paper collects the results of other work to compare the traditional assessment methods and techniques in IoT. It has been proven that the comparison that traditional valuation methods and techniques could not effectively deal with these metrics. Therefore, fuzzy logic is a worthy method to provide a good measure of QoS with ambiguous linguistic and criteria. The proposed model addresses with constantly being improved, all the main axes of the QoS for a smart home. The results obtained also indicate that the model with its fuzzy performance importance index (FPII) has efficiently evaluate the multiple services of SQoS.
Low power architecture of logic gates using adiabatic techniquesnooriasukmaningtyas
The growing significance of portable systems to limit power consumption in ultra-large-scale-integration chips of very high density, has recently led to rapid and inventive progresses in low-power design. The most effective technique is adiabatic logic circuit design in energy-efficient hardware. This paper presents two adiabatic approaches for the design of low power circuits, modified positive feedback adiabatic logic (modified PFAL) and the other is direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design. By improving the performance of basic gates, one can improvise the whole system performance. In this paper proposed circuit design of the low power architecture of OR/NOR, AND/NAND, and XOR/XNOR gates are presented using the said approaches and their results are analyzed for powerdissipation, delay, power-delay-product and rise time and compared with the other adiabatic techniques along with the conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with DC-DB PFAL technique outperform with the percentage improvement of 65% for NOR gate and 7% for NAND gate and 34% for XNOR gate over the modified PFAL techniques at 10 MHz respectively.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
Smart monitoring system using NodeMCU for maintenance of production machinesnooriasukmaningtyas
Maintenance is an activity that helps to reduce risk, increase productivity, improve quality, and minimize production costs. The necessity for maintenance actions will increase efficiency and enhance the safety and quality of products and processes. On getting these conditions, it is necessary to implement a monitoring system used to observe machines' conditions from time to time, especially the machine parts that often experience problems. This paper presents a low-cost intelligent monitoring system using NodeMCU to continuously monitor machine conditions and provide warnings in the case of machine failure. Not only does it provide alerts, but this monitoring system also generates historical data on machine conditions to the Google Cloud (Google Sheet), includes which machines were down, downtime, issues occurred, repairs made, and technician handling. The results obtained are machine operators do not need to lose a relatively long time to call the technician. Likewise, the technicians assisted in carrying out machine maintenance activities and online reports so that errors that often occur due to human error do not happen again. The system succeeded in reducing the technician-calling time and maintenance workreporting time up to 50%. The availability of online and real-time maintenance historical data will support further maintenance strategy.
Design and simulation of a software defined networkingenabled smart switch, f...nooriasukmaningtyas
Using sustainable energy is the future of our planet earth, this became not only economically efficient but also a necessity for the preservation of life on earth. Because of such necessity, smart grids became a very important issue to be researched. Many literatures discussed this topic and with the development of internet of things (IoT) and smart sensors, smart grids are developed even further. On the other hand, software defined networking is a technology that separates the control plane from the data plan of the network. It centralizes the management and the orchestration of the network tasks by using a network controller. The network controller is the heart of the SDN-enabled network, and it can control other networking devices using software defined networking (SDN) protocols such as OpenFlow. A smart switching mechanism called (SDN-smgrid-sw) for the smart grid will be modeled and controlled using SDN. We modeled the environment that interact with the sensors, for the sun and the wind elements. The Algorithm is modeled and programmed for smart efficient power sharing that is managed centrally and monitored using SDN controller. Also, all if the smart grid elements (power sources) are connected to the IP network using IoT protocols.
Efficient wireless power transmission to remote the sensor in restenosis coro...nooriasukmaningtyas
In this study, the researchers have proposed an alternative technique for designing an asymmetric 4 coil-resonance coupling module based on the series-to-parallel topology at 27 MHz industrial scientific medical (ISM) band to avoid the tissue damage, for the constant monitoring of the in-stent restenosis coronary artery. This design consisted of 2 components, i.e., the external part that included 3 planar coils that were placed outside the body and an internal helical coil (stent) that was implanted into the coronary artery in the human tissue. This technique considered the output power and the transfer efficiency of the overall system, coil geometry like the number of coils per turn, and coil size. The results indicated that this design showed an 82% efficiency in the air if the transmission distance was maintained as 20 mm, which allowed the wireless power supply system to monitor the pressure within the coronary artery when the implanted load resistance was 400 Ω.
Grid reactive voltage regulation and cost optimization for electric vehicle p...nooriasukmaningtyas
Expecting large electric vehicle (EV) usage in the future due to environmental issues, state subsidies, and incentives, the impact of EV charging on the power grid is required to be closely analyzed and studied for power quality, stability, and planning of infrastructure. When a large number of energy storage batteries are connected to the grid as a capacitive load the power factor of the power grid is inevitably reduced, causing power losses and voltage instability. In this work large-scale 18K EV charging model is implemented on IEEE 33 network. Optimization methods are described to search for the location of nodes that are affected most due to EV charging in terms of power losses and voltage instability of the network. Followed by optimized reactive power injection magnitude and time duration of reactive power at the identified nodes. It is shown that power losses are reduced and voltage stability is improved in the grid, which also complements the reduction in EV charging cost. The result will be useful for EV charging stations infrastructure planning, grid stabilization, and reducing EV charging costs.
Topology network effects for double synchronized switch harvesting circuit on...nooriasukmaningtyas
Energy extraction takes place using several different technologies, depending on the type of energy and how it is used. The objective of this paper is to study topology influence for a smart network based on piezoelectric materials using the double synchronized switch harvesting (DSSH). In this work, has been presented network topology for circuit DSSH (DSSH Standard, Independent DSSH, DSSH in parallel, mono DSSH, and DSSH in series). Using simulation-based on a structure with embedded piezoelectric system harvesters, then compare different topology of circuit DSSH for knowledge is how to connect the circuit DSSH together and how to implement accurately this circuit strategy for maximizing the total output power. The network topology DSSH extracted power a technique allows again up to in terms of maximal power output compared with network topology standard extracted at the resonant frequency. The simulation results show that by using the same input parameters the maximum efficiency for topology DSSH in parallel produces 120% more energy than topology DSSH-series. In addition, the energy harvesting by mono-DSSH is more than DSSH-series by 650% and it has exceeded DSSHind by 240%.
Improving the design of super-lift Luo converter using hybrid switching capac...nooriasukmaningtyas
In this article, an improvement to the positive output super-lift Luo converter (POSLC) has been proposed to get high gain at a low duty cycle. Also, reduce the stress on the switch and diodes, reduce the current through the inductors to reduce loss, and increase efficiency. Using a hybrid switch unit composed of four inductors and two capacitors it is replaced by the main inductor in the elementary circuit. It’s charged in parallel with the same input voltage and discharged in series. The output voltage is increased according to the number of components. The gain equation is modeled. The boundary condition between continuous conduction mode (CCM) and discontinuous conduction mode (DCM) has been derived. Passive components are designed to get high output voltage (8 times at D=0.5) and low ripple about (0.004). The circuit is simulated and analyzed using MATLAB/Simulink. Maximum power point tracker (MPPT) controls the converter to provide the most interest from solar energy.
Third harmonic current minimization using third harmonic blocking transformernooriasukmaningtyas
Zero sequence blocking transformers (ZSBTs) are used to suppress third harmonic currents in 3-phase systems. Three-phase systems where singlephase loading is present, there is every chance that the load is not balanced. If there is zero-sequence current due to unequal load current, then the ZSBT will impose high impedance and the supply voltage at the load end will be varied which is not desired. This paper presents Third harmonic blocking transformer (THBT) which suppresses only higher harmonic zero sequences. The constructional features using all windings in single-core and construction using three single-phase transformers explained. The paper discusses the constructional features, full details of circuit usage, design considerations, and simulation results for different supply and load conditions. A comparison of THBT with ZSBT is made with simulation results by considering four different cases
Power quality improvement of distribution systems asymmetry caused by power d...nooriasukmaningtyas
With an increase of non-linear load in today’s electrical power systems, the rate of power quality drops and the voltage source and frequency deteriorate if not properly compensated with an appropriate device. Filters are most common techniques that employed to overcome this problem and improving power quality. In this paper an improved optimization technique of filter applies to the power system is based on a particle swarm optimization with using artificial neural network technique applied to the unified power flow quality conditioner (PSO-ANN UPQC). Design particle swarm optimization and artificial neural network together result in a very high performance of flexible AC transmission lines (FACTs) controller and it implements to the system to compensate all types of power quality disturbances. This technique is very powerful for minimization of total harmonic distortion of source voltages and currents as a limit permitted by IEEE-519. The work creates a power system model in MATLAB/Simulink program to investigate our proposed optimization technique for improving control circuit of filters. The work also has measured all power quality disturbances of the electrical arc furnace of steel factory and suggests this technique of filter to improve the power quality.
Studies enhancement of transient stability by single machine infinite bus sys...nooriasukmaningtyas
Maintaining network synchronization is important to customer service. Low fluctuations cause voltage instability, non-synchronization in the power system or the problems in the electrical system disturbances, harmonics current and voltages inflation and contraction voltage. Proper tunning of the parameters of stabilizer is prime for validation of stabilizer. To overcome instability issues and get reinforcement found a lot of the techniques are developed to overcome instability problems and improve performance of power system. Genetic algorithm was applied to optimize parameters and suppress oscillation. The simulation of the robust composite capacitance system of an infinite single-machine bus was studied using MATLAB was used for optimization purpose. The critical time is an indication of the maximum possible time during which the error can pass in the system to obtain stability through the simulation. The effectiveness improvement has been shown in the system
Renewable energy based dynamic tariff system for domestic load managementnooriasukmaningtyas
To deal with the present power-scenario, this paper proposes a model of an advanced energy management system, which tries to achieve peak clipping, peak to average ratio reduction and cost reduction based on effective utilization of distributed generations. This helps to manage conventional loads based on flexible tariff system. The main contribution of this work is the development of three-part dynamic tariff system on the basis of time of utilizing power, available renewable energy sources (RES) and consumers’ load profile. This incorporates consumers’ choice to suitably select for either consuming power from conventional energy sources and/or renewable energy sources during peak or off-peak hours. To validate the efficiency of the proposed model we have comparatively evaluated the model performance with existing optimization techniques using genetic algorithm and particle swarm optimization. A new optimization technique, hybrid greedy particle swarm optimization has been proposed which is based on the two aforementioned techniques. It is found that the proposed model is superior with the improved tariff scheme when subjected to load management and consumers’ financial benefit. This work leads to maintain a healthy relationship between the utility sectors and the consumers, thereby making the existing grid more reliable, robust, flexible yet cost effective.
Energy harvesting maximization by integration of distributed generation based...nooriasukmaningtyas
The purpose of distributed generation systems (DGS) is to enhance the distribution system (DS) performance to be better known with its benefits in the power sector as installing distributed generation (DG) units into the DS can introduce economic, environmental and technical benefits. Those benefits can be obtained if the DG units' site and size is properly determined. The aim of this paper is studying and reviewing the effect of connecting DG units in the DS on transmission efficiency, reactive power loss and voltage deviation in addition to the economical point of view and considering the interest and inflation rate. Whale optimization algorithm (WOA) is introduced to find the best solution to the distributed generation penetration problem in the DS. The result of WOA is compared with the genetic algorithm (GA), particle swarm optimization (PSO), and grey wolf optimizer (GWO). The proposed solutions methodologies have been tested using MATLAB software on IEEE 33 standard bus system
Intelligent fault diagnosis for power distribution systemcomparative studiesnooriasukmaningtyas
Short circuit is one of the most popular types of permanent fault in power distribution system. Thus, fast and accuracy diagnosis of short circuit failure is very important so that the power system works more effectively. In this paper, a newly enhanced support vector machine (SVM) classifier has been investigated to identify ten short-circuit fault types, including single line-toground faults (XG, YG, ZG), line-to-line faults (XY, XZ, YZ), double lineto-ground faults (XYG, XZG, YZG) and three-line faults (XYZ). The performance of this enhanced SVM model has been improved by using three different versions of particle swarm optimization (PSO), namely: classical PSO (C-PSO), time varying acceleration coefficients PSO (T-PSO) and constriction factor PSO (K-PSO). Further, utilizing pseudo-random binary sequence (PRBS)-based time domain reflectometry (TDR) method allows to obtain a reliable dataset for SVM classifier. The experimental results performed on a two-branch distribution line show the most optimal variant of PSO for short fault diagnosis.
A deep learning approach based on stochastic gradient descent and least absol...nooriasukmaningtyas
More than eighty-five to ninety percentage of the diabetic patients are affected with diabetic retinopathy (DR) which is an eye disorder that leads to blindness. The computational techniques can support to detect the DR by using the retinal images. However, it is hard to measure the DR with the raw retinal image. This paper proposes an effective method for identification of DR from the retinal images. In this research work, initially the Weiner filter is used for preprocessing the raw retinal image. Then the preprocessed image is segmented using fuzzy c-mean technique. Then from the segmented image, the features are extracted using grey level co-occurrence matrix (GLCM). After extracting the fundus image, the feature selection is performed stochastic gradient descent, and least absolute shrinkage and selection operator (LASSO) for accurate identification during the classification process. Then the inception v3-convolutional neural network (IV3-CNN) model is used in the classification process to classify the image as DR image or non-DR image. By applying the proposed method, the classification performance of IV3-CNN model in identifying DR is studied. Using the proposed method, the DR is identified with the accuracy of about 95%, and the processed retinal image is identified as mild DR.
Blood finder application project report (1).pdfKamal Acharya
Blood Finder is an emergency time app where a user can search for the blood banks as
well as the registered blood donors around Mumbai. This application also provide an
opportunity for the user of this application to become a registered donor for this user have
to enroll for the donor request from the application itself. If the admin wish to make user
a registered donor, with some of the formalities with the organization it can be done.
Specialization of this application is that the user will not have to register on sign-in for
searching the blood banks and blood donors it can be just done by installing the
application to the mobile.
The purpose of making this application is to save the user’s time for searching blood of
needed blood group during the time of the emergency.
This is an android application developed in Java and XML with the connectivity of
SQLite database. This application will provide most of basic functionality required for an
emergency time application. All the details of Blood banks and Blood donors are stored
in the database i.e. SQLite.
This application allowed the user to get all the information regarding blood banks and
blood donors such as Name, Number, Address, Blood Group, rather than searching it on
the different websites and wasting the precious time. This application is effective and
user friendly.
Open Channel Flow: fluid flow with a free surfaceIndrajeet sahu
Open Channel Flow: This topic focuses on fluid flow with a free surface, such as in rivers, canals, and drainage ditches. Key concepts include the classification of flow types (steady vs. unsteady, uniform vs. non-uniform), hydraulic radius, flow resistance, Manning's equation, critical flow conditions, and energy and momentum principles. It also covers flow measurement techniques, gradually varied flow analysis, and the design of open channels. Understanding these principles is vital for effective water resource management and engineering applications.
Tools & Techniques for Commissioning and Maintaining PV Systems W-Animations ...Transcat
Join us for this solutions-based webinar on the tools and techniques for commissioning and maintaining PV Systems. In this session, we'll review the process of building and maintaining a solar array, starting with installation and commissioning, then reviewing operations and maintenance of the system. This course will review insulation resistance testing, I-V curve testing, earth-bond continuity, ground resistance testing, performance tests, visual inspections, ground and arc fault testing procedures, and power quality analysis.
Fluke Solar Application Specialist Will White is presenting on this engaging topic:
Will has worked in the renewable energy industry since 2005, first as an installer for a small east coast solar integrator before adding sales, design, and project management to his skillset. In 2022, Will joined Fluke as a solar application specialist, where he supports their renewable energy testing equipment like IV-curve tracers, electrical meters, and thermal imaging cameras. Experienced in wind power, solar thermal, energy storage, and all scales of PV, Will has primarily focused on residential and small commercial systems. He is passionate about implementing high-quality, code-compliant installation techniques.
AI in customer support Use cases solutions development and implementation.pdfmahaffeycheryld
AI in customer support will integrate with emerging technologies such as augmented reality (AR) and virtual reality (VR) to enhance service delivery. AR-enabled smart glasses or VR environments will provide immersive support experiences, allowing customers to visualize solutions, receive step-by-step guidance, and interact with virtual support agents in real-time. These technologies will bridge the gap between physical and digital experiences, offering innovative ways to resolve issues, demonstrate products, and deliver personalized training and support.
https://www.leewayhertz.com/ai-in-customer-support/#How-does-AI-work-in-customer-support
This presentation is about Food Delivery Systems and how they are developed using the Software Development Life Cycle (SDLC) and other methods. It explains the steps involved in creating a food delivery app, from planning and designing to testing and launching. The slide also covers different tools and technologies used to make these systems work efficiently.
Sri Guru Hargobind Ji - Bandi Chor Guru.pdfBalvir Singh
Sri Guru Hargobind Ji (19 June 1595 - 3 March 1644) is revered as the Sixth Nanak.
• On 25 May 1606 Guru Arjan nominated his son Sri Hargobind Ji as his successor. Shortly
afterwards, Guru Arjan was arrested, tortured and killed by order of the Mogul Emperor
Jahangir.
• Guru Hargobind's succession ceremony took place on 24 June 1606. He was barely
eleven years old when he became 6th Guru.
• As ordered by Guru Arjan Dev Ji, he put on two swords, one indicated his spiritual
authority (PIRI) and the other, his temporal authority (MIRI). He thus for the first time
initiated military tradition in the Sikh faith to resist religious persecution, protect
people’s freedom and independence to practice religion by choice. He transformed
Sikhs to be Saints and Soldier.
• He had a long tenure as Guru, lasting 37 years, 9 months and 3 days
Determination of Equivalent Circuit parameters and performance characteristic...pvpriya2
Includes the testing of induction motor to draw the circle diagram of induction motor with step wise procedure and calculation for the same. Also explains the working and application of Induction generator
Applications of artificial Intelligence in Mechanical Engineering.pdfAtif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Supermarket Management System Project Report.pdfKamal Acharya
Supermarket management is a stand-alone J2EE using Eclipse Juno program.
This project contains all the necessary required information about maintaining
the supermarket billing system.
The core idea of this project to minimize the paper work and centralize the
data. Here all the communication is taken in secure manner. That is, in this
application the information will be stored in client itself. For further security the
data base is stored in the back-end oracle and so no intruders can access it.
Enhancing software development cost control by forecasting the cost of rework: preliminary study
Indonesian Journal of Electrical Engineering and Computer Science
Vol. 21, No. 1, January 2021, pp. 524~537
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v21.i1.pp524-537
Journal homepage: http://ijeecs.iaescore.com
Enhancing software development cost control by forecasting the
cost of rework: preliminary study
Tarig Ahmed Khalid, Eng-Thiam Yeoh
Faculty of Computing and Informatics, Multimedia University, Cyberjaya, Malaysia
Article Info

Article history: Received May 12, 2020; Revised Jul 13, 2020; Accepted Jul 27, 2020

Keywords: Earned value management; EVM; Software cost control; Software cost estimation; Software rework

ABSTRACT

Industrial reports show massive cost overruns associated with software projects. The cost of software rework constitutes a large portion of the overall cost, making cost control a substantial challenge. Earned value management (EVM) is the most widely recognized model for project cost control. However, it shows several limitations in forecasting software project cost, and the major limitation found is its inability to forecast the cost of software rework. This research investigates the factors behind this limitation and proposes an enhanced EVM model. Its significant contribution is the incorporation of software-related factors into the EVM model. We introduce the software rework index (SRI), which is incorporated into the traditional EVM model to improve its forecast of the software project cost at completion, including the rework cost. We define the SRI in terms of two factors: product functional complexity and team competency. Finally, we evaluate the proposed model using a dataset drawn from five actual projects. The results show a significant improvement in forecasting accuracy.

This is an open access article under the CC BY-SA license.
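Since the paper's SRI formula is not reproduced in this excerpt, the following Python sketch shows only the standard EVM cost forecast that the proposed model builds on, plus a purely hypothetical rework adjustment in the spirit of the SRI. The adjustment form, the `sri` parameter, and the example figures are illustrative assumptions, not the paper's actual model.

```python
# Standard EVM cost forecasting, plus a HYPOTHETICAL rework adjustment.
# The real SRI (derived from product functional complexity and team
# competency) is defined in the paper; the form used here is assumed.

def cost_performance_index(ev: float, ac: float) -> float:
    """CPI = earned value / actual cost (standard EVM)."""
    return ev / ac

def estimate_at_completion(bac: float, ev: float, ac: float) -> float:
    """Classic EVM forecast: EAC = BAC / CPI."""
    return bac / cost_performance_index(ev, ac)

def estimate_at_completion_with_rework(bac: float, ev: float, ac: float,
                                       sri: float) -> float:
    """Illustrative SRI-style forecast: inflate the estimate to complete
    (ETC) by an assumed rework index (sri = 0 means no extra rework)."""
    etc = (bac - ev) / cost_performance_index(ev, ac)
    return ac + etc * (1.0 + sri)

# Example: $1M budget, half the value earned, 10% over cost so far.
bac, ev, ac = 1_000_000.0, 500_000.0, 550_000.0
print(round(estimate_at_completion(bac, ev, ac)))                    # 1100000
print(round(estimate_at_completion_with_rework(bac, ev, ac, 0.2)))   # 1210000
```

The classic EAC assumes the current cost efficiency persists; the adjusted version additionally assumes rework will inflate the remaining work, which is the gap the paper's SRI is designed to address.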
Corresponding Author:
Tarig Ahmed Khalid
Faculty of Computing and Informatics
Multimedia University
Cyberjaya 63100, Malaysia
Email: t.a.khalid@ieee.org
1. INTRODUCTION
Delivering a quality product within budget is a core objective for project managers. Meeting this
aim in the software industry is challenging. Organizations waste billions of dollars on failed and challenged
software projects [1]. The Chaos Report stated that 56% of software projects experienced cost overruns
during the years 2011 to 2015, a figure that increased to 65% by the end of 2017 [2, 3], showing the
complexity of the software development process. This complexity is due to the impact of many factors, such
as requirements complexity and volatility, software measurement difficulties, unplanned rework, and
developers' competency levels [4-10]. In practice, the development and testing teams jointly conduct
multiple cycles of rework and testing until the software complies with the customers' requirements [6, 11].
Early detection of defects costs far less than late detection; delayed repairs may cost 300 to 1000 times as
much as early corrections [12, 13]. In general, the cost of software rework may constitute 30% to 50% of the
total cost [10, 14-16]. Technical complexity makes early identification of issues challenging [12]. The rise of
the Internet of Things and systems of systems has increased the complexity of software development in terms
of attributes such as code complexity, requirement complexity and volatility, and architectural complexity
[6, 11, 17]. Recent studies stated that 78% to 82% of the reasons
behind software rework are requirement-related problems [12, 14]. Since people develop the
software, the human side cannot be ignored in favor of technical attributes. Many software
development challenges are due to human factors [4, 9].
To control software project cost, project management and software engineering best practices
recommend the earned value management (EVM) model as the most recognized cost control tool. The
Institute of Electrical and Electronics Engineers (IEEE), the Project Management Institute (PMI), and the
International Organization for Standardization (ISO) have all included the EVM in their project management
standards [11, 18-21]. The EVM model provides an integrated scope, schedule, and cost baseline [18]. It
estimates the total project cost in terms of a variable called the estimate at completion (EAC). The EAC is
derived from two indices called the cost performance index (CPI) and the schedule performance index
(SPI). The CPI and SPI are periodically calculated from several variables, among them the percent
completion (%C), which represents the percentage of work done since the project kick-off.
However, Jurison [22] argued that there are difficulties in managing software projects due to
software intangibility, complexity, and volatility. The EVM model is not designed to deal with such
dimensions. It is incapable of accurately forecasting the EAC of software projects due to four factors. First,
the EAC does not include the cost of software rework. Second, the calculation of its indices depends on
progress measurement, yet measuring intangible products such as software is challenging: although
%C can be subjective and uncertain in software projects, the EVM parameters are deterministic [23, 24].
Third, the EVM does not measure the quality as it only accepts the deliverables upon completion [25].
However, poor quality will lead to further rework and cost overrun. Fourth, many software projects failed or
exceeded their baselines due to the human side, which is overlooked by the EVM model [26]. During
software development, it is challenging to isolate human factors from technical ones [9].
This study suggests enhancing EVM cost predictability by including the cost of rework and
incorporating software-specific factors such as product characteristics and team competency. Moreover, it
reduces the subjectivity in software progress measurement by adopting function point analysis (FPA). We
hypothesized that the error in estimating the EAC using the proposed EVM is less than the error obtained
when using the traditional EVM. We evaluated the proposed model using data collected from actual projects.
The rest of the paper is organized as follows. Section 2 covers the related research. Section 3 provides a
detailed description of the research method. Section 4 illustrates the results while Section 5 shows the
discussion. Conclusions and future work are discussed in Section 6.
2. RELATED WORKS
Langsari and Sarno [27] suggested hybrid models by using the COCOMO II and fuzzy logic to
estimate the project cost. Esteki et al. [28] suggested the augmentation of the cost control methods with a risk
management framework. De Souza et al. suggested statistical approaches to enhance EAC accuracy by using
historical data [29]. Acebes et al. [30] used the Monte Carlo simulation combined with the EVM model to
conduct a project risk analysis. They argued that the traditional EVM model results in optimistic indices
leading to an inaccurate EAC. Other researchers proposed quality-based EVM models by incorporating
quality performance indices based on historical data [31-33]. Although the introduction of these indices is a
significant enhancement, the proposed models are difficult to generalize due to their dependency on historical
data, which defies the fundamental attribute of projects, i.e., uniqueness. Also, the data unavailability may
make statistical techniques challenging to apply.
The subjective nature of software progress measurements inspired researchers to integrate the FPA
with the EVM model to provide quantitative measurements [34-36]. Other studies suggested the Agile EVM,
which expresses the EVM variables in terms of Agile variables such as story points and the number of sprints
[37-40]. While the Agile EVM makes it easy to apply the traditional EVM, it does not address limitations
such as the lack of quality indicators and the human factors. Efe and Demirors [10] developed a change
management model incorporating the cost of rework and changes into the EVM model. The suggested model
obtained promising results in forecasting the software cost. Pracharasniyom et al. [41] and Vargas [42]
proposed modified EVM models that assess individual project members' performance. However, neither study
incorporated real human attributes into its proposed model, as both used traditional EVM data. Our
literature review indicates a considerable knowledge gap in relating the human factors to the EVM model.
We found only limited studies investigating the importance of the human side of software development
[8, 43-47].
3. RESEARCH METHOD
3.1. Conceptual modelling
We constructed a conceptual model that monitors the project performance via three variables: the
cost, schedule, and predicted rework. We suggested that software rework mainly depends on two major
factors: product complexity and developers' competency. Moreover, we used fuzzy logic to represent the
software rework, as the amount of rework depends on three distinct possibilities:
a) No rework if the deliverables are wholly accepted
b) A high level of unplanned rework if the deliverables are entirely rejected
c) Rework ranging from low to high levels if the deliverables are partially accepted
Accordingly, if a low-competency team develops sophisticated software, the volume of defects will
likely be significant, leading to a high level of rework. Conversely, the rework level may be minimal if a
highly competent team is assigned to develop software of low complexity. We also suggested fuzzy logic to
mitigate the subjectivity in the assessment of developers' competencies. The conceptual model is shown in
Figure 1.
Figure 1. Proposed EVM conceptual model
3.2. Mathematical modelling
We constructed a mathematical model consisting of two parts: the traditional EVM and the proposed
extension. In the traditional part, we adopted the FPA in measuring the software size, requirement volatility,
and the software developed so far. Then we used these sizes to calculate the %C. We determined the software
rework in terms of the functional complexity and team competency, as described by the conceptual model.
We introduced the functional complexity index (FCI), which defines the software product's functional
complexity. The FCI value was derived from the system functions using the FPA. Moreover, we introduced
the team competency index (TCI) to model the team competency. The TCI was derived from the software
development competency framework (SDCF), as suggested by Khalid and Yeoh [8]. The two indices were
related in terms of fuzzy logic to calculate the software rework index (SRI). Finally, we expressed the EAC
in terms of the CPI, SPI, and SRI, as described in Figure 2. The techniques used to address the limitations of
the traditional model are described in Table 1.
Figure 2. Proposed mathematical model
Table 1. Solution approaches to the proposed mathematical model
Limitation | Addressed by | Adopted technique
Uncertainty in measuring %C | Quantifying the software sizing | Function point analysis
EAC forecast does not include the software rework | Introducing the software rework index (SRI) | Fuzzy logic
Lack of addressing the human factors | Introducing the team competency index (TCI) | Fuzzy logic, function point analysis
Lack of addressing the technical software characteristics | Introducing the functional complexity index (FCI) | Fuzzy logic, function point analysis
We calculated the traditional EVM indices according to the following equations.
%C = (size of software developed so far / total software size) × 100 (1)
Where %C is the percent completion of the project work, with both sizes measured using the FPA.
EV = %C × BAC (2)
Where EV is the earned value and BAC is the budget at completion.
CPI = EV / AC (3)
SPI = EV / PV (4)
Where CPI and SPI are the cost performance index and schedule performance index, respectively. EV, AC,
and PV are the earned value, actual cost, and planned value, respectively.
EACT1 = BAC / CPI (5)
Where EACT1 is the cost estimate at completion considering only the cost performance.
EACT2 = AC + (BAC − EV) / (CPI × SPI) (6)
Where EACT2 is the cost estimate at completion considering both the cost performance and schedule
performance. Both AC and PV were calculated in person-days. On the other hand, we used fuzzy logic to
model the relationship between the extended model variables. We used Mamdani's fuzzy inference system
(FIS), as it is the most used FIS due to its intuitive nature and its suitability for human input [48]. The fuzzy
model is shown in Figure 3.
Figure 3. Proposed fuzzy model
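As a minimal sketch of the traditional relations (1)-(6), the indices can be computed as follows (the variable names are our own; all costs are in person-days, as in the paper):

```python
def evm_indices(ev, ac, pv, bac):
    """Traditional EVM indices and estimates at completion.

    ev: earned value, ac: actual cost, pv: planned value,
    bac: budget at completion (all in person-days)."""
    cpi = ev / ac                             # cost performance index, eq. (3)
    spi = ev / pv                             # schedule performance index, eq. (4)
    eac_t1 = bac / cpi                        # EAC from cost performance only, eq. (5)
    eac_t2 = ac + (bac - ev) / (cpi * spi)    # EAC from cost and schedule, eq. (6)
    return cpi, spi, eac_t1, eac_t2

# Example: a project 40% complete (eq. (1)) on a 500 person-day budget,
# so EV = 0.40 * 500 = 200 person-days (eq. (2)).
cpi, spi, eac_t1, eac_t2 = evm_indices(ev=200.0, ac=250.0, pv=220.0, bac=500.0)
```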
We implemented the proposed fuzzy model according to the following steps.
Step 1: SRI definition and fuzzification
We represented the software rework level as the ratio of the cost of rework (COR) to the cost of
development (COD). This ratio is called the software rework index (SRI). Hence,
SRI = COR / COD (7)
The value of SRI is fuzzy, depending on the quality status of deliverables after testing. If the
deliverable is entirely accepted, then SRI equals zero. Conversely, if the deliverable is entirely rejected, then
SRI is expected to be a sizable number. If the deliverable is partially accepted, then SRI > 0, ranging from low
to high values. The fuzzy membership of SRI is shown in terms of trapezoidal shapes, as in Figure 4. The
fuzzy numbers were selected according to the percentages of rework stated by Micro Focus [14].
Figure 4. SRI fuzzy membership
Step 2: FCI definition and fuzzification
According to the FPA, each software function, regardless of its function type, is assigned a specific
weight representing its complexity. This weight could be low, average, or high. The FCI of each software
function is given by:
FCI = (FP / FPmax) × 100 (8)
Where FP is the weight assigned to the software function (low, average, or high) and FPmax corresponds to the
"High" weight. The system function complexity is defined by calculating the number of data element types
(DET), the number of record element types (RET), and the number of file types referenced (FTR), as
described by IFPUG [49]. At a specific checkpoint in the project lifecycle, the overall FCI is calculated as:
FCI = (Σ FPi / Σ FPmax,i) × 100, for i = 1, …, N (9)
Where FPi is the value of the function points developed by the ith task, including the size of requirement
changes, Σ FPmax,i is the sum of the function points when assigned the highest values, and N is the number of
tasks performed at the time of measurement. Then, FCI is fuzzified and expressed in linguistic terms such
that:
FCI~ = [Very Low, Low, Average, High, Very High] (10)
The fuzzy membership of the FCI is shown in terms of trapezoidal shapes, as in Figure 5.
Figure 5. FCI fuzzy membership
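Equations (8)-(9) can be sketched as follows; the per-function complexity weights used here are illustrative placeholders, not the IFPUG values from the paper:

```python
# Illustrative complexity weights per function; the actual IFPUG weights
# depend on the function type (these numbers are placeholders for the sketch).
WEIGHTS = {"low": 3, "average": 4, "high": 6}

def overall_fci(task_complexities):
    """Overall FCI at a checkpoint, per eq. (9): the function points of the
    tasks developed so far divided by the function points those same tasks
    would carry if every one of them were rated 'high'."""
    fp_sum = sum(WEIGHTS[c] for c in task_complexities)        # sum of FPi
    fp_max_sum = WEIGHTS["high"] * len(task_complexities)      # sum of FPmax,i
    return 100.0 * fp_sum / fp_max_sum

# e.g. three tasks rated low, average, high -> 100 * (3+4+6) / (6*3)
```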
Step 3: TCI calculation and fuzzification
We calculated the TCI in terms of the individual developer competency index (DCI), as described
by Khalid and Yeoh [8]. The DCI, which quantifies an individual developer's competency, is calculated by
giving weights to each competency and then computing the overall competency of each developer. The TCI
is the centroid of the developers' DCIs. TCI is fuzzified and expressed in linguistic terms as follows:
TCI~ = [Very Low, Low, Average, High, Very High] (11)
The fuzzy membership of TCI is shown in terms of trapezoidal shapes, as in Figure 6.
Figure 6. TCI fuzzy membership
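Reading "centroid" as the arithmetic mean of the team members' DCIs (our interpretation; the DCI weighting scheme itself is defined in [8]), the TCI can be sketched as:

```python
def team_competency_index(dcis):
    """TCI as the centroid (arithmetic mean) of the individual
    developer competency indices (DCIs) of the team members."""
    if not dcis:
        raise ValueError("team has no members")
    return sum(dcis) / len(dcis)

# e.g. a three-developer team with DCIs of 60, 80, and 70 on a 0-100 scale
```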
Step 4: Fuzzy rules' definition
The fuzzy values of SRI are determined by establishing a relationship among SRI, FCI, and TCI
according to the following fuzzy rule: IF TCI IS [Value] AND FCI IS [Value] THEN SRI IS [Value].
The set of fuzzy rules is shown in Table 2.
Table 2. Proposed fuzzy rules
ID | TCI | FCI | SRI
1 | Very Low | Very Low | Average
2 | Very Low | Low | Average
3 | Very Low | Average | High
4 | Very Low | High | Very High
5 | Very Low | Very High | Very High
6 | Low | Very Low | Average
7 | Low | Low | Average
8 | Low | Average | High
9 | Low | High | High
10 | Low | Very High | Very High
11 | Average | Very Low | Low
12 | Average | Low | Low
14 | Average | High | High
15 | Average | Very High | High
16 | High | Very Low | Very Low
17 | High | Low | Low
18 | High | Average | Low
19 | High | High | Average
20 | High | Very High | Average
21 | Very High | Very Low | Very Low
22 | Very High | Low | Very Low
23 | Very High | Average | Low
24 | Very High | High | Low
25 | Very High | Very High | Average
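As an illustrative sketch of the Mamdani inference of Steps 1-4, the pure-Python implementation below fuzzifies crisp TCI and FCI values, fires the rules of Table 2, and defuzzifies the SRI by the centroid method. The trapezoidal breakpoints are placeholders of our own (the paper's memberships appear only graphically in Figures 4-6), and rule 13 is absent from the source table:

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: rises over [a, b], flat over [b, c], falls over [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Five linguistic terms on a 0-100 universe for TCI and FCI (placeholder breakpoints).
TERMS = {
    "Very Low":  (-1, 0, 10, 25),  "Low":  (10, 25, 35, 50),
    "Average":   (35, 50, 60, 75), "High": (60, 75, 85, 95),
    "Very High": (85, 95, 100, 101),
}
# Output terms for SRI = COR/COD on a 0-1 universe (placeholder breakpoints).
SRI_TERMS = {
    "Very Low":  (-0.01, 0.0, 0.05, 0.15), "Low":  (0.05, 0.15, 0.20, 0.30),
    "Average":   (0.20, 0.30, 0.40, 0.50), "High": (0.40, 0.50, 0.60, 0.70),
    "Very High": (0.60, 0.70, 1.0, 1.01),
}
# Table 2 rules, (TCI term, FCI term) -> SRI term; rule 13 is missing in the source.
RULES = {
    ("Very Low", "Very Low"): "Average",   ("Very Low", "Low"): "Average",
    ("Very Low", "Average"): "High",       ("Very Low", "High"): "Very High",
    ("Very Low", "Very High"): "Very High",
    ("Low", "Very Low"): "Average",        ("Low", "Low"): "Average",
    ("Low", "Average"): "High",            ("Low", "High"): "High",
    ("Low", "Very High"): "Very High",
    ("Average", "Very Low"): "Low",        ("Average", "Low"): "Low",
    ("Average", "High"): "High",           ("Average", "Very High"): "High",
    ("High", "Very Low"): "Very Low",      ("High", "Low"): "Low",
    ("High", "Average"): "Low",            ("High", "High"): "Average",
    ("High", "Very High"): "Average",
    ("Very High", "Very Low"): "Very Low", ("Very High", "Low"): "Very Low",
    ("Very High", "Average"): "Low",       ("Very High", "High"): "Low",
    ("Very High", "Very High"): "Average",
}

def crisp_sri(tci, fci, steps=1000):
    """Mamdani max-min inference with centroid defuzzification."""
    strength = {}  # aggregated firing strength per SRI output term
    for (t_term, f_term), s_term in RULES.items():
        w = min(trapmf(tci, *TERMS[t_term]), trapmf(fci, *TERMS[f_term]))
        strength[s_term] = max(strength.get(s_term, 0.0), w)
    num = den = 0.0
    for i in range(steps + 1):  # discretized centroid over the SRI universe
        x = i / steps
        mu = max((min(w, trapmf(x, *SRI_TERMS[t])) for t, w in strength.items()),
                 default=0.0)
        num += x * mu
        den += mu
    return num / den if den else 0.0
```

Consistent with the conceptual model, a highly competent team on a simple product yields a small crisp SRI, while a low-competency team on a complex product yields a large one.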
Step 5: EAC calculation
The total cost of the development work is given by:
Total cost = COD + COR (12)
Where COD is the cost of development and COR is the cost of rework. From (7), it follows that
COR = SRI × COD; hence, total cost = COD × (1 + SRI). The COD is equal to the EACT according to the
traditional EVM model, while the total cost equals the EACP according to the proposed model. Therefore:
EACP = EACT × (1 + SRI) (13)
Where SRI is the crisp value calculated according to the fuzzy rules described in Table 2.
Substituting the value of EACT from (5) and (6), respectively, yields the proposed two scenarios of EAC:
EACP1 = (BAC / CPI) × (1 + SRI) (14)
EACP2 = [AC + (BAC − EV) / (CPI × SPI)] × (1 + SRI) (15)
Where EACP1 is influenced by the value of the CPI, and EACP2 is influenced by the values of both the CPI
and SPI.
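Equations (14)-(15) simply inflate the traditional estimates by the crisp SRI; a sketch under the same assumed variable names as before:

```python
def proposed_eac(ev, ac, pv, bac, sri):
    """Proposed EACs per eqs. (14)-(15): the traditional EACs scaled by
    (1 + SRI), where sri is the crisp value defuzzified from Table 2."""
    cpi, spi = ev / ac, ev / pv
    eac_p1 = (bac / cpi) * (1 + sri)                      # eq. (14)
    eac_p2 = (ac + (bac - ev) / (cpi * spi)) * (1 + sri)  # eq. (15)
    return eac_p1, eac_p2
```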
3.3. Hypotheses definition
We defined two hypotheses.
The first hypothesis
Null hypothesis: When CPI is the only influencing factor, there is no difference in the errors obtained
when estimating the EAC using the traditional EVM and proposed EVM.
Alternative hypothesis: When CPI is the only influencing factor, the error in estimating the EAC using
the proposed EVM is less than the error obtained when using the traditional EVM.
Hence,
H10: |∆T1| − |∆P1| ≤ 0
H1A: |∆T1| − |∆P1| > 0
Where ∆T1 and ∆P1 are the errors in estimation using the traditional and proposed models,
respectively.
The second hypothesis
Null hypothesis: When both CPI and SPI are influencing factors, there is no difference in the errors
obtained when estimating the EAC using the traditional EVM and proposed EVM.
Alternative hypothesis: When both CPI and SPI are influencing factors, the error in estimating the EAC
using the proposed EVM is less than the error obtained when using the traditional EVM.
Hence,
H20: |∆T2| − |∆P2| ≤ 0
H2A: |∆T2| − |∆P2| > 0
Where ∆T2 and ∆P2 are the errors in estimation using the traditional and proposed models,
respectively. As described in Figure 7, we used the paired t-test to compare the errors of the two models [50].
Figure 7. Paired t-test description
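The paired t-test used to compare the two models' errors can be sketched in pure Python (the sample data here are made up for illustration; `scipy.stats.ttest_rel` on the absolute errors would give the same statistic):

```python
import math

def paired_t(abs_err_traditional, abs_err_proposed):
    """One-sided paired t-test on the differences |dT| - |dP|,
    as used for hypotheses H1 and H2."""
    diffs = [t - p for t, p in zip(abs_err_traditional, abs_err_proposed)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                              # standard error of the mean
    return mean / se, n - 1                              # (t statistic, degrees of freedom)

# H0 is rejected at alpha = 0.05 when t exceeds the one-sided critical
# value for n - 1 degrees of freedom (1.673 for 55 dof, per Table 7).
```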
3.4. Experiment design
We designed two experiments. The first conducts the traditional EVM analysis; the output of each
treatment consisted of the traditional EAC values, EACT1 and EACT2. The second experiment conducts the
fuzzy analysis defined by the extended EVM model; the output of each treatment was an SRI value used to
calculate the proposed EAC values, EACP1 and EACP2. The traditional and proposed EAC values are used to
define the elements of the first and second hypotheses, as shown in the following equations:
∆T1 = EACT1 − ACAC (16)
∆P1 = EACP1 − ACAC (17)
∆T2 = EACT2 − ACAC (18)
∆P2 = EACP2 − ACAC (19)
Where EACT1 and EACP1 are the estimates at completion using the traditional and proposed models,
respectively, where the calculation is based on the influence of the CPI only; EACT2 and EACP2 are the
estimates at completion using the traditional and proposed models, respectively, where the calculation is
based on the influence of both the CPI and SPI; and ACAC is the actual cost at completion. The elements of
the two experiments are shown in Table 3.
Table 3. Experiments' design elements
Element | Experiment #1 | Experiment #2
Experimental units | EVM checkpoints in a project | EVM checkpoints in a project
Dependent variables | EACT1, EACT2 | SRI
Independent variables | %C, PV, AC, BAC | FCI, TCI
Number of treatments | 56 | 56
To facilitate the experiment execution, we developed a prototype using Visual Basic .NET 2017 and
a fuzzy logic controller library called Fuzzinator [51].
3.5. Data collection
This research adopted convenience sampling, where data was drawn from the available part of the
population. We requested research data from 55 sources; however, only three firms responded positively by
sharing data belonging to five projects. The dataset provided 56 treatments for each experiment. We found
this number of treatments sufficient to test the hypotheses, since each experiment's sample size exceeded the
minimum of 30 samples [52]. The experimental treatments provided by the projects are shown in Table 4.
Table 4. Experimental treatments
Project ID | Experiment 1 treatments | Experiment 2 treatments
P01 | 11 | 11
2 P1 0.82 0.57 0.13 515.07 579.81
3 P1 0.81 0.57 0.14 519.62 534.23
4 P1 0.81 0.59 0.15 500.89 530.35
5 P1 0.81 0.68 0.23 535.74 574.37
6 P1 0.81 0.61 0.16 516.72 554.72
7 P1 0.8 0.64 0.17 552.58 576.55
8 P1 0.8 0.64 0.17 566.22 575.37
9 P1 0.8 0.65 0.17 595.64 599.36
10 P1 0.8 0.67 0.21 624.11 628.58
11 P1 0.8 0.68 0.23 651.57 649.44
12 P2 0.78 0.53 0.17 329.61 350.37
13 P2 0.78 0.51 0.17 340.6 366.43
14 P2 0.78 0.55 0.17 329.61 341.45
15 P2 0.78 0.57 0.16 345.37 360.1
16 P2 0.78 0.57 0.16 341.48 337.92
17 P2 0.78 0.61 0.17 344.42 350.38
18 P2 0.78 0.62 0.17 344.42 347.86
19 P2 0.78 0.63 0.17 356.44 355.73
20 P2 0.78 0.63 0.17 348.34 347.49
21 P3 0.79 0.55 0.17 500.62 482.78
22 P3 0.78 0.56 0.16 537.71 557.66
23 P3 0.78 0.55 0.17 559.83 588.88
24 P3 0.77 0.57 0.16 555.05 576.07
25 P3 0.78 0.56 0.16 537.71 553.71
26 P3 0.78 0.58 0.16 549.14 569.59
27 P3 0.78 0.58 0.16 555.05 574.64
28 P3 0.78 0.58 0.16 543.37 551.23
29 P3 0.79 0.58 0.16 549.14 548.74
30 P3 0.79 0.58 0.16 521.41 518.49
31 P4 0.82 0.56 0.13 226 248.6
32 P4 0.82 0.54 0.13 226 243.58
33 P4 0.82 0.56 0.13 207.55 210.27
34 P4 0.82 0.59 0.14 228 240.92
35 P4 0.82 0.58 0.13 226 233.28
36 P4 0.82 0.59 0.14 230.57 236.49
37 P4 0.82 0.58 0.13 228.54 231.05
38 P4 0.82 0.58 0.13 233.8 232.78
39 P5 0.72 0.48 0.13 377.52 425.86
40 P5 0.72 0.49 0.14 380.86 426.46
41 P5 0.72 0.5 0.16 387.54 430.72
42 P5 0.71 0.5 0.16 442.91 548.32
43 P5 0.72 0.52 0.16 396.56 443.36
44 P5 0.72 0.59 0.23 420.49 465.38
45 P5 0.72 0.58 0.23 430.5 479.7
46 P5 0.72 0.56 0.19 421.52 469.72
47 P5 0.72 0.57 0.21 428.61 473.9
48 P5 0.72 0.56 0.19 411.6 443.92
49 P5 0.72 0.57 0.21 418.51 446.04
50 P5 0.72 0.58 0.23 430.5 456.94
51 P5 0.72 0.57 0.21 428.61 451.92
52 P5 0.72 0.56 0.19 411.6 427.75
53 P5 0.72 0.56 0.19 406.81 414.58
54 P5 0.72 0.56 0.19 406.81 409.04
55 P5 0.72 0.56 0.19 406.81 405.77
56 P5 0.72 0.54 0.16 415.91 414.12
To test the two hypotheses, we applied the procedures described by Xu et al. [50] to derive the
results shown in Table 7.
Table 7. Hypotheses testing results
 | 1st hypothesis | 2nd hypothesis
Mean of the differences, mean(∆) | 0.131 | 0.089
Standard deviation of the differences, STDev(∆) | 0.180 | 0.528
Standard error of the differences, SE(∆) | 0.024 | 0.071
T value | 40.372 | 9.348
Degrees of freedom (n) | 55 | 55
Significance level, α | 0.05 | 0.05
Critical t value for n degrees of freedom, tn | 1.673 | 1.673
4.1. Results significance
As shown in Table 7, the T value is higher than the tn value for both tests. Therefore, we may
conclude that the null hypothesis is rejected at a confidence level of 95% for both tests.
Hence, we may conclude that |∆P1| < |∆T1| for the first hypothesis and |∆P2| < |∆T2| for the second
hypothesis. Moreover, when we investigated the accuracy of the traditional EVM model assuming no
software rework, we found that the mean error in estimating the EAC is less than 6%, which is quite
tolerable. Upon considering the rework, we found that the mean errors approach values between 14% and
18%, as can be seen in Table 8. These results strongly support Efe and Demirors [15] in that, when rework is
ignored, the overall picture appears acceptable. Therefore, we can conclude that the inaccuracy in estimating
software projects' EAC is due to the incapability of the traditional EVM to incorporate the cost of software rework.
Table 8. Accuracy of the traditional and proposed models
 | Mean error, considering the CPI | Mean error, considering the CPI and SPI
Traditional EVM | 17.47% | 14.08%
Proposed EVM | 4.48% | 5.18%
Error reduction | 12.99% | 8.90%
The results of the two experiments show that the proposed EVM model is more precise than the
traditional one by around 13% in estimating the EAC when considering the impact of the CPI. On the other
hand, when considering the impact of both the CPI and SPI, the proposed model is more precise than the
traditional one by around 9%.
4.2. Scientific and practical implications
This research brings the recommendations made by scholars such as Efe and Demirors [15] and
Bhardwaj and Rana [53] into practice. It customizes the traditional EVM to suit the specific needs of
software projects. This research's main scientific achievement is identifying the main reason behind the
inaccuracy of the traditional EVM, i.e., its incapability to estimate the cost of software rework. Moreover, we
establish a new way of thinking by investigating the relationship between human factors, software
complexity, and the cost of rework. We believe this study can assist software project managers in many
aspects, such as forecasting the software rework and overall cost more accurately, reducing the subjectivity in
progress measurement and software sizing by adopting the FPA, and considering and quantifying human
factors.
4.3. Threats to validity
The threats to the research validity may include threats to internal validity, construct validity, and
external validity. The analysis of these threats follows the guidelines suggested by Kitchenham et al. [54] and
Perry et al. [55].
Internal validity. The study suggests that software rework depends mainly on team competency and
software functional complexity. Other factors, such as the competency of the testing team, were not studied.
For example, if the testers are not competent, the product might be signed off with undetected defects,
resulting in a high maintenance cost. Moreover, the TCI is calculated as an average of the individual
developers' DCIs, ignoring the team's collective attributes. Therefore, there is a need to investigate the
influence of collective attributes on software project performance. Moreover, the collected data did not
include agile projects, which may place more focus on the human side by embodying values such as
commitment, focus, and respect [56].
Construct validity. The developers' experience may influence the accuracy of the FPA
measurements [57]. Also, the subjectivity, skills, and experience of the individuals who filled in the
developers' assessment forms may affect the accuracy of both the DCI and TCI calculations.
External validity. Although we successfully evaluated the proposed EVM model, we suggest that
this model is neither comprehensive nor generic due to the following threats:
a) Due to the weak response of the software firms, the dataset was drawn from only three firms. This
situation may increase the possibility of inadvertent sampling bias.
b) The model evaluation did not include projects with large sizes and large teams. These projects may have
more complexity-related factors not investigated by this study.
c) We did not test projects adopting methodologies other than waterfall.
5. CONCLUSION
This research aimed to enhance software cost control by customizing the traditional EVM to satisfy
the needs of software projects. Accordingly, we designed a modified EVM to reduce the identified
limitations. The model has been evaluated using data drawn from five actual projects. The evaluation
indicates that the proposed EVM model is more accurate than the traditional model. However, our main
finding is that the EVM cost forecast inaccuracy is due to its inability to incorporate the cost of software
rework. The research contributions include the introduction of the software rework index (SRI), enabling the
EVM model to forecast the cost of rework during the project lifecycle. Future work may include evaluating
the proposed model using data drawn from projects with larger sizes and higher functional complexity.
Future studies should also investigate the impact of different software development methodologies such as
spiral and Scrum. Moreover, we plan to study the team's collective characteristics, as the overall team
competency is not a simple average of the individual attributes.
REFERENCES
[1] R. N. Charette, “Why software fails [software failure],” in IEEE Spectrum, vol. 42, no. 9, pp. 42-49, Sept. 2005.
doi: 10.1109/MSPEC.2005.1502528.
[2] The Standish Group, “The chaos report,” standishgroup.com, 2015. Available:
https://www.standishgroup.com/sample_research_files/CHAOSReport2015-Final.pdf. [Accessed: November 30,
2017]
[3] The Standish Group, “Project resolution benchmark report,” standishgroup.com, 2018. Available:
https://www.standishgroup.com/sample_research_files/DemoPRBR.pdf. [Accessed December 7, 2018].
[4] R. Colomo-Palacios, C. Casado-Lumbreras, P. Soto-Acosta, F.J. GarcíA-PeñAlvo, E. Tovar-Caro, “Competence
gaps in software personnel: A multi-organizational study,” Computers in Human Behavior, vol. 29, no. 2, pp. 456-
461, 2013.
[5] Blueprint, “The rework tax: reducing software development rework by improving requirements,” Blueprint
Software Systems Inc., 2015. Available: http://www.blueprintsys.com/content/the-rework-tax-reducing-software-
development-rework-by-improving-requirements/. [Accessed: July 2, 2016].
[6] I. Sommerville, "Software testing," in Software Engineering, 9th ed. Boston: Addison-Wesley, 2010.
[7] T.A. Khalid, E.T. Yeoh, "Controlling software cost using fuzzy quality based EVM," in Conference on Computing,
Control, Networking, Electronics, and Embedded Systems Engineering (ICCCCEE). IEEE, pp. 275-280, 2015.
[8] T.A. Khalid, E.T. Yeoh, "Towards incorporating human factors in the software project cost control models -
preliminary study," The Journal of Modern Project Management, vol. 6, no. 2, 2018.
[9] P. Waychal, L.F. Capretz, “Need for a soft dimension,” 3rd International conference on software engineering,
Geneva, Switzerland. doi: 10.5121/csit.2017.70414, 2017.
[10] P. Efe, O. Demirors, “A change management model and its application in software development projects,”
Computer Standards & Interfaces, 2019.
[11] Project Management Institute, “Software Extension to the PMBOK Guide, 5th Edition,” Project Management
Institute, IEEE Computer Society. Newtown Square, PA, 2013.
[12] S. Sheard, C.B Weinstock, M.D. Konrad, D. Firesmith, “FAA research project on system complexity effects on
aircraft safety: identifying the impact of complexity on safety,” Carnegie-Mellon University, Software Engineering
Institute. Pittsburgh, PA, 2016.
[13] F. Feiler, “An incremental life-cycle assurance strategy for critical system certification,” Carnegie-Mellon
University, Software Engineering Institute. Pittsburgh, PA, 2014
[14] Micro Focus, “Successful projects start with high quality requirements,” Micro Focus International, Rockville,
Maryland, 2016. Available: https://www.microfocus.com/media/white-paper/WP-Successful-projects-start-
with.pdf. [Accessed: December 10, 2016].
[15] P. Efe, P. Demirors, “Applying EVM in a software company: benefits and difficulties,” In 39th Euromicro
Conference on Software Engineering and Advanced Applications. IEEE, pp. 333-340, 2013.
[16] G. A. Cass, M. Stanley, Sutton Jr., L. J. Osterweil, “Formalizing rework in software processes,” In Software
Process Technology. Springer, Berlin, pp. 16-31, 2013.
[17] Nguyen-Duc, “The impact of software complexity on cost and quality-a comparative analysis between Open source
and proprietary software,” International Journal on Software Engineering and Application, vol. 8, no. 2, pp. 17-31,
2017.
[18] Project Management Institute, “A guide to the project management body of knowledge,” sixth edition. PMI.
Newton Square, PA, 2017
[19] Naval Air Forces Organization, NAVAIR, “Using software metrics and measurements for earned value toolkit,”
2004. Available: https://acc.dau.mil/adl/en-
US/19591/file/1043/NAVAIR%20Software%20EVM%20Toolkit%202%20Dec%2004.pdf. [Accessed: September
27, 2013].
[20] P. Bourque, R.E. Fairley (eds.), “Guide to the software engineering body of knowledge,” Version 3.0. IEEE
Computer Society, www.swebok.org, 2014.
[21] ISO, “ISO 21508:2018 Earned value management in project and program management,” Edition 1. Switzerland,
2018.
[22] J. Jurison, “Software project management: the manager's view,” Communications of the association for
information Systems, vol. 2, no. 1, pp. 17, 1999.
[23] D. Zowghi, M. Haghighi, B. Zohouri, “Cost and schedule control approach in fuzzy environment,” International
Journal of Research and Reviews in Information Sciences (IJRRIS), vol. 1, no. 2, pp. 67-73, 2011.
[24] L. M. Naeni S. Shadrokh, A. Salehipour, “A fuzzy approach for the earned value management,” International
Journal of Project Management, vol. 32, pp. 709-716, 2014.
[25] P. Solomon, "Basing earned value on technical performance," The Journal of Defense Software
Engineering, vol. 1, no. 2, pp. 25-28, 2013.
[26] M. Yilmaz, R. V. O’Connor, R. Colomo-Palacios, P. Clarke, “An examination of personality traits and how they
impact on software development teams,” Information and Software Technology, vol. 86, pp. 101-122, 2017.
[27] K. Langsari and R. Sarno, “Optimizing effort and time parameters of COCOMO II estimation using fuzzy multi-
objective PSO,” 2017 4th International Conference on Electrical Engineering, Computer Science and Informatics
(EECSI), Yogyakarta, pp. 1-6, 2017. doi: 10.1109/EECSI.2017.8239157.
[28] M. Esteki, T. Javdani Gandomani, H. Khosravi Farsani, “A risk management framework for distributed scrum
using PRINCE2 methodology,” Bulletin of Electrical Engineering And Informatics, vol. 9 no. 3, pp. 1299-1310,
2020. doi:10.11591/eei.v9i3.1905.
[29] D. De Souza, A. R. C. Rocha, “A proposal for the improvement of the predictability of project cost using EVM
and historical data of cost,” ACM Student Research Competition, 35th International Conference on Software
Engineering (ICSE), San Francisco; International Journal of Software Engineering and Knowledge Engineering,
vol. 25, no. 1, pp. 27-50, 2015. Available: https://doi.org/10.1142/S0218194015400021. [Accessed: October 30, 2015].
[30] F. Acebes, J. Pajares, J. M. Galán, A. López-Paredes, “A new approach for project control under uncertainty.
Going back to the basics,” International Journal of Project Management, vol. 32, no. 3, pp. 423-434, 2014.
[31] D. De Souza, A. R. C. Rocha, D. Cristina, B. A. Constantino, “A proposal for the improvement of project’s cost
predictability using earned value management and quality data-an empirical study,” In European conference on
software process improvement. Springer, Berlin, pp. 170-181, 2014.
[32] X. Ma, B. Yang, “Optimization study of Earned Value Method in construction project management,” In 2012
International conference on information management, innovation management and industrial engineering (ICIII).
IEEE, vol. 2, pp 201-204, 2012.
[33] J. Xu, H. Zhang, F. Li, “Project integrated management based on quality earned value,” In 2nd international
conference on information science and engineering (ICISE). IEEE, pp. 432-435, 2010.
[34] H. K. Raju, Y. T. Krishnegowda, “Software sizing and productivity with Function Points,” Lecture Notes on
Software Engineering, vol. 1, no. 2, p. 204, 2013.
[35] G. P. Jiang, L. Xie, “Associated Control Research for Software Project Schedule and Budget Based on Function
Point Method,” DEStech Transactions on Computer Science and Engineering, (CSSE), 2018.
[36] H. Huang, J. Zheng, “Quality earned value analysis based on IFPUG method in software project,” In Proceedings
of 2018 International Conference on Big Data Technologies, pp. 101-108. ACM, 2018.
[37] T. Sulaiman, B. Barton, T. Blackburn, “Agile EVM-earned value management in Scrum Projects,” Agile
Conference, IEEE, pp. 10-16, 2006.
[38] J. Rusk, “Earned value for agile development,” Software Tech News, vol. 12. no. 1, pp. 20-27, 2009.
[39] P. K. Roy, P. Goutam, “Agile with EVM,” Int. Journal of Core Engineering & Management, vol. 1, no. 8, 2014.
[40] C. J. Torrecilla-Salinas, J. Sedeño, M. J. Escalona, M. Mejías, “Estimating, planning and managing Agile Web
development projects under a value-based perspective,” Information and Software Technology, vol. 61, pp. 124-
144, 2015.
[41] K. Pracharasniyom, S. Utsugi, Y. Koizumi, S. Hirose, H. Aso, T. Konosu, “Human resource management in small-
scale projects,” Journal of Business Administration and Languages, 2015. Available:
http://journal.tni.ac.th/upload/files/pdf/Human%20Resource%20Management%20in%20Smallscale%20Project.pdf.
[Accessed: October 2, 2017].
[42] R. V. Vargas, “Using earned value management indexes as a team development factor and a compensation tool,” In
Project management institute global congress EMEA. Prague, 2004.
[43] G. Broza, “The Human Side of Agile: How to help your team deliver,” 3P Vantage Media,
ISBN-13: 978-0988001626, 2012.
[44] T. DeMarco, T. R. Lister, “Peopleware: productive projects and teams,” Addison-Wesley. Upper Saddle River, NJ,
2015.
[45] R. AlQaisi, E. Gray, B. Steves, “Software systems engineering: A journey to contemporary agile and beyond, do
people matter?,” BCS, pp. 159-173, 2017. Available: http://ssudl.solent.ac.uk/id/eprint/3580 [Accessed: January 15,
2018].
[46] S. Wagner, M. Ruhe, “A systematic review of productivity factors in software development,” In Proceedings of
the 2nd International Workshop on Software Productivity Analysis and Cost Estimation; preprint
arXiv:1801.06475, 2018.
[47] F. Zainal Abidin, M. Darmawan, M. Osman, S. Anwar, S. Kasim, A. Yunianta, T. Sutikno, “Adaboost-multilayer
perceptron to predict the student’s performance in software engineering,” Bulletin of Electrical Engineering and
Informatics, vol. 8, no. 4, pp. 1556-1562, 2019.
[48] S. N. Sivanandam, S. Sumathi, S. N. Deepa, “Introduction to fuzzy logic using MATLAB,” Springer. Berlin, 2007.
[49] International Function Point Users Group (IFPUG), “Function Point Counting Practices Manual,” Release 4.3.1.
Netherlands, 2010.
[50] M. Xu, D. Fralick, J. Z. Zheng, B. Wang, X. M. Tu, C. Feng, “The differences and similarities between two-sample
t-test and paired t-test,” Shanghai Archives of Psychiatry, vol. 29, no. 3, pp. 184-188, 2017.
[51] H. Omran, “Fuzzinator: A fuzzy logic controller,” 2011. Available:
https://www.codeproject.com/KB/recipes/Fuzzinator/FuzzyLogicController.zip. [Accessed: June 1, 2016].
Enhancing software development cost control by forecasting the cost of rework:… (Tarig Ahmed Khalid)
[52] Minitab Inc., “Statistical inference and t-tests,” 2010. Available: http://www.minitab.com/uploadedFiles/Documents/sample-
materials/TrainingTTest16EN.pdf. [Accessed: November 20, 2018].
[53] M. Bhardwaj, A. Rana, “Impact of Size and Productivity on Testing and Rework Efforts for Web-based
Development Projects,” In ACM SIGSOFT software engineering notes, vol. 40, no. 2, pp. 1-4, 2015.
[54] B. A. Kitchenham, S. L. Pfleeger, L. M. Pickard, P. W. Jones, D. C. Hoaglin, K. El Emam, J. Rosenberg,
“Preliminary guidelines for empirical research in software engineering,” IEEE Transactions on software
engineering, vol. 28, no. 8, pp. 721-734, 2002.
[55] D. E. Perry, A. A. Porter, L. G. Votta, “Empirical studies of software engineering: a roadmap,” In Proceedings of
the conference on the future of software engineering. ACM, pp 345-355, 2000.
[56] K. Schwaber, J. Sutherland, “The scrum guide. The definitive guide to scrum: The rules of the game,” 2017.
Available: https://www.scrumguides.org/docs/scrumguide/v2017/2017-Scrum-Guide-US.pdf#zoom=100.
[Accessed: May 10, 2018].
[57] H.-L. Tsoi, “To evaluate the function point analysis: a case study,” International Journal of the Computer,
the Internet and Management, vol. 13, no. 1, pp. 31-40, 2005.
BIOGRAPHIES OF AUTHORS
Tarig Ahmed Khalid is a research scholar at Multimedia University, Malaysia. He obtained his
B.Sc. and M.Sc. in electrical engineering from the University of Khartoum, Sudan. He is a certified
Project Management Professional (PMP). His research interests include software project
management, requirements engineering, and fuzzy systems. He is a senior member of the IEEE
and a member of the PMI.
Eng-Thiam Yeoh is a Senior Lecturer in the Faculty of Computing & Informatics, Multimedia
University, Malaysia. He obtained his PhD from Multimedia University in 2009, after obtaining
his M.Phil. degree from the University of Cambridge, England (1991) and his first degree in
Computer Science from the National University of Malaysia (1989). His research interests include
e-learning, software engineering, multimedia education, fuzzy systems, natural language
processing, and big data. He is a member of the IEEE.