Software cost estimation is the process of calculating the effort, time, and cost of a project; it supports better decisions about the feasibility and viability of the project. Accurate cost prediction is required to organize development tasks effectively and to support economic and strategic planning and project management. Several known and unknown factors affect this process, which makes cost estimation very difficult. Software size is a particularly important factor: the accuracy of the cost estimate is directly proportional to the accuracy of the size estimate.
Failure of software projects has always been an important area of focus for the software industry. Projects do not fail only in the implementation phase; the planning and estimation steps are the most crucial ones and are what most often lead to failure. More than 50% of all projects fail, going beyond their estimated time and cost, and the Standish Group's CHAOS report cites a failure rate of 70% for software projects. This paper presents the existing algorithms for software estimation and the relevant concepts of Fuzzy Theory and PSO, and it explains the proposed algorithm along with experimental results.
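The abstract above does not spell out how Fuzzy Theory and PSO are combined, so the following is only a minimal illustrative sketch, not the authors' algorithm: it assumes PSO is used to calibrate the two coefficients a and b of a basic effort model, effort = a * KLOC^b, against a small, invented historical dataset by minimizing the mean magnitude of relative error (MMRE).

import random

# Hypothetical historical projects: (size in KLOC, actual effort in person-months).
HISTORY = [(10, 24), (25, 70), (40, 130), (60, 210), (5, 11)]

def mmre(params):
    # Mean magnitude of relative error of effort = a * KLOC**b over HISTORY.
    a, b = params
    errors = []
    for kloc, actual in HISTORY:
        predicted = a * kloc ** b
        errors.append(abs(actual - predicted) / actual)
    return sum(errors) / len(errors)

def pso(fitness, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Plain global-best PSO over a box-bounded search space.
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, err = pso(mmre, bounds=[(0.5, 10.0), (0.5, 1.5)])
print("calibrated (a, b):", best, "MMRE:", err)

A fuzzy front end (for example, fuzzified size inputs) would sit in front of such a calibrated model; that part is omitted here.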
A Review of Agile Software Effort Estimation Methods – Editor IJCATR
Software cost estimation is an essential aspect of software project management, and the success or failure of a software project therefore depends on the accuracy of its effort, time, and cost estimates. Software cost estimation is a scientific activity that requires knowledge of a number of relevant attributes that determine which estimation method to use in a given situation. Over the years, various studies have evaluated software effort estimation methods; however, because new software development methods keep being introduced, those reviews have not captured them. Agile software development is one recent, popular method that was not taken into account in previous cost estimation reviews. The main aim of this paper is to review existing software effort estimation methods exhaustively by exploring estimation methods suitable for new software development methods.
Insights on Research Techniques towards Cost Estimation in Software Design – IJECEIAES
This document summarizes research on techniques for cost estimation in software design. It begins by describing common cost estimation techniques such as Constructive Cost Modeling (COCOMO) and Function Point Analysis. It then analyzes research trends in cost estimation, effort estimation, and fault prediction based on literature from 2010 to the present: fewer than 50 papers were found on overall cost estimation, fewer than 25 on effort estimation, and only 9 on fault prediction. The document then reviews existing research addressing general cost estimation, enhancement of Function Point Analysis, statistical modeling approaches, cost estimation for embedded systems, and estimation for fourth-generation languages and NASA projects. Most techniques use COCOMO or extend existing models with techniques like fuzzy logic, neural networks, or statistical methods.
Estimation of resources, cost, and schedule for a software engineering effort requires experience, access to good historical information, and the courage to commit to quantitative predictions when only qualitative information exists. Halstead's Measure, the COCOMO model, and the COCOMO II model are estimation techniques used for software development and maintenance.
The document discusses project management risk sensitivity analysis. It begins by introducing the importance of fulfilling project outcomes and how a single change in risk sensitivity analysis measures can impact overall project outcomes. It then defines risk sensitivity analysis as a useful analytical procedure for project evaluation and assessment that can help identify major risk variables and guide risk control, though it does not fully evaluate or eliminate all project risks. The objective of the research survey discussed is to evaluate risk sensitivity analysis techniques and models under real-world scenarios to demonstrate their usefulness and inform future risk reduction and management strategies.
Making decisions in the presence of uncertainty requires estimating the impact of the outcomes of those decisions. Here is a collection of resources that can be used to guide that process.
This document provides an overview of project evaluation and size and cost estimation in software project management. It discusses conducting an initial high-level project evaluation to assess strategic fit, technical feasibility, and economic viability before more detailed size and cost estimations are made. Size can be estimated using function point analysis or object point analysis, while costs are estimated via techniques like cost-benefit analysis, cash flow forecasting, and net present value/internal rate of return calculations. Accurate estimation is challenging, and positive attitudes and periodic revisions are important.
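As a hedged illustration of the cost-benefit techniques named above (the cash-flow figures are invented, and the bisection-based IRR assumes a conventional cash-flow profile with a single sign change), a net present value and internal rate of return calculation could look like this:

def npv(rate, cash_flows):
    # Net present value of cash flows, where cash_flows[0] occurs at year 0.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    # Internal rate of return via bisection (assumes NPV changes sign on [lo, hi]).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: 100k invested up front, then five years of net income.
flows = [-100_000, 30_000, 35_000, 40_000, 40_000, 25_000]
print("NPV at 10%:", round(npv(0.10, flows), 2))
print("IRR:", round(irr(flows), 4))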
An approach for software effort estimation using fuzzy numbers and genetic al... – csandit
One of the most critical tasks during the software development life cycle is estimating the effort and time involved in developing the software product. Estimation may be performed in many ways, such as expert judgment, algorithmic effort estimation, machine learning, and analogy-based estimation. Analogy-based software effort estimation is the process of identifying one or more historical projects that are similar to the project being developed and then reusing their estimates. Analogy-based estimation is integrated with fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of the software development life cycle; fuzzy logic is introduced in the proposed model because of the uncertainty associated with attribute measurement and data availability. However, a historical project is hardly ever exactly the same as the project being estimated: in most cases even the most similar project still has a non-zero similarity distance to it. The reused effort therefore needs to be adjusted whenever the most similar project has a similarity distance to the project being estimated. To adjust the reused effort, the authors build an adjustment mechanism whose algorithm derives the optimal adjustment using a Genetic Algorithm. Combining fuzzy logic for early-stage effort estimation with a Genetic Algorithm-based adjustment mechanism may bring the estimate close to the correct effort.
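The abstract does not give the adjustment formula or the GA encoding, so the sketch below fills those in with assumptions of its own: a Euclidean similarity distance over invented project features, a linear distance-based adjustment of the reused effort, and a tiny real-coded GA that fits the adjustment coefficient by leave-one-out error on the historical set.

import math
import random

# Hypothetical historical projects: feature vector (team size, KLOC, complexity 1-5) and actual effort.
HISTORY = [
    ((4, 12.0, 2), 30.0),
    ((8, 40.0, 4), 150.0),
    ((6, 25.0, 3), 80.0),
    ((3, 8.0, 1), 18.0),
]

def distance(a, b):
    # Euclidean similarity distance between two project feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate(features, alpha, pool):
    # Reuse the most similar project's effort, adjusted linearly by the similarity distance.
    analog_features, analog_effort = min(pool, key=lambda item: distance(item[0], features))
    return analog_effort * (1 + alpha * distance(analog_features, features))

def fit_alpha(history, generations=60, pop_size=20):
    # Tiny GA: evolve the adjustment coefficient alpha to minimize mean relative error,
    # evaluated leave-one-out over the historical projects.
    def fitness(alpha):
        errors = []
        for i, (features, actual) in enumerate(history):
            pool = history[:i] + history[i + 1:]
            errors.append(abs(estimate(features, alpha, pool) - actual) / actual)
        return sum(errors) / len(errors)

    population = [random.uniform(-0.2, 0.2) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[: pop_size // 2]
        children = [(random.choice(parents) + random.choice(parents)) / 2 + random.gauss(0, 0.02)
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=fitness)

alpha = fit_alpha(HISTORY)
print("adjustment coefficient:", round(alpha, 3))
print("estimate for new project:", round(estimate((5, 20.0, 3), alpha, HISTORY), 1), "person-months")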
This document summarizes research on factors that contribute to the success of software projects. It discusses functional requirements, operational quality, and usability. The introduction provides background on the importance of quality management and identifies reasons why software projects fail. The literature review then summarizes several studies that have examined how reliability, response time, usability, and meeting user requirements impact project success. Finally, the document discusses the need to balance functionality and usability through various design and testing approaches.
This document discusses using statistical techniques to improve predictability in project performance. It provides three scenarios as examples: 1) Predicting critical activities on a schedule using a criticality index, 2) Estimating a project schedule and cost using Monte Carlo simulation, and 3) Building an early warning system for project monitoring and control. The document emphasizes that statistical methods can help project managers develop more accurate estimates and better manage project risks and performance.
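As a small, hedged sketch of scenario 2 above (Monte Carlo estimation of a project schedule), the snippet below assumes four sequential tasks with invented three-point duration estimates sampled from triangular distributions; the document itself gives no such data.

import random

# Hypothetical sequential tasks with (optimistic, most likely, pessimistic) durations in days.
TASKS = {
    "requirements": (5, 8, 15),
    "design":       (8, 12, 20),
    "coding":       (20, 30, 55),
    "testing":      (10, 15, 30),
}

def simulate_schedule(n_runs=10_000):
    # Monte Carlo: sample each task from a triangular distribution and sum the durations.
    totals = []
    for _ in range(n_runs):
        totals.append(sum(random.triangular(lo, hi, mode) for lo, mode, hi in TASKS.values()))
    totals.sort()
    return totals

totals = simulate_schedule()
print("P50 duration:", round(totals[len(totals) // 2], 1), "days")
print("P80 duration:", round(totals[int(len(totals) * 0.8)], 1), "days")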
This document describes a study that examined the differences between top-down and bottom-up expert estimation strategies for software development effort. Seven estimation teams estimated the effort for two software projects, one using a top-down strategy and one using a bottom-up strategy. The study found that bottom-up estimation led to more accurate estimates than top-down estimation. Specifically, bottom-up estimates had a lower average magnitude of relative error than top-down estimates. Additionally, the recall of very similar past projects seemed necessary for accurate top-down estimates, suggesting bottom-up should be used unless experience with very similar projects exists.
The document discusses reasons why projects often fail, citing statistics from various studies. It notes that according to one survey, 82% of employees believe ongoing projects will fail and 78% are working on projects deemed "doomed." Another study found that 50-70% of CRM projects from 2001-2009 failed to meet expectations. A study of ERP implementations found that over half went over budget and most under-delivered business value. The key reasons for project failure discussed in more depth include scope, time and cost overruns, risks, and quality issues.
New approach for solving software project scheduling problem using differenti... – ijfcstjournal
The Software Project Scheduling Problem (SPSP) is one of the most critical issues in developing software. The major factor in completing a software project consistent with the planned cost and schedule is accurate, realistic scheduling; SPSP is therefore a topic that should be prioritized in software project development and management, and development should be planned around it. SPSP usually includes resource planning, cost estimation, staffing, and cost control. It is therefore necessary for SPSP to use an algorithm that, taking costs and limited resources into account, can estimate the optimal time for completing the project; reducing time and cost simultaneously in software project development is vital. Meta-heuristic algorithms have shown good performance on SPSP in recent years: when software project factors are vague and incomplete, cost-based scheduling models built on meta-heuristic algorithms can produce better schedules. These algorithms work on the basis of collective intelligence and, by using a fitness function, offer more accurate capability for SPSP. In this paper, the Differential Evolution (DE) algorithm is used to optimize SPSP. Experimental results show that the proposed model performs better than a Genetic Algorithm (GA).
5 immutable principles and 5 processes in 60 seconds – Glen Alleman
This document outlines five principles, five practices, and five processes for project success according to Niwot Ridge Consulting. The principles focus on defining what success looks like, how to achieve it, ensuring adequate resources, identifying impediments, and measuring progress. The practices involve identifying capabilities and requirements, establishing and executing a performance baseline, and performing continuous risk management. The processes are organizing work, budgeting and scheduling, project accounting, performance analysis, and change control.
EARLY STAGE SOFTWARE DEVELOPMENT EFFORT ESTIMATIONS – MAMDANI FIS VS NEURAL N... – cscpconf
Accurately estimating software size, cost, effort, and schedule is probably the biggest challenge facing software developers today. It has major implications for the management of software development, because both overestimates and underestimates directly damage software companies. Many models have been proposed over the years by various researchers for carrying out effort estimation, and several studies of early-stage effort estimation highlight the importance of early estimates. New paradigms offer alternatives for estimating software development effort, in particular Computational Intelligence (CI), which exploits mechanisms of interaction between humans and processes domain knowledge with the intention of building intelligent systems (IS). Among IS, Artificial Neural Networks and Fuzzy Logic are the two most popular soft computing techniques for software development effort estimation. In this paper, neural network models and a Mamdani FIS model are used to predict early-stage effort on a student dataset. It was found that the Mamdani FIS predicted early-stage effort more efficiently than the neural network models.
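The paper's actual rule base and membership functions are not reproduced in this summary, so the following hand-rolled Mamdani inference step is purely illustrative: it assumes triangular membership functions over an invented project-size input, three assumed rules, max aggregation of clipped consequents, and centroid defuzzification.

def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed fuzzy sets for project size (e.g., adjusted function points).
SIZE_SETS = {"small": (0, 0, 120), "medium": (60, 150, 260), "large": (180, 400, 400)}
# Assumed fuzzy sets for effort (person-hours).
EFFORT_SETS = {"low": (0, 0, 300), "medium": (200, 450, 700), "high": (550, 1000, 1000)}
# Assumed Mamdani rules: IF size IS <antecedent> THEN effort IS <consequent>.
RULES = [("small", "low"), ("medium", "medium"), ("large", "high")]

def mamdani_effort(size, resolution=200):
    # Mamdani inference: clip each consequent by its rule strength, aggregate by max,
    # then defuzzify with the centroid of the aggregated output set.
    strengths = {eff: tri(size, *SIZE_SETS[ante]) for ante, eff in RULES}
    lo, hi = 0.0, 1000.0
    num = den = 0.0
    for i in range(resolution + 1):
        y = lo + (hi - lo) * i / resolution
        mu = max(min(strengths[eff], tri(y, *EFFORT_SETS[eff])) for eff in strengths)
        num += y * mu
        den += mu
    return num / den if den else 0.0

print("estimated effort:", round(mamdani_effort(170), 1), "person-hours")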
A NEW MATHEMATICAL RISK MANAGEMENT MODEL FOR AGILE SOFTWARE DEVELOPMENT METHO... – ijseajournal
This paper proposes a new mathematical model for estimating the cost of explicit Agile software development risk management together with its impact benefits (savings/profits). This is necessitated by the fact that, despite the growing need for managing risks explicitly in medium-to-large-scale agile software development projects, there are no known ways to estimate explicit risk management costs and benefits. With the proposed model, explicit risk management procedures, along with risk management estimation techniques, are made known to stakeholders, who can then make the right decisions about risk management costs and impacts, as well as about when to use implicit or explicit risk management. The proposed system proves to be feasible and dependable and is evidently capable of enhancing agile methods for software projects of all sizes while maintaining the swiftness of the agile process.
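The paper's mathematical model is not given in this summary; as a generic, hedged illustration of the kind of trade-off it quantifies (expected loss avoided versus the cost of explicit risk management), with entirely invented risk data:

# Hypothetical risks: (name, probability, impact cost if it occurs,
#                      cost of explicit mitigation, probability after mitigation).
RISKS = [
    ("key developer leaves",    0.20, 40_000, 5_000, 0.10),
    ("requirements churn",      0.50, 25_000, 8_000, 0.25),
    ("third-party API changes", 0.10, 60_000, 3_000, 0.05),
]

for name, p, impact, mitigation_cost, p_after in RISKS:
    exposure_before = p * impact                      # expected loss without explicit management
    exposure_after = p_after * impact + mitigation_cost
    net_benefit = exposure_before - exposure_after    # positive => explicit management pays off
    print(f"{name:25s} net benefit of explicit risk management: {net_benefit:+.0f}")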
Presentation plan
Articles
“Know Your Enemy”: Software Risk Management
"Stop promising miracles" – Wideband Delphi method
General project
LCA System Software Project Plan
Articles: “Know Your Enemy”: Software Risk Management; “Stop Promising Miracles”: Wideband Delphi method
What is a risk?
A problem that can endanger the success of the project; it causes losses.
It can have a negative influence on the cost, planning, and quality of the project and on the team.
Risk management identifies, addresses, and resolves risk problems before they damage the project.
We can absorb a risk without taking any measure (action), or we can eliminate it.
Risk types
Uncertainty – it hits us exactly where we do not know
Dependencies – on external factors
Requirements issues – we build the right product wrong, or the wrong product
Management problems – make the project harder to succeed
Lack of knowledge – leads to training or hiring more suitable people
Risk management
Assessing the risk – identifying the potential risk areas
Identifying the risk – checking databases of previously identified risks
Analyzing the risk – checking the project outcome against the risk variables
Prioritizing the risk – focusing on the most severe risks
Avoiding the risk
Controlling the risk – the ongoing process of managing the risk
Keeping data about the risk – monitoring and resolving each risk problem
Conclusions
From these articles we learned a few basic rules about how to better plan and develop a project, and 14 good lessons about how to become a good manager.
A manager is there to help people do their jobs better; they have to create good working conditions so that employees can finish the work on time.
We have to pay attention to the risk problems that may occur and take the right measures to solve them.
Articles – “Stop Promising miracles” Wideband Delphi method
We must provide estimates for the work
but few of us are skillful estimators
this can be improved
Wideband Delphi builds on the principle that multiple heads are better than one.
Articles – “Stop Promising miracles” Wideband Delphi method
Wideband Delphi
can be used to estimate virtually anything:
the number of labor months needed
the lines of code or number of classes, etc.
Testability measurement model for object oriented design (TMMOOD) – ijcsit
Measuring testability early in the development life cycle, especially at the design phase, is of crucial importance to software designers, developers, quality controllers, and practitioners. However, most of the mechanisms available for testability measurement can only be used in the later phases of the development life cycle. Early estimation of testability, ideally at the design phase, helps designers improve their designs before coding starts, and practitioners regularly advocate that testability should be planned early in the design phase. Testability measurement early in the design phase is greatly emphasized in this study and is considered significant for the delivery of quality software: it extensively reduces rework during and after implementation and supports effective test plans and better project and resource planning in a practical manner, with a focus on the design phase. An effort has been made in this paper to recognize the key factors contributing to testability measurement at the design phase. Additionally, a testability measurement model is developed to quantify software testability at the design phase. Furthermore, the relationship of testability with these factors has been tested and justified with the help of statistical measures. The developed model has been validated using an experimental tryout. Finally, the paper incorporates the empirical validation of the testability measurement model as the authors' most important contribution.
This document discusses risk management for projects. It defines risk as uncertain events that can positively or negatively impact project objectives. The key elements of risk are that it relates to future events with uncertain outcomes and involves causes and effects. A socio-technical model of risk identifies four components - actors, technology, structure, and tasks. The document then outlines a framework for dealing with risks through identification, analysis, prioritization, planning, and monitoring. Methods like checklists and brainstorming are presented for risk identification.
The document discusses various techniques for project estimation including three point estimation, Delphi method, planning poker, function point analysis, use case points, and PERT diagrams. It provides details on each technique including how they are conducted, their advantages and disadvantages, and when each is best applied. The key aspects that estimators need to consider for large scale projects are work partitioning challenges, increasing communication overhead with larger teams, and understanding how fast the project can realistically be completed based on its size.
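As a quick added illustration of the three-point (PERT) technique listed above, with hypothetical task values:

def pert_estimate(optimistic, most_likely, pessimistic):
    # Three-point (beta/PERT) estimate and its standard deviation.
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical task: build the reporting module (effort in person-days).
e, sd = pert_estimate(8, 12, 24)
print(f"expected effort: {e:.1f} person-days, std dev: {sd:.1f}")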
The document discusses key concepts for managing software projects including the four Ps of project management: People, Product, Process, and Project. It describes stakeholders and team structures, and emphasizes establishing clear objectives and scope, tracking progress, and learning lessons through post-mortem reviews. Metrics for both processes and products are discussed to assess status, risks, and quality in order to guide improvement.
Productivity Factors in Software Development for PC Platform – IJERA Editor
Identifying the most relevant factors influencing project performance is essential for implementing business strategies by selecting and adjusting proper improvement activities. The two major classification algorithms, CRT and ANN, recommended by the Auto Classifier tool in SPSS Modeler, were used to determine the most important variables (attributes) of software development in a PC environment. While their classification accuracies for productive versus non-productive cases are relatively close (72% vs. 69%), their rankings of the important variables differ: CRT ranks the programming language as the most important variable and function points as the least important, whereas ANN ranks function points as the most important, followed by team size and programming language.
Increasing the Probability of Success with Continuous Risk Management – Glen Alleman
This document summarizes an article from The Measurable News publication that discusses increasing the probability of program success through continuous risk management. It describes how identifying, analyzing, planning, tracking and controlling risk on complex systems can help assess the maturity of an existing risk management process and determine actions needed to improve it. The article provides examples of root cause analysis, assumptions, and sources of risk, and argues that separating risks into aleatory and epistemic categories helps better measure the impacts of each type of risk. Continuous risk management is presented as a way to produce risk-informed decisions that can address issues leading to cost overruns, schedule delays, and shortfalls in technical performance.
Managing in the Presence of Uncertainty – Glen Alleman
Managing in the presence of uncertainty requires making decisions with models of that uncertainty.
Monte Carlo simulation and related approaches can form the basis for making informed decisions in the presence of uncertainty.
An Improved Differential Evolution Algorithm for Real Parameter Optimization ... – IDES Editor
Differential Evolution (DE) is a powerful yet simple evolutionary algorithm for the optimization of real-valued, multimodal functions. DE is generally considered a reliable, accurate, and robust optimization technique. However, the algorithm suffers from premature convergence, a slow convergence rate, and large computation times when optimizing computationally expensive objective functions, so an attempt to speed up DE is considered necessary. This paper introduces Improved Differential Evolution (IDE), a modification of DE that enhances the convergence rate without compromising solution quality. In the IDE algorithm, the initial population of individuals is partitioned into several sub-populations, and the DE algorithm, which uses only one population set instead of the two used by the original DE, is applied to each sub-population independently. At periodic stages in the evolution, the entire population is shuffled and points are reassigned to sub-populations. The performance of IDE on a test bed of functions is compared with the original DE; IDE is found to require less computational effort to locate the global optimum.
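The IDE variant with sub-populations is not reproduced here; for orientation, the sketch below shows the classic DE/rand/1/bin scheme that it modifies, applied to a simple sphere test function with assumed control parameters.

import random

def sphere(x):
    # Simple test objective: sum of squares (minimum 0 at the origin).
    return sum(v * v for v in x)

def differential_evolution(fitness, dim=5, bounds=(-5.0, 5.0),
                           pop_size=30, F=0.8, CR=0.9, generations=200):
    # Classic DE/rand/1/bin: mutation by scaled difference of two random vectors,
    # binomial crossover, greedy selection.
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fitness(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)
            trial = []
            for j in range(dim):
                if random.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])   # mutation
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])                        # inherit from target
            f_trial = fitness(trial)
            if f_trial <= fit[i]:                                  # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

best, val = differential_evolution(sphere)
print("best value found:", val)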
An Adaptive Differential Evolution Algorithm for Reactive Power Dispatch – Ijorat1
This document summarizes an adaptive differential evolution algorithm for solving the reactive power dispatch problem, which involves minimizing real power losses. The problem is formulated as a non-linear constrained optimization problem. An adaptive differential evolution algorithm is proposed that uses time-varying chaotic mutation and crossover to avoid parameter tuning. The algorithm is applied to the IEEE 57-bus and 118-bus test systems and found to provide superior convergence and solution quality compared to classical differential evolution and self-adaptive differential evolution algorithms.
Decision Support Analysis for Software Effort Estimation by Analogy – Tim Menzies
The document discusses decision support for software effort estimation by analogy (EBA). It outlines the basic tasks and decision problems in applying EBA, including searching for analogs, determining similarity measures, and choosing an analogy adaptation strategy. It presents a decision-centric process model for EBA and discusses empirical studies to support decision-making when customizing EBA for a specific dataset. An example EBA method called AQUA+ is analyzed through a comparative study evaluating different attribute weighting heuristics.
How to Find Relevant Data for Effort Estimation – CS, NcState
This document discusses finding relevant data for effort estimation. It introduces the concept of locality, where data may best divide based on a single attribute (locality(1)) or a combination of attributes (locality(N)). The TEAK technique is presented, which uses clustering and variance pruning to find low-variance, relevant projects across attributes rather than relying on rigid locality(1) boundaries. Experiments show TEAK performs equally well on within-company and cross-company data, and tends to retrieve from both within and cross sources, challenging the distinction between the two.
A Hybrid Differential Evolution Method for the Design of IIR Digital Filter – IDES Editor
This paper establishes a methodology for the robust and stable design of infinite impulse response (IIR) digital filters using a hybrid differential evolution method. Differential Evolution (DE) is used as the global search technique, and exploratory search is exploited as the local search technique. DE is a population-based stochastic real-parameter optimization technique from evolutionary computation whose simple yet powerful and straightforward features make it very attractive for numerical optimization; exploratory search aims to fine-tune the solution locally in promising search areas. The proposed DE method augments the ability to explore and exploit the search space locally as well as globally by applying the opposition learning strategy and random migration. A multivariable optimization is employed as the design criterion to obtain the optimal stable IIR filter that minimizes the magnitude approximation error and ripple magnitude. The DE method is implemented to design low-pass, high-pass, band-pass, and band-stop digital IIR filters. The resulting IIR digital filter designs confirm that the method's results are comparable to those of other algorithms and that it can be effectively applied to higher-order filter design.
An Improved Differential Evolution Algorithm for Congestion Management Consid... – Suganthi Thangaraj
In a deregulated electricity market, Congestion Management (CM) is one of the most significant issues in keeping the system in a secure state and achieving reliable system operation. While addressing congestion management, voltage stability should also be taken into account. This paper elucidates an Improved Differential Evolution (IDE) algorithm to alleviate congestion in transmission lines by rescheduling generators while considering voltage stability. Differential Evolution (DE) is a heuristic, population-based algorithm that is well suited to solving complex, non-linear optimization problems. A Double Best Mutation Operator (DBMO) is proposed to improve the DE algorithm's convergence rate. To validate the suitability of the suggested approach, it has been evaluated on the IEEE 30-bus test system under both base-case loading and a 10% increased load, and the test system has also been examined under critical line outages. The results and discussion clearly depict the effectiveness of the proposed approach in solving the congestion management problem.
The document discusses software cost estimation and scheduling. It covers topics like software cost components, productivity measures, estimation techniques like function point analysis and lines of code, and project scheduling. Function point analysis measures functionality based on user requirements and design specifications by counting inputs, outputs, files, inquiries and interfaces. Estimates are adjusted based on complexity factors. Estimates are used to determine effort and schedule tasks on a project.
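As a hedged sketch of the function point counting step described above: the weights used are the commonly cited average-complexity values for the five function types, the counts are invented, and the value adjustment factor is reduced to a single multiplier.

# Commonly cited average-complexity weights for the five function types (assumed here).
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "internal_files": 10, "external_interfaces": 7}

def unadjusted_function_points(counts):
    # Sum of counted function types times their complexity weights.
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

def adjusted_function_points(counts, value_adjustment_factor=1.0):
    # Apply a single value adjustment factor (normally derived from 14 general
    # system characteristics) to the unadjusted count.
    return unadjusted_function_points(counts) * value_adjustment_factor

# Hypothetical system counted from its requirements and design documents.
counts = {"inputs": 24, "outputs": 18, "inquiries": 10, "internal_files": 6, "external_interfaces": 3}
print("adjusted FP:", adjusted_function_points(counts, value_adjustment_factor=1.08))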
This document discusses software cost estimation and factors that influence productivity. It defines software cost estimation as predicting resources needed for development like effort, time and total cost. Cost components include hardware/software, travel/training, and effort costs like salaries and overheads. Productivity measures include lines of code, function points based on functionality, and object points. Factors like language, code verbosity, and system characteristics can impact productivity estimates.
Software Cost Estimation in Software Engineering SE23 – koolkampus
Software cost estimation involves predicting resources required for development using metrics like productivity, size, and function points. The COCOMO 2 model is an empirical algorithmic model that estimates effort as a function of product, project, and process attributes. It operates at three levels - early prototyping based on object points, early design using function points mapped to lines of code, and post-architecture directly using lines of code. Key cost drivers include requirements, personnel experience, reuse, and development flexibility.
The document provides an overview of software cost estimation, outlining various methods used including algorithmic models like COCOMO, expert judgement, top-down and bottom-up approaches, and estimation by analogy. It discusses COCOMO in detail, including the original COCOMO 81 model and updated COCOMO II model, and emphasizes the importance of calibration for accurate estimates.
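For orientation only, the sketch below shows the basic COCOMO-style relationship these models build on: effort = a * KLOC^b, scaled by cost-driver multipliers. The default coefficients are the organic-mode COCOMO 81 values, and the multipliers in the example are placeholders rather than calibrated drivers.

def cocomo_effort(kloc, a=2.4, b=1.05, effort_multipliers=(1.0,)):
    # Basic COCOMO-style effort in person-months: a * KLOC**b scaled by cost drivers.
    # a and b default to the organic-mode COCOMO 81 coefficients.
    effort = a * kloc ** b
    for em in effort_multipliers:
        effort *= em
    return effort

def cocomo_schedule(effort_pm, c=2.5, d=0.38):
    # Nominal development time in months from effort (organic-mode coefficients).
    return c * effort_pm ** d

effort = cocomo_effort(32, effort_multipliers=(1.15, 0.91))   # e.g. higher complexity, experienced team
print(f"effort: {effort:.1f} person-months, schedule: {cocomo_schedule(effort):.1f} months")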
Iwsm2014: An analogy-based approach to estimation of software development ef... – Nesma
The document discusses fuzzy analogy, a technique for software effort estimation that can handle categorical data. It introduces fuzzy analogy and fuzzy k-modes clustering. Fuzzy k-modes is used to cluster similar software projects from a repository based on categorical attributes into homogeneous groups. Fuzzy analogy then assesses the similarity between projects based on their membership to clusters and estimates the effort of a new project as a weighted average of similar past projects' efforts. The document evaluates fuzzy analogy on 194 projects from the ISBSG repository selected based on data quality and attributes criteria.
Efficient Indicators to Evaluate the Status of Software Development Effort Es... – IJMIT JOURNAL
Development effort estimation is an undeniable part of project management that considerably influences the success of a project; inaccurate and unreliable effort estimates can easily lead to project failure. Because of their special characteristics, accurate effort estimation for software projects is a vital management activity that must be done carefully to avoid unforeseen results. Although numerous effort estimation methods have been proposed in this field, the accuracy of estimates is still not satisfactory, and attempts to improve the performance of estimation methods continue. Prior research in this area has focused on numerical and quantitative approaches, and only a few works investigate the root problems and issues behind inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate an organization's situation with respect to effort estimation. The proposed framework includes various indicators that cover the critical issues in the field of software development effort estimation. Since organizations differ in their estimation capabilities and shortcomings, the proposed indicators support a systematic approach in which an organization's strengths and weaknesses in effort estimation are discovered.
This document provides an overview of software estimation techniques. It discusses why estimation is important, the general estimation process, and factors that impact accuracy such as requirements management and experience. The document also describes different estimation methods like expert judgment, analogy-based, top-down, and bottom-up. It provides examples of estimation tools like Function Point Analysis and COCOMO II for sizing and effort estimation.
Software metric analysis methods for product development maintenance projects (IAEME Publication)
This document discusses various software metrics and methods for analyzing metrics to improve the software development process. It begins with an introduction to software metrics and their importance for project management. It then describes common software development phases and associated metrics that can be collected at each phase. The remainder of the document focuses on different methods for analyzing metrics, including pie charts, Pareto diagrams, bar charts, line charts, scatter diagrams, radar diagrams, and control charts. These analysis methods help identify areas for process improvement and determine if changes have led to desired outcomes.
COMPARATIVE STUDY OF SOFTWARE ESTIMATION TECHNIQUES (ijseajournal)
Many information technology firms, among other organizations, have been working on how to estimate resources such as funds during software development processes. Software development life cycles require many activities and skills to avoid risks, so the best-suited software estimation technique should be employed. Therefore, in this research a comparative study was conducted that considers the accuracy, usage, and suitability of existing methods; it is intended for project managers and project consultants throughout the software project development process. Both algorithmic and non-algorithmic techniques, such as linear regression, are covered: model, composite and regression techniques are used to derive COCOMO, COCOMO II, SLIM and linear multiple regression respectively, while expertise-based and linear-based rules are applied in non-algorithmic methods. However, these techniques need some advancement to reduce the errors experienced during the software development process. Therefore, in relation to software estimation techniques, this paper proposes a model that can be helpful to information technology firms, researchers and other organizations that use information technology in processes such as budgeting and decision making.
The article proposes a new model for optimizing software effort and cost estimation based on code reusability. The model compares new projects to previously completed, similar projects stored in a code repository. By searching for and retrieving reusable code, functions, and methods from old projects, the model aims to reduce effort and cost estimates for new software development. The model is described as being based on the concept of estimation by analogy and using innovative search and retrieval techniques to achieve code reuse and thus decreased cost and effort estimates.
This document discusses software cost estimation. It begins by distinguishing between effort, which is the number of hours of work required, and time, which is the duration from start to finish. It then describes factors that influence cost estimation, such as project type and size, and development team size. Finally, it outlines several techniques used for cost estimation, including algorithmic models, expert judgment, top-down estimation, and bottom-up estimation.
This document summarizes the key phases in planning a software project:
1) Planning is the most important phase and involves estimating effort, schedules, resources, and risks. It defines goals for costs, timelines and quality.
2) Process planning defines the development stages and activities. It may modify established models to suit a project's needs.
3) Effort estimation predicts the time and costs required, and is needed for budgets, monitoring, and contracting. Estimates are most accurate after requirements are defined.
A New Model for Software Cost Estimation Using Harmony Search (ijfcstjournal)
Accurate and realistic estimation has always been considered a great challenge in the software industry. Software Cost Estimation (SCE) is the standard practice used to manage software projects. The estimate determined in the initial stages of a project drives the planning of the project's other activities. In fact, estimation is confronted with a number of uncertainties and barriers, and assessing previous projects is essential to solve this problem. Several models have been developed for the analysis of software projects. The classical reference method is the COCOMO model, but other methods are also applied, such as Function Point (FP) and Lines of Code (LOC); meanwhile, experts' opinions also matter in this regard. In recent years, the growth and combination of meta-heuristic algorithms with high accuracy have brought about great achievements in software engineering. Meta-heuristic algorithms, which can analyze data from multiple dimensions and identify the optimum solution among them, are analytical tools for the analysis of data. In this paper, we have used the Harmony Search (HS) algorithm for SCE. The proposed model has been assessed on a collection of 60 standard projects from the NASA60 dataset. The experimental results show that the HS algorithm is a good way of determining the weights of the similarity measure factors for software effort and of reducing the MRE error.
Exploring the Efficiency of the Program using OOAD Metrics (IRJET Journal)
This document proposes a methodology to analyze the efficiency of object-oriented programs using OOAD (Object Oriented Analysis and Design) metrics. The methodology involves compiling a program successively until it is error-free, recording the error rate at each compilation. These results are then compared to determine how many compilations were needed for the program to be error-free, indicating its efficiency. The methodology is experimentally validated on a sample Java program, with results showing the error rate decreasing with each compilation until the program is error-free after the 8th compilation, demonstrating good efficiency.
Estimation of agile functionality in software development (Bashir Nasr Azadani)
Estimation of Agile Functionality in Software Development - ISBN: 978-988-98671-8-8
Publication date: Mar 21, 2008 presented at International MultiConference of Engineers and Computer Scientists 2008 Vol I
This document discusses defect prediction models in software development. It begins by covering the importance of effort estimation in software maintenance planning and management. The document then discusses how data from software defect reports, including details on defects, components, testers and fixes, can be used to build reliability models to predict remaining defects. Machine learning and data mining techniques are proposed to analyze relationships between software quality across releases and to construct predictive models for forecasting time to fix defects. The document provides an overview of typical software development processes and then discusses a two-step approach to defect prediction and analysis using appropriate statistics and data mining techniques.
The document discusses software project estimation and scheduling. It explains that accurate estimation is important to avoid cost overruns and schedule slips. Common problems with estimation include predicting software costs, schedules, and risks. Fundamental estimation questions relate to effort, time, and costs. Estimation methods include expert judgment, analogy, top-down and bottom-up approaches, and algorithmic modeling. Productivity and function measures are used to estimate software size. Changing technologies also impact estimation accuracy. Compression techniques can shorten schedules through crashing or fast tracking.
Review on Algorithmic and Non Algorithmic Software Cost Estimation Techniques (ijtsrd)
Effective software cost estimation is one of the most challenging and important activities in software development. Developers want a simple and accurate method of effort estimation. Estimating cost before work starts is a prediction, and predictions are not always accurate. Software effort estimation is a very critical task in software engineering, and a suitable estimation technique is crucial to control quality and efficiency. This paper gives a review of various available software effort estimation methods, focusing mainly on algorithmic and non-algorithmic models. These existing methods for software cost estimation are illustrated and their aspects are discussed. No single technique is best for all situations, and thus a careful comparison of the results of several approaches is most likely to produce realistic estimates. This paper provides a detailed overview of existing software cost estimation models and techniques, presents the strengths and weaknesses of various cost estimation methods, and highlights some of the relevant reasons that cause inaccurate estimation. Pa Pa Win | War War Myint | Hlaing Phyu Phyu Mon | Seint Wint Thu, "Review on Algorithmic and Non-Algorithmic Software Cost Estimation Techniques", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd26511.pdf Paper URL: https://www.ijtsrd.com/engineering/-/26511/review-on-algorithmic-and-non-algorithmic-software-cost-estimation-techniques/pa-pa-win
How Should We Estimate Agile Software Development Projects and What Data Do W... (Glen Alleman)
Estimating techniques for an acquisition program progresses from analogies to actual cost method as the program matures and more information is known. The analogy method is most appropriate early in the program life cycle when the system is not yet fully defined.
STRATEGIES TO REDUCE REWORK IN SOFTWARE DEVELOPMENT ON AN ORGANISATION IN MAU... (ijseajournal)
Rework is a known vicious circle in software development, since it plays a central role in generating delays, extra costs and diverse risks introduced after software delivery. It eventually has a negative impact on the quality of the software developed. In order to address the rework issue, this paper examines in depth the notion of rework in software development as it occurs in practice, by analysing a development process in an organisation in Mauritius where rework is a major issue. Meticulous strategies to reduce rework are then analysed and discussed. The paper ultimately recommends the best strategy, software configuration management, to reduce the rework problem in software development.
SOFTWARE COST ESTIMATION USING FUZZY NUMBER AND PARTICLE SWARM OPTIMIZATION
International Journal on Cybernetics & Informatics (IJCI), Vol. 5, No. 1, February 2016
DOI: 10.5121/ijci.2016.5112
SOFTWARE COST ESTIMATION USING FUZZY
NUMBER AND PARTICLE SWARM OPTIMIZATION
Divya Kashyap¹ and D. L. Gupta²
¹Computer Science and Engineering Department, IADC, Bangalore
²Computer Science and Engineering Department, KNIT, Sultanpur
ABSTRACT
Software cost estimation is a process to calculate the effort, time and cost of a project, and it assists in better decision making about the feasibility or viability of the project. Accurate cost prediction is required to effectively organize the project development tasks and to support economical and strategic planning and project management. Several known and unknown factors affect this process, so cost estimation is a very difficult process. Software size is a very important factor that impacts the process of cost estimation. The accuracy of cost estimation is directly proportional to the accuracy of the size estimation.
Failure of software projects has always been an important area of focus for the software industry. Implementation is not the only phase in which software projects fail; rather, the planning and estimation steps are the most crucial ones that lead to their failure. More than 50% of projects fail because they exceed the estimated time and cost. The Standish Group's CHAOS report cites a failure rate of 70% for software projects. This paper presents the existing algorithms for software estimation and the relevant concepts of Fuzzy Theory and PSO. It also explains the proposed algorithm with experimental results.
KEYWORDS
Software Cost Estimation, Particle Swarm Optimization, Fuzzy Logic
1. INTRODUCTION
Software cost prediction is a crucial and important issue for the software engineering research community. As software size varies from small to medium or large, the need for accuracy and correctness in software cost estimation has also grown. Software cost estimation is a method to determine the effort, time and cost of a project, which in turn helps in better decision making about the feasibility and/or viability of the project. To effectively organize the project development tasks and make sound economical and strategic plans, project management requires accurate software cost estimation. Cost estimation is a very difficult process because several known and unknown factors affect the estimation process. Software size is a very important factor that impacts the process of cost estimation. The accuracy of cost estimation is directly proportional to the accuracy of the size estimation.
Software estimators sometimes confuse size and effort. Size, in a software development context, is the complete set of business functionalities that the end user gets when the product is deployed and in use, whereas effort is the person-months required to produce a software application of a given size.
Failure of software projects has always been an important area of concern for the software industry. Implementation is not the only phase in which software projects fail; rather, the planning and estimation steps are the most crucial ones that lead to their failure. More than 50% of projects fail because they go beyond the estimated time and cost.
2. MOTIVATION
Based on literature review and current trends in the software industry, some of the important
motivators for this research work are discussed below:
1.) Insignificant Contribution by the Various Algorithmic Techniques
Multiple studies report the incorrect and inaccurate estimates produced by the various algorithmic techniques for software cost estimation across different projects. The errors are far beyond acceptable limits and hence impact the overall project. Since these traditional techniques and modelling equations based on the algorithmic approach have failed, it has become imperative to look for better options, either in terms of new methods or optimization of the existing ones. This is quite motivating for a researcher exploring new options.
2.) Emergence of Soft Computing & Meta-heuristic Techniques
Soft computing techniques such as fuzzy systems, and meta-heuristic algorithms such as particle swarm optimization, the firefly algorithm, harmony search, and the like, have increased their presence and utility in the software industry. With so many applications existing for such techniques, software cost estimation is an area which can be explored through their application. Their advantageous characteristics can be imparted to estimation techniques, which can thereby become more efficient. So it is also an important motivation to study the application of such techniques in the software cost estimation scenario.
3.) Increasing Criticality of Software Cost Estimation Techniques & Rising Impact on Business Activities
The importance of software cost estimation has grown enormously. The criticality of software is increasing exponentially, and thus cost is becoming a very important parameter. A small error in the process of cost estimation can damage an organization's reputation and recognition in the market. With the digital world becoming so powerful, this negative publicity can spread virally. Cut-throat competition is making it very difficult for everyone to survive, so better techniques for cost estimation must be researched. Once the estimation process is completed successfully, one can execute the project without much pressure from the costing point of view.
3. INTRODUCTION TO SOFTWARE EFFORT ESTIMATION
Estimation of the effort and time involved in the development of the software product under consideration is among the most critical activities of the software life cycle. This process may be performed at any stage during software development and is termed software cost estimation. It is necessary to perform the estimate during the early phases of the Software Development Life Cycle (SDLC), which helps in taking an early decision regarding the feasibility of the software development project. The results of estimation are also used for early analysis of the project effort [1].
The primary components [2] of project development costs are: hardware costs, operational costs and effort costs (in terms of man-hours deployed on a particular project, with salaries charged according to the time spent on it). The most important (dominant) cost out of the three is the effort cost. It has been found to be very difficult to estimate and control the effort cost, which can have a significant effect on overall costs. Software cost estimation is an ongoing process which starts at
the beginning of the SDLC, at the requirement gathering and proposal stage, and continues throughout the conduct of a project. Projects are bound to a specific monetary plan in terms of revenue and expenditure, and a consistent cost estimation technique is necessary to ensure that spending is in line with the budget plan. Man-hours or man-months are the units used to measure effort. An ill-estimated project, or a poor feasibility study for a project, hampers the image of the software development organization. If a project cannot be completed within the estimated time or effort, the budget may get disturbed. This in turn can bring losses for both the client and the organization. Thus we need a strong estimation model that can facilitate preparation of a correct and feasible plan.
Though various models have been proposed in the past [1][6], the accuracy levels are still not satisfactory, leaving scope for improvement over previous work. In this paper, a new algorithm, which is a combination of particle swarm optimization and fuzzy theory, has been proposed and discussed for software cost, effort and time estimation.
4. BACKGROUND
Software cost estimation methods can be broadly categorized into three types: algorithmic methods, expert judgment and analogy-based methods [7]. Machine learning is also an estimation method, but in regular practice it is not a prominently different one and is mostly categorized under the analogy-based methodology. Each technique has its own advantages, disadvantages and limitations. For projects of large cost magnitude, several cost estimation methods should be used in parallel. This way the results produced by these methods can be compared, reliance on one particular methodology is reduced, and hence risk is also minimized [8]. But for smaller or medium-sized projects, applying all the cost models might become a costly affair, making it an expensive start to the project [8].
5. FUZZY LOGIC
Fuzzy logic is a kind of multi-valued logic which deals with approximate reasoning rather than exact and absolute reasoning. It is an approach to computing based on degrees of truth rather than just true or false (1 or 0). A few terms associated with fuzzy logic are mentioned below:
5.1 Fuzzy Number
A fuzzy number is a quantity whose value is uncertain, rather than exact as in the case of single-valued numbers. A fuzzy number refers to a connected set of possible values, with each value having its own weight in the range 0 to 1. The function assigning a weight to each value is called the membership function.
5.2. Membership Function
By definition, for a fuzzy set A on the universe of discourse X, the membership function is defined as
µA : X → [0, 1] (1)
where each element of X is mapped to a value between 0 and 1 [3]. It represents the degree of truth as an extension of valuation. Membership functions come in different shapes [9], such as triangular, trapezoidal, piecewise linear, Gaussian, bell-shaped, etc.
Trapezoidal Function: It is defined by a lower limit a, an upper limit d, a lower support limit b, and an upper support limit c, where a < b < c < d [3]. The membership function of the trapezoidal fuzzy number with height w is shown below in equation 2.
µA(x) = 0 for x ≤ a; µA(x) = w*(x − a)/(b − a) for a ≤ x ≤ b; µA(x) = w for b ≤ x ≤ c; µA(x) = w*(d − x)/(d − c) for c ≤ x ≤ d; µA(x) = 0 for x ≥ d (2)
Let A be a fuzzy number, A = [a, b, c, d; w], where w is the weight and 0 < w ≤ 1. The trapezoidal membership of this fuzzy number A should satisfy the following conditions:
(a) A is a continuous mapping from R to the closed interval [0, 1].
(b) A(x) = 0 for −∞ < x ≤ a and d ≤ x < +∞.
(c) A(x) is monotonically increasing in [a, b].
(d) A(x) = w where b ≤ x ≤ c.
(e) A(x) is monotonically decreasing in [c, d].
Figure 1 Trapezoidal membership function within the range of 0 and 1 [4]
In Figure 1, the x-axis represents the universe of discourse, whereas the degrees of membership in the [0, 1] interval are represented by the y-axis [3].
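To make equation 2 concrete, the following short Python sketch (added here for illustration; the attribute values in the example are invented, not taken from the paper) evaluates the membership degree of a crisp value against a trapezoidal fuzzy number:

```python
def trapezoidal_membership(x, a, b, c, d, w=1.0):
    """Membership degree of x for the trapezoidal fuzzy number [a, b, c, d; w]."""
    if x <= a or x >= d:
        return 0.0
    if a < x < b:
        return w * (x - a) / (b - a)   # monotonically increasing part
    if b <= x <= c:
        return w                       # plateau at the weight w
    return w * (d - x) / (d - c)       # monotonically decreasing part

# Example: an invented normalized rating of 0.45 against the fuzzy number [0.2, 0.4, 0.6, 0.8; 1.0]
print(trapezoidal_membership(0.45, 0.2, 0.4, 0.6, 0.8))  # -> 1.0 (inside the plateau)
```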
6. PARTICLE SWARM OPTIMIZATION (PSO)
Particle swarm optimization (PSO) is a stochastic optimization technique invented by Dr. Eberhart and Dr. Kennedy in 1995 [5], which uses the concept of a population and the social behaviour of bird flocking or fish schooling. PSO is an evolutionary computation technique, like the genetic algorithm. PSO optimizes a problem by iteratively trying to improve a candidate (initial) solution with regard to a given measure of quality or fitness function. PSO starts with candidate solutions, known as particles, and moves these particles around in the search space according to mathematical formulae describing each particle's position and velocity. Each particle's movement depends on its local best known position but is also drawn toward the best known positions in the search space [10], [11]. These best positions are updated as better positions are found by other particles.
6.1. PSO Algorithm
The steps of PSO Algorithm [5] are summarized as follows
1. For each particle i = 1, ..., S do:
1.1 Initialize the particle's position with a uniformly distributed random
vector: xi ~ U(blo, bup), where blo and bup are the lower and upper boundaries of the
search-space.
1.2 Initialize the particle's best known position to its initial position: pi ← xi
1.3 If (f(pi) < f(g)) update the swarm's best known position: g ← pi
1.4 Initialize the particle's velocity: vi ~ U(-|bup-blo|, |bup-blo|)
2. Until a termination criterion is met (e.g. number of iterations performed, or a solution
with adequate objective function value is found), repeat:
2.1 For each particle i = 1, ..., S do:
2.1.1 Pick random numbers: rp, rg ~ U(0,1)
2.1.2 For each dimension d = 1, ..., n do:
2.1.2.1 Update the particle's velocity: vi,d ← ω vi,d + φp rp (pi,d-xi,d) +
φg rg (gd-xi,d)
2.1.3 Update the particle's position: xi ← xi + vi
2.1.4 If (f(xi) < f(pi)) do:
2.1.4.1 Update the particle's best known position: pi ← xi
2.1.4.2 If (f(pi) < f(g)) update the swarm's best known position: g ← pi
3. Now g holds the best found solution.
The parameters ω, φp, and φg are selected by the practitioner and control the behaviour
and efficacy of the PSO method.
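For illustration, the steps above can be written as the following compact Python sketch (a generic PSO implementation under the listed update rules; the sphere objective and all parameter values are assumptions, not taken from the paper):

```python
import random

def pso(f, dim, n_particles=30, iters=100, blo=-5.0, bup=5.0, w=0.7, phi_p=1.5, phi_g=1.5):
    """Minimize f over [blo, bup]^dim following the canonical PSO steps listed above."""
    x = [[random.uniform(blo, bup) for _ in range(dim)] for _ in range(n_particles)]
    v = [[random.uniform(-(bup - blo), bup - blo) for _ in range(dim)] for _ in range(n_particles)]
    p = [xi[:] for xi in x]                       # each particle's best known position
    g = min(p, key=f)[:]                          # swarm's best known position
    for _ in range(iters):
        for i in range(n_particles):
            rp, rg = random.random(), random.random()
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + phi_p * rp * (p[i][d] - x[i][d])
                           + phi_g * rg * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            if f(x[i]) < f(p[i]):
                p[i] = x[i][:]
                if f(p[i]) < f(g):
                    g = p[i][:]
    return g

# Usage: minimize a simple sphere function in 3 dimensions
best = pso(lambda z: sum(c * c for c in z), dim=3)
print(best)
```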
7. COCOMO MODEL
Developed by Barry Boehm in 1981 [12], the Constructive Cost Model (COCOMO) is an algorithmic cost estimation model. More than 60 projects were tested and analyzed under this model before it was formally announced. From the beginning, Boehm defined three levels of the model: Basic, Intermediate and Detailed. For the purpose of this research, the Intermediate COCOMO model has been used.
7.1. Intermediate COCOMO
The basic COCOMO model is based on the relationship:
DE = a * (SIZE)^b (3)
where SIZE is measured in thousands of delivered source instructions, DE is the development effort measured in man-months, and the constants a and b depend upon the 'mode' of development of the project.
Boehm proposed three modes: Organic, Semi-detached and Embedded. The Organic mode is associated with small teams working in a known development environment; the Semi-detached mode involves a mixture of experienced and less experienced staff and lies between the organic and embedded modes; and the Embedded mode is used for real-time projects with strict guidelines and a very tight schedule. The development efforts for the various modes of software in Intermediate COCOMO are
calculated by the equations given in Table 1. Since the basic COCOMO was not sufficiently accurate, Intermediate COCOMO was developed with the introduction of cost drivers.
The EAF term is the product of 15 cost drivers [12] that are presented below. The multipliers of the cost drivers range from Very Low to Extra High. The 15 cost drivers are broadly classified into 4 categories: product, platform, personnel and project; their meaning and classification are listed after Table 1.
Table 1 Development effort equations for the various modes in Intermediate COCOMO
Development Mode | Intermediate Effort Equation
Organic | DE = EAF * 3.2 * (SIZE)^1.05
Semi-detached | DE = EAF * 3.0 * (SIZE)^1.12
Embedded | DE = EAF * 2.8 * (SIZE)^1.2
a) Product: RELY - Required software reliability
DATA - Data base size
CPLX - Product complexity
b) Platform: TIME - Execution time
STOR- Main storage constraint
VIRT - Virtual machine volatility
TURN - Computer turnaround time
c) Personnel: ACAP - Analyst capability
AEXP - Applications experience
PCAP - Programmer capability
VEXP - Virtual machine experience
LEXP - Language experience
d) Project: MODP - Modern programming
TOOL - Use of software tools
SCED - Required development schedule
Depending on the project, the multipliers of the cost drivers will differ, and thereby the EAF may be greater than or less than 1, thus affecting the effort.
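To make the relationship concrete, here is a minimal Python sketch of the Intermediate COCOMO effort computation; the example figures (a 10 KDSI organic-mode project with EAF = 1.15) are illustrative assumptions, not values from the paper:

```python
# Mode-specific constants (a, b) for Intermediate COCOMO, taken from Table 1
MODES = {
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def intermediate_cocomo_effort(size_kdsi, mode, eaf):
    """Development effort DE = EAF * a * SIZE^b, in man-months."""
    a, b = MODES[mode]
    return eaf * a * size_kdsi ** b

# Example: 10 KDSI organic-mode project whose cost-driver multipliers give EAF = 1.15
print(round(intermediate_cocomo_effort(10, "organic", 1.15), 1))  # roughly 41.3 man-months
```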
8. PROPOSED EFFORT ESTIMATION APPROACH
The analogy-based approach [13] operates with one or two past projects selected on the basis of their similarity to the target project. Based on this approach, the following algorithm has been developed to estimate the effort.
ALGORITHM 1:
1. Establish the attributes of the planned project, i.e. select all the attributes that are required for the estimation.
2. Estimate the values of the attributes established in step 1. Since estimates are uncertain, fuzzy numbers are used to depict the generated values of the attributes.
3. From the repository of historical projects, find the project that most closely matches the attributes of the planned project.
4. The similarity distance is calculated by comparing attribute values of the planned project with those of a historical project. The most similar project, though closest, is still not 'identical' to the considered project. Hence, its 'weight' (as described in step 6) will be 1.
5. At this step, the values of the linear coefficients may be computed by feeding the 'similarity distances' from the existing projects into the PSO algorithm.
6. The resulting equation is similar to a linear equation, except that the weights are adjusted in powers of distance ratios (raised to s/si, where s is the minimum distance for the best project and si is that for the target project).
Figure 2 Generalized framework for estimation using fuzzy numbers and PSO
The proposed model, shown in Figure 2, is considered a form of EA (Estimation by Analogy) and comprises the following main stages:
i. Construction of fuzzy numbers for the attributes.
ii. Finding the similarity distance between the planned project and each historical project.
iii. Deriving project weights according to the distance.
iv. Applying the PSO algorithm to find B, as per the equation Y = B*X, where B is the coefficient vector, Y is the matrix for the current project and X is the matrix for historical projects.
Construction of fuzzy numbers for the attributes
The proposed method uses the COCOMO81 dataset, which describes each project using 15 attributes. First, the value of each attribute is replaced by its corresponding fuzzy number in order to capture uncertainty. Fuzzy numbers can be constructed by either of the following two methods [15]:
a. Expert opinion
b. From data
Expert opinion is a wholly subjective technique; it depends on identifying pessimistic, optimistic and most likely values for each fuzzy number [16], whereas construction from data is based on the structure of the data only. Therefore, based on the data and using fuzzy modelling [14], the approach develops the membership functions for each attribute. Let the numeric value of an attribute be represented as a fuzzy number in the following form, where the trapezoidal membership defined in section 5 is used.
Planned Project (P): [a1, a2, a3, a4; wa]
where [a1, a2, a3, a4; wa] represents a fuzzy number for one attribute, 0 ≤ a1 ≤ a2 ≤ a3 ≤ a4 ≤ 1 and all are real numbers. Therefore, the planned project will have 15 sets of this type representing its 15 attributes.
Historical Project (H): [b1, b2, b3, b4; wb]
where [b1, b2, b3, b4; wb] represents a fuzzy number for one attribute, 0 ≤ b1 ≤ b2 ≤ b3 ≤ b4 ≤ 1 and all are real numbers. Therefore, each historical project will also have 15 sets of this type representing its 15 attributes.
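For instance, a planned project could then be held as a list of 15 such fuzzy numbers, one per COCOMO81 cost driver; the values below are invented for illustration only:

```python
# One trapezoidal fuzzy number [a1, a2, a3, a4; w] per cost driver (15 in total for COCOMO81).
# The numbers are invented normalized ratings, not values from the paper.
planned_project = [
    (0.20, 0.35, 0.45, 0.60, 1.0),   # RELY
    (0.10, 0.25, 0.35, 0.50, 1.0),   # DATA
    (0.40, 0.55, 0.65, 0.80, 1.0),   # CPLX
    # ... 12 more tuples, one for each remaining cost driver
]
```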
Finding the similarity between the planned project and a historical project
To calculate the similarity distance between two fuzzy numbers, the method proposed by Azzeh et al. [14] has been used, which combines the concepts of geometric distance, Centre of Gravity (COG) and the height of generalized fuzzy numbers. The degree of similarity S(P,H) between two generalized fuzzy numbers comprises three elements:
a) Height of Adjustment Ratio (HAR)
b) Geometric Distance (GD)
c) Shape Adjustment Factor (SAF)
These three elements are combined as given in equation 7 below [14].
HAR is used to assess the degree of difference in height between two generalized fuzzy numbers. The equation for HAR is given by [4]:
HAR = √(min(wa/wb, wb/wa)) (4)
where wa is the weight for the attribute of P and wb is the weight for the attribute of H.
GD is used to measure the geometric distance between two generalized fuzzy numbers, including the distance between their x-axis centroids. The equation for GD is given by:
GD = (|a1 − b1| + |a2 − b2| + |a3 − b3| + |a4 − b4| + |xa − xb|) / 5 (5)
SAF is used to adjust the geometric distance, for example when the two fuzzy numbers have different shapes. The equation for SAF is given by:
SAF = 1 + abs(ya − yb) (6)
The degree of similarity S(P,H) is then given by:
S(P,H) = HAR * (1 − GD) / SAF (7)
where (xa, ya) and (xb, yb) are the centres of gravity of the generalized fuzzy numbers P and H respectively, calculated using equations 8 and 9:
ya = wa * ((a3 − a2)/(a4 − a1) + 2) / 6 if a1 ≠ a4, otherwise ya = wa / 2 (8)
xa = (ya * (a3 + a2) + (a4 + a1) * (wa − ya)) / (2 * wa) (9)
(xb and yb are obtained in the same way from b1, b2, b3, b4 and wb.)
Substituting the values of HAR, GD and SAF from equations 4, 5 and 6 into equation 7 gives the similarity for one pair of attributes. The similarities for the remaining 14 attributes are calculated in the same way.
Now let Si, where i varies from 1 to 15, denote the similarity between the ith attributes of the two projects being compared; collectively these give the similarity between the two projects, denoted by S and given by equation 10:
(10)
In a similar way, the similarity between the planned project and each of the historical projects is calculated to get the most similar project.
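The per-attribute similarity of equations 4 to 7 can be sketched in Python as follows. This is an illustrative sketch only: the centre-of-gravity helper follows the standard COG formula for generalized trapezoidal fuzzy numbers assumed in equations 8 and 9, and the example fuzzy numbers are invented.

```python
import math

def cog(a1, a2, a3, a4, w):
    """Centre of gravity (x, y) of the generalized trapezoidal fuzzy number [a1, a2, a3, a4; w]."""
    y = w / 2 if a1 == a4 else w * ((a3 - a2) / (a4 - a1) + 2) / 6
    x = (y * (a3 + a2) + (a4 + a1) * (w - y)) / (2 * w)
    return x, y

def attribute_similarity(p, h):
    """S(P, H) = HAR * (1 - GD) / SAF for one pair of fuzzy attributes p and h."""
    a1, a2, a3, a4, wa = p
    b1, b2, b3, b4, wb = h
    xa, ya = cog(a1, a2, a3, a4, wa)
    xb, yb = cog(b1, b2, b3, b4, wb)
    har = math.sqrt(min(wa / wb, wb / wa))                       # equation (4)
    gd = (abs(a1 - b1) + abs(a2 - b2) + abs(a3 - b3)
          + abs(a4 - b4) + abs(xa - xb)) / 5                     # equation (5)
    saf = 1 + abs(ya - yb)                                       # equation (6)
    return har * (1 - gd) / saf                                  # equation (7)

# Example with two invented normalized attribute ratings
print(attribute_similarity((0.2, 0.4, 0.6, 0.8, 1.0), (0.3, 0.5, 0.6, 0.9, 1.0)))
```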
Ranking the Closest Project
After calculating the aggregated similarity measure S between the planned project and each individual analogue project, we sort the similarity results by value. The project with the highest similarity (i.e. the least similarity distance) to the target project is chosen, because it has the highest potential to contribute to the final estimate. We assume that the effort of this closest project is E, which will be used for effort adjustment.
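A minimal sketch of this ranking step is given below. It assumes each historical project is stored with its 15 fuzzy attributes and known effort, reuses a per-attribute similarity function such as attribute_similarity from the previous sketch, and aggregates the 15 per-attribute similarities with a simple mean, which is only one possible choice since equation 10 is not reproduced in this text.

```python
def project_similarity(planned, historical, attr_sim):
    """Aggregate the per-attribute similarities into one score (simple mean aggregation assumed)."""
    sims = [attr_sim(p, h) for p, h in zip(planned, historical)]
    return sum(sims) / len(sims)

def rank_projects(planned, history, attr_sim):
    """Sort historical projects from most to least similar.

    planned : list of 15 fuzzy attributes of the planned project
    history : list of (attributes, effort) pairs for past projects
    attr_sim: per-attribute similarity function, e.g. attribute_similarity from the sketch above
    """
    ranked = sorted(history, key=lambda item: project_similarity(planned, item[0], attr_sim),
                    reverse=True)
    closest_effort = ranked[0][1]   # effort E of the closest project, reused for adjustment
    return ranked, closest_effort
```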
Application of the PSO algorithm for determination of coefficients
In many cases, deriving the new estimate using only the closest project is not sufficient, as it may lead to poor estimation accuracy [6]. Therefore the particle swarm algorithm is used to ensure that all projects are considered according to the extent to which they are similar to the test project. The framework for the implementation of the PSO model is shown in Figure 3. Here, the potential benefits of applying PSO to analogy-based software effort estimation models based on fuzzy numbers are evaluated. A suitable linear model is derived from the similarity distances between pairs of projects to obtain the weights used by PSO. PSO may be considered a process of randomly allocating coefficient values and then randomly changing them until they best link the projects (minimum error).
Figure 3 presents the iterative process that optimizes the coefficients of the linear equation, and the software cost estimation process that combines the best linear equation with estimation by analogy using fuzzy numbers [15].
Figure 3 Framework of adjusted fuzzy analogy based estimation
The algorithm for the PSO implementation goes as follows:
1. Pick 100 particles corresponding to the coordinates of the 15 attributes for the equation e = Sum(abs((A(i,j)*B(j)) − E(i))^(s/si)) / (no. of past projects), where A represents the values corresponding to the original scaled ratings (1-6), B is the vector of coefficients to be determined, and s is the similarity rating. Thus, dissimilar projects get a lower share of contribution.
2. Pick random positions and velocities for the 15 attributes of each particle. The random velocity is taken in the range (−x, +x), where x could take a value like 1, and the position is taken in the range (0, y). Later, when the equation was modified to e = Sum(abs((B(j)/A(i,j)) − E(i))^(s/si)) / (no. of past projects), most of the coefficients were found to lie in the range (0, 10). The first attempt therefore takes y = 10.
3. Compute the total Error = Sum(|Estimated − Actual effort| / (Actual effort + Estimated effort)) over all projects.
4. New velocity = unit random velocity * error for the particle.
5. The number of iterations may be varied as per the accuracy requirements.
6. The 'particle best' and 'global best' coordinates are computed as per the standard PSO method.
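A sketch of the error (fitness) function that such a PSO run would minimize, following the modified equation in step 2, is shown below; this is one possible reading of the text, and the variable names and toy data are assumptions:

```python
def fitness(B, A, E, s_ratios):
    """Error e = sum over past projects of abs(estimate(i) - E(i)) ** (s/si), divided by n.

    B        : candidate coefficients for the 15 cost drivers (one PSO particle)
    A        : per-project scaled ratings, A[i][j] is driver j of past project i
    E        : actual efforts of the past projects
    s_ratios : s/si for each past project (closest-project distance over that project's distance)
    """
    n = len(A)
    total = 0.0
    for i in range(n):
        estimate = sum(B[j] / A[i][j] for j in range(len(B)))   # B(j)/A(i,j) summed over drivers
        total += abs(estimate - E[i]) ** s_ratios[i]            # error term scaled by the exponent s/si
    return total / n

# Toy usage with 2 past projects and 3 drivers (purely illustrative numbers)
print(fitness(B=[1.0, 2.0, 0.5], A=[[1, 2, 3], [2, 2, 2]], E=[2.5, 1.8], s_ratios=[1.0, 0.8]))
```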
A representation of the PSO algorithm for the project is shown in Fig. 4.
Figure 4 Adjusting Reused Effort process diagram
The implementation of an iterative procedure may fail in several ways, such as:
(i) poor/slow convergence, or a slowdown before convergence near the required point,
(ii) divergence,
(iii) undesirable oscillations.
These problems may be prevented if the iteration function is mathematically consistent. This may be ensured as follows:
The swarm movement starts slowing down as the error value starts decreasing, leading to a tendency to converge around the best value as the number of iterations is increased.
To ensure that the 15 coefficients do not grow too large, the normalized data is inverted to form the equation E = Sum(B/S) + e, which implies e = E − Sum(B/S);
e(total) = Sum[abs({E − Sum(B/S)}) ^ (s/si)]
If the absolute value of S increases, the error grows to bring it down again. This keeps the values of the coefficients represented by 'S' in a narrow range.
An important mathematical step in the algorithm is initialization. If the initial coefficients are taken in the range (0, 10), then there is a good chance that the errors will be in the range (0, 1). If velocities are allocated in the (0, 1) range, fast convergence may be expected.
9. EXPERIMENTAL RESULTS
Using the effort of most similar project and calculated similarity distances between each attribute,
the coefficients of all effort drivers in OPTIMTOOL (MATLAB (R2011a)) [104] are found that
results in significantly lesser errors. There is no proof on software cost estimation models to
perform consistently accurate within PRED 25% (number of observation for which MRE is less
than 0.25). Applying PSO algorithm, the value of coefficients of effort drivers (shown in Table 3)
is obtained that aim to minimize the MRE [204] of projects. For some of the effort drivers the
coefficient value is very very less. So we can deduce that these drivers does not play important
role in the adjustment of effort. For the proposed PSO algorithm, Figure 5 shows how algorithm
converges with the number of iterations. Two sets of iterations per simulation set were performed
in order to note the variation of precision as well. It was found that the precision also improves
with the number of iterations, since the absolute difference of error of each pair of simulation
decreases with the number of runs.
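The accuracy measures referred to here and in the conclusion (MRE, MMRE and PRED 25%) can be computed as in the following short Python sketch; the sample efforts are invented for illustration:

```python
def mre(actual, estimated):
    """Magnitude of Relative Error for one project."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error over all projects."""
    errs = [mre(a, e) for a, e in zip(actuals, estimates)]
    return sum(errs) / len(errs)

def pred(actuals, estimates, level=0.25):
    """Fraction of projects whose MRE is below the given level (PRED 25% by default)."""
    errs = [mre(a, e) for a, e in zip(actuals, estimates)]
    return sum(1 for e in errs if e < level) / len(errs)

# Invented actual vs. estimated efforts (man-months) for four projects
actual = [120.0, 36.0, 240.0, 60.0]
estimate = [100.0, 40.0, 250.0, 90.0]
print(mmre(actual, estimate), pred(actual, estimate))
```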
Figure 5 PSO algorithm stabilizes as the number of iterations is increased
Table 2 Data corresponding to Figure 5
Iterations | A | B | Avg
10 | 94.514 | 12.909 | 100.968
20 | 83.157 | 28.896 | 97.605
30 | 73.032 | 5.996 | 76.030
40 | 64.894 | 9.600 | 69.694
50 | 57.730 | 4.567 | 60.014
60 | 50.379 | 4.414 | 52.586
70 | 51.989 | 3.388 | 53.683
80 | 47.565 | 13.645 | 54.387
90 | 48.159 | 2.691 | 49.505
100 | 47.819 | 6.980 | 51.309
Table 3 Best coefficient values obtained through PSO
Cost Driver | Coefficient of driver (Bi)
ACAP | -1.74
AEXP | 4.51
CPLX | 8.84
DATA | 0.39
LEXP | 5.43
MODP | 6.33
PCAP | 3.89
RELY | 9.60
SCED | 1.86
STOR | 0.97
TIME | -2.37
TOOL | -1.78
TURN | 7.75
VEXP | 0.51
VIRT | 5.87
10. CONCLUSION AND FUTURE WORK
To address the problem of uncertain, missing, ambiguous and vague values of project attributes, the concept of fuzzy numbers is used in the proposed approach. The fuzzy model is employed within Estimation by Analogy to reduce uncertainty and to improve the handling of both numerical and categorical data in similarity measurement. Converting each attribute (a real number) into a fuzzy number solves the problem of ambiguous data. Once this is done, calculating the similarity distance of the target project with all historical projects yields the most similar project. But even the most similar project still has some similarity distance to the project being estimated. Hence, all projects were considered with varying weights according to their similarity distance to the proposed project: the lower the distance, the higher the weight.
Combining the concepts of fuzzy logic and the PSO algorithm in a model to estimate software effort improves the accuracy of estimation techniques. The experiments, which applied this model to the NASA63 dataset, establish significant improvement under various accuracy measures. Case Based Reasoning (CBR), traditional COCOMO and analogy-based estimation using the Fuzzy Model (EA) have been used for comparison with the proposed technique. The evaluation parameter used for the comparison is the Mean Magnitude of Relative Error (MMRE).
REFERENCES
[1] Martin Shepperd and Chris Schofield, "Estimating Software Project Effort Using Analogies", IEEE Transactions on Software Engineering, Vol. 23, No. 11, pp. 736-743, 1997.
[2] Practical Software Engineering, accessed from http://ksi.cpsc.ucalgary.ca/courses/451-96/mildred/451/CostEffort.html on September 5, 2013.
[3] Mohammad Azzeh, Daniel Neagu, Peter I. Cowling, "Fuzzy Grey Relational Analysis for Software Effort Estimation", Empirical Software Engineering, Vol. 15, No. 1, pp. 60-90, 2010.
[4] The fuzzy descriptive logic system (fuzzyDL), http://gaia.isti.cnr.it/straccia/software/fuzzyDL/fuzzyDL.html
[5] R. C. Eberhart and Y. Shi, "Particle Swarm Optimization: Developments, Applications and Resources", in Proc. IEEE International Conference on Evolutionary Computation, Vol. 1, pp. 81-86, 2001.
[6] MATDOCS, http://www.mathworks.in/matlabcentral/fileexchange
[7] Fiona Walkerden and Ross Jeffery, "An Empirical Study of Analogy-based Software Effort Estimation", Empirical Software Engineering, Kluwer Academic Publishers, Boston, Vol. 4, pp. 135-158, 1999.
[8] Giancarlo Attolini, "Guide to Practice Management for Small- and Medium-Sized Practices", International Federation of Accountants, Third Edition, 2012.
[9] Fuzzy logic membership function, http://www.bindichen.co.uk/post/AI/fuzzy-inference-membership-function.html
[10] K. Vinay Kumar, V. Ravi and Mahil Carr, "Software Cost Estimation using Soft Computing Approaches", Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques, pp. 499-518, IGI Global, 2010.
[11] Y. Shi and R. C. Eberhart, "A Modified Particle Swarm Optimizer", Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69-73, 1998.
[12] B. Boehm, "Software Engineering Economics", Prentice Hall, 1981.
[13] Nikolaos Mittas, Marinos Athanasiades, Lefteris Angelis, "Improving Analogy-based Software Cost Estimation by a Resampling Method", Information and Software Technology, 2007.
[14] Mohammad Azzeh, Daniel Neagu, Peter I. Cowling, "Analogy-based Software Effort Estimation Using Fuzzy Numbers", The Journal of Systems and Software, Vol. 84, 2010.
[15] M. Jorgensen, "Realism in Assessment of Effort Estimation Uncertainty: It Matters How You Ask", IEEE Transactions on Software Engineering, Vol. 30, 2004.
[16] Barry Boehm, Chris Abts, "Software Development Cost Estimation Approaches - A Survey", IBM Research, San Jose, CA, pp. 1-45, 1998.