A Review of Agile Software Effort Estimation Methods (Editor, IJCATR)
Software cost estimation is an essential aspect of software project management: the success or failure of a software project depends on the accuracy with which effort, time and cost are estimated. It is a scientific activity that requires knowledge of a number of relevant attributes that determine which estimation method to use in a given situation. Over the years, various studies have evaluated software effort estimation methods, but because new software development methods keep being introduced, those reviews have not captured them. Agile software development is one such recent and popular method that was not taken into account in previous cost estimation reviews. The main aim of this paper is to review existing software effort estimation methods exhaustively, exploring estimation methods suitable for new software development methods.
SOFTWARE COST ESTIMATION USING FUZZY NUMBER AND PARTICLE SWARM OPTIMIZATION (IJCI JOURNAL)
Software cost estimation is the process of calculating the effort, time and cost of a project; it supports better decision making about the project's feasibility and viability. Accurate cost prediction is required to organize development tasks effectively and to support economic and strategic planning and project management. Several known and unknown factors affect this process, which makes cost estimation very difficult. Software size is a particularly important factor: the accuracy of cost estimation is directly proportional to the accuracy of size estimation.
Failure of software projects has always been an important area of focus for the software industry. Implementation is not the only phase in which software projects fail; the planning and estimation steps are the most crucial ones leading to failure. More than 50% of projects exceed their estimated time and cost, and the Standish Group's CHAOS reports a failure rate of 70% for software projects. This paper presents existing algorithms for software estimation and the relevant concepts of fuzzy theory and PSO, and then explains the proposed algorithm along with experimental results.
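As a rough illustration of how PSO can tune the parameters of an estimation model, the sketch below fits the two parameters of a hypothetical effort = a * size^b model to a made-up dataset by minimizing MMRE. The dataset, parameter ranges and swarm settings are all invented for illustration and are not taken from the paper.

```python
import random

random.seed(42)

# Hypothetical (size in KLOC, actual effort in person-months) pairs.
projects = [(10, 26), (20, 55), (40, 115), (60, 180)]

def mmre(a, b):
    """Mean Magnitude of Relative Error for effort = a * size**b."""
    errs = [abs(a * s ** b - e) / e for s, e in projects]
    return sum(errs) / len(errs)

# Minimal PSO over the two parameters (a, b).
n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = [[random.uniform(0.5, 5), random.uniform(0.8, 1.3)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [p[:] for p in pos]                      # personal best positions
gbest = min(pbest, key=lambda p: mmre(*p))       # global best so far

for _ in range(iters):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if mmre(*pos[i]) < mmre(*pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest + [gbest], key=lambda p: mmre(*p))

print(gbest, mmre(*gbest))
```

Real proposals of this kind typically optimize richer models (fuzzified COCOMO parameters, more cost drivers); the swarm update itself is unchanged.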
EARLY STAGE SOFTWARE DEVELOPMENT EFFORT ESTIMATIONS – MAMDANI FIS VS NEURAL N... (cscpconf)
Accurately estimating software size, cost, effort and schedule is probably the biggest challenge facing software developers today. It has major implications for the management of software development, because both overestimates and underestimates cause direct damage to software companies. Many models have been proposed over the years by various researchers for carrying out effort estimation, and several studies on early-stage effort estimation underline its importance. New paradigms offer alternatives for estimating software development effort, in particular Computational Intelligence (CI), which exploits mechanisms of human interaction and processes domain knowledge with the intention of building intelligent systems (IS). Among IS techniques, Artificial Neural Networks and Fuzzy Logic are the two most popular soft computing approaches for software development effort estimation. In this paper, neural network models and a Mamdani FIS model are used to predict early-stage effort on a student dataset. The Mamdani FIS was found to predict early-stage effort more accurately than the neural network models.
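For readers unfamiliar with Mamdani inference, here is a minimal self-contained sketch (not the paper's model): two rules map project size to effort using triangular membership functions, min for rule firing, max for aggregation, and centroid defuzzification. All membership ranges are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_effort(size_kloc):
    """Two-rule Mamdani FIS:
       IF size is small THEN effort is low;
       IF size is large THEN effort is high.
       Defuzzified by centroid over a discretised effort axis."""
    mu_small = tri(size_kloc, -10, 0, 50)
    mu_large = tri(size_kloc, 10, 60, 110)
    num = den = 0.0
    for effort in range(0, 201):                       # person-months axis
        mu_low = min(mu_small, tri(effort, -50, 0, 100))   # rule 1, clipped
        mu_high = min(mu_large, tri(effort, 80, 200, 320)) # rule 2, clipped
        mu = max(mu_low, mu_high)                          # aggregate
        num += effort * mu
        den += mu
    return num / den if den else 0.0

print(estimate_effort(15), estimate_effort(55))
```

A real FIS for effort estimation would use several inputs (size, team experience, complexity) and a larger rule base, but the inference machinery is exactly this.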
Estimation of resources, cost, and schedule for a software engineering effort requires experience, access to good historical information, and the courage to commit to quantitative predictions when qualitative information is all that exists. Halstead's measure, the COCOMO model, and the COCOMO II model are estimation techniques used for software development and maintenance.
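The Basic COCOMO equations can be applied in a few lines; the sketch below uses the published organic-mode constants (a=2.4, b=1.05, c=2.5, d=0.38), with the project size chosen purely for illustration.

```python
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO, organic mode: returns effort in person-months,
    schedule in months, and the implied average staffing level."""
    effort = a * kloc ** b
    time = c * effort ** d
    return effort, time, effort / time

effort, time, staff = basic_cocomo(32)
print(f"{effort:.1f} person-months, {time:.1f} months, {staff:.1f} people")
```

Semidetached and embedded modes use the same equations with different constants; COCOMO II additionally multiplies in effort-multiplier cost drivers.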
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The article proposes a new model for optimizing software effort and cost estimation based on code reusability. The model compares new projects to previously completed, similar projects stored in a code repository. By searching for and retrieving reusable code, functions, and methods from old projects, the model aims to reduce effort and cost estimates for new software development. The model is described as being based on the concept of estimation by analogy and using innovative search and retrieval techniques to achieve code reuse and thus decreased cost and effort estimates.
Insights on Research Techniques towards Cost Estimation in Software Design (IJECE, IAES)
This document summarizes research on techniques for cost estimation in software design. It begins by describing common cost estimation techniques such as Constructive Cost Modeling (COCOMO) and Function Point Analysis. It then analyzes research trends in cost estimation, effort estimation, and fault prediction based on literature from 2010 to the present: fewer than 50 papers were found on overall cost estimation, fewer than 25 on effort estimation, and only 9 on fault prediction. The document then reviews existing research addressing general cost estimation, enhancements of Function Point Analysis, statistical modeling approaches, cost estimation for embedded systems, and estimation for fourth-generation languages and NASA projects. Most techniques use COCOMO or extend existing models with methods such as fuzzy logic, neural networks, or statistical modeling.
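Function Point Analysis, one of the techniques surveyed, reduces to weighted counts of five component types plus a value adjustment. A minimal sketch using the standard IFPUG average-complexity weights and the 14-GSC value adjustment factor (the component counts are invented):

```python
# IFPUG average-complexity weights for the five function types:
# external inputs/outputs/inquiries, internal and external logical files.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, gsc_total):
    """Adjusted FP: unadjusted FP from component counts, scaled by the
    Value Adjustment Factor (14 General System Characteristics, each 0-5)."""
    ufp = sum(WEIGHTS[k] * n for k, n in counts.items())
    vaf = 0.65 + 0.01 * gsc_total
    return ufp * vaf

fp = function_points({"EI": 12, "EO": 8, "EQ": 5, "ILF": 4, "EIF": 2},
                     gsc_total=35)
print(fp)
```

With a GSC total of 35 the VAF is exactly 1.0, so the adjusted and unadjusted counts coincide here; in practice each component would also be rated low/average/high before weighting.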
The peer-reviewed International Journal of Engineering Inventions (IJEI) was started with a mission to encourage contributions to research in science and technology, and to encourage and motivate researchers working in challenging areas of the sciences and technology.
This document provides an overview of project evaluation and size and cost estimation in software project management. It discusses conducting an initial high-level project evaluation to assess strategic fit, technical feasibility, and economic viability before more detailed size and cost estimations are made. Size can be estimated using function point analysis or object point analysis, while costs are estimated via techniques like cost-benefit analysis, cash flow forecasting, and net present value/internal rate of return calculations. Accurate estimation is challenging, and positive attitudes and periodic revisions are important.
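The cash-flow techniques mentioned (net present value and internal rate of return) can be sketched in a few lines; the cash flows below are invented for illustration, with year 0 holding the initial investment.

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return by bisection: the rate at which NPV = 0.
    Assumes a single sign change in NPV over (lo, hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-100_000, 30_000, 40_000, 45_000, 35_000]
print(npv(0.10, flows), irr(flows))
```

A project would typically be accepted if NPV at the organisation's discount rate is positive, or equivalently if the IRR exceeds that rate.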
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM... (cscpconf)
Computing software complexity in the requirement analysis phase of the software development life cycle (SDLC) would be of enormous benefit for estimating the development and testing effort of yet-to-be-developed software. The relationship between source code and the difficulty of developing it is also examined, in order to estimate the complexity of the proposed software for cost estimation, manpower build-up, and code and developer evaluation. This paper therefore presents a systematic, integrated approach for estimating software development and testing effort on the basis of the improved requirement based complexity (IRBC) of the proposed software, obtained from its software requirement specification (SRS). The IRBC measure serves as the basis for estimating these development activities, enabling developers and practitioners to predict critical information about the intricacies of software development. For validation, the proposed measures are compared with various established and prevalent practices proposed in the past. The results validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, provide early warning, and compare well with other measures proposed in the past.
Effort estimation for software development (Spyros Ktenas)
Software effort estimation has been an important issue for almost everyone in the software industry at some point. Below I give some basic details on methods, best practices, common mistakes and available tools.
You may also check a tool implementing methods for estimation at http://effort-estimation.gatory.com/
Spyros Ktenas
http://open-works.org/profiles/spyros-ktenas
This document describes a study that examined the differences between top-down and bottom-up expert estimation strategies for software development effort. Seven estimation teams estimated the effort for two software projects, one using a top-down strategy and one using a bottom-up strategy. The study found that bottom-up estimation led to more accurate estimates than top-down estimation. Specifically, bottom-up estimates had a lower average magnitude of relative error than top-down estimates. Additionally, the recall of very similar past projects seemed necessary for accurate top-down estimates, suggesting bottom-up should be used unless experience with very similar projects exists.
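The accuracy comparison in that study rests on the magnitude of relative error. A small sketch of MMRE with invented numbers, showing how a set of summed task-level (bottom-up) estimates can come out more accurate than whole-project (top-down) guesses:

```python
def mmre(estimates, actuals):
    """Mean Magnitude of Relative Error: mean of |actual - estimate| / actual."""
    return sum(abs(a - e) / a for e, a in zip(estimates, actuals)) / len(actuals)

actuals   = [100, 250, 80]     # person-hours (illustrative)
top_down  = [60, 180, 95]      # whole-project expert guesses
bottom_up = [90, 230, 85]      # summed task-level estimates

print(mmre(top_down, actuals), mmre(bottom_up, actuals))
```

MMRE is the standard comparison metric in effort-estimation studies, though it is known to penalise overestimates and underestimates asymmetrically.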
An approach for software effort estimation using fuzzy numbers and genetic al... (csandit)
One of the most critical tasks during the software development life cycle is estimating the effort and time involved in developing the software product. Estimation may be performed in many ways, such as expert judgment, algorithmic effort estimation, machine learning, and analogy-based estimation. Analogy-based software effort estimation is the process of identifying one or more historical projects similar to the project being developed and then reusing their estimates. Here, analogy-based estimation is integrated with fuzzy numbers to improve the performance of software project effort estimation during the early stages of the development life cycle; fuzzy logic is introduced into the proposed model because of the uncertainty associated with attribute measurement and data availability. However, a historical project is hardly ever exactly the same as the project being estimated: in most cases, even the most similar project still has some similarity distance from it, so the reused effort needs to be adjusted. To do so, we build an adjustment mechanism whose algorithm derives the optimal adjustment to the reused effort using a Genetic Algorithm. The proposed model, combining fuzzy logic for early-stage effort estimation with a Genetic Algorithm based adjustment mechanism, can bring the estimate close to the correct effort.
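A stripped-down sketch of the analogy-plus-adjustment idea: retrieve the nearest historical project by similarity distance and adjust its effort. The GA-derived adjustment of the paper is replaced here by a simple proportional rule on the first attribute, and all project data are invented.

```python
def similarity_distance(p, q):
    """Euclidean distance over normalised project attribute vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Historical projects: (attribute vector, actual effort in person-months).
history = [((0.2, 0.5, 0.3), 40),
           ((0.8, 0.6, 0.7), 120),
           ((0.4, 0.4, 0.5), 60)]
new_project = (0.45, 0.45, 0.5)

# Retrieve the closest analogue.
analogue, effort = min(history,
                       key=lambda h: similarity_distance(new_project, h[0]))

# Adjust the reused effort in proportion to a size-like attribute
# (a crude stand-in for the GA-optimised adjustment mechanism).
adjusted = effort * (new_project[0] / analogue[0])
print(analogue, effort, adjusted)
```

Fuzzifying the attributes would replace the crisp vectors with fuzzy numbers and the Euclidean distance with a fuzzy similarity measure; the retrieve-then-adjust structure stays the same.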
Effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input.
Effort estimation is essential for many people and different departments in an organization.
This document discusses software estimation techniques. It defines estimation, targets, and commitments, distinguishing between them. It describes reasons for inaccurate estimates like unstable requirements, omitted activities, and unrealistic optimism. It discusses fundamental estimation techniques like counting and computing, and choosing techniques based on project size, development style, and stage. Throughout, it emphasizes removing uncertainty through techniques like requirements stabilization to improve estimates.
The document provides an overview of software project estimation techniques. It discusses that estimation involves determining the money, effort, resources and time required to build a software system. The key steps are: describing product scope, decomposing problems, estimating sub-problems using historical data and experience, and considering complexity and risks. It also covers decomposition techniques, empirical estimation models like COCOMO II, and factors considered in estimation like resources, feasibility and risks.
In software measurement, assessing the validity of software metrics is a very difficult task due to the lack of both theoretical and empirical methodology [41, 44, 45]. In recent years, a number of researchers have addressed the issue of validating software metrics; at present, software metrics are validated theoretically using properties of measures. Software measurement plays an important role in understanding and controlling software development practices and products. The major requirement is that measures accurately represent the attributes they purport to quantify, and validation is critical to the success of software measurement. Validation is a collection of analysis and testing activities across the full life cycle that complements the efforts of other quality engineering functions; it is a critical task in any engineering project. Its objective is to discover defects in a system and to assess whether or not the system is useful and usable in an operational situation. In software engineering, validation is one of the disciplines that helps build quality into software: the major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper discusses the validation methodology, techniques, and the different properties of measures used for software metrics validation. In most cases, both theoretical and empirical validations are conducted for software metrics in software engineering [1-50].
Testability measurement model for object oriented design (TMMOOD) (ijcsit)
Measuring testability early in the development life cycle, especially at the design phase, is of crucial importance to software designers, developers, quality controllers and practitioners. However, most of the mechanisms available for testability measurement can only be used in the later phases of the development life cycle. Early estimation of testability, specifically at the design phase, helps designers improve their designs before coding starts, and practitioners regularly advocate that testability should be planned for early in design. This study therefore emphasizes testability measurement early in the design phase, considering it significant for the delivery of quality software: it extensively reduces rework during and after implementation and facilitates effective test plans and better project and resource planning. This paper puts forth an effort to identify the key factors contributing to testability measurement at the design phase. Additionally, a testability measurement model is developed to quantify software testability at the design phase, and the relationship of testability with these factors is tested and justified with the help of statistical measures. The developed model has been validated through an experimental tryout; its empirical validation is the authors' most important contribution.
Estimation determines the resources needed to build a system and involves estimating the software size, effort, time, and cost. It is based on past data, documents, assumptions, and risks. The main steps are estimating the software size, effort, time, and cost. Software size can be estimated in lines of code or function points. Effort estimation calculates person-hours or months based on software size using formulas like COCOMO-II. Cost estimation considers additional factors like hardware, tools, personnel skills, and travel. Techniques for estimation include decomposition and empirical models like Putnam and COCOMO, which relate size to time and effort.
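The size-to-effort-to-schedule chain described above can be sketched end to end. The LOC-per-FP gearing factor below is a rough published figure for Java-like languages, and the constants are simplified nominal COCOMO II defaults (all cost drivers and scale factors at nominal), so treat every number as illustrative.

```python
LOC_PER_FP = 53   # rough gearing factor for Java-like languages (assumption)

def estimate(fp, a=2.94, b=1.10, c=3.67, d=0.28):
    """Function points -> KLOC -> effort (person-months) -> schedule (months),
    using simplified COCOMO-II-style nominal equations."""
    kloc = fp * LOC_PER_FP / 1000
    effort = a * kloc ** b
    schedule = c * effort ** d
    return kloc, effort, schedule

kloc, effort, schedule = estimate(200)
print(f"{kloc:.1f} KLOC -> {effort:.1f} PM over {schedule:.1f} months")
```

The full COCOMO II model makes both exponents depend on five scale factors and multiplies the effort by seventeen cost drivers; the chain of conversions is the part this sketch illustrates.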
The document discusses project management risk sensitivity analysis. It begins by introducing the importance of fulfilling project outcomes and how a single change in risk sensitivity analysis measures can impact overall project outcomes. It then defines risk sensitivity analysis as a useful analytical procedure for project evaluation and assessment that can help identify major risk variables and guide risk control, though it does not fully evaluate or eliminate all project risks. The objective of the research survey discussed is to evaluate risk sensitivity analysis techniques and models under real-world scenarios to demonstrate their usefulness and inform future risk reduction and management strategies.
Function Point Software Cost Estimates using Neuro-Fuzzy technique (ijceronline)
Software estimation accuracy is among the greatest challenges for software developers. Because a neuro-fuzzy system can approximate non-linear functions with greater precision, it is used here as a soft computing approach to generate a model, formulating the relationship based on its training. The approach presented in this paper is independent of the nature and type of estimation. In this paper, function points are used as the algorithmic model, and an attempt is made to validate the soundness of the neuro-fuzzy technique using ISBSG and NASA project data.
This document proposes a new approach for software project estimation that combines existing estimation techniques. It involves using case-based reasoning to retrieve similar past projects, reusing their estimates, and revising the estimates based on new parameters and delay-causing incidents. The approach allows parameters to be added dynamically during project execution to make estimates more context-sensitive and help converge to actual values. A prototype tool has been implemented to demonstrate calculating estimates by dynamically selecting parameters and computing similarity indexes between current and past projects.
New approach for solving software project scheduling problem using differenti... (ijfcstjournal)
The Software Project Scheduling Problem (SPSP) is one of the most critical issues in developing software. The major factor in completing a software project within its planned cost and schedule is accurate and realistic scheduling, so SPSP should be considered before other topics in software project development and management. SPSP usually includes resource planning, cost estimation, staffing and cost control; it therefore needs an algorithm that, taking costs and limited resources into account, can estimate the optimal time for completing the project, since simultaneously reducing time and cost in software project development is vital. Meta-heuristic algorithms have shown good performance on SPSP in recent years: when project factors are vague and incomplete, cost-based scheduling models built on meta-heuristics can schedule more effectively. Such an algorithm works on the basis of collective intelligence and, using a fitness function, offers greater accuracy for SPSP. In this paper, the Differential Evolution (DE) algorithm is used to optimize SPSP. Experimental results show that the proposed model performs better than a Genetic Algorithm (GA).
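Differential Evolution itself is compact. The sketch below is a minimal DE/rand/1/bin loop minimizing a toy two-variable cost function (an invented stand-in, not the paper's SPSP formulation, which would encode task-to-employee assignments):

```python
import random

random.seed(1)

def cost(x):
    """Toy project cost: quadratic penalty around an ideal
    staffing/duration mix at (3, 7)."""
    return (x[0] - 3) ** 2 + (x[1] - 7) ** 2

# Minimal DE/rand/1/bin: NP candidates, mutation factor F, crossover rate CR.
NP, F, CR, dims, gens = 15, 0.8, 0.9, 2, 60
pop = [[random.uniform(0, 10) for _ in range(dims)] for _ in range(NP)]

for _ in range(gens):
    for i in range(NP):
        # Pick three distinct candidates other than the target.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        # Mutate (a + F*(b - c)) and crossover with the target, per dimension.
        trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                 for d in range(dims)]
        # Greedy selection: keep whichever is cheaper.
        if cost(trial) < cost(pop[i]):
            pop[i] = trial

best = min(pop, key=cost)
print(best, cost(best))
```

On a real SPSP the cost function would combine project duration and salary cost with penalties for constraint violations, but the mutate-crossover-select loop is unchanged.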
This document discusses using statistical techniques to improve predictability in project performance. It provides three scenarios as examples: 1) Predicting critical activities on a schedule using a criticality index, 2) Estimating a project schedule and cost using Monte Carlo simulation, and 3) Building an early warning system for project monitoring and control. The document emphasizes that statistical methods can help project managers develop more accurate estimates and better manage project risks and performance.
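The Monte Carlo scenario can be sketched with the standard library alone: sample each task duration from a triangular distribution, sum, and read off percentiles. The three tasks and their three-point durations are invented.

```python
import random

random.seed(0)

# Three sequential tasks: (optimistic, most likely, pessimistic) days each.
tasks = [(3, 5, 10), (8, 12, 20), (4, 6, 9)]

def simulate(n=10_000):
    """Monte Carlo schedule simulation: sample every task from a triangular
    distribution, sum the path, and return sorted totals for percentiles."""
    return sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(n)
    )

totals = simulate()
p50 = totals[len(totals) // 2]
p90 = totals[int(len(totals) * 0.9)]
print(f"P50 = {p50:.1f} days, P90 = {p90:.1f} days")
```

The gap between P50 and P90 is exactly the kind of quantified risk margin a deterministic single-point schedule hides.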
Estimating involves forecasting the time and cost to complete project deliverables. There are two main types of estimates: bottom-up estimates require more effort but rely on those familiar with the work, while top-down estimates can be made by managers without direct experience. Software cost and effort estimation is not an exact science due to many variable factors. Key parameters that affect estimates include resources, time, human skills, and cost. Common software estimation techniques include top-down and bottom-up methods such as the three-point estimation technique.
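The three-point technique mentioned above combines optimistic, most-likely and pessimistic values with the usual PERT weights; the work-package numbers here are invented.

```python
def pert(optimistic, most_likely, pessimistic):
    """Three-point (PERT) estimate: beta-weighted mean
    E = (O + 4M + P) / 6 and standard deviation (P - O) / 6."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

e, sd = pert(20, 30, 60)   # person-days for one work package
print(e, sd)
```

Quoting the estimate as, say, E plus or minus two standard deviations makes the uncertainty explicit instead of burying it in a single number.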
Optimizing Effort Parameter of COCOMO II Using Particle Swarm Optimization Me... (TELKOMNIKA JOURNAL)
Estimating the effort and cost of software is an important activity for software project managers; a poor estimate (overestimate or underestimate) results in poor software project management. To handle this problem, many researchers have proposed models for estimating software cost. Constructive Cost Model II (COCOMO II) is one of the best known and most widely used; it takes software size, cost drivers and scale factors as inputs, but is still lacking in accuracy. To improve its accuracy, this study examines the effect of the cost factors and scale factors on effort estimation, using Particle Swarm Optimization (PSO) to optimize the parameters of the COCOMO II model. The proposed method is implemented on the Turkish Software Industry dataset, which has 12 data items; it handles improper and uncertain inputs efficiently and improves the reliability of software effort estimates. The experiments yielded an MMRE of 34.1939%, indicating high accuracy, with error reductions of 698.9461% and 104.876% reported against the baselines.
A DECISION SUPPORT SYSTEM FOR ESTIMATING COST OF SOFTWARE PROJECTS USING A HY... (ijfcstjournal)
One of the major challenges in software today is cost estimation: estimating the cost of all activities, including software development, design, supervision, maintenance and so on. Accurate cost estimation of software projects allows the internal and external processes, staff work, effort and overheads to be coordinated with one another, and in managing software projects estimation must be taken into account to reduce costs, timing and possible risks and so avoid project failure. In this paper, a decision support system combining a multi-layer artificial neural network with a decision tree is proposed to estimate the cost of software projects. In the model included in the proposed system, the normalizing factors, which are vital in evaluating effort and cost estimates, are derived using a C4.5 decision tree; the factors are then trained and tested by the multi-layer artificial neural network, which allocates the most optimal values to them. Experimental results and evaluations on the NASA60 dataset show that the proposed system has a lower total mean relative error than the COCOMO model.
This document summarizes a study that used finite element analysis to investigate the effectiveness of a proposed seismic isolation platform called Seer-iPlat. The study modeled a 4-story steel frame building with and without Seer-iPlat, which isolates each floor by installing elastomeric rubber bearings between the floor and columns. Analysis of the building subjected to ground motions from the Pomona earthquake found that installing Seer-iPlat at each floor significantly reduced column axial forces, shear, and moments throughout the building by 63-71% compared to the building without isolation. Therefore, the study demonstrated the potential of Seer-iPlat as an effective rehabilitation approach to mitigate seismic impacts on existing buildings.
This document summarizes a research paper that proposes using the firefly algorithm to solve the NP-hard optimization problem of assigning cells in a cellular network to switches in a way that minimizes total costs. The total cost includes handoff costs for calls between adjacent cells and cabling costs for connecting cells to switches, subject to constraints like switch capacity. It describes modeling the problem mathematically and comparing the firefly algorithm to the particle swarm optimization (PSO) algorithm. The firefly algorithm is shown to solve the cell-to-switch assignment problem faster than existing approaches like PSO.
This document provides an overview of project evaluation and size and cost estimation in software project management. It discusses conducting an initial high-level project evaluation to assess strategic fit, technical feasibility, and economic viability before more detailed size and cost estimations are made. Size can be estimated using function point analysis or object point analysis, while costs are estimated via techniques like cost-benefit analysis, cash flow forecasting, and net present value/internal rate of return calculations. Accurate estimation is challenging, and positive attitudes and periodic revisions are important.
APPLYING REQUIREMENT BASED COMPLEXITY FOR THE ESTIMATION OF SOFTWARE DEVELOPM... (cscpconf)
Computing software complexity in the requirement analysis phase of the software development life cycle (SDLC) would be an enormous benefit for estimating the development and testing effort of yet-to-be-developed software. A relationship between source code and the difficulty of developing it is also explored, in order to estimate the complexity of the proposed software for cost estimation, manpower build-up, and code and developer evaluation. This paper therefore presents a systematic and integrated approach for estimating software development and testing effort on the basis of the improved requirement based complexity (IRBC) of the proposed software, obtained from its software requirement specification (SRS). The IRBC measure serves as the basis for estimating these development activities, enabling developers and practitioners to predict critical information about software development intricacies. For validation, the proposed measures are compared with various established and prevalent practices proposed in the past. Finally, the results obtained validate the claim that the approaches discussed in this paper for estimating software development and testing effort in the early phases of the SDLC are robust, comprehensive, provide early warning, and compare well with other measures proposed in the past.
Effort estimation for software development (Spyros Ktenas)
Software effort estimation has been an important issue for almost everyone in the software industry at some point. Below I will try to give some basic details on methods, best practices, common mistakes, and available tools.
You may also check a tool implementing methods for estimation at http://effort-estimation.gatory.com/
Spyros Ktenas
http://open-works.org/profiles/spyros-ktenas
This document describes a study that examined the differences between top-down and bottom-up expert estimation strategies for software development effort. Seven estimation teams estimated the effort for two software projects, one using a top-down strategy and one using a bottom-up strategy. The study found that bottom-up estimation led to more accurate estimates than top-down estimation. Specifically, bottom-up estimates had a lower average magnitude of relative error than top-down estimates. Additionally, the recall of very similar past projects seemed necessary for accurate top-down estimates, suggesting bottom-up should be used unless experience with very similar projects exists.
An approach for software effort estimation using fuzzy numbers and genetic al... (csandit)
One of the most critical tasks during the software development life cycle is estimating the effort and time involved in developing the software product. Estimation may be performed in many ways, such as expert judgment, algorithmic effort estimation, machine learning, and analogy-based estimation. Analogy-based software effort estimation is the process of identifying one or more historical projects that are similar to the project being developed and then reusing their estimates. Here, analogy-based estimation is integrated with fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of the development life cycle; fuzzy logic is introduced into the proposed model because of the uncertainty associated with attribute measurement and data availability. However, a historical project is rarely exactly the same as the project being estimated: in most cases even the most similar project retains a nonzero similarity distance from it. The reused effort therefore needs to be adjusted whenever the most similar project lies at some similarity distance from the project being estimated. To adjust the reused effort, we build an adjustment mechanism whose algorithm derives the optimal adjustment using a Genetic Algorithm. The proposed model, which combines fuzzy logic for estimating software effort in the early stages with a Genetic Algorithm based adjustment mechanism, can produce estimates close to the actual effort.
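As a rough illustration of the analogy step described above, the following Python sketch selects the most similar historical project by Euclidean similarity distance and applies a simple size-ratio adjustment to the reused effort. The size-ratio rule is only a stand-in for the paper's GA-derived adjustment mechanism, and all feature values are hypothetical.

```python
import math

def similarity_distance(p, q):
    """Euclidean distance between two projects' normalized feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def estimate_by_analogy(target, history):
    """Reuse the effort of the most similar historical project.

    target:  (features, size)
    history: list of (features, size, effort) tuples
    The size-ratio adjustment below stands in for the GA-optimized
    adjustment mechanism described in the abstract."""
    feats, size = target
    best_feats, best_size, best_effort = min(
        history, key=lambda h: similarity_distance(feats, h[0]))
    return best_effort * (size / best_size)  # adjust reused effort for size
```

With a history of two hypothetical projects, the estimate is taken from the nearer one and scaled by the size ratio.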
Effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input.
Effort estimation is essential for many people and different departments in an organization.
This document discusses software estimation techniques. It defines estimation, targets, and commitments, distinguishing between them. It describes reasons for inaccurate estimates like unstable requirements, omitted activities, and unrealistic optimism. It discusses fundamental estimation techniques like counting and computing, and choosing techniques based on project size, development style, and stage. Throughout, it emphasizes removing uncertainty through techniques like requirements stabilization to improve estimates.
The document provides an overview of software project estimation techniques. It discusses that estimation involves determining the money, effort, resources and time required to build a software system. The key steps are: describing product scope, decomposing problems, estimating sub-problems using historical data and experience, and considering complexity and risks. It also covers decomposition techniques, empirical estimation models like COCOMO II, and factors considered in estimation like resources, feasibility and risks.
In software measurement, assessing the validity of software metrics is a very difficult task due to the lack of theoretical and empirical methodology [41, 44, 45]. During recent years, a number of researchers have addressed the issue of validating software metrics; at present, software metrics are validated theoretically using properties of measures. Software measurement plays an important role in understanding and controlling software development practices and products. Its major requirement is that the measures must accurately represent the attributes they purport to quantify, and validation is therefore critical to the success of software measurement. Normally, validation is a collection of analysis and testing activities across the full life cycle that complements the efforts of other quality engineering functions, and it is a critical task in any engineering project. The objective of validation is to discover defects in a system and to assess whether or not the system is useful and usable in an operational situation. In software engineering, validation is one of the disciplines that help build quality into software; the major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper discusses the validation methodology, techniques, and the different properties of measures used for software metrics validation. In most cases, both theoretical and empirical validations are conducted for software metrics in software engineering [1-50].
Testability measurement model for object oriented design (TMMOOD) (ijcsit)
Measuring testability early in the development life cycle, especially at the design phase, is of crucial importance to software designers, developers, quality controllers, and practitioners. However, most of the mechanisms available for testability measurement can only be used in the later phases of the development life cycle. Early estimation of testability, specifically at the design phase, helps designers improve their designs before coding starts, and practitioners regularly advocate that testability should be planned for early in the design phase. Testability measurement early in the design phase is greatly emphasized in this study and considered significant for the delivery of quality software: it extensively reduces rework during and after implementation and facilitates the design of effective test plans and better project and resource planning. An effort is made in this paper to recognize the key factors contributing to testability measurement at the design phase. Additionally, a testability measurement model is developed to quantify software testability at the design phase, the relationship of testability with these factors is tested and justified with the help of statistical measures, and the developed model is validated through an experimental tryout. Finally, the empirical validation of the testability measurement model is incorporated as the authors' most important contribution.
Estimation determines the resources needed to build a system and involves estimating the software size, effort, time, and cost, based on past data, documents, assumptions, and risks. Software size can be estimated in lines of code or function points. Effort estimation calculates person-hours or person-months from software size using formulas such as those of COCOMO II. Cost estimation considers additional factors like hardware, tools, personnel skills, and travel. Techniques for estimation include decomposition and empirical models like Putnam and COCOMO, which relate size to time and effort.
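The size-to-effort-to-time relationship mentioned above can be illustrated with the basic COCOMO equations. The constants shown are the published organic-mode values; this is a minimal sketch of the classic model, not the full COCOMO II.

```python
# Basic COCOMO, organic mode: effort and duration as functions of size (KLOC).
A, B = 2.4, 1.05   # effort   = A * KLOC^B     (person-months)
C, D = 2.5, 0.38   # duration = C * effort^D   (calendar months)

def basic_cocomo(kloc):
    """Return (effort person-months, duration months, average staff)."""
    effort = A * kloc ** B
    duration = C * effort ** D
    staff = effort / duration   # average team size implied by the two
    return effort, duration, staff
```

For a 10 KLOC organic project this gives roughly 27 person-months over about 9 months, i.e. an average team of around 3 people.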
The document discusses project management risk sensitivity analysis. It begins by introducing the importance of fulfilling project outcomes and how a single change in risk sensitivity analysis measures can impact overall project outcomes. It then defines risk sensitivity analysis as a useful analytical procedure for project evaluation and assessment that can help identify major risk variables and guide risk control, though it does not fully evaluate or eliminate all project risks. The objective of the research survey discussed is to evaluate risk sensitivity analysis techniques and models under real-world scenarios to demonstrate their usefulness and inform future risk reduction and management strategies.
Function Point Software Cost Estimates using Neuro-Fuzzy technique (ijceronline)
Software estimation accuracy is among the greatest challenges for software developers. Since a neuro-fuzzy system can approximate non-linear functions with high precision, it is used here as a soft computing approach to generate a model by learning the relationship from training data. The approach presented in this paper is independent of the nature and type of estimation. Function points are used as the algorithmic model, and an attempt is made to validate the soundness of the neuro-fuzzy technique using ISBSG and NASA project data.
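For reference, a function point count with IFPUG-style average-complexity weights and the standard value adjustment factor might be computed as below; the component counts and ratings in the test are illustrative, not taken from the ISBSG or NASA data.

```python
# IFPUG-style function points: unadjusted FP from the five component types
# (average-complexity weights), then a value adjustment factor (VAF) from
# the 14 general system characteristics, each rated 0-5.
AVG_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, gsc_ratings):
    """counts: {component type: number of instances};
    gsc_ratings: 14 integers in 0..5."""
    ufp = sum(AVG_WEIGHTS[kind] * n for kind, n in counts.items())
    vaf = 0.65 + 0.01 * sum(gsc_ratings)   # VAF ranges 0.65 .. 1.35
    return ufp * vaf
```

The adjusted count is what algorithmic models then convert to effort via a productivity rate (e.g. hours per function point).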
This document proposes a new approach for software project estimation that combines existing estimation techniques. It involves using case-based reasoning to retrieve similar past projects, reusing their estimates, and revising the estimates based on new parameters and delay-causing incidents. The approach allows parameters to be added dynamically during project execution to make estimates more context-sensitive and help converge to actual values. A prototype tool has been implemented to demonstrate calculating estimates by dynamically selecting parameters and computing similarity indexes between current and past projects.
New approach for solving software project scheduling problem using differenti... (ijfcstjournal)
The Software Project Scheduling Problem (SPSP) is one of the most critical issues in developing software. The major factor in completing a software project consistently with its planned cost and schedule is accurate scheduling, so SPSP should be considered before other topics in software project development and management. SPSP usually includes resource planning, cost estimation, staffing, and cost control. It is therefore necessary for SPSP to use an algorithm that, taking costs and limited resources into account, can estimate the optimal time for completing the project; simultaneously reducing both time and cost in software project development is vital. Meta-heuristic algorithms have shown good performance on SPSP in recent years: when software project factors are vague and incomplete, cost-based scheduling models built on meta-heuristic algorithms can schedule more effectively. These algorithms work on the basis of collective intelligence and, by using a fitness function, offer greater accuracy for SPSP. In this paper, the Differential Evolution (DE) algorithm is used to optimize SPSP. Experimental results show that the proposed model performs better than a Genetic Algorithm (GA).
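The DE loop this abstract relies on can be sketched as follows. The sphere function at the bottom is a toy stand-in for an actual SPSP schedule-cost objective, and the parameter values are conventional DE defaults rather than the paper's settings.

```python
import random

def differential_evolution(cost, dim, bounds, pop_size=20, F=0.5, CR=0.9,
                           iters=200, seed=1):
    """Minimal DE/rand/1/bin minimizer; `cost` maps a vector to a scalar."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # mutation: three distinct vectors other than the current one
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # ensure at least one mutated gene
            trial = [
                min(hi, max(lo, a[j] + F * (b[j] - c[j])))
                if (rng.random() < CR or j == j_rand) else pop[i][j]
                for j in range(dim)
            ]
            if cost(trial) <= cost(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

# Toy stand-in for a schedule-cost function: sphere, optimum at the origin.
best = differential_evolution(lambda x: sum(v * v for v in x),
                              dim=3, bounds=(-5, 5))
```

On a real SPSP instance the vector would encode task/staff allocations and the cost function would combine project duration and salary cost with penalty terms for constraint violations.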
This document discusses using statistical techniques to improve predictability in project performance. It provides three scenarios as examples: 1) Predicting critical activities on a schedule using a criticality index, 2) Estimating a project schedule and cost using Monte Carlo simulation, and 3) Building an early warning system for project monitoring and control. The document emphasizes that statistical methods can help project managers develop more accurate estimates and better manage project risks and performance.
Estimating involves forecasting the time and cost to complete project deliverables. There are two main types of estimates: bottom-up estimates require more effort but rely on those familiar with the work, while top-down estimates can be made by managers without direct experience. Software cost and effort estimation is not an exact science due to many variable factors. Key parameters that affect estimates include resources, time, human skills, and cost. Common software estimation techniques include top-down and bottom-up methods such as the three-point estimation technique.
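The three-point technique mentioned above combines an optimistic, a most-likely, and a pessimistic estimate, commonly with the PERT beta weighting; a minimal sketch:

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT three-point estimate: weighted mean and standard deviation,
    in the same units as the inputs (e.g. person-days)."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev
```

For a task judged at 4 days best case, 6 days most likely, and 14 days worst case, the expected duration is 7 days with a standard deviation of about 1.7 days.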
Optimizing Effort Parameter of COCOMO II Using Particle Swarm Optimization Me... (TELKOMNIKA JOURNAL)
The document summarizes a research paper that proposes a new topology for a DSTATCOM (Distribution Static Compensator) to improve power quality in three-phase four-wire distribution systems. The proposed topology integrates a three-leg voltage source converter with a T-connected transformer. This helps mitigate neutral current and allows the DSTATCOM to compensate for load harmonics, reactive power, and imbalance. The design and control strategy of the DSTATCOM are described. Its performance is validated using MATLAB simulation for applications like power factor correction and voltage regulation along with neutral current compensation and harmonic reduction with nonlinear loads.
The document compares the low field electron transport properties in compounds of groups III-V semiconductors by solving the Boltzmann equation using an iterative technique. It calculates the temperature and doping dependencies of electron mobility in InP, InAs, GaP and GaAs. The electron mobility decreases with increasing temperature from 100K to 500K for each material due to increased electron-phonon scattering. Electron mobility also increases significantly with higher doping concentration at low temperatures. The iterative results show good agreement with other calculations and experiments. Electron mobility is highest in InAs and lowest in GaP at 300K, due to differences in their effective masses.
This document presents a framework for optimizing transportation costs when supplying resources to construction sites. It identifies renewable and non-renewable resources and considers the locations of storage facilities and construction sites. The objective is to minimize total transportation costs by determining optimal quantities to ship from each storage location to each site. A mathematical model is formulated to represent the transportation costs as a function of shipping quantities between sources and sinks, subject to supply and demand constraints. The model aims to develop a tradeoff between transportation costs and transshipment quantities for efficiently supplying resources in construction projects.
This document summarizes a research paper that proposes using a particle swarm optimization (PSO) algorithm to design hybrid active power filters (HAPFs) for 3-phase 4-wire power systems with variable and nonlinear loads. The objectives are to minimize harmonic distortion, reactive power capacity, and costs while satisfying constraints. PSO is applied to find optimal filter parameters and component ratings. Simulation results show PSO designs outperform conventional trial-and-error methods in meeting standards for harmonic mitigation and power factor correction.
This document summarizes previous research on the deformation of endodontic obturator tips using finite element modeling (FEM). It discusses how FEM has been used to analyze stress distributions and deformation patterns in gutta percha cutter tips when subjected to thermal and mechanical loads. The document reviews literature on using tools like ANSYS to model different tip geometries and materials. It also discusses how FEM can help optimize tip design by predicting failure modes and analyzing stresses under various loading conditions. FEM is presented as an effective tool for analyzing complex dental structures when direct measurement is difficult.
This document discusses methods for interpolating rainfall data using modified inverse distance weighting (MIDW) techniques. It examines four general forms of the MIDW method where the effects of distance and elevation difference can take positive or negative weights. The authors apply genetic algorithms to optimize the MIDW interpolation equation regionally and compare integrated vs. separated dimensionless weighting approaches. Daily rainfall data from 49 stations in the Mashhad plain catchment area of Iran over 16 years is analyzed. The results show that accounting for elevation improves interpolation and that regional optimization of the MIDW method leads to better performance than local optimization.
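The inverse-distance-weighting core that MIDW modifies can be sketched as below. This is plain IDW over horizontal distance only; the elevation-difference terms and the genetic-algorithm optimization described in the summary are omitted.

```python
def idw(stations, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` (x, y) from
    (x, y, value) stations; `power` is the distance exponent."""
    num = den = 0.0
    for x, y, value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return value                    # exactly at a station
        w = 1.0 / d2 ** (power / 2)         # weight = 1 / distance^power
        num += w * value
        den += w
    return num / den
```

MIDW extends the weight with an elevation-difference factor so that stations at similar altitude to the target count for more, which is why accounting for elevation improved the interpolation in the study.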
This document describes using an artificial neural network (ANN) model to optimize the cost of reinforced concrete beams designed according to ACI 318-08 code requirements. The ANN model considers costs of concrete, reinforcement steel, and formwork. A simply supported beam was designed with variable cross-sectional dimensions to demonstrate the model. Computer models were developed using NEURO SHELL-2 software and results were compared to a classical optimization model in Excel using generalized reduced gradient methods, showing good agreement between the two approaches. The document provides details on the ANN model formulation, including design variables, constraints, and objective function to minimize total cost. An example problem is presented to optimize the design of a simply supported beam.
COMPARATIVE STUDY OF SOFTWARE ESTIMATION TECHNIQUES (ijseajournal)
Many information technology firms, among other organizations, have been working on how to estimate resources such as funding during software development processes. Software development life cycles involve many activities and skills, risks must be avoided, and the best-suited software estimation technique should be employed. Therefore, in this research a comparative study was conducted that considers the accuracy, usage, and suitability of existing methods; it will be useful to project managers and project consultants throughout the software project development process. Both algorithmic and non-algorithmic techniques, such as linear regression, are examined: model-based, composite, and regression techniques are used to derive COCOMO, COCOMO II, SLIM, and multiple linear regression respectively, while expertise-based and rule-based approaches are applied among the non-algorithmic methods. However, these techniques still need refinement to reduce the errors experienced during the software development process. This paper therefore proposes, in relation to software estimation techniques, a model that can help information technology firms, researchers, and other organizations that use information technology in processes such as budgeting and decision making.
This document discusses software cost estimation. It introduces metrics for assessing software productivity and techniques for estimating software costs, including algorithmic modeling. A key technique is the COCOMO model, which relates software size to project costs based on historical data. The document also covers how to estimate project duration and staffing needs.
For years, software projects have mostly exceeded their budgets, been delivered late, and failed to meet customer satisfaction. In the past, traditional development models such as waterfall, spiral, iterative, and prototyping methods were used to build software systems. In recent years, agile models have been widely used in developing software products, the major reasons being their simplicity, their ability to incorporate requirement changes at any time, their light-weight approach, and their early delivery of a working product in short iterations. Whatever development model is used, it remains a challenge for software engineers to accurately estimate the size, effort, and time required to develop a software system. This survey focuses on the existing estimation models used in traditional as well as agile software development.
The performance of an algorithm can be improved using a parallel programming approach. In this study, the performance of the bubble sort algorithm was measured on various computer specifications. Experimental results show that parallel programming can reduce running time significantly, by 61%-65% compared with serial programming.
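Bubble sort's adjacent compare-exchanges are inherently sequential; the standard parallel-friendly reformulation is odd-even transposition sort, sketched below. Within each phase the compare-exchange pairs are disjoint, so a parallel implementation could process all of them simultaneously on separate cores; the sketch itself runs sequentially.

```python
def odd_even_transposition_sort(values):
    """Odd-even transposition sort: the parallelizable form of bubble sort.
    Each of the n phases touches disjoint index pairs, so all the
    compare-exchanges inside one phase are independent of each other."""
    a = list(values)
    n = len(a)
    for phase in range(n):
        start = phase % 2   # even phase: pairs (0,1),(2,3),...; odd: (1,2),(3,4),...
        for i in range(start, n - 1, 2):   # independent pairs -> parallelizable
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

After n phases the list is sorted, matching bubble sort's O(n^2) total work but with only O(n) parallel steps when each phase runs concurrently.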
Efficient Indicators to Evaluate the Status of Software Development Effort Es... (IJMIT JOURNAL)
Effort estimation is an undeniable part of project management that considerably influences the success of a project: inaccurate and unreliable effort estimates can easily lead to project failure. Because of their special characteristics, accurate effort estimation for software projects is a vital management activity that must be carried out carefully to avoid unforeseen results. Although numerous effort estimation methods have been proposed in this field, the accuracy of estimates is still not satisfactory, and attempts to improve the performance of estimation methods continue. Prior research in this area has focused on numerical and quantitative approaches, and few works investigate the root problems and issues behind inaccurate estimation of software development effort. In this paper, a framework is proposed to evaluate and investigate an organization's situation with respect to effort estimation. The proposed framework includes various indicators that cover the critical issues in the field of software development effort estimation. Since organizations differ in their capabilities and shortcomings for effort estimation, the proposed indicators lead to a systematic approach in which the strengths and weaknesses of organizations in effort estimation are discovered.
A new model for software cost estimation (ijfcstjournal)
Accurate and realistic estimation is always considered to be a great challenge in software industry.
Software Cost Estimation (SCE) is the standard application used to manage software projects. Determining
the amount of estimation in the initial stages of the project depends on planning other activities of the
project. In fact, the estimation is confronted with a number of uncertainties and barriers’, yet assessing the
previous projects is essential to solve this problem. Several models have been developed for the analysis of
software projects. But the classical reference method is the COCOMO model, there are other methods
which are also applied such as Function Point (FP), Line of Code(LOC); meanwhile, the expert`s opinions
matter in this regard. In recent years, the growth and the combination of meta-heuristic algorithms with
high accuracy have brought about a great achievement in software engineering. Meta-heuristic algorithms
which can analyze data from multiple dimensions and identify the optimum solution between them are
analytical tools for the analysis of data. In this paper, we have used the Harmony Search (HS)algorithm for
SCE. The proposed model which is a collection of 60 standard projects from Dataset NASA60 has been
assessed.The experimental results show that HS algorithm is a good way for determining the weight
similarity measures factors of software effort, and reducing the error of MRE.
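The Harmony Search loop the abstract relies on can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the objective, the toy (KLOC, person-months) data, and all parameter values (hms, hmcr, par, bw) are assumptions chosen for the example.

```python
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=42):
    """Minimal Harmony Search: minimizes `objective` over `dim` variables."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize harmony memory with random solutions
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                value = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    value += rng.uniform(-bw, bw)
            else:                                    # random selection
                value = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, value)))
        worst = max(range(hms), key=lambda i: scores[i])
        s = objective(new)
        if s < scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy use: fit a weight w so that estimate = w * KLOC matches observed effort,
# scoring candidates by MMRE (mean magnitude of relative error).
projects = [(10, 24), (25, 61), (40, 95)]   # (KLOC, person-months), made up
def mmre(w):
    return sum(abs(e - w[0] * k) / e for k, e in projects) / len(projects)

best_w, best_err = harmony_search(mmre, dim=1, bounds=(0.0, 5.0))
```

The paper tunes many similarity-weight factors at once; this one-dimensional toy only shows the mechanics of memory consideration, pitch adjustment, and worst-replacement.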
Enhancing the Software Effort Prediction Accuracy using Reduced Number of Cos... (IRJET Journal)
This document presents research on modifying the COCOMO II software cost estimation model to improve prediction accuracy. The researchers reduced the number of cost estimation factors (called cost drivers) from 17 to 13 by adjusting the definitions and impact levels to better reflect current industry situations. They estimated effort for software projects using the modified model and found lower percentage errors compared to the original COCOMO II model, demonstrating improved estimation efficiency. The goal of the research was to analyze cost drivers and their impact on effort estimation in COCOMO II and enhance the model for more accurate predictions.
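The COCOMO II post-architecture equation the researchers modified has the form Effort = A × Size^E × ΠEM, with E = B + 0.01 × ΣSF. A minimal sketch, using the published COCOMO II.2000 constants A = 2.94 and B = 0.91; the example inputs are illustrative, not the paper's recalibration:

```python
def cocomo2_effort(ksloc, scale_factors, effort_multipliers,
                   A=2.94, B=0.91):
    """COCOMO II post-architecture effort in person-months.

    scale_factors: the five scale-factor ratings (summed into the exponent).
    effort_multipliers: the cost-driver multipliers (17 in the original
    model; the paper above argues for 13), multiplied together.
    """
    E = B + 0.01 * sum(scale_factors)
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    return A * (ksloc ** E) * em_product

# 100 KSLOC project with nominal scale-factor ratings (COCOMO II.2000
# nominal values) and all cost drivers at their nominal multiplier 1.0
effort = cocomo2_effort(100,
                        scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                        effort_multipliers=[1.0] * 17)
```

Reducing the driver count, as the paper does, changes only the length and values of `effort_multipliers`; the form of the equation stays the same.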
How Should We Estimate Agile Software Development Projects and What Data Do W... (Glen Alleman)
Estimating techniques for an acquisition program progress from analogies to the actual-cost method as the program matures and more information becomes known. The analogy method is most appropriate early in the program life cycle, when the system is not yet fully defined.
AN APPROACH FOR SOFTWARE EFFORT ESTIMATION USING FUZZY NUMBERS AND GENETIC AL... (cscpconf)
One of the most critical tasks during the software development life cycle is estimating the effort and time involved in developing the software product. Estimation may be performed in many ways, such as expert judgment, algorithmic effort estimation, machine learning, and analogy-based estimation. Analogy-based software effort estimation is the process of identifying one or more historical projects that are similar to the project being developed and then deriving the estimate from them. Here, analogy-based estimation is integrated with fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of the development life cycle. Fuzzy logic is introduced in the proposed model because of the uncertainty associated with attribute measurement and data availability. However, a historical project is rarely exactly the same as the project being estimated; in most cases, even the most similar project still has a nonzero similarity distance from it. Therefore, the reused effort needs to be adjusted when the most similar project has a similarity distance from the project being estimated. To adjust the reused effort, we build an adjustment mechanism whose algorithm derives the optimal adjustment using a Genetic Algorithm. The proposed model, combining fuzzy logic for early-stage effort estimation with a Genetic Algorithm-based adjustment mechanism, can produce estimates close to the correct effort.
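Analogy-based estimation with an adjustment factor can be sketched as follows. This is a plain Euclidean-distance version, not the fuzzy/GA model of the paper; the history data, feature vectors, and the fixed `alpha` are hypothetical stand-ins (a GA would search for `alpha` instead of fixing it):

```python
import math

# Hypothetical historical projects: (feature vector, actual effort in PM)
history = [
    ([10, 3, 2], 24.0),
    ([25, 5, 3], 61.0),
    ([40, 8, 4], 95.0),
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def analogy_estimate(new_project, k=2, alpha=1.0):
    """Estimate effort as the inverse-distance-weighted mean of the k
    nearest historical projects; `alpha` is a linear adjustment factor
    that an optimizer could tune (fixed at 1.0 here)."""
    ranked = sorted(history, key=lambda p: euclidean(p[0], new_project))
    nearest = ranked[:k]
    # Inverse-distance weighting; the epsilon guards an exact match
    weights = [1.0 / (euclidean(f, new_project) + 1e-9) for f, _ in nearest]
    est = sum(w * e for w, (_, e) in zip(weights, nearest)) / sum(weights)
    return alpha * est

estimate = analogy_estimate([20, 4, 3])
```

The fuzzy version replaces the crisp distance with a similarity computed over fuzzy-number attributes, but the reuse-then-adjust structure is the same.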
A DECISION SUPPORT SYSTEM FOR ESTIMATING COST OF SOFTWARE PROJECTS USING A HY... (ijfcstjournal)
One of the major challenges in software today is cost estimation. It refers to estimating the cost of all activities, including software development, design, supervision, maintenance, and so on. Accurate cost estimation of software projects optimizes internal and external processes and allows staff work, effort, and overheads to be coordinated with one another. In managing software projects, estimation must be taken into account so as to reduce costs, schedule slips, and possible risks, and to avoid project failure. In this paper, a decision support system using a combination of a multi-layer artificial neural network and a decision tree is proposed to estimate the cost of software projects. In the model included in the proposed system, the normalization of factors, which is vital in evaluating effort and cost estimates, is carried out using a C4.5 decision tree. Moreover, the factors are used to train and test a multi-layer artificial neural network, and the most optimal values are allocated to them. The experimental results and evaluations on the NASA60 dataset show that the proposed system yields a smaller total average relative error than the COCOMO model.
The document provides an overview of software cost estimation. It discusses factors that influence software cost such as programmer ability, product size, available time, required reliability, and level of technology. It also covers different software cost estimation techniques including expert judgment, Delphi estimation, work breakdown structures, algorithmic models like COCOMO, and estimating maintenance costs. The document emphasizes that accurate cost estimation is difficult during planning due to many unknown factors, so organizations may use iterative estimates to reduce cost overruns.
This document summarizes the key phases in planning a software project:
1) Planning is the most important phase and involves estimating effort, schedules, resources, and risks. It defines goals for costs, timelines and quality.
2) Process planning defines the development stages and activities. It may modify established models to suit a project's needs.
3) Effort estimation predicts the time and costs required, and is needed for budgets, monitoring, and contracting. Estimates are most accurate after requirements are defined.
DESQA: a Software Quality Assurance Framework (IJERA Editor)
In current software development life cycles across heterogeneous environments, a pitfall businesses face is that software defect tracking, measurement, and quality assurance do not start early enough in the development process. In fact, the cost of fixing a defect in a production environment is much higher than in the initial phases of the Software Development Life Cycle (SDLC), which is particularly true for Service-Oriented Architecture (SOA). Thus the aim of this study is to develop a new framework for defect tracking, defect detection, and quality estimation in the early stages of the SDLC, particularly the design stage. Part of the objectives of this work is to conceptualize, borrow, and customize from known frameworks, such as object-oriented programming, to build a solid framework that uses automated rule-based intelligent mechanisms to detect and classify defects in SOA software designs. The implementation demonstrates how the framework can predict the quality level of the designed software. The results show that a good level of quality estimation can be achieved based on the number of design attributes, the number of quality attributes, and the number of SOA design defects. The assessment shows that metrics provide guidelines indicating the progress a software system has made and the quality of its design. Using these guidelines, we can develop more usable and maintainable software systems to meet the demand for efficient software applications. Another valuable result of this study is that developers try to keep backwards compatibility when they introduce new functionality; sometimes they make necessary breaking changes to newly introduced elements only in future versions, giving their clients time to adapt their systems. This is a valuable practice for developers because it gives them more time to assess the quality of their software before releasing it.
Other improvements in this research include the investigation of additional design attributes and SOA design defects, which can be computed by extending the tests we performed.
1. Software project estimation involves decomposing a project into smaller problems like major functions and activities. Estimates can be based on similar past projects, decomposition techniques, or empirical models.
2. Accurate estimates depend on properly estimating the size of the software product using techniques like lines of code, function points, or standard components. Baseline metrics from past projects are then applied to the size estimates.
3. Decomposition techniques involve estimating the effort needed for each task or function and combining them. Process-based estimation decomposes the software process into tasks while problem-based estimation decomposes the problem.
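Point 3 above amounts to summing per-activity estimates over the decomposed functions. A toy sketch of process-based estimation, with entirely made-up effort figures and labor rate:

```python
# Hypothetical decomposition: person-months per activity for each
# major function of the product (all numbers are illustrative)
tasks = {
    "UI":        {"analysis": 0.5, "design": 1.5, "code": 2.0, "test": 1.0},
    "Database":  {"analysis": 0.7, "design": 2.0, "code": 3.0, "test": 1.5},
    "Reporting": {"analysis": 0.3, "design": 1.0, "code": 1.5, "test": 0.8},
}

# Total effort is the sum over every (function, activity) cell
total_effort = sum(sum(activities.values()) for activities in tasks.values())

labor_rate = 8000  # assumed cost per person-month, illustrative only
total_cost = total_effort * labor_rate
```

Problem-based estimation has the same shape, except the rows would be problem decompositions (e.g. function points per subsystem) with baseline productivity metrics applied to each row.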
In the present paper, the applicability and capability of AI techniques for effort estimation prediction have been investigated. Neuro-fuzzy models are very robust, characterized by fast computation, and capable of handling distorted data. Given the non-linearity of the data, such a model is an efficient quantitative tool for predicting effort. A one-hidden-layer network named OHLANFIS has been developed in the MATLAB simulation environment. The initial parameters of the OHLANFIS are identified using the subtractive clustering method, and the parameters of the Gaussian membership functions are optimally determined using a hybrid learning algorithm. The analysis shows that the effort estimation prediction model developed with the OHLANFIS technique performs better than a standard ANFIS model.
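The Gaussian membership functions mentioned above have the form μ(x) = exp(−(x − c)² / 2σ²). A small sketch of fuzzifying a project attribute with them; the linguistic terms and their (center, spread) parameters are invented for illustration, not taken from the OHLANFIS model:

```python
import math

def gaussian_mf(x, c, sigma):
    """Gaussian membership: degree to which x belongs to a fuzzy set
    centered at c with spread sigma (peaks at 1.0 when x == c)."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# Fuzzify a project size of 30 KLOC against three linguistic terms,
# each defined by an assumed (center, spread) pair
terms = {"small": (10, 8), "medium": (35, 10), "large": (70, 15)}
membership = {name: gaussian_mf(30, c, s) for name, (c, s) in terms.items()}
```

In ANFIS-style training, the centers and spreads are exactly the parameters the hybrid learning algorithm adjusts; subtractive clustering supplies their initial values.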
A survey of predicting software reliability using machine learning methods (IAESIJAI)
In light of technical and technological progress, software has become an urgent need in every aspect of human life, including the medical sector and industrial control. It is therefore imperative that software always works flawlessly. The information technology sector has witnessed rapid expansion in recent years: software companies can no longer rely only on cost advantages to stay competitive in the market, and programmers must deliver reliable, high-quality software. To support estimating and predicting software reliability using machine learning and deep learning, this survey presents a brief overview of the important scientific contributions on software reliability and the researchers' findings on highly efficient methods and techniques for predicting it.
A sustainable procedural method of software design process improvements (nooriasukmaningtyas)
In practice, the software design process is an intermediate phase for enhancing and improving the design of different types of software products; it helps developers convert the specified requirements into prototypes that turn the design into reality. The objective of this paper is to provide software developers, designers, and software engineers who work in small companies with a standards-based process improvement using a procedural method, including detailed steps for designing small software systems in their companies. The method used in this paper includes: 1) analyzing four types of design processes commonly used by industry, such as CMMI, the conventional software process in ISO 19759, and generic and engineering design processes; 2) mapping between those four design processes; 3) collecting the dispersed design concepts proposed by those four processes; 4) proposing a sustainable procedural method for software design process improvement; and 5) illustrating the applicability of the proposed approach using a template-based implementation. The primary result of this study is a guideline procedure with detailed steps for software design process improvement to help and guide developers in small companies to analyze and design small-scale software with limited cost and duration. In conclusion, this paper proposes a method to improve the design process for different kinds of software systems using a template-based implementation to reduce the cost, effort, and time needed in the implementation phase in small companies. A template-based implementation helps systems and software engineers use the template easily in their small companies, because most of the time these engineers are responsible for analyzing, designing, implementing, and testing their software systems throughout the whole software life cycle.
This document discusses software cost estimation. It describes the objectives of software cost estimation and the fundamental questions that must be answered. It explains different techniques for software cost estimation including algorithmic cost modeling, expert judgement, and estimation by analogy. A key part of the document focuses on the COCOMO model, including the different levels of COCOMO 2 for estimation. It also discusses using cost estimation for project planning and management.
A Systematic Review On Software Cost Estimation In Agile Software Development (Brooke Heidt)
This document provides a summary of a systematic literature review on software cost estimation techniques for agile software development. It discusses the challenges of cost estimation for agile projects due to their dynamic nature. The review examines various cost estimation mechanisms that have been explored for agile methodologies and compares their accuracy based on parameters like magnitude of relative error, mean magnitude of relative error, and others. It aims to help agile practitioners understand current trends in cost estimation and determine which techniques may be suitable given different project circumstances.
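The accuracy parameters the review compares are straightforward to compute. A sketch of MRE, MMRE, and the related PRED(25) measure on made-up actual/estimated effort pairs:

```python
def mre(actual, estimated):
    """Magnitude of Relative Error for one project."""
    return abs(actual - estimated) / actual

def mmre(actuals, estimates):
    """Mean Magnitude of Relative Error over a set of projects."""
    return sum(mre(a, e) for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, threshold=0.25):
    """PRED(25): fraction of projects whose MRE is at most 25%."""
    errors = [mre(a, e) for a, e in zip(actuals, estimates)]
    return sum(1 for r in errors if r <= threshold) / len(errors)

# Illustrative effort values in person-months (not from the review)
actuals   = [100.0, 80.0, 60.0]
estimates = [ 90.0, 100.0, 58.0]
```

Lower MMRE and higher PRED(25) indicate a better estimator; the review ranks agile estimation techniques on exactly these kinds of measures.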
International Journal of Engineering Inventions
ISSN: 2278-7461, ISBN: 2319-6491, www.ijeijournal.com
Volume 1, Issue 10 (November 2012), PP: 29-34

An Approach towards Developing an Efficient Software Cost Estimation System Using Fuzzy and Analogy Methods

¹Mr. S. A. Ramesh Kumar, ²Dr. S. Siva Subramanian
¹Asst. Professor, Karpaga Vinayaga College of Engineering, Chennai
²Professor, Dhanalakshmi College of Engineering, Chennai
Abstract: Software development cost estimation is important for effective project management. Many models have been introduced to predict software development cost. In this paper, a novel emotional COnstructive COst MOdel II (COCOMO II) is proposed for software cost estimation. COCOMO II considers only project characteristics, whereas the characteristics of team members are also important factors. Hence our proposed method effectively estimates software effort and cost by employing analogy together with fuzzy-based methods. In order to overcome the difficulties of existing software cost estimation systems, we propose an efficient estimation system based on fuzzy analogy. We have developed and validated a set of candidate measures of software project similarity; these measures are based on fuzzy sets, fuzzy reasoning, and linguistic quantifiers. We present a new approach to estimating effort by analogy when software projects are described by either numerical or categorical data. The normal analogy procedure is not applicable to categorical data, so we introduce an approach that can be seen as a fuzzification of the classical approach to estimation by analogy. The proposed method is also applicable to the Function Point metric, and it is easy to generate rules in the fuzzy analogy method. Fuzzy-analogy-based cost estimation overcomes the multicollinearity problem. The effort can be estimated based on the emotional characteristics of the employees on the projects; neuroticism traits strongly affect the performance of the cost estimation system. Thus our proposed method gives better performance than recent work and overcomes the drawbacks of previous methods.
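One common way to build a fuzzy similarity measure over categorical project attributes, in the spirit of the abstract, is the height of the intersection of the labels' fuzzy sets. A sketch with invented linguistic labels and membership values (not the paper's validated measures):

```python
# Hypothetical fuzzy sets for a categorical attribute "team experience":
# each linguistic label maps rating-scale points (1..5) to membership degrees
fuzzy_sets = {
    "low":    {1: 1.0, 2: 0.6, 3: 0.2},
    "medium": {2: 0.4, 3: 1.0, 4: 0.4},
    "high":   {3: 0.2, 4: 0.6, 5: 1.0},
}

def fuzzy_similarity(label_a, label_b):
    """Max-min similarity between two linguistic labels: the height of
    the intersection of their fuzzy sets (1.0 for identical labels,
    0.0 when the sets do not overlap at all)."""
    a, b = fuzzy_sets[label_a], fuzzy_sets[label_b]
    common = set(a) & set(b)
    if not common:
        return 0.0
    return max(min(a[x], b[x]) for x in common)

sim = fuzzy_similarity("low", "medium")
```

Unlike the crisp equality test (which rates "low" vs "medium" as simply different), this measure assigns a graded similarity, which is what makes analogy workable on categorical data.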
I. INTRODUCTION
Software cost estimation is the process of predicting the effort required to develop a software system. The key factor
in selecting a cost estimation model is the accuracy of its estimates. Estimation involves determining quantities such as
effort (usually in person-months), project duration (in calendar time) and cost (in dollars). The goal is to obtain an in-depth
understanding of estimation practice and to examine the factors that affect the accuracy of effort estimation. The basis for
measuring estimation accuracy is a comparison of the actual effort used with the estimated most likely effort provided in the
planning stage of the project, i.e. the amount of effort the contractor believes the project will require, regardless of the price
to the customer or the budget [5]. As software grew in size and importance, it also grew in complexity, making it very
difficult to accurately predict the cost of software development. Better estimation accuracy can be achieved by developing
domain-based models that constructively explain the development life cycle and accurately predict the effort of developing
the software [16]. Effort estimation with good accuracy helps in managing overall budgeting and planning. However, such
estimates are the most difficult to obtain, and their accuracy is typically low, because little or no detail about the project is
known at the beginning [12]. Accurate effort estimates help software consultancies to make appropriate bids when quoting
for tenders: an estimate lower than the actual effort will lead to a loss, while an unreasonably high estimate will lose the bid.
Estimation models are developed using a set of measures that describe the software development process, product and
resources, such as developer experience, system size and complexity, and the characteristics of the development
environment, respectively [11]. Expert judgment, in contrast, involves the consultation of one or more experts who derive an
estimate for the project based on their experience and the available information about the project under development. This
technique has been used extensively; however, the estimates are produced in an intuitive and non-explicit way [4]. In
component-based development, a component should provide the required functions and services when used under the
specified conditions. Reusing pre-existing components with no or minimal changes allows lower cost and faster delivery of
the end product; if
the required functionality is high, the effort invested is also high [12]. Software development effort estimation is the process
of predicting the most realistic amount of effort required to develop software based on a set of parameters. It has been one of
the biggest challenges in computer science for decades, because time and cost estimates at the early stages of software
development are the most difficult to obtain and are often the least accurate [6]. The precision and reliability of effort
estimation are very important for the software industry, because both overestimates and underestimates of software effort are
harmful to software companies. Accurate estimation of software development effort therefore has major implications for the
management of software development [2]. Major approaches to estimation include neural networks, regression modelling
and project-management techniques. Project development has gained importance in business due to stiff competition and a
fast-changing business environment; its cycle time is critical, as it decides the success of a business. Regardless of scope or
complexity, a project passes through a series of stages: Initiation, in which the outputs and critical success factors are
defined; a Planning phase, characterized by breaking the project down into smaller parts or tasks; an Execution phase, in
which the project plan is executed; and lastly a Closure (or Exit) phase, which marks the completion of the project. Project
activities must be grouped into these phases [15]. The aim of evidence-based software engineering is to provide the means by
which current best evidence from research can be integrated with practical experience and human values in the decision-
making process regarding the development and maintenance of software [3]. Systematic reviews serve several purposes: to
summarize existing evidence regarding a treatment or technology, for example to review existing empirical evidence of the
benefits and
ISSN: 2278-7461 www.ijeijournal.com Page | 29
An Approach towards Developing an Efficient Software Cost…
limitations of a specific Web development method; to identify gaps in existing research that will lead to topics for further
investigation; and to provide a context or framework in which to properly place new research activities [1].
Research on predicting software development effort using artificial neural networks has focused mostly on accuracy
comparison with algorithmic models rather than on the suitability of the approach for building software effort prediction
systems. One case study considered real-time software and predicted the testability of each module from static source-code
measures; its authors regard artificial neural networks as a promising technique for building predictive models [7]. The
artificial neural network method has the potential to be the most appropriate technique for early-stage estimation. This is
because neural networks, unlike linear regression, are able to model interdependencies between input data, which inevitably
occur when considering significant variables such as, in construction, the number of storeys, floor area and number of lifts
[14]. The challenge then lies in accurately modelling and predicting the software development effort, and then creating the
project development schedule. One such work employs a neural network (NN) approach and a multiple regression modelling
approach to model and predict software development effort based on an available real-life dataset [13].
Recommendations for future software cost estimation research include: increase the breadth of the search for relevant
studies; search manually for relevant papers within a carefully selected set of journals when completeness is essential;
conduct more research on basic software cost estimation topics; conduct more studies of software cost estimation in real-life
settings; conduct more studies on estimation methods commonly used by the software industry; and conduct fewer studies
that evaluate methods based on arbitrarily chosen data sets [8]. A study of most of the available cost estimation data found
that the above proposition provides the best way of producing cost estimates at the requirements phase itself; this will be
appreciated when judgment is drawn from worked-out projects and their cost estimates [10]. This gives rise to an interesting
dilemma in software cost estimation: the quality of the inputs starts becoming better only when the outputs have started
losing value. Addressing this dilemma, which represents a major problem in the field of software cost estimation, is
tantamount to addressing the problem of poor estimation accuracy during the early phases of software development. The
quality of inputs during these early phases is difficult to improve due to the limited amount of information available at that
time [9].
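The regression modelling mentioned in this introduction can be illustrated with a minimal sketch: a power-law effort model, effort = a · size^b, fitted to historical project data by ordinary least squares in log-log space. The dataset and the model form below are illustrative assumptions, not values from the paper.

```python
import math

def fit_power_model(sizes, efforts):
    """Fit effort = a * size^b by least squares on the log-log scale.

    sizes:   project sizes (e.g. KLOC) from completed projects
    efforts: the corresponding actual efforts (person-months)
    """
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Ordinary least squares slope (b) and intercept (log a).
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = math.exp(mean_y - b * mean_x)
    return a, b

def predict_effort(a, b, size):
    return a * size ** b

# Hypothetical historical data: (size in KLOC, effort in person-months).
history = [(10, 24), (25, 70), (50, 155), (100, 340)]
a, b = fit_power_model([s for s, _ in history], [e for _, e in history])
print(round(predict_effort(a, b, 40), 1))  # estimate for a 40-KLOC project
```

Fitting on the log scale is what makes this a "regression-based" model in the same family as COCOMO, whose effort equation has exactly this power-law shape.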
II. ANALOGY-BASED ESTIMATION
Estimation by analogy gave the best prediction accuracy when used on a dataset of Web hypermedia applications.
It is an intuitive method, and there is evidence that experts apply analogical reasoning when making estimates. Estimation by
analogy is simple and flexible compared to algorithmic models, and it can be used on both qualitative and quantitative data,
reflecting the types of datasets found in real life [26]. Analogy-based estimation is able to deal with poorly understood
domains that are difficult to model, since it relies on historical data of projects similar to the target project rather than on a
formal model or, in the case of rule-based systems, on rules. It can be applied in the very early phase of a software project,
when detailed information about the project is not yet available, and the estimate can later be improved as more detailed
information becomes accessible. Analogy-based estimation also has the potential to mitigate the effect of outliers in a
historical data set, since it does not rely on calibrating a single model to suit all projects [22]. Both ant colony optimization
(ACO) and genetic algorithms (GA) use a population of agents or individuals to represent solutions, and the information
collected by the population influences the next generation of the search.
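The analogy-based estimation described in this section can be sketched as a nearest-neighbour lookup over historical projects: find the k most similar completed projects and average their actual efforts. The features, data and similarity measure below are illustrative assumptions, not a prescription from the cited studies.

```python
def analogy_estimate(target, history, k=2):
    """Estimate effort for `target` as the mean actual effort of its k most
    similar historical projects (Euclidean distance on normalised features).

    target:  feature dict describing the new project
    history: list of (feature dict, actual effort) pairs for past projects
    """
    keys = sorted(target)
    # Min-max normalise each feature so no single attribute dominates.
    lo = {f: min(min(p[f] for p, _ in history), target[f]) for f in keys}
    hi = {f: max(max(p[f] for p, _ in history), target[f]) for f in keys}
    def norm(p, f):
        return 0.0 if hi[f] == lo[f] else (p[f] - lo[f]) / (hi[f] - lo[f])
    def dist(p):
        return sum((norm(p, f) - norm(target, f)) ** 2 for f in keys) ** 0.5
    nearest = sorted(history, key=lambda pe: dist(pe[0]))[:k]
    return sum(e for _, e in nearest) / k

# Hypothetical past projects: size (KLOC) and team size -> effort (PM).
past = [({"kloc": 10, "team": 3}, 25),
        ({"kloc": 12, "team": 4}, 31),
        ({"kloc": 40, "team": 9}, 120),
        ({"kloc": 45, "team": 10}, 140)]
print(analogy_estimate({"kloc": 11, "team": 3}, past, k=2))  # -> 28.0
```

Because the estimate comes from actual completed projects rather than a single calibrated equation, a few outlier projects only distort estimates for targets that happen to resemble them, which is the outlier-mitigation property noted above.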
III. REGRESSION-BASED SOFTWARE COST ESTIMATION MODEL
An important difference between ACO and GAs is that the information maintained in the artificial pheromone trails
represents the memory of the entire colony across all generations, whereas information on the performance of the search is
contained only in the current generation of a GA. The main advantage of ACO over GAs is that in every new cycle of the
search, solutions are developed from the collective information maintained in the pheromone trails [25].
The COCOMO model is a regression-based software cost estimation model. It was developed by Boehm in 1981 and is
thought to be the most cited, best known and most plausible of all traditional cost prediction models (Fei and Liu, 1992).
The COCOMO model can be used to calculate the amount of effort and the time schedule for software projects. COCOMO
81 was a stable model at that time; one of the problems with using COCOMO 81 today is that it does not match the
development environments of the late 1990s onwards [24]. COCOMO represents an approach that could be regarded as
"off the shelf": the estimator hopes that the equations contained in the cost model adequately represent their development
environment, and that any variations can be satisfactorily accounted for by the cost drivers or parameters built into the
model.
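As a concrete reference point, the COCOMO effort equation has the regression form effort = a · KLOC^b, and the intermediate model scales it by an effort adjustment factor (EAF), the product of the 15 cost-driver multipliers mentioned below. A minimal sketch, where the example project size and the two driver values are purely illustrative:

```python
# Intermediate COCOMO: effort = a * KLOC^b * EAF, with (a, b) per mode
# and EAF the product of the cost-driver multipliers.
MODE_PARAMS = {
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def cocomo_effort(kloc, mode="organic", cost_drivers=()):
    a, b = MODE_PARAMS[mode]
    eaf = 1.0
    for multiplier in cost_drivers:  # e.g. 1.15 for high product complexity
        eaf *= multiplier
    return a * kloc ** b * eaf       # effort in person-months

# A 32-KLOC organic project with two illustrative cost-driver ratings.
effort = cocomo_effort(32, "organic", cost_drivers=(1.15, 0.91))
print(round(effort, 1))  # roughly 127 person-months for this example
```

When all 15 drivers are rated "nominal" the EAF is 1.0 and the model reduces to the basic regression equation; the drivers are exactly the "parameters built into the model" that the estimator hopes will absorb environment-specific variation.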
For instance, COCOMO has 15 such drivers. Unfortunately, there is considerable evidence that this "off the shelf"
approach is not always very successful: Kemerer reports average errors (in terms of the difference between predicted and
actual project effort) of over 600 percent in his independent study of COCOMO. Analogy-based systems can also handle
failed cases (i.e., those cases for which an accurate prediction was not made), which is useful as it enables users to identify
potentially high-risk situations. Analogy is able to deal with poorly understood domains (such as software projects) since
solutions are based upon what has actually happened, as opposed to chains of rules in the case of rule-based systems. Users
may also be more willing to accept solutions from analogy-based systems, since these are derived from a form of reasoning
more akin to human problem solving than the somewhat arcane chains of rules or neural nets. This final advantage is
particularly important if systems are to be not only deployed but also relied upon [23].
3.1 RBFN NETWORK STRUCTURE
A number of approaches have been presented in the literature for software development cost estimation. Recently,
artificial intelligence techniques such as neural networks, fuzzy logic and web-based methods have received a great deal of
attention among researchers working on software cost estimation. A brief review of some recent research is presented here.
Ali Idri et al. [4] noted that, despite the several software effort estimation models
developed over the last 30 years, providing accurate estimates for a software project under development remains an
unachieved goal. Therefore, many researchers are working on developing new models and improving existing ones using
artificial intelligence techniques such as case-based reasoning, decision trees, genetic algorithms and neural networks. Their
paper was devoted to the design of radial basis function networks (RBFN) for software cost estimation. It shows the impact
of the RBFN network structure, especially the number of neurons in the hidden layer and the widths of the
basis functions, on the accuracy of the produced estimates, measured by means of the MMRE and Pred indicators.
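A minimal sketch of an RBFN forward pass, together with the MMRE and Pred accuracy indicators mentioned above. The network parameters and the evaluation data are illustrative assumptions, not the configuration or dataset of the cited study.

```python
import math

def rbfn_predict(size, centers, widths, weights, bias):
    """Forward pass of an RBFN with one input (project size): each hidden
    neuron fires through a Gaussian basis function; the output layer is a
    weighted sum of the hidden activations plus a bias."""
    hidden = [math.exp(-((size - c) ** 2) / (2 * w ** 2))
              for c, w in zip(centers, widths)]
    return bias + sum(wt * h for wt, h in zip(weights, hidden))

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error over all projects."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pred(actual, predicted, threshold=0.25):
    """Pred(25): fraction of estimates within 25% of the actual effort."""
    hits = sum(abs(a - p) / a <= threshold for a, p in zip(actual, predicted))
    return hits / len(actual)

# Hypothetical two-neuron network and evaluation data (sizes in KLOC,
# efforts in person-months).
params = dict(centers=[15, 60], widths=[20, 30], weights=[40, 150], bias=5)
sizes, actual = [10, 30, 55, 80], [35, 60, 140, 170]
predicted = [rbfn_predict(s, **params) for s in sizes]
print(round(mmre(actual, predicted), 2), pred(actual, predicted))
```

Lower MMRE and higher Pred(25) indicate better accuracy; these are the same indicators by which the cited paper measures the effect of the number of hidden neurons and the basis-function widths.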
3.2 STRUCTURE REVIEW
Stein Grimstad et al. [5] documented that the software industry suffers from frequent cost overruns, and that a
contributing factor is the imprecise estimation terminology in use. A lack of clarity and precision in the use of
estimation terms reduces the interpretability of estimation accuracy results, makes the communication of estimates difficult,
and lowers the learning possibilities. Their paper reports on a structured review of typical software effort estimation
terminology in software engineering textbooks and software estimation research papers. The review provides evidence that
the term "effort estimate" is frequently used without sufficient clarification of its meaning, and that estimation accuracy is
often evaluated without ensuring that the estimated and the actual effort are comparable. Guidelines are suggested on how to
reduce this lack of clarity and precision in terminology. B. Tirimula Rao et al. [7] noted that software cost estimation is
critical for software project management. Many approaches have been proposed to estimate the cost of a current project by
referring to data collected from past projects. They discuss an approach for validating the dataset used to train a neural
network for software cost estimation. Combining a mathematical approach to give a base model for the validation procedure
yields considerably more accurate results than the primitive ones; by tracking the results against standard ones, they calculate
the error percentile, which proved more efficient than plain artificial neural networks and simplifies the operation of
artificial neural networks.
3.3 FUTURE SOFTWARE COST ESTIMATION
Magne Jorgensen and Martin Shepperd [8] provide a basis for the improvement of software estimation research
through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and
classifies the papers according to research topic, estimation approach, research approach, study context and data set. Based
on the review, they recommend that future software cost estimation research: 1) increase the breadth of the search for
relevant studies; 2) search manually for relevant papers within a carefully selected set of journals when completeness is
essential; 3) conduct more research on basic software cost estimation topics; 4) conduct more studies of software cost
estimation in real-life settings; 5) conduct more studies on estimation methods commonly used by the software industry;
and 6) conduct fewer studies that evaluate methods based on arbitrarily chosen data sets.
Kirti Seth et al. [12] pointed out that effort estimation with good accuracy helps in managing overall budgeting and
planning, yet the accuracy of such estimates is very low and they are most difficult to obtain, because little or no detail about
the project is known at the beginning. Due to the architectural differences of component-based systems (CBS), estimation
becomes even more crucial. Component-based development mainly involves the reuse of already developed components,
and the selection of the best-quality components is of prime concern for developing an overall quality product. CBS mainly
involves two types of effort: selection and integration. Their paper presents a fuzzy rule-based model for estimating the
effort of selecting components for developing an application using the CBSE approach. This simulator will be an asset for
affordably keeping track of time during development, and thus for satisfying the client in this era of a competitive software
market.
Maya Daneva and Roel Wieringa [18] concluded that there are many methods for estimating size, effort, schedule and
other cost aspects of IS projects, but only one specifically developed for Enterprise Resource Planning (ERP) and none for
simultaneous, interdependent ERP projects in a cross-organizational context. The objective of their paper was to sketch the
problem domain of cross-organizational ERP cost estimation, to survey available solutions, and to propose a research
program to improve those solutions. They explain why knowledge of the cost estimation of cross-organizational ERP is
fragmented, assess the need to integrate research perspectives, and propose research directions that an integrated view of
estimating cross-organizational ERP project cost should include.
Jaswinder Kaur et al. [19] note that estimating software development effort is an important task in the management of
large software projects. The task is challenging, and it has been receiving the attention of researchers ever since software
began to be developed for commercial purposes. A number of estimation models exist for effort prediction; however, there
is a need for novel models to obtain more accurate estimates. The primary purpose of their study was to propose a precise
method of estimation by selecting the most popular models in order to improve accuracy. They explore the use of soft
computing techniques to build a suitable model structure for improved estimation of software effort for NASA software
projects, providing a comparison between an artificial neural network (ANN) based model and the Halstead, Walston-Felix,
Bailey-Basili and Doty models. The evaluation criteria were based on MRE and MMRE. The final results were precise and
reliable when applied to a real dataset from a software project, and show that artificial neural networks are effective in effort
estimation.
Bingchiang Jeng et al. [20] observe that software estimation provides an important tool for project planning, whose
quality and accuracy greatly affect the success of a project. Despite a plethora of estimation models, practitioners experience
difficulties in applying them, because most models attempt to include as many influential factors as possible in estimating
software size and/or effort. Their research suggests a different approach that simplifies and tailors a generic function point
analysis (FPA) model to increase ease of use. The proposed approach redefines the function type categories in the FPA
model on the basis of the target application's characteristics and system architecture. This makes the function types more
suitable for the particular application domain, and also enables function point counting by the programmers themselves
instead of by an expert. An empirical study using historical data establishes the regression model and demonstrates that its
prediction accuracy is comparable to that of a FPA model.
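The generic function point analysis that Jeng et al. tailor can be sketched as follows: count the five standard function types, weight them (the classic IFPUG average-complexity weights are used here), and scale the unadjusted count by the value adjustment factor derived from the 14 general system characteristics. The example counts and ratings are illustrative.

```python
# Average-complexity IFPUG weights for the five standard function types.
AVERAGE_WEIGHTS = {
    "EI": 4,    # external inputs
    "EO": 5,    # external outputs
    "EQ": 4,    # external inquiries
    "ILF": 10,  # internal logical files
    "EIF": 7,   # external interface files
}

def function_points(counts, gsc_ratings):
    """counts: number of functions of each type (average complexity assumed).
    gsc_ratings: the 14 general system characteristic ratings, each 0-5."""
    ufp = sum(AVERAGE_WEIGHTS[t] * n for t, n in counts.items())
    vaf = 0.65 + 0.01 * sum(gsc_ratings)  # value adjustment factor
    return ufp * vaf                      # adjusted function points

# A hypothetical small application.
counts = {"EI": 10, "EO": 6, "EQ": 4, "ILF": 5, "EIF": 2}
fp = function_points(counts, gsc_ratings=[3] * 14)
print(round(fp, 1))  # 160.5 adjusted function points
```

Jeng et al.'s tailoring amounts to redefining the function type categories (and hence the table of weights) to fit the target application domain, so that ordinary programmers can do the counting.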
IV. CURRENT PROPOSED METHODOLOGY DURING THE TENURE OF THE RESEARCH WORK
TABLE I. Steps of the software cost estimation process (Action / Description / Responsibility / Summary)

Action: Gather and Analyze Software Functional & Programmatic Requirements
Description: Analyze and refine software requirements, software architecture, and programmatic constraints.
Responsibility: Software manager, system engineers, and cognizant engineers.
Summary: Identified constraints; methods used to refine requirements; resulting requirements; resulting architecture hierarchy; refined software architecture; refined software functional requirements.

Action: Define the Work Elements and Procurements
Description: Define software work elements and procurements for the specific project.
Responsibility: Software manager, system engineers, and cognizant engineers.
Summary: Project-specific product-based software WBS; procurements; risk list.

Action: Estimate Software Size
Description: Estimate the size of the software in logical Source Lines of Code (SLOC).
Responsibility: Software manager and cognizant engineers.
Summary: Methods used for size estimation; lower-level and total software size estimates in logical SLOC.

Action: Estimate Software Effort
Description: Convert the software size estimate in SLOC to software development effort; use software development effort to derive effort for all work elements.
Responsibility: Software manager, cognizant engineers, and software estimators.
Summary: Methods used to estimate effort for all work elements; lower-level and total software development effort in work-months (WM); total software effort for all work elements of the project WBS in work-months; major assumptions used in effort estimates.

Action: Schedule the Effort
Description: Determine the length of time needed to complete the software effort; establish time periods for work elements of the software project WBS, and milestones.
Responsibility: Software manager, cognizant engineers, and software estimators.
Summary: Schedule for all work elements of the project's software WBS; milestones and review dates; revised estimates and assumptions made.

Action: Calculate the Cost
Description: Estimate the total cost of the software project.
Responsibility: Software manager, cognizant engineers, and software estimators.
Summary: Methods used to estimate the cost; cost of procurements; itemization of cost elements in dollars across all work elements; total cost estimate in dollars.
Software cost estimation is the process of predicting the effort required to develop a software system. The software
cost estimation process comprises a number of iterative steps, which are summarized in Table I.
In total, six steps are needed for the software cost estimation process:
1. Gather and Analyze Software Functional and Programmatic Requirements
2. Define the Work Elements and Procurements
3. Estimate Software Size
4. Estimate Software Effort
5. Schedule the Effort
6. Calculate the Cost
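Steps 3 to 6 of this process (size, then effort, then schedule, then cost) can be chained in a toy sketch. The productivity, staffing and labour-rate figures below are illustrative assumptions, not values from the paper.

```python
# Toy end-to-end walk through steps 3-6: size -> effort -> schedule -> cost.
def estimate_project(sloc, pm_per_ksloc=3.0, team_size=5, rate_per_pm=8000):
    effort_pm = (sloc / 1000) * pm_per_ksloc  # step 4: effort (person-months)
    schedule_months = effort_pm / team_size   # step 5: calendar duration
    cost = effort_pm * rate_per_pm            # step 6: cost in dollars
    return effort_pm, schedule_months, cost

# Step 3 output assumed: a 20,000-SLOC size estimate.
effort, months, cost = estimate_project(sloc=20000)
print(effort, months, cost)  # 60.0 person-months, 12.0 months, $480,000
```

In practice the size-to-effort conversion (step 4) is where a model such as COCOMO, analogy, or the fuzzy approach proposed below is plugged in; the linear conversion here only illustrates the flow of the pipeline.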
The limitations of algorithmic models led to the exploration of non-algorithmic techniques based on soft
computing. New paradigms offer alternatives for estimating the software development effort, in particular
Computational Intelligence (CI), which exploits mechanisms of interaction between humans and processes domain knowledge
with the intention of building intelligent systems (IS). Among IS techniques, fuzzy logic is a convenient tool for software
development effort estimation: fuzzy logic-based cost estimation models are more appropriate when vague and imprecise
information must be accounted for, and fuzzy systems try to mimic the processes of the brain using a rule base. In our
proposed method, the effort and the cost are estimated using fuzzy logic in four steps: 1) fuzzification, producing
trapezoidal fuzzy numbers for the linguistic terms; 2) developing the complexity matrix by producing a new linguistic term;
3) determining the productivity rate and the effort for the new linguistic terms; and 4) defuzzification, determining the crisp
effort required to complete the task. After the effort has been estimated, the cost of the software development process
can be estimated. The implementation of the proposed work will be done in Java.
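The fuzzification and defuzzification steps above can be sketched as follows (in Python, rather than the Java implementation the paper plans): linguistic effort terms are represented as trapezoidal fuzzy numbers, fired rule strengths scale their membership functions, and the centroid of the aggregated curve yields the crisp effort. All term definitions and rule strengths below are illustrative assumptions.

```python
def trapezoid(x, a, b, c, d):
    """Membership of x in the trapezoidal fuzzy number (a, b, c, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def centroid_defuzzify(membership, lo, hi, steps=1000):
    """Step 4 (defuzzification): centroid of the aggregated membership curve,
    approximated by sampling on a uniform grid."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    num = sum(x * membership(x) for x in xs)
    den = sum(membership(x) for x in xs)
    return num / den

# Hypothetical linguistic terms for effort (person-months), step 1.
LOW    = (0, 5, 10, 20)
MEDIUM = (10, 20, 30, 45)
HIGH   = (30, 45, 60, 80)

# Suppose the rule base fires MEDIUM at strength 0.8 and HIGH at 0.3;
# rule strengths scale the consequents and max aggregates them.
def aggregated(x):
    return max(0.8 * trapezoid(x, *MEDIUM), 0.3 * trapezoid(x, *HIGH))

print(round(centroid_defuzzify(aggregated, 0, 80), 1))  # crisp effort estimate
```

Multiplying the crisp effort by a cost rate then gives the project cost, completing the effort-to-cost chain the proposed method describes.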
V. CONCLUSION
The limitations of algorithmic models have led to the exploration of non-algorithmic techniques based on soft
computing. Among these, fuzzy logic is a convenient tool for software development effort estimation, since fuzzy
logic-based cost estimation models cope well with vague and imprecise information. In the proposed method, effort and
cost are estimated with fuzzy logic through fuzzification of the linguistic terms as trapezoidal numbers, construction of the
complexity matrix, determination of the productivity rate, and defuzzification of the resulting effort; the development cost is
then estimated from the effort, and the work will be implemented in Java.
In future work, we aim to perform experiments on the other personality factors (agreeableness and neuroticism), other
emotions of the OCC model, and other parameters of human modeling such as culture and motivational states. Fixed values
for emotions were considered in this study; they initialized the mood only, without any effect during the simulation.
Implementing a computational emotional model on the basis of OCC to compute the intensity of emotions with respect to
events and the TMA's actions during the simulation is another direction for future work. In this study, the roles of team
members did not affect the computation: roles only affect the direction of communications, not their quality. Hence, the
effect of role assignment (e.g. the Belbin model [26]) will be assessed in the computations. The lack of data is one of the
hindrances to our present work; we will test the FECSCE with more data in the future. In this study, project difficulty was
considered constant during project execution. As a project progresses, more developed CASE tools become available for
reuse [27]; hence, project difficulty may change during project progression. We want to consider this aspect as well in
future work.
REFERENCES
1. Barbara A. Kitchenham, Emilia Mendes and Guilherme H. Travassos," Cross versus Within-Company Cost
Estimation Studies: A Systematic Review," IEEE Transactions on Software Engineering, Vol. 33, No. 5, pp. 316-
329, May 2007.
2. Ch. Satyananda Reddy and KVSVN Raju," Improving the Accuracy of Effort Estimation through Fuzzy Set
Representation of Size," Journal of Computer Science, Vol. 5, No.6, pp. 451-455, 2009
3. Barbara Kitchenham, O. Pearl Brereton , David Budgen, Mark Turner, John Bailey and Stephen Linkman,"
Systematic literature reviews in software engineering – A systematic literature review," Information and Software
Technology, Vol. 51, pp.7–15, 2009.
4. Ali Idri, Abdelali Zakrani and Azeddine Zahi," Design of Radial Basis Function Neural Networks for Software
Effort Estimation," IJCSI International Journal of Computer Science Issues, Vol. 7, No. 3, pp. 11-18, July 2010.
5. Stein Grimstad, Magne Jorgensen and Kjetil Molokken-stvold," Software effort estimation terminology: The
tower of Babel," Information and Software Technology, Vol. 48, pp. 302–310, 2006
6. Iman Attarzadeh and Siew Hock Ow," A Novel Algorithmic Cost Estimation Model Based on Soft Computing
Technique," Journal of Computer Science, Vol. 6, No. 2, pp 117-125, 2010.
7. B. Tirimula Rao, B. Sameet, G. Kiran Swathi, K. Vikram Gupta, Ch. Ravi Teja and S.Sumana," A Novel Neural
Network Approach For Software Cost Estimation Using Functional Link Artificial Neural Network (FLANN),"
IJCSNS International Journal of Computer Science and Network Security, Vol. 9, No. 6, pp. 126-131, June 2009.
8. Magne Jorgensen and Martin Shepperd, "A Systematic Review of Software Development Cost Estimation
Studies," IEEE Transactions on Software Engineering, Vol. 33, No. 1, pp. 33-53, Jan. 2007.
9. Ali Afzal Malik and Barry Boehm," Quantifying requirements elaboration to improve early software cost
estimation," Information Sciences, 2009.
10. Surya Prakash Tripathi, Kavita Agarwal and Shyam Kishore Bajpai," A note on cost estimation based on
prime numbers," International Journal of Information Technology and Knowledge Management, Vol. 2, No. 2, pp.
241-245 Dec. 2010.
11. K. Vinay Kumar, V. Ravi, Mahil Carr and N. Raj Kiran, "Software development cost estimation using wavelet
neural networks," The Journal of Systems and Software, 2008.
12. Kirti Seth, Arun Sharma and Ashish Seth," Component Selection Efforts Estimation– a Fuzzy Logic Based
Approach," International Journal of Computer Science and Security, (IJCSS) Vol. 3, No.3, pp. 210-215, 2009.
13. Roheet Bhatnagar, Vandana Bhattacharjee and Mrinal Kanti Ghose," Software Development Effort Estimation–
Neural Network vs. Regression Modeling Approach," International Journal of Engineering Science and
Technology, Vol. 2, No.7, pp.2950-2956, 2010.
ISSN: 2278-7461 www.ijeijournal.com P a g e | 33
6. An Approach towards Developing an Efficient Software Cost…
14. Mohammed Arafa and Mamoun Alqedra, "Early stage cost estimation of building construction projects using
artificial neural networks," Journal of Artificial Intelligence, Vol. 4, No. 1, pp. 63-75, 2011.
15. P. K. Suri, Bharat Bhushan and Ashish Jolly," Time Estimation for Project Management Life Cycle: A Simulation
Approach," IJCSNS International Journal of Computer Science and Network Security, Vol.9, No.5, pp. 211-216,
May 2009.
16. Naveen Aggarwal, Nupur Prakash, and Sanjeev Sofat," Content Management System Effort Estimation Model
based on Object Point Analysis," International Journal of Electrical and Electronics Engineering," Vol.2, No.8,
pp.483-490, 2008
17. P. K. Suri and Bharat Bhushan," Simulator for Time Estimation of Software Development Process," IJCSNS
International Journal of Computer Science and Network Security, Vol.7, No.7, pp 288-292, Jul 2007.
18. Maya Daneva and Roel Wieringa," Cost estimation for cross-organizational ERP projects: research perspectives,"
software quality journal, Vol. 16, No. 3, pp. 459-481, 2008.
19. Jaswinder Kaur, Satwinder Singh, Dr. Karanjeet Singh Kahlon and Pourush Bassi," Neural Network-A Novel
Technique for Software Effort Estimation," International Journal of Computer Theory and Engineering, Vol. 2,
No. 1 Feb. 2010.
20. Bingchiang Jeng, Dowming Yeh, Deron Wang, Shu-Lan Chu and Chia-Mei Chen, "A Specific Effort Estimation
Method Using Function Point," Journal of Information Science and Engineering, Vol. 27, pp. 1363-1376, 2011.
21. Dheerendra Singh and K. S. Dhindsa, "Estimation by Analogy in the Perspective of Educational Web Hypermedia
Applications," International Journal of Information Technology and Knowledge Management, Vol. 4, No. 1, pp.
305-313, Jan 2011.
22. Jingzhou Li, Guenther Ruhe, Ahmed Al-Emran and Michael M. Richter, "A flexible method for software effort
estimation by analogy", Journal Empirical Software Engineering, Vol. 12, No. 1, pp.65-106, Feb 2007.
23. Martin Shepperd and Chris Schofield, "Estimating Software Project Effort Using Analogies", IEEE Transactions
On Software Engineering, Vol. 23, No. 12, pp. 736-743, Nov 1997.
24. Iman Attarzadeh and Siew Hock Ow, "A Novel Algorithmic Cost Estimation Model Based on Soft Computing
Technique", Journal of Computer Science, Vol.6, No. 2, pp. 117-125, 2010.
25. Khaled Mahar and Ahmed El-Deraa, "Software Project Estimation Model Using Function Point Analysis with
CBR Support," International Conference on Applied Computing, Alexandria, Egypt, pp. 508-512, 2006.
26. R. M. Belbin, Team Roles at Work, Butterworth-Heinemann, Oxford, 1993.
27. R. Banker, R. Kauffman and R. Kumar, "An empirical test of object-based output measurement metrics in a
computer aided software engineering (CASE) environment," Journal of Management Information Systems, Vol. 8,
pp. 127-150, 1994.
AUTHOR PROFILE
1) Mr. S. A. Rameshkumar, M.Tech (IT), is pursuing a Ph.D. and is an Assistant Professor in the Department of CSE,
Karpaga Vinayaga College of Engineering, Bharath University, Chennai, Tamil Nadu. He has more than 9 years of teaching
experience, and his areas of specialization are mobile computing, artificial intelligence, operating systems, software
engineering and data structures.
2) Dr. S. Sivasubramanain, M.Tech (CSE), Ph.D (CSE), is a Professor in the Department of IT, Dhanalakshmi College
of Engineering, Chennai, Tamil Nadu. He has more than 12 years of teaching and research experience, and his areas of
specialization are mobile computing, database management systems, computer networks, network security and data
mining.