The document discusses software cost estimation techniques. It begins by introducing software productivity metrics such as lines of code, function points, and object points. It then describes various estimation techniques, highlighting the COCOMO model. COCOMO is an algorithmic cost model that relates project size and other factors to effort. COCOMO II has sub-models for application composition, early design, reuse, and post-architecture estimation. The document concludes by mentioning recent trends in software cost estimation such as neural networks, analogy estimation, fuzzy logic, and Bayesian belief networks.
The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The model uses a basic regression formula, with parameters that are derived from historical project data and current project characteristics.
Basic COCOMO computes software development effort (and cost) as a function of program size, expressed in estimated thousands of source lines of code (SLOC, KLOC).
Effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input.
Effort estimation is essential for many people and different departments in an organization.
2. Agenda
Objectives
Introduction
Software productivity
Estimation techniques
Algorithmic cost modelling
The COCOMO model
Project duration and staffing
Recent Trends
References
3. Objectives
Understanding the fundamentals of software costing and the reasons why the price of the software may not be directly related to its development cost.
Understanding the three metrics that are used for software productivity assessment.
Understanding why different techniques should be used for software estimation.
Understanding the principles of the COCOMO II algorithmic cost estimation model.
4. Introduction
Fundamental estimation questions:
How much effort is required to complete an activity?
How much calendar time is needed to complete an activity?
What is the total cost of each activity?
5. Introduction (Cont.)
Computing the total cost of a software development project involves:
Hardware and software costs.
Travel and training costs.
Effort costs.
For most projects, the dominant cost is the effort cost. Computers that are powerful enough for software development are relatively cheap, and using electronic communication systems such as e-mail, shared web sites, and videoconferencing can significantly reduce the travel required.
6. Introduction (Cont.)
Effort costs are not just the salaries of the software engineers who are involved in the project; they also include:
• Costs of providing, heating, and lighting office space.
• Costs of support staff such as accountants, administrators, system managers, cleaners, and technicians.
• Costs of networking and communications.
• Costs of central facilities such as a library or recreational facilities.
• Costs of social security and employee benefits such as pensions and health insurance.
7. Introduction (Cont.)
Effort costs
Consequently, organizations compute effort costs in terms of overhead costs: they take the total cost of running the organization and divide it by the number of productive staff.
This overhead factor is usually at least twice the software engineer's salary, depending on the size of the organization and its associated overheads. Therefore, if a company pays a software engineer $90,000 per year, its total costs are at least $180,000 per year.
8. Pricing
Classically, price is simply cost plus profit. However, the relationship between the project cost and the price to the customer is not usually so simple.
Software pricing must take into account factors such as market opportunity, cost estimate uncertainty, contractual terms, requirements volatility, and financial health.
Introduction (Cont.)
11. Software productivity
Productivity can be expressed as the ratio of output
to inputs used in the production process, i.e. output
per unit of input.
Essentially, we want to measure useful functionality
produced per time unit.
12. Productivity estimates are usually based on measuring attributes
of the software and dividing this by the total effort required for
development.
There are two types of metric that have been used:
Size-related metrics
These are related to the size of some output from an activity. The
most commonly used size-related metric is lines of delivered
source code.
Function-related metrics
These are related to the overall functionality of the delivered
software. Function points and object points are the best-known
metrics of this type.
Software productivity (Cont.)
13. Lines of code
Lines of source code per programmer-month (LOC/pm) is a widely used software productivity metric. You can compute LOC/pm by counting the total number of lines of source code that are delivered, then dividing the count by the total time, in programmer-months, required to complete the project.
The lines-of-code metric is programming-language dependent: the lower-level the language, the more productive the programmer appears.
Software productivity (Cont.)
14. For example, consider a software project that might be coded in 5,000 lines of assembly code or 1,500 lines of C. Measured in LOC/pm, the assembler programmer appears more productive than the high-level-language programmer, even though both versions deliver the same functionality.
Software productivity (Cont.)
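The apparent-productivity effect described above is easy to demonstrate with a quick calculation. The effort figures below are illustrative assumptions, not data from the source:

```python
def loc_per_pm(delivered_loc, person_months):
    """Productivity in delivered lines of source code per programmer-month."""
    return delivered_loc / person_months

# The same system, implemented twice (effort figures are assumed):
asm_productivity = loc_per_pm(5000, 10)  # assembly version: 500.0 LOC/pm
c_productivity = loc_per_pm(1500, 5)     # C version: 300.0 LOC/pm

# Measured in LOC/pm, the assembly programmer looks more productive,
# even though both versions deliver identical functionality --
# the metric rewards verbosity.
print(asm_productivity, c_productivity)
```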
15. Function points
Productivity is expressed as the number of function points that are implemented per person-month.
Function points can be computed by measuring or estimating the following program features:
External inputs and outputs.
User interactions.
External interfaces.
Files used by the system.
Software productivity (Cont.)
16. Software productivity (Cont.)
Function points
Some inputs and outputs, interactions, and so on are more complex than others and take longer to implement. The function-point metric takes this into account by multiplying the initial function-point estimate by a complexity-weighting factor.
You should assess each of these features for complexity and then assign a weighting factor that varies from 3 (for simple external inputs) to 15 (for complex internal files).
17. Software productivity (Cont.)
Function points
The unadjusted function-point count (UFC) can be computed by multiplying each initial count by the estimated weight and summing all values.
The UFC is then modified by multiplying it by additional complexity factors (degree of distributed processing, amount of reuse, and so on) that relate to the complexity of the system as a whole, producing a final function-point count for the overall system.
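The two-step calculation (weighted initial counts, then a whole-system adjustment) can be sketched as follows. The feature counts, weights, and adjustment factor are illustrative assumptions, not values from the source:

```python
# Hypothetical raw counts for each function-point feature, with
# complexity weights chosen from the 3..15 range mentioned above.
counts_and_weights = {
    "external inputs":     (10, 4),   # (count, weight)
    "external outputs":    (8, 5),
    "user interactions":   (6, 4),
    "external interfaces": (2, 7),
    "files used":          (4, 10),
}

# Unadjusted function-point count: sum of count * weight.
ufc = sum(count * weight for count, weight in counts_and_weights.values())

# Whole-system complexity adjustment (distributed processing, reuse,
# and so on); the value 1.1 here is an assumption.
adjustment = 1.1
fp = ufc * adjustment

print(ufc, fp)  # UFC = 158, adjusted count ~ 173.8
```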
18. Software productivity (Cont.)
Function points
The advantage over size measurement (LOC/pm) is that the complexity of the project is taken into account.
The problem with function-point counting is that the count for a program depends on the estimator: different people have different notions of complexity.
19. Object points
Object points are an alternative to function points. They can be used with languages such as database programming languages.
Despite the name, object points are not the object classes that may be produced when an object-oriented approach is taken to software development.
Software productivity (Cont.)
20. Object points
The number of object points in a program is a weighted estimate of:
The number of separate screens that are displayed. Simple screens count as 1 object point, moderately complex screens count as 2, and very complex screens count as 3 object points.
The number of reports that are produced. For simple reports, count 2 object points; for moderately complex reports, count 5; and for reports that are difficult to produce, count 8 object points.
The number of modules in programming languages such as Java or C++ that must be developed to supplement the database programming code. Each of these modules counts as 10 object points.
Software productivity (Cont.)
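The weighting scheme above can be written directly as a small function; the example system at the end is hypothetical:

```python
def object_points(simple_screens, medium_screens, complex_screens,
                  simple_reports, medium_reports, complex_reports,
                  modules):
    """Weighted object-point count using the weights listed above:
    screens weigh 1/2/3, reports weigh 2/5/8, modules weigh 10 each."""
    screens = simple_screens * 1 + medium_screens * 2 + complex_screens * 3
    reports = simple_reports * 2 + medium_reports * 5 + complex_reports * 8
    return screens + reports + modules * 10

# Hypothetical system: 4 simple and 2 very complex screens, 3 moderately
# complex reports, and 2 supplementary Java/C++ modules.
print(object_points(4, 0, 2, 0, 3, 0, 2))  # 4 + 6 + 15 + 20 = 45
```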
21. Object points
The advantage of object points over function points is that
they are easier to estimate from a high-level software
specification.
Object points are only concerned with screens, reports and
modules. They are not concerned with implementation
details, and the complexity factor estimation is much
simpler.
Software productivity (Cont.)
25. Algorithmic cost modelling:
A model is developed using historical cost information
that relates some software metric (like its size) to the
project cost. An estimate is made of that metric and the
model predicts the effort required.
Estimation techniques (Cont.)
26. Expert judgement:
Several experts on the proposed software development
techniques and the application domain are consulted. They
each estimate the project cost. These estimates are
compared and discussed. The estimation process iterates
until an agreed estimate is reached.
Estimation techniques (Cont.)
27. Estimation by analogy:
This technique is applicable when other projects in the
same application domain have been completed. The cost
of a new project is estimated by analogy with these
completed projects.
Estimation techniques (Cont.)
28. Parkinson's Law:
Parkinson’s Law states that work expands to fill the time
available. The cost is determined by available resources
rather than by objective assessment. If the software has to
be delivered in 12 months and 5 people are available, the
effort required is estimated to be 60 person-months.
Estimation techniques (Cont.)
29. Pricing to win:
The software cost is estimated to be whatever the customer
has available to spend on the project. The estimated effort
depends on the customer’s budget and not on the software
functionality.
Estimation techniques (Cont.)
30. Algorithmic cost modelling
Uses a mathematical formula to predict project costs
based on estimates of the project size, the number of
software engineers, and other process and product
factors.
An algorithmic cost model can be built by analysing the
costs and attributes of completed projects and finding
the closest fit formula to actual experience.
31. Algorithmic cost modelling
The general form of an algorithmic cost model is:
Effort = A × Size^B × M
Effort is expressed in person-months.
A is a constant factor that depends on local organizational practices and the type of software being developed.
Size may be either an assessment of the code size of the software or a functionality estimate.
B usually lies between 1 and 1.5 and is associated with the size estimate.
M is a multiplier made by combining process, product, and development attributes.
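A minimal sketch of the generic model, Effort = A × Size^B × M; the parameter values in the example are illustrative assumptions, not calibrated data:

```python
def effort_pm(a, size_kloc, b, m):
    """Generic algorithmic cost model: effort (person-months) = A * Size^B * M."""
    return a * (size_kloc ** b) * m

# Illustrative values: A = 2.5, a 50 KLOC system, B = 1.2,
# and a nominal multiplier M = 1.0.
pm = effort_pm(2.5, 50, 1.2, 1.0)
print(round(pm, 1))
```

Note the exponent B > 1 models a diseconomy of scale: doubling the size more than doubles the effort.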
33. Algorithmic models difficulties
1. It is often difficult to estimate Size at an early stage,
when only a specification is available.
2. The estimates of the factors contributing to B and M
are subjective. Estimates vary from one person to
another, depending on their background and experience
with the type of system that is being developed.
34. The accuracy of the estimates
The accuracy of an estimate depends on the system information that is available.
If the initial estimate of the effort required is x months of effort, the actual effort may range from 0.25x to 4x when the system is first proposed.
Fig1: Estimate uncertainty
35. The COCOMO model is an empirical model that was
derived by collecting data from a large number of
software projects.
These data were analyzed to discover formulae that
were the best fit to the observations.
These formulae link the size of the system and product,
project and team factors to the effort to develop the
system.
The COCOMO model
36. It is well documented, available in the public domain.
It has been widely used and evaluated in a range of
organisations.
It has a long pedigree, from its first instantiation in 1981.
Reasons to choose COCOMO model
37. COCOMO I was a three-level model in which the levels corresponded to the detail of the analysis of the cost estimate.
The first level (basic) provided an initial rough estimate.
The second level modified this using a number of project and process multipliers.
The third and most detailed level produced estimates for different phases of the project.
It supported the waterfall model of development.
COCOMO I
38. There have been radical changes to software development since this initial version was proposed (e.g., prototyping and incremental development).
To take these changes into account, the COCOMO II model recognises different approaches to software development.
It supports a spiral model of development and embeds several sub-models that produce increasingly detailed estimates:
An application-composition model
An early design model
A reuse model
A post-architecture model
COCOMO II
40. The application-composition model was introduced to support the estimation of effort required for prototyping projects and for projects where the software is developed by composing existing components. The estimate is:
PM = (NAP × (1 − %reuse/100)) / PROD
PM is the effort estimate in person-months.
NAP is the total number of application points in the delivered system.
%reuse is an estimate of the amount of reused code in the development.
PROD is the object-point productivity.
An application-composition model
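The application-composition estimate, PM = (NAP × (1 − %reuse/100)) / PROD, can be sketched directly; the input values in the example are illustrative assumptions:

```python
def app_composition_pm(nap, pct_reuse, prod):
    """COCOMO II application-composition estimate:
    PM = (NAP * (1 - %reuse/100)) / PROD."""
    return (nap * (1 - pct_reuse / 100)) / prod

# Illustrative inputs: 80 application points, 20% reuse, and a
# productivity of 16 object points per month (assumed values).
print(app_composition_pm(80, 20, 16))  # roughly 4 person-months
```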
41. The early design model is used during the early stages of the system design, after the requirements have been established. It uses the standard formula:
PM = A × Size^B × M
A should be 2.94 (Boehm).
The size of the system is expressed in KSLOC, which is the number of thousands of lines of source code.
B reflects the increased effort required as the size of the project increases.
M is based on a simplified set of seven project and process characteristics that influence the estimate.
An early design model
42. The characteristics used in the early design model are:
Personnel capability (PERS)
Product reliability and complexity (RCPX)
Reuse required (RUSE)
Platform difficulty (PDIF)
Personnel experience (PREX)
Schedule (SCED)
Support facilities (FCIL)
An early design model (2)
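A sketch of the early design estimate: M is taken as the product of the seven cost-driver multipliers listed above, and PM = 2.94 × Size^B × M. The driver values and size below are illustrative assumptions:

```python
import math

def early_design_pm(size_kloc, b, multipliers):
    """Early design estimate: PM = 2.94 * Size^B * M, where M is the
    product of the seven cost-driver multipliers
    (PERS, RCPX, RUSE, PDIF, PREX, SCED, FCIL)."""
    m = math.prod(multipliers)
    return 2.94 * (size_kloc ** b) * m

# Illustrative project: 40 KSLOC, B = 1.1, all seven drivers nominal (1.0).
pm = early_design_pm(40, 1.1, [1.0] * 7)
print(round(pm, 1))
```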
44. The reuse model is used to compute the effort required to integrate reusable components and/or program code that is automatically generated by design or program translation tools.
For code that is automatically generated, the model estimates the number of person-months required to integrate this code:
PM = (ASLOC × AT/100) / ATPROD
AT is the percentage of adapted code that is automatically generated.
ATPROD is the productivity of engineers in integrating such code (2,400 source statements per month).
Example: if there is a total of 20,000 lines of white-box reused code in a system and 30% of this is automatically generated, then the effort required to integrate this generated code is: (20,000 × 30/100) / 2,400 = 2.5 person-months.
A reuse model
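The worked example above can be checked directly by applying PM = (ASLOC × AT/100) / ATPROD with the numbers given in the text:

```python
def generated_code_pm(total_reused_sloc, at_percent, atprod=2400):
    """Reuse-model effort for automatically generated code:
    PM = (ASLOC * AT/100) / ATPROD."""
    return (total_reused_sloc * at_percent / 100) / atprod

# The worked example above: 20,000 lines of white-box reused code,
# of which 30% is automatically generated, with ATPROD = 2400.
print(generated_code_pm(20_000, 30))  # 2.5 person-months
```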
45. Once the system architecture has been designed, a more accurate estimate of the software size can be made at this stage in the estimation process.
The estimate of the code size in the post-architecture model is computed using three components:
1. An estimate of the total number of lines of new code to be developed.
2. An estimate of the equivalent number of source lines of code (ESLOC), calculated using the reuse model.
3. An estimate of the number of lines of code that have to be modified because of changes to the requirements.
A post-architecture model
46. The relationship between the number of staff working on a project, the total effort required, and the development time is not linear. (Why?)
The time computation formula is the same for all COCOMO levels:
TDEV = 3 × PM^(0.33 + 0.2 × (B − 1.01))
PM is the effort computation.
B is the exponent computed as discussed above (B is 1 for the early prototyping model).
Project duration and staffing
47. Example: assume that 60 months of effort are estimated to develop a software system, and that the value of the exponent B is 1.17. From the schedule equation, the time required to complete the project is:
TDEV = 3 × 60^(0.33 + 0.2 × 0.16) = 3 × 60^0.36 ≈ 13 months
Project duration and staffing
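The schedule calculation can be reproduced with the COCOMO II schedule equation, TDEV = 3 × PM^(0.33 + 0.2 × (B − 1.01)):

```python
def tdev_months(pm, b):
    """COCOMO II schedule equation: TDEV = 3 * PM^(0.33 + 0.2 * (B - 1.01))."""
    return 3 * pm ** (0.33 + 0.2 * (b - 1.01))

# The worked example above: 60 person-months of effort, B = 1.17.
print(round(tdev_months(60, 1.17), 1))  # about 13 months
```

Note that the result is the calendar time regardless of team size: 60 person-months in roughly 13 months implies an average team of about five people.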
48. Learning-oriented techniques
These use prior and current knowledge to develop a software estimation model. Neural networks and analogy estimation are examples.
A neural network is trained with a series of inputs and the correct output from the training data so as to minimize the prediction error.
Analogy-based estimation is simple: given a new project to estimate, compare it with historical projects to find the nearest one, and use it to estimate the development cost of the new project.
RECENT TRENDS
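Analogy-based estimation, as described above, amounts to a nearest-neighbour lookup over historical projects. The project data and features below are illustrative assumptions:

```python
def nearest_analogy(new_project, history):
    """Return the historical project closest to the new one,
    comparing feature vectors by Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(history, key=lambda p: dist(p["features"], new_project))

# Features here are (size in KLOC, team experience 1-5); both the
# projects and their recorded efforts are hypothetical.
history = [
    {"name": "billing-v1", "features": (20, 3), "effort_pm": 40},
    {"name": "crm-v2",     "features": (55, 4), "effort_pm": 95},
]
best = nearest_analogy((25, 3), history)
print(best["name"], best["effort_pm"])  # billing-v1 40
```

A real tool would normalise the features and blend the efforts of the k nearest projects rather than copying a single neighbour.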
49. RECENT TRENDS
Fuzzy-based software cost estimation
The effort is estimated using different membership functions, such as the Triangular, Gaussian, and Trapezoidal membership functions.
50. Meta-heuristic and genetic algorithms
These are used for tuning the parameters of the COCOMO model.
Studies report that such tuned models give better results than the standard COCOMO model.
RECENT TRENDS
51. Bayesian belief networks (BBNs)
BBNs are compact networks of probabilities that capture the probabilistic relationships between variables, as well as historical information about those relationships.
BBNs are very effective for modeling situations where some information is already known and incoming data is uncertain or partially unavailable.
BBNs combine a visual representation with a strong mathematical background.
However, a BBN may not cover all the cases that can appear.
RECENT TRENDS
52. References
1. Mukherjee, S., Bhattacharya, B., & Mandal, S. "A Survey on Metrics, Models & Tools of Software Cost Estimation."
2. Waghmode, Swati, and Kishor Kolhe. "A Novel Way of Cost Estimation in Software Project Development Based on Clustering Techniques."
3. Bibi, S., I. Stamelos, and L. Angelis. "Bayesian Belief Networks as a Software Productivity Estimation Tool." 1st Balkan Conference in Informatics, Thessaloniki, Greece, 2003.
4. Maleki, I., Ebrahimi, L., Jodati, S., & Ramesh, I. (2014). "Analysis of Software Cost Estimation Using Fuzzy Logic." International Journal in Foundations of Computer Science & Technology (IJFCST), 4(3), 27-41.
5. Milos Hauskrecht. Bayesian Belief Networks lecture, X4-8845, University of Pittsburgh.
6. Hjalmarsson, Alexander, Matus Korman, and Robert Lagerström. "Software Migration Project Cost Estimation Using COCOMO II and Enterprise Architecture Modeling." PoEM (Short Papers), 2013.
7. Odeh, Ayman Hussein. "Software Effort and Cost Estimation Using Software Requirement Specification." Science 2: 3.
8. Ian Sommerville. Software Engineering, 8th ed. Addison-Wesley Longman Publishing Co., England, 2007.
9. Mizbah Fatima. "Fuzzy Based Software Cost Estimation Methods: A Comparative Study." International Journal for Innovative Research in Science, vol. 1, Dec. 2014.