2. :BASIS OF SOFTWARE ESTIMATING:
The need for historical data:
Nearly all estimating methods need information about how
projects have been implemented in the past.
Care is needed when applying past performance to new
projects, because of possible differences in environmental
factors such as the programming languages used and the
experience of staff.
Parameters to be tested
The project manager needs to estimate two project
parameters for carrying out project planning: 1) effort and
2) duration
Duration is usually measured in months. Work-month (wm)
is a popular unit for effort measurement. Person-month
(pm) is also frequently used to mean the same as the
work-month.
3. :BASIS OF SOFTWARE ESTIMATING:
Measure of work: The measure of work involved in completing
a project is also called the size of the project.
Work itself can be characterized by the cost of accomplishing
the project and the time over which it is to be completed.
It is standard practice to first estimate the project size, and
then, by using it, compute the effort and the time taken to
develop the software.
The project size is a measure of the problem
complexity in terms of the effort and time taken to
develop a product.
4. :BASIS OF SOFTWARE ESTIMATING:
Two metrics are currently used to measure size.
These are:
I) Source Lines of Code (SLOC): The SLOC measure suffers
from various disadvantages, which are to a great extent
corrected in the FP measure.
Disadvantages of SLOC:
No precise definition
Difficult to estimate at start of a project
Only a code measure
Programmer-dependent
Does not consider code complexity
II) Function Point (FP): a measure of programming
productivity.
5. :SOFTWARE EFFORT ESTIMATING TECHNIQUES:
Barry Boehm, in his classic work on software effort models,
identified the main ways of deriving estimates of software
development effort as:
Algorithmic models: which use 'effort drivers'
representing characteristics of the target system and the
implementation environment to predict effort.
Expert Judgment: based on the advice of
knowledgeable staff.
Analogy: where a similar, completed, project is
identified and its actual effort is used as a basis for the
new project.
Parkinson: which identifies the staff effort available to
do a project and uses that as the 'estimate'.
6. :SOFTWARE EFFORT ESTIMATING TECHNIQUES:
Price to win: where the 'estimate' is a figure that
appears to be sufficiently low to win a contract.
Top-down: where an overall estimate is formulated for
the whole project and is then broken down into the effort
required for component tasks.
Bottom-up: where component tasks are identified and
sized and these individual estimates are aggregated.
7. : BOTTOM-UP ESTIMATING:
With the bottom-up approach, the estimator breaks the
project into its component tasks and then estimates how
much effort will be required to carry out each task.
With a large project, the process of breaking down into
tasks would be a repetitive one: each task would be
analysed into its component sub-tasks and these would be
further analysed.
This breakdown produces a Work Breakdown Structure (WBS).
The bottom-up part comes in adding up the calculated
effort for each activity to get an overall estimate.
The bottom-up approach is most appropriate at the later,
more detailed, stages of project planning.
8. : BOTTOM-UP ESTIMATING:
If this method is used early on in the project cycle then the
estimator will have to make some assumptions about the
characteristics of the final system, for example the number
and size of software modules.
Where a project is completely novel or there is no historical
data available, the estimator would be well advised to use
the bottom-up approach.
A Procedure Code-Oriented Approach:
A) Predict the number and type of software modules in
the final system
B) Estimate the SLOC[Source Lines of Code] of each
identified module
C) Estimate the work content, taking into account
complexity and technical difficulty
D) Calculate the work-days effort
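The four steps above can be sketched in code. All module names, SLOC figures, complexity multipliers and the productivity rate below are illustrative assumptions, not values from the text:

```python
# Sketch of the procedure code-oriented approach (steps A-D).
# All figures here are assumed for illustration only.

# A) Predicted modules with B) an estimated SLOC for each and
# C) a complexity multiplier for technical difficulty (1.0 = average).
modules = [
    ("user_interface", 2000, 0.8),   # simple, well-understood
    ("business_logic", 3500, 1.2),   # above-average complexity
    ("database_layer", 1500, 1.0),   # average
]

WORK_DAYS_PER_KLOC = 40  # assumed productivity rate

# D) Calculate the work-days effort per module and in total.
total_effort = 0.0
for name, sloc, complexity in modules:
    effort = (sloc / 1000) * WORK_DAYS_PER_KLOC * complexity
    total_effort += effort
    print(f"{name}: {effort:.1f} work-days")

print(f"Total estimate: {total_effort:.1f} work-days")
```

The aggregation in the final loop is the "bottom-up part": individual task estimates are summed to give the overall figure.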
9. :TOP DOWN APPROACH AND PARAMETRIC
MODELS:
The top-down approach is normally associated with
parametric (or algorithmic) models.
These may be explained using the analogy of estimating
the cost of rebuilding a house. This would be of practical
concern to a house-owner who needs sufficient insurance
cover to allow for rebuilding the property if it were
destroyed.
Insurance companies, however, produce convenient tables
where the house-owner can find an estimate of rebuilding
costs based on such parameters as the number of storeys
and the floor space of a house. This is a simple
parametric model.
10. :TOP DOWN APPROACH AND PARAMETRIC
MODELS:
The effort needed to implement a project will be related
mainly to variables associated with characteristics of
the final system. The form of the parametric model will
normally be one or more formulae of the form:
Effort = (system size) × (productivity rate)
For example, system size might be measured in 'thousands
of lines of code' (KLOC) and the productivity rate might be
40 days per KLOC. The values to be used will often be
matters of subjective judgement.
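As a minimal illustration of this formula, using the 40 days per KLOC rate quoted above and an assumed system size of 6 KLOC:

```python
# Parametric estimate: Effort = (system size) x (productivity rate).
# The 6 KLOC system size is an assumed figure for illustration.
system_size_kloc = 6      # thousands of lines of code
productivity_rate = 40    # work-days per KLOC (rate quoted in the text)
effort = system_size_kloc * productivity_rate
print(effort)  # 240 work-days
```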
11. :TOP DOWN APPROACH AND PARAMETRIC
MODELS:
A model to forecast software development effort
therefore has two key components.
i) The first is a method of assessing the amount of work
needed.
ii) The second assesses the rate of work at which the
task can be done.
Some parametric models, such as that implied by function
points, are focused on system or task size, while others,
such as COCOMO [COnstructive COst MOdel], are
more concerned with productivity factors.
Having calculated the overall effort required, the problem is
then to allocate proportions of that effort to the various
activities within that project.
12. :ALBRECHT FUNCTION POINT ANALYSIS:
This is a top-down method that was devised by Allan
Albrecht when he worked for IBM.
Albrecht was investigating programming productivity and
needed some way to quantify the functional size of
programs independently of the programming languages in
which they had been coded. He developed the idea of
function points (FPs).
The basis of function point analysis is that computer-based
information systems comprise five major components, or
external user types in Albrecht's terminology, that are of
benefit to the users:
1) External input types- are input transactions that
update internal computer files.
13. :ALBRECHT FUNCTION POINT ANALYSIS:
2) External output types- are transactions where data is
output to the user. Typically these would be printed
reports, since screen displays would come under
external inquiry types.
3) External inquiry types- are transactions initiated by
the user which provide information but do not update the
internal files.
4) Logical internal file types- are the standing files used
by the system. Here 'file' refers to a group of data that
is usually accessed together.
5) External interface file types- allow for output and
input that may pass to and from other computer
systems. They may also be files shared between
applications.
14. APPROACH FOR FUNCTION POINT
ANALYSIS
• Identify each external user type in your application.
• Determine the complexity of each user type
(high, average or low)
• FP score for each external user type = the weight for
its complexity multiplied by the count of components of
that type with that complexity.
• FP count = summation of all the FP scores.
• FP count indicates the size of the information
processing.
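The counting steps above can be sketched as follows. The weights are the commonly published IFPUG weights per external user type and complexity level; the counts of components are illustrative assumptions:

```python
# Commonly published IFPUG weights per external user type and
# complexity level (check the IFPUG manual for authoritative values).
WEIGHTS = {
    "external_input":   {"low": 3, "average": 4,  "high": 6},
    "external_output":  {"low": 4, "average": 5,  "high": 7},
    "external_inquiry": {"low": 3, "average": 4,  "high": 6},
    "internal_file":    {"low": 7, "average": 10, "high": 15},
    "interface_file":   {"low": 5, "average": 7,  "high": 10},
}

# Counts of components of each type at each complexity (assumed).
counts = {
    "external_input":   {"low": 3, "average": 2, "high": 0},
    "external_output":  {"low": 1, "average": 2, "high": 1},
    "external_inquiry": {"low": 2, "average": 0, "high": 0},
    "internal_file":    {"low": 1, "average": 1, "high": 0},
    "interface_file":   {"low": 0, "average": 1, "high": 0},
}

# FP count = sum over all types and complexities of weight x count.
fp_count = sum(
    WEIGHTS[utype][cx] * n
    for utype, by_cx in counts.items()
    for cx, n in by_cx.items()
)
print(fp_count)
```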
15. USER TYPE COMPLEXITY
• For the original function points defined by Albrecht, the
complexity of the components (external user
types) was intuitively decided.
• There is now a group called the International FP User
Group (IFPUG) which has published rules governing how
complexity is assessed.
• The Albrecht FP method is often referred to as the IFPUG
FP method.
17. IFPUG FILE TYPE COMPLEXITY
(CONT’D)
• The boundaries in this table determine the complexity
level for the logical internal files.
• There are similar tables for external inputs and outputs.
• Record Type is also called Record Element Type (RET)
• Data Type is also called Data Element Type (DET)
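A sketch of how such a table is applied to a logical internal file is shown below. The RET/DET boundary values follow the commonly published IFPUG table for logical internal files, so treat them as assumptions to be checked against the current IFPUG Counting Practices Manual:

```python
def ilf_complexity(record_types: int, data_types: int) -> str:
    """Return 'low', 'average' or 'high' for a logical internal file.

    Boundary values follow the commonly published IFPUG table
    for logical internal files (assumed, not from the text).
    """
    # Band the RET count: 1 / 2-5 / 6 or more.
    ret_band = 0 if record_types == 1 else (1 if record_types <= 5 else 2)
    # Band the DET count: 1-19 / 20-50 / 51 or more.
    det_band = 0 if data_types <= 19 else (1 if data_types <= 50 else 2)
    table = [
        ["low",     "low",     "average"],  # 1 RET
        ["low",     "average", "high"],     # 2-5 RETs
        ["average", "high",    "high"],     # 6+ RETs
    ]
    return table[ret_band][det_band]

print(ilf_complexity(3, 25))  # → average
```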
18. COSMIC FULL FUNCTION POINTS
COSMIC FFP stands for Common
Software Measurement International
Consortium Full Function Points.
This approach was developed to measure
the sizes of real-time or embedded systems.
In the COSMIC method, the system
architecture is decomposed into a hierarchy
of software layers.
19. COSMIC FULL FUNCTION POINTS
(CONT’D)
The method defines four types of data movement that a
software component can deal with:
Entries (E): effected by sub-processes that move a data
group into the software component in question from a
user outside its boundary.
Exits (X): effected by sub-processes that move a data
group from the software component to a user outside its
boundary.
Reads (R): data movements that move data groups from
persistent storage (e.g. a database) to the software
component.
Writes (W): data movements that move data groups from
the software component to persistent storage.
21. COSMIC FULL FUNCTION POINTS
(CONT’D)
The overall FFP count is derived by simply summing the
counts of the four types of data movement.
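A minimal sketch of this count, with assumed numbers of data movements for one software component:

```python
# COSMIC size: sum of Entry, Exit, Read and Write data movements.
# The counts below are illustrative assumptions.
movements = {"entries": 4, "exits": 3, "reads": 5, "writes": 2}
cfp = sum(movements.values())
print(cfp)  # 14
```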
The method doesn’t take account of any
processing of the data groups once they are
moved into the software component.
It is not recommended for systems that
include complex mathematical algorithms.