SlideShare a Scribd company logo
Author's note 2016 : This article was originally written in 1999. A lot has changed since then but it still
has some relevancy, particularly the 'Estimating Human Effort' section which is now described as 'T
Shirting' in the agile word.
PROJECT ESTIMATION
Introduction
Estimating the scope and profile of a software development project is an imprecise but essential science.
It is necessary whether the work is to be done internally for your own users or whether it is to be done
under contract for an external organisation. The two main areas for ‘estimation’ are (a) the human effort
required and (b) the technological environment required (for both development and deployment).
The initial input for this process must be the results of an initial analysis exercise – a feasibility study of
some sort or another. This study will be based on sponsor interviews to capture and prioritise
requirements and then these requirements documented to a sufficient level to make estimation
meaningful. The level of detail provided about the functionality required is fundamental to the accuracy
of the estimates. Whether this information is contained in an Invitation To Tender (ITT) from another
organisation or an internal Feasibility Study Report it must contain sufficient information. A crude logical
data model is required and each function must have a couple of paragraphs of description with reference
to the main entities used. If this level of detail is not forthcoming then either, point this fact out and don’t
tender for the project, or offer to do the feasibility work to the appropriate level.
Estimating human effort
There are many ways to estimate the effort required. Some complex mathematical models exist, Function
Point Analysis being one. It is best not to rely on any one method but to ask three or more people to each
individually come up with their own figures using their own preferred method and then compare and try
and rationalise the differences.
Here is one method. It does rely on the estimator’s experience at doing the type of work proposed and
knowledge of the capabilities of the staff that will be doing the work. This information is based on
experience with development using the Oracle database and toolsets but should be equally applicable to
any other environment with suitable adjustments to the metrics.
The function descriptions are used to assign an easy/medium/hard/complex rating to each based on the
estimator’s experience with similar work. An easy function would be, for example, a simple data entry
screen based on one entity with no special business rules. A complex function would be, for example, an
accounting year end close down that impacted many entities.
The next main factor is assigning effort figures to the above ratings. This is the effort that would be
required to build and unit test modules to support functions of this type. These figures are going to
depend on the profile and abilities of the staff involved and have to be based on experienced guesswork or
preferably metrics gathered from previous projects. For a team made up of motivated staff with average
experience (some expert, some novice) the following figures would probably be appropriate:
Type Effort
Easy 1 day,
Project Estimation Page 1
Medium 3 days,
Hard 7 days,
Complex 15 days
Some would argue that the ‘easy’ figure should be lower but again this depends on the type of
environment that the development is being performed in. If any sort of formal configuration management
and development documentation (e.g. Unit Test records) is required then one day is a minimum safe
figure to use for any activity.
Once these figures have been assigned the total ‘build’ effort is calculated. If we imagine a project that
has the following mix of functions:
Type Number to build Effort
Easy 25 25 days
Medium 16 48 days
Hard 6 42 days
Complex 2 30 days
Total 145 days
This figure is then used as the foundation for calculating the other activities required during software
development. Again these activities and the metrics involved are going to be dependent on the type of
organisation that will be doing the work and ought to be based on local metrics. The profile used
throughout these example is intended to be one where a structured and standards based approach to
software development is used pragmatically. Formal project management, analysis and design are
performed but not taken to extremes – SSADM modelling techniques might be used but the full
methodology would not. This list can be changed to fit the activities of any sort of development
methodology.
For this example, based on the above imaginary project profile, the following metrics are used:
To perform the required Systems Analysis allow 20% of the ‘build’ figure. This level of analysis
would allow the function and entity models to be confirmed, consolidated and formally documented (in a
CASE tool preferably).
To perform the required Physical Design allow 25% of the ‘build’ figure. This level of design would
allow for pseudo coding the business logic required and allow co-ordinated high level design.
Project Estimation Page 2
To perform the required Systems Testing allow 40% of the ‘build’ figure. This level of testing would
allow for condition lists and test scripts to be produced with formally recorded test cycles.
To provide development ‘team leader’ type activities allow 10% of all the above (Build, Analysis,
Design and Systems Test). This activity includes, for example, co-ordinating the application of standards,
ensuring adherence to high-level design and re-usability, integrating developed code.
Finally to provide project management allow 15 % of all the above (Build, Analysis, Design, Systems
Test and Team Leader). This would allow a part or full-time project manager, depending on team size
who, for example, produces project plans (GANTT charts), tracks progress against plan, produces
summary reports for senior management and attends management meetings.
Following the above:
Activity Effort
Build 145
Systems Analysis (20% build) 29
Systems Design (25% build) 36
Systems Test (40% build) 58
Total development 268
Team leading (10% of above) 27
Total development + team leading 295
Project management (15% of above) 44
Grand total 339
If this was to be used for budgeting purposes either a composite rate of could be used or the effort graded
by type (A systems analyst costs £750 per day, a junior programmer £450 etc.). Using a composite rate,
for example, of £600 per day would make the above project cost £203,400.
If the work was for a fixed price tender then a risk factor should be introduced at a rate that depended on
the feel for the project. Will the team be made up of the best available or will it contain many novice
members or people being cross trained from other skills? Is the client/user one with who the development
team is likely to have good relations or will they be awkward and argue about the scope and the
Project Estimation Page 3
specification? Is the technology proven or leading edge? A figure of 20% is not uncommon, this would
bring our effort figure up to 407 days and our composite rate cost up to £244,200.
The Technological Environment
Many users specify what environment they wish their application software use from the outset. Equally
many development organisations have their own preferences, after all it is impossible to have ‘on tap’
skills in every modern development environment. The appropriate make-up of operating system,
development toolset, RDBMS and hardware used must be confirmed at this early stage.
Sometimes the sponsors will attempt to cut costs by requesting that the project be done using, for
example, Access on a PC when a higher end environment like, for example, Oracle on Unix is required.
Similarly development outfits sometimes specify a ‘Rolls Royce’ solution when something far simpler
will suffice.
Pointers to suggest that a robust, higher end, environment is required would be:
1. System availability – the users want a 24*7 system
2. Regular Backups are required with (no downtime)
3. A multi-user environment is required
4. Capacity for future expansion
5. Remote access will be needed
6. The system is described as mission critical
Feasibility studies and ITTs often specify the eventual hardware platform to be used. This is another area
where the science is at best ‘imprecise’! RDBMS and toolset vendors are notoriously bad at providing
mathematical formulae to estimate all hardware requirements. CASE tools will usually provide disk
sizing data from the entity model and RAM requirements can usually be calculated basing the figures on
the number of users expected but processor power is a real bug-bear. The standard response from one
major RDBMS vendor, who shall remain nameless, is ‘you need one of our consultants to come on site
and do this work for you’!
The attributes of the hardware that will be required to support the eventual software application can be
broken down into three areas. Processor (CPU) power, disk size and RAM size.
(a) Processor Power
This is the area where estimation is the crudest. The best way is to work out the CPU requirements after
the software has been built by measuring them on a test machine using the standard operating system and
database analysis tools. However a very rough ‘stab in the dark’ can made in advance. Processor Power
can be estimated by analysing the functions that are likely to have the worst performance and are going to
be used by the greatest number of concurrent users. The number of Insert, Update, Delete and Select
operations performed by each function against the database entities is multiplied against the following
database access metrics, each:
• Insert requires 4 accesses
• Update requires 2 accesses
• Delete requires 3 accesses
• Select requires 1 access
These represent the work that is incurred manipulating indexes etc.
If we imagine a function with the following properties:
Project Estimation Page 4
2 Inserts = 8 accesses
4 Updates = 8 accesses
1 Delete = 3 accesses
5 Selects = 5 accesses
Total = 24 accesses
This figure must then be multiplied by whatever metric is specified for the database per access on the
relevant hardware platform. For example, if MIPS are being used on a UNIX RISC chip platform with an
Oracle version 7 database then this figure is usually of the order of 0.3 million instructions per access.
This example would therefore require 7.2 million instructions to carry out. To get the crude processing
power required this figure must be multiplied by the expected number of concurrent users and the divided
by the performance required in seconds. If we expected thirty concurrent users which a one second
response time this example results in a total of
(7.2 * 30) / 1 = 216 million instructions per second (MIPS).
Allowing for any other concurrent activity this can then be used to estimate the size of processor required.
For example, a Sun Enterprise 3500 server is rated at 400 million instructions per second (MIPS) and a
Sun Starefire Enterprise 10,000 with a 400 MHz CPU is rated at about 3000 MIPS.
Unfortunately advances in hardware architecture and the changing flavours of benchmarking test makes
this even more complicated. A current popular benchmark is the TPC rating. TPC is supposed to give a
more accurate measure of machine to database performance. Equating this measure to database accesses
is very difficult - depending on the size and architecture of the machine 1 MIPS equates to a TPC-C rating
of anything between 4 and 40. As these benchmarks mature, however, more hard and fast rules will
appear.
This is a very crude measure with many obvious flaws (not least the ability to predict eventual user
activity) but it does allow ‘ball-park’ sizing to take place. As a rule of thumb it is always best to go for a
larger rather than a smaller machine. It will nearly always be cheaper to buy a bigger machine up front
than to pay staff to tune code and queries.
(b) Disk Space
Disk Space requirements fall into two main areas. The operating system files and the size of the relational
database. The best way of sizing the database is to put the entity model and volumetrics into the
appropriate CASE tool (Oracle Designer in the case of the Oracle RDBMS) and allow that to calculate the
figures. Alternatively this can be done by hand using the equations found in the appropriate Database
Administrators reference manual, but this can be laborious. Also allow space for any temporary storage,
for example, the input files of a data take on exercise from an old system.
The operating system files will be less dynamic but there can be many different factors to consider. For
example, Application software executables, report files, temporary files, database exports. Disk space is,
however, cheap.
(c) RAM
The Random Access Memory (RAM) requirement is estimated by making allowances for the RDBMS
software, the operating system and the peak number of concurrent users. The figures vary between
versions and platforms but they could, for example, work out as follows:
Project Estimation Page 5
Unix Operating System = 10 MB
Oracle RDBMS = 120 MB
40 Users = 80 MB (2 MB per user)
Total = 210 MB
The actual figures must be checked for the eventual platform used. Other factors, for example, Web
‘served’ software may also have to be considered. The figures, per user, for Oracle Developer Forms
Server can vary between 2 and 10 MB depending on the version of Developer and the platform used.
Conclusion
Hopefully the above will help but there are too many unknowns for any hard and fast rules or concrete
assurance that what is predicted will eventually be. The estimates obtained are used best for planning with
regular feedback and re-estimation as the project progresses. At regular points throughout the project
lifecycle revisit all the estimates and assumptions and re-calculate them using the latest information
available. The realities of life are:
i. it is impossible to predict the future 100%
ii. you will have a better idea of what to expect if you try to predict the future rather than if you don’t
bother at all
iii.your predictions will be more accurate if they apply to the short term rather than to the long term.
Michael Wigley
Project Estimation Page 6

More Related Content

What's hot

Are Function Points Still Relevant?
Are Function Points Still Relevant?Are Function Points Still Relevant?
Are Function Points Still Relevant?
DCG Software Value
 
Process Improvement in Software Engineering SE25
Process Improvement in Software Engineering SE25Process Improvement in Software Engineering SE25
Process Improvement in Software Engineering SE25koolkampus
 
Cost estimation
Cost estimationCost estimation
Cost estimation
Nameirakpam Sundari
 
Slides chapters 21-23
Slides chapters 21-23Slides chapters 21-23
Slides chapters 21-23
Priyanka Shetty
 
Modern Software Productivity Measurement: The Pragmatic Guide
Modern Software Productivity Measurement: The Pragmatic GuideModern Software Productivity Measurement: The Pragmatic Guide
Modern Software Productivity Measurement: The Pragmatic Guide
CAST
 
Project control and process instrumentation
Project control and process instrumentationProject control and process instrumentation
Project control and process instrumentation
Kuppusamy P
 
Introduction to Software Cost Estimation
Introduction to Software Cost EstimationIntroduction to Software Cost Estimation
Introduction to Software Cost Estimation
Hemanth Raj
 
Software Estimation
Software EstimationSoftware Estimation
Software Estimation
Dinesh Singh
 
Metrics for project size estimation
Metrics for project size estimationMetrics for project size estimation
Metrics for project size estimationNur Islam
 
Effort estimation for web applications
Effort estimation for web applicationsEffort estimation for web applications
Effort estimation for web applications
Nagaraja Gundappa
 
Software metric analysis methods for product development maintenance projects
Software metric analysis methods for product development  maintenance projectsSoftware metric analysis methods for product development  maintenance projects
Software metric analysis methods for product development maintenance projectsIAEME Publication
 
Software metric analysis methods for product development
Software metric analysis methods for product developmentSoftware metric analysis methods for product development
Software metric analysis methods for product developmentiaemedu
 
Decomposition technique In Software Engineering
Decomposition technique In Software Engineering Decomposition technique In Software Engineering
Decomposition technique In Software Engineering
Bilal Hassan
 
Defect effort prediction models in software maintenance projects
Defect  effort prediction models in software maintenance projectsDefect  effort prediction models in software maintenance projects
Defect effort prediction models in software maintenance projectsiaemedu
 
Lecture5
Lecture5Lecture5
Lecture5
soloeng
 
Software Testing Fundamentals
Software Testing FundamentalsSoftware Testing Fundamentals
Software Testing Fundamentals
jothisekaran
 
D0365030036
D0365030036D0365030036
D0365030036
theijes
 
Afrekenen met functiepunten
Afrekenen met functiepuntenAfrekenen met functiepunten
Afrekenen met functiepunten
Nesma
 

What's hot (19)

Are Function Points Still Relevant?
Are Function Points Still Relevant?Are Function Points Still Relevant?
Are Function Points Still Relevant?
 
Process Improvement in Software Engineering SE25
Process Improvement in Software Engineering SE25Process Improvement in Software Engineering SE25
Process Improvement in Software Engineering SE25
 
Cost estimation
Cost estimationCost estimation
Cost estimation
 
Slides chapters 21-23
Slides chapters 21-23Slides chapters 21-23
Slides chapters 21-23
 
Modern Software Productivity Measurement: The Pragmatic Guide
Modern Software Productivity Measurement: The Pragmatic GuideModern Software Productivity Measurement: The Pragmatic Guide
Modern Software Productivity Measurement: The Pragmatic Guide
 
Project control and process instrumentation
Project control and process instrumentationProject control and process instrumentation
Project control and process instrumentation
 
Oasize llnl
Oasize llnlOasize llnl
Oasize llnl
 
Introduction to Software Cost Estimation
Introduction to Software Cost EstimationIntroduction to Software Cost Estimation
Introduction to Software Cost Estimation
 
Software Estimation
Software EstimationSoftware Estimation
Software Estimation
 
Metrics for project size estimation
Metrics for project size estimationMetrics for project size estimation
Metrics for project size estimation
 
Effort estimation for web applications
Effort estimation for web applicationsEffort estimation for web applications
Effort estimation for web applications
 
Software metric analysis methods for product development maintenance projects
Software metric analysis methods for product development  maintenance projectsSoftware metric analysis methods for product development  maintenance projects
Software metric analysis methods for product development maintenance projects
 
Software metric analysis methods for product development
Software metric analysis methods for product developmentSoftware metric analysis methods for product development
Software metric analysis methods for product development
 
Decomposition technique In Software Engineering
Decomposition technique In Software Engineering Decomposition technique In Software Engineering
Decomposition technique In Software Engineering
 
Defect effort prediction models in software maintenance projects
Defect  effort prediction models in software maintenance projectsDefect  effort prediction models in software maintenance projects
Defect effort prediction models in software maintenance projects
 
Lecture5
Lecture5Lecture5
Lecture5
 
Software Testing Fundamentals
Software Testing FundamentalsSoftware Testing Fundamentals
Software Testing Fundamentals
 
D0365030036
D0365030036D0365030036
D0365030036
 
Afrekenen met functiepunten
Afrekenen met functiepuntenAfrekenen met functiepunten
Afrekenen met functiepunten
 

Similar to Basic-Project-Estimation-1999

Are Function Points Still Relevant?
Are Function Points Still Relevant?Are Function Points Still Relevant?
Are Function Points Still Relevant?
Premios Group
 
Unit 5
Unit   5Unit   5
Formalizing Collaborative Software Development Issues: A Collaborative Work A...
Formalizing Collaborative Software Development Issues: A Collaborative Work A...Formalizing Collaborative Software Development Issues: A Collaborative Work A...
Formalizing Collaborative Software Development Issues: A Collaborative Work A...
IOSR Journals
 
Softwareenggineering lab manual
Softwareenggineering lab manualSoftwareenggineering lab manual
Softwareenggineering lab manual
Vivek Kumar Sinha
 
software engineering
software engineering software engineering
software engineering
bharati vidhyapeeth uni.-pune
 
Software development life cycle
Software development life cycle Software development life cycle
Software development life cycle
shefali mishra
 
Blue book
Blue bookBlue book
Estimation sharbani bhattacharya
Estimation sharbani bhattacharyaEstimation sharbani bhattacharya
Estimation sharbani bhattacharya
Sharbani Bhattacharya
 
Software models
Software modelsSoftware models
Software models
MOULA HUSSAIN KHATTHEWALE
 
Management Information Systems – Week 7 Lecture 2Developme.docx
Management Information Systems – Week 7 Lecture 2Developme.docxManagement Information Systems – Week 7 Lecture 2Developme.docx
Management Information Systems – Week 7 Lecture 2Developme.docx
croysierkathey
 
Enterprise performance engineering solutions
Enterprise performance engineering solutionsEnterprise performance engineering solutions
Enterprise performance engineering solutionsInfosys
 
Software Engineering Important Short Question for Exams
Software Engineering Important Short Question for ExamsSoftware Engineering Important Short Question for Exams
Software Engineering Important Short Question for Exams
MuhammadTalha436
 
Benchmarking Techniques for Performance Analysis of Operating Systems and Pro...
Benchmarking Techniques for Performance Analysis of Operating Systems and Pro...Benchmarking Techniques for Performance Analysis of Operating Systems and Pro...
Benchmarking Techniques for Performance Analysis of Operating Systems and Pro...
IRJET Journal
 
Hard work matters for everyone in everytbing
Hard work matters for everyone in everytbingHard work matters for everyone in everytbing
Hard work matters for everyone in everytbing
lojob95766
 
5(re dfd-erd-data dictionay)
5(re dfd-erd-data dictionay)5(re dfd-erd-data dictionay)
5(re dfd-erd-data dictionay)randhirlpu
 
Management Information system
Management Information systemManagement Information system
Management Information system
Cochin University
 

Similar to Basic-Project-Estimation-1999 (20)

Are Function Points Still Relevant?
Are Function Points Still Relevant?Are Function Points Still Relevant?
Are Function Points Still Relevant?
 
Unit 5
Unit   5Unit   5
Unit 5
 
Formalizing Collaborative Software Development Issues: A Collaborative Work A...
Formalizing Collaborative Software Development Issues: A Collaborative Work A...Formalizing Collaborative Software Development Issues: A Collaborative Work A...
Formalizing Collaborative Software Development Issues: A Collaborative Work A...
 
Softwareenggineering lab manual
Softwareenggineering lab manualSoftwareenggineering lab manual
Softwareenggineering lab manual
 
software engineering
software engineering software engineering
software engineering
 
Software development life cycle
Software development life cycle Software development life cycle
Software development life cycle
 
Blue book
Blue bookBlue book
Blue book
 
What is jad_session
What is jad_sessionWhat is jad_session
What is jad_session
 
Estimation sharbani bhattacharya
Estimation sharbani bhattacharyaEstimation sharbani bhattacharya
Estimation sharbani bhattacharya
 
Software models
Software modelsSoftware models
Software models
 
Management Information Systems – Week 7 Lecture 2Developme.docx
Management Information Systems – Week 7 Lecture 2Developme.docxManagement Information Systems – Week 7 Lecture 2Developme.docx
Management Information Systems – Week 7 Lecture 2Developme.docx
 
Enterprise performance engineering solutions
Enterprise performance engineering solutionsEnterprise performance engineering solutions
Enterprise performance engineering solutions
 
Sadchap3
Sadchap3Sadchap3
Sadchap3
 
Sdpl1
Sdpl1Sdpl1
Sdpl1
 
Software Engineering Important Short Question for Exams
Software Engineering Important Short Question for ExamsSoftware Engineering Important Short Question for Exams
Software Engineering Important Short Question for Exams
 
Session3
Session3Session3
Session3
 
Benchmarking Techniques for Performance Analysis of Operating Systems and Pro...
Benchmarking Techniques for Performance Analysis of Operating Systems and Pro...Benchmarking Techniques for Performance Analysis of Operating Systems and Pro...
Benchmarking Techniques for Performance Analysis of Operating Systems and Pro...
 
Hard work matters for everyone in everytbing
Hard work matters for everyone in everytbingHard work matters for everyone in everytbing
Hard work matters for everyone in everytbing
 
5(re dfd-erd-data dictionay)
5(re dfd-erd-data dictionay)5(re dfd-erd-data dictionay)
5(re dfd-erd-data dictionay)
 
Management Information system
Management Information systemManagement Information system
Management Information system
 

Basic-Project-Estimation-1999

  • 1. Author's note 2016 : This article was originally written in 1999. A lot has changed since then but it still has some relevancy, particularly the 'Estimating Human Effort' section which is now described as 'T Shirting' in the agile word. PROJECT ESTIMATION Introduction Estimating the scope and profile of a software development project is an imprecise but essential science. It is necessary whether the work is to be done internally for your own users or whether it is to be done under contract for an external organisation. The two main areas for ‘estimation’ are (a) the human effort required and (b) the technological environment required (for both development and deployment). The initial input for this process must be the results of an initial analysis exercise – a feasibility study of some sort or another. This study will be based on sponsor interviews to capture and prioritise requirements and then these requirements documented to a sufficient level to make estimation meaningful. The level of detail provided about the functionality required is fundamental to the accuracy of the estimates. Whether this information is contained in an Invitation To Tender (ITT) from another organisation or an internal Feasibility Study Report it must contain sufficient information. A crude logical data model is required and each function must have a couple of paragraphs of description with reference to the main entities used. If this level of detail is not forthcoming then either, point this fact out and don’t tender for the project, or offer to do the feasibility work to the appropriate level. Estimating human effort There are many ways to estimate the effort required. Some complex mathematical models exist, Function Point Analysis being one. It is best not to rely on any one method but to ask three or more people to each individually come up with their own figures using their own preferred method and then compare and try and rationalise the differences. Here is one method. It does rely on the estimator’s experience at doing the type of work proposed and knowledge of the capabilities of the staff that will be doing the work. This information is based on experience with development using the Oracle database and toolsets but should be equally applicable to any other environment with suitable adjustments to the metrics. The function descriptions are used to assign an easy/medium/hard/complex rating to each based on the estimator’s experience with similar work. An easy function would be, for example, a simple data entry screen based on one entity with no special business rules. A complex function would be, for example, an accounting year end close down that impacted many entities. The next main factor is assigning effort figures to the above ratings. This is the effort that would be required to build and unit test modules to support functions of this type. These figures are going to depend on the profile and abilities of the staff involved and have to be based on experienced guesswork or preferably metrics gathered from previous projects. For a team made up of motivated staff with average experience (some expert, some novice) the following figures would probably be appropriate: Type Effort Easy 1 day, Project Estimation Page 1
  • 2. Medium 3 days, Hard 7 days, Complex 15 days Some would argue that the ‘easy’ figure should be lower but again this depends on the type of environment that the development is being performed in. If any sort of formal configuration management and development documentation (e.g. Unit Test records) is required then one day is a minimum safe figure to use for any activity. Once these figures have been assigned the total ‘build’ effort is calculated. If we imagine a project that has the following mix of functions: Type Number to build Effort Easy 25 25 days Medium 16 48 days Hard 6 42 days Complex 2 30 days Total 145 days This figure is then used as the foundation for calculating the other activities required during software development. Again these activities and the metrics involved are going to be dependent on the type of organisation that will be doing the work and ought to be based on local metrics. The profile used throughout these example is intended to be one where a structured and standards based approach to software development is used pragmatically. Formal project management, analysis and design are performed but not taken to extremes – SSADM modelling techniques might be used but the full methodology would not. This list can be changed to fit the activities of any sort of development methodology. For this example, based on the above imaginary project profile, the following metrics are used: To perform the required Systems Analysis allow 20% of the ‘build’ figure. This level of analysis would allow the function and entity models to be confirmed, consolidated and formally documented (in a CASE tool preferably). To perform the required Physical Design allow 25% of the ‘build’ figure. This level of design would allow for pseudo coding the business logic required and allow co-ordinated high level design. Project Estimation Page 2
  • 3. To perform the required Systems Testing allow 40% of the ‘build’ figure. This level of testing would allow for condition lists and test scripts to be produced with formally recorded test cycles. To provide development ‘team leader’ type activities allow 10% of all the above (Build, Analysis, Design and Systems Test). This activity includes, for example, co-ordinating the application of standards, ensuring adherence to high-level design and re-usability, integrating developed code. Finally to provide project management allow 15 % of all the above (Build, Analysis, Design, Systems Test and Team Leader). This would allow a part or full-time project manager, depending on team size who, for example, produces project plans (GANTT charts), tracks progress against plan, produces summary reports for senior management and attends management meetings. Following the above: Activity Effort Build 145 Systems Analysis (20% build) 29 Systems Design (25% build) 36 Systems Test (40% build) 58 Total development 268 Team leading (10% of above) 27 Total development + team leading 295 Project management (15% of above) 44 Grand total 339 If this was to be used for budgeting purposes either a composite rate of could be used or the effort graded by type (A systems analyst costs £750 per day, a junior programmer £450 etc.). Using a composite rate, for example, of £600 per day would make the above project cost £203,400. If the work was for a fixed price tender then a risk factor should be introduced at a rate that depended on the feel for the project. Will the team be made up of the best available or will it contain many novice members or people being cross trained from other skills? Is the client/user one with who the development team is likely to have good relations or will they be awkward and argue about the scope and the Project Estimation Page 3
  • 4. specification? Is the technology proven or leading edge? A figure of 20% is not uncommon, this would bring our effort figure up to 407 days and our composite rate cost up to £244,200. The Technological Environment Many users specify what environment they wish their application software use from the outset. Equally many development organisations have their own preferences, after all it is impossible to have ‘on tap’ skills in every modern development environment. The appropriate make-up of operating system, development toolset, RDBMS and hardware used must be confirmed at this early stage. Sometimes the sponsors will attempt to cut costs by requesting that the project be done using, for example, Access on a PC when a higher end environment like, for example, Oracle on Unix is required. Similarly development outfits sometimes specify a ‘Rolls Royce’ solution when something far simpler will suffice. Pointers to suggest that a robust, higher end, environment is required would be: 1. System availability – the users want a 24*7 system 2. Regular Backups are required with (no downtime) 3. A multi-user environment is required 4. Capacity for future expansion 5. Remote access will be needed 6. The system is described as mission critical Feasibility studies and ITTs often specify the eventual hardware platform to be used. This is another area where the science is at best ‘imprecise’! RDBMS and toolset vendors are notoriously bad at providing mathematical formulae to estimate all hardware requirements. CASE tools will usually provide disk sizing data from the entity model and RAM requirements can usually be calculated basing the figures on the number of users expected but processor power is a real bug-bear. The standard response from one major RDBMS vendor, who shall remain nameless, is ‘you need one of our consultants to come on site and do this work for you’! The attributes of the hardware that will be required to support the eventual software application can be broken down into three areas. Processor (CPU) power, disk size and RAM size. (a) Processor Power This is the area where estimation is the crudest. The best way is to work out the CPU requirements after the software has been built by measuring them on a test machine using the standard operating system and database analysis tools. However a very rough ‘stab in the dark’ can made in advance. Processor Power can be estimated by analysing the functions that are likely to have the worst performance and are going to be used by the greatest number of concurrent users. The number of Insert, Update, Delete and Select operations performed by each function against the database entities is multiplied against the following database access metrics, each: • Insert requires 4 accesses • Update requires 2 accesses • Delete requires 3 accesses • Select requires 1 access These represent the work that is incurred manipulating indexes etc. If we imagine a function with the following properties: Project Estimation Page 4
  • 5. 2 Inserts = 8 accesses 4 Updates = 8 accesses 1 Delete = 3 accesses 5 Selects = 5 accesses Total = 24 accesses This figure must then be multiplied by whatever metric is specified for the database per access on the relevant hardware platform. For example, if MIPS are being used on a UNIX RISC chip platform with an Oracle version 7 database then this figure is usually of the order of 0.3 million instructions per access. This example would therefore require 7.2 million instructions to carry out. To get the crude processing power required this figure must be multiplied by the expected number of concurrent users and the divided by the performance required in seconds. If we expected thirty concurrent users which a one second response time this example results in a total of (7.2 * 30) / 1 = 216 million instructions per second (MIPS). Allowing for any other concurrent activity this can then be used to estimate the size of processor required. For example, a Sun Enterprise 3500 server is rated at 400 million instructions per second (MIPS) and a Sun Starefire Enterprise 10,000 with a 400 MHz CPU is rated at about 3000 MIPS. Unfortunately advances in hardware architecture and the changing flavours of benchmarking test makes this even more complicated. A current popular benchmark is the TPC rating. TPC is supposed to give a more accurate measure of machine to database performance. Equating this measure to database accesses is very difficult - depending on the size and architecture of the machine 1 MIPS equates to a TPC-C rating of anything between 4 and 40. As these benchmarks mature, however, more hard and fast rules will appear. This is a very crude measure with many obvious flaws (not least the ability to predict eventual user activity) but it does allow ‘ball-park’ sizing to take place. As a rule of thumb it is always best to go for a larger rather than a smaller machine. It will nearly always be cheaper to buy a bigger machine up front than to pay staff to tune code and queries. (b) Disk Space Disk Space requirements fall into two main areas. The operating system files and the size of the relational database. The best way of sizing the database is to put the entity model and volumetrics into the appropriate CASE tool (Oracle Designer in the case of the Oracle RDBMS) and allow that to calculate the figures. Alternatively this can be done by hand using the equations found in the appropriate Database Administrators reference manual, but this can be laborious. Also allow space for any temporary storage, for example, the input files of a data take on exercise from an old system. The operating system files will be less dynamic but there can be many different factors to consider. For example, Application software executables, report files, temporary files, database exports. Disk space is, however, cheap. (c) RAM The Random Access Memory (RAM) requirement is estimated by making allowances for the RDBMS software, the operating system and the peak number of concurrent users. The figures vary between versions and platforms but they could, for example, work out as follows: Project Estimation Page 5
  • 6. Unix Operating System = 10 MB Oracle RDBMS = 120 MB 40 Users = 80 MB (2 MB per user) Total = 210 MB The actual figures must be checked for the eventual platform used. Other factors, for example, Web ‘served’ software may also have to be considered. The figures, per user, for Oracle Developer Forms Server can vary between 2 and 10 MB depending on the version of Developer and the platform used. Conclusion Hopefully the above will help but there are too many unknowns for any hard and fast rules or concrete assurance that what is predicted will eventually be. The estimates obtained are used best for planning with regular feedback and re-estimation as the project progresses. At regular points throughout the project lifecycle revisit all the estimates and assumptions and re-calculate them using the latest information available. The realities of life are: i. it is impossible to predict the future 100% ii. you will have a better idea of what to expect if you try to predict the future rather than if you don’t bother at all iii.your predictions will be more accurate if they apply to the short term rather than to the long term. Michael Wigley Project Estimation Page 6