INTRODUCTION TO SOFTWARE ENGINEERING
The nature and complexity of software have changed significantly in the last 30 years. In the
1970s, applications ran on a single processor, produced alphanumeric output, and received their
input from a linear source. Today's applications are far more complex: they typically have
graphical user interfaces and client-server architectures, and they frequently run on two or more
processors, under different operating systems, and on geographically distributed machines.
Rarely in history has a field of endeavor evolved as rapidly as software development. The
struggle to stay abreast of new technology, deal with accumulated development backlogs, and
cope with people issues has become a treadmill race, as software groups work as hard as they
can just to stay in place. The initial concept of one "guru", indispensable to a project and
hostage to its continued maintenance, has changed. The Software Engineering Institute (SEI) and
a group of "gurus" advise us to improve our development process. Improvement means "ready to
change". Not every member of an organization feels the need to change.
Therefore, there is an urgent need to adopt software engineering concepts, strategies, and
practices to avoid conflict, and to improve the software development process in order to deliver
good quality, maintainable software on time and within budget.
1.1 THE EVOLVING ROLE OF SOFTWARE
Software has seen many changes since its inception. After all, it has evolved over a period of
time against all odds and adverse circumstances. The computer industry has also progressed at
breakneck speed through the computer revolution, and recently the network revolution,
triggered and/or accelerated by the explosive spread of the internet and, most recently, the web.
The computer industry has been delivering exponential improvement in price-performance, but the
problems with software have not been decreasing. Software still comes late, exceeds budget and
is full of residual faults. As per the latest IBM report, "31% of the projects get cancelled before
they are completed, 53% over-run their cost estimates by an average of 189% and for every 100
projects, there are 94 restarts" [IBMG2K].
1.1.1 Some Software Failures:
A major problem of the software industry is its inability to develop bug-free software. If software
developers were asked to certify that the developed software is bug-free, no software would
ever have been released. Hence, the "software crisis" has become a fixture of everyday life. Many
well-publicized failures have had not only major economic impact but have also cost human
lives. Some of the failures are discussed below:
(i) The Y2K problem was the most crucial problem of the last century. It was simply
ignorance about the adequacy, or otherwise, of using only the last two digits of the year. The 4-digit
date format, like 1964, was shortened to the 2-digit format, like 64. The developers could not
visualize the problem of the year 2000. Millions of rupees have been spent to handle this problem.
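A toy sketch in Python (illustrative only; the affected systems were mostly written in older languages such as COBOL) shows how two-digit years break arithmetic on dates:

```python
# Hypothetical sketch of the Y2K defect: storing only the last two digits
# of the year makes dates after 1999 compare and subtract incorrectly.
def age_in_years(birth_yy, current_yy):
    """Two-digit year arithmetic, as used in many legacy systems."""
    return current_yy - birth_yy

# A person born in 1964, checked in 1999: the answer is correct.
print(age_in_years(64, 99))   # 35
# The same person checked in 2000, stored as "00": a nonsensical negative age.
print(age_in_years(64, 0))    # -64
```

The fix, applied at enormous cost, was to widen the stored year to four digits or to interpret two-digit years within a sliding window.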
(ii) The "star wars" program of the USA produced the "Patriot missile", which was used for the
first time in the Gulf war. Patriot missiles were used as a defense against Iraqi Scud missiles. The
Patriot missiles failed several times to hit Scud missiles, including one failure that killed 28 U.S.
soldiers in Dhahran, Saudi Arabia. A review team was constituted to find the reason, and the
result was a software bug: a small timing error in the system's clock accumulated to the point
that after 14 hours, the tracking system was no longer accurate. In the Dhahran attack, the
system had been operating for more than 100 hours.
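The widely reported analysis of this bug can be reproduced in a few lines. According to that account, the clock ticked every 0.1 seconds, and 1/10 (which has no finite binary representation) was chopped to 23 fractional bits in a 24-bit register; the sketch below follows those published figures and is illustrative:

```python
from fractions import Fraction

# 1/10 has no finite binary expansion; chopping it to 23 fractional bits
# (per the published analysis of the 24-bit register) loses ~9.5e-8 s per tick.
chopped_tenth = Fraction(int(Fraction(1, 10) * 2**23), 2**23)
error_per_tick = Fraction(1, 10) - chopped_tenth

ticks = 100 * 3600 * 10            # one tick per 0.1 s, over 100 hours
drift = float(error_per_tick * ticks)
print(round(drift, 2))             # ~0.34 s of accumulated clock error
```

A third of a second sounds small, but a Scud travels roughly half a kilometer in that time, which is more than enough to move it outside the tracking window.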
(iii) In 1996, a US consumer group embarked on an 18-month, $1 million project to replace
its customer database. The new system was delivered on time but did not work as promised,
handling routine transactions smoothly but tripping over more complex ones. Within three weeks,
the database was shut down, transactions were processed by hand, and a new team was brought in
to rebuild the system. Possible reasons for such a failure may be that the design team was
over-optimistic in agreeing to requirements and that developers became fixated on deadlines,
allowing errors to be ignored.
(iv) "One little bug, one big crash" of the Ariane-5 space rocket, developed at a cost of $7000
million over a 10-year period. The space rocket was destroyed 39 seconds after its launch, at an
altitude of two and a half miles, along with its payload of four expensive and uninsured scientific
satellites. The reason was very simple. When the guidance system's own computer tried to
convert one piece of data, the sideways velocity of the rocket, from a 64-bit format to a 16-bit
format, the number was too big and an overflow error resulted after 36.7 seconds. When the
guidance system shut down, it passed control to an identical, redundant unit, which was there to
provide backup in case of just such a failure. Unfortunately, the second unit had failed in the
identical manner a few milliseconds before. In this case, the developers had decided that this
particular velocity figure would never be large enough to cause trouble; after all, it never had been.
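The failure mode can be sketched in Python; the velocity values below are illustrative, not actual Ariane telemetry, and the real flight code was written in Ada:

```python
def to_int16(value):
    """Convert a float to a 16-bit signed integer, raising on overflow
    instead of handling it, mirroring the unprotected Ariane-5 conversion."""
    n = int(value)
    if not -2**15 <= n < 2**15:          # 16-bit signed range: -32768 .. 32767
        raise OverflowError(f"{value} does not fit in 16 bits")
    return n

print(to_int16(20000.0))                 # within range: an Ariane-4-scale value
try:
    to_int16(40000.0)                    # an Ariane-5-scale value overflows
except OverflowError as exc:
    print("unhandled in flight software:", exc)
```

On Ariane-4 the conversion had always succeeded, so the overflow check was deliberately omitted for efficiency; on Ariane-5's faster trajectory the unguarded conversion raised an exception that brought both guidance units down.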
(v) Financial software is an essential part of any company's I.T. infrastructure. However,
many companies have experienced failures in their accounting systems due to faults in the
software itself. The failures ranged from producing wrong information to complete system
crashes. There is widespread dissatisfaction over the quality of financial software. Even if a
system only gives incorrect information, this may have an adverse impact on the business.
(vi) Charles C. Mann shared his views about Windows XP through his article in Technology
Review [MANN02] as:
"Microsoft released Windows XP on October 25, 2001 and on the same day, which may be a
record, the company posted 18 megabytes of patches on its website for bug fixes, compatibility
updates, and enhancements. Two patches were intended to fix important security holes. (One of
them did; the other patch did not work). Microsoft advised (still advises) users to back up critical
files before installing patches."
This situation is quite embarrassing and clearly explains the sad state of affairs of present
software companies. The developers were either too rushed or too careless to fix obvious defects.
We could keep on discussing the history of software failures, which have endangered
human safety and caused project failures in the past.
1.1.2 No Silver Bullet:
We have discussed some of the software failures. Is there any light at the end of the tunnel, or
will the same scenario continue for many years? When automobile engineers discuss the cars in
the market, they do not say that cars today are no better than they were ten or fifteen years ago.
The same is true for aeronautical engineers; nobody says that Boeing or Airbus does not make
better quality planes compared to those of the previous decade. Civil engineers also do not show
anxiety over the quality of today's structures compared to the structures of the last decade.
Everyone feels that things are improving day by day. But software seems different. Many software
engineers believe that software quality is NOT improving. If anything, they say, "it is getting worse".
As we all know, hardware cost continues to decline drastically. However, there are
desperate cries for a silver bullet: something to make software costs drop as rapidly as computer
hardware costs do. But as we look to the horizon of a decade, we see no silver bullet. There is no
single development, either in technology or in management technique, that by itself promises
even one order-of-magnitude improvement in productivity, in reliability, in simplicity.
Inventions in electronic design, through transistors and large-scale integration, have
significantly affected the cost, performance and reliability of computer hardware. No other
technology, since civilization began, has seen six orders of magnitude of performance-price gain
in 30 years. The progress in software technology is not that rosy, due to certain difficulties with
this technology. Some of the difficulties are complexity, changeability and invisibility.
The hard part of building software is the specification, design and testing of this
conceptual construct, not the labor of representing it and testing the correctness of representation.
We still make syntax errors, to be sure, but they are trivial compared to the conceptual errors
(logic errors) in most systems. That is why building software is always hard, and there is
inherently no silver bullet.
Many people (especially CASE tool vendors) believe that CASE (Computer Aided
Software Engineering) tools represent the so-called silver bullet that would rescue the software
industry from the software crisis. Many companies have used these tools and spent large sums of
money, but the results were highly unsatisfactory. We learnt the hard way that there is no such
thing as a silver bullet [BROO87].
Software is evolving day by day and its impact is also increasing on every facet of
human life. We cannot imagine a day without using cell phones, logging on to the internet,
sending e-mails, watching television and so on. All these activities are dependent on software,
and software bugs exist nearly everywhere. The blame for these bugs goes to software
companies that rush products to market without adequately testing them. It belongs to software
developers who could not understand the importance of detecting and removing faults before the
customer experiences them as failures. It belongs to the legal system that has given a free pass to
software developers on bug-related damages. It also belongs to universities and other higher
educational institutions that stress programming over software engineering principles and practices.
1.2 WHAT IS SOFTWARE ENGINEERING?
Software has become critical to advancement in almost all areas of human endeavor. The art of
programming alone is no longer sufficient to construct large programs. There are serious
problems in the cost, timeliness, maintenance and quality of many software products.
Software engineering has the objective of solving these problems by producing good
quality, maintainable software, on time, within budget. To achieve this objective, we have to
focus in a disciplined manner on both the quality of the product and on the process used to
develop the product.
1.2.1 Definitions:
At the first conference on software engineering in 1968, Fritz Bauer [FRIT68] defined software
engineering as “The establishment and use of sound engineering principles in order to obtain economically
developed software that is reliable and works efficiently on real machines”.
Stephen Schach defined the same as “A discipline whose aim is the production of quality software, software
that is delivered on time, within budget, and that satisfies its requirements”.
Both definitions are popular and acceptable to the majority. However, due to the increase in the
cost of maintaining software, the objective is now shifting to producing quality software that is
maintainable, delivered on time, within budget, and that also satisfies its requirements.
1.2.2 Program versus Software:
Software is more than programs. It consists of programs, documentation of any facet of the
program, and the procedures used to set up and operate the software system. The components of
a software system are shown in Fig. 1.1.
Fig. 1.1: Components of Software
Any program is a subset of software, and it becomes software only if documentation and
operating procedure manuals are prepared. A program is a combination of source code and object
code. Documentation consists of different types of manuals, as shown in Fig. 1.2.
Fig. 1.2: List of documentation manuals
Operating procedures consist of instructions to set up and use the software system and
instructions on how to react to system failure. A list of operating procedure manuals/documents
is given in Fig. 1.3.
Fig. 1.3: List of Operating Procedural Manuals/Documents
1.2.3 Software Process:
The software process is the way in which we produce software. This differs from organization to
organization. Surviving in the increasingly competitive software business requires more than
hiring smart, knowledgeable developers and buying the latest development tools. We also need
to use effective software development processes, so that developers can systematically use the
best technical and managerial practices to successfully complete their projects. Many software
organizations are looking at software process improvement as a way to improve the quality,
productivity, and predictability of their software development and maintenance efforts [WIEG96].
It seems straightforward, and the literature has a number of success stories of companies
that substantially improved their software development and project management capabilities.
However, many other organizations do not manage to achieve significant and lasting
improvements in the way they conduct their projects. Here we discuss a few reasons why it is
difficult to improve the software process.
1. Not enough time: Unrealistic schedules leave insufficient time to do the essential
project work. No software groups are sitting around with plenty of spare time to devote to
exploring what is wrong with their current development processes and what they should be doing
differently. Customers and senior managers are demanding more software, of higher quality in
minimum possible time. Therefore, there is always a shortage of time. One consequence is that
software organizations may deliver release 1.0 on time, but then they have to ship release 1.01
almost immediately thereafter to fix the recently discovered bugs.
2. Lack of knowledge: A second obstacle to widespread process improvement is that
many software developers do not seem to be familiar with industry best practices. Normally,
software developers do not spend much time reading the literature to find out about the best-
known ways of software development. Developers may buy books on Java, Visual Basic or
ORACLE, but do not look for anything about process, testing or quality on their bookshelves.
Industry awareness of process improvement frameworks such as the Capability Maturity
Model (CMM) and ISO 9001 for software has grown in recent years, but effective and sensible
application still is not that common. Many recognized best practices available in the literature
simply are not in widespread use in the software development world.
3. Wrong motivations: Some organizations launch process improvement initiatives for
the wrong reasons. Maybe an external entity, such as a contractor, demanded that the
development organization achieve CMM level X by date Y. Or perhaps a senior manager
learned just enough about the CMM and directed his organization to climb on the CMM bandwagon.
The basic motivation for software process improvement should be to make some of the
current difficulties we experience on our projects go away. Developers are rarely motivated by
seemingly arbitrary goals of achieving a higher maturity level or an external certification (ISO
9000) just because someone has decreed it. However, most people should be motivated by the
prospect of meeting their commitments, improving customer satisfaction, and delivering
excellent products that meet customer expectations. Developers have resisted many process
improvement initiatives when they were directed to do "the CMM thing" without a clear
explanation of the reasons why improvement was needed and the benefits the team expected to gain.
4. Insufficient commitment: Many times, software process improvement fails, despite the
best of intentions, due to a lack of true commitment. It starts with a process assessment but fails to
follow through with actual changes. Management sets no expectations for the development
community around process improvement; it devotes insufficient resources, writes no
improvement plan, develops no roadmap, and pilots no new processes.
The investment we make in process improvement will not have an impact on current
productivity, because the time we spend developing better ways to work tomorrow is not
available for today's assignments. It can be tempting to abandon the effort when skeptics see the
energy they want devoted to immediate demands being siphoned off in the hope of a better
future (Fig. 1.4). Software organizations should not give up, but should take motivation from the
very real, long-term benefits that many companies (including Motorola, Hewlett-Packard,
Boeing, Microsoft, etc.) have enjoyed from sustained software process improvement initiatives.
Improvements will take place over time; organizations should not expect or promise
miracles, and should always remember the learning curve.
Fig. 1.4: Process Improvement Learning Curve
1.2.4 Software Characteristics:
Software has a very special characteristic: it does not wear out. Its behavior and nature
are quite different from other products of human effort. A comparison with one such case, i.e.,
constructing a bridge vis-a-vis writing a program, is given in Table 1.1. The two activities require
different processes and have different characteristics.
Table 1.1: A comparison of constructing a bridge and writing a program
Some of the important characteristics are discussed below:
(i) Software does not wear out: There is a well-known "bath tub curve" in reliability studies
for hardware products. The curve is given in Fig. 1.5. The shape of the curve resembles a bath
tub, and it is known as the bath tub curve.
Fig. 1.5: Bath tub curve
There are three phases in the life of a hardware product. The initial phase is the burn-in phase,
where failure intensity is high. The product is expected to be tested in the industry before delivery.
Due to testing and the fixing of faults, failure intensity will come down initially and may stabilize
after a certain time. The second phase is the useful life phase, where failure intensity is
approximately constant; it is called the useful life of the product. After a few years, failure
intensity will increase again due to the wearing out of components. This phase is called the
wear-out phase. We do not have this phase for software, as software does not wear out. The curve
for software is given in Fig. 1.6.
Fig. 1.6: Software Failure Curve
The important point is that software becomes reliable over time instead of wearing out. It becomes
obsolete if the environment for which it was developed changes. Hence, software may be retired
due to environmental changes, new requirements, new expectations, etc.
(ii) Software is not manufactured: The life of software runs from concept exploration to the
retirement of the software product. It is a one-time development effort followed by a continuous
maintenance effort to keep it operational. However, making 1000 copies is not an issue and
involves negligible cost. In the case of a hardware product, every unit costs us in raw material and
other processing expenses. We do not have an assembly line in software development. Hence
software is not manufactured in the classical sense.
(iii) Reusability of components: If we have to manufacture a TV, we may purchase the picture
tube from one vendor, the cabinet from another, the design card from a third and other electronic
components from a fourth vendor. We will assemble the parts and test the product thoroughly to
produce a good quality TV. We may be required to manufacture only a few components, or no
components at all. We purchase every unit and component from the market, and produce the
finished product using standard quality guidelines and effective processes.
In software, every project is a new project. We start from scratch and design every unit
of the software product. Huge effort is required to develop software, which further increases the
cost of the software product. However, efforts have been made to design standard components
that may be used in new projects. Software reusability has introduced another area, known as
component based software engineering.
Hence developers can concentrate on truly innovative elements of design, that is, the parts
of the design that represent something new. As explained earlier, in the hardware world,
component reuse is a natural part of the engineering process. In software, there is only a humble
beginning; for example, graphical user interfaces are built using reusable components that enable
the creation of graphics windows, pull-down menus, and a wide variety of interaction mechanisms.
(iv) Software is flexible: We all feel that software is flexible. A program can be developed to
do almost anything. Sometimes, this characteristic may be the best and may help us to
accommodate any kind of change. However, most of the time, this "almost anything"
characteristic has made software development difficult to plan, monitor and control. This
unpredictability is the basis of what has been referred to for the past 30 years as the "software crisis".
1.2.5 Software Crisis:
Software engineering appears to be among the few options available to tackle the present
software crisis.
To explain the present software crisis in simple words, consider the following. The expenses that
organizations all around the world incur on software purchases, compared to those on
hardware purchases, have been showing a worrying trend over the years (as shown in Fig. 1.7).
Fig. 1.7: Change in the relative cost of hardware and software over time
Organizations are spending larger and larger portions of their budget on software. Not only are
the software products turning out to be more expensive than hardware, but they also present a
host of other problems to the customers: software products are difficult to alter, debug, and
enhance; use resources non-optimally; often fail to meet the user requirements; are far from
being reliable; frequently crash; and are often delivered late. Among these, the trend of
increasing software costs is probably the most important symptom of the present software crisis.
Remember that the cost we are talking of here is not on account of increased features, but due to
ineffective development of the product, characterized by inefficient resource usage and time and
cost over-runs.
There are many factors that have contributed to the present software crisis. These factors include
larger problem sizes, lack of adequate training in software engineering, an increasing skill
shortage, and low productivity improvements.
It is believed that the only satisfactory solution to the present software crisis can possibly come
from a spread of software engineering practices among the engineers, coupled with further
advancements to the software engineering discipline itself.
1.3 THE CHANGING NATURE OF SOFTWARE
Software has become an integral part of most fields of human life. Name a field, and we will
find software in use in that field. Software applications are grouped into eight areas for
convenience, as shown in Fig. 1.8.
Fig. 1.8: Software Applications
(i) System software: Infrastructure software, like compilers, operating systems, editors and
drivers, comes under this category. Basically, system software is a collection of programs that
provide services to other programs.
(ii) Real time software: This software is used to monitor, control and analyze real world
events as they occur. An example may be software required for weather forecasting. Such
software will gather and process the status of temperature, humidity and other environmental
parameters to forecast the weather.
(iii) Embedded software: This type of software is placed in the "Read-Only Memory (ROM)" of
a product and controls the various functions of the product. The product could be an aircraft,
automobile, security system, signaling system, control unit of a power plant, etc. Embedded
software handles hardware components and is also termed intelligent software.
(iv) Business software: This is the largest application area. Software designed to process
business applications is called business software. Business software could be a payroll system, a
file monitoring system, employee management, or account management. It may also be a data
warehousing tool which helps us take decisions based on available data. Management
information systems, enterprise resource planning (ERP) and other such software are popular
examples of business software.
(v) Personal computer software: Software used on personal computers is covered in this
category. Examples are word processors, computer graphics, multimedia and animation tools,
database management systems, computer games, etc. This is a rapidly growing area, and many
big organizations are concentrating their effort here due to the large customer base.
(vi) Artificial intelligence software: Artificial intelligence software makes use of non-numerical
algorithms to solve complex problems that are not amenable to computation or straightforward
analysis. Examples are expert systems, artificial neural networks, signal processing software, etc.
(vii) Web based software: The software related to web applications comes under this category.
Examples are CGI, HTML, Java, Perl, DHTML etc.
(viii) Engineering and scientific software: Scientific and engineering application software is
grouped in this category. Huge computing power is normally required to process data. Examples
are CAD/CAM packages, SPSS, MATLAB, Engineering Pro, circuit analyzers, etc.
Expectations from software are increasing in modern civilization. Software in any of
the above groups has a specialized role to play. Customers and development organizations desire
more features, which may not always be possible to provide. Another trend has emerged:
providing source code to customers and organizations so that they can make modifications for
their needs. This trend is particularly visible in infrastructure software like databases, operating
systems, compilers, etc. Software whose source code is available is known as open source
software. Organizations can develop software applications around such source code. Some
examples of open source software are LINUX, MySQL, PHP, Open Office, the Apache web
server, etc. Open source software has risen to great prominence. We may say that these are
programs whose licenses give users the freedom to run the program for any purpose, to study and
modify the program, and to redistribute copies of either the original or modified program without
paying royalties to the original developers. Is open source software better than proprietary
software? The answer is not easy. Both schools of thought are in the market. However, the
popularity of much open source software gives confidence to users. It may also help us to develop
small business applications at low cost.
1.4 SOFTWARE PARADIGMS
"Paradigm" (a Greek word meaning example) is commonly used to refer to a category of entities
that share a common characteristic.
We can distinguish between three different kinds of software paradigms:
• A Programming Paradigm is a model of how programmers communicate a calculation to a
computer.
• A Software Design Paradigm is a model for implementing a group of applications sharing
common properties.
• A Software Development Paradigm, often referred to as Software Engineering, may be seen
as a management model for implementing big software projects using engineering principles.
• Programming Paradigm:
A Programming Paradigm is a model for a class of programming languages that share a set of
common properties.
A programming language is a system of signs used to communicate a task/algorithm to a
computer, causing the task to be performed. The task to be performed is called a computation,
and it follows absolutely precise and unambiguous rules.
At the heart of it all is a fundamental question: What does it mean to understand a programming
language? What do we need to know to program in a language? There are three crucial
components to any language.
The language paradigm is a general principle that is used by a programmer to communicate a
task/algorithm to a computer.
The syntax of the language is a way of specifying what is legal in the phrase structure of the
language; knowing the syntax is analogous to knowing how to spell and form sentences in a
natural language like English. However, this doesn't tell us anything about what the sentences mean.
The third component is semantics, or meaning, of a program in that language. Ultimately,
without semantics, a programming language is just a collection of meaningless phrases; hence,
the semantics is the crucial part of a language.
There have been a large number of programming languages. Back in the 60's there were over
700 of them; most were academic, special purpose, or developed by an organization for its own
use. Fortunately, there are just four major programming language paradigms:
• Imperative (Procedural) Paradigm (Fortran, C, Ada, etc.)
• Object-Oriented Paradigm (SmallTalk, Java, C++)
• Logical Paradigm (Prolog)
• Functional Paradigm (Lisp, ML, Haskell)
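As a minimal illustration of the first and last of these (Python happens to support both styles), here is the same computation expressed imperatively and functionally:

```python
# Imperative style: explicit mutable state, updated step by step.
def sum_of_squares_imperative(numbers):
    total = 0
    for n in numbers:
        total += n * n
    return total

# Functional style: no mutation, just composition of expressions.
def sum_of_squares_functional(numbers):
    return sum(n * n for n in numbers)

print(sum_of_squares_imperative([1, 2, 3]))   # 14
print(sum_of_squares_functional([1, 2, 3]))   # 14
```

Both functions compute the same result; the paradigms differ in how the programmer communicates the computation, not in what is computed.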
Generally, a selected programming paradigm defines the main properties of software developed
by means of a programming language supporting the paradigm, such as:
• ease of creation
• Design Paradigm:
Software design paradigms embody the results of people's ideas on how to construct programs,
how to combine them into large software systems, and formal mechanisms for how those ideas
should be expressed.
Thus, we can say that a Software Design Paradigm is a model for a class of problems that share a
set of common characteristics.
Software design paradigms can be sub-divided as:
• Design Patterns
• Software Architecture
It should be especially noted that a particular Programming Paradigm essentially defines
software design paradigms. For example, we can speak about Object-Oriented design patterns,
procedural components (modules), functional software architecture, etc.
• Design Patterns:
A design pattern is a proven solution for a general design problem. It consists of communicating
'objects' that are customized to solve the problem in a particular context.
Patterns have their origin in object-oriented programming, where they began as collections of
objects organized to solve a problem. There isn't any fundamental relationship between patterns
and objects; it just happens that they began there. Patterns may have arisen because objects seem
so elemental, while the problems we were trying to solve with them were so complex.
• Architectural Patterns: An architectural pattern expresses a fundamental structural
organization or schema for software systems. It provides a set of predefined subsystems,
specifies their responsibilities, and includes rules and guidelines for organizing the
relationships between them.
• Design Patterns: A design pattern provides a scheme for refining the subsystems or
components of a software system, or the relationships between them. It describes a commonly
recurring structure of communicating components that solves a general design problem
within a particular context.
• Idioms: An idiom is a low-level pattern specific to a programming language. An idiom
describes how to implement particular aspects of components or the relationships between
them using the features of the given language.
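For instance, the classic Observer pattern, which falls in the middle category above, can be sketched in Python; the class and event names here are illustrative, not drawn from any particular library:

```python
class Subject:
    """Observer pattern: a subject notifies registered observers of events."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        for observer in self._observers:
            observer(event)

log = []
window = Subject()
window.attach(lambda e: log.append("logger saw " + e))
window.attach(lambda e: log.append("renderer saw " + e))
window.notify("resize")
print(log)   # both observers reacted to the single "resize" event
```

The pattern decouples the subject from its observers: the window knows nothing about logging or rendering, yet both react to its events.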
• Software Components:
Software components are binary units of independent production, acquisition, and deployment
that interact to form a functioning program.
A component is a physical and replaceable part of a system that conforms to and provides the
realization of a set of interfaces...typically represents the physical packaging of otherwise logical
elements, such as classes, interfaces, and collaborations. A component must be compatible and
interoperate with a whole range of other components.
Example: "Window", "Push Button", "Text Editor", etc.
Two main issues arise with respect to interoperability information:
1. How to express interoperability information (e.g., how to add a "push button" to a "window");
2. How to publish this information (e.g., a library with an API reusable via an "include" statement).
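A toy sketch of these two issues in Python (the class and method names are hypothetical, not from a real GUI toolkit): the `add` method is the published contract that expresses how components combine.

```python
class PushButton:
    """A reusable component with its own state."""
    def __init__(self, label):
        self.label = label

class Window:
    """A container component publishing an interface for composition."""
    def __init__(self, title):
        self.title = title
        self.children = []

    def add(self, component):      # the published "how to combine" contract
        self.children.append(component)

main = Window("Main")
main.add(PushButton("OK"))         # a push button added to a window
print([c.label for c in main.children])   # ['OK']
```

Publishing such an interface in a library is what lets independently produced components interoperate without knowing each other's internals.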
• Software Architecture:
Software architecture is the structure of the components of the solution. A particular software
architecture decomposes a problem into smaller pieces and attempts to find a solution
(component) for each piece. We can also say that architecture defines software system
components, their integration, and their interoperability:
• Integration means the pieces fit together well.
• Interoperation means that they work together effectively to produce an answer.
There are many software architectures, and choosing the right one can be a difficult problem in itself.
• Software Framework:
A software framework is a reusable mini-architecture that provides the generic structure and
behavior for a family of software abstractions, along with a context of metaphors which specifies
their collaboration and use within a given domain.
Frameworks can be seen as an intermediate level between components and software architecture.
Example: Suppose the architecture of a WBT system reuses such components as "Text Editing Input
object" and "Push Button". A software framework may define an "HTML Editor" which can be
further reused for building the architecture.
1.5 SOFTWARE ENGINEERING PARADIGMS
Manufacturing software can be characterized by a series of steps ranging from concept
exploration to final retirement; this series of steps is generally referred to as a software lifecycle.
Steps or phases in a software lifecycle fall generally into these categories:
• Requirements (Relative Cost 2%)
• Specifications (Analysis) (Relative Cost 5%)
• Design (Relative Cost 6%)
• Implementation (Relative Cost 5%)
• Testing (Relative Cost 7%)
• Integration (Relative Cost 8%)
• Maintenance (Relative Cost 67%)
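A quick check that the relative costs listed above account for the whole lifecycle, and that maintenance dominates (figures taken directly from the list):

```python
# Relative lifecycle costs as listed above (percent of total).
costs = {
    "Requirements": 2,
    "Specifications": 5,
    "Design": 6,
    "Implementation": 5,
    "Testing": 7,
    "Integration": 8,
    "Maintenance": 67,
}

total = sum(costs.values())
print(total)  # -> 100

# Maintenance alone exceeds all development phases combined.
development = total - costs["Maintenance"]
print(costs["Maintenance"] > development)  # -> True
```

This is why the chapter stresses delivering maintainable software: roughly two-thirds of lifecycle cost falls after delivery.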
Software engineering employs a variety of methods, tools, and paradigms. Paradigms refer to
particular approaches or philosophies for designing, building and maintaining software. Different
paradigms have their own advantages and disadvantages, which make one more appropriate
than another in a given situation.
A method (also referred to as a technique) depends heavily on the selected paradigm and may
be seen as a procedure for producing some result. Methods generally involve some formal
notations and processes.
Tools are automated systems implementing a particular method.
Thus, the phases listed below are heavily affected by the selected software paradigm.
The software development cycle involves the activities in the production of a software system.
Generally the software development cycle can be divided into the following phases:
• Requirement analysis and specifications
• Preliminary design
• Detailed design
• Component Implementation
• Component Integration
• System Documenting
• Unit testing
• Integration testing
• System testing
• Installation and Acceptance Testing
• Bug Reporting and Fixing
• Change requirements and software upgrading
1.6 SOFTWARE LIFE CYCLE
A software life cycle model (also called process model) is a descriptive and diagrammatic
representation of the software life cycle. A life cycle model represents all the activities required
to make a software product transit through its life cycle phases. It also captures the order in
which these activities are to be undertaken. In other words, a life cycle model maps the different
activities performed on a software product from its inception to retirement. Different life cycle
models may map the basic development activities to phases in different ways. Thus, no matter
which life cycle model is followed, the basic activities are included in all life cycle models
though the activities may be carried out in different orders in different life cycle models. During
any life cycle phase, more than one activity may also be carried out. For example, the design
phase might consist of the structured analysis activity followed by the structured design activity.
The development team must identify a suitable life cycle model for the particular project and
then adhere to it. Without a particular life cycle model, the development of a software
product would not proceed in a systematic and disciplined manner. When a software product is being
developed by a team there must be a clear understanding among team members about when and
what to do. Otherwise it would lead to chaos and project failure. This problem can be illustrated
by using an example. Suppose a software development problem is divided into several parts and
the parts are assigned to the team members. From then on, suppose the team members are
allowed the freedom to develop the parts assigned to them in whatever way they like. It is
possible that one member might start writing the code for his part, another might decide to
prepare the test documents first, and some other engineer might begin with the design phase of
the parts assigned to him. This would be one of the perfect recipes for project failure.
Software lifecycles that will be briefly reviewed include:
• Build and Fix model
• Waterfall and Modified Waterfall models
• Rapid Prototyping
• Boehm's spiral model
• BUILD & FIX MODEL:
In the build and fix model (also referred to as an ad hoc model), the software is developed
without any specification or design. An initial product is built, which is then repeatedly modified
until it (software) satisfies the user. That is, the software is developed and delivered to the user.
The user checks whether the desired functions are present. If not, then the software is changed
according to the needs by adding, modifying or deleting functions. This process goes on until the
user feels that the software can be used productively. However, the lack of design requirements
and repeated modifications result in loss of acceptability of software. Thus, software engineers
are strongly discouraged from using this development approach.
Fig. 1.8 Build and Fix Model
This model includes the following two phases.
• Build: In this phase, the software code is developed and passed on to the next phase.
• Fix: In this phase, the code developed in the build phase is made error free. Also, in addition to
the corrections to the code, the code is modified according to the user's requirements.
Various advantages and disadvantages associated with the build and fix model are listed in
Table 1.2: Advantages and Disadvantages of Build and Fix Model
Advantages:
• Requires little experience to execute or manage, other than the ability to develop software.
• Suitable for smaller software.
• Requires less project planning.
Disadvantages:
• No real means is available of assessing the progress, quality, and risks.
• Cost of using this process model is high, as it requires rework until the user's requirements are accomplished.
• Informal design of the software, as it involves an unplanned procedure.
• Maintenance of software built with this model is difficult.
• WATERFALL MODEL:
The waterfall model, also known as the sequential development model, was proposed by Winston
W. Royce in 1970. This development model has been used successfully in several organizations,
including the U.S. Department of Defense and NASA.
In the waterfall model, one stage precedes the next in a sequential manner, and a stage begins only
after the preceding stage has been successfully completed. This is a simple and disciplined software
development approach, but it is effective only when each phase is completed successfully
(100%) and on time.
The waterfall model also carries a reputation of not being a practical development model, as it
expects each phase to be completed before moving on to the next and does not
accommodate requirement changes during later phases of the SDLC. This model can be
followed successfully in small projects, when the requirements are clearly known and
resources are available as planned.
The classical waterfall model has the following stages, as shown in Fig 1.9:
• Requirements specification
• System & Software Design
• Implementation & Unit Testing
• Software Integration & Verification
• Operation & Maintenance
Fig. 1.9 Classical Waterfall Model
The weaknesses of the Waterfall Model are evident:
• It is very important to gather all possible requirements during the first phase of
requirements collection and analysis. If not all requirements are obtained at
once the subsequent phases will suffer from it. Reality is cruel. Usually only a
part of the requirements is known at the beginning and a good deal will be
gathered during the complete development time.
• Iterations are only meant to happen within the same phase or at best from the
start of the subsequent phase back to the previous phase. If the process is kept
according to the school book this tends to shift the solution of problems into
later phases which eventually results in a bad system design. Instead of solving
the root causes the tendency is to patch problems with inadequate measures.
• There may be a very big "Maintenance" phase at the end. The process only
allows for a single run through the waterfall. Eventually this could be only a
first sample phase, which means that further development is squeezed into a
last, never-ending maintenance phase and virtually run without a proper process.
When should the waterfall model be used? When:
• Requirements are very well known, clear and fixed.
• Product definition is stable.
• Technology is understood.
• There are no ambiguous requirements.
• Ample resources with the required expertise are freely available.
• The project is short.
Advantages of Waterfall Model:
• It allows for departmentalization and managerial control.
• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model: each phase has specific deliverables and a
review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.
• A schedule can be set with deadlines for each stage of development and a product can
proceed through the development process like a car in a car-wash, and theoretically, be
delivered on time.
Disadvantages of Waterfall Model:
• It does not allow for much reflection or revision.
• Once an application is in the testing stage, it is very difficult to go back and change
something that was not well-thought out in the concept stage.
• No working software is produced until late during the life cycle.
• High amounts of risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for the projects where requirements are at a moderate to high risk of changing.
• THE V MODEL:
The V-model represents a software development process (also applicable to hardware
development) which may be considered an extension of the waterfall model. Instead of moving
down in a linear way, the process steps are bent upwards after the coding phase, to form the
typical V shape. The V-Model demonstrates the relationships between each phase of the
development life cycle and its associated phase of testing. The horizontal and vertical axes
represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain
abstraction uppermost), respectively, as shown in Fig. 1.10.
Fig. 1.10 The V Model
The V represents the sequence of steps in a project life cycle development. It describes the
activities to be performed and the results that have to be produced during product development.
The left side of the "V" represents the decomposition of requirements, and creation of system
specifications. The right side of the V represents integration of parts and their validation.
It is sometimes said that validation can be expressed by the query "Are you building the right
thing?" and verification by "Are you building it right?" In practice, the usage of these terms
varies. Sometimes they are even used interchangeably.
The PMBOK Guide (also adopted as an IEEE standard) defines them as follows in its 4th edition:
Validation: "The assurance that a product, service, or system meets the needs of the customer and
other identified stakeholders. It often involves acceptance and suitability with external
customers. Contrast with verification."
Verification: "The evaluation of whether or not a product, service, or system complies with a
regulation, requirement, specification, or imposed condition. It is often an internal process.
Contrast with validation."
Advantages of V-Model:
• Simple and easy to use.
• Each phase has specific deliverables.
• Higher chance of success over the waterfall model due to the development of test plans early
on during the life cycle.
• Works well for small projects where requirements are easily understood.
Disadvantages of V-Model:
• Very rigid, like the waterfall model.
• Little flexibility and adjusting scope is difficult and expensive.
• Software is developed during the implementation phase, so no early prototypes of the
software are produced.
• Model doesn't provide a clear path for problems found during testing phases.
• MODIFIED WATERFALL MODEL:
This model came into existence because of the defects in the traditional
waterfall model. The phases of the modified model are similar to those of the traditional model:
• Requirement Analysis Phase
• Design Phase
• Implementation Phase
• Testing Phase
• Maintenance Phase
The main change, which is seen in the modified model, is that the phases in modified waterfall
model life cycle are permitted to overlap. Because the phases overlap, a lot of flexibility has been
introduced in the modified model of software engineering. At the same time, a number of tasks
can function concurrently, which ensures that the defects in the software are removed in the
development stage itself and the overhead cost of making changes to the software before
implementation is saved.
At the same time making changes to the basic design is also possible, as there are a number of
phases active at one point of time. In case there are any errors introduced because of the changes
made, rectifying them is also easy. This helps to reduce any oversight issues. The diagram of this
model does not differ much from the traditional waterfall model diagram, except that a verification
and validation step has been added to every phase of the model, as shown in Fig. 1.11.
Fig. 1.11 Modified Waterfall Model
Advantages of Modified Waterfall Model:
The main advantage of this model is that it takes a more relaxed approach to formal procedures,
documents and reviews, and it reduces the huge bundle of documents. Due to this, the
development team has more time to devote to the code and does not have to bother as much
about procedures, which in turn helps finish the product faster.
Disadvantages of Modified Waterfall Model:
Like everything else, this model also has drawbacks. It does lend flexibility to the
software development process, but tracking progress across all the stages becomes difficult,
as a number of phases are underway at the same time. Also, the modified waterfall model has not
done away with the stages of the traditional waterfall model, so some dependency on the
previous stage continues to exist. This dependency adds to the problem, and the project can
become complicated at times. The development team may be tempted to move back and forth
between the stages for fine tuning, which results in delays in project completion. However, this
drawback can be mitigated if benchmarks are set for every phase; benchmarking helps ensure
that the project stays on schedule.
In spite of the drawbacks in the modified waterfall model, this model is extensively used in the
software industries. Most of the drawbacks from the traditional waterfall model have been taken
care of and this has made it easier to work in the advanced stages.
• PROTOTYPING MODEL:
Software prototyping refers to building software application prototypes which display the
functionality of the product under development but may not actually hold the exact logic of the
original software.
Software prototyping is becoming very popular as a software development model, as it enables
developers to understand customer requirements at an early stage of development. It helps get valuable
feedback from the customer and helps software designers and developers understand what
exactly is expected from the product under development.
What is Software Prototyping?
• Prototype is a working model of software with some limited functionality.
• The prototype does not always hold the exact logic used in the actual software application
and is an extra effort to be considered under effort estimation.
• Prototyping is used to allow users to evaluate developer proposals and try them out before implementation.
• It also helps understand the requirements which are user specific and may not have been
considered by the developer during product design.
Following is the stepwise approach to design a software prototype:
• Basic Requirement Identification: This step involves understanding the very basic product
requirements, especially in terms of the user interface. More intricate details of
the internal design and external aspects like performance and security can be ignored at
this stage.
• Developing the initial Prototype: The initial prototype is developed in this stage, where
the very basic requirements are showcased and user interfaces are provided. These
features may not exactly work in the same manner internally in the actual software
developed, and workarounds are used to give the same look and feel to the customer in
the prototype developed.
• Review of the Prototype: The prototype developed is then presented to the customer and
the other important stakeholders in the project. The feedback is collected in an organized
manner and used for further enhancements in the product under development.
• Revise and enhance the Prototype: The feedback and the review comments are
discussed during this stage, and some negotiations happen with the customer based on
factors like time and budget constraints and the technical feasibility of the actual
implementation. The changes accepted are again incorporated in the new Prototype
developed and the cycle repeats until customer expectations are met.
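The four steps above can be sketched as a simple loop. All functions and data below are illustrative stubs, not part of any real process:

```python
# Schematic sketch of the prototyping cycle described above.
# Requirements and feedback are modeled as simple feature lists.

def build_initial_prototype(requirements):
    """Step 1-2: identify basics and build the initial prototype."""
    return {"features": list(requirements), "revision": 0}

def review(prototype, expectations):
    """Step 3: customer review; feedback is the list of missing features."""
    return [f for f in expectations if f not in prototype["features"]]

def revise(prototype, feedback):
    """Step 4: incorporate accepted changes into a new revision."""
    prototype["features"].extend(feedback)
    prototype["revision"] += 1
    return prototype

basic_requirements = ["login screen"]
customer_expectations = ["login screen", "search form", "report view"]

proto = build_initial_prototype(basic_requirements)
while True:
    feedback = review(proto, customer_expectations)
    if not feedback:          # customer expectations met -> stop the cycle
        break
    proto = revise(proto, feedback)

print(proto["revision"])      # number of revision cycles that were needed
```

The loop terminates exactly when a review produces no feedback, which is the "cycle repeats until customer expectations are met" condition in the text.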
Prototypes can have horizontal or vertical dimensions. A horizontal prototype displays the
user interface for the product and gives a broader view of the entire system, without
concentrating on internal functions. A vertical prototype, on the other hand, is a detailed
elaboration of a specific function or a subsystem in the product.
The purpose of both horizontal and vertical prototype is different. Horizontal prototypes
are used to get more information on the user interface level and the business
requirements. It can even be presented in the sales demos to get business in the market.
Vertical prototypes are technical in nature and are used to get details of the exact
functioning of the sub systems. For example, database requirements, interaction and data
processing loads in a given subsystem. Following is a diagrammatic representation of the
prototyping model listing the activities in each phase:
Fig. 1.12 Prototyping Model
Software Prototyping Types:
There are different types of software prototypes used in the industry. Following are the
major software prototyping types used widely:
• Throwaway/Rapid Prototyping: Throwaway prototyping is also called rapid or close-ended
prototyping. This type of prototyping requires very little effort and minimal
requirement analysis to build a prototype. Once the actual requirements are understood,
the prototype is discarded and the actual system is developed with a much clearer
understanding of user requirements.
• Evolutionary Prototyping: Evolutionary prototyping, also called breadboard
prototyping, is based on building actual functional prototypes with minimal functionality
in the beginning. The prototype developed forms the heart of the future prototypes, on top
of which the entire system is built. In evolutionary prototyping, only well-understood
requirements are included in the prototype, and further requirements are added as and when
they are understood.
• Incremental Prototyping: Incremental prototyping refers to building multiple functional
prototypes of the various sub systems and then integrating all the available prototypes to
form a complete system.
• Extreme Prototyping: Extreme prototyping is used in the web development domain. It
consists of three sequential phases. First, a basic prototype with all the existing pages is
presented in HTML format. Then the data processing is simulated using a prototype
services layer. Finally, the services are implemented and integrated into the final prototype.
The name draws attention to the second phase of the process, where a fully functional UI
is developed with very little regard to the actual services.
Software Prototyping Application:
Software Prototyping is most useful in development of systems having high level of user
interactions such as online systems. Systems which need users to fill out forms or go
through various screens before data is processed can use prototyping very effectively to
give the exact look and feel even before the actual software is developed.
Software that involves a great deal of data processing, where most of the functionality is
internal with very little user interface, does not usually benefit from prototyping.
Prototype development could be an extra overhead in such projects and may need a lot of
extra effort.
Software Prototyping Pros and Cons:
Software prototyping should be used in suitable cases, and the decision should be taken very
carefully so that the effort spent in building the prototype adds considerable value to the
final software developed. The model has its own pros and cons, discussed below.
Advantages:
• Increased user involvement in the product even before its implementation.
• Since a working model of the system is displayed, the users get a better understanding of the system being developed.
• Reduces time and cost, as defects can be detected much earlier.
• Quicker user feedback is available, leading to better solutions.
• Missing functionality can be identified easily.
• Confusing or difficult functions can be identified.
Disadvantages:
• Risk of insufficient requirement analysis owing to too much dependency on the prototype.
• Users may get confused between the prototypes and the actual system.
• Practically, this methodology may increase the complexity of the system, as the scope of the system may expand beyond original plans.
• Developers may try to reuse existing prototypes to build the actual system, even when it is not feasible.
• The effort invested in building prototypes may be too much if not monitored properly.
• THE SPIRAL MODEL:
The spiral model combines the idea of iterative development with the systematic, controlled
aspects of the waterfall model.
Spiral model is a combination of iterative development process model and sequential linear
development model i.e. waterfall model with very high emphasis on risk analysis.
It allows for incremental releases of the product, or incremental refinement through each
iteration around the spiral.
• Spiral Model design:
The spiral model has four phases. A software project repeatedly passes through these phases in
iterations called Spirals.
• Identification: This phase starts with gathering the business requirements in the baseline
spiral. In the subsequent spirals, as the product matures, identification of system
requirements, subsystem requirements, and unit requirements is done in this phase.
This also includes understanding the system requirements by continuous communication
between the customer and the system analyst. At the end of the spiral the product is
deployed in the identified market.
• Evaluation and Risk Analysis: Risk Analysis includes identifying, estimating, and
monitoring technical feasibility and management risks, such as schedule slippage and
cost overrun. After testing the build at the end of the first iteration, the customer evaluates
the software and provides feedback.
• Design: Design phase starts with the conceptual design in the baseline spiral and involves
architectural design, logical design of modules, physical product design and final design
in the subsequent spirals.
• Construct or Build: The construct phase refers to production of the actual software product
at every spiral. In the baseline spiral, when the product is just thought of and the design is
being developed, a POC (Proof of Concept) is developed in this phase to get customer
feedback.
Then, in the subsequent spirals, with higher clarity on requirements and design details, a
working model of the software called a build is produced with a version number. These
builds are sent to the customer for feedback.
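The four spiral phases can be sketched as an iteration that produces a versioned build each time around. The risk function and threshold below are invented purely for illustration:

```python
# Schematic sketch of spiral iterations: each loop passes through the four
# phases (identification, risk analysis, design, construct) and yields a
# versioned build. All functions are illustrative stubs.

def identify(iteration):
    return f"requirements v{iteration}"

def risk_analysis(iteration):
    # Invented model: assessed risk drops as the product matures.
    return 1.0 / (iteration + 1)

def design(reqs):
    return f"design for {reqs}"

def construct(iteration):
    return f"build-{iteration}"

builds = []
risk = 1.0
iteration = 0
while risk > 0.3:                 # keep spiraling while assessed risk is high
    reqs = identify(iteration)    # phase 1: identification
    risk = risk_analysis(iteration)  # phase 2: evaluation and risk analysis
    d = design(reqs)              # phase 3: design
    builds.append(construct(iteration))  # phase 4: construct or build
    iteration += 1

print(builds)
```

The loop's exit condition mirrors the spiral model's risk-driven nature: iteration stops only when the risk analysis phase judges the remaining risk acceptable.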
Following is a diagrammatic representation of Boehm's spiral model (Fig. 1.13) listing the
activities in each phase:
Fig. 1.13 Boehm's Spiral Model
Based on the customer evaluation, software development process enters into the next
iteration and subsequently follows the linear approach to implement the feedback
suggested by the customer. The process of iterations along the spiral continues
throughout the life of the software.
Spiral Model Applications:
The Spiral Model is very widely used in the software industry, as it is in sync with the natural
development process of any product, i.e. learning with maturity, and it involves
minimum risk for the customer as well as the development firms. Following are the
typical uses of the Spiral model:
• Whenever there is a budget constraint and the risk evaluation is important.
• For medium to high-risk projects.
• Long-term project commitment because of potential changes to economic priorities as the
requirements change with time.
• Customer is not sure of their requirements, which is a usual case.
• Requirements are complex and need evaluation to get clarity.
• New product line which should be released in phases to get enough customer feedback.
• Significant changes are expected in the product during the development cycle.
Spiral Model Pros and Cons:
The advantage of spiral lifecycle model is that it allows for elements of the product to be
added in when they become available or known. This assures that there is no conflict with
previous requirements and design.
This method is consistent with approaches that have multiple software builds and releases
and allows for making an orderly transition to a maintenance activity. Another positive
aspect is that the spiral model forces early user involvement in the system development
effort.
On the other hand, it takes very strict management to complete such products, and there is
a risk of running the spiral in an indefinite loop. So the discipline of change and the extent
to which change requests are taken are very important to develop and deploy the product
successfully.
The following table lists out the pros and cons of Spiral SDLC Model:
Advantages:
• Changing requirements can be accommodated.
• Allows for extensive use of prototypes.
• Requirements can be captured more accurately.
• Users see the system early.
• Development can be divided into smaller parts, and the more risky parts can be developed earlier, which helps better risk management.
Disadvantages:
• Management is more complex.
• End of project may not be known early.
• Not suitable for small or low-risk projects, and could be expensive for small projects.
• Process is complex.
• Spiral may go on indefinitely.
• Large number of intermediate stages requires excessive documentation.
1.7 SOFTWARE PLANNING
Software project management begins with a set of activities that are collectively called project
planning. Before the project can begin, the manager and the software team must estimate the
work to be done, the resources that will be required, and the time that will elapse from start to
finish. Whenever estimates are made, we look into the future and accept some degree of
uncertainty as a matter of course.
What is planning?
Software project planning actually encompasses several activities. However, in the context of
this chapter, planning involves estimation—the attempt to determine how much money, how
much effort, how many resources, and how much time it will take to build a specific software-
based system or product.
Are there any methods for ensuring the accuracy of the estimates?
Although accuracy is very hard to guarantee until the project has been completed, there are
methods that draw on experience, a systematic approach, solid historical data, and allowances
for complexity and risk. These methods can be used to improve the accuracy of the estimates.
1.7.2 Observations on Estimating - "What will go wrong before it actually does".
Estimation of resources, cost, and schedule for a software engineering effort requires experience,
access to good historical information, and the courage to commit to quantitative predictions
when qualitative information is all that exists. Estimation carries inherent risk, and this risk leads to uncertainty.
Project complexity and size play a very important role in estimating the project. The other
important points are noted below:
• An important activity that should not be conducted in a haphazard manner.
• Techniques for estimating time and effort do exist.
• A degree of uncertainty must be taken into account for estimation.
• In some projects, it is possible to revise the estimate when the customer makes a change.
"The more you know, the better you estimate."
What are the objectives of Project Planning?
The objective of software project planning is to provide a framework that enables the manager to
make reasonable estimates of resources, cost, and schedule. These estimates are made within a
limited time frame at the beginning of a software project and should be updated regularly as the
project progresses. In addition, estimates should attempt to define best case and worst case
scenarios so that project outcomes can be bounded.
1.7.3 The Activities involved in Software Project Planning:
Software scope and feasibility
Scope – Functions & features of the S/w that are to be delivered to the end users are assessed to
establish a project scope.
Scope of software can be defined by the project manager by analyzing:
• A narrative description of the S/w developed after communication with all stakeholders
• A set of use cases developed by end users
A general scope statement comprises input, output, content, consequences, performance,
constraints, interfaces, reliability, scalability, maintainability, and interoperability matters of the
software product being built.
Feasibility - "Not everything imaginable is feasible."
Software feasibility is a study of whether the software can be built economically while meeting all the
requirements of the client. The answer to this question depends upon the following dimensions:
• Technical feasibility
A study which focuses on finding whether a project is technically feasible or not. It also explores
the nature of the technology to see whether defects can be reduced to match the needs of the
customer.
• Financial feasibility
A study which focuses on finding whether the project can be completed at a cost that is affordable
to its stakeholders, the development company, and the market.
• Time feasibility
A study which focuses on finding whether the product can be released in time?
• Resource feasibility
A study which focuses on finding the total resource requirement for completing a specific project.
• Identifying Resources:
The second software planning task is estimation of the resources required to accomplish the
software development effort. The resources include the development environment (hardware and
software tools), which acts as the foundation of the resources and provides the infrastructure to
support the development effort, and reusable software components, software building
blocks that can dramatically reduce development costs and accelerate delivery. The primary
resource, however, is people.
Each resource is specified with four characteristics: a description of the resource, a statement of
availability, the time when the resource is required, and the duration of time it will be applied.
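These four characteristics can be captured in a small record type; the field names and sample values below are assumptions for illustration:

```python
# Sketch: a resource record with the four characteristics named above.
from dataclasses import dataclass

@dataclass
class Resource:
    description: str      # what the resource is
    availability: str     # statement of availability
    required_from: str    # chronological time the resource is required
    duration: str         # window of time it will be applied

# Hypothetical example entry for a project's resource list.
db_expert = Resource(
    description="senior database engineer",
    availability="part-time, shared with another project",
    required_from="start of detailed design",
    duration="8 weeks",
)
print(db_expert.description)
```

A resource list for a project is then simply a collection of such records, one per human, software, or hardware resource.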
The following resources need to be mentioned in the resource list of all software engineering
projects, also shown in Fig. 1.14 below:
1. Human Resource:
Based on the evaluated effort, the human resources will be selected. Both organizational position
(e.g., manager, senior software engineer) and specialty (e.g., telecommunications, database,
client/server) are specified.
Fig. 1.14 Resource triangle of a software project
2. Reusable Software Components:
Component-based software engineering discussed previously emphasizes reusability -that is, the
creation and reuse of software building blocks. Such building blocks, often called components,
must be cataloged for easy reference, standardized for easy application, and validated for easy
integration. According to Bennatan, there are four different categories of components:
• Off-the-shelf components – existing software, acquired from a third party, with no
modification required (outsourcing).
• Full-experience components – Obtained from previous project code which is similar and
team members have full experience in this application area.
• Partial-experience components – the current project code is related to previous project
code but requires substantial modification, and the team has only limited experience in the
application area.
• New components – to be built from scratch for the new project.
3. Hardware/Software Tools:
The environment that supports the software project, often called the software engineering
environment (SEE), incorporates hardware and software. Hardware provides a platform that
supports the tools (software) required to produce the work products that are an outcome of good
software engineering practice.
Since organizations allow multiple projects to run at a time, the SEE resources need to be
allocated properly to the project that requires it.
Software Project Estimation:
The third important activity of project planning is software project estimation. Software cost and
effort estimation will never be an exact science. Too many variables (human, technical,
environmental, and political) can affect the ultimate cost of software and the effort applied to
develop it. However, software project estimation can be transformed from black magic into a
series of systematic steps that provide estimates with acceptable risk.
There are some options to increase the reliability of the estimations:
• Delaying estimation until the completion of the project to get 100% accuracy.
• Estimating a project based on similar projects that are already completed
• Estimation by simple decomposition (Divide & Conquer)
• Estimation using one or more empirical models
The first option may be attractive but is not practical. The second can work reasonably well, but
past projects have not always been good indicators of future results. The third and fourth options
are viable (practical), and they can be applied in tandem.
1.7.5 Decomposition Techniques – “break it into pieces”
Developing a cost and effort estimate for a software project is too complex to be done in one
piece. For this reason, the problem is decomposed and re-characterized as a set of smaller and,
hopefully, more manageable problems or pieces.
The size of the project must be determined before decomposing it into smaller units.
The accuracy of a software project estimate is predicated on a number of things:
• The degree to which the planner has properly estimated the size of the product to be built.
• The ability to translate the size estimate into human effort, calendar time, and dollars.
• The degree to which the project plan reflects the abilities of the software team.
• The stability of product requirements and the environment that supports the software
engineering effort.
Of these factors, the one with the greatest influence is size. Size can be determined by LOC
(Lines of Code) or FP (Function Points).
• Problem-based Decomposition:
LOC and FP estimation are distinct estimation techniques. Yet both have a number of
characteristics in common. The project planner begins with a bounded statement of software
scope and from this statement attempts to decompose software into problem functions that can
each be estimated individually. LOC or FP (the estimation variable) is then estimated for each
function. In problem-based decomposition, the following steps are involved:
• The problem is divided into a number of small segments.
• The size of each segment is determined.
• Based on the size, the effort and cost required are estimated.
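The decomposition steps just listed can be sketched in a few lines of code: estimate LOC per segment, sum for the total size, then derive effort and cost from historical averages. All segment names, the productivity figure, and the labor rate below are invented for illustration, not taken from the text.

```python
# Problem-based decomposition: estimate size per segment, sum, then
# derive effort and cost from historical organizational averages.
segments = {                # estimated LOC per functional segment
    "user interface": 2300,
    "database": 3400,
    "report generation": 1800,
}
productivity = 620          # LOC per person-month (assumed historical average)
labor_rate = 8000           # cost per person-month (assumed)

total_loc = sum(segments.values())
effort = total_loc / productivity   # person-months
cost = effort * labor_rate
print(total_loc, round(effort, 1), round(cost))  # → 7500 12.1 96774
```

The key design point is that size is estimated bottom-up per segment, while the size-to-effort conversion comes from the organization's own historical data, echoing the accuracy factors listed earlier.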
• Process-based Decomposition:
The most common technique for estimating a project is to base the estimate on the process that
will be used. That is, the process is decomposed into a relatively small set of tasks and the effort
required to accomplish each task is estimated.
Like the problem-based techniques, process-based estimation begins with a delineation of
software functions obtained from the project scope. A series of software process activities must
be performed for each function. Once problem functions and process activities are combined, the
planner estimates the effort (e.g., person-months) that will be required to accomplish each
software process activity for each software function.
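The function-by-activity estimation just described can be laid out as a small effort matrix whose grand total is the project effort. The functions, activities, and person-month figures below are illustrative assumptions only.

```python
# Process-based estimation: person-month estimates per (function, activity);
# total project effort is the sum of all cells.
effort_pm = {
    "user interface": {"analysis": 0.50, "design": 2.50, "code": 0.40, "test": 5.00},
    "database":       {"analysis": 0.75, "design": 4.00, "code": 0.60, "test": 3.00},
}
total = sum(v for per_activity in effort_pm.values() for v in per_activity.values())
print(round(total, 2))  # → 16.75
```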
• LOC – Total Lines of Code:
The size of a product can be measured by total lines of code or by function points. The degree of
partitioning is very important for determining the accuracy of the LOC value: if the problem is
decomposed into a larger number of segments, the accuracy of the LOC estimate is higher. LOC
estimation uses the three-point method, whose points are:
• Most likely LOC value
• Optimistic LOC value
• Pessimistic LOC value
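The three points can be combined into a single expected value. A common choice (a sketch, not mandated by the text) is the beta-distribution weighting, which counts the most likely value four times as heavily as the two extremes; the sample LOC figures are illustrative.

```python
def three_point_loc(optimistic, most_likely, pessimistic):
    """Expected LOC from a three-point estimate using the common
    beta-distribution weighting: (opt + 4*likely + pess) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# e.g. a segment estimated at 4600 / 6900 / 8600 LOC (illustrative numbers)
print(three_point_loc(4600, 6900, 8600))  # → 6800.0
```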
1.7.6 Different Estimation Models:
There are several estimating methods for the cost and effort required to develop a software
engineering product. The following list comprises some widely used methods:
• Empirical Estimation models
• using some software metric (usually size) & historical cost information
• Function Points
• Regression Models: COCOMO
• Expert judgement
• Estimation by analogy
• Parkinson's Law
• Pricing to win
• Top-down Estimation
• on the basis of logical functions rather than on the basis of components required
to implement the functions
• Bottom-up Estimation
• aggregate the cost of components
• Empirical Estimation Models:
Empirical estimation models use historical data and mathematical models to estimate the cost
and effort in person-months. COCOMO and COCOMO II are popular techniques; COCOMO,
developed by Barry Boehm in 1981, is one of the most widely used software estimation models in
the world. COCOMO predicts the effort and schedule for a software product based on inputs
relating to the size of the software and a number of cost drivers that affect productivity. Basic
COCOMO has three different modes that reflect the complexity of the project. They are:
Organic
• developed in a familiar, stable environment
• similar to previously developed projects
• relatively small and requires little innovation
Semi-detached (intermediate) - between Organic and Embedded
Embedded
• tight, inflexible constraints and interface requirements
• the product requires great innovation
COCOMO uses the following equations to find the effort and the scheduled development time:
Effort = a * (KLOC)^b (person-months)
Dev_Time = c * (Effort)^d (months)
where the values of a, b, c, and d depend upon the mode (Organic, Semi-detached, or Embedded).
For an Organic-mode project where the code volume = 30000 LOC, COCOMO calculates the
effort as follows:
30000 LOC = 30 KLOC
Effort = 2.4 * (30)^1.05 ≈ 85 PM
Dev_Time = 2.5 * (85)^0.38 ≈ 13.5 months
The values of a,b,c,d are taken from the following table:
Variables a b c d
Organic 2.4 1.05 2.5 0.38
Semi-Detached 3.0 1.12 2.5 0.35
Embedded 3.6 1.20 2.5 0.32
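The worked example can be checked with a few lines of code built directly from the coefficient table; this is a sketch, and the mode-name keys are my own choice.

```python
# Basic COCOMO: Effort = a*(KLOC)^b person-months, Dev_Time = c*(Effort)^d months.
COEFFICIENTS = {                    # mode: (a, b, c, d) from the table above
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b          # person-months
    dev_time = c * effort ** d      # months
    return effort, dev_time

effort, dev_time = basic_cocomo(30, "organic")
print(f"{effort:.0f} PM, {dev_time:.1f} months")  # → 85 PM, 13.5 months
```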
• COCOMO II
Basic COCOMO became obsolete over time, since it was developed to work well with the
Waterfall model and mainframe applications. The evolutionary process models that were
developed after the Waterfall model required a new way of estimation, and the accuracy of
estimates produced using basic COCOMO was low. This scenario led to the development of
COCOMO II in 1995.
Like basic COCOMO, COCOMO II also uses three different models, which serve as tools
for applying COCOMO at various stages of a software project.
• Application Composition Model:
• To estimate the prototype
• Used during requirements gathering
• Uses Object Points
• Early Design Model:
• Used when requirements have been stabilized and
• basic architecture has been Established
• Similar to the original COCOMO
• uses Function Points as size estimation
• Post-Architecture Model :
• to get a more accurate effort estimate
• Used during the construction of the software
• Uses FPs or SLOC as size measure
• Most detailed COCOMO II model
For convenience, we will look at the Application Composition Model.
This model takes the following inputs:
• Object points
• % of reuse
• Productivity rate
where the object points count the total number of:
• User interfaces
• Components used
Each object point is classified into one of three complexity levels (simple, medium, or difficult)
based on the weights table developed by Boehm.
The productivity rate is calculated as:
PROD = (process or developer capability) × (organizational maturity)
New object points are calculated using the object-point count and the percentage of reuse:
NOP = (object points) × [(100 - %reuse)/100]
Based on the new object points and the productivity rate, the effort is estimated using the
following equation:
Effort = NOP / PROD
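Putting the NOP and effort formulas together gives a small calculator. The productivity value used in the example is an assumption taken from Boehm's published table, where a nominal rating corresponds to roughly 13 NOP per person-month.

```python
def application_composition_effort(object_points, pct_reuse, prod):
    """COCOMO II Application Composition effort (person-months).
    prod: productivity rate in new object points per person-month."""
    nop = object_points * (100 - pct_reuse) / 100   # new object points
    return nop / prod

# e.g. 100 object points, 20% reuse, nominal productivity of 13 NOP/PM
print(round(application_composition_effort(100, 20, 13), 2))  # → 6.15
```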
1.8 PROJECT SCHEDULING
To build complex software systems, many engineering tasks need to occur in parallel with one
another to complete the project on time. The output from one task often determines when another
may begin. Software engineers need to build activity networks that take these task
interdependencies into account. Managers find that it is difficult to ensure that a team is working
on the most appropriate tasks without building a detailed schedule and sticking to it. This
requires that tasks are assigned to people, milestones are created, resources are allocated for each
task, and progress is tracked.
Root Causes for Late Software:
• Unrealistic deadline established outside the team
• Changing customer requirements not reflected in schedule changes
• Underestimating the resources required to complete the project
• Risks that were not considered when project began
• Technical difficulties that could not have been predicted in advance
• Human difficulties that could not have been predicted in advance
• Miscommunication among project staff resulting in delays
• Failure by project management to recognize that the project is falling behind schedule, and
failure to take corrective action to correct the problem
How to Deal With Unrealistic Schedule Demands:
• Perform a detailed project estimate for project effort and duration using historical data.
• Use an incremental process model that will deliver critical functionality imposed by deadline,
but delay other requested functionality.
• Meet with the customer and explain why the deadline is unrealistic using your estimates
based on prior team performance.
• Offer an incremental development and delivery strategy as an alternative to increasing
resources or allowing the schedule to slip beyond the deadline.
Project Scheduling Perspectives:
• Project scheduling is an activity that distributes estimated effort across the planned project
duration by allocating the effort to specific software engineering tasks.
• One view is that the end-date for the software release is set externally and that the software
organization is constrained to distribute effort in the prescribed time frame.
• Another view is that the rough chronological bounds have been discussed by the developers
and customers, but the end-date is best set by the developer after carefully considering how
to best use the resources needed to meet the customer's needs.
• Schedules evolve over time as the first macroscopic schedule is refined into the detailed
schedule for each planned software increment.
Software Project Scheduling Principles:
• Compartmentalization - the product and process must be decomposed into a manageable
number of activities and tasks
• Interdependency - tasks that can be completed in parallel must be separated from those that
must be completed serially
• Time allocation - every task has start and completion dates that take the task
interdependencies into account
• Effort validation - the project manager must ensure that on any given day there are enough staff
members assigned to complete the tasks within the time estimated in the project plan
• Defined responsibilities - every scheduled task needs to be assigned to a specific team member
• Defined outcomes - every task in the schedule needs to have a defined outcome (usually a
work product or deliverable)
• Defined milestones - a milestone is accomplished when one or more work products from an
engineering task have passed quality review
Relationship between People and Effort:
• Adding people to a project after it is behind schedule often causes the schedule to slip further
• The relationship between the number of people on a project and overall productivity is not
linear (e.g. 3 people do not produce 3 times the work of 1 person, if the people have to work
in cooperation with one another)
• The main reasons for using more than 1 person on a project are to get the job done more
rapidly and to improve software quality.
Project Effort Distribution:
• The 40-20-40 rule:
• 40% front-end analysis and design
• 20% coding
• 40% back-end testing
• Generally accepted guidelines are:
• 02-03 % planning
• 10-25 % requirements analysis
• 20-25 % design
• 15-20 % coding
• 30-40 % testing and debugging
Software Project Types:
• Concept development - initiated to explore a new business concept or a new application of
some new technology
• New application development - new product requested by customer
• Application enhancement - major modifications to function, performance, or interfaces
(observable to user)
• Application maintenance - correcting, adapting, or extending existing software (not
immediately obvious to user)
• Reengineering - rebuilding all (or part) of a legacy system
Factors Affecting Task Set:
• Size of project
• Number of potential users
• Mission criticality
• Application longevity
• Requirement stability
• Ease of customer/developer communication
• Maturity of applicable technology
• Performance constraints
• Embedded/non-embedded characteristics
• Project staffing
• Reengineering factors
Concept Development Tasks:
• Concept scoping - determine overall project scope
• Preliminary concept planning - establishes the development team's ability to undertake the
work implied by the project scope
• Technology risk assessment - evaluates the risk associated with the technology implied by
the software scope
• Proof of concept - demonstrates the feasibility of the technology in the software context
• Concept implementation - the concept is represented in a form that can be used to sell it to the
customer
• Customer reaction to concept - solicits feedback on the new technology from the customer
• Task networks (activity networks) are graphic representations of the task
interdependencies and can help define a rough schedule for a particular project
• Scheduling tools should be used to schedule any non-trivial project.
• Program evaluation and review technique (PERT) and critical path method (CPM) are
quantitative techniques that allow software planners to identify the chain of dependent tasks
in the project work breakdown structure (WBS) that determine the project duration time.
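As a sketch of the CPM idea: the critical path is the longest path through the task network, and its length is the minimum project duration. The task names, durations, and dependencies below are invented for illustration.

```python
# Minimal critical-path method (CPM): tasks and their dependencies form
# a DAG; the critical path is the longest path through it.
def critical_path(tasks):
    """tasks: {name: (duration, [predecessors])} -> (duration, path)"""
    finish = {}        # memoized earliest-finish time per task
    best_pred = {}     # predecessor on the longest path into each task
    def ef(name):
        if name not in finish:
            dur, preds = tasks[name]
            pred_finish = 0
            best_pred[name] = None
            for p in preds:
                if ef(p) > pred_finish:
                    pred_finish = ef(p)
                    best_pred[name] = p
            finish[name] = pred_finish + dur
        return finish[name]
    end = max(tasks, key=ef)            # task with the latest finish time
    path, node = [], end
    while node is not None:             # walk back along best predecessors
        path.append(node)
        node = best_pred[node]
    return finish[end], list(reversed(path))

tasks = {
    "scope":  (2, []),
    "design": (4, ["scope"]),
    "code":   (6, ["design"]),
    "docs":   (3, ["scope"]),
    "test":   (4, ["code", "docs"]),
}
print(critical_path(tasks))  # → (16, ['scope', 'design', 'code', 'test'])
```

Note that "docs" has slack: it finishes at time 5 but "test" cannot start until "code" finishes at time 12, which is why it is off the critical path.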
Example of a Critical Path Method:
• Timeline (Gantt) charts enable software planners to determine what tasks will need to be
conducted at a given point in time (based on estimates for effort, start time, and duration
for each task).
Example of a Timeline/ Gantt Chart:
• The best indicator of progress is the completion and successful review of a defined
software work product.
Time-boxing is the practice of deciding a priori the fixed amount of time that can be spent on
each task. When the task's time limit is exceeded, development moves on to the next task (with
the hope that a majority of the critical work was completed before time ran out).
1.9 PERSONNEL PLANNING, STAFFING AND TEAM STRUCTURE:
• Staffing & Personnel Planning:
Staffing is the practice of finding, evaluating, and establishing a working relationship with future
colleagues on a project, and releasing them when they are no longer needed. Staffing involves
finding people, who may be newly hired, already working for the company (organization), or
working for competing companies.
In knowledge economies, where talent becomes the new capital, this discipline takes on added
significance to help organizations achieve a competitive advantage in each of their marketplaces.
"Staffing" can also refer to the industry and/or type of company that provides the functions
described in the previous definition for a price. A staffing company may offer a variety of
services, including temporary help, permanent placement, temporary-to-permanent placement,
long-term and contract help, managed services (often called outsourcing), training, human
resources consulting, and PEO arrangements (Professional Employer Organization), in which a
staffing firm assumes responsibility for payroll, benefits, and other human resource functions.
STAFFING A SOFTWARE PROJECT
Staffing must be done in a way that maximizes the creation of some value to a project. In this
sense, the semantics of value must be addressed regarding project and organizational
characteristics. Some projects are schedule driven: to create value for such project could mean to
act in a way that reduces its schedule or risks associated to it. Other projects may be driven by
budget, resource allocation, and so on. So, the maximization target of a staff allocation
optimizer cannot be fixed by a single utility function; several such functions should be
available so that the manager can decide which best fits the project under analysis. It is
considered that a characteristic may be a skill, a capability, an experience, some knowledge, a
role in the organization or in the project, and others. Each characteristic is associated with a
rating scale, with the intensity levels the characteristic may assume.
• Rules to perform Staffing:
• A person can only be allocated to an activity if he or she possesses all the
characteristics demanded by the activity, at an intensity level greater than or equal to the
level demanded.
• A person can only be allocated to an activity if he or she is available to perform the
activity in the period it needs to be performed (reasons of unavailability could be
allocation to other activities, vacation, etc).
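The two rules above can be expressed as a small predicate. The data shapes (dicts of characteristic levels, availability as (start, end) tuples) are my own illustrative assumptions, not a prescribed representation.

```python
# Staffing rules sketch: a person may be assigned to an activity only if
# (1) every demanded characteristic is possessed at the demanded intensity
# level or higher, and (2) the person is available over the activity period.
def can_allocate(person, activity):
    # Rule 1: characteristic intensity levels
    for characteristic, needed_level in activity["demands"].items():
        if person["levels"].get(characteristic, 0) < needed_level:
            return False
    # Rule 2: availability over the activity's period
    start, end = activity["period"]
    return any(a_start <= start and end <= a_end
               for a_start, a_end in person["available"])

dev = {"levels": {"java": 3, "db": 2}, "available": [(1, 10)]}
task = {"demands": {"java": 2}, "period": (2, 5)}
print(can_allocate(dev, task))  # → True
```

A real allocator would then maximize one of the utility functions mentioned above (schedule, budget, risk) over all assignments that satisfy this predicate.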
This includes the allocation of the right persons to the right types of tasks, that is, tasks of
which they are capable. The capability of the individuals should always match the overall
objective of the project. In software engineering, personnel planning should be in accordance
with the final development of the project.
Common to all players, the locker-room agreement includes the team structure, while team
members are acting in a time-critical environment with limited or no communication. In this
section, we present a teamwork structure. It defines:
• Flexible agent roles, with protocols for switching among them;
• Collections of roles built into team formations; and
• Multi-step, multi-agent plans for execution in specific situations: set-plays.
An example of team structuring is given below (Fig. 1.15), showing mixed and decentralized
team architectures.
Fig. 1.15 Team Structuring
The teamwork structure indirectly affects the agent external behaviors by changing the agents'
internal states via internal behaviors.