1. Discuss the structured system analysis and design
methodologies.

Structured systems analysis and design method (SSADM) is a systems approach to the
analysis and design of information systems. SSADM was produced for the Central
Computer and Telecommunications Agency (now Office of Government Commerce), a
UK government office concerned with the use of technology in government, from 1980
onwards.

Overview
SSADM is a waterfall method for the analysis and design of information systems.
SSADM can be thought to represent a pinnacle of the rigorous document-led approach to
system design.

The names "Structured Systems Analysis and Design Method" and "SSADM" are
registered trademarks of the Office of Government Commerce (OGC), which is an office
of the United Kingdom's Treasury. [1]

History
The principal stages of the development of SSADM were:[2]

   •   1980: Central Computer and Telecommunications Agency (CCTA) evaluates
       analysis and design methods.
   •   1981: Learmonth & Burchett Management Systems (LBMS) method chosen from a
       shortlist of five.
   •   1983: SSADM made mandatory for all new information system developments.
   •   1998: Platinum Technology acquires LBMS.
   •   2000: CCTA renamed SSADM as "Business System Development". The method
       was repackaged into 15 modules and another 6 modules were added.[3][4]

SSADM techniques
The three most important techniques that are used in SSADM are:

Logical data modeling
       This is the process of identifying, modeling and documenting the data
       requirements of the system being designed. The data are separated into entities
       (things about which a business needs to record information) and relationships (the
       associations between the entities).
Data Flow Modeling
       This is the process of identifying, modeling and documenting how data moves
       around an information system. Data flow modeling examines processes
       (activities that transform data from one form to another), data stores (the
       holding areas for data), external entities (what sends data into a system or
       receives data from a system), and data flows (routes by which data can flow).
Entity Behavior Modeling
       This is the process of identifying, modeling and documenting the events that
       affect each entity and the sequence in which these events occur.
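
To make the three views concrete, here is a minimal sketch in Python that represents each technique as simple data structures; the Customer and Order entities and their events are hypothetical examples, not part of SSADM itself.

from dataclasses import dataclass, field

# Logical data modeling: entities and the relationships between them.
@dataclass
class Entity:
    name: str                      # a thing the business records information about
    attributes: list = field(default_factory=list)

@dataclass
class Relationship:
    source: Entity                 # e.g. Customer
    target: Entity                 # e.g. Order
    cardinality: str               # "1:1", "1:N" or "M:N"

# Data flow modeling: a route by which data moves between processes,
# data stores and external entities.
@dataclass
class DataFlow:
    origin: str
    destination: str
    payload: str

customer = Entity("Customer", ["customer_id", "name"])
order = Entity("Order", ["order_id", "order_date"])
places = Relationship(customer, order, "1:N")

# Entity behavior modeling: the ordered events that affect one entity.
order_life_history = ["Order placed", "Order amended", "Order fulfilled", "Order archived"]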

Stages
The SSADM method involves the application of a sequence of analysis, documentation
and design tasks concerned with the following.

Stage 0 – Feasibility study

In order to determine whether or not a given project is feasible, there must be some form
of investigation into the goals and implications of the project. For very small scale
projects this may not be necessary at all, as the scope of the project is easily understood.
In larger projects, the feasibility study may be carried out only informally, either because
there is no time for a formal study or because the project is a “must-have” and will have
to be done one way or the other.

When a feasibility study is carried out, there are four main areas of consideration:

   •   Technical – is the project technically possible?
   •   Financial – can the business afford to carry out the project?
   •   Organizational – will the new system be compatible with existing practices?
   •   Ethical – is the impact of the new system socially acceptable?

To answer these questions, the feasibility study is effectively a condensed version of a
full-blown systems analysis and design. The requirements and users are analyzed to
some extent, some business options are drawn up, and even some details of the technical
implementation are considered.

The product of this stage is a formal feasibility study document. SSADM specifies the
sections that the study should contain including any preliminary models that have been
constructed and also details of rejected options and the reasons for their rejection.

Stage 1 – Investigation of the current environment

This is one of the most important stages of SSADM. The developers of SSADM
understood that though the tasks and objectives of a new system may be radically
different from the old system, the underlying data will probably change very little. By
coming to a full understanding of the data requirements at an early stage, the remaining
analysis and design stages can be built up on a firm foundation.
In almost all cases there is some form of current system, even if it is entirely composed of
people and paper. Through a combination of interviewing employees, circulating
questionnaires, observation and study of existing documentation, the analyst comes to a
full understanding of the system as it is at the start of the project. This serves many purposes:

   •   the analyst learns the terminology of the business, what users do and how they do
       it
   •   the old system provides the core requirements for the new system
   •   faults, errors and areas of inefficiency are highlighted and their correction added
       to the requirements
   •   the data model can be constructed
   •   the users become involved and learn the techniques and models of the analyst
   •   the boundaries of the system can be defined

The products of this stage are:

   •   Users Catalog describing all the users of the system and how they interact with it
   •   Requirements Catalog detailing all the requirements of the new system
   •   Current Services Description, further composed of:
           •   Current environment logical data structure (ERD)
           •   Context diagram (DFD)
           •   Leveled set of DFDs for the current logical system
           •   Full data dictionary including the relationships between data stores and entities

To produce the models, the analyst works through the construction of the models as we
have described. However, the first set of data-flow diagrams (DFDs) are the current
physical model, that is, with full details of how the old system is implemented. The final
version is the current logical model which is essentially the same as the current physical
but with all reference to implementation removed together with any redundancies such as
repetition of process or data.

In the process of preparing the models, the analyst will discover the information that
makes up the users and requirements catalogs.

Stage 2 – Business system options

Having investigated the current system, the analyst must decide on the overall design of
the new system. To do this, he or she, using the outputs of the previous stage, develops a
set of business system options. These are different ways in which the new system could
be produced varying from doing nothing to throwing out the old system entirely and
building an entirely new one. The analyst may hold a brainstorming session so that as
many and various ideas as possible are generated.

The ideas are then collected to form a set of two or three different options which are
presented to the user. The options consider the following:
•   the degree of automation
   •   the boundary between the system and the users
   •   the distribution of the system, for example, is it centralized to one office or spread
       out across several?
   •   cost/benefit
   •   impact of the new system

Where necessary, the option will be documented with a logical data structure and a level
1 data-flow diagram.

The users and analyst together choose a single business option. This may be one of the
ones already defined or may be a synthesis of different aspects of the existing options.
The output of this stage is the single selected business option together with all the outputs
of stage 1.

Stage 3 – Requirements specification

This is probably the most complex stage in SSADM. Using the requirements developed
in stage 1 and working within the framework of the selected business option, the analyst
must develop a full logical specification of what the new system must do. The
specification must be free from error, ambiguity and inconsistency. By logical, we mean
that the specification does not say how the system will be implemented but rather
describes what the system will do.

To produce the logical specification, the analyst builds the required logical models for
both the data-flow diagrams (DFDs) and the entity relationship diagrams (ERDs). These
are used to produce function definitions of every function the users will require of the
system, entity life-histories (ELHs), and effect correspondence diagrams (models of how
each event interacts with the system, complementing the entity life-histories). These are
continually matched against the requirements and, where necessary, the requirements are
added to and completed.

The product of this stage is a complete requirements specification document which is
made up of:

   •   the updated data catalog
   •   the updated requirements catalog
   •   the processing specification which in turn is made up of

           •   user role/function matrix
           •   function definitions
           •   required logical data model
           •   entity life-histories
           •   effect correspondence diagrams
Though some of these items may be unfamiliar to you, it is beyond the scope of this unit
to go into them in great detail.

Stage 4 – Technical system options

This stage is the first step towards a physical implementation of the new system. As with
the business system options, in this stage a large number of options for the implementation
of the new system are generated. These are honed down to two or three to present to the
user, from which the final option is chosen or synthesized.

However, the considerations are quite different, being:

   •   the hardware architectures
   •   the software to use
   •   the cost of the implementation
   •   the staffing required
   •   the physical limitations, such as the space occupied by the system
   •   the distribution, including any networks that it may require
   •   the overall format of the human computer interface

All of these aspects must also conform to any constraints imposed by the business such as
available money and standardization of hardware and software.

The output of this stage is a chosen technical system option.

Stage 5 – Logical design

Though the previous stage specifies details of the implementation, the outputs of this
stage are implementation-independent and concentrate on the requirements for the human
computer interface. The logical design specifies the main methods of interaction in terms
of menu structures and command structures.

One area of activity is the definition of the user dialogues. These are the main interfaces
with which the users will interact with the system. Other activities are concerned with
analyzing both the effects of events in updating the system and the need to make inquiries
about the data on the system. Both of these use the events, function descriptions and
effect correspondence diagrams produced in stage 3 to determine precisely how to update
and read data in a consistent and secure way.

The product of this stage is the logical design which is made up of:

   •   Data catalog
   •   Required logical data structure
   •   Logical process model – includes dialogues and model for the update and inquiry
       processes
Stage 6 – Physical design

This is the final stage where all the logical specifications of the system are converted to
descriptions of the system in terms of real hardware and software. This is a very technical
stage and a simple overview is presented here.

The logical data structure is converted into a physical architecture in terms of database
structures. The exact structure of the functions and how they are implemented is
specified. The physical data structure is optimized where necessary to meet size and
performance requirements.
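
As a rough illustration of this conversion, the sketch below maps the hypothetical Customer and Order entities from a logical data structure onto physical SQLite tables; the schema and index are illustrative assumptions, not prescribed SSADM output.

import sqlite3

# A logical design (two entities and a 1:N relationship) converted into a
# physical database structure, with an index added as a performance optimization.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE customer_order (
        order_id    INTEGER PRIMARY KEY,
        order_date  TEXT NOT NULL,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id)
    );
    CREATE INDEX idx_order_customer ON customer_order(customer_id);
""")
conn.close()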

The product is a complete Physical Design which could tell software engineers how to
build the system in specific details of hardware and software and to the appropriate
standards.

Advantages and disadvantages
Using this methodology involves a significant undertaking which may not be suitable for
all projects.

The main advantages of SSADM are:

   •   Three different views of the system
   •   Mature
   •   Separation of logical and physical aspects of the system
   •   Well-defined techniques and documentation
   •   User involvement

The size of SSADM is a hindrance to using it in some circumstances. There is an
investment in cost and time in training people to use the techniques. The learning curve
can be considerable if the full method is used, as not only are there several modeling
techniques to come to terms with, but there are also a lot of standards for the preparation
and presentation of documents.
2. What is DSS? Discuss the components and
capabilities of DSS.

A decision support system (DSS) is a computer-based information system that supports
business or organizational decision-making activities. DSSs serve the management,
operations, and planning levels of an organization and help people make decisions about
problems that may be rapidly changing and not easily specified in advance.

DSSs include knowledge-based systems. A properly designed DSS is an interactive
software-based system intended to help decision makers compile useful information from
a combination of raw data, documents, personal knowledge, and business models to
identify and solve problems and make decisions.

Typical information that a decision support application might gather and present includes:

   •   inventories of information assets (including legacy and relational data sources,
       cubes, data warehouses, and data marts),
   •   comparative sales figures between one period and the next,
   •   projected revenue figures based on product sales assumptions.

History
According to Keen (1978),[1] the concept of decision support has evolved from two main
areas of research: The theoretical studies of organizational decision making done at the
Carnegie Institute of Technology during the late 1950s and early 1960s, and the technical
work on interactive computer systems, mainly carried out at the Massachusetts Institute
of Technology in the 1960s. It is considered that the concept of DSS became an area of
research of its own in the middle of the 1970s, before gaining in intensity during the
1980s. In the middle and late 1980s, executive information systems (EIS), group decision
support systems (GDSS), and organizational decision support systems (ODSS) evolved
from the single user and model-oriented DSS.

According to Sol (1987),[2] the definition and scope of DSS have been migrating over the
years. In the 1970s DSS was described as "a computer based system to aid decision
making". In the late 1970s the DSS movement started focusing on "interactive computer-based
systems which help decision-makers utilize data bases and models to solve ill-structured
problems". In the 1980s DSS were expected to provide systems "using suitable and available
technology to improve effectiveness of managerial and professional activities", and by the
end of the 1980s DSS faced a new challenge towards the design of intelligent workstations.[2]

In 1987 Texas Instruments completed development of the Gate Assignment Display
System (GADS) for United Airlines. This decision support system is credited with
significantly reducing travel delays by aiding the management of ground operations at
various airports, beginning with O'Hare International Airport in Chicago and Stapleton
International Airport in Denver, Colorado.[3][4]

Beginning in about 1990, data warehousing and on-line analytical processing (OLAP)
began broadening the realm of DSS. As the turn of the millennium approached, new
Web-based analytical applications were introduced.

The advent of better and better reporting technologies has seen DSS start to emerge as a
critical component of management design. Examples of this can be seen in the intense
amount of discussion of DSS in the education environment.

DSS also have a weak connection to the user interface paradigm of hypertext. Both the
University of Vermont PROMIS system (for medical decision making) and the Carnegie
Mellon ZOG/KMS system (for military and business decision making) were decision
support systems which also were major breakthroughs in user interface research.
Furthermore, although hypertext researchers have generally been concerned with
information overload, certain researchers, notably Douglas Engelbart, have focused on
decision makers in particular.

Taxonomies
As with the definition, there is no universally-accepted taxonomy of DSS either.
Different authors propose different classifications. Using the relationship with the user as
the criterion, Haettenschwiler[5] differentiates passive, active, and cooperative DSS. A
passive DSS is a system that aids the process of decision making, but that cannot bring
out explicit decision suggestions or solutions. An active DSS can bring out such decision
suggestions or solutions. A cooperative DSS allows the decision maker (or its advisor) to
modify, complete, or refine the decision suggestions provided by the system, before
sending them back to the system for validation. The system again improves, completes,
and refines the suggestions of the decision maker and sends them back to him for
validation. The whole process then starts again, until a consolidated solution is generated.

Another taxonomy for DSS has been created by Daniel Power. Using the mode of
assistance as the criterion, Power differentiates communication-driven DSS, data-driven
DSS, document-driven DSS, knowledge-driven DSS, and model-driven DSS.[6]

   •   A communication-driven DSS supports more than one person working on a
       shared task; examples include integrated tools like Microsoft's NetMeeting or
       Groove[7]
   •   A data-driven DSS or data-oriented DSS emphasizes access to and manipulation
       of a time series of internal company data and, sometimes, external data.
   •   A document-driven DSS manages, retrieves, and manipulates unstructured
       information in a variety of electronic formats.
   •   A knowledge-driven DSS provides specialized problem-solving expertise stored
       as facts, rules, procedures, or in similar structures.[6]
   •   A model-driven DSS emphasizes access to and manipulation of a statistical,
       financial, optimization, or simulation model. Model-driven DSS use data and
       parameters provided by users to assist decision makers in analyzing a situation;
       they are not necessarily data-intensive. Dicodess is an example of an open source
       model-driven DSS generator.[8]

Using scope as the criterion, Power[9] differentiates enterprise-wide DSS and desktop
DSS. An enterprise-wide DSS is linked to large data warehouses and serves many
managers in the company. A desktop, single-user DSS is a small system that runs on an
individual manager's PC.

Components
[Figure: Design of a Drought Mitigation Decision Support System.]

Three fundamental components of a DSS architecture are:[5][6][10][11][12]

       1. the database (or knowledge base),
       2. the model (i.e., the decision context and user criteria), and
       3. the user interface.

The users themselves are also important components of the architecture.[5][12]
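
The sketch below wires these three components together in Python; the loan-screening scenario, the data and the threshold are purely illustrative assumptions.

# 1. Database (or knowledge base): the raw records the DSS draws on.
loan_applications = [
    {"applicant": "A", "income": 52000, "debt": 9000},
    {"applicant": "B", "income": 31000, "debt": 15000},
]

# 2. Model: the decision context and the user's criteria.
def credit_model(record, max_debt_ratio=0.25):
    return record["debt"] / record["income"] <= max_debt_ratio

# 3. User interface: presents the suggestions to the decision maker.
def user_interface():
    for record in loan_applications:
        verdict = "approve" if credit_model(record) else "review"
        print(f"Applicant {record['applicant']}: suggested action = {verdict}")

user_interface()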

Development Frameworks

DSS systems are not entirely different from other systems and require a structured
development approach. Such a framework includes people, technology, and the
development approach.[10]



DSS technology levels (of hardware and software) may include:

       1. The actual application that will be used by the user. This is the part of the
          application that allows the decision maker to make decisions in a particular
          problem area. The user can act upon that particular problem.
       2. The generator: a hardware/software environment that allows people to easily
          develop specific DSS applications. This level makes use of CASE tools or
          systems such as Crystal, AIMMS, Analytica and iThink.
       3. Tools: lower-level hardware and software on which DSS generators are built,
          including special languages, function libraries and linking modules.

An iterative developmental approach allows for the DSS to be changed and redesigned at
various intervals. Once the system is designed, it will need to be tested and revised where
necessary for the desired outcome.

Classification
There are several ways to classify DSS applications. Not every DSS fits neatly into one
of the categories, but may be a mix of two or more architectures.

Holsapple and Whinston[13] classify DSS into the following six frameworks: Text-
oriented DSS, Database-oriented DSS, Spreadsheet-oriented DSS, Solver-oriented DSS,
Rule-oriented DSS, and Compound DSS.

A compound DSS is the most popular classification for a DSS. It is a hybrid system that
includes two or more of the five basic structures described by Holsapple and Whinston.[13]

The support given by DSS can be separated into three distinct, interrelated categories[14]:
Personal Support, Group Support, and Organizational Support.

DSS components may be classified as:

   1.   Inputs: Factors, numbers, and characteristics to analyze
   2.   User Knowledge and Expertise: Inputs requiring manual analysis by the user
   3.   Outputs: Transformed data from which DSS "decisions" are generated
   4.   Decisions: Results generated by the DSS based on user criteria

DSSs which perform selected cognitive decision-making functions and are based on
artificial intelligence or intelligent agents technologies are called Intelligent Decision
Support Systems (IDSS).[citation needed]

The nascent field of Decision engineering treats the decision itself as an engineered
object, and applies engineering principles such as Design and Quality assurance to an
explicit representation of the elements that make up a decision.

Applications
As mentioned above, there are theoretical possibilities of building such systems in any
knowledge domain.

One example is the clinical decision support system for medical diagnosis. Other
examples include a bank loan officer verifying the credit of a loan applicant or an
engineering firm that has bids on several projects and wants to know if they can be
competitive with their costs.

DSS is extensively used in business and management. Executive dashboard and other
business performance software allow faster decision making, identification of negative
trends, and better allocation of business resources.

A growing area of DSS application, concepts, principles, and techniques is in agricultural
production and marketing for sustainable development. For example, the DSSAT4
package,[15][16] developed with financial support from USAID during the 1980s and 1990s,
has allowed rapid assessment of several agricultural production systems around the world
to facilitate decision-making at the farm and policy levels. There are, however, many
constraints to the successful adoption of DSS in agriculture.[17]

DSS are also prevalent in forest management, where the long planning time frame
demands specific requirements. All aspects of forest management, from log transportation
and harvest scheduling to sustainability and ecosystem protection, have been addressed
by modern DSSs.

A specific example concerns the Canadian National Railway system, which tests its
equipment on a regular basis using a decision support system. A problem faced by any
railroad is worn-out or defective rails, which can result in hundreds of derailments per
year. Under a DSS, CN managed to decrease the incidence of derailments at the same
time other companies were experiencing an increase.

Benefits
   1. Improves personal efficiency
   2. Speeds up the process of decision making
   3. Increases organizational control
   4. Encourages exploration and discovery on the part of the decision maker
   5. Speeds up problem solving in an organization
   6. Facilitates interpersonal communication
   7. Promotes learning or training
   8. Generates new evidence in support of a decision
   9. Creates a competitive advantage over competition
   10. Reveals new approaches to thinking about the problem space
   11. Helps automate managerial processes
3. Narrate the stages of SDLC
The Systems development life cycle (SDLC), or Software development process in
systems engineering, information systems and software engineering, is a process of
creating or altering information systems, and the models and methodologies that people
use to develop these systems. In software engineering, the SDLC concept underpins
many kinds of software development methodologies. These methodologies form the
framework for planning and controlling the creation of an information system[1]: the
software development process.

Overview
The SDLC is a process used by a systems analyst to develop an information system,
training, and user (stakeholder) ownership. Any SDLC should result in a high quality
system that meets or exceeds customer expectations, reaches completion within time and
cost estimates, works effectively and efficiently in the current and planned Information
Technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.[2]
Computer systems are complex and often (especially with the recent rise of service-
oriented architecture) link multiple traditional systems potentially supplied by different
software vendors. To manage this level of complexity, a number of SDLC models or
methodologies have been created, such as "waterfall"; "spiral"; "Agile software
development"; "rapid prototyping"; "incremental"; and "synchronize and stabilize".[3]....

SDLC models can be described along a spectrum from agile to iterative to sequential. Agile
methodologies, such as XP and Scrum, focus on lightweight processes which allow for
rapid changes along the development cycle. Iterative methodologies, such as Rational
Unified Process and dynamic systems development method, focus on limited project
scope and expanding or improving products by multiple iterations. Sequential or big-
design-up-front (BDUF) models, such as Waterfall, focus on complete and correct
planning to guide large projects and risks to successful and predictable results[citation needed].
Other models, such as Anamorphic Development, tend to focus on a form of
development that is guided by project scope and adaptive iterations of feature
development.

In project management a project can be defined both with a project life cycle (PLC) and
an SDLC, during which slightly different activities occur. According to Taylor (2004)
"the project life cycle encompasses all the activities of the project, while the systems
development life cycle focuses on realizing the product requirements".[4] The SDLC is
used during the development of an IT project; it describes the different stages involved in
the project, from the drawing board through to the completion of the project.

History
The systems life cycle (SLC) is a methodology used to describe the process for building
information systems, intended to develop information systems in a very deliberate,
structured and methodical way, reiterating each stage of the life cycle. The systems
development life cycle, according to Elliott & Strachan & Radford (2004), "originated in
the 1960s, to develop large scale functional business systems in an age of large scale
business conglomerates. Information systems activities revolved around heavy data
processing and number crunching routines".[5]

Several systems development frameworks have been partly based on SDLC, such as the
structured systems analysis and design method (SSADM) produced for the UK
government Office of Government Commerce in the 1980s. Ever since, according to
Elliott (2004), "the traditional life cycle approaches to systems development have been
increasingly replaced with alternative approaches and frameworks, which attempted to
overcome some of the inherent deficiencies of the traditional SDLC".[5]

Systems development phases
The System Development Life Cycle framework provides a sequence of activities for
system designers and developers to follow. It consists of a set of steps or phases in which
each phase of the SDLC uses the results of the previous one.

A Systems Development Life Cycle (SDLC) adheres to important phases that are
essential for developers, such as planning, analysis, design, and implementation, and are
explained in the section below. A number of system development life cycle (SDLC)
models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping,
incremental, and synchronize and stabilize. The oldest of these, and the best known, is the
waterfall model: a sequence of stages in which the output of each stage becomes the input
for the next. These stages can be characterized and divided up in different ways,
including the following[6]:

   •   Preliminary analysis: The objective of phase 1 is to conduct a preliminary
       analysis, propose alternative solutions, describe costs and benefits, and submit a
       preliminary plan with recommendations.

       Conduct the preliminary analysis: in this step, you need to find out the
       organization's objectives and the nature and scope of the problem under study.
       Even if the problem concerns only a small segment of the organization, you still
       need to find out what the objectives of the organization itself are, and then to see
       how the problem being studied fits in with them.
       Propose alternative solutions: in digging into the organization's objectives and
       specific problems, you may have already covered some solutions. Alternative
       proposals may come from interviewing employees, clients, suppliers, and/or
       consultants. You can also study what competitors are doing. With this data, you
       will have three choices: leave the system as is, improve it, or develop a new
       system.
       Describe the costs and benefits.
       Describe the costs and benefits.
    •   Systems analysis, requirements definition: Refines project goals into defined
        functions and operations of the intended application, and analyzes end-user
        information needs.

    •   Systems design: Describes desired features and operations in detail, including
        screen layouts, business rules, process diagrams, pseudocode and other
        documentation.

    •   Development: The real code is written here.

    •   Integration and testing: Brings all the pieces together into a special testing
        environment, then checks for errors, bugs and interoperability.

    •   Acceptance, installation, deployment: The final stage of initial development,
        where the software is put into production and runs actual business.

    •   Maintenance: What happens during the rest of the software's life: changes,
        correction, additions, moves to a different computing platform and more. This is
        often the longest of the stages.

In the following example these stages of the systems development life cycle are divided
into ten steps, from definition to the creation and modification of IT work products:

[Figure: Model of the Systems Development Life Cycle]

The tenth phase occurs when the system is disposed of and the task performed is either
eliminated or transferred to other systems. The tasks and work products for each phase
are described in subsequent chapters.[7]

Not every project will require that the phases be sequentially executed. However, the
phases are interdependent. Depending upon the size and complexity of the project, phases
may be combined or may overlap.[7]

System analysis

The goal of system analysis is to determine where the problem is in an attempt to fix the
system. This step involves breaking down the system in different pieces to analyze the
situation, analyzing project goals, breaking down what needs to be created and attempting
to engage users so that definite requirements can be defined.

Design

In systems design the design functions and operations are described in detail, including
screen layouts, business rules, process diagrams and other documentation. The output of
this stage will describe the new system as a collection of modules or subsystems.
The design stage takes as its initial input the requirements identified in the approved
requirements document. For each requirement, a set of one or more design elements will
be produced as a result of interviews, workshops, and/or prototype efforts.

Design elements describe the desired software features in detail, and generally include
functional hierarchy diagrams, screen layout diagrams, tables of business rules, business
process diagrams, pseudocode, and a complete entity-relationship diagram with a full
data dictionary. These design elements are intended to describe the software in sufficient
detail that skilled programmers may develop the software with minimal additional input.

Testing

The code is tested at various levels in software testing. Unit, system and user acceptance
testing are often performed. This is a grey area, as many different opinions exist as to
what the stages of testing are and how much iteration, if any, occurs. Iteration is not
generally part of the waterfall model, but some usually occurs at this stage as the parts of
the whole system are tested in turn.

Following are the types of testing:

   •   Defect testing: testing the failed scenarios, including defect tracking
   •   Path testing
   •   Data set testing
   •   Unit testing
   •   System testing
   •   Integration testing
   •   Black-box testing
   •   White-box testing
   •   Regression testing
   •   Automation testing
   •   User acceptance testing
   •   Performance testing
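
As a small, hedged illustration of the unit-testing level, the sketch below uses Python's unittest module; the apply_discount function under test is hypothetical.

import unittest

def apply_discount(price, percent):
    # Hypothetical function under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # Defect testing: deliberately exercising a failing scenario.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()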

Operations and maintenance

The deployment of the system includes the changes and enhancements made before the
decommissioning or sunset of the system. Maintaining the system is an important aspect
of SDLC. As key personnel change positions in the organization, new changes will be
implemented, which will require system updates.

Systems analysis and design
The Systems Analysis and Design (SAD) is the process of developing Information
Systems (IS) that effectively use hardware, software, data, processes, and people to
support the company’s business objectives.

Object-oriented analysis
Object-oriented analysis (OOA) is the process of analyzing a task (also known as a
problem domain), to develop a conceptual model that can then be used to complete the
task. A typical OOA model would describe computer software that could be used to
satisfy a set of customer-defined requirements. During the analysis phase of problem-
solving, a programmer might consider a written requirements statement, a formal vision
document, or interviews with stakeholders or other interested parties. The task to be
addressed might be divided into several subtasks (or domains), each representing a
different business, technological, or other areas of interest. Each subtask would be
analyzed separately. Implementation constraints (e.g., concurrency, distribution,
persistence, or how the system is to be built) are not considered during the analysis phase;
rather, they are addressed during object-oriented design (OOD).

The conceptual model that results from OOA will typically consist of a set of use cases,
one or more UML class diagrams, and a number of interaction diagrams. It may also
include some kind of user interface mock-up.

Input (sources) for object-oriented design

The input for object-oriented design is provided by the output of object-oriented analysis.
Realize that an output artifact does not need to be completely developed to serve as input
of object-oriented design; analysis and design may occur in parallel, and in practice the
results of one activity can feed the other in a short feedback cycle through an iterative
process. Both analysis and design can be performed incrementally, and the artifacts can
be continuously grown instead of completely developed in one shot.

Some typical input artifacts for object-oriented design are:

   •   Conceptual model: The conceptual model is the result of object-oriented analysis;
       it captures concepts in the problem domain. The conceptual model is explicitly
       chosen to be independent of implementation details, such as concurrency or data
       storage (see the sketch after this list).

   •   Use case: A use case is a description of sequences of events that, taken together,
       lead to a system doing something useful. Each use case provides one or more
       scenarios that convey how the system should interact with the users, called actors,
       to achieve a specific business goal or function. Use case actors may be end users
       or other systems. In many circumstances use cases are further elaborated into use
       case diagrams, which are used to identify the actors (users or other systems) and
       the processes they perform.
•   System Sequence Diagram: System Sequence diagram (SSD) is a picture that
       shows, for a particular scenario of a use case, the events that external actors
       generate, their order, and possible inter-system events.

   •   User interface documentations (if applicable): Document that shows and describes
       the look and feel of the end product's user interface. It is not mandatory to have
       this, but it helps to visualize the end-product and therefore helps the designer.

   •   Relational data model (if applicable): A data model is an abstract model that
       describes how data is represented and used. If an object database is not used, the
       relational data model should usually be created before the design, since the
       strategy chosen for object-relational mapping is an output of the OO design
       process. However, it is possible to develop the relational data model and the
       object-oriented design artifacts in parallel, and the growth of an artifact can
       stimulate the refinement of other artifacts.
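
To ground these artifacts, here is a minimal sketch of a conceptual model in Python; the Book and Loan concepts and the borrow scenario are invented for illustration and deliberately omit implementation concerns such as persistence or concurrency.

from dataclasses import dataclass
from datetime import date

# Conceptual model from OOA: problem-domain concepts only.
@dataclass
class Book:
    title: str
    isbn: str

@dataclass
class Loan:
    book: Book
    borrower: str
    due: date

# One scenario of a hypothetical "Borrow book" use case, written as the
# event sequence a system sequence diagram would show:
#   actor  -> system: request_loan(isbn, borrower)
#   system -> actor:  confirm_loan(due_date)
loan = Loan(Book("Sketchpad", "978-0000000000"), "Alice", date(2024, 1, 31))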

Systems development life cycle
Management and control




[Figure: SPIU phases related to management controls.[8]]

The SDLC phases serve as a programmatic guide to project activity and provide a
flexible but consistent way to conduct projects to a depth matching the scope of the
project. Each of the SDLC phase objectives are described in this section with key
deliverables, a description of recommended tasks, and a summary of related control
objectives for effective management. It is critical for the project manager to establish and
monitor control objectives during each SDLC phase while executing projects. Control
objectives help to provide a clear statement of the desired result or purpose and should be
used throughout the entire SDLC process. Control objectives can be grouped into major
categories (domains), and relate to the SDLC phases as shown in the figure.[8]

To manage and control any SDLC initiative, each project will be required to establish
some degree of a Work Breakdown Structure (WBS) to capture and schedule the work
necessary to complete the project. The WBS and all programmatic material should be
kept in the “project description” section of the project notebook. The WBS format is
mostly left to the project manager to establish in a way that best describes the project
work. There are some key areas that must be defined in the WBS as part of the SDLC
policy. The following diagram describes three key areas that will be addressed in the
WBS in a manner established by the project manager.[8]

Work breakdown structured organization




[Figure: Work breakdown structure.[8]]

The upper section of the work breakdown structure (WBS) should identify the major
phases and milestones of the project in a summary fashion. In addition, the upper section
should provide an overview of the full scope and timeline of the project and will be part
of the initial project description effort leading to project approval. The middle section of
the WBS is based on the seven systems development life cycle (SDLC) phases as a guide
for WBS task development. The WBS elements should consist of milestones and “tasks”
as opposed to “activities” and have a definitive period (usually two weeks or more). Each
task must have a measurable output (e.g. a document, decision, or analysis). A WBS task
may rely on one or more activities (e.g. software engineering, systems engineering) and
may require close coordination with other tasks, either internal or external to the project.
Any part of the project needing support from contractors should have a statement of work
(SOW) written to include the appropriate tasks from the SDLC phases. The development
of a SOW does not occur during a specific phase of the SDLC but is developed to include
the work from the SDLC process that may be conducted by external resources such as
contractors.[8]

Baselines in the SDLC
Baselines are an important part of the systems development life cycle (SDLC). These
baselines are established after four of the five phases of the SDLC and are critical to the
iterative nature of the model.[9] Each baseline is considered a milestone in the SDLC.

   •   functional baseline: established after the conceptual design phase.
   •   allocated baseline: established after the preliminary design phase.
   •   product baseline: established after the detail design and development phase.
   •   updated product baseline: established after the production construction phase.

Complementary to SDLC

Complementary software development methods to systems development life cycle
(SDLC) are:

   •   Software prototyping
   •   Joint applications development (JAD)
   •   Rapid application development (RAD)
   •   Extreme programming (XP); extension of earlier work in Prototyping and RAD.
   •   Open-source development
   •   End-user development
   •   Object-oriented programming

           Comparison of Methodology Approaches (Post & Anderson 2006)[10]

Criterion                    SDLC         RAD      Open source  Objects     JAD      Prototyping  End User
Control                      Formal       MIS      Weak         Standards   Joint    User         User
Time frame                   Long         Short    Medium       Any         Medium   Short        Short
Users                        Many         Few      Few          Varies      Few      One or two   One
MIS staff                    Many         Few      Hundreds     Split       Few      One or two   None
Transaction/DSS              Transaction  Both     Both         Both        DSS      DSS          DSS
Interface                    Minimal      Minimal  Weak         Windows     Crucial  Crucial      Crucial
Documentation and training   Vital        Limited  Internal     In Objects  Limited  Weak         None
Integrity and security       Vital        Vital    Unknown      In Objects  Limited  Weak         Weak
Reusability                  Limited      Some     Maybe        Vital       Limited  Weak         None

Strengths and weaknesses
Few people in the modern computing world would use a strict waterfall model for their
systems development life cycle (SDLC), as many modern methodologies have superseded
this thinking. Some will argue that the SDLC no longer applies to models like Agile
computing, but the term is still widely used in technology circles. The SDLC practice has
advantages in traditional models of software development that lend themselves to a
structured environment. The disadvantages of the SDLC methodology appear when there
is a need for iterative development (e.g. web development or e-commerce), where
stakeholders need to review the software being designed on a regular basis. Instead of
viewing SDLC from a strength or weakness perspective, it is far more important to take
the best practices from the SDLC model and apply them to whatever may be most
appropriate for the software being designed.

A comparison of the strengths and weaknesses of SDLC:

                          Strength and Weaknesses of SDLC [10]
                   Strengths                               Weaknesses
     Control.                               Increased development time.
     Monitor large projects.                Increased development cost.
     Detailed steps.                        Systems must be defined up front.
     Evaluate costs and completion targets. Rigidity.
     Documentation.                         Hard to estimate costs, project overruns.
     Well defined user input.               User input is sometimes limited.
     Ease of maintenance.
     Development and design standards.
     Tolerates changes in MIS staffing.

An alternative to the SDLC is rapid application development, which combines
prototyping, joint application development and implementation of CASE tools. The
advantages of RAD are speed, reduced development cost, and active user involvement in
the development process.
4. Define OOP. What are the applications of it?

Object-oriented programming (OOP) is a programming paradigm using "objects" –
data structures consisting of data fields and methods together with their interactions – to
design applications and computer programs. Programming techniques may include
features such as data abstraction, encapsulation, messaging, modularity, polymorphism,
and inheritance. Many modern programming languages now support OOP, at least as an
option.

Overview
Simple, non-OOP programs may be one "long" list of statements (or commands). More
complex programs will often group smaller sections of these statements into functions or
subroutines each of which might perform a particular task. With designs of this sort, it is
common for some of the program's data to be 'global', i.e. accessible from any part of the
program. As programs grow in size, allowing any function to modify any piece of data
means that bugs can have wide-reaching effects.

In contrast, the object-oriented approach encourages the programmer to place data where
it is not directly accessible by the rest of the program. Instead, the data is accessed by
calling specially written functions, commonly called methods, which are either bundled
in with the data or inherited from "class objects." These act as the intermediaries for
retrieving or modifying the data they control. The programming construct that combines
data with a set of methods for accessing and managing those data is called an object. The
practice of using subroutines to examine or modify certain kinds of data, however, was
also quite commonly used in non-OOP modular programming, well before the
widespread use of object-oriented programming.

An object-oriented program will usually contain different types of objects, each type
corresponding to a particular kind of complex data to be managed or perhaps to a real-
world object or concept such as a bank account, a hockey player, or a bulldozer. A
program might well contain multiple copies of each type of object, one for each of the
real-world objects the program is dealing with. For instance, there could be one bank
account object for each real-world account at a particular bank. Each copy of the bank
account object would be alike in the methods it offers for manipulating or reading its
data, but the data inside each object would differ reflecting the different history of each
account.

Objects can be thought of as wrapping their data within a set of functions designed to
ensure that the data are used appropriately, and to assist in that use. The object's methods
will typically include checks and safeguards that are specific to the types of data the
object contains. An object can also offer simple-to-use, standardized methods for
performing particular operations on its data, while concealing the specifics of how those
tasks are accomplished. In this way alterations can be made to the internal structure or
methods of an object without requiring that the rest of the program be modified. This
approach can also be used to offer standardized methods across different types of objects.
As an example, several different types of objects might offer print methods. Each type of
object might implement that print method in a different way, reflecting the different kinds
of data each contains, but all the different print methods might be called in the same
standardized manner from elsewhere in the program. These features become especially
useful when more than one programmer is contributing code to a project or when the goal
is to reuse code between projects.
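
A compact Python sketch of these ideas follows; the account and bulldozer classes are hypothetical, but they mirror the examples above: data is reached only through methods that include safeguards, and different object types offer the same standardized method.

class BankAccount:
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self._balance = balance            # data kept behind the object's methods

    def deposit(self, amount):
        if amount <= 0:                    # a safeguard specific to this data
            raise ValueError("deposit must be positive")
        self._balance += amount

    def describe(self):
        return f"Account of {self.owner}: balance {self._balance:.2f}"

class Bulldozer:
    def __init__(self, model):
        self.model = model

    def describe(self):                    # same standardized method, different data
        return f"Bulldozer, model {self.model}"

# Different types of object, called in the same standardized manner:
for obj in (BankAccount("Alice", 100.0), Bulldozer("D9")):
    print(obj.describe())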

Object-oriented programming has roots that can be traced to the 1960s. As hardware and
software became increasingly complex, manageability often became a concern.
Researchers studied ways to maintain software quality and developed object-oriented
programming in part to address common problems by strongly emphasizing discrete,
reusable units of programming logic[citation needed]. The technology focuses on data rather
than processes, with programs composed of self-sufficient modules ("classes"), each
instance of which ("objects") contains all the information needed to manipulate its own
data structure ("members"). This is in contrast to the existing modular programming that
had been dominant for many years that focused on the function of a module, rather than
specifically the data, but equally provided for code reuse, and self-sufficient reusable
units of programming logic, enabling collaboration through the use of linked modules
(subroutines). This more conventional approach, which still persists, tends to consider
data and behavior separately.

An object-oriented program may thus be viewed as a collection of interacting objects, as
opposed to the conventional model, in which a program is seen as a list of tasks
(subroutines) to perform. In OOP, each object is capable of receiving messages,
processing data, and sending messages to other objects. Each object can be viewed as an
independent "machine" with a distinct role or responsibility. The actions (or "methods")
on these objects are closely associated with the object. For example, OOP data structures
tend to "carry their own operators around with them" (or at least "inherit" them from a
similar object or class) - except when they have to be serialized.

History
The terms "objects" and "oriented" in something like the modern sense of object-oriented
programming seem to make their first appearance at MIT in the late 1950s and early
1960s. In the environment of the artificial intelligence group, as early as 1960, "object"
could refer to identified items (LISP atoms) with properties (attributes);[1][2] Alan Kay was
later to cite a detailed understanding of LISP internals as a strong influence on his
thinking in 1966.[3] Another early MIT example was Sketchpad created by Ivan
Sutherland in 1960-61; in the glossary of the 1963 technical report based on his
dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with
the class concept covered by "master" or "definition"), albeit specialized to graphical
interaction.[4] Also, an MIT ALGOL version, AED-0, linked data structures ("plexes", in
that dialect) directly with procedures, prefiguring what were later termed "messages",
"methods" and "member functions".[5][6]

Objects as a formal concept in programming were introduced in the 1960s in Simula 67, a
major revision of Simula I, a programming language designed for discrete event
simulation, created by Ole-Johan Dahl and Kristen Nygaard of the Norwegian Computing
Center in Oslo.[7] Simula 67 was influenced by SIMSCRIPT and C.A.R. "Tony" Hoare's
proposed "record classes".[5][8] Simula introduced the notion of classes and instances or
objects (as well as subclasses, virtual methods, coroutines, and discrete event simulation)
as part of an explicit programming paradigm. The language also used automatic garbage
collection that had been invented earlier for the functional programming language Lisp.
Simula was used for physical modeling, such as models to study and improve the
movement of ships and their content through cargo ports. The ideas of Simula 67
influenced many later languages, including Smalltalk, derivatives of LISP (CLOS),
Object Pascal, and C++.

The Smalltalk language, which was developed at Xerox PARC (by Alan Kay and others)
in the 1970s, introduced the term object-oriented programming to represent the pervasive
use of objects and messages as the basis for computation. Smalltalk creators were
influenced by the ideas introduced in Simula 67, but Smalltalk was designed to be a fully
dynamic system in which classes could be created and modified dynamically rather than
statically as in Simula 67.[9] Smalltalk and with it OOP were introduced to a wider
audience by the August 1981 issue of Byte Magazine.

In the 1970s, Kay's Smalltalk work had influenced the Lisp community to incorporate
object-based techniques that were introduced to developers via the Lisp machine.
Experimentation with various extensions to Lisp (like LOOPS and Flavors introducing
multiple inheritance and mixins), eventually led to the Common Lisp Object System
(CLOS, a part of the first standardized object-oriented programming language, ANSI
Common Lisp), which integrates functional programming and object-oriented
programming and allows extension via a Meta-object protocol. In the 1980s, there were a
few attempts to design processor architectures that included hardware support for objects
in memory but these were not successful. Examples include the Intel iAPX 432 and the
Linn Smart Rekursiv.

Object-oriented programming developed as the dominant programming methodology in
the early and mid 1990s when programming languages supporting the techniques became
widely available. These included Visual FoxPro 3.0,[10][11][12] C++[citation needed], and
Delphi[citation needed]. Its dominance was further enhanced by the rising popularity of
graphical user interfaces, which rely heavily upon object-oriented programming
techniques. An example of a closely related dynamic GUI library and OOP language can
be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-
oriented, dynamic messaging extension to C based on Smalltalk. OOP toolkits also
enhanced the popularity of event-driven programming (although this concept is not
limited to OOP). Some[who?] feel that association with GUIs (real or perceived) was what
propelled OOP into the programming mainstream.
At ETH Zürich, Niklaus Wirth and his colleagues had also been investigating such topics
as data abstraction and modular programming (although this had been in common use in
the 1960s or earlier). Modula-2 (1978) included both, and their succeeding design,
Oberon, included a distinctive approach to object orientation, classes, and such. The
approach is unlike Smalltalk, and very unlike C++.

Object-oriented features have been added to many existing languages during that time,
including Ada, BASIC, Fortran, Pascal, and others. Adding these features to languages
that were not initially designed for them often led to problems with compatibility and
maintainability of code.

More recently, a number of languages have emerged that are primarily object-oriented
yet compatible with procedural methodology, such as Python and Ruby. Probably the
most commercially important recent object-oriented languages are Visual Basic.NET
(VB.NET) and C#, both designed for Microsoft's .NET platform, and Java, developed by
Sun Microsystems. Both frameworks show the benefit of using OOP by creating an
abstraction from implementation in their own way. VB.NET and C# support cross-
language inheritance, allowing classes defined in one language to subclass classes
defined in the other language. Developers usually compile Java to bytecode, allowing
Java to run on any operating system for which a Java virtual machine is available.
VB.NET and C# make use of the Strategy pattern to accomplish cross-language
inheritance, whereas Java makes use of the Adapter pattern[citation needed].

Just as procedural programming led to refinements of techniques such as structured
programming, modern object-oriented software design methods include refinements[citation needed]
such as the use of design patterns, design by contract, and modeling languages (such
as UML).

Fundamental features and concepts
See also: List of object-oriented programming terms

A survey by Deborah J. Armstrong of nearly 40 years of computing literature identified a
number of "quarks", or fundamental concepts, found in the strong majority of definitions
of OOP.[13]

Not all of these concepts are to be found in all object-oriented programming languages.
For example, object-oriented programming that uses classes is sometimes called class-
based programming, while prototype-based programming does not typically use classes.
As a result, a significantly different yet analogous terminology is used to define the
concepts of object and instance.

Benjamin C. Pierce and some other researchers view as futile any attempt to distill OOP
to a minimal set of features. Pierce nonetheless identifies fundamental features that
support the OOP programming style in most object-oriented languages:[14]
   •   Dynamic dispatch – when a method is invoked on an object, the object itself
       determines what code gets executed by looking up the method at run time in a
       table associated with the object. This feature distinguishes an object from an
       abstract data type (or module), which has a fixed (static) implementation of the
       operations for all instances. Dynamic dispatch enables modular component
       development while remaining efficient (see the sketch after this list).
   •   Encapsulation (or multi-methods, in which case the state is kept separate)
   •   Subtype polymorphism
   •   Object inheritance (or delegation)
   •   Open recursion – a special variable (syntactically it may be a keyword), usually
       called this or self, that allows a method body to invoke another method body of
       the same object. This variable is late-bound; it allows a method defined in one
       class to invoke another method that is defined later, in some subclass thereof.
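
As a minimal sketch of these two features (the shape classes are invented for
illustration, not taken from Pierce), the following Python example shows dynamic
dispatch and open recursion working together, as referenced in the list above:

class Shape:
    def describe(self):
        # Open recursion: self.area() is late-bound, so this call resolves
        # to the subclass's area() even though describe() is defined here.
        return f"{type(self).__name__} with area {self.area():.2f}"

    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.141592653589793 * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# Dynamic dispatch: the run-time object, not the static type of the
# variable, determines which area() implementation executes.
for shape in (Circle(1.0), Square(2.0)):
    print(shape.describe())

Because self is late-bound, describe(), defined in the base class, invokes the
area() defined later in each subclass, which is exactly the open-recursion
behavior described above.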

Similarly, in his 2003 book, Concepts in Programming Languages, John C. Mitchell
identifies four main features: dynamic dispatch, abstraction, subtype polymorphism, and
inheritance.[15] Michael Lee Scott, in Programming Language Pragmatics, considers only
encapsulation, inheritance, and dynamic dispatch.[16]

Additional concepts used in object-oriented programming include:

   •   Classes of objects
   •   Instances of classes
   •   Methods which act on the attached objects.
   •   Message passing
   •   Abstraction

Decoupling

Decoupling refers to deliberately separating code modules from particular use cases,
which increases code re-usability. A common use of decoupling in OOP is to
polymorphically decouple the encapsulation (see the Bridge pattern and Adapter pattern);
for example, by programming against a method interface which an encapsulated object
must satisfy, rather than against the object's concrete class.
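
As a minimal sketch of this idea (all names here are invented, not a standard
API), the following Python example decouples a function from any concrete class
by depending only on a small method interface:

from typing import Protocol

class MessageSink(Protocol):
    def send(self, text: str) -> None: ...

class ConsoleSink:
    def send(self, text: str) -> None:
        print(text)

class ListSink:
    def __init__(self):
        self.messages = []

    def send(self, text: str) -> None:
        self.messages.append(text)

def notify(sink: MessageSink, text: str) -> None:
    # notify() depends only on the send() interface, never on a concrete
    # class, so any conforming object can be substituted.
    sink.send("NOTICE: " + text)

notify(ConsoleSink(), "build finished")
sink = ListSink()
notify(sink, "build finished")
assert sink.messages == ["NOTICE: build finished"]

Because notify() is written against the interface rather than a class, new sinks
can be added without touching it, which is the re-usability gain described above.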

Formal semantics
See also: Formal semantics of programming languages

There have been several attempts at formalizing the concepts used in object-oriented
programming. The following concepts and constructs have been used as interpretations of
OOP concepts:

   •   coalgebraic data types[17]
   •   abstract data types (which have existential types) allow the definition of modules
       but do not support dynamic dispatch
   •   recursive types
   •   encapsulated state
   •   inheritance
   •   records are a basis for understanding objects if function literals can be stored in
       fields (as in functional programming languages), but the actual calculi need to be
       considerably more complex to incorporate essential features of OOP. Several
       extensions of System F<: that deal with mutable objects have been studied;[18]
       these allow both subtype polymorphism and parametric polymorphism (generics)

Attempts to find a consensus definition or theory behind objects have not proven very
successful (however, see Abadi & Cardelli, A Theory of Objects[18] for formal definitions
of many OOP concepts and constructs), and often diverge widely. For example, some
definitions focus on mental activities, and some on program structuring. One of the
simpler definitions is that OOP is the act of using "map" data structures or arrays that can
contain functions and pointers to other maps, all with some syntactic and scoping sugar
on top. Inheritance can be performed by cloning the maps (sometimes called
"prototyping"). In this view, objects are the run-time entities in an object-oriented
system; they may represent a person, a place, a bank account, a table of data or any item
that the program has to handle.
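
As an illustration of this "maps with functions" definition, here is a toy Python
sketch (names and numbers invented); it stores a function in a dict and implements
inheritance by cloning the map:

def deposit(self, amount):
    # "Methods" are plain functions stored in the map; they receive the
    # map itself explicitly, in place of syntactic sugar for self.
    self["balance"] += amount

base = {"owner": "alice", "balance": 100, "deposit": deposit}
base["deposit"](base, 50)
assert base["balance"] == 150

# Inheritance by cloning the map ("prototyping"), then overriding slots.
clone = dict(base)
clone["owner"] = "bob"
clone["deposit"](clone, 25)
assert clone["balance"] == 175 and base["balance"] == 150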

OOP languages
See also: List of object-oriented programming languages

Simula (1967) is generally accepted as the first language to have the primary features of
an object-oriented language. It was created for making simulation programs, in which
what came to be called objects were the most important information representation.
Smalltalk (1972 to 1980) is arguably the canonical example, and the one with which
much of the theory of object-oriented programming was developed. Concerning the
degree of object orientation, the following distinctions can be made:

   •   Languages called "pure" OO languages, because everything in them is treated
       consistently as an object, from primitives such as characters and punctuation, all
       the way up to whole classes, prototypes, blocks, modules, etc. They were
       designed specifically to facilitate, even enforce, OO methods. Examples: Eiffel,
       Emerald,[19] JADE, Obix, Scala, Smalltalk
   •   Languages designed mainly for OO programming, but with some procedural
       elements. Examples: C++, Java, C#, VB.NET, Python.
   •   Languages that are historically procedural languages, but have been extended
       with some OO features. Examples: Visual Basic (derived from BASIC), Fortran
       2003, Perl, COBOL 2002, PHP, ABAP.
   •   Languages with most of the features of objects (classes, methods, inheritance,
       reusability), but in a distinctly original form. Examples: Oberon (Oberon-1 or
       Oberon-2) and Common Lisp.
•   Languages with abstract data type support, but not all features of object-
       orientation, sometimes called object-based languages. Examples: Modula-2 (with
       excellent encapsulation and information hiding), Pliant, CLU.

OOP in dynamic languages

In recent years, object-oriented programming has become especially popular in dynamic
programming languages. Python, Ruby and Groovy are dynamic languages built on OOP
principles, while Perl and PHP have been adding object-oriented features since Perl 5 and
PHP 4, and ColdFusion since version 5.

The Document Object Model of HTML, XHTML, and XML documents on the Internet
has bindings to the popular JavaScript/ECMAScript language. JavaScript is perhaps the
best-known prototype-based programming language, which employs cloning from
prototypes rather than inheriting from a class. Another scripting language that takes this
approach is Lua. Earlier versions of ActionScript (a partial superset of ECMA-262
R3, otherwise known as ECMAScript) also used a prototype-based object model. Later
versions of ActionScript incorporate a combination of classification and prototype-based
object models based largely on the currently incomplete ECMA-262 R4 specification,
which has its roots in an early JavaScript 2 proposal. Microsoft's JScript.NET also
includes a mash-up of object models based on the same proposal, and is also a superset of
the ECMA-262 R3 specification.

Design patterns
Challenges of object-oriented design are addressed by several methodologies. The most
common is the set of design patterns codified by Gamma et al. More broadly, the
term "design patterns" can be used to refer to any general, repeatable solution to a
commonly occurring problem in software design. Some of these commonly occurring
problems have implications and solutions particular to object-oriented development.

Inheritance and behavioral subtyping

See also: Object oriented design

It is intuitive to assume that inheritance creates a semantic "is a" relationship, and thus to
infer that objects instantiated from subclasses can always be safely used in place of those
instantiated from the superclass. This intuition is unfortunately false in most OOP
languages, in particular in all those that allow mutable objects. Subtype polymorphism, as
enforced by the type checker in OOP languages (with mutable objects), cannot guarantee
behavioral subtyping in any context. Behavioral subtyping is undecidable in general, so it
cannot be implemented by a program (compiler). Class or object hierarchies must be
carefully designed, considering possible incorrect uses that cannot be detected
syntactically. This requirement is captured by the Liskov substitution principle.
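
The classic mutable rectangle/square example, sketched here in Python purely for
illustration (the classes are not from the source), shows how a subclass can pass
the type checker yet violate the superclass's behavioral contract:

class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def set_width(self, width):
        self.width = width

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    # Geometrically a square "is a" rectangle, but once objects are
    # mutable the subclass must change both sides to stay square,
    # silently breaking the superclass's contract.
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, width):
        self.width = self.height = width

def stretch(rect):
    # Written against Rectangle's contract: only the width changes.
    rect.set_width(10)
    return rect.area()

assert stretch(Rectangle(2, 5)) == 50
assert stretch(Square(5)) == 100   # type-checks, yet behaves differently

A static type checker accepts stretch(Square(5)) because Square is a subtype of
Rectangle, but the call breaks the assumption that set_width() leaves the height
unchanged; no syntactic check can detect this.
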
Gang of Four design patterns

Main article: Design pattern (computer science)

Design Patterns: Elements of Reusable Object-Oriented Software is an influential book
published in 1995 by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides,
often referred to humorously as the "Gang of Four". Along with exploring the capabilities
and pitfalls of object-oriented programming, it describes 23 common programming
problems and patterns for solving them. As of April 2007, the book was in its 36th
printing.

Object-orientation and databases

Main articles: Object-Relational impedance mismatch, Object-relational mapping, and
Object database

Both object-oriented programming and relational database management systems
(RDBMSs) are extremely common in software today. Since relational databases don't
store objects directly (though some RDBMSs have object-oriented features to
approximate this), there is a general need to bridge the two worlds. The problem of
bridging object-oriented program accesses and data patterns with relational
databases is known as the object-relational impedance mismatch. There are a number of
approaches to cope with this problem, but no general solution without downsides.[20] One
of the most common approaches is object-relational mapping, as found in libraries like
Java Data Objects and Ruby on Rails' ActiveRecord.
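
A toy sketch of the object-relational mapping idea in Python follows (this is not
the actual API of Java Data Objects or ActiveRecord; the table and class are
invented): a class maps to a table, and instances map to rows.

import sqlite3

class UserMapper:
    # Maps rows of a hypothetical users table to plain dicts and back.
    def __init__(self, conn):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS users"
                     " (id INTEGER PRIMARY KEY, name TEXT)")

    def save(self, name):
        cur = self.conn.execute(
            "INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id):
        row = self.conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)).fetchone()
        return {"id": row[0], "name": row[1]} if row else None

conn = sqlite3.connect(":memory:")
mapper = UserMapper(conn)
uid = mapper.save("alice")
assert mapper.find(uid) == {"id": uid, "name": "alice"}

Real ORM libraries add identity maps, caching, and query generation on top of
this basic class-to-table, instance-to-row correspondence.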

There are also object databases that can be used to replace RDBMSs, but these have not
been as technically and commercially successful as RDBMSs.

Real-world modeling and relationships

OOP can be used to associate real-world objects and processes with digital counterparts.
However, not everyone agrees that OOP facilitates direct real-world mapping (see the
Negative Criticism section) or that real-world mapping is even a worthy goal; Bertrand
Meyer argues in Object-Oriented Software Construction[21] that a program is not a model
of the world but a model of some part of the world; "Reality is a cousin twice removed".
At the same time, some principal limitations of OOP have been noted.[22] For example,
the circle-ellipse problem is difficult to handle using OOP's concept of inheritance.

However, Niklaus Wirth (who popularized the adage now known as Wirth's law:
"Software is getting slower more rapidly than hardware becomes faster") said of OOP in
his paper "Good Ideas through the Looking Glass": "This paradigm closely reflects the
structure of systems 'in the real world', and it is therefore well suited to model complex
systems with complex behaviours" (contrast the KISS principle).

Steve Yegge and others have noted that natural languages lack the OOP approach of
strictly prioritizing things (objects/nouns) before actions (methods/verbs).[23] This
problem may cause OOP to produce more convoluted solutions than procedural
programming.[24]

OOP and control flow

OOP was developed to increase the reusability and maintainability of source code.[25]
Transparent representation of the control flow was given no priority and was meant to be
handled by a compiler. With the increasing relevance of parallel hardware and
multithreaded coding, transparent control flow becomes more important for developers,
something that is hard to achieve with OOP.[26][27][28][29]

Responsibility- vs. data-driven design

Responsibility-driven design defines classes in terms of a contract; that is, a class should
be defined around a responsibility and the information that it shares. This is contrasted by
Wirfs-Brock and Wilkerson with data-driven design, where classes are defined around
the data structures that must be held. The authors hold that responsibility-driven design is
preferable (a contrast sketch follows).
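
The contrast can be illustrated with a short Python sketch (both classes are
invented for this example): the data-driven class exposes its data structures for
callers to manipulate, while the responsibility-driven class is defined around the
contract it enforces.

# Data-driven: the class is defined around the data structures it holds;
# callers reach in and manipulate the fields directly.
class AccountData:
    def __init__(self):
        self.balance = 0
        self.history = []

# Responsibility-driven: the class is defined around a responsibility
# ("maintain a consistent balance") and shares only what that contract
# requires.
class Account:
    def __init__(self):
        self._balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance
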
5. Explain the steps involved in the process of Software
Project Management

Software project management is the art and science of planning and leading software
projects.[1] It is a sub-discipline of project management in which software projects are
planned, monitored and controlled.

History
Companies quickly understood the relative ease of use that software programming had
over hardware circuitry, and the software industry grew very quickly in the 1970s and
1980s.

To manage new development efforts, companies applied proven project management
methods, but project schedules slipped during test runs, especially when confusion
occurred in the gray zone between the user specifications and the delivered software. To
avoid these problems, software project management methods focused on matching user
requirements to delivered products, in a method now known as the waterfall model.
Since then, analysis of software project management failures has shown that the
following are the most common causes:[2]

   1. Unrealistic or unarticulated project goals
   2. Inaccurate estimates of needed resources
   3. Badly defined system requirements
   4. Poor reporting of the project's status
   5. Unmanaged risks
   6. Poor communication among customers, developers, and users
   7. Use of immature technology
   8. Inability to handle the project's complexity
   9. Sloppy development practices
   10. Poor project management
   11. Stakeholder politics
   12. Commercial pressures

The first three items in the list above show the difficulty of articulating the needs of the
client in such a way that proper resources can deliver the proper project goals. Specific
software project management tools are useful and often necessary, but the true art in
software project management is applying the correct method and then using tools to
support the method; without a method, tools are worthless. Since the 1960s, several
proprietary software project management methods have been developed by software
manufacturers for their own use, while computer consulting firms have also developed
similar methods for their clients. Today software project management methods are still
evolving, but the current trend leads away from the waterfall model toward a more cyclic
project delivery model that imitates a software release life cycle.

Software development process
A software development process is concerned primarily with the production aspect of
software development, as opposed to the technical aspect, such as software tools. These
processes exist primarily for supporting the management of software development, and
are generally skewed toward addressing business concerns. Many software development
processes can be run in a similar way to general project management processes.
Examples are:

   •   Risk management is the process of measuring or assessing risk and then
       developing strategies to manage it. In general, the strategies employed
       include transferring the risk to another party, avoiding the risk, reducing the
       negative effect of the risk, and accepting some or all of the consequences of a
       particular risk. Risk management in software project management begins with the
       business case for starting the project, which includes a cost-benefit analysis as
       well as a list of fallback options for project failure, called a contingency plan.
       (A minimal risk-exposure sketch appears after this list.)
            o A subset of risk management that is gaining more and more attention is
                opportunity management, which means the same thing, except that the
                potential risk outcome will have a positive, rather than a negative, impact.
                Though theoretically handled in the same way, using the term
                "opportunity" rather than the somewhat negative term "risk" helps to keep
                a team focused on possible positive outcomes of any given risk register in
                their projects, such as spin-off projects, windfalls, and free extra resources.
   •   Requirements management is the process of identifying, eliciting, documenting,
       analyzing, tracing, prioritizing and agreeing on requirements and then controlling
       change and communicating to relevant stakeholders. Requirements management,
       which includes requirements analysis of a new or altered computer system,[1] is
       an important part of the software engineering process, whereby business analysts
       or software developers identify the needs or requirements of a client; having
       identified these requirements, they are then in a position to design a solution.
   •   Change management is the process of identifying, documenting, analyzing,
       prioritizing and agreeing on changes to project scope and then controlling
       changes and communicating to relevant stakeholders. Change impact analysis of
       new or altered scope, which includes requirements analysis at the change level, is
       an important part of the software engineering process, whereby business analysts
       or software developers identify the altered needs or requirements of a client;
       having identified these requirements, they are then in a position to re-design or
       modify a solution. Theoretically, each change can impact the timeline and budget
       of a software project, and therefore by definition must include a risk-benefit
       analysis before approval.
   •   Software configuration management is the process of identifying and
       documenting the scope itself, which is the software product underway, including
       all sub-products and changes, and enabling communication of these to relevant
       stakeholders. In general, the processes employed include version control, naming
       conventions, and software archival agreements.
   •   Release management is the process of identifying, documenting, prioritizing and
       agreeing on releases of software and then controlling the release schedule and
       communicating to relevant stakeholders. Most software projects have access to
       three software environments to which software can be released: Development,
       Test, and Production. In very large projects, where distributed teams need to
       integrate their work before release to users, there will often be more environments
       for testing, called unit testing, system testing, or integration testing, before release
       to User Acceptance Testing (UAT).
            o A subset of release management that is gaining more and more attention is
                data management, since users can only test based on data that they know,
                and "real" data is only in the software environment called "production".
                In order to test their work, programmers must therefore often create
                "dummy data" or "data stubs". Traditionally, older versions of a
                production system were used for this purpose, but as companies rely more
                and more on outside contributors for software development, company
                data may not be released to development teams. In complex
                environments, datasets may be created that are then migrated across test
                environments according to a test release schedule, much like the overall
                software release schedule.
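
As promised above, here is a minimal risk-register sketch in Python (the risks,
probabilities, and impact scores are invented); it ranks risks by exposure, a
common quantitative measure defined as probability times impact:

risks = [
    {"risk": "key developer leaves",      "probability": 0.2, "impact": 8},
    {"risk": "requirements change late",  "probability": 0.6, "impact": 5},
    {"risk": "vendor API delivered late", "probability": 0.3, "impact": 6},
]

for risk in risks:
    # Exposure = probability of occurring x impact if it occurs.
    risk["exposure"] = risk["probability"] * risk["impact"]

# Highest exposure first: these get mitigation or contingency plans first.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["exposure"]:.1f}  {risk["risk"]}')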

Project planning, monitoring and control
The purpose of project planning is to identify the scope of the project, estimate the work
involved, and create a project schedule. Project planning begins with requirements that
define the software to be developed. The project plan is then developed to describe the
tasks that will lead to completion.

The purpose of project monitoring and control is to keep the team and management up to
date on the project's progress. If the project deviates from the plan, then the project
manager can take action to correct the problem. Project monitoring and control involves
status meetings to gather status from the team. When changes need to be made, change
control is used to keep the products up to date.

Issue
In computing, the term issue is a unit of work to accomplish an improvement in a system.
An issue could be a bug, a requested feature, task, missing documentation, and so forth.
The word "issue" is popularly misused in lieu of "problem." This usage is probably
related.[citation needed]

For example, OpenOffice.org used to call their modified version of Bugzilla IssueZilla.
As of September 2010, they call their system Issue Tracker.

Problems occur from time to time, and fixing them in a timely fashion is essential to
achieve correctness of a system and avoid delayed deliveries of products.

Severity levels

Issues are often categorized in terms of severity levels. Different companies have
different definitions of severities, but some of the most common ones are:

Critical / High
       The bug or issue affects a crucial part of a system, and must be fixed in order for
       it to resume normal operation.
Medium
       The bug or issue affects a minor part of a system, but has some impact on its
       operation. This severity level is assigned when a non-central requirement of a
       system is affected.
Low
       The bug or issue affects a minor part of a system, and has very little impact on its
       operation. This severity level is assigned when a non-central requirement of a
       system (of lower importance) is affected.
Cosmetic
       The system works correctly, but its appearance does not match the expected one;
       for example, wrong colors, too much or too little spacing between contents,
       incorrect font sizes, typos, etc. This is the lowest severity level.
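
As an illustration only (the numeric ordering and sample issues below are
invented, not from the source), severity levels like these are often modeled as an
ordered enumeration so issues can be sorted for triage:

from enum import IntEnum

class Severity(IntEnum):
    COSMETIC = 1
    LOW = 2
    MEDIUM = 3
    HIGH = 4
    CRITICAL = 5

issues = [
    ("typo on the login page", Severity.COSMETIC),
    ("report totals are wrong", Severity.HIGH),
    ("column slightly misaligned", Severity.LOW),
]

# Triage: handle the most severe issues first.
for title, severity in sorted(issues, key=lambda i: i[1], reverse=True):
    print(f"{severity.name:<8} {title}")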

In many software companies, issues are often investigated by quality assurance analysts
when they verify a system for correctness, and are then assigned to the developer(s)
responsible for resolving them. Issues can also be raised by system users during the
User Acceptance Testing (UAT) phase.

Issues are commonly communicated using issue or defect tracking systems; in some
other cases, email or instant messaging is used.

Philosophy
As a subdiscipline of project management, some regard the management of software
development as akin to the management of manufacturing, which can be performed by
someone with management skills but no programming skills. John C. Reynolds rebuts
this view, arguing that software development is entirely design work, and compares a
manager who cannot program to the managing editor of a newspaper who cannot write.[3]

In software project management, the end users and developers need to know the length,
duration and cost of the project. It is a process of managing, allocating and timing
resources to develop computer software that meets requirements. It consists of eight
tasks:

-   Problem Identification
-   Problem Definition
-   Project Planning
-   Project Organization
-   Resource Allocation
-   Project Scheduling
-   Tracking, Reporting and Controlling
-   Project Termination

In problem identification and definition, decisions are made about approving, declining
or prioritizing projects. In problem identification, the project is identified, defined and
justified; in problem definition, the purpose of the project is clarified. The main product
of these tasks is the project proposal.

Project planning describes the series of actions or steps needed for the development of
the work product. In project organization, the functions of the personnel are integrated;
it is done in parallel with project planning.

In resource allocation, resources are allocated to a project so that its goals and objectives
are achieved. In project scheduling, resources are allocated so that project objectives are
achieved within a reasonable time span.

In tracking, reporting and controlling, the process involves checking whether the project
results are in accordance with the project plans and performance specification. In
controlling, proper action is taken to correct unacceptable deviations.

In project termination, the final report is submitted or a release order is signed.

More Related Content

What's hot

Management Information System
Management Information SystemManagement Information System
Management Information Systemcpjcollege
 
MIS ( Management Information System ) | DEFINITION, IMPORTANCE & BENIFITS
MIS ( Management  Information System ) | DEFINITION, IMPORTANCE & BENIFITSMIS ( Management  Information System ) | DEFINITION, IMPORTANCE & BENIFITS
MIS ( Management Information System ) | DEFINITION, IMPORTANCE & BENIFITSMSA Technosoft
 
Definitions of the_acronyms_mis,tps_and_dss
Definitions of the_acronyms_mis,tps_and_dssDefinitions of the_acronyms_mis,tps_and_dss
Definitions of the_acronyms_mis,tps_and_dssKwame Afreh
 
Management information sysytem{MIS}
Management information sysytem{MIS}Management information sysytem{MIS}
Management information sysytem{MIS}Bpn Dhungel
 
Management information systems
Management information systemsManagement information systems
Management information systemsDheeraj Negi
 
IT-314 MIS (Practicals)
IT-314 MIS (Practicals)IT-314 MIS (Practicals)
IT-314 MIS (Practicals)Dushmanta Nath
 
Management Information System by professor sai chandu
Management Information System by professor sai chanduManagement Information System by professor sai chandu
Management Information System by professor sai chandusai chandu kandati
 
Information system
Information systemInformation system
Information systemhiddensoul
 
Management Information Systems
Management Information SystemsManagement Information Systems
Management Information SystemsTrinity Dwarka
 
Management Information System
Management Information SystemManagement Information System
Management Information Systemcpjcollege
 
Chapter 09 dss mis eis es ai
Chapter 09   dss mis eis es aiChapter 09   dss mis eis es ai
Chapter 09 dss mis eis es aiPooja Sakhla
 
22830802 management-information-system
22830802 management-information-system22830802 management-information-system
22830802 management-information-systemarunakankshadixit
 
MIS practical file
MIS practical fileMIS practical file
MIS practical fileAnkit Dixit
 

What's hot (18)

Management Information System
Management Information SystemManagement Information System
Management Information System
 
MIS ( Management Information System ) | DEFINITION, IMPORTANCE & BENIFITS
MIS ( Management  Information System ) | DEFINITION, IMPORTANCE & BENIFITSMIS ( Management  Information System ) | DEFINITION, IMPORTANCE & BENIFITS
MIS ( Management Information System ) | DEFINITION, IMPORTANCE & BENIFITS
 
Definitions of the_acronyms_mis,tps_and_dss
Definitions of the_acronyms_mis,tps_and_dssDefinitions of the_acronyms_mis,tps_and_dss
Definitions of the_acronyms_mis,tps_and_dss
 
Management information sysytem{MIS}
Management information sysytem{MIS}Management information sysytem{MIS}
Management information sysytem{MIS}
 
Management information systems
Management information systemsManagement information systems
Management information systems
 
Management information system Unit 1
Management information system Unit 1Management information system Unit 1
Management information system Unit 1
 
IT-314 MIS (Practicals)
IT-314 MIS (Practicals)IT-314 MIS (Practicals)
IT-314 MIS (Practicals)
 
Management Information System by professor sai chandu
Management Information System by professor sai chanduManagement Information System by professor sai chandu
Management Information System by professor sai chandu
 
Information system
Information systemInformation system
Information system
 
Management Information Systems
Management Information SystemsManagement Information Systems
Management Information Systems
 
Information Systems (Lecture 1)
Information Systems (Lecture 1)Information Systems (Lecture 1)
Information Systems (Lecture 1)
 
Management Information System
Management Information SystemManagement Information System
Management Information System
 
Chapter 09 dss mis eis es ai
Chapter 09   dss mis eis es aiChapter 09   dss mis eis es ai
Chapter 09 dss mis eis es ai
 
22830802 management-information-system
22830802 management-information-system22830802 management-information-system
22830802 management-information-system
 
MIS practical file
MIS practical fileMIS practical file
MIS practical file
 
Mis 1 Chapter
Mis 1 ChapterMis 1 Chapter
Mis 1 Chapter
 
1. Introduction to mis
1. Introduction to mis1. Introduction to mis
1. Introduction to mis
 
MIS
MISMIS
MIS
 

Viewers also liked

MIS 373 Lecture Notes Jan27
MIS 373 Lecture Notes Jan27MIS 373 Lecture Notes Jan27
MIS 373 Lecture Notes Jan27Spredfast
 
Final mojor project risk assesment
Final mojor project risk assesmentFinal mojor project risk assesment
Final mojor project risk assesmentconornorton
 
System Analysis and Design Project
System Analysis and Design ProjectSystem Analysis and Design Project
System Analysis and Design ProjectSiddharth Shah
 
Risk assesment
Risk assesment Risk assesment
Risk assesment daisysadeh
 
Chapter03 managing the information systems project
Chapter03 managing the information systems projectChapter03 managing the information systems project
Chapter03 managing the information systems projectDhani Ahmad
 
Advanced System Analysis And Design
Advanced System Analysis And Design Advanced System Analysis And Design
Advanced System Analysis And Design Anit Thapaliya
 
System Analysis and Design slides by yared yenealem DTU Ethiopia
System Analysis and Design slides by yared yenealem DTU EthiopiaSystem Analysis and Design slides by yared yenealem DTU Ethiopia
System Analysis and Design slides by yared yenealem DTU EthiopiaDebre Tabor University
 
Feasibility report
Feasibility reportFeasibility report
Feasibility reportnithishpro
 
Different types of information system from functional perspective
Different types of information system from functional perspectiveDifferent types of information system from functional perspective
Different types of information system from functional perspectiveDanish Musthafa
 
Sample contents of a completed feasibility study
Sample contents of a completed feasibility studySample contents of a completed feasibility study
Sample contents of a completed feasibility studynazcats
 
Online Hotel Room Booking System
Online Hotel Room Booking SystemOnline Hotel Room Booking System
Online Hotel Room Booking SystemAbhishek Kumar
 
Management information system
Management  information systemManagement  information system
Management information systemRamya Sree
 
Management information system
Management information systemManagement information system
Management information systemAnamika Sonawane
 
Management Information System (Full Notes)
Management Information System (Full Notes)Management Information System (Full Notes)
Management Information System (Full Notes)Harish Chand
 
Ramco ERP on Cloud - The Best Cloud Computing Solution Worldwide
Ramco ERP on Cloud - The Best Cloud Computing Solution Worldwide Ramco ERP on Cloud - The Best Cloud Computing Solution Worldwide
Ramco ERP on Cloud - The Best Cloud Computing Solution Worldwide Ramco Systems
 
IB History OPVL Chart
IB History OPVL Chart IB History OPVL Chart
IB History OPVL Chart YCIS Beijing
 

Viewers also liked (20)

MIS Notes for BBA 8th Evening
MIS Notes for BBA 8th EveningMIS Notes for BBA 8th Evening
MIS Notes for BBA 8th Evening
 
Mis notes
Mis notesMis notes
Mis notes
 
MIS 373 Lecture Notes Jan27
MIS 373 Lecture Notes Jan27MIS 373 Lecture Notes Jan27
MIS 373 Lecture Notes Jan27
 
Final mojor project risk assesment
Final mojor project risk assesmentFinal mojor project risk assesment
Final mojor project risk assesment
 
system analysis and design Class 3
system analysis and design Class 3system analysis and design Class 3
system analysis and design Class 3
 
System Analysis and Design Project
System Analysis and Design ProjectSystem Analysis and Design Project
System Analysis and Design Project
 
Risk assesment
Risk assesment Risk assesment
Risk assesment
 
System analyst and design
System analyst and designSystem analyst and design
System analyst and design
 
Chapter03 managing the information systems project
Chapter03 managing the information systems projectChapter03 managing the information systems project
Chapter03 managing the information systems project
 
Advanced System Analysis And Design
Advanced System Analysis And Design Advanced System Analysis And Design
Advanced System Analysis And Design
 
System Analysis and Design slides by yared yenealem DTU Ethiopia
System Analysis and Design slides by yared yenealem DTU EthiopiaSystem Analysis and Design slides by yared yenealem DTU Ethiopia
System Analysis and Design slides by yared yenealem DTU Ethiopia
 
Feasibility report
Feasibility reportFeasibility report
Feasibility report
 
Different types of information system from functional perspective
Different types of information system from functional perspectiveDifferent types of information system from functional perspective
Different types of information system from functional perspective
 
Sample contents of a completed feasibility study
Sample contents of a completed feasibility studySample contents of a completed feasibility study
Sample contents of a completed feasibility study
 
Online Hotel Room Booking System
Online Hotel Room Booking SystemOnline Hotel Room Booking System
Online Hotel Room Booking System
 
Management information system
Management  information systemManagement  information system
Management information system
 
Management information system
Management information systemManagement information system
Management information system
 
Management Information System (Full Notes)
Management Information System (Full Notes)Management Information System (Full Notes)
Management Information System (Full Notes)
 
Ramco ERP on Cloud - The Best Cloud Computing Solution Worldwide
Ramco ERP on Cloud - The Best Cloud Computing Solution Worldwide Ramco ERP on Cloud - The Best Cloud Computing Solution Worldwide
Ramco ERP on Cloud - The Best Cloud Computing Solution Worldwide
 
IB History OPVL Chart
IB History OPVL Chart IB History OPVL Chart
IB History OPVL Chart
 

Similar to Management Information system

Structure system analysis and design method -SSADM
Structure system analysis and design method -SSADMStructure system analysis and design method -SSADM
Structure system analysis and design method -SSADMFLYMAN TECHNOLOGY LIMITED
 
Software development life cycle
Software development life cycle Software development life cycle
Software development life cycle shefali mishra
 
System_Analysis_and_Design_Assignment_New2.ppt
System_Analysis_and_Design_Assignment_New2.pptSystem_Analysis_and_Design_Assignment_New2.ppt
System_Analysis_and_Design_Assignment_New2.pptMarissaPedragosa
 
system development life cycle
system development life cyclesystem development life cycle
system development life cycleSuhleemAhmd
 
Software Engineering Important Short Question for Exams
Software Engineering Important Short Question for ExamsSoftware Engineering Important Short Question for Exams
Software Engineering Important Short Question for ExamsMuhammadTalha436
 
Software Development Methodologies-HSM, SSADM
Software Development Methodologies-HSM, SSADMSoftware Development Methodologies-HSM, SSADM
Software Development Methodologies-HSM, SSADMNana Sarpong
 
Analyzing Systems Using Data Flow Diagrams
Analyzing Systems Using Data Flow DiagramsAnalyzing Systems Using Data Flow Diagrams
Analyzing Systems Using Data Flow DiagramsChristina Valadez
 
System analysis and design Part2
System analysis and design Part2System analysis and design Part2
System analysis and design Part2Joel Briza
 
Software Development Life Cycle (SDLC).pptx
Software Development Life Cycle (SDLC).pptxSoftware Development Life Cycle (SDLC).pptx
Software Development Life Cycle (SDLC).pptxsandhyakiran10
 
System Analysis and Design Project documentation
System Analysis and Design Project documentationSystem Analysis and Design Project documentation
System Analysis and Design Project documentationMAHERMOHAMED27
 

Similar to Management Information system (20)

Structure system analysis and design method -SSADM
Structure system analysis and design method -SSADMStructure system analysis and design method -SSADM
Structure system analysis and design method -SSADM
 
Software development life cycle
Software development life cycle Software development life cycle
Software development life cycle
 
System_Analysis_and_Design_Assignment_New2.ppt
System_Analysis_and_Design_Assignment_New2.pptSystem_Analysis_and_Design_Assignment_New2.ppt
System_Analysis_and_Design_Assignment_New2.ppt
 
system development life cycle
system development life cyclesystem development life cycle
system development life cycle
 
Sdlc1
Sdlc1Sdlc1
Sdlc1
 
Software Engineering Important Short Question for Exams
Software Engineering Important Short Question for ExamsSoftware Engineering Important Short Question for Exams
Software Engineering Important Short Question for Exams
 
Software Development Methodologies-HSM, SSADM
Software Development Methodologies-HSM, SSADMSoftware Development Methodologies-HSM, SSADM
Software Development Methodologies-HSM, SSADM
 
Presentation2
Presentation2Presentation2
Presentation2
 
Analyzing Systems Using Data Flow Diagrams
Analyzing Systems Using Data Flow DiagramsAnalyzing Systems Using Data Flow Diagrams
Analyzing Systems Using Data Flow Diagrams
 
Gr 6 sdlc models
Gr 6   sdlc modelsGr 6   sdlc models
Gr 6 sdlc models
 
System analysis and design Part2
System analysis and design Part2System analysis and design Part2
System analysis and design Part2
 
SE chapters 6-7
SE chapters 6-7SE chapters 6-7
SE chapters 6-7
 
Slides chapters 6-7
Slides chapters 6-7Slides chapters 6-7
Slides chapters 6-7
 
S D L C
S D L CS D L C
S D L C
 
Software Development Life Cycle (SDLC).pptx
Software Development Life Cycle (SDLC).pptxSoftware Development Life Cycle (SDLC).pptx
Software Development Life Cycle (SDLC).pptx
 
Sadchap3
Sadchap3Sadchap3
Sadchap3
 
Building an Information System
Building an Information SystemBuilding an Information System
Building an Information System
 
System analysis 1
System analysis 1System analysis 1
System analysis 1
 
Sdlc
SdlcSdlc
Sdlc
 
System Analysis and Design Project documentation
System Analysis and Design Project documentationSystem Analysis and Design Project documentation
System Analysis and Design Project documentation
 

More from Cochin University

More from Cochin University (20)

Fea questions
Fea questions Fea questions
Fea questions
 
E5 finite element_analysis_r01
E5 finite element_analysis_r01E5 finite element_analysis_r01
E5 finite element_analysis_r01
 
Central office for university hostels
Central office for university hostelsCentral office for university hostels
Central office for university hostels
 
Structural dynamics
Structural dynamicsStructural dynamics
Structural dynamics
 
Earthquake resistant design of structures question
Earthquake resistant design of structures questionEarthquake resistant design of structures question
Earthquake resistant design of structures question
 
Rase2013 registration form
Rase2013 registration formRase2013 registration form
Rase2013 registration form
 
conference
conference conference
conference
 
Mba project report
Mba project reportMba project report
Mba project report
 
Vote of thanks
Vote of thanksVote of thanks
Vote of thanks
 
csir test
csir test csir test
csir test
 
Inventory management system
Inventory management systemInventory management system
Inventory management system
 
SWOT analysis
SWOT analysisSWOT analysis
SWOT analysis
 
Types of advertisement
Types of advertisement Types of advertisement
Types of advertisement
 
Pcs module1&2 Cusat syllabus
Pcs module1&2 Cusat syllabusPcs module1&2 Cusat syllabus
Pcs module1&2 Cusat syllabus
 
Recent Advances in Finite element methods
Recent Advances in Finite element methodsRecent Advances in Finite element methods
Recent Advances in Finite element methods
 
Recent Advances in Concrete Technology
 Recent Advances in Concrete Technology Recent Advances in Concrete Technology
Recent Advances in Concrete Technology
 
Business plan for construction solutions company
Business plan  for construction solutions companyBusiness plan  for construction solutions company
Business plan for construction solutions company
 
Business plan for structural engineering firm
Business plan for structural engineering firmBusiness plan for structural engineering firm
Business plan for structural engineering firm
 
Entrepreneurial quotient Calculation
Entrepreneurial quotient CalculationEntrepreneurial quotient Calculation
Entrepreneurial quotient Calculation
 
Business plan for medical lab
Business plan for medical labBusiness plan for medical lab
Business plan for medical lab
 

Management Information system

  • 1. 1. Discuss the structured system analysis and design methodologies. Structured systems analysis and design method (SSADM) is a systems approach to the analysis and design of information systems. SSADM was produced for the Central Computer and Telecommunications Agency (now Office of Government Commerce), a UK government office concerned with the use of technology in government, from 1980 onwards. Overview SSADM is a waterfall method for the analysis and design of information systems. SSADM can be thought to represent a pinnacle of the rigorous document-led approach to system design. The names "Structured Systems Analysis and Design Method" and "SSADM" are registered trademarks of the Office of Government Commerce (OGC), which is an office of the United Kingdom's Treasury. [1] History The principal stages of the development of SSADM were:[2] • 1980: Central Computer and Telecommunications Agency (CCTA) evaluate analysis and design methods. • 1981: Learmonth & Burchett Management Systems (LBMS) method chosen from shortlist of five. • 1983: SSADM made mandatory for all new information system developments • 1998:PLATINUM TECHNOLOGY acquires LBMS • 2000: CCTA renamed SSADM as "Business System Development". The method was repackaged into 15 modules and another 6 modules were added.[3][4] SSADM techniques The three most important techniques that are used in SSADM are: Logical data modeling This is the process of identifying, modeling and documenting the data requirements of the system being designed. The data are separated into entities (things about which a business needs to record information) and relationships (the associations between the entities). Data Flow Modeling This is the process of identifying, modeling and documenting how data moves around an information system. Data Flow Modeling examines processes
  • 2. (activities that transform data from one form to another), data stores (the holding areas for data), external entities (what sends data into a system or receives data from a system), and data flows (routes by which data can flow). Entity Behavior Modeling This is the process of identifying, modeling and documenting the events that affect each entity and the sequence in which these events occur. Stages The SSADM method involves the application of a sequence of analysis, documentation and design tasks concerned with the following. Stage 0 – Feasibility study In order to determine whether or not a given project is feasible, there must be some form of investigation into the goals and implications of the project. For very small scale projects this may not be necessary at all as the scope of the project is easily understood. In larger projects, the feasibility may be done but in an informal sense, either because there is not time for a formal study or because the project is a “must-have” and will have to be done one way or the other. When a feasibility study is carried out, there are four main areas of consideration: • Technical – is the project technically possible? • Financial – can the business afford to carry out the project? • Organizational – will the new system be compatible with existing practices? • Ethical – is the impact of the new system socially acceptable? To answer these questions, the feasibility study is effectively a condensed version of a fully blown systems analysis and design. The requirements and users are analyzed to some extent, some business options are drawn up and even some details of the technical implementation. The product of this stage is a formal feasibility study document. SSADM specifies the sections that the study should contain including any preliminary models that have been constructed and also details of rejected options and the reasons for their rejection. Stage 1 – Investigation of the current environment This is one of the most important stages of SSADM. The developers of SSADM understood that though the tasks and objectives of a new system may be radically different from the old system, the underlying data will probably change very little. By coming to a full understanding of the data requirements at an early stage, the remaining analysis and design stages can be built up on a firm foundation.
  • 3. In almost all cases there is some form of current system even if it is entirely composed of people and paper. Through a combination of interviewing employees, circulating questionnaires, observations and existing documentation, the analyst comes to full understanding of the system as it is at the start of the project. This serves many purposes: • the analyst learns the terminology of the business, what users do and how they do it • the old system provides the core requirements for the new system • faults, errors and areas of inefficiency are highlighted and their correction added to the requirements • the data model can be constructed • the users become involved and learn the techniques and models of the analyst • the boundaries of the system can be defined The products of this stage are: • Users Catalog describing all the users of the system and how they interact with it • Requirements Catalogs detailing all the requirements of the new system • Current Services Description further composed of • Current environment logical data structure (ERD) • Context diagram (DFD) • Leveled set of DFDs for current logical system • Full data dictionary including relationship between data stores and entities To produce the models, the analyst works through the construction of the models as we have described. However, the first set of data-flow diagrams (DFDs) are the current physical model, that is, with full details of how the old system is implemented. The final version is the current logical model which is essentially the same as the current physical but with all reference to implementation removed together with any redundancies such as repetition of process or data. In the process of preparing the models, the analyst will discover the information that makes up the users and requirements catalogs. Stage 2 – Business system options Having investigated the current system, the analyst must decide on the overall design of the new system. To do this, he or she, using the outputs of the previous stage, develops a set of business system options. These are different ways in which the new system could be produced varying from doing nothing to throwing out the old system entirely and building an entirely new one. The analyst may hold a brainstorming session so that as many and various ideas as possible are generated. The ideas are then collected to form a set of two or three different options which are presented to the user. The options consider the following:
  • 4. the degree of automation • the boundary between the system and the users • the distribution of the system, for example, is it centralized to one office or spread out across several? • cost/benefit • impact of the new system Where necessary, the option will be documented with a logical data structure and a level 1 data-flow diagram. The users and analyst together choose a single business option. This may be one of the ones already defined or may be a synthesis of different aspects of the existing options. The output of this stage is the single selected business option together with all the outputs of stage 1. Stage 3 – Requirements specification This is probably the most complex stage in SSADM. Using the requirements developed in stage 1 and working within the framework of the selected business option, the analyst must develop a full logical specification of what the new system must do. The specification must be free from error, ambiguity and inconsistency. By logical, we mean that the specification does not say how the system will be implemented but rather describes what the system will do. To produce the logical specification, the analyst builds the required logical models for both the data-flow diagrams (DFDs) and the entity relationship diagrams (ERDs). These are used to produce function definitions of every function which the users will require of the system, entity life-histories (ELHs) and effect correspondence diagrams, these are models of how each event interacts with the system, a complement to entity life-histories. These are continually matched against the requirements and where necessary, the requirements are added to and completed. The product of this stage is a complete requirements specification document which is made up of: *the updated data catalog • the updated requirements catalog • the processing specification which in turn is made up of • user role/function matrix • function definitions • required logical data model • entity life-histories • effect correspondence diagrams
  • 5. Though some of these items may be unfamiliar to you, it is beyond the scope of this unit to go into them in great detail. Stage 4 – Technical system options This stage is the first towards a physical implementation of the new system. Like the Business System Options, in this stage a large number of options for the implementation of the new system are generated. This is honed down to two or three to present to the user from which the final option is chosen or synthesized. However, the considerations are quite different being: • the hardware architectures • the software to use • the cost of the implementation • the staffing required • the physical limitations such as a space occupied by the system • the distribution including any networks which that may require • the overall format of the human computer interface All of these aspects must also conform to any constraints imposed by the business such as available money and standardization of hardware and software. The output of this stage is a chosen technical system option. Stage 5 – Logical design Though the previous level specifies details of the implementation, the outputs of this stage are implementation-independent and concentrate on the requirements for the human computer interface. The logical design specifies the main methods of interaction in terms of menu structures and command structures. One area of activity is the definition of the user dialogues. These are the main interfaces with which the users will interact with the system. Other activities are concerned with analyzing both the effects of events in updating the system and the need to make inquiries about the data on the system. Both of these use the events, function descriptions and effect correspondence diagrams produced in stage 3 to determine precisely how to update and read data in a consistent and secure way. The product of this stage is the logical design which is made up of: • Data catalog • Required logical data structure • Logical process model – includes dialogues and model for the update and inquiry processes • Stress & Bending moment.
  • 6. Stage 6 – Physical design This is the final stage where all the logical specifications of the system are converted to descriptions of the system in terms of real hardware and software. This is a very technical stage and a simple overview is presented here. The logical data structure is converted into a physical architecture in terms of database structures. The exact structure of the functions and how they are implemented is specified. The physical data structure is optimized where necessary to meet size and performance requirements. The product is a complete Physical Design which could tell software engineers how to build the system in specific details of hardware and software and to the appropriate standards. Advantages and disadvantages Using this methodology involves a significant undertaking which may not be suitable to all projects. The main advantages of SSADM are: • Three different views of the system • Mature • Separation of logical and physical aspects of the system • Well-defined techniques and documentation • User involvement The size of SSADM is a hindrance to using it in some circumstances. There is an investment in cost and time in training people to use the techniques. The learning curve can be considerable if the full method is used, as not only are there several modeling techniques to come to terms with, but there are also a lot of standards for the preparation and presentation of documents.
  • 7. 2. What is DSS? Discuss the components and capabilities of DSS. A decision support system (DSS) is a computer-based information system that supports business or organizational decision-making activities. DSSs serve the management, operations, and planning levels of an organization and help to make decisions, which may be rapidly changing and not easily specified in advance. DSSs include knowledge-based systems. A properly designed DSS is an interactive software-based system intended to help decision makers compile useful information from a combination of raw data, documents, and personal knowledge, or business models to identify and solve problems and make decisions. Typical information that a decision support application might gather and present are: • inventories of information assets (including legacy and relational data sources, cubes, data warehouses, and data marts), • comparative sales figures between one period and the next, • projected revenue figures based on product sales assumptions. History According to Keen (1978),[1] the concept of decision support has evolved from two main areas of research: The theoretical studies of organizational decision making done at the Carnegie Institute of Technology during the late 1950s and early 1960s, and the technical work on interactive computer systems, mainly carried out at the Massachusetts Institute of Technology in the 1960s. It is considered that the concept of DSS became an area of research of its own in the middle of the 1970s, before gaining in intensity during the 1980s. In the middle and late 1980s, executive information systems (EIS), group decision support systems (GDSS), and organizational decision support systems (ODSS) evolved from the single user and model-oriented DSS. According to Sol (1987)[2] the definition and scope of DSS has been migrating over the years. In the 1970s DSS was described as "a computer based system to aid decision making". Late 1970s the DSS movement started focusing on "interactive computer-based systems which help decision-makers utilize data bases and models to solve ill-structured problems". In the 1980s DSS should provide systems "using suitable and available technology to improve effectiveness of managerial and professional activities", and end 1980s DSS faced a new challenge towards the design of intelligent workstations.[2] In 1987 Texas Instruments completed development of the Gate Assignment Display System (GADS) for United Airlines. This decision support system is credited with
  • 8. significantly reducing travel delays by aiding the management of ground operations at various airports, beginning with O'Hare International Airport in Chicago and Stapleton Airport in Denver Colorado.[3][4] Beginning in about 1990, data warehousing and on-line analytical processing (OLAP) began broadening the realm of DSS. As the turn of the millennium approached, new Web-based analytical applications were introduced. The advent of better and better reporting technologies has seen DSS start to emerge as a critical component of management design. Examples of this can be seen in the intense amount of discussion of DSS in the education environment. DSS also have a weak connection to the user interface paradigm of hypertext. Both the University of Vermont PROMIS system (for medical decision making) and the Carnegie Mellon ZOG/KMS system (for military and business decision making) were decision support systems which also were major breakthroughs in user interface research. Furthermore, although hypertext researchers have generally been concerned with information overload, certain researchers, notably Douglas Engelbart, have been focused on decision makers in particular Taxonomies As with the definition, there is no universally-accepted taxonomy of DSS either. Different authors propose different classifications. Using the relationship with the user as the criterion, Haettenschwiler[5] differentiates passive, active, and cooperative DSS. A passive DSS is a system that aids the process of decision making, but that cannot bring out explicit decision suggestions or solutions. An active DSS can bring out such decision suggestions or solutions. A cooperative DSS allows the decision maker (or its advisor) to modify, complete, or refine the decision suggestions provided by the system, before sending them back to the system for validation. The system again improves, completes, and refines the suggestions of the decision maker and sends them back to him for validation. The whole process then starts again, until a consolidated solution is generated. Another taxonomy for DSS has been created by Daniel Power. Using the mode of assistance as the criterion, Power differentiates communication-driven DSS, data-driven DSS, document-driven DSS, knowledge-driven DSS, and model-driven DSS.[6] • A communication-driven DSS supports more than one person working on a shared task; examples include integrated tools like Microsoft's NetMeeting or Groove[7] • A data-driven DSS or data-oriented DSS emphasizes access to and manipulation of a time series of internal company data and, sometimes, external data. • A document-driven DSS manages, retrieves, and manipulates unstructured information in a variety of electronic formats. • A knowledge-driven DSS provides specialized problem-solving expertise stored as facts, rules, procedures, or in similar structures.[6]
• A model-driven DSS emphasizes access to and manipulation of a statistical, financial, optimization, or simulation model. Model-driven DSS use data and parameters provided by users to assist decision makers in analyzing a situation; they are not necessarily data-intensive. Dicodess is an example of an open source model-driven DSS generator.[8]

Using scope as the criterion, Power[9] differentiates enterprise-wide DSS and desktop DSS. An enterprise-wide DSS is linked to large data warehouses and serves many managers in the company. A desktop, single-user DSS is a small system that runs on an individual manager's PC.

Components

[Figure: Design of a Drought Mitigation Decision Support System]

Three fundamental components of a DSS architecture are:[5][6][10][11][12]

1. the database (or knowledge base),
2. the model (i.e., the decision context and user criteria), and
3. the user interface.

The users themselves are also important components of the architecture.[5][12]

Development Frameworks

DSS systems are not entirely different from other systems and require a structured approach to development. Such a framework includes people, technology, and the development approach.[10]

DSS technology levels (of hardware and software) may include:

1. The specific application that will be used by the decision maker. This is the part of the system that allows the user to act on a particular problem area.
2. The generator: a hardware/software environment that allows people to easily develop specific DSS applications. This level makes use of CASE tools or systems such as Crystal, AIMMS, Analytica and iThink.
3. Tools: the lower-level hardware and software from which DSS generators are built, including special languages, function libraries and linking modules.

An iterative development approach allows the DSS to be changed and redesigned at various intervals. Once the system is designed, it will need to be tested and revised where necessary to achieve the desired outcome.
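To make the three-component architecture above concrete, here is a minimal, hedged sketch of a model-driven DSS in Python. The sales figures, the forecasting rule, and all names are illustrative assumptions, not taken from any of the products mentioned above.

    # A minimal sketch of the three DSS components named above: a database,
    # a model, and a user interface. All figures and names are invented.

    database = {  # component 1: the database (or knowledge base)
        "monthly_sales": [120, 135, 150, 160],
    }

    def forecast_next_month(sales):  # component 2: a (toy) model
        """Project next month's sales from the average recent growth."""
        growth = sum(b - a for a, b in zip(sales, sales[1:])) / (len(sales) - 1)
        return sales[-1] + growth

    def user_interface():  # component 3: the user interface
        sales = database["monthly_sales"]
        print("Recent sales:", sales)
        print("Projected next month:", forecast_next_month(sales))

    if __name__ == "__main__":
        user_interface()

A real model-driven DSS would let the user vary the model's parameters interactively; here the three parts are only separated to show where each component sits.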
Classification

There are several ways to classify DSS applications. Not every DSS fits neatly into one category; many are a mix of two or more architectures.

Holsapple and Whinston[13] classify DSS into the following six frameworks: text-oriented DSS, database-oriented DSS, spreadsheet-oriented DSS, solver-oriented DSS, rule-oriented DSS, and compound DSS. A compound DSS is the most popular classification for a DSS: it is a hybrid system that includes two or more of the five basic structures described by Holsapple and Whinston.[13]

The support given by DSS can be separated into three distinct, interrelated categories:[14] personal support, group support, and organizational support.

DSS components may be classified as:

1. Inputs: factors, numbers, and characteristics to analyze
2. User knowledge and expertise: inputs requiring manual analysis by the user
3. Outputs: transformed data from which DSS "decisions" are generated
4. Decisions: results generated by the DSS based on user criteria

DSSs which perform selected cognitive decision-making functions and are based on artificial intelligence or intelligent agent technologies are called intelligent decision support systems (IDSS).[citation needed]

The nascent field of decision engineering treats the decision itself as an engineered object, and applies engineering principles such as design and quality assurance to an explicit representation of the elements that make up a decision.

Applications

As mentioned above, it is theoretically possible to build such systems in any knowledge domain. One example is the clinical decision support system for medical diagnosis. Other examples include a bank loan officer verifying the credit of a loan applicant (see the sketch below), or an engineering firm that has bids on several projects and wants to know whether it can be competitive with its costs.

DSS is extensively used in business and management. Executive dashboards and other business performance software allow faster decision making, identification of negative trends, and better allocation of business resources.

A growing area of DSS application, concepts, principles, and techniques is agricultural production and marketing for sustainable development. For example, the DSSAT4 package,[15][16] developed with financial support from USAID during the 1980s and 1990s, has allowed rapid assessment of several agricultural production systems around the world to facilitate decision-making at the farm and policy levels. There are, however, many constraints to the successful adoption of DSS in agriculture.[17]
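Returning to the bank-loan example, the Inputs / User knowledge / Outputs / Decisions classification above can be sketched as follows. The thresholds, field names, and scoring rule are hypothetical assumptions chosen only to illustrate the flow from inputs to a decision.

    # Hedged sketch: a loan-screening DSS following the four-part
    # classification above. Thresholds and fields are invented; a real
    # credit model would be far more involved.

    def loan_decision(income, debt, credit_score, officer_override=None):
        # Inputs: factors and numbers to analyze
        debt_ratio = debt / income if income else float("inf")

        # Outputs: transformed data from which "decisions" are generated
        approve = credit_score >= 650 and debt_ratio < 0.4

        # User knowledge and expertise: the loan officer may override
        if officer_override is not None:
            approve = officer_override

        # Decision: result generated from the user's criteria
        return "approve" if approve else "refer to manual review"

    print(loan_decision(income=50000, debt=15000, credit_score=700))
    print(loan_decision(income=50000, debt=30000, credit_score=700))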
DSS are also prevalent in forest management, where the long planning horizon imposes specific requirements. All aspects of forest management, from log transportation and harvest scheduling to sustainability and ecosystem protection, have been addressed by modern DSSs.

A specific example concerns the Canadian National Railway system, which tests its equipment on a regular basis using a decision support system. A problem faced by any railroad is worn-out or defective rails, which can result in hundreds of derailments per year. Using a DSS, CN managed to decrease the incidence of derailments at the same time other companies were experiencing an increase.

Benefits

1. Improves personal efficiency
2. Speeds up the process of decision making
3. Increases organizational control
4. Encourages exploration and discovery on the part of the decision maker
5. Speeds up problem solving in an organization
6. Facilitates interpersonal communication
7. Promotes learning or training
8. Generates new evidence in support of a decision
9. Creates a competitive advantage over the competition
10. Reveals new approaches to thinking about the problem space
11. Helps automate managerial processes
3. Narrate the stages of SDLC

The systems development life cycle (SDLC), or software development process in systems engineering, information systems and software engineering, is a process of creating or altering information systems, and the models and methodologies that people use to develop these systems. In software engineering, the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system:[1] the software development process.

Overview

The SDLC is a process used by a systems analyst to develop an information system, including training and user (stakeholder) ownership. Any SDLC should result in a high-quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.[2]

Computer systems are complex and often (especially with the recent rise of service-oriented architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of SDLC models or methodologies have been created, such as waterfall, spiral, Agile software development, rapid prototyping, incremental, and synchronize and stabilize.[3]

SDLC models can be described along a spectrum from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes which allow for rapid change along the development cycle. Iterative methodologies, such as the Rational Unified Process and the dynamic systems development method, focus on limiting project scope and expanding or improving products through multiple iterations. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide large projects toward successful and predictable results.[citation needed] Other models, such as anamorphic development, tend to focus on a form of development that is guided by project scope and adaptive iterations of feature development.

In project management a project can be defined both with a project life cycle (PLC) and an SDLC, during which slightly different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".[4] The SDLC is used during the development of an IT project; it describes the different stages involved in the project, from the drawing board through the completion of the project.
History

The systems life cycle (SLC) is a methodology used to describe the process of building information systems, intended to develop information systems in a very deliberate, structured and methodical way, reiterating each stage of the life cycle. The systems development life cycle, according to Elliott & Strachan & Radford (2004), "originated in the 1960s, to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".[5]

Several systems development frameworks have been partly based on SDLC, such as the structured systems analysis and design method (SSADM) produced for the UK government Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".[5]

Systems development phases

The SDLC framework provides a sequence of activities for system designers and developers to follow. It consists of a set of steps or phases in which each phase uses the results of the previous one. An SDLC adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, which are explained in the section below. A number of SDLC models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize. The oldest of these, and the best known, is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next. These stages can be characterized and divided up in different ways, including the following:[6]

• Preliminary analysis: The objective of phase 1 is to conduct a preliminary analysis, propose alternative solutions, describe costs and benefits, and submit a preliminary plan with recommendations.
Conduct the preliminary analysis: in this step, you need to find out the organization's objectives and the nature and scope of the problem under study. Even if a problem refers only to a small segment of the organization, you need to find out what the objectives of the organization itself are, and then see how the problem being studied fits in with them.
Propose alternative solutions: in digging into the organization's objectives and specific problems, you may have already uncovered some solutions. Alternative proposals may come from interviewing employees, clients, suppliers, and/or consultants. You can also study what competitors are doing. With this data, you will have three choices: leave the system as is, improve it, or develop a new system.
Describe the costs and benefits.
• Systems analysis, requirements definition: Refines project goals into defined functions and operations of the intended application. Analyzes end-user information needs.
• Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.
• Development: The real code is written here.
• Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
• Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.
• Maintenance: What happens during the rest of the software's life: changes, corrections, additions, moves to a different computing platform, and more. This is often the longest of the stages.

In the following example (see picture) these stages of the systems development life cycle are divided into ten steps, from definition to creation and modification of IT work products:

[Figure: Model of the Systems Development Life Cycle]
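As a complement to the figure, here is a minimal sketch of the waterfall idea described above, in which the output of each stage becomes the input of the next. The stage functions and the artifact strings they pass along are illustrative assumptions only; in practice the artifacts are documents, designs, code, and test reports.

    # Hedged sketch: waterfall stages as a pipeline where each stage's
    # output is the next stage's input.

    def preliminary_analysis(problem):
        return f"plan for: {problem}"

    def analysis(plan):
        return f"requirements from {plan}"

    def design(requirements):
        return f"design satisfying {requirements}"

    def development(design_doc):
        return f"code implementing {design_doc}"

    def integration_and_testing(code):
        return f"tested build of {code}"

    artifact = "inventory tracking"  # an assumed example project
    for stage in (preliminary_analysis, analysis, design,
                  development, integration_and_testing):
        artifact = stage(artifact)
        print(stage.__name__, "->", artifact)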
The tenth phase occurs when the system is disposed of and the tasks it performed are either eliminated or transferred to other systems. The tasks and work products for each phase are described in subsequent chapters.[7] Not every project will require that the phases be executed sequentially; however, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.[7]

System analysis

The goal of system analysis is to determine where the problem is, in an attempt to fix the system. This step involves breaking the system down into pieces to analyze the situation: analyzing project goals, breaking down what needs to be created, and attempting to engage users so that definite requirements can be defined.

Design

In systems design, the design functions and operations are described in detail, including screen layouts, business rules, process diagrams and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems.
The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers can develop it with minimal additional input.

Testing

The code is tested at various levels in software testing. Unit, system and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage: the system is tested piece by piece (a minimal unit-test sketch appears below, after the maintenance notes). The following are common types of testing:

• Defect testing (testing the failed scenarios, including defect tracking)
• Path testing
• Data set testing
• Unit testing
• System testing
• Integration testing
• Black-box testing
• White-box testing
• Regression testing
• Automation testing
• User acceptance testing
• Performance testing

Operations and maintenance

This phase covers the deployed system's changes and enhancements up to the decommissioning or sunset of the system. Maintaining the system is an important aspect of the SDLC: as key personnel change positions in the organization, new changes will be requested, which will require system updates.
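As promised in the testing subsection above, here is a minimal sketch of unit testing, the level at which individual functions are checked in isolation, using Python's standard unittest module. The function under test and its expected values are invented purely for illustration.

    # Hedged sketch: unit testing with Python's standard unittest module.
    import unittest

    def apply_discount(price, percent):
        """Return price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (1 - percent / 100)

    class TestApplyDiscount(unittest.TestCase):
        def test_normal_discount(self):
            self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

        def test_zero_discount(self):
            self.assertEqual(apply_discount(99.0, 0), 99.0)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(10.0, 150)

    if __name__ == "__main__":
        unittest.main()

System and integration testing apply the same idea to assembled components, and user acceptance testing checks the whole system against the users' expectations.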
Systems analysis and design

Systems analysis and design (SAD) is the process of developing information systems (IS) that effectively use hardware, software, data, processes, and people to support the company's business objectives.

Object-oriented analysis

Object-oriented analysis (OOA) is the process of analyzing a task (also known as a problem domain) to develop a conceptual model that can then be used to complete the task. A typical OOA model would describe computer software that could be used to satisfy a set of customer-defined requirements. During the analysis phase of problem-solving, a programmer might consider a written requirements statement, a formal vision document, or interviews with stakeholders or other interested parties. The task to be addressed might be divided into several subtasks (or domains), each representing a different business, technological, or other area of interest. Each subtask would be analyzed separately. Implementation constraints (e.g., concurrency, distribution, persistence, or how the system is to be built) are not considered during the analysis phase; rather, they are addressed during object-oriented design (OOD).

The conceptual model that results from OOA will typically consist of a set of use cases, one or more UML class diagrams, and a number of interaction diagrams. It may also include some kind of user interface mock-up.

Input (sources) for object-oriented design

The input for object-oriented design is provided by the output of object-oriented analysis. Note that an output artifact does not need to be completely developed to serve as input to object-oriented design; analysis and design may occur in parallel, and in practice the results of one activity can feed the other in a short feedback cycle through an iterative process. Both analysis and design can be performed incrementally, and the artifacts can be continuously grown instead of completely developed in one shot.

Some typical input artifacts for object-oriented design are:

• Conceptual model: the result of object-oriented analysis; it captures concepts in the problem domain. The conceptual model is explicitly chosen to be independent of implementation details, such as concurrency or data storage.
• Use case: a description of sequences of events that, taken together, lead to a system doing something useful. Each use case provides one or more scenarios that convey how the system should interact with the users (called actors) to achieve a specific business goal or function. Use case actors may be end users or other systems. In many circumstances use cases are further elaborated into use case diagrams, which are used to identify the actors (users or other systems) and the processes they perform.
• System sequence diagram: a system sequence diagram (SSD) is a picture that shows, for a particular scenario of a use case, the events that external actors generate, their order, and possible inter-system events.
• User interface documentation (if applicable): a document that shows and describes the look and feel of the end product's user interface. It is not mandatory, but it helps to visualize the end product and therefore helps the designer.
• Relational data model (if applicable): a data model is an abstract model that describes how data is represented and used. If an object database is not used, the relational data model should usually be created before the design, since the strategy chosen for object-relational mapping is an output of the OO design process. However, it is possible to develop the relational data model and the object-oriented design artifacts in parallel, and the growth of one artifact can stimulate the refinement of the others.

Management and control

[Figure: SPIU phases related to management controls.[8]]

The SDLC phases serve as a programmatic guide to project activity and provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC phase objectives is described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives during each SDLC phase while executing projects. Control objectives help to provide a clear statement of the desired result or purpose and should be used throughout the entire SDLC process.
Control objectives can be grouped into major categories (domains) that relate to the SDLC phases as shown in the figure.[8]

To manage and control any SDLC initiative, each project will be required to establish some degree of a work breakdown structure (WBS) to capture and schedule the work necessary to complete the project. The WBS and all programmatic material should be kept in the "project description" section of the project notebook. The WBS format is mostly left to the project manager to establish in a way that best describes the project work. There are some key areas that must be defined in the WBS as part of the SDLC policy. The following diagram describes three key areas that will be addressed in the WBS in a manner established by the project manager.[8]

Work breakdown structured organization

[Figure: Work breakdown structure.[8]]

The upper section of the work breakdown structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project, and will be part of the initial project description effort leading to project approval. The middle section of the WBS is based on the seven SDLC phases as a guide for WBS task development. The WBS elements should consist of milestones and "tasks", as opposed to "activities", and have a definite period (usually two weeks or more). Each task must have a measurable output (e.g. a document, decision, or analysis). A WBS task may rely on one or more activities (e.g. software engineering, systems engineering) and may require close coordination with other tasks, either internal or external to the project. Any part of the project needing support from contractors should have a statement of work (SOW) written to include the appropriate tasks from the SDLC phases. The development of a SOW does not occur during a specific phase of the SDLC; it is developed to include the work from the SDLC process that may be conducted by external resources such as contractors.[8]
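A hedged sketch of the WBS shape described above: summary phases at the top, tasks beneath them, each task with a measurable output and a definite period of two weeks or more. All task names, durations, and outputs are invented for illustration.

    # Hedged sketch: a work breakdown structure as nested data.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        weeks: int            # definite period, usually two weeks or more
        output: str           # each task must have a measurable output
        subtasks: list = field(default_factory=list)

        def total_weeks(self):
            # Sums this task's own effort plus that of its subtasks.
            return self.weeks + sum(t.total_weeks() for t in self.subtasks)

    project = Task("Inventory system", 0, "approved project description", [
        Task("Analysis", 2, "requirements document"),
        Task("Design", 3, "design document", [
            Task("Data model", 2, "ER diagram"),
        ]),
        Task("Development", 6, "tested build"),
    ])

    print("Scheduled weeks:", project.total_weeks())  # 13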
Baselines in the SDLC

Baselines are an important part of the SDLC. They are established after four of the five phases of the SDLC and are critical to the iterative nature of the model.[9] Each baseline is considered a milestone in the SDLC:

• functional baseline: established after the conceptual design phase.
• allocated baseline: established after the preliminary design phase.
• product baseline: established after the detail design and development phase.
• updated product baseline: established after the production construction phase.

Complementary to SDLC

Complementary software development methods to the SDLC are:

• Software prototyping
• Joint applications development (JAD)
• Rapid application development (RAD)
• Extreme programming (XP); an extension of earlier work in prototyping and RAD.
• Open-source development
• End-user development
• Object-oriented programming

Comparison of methodology approaches (Post & Anderson 2006)[10]

                            SDLC         RAD      Open source  Objects     JAD      Prototyping  End User
Control                     Formal       MIS      Weak         Standards   Joint    User         User
Time frame                  Long         Short    Medium       Any         Medium   Short        Short
Users                       Many         Few      Few          Varies      Few      One or two   One
MIS staff                   Many         Few      Hundreds     Split       Few      One or two   None
Transaction/DSS             Transaction  Both     Both         Both        DSS      DSS          DSS
Interface                   Minimal      Minimal  Weak         Windows     Crucial  Crucial      Crucial
Documentation and training  Vital        Limited  Internal     In Objects  Limited  Weak         None
Integrity and security      Vital        Vital    Unknown      In Objects  Limited  Weak         Weak
Reusability                 Limited      Some     Maybe        Vital       Limited  Weak         None

Strengths and weaknesses

Few people in the modern computing world would use a strict waterfall model for their SDLC, as many modern methodologies have superseded this thinking. Some will argue that the SDLC no longer applies to models like agile computing, but it is still a term widely used in technology circles.
The SDLC practice has advantages in traditional models of software development that lend themselves to a structured environment. The disadvantage of using the SDLC methodology arises when there is a need for iterative development (e.g. web development or e-commerce), where stakeholders need to review the software being designed on a regular basis. Instead of viewing the SDLC from a strength-or-weakness perspective, it is far more important to take the best practices from the SDLC model and apply them to whatever may be most appropriate for the software being designed.

A comparison of the strengths and weaknesses of the SDLC:[10]

Strengths:
• Control.
• Monitor large projects.
• Detailed steps.
• Evaluate costs and completion targets.
• Documentation.
• Well defined user input.
• Ease of maintenance.
• Development and design standards.
• Tolerates changes in MIS staffing.

Weaknesses:
• Increased development time.
• Increased development cost.
• Systems must be defined up front.
• Rigidity.
• Hard to estimate costs, project overruns.
• User input is sometimes limited.

An alternative to the SDLC is rapid application development (RAD), which combines prototyping, joint application development and implementation of CASE tools. The advantages of RAD are speed, reduced development cost, and active user involvement in the development process.
4. Define OOP. What are the applications of it?

Object-oriented programming (OOP) is a programming paradigm using "objects" – data structures consisting of data fields and methods, together with their interactions – to design applications and computer programs. Programming techniques may include features such as data abstraction, encapsulation, messaging, modularity, polymorphism, and inheritance. Many modern programming languages now support OOP, at least as an option.

Overview

Simple, non-OOP programs may be one "long" list of statements (or commands). More complex programs often group smaller sections of these statements into functions or subroutines, each of which might perform a particular task. With designs of this sort, it is common for some of the program's data to be "global", i.e. accessible from any part of the program. As programs grow in size, allowing any function to modify any piece of data means that bugs can have wide-reaching effects.

In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are either bundled in with the data or inherited from "class objects" and act as intermediaries for retrieving or modifying the data they control. The programming construct that combines data with a set of methods for accessing and managing those data is called an object. The practice of using subroutines to examine or modify certain kinds of data, however, was also quite common in non-OOP modular programming, well before the widespread use of object-oriented programming.

An object-oriented program will usually contain different types of objects, each type corresponding to a particular kind of complex data to be managed, or perhaps to a real-world object or concept such as a bank account, a hockey player, or a bulldozer. A program might well contain multiple copies of each type of object, one for each of the real-world objects the program is dealing with. For instance, there could be one bank account object for each real-world account at a particular bank. Each copy of the bank account object would be alike in the methods it offers for manipulating or reading its data, but the data inside each object would differ, reflecting the different history of each account.

Objects can be thought of as wrapping their data within a set of functions designed to ensure that the data are used appropriately, and to assist in that use. The object's methods will typically include checks and safeguards that are specific to the types of data the object contains. An object can also offer simple-to-use, standardized methods for performing particular operations on its data, while concealing the specifics of how those tasks are accomplished. In this way, alterations can be made to the internal structure or methods of an object without requiring that the rest of the program be modified.
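The bank account example above can be made concrete with a short sketch. The class, its fields, and the overdraft check are illustrative assumptions, not a prescribed design.

    # Hedged sketch: a bank account object that encapsulates its balance.
    # The balance is reached only through methods that enforce safeguards.

    class BankAccount:
        def __init__(self, owner, balance=0.0):
            self.owner = owner
            self._balance = balance  # leading underscore: not for direct use

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self._balance += amount

        def withdraw(self, amount):
            if amount > self._balance:  # safeguard specific to this data
                raise ValueError("insufficient funds")
            self._balance -= amount

        def balance(self):
            return self._balance

    # Two objects share the same methods but hold different data.
    a = BankAccount("Ada", 100.0)
    b = BankAccount("Bob")
    a.withdraw(30.0)
    b.deposit(50.0)
    print(a.balance(), b.balance())  # 70.0 50.0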
This approach can also be used to offer standardized methods across different types of objects. As an example, several different types of objects might offer print methods. Each type of object might implement its print method in a different way, reflecting the different kinds of data it contains, but all the different print methods might be called in the same standardized manner from elsewhere in the program. These features become especially useful when more than one programmer is contributing code to a project or when the goal is to reuse code between projects.

Object-oriented programming has roots that can be traced to the 1960s. As hardware and software became increasingly complex, manageability often became a concern. Researchers studied ways to maintain software quality and developed object-oriented programming in part to address common problems by strongly emphasizing discrete, reusable units of programming logic.[citation needed] The technology focuses on data rather than processes, with programs composed of self-sufficient modules ("classes"), each instance of which ("objects") contains all the information needed to manipulate its own data structure ("members"). This is in contrast to the modular programming that had been dominant for many years, which focused on the function of a module rather than specifically the data, but equally provided for code reuse and self-sufficient reusable units of programming logic, enabling collaboration through the use of linked modules (subroutines). This more conventional approach, which still persists, tends to consider data and behavior separately.

An object-oriented program may thus be viewed as a collection of interacting objects, as opposed to the conventional model, in which a program is seen as a list of tasks (subroutines) to perform. In OOP, each object is capable of receiving messages, processing data, and sending messages to other objects. Each object can be viewed as an independent "machine" with a distinct role or responsibility. The actions (or "methods") on these objects are closely associated with the object. For example, OOP data structures tend to "carry their own operators around with them" (or at least "inherit" them from a similar object or class) – except when they have to be serialized.

History

The terms "objects" and "oriented" in something like the modern sense of object-oriented programming seem to have made their first appearance at MIT in the late 1950s and early 1960s. In the environment of the artificial intelligence group, as early as 1960, "object" could refer to identified items (LISP atoms) with properties (attributes);[1][2] Alan Kay later cited a detailed understanding of LISP internals as a strong influence on his thinking in 1966.[3] Another early MIT example was Sketchpad, created by Ivan Sutherland in 1960–61; in the glossary of the 1963 technical report based on his dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction.[4] Also, an MIT ALGOL version, AED-0, linked data structures ("plexes", in that dialect) directly with procedures, prefiguring what were later termed "messages", "methods" and "member functions".[5][6]
Objects as a formal concept in programming were introduced in the 1960s in Simula 67, a major revision of Simula I, a programming language designed for discrete event simulation created by Ole-Johan Dahl and Kristen Nygaard of the Norwegian Computing Center in Oslo.[7] Simula 67 was influenced by SIMSCRIPT and C.A.R. "Tony" Hoare's proposed "record classes".[5][8] Simula introduced the notion of classes and instances (or objects), as well as subclasses, virtual methods, coroutines, and discrete event simulation, as part of an explicit programming paradigm. The language also used automatic garbage collection, which had been invented earlier for the functional programming language Lisp. Simula was used for physical modeling, such as models to study and improve the movement of ships and their content through cargo ports. The ideas of Simula 67 influenced many later languages, including Smalltalk, derivatives of LISP (CLOS), Object Pascal, and C++.

The Smalltalk language, which was developed at Xerox PARC (by Alan Kay and others) in the 1970s, introduced the term object-oriented programming to represent the pervasive use of objects and messages as the basis for computation. Smalltalk's creators were influenced by the ideas introduced in Simula 67, but Smalltalk was designed to be a fully dynamic system in which classes could be created and modified dynamically rather than statically, as in Simula 67.[9] Smalltalk, and with it OOP, was introduced to a wider audience by the August 1981 issue of Byte Magazine.

In the 1970s, Kay's Smalltalk work had influenced the Lisp community to incorporate object-based techniques that were introduced to developers via the Lisp machine. Experimentation with various extensions to Lisp (like LOOPS and Flavors, introducing multiple inheritance and mixins) eventually led to the Common Lisp Object System (CLOS, a part of the first standardized object-oriented programming language, ANSI Common Lisp), which integrates functional programming and object-oriented programming and allows extension via a meta-object protocol. In the 1980s, there were a few attempts to design processor architectures that included hardware support for objects in memory, but these were not successful. Examples include the Intel iAPX 432 and the Linn Smart Rekursiv.

Object-oriented programming developed as the dominant programming methodology in the early and mid 1990s, when programming languages supporting the techniques became widely available. These included Visual FoxPro 3.0,[10][11][12] C++[citation needed], and Delphi[citation needed]. Its dominance was further enhanced by the rising popularity of graphical user interfaces, which rely heavily upon object-oriented programming techniques. An example of a closely related dynamic GUI library and OOP language can be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-oriented, dynamic messaging extension to C based on Smalltalk. OOP toolkits also enhanced the popularity of event-driven programming (although this concept is not limited to OOP). Some[who?] feel that association with GUIs (real or perceived) was what propelled OOP into the programming mainstream.
At ETH Zürich, Niklaus Wirth and his colleagues had also been investigating such topics as data abstraction and modular programming (although this had been in common use in the 1960s or earlier). Modula-2 (1978) included both, and their succeeding design, Oberon, included a distinctive approach to object orientation, classes, and so on. The approach is unlike Smalltalk, and very unlike C++. Object-oriented features were added to many existing languages during that time, including Ada, BASIC, Fortran, Pascal, and others. Adding these features to languages that were not initially designed for them often led to problems with the compatibility and maintainability of code.

More recently, a number of languages have emerged that are primarily object-oriented yet compatible with procedural methodology, such as Python and Ruby. Probably the most commercially important recent object-oriented languages are Visual Basic.NET (VB.NET) and C#, both designed for Microsoft's .NET platform, and Java, developed by Sun Microsystems. Both frameworks show the benefit of using OOP by creating an abstraction from implementation in their own way. VB.NET and C# support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other language. Developers usually compile Java to bytecode, allowing Java to run on any operating system for which a Java virtual machine is available. VB.NET and C# make use of the Strategy pattern to accomplish cross-language inheritance, whereas Java makes use of the Adapter pattern.[citation needed]

Just as procedural programming led to refinements of techniques such as structured programming, modern object-oriented software design methods include refinements[citation needed] such as the use of design patterns, design by contract, and modeling languages (such as UML).

Fundamental features and concepts

See also: List of object-oriented programming terms

A survey by Deborah J. Armstrong of nearly 40 years of computing literature identified a number of "quarks", or fundamental concepts, found in the strong majority of definitions of OOP.[13] Not all of these concepts appear in all object-oriented programming languages. For example, object-oriented programming that uses classes is sometimes called class-based programming, while prototype-based programming does not typically use classes. As a result, a significantly different yet analogous terminology is used to define the concepts of object and instance.

Benjamin C. Pierce and some other researchers view as futile any attempt to distill OOP to a minimal set of features. He nonetheless identifies fundamental features that support the OOP programming style in most object-oriented languages:[14]
• Dynamic dispatch – when a method is invoked on an object, the object itself determines what code gets executed, by looking up the method at run time in a table associated with the object. This feature distinguishes an object from an abstract data type (or module), which has a fixed (static) implementation of the operations for all instances. It is a programming methodology that gives modular component development while at the same time being very efficient.
• Encapsulation (or multi-methods, in which case the state is kept separate)
• Subtype polymorphism
• Object inheritance (or delegation)
• Open recursion – a special variable (syntactically it may be a keyword), usually called this or self, that allows a method body to invoke another method body of the same object. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof.

Similarly, in his 2003 book Concepts in Programming Languages, John C. Mitchell identifies four main features: dynamic dispatch, abstraction, subtype polymorphism, and inheritance.[15] Michael Lee Scott in Programming Language Pragmatics considers only encapsulation, inheritance and dynamic dispatch.[16]

Additional concepts used in object-oriented programming include:

• Classes of objects
• Instances of classes
• Methods which act on the attached objects
• Message passing
• Abstraction

Decoupling

Decoupling refers to careful controls that separate code modules from particular use cases, which increases code re-usability. A common use of decoupling in OOP is to polymorphically decouple the encapsulation (see Bridge pattern and Adapter pattern) – for example, using a method interface which an encapsulated object must satisfy, as opposed to using the object's class.
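Dynamic dispatch and decoupling through a method interface, both described above, can be sketched together. The shape classes and the interface are illustrative assumptions.

    # Hedged sketch: callers depend only on a small method interface
    # (decoupling); the object invoked determines the code that runs
    # (dynamic dispatch).
    from abc import ABC, abstractmethod
    import math

    class Shape(ABC):              # the interface objects must satisfy
        @abstractmethod
        def area(self):
            ...

    class Circle(Shape):
        def __init__(self, r):
            self.r = r
        def area(self):
            return math.pi * self.r ** 2

    class Rectangle(Shape):
        def __init__(self, w, h):
            self.w, self.h = w, h
        def area(self):
            return self.w * self.h

    def total_area(shapes):
        # This caller never names a concrete class; each object's own
        # area() is looked up and run at call time.
        return sum(s.area() for s in shapes)

    print(total_area([Circle(1.0), Rectangle(2.0, 3.0)]))  # ~9.14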
Formal semantics

See also: Formal semantics of programming languages

There have been several attempts at formalizing the concepts used in object-oriented programming. The following concepts and constructs have been used as interpretations of OOP concepts:

• coalgebraic data types[17]
• abstract data types (which have existential types) allow the definition of modules, but these do not support dynamic dispatch
• recursive types
• encapsulated state
• inheritance
• records are a basis for understanding objects if function literals can be stored in fields (as in functional programming languages), but the actual calculi need to be considerably more complex to incorporate essential features of OOP. Several extensions of System F<: that deal with mutable objects have been studied;[18] these allow both subtype polymorphism and parametric polymorphism (generics).

Attempts to find a consensus definition or theory behind objects have not proven very successful (however, see Abadi & Cardelli, A Theory of Objects,[18] for formal definitions of many OOP concepts and constructs), and often diverge widely. For example, some definitions focus on mental activities, and some on program structuring. One of the simpler definitions is that OOP is the act of using "map" data structures or arrays that can contain functions and pointers to other maps, all with some syntactic and scoping sugar on top. Inheritance can be performed by cloning the maps (sometimes called "prototyping").

Objects are the run-time entities in an object-oriented system. They may represent a person, a place, a bank account, a table of data, or any item that the program has to handle.

OOP languages

See also: List of object-oriented programming languages

Simula (1967) is generally accepted as the first language with the primary features of an object-oriented language. It was created for making simulation programs, in which what came to be called objects were the most important information representation. Smalltalk (1972 to 1980) is arguably the canonical example, and the one with which much of the theory of object-oriented programming was developed. Concerning the degree of object orientation, the following distinctions can be made:

• Languages called "pure" OO languages, because everything in them is treated consistently as an object, from primitives such as characters and punctuation all the way up to whole classes, prototypes, blocks, modules, etc. They were designed specifically to facilitate, even enforce, OO methods. Examples: Eiffel, Emerald,[19] JADE, Obix, Scala, Smalltalk.
• Languages designed mainly for OO programming, but with some procedural elements. Examples: C++, Java, C#, VB.NET, Python.
• Languages that are historically procedural, but have been extended with some OO features. Examples: Visual Basic (derived from BASIC), Fortran 2003, Perl, COBOL 2002, PHP, ABAP.
• Languages with most of the features of objects (classes, methods, inheritance, reusability), but in a distinctly original form. Examples: Oberon (Oberon-1 or Oberon-2) and Common Lisp.
• Languages with abstract data type support but not all features of object-orientation, sometimes called object-based languages. Examples: Modula-2 (with excellent encapsulation and information hiding), Pliant, CLU.

OOP in dynamic languages

In recent years, object-oriented programming has become especially popular in dynamic programming languages. Python, Ruby and Groovy are dynamic languages built on OOP principles, while Perl and PHP have been adding object-oriented features since Perl 5 and PHP 4, and ColdFusion since version 5.

The Document Object Model of HTML, XHTML, and XML documents on the Internet has bindings to the popular JavaScript/ECMAScript language. JavaScript is perhaps the best-known prototype-based programming language, which employs cloning from prototypes rather than inheriting from a class. Another scripting language that takes this approach is Lua. Earlier versions of ActionScript (a partial superset of ECMA-262 R3, otherwise known as ECMAScript) also used a prototype-based object model. Later versions of ActionScript incorporate a combination of classification and prototype-based object models, based largely on the then-incomplete ECMA-262 R4 specification, which has its roots in an early JavaScript 2 proposal. Microsoft's JScript.NET also includes a mash-up of object models based on the same proposal, and is also a superset of the ECMA-262 R3 specification.

Design patterns

Challenges of object-oriented design are addressed by several methodologies. Most common are the design patterns codified by Gamma et al. More broadly, the term "design patterns" can be used to refer to any general, repeatable solution to a commonly occurring problem in software design. Some of these commonly occurring problems have implications and solutions particular to object-oriented development.

Inheritance and behavioral subtyping

See also: Object-oriented design

It is intuitive to assume that inheritance creates a semantic "is a" relationship, and thus to infer that objects instantiated from subclasses can always be safely used instead of those instantiated from the superclass. This intuition is unfortunately false in most OOP languages, in particular in all those that allow mutable objects. Subtype polymorphism, as enforced by the type checker in OOP languages (with mutable objects), cannot guarantee behavioral subtyping in any context. Behavioral subtyping is undecidable in general, so it cannot be implemented by a program (compiler). Class or object hierarchies need to be carefully designed, considering possible incorrect uses that cannot be detected syntactically. This issue is known as the Liskov substitution principle.
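A hedged sketch of the point above, using the classic mutable rectangle/square example (an assumption chosen for illustration; the circle-ellipse problem mentioned later in this answer is the same difficulty): the subclass type-checks, yet breaks a behavioral expectation of the superclass.

    # Hedged sketch: subtype polymorphism without behavioral subtyping.
    # Square "is a" Rectangle to the type system, but a caller's
    # expectation about Rectangle behavior is silently violated.

    class Rectangle:
        def __init__(self, w, h):
            self.w, self.h = w, h

        def set_height(self, h):
            self.h = h

        def area(self):
            return self.w * self.h

    class Square(Rectangle):
        def __init__(self, side):
            super().__init__(side, side)

        def set_height(self, h):      # must keep both sides equal
            self.w = self.h = h

    def stretch(rect):
        # Caller assumes: changing the height leaves the width alone.
        old_w = rect.w
        rect.set_height(10)
        assert rect.w == old_w, "width changed unexpectedly!"

    stretch(Rectangle(2, 3))  # fine
    stretch(Square(2))        # AssertionError: the behavior, not the type, broke

No compiler can catch this in general; the hierarchy has to be designed so that such incorrect uses cannot arise.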
Gang of Four design patterns

Main article: Design pattern (computer science)

Design Patterns: Elements of Reusable Object-Oriented Software is an influential book published in 1995 by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, often referred to humorously as the "Gang of Four". Along with exploring the capabilities and pitfalls of object-oriented programming, it describes 23 common programming problems and patterns for solving them. As of April 2007, the book was in its 36th printing.

Object-orientation and databases

Main articles: Object-relational impedance mismatch, Object-relational mapping, and Object database

Both object-oriented programming and relational database management systems (RDBMSs) are extremely common in software today. Since relational databases don't store objects directly (though some RDBMSs have object-oriented features to approximate this), there is a general need to bridge the two worlds. The problem of bridging object-oriented programming accesses and data patterns with relational databases is known as the object-relational impedance mismatch. There are a number of approaches to coping with this problem, but no general solution without downsides.[20] One of the most common approaches is object-relational mapping, as found in libraries like Java Data Objects and Ruby on Rails' ActiveRecord. There are also object databases that can be used to replace RDBMSs, but these have not been as technically and commercially successful as RDBMSs.

Real-world modeling and relationships

OOP can be used to associate real-world objects and processes with digital counterparts. However, not everyone agrees that OOP facilitates direct real-world mapping (see the Negative Criticism section) or that real-world mapping is even a worthy goal; Bertrand Meyer argues in Object-Oriented Software Construction[21] that a program is not a model of the world but a model of some part of the world; "Reality is a cousin twice removed". At the same time, some principal limitations of OOP have been noted.[22] For example, the circle-ellipse problem is difficult to handle using OOP's concept of inheritance. However, Niklaus Wirth (who popularized the adage now known as Wirth's law: "Software is getting slower more rapidly than hardware becomes faster") said of OOP in his paper "Good Ideas through the Looking Glass": "This paradigm closely reflects the structure of systems 'in the real world', and it is therefore well suited to model complex systems with complex behaviours" (contrast the KISS principle).
Steve Yegge and others have noted that natural languages lack the OOP approach of strictly prioritizing things (objects/nouns) before actions (methods/verbs).[23] This problem may cause OOP to suffer more convoluted solutions than procedural programming.[24]

OOP and control flow

OOP was developed to increase the reusability and maintainability of source code.[25] Transparent representation of the control flow had no priority and was meant to be handled by a compiler. With the increasing relevance of parallel hardware and multithreaded coding, transparent control flow has become more important to developers, something that is hard to achieve with OOP.[26][27][28][29]

Responsibility- vs. data-driven design

Responsibility-driven design defines classes in terms of a contract; that is, a class should be defined around a responsibility and the information that it shares. This is contrasted by Wirfs-Brock and Wilkerson with data-driven design, where classes are defined around the data structures that must be held. The authors hold that responsibility-driven design is preferable.
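The contrast above can be sketched briefly. Both classes are invented examples: the first is defined around the data it holds, the second around a responsibility and the information it promises to share.

    # Hedged sketch: data-driven vs responsibility-driven class design.

    class OrderData:                     # data-driven: defined by its fields
        def __init__(self, items, tax_rate):
            self.items = items            # callers compute with these fields
            self.tax_rate = tax_rate

    class Order:                          # responsibility-driven: defined by
        def __init__(self, items, tax_rate):  # what it promises to answer
            self._items = items
            self._tax_rate = tax_rate

        def total(self):
            """Responsibility: report what the customer owes."""
            subtotal = sum(price for _, price in self._items)
            return round(subtotal * (1 + self._tax_rate), 2)

    items = [("book", 20.0), ("pen", 2.5)]
    print(Order(items, 0.08).total())    # 24.3; callers need no field access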
5. Explain steps involved in the process of Software Project Management

Software project management is the art and science of planning and leading software projects.[1] It is a sub-discipline of project management in which software projects are planned, monitored and controlled.

History

Companies quickly understood the relative ease of use that software programming had over hardware circuitry, and the software industry grew very quickly in the 1970s and 1980s. To manage new development efforts, companies applied proven project management methods, but project schedules slipped during test runs, especially when confusion occurred in the gray zone between the user specifications and the delivered software. To avoid these problems, software project management methods focused on matching user requirements to delivered products, in the method now known as the waterfall model.

Since then, analysis of software project management failures has shown that the following are the most common causes:[2]

1. Unrealistic or unarticulated project goals
2. Inaccurate estimates of needed resources
3. Badly defined system requirements
4. Poor reporting of the project's status
5. Unmanaged risks
6. Poor communication among customers, developers, and users
7. Use of immature technology
8. Inability to handle the project's complexity
9. Sloppy development practices
10. Poor project management
11. Stakeholder politics
12. Commercial pressures

The first three items in the list above show the difficulty of articulating the needs of the client in such a way that proper resources can deliver the proper project goals. Specific software project management tools are useful and often necessary, but the true art in software project management is applying the correct method and then using tools to support the method; without a method, tools are worthless. Since the 1960s, several proprietary software project management methods have been developed by software manufacturers for their own use, while computer consulting firms have also developed similar methods for their clients. Today software project management methods are still evolving, but the current trend leads away from the waterfall model toward a more cyclic project delivery model that imitates a software release life cycle.
Software development process

A software development process is concerned primarily with the production aspect of software development, as opposed to the technical aspect, such as software tools. These processes exist primarily for supporting the management of software development, and are generally skewed toward addressing business concerns. Many software development processes can be run in a similar way to general project management processes. Examples are:

• Risk management is the process of measuring or assessing risk and then developing strategies to manage it (a minimal risk-register sketch appears below, after the planning and control notes). In general, the strategies employed include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk. Risk management in software project management begins with the business case for starting the project, which includes a cost-benefit analysis as well as a list of fallback options for project failure, called a contingency plan.
o A subset of risk management that is gaining more and more attention is opportunity management, which means the same thing, except that the potential risk outcome will have a positive rather than a negative impact. Though theoretically handled in the same way, using the term "opportunity" rather than the somewhat negative term "risk" helps to keep a team focused on possible positive outcomes of any given risk register in their projects, such as spin-off projects, windfalls, and free extra resources.
• Requirements management is the process of identifying, eliciting, documenting, analyzing, tracing, prioritizing and agreeing on requirements and then controlling change and communicating to relevant stakeholders. Requirements management, which includes requirements analysis, is an important part of the software engineering process, whereby business analysts or software developers identify the needs or requirements of a client; having identified these requirements, they are then in a position to design a solution.
• Change management is the process of identifying, documenting, analyzing, prioritizing and agreeing on changes to scope (project management) and then controlling changes and communicating to relevant stakeholders. Change impact analysis of new or altered scope, which includes requirements analysis at the change level, is an important part of the software engineering process, whereby business analysts or software developers identify the altered needs or requirements of a client; having identified these requirements, they are then in a position to re-design or modify a solution. Theoretically, each change can impact the timeline and budget of a software project, and therefore by definition must include risk-benefit analysis before approval.
• Software configuration management is the process of identifying and documenting the scope itself, which is the software product underway, including all sub-products and changes, and enabling communication of these to relevant stakeholders.
In general, the processes employed include version control, naming conventions (programming), and software archival agreements.

• Release management is the process of identifying, documenting, prioritizing and agreeing on releases of software and then controlling the release schedule and communicating to relevant stakeholders. Most software projects have access to three software environments to which software can be released: development, test, and production. In very large projects, where distributed teams need to integrate their work before release to users, there will often be more environments for testing, called unit testing, system testing, or integration testing, before release to user acceptance testing (UAT).
o A subset of release management that is gaining more and more attention is data management, since the users can only test based on data that they know, and "real" data is only in the software environment called "production". In order to test their work, programmers must therefore often create "dummy data" or "data stubs". Traditionally, older versions of a production system were used for this purpose, but as companies rely more and more on outside contributors for software development, company data may not be released to development teams. In complex environments, datasets may be created that are then migrated across test environments according to a test release schedule, much like the overall software release schedule.

Project planning, monitoring and control

The purpose of project planning is to identify the scope of the project, estimate the work involved, and create a project schedule. Project planning begins with requirements that define the software to be developed. The project plan is then developed to describe the tasks that will lead to completion.

The purpose of project monitoring and control is to keep the team and management up to date on the project's progress. If the project deviates from the plan, the project manager can take action to correct the problem. Project monitoring and control involves status meetings to gather status from the team. When changes need to be made, change control is used to keep the products up to date.
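As referenced in the risk-management bullet above, planning and monitoring are often supported by a simple risk register that ranks risks by exposure (probability times impact). The risks, probabilities, and cost figures below are invented for illustration.

    # Hedged sketch: a risk register ranked by exposure = probability x impact.
    risks = [
        {"risk": "key developer leaves", "probability": 0.2, "impact": 50000},
        {"risk": "requirements change",  "probability": 0.6, "impact": 20000},
        {"risk": "vendor API is late",   "probability": 0.3, "impact": 15000},
    ]

    for r in risks:
        r["exposure"] = r["probability"] * r["impact"]

    # Highest-exposure risks deserve a mitigation strategy (transfer,
    # avoid, reduce, or accept) and a contingency plan.
    for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
        print(f"{r['risk']:<25} exposure = {r['exposure']:>8.0f}")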
Issue

In computing, the term issue is a unit of work to accomplish an improvement in a system. An issue could be a bug, a requested feature, a task, missing documentation, and so forth. The word "issue" is popularly misused in lieu of "problem"; this usage is probably related.[citation needed] For example, OpenOffice.org used to call their modified version of BugZilla IssueZilla; as of September 2010, they call their system Issue Tracker.

Problems occur from time to time, and fixing them in a timely fashion is essential to achieving the correctness of a system and avoiding delayed deliveries of products.

Severity levels

Issues are often categorized in terms of severity levels. Different companies have different definitions of severities, but some of the most common are:

Critical / High
       The bug or issue affects a crucial part of a system, and must be fixed in order for it to resume normal operation.
Medium
       The bug or issue affects a minor part of a system, but has some impact on its operation. This severity level is assigned when a non-central requirement of a system is affected.
Low
       The bug or issue affects a minor part of a system, and has very little impact on its operation. This severity level is assigned when a non-central requirement of a system (and one of lower importance) is affected.
Cosmetic
       The system works correctly, but its appearance does not match the expected one: wrong colors, too much or too little spacing between contents, incorrect font sizes, typos, etc. This is the lowest severity of issue.

In many software companies, issues are investigated by quality assurance analysts when they verify a system for correctness, and are then assigned to the developer(s) responsible for resolving them. They can also be assigned by system users during the user acceptance testing (UAT) phase. Issues are commonly communicated using issue or defect tracking systems; in some other cases, email or instant messengers are used.

Philosophy

As a sub-discipline of project management, some regard the management of software development as akin to the management of manufacturing, which can be performed by someone with management skills but no programming skills. John C. Reynolds rebuts this view, arguing that software development is entirely design work, and comparing a manager who cannot program to the managing editor of a newspaper who cannot write.[3]

In software project management, the end users and developers need to know the length, duration and cost of the project. It is a process of managing, allocating and timing resources to develop computer software that meets requirements. It consists of eight tasks:
- Problem Identification
- Problem Definition
- Project Planning
- Project Organization
- Resource Allocation
- Project Scheduling
- Tracking, Reporting and Controlling
- Project Termination

In problem identification and definition, decisions are made about approving, declining, or prioritizing projects. In problem identification, the project is identified, defined and justified. In problem definition, the purpose of the project is clarified; the main product is the project proposal. Project planning describes the series of actions or steps needed to develop the work product. In project organization, the functions of the personnel are integrated; this is done in parallel with project planning. In resource allocation, resources are allocated to a project so that its goals and objectives are achieved. In project scheduling, resources are allocated so that project objectives are achieved within a reasonable time span. In tracking, reporting and controlling, the process checks whether the project results are in accordance with the project plans and performance specifications; in controlling, proper action is taken to correct unacceptable deviations. In project termination, the final report is submitted or a release order is signed.