
Management Information System



1. Discuss the structured system analysis and design methodologies
2. What is DSS? Discuss the components and capabilities of DSS.
3. Narrate the stages of SDLC
4. Define OOP. What are the applications of it?



1. Discuss the structured system analysis and design methodologies.

Structured systems analysis and design method (SSADM) is a systems approach to the analysis and design of information systems. SSADM was produced for the Central Computer and Telecommunications Agency (now Office of Government Commerce), a UK government office concerned with the use of technology in government, from 1980 onwards.

Overview

SSADM is a waterfall method for the analysis and design of information systems. SSADM can be thought to represent a pinnacle of the rigorous, document-led approach to system design. The names "Structured Systems Analysis and Design Method" and "SSADM" are registered trademarks of the Office of Government Commerce (OGC), which is an office of the United Kingdom's Treasury.[1]

History

The principal stages of the development of SSADM were:[2]
  • 1980: Central Computer and Telecommunications Agency (CCTA) evaluates analysis and design methods.
  • 1981: Learmonth & Burchett Management Systems (LBMS) method chosen from a shortlist of five.
  • 1983: SSADM made mandatory for all new information system developments.
  • 1998: PLATINUM TECHNOLOGY acquires LBMS.
  • 2000: CCTA renames SSADM as "Business System Development". The method was repackaged into 15 modules and another 6 modules were added.[3][4]

SSADM techniques

The three most important techniques used in SSADM are:

Logical data modeling: the process of identifying, modeling and documenting the data requirements of the system being designed. The data are separated into entities (things about which a business needs to record information) and relationships (the associations between the entities).

Data flow modeling: the process of identifying, modeling and documenting how data moves around an information system. Data flow modeling examines processes
(activities that transform data from one form to another), data stores (the holding areas for data), external entities (what sends data into a system or receives data from a system), and data flows (routes by which data can flow).

Entity behavior modeling: the process of identifying, modeling and documenting the events that affect each entity and the sequence in which these events occur.

Stages

The SSADM method involves the application of a sequence of analysis, documentation and design tasks concerned with the following.

Stage 0 – Feasibility study

In order to determine whether or not a given project is feasible, there must be some form of investigation into the goals and implications of the project. For very small-scale projects this may not be necessary at all, as the scope of the project is easily understood. In larger projects, the feasibility study may be done informally, either because there is no time for a formal study or because the project is a "must-have" and will have to be done one way or the other.

When a feasibility study is carried out, there are four main areas of consideration:
  • Technical – is the project technically possible?
  • Financial – can the business afford to carry out the project?
  • Organizational – will the new system be compatible with existing practices?
  • Ethical – is the impact of the new system socially acceptable?

To answer these questions, the feasibility study is effectively a condensed version of a full systems analysis and design. The requirements and users are analyzed to some extent, some business options are drawn up, and even some details of the technical implementation are considered.

The product of this stage is a formal feasibility study document.
SSADM specifies the sections that the study should contain, including any preliminary models that have been constructed and also details of rejected options and the reasons for their rejection.

Stage 1 – Investigation of the current environment

This is one of the most important stages of SSADM. The developers of SSADM understood that though the tasks and objectives of a new system may be radically different from the old system, the underlying data will probably change very little. By coming to a full understanding of the data requirements at an early stage, the remaining analysis and design stages can be built up on a firm foundation.
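The logical data modeling technique described earlier separates data into entities and relationships. As a rough, hypothetical illustration (the Customer/Order entities below are invented for this sketch and are not part of SSADM, which expresses this as a diagram rather than code):

```python
from dataclasses import dataclass, field

# Hypothetical entities for a small order-processing domain.
# In SSADM these would appear on a logical data structure (ERD);
# here each entity lists the attributes the business must record.

@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int          # relationship: each Order belongs to one Customer
    lines: list = field(default_factory=list)

# One Customer is associated with many Orders (a 1:n relationship).
alice = Customer(1, "Alice")
orders = [Order(100, alice.customer_id), Order(101, alice.customer_id)]
print(len([o for o in orders if o.customer_id == alice.customer_id]))  # 2
```

The point of the exercise is that the entities and their associations are documented independently of any eventual database or program structure.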
In almost all cases there is some form of current system, even if it is entirely composed of people and paper. Through a combination of interviewing employees, circulating questionnaires, observation and existing documentation, the analyst comes to a full understanding of the system as it is at the start of the project. This serves many purposes:
  • the analyst learns the terminology of the business, what users do and how they do it
  • the old system provides the core requirements for the new system
  • faults, errors and areas of inefficiency are highlighted and their correction added to the requirements
  • the data model can be constructed
  • the users become involved and learn the techniques and models of the analyst
  • the boundaries of the system can be defined

The products of this stage are:
  • Users Catalog, describing all the users of the system and how they interact with it
  • Requirements Catalog, detailing all the requirements of the new system
  • Current Services Description, further composed of:
      • Current environment logical data structure (ERD)
      • Context diagram (DFD)
      • Leveled set of DFDs for the current logical system
      • Full data dictionary, including the relationships between data stores and entities

To produce the models, the analyst works through the construction of the models as we have described. However, the first set of data-flow diagrams (DFDs) is the current physical model, that is, with full details of how the old system is implemented. The final version is the current logical model, which is essentially the same as the current physical model but with all reference to implementation removed, together with any redundancies such as repetition of process or data.

In the process of preparing the models, the analyst will discover the information that makes up the users and requirements catalogs.

Stage 2 – Business system options

Having investigated the current system, the analyst must decide on the overall design of the new system.
To do this, he or she, using the outputs of the previous stage, develops a set of business system options. These are different ways in which the new system could be produced, varying from doing nothing to throwing out the old system entirely and building an entirely new one. The analyst may hold a brainstorming session so that as many and varied ideas as possible are generated.

The ideas are then collected to form a set of two or three different options which are presented to the user. The options consider the following:
  • the degree of automation
  • the boundary between the system and the users
  • the distribution of the system, for example, is it centralized to one office or spread out across several?
  • cost/benefit
  • impact of the new system

Where necessary, the option will be documented with a logical data structure and a level 1 data-flow diagram.

The users and analyst together choose a single business option. This may be one of the options already defined or may be a synthesis of different aspects of the existing options. The output of this stage is the single selected business option together with all the outputs of stage 1.

Stage 3 – Requirements specification

This is probably the most complex stage in SSADM. Using the requirements developed in stage 1 and working within the framework of the selected business option, the analyst must develop a full logical specification of what the new system must do. The specification must be free from error, ambiguity and inconsistency. By logical, we mean that the specification does not say how the system will be implemented but rather describes what the system will do.

To produce the logical specification, the analyst builds the required logical models for both the data-flow diagrams (DFDs) and the entity relationship diagrams (ERDs).
These are used to produce function definitions of every function which the users will require of the system, entity life-histories (ELHs) and effect correspondence diagrams; the latter are models of how each event interacts with the system, a complement to entity life-histories. These are continually matched against the requirements and, where necessary, the requirements are added to and completed.

The product of this stage is a complete requirements specification document, which is made up of:
  • the updated data catalog
  • the updated requirements catalog
  • the processing specification, which in turn is made up of:
      • user role/function matrix
      • function definitions
      • required logical data model
      • entity life-histories
      • effect correspondence diagrams
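An entity life-history can be read as the permitted sequence of events for an entity. A toy sketch of that idea (the event names create/amend/delete are invented for illustration; a real ELH is a diagram, not code):

```python
# Toy entity life-history checker: for each state, the table lists
# which events are allowed next and which state they lead to.
ELH = {
    "start": {"create": "live"},
    "live":  {"amend": "live", "delete": "dead"},
    "dead":  {},
}

def valid_history(events):
    """Return True if the event sequence is permitted by the ELH."""
    state = "start"
    for ev in events:
        if ev not in ELH[state]:
            return False          # event not allowed in this state
        state = ELH[state][ev]
    return True

print(valid_history(["create", "amend", "delete"]))  # True
print(valid_history(["amend", "create"]))            # False
```

The check captures the essence of entity behavior modeling: not just which events affect an entity, but the order in which they may occur.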
Though some of these items may be unfamiliar to you, it is beyond the scope of this unit to go into them in great detail.

Stage 4 – Technical system options

This stage is the first step towards a physical implementation of the new system. As with the business system options, in this stage a large number of options for the implementation of the new system are generated. These are honed down to two or three to present to the user, from which the final option is chosen or synthesized. However, the considerations are quite different:
  • the hardware architectures
  • the software to use
  • the cost of the implementation
  • the staffing required
  • the physical limitations, such as the space occupied by the system
  • the distribution, including any networks which that may require
  • the overall format of the human-computer interface

All of these aspects must also conform to any constraints imposed by the business, such as available money and standardization of hardware and software. The output of this stage is a chosen technical system option.

Stage 5 – Logical design

Though the previous stage specifies details of the implementation, the outputs of this stage are implementation-independent and concentrate on the requirements for the human-computer interface. The logical design specifies the main methods of interaction in terms of menu structures and command structures.

One area of activity is the definition of the user dialogues. These are the main interfaces with which the users will interact with the system. Other activities are concerned with analyzing both the effects of events in updating the system and the need to make inquiries about the data on the system.
Both of these use the events, function descriptions and effect correspondence diagrams produced in stage 3 to determine precisely how to update and read data in a consistent and secure way.

The product of this stage is the logical design, which is made up of:
  • Data catalog
  • Required logical data structure
  • Logical process model – includes dialogues and the model for the update and inquiry processes
Stage 6 – Physical design

This is the final stage, where all the logical specifications of the system are converted to descriptions of the system in terms of real hardware and software. This is a very technical stage and a simple overview is presented here.

The logical data structure is converted into a physical architecture in terms of database structures. The exact structure of the functions and how they are implemented is specified. The physical data structure is optimized where necessary to meet size and performance requirements.

The product is a complete physical design which could tell software engineers how to build the system in specific details of hardware and software and to the appropriate standards.

Advantages and disadvantages

Using this methodology involves a significant undertaking which may not be suitable for all projects. The main advantages of SSADM are:
  • Three different views of the system
  • Mature
  • Separation of logical and physical aspects of the system
  • Well-defined techniques and documentation
  • User involvement

The size of SSADM is a hindrance to using it in some circumstances. There is an investment of cost and time in training people to use the techniques. The learning curve can be considerable if the full method is used, as not only are there several modeling techniques to come to terms with, but there are also a lot of standards for the preparation and presentation of documents.
2. What is DSS? Discuss the components and capabilities of DSS.

A decision support system (DSS) is a computer-based information system that supports business or organizational decision-making activities. DSSs serve the management, operations, and planning levels of an organization and help to make decisions, which may be rapidly changing and not easily specified in advance. DSSs include knowledge-based systems. A properly designed DSS is an interactive software-based system intended to help decision makers compile useful information from a combination of raw data, documents, personal knowledge, or business models to identify and solve problems and make decisions.

Typical information that a decision support application might gather and present includes:
  • inventories of information assets (including legacy and relational data sources, cubes, data warehouses, and data marts),
  • comparative sales figures between one period and the next,
  • projected revenue figures based on product sales assumptions.

History

According to Keen (1978),[1] the concept of decision support has evolved from two main areas of research: the theoretical studies of organizational decision making done at the Carnegie Institute of Technology during the late 1950s and early 1960s, and the technical work on interactive computer systems, mainly carried out at the Massachusetts Institute of Technology in the 1960s. It is considered that the concept of DSS became an area of research of its own in the middle of the 1970s, before gaining in intensity during the 1980s. In the middle and late 1980s, executive information systems (EIS), group decision support systems (GDSS), and organizational decision support systems (ODSS) evolved from the single-user and model-oriented DSS.

According to Sol (1987),[2] the definition and scope of DSS has been migrating over the years. In the 1970s DSS was described as "a computer based system to aid decision making".
In the late 1970s, the DSS movement started focusing on "interactive computer-based systems which help decision-makers utilize data bases and models to solve ill-structured problems". In the 1980s, DSS should provide systems "using suitable and available technology to improve effectiveness of managerial and professional activities", and by the end of the 1980s DSS faced a new challenge towards the design of intelligent workstations.[2]

In 1987, Texas Instruments completed development of the Gate Assignment Display System (GADS) for United Airlines. This decision support system is credited with
significantly reducing travel delays by aiding the management of ground operations at various airports, beginning with O'Hare International Airport in Chicago and Stapleton Airport in Denver, Colorado.[3][4]

Beginning in about 1990, data warehousing and on-line analytical processing (OLAP) began broadening the realm of DSS. As the turn of the millennium approached, new Web-based analytical applications were introduced. The advent of better and better reporting technologies has seen DSS start to emerge as a critical component of management design. Examples of this can be seen in the intense amount of discussion of DSS in the education environment.

DSS also have a weak connection to the user interface paradigm of hypertext. Both the University of Vermont PROMIS system (for medical decision making) and the Carnegie Mellon ZOG/KMS system (for military and business decision making) were decision support systems which were also major breakthroughs in user interface research. Furthermore, although hypertext researchers have generally been concerned with information overload, certain researchers, notably Douglas Engelbart, have been focused on decision makers in particular.

Taxonomies

As with the definition, there is no universally accepted taxonomy of DSS either. Different authors propose different classifications. Using the relationship with the user as the criterion, Haettenschwiler[5] differentiates passive, active, and cooperative DSS. A passive DSS is a system that aids the process of decision making but cannot bring out explicit decision suggestions or solutions. An active DSS can bring out such decision suggestions or solutions. A cooperative DSS allows the decision maker (or his or her advisor) to modify, complete, or refine the decision suggestions provided by the system, before sending them back to the system for validation. The system again improves, completes, and refines the suggestions of the decision maker and sends them back for validation.
The whole process then starts again, until a consolidated solution is generated.

Another taxonomy for DSS has been created by Daniel Power. Using the mode of assistance as the criterion, Power differentiates communication-driven DSS, data-driven DSS, document-driven DSS, knowledge-driven DSS, and model-driven DSS.[6]
  • A communication-driven DSS supports more than one person working on a shared task; examples include integrated tools like Microsoft's NetMeeting or Groove.[7]
  • A data-driven DSS or data-oriented DSS emphasizes access to and manipulation of a time series of internal company data and, sometimes, external data.
  • A document-driven DSS manages, retrieves, and manipulates unstructured information in a variety of electronic formats.
  • A knowledge-driven DSS provides specialized problem-solving expertise stored as facts, rules, procedures, or in similar structures.[6]
  • A model-driven DSS emphasizes access to and manipulation of a statistical, financial, optimization, or simulation model. Model-driven DSS use data and parameters provided by users to assist decision makers in analyzing a situation; they are not necessarily data-intensive. Dicodess is an example of an open source model-driven DSS generator.[8]

Using scope as the criterion, Power[9] differentiates enterprise-wide DSS and desktop DSS. An enterprise-wide DSS is linked to large data warehouses and serves many managers in the company. A desktop, single-user DSS is a small system that runs on an individual manager's PC.

Components

[Figure: Design of a Drought Mitigation Decision Support System]

Three fundamental components of a DSS architecture are:[5][6][10][11][12]
  1. the database (or knowledge base),
  2. the model (i.e., the decision context and user criteria), and
  3. the user interface.

The users themselves are also important components of the architecture.[5][12]

Development frameworks

DSS systems are not entirely different from other systems and require a structured approach. Such a framework includes people, technology, and the development approach.[10]

DSS technology levels (of hardware and software) may include:
  1. The actual application that will be used by the user. This is the part of the application that allows the decision maker to make decisions in a particular problem area. The user can act upon that particular problem.
  2. The generator: a hardware/software environment that allows people to easily develop specific DSS applications. This level makes use of CASE tools or systems such as Crystal, AIMMS, Analytica and iThink.
  3. Tools, including lower-level hardware/software: DSS generators including special languages, function libraries and linking modules.

An iterative developmental approach allows the DSS to be changed and redesigned at various intervals.
Once the system is designed, it will need to be tested and revised where necessary to achieve the desired outcome.

Classification
There are several ways to classify DSS applications. Not every DSS fits neatly into one of the categories; a DSS may be a mix of two or more architectures.

Holsapple and Whinston[13] classify DSS into the following six frameworks: text-oriented DSS, database-oriented DSS, spreadsheet-oriented DSS, solver-oriented DSS, rule-oriented DSS, and compound DSS. A compound DSS is the most popular classification for a DSS. It is a hybrid system that includes two or more of the five basic structures described by Holsapple and Whinston.[13]

The support given by DSS can be separated into three distinct, interrelated categories:[14] personal support, group support, and organizational support.

DSS components may be classified as:
  1. Inputs: factors, numbers, and characteristics to analyze
  2. User knowledge and expertise: inputs requiring manual analysis by the user
  3. Outputs: transformed data from which DSS "decisions" are generated
  4. Decisions: results generated by the DSS based on user criteria

DSSs which perform selected cognitive decision-making functions and are based on artificial intelligence or intelligent agent technologies are called intelligent decision support systems (IDSS).[citation needed]

The nascent field of decision engineering treats the decision itself as an engineered object, and applies engineering principles such as design and quality assurance to an explicit representation of the elements that make up a decision.

Applications

As mentioned above, there are theoretical possibilities of building such systems in any knowledge domain. One example is the clinical decision support system for medical diagnosis. Other examples include a bank loan officer verifying the credit of a loan applicant, or an engineering firm that has bids on several projects and wants to know if it can be competitive with its costs.

DSS is extensively used in business and management.
Executive dashboards and other business performance software allow faster decision making, identification of negative trends, and better allocation of business resources.

A growing area of DSS application, concepts, principles, and techniques is in agricultural production and marketing for sustainable development. For example, the DSSAT4 package,[15][16] developed through financial support of USAID during the 80s and 90s, has allowed rapid assessment of several agricultural production systems around the world to facilitate
decision-making at the farm and policy levels. There are, however, many constraints to the successful adoption of DSS in agriculture.[17]

DSS are also prevalent in forest management, where the long planning time frame demands specific requirements. All aspects of forest management, from log transportation and harvest scheduling to sustainability and ecosystem protection, have been addressed by modern DSSs.

A specific example concerns the Canadian National Railway system, which tests its equipment on a regular basis using a decision support system. A problem faced by any railroad is worn-out or defective rails, which can result in hundreds of derailments per year. Under a DSS, CN managed to decrease the incidence of derailments at the same time other companies were experiencing an increase.

Benefits
  1. Improves personal efficiency
  2. Speeds up the process of decision making
  3. Increases organizational control
  4. Encourages exploration and discovery on the part of the decision maker
  5. Speeds up problem solving in an organization
  6. Facilitates interpersonal communication
  7. Promotes learning or training
  8. Generates new evidence in support of a decision
  9. Creates a competitive advantage over competitors
  10. Reveals new approaches to thinking about the problem space
  11. Helps automate managerial processes
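The three fundamental DSS components named earlier (database, model, user interface) can be sketched as cooperating objects. Everything below is an invented toy, not a real DSS product: the sales figures, growth parameter, and class names are assumptions made purely for illustration.

```python
# Toy DSS wiring together the three components named in the text.
class Database:
    """The data component: holds raw internal data."""
    def __init__(self, sales):
        self.sales = sales

class Model:
    """The model component: encodes the decision context."""
    def project(self, db, growth):
        # Project next-period revenue from the last observed figure.
        return db.sales[-1] * (1 + growth)

class UserInterface:
    """The interface component: presents results to the decision maker."""
    def report(self, value):
        return f"Projected revenue: {value:.2f}"

db = Database([100.0, 110.0, 121.0])
model = Model()
ui = UserInterface()
print(ui.report(model.project(db, growth=0.10)))  # Projected revenue: 133.10
```

The decision maker supplies the parameter (here, the growth assumption), which matches the model-driven classification above: the system assists analysis without being data-intensive.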
3. Narrate the stages of SDLC

The systems development life cycle (SDLC), or software development process in systems engineering, information systems and software engineering, is a process of creating or altering information systems, and the models and methodologies that people use to develop these systems. In software engineering, the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system:[1] the software development process.

Overview

The SDLC is a process used by a systems analyst to develop an information system, training, and user (stakeholder) ownership. Any SDLC should result in a high-quality system that meets or exceeds customer expectations, reaches completion within time and cost estimates, works effectively and efficiently in the current and planned information technology infrastructure, and is inexpensive to maintain and cost-effective to enhance.[2]

Computer systems are complex and often (especially with the recent rise of service-oriented architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of SDLC models or methodologies have been created, such as "waterfall", "spiral", "Agile software development", "rapid prototyping", "incremental", and "synchronize and stabilize".[3]

SDLC models can be described along a spectrum from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes which allow for rapid changes along the development cycle. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on limited project scope and expanding or improving products by multiple iterations.
Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide large projects and risks to successful and predictable results.[citation needed] Other models, such as anamorphic development, tend to focus on a form of development that is guided by project scope and adaptive iterations of feature development.

In project management, a project can be defined both with a project life cycle (PLC) and an SDLC, during which slightly different activities occur. According to Taylor (2004), "the project life cycle encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product requirements".[4] The SDLC is used during the development of an IT project; it describes the different stages involved in the project, from the drawing board through the completion of the project.

History
The systems life cycle (SLC) is a methodology used to describe the process for building information systems, intended to develop information systems in a very deliberate, structured and methodical way, reiterating each stage of the life cycle. The systems development life cycle, according to Elliott & Strachan & Radford (2004), "originated in the 1960s, to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".[5]

Several systems development frameworks have been partly based on SDLC, such as the structured systems analysis and design method (SSADM) produced for the UK government Office of Government Commerce in the 1980s. Ever since, according to Elliott (2004), "the traditional life cycle approaches to systems development have been increasingly replaced with alternative approaches and frameworks, which attempted to overcome some of the inherent deficiencies of the traditional SDLC".[5]

Systems development phases

The systems development life cycle framework provides a sequence of activities for system designers and developers to follow. It consists of a set of steps or phases in which each phase of the SDLC uses the results of the previous one.

An SDLC adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, which are explained in the section below. A number of SDLC models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize. The oldest of these, and the best known, is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next.
These stages can be characterized and divided up in different ways, including the following:[6]
  • Preliminary analysis: The objective of phase 1 is to conduct a preliminary analysis, propose alternative solutions, describe costs and benefits, and submit a preliminary plan with recommendations. Conduct the preliminary analysis: in this step, you need to find out the organization's objectives and the nature and scope of the problem under study. Even if a problem refers only to a small segment of the organization, you need to find out what the objectives of the organization itself are, then see how the problem being studied fits in with them. Propose alternative solutions: in digging into the organization's objectives and specific problems, you may have already uncovered some solutions. Alternative proposals may come from interviewing employees, clients, suppliers, and/or consultants. You can also study what competitors are doing. With this data, you will have three choices: leave the system as is, improve it, or develop a new system. Describe the costs and benefits.
  • Systems analysis, requirements definition: Refines project goals into defined functions and operations of the intended application. Analyzes end-user information needs.
  • Systems design: Describes desired features and operations in detail, including screen layouts, business rules, process diagrams, pseudocode and other documentation.
  • Development: The real code is written here.
  • Integration and testing: Brings all the pieces together into a special testing environment, then checks for errors, bugs and interoperability.
  • Acceptance, installation, deployment: The final stage of initial development, where the software is put into production and runs actual business.
  • Maintenance: What happens during the rest of the software's life: changes, corrections, additions, moves to a different computing platform and more. This is often the longest of the stages.

In the following example (see picture), these stages of the systems development life cycle are divided into ten steps, from definition to creation and modification of IT work products:

[Figure: Model of the Systems Development Life Cycle]
The tenth phase occurs when the system is disposed of and the task performed is either eliminated or transferred to other systems. The tasks and work products for each phase are described in subsequent chapters.[7]

Not every project will require that the phases be sequentially executed. However, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.[7]

System analysis

The goal of system analysis is to determine where the problem is, in an attempt to fix the system. This step involves breaking down the system into different pieces to analyze the situation, analyzing project goals, breaking down what needs to be created, and attempting to engage users so that definite requirements can be defined.

Design

In systems design, the design functions and operations are described in detail, including screen layouts, business rules, process diagrams and other documentation. The output of this stage will describe the new system as a collection of modules or subsystems.
The design stage takes as its initial input the requirements identified in the approved requirements document. For each requirement, a set of one or more design elements will be produced as a result of interviews, workshops, and/or prototype efforts.

Design elements describe the desired software features in detail, and generally include functional hierarchy diagrams, screen layout diagrams, tables of business rules, business process diagrams, pseudocode, and a complete entity-relationship diagram with a full data dictionary. These design elements are intended to describe the software in sufficient detail that skilled programmers may develop the software with minimal additional input.

Testing

The code is tested at various levels in software testing. Unit, system and user acceptance testing are often performed. This is a grey area, as many different opinions exist as to what the stages of testing are and how much, if any, iteration occurs. Iteration is not generally part of the waterfall model, but some usually occurs at this stage. In testing, the components of the system are tested one by one.

The types of testing include:
  • Defect testing (testing the failed scenarios, including defect tracking)
  • Path testing
  • Data set testing
  • Unit testing
  • System testing
  • Integration testing
  • Black-box testing
  • White-box testing
  • Regression testing
  • Automation testing
  • User acceptance testing
  • Performance testing

Operations and maintenance

The deployment of the system includes changes and enhancements before the decommissioning or sunset of the system. Maintaining the system is an important aspect of the SDLC. As key personnel change positions in the organization, new changes will be implemented, which will require system updates.

Systems analysis and design
Systems Analysis and Design (SAD) is the process of developing Information Systems (IS) that effectively use hardware, software, data, processes, and people to support the company's business objectives.

Object-oriented analysis

Object-oriented analysis (OOA) is the process of analyzing a task (also known as a problem domain) to develop a conceptual model that can then be used to complete the task. A typical OOA model would describe computer software that could be used to satisfy a set of customer-defined requirements. During the analysis phase of problem-solving, a programmer might consider a written requirements statement, a formal vision document, or interviews with stakeholders or other interested parties. The task to be addressed might be divided into several subtasks (or domains), each representing a different business, technological, or other area of interest. Each subtask would be analyzed separately. Implementation constraints (e.g., concurrency, distribution, persistence, or how the system is to be built) are not considered during the analysis phase; rather, they are addressed during object-oriented design (OOD).

The conceptual model that results from OOA will typically consist of a set of use cases, one or more UML class diagrams, and a number of interaction diagrams. It may also include some kind of user interface mock-up.

Input (sources) for object-oriented design

The input for object-oriented design is provided by the output of object-oriented analysis. Realize that an output artifact does not need to be completely developed to serve as input to object-oriented design; analysis and design may occur in parallel, and in practice the results of one activity can feed the other in a short feedback cycle through an iterative process.
Both analysis and design can be performed incrementally, and the artifacts can be continuously grown instead of completely developed in one shot. Some typical input artifacts for object-oriented design are:
• Conceptual model: The result of object-oriented analysis; it captures concepts in the problem domain. The conceptual model is explicitly chosen to be independent of implementation details, such as concurrency or data storage.
• Use case: A description of sequences of events that, taken together, lead to a system doing something useful. Each use case provides one or more scenarios that convey how the system should interact with the users, called actors, to achieve a specific business goal or function. Use case actors may be end users or other systems. In many circumstances use cases are further elaborated into use case diagrams, which are used to identify the actors (users or other systems) and the processes they perform.
• System sequence diagram: A system sequence diagram (SSD) is a picture that shows, for a particular scenario of a use case, the events that external actors generate, their order, and possible inter-system events.
• User interface documentation (if applicable): A document that shows and describes the look and feel of the end product's user interface. It is not mandatory, but it helps to visualize the end product and therefore helps the designer.
• Relational data model (if applicable): A data model is an abstract model that describes how data is represented and used. If an object database is not used, the relational data model should usually be created before the design, since the strategy chosen for object-relational mapping is an output of the OO design process. However, it is possible to develop the relational data model and the object-oriented design artifacts in parallel, and the growth of one artifact can stimulate the refinement of the others.

Systems development life cycle

Management and control

SPIU phases related to management controls.[8]

The SDLC phases serve as a programmatic guide to project activity and provide a flexible but consistent way to conduct projects to a depth matching the scope of the project. Each of the SDLC phase objectives is described in this section with key deliverables, a description of recommended tasks, and a summary of related control objectives for effective management. It is critical for the project manager to establish and monitor control objectives during each SDLC phase while executing projects. Control objectives help to provide a clear statement of the desired result or purpose and should be
used throughout the entire SDLC process. Control objectives can be grouped into major categories (domains) that relate to the SDLC phases as shown in the figure.[8]

To manage and control any SDLC initiative, each project will be required to establish some degree of a work breakdown structure (WBS) to capture and schedule the work necessary to complete the project. The WBS and all programmatic material should be kept in the "project description" section of the project notebook. The WBS format is mostly left to the project manager to establish in a way that best describes the project work. There are some key areas that must be defined in the WBS as part of the SDLC policy. The following diagram describes three key areas that will be addressed in the WBS in a manner established by the project manager.[8]

Work breakdown structured organization

Work breakdown structure.[8]

The upper section of the work breakdown structure (WBS) should identify the major phases and milestones of the project in a summary fashion. In addition, the upper section should provide an overview of the full scope and timeline of the project and will be part of the initial project description effort leading to project approval. The middle section of the WBS is based on the seven systems development life cycle (SDLC) phases as a guide for WBS task development. The WBS elements should consist of milestones and "tasks", as opposed to "activities", and have a definitive period (usually two weeks or more). Each task must have a measurable output (e.g. a document, decision, or analysis). A WBS task may rely on one or more activities (e.g. software engineering, systems engineering) and may require close coordination with other tasks, either internal or external to the project. Any part of the project needing support from contractors should have a statement of work (SOW) written to include the appropriate tasks from the SDLC phases.
The development of a SOW does not occur during a specific phase of the SDLC but is developed to include the work from the SDLC process that may be conducted by external resources such as contractors.[8]

Baselines in the SDLC
Baselines are an important part of the systems development life cycle (SDLC). These baselines are established after four of the five phases of the SDLC and are critical to the iterative nature of the model.[9] Each baseline is considered a milestone in the SDLC.
• Functional baseline: established after the conceptual design phase.
• Allocated baseline: established after the preliminary design phase.
• Product baseline: established after the detail design and development phase.
• Updated product baseline: established after the production construction phase.

Complementary to SDLC

Complementary software development methods to the systems development life cycle (SDLC) are:
• Software prototyping
• Joint applications development (JAD)
• Rapid application development (RAD)
• Extreme programming (XP), an extension of earlier work in prototyping and RAD
• Open-source development
• End-user development
• Object-oriented programming

Comparison of methodology approaches (Post & Anderson 2006)[10]

                            SDLC         RAD      Open source  Objects     JAD      Prototyping  End User
Control                     Formal       MIS      Weak         Standards   Joint    User         User
Time frame                  Long         Short    Medium       Any         Medium   Short        Short
Users                       Many         Few      Few          Varies      Few      One or two   One
MIS staff                   Many         Few      Hundreds     Split       Few      One or two   None
Transaction/DSS             Transaction  Both     Both         Both        DSS      DSS          DSS
Interface                   Minimal      Minimal  Weak         Windows     Crucial  Crucial      Crucial
Documentation and training  Vital        Limited  Internal     In Objects  Limited  Weak         None
Integrity and security      Vital        Vital    Unknown      In Objects  Limited  Weak         Weak
Reusability                 Limited      Some     Maybe        Vital       Limited  Weak         None

Strengths and weaknesses

Few people in the modern computing world would use a strict waterfall model for their systems development life cycle (SDLC), as many modern methodologies have superseded this thinking. Some will argue that the SDLC no longer applies to models like Agile
computing, but it is still a term widely used in technology circles. The SDLC practice has advantages in traditional models of software development that lend themselves more to a structured environment. The disadvantage of the SDLC methodology arises when there is a need for iterative development (e.g. web development or e-commerce), where stakeholders need to review the software being designed on a regular basis. Instead of viewing the SDLC from a strength-or-weakness perspective, it is far more important to take the best practices from the SDLC model and apply them to whatever may be most appropriate for the software being designed.

A comparison of the strengths and weaknesses of the SDLC:[10]

Strengths                                Weaknesses
Control.                                 Increased development time.
Monitors large projects.                 Increased development cost.
Detailed steps.                          Systems must be defined up front.
Evaluates costs and completion targets.  Rigidity.
Documentation.                           Hard to estimate costs; project overruns.
Well-defined user input.                 User input is sometimes limited.
Ease of maintenance.
Development and design standards.
Tolerates changes in MIS staffing.

An alternative to the SDLC is rapid application development (RAD), which combines prototyping, joint application development, and implementation of CASE tools. The advantages of RAD are speed, reduced development cost, and active user involvement in the development process.
4. Define OOP. What are the applications of it?

Object-oriented programming (OOP) is a programming paradigm using "objects" – data structures consisting of data fields and methods, together with their interactions – to design applications and computer programs. Programming techniques may include features such as data abstraction, encapsulation, messaging, modularity, polymorphism, and inheritance. Many modern programming languages now support OOP, at least as an option.

Overview

Simple, non-OOP programs may be one "long" list of statements (or commands). More complex programs will often group smaller sections of these statements into functions or subroutines, each of which might perform a particular task. With designs of this sort, it is common for some of the program's data to be global, i.e. accessible from any part of the program. As programs grow in size, allowing any function to modify any piece of data means that bugs can have wide-reaching effects.

In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are either bundled in with the data or inherited from "class objects". These act as the intermediaries for retrieving or modifying the data they control. The programming construct that combines data with a set of methods for accessing and managing those data is called an object. The practice of using subroutines to examine or modify certain kinds of data, however, was also quite commonly used in non-OOP modular programming, well before the widespread use of object-oriented programming.

An object-oriented program will usually contain different types of objects, each type corresponding to a particular kind of complex data to be managed, or perhaps to a real-world object or concept such as a bank account, a hockey player, or a bulldozer.
A program might well contain multiple copies of each type of object, one for each of the real-world objects the program is dealing with. For instance, there could be one bank account object for each real-world account at a particular bank. Each copy of the bank account object would be alike in the methods it offers for manipulating or reading its data, but the data inside each object would differ, reflecting the different history of each account.

Objects can be thought of as wrapping their data within a set of functions designed to ensure that the data are used appropriately and to assist in that use. The object's methods will typically include checks and safeguards that are specific to the types of data the object contains. An object can also offer simple-to-use, standardized methods for performing particular operations on its data, while concealing the specifics of how those
tasks are accomplished. In this way, alterations can be made to the internal structure or methods of an object without requiring that the rest of the program be modified. This approach can also be used to offer standardized methods across different types of objects. As an example, several different types of objects might offer print methods. Each type of object might implement that print method in a different way, reflecting the different kinds of data each contains, but all the different print methods might be called in the same standardized manner from elsewhere in the program. These features become especially useful when more than one programmer is contributing code to a project or when the goal is to reuse code between projects.

Object-oriented programming has roots that can be traced to the 1960s. As hardware and software became increasingly complex, manageability often became a concern. Researchers studied ways to maintain software quality and developed object-oriented programming in part to address common problems by strongly emphasizing discrete, reusable units of programming logic. The technology focuses on data rather than processes, with programs composed of self-sufficient modules ("classes"), each instance of which ("objects") contains all the information needed to manipulate its own data structure ("members"). This is in contrast to the existing modular programming that had been dominant for many years, which focused on the function of a module rather than specifically the data, but equally provided for code reuse and self-sufficient reusable units of programming logic, enabling collaboration through the use of linked modules (subroutines). This more conventional approach, which still persists, tends to consider data and behavior separately.

An object-oriented program may thus be viewed as a collection of interacting objects, as opposed to the conventional model, in which a program is seen as a list of tasks (subroutines) to perform.
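The bank-account and print-method ideas above can be sketched in Python; the class and method names here are illustrative only, not taken from the original text:

```python
class BankAccount:
    """Encapsulates an account balance behind methods."""
    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance  # by convention, not touched directly by callers

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")  # safeguard on the data
        self._balance += amount

    def describe(self):  # the "print method" for this type
        return f"Account[{self.owner}]: balance={self._balance}"


class HockeyPlayer:
    def __init__(self, name, goals):
        self.name = name
        self.goals = goals

    def describe(self):  # same method name, different implementation
        return f"Player[{self.name}]: goals={self.goals}"


# Two copies of the same type hold different data; different types answer
# the same standardized call each in their own way.
for obj in [BankAccount("alice", 100), BankAccount("bob"), HockeyPlayer("carol", 7)]:
    print(obj.describe())
```

Each `BankAccount` copy offers identical methods but carries its own balance history, and the uniform `describe()` call works across unrelated types.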
In OOP, each object is capable of receiving messages, processing data, and sending messages to other objects. Each object can be viewed as an independent "machine" with a distinct role or responsibility. The actions (or "methods") on these objects are closely associated with the object. For example, OOP data structures tend to "carry their own operators around with them" (or at least "inherit" them from a similar object or class) – except when they have to be serialized.

History

The terms "objects" and "oriented" in something like the modern sense of object-oriented programming seem to make their first appearance at MIT in the late 1950s and early 1960s. In the environment of the artificial intelligence group, as early as 1960, "object" could refer to identified items (LISP atoms) with properties (attributes);[1][2] Alan Kay was later to cite a detailed understanding of LISP internals as a strong influence on his thinking in 1966.[3] Another early MIT example was Sketchpad, created by Ivan Sutherland in 1960–61; in the glossary of the 1963 technical report based on his dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction.[4] Also, an MIT ALGOL version, AED-0, linked data structures ("plexes", in
that dialect) directly with procedures, prefiguring what were later termed "messages", "methods", and "member functions".[5][6]

Objects as a formal concept in programming were introduced in the 1960s in Simula 67, a major revision of Simula I, a programming language designed for discrete event simulation, created by Ole-Johan Dahl and Kristen Nygaard of the Norwegian Computing Center in Oslo.[7] Simula 67 was influenced by SIMSCRIPT and C.A.R. "Tony" Hoare's proposed "record classes".[5][8] Simula introduced the notion of classes and instances or objects (as well as subclasses, virtual methods, coroutines, and discrete event simulation) as part of an explicit programming paradigm. The language also used automatic garbage collection, which had been invented earlier for the functional programming language Lisp. Simula was used for physical modeling, such as models to study and improve the movement of ships and their content through cargo ports. The ideas of Simula 67 influenced many later languages, including Smalltalk, derivatives of LISP (CLOS), Object Pascal, and C++.

The Smalltalk language, which was developed at Xerox PARC (by Alan Kay and others) in the 1970s, introduced the term object-oriented programming to represent the pervasive use of objects and messages as the basis for computation.
Smalltalk's creators were influenced by the ideas introduced in Simula 67, but Smalltalk was designed to be a fully dynamic system in which classes could be created and modified dynamically, rather than statically as in Simula 67.[9] Smalltalk, and with it OOP, was introduced to a wider audience by the August 1981 issue of Byte Magazine.

In the 1970s, Kay's Smalltalk work had influenced the Lisp community to incorporate object-based techniques that were introduced to developers via the Lisp machine. Experimentation with various extensions to Lisp (like LOOPS and Flavors, introducing multiple inheritance and mixins) eventually led to the Common Lisp Object System (CLOS, a part of the first standardized object-oriented programming language, ANSI Common Lisp), which integrates functional programming and object-oriented programming and allows extension via a meta-object protocol. In the 1980s, there were a few attempts to design processor architectures that included hardware support for objects in memory, but these were not successful. Examples include the Intel iAPX 432 and the Linn Smart Rekursiv.

Object-oriented programming developed as the dominant programming methodology in the early and mid 1990s, when programming languages supporting the techniques became widely available. These included Visual FoxPro 3.0,[10][11][12] C++, and Delphi. Its dominance was further enhanced by the rising popularity of graphical user interfaces, which rely heavily upon object-oriented programming techniques. An example of a closely related dynamic GUI library and OOP language can be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-oriented, dynamic messaging extension to C based on Smalltalk. OOP toolkits also enhanced the popularity of event-driven programming (although this concept is not limited to OOP). Some feel that association with GUIs (real or perceived) was what propelled OOP into the programming mainstream.
At ETH Zürich, Niklaus Wirth and his colleagues had also been investigating such topics as data abstraction and modular programming (although this had been in common use in the 1960s or earlier). Modula-2 (1978) included both, and their succeeding design, Oberon, included a distinctive approach to object orientation, classes, and so on. The approach is unlike Smalltalk, and very unlike C++.

Object-oriented features have been added to many existing languages during that time, including Ada, BASIC, Fortran, Pascal, and others. Adding these features to languages that were not initially designed for them often led to problems with compatibility and maintainability of code.

More recently, a number of languages have emerged that are primarily object-oriented yet compatible with procedural methodology, such as Python and Ruby. Probably the most commercially important recent object-oriented languages are Visual Basic.NET (VB.NET) and C#, both designed for Microsoft's .NET platform, and Java, developed by Sun Microsystems. Both frameworks show the benefit of using OOP by creating an abstraction from implementation in their own way. VB.NET and C# support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other language. Developers usually compile Java to bytecode, allowing Java to run on any operating system for which a Java virtual machine is available. VB.NET and C# make use of the Strategy pattern to accomplish cross-language inheritance, whereas Java makes use of the Adapter pattern.

Just as procedural programming led to refinements of techniques such as structured programming, modern object-oriented software design methods include refinements such as the use of design patterns, design by contract, and modeling languages (such as UML).

Fundamental features and concepts

See also: List of object-oriented programming terms

A survey by Deborah J.
Armstrong of nearly 40 years of computing literature identified a number of "quarks", or fundamental concepts, found in the strong majority of definitions of OOP.[13]

Not all of these concepts are to be found in all object-oriented programming languages. For example, object-oriented programming that uses classes is sometimes called class-based programming, while prototype-based programming does not typically use classes. As a result, a significantly different yet analogous terminology is used to define the concepts of object and instance.

Benjamin C. Pierce and some other researchers view as futile any attempt to distill OOP to a minimal set of features. He nonetheless identifies fundamental features that support the OOP programming style in most object-oriented languages:[14]
• Dynamic dispatch – when a method is invoked on an object, the object itself determines what code gets executed by looking up the method at run time in a table associated with the object. This feature distinguishes an object from an abstract data type (or module), which has a fixed (static) implementation of the operations for all instances. It is a programming methodology that gives modular component development while at the same time being very efficient.
• Encapsulation (or multi-methods, in which case the state is kept separate)
• Subtype polymorphism
• Object inheritance (or delegation)
• Open recursion – a special variable (syntactically it may be a keyword), usually called this or self, that allows a method body to invoke another method body of the same object. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof.

Similarly, in his 2003 book Concepts in Programming Languages, John C. Mitchell identifies four main features: dynamic dispatch, abstraction, subtype polymorphism, and inheritance.[15] Michael Lee Scott, in Programming Language Pragmatics, considers only encapsulation, inheritance, and dynamic dispatch.[16]

Additional concepts used in object-oriented programming include:
• Classes of objects
• Instances of classes
• Methods, which act on the attached objects
• Message passing
• Abstraction

Decoupling

Decoupling refers to careful controls that separate code modules from particular use cases, which increases code reusability. A common use of decoupling in OOP is to polymorphically decouple the encapsulation (see Bridge pattern and Adapter pattern) – for example, using a method interface which an encapsulated object must satisfy, as opposed to using the object's class.

Formal semantics

See also: Formal semantics of programming languages

There have been several attempts at formalizing the concepts used in object-oriented programming. The following concepts and constructs have been used as interpretations of
The following concepts and constructs have been used as interpretations ofOOP concepts: • coalgebraic data types [17]
• abstract data types (which have existential types), which allow the definition of modules but do not support dynamic dispatch
• recursive types
• encapsulated state
• inheritance
• records, which are a basis for understanding objects if function literals can be stored in fields (as in functional programming languages), though the actual calculi need to be considerably more complex to incorporate essential features of OOP. Several extensions of System F<: that deal with mutable objects have been studied;[18] these allow both subtype polymorphism and parametric polymorphism (generics).

Attempts to find a consensus definition or theory behind objects have not proven very successful (however, see Abadi & Cardelli, A Theory of Objects,[18] for formal definitions of many OOP concepts and constructs), and often diverge widely. For example, some definitions focus on mental activities, and some on program structuring. One of the simpler definitions is that OOP is the act of using "map" data structures or arrays that can contain functions and pointers to other maps, all with some syntactic and scoping sugar on top. Inheritance can be performed by cloning the maps (sometimes called "prototyping"). Objects are the run-time entities in an object-oriented system; they may represent a person, a place, a bank account, a table of data, or any item that the program has to handle.

OOP languages

See also: List of object-oriented programming languages

Simula (1967) is generally accepted as the first language to have the primary features of an object-oriented language. It was created for making simulation programs, in which what came to be called objects were the most important information representation. Smalltalk (1972 to 1980) is arguably the canonical example, and the one with which much of the theory of object-oriented programming was developed. Concerning the
Concerning thedegree of object orientation, following distinction can be made: • Languages called "pure" OO languages, because everything in them is treated consistently as an object, from primitives such as characters and punctuation, all the way up to whole classes, prototypes, blocks, modules, etc. They were designed specifically to facilitate, even enforce, OO methods. Examples: Eiffel, Emerald.[19], JADE, Obix, Scala, Smalltalk • Languages designed mainly for OO programming, but with some procedural elements. Examples: C++, Java, C#, VB.NET, Python. • Languages that are historically procedural languages, but have been extended with some OO features. Examples: Visual Basic (derived from BASIC), Fortran 2003, Perl, COBOL 2002, PHP, ABAP. • Languages with most of the features of objects (classes, methods, inheritance, reusability), but in a distinctly original form. Examples: Oberon (Oberon-1 or Oberon-2) and Common Lisp.
• Languages with abstract data type support, but not all features of object orientation, sometimes called object-based languages. Examples: Modula-2 (with excellent encapsulation and information hiding), Pliant, CLU.

OOP in dynamic languages

In recent years, object-oriented programming has become especially popular in dynamic programming languages. Python, Ruby, and Groovy are dynamic languages built on OOP principles, while Perl and PHP have been adding object-oriented features since Perl 5 and PHP 4, and ColdFusion since version 5.

The Document Object Model of HTML, XHTML, and XML documents on the Internet has bindings to the popular JavaScript/ECMAScript language. JavaScript is perhaps the best-known prototype-based programming language, which employs cloning from prototypes rather than inheriting from a class. Another scripting language that takes this approach is Lua. Earlier versions of ActionScript (a partial superset of ECMA-262 R3, otherwise known as ECMAScript) also used a prototype-based object model. Later versions of ActionScript incorporate a combination of classification and prototype-based object models, based largely on the currently incomplete ECMA-262 R4 specification, which has its roots in an early JavaScript 2 proposal. Microsoft's JScript.NET also includes a mash-up of object models based on the same proposal, and is also a superset of the ECMA-262 R3 specification.

Design patterns

Challenges of object-oriented design are addressed by several methodologies. Most common are the design patterns codified by Gamma et al. More broadly, the term "design patterns" can be used to refer to any general, repeatable solution to a commonly occurring problem in software design.
Some of these commonly occurring problems have implications and solutions particular to object-oriented development.

Inheritance and behavioral subtyping

See also: Object-oriented design

It is intuitive to assume that inheritance creates a semantic "is a" relationship, and thus to infer that objects instantiated from subclasses can always be safely used instead of those instantiated from the superclass. This intuition is unfortunately false in most OOP languages, in particular in all those that allow mutable objects. Subtype polymorphism as enforced by the type checker in OOP languages (with mutable objects) cannot guarantee behavioral subtyping in any context. Behavioral subtyping is undecidable in general, so it cannot be implemented by a program (compiler). Class or object hierarchies need to be carefully designed, considering possible incorrect uses that cannot be detected syntactically. This issue is known as the Liskov substitution principle.
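A classic illustration of this pitfall, not from the original text, is the mutable rectangle/square hierarchy, sketched here in Python:

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def set_width(self, w):
        self.width = w

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    """A square 'is a' rectangle, but mutability breaks substitutability."""
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, w):
        # A square must keep its sides equal, so this override also changes
        # the height -- behavior a caller written for Rectangle does not expect.
        self.width = w
        self.height = w


def stretch(rect):
    # Written against Rectangle's contract: widening should not change height.
    rect.set_width(rect.width * 2)
    return rect.area()

print(stretch(Rectangle(2, 3)))  # 12: height untouched, as expected
print(stretch(Square(2)))        # 16, not 8: the override violated the contract
```

The type checker accepts passing a `Square` wherever a `Rectangle` is expected, yet the behavior differs, which is exactly the gap between subtype polymorphism and behavioral subtyping described above.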
Gang of Four design patterns

Main article: Design pattern (computer science)

Design Patterns: Elements of Reusable Object-Oriented Software is an influential book published in 1995 by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, often referred to humorously as the "Gang of Four". Along with exploring the capabilities and pitfalls of object-oriented programming, it describes 23 common programming problems and patterns for solving them. As of April 2007, the book was in its 36th printing.

Object-orientation and databases

Main articles: Object-relational impedance mismatch, Object-relational mapping, and Object database

Both object-oriented programming and relational database management systems (RDBMSs) are extremely common in software today. Since relational databases don't store objects directly (though some RDBMSs have object-oriented features to approximate this), there is a general need to bridge the two worlds. The problem of bridging object-oriented programming accesses and data patterns with relational databases is known as the object-relational impedance mismatch.
There are a number of approaches to cope with this problem, but no general solution without downsides.[20] One of the most common approaches is object-relational mapping, as found in libraries like Java Data Objects and Ruby on Rails' ActiveRecord.

There are also object databases that can be used to replace RDBMSs, but these have not been as technically and commercially successful as RDBMSs.

Real-world modeling and relationships

OOP can be used to associate real-world objects and processes with digital counterparts. However, not everyone agrees that OOP facilitates direct real-world mapping (see the criticism section) or that real-world mapping is even a worthy goal; Bertrand Meyer argues in Object-Oriented Software Construction[21] that a program is not a model of the world but a model of some part of the world; "Reality is a cousin twice removed". At the same time, some principal limitations of OOP have been noted.[22] For example, the circle–ellipse problem is difficult to handle using OOP's concept of inheritance. However, Niklaus Wirth (who popularized the adage now known as Wirth's law: "Software is getting slower more rapidly than hardware becomes faster") said of OOP in his paper "Good Ideas through the Looking Glass": "This paradigm closely reflects the structure of systems in the real world, and it is therefore well suited to model complex systems with complex behaviours" (contrast the KISS principle).
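The object-relational mapping idea can be sketched minimally in Python; the table name, class, and helper below are hypothetical, hand-rolled for illustration rather than taken from any real ORM library:

```python
import sqlite3


class Account:
    """Plain object whose state mirrors one row of an 'accounts' table."""
    def __init__(self, id, owner, balance):
        self.id, self.owner, self.balance = id, owner, balance


def load_account(conn, account_id):
    # Hand-rolled mapping: one SELECT, one constructor call.
    row = conn.execute(
        "SELECT id, owner, balance FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
    return Account(*row) if row else None


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1, 'alice', 250.0)")

acct = load_account(conn, 1)
print(acct.owner, acct.balance)  # alice 250.0
```

Real ORM libraries automate this row-to-object translation, but the impedance mismatch (flat rows versus object graphs with inheritance and references) remains underneath.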
Steve Yegge and others have noted that natural languages lack the OOP approach of strictly prioritizing things (objects/nouns) before actions (methods/verbs).[23] This problem may cause OOP to suffer more convoluted solutions than procedural programming.[24]

OOP and control flow

OOP was developed to increase the reusability and maintainability of source code.[25] Transparent representation of the control flow had no priority and was meant to be handled by a compiler. With the increasing relevance of parallel hardware and multithreaded coding, developer-transparent control flow becomes more important, something hard to achieve with OOP.[26][27][28][29]

Responsibility- vs. data-driven design

Responsibility-driven design defines classes in terms of a contract; that is, a class should be defined around a responsibility and the information that it shares. Wirfs-Brock and Wilkerson contrast this with data-driven design, where classes are defined around the data structures that must be held. The authors hold that responsibility-driven design is preferable.
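The contrast can be sketched with a hypothetical bank-account example (the class names and rules are illustrative, not from Wirfs-Brock and Wilkerson's text):

```python
# Data-driven style: the class is defined around the data structure it holds;
# behaviour lives outside, and any caller can reach in and mutate the fields.
class AccountData:
    def __init__(self, balance):
        self.balance = balance

def withdraw_from(data, amount):
    if amount > data.balance:
        raise ValueError("insufficient funds")
    data.balance -= amount

# Responsibility-driven style: the class is defined around a contract --
# "I keep the balance valid" -- so the invariant lives with the data it guards.
class Account:
    def __init__(self, balance):
        self._balance = balance

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):
        return self._balance

a = Account(100)
a.withdraw(30)
print(a.balance)  # 70
```

In the data-driven version nothing stops a caller from setting `data.balance = -1` directly; in the responsibility-driven version the invariant is enforced by the only code that can change the balance.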
5. Explain the steps involved in the process of Software Project Management

Software project management is the art and science of planning and leading software projects.[1] It is a sub-discipline of project management in which software projects are planned, monitored and controlled.

History

Companies quickly understood the relative ease of use that software programming had over hardware circuitry, and the software industry grew very quickly in the 1970s and 1980s. To manage new development efforts, companies applied proven project management methods, but project schedules slipped during test runs, especially when confusion occurred in the gray zone between the user specifications and the delivered software. To avoid these problems, software project management methods focused on matching user requirements to delivered products, in a method now known as the waterfall model. Since then, analysis of software project management failures has shown that the following are the most common causes:[2]

1. Unrealistic or unarticulated project goals
2. Inaccurate estimates of needed resources
3. Badly defined system requirements
4. Poor reporting of the project's status
5. Unmanaged risks
6. Poor communication among customers, developers, and users
7. Use of immature technology
8. Inability to handle the project's complexity
9. Sloppy development practices
10. Poor project management
11. Stakeholder politics
12. Commercial pressures

The first three items in the list above show the difficulty of articulating the needs of the client in such a way that proper resources can deliver the proper project goals. Specific software project management tools are useful and often necessary, but the true art in software project management is applying the correct method and then using tools to support the method. Without a method, tools are worthless.
Since the 1960s, several proprietary software project management methods have been developed by software manufacturers for their own use, while computer consulting firms have also developed similar methods for their clients. Today, software project management methods are still
evolving, but the current trend leads away from the waterfall model to a more cyclic project delivery model that imitates a software release life cycle.

Software development process

A software development process is concerned primarily with the production aspect of software development, as opposed to the technical aspect, such as software tools. These processes exist primarily for supporting the management of software development, and are generally skewed toward addressing business concerns. Many software development processes can be run in a similar way to general project management processes. Examples are:

• Risk management is the process of measuring or assessing risk and then developing strategies to manage it. In general, the strategies employed include transferring the risk to another party, avoiding the risk, reducing the negative effect of the risk, and accepting some or all of the consequences of a particular risk. Risk management in software project management begins with the business case for starting the project, which includes a cost-benefit analysis as well as a list of fallback options for project failure, called a contingency plan.
  o A subset of risk management that is gaining more and more attention is opportunity management, which means the same thing, except that the potential risk outcome will have a positive, rather than a negative, impact. Though theoretically handled in the same way, using the term "opportunity" rather than the somewhat negative term "risk" helps to keep a team focused on possible positive outcomes of any given risk register in their projects, such as spin-off projects, windfalls, and free extra resources.
• Requirements management is the process of identifying, eliciting, documenting, analyzing, tracing, prioritizing and agreeing on requirements and then controlling change and communicating to relevant stakeholders.
Requirements management, which includes requirements analysis, is an important part of the software engineering process, whereby business analysts or software developers identify the needs or requirements of a client; having identified these requirements, they are then in a position to design a solution.
• Change management is the process of identifying, documenting, analyzing, prioritizing and agreeing on changes to scope (project management) and then controlling changes and communicating to relevant stakeholders. Change impact analysis of new or altered scope, which includes requirements analysis at the change level, is an important part of the software engineering process, whereby business analysts or software developers identify the altered needs or requirements of a client; having identified these requirements, they are then in a position to re-design or modify a solution. Theoretically, each change can impact the timeline and budget of a software project, and therefore by definition must include a risk-benefit analysis before approval.
• Software configuration management is the process of identifying and documenting the scope itself, which is the software product underway, including all sub-products and changes, and enabling communication of these to relevant
stakeholders. In general, the processes employed include version control, naming conventions (programming), and software archival agreements.
• Release management is the process of identifying, documenting, prioritizing and agreeing on releases of software and then controlling the release schedule and communicating to relevant stakeholders. Most software projects have access to three software environments to which software can be released: development, test, and production. In very large projects, where distributed teams need to integrate their work before release to users, there will often be more environments for testing, called unit testing, system testing, or integration testing, before release to user acceptance testing (UAT).
  o A subset of release management that is gaining more and more attention is data management, as obviously the users can only test based on data that they know, and "real" data is only in the software environment called "production". In order to test their work, programmers must therefore also often create "dummy data" or "data stubs". Traditionally, older versions of a production system were once used for this purpose, but as companies rely more and more on outside contributors for software development, company data may not be released to development teams. In complex environments, datasets may be created that are then migrated across test environments according to a test release schedule, much like the overall software release schedule.

Project planning, monitoring and control

The purpose of project planning is to identify the scope of the project, estimate the work involved, and create a project schedule. Project planning begins with requirements that define the software to be developed. The project plan is then developed to describe the tasks that will lead to completion.

The purpose of project monitoring and control is to keep the team and management up to date on the project's progress.
If the project deviates from the plan, then the project manager can take action to correct the problem. Project monitoring and control involves status meetings to gather status from the team. When changes need to be made, change control is used to keep the products up to date.

Issue

In computing, the term issue is a unit of work to accomplish an improvement in a system. An issue could be a bug, a requested feature, a task, missing documentation, and so forth. The word "issue" is popularly misused in lieu of "problem." This usage is probably related.[citation needed] For example, one project used to call its modified version of BugZilla "IssueZilla"; as of September 2010, it calls its system Issue Tracker.
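How an issue might be represented inside such a tracking system can be sketched as follows; the field names and severity ordering are illustrative assumptions (the severity names themselves are the common levels described in the next section), not the schema of any real tracker:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    # Ordered so issues can be sorted most-severe first.
    CRITICAL = 5
    HIGH = 4
    MEDIUM = 3
    LOW = 2
    COSMETIC = 1

@dataclass
class Issue:
    title: str
    severity: Severity
    resolved: bool = False

def triage(issues):
    # Unresolved issues, most severe first -- roughly the order in which
    # a QA analyst might assign them to developers.
    return sorted(
        (i for i in issues if not i.resolved),
        key=lambda i: i.severity,
        reverse=True,
    )

backlog = [
    Issue("typo on login page", Severity.COSMETIC),
    Issue("payment service crashes", Severity.CRITICAL),
    Issue("report misaligned", Severity.LOW, resolved=True),
]
print([i.title for i in triage(backlog)])
# ['payment service crashes', 'typo on login page']
```

Real trackers add many more fields (reporter, assignee, timestamps, comments), but the core idea is the same: a unit of work with a severity that drives prioritization.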
Problems occur from time to time, and fixing them in a timely fashion is essential to achieve correctness of a system and avoid delayed deliveries of products.

Severity levels

Issues are often categorized in terms of severity levels. Different companies have different definitions of severities, but some of the most common ones are:

Critical / High
  The bug or issue affects a crucial part of a system, and must be fixed in order for it to resume normal operation.
Medium
  The bug or issue affects a minor part of a system, but has some impact on its operation. This severity level is assigned when a non-central requirement of a system is affected.
Low
  The bug or issue affects a minor part of a system, and has very little impact on its operation. This severity level is assigned when a non-central requirement of a system (and one of lower importance) is affected.
Cosmetic
  The system works correctly, but the appearance does not match the expected one. For example: wrong colors, too much or too little spacing between contents, incorrect font sizes, typos, etc. This is the lowest-severity issue.

In many software companies, issues are often investigated by quality assurance analysts when they verify a system for correctness, and then assigned to the developer(s) responsible for resolving them. They can also be assigned by system users during the user acceptance testing (UAT) phase. Issues are commonly communicated using issue or defect tracking systems. In some other cases, emails or instant messengers are used.

Philosophy

As a sub-discipline of project management, some regard the management of software development as akin to the management of manufacturing, which can be performed by someone with management skills but no programming skills. John C.
Reynolds rebuts this view and argues that software development is entirely design work; he compares a manager who cannot program to the managing editor of a newspaper who cannot write.[3]

In software project management, the end users and developers need to know the length, duration and cost of the project. It is a process of managing, allocating and timing resources to develop computer software that meets requirements. It consists of eight tasks:
- Problem identification
- Problem definition
- Project planning
- Project organization
- Resource allocation
- Project scheduling
- Tracking, reporting and controlling
- Project termination

In problem identification and definition, decisions are made while approving, declining or prioritizing projects. In problem identification, the project is identified, defined and justified. In problem definition, the purpose of the project is clarified. The main product is the project proposal.

Project planning describes the series of actions or steps needed for the development of the work product. In project organization, the functions of the personnel are integrated; it is done in parallel with project planning.

In resource allocation, resources are allocated to a project so that its goals and objectives are achieved. In project scheduling, resources are allocated so that project objectives are achieved within a reasonable time span.

In tracking, reporting and controlling, the process determines whether project results are in accordance with project plans and performance specifications. In controlling, proper action is taken to correct unacceptable deviations.

In project termination, the final report is submitted or a release order is signed.
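The project-scheduling step can be sketched as a toy computation, assuming simple finish-to-start dependencies and durations in days; the task names and numbers below are illustrative, not taken from the text:

```python
# Earliest-finish scheduling: a task can start only after all of its
# prerequisites have finished. tasks maps name -> (duration_days, prereqs).
def earliest_finish(tasks):
    finish = {}

    def resolve(name):
        if name not in finish:
            duration, prereqs = tasks[name]
            # Start when the slowest prerequisite finishes (day 0 if none).
            start = max((resolve(p) for p in prereqs), default=0)
            finish[name] = start + duration
        return finish[name]

    for name in tasks:
        resolve(name)
    return finish

plan = {
    "define problem": (2, []),
    "plan project": (3, ["define problem"]),
    "allocate resources": (1, ["plan project"]),
    "build & track": (10, ["allocate resources"]),
    "terminate project": (1, ["build & track"]),
}
print(earliest_finish(plan)["terminate project"])  # 17
```

Real scheduling tools extend this idea with resource limits, slack, and critical-path analysis, but the underlying dependency calculation is the same.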