
Software engineering final


Software Engineering 2008

Introduction to Software Engineering

Software is (1) instructions (computer programs) that when executed provide desired features, function, and performance; (2) data structures that enable the programs to adequately manipulate information; and (3) documents that describe the operation and use of the programs.

Software Characteristics:
1. Software is developed or engineered; it is not manufactured in the classical sense.
2. Software does not wear out. The failure curve for hardware and other engineered products is bathtub shaped: a high "infant mortality" failure rate drops to a steady state, then rises again as components wear out. Software has no physical parts, so its idealized failure curve drops to a steady state and stays flat. In practice, each change (the introduction of new features) causes side effects that spike the failure rate; before the curve can settle back to its earlier level, the next change arrives. The actual curve therefore trends slowly upward over the life of the software: software does not wear out, but it does deteriorate.

[Figure: failure curve for hardware (bathtub: infant mortality, steady state, wear-out) versus failure curve for software (flat idealized curve; actual curve spikes at each change due to side effects and trends upward)]

3. Most software continues to be custom built.
Types of Software
1. System software is a collection of programs written to service other programs, e.g. compilers, editors and file management utilities.
2. Application software consists of standalone programs that solve a specific business need.
3. Engineering/scientific software.
4. Embedded software resides within a product or system and is used to implement and control features and functions for the end user and for the system itself.
5. Product-line software is designed to provide a specific capability for use by many different customers; product-line software can focus on a limited and esoteric marketplace.
6. Web applications can be little more than a set of linked hypertext files that present information using text and limited graphics.
7. Artificial intelligence software makes use of non-numerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis.
8. Legacy software is software that is still used by organizations despite the advent of newer technologies. These systems have evolved with changes in their environment and still serve their purpose well.

Nature of Software
1. Software is flexible, i.e. open to change.
2. Software is developed for long-term use.
3. Software is complex, as the problems to be solved are growing more complex.
4. Software involves communication with the machine.

What is good software?
Good software exhibits qualities that can be grouped into the following categories:
1. Quality of the product
2. Quality of the process
3. Quality in the context of the business environment (ROI, or Return On Investment)

Essential qualities of good software include correctness, reliability, robustness, performance, usability, verifiability, maintainability, repairability, evolvability, portability, understandability, interoperability, productivity, timeliness, visibility and reusability, to name a few.
Measurable Characteristics of Software
1. Functionality: the capability to provide functions which meet stated and implied needs (suitability) when the software is used, together with accuracy and security.
2. Reliability: the capability to maintain a specified level of performance. Performance is also measured against the requirements of the system.
3. Usability: the capability to be understood (understandability), learned (learnability) and used (operability).
4. Efficiency: the capability to provide appropriate performance relative to the amount of resources used.
5. Maintainability: the capability to be modified for the purposes of making corrections (changeability, testability), improvements, or adaptation (stability). Software is repairable if its defects can be corrected with a reasonable amount of work; it should not be cheaper to replace it than to repair it. Since software is malleable it is easy to modify, but after many modifications, further changes complicate the implementation and reduce evolvability. Proper design before changing the software is therefore important.
6. Portability: the capability to be adapted (adaptability) to different specified environments (installability) without applying actions or means other than those provided for this purpose in the product.
7. Robustness: software is robust if it behaves reasonably even in circumstances that were not anticipated in the requirements specification.
8. Scalability: software that runs at a small scale should also run at a large scale, and vice versa.
9. Correctness: an absolute quality. Any deviation from the requirements makes the software incorrect, regardless of how minor or serious the consequences of the deviation are.
10. Verifiability: exists if the software's properties can be verified easily (testability).
11. Reusability: may apply at different levels of granularity, such as the component level, the requirements specification level, or the software process level. Reusability characterizes the maturity of engineered products.
12. Interoperability: the ability of the system to coexist and cooperate with other systems. Reusability is enhanced by interoperability.
13. Productivity: the quality of the software production process, referring to its efficiency and performance. Modern software engineering tools and environments lead to increased productivity.
14. Timeliness: a process-related quality that refers to the ability to deliver a product on time. Due to a lack of standard principles, projects prepare improper schedules, which makes timeliness difficult to measure.

Software Crisis
Software prices have risen relative to hardware prices, and software remains difficult to alter, debug and enhance, largely because of a lack of adequate training in software engineering. Large software projects have failed and caused huge losses; such projects were called software runaways.

Reasons for the software crisis:
1. Lack of communication between developers and users
2. Increase in the size of software
3. Increase in the cost of developing software
4. Increased complexity of problems
5. Lack of understanding of the problem and its environment
6. Overly optimistic estimates
7. Difficulty of estimating time
8. Quality parameters not standardized
9. Maintenance problems with code

Software Engineering is the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines. It is a layered technology. The engineering approach must rest on an organizational commitment to quality. The foundation of software engineering is the process layer, which defines a framework for the effective delivery of software engineering technology. Methods provide the technical how-tos for building software. Software engineering tools provide automated or semi-automated support for the process and the methods.

Software engineering is about solving problems. It can be broken into analysis (of the problem) and synthesis (of the solution).

[Figure: computer science supplies theories and computer functions; the customer supplies the problem; software engineering supplies tools and techniques to solve the problem]
Evolution of Software Engineering

[Figure: technology plotted against time — from art (esoteric use of past experience), to craft (unorganized use of past experience), to engineering (systematic use of past experience and formulation of a scientific basis)]

Theory of software evolution
Observations made on projects over the years (Lehman's laws of software evolution) display the following laws:
• Law of continuing change: adaptability is required.
• Law of increasing complexity: work must be done to reduce it.
• Law of self-regulation: causes balanced and normal product and process improvements.
• Law of conservation of organizational stability.
• Law of conservation of familiarity: all those associated with an E-type system evolve with it.
• Law of continuing growth, so as to ensure user satisfaction.
• Law of declining quality, unless the system is maintained and adapted.
• Law of the feedback system, to achieve improvement.

Emergence of Software Engineering
Software engineering has emerged through a progression of design styles, from the exploratory style, through control flow oriented design, data structure oriented design and data flow oriented design, to object oriented design.

Comparing Software Engineering with Other Engineering Disciplines

Foundations
• Software engineering: based on computer science, information science, and discrete mathematics.
• Other engineering: based on science, mathematics, and empirical knowledge.

Cost
• Software engineering: compilers and computers are cheap, so software engineering and consulting are often more than half of the cost of a project. Minor software engineering cost overruns can adversely affect the total project cost.
• Other engineering: in some projects, construction and manufacturing costs can be high, so engineering may only be 15% of the cost of a project. Major engineering cost overruns may not affect the total project cost.

Replication
• Software engineering: replication (copying CDs or downloading files) is trivial. Most development effort goes into building new (unproven) designs, changing old designs, and adding features.
• Other engineering: radically new or one-of-a-kind systems can require significant development effort to create a new design or change an existing design. Other kinds of systems may require less development effort, but more attention to issues such as manufacturability.

Innovation
• Software engineering: software engineers often apply new and untested elements in software projects.
• Other engineering: engineers generally try to apply known and tested principles, and limit the use of untested innovations to only those necessary to create a product that meets its requirements.

Duration
• Software engineering: software engineers emphasize projects that will live for years or decades.
• Other engineering: some engineers solve long-range problems (bridges and dams) that endure for centuries.

Management status
• Software engineering: few software engineers manage anyone.
• Other engineering: engineers in some disciplines, such as civil engineering, manage construction, manufacturing, or maintenance crews.

Blame
• Software engineering: software engineers must blame themselves for project problems.
• Other engineering: engineers in some fields can often blame construction, manufacturing, or maintenance crews for project problems.

Practitioners
• Software engineering: 611,900 software engineers in the U.S.
• Other engineering: 1,157,020 total non-software engineers.

Age
• Software engineering: software engineering is about 50 years old.
• Other engineering: engineering as a whole is thousands of years old.

Title regulations
• Software engineering: software engineers are typically self-appointed. A computer science degree is common but not at all a formal requirement.
• Other engineering: in many jurisdictions it is illegal to call yourself an engineer without specific formal education and/or accreditation by governmental or engineering association bodies.

Analysis methodology
• Software engineering: methods for formally verifying correctness are developed in computer science, but they are rarely used by software engineers. The issue remains controversial.
• Other engineering: some engineering disciplines are based on a closed-system theory and can in theory prove the formal correctness of a design. In practice, a lack of computing power or input data can make such proofs of correctness intractable, leading many engineers to use a pragmatic mix of analytical approximations and empirical test data to ensure that a product will meet its requirements.

Synthesis methodology
• Software engineering: software engineering struggles to synthesize (build to order) a result according to requirements.
• Other engineering: engineers have nominally refined synthesis techniques over the ages to provide exactly this. However, this has not prevented some notable engineering failures, such as the collapse of Galloping Gertie (the original Tacoma Narrows Bridge), the sinking of the Titanic, and the Pentium FDIV bug. In addition, new technologies inevitably result in new challenges that cannot be met using existing techniques.

Research during projects
• Software engineering: software engineering is often busy researching the unknown (e.g. deriving an algorithm) right in the middle of a project.
• Other engineering: traditional engineering nominally separates these activities. A project is supposed to apply research results in known or new clever ways to build the desired result. However, ground-breaking engineering projects such as Project Apollo often include a lot of research into the unknown.

Codified best practice
• Software engineering: software engineering has only recently started to codify and teach best practice, in the form of design patterns.
• Other engineering: some engineering disciplines have thousands of years of best-practice experience handed from generation to generation via a field's literature, standards, rules and regulations. Newer disciplines such as electronic engineering and computer engineering have codified their own best practices as they have developed.

Software Engineering Challenges
1. Scale: rules for small scale do not apply at large scale.
2. Quality and productivity: both terms are vaguely defined. Productivity depends directly on the people doing the development, and quality comprises a large number of parameters.
3. Consistency and repeatability: methods for software development should be repeatable across projects, leading to consistency in the quality of the software produced.
4. Change: software should accommodate and embrace change, since as businesses change they require the software supporting them to change.
5. Heterogeneity: developing techniques for building software that can cope with heterogeneous platforms and execution environments.
6. Delivery: developing techniques that lead to faster delivery of software.
7. Trust: developing techniques that demonstrate that software can be trusted by its users.

Software Process
Software projects use a process to organize the execution of tasks so as to achieve the goals on the cost, schedule and quality fronts. A process model specifies a general process, usually as a set of stages into which a project should be divided, the order in which the stages should be executed, and any other constraints and conditions on the execution of stages. A project's process may be based on some process model. The process that deals with the technical and management issues of software development is called a software process.

The desired characteristics of a software process are:
1. Predictability: determines how accurately the outcome of following the process can be predicted. The fundamental basis for quality prediction is that the quality of the product is determined largely by the process followed in developing it. Effective management of quality control activities depends on the predictability of the process.
2. Support for testability and maintainability: one of the most important objectives of software development should be to reduce the maintenance effort, and the process used should ensure maintainability. Both testing and maintenance depend heavily on the quality of design and code, and these costs can be reduced considerably if the software is designed and coded to make testing and maintenance easier.
3. Support for change: software changes are driven by business needs, people's mindsets, and so on. Change is therefore pervasive, and a process that can handle change easily is desirable.
4. Early defect removal: we should attempt to detect errors that occur in a phase during that phase itself, to reduce the effort and cost of removing them. Error detection and correction should be a continuous process.
5. Process improvement and feedback: to satisfy the objectives of quality improvement and cost reduction, the software process must be improved.

Each step in a process:
1. Has an objective
2. Requires defined people and effort
3. Has inputs and produces specific outputs
4. Has entry criteria and exit criteria
5. Has a duration
6. Has constraints
7. Must produce information for management, so that corrective action may be taken (e.g. adding more resources)
8. Ends with a review to verify that the step was done correctly

Software Components
In engineering disciplines, products are always constructed from parts or components; similarly, software engineering is component based. Until the early 1990s, components were routines and libraries. With advances in programming languages came other mechanisms, such as generic constructs in languages like Ada and C++, and objects and frameworks in object-oriented languages. A framework is a collection of related classes designed to be used together in developing applications in a certain domain. The Standard Template Library in C++ is a fine-grained, code-level component, while Swing and JavaBeans, which provide classes and objects for a visual approach to software development, are medium-grained components. Large-grained components, such as database management systems, act as components in the architecture. A component is a software element that conforms to a component model and can be independently deployed and composed without modification, according to a composition standard. Components are standardized, independent, composable, deployable and documented. Examples of components are the Standard Template Library in C++, JavaBeans and Swing.

Software Reuse
Almost all artifacts associated with software development, including the project plan and the test plan, can be reused. Some prominent items that can be effectively reused are requirements specifications, designs, code, test cases and knowledge. Software reuse requires consideration of the following points:
• Component creation: it should be decided in advance what is to be created.
• Component indexing and storing: classification of reusable components is essential.
• Component search: components matching the requirements must be identified correctly.
• Component understanding: a proper understanding of the component is necessary before putting it to use.
• Component adaptation: customization should be possible.
• Repository maintenance: newly added components should be stored and remain traceable.
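The reuse activities above — indexing, storing, searching, understanding — can be illustrated with a minimal repository sketch. All names here (Component, Repository, the sample parsers) are invented for illustration; a real component repository would also handle versioning, adaptation and composition standards.

```python
# A minimal sketch of a reuse repository: components are indexed by
# keyword for classification, searched by matching required keywords,
# and documented to aid understanding. Illustrative only.

class Component:
    def __init__(self, name, keywords, doc):
        self.name = name
        self.keywords = set(keywords)   # index terms (indexing and storing)
        self.doc = doc                  # documentation (component understanding)

class Repository:
    def __init__(self):
        self._components = []

    def add(self, component):
        # Repository maintenance: newly added components are stored.
        self._components.append(component)

    def search(self, required_keywords):
        # Component search: return components whose index terms cover
        # every required keyword.
        required = set(required_keywords)
        return [c for c in self._components if required <= c.keywords]

repo = Repository()
repo.add(Component("csv_parser", ["parse", "csv"], "Parses CSV files."))
repo.add(Component("json_parser", ["parse", "json"], "Parses JSON files."))

matches = repo.search(["parse", "csv"])
print([c.name for c in matches])   # ['csv_parser']
```

Searching by keyword is the simplest classification scheme mentioned in the text; faceted or semantic classification would refine it.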
Increasing Reusability
• Name generalization decouples a component's names from any one application, making the component easier to identify and apply elsewhere.
• Operation generalization increases applicability.
• Exception generalization involves providing in advance for all exceptions that might occur.

Software Costs
• Software costs often dominate computer system costs. The cost of the software on a PC is often greater than the hardware cost.
• Software costs more to maintain than it does to develop. For systems with a long life, maintenance costs may be several times development costs.
• Software engineering is concerned with cost-effective software development.
• The cost of software includes the cost of developers, the hardware resources used for development, and the cost of the infrastructure required for such development.

The following points therefore become important considerations for software engineering:
• Attempt to estimate cost/effort
• Define a plan
• Prepare schedules
• Involve the user
• Identify stages
• Define milestones
• Hold reviews
• Define deliverables
• Take quality assurance steps (testing)

Job of a Software Developer
Every developer has to become accurate in three forms of communication:
1. Communication with the user. The user is not always well informed, and there is a gap between the user and the developer in terms of field of work. Getting the right information from the user is thus a critical task for the developer.
2. Communication with technical specialists. The jargon of a technical specialist can be difficult to understand, and the developer has to convey information to the specialist correctly, in a way the specialist can understand.
3. Communication with management. Management personnel have business knowledge and an understanding of the application of the software, but they are not well versed in the technicalities involved. Correctly judging the importance of the requirements stated by management is an important task for the developer.
Software Engineering Code of Ethics
The preamble of the code states that computers have a central and growing role in commerce, industry, government, medicine, education, entertainment, and society at large. Software engineers are those who contribute, by direct participation or by teaching, to the analysis, specification, design, development, certification, maintenance, and testing of software systems. Because of their roles in developing software systems, software engineers have significant opportunities to do good or cause harm. To ensure, as much as possible, that their efforts will be used for good, software engineers must commit themselves to making software engineering a beneficial and respected profession. Software engineers shall commit themselves to making the analysis, specification, design, development, testing and maintenance of software a beneficial and respected profession. In accordance with their commitment to the health, safety and welfare of the public, software engineers shall adhere to the following eight principles:
1. Public: software engineers shall act consistently with the public interest.
2. Client and employer: software engineers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest.
3. Product: software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
4. Judgment: software engineers shall maintain integrity and independence in their professional judgment.
5. Management: software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
6. Profession: software engineers shall advance the integrity and reputation of the profession, consistent with the public interest.
7. Colleagues: software engineers shall be fair to, and supportive of, their colleagues.
8. Self: software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.

Software Metrics
Software metrics are quantifiable measures that can be used in software engineering for:
1. Calculating the complexity of the software, which in turn helps in other ways
2. Cost estimation based on those calculations
3. Allocation of appropriate resources according to the complexity of the project
4. Correct scheduling of tasks according to that complexity
5. Evaluation of the quality and reliability of the system
6. Improving the quality of the system
7. Helping in the management of risks

Metrics help in the overall development of the system in a more organized and methodical way, and are important for managing a project successfully. There are three types of software metrics:
1. Product metrics measure the software itself.
2. Process metrics quantify characteristics of the process used.
3. Project metrics quantify the project as a whole, such as the amount of the project completed.

There are different scales for measuring metrics:
1. Nominal: categorizes items into groups; a qualitative rather than quantitative measure.
2. Ordinal: orders items, telling only their sequence.
3. Interval: differences between values are meaningful, supporting analyses such as regression.
4. Ratio: has a true zero, so exact quantitative differences can be stated, ordered and grouped.

Measures are of different types:
1. Size metrics
   a. Lines of code (LOC)
   b. Effective lines of code (ELOC/NLOC)
   c. Commented lines of code (CLOC)
   d. Functionality
   e. Effort
2. Function points
3. Complexity
   a. Big-'O' notation
   b. Algorithm/logic
4. Structure
   a. Cyclomatic complexity
5. Process measurement
   a. Time taken for process activities to be completed, e.g. calendar time or effort to complete an activity or process.
   b. Resources required for processes or activities, e.g. total effort in person-days.
   c. Number of occurrences of a particular event, e.g. number of defects discovered.
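Two of the measures listed above can be computed mechanically. The sketch below counts physical and effective lines of code, and approximates McCabe's cyclomatic complexity as decision points + 1 (equivalent to V(G) = E − N + 2 for a single-entry, single-exit routine). Counting decisions by keyword is a crude heuristic assumed here for illustration; a real tool would parse the code.

```python
# Rough LOC and cyclomatic-complexity measures for Python-like source.
# Keyword counting is a heuristic, not a parser.

def loc_metrics(source):
    """Return (total LOC, effective LOC): effective excludes blank and
    comment-only lines."""
    lines = source.splitlines()
    total = len(lines)
    effective = sum(1 for l in lines
                    if l.strip() and not l.strip().startswith("#"))
    return total, effective

def cyclomatic_complexity(source):
    """Approximate V(G) as number of decision points + 1."""
    decisions = ("if ", "elif ", "for ", "while ", "and ", "or ")
    count = sum(line.strip().count(d)
                for line in source.splitlines() for d in decisions)
    return count + 1

code = """# find maximum
def maximum(a, b):
    if a > b:
        return a
    return b
"""
print(loc_metrics(code))            # (5, 4)
print(cyclomatic_complexity(code))  # 2
```

A cyclomatic complexity of 2 matches the single `if` branch: two independent paths through the routine, and hence two test cases for full branch coverage.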
Software Development Life Cycle
The software life cycle encompasses all activities required to define, develop, test, deliver, operate and maintain a software product. Planning the software development process involves several important considerations, the first of which is to define a product life cycle model. The common software development life cycle activities are:
1. Feasibility: determine whether the software makes a significant contribution to the business. This includes market analysis of whether similar software is in demand in the market. The software is evaluated on the basis of cost, schedule and quality.
2. Requirements: determine requirements such as the functional properties desired by users; system requirements such as availability, performance and safety; the set of objectives the system should meet; and characteristics the system should not exhibit. Such requirements are obtained by various methods, including interviews, communication with stakeholders, scenario discussions, use cases and ethnography. The requirements are then verified and tested.
3. Project planning: cost analysis, scheduling and quality assurance plans are made.
4. Design: architectural design, interface design and detailed design.
5. Implementation: writing the code for the project as per the designs and the plan.
6. Testing: ensures accuracy and reliability.
7. Delivery: installation, training, help desk.
8. Maintenance, including software configuration management.

Another view of the SDLC emphasizes the milestones, documents and reviews throughout product development. Without them it is difficult for project managers to assess progress or anticipate problems; establishing milestones improves product visibility. The milestones for a project are:
1. Feasibility report
2. System definition, project plan
3. Software requirements specification, preliminary user manual
4. Architectural design document
5. Detailed design document
6. Software verification plan
7. Product schedule
8. Software test plan, acceptance test
9. Software quality assurance plan
10. User manual
11. Source code
12. Test results
13. Defect report
Every software engineering organization should describe a unique set of framework activities for the software process it adopts. Various models have been developed over the years to accommodate these activities.

Models for Software Development

Evolutionary models

1. Waterfall Model: suggests a systematic, sequential approach to software development that begins with customer specification of requirements. The principal stages of the model map onto the fundamental development activities:
a. Requirements analysis and definition
b. System and software design
c. Implementation and unit testing
d. Integration and system testing
e. Operation and maintenance
Verification at each stage ensures that the output is consistent with its input and with the overall requirements of the system. These outputs are often called work products, and include:
a. Requirements documents
b. Project plan
c. Design documents
d. Test plan and test reports
e. Final code
f. Software manuals (e.g. user, installation)

Advantages
1. Simple method with clear steps
2. Easy to administer, as it is systematic
3. Verification at each stage ensures early detection of errors and misunderstandings
4. Documentation helps in future maintenance and revisions

Disadvantages
1. Unchanging requirements are not realistic
2. Document-driven process
3. Adaptation to change is very slow

2. Incremental Model: the customer identifies the services to be provided. Delivery increments are then defined, with each increment providing a subset of the system's functionality. Once the system increments for the services have been identified, the requirements to be delivered in the first increment are defined in detail and that increment is developed.

[Figure: define outline requirements → assign requirements to increments → design system architecture → develop increment → validate increment → integrate increment → validate system → final system]

Advantages of the Incremental Model
1. Accelerated delivery of customer services
2. User engagement with the system

Disadvantages of the Incremental Model
1. Management problems: progress can be hard to judge and problems hard to find, because there is no documentation to demonstrate what has been done.
2. Contractual problems: the normal contract may include a specification; without a specification, different forms of contract have to be used.
3. Validation problems: without a specification, it is difficult to test the system.
4. Maintenance problems: continual change tends to corrupt the software structure, making it more expensive to change and evolve to meet new requirements.

3. Prototyping: first a working prototype of the software is developed instead of the actual software. The developers use this prototype to refine the requirements and prepare the final specification document. After finalization of the SRS document, the prototype is discarded and the actual system is developed using the waterfall approach.

[Figure: requirements → quick design/prototype → customer evaluation → refinement of requirements as per suggestions → design → implementation and unit testing → integration and system testing → operation and maintenance]

There are two types of prototype: the throw-away prototype, where the initial prototype is discarded after the requirements are defined, and the evolutionary prototype, where the system is built further on the prototype itself. A prototype is built for those requirements which are critical to the project and are not well understood. If many requirements need clarification, a working prototype is built for them. The importance or criticality of such requirements is also a significant factor in determining the need for a prototype. Prototyping becomes important where there is no model to follow, e.g. experimental projects being built for the first time.

Advantages of Prototyping
1. Users are actively involved in the development.
2. It provides a better system for users: users have a natural tendency to change their minds when specifying requirements, and this method of developing systems supports that tendency, improving usability.
3. Since a working model of the system is provided, users get a better understanding of the system being developed.
4. Errors can be detected much earlier, as the system is built side by side with its evaluation.
5. Quicker user feedback is available, leading to better solutions.
6. Improved design quality, because an operational design is created first.

Disadvantages
1. Leads to an implement-and-then-repair way of building systems.
2. In practice, this methodology may increase the complexity of the system, as the scope of the system may expand beyond the original plans.

4. Spiral Model: Boehm incorporated the 'project risk' factor into a life cycle model. Each phase is split roughly into four sections: planning, risk analysis, development and assessment. There are four phases: feasibility, requirements analysis, design, and coding and testing. The spiral development consists of four quadrants. The first step in each phase is to identify the objectives, alternatives and constraints of that phase. The next step is to identify the best alternative, by comparing all the alternatives and performing risk analysis. Once an alternative is chosen, prototypes are built using simulation
and benchmarks. Each phase is completed with a review by the people concerned with the project.

Specialized process models
1. Extreme programming includes creating a set of stories and assigning their priority; the commitment defines the order of development. The objects are organized for development. Design occurs both before and after programming, in the form of up-front design and refactoring. Programming happens as pair programming, where two people work together at one workstation. Writing test cases before starting the coding is a key feature of extreme programming. Acceptance testing is followed by system testing. The test cases are developed incrementally from scenarios, with user involvement through validation, and are supported by automated test tools.

[Figure: extreme programming delivery cycle — select user stories for this release → break down stories to tasks → plan release → develop/integrate/test software → release software → evaluate system]

2. Component-Based Development uses commercial off-the-shelf (COTS) software components.
3. The Unified Process is a use-case driven, architecture-centric, iterative and incremental software process.
4. The Dynamic Systems Development Method (DSDM) suggests an iterative software process. The feasibility study is followed by a business study that establishes the functional and information requirements for the application. The iteration steps include functional model iteration, which creates a prototype; new requirements might be generated by the user through the prototype. Subsequently or simultaneously, designs are built or rebuilt. Implementation iteration puts the latest software increments into place.
5. Feature-driven development emphasizes project management guidelines and techniques. The processes defined for such development are: develop an overall model, build a feature list, plan by feature, design by feature, and build by feature.
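The test-first practice of extreme programming can be shown in miniature: the test for a user story is written before the code that satisfies it. The story, function name and values below are invented for illustration, and `unittest` stands in for the automated test tools the text mentions.

```python
# Test-first in miniature: write the test, then the simplest code
# that makes it pass.

import unittest

# Step 1: write the test from the (hypothetical) story
# "order totals must exclude cancelled items" -- before order_total exists.
class OrderTotalTest(unittest.TestCase):
    def test_cancelled_items_excluded(self):
        items = [(100, False), (250, False), (999, True)]  # (price, cancelled)
        self.assertEqual(order_total(items), 350)

# Step 2: write the simplest code that makes the test pass.
def order_total(items):
    return sum(price for price, cancelled in items if not cancelled)

# Step 3: run the test; after each refactoring it is re-run as a
# regression check.
result = unittest.main(module=__name__, exit=False, argv=["xp-demo"]).result
print(result.wasSuccessful())
```

The failing-then-passing test doubles as executable documentation of the story, which is why XP pairs it with refactoring: any later design change must keep the test green.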
(Figure: A rapid application development environment — interface generator, office system, report generator, DB programming language, data management system.) 18
  • 19. Software Engineering 2008 Process choice The process used should depend on the type of product being developed: • For large systems, management is usually the principal problem, so you need a strictly managed process. • For smaller systems, more informality is possible. There is no uniformly applicable process that should be standardised within an organisation: • High costs may be incurred if you force an inappropriate process on a development team. • Inappropriate methods can also increase costs and lead to reduced quality. Process analysis and modelling When defining the set of process stages for a project, the following steps can be followed. 1. Study an existing process to understand its activities. Some of the process analysis techniques are: • Published process models and process standards: It is always best to start process analysis with an existing model, which people may then extend and change. • Questionnaires and interviews: These must be carefully designed; participants may tell you what they think you want to hear. • Ethnographic analysis: Involves assimilating process knowledge by observation. Best for in-depth analysis of process fragments rather than for whole-process understanding. 2. Produce an abstract model of the process. You should normally represent this graphically. Several different views (e.g. activities, deliverables, etc.) may be required. 3. Analyse the model to discover process problems. This involves discussing process activities with stakeholders and discovering problems and possible process changes. Process improvement 19
  • 20. Software Engineering 2008 Process improvement is about understanding existing processes and introducing process changes to improve product quality, reduce costs or accelerate schedules. Most process improvement work so far has focused on defect reduction. This reflects the increasing attention paid by industry to quality. However, other process attributes can also be the focus of improvement. Feedback is important for process improvement to initiate. Why is it difficult to improve a software process? 1. Not enough time: Developers, because of unrealistic schedules, are left with no time to explore problems of development and find solutions. 2. Lack of knowledge: Many developers are not aware of best practices. 3. Wrong motivation: The basic motivation should be to eradicate current difficulties and not just to achieve a higher CMM level. 4. Insufficient commitment: Often, once an improvement is identified, we start implementing it, but before the corrected state is achieved, developers lose hope of finding the correct solution and quit the improvement process. This should not happen. (Figure: Productivity over time — after process improvement begins, productivity dips along a learning curve before rising to an improved future state; do not quit during the dip.) Process improvement stages 20
  • 21. Software Engineering 2008 Process measurement: Attributes of the current process are measured. These provide a baseline for assessing improvements. Process analysis: The current process is assessed, and bottlenecks and weaknesses are identified. Process change: Changes to the process that have been identified during the analysis are introduced. Process change stages • Improvement identification: highlights the areas that require improvement. The assessment team seeks feedback from the project representatives to help ensure that they properly understand the issues. The assessment participants are briefed, as a group, about the objectives and activities of the assessment, together with senior management. The team then holds discussions with the leader of each selected project, functional area representatives (FARs) and selected software practitioners to clarify information provided in the maturity questionnaire. The team generates a preliminary set of findings. The findings are revised and presented to the assessment participants and senior site management as preliminary recommendations. • Improvement prioritization: involves identifying which recommendations are necessary and should be implemented. They are evaluated on the basis of quality impacts and cost benefits. • Process change introduction: is the first change management step initiated to implement the recommendation. It includes planning how to implement the change, getting acceptance from all users and management to introduce it, and defining the new changed process. • Process change training: is necessary to educate everyone about the new process and how it should be performed. • Change tuning: is performed finally to practically implement the change. Any problems associated with the new process are corrected. It takes some time to reach the correct practical solution. 21
  • 22. Software Engineering 2008 Software Requirement Requirements analysis in software engineering encompasses those tasks that go into determining the needs or conditions to be met by a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users. Requirements analysis is critical to the success of a development project. Requirements must be actionable, measurable, testable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Requirement process Need for Software Requirement Specification (SRS) The Software Requirement Specification describes what the proposed software should do without describing how the software will do it. 1. It establishes the basis for agreement between the client and the supplier on what the software product will do. 2. It provides a reference for validation of the final product. 3. A high quality SRS is a pre-requisite for a high quality product. 4. A high quality SRS reduces the development cost, as the cost of fixing a requirement error is high because all the phases have to be repeated. Requirement engineering tasks 1. Inception: Casual interaction to get the product request 2. Elicitation: Ask the customer, users and others about their requirements 1. Requirement discovery 2. Requirement classification and organization 3. Requirement prioritizing and negotiation 4. Requirement documentation 3. Requirement elaboration: refinement of user scenarios 22
  • 23. Software Engineering 2008 4. Negotiation 5. Specification in a written document 6. Validation These are discussed in detail in further sections. Requirement Discovery Requirements discovery is the process of gathering information about the proposed and existing systems and distilling the user and system requirements from this information. Before the system analyst moves on to the task of information collection, he must plan his strategy. The plan must include: • Serialized steps of information collection • A list of the information to be collected at each step • A list of sources for the information to be collected • The purpose of each piece of information • The method of collection for each piece of information Techniques for requirement discovery are discussed as follows: 1. Viewpoints: The requirements sources (stakeholders, domain, systems) can all be represented as system viewpoints, where each viewpoint presents a sub-set of the requirements for the system. There are three generic types of viewpoints: 1. Interactor viewpoints represent people or other systems that interact directly with the system. 2. Indirect viewpoints represent stakeholders who do not use the system themselves but who influence requirements. 3. Domain viewpoints represent domain characteristics and constraints that influence the system requirements. 2. Interviewing: Formal or informal interviews with system stakeholders are part of most requirements engineering processes. Interviews may be of two types: 1. Closed interviews, where the stakeholder answers a predefined set of questions 2. Open interviews, where there is no predefined agenda. For getting the maximum out of an interview, the selection of the right candidate is important. Before starting the interviews, the analyst must write down the hierarchy, or position order, of all.
The interviews must be conducted strictly in that order; otherwise the integration of information into the system model will become impossible, and some information may be lost or remain unused. It is hard to elicit domain knowledge during interviews for two reasons: 1. All application specialists use terminology and jargon that is specific to the domain. 2. Some domain knowledge may be so familiar to stakeholders that they might forget to mention it. Effective interviewers have two characteristics: 1. They are open-minded, avoid preconceived ideas about the requirements and are willing to listen to stakeholders. 23
  • 24. Software Engineering 2008 2. They prompt the interviewee to start discussions with a question, a requirements proposal, or by suggesting working together on a prototype. Interview Method Personal rapport is the soul of a successful interview. Interviews with different people have to be conducted depending on the level and content of the information sought. Before conducting an interview, the analyst must ascertain the status and role of the person in the information system. Accordingly, an ordered list of unambiguous questions must be framed for the interview. Each interview must start with simple and convenient questions to encourage the interviewee. After the statement of the interviewee is over, the analyst must summarize the statement back to him and get his confirmation. Questions can be closed (requiring a specific reply) or open (requiring the interviewee to speak his mind). Interviews with introvert people must start with closed questions, whereas those with extrovert persons must start with open questions. The transition from one type to the other must be gradual and smooth. 3. Scenarios: These start with an outline of the interaction and, during elicitation, details are added to create a complete description of that interaction. A scenario includes: 1. A description of what the system and users expect when the system starts 2. A description of the normal flow of events in the scenario 3. A description of what can go wrong and how this is handled 4. Information about other activities that might be going on at the same time 5. A description of the system state when the scenario finishes 4. Use-cases: These are scenario-based techniques which identify the individual interactions with the system. Actors in the process are represented as stick figures, and each class of interaction is represented as a named ellipse. A use case encapsulates a set of scenarios, and each scenario is a single thread through the use-case. 5.
Ethnography is an observational technique that can be used to understand social and organizational requirements. The following two types of requirements can be effectively discovered: 24
  • 25. Software Engineering 2008 1. Requirements that are derived from the way people actually work 2. Requirements that are derived from cooperation and awareness of other people’s activities Classification of Requirements When we talk about the detail of requirements in relation to the system, we find two levels. The first is user requirements, which are governed by the external users of the system such as client managers, system end-users, client engineers, system architects, etc. These briefly outline the system and are part of a high-level abstraction specifying the external behavior of the system. The second is system requirements, which specify the characteristics of the system and define what the system should do. These are defined by the system end-users, client engineers, system architects, software engineers, etc. Each of the above requirements can be further divided as follows: 1. A functional requirement describes an interaction between the system and its environment (environmental model). 2. A non-functional requirement describes a restriction on the system that limits our choices for constructing a solution to the problem. These may relate to emergent system properties such as reliability, response time, etc. Failing to meet non-functional requirements can render a system unusable (behavioral model). Non-functional requirements can be related to the product, the organization (cost, process, quality) or be general (such as legislation or ethics). These requirements are difficult to verify because they are not easily quantifiable. 3. Domain requirements are derived from similar systems already implemented. These may be functional or non-functional and are important, as they reflect fundamentals of the application domain. The third level is the interface specification, which defines the procedure interfaces (APIs), data structures and the representation of data structures. Types of requirements: 1. Physical environment under which the system should work. 2.
Interfaces required for interaction with the external environment. 3. User and human factors that affect the operation of the system. 4. Functionality desired of the system. 5. Documentation needs can be specified by the management; the time for such documentation can also be specified. 6. Data resources should be carefully defined for the system, as data is a precious resource. 7. Resources apart from the above should be mentioned if they are involved with the system. 8. Security requirements are very important for systems dealing with critical resources. 9. Quality assurance requirements, as stated by the user or the standards followed, should be mentioned in the requirements report to allow verification and inspection. 25
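Requirements of the kinds listed above are often captured as structured records so they can be classified, prioritised and traced later. A minimal sketch follows; the field names and the sample requirements are illustrative, not from any standard template:

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    FUNCTIONAL = "functional"
    NON_FUNCTIONAL = "non-functional"
    DOMAIN = "domain"

@dataclass
class Requirement:
    rid: str              # identifier, e.g. "R-01"
    description: str
    kind: Kind
    priority: int         # 1 = highest
    verifiable: bool = True

reqs = [
    Requirement("R-01", "System shall log every login attempt", Kind.FUNCTIONAL, 1),
    Requirement("R-02", "95% of queries answered within 2 seconds", Kind.NON_FUNCTIONAL, 2),
]

# Non-functional requirements are the hardest to verify, so list them
# separately for special review attention.
nfrs = [r for r in reqs if r.kind is Kind.NON_FUNCTIONAL]
```

Keeping requirements as data rather than prose makes the later prioritisation and traceability activities mechanical instead of manual.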
  • 26. Software Engineering 2008 Evaluation: The requirements that have been gathered need to be evaluated to resolve inconsistencies and to understand why each requirement has been stated. In this task the analyst needs to do the following: o For every requirement X, get answers to the question “Why do you need X?” o Convert any requirements stated as “how to” into the corresponding “what” is required o Capture the rationale to support future requirements o Perform risk assessment, feasibility and cost/benefit analysis considering the technical, cost and schedule concerns. The rationale behind the information gathered in the previous stages is examined to determine whether the true requirements are hidden in this rationale instead of being expressed explicitly. Internally and commercially available software products and components are evaluated for possible reuse. Prioritisation: The requirements are then prioritised based on cost, dependency and user needs. Knowing the rationale behind each requirement helps in deciding its priority. Consolidation is required to put together all the requirements in a way that can be analysed further. It comprises: o Filling in as many ‘to be determined’ issues as possible o Validating that requirements are in agreement with the originally stated goals o Removing inconsistencies o Resolving conflicts o Authorizing/verifying the move to the next step of development, i.e. detailed requirements analysis. Often group development techniques are used for consolidation because they remove the possibility of an individual’s interpretation of the requirements. Expressing Requirements The following representational techniques are used: 1. Static descriptions: A static description lists the system entities or objects, their attributes, and their relationships with each other. This view is static because it does not describe how relationships change with time.
Different ways to describe a system statically are: 1. Indirect reference to the problem and its solution 2. Recurrence relations, like the Fibonacci definition 3. Axiomatic definition, using axioms/theorems to specify basic system properties 4. Expression in a language (PDL) 2. Dynamic descriptions 1. Decision tables represent the actions to be taken when the system is in one of the illustrated states. 2. Functional descriptions and transition diagrams (automata) 3. Event tables represent system states and transitions in tabular form. 26
  • 27. Software Engineering 2008 4. Petri nets represent a system graphically by drawing a node for each state and an arrow to mark each transition. 3. System models are graphical representations that describe business processes and the problem to be solved. Context models: Architectural models describe the environment of the system. They are supplemented by process models. Behavioral models: These describe the overall behavior of the system. Examples are dataflow diagrams and state machine models. 1. Data flow models are used to show how data flows through a sequence of processing steps. They show a functional perspective where each transformation represents a single function or process. These are valuable because tracking and documenting how the data associated with a particular process moves through the system helps analysts understand what is going on. The development of models such as data flow models should be a top-down process. 2. State machine models describe how a system responds to internal or external events. This type of model is often used for modeling real-time systems, since these systems are often driven by stimuli from the system’s environment. (Figure: data flow diagrams.) 27
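A state machine model of the kind just described can be captured directly as a transition table: each entry maps a (state, event) pair to a next state. The vending-machine states and events below are invented for illustration:

```python
# Transition table: (current_state, event) -> next_state.
# Pairs not listed are invalid events and leave the state unchanged.
TRANSITIONS = {
    ("idle", "coin"): "ready",
    ("ready", "button"): "dispensing",
    ("ready", "refund"): "idle",
    ("dispensing", "done"): "idle",
}

def run(events, state="idle"):
    """Respond to a sequence of external events, ignoring invalid ones."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state
```

For example, `run(["coin", "button", "done"])` drives the machine through ready and dispensing back to idle, while an out-of-place event such as pressing the button with no coin inserted is simply ignored.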
  • 28. Software Engineering 2008 Data models: The most widely used technique is entity-relation-attribute modeling. Defining the logical form of the data processed by the system is called semantic data modeling. Like all graphical models, data models lack detail; one may collect detailed descriptions in a repository or data dictionary. The advantages of a data dictionary are: 1. It is a mechanism for name management. 2. It serves as a store for organizational information. Object models: Here system requirements are expressed using an object model, the design is built around objects, and the system is developed in an object-oriented programming language. Developing object models during requirements analysis eases the transition to object-oriented design and programming. 1. Inheritance models 2. Object aggregation 3. Object behavior modeling Critical system specification The high potential costs of system failure mean that the specification for critical systems must accurately reflect the real needs of the system's users. These are requirements that should be part of the requirements report. 1. Risk-driven specification is an approach that has been widely used by safety- and security-critical systems developers. It is applicable to any system where dependability is a critical attribute. 2. The safety specification process in its first stage defines the scope of the system, assesses the potential system hazards and estimates the risk they pose. This is followed by safety requirements specification and the allocation of these safety requirements to different sub-systems. These requirements are broken into functional safety requirements and safety integrity requirements. After delivery, the system must be installed as planned so that the hazard analysis remains valid. 3. Security specifications are based around the assets to be protected and their value.
The steps involved are asset identification and evaluation of the degree of protection, threat analysis and risk assessment, threat assignment to the assets, analysis of available technology and finally specification of security requirements. Different types of security 28
  • 29. Software Engineering 2008 specifications are authentication and authorization requirements, immunity requirements, intrusion detection requirements, non-repudiation requirements, privacy requirements, etc. Requirement Elaboration The information obtained from the customer during inception and elicitation is expanded and refined. Elaboration focuses on developing a refined technical model of software functions, features and constraints. The end result is an analysis model that defines the informational, functional and behavioral domain of the problem. Requirement Negotiation Different users may have conflicting requirements. To reach a consensus, the requirements are prioritized by all parties and the most important requirements become part of the document. The risk associated with each requirement is identified and evaluated. The finalized requirements are used to assess the approximate effort, time and cost for the project. Finally the baseline is defined. Requirements documentation 1. The requirement definition is a complete listing of everything the customers expect the system to do. It is written jointly by the customer and the developer. First we outline the general purpose of the system; next we describe the background and objectives of system development. We then outline a description of the solution. Next we describe the detailed characteristics of the proposed system. Finally we discuss the environment in which the system will operate. 2. The requirement specification restates the requirements definition in technical terms appropriate for the development of the system design. It is written by requirements specialists. The specification document may define the same requirements as the definition document, for example as a series of equations. Contents of an SRS 1. Introduction This chapter contains the purpose of the document, the scope of the document, an overview of the requirements and the context in which the document was prepared. 2.
General Description In this chapter, the proposed product is described in the context of existing systems, user characteristics, problems faced, objectives of the proposed system and any known constraints. 3. Functional Requirements 29
  • 30. Software Engineering 2008 This chapter lists all the functional requirements in decreasing order of importance. For each requirement, a description, its criticality, risks and dependencies on other requirements are documented. 4. Interface Requirements This chapter is used to describe the user interfaces, hardware interfaces, communications interfaces and interfaces to other software systems. 5. Performance Requirements Speed and throughput requirements are described in this chapter. 6. Design Constraints This chapter specifies the design constraints imposed on the design team and could cover standards to be complied with, hardware limitations, etc. 7. Other Non-Functional Attributes Aspects such as security, reliability, maintainability, etc. that are important to the project are described in this chapter. 8. Preliminary Domain Analysis This chapter contains the modelling of the proposed system. 9. Operational Scenarios This chapter contains the main scenarios (use cases) that will be experienced in the proposed system. 10. Schedule and Budgets This chapter can contain the estimates and a high-level project plan. Characteristics of a good Software Requirement Specification 1. Completeness can be achieved by ensuring the following: • Elicitation of the requirements from all the “stakeholders” • Focus on user tasks, problems, bottlenecks and improvements required, rather than on system functionality • Ranking or prioritising each requirement (functional as well as non-functional) for importance • Marking areas where requirements are not known as “To Be Determined” • Resolving all ‘to be determined’ requirements before the design phase. 2. Clarity (unambiguity) is addressed by the following: • Requirements should be reviewed by an independent entity to identify ambiguous use of natural language. • Specifications should be written in a requirement specification language.
• Requirements can also be expressed using requirements analysis and modelling techniques. 3. Correctness can be assumed if every requirement stated in the SRS is required in the proposed system. All stakeholders need to review the SRS and confirm that the SRS correctly reflects their needs. 4. Consistency can be maintained when any conflict between requirements within the SRS is identified and resolved. Logical conflicts should be identified and removed. 5. Modifiability can be increased by proper documentation. Certain practices that can lead to high modifiability are: • Minimal redundancy leads to lower inconsistencies when changes are incorporated. • Labelling helps easily identify changed requirements. 30
  • 31. Software Engineering 2008 6. Traceability is created by labelling all requirements and following them through the design phase, test phase and so on. Fine-grained, structured and precise statements are much preferable to large, narrative paragraphs. Traceability can be backward or forward. 7. Feasibility should be checked before including any requirement. 8. Testability (verifiability) should be enhanced by stating the requirements correctly. 9. Ranked for importance and/or stability, to ensure the most important ones are incorporated early and carefully. Requirement Validation A number of requirements validation techniques can be used in conjunction or individually: 1. Requirements review, where requirements are analyzed systematically by a team of reviewers. The development team should explain all requirements and their implications to the reviewing team in a formal review. In an informal review, the team interacts with stakeholders to verify requirements. In both cases the team reviews the requirements for consistency, verifiability, comprehensibility, traceability and adaptability. Conflicts, contradictions, errors and omissions are pointed out by the review team. 2. Prototyping helps to ensure that the correct requirements are found. A throwaway prototype, discarded once the requirements have been corrected, is prepared when requirements that are not well understood are significant to the system. Otherwise an exploratory prototype is built, which is then progressively enhanced. 3. Test case generation is a technique which prepares test cases for the requirements and ensures that they are correctly stated. 4. Validation can be done by using checks on: 1. Validity of requirements 2. Consistency, to avoid contradiction 3. Completeness, to include all constraints 4. Realism, to check whether it can really be implemented 5.
Verifiability, ensuring that test cases can be generated to check the requirements Common problems with Software Requirement Specification 1. Making bad assumptions due to improper or missing information 2. Writing implementation (HOW) instead of requirements (WHAT) 3. Using incorrect terms such as ‘will not be limited to’, ‘such as’, etc. 4. Using wrong language which is ambiguous 5. Unverifiable requirements, written using terms such as maximize, fast, sometimes, etc. 6. Missing requirements, which can be avoided using proper templates 7. Over-specifying, or stating unnecessary things 31
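Test case generation works only when a requirement is stated verifiably. A requirement such as "95% of responses complete within 2.0 seconds" translates directly into a check, whereas "the system should be fast" admits no such test. A minimal sketch, with sample measurements standing in for what a real test harness would collect:

```python
# Sample response times (seconds) standing in for measured values.
response_times = [0.4, 1.1, 0.9, 2.6, 0.7, 1.8, 0.5, 1.2, 0.8, 1.0]

def meets_requirement(times, limit=2.0, quantile=0.95):
    """Executable form of '95% of responses complete within 2.0 seconds'."""
    within = sum(1 for t in times if t <= limit)
    return within / len(times) >= quantile
```

With the sample data above, one response out of ten exceeds the limit, so only 90% fall within it and the check fails — exactly the kind of concrete verdict an unverifiable wording like "fast" could never produce.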
  • 32. Software Engineering 2008 Requirements management consists of: 1. Requirements change management, which involves systematic handling of changes to the agreed requirements (SRS) during the course of the project. Changes occur because there are changes in the business and technical environment, and the preferences and priorities of users change. Some requirements are enduring, i.e. relatively stable, while others are volatile and change over time. The change management process is as follows: (Figure: Change management process — initiation, impact analysis, evaluation (approved, partly approved, rejected, pending or sent for further impact analysis), planning, execution, closing.) 1. Initiation: A change request form (CRF) is filled in. It includes why the change is required, what benefits are expected from the change, or what difficulties will be faced without it. 2. Impact Analysis: The CRF is handed over to the Configuration Controller, who, in cooperation with others, evaluates the change from various perspectives, e.g. technical feasibility, additional resources, additional calendar time, impact on current activities, etc. 3. Evaluation: The change is presented in the periodic Change Control Board (CCB) meeting, which may accept, reject, partially approve, keep pending, or request further impact analysis for the proposed change. 32
  • 33. Software Engineering 2008 4. Planning: If the CRF is partially or fully approved, the Configuration Controller fills up the implementation plan for implementing the CRF. 5. Execution: Changes are made according to the implementation plan. 6. Closing: The CRF is closed after verifying that all changes that were required have been made. 2. Requirements traceability consists of maintaining traceability between the agreed (and changed) SRS and the other work products (like design documents and test plans) created downstream in the software life cycle. (Example traceability matrix: requirements R-01 to R-096 as rows, test cases TC01 to TC49 as columns, with an X marking each test case that verifies a requirement.) There are three types of traceability information that may be maintained. 1. Source traceability information links the requirements to the stakeholders who proposed the system. 2. Requirements traceability information links dependent requirements within the requirements document. This helps in assessing how many requirements are likely to be affected by a proposed change. 3. Design traceability information links the requirements to the design modules where these requirements are implemented. 3. Tools for requirement documentation are also available. A few are: a. PSL/PSA: Problem Statement Language specifies the requirements, which are converted into models by the Problem Statement Analyzer. b. RSL/REVS: Requirement Statement Language specifies the requirements, and the Requirement Engineering Validation System processes and analyzes RSL statements. REVS consists of a translator for RSL, a centralized database and a set of automated tools. c. Structured Analysis and Design Technique (SADT) creates a model that consists of a set of ordered SA diagrams. d. Structured System Analysis (SSA) includes data flow diagrams, data dictionaries, procedure logic representations and data store structuring techniques. Key enablers that drive the maturity of the requirements engineering process are: 33
  • 34. Software Engineering 2008 1. Senior management reviews requirements engineering for process compliance. 2. An independent Software Quality Assurance group reviews requirements engineering for process compliance. 3. The project manager monitors the progress of requirements engineering activities. 4. Requirements engineering responsibility is explicitly assigned in each project. 5. Tools required for facilitating requirements engineering activities are explicitly identified, made available and used. 6. Persons performing requirements engineering activities are trained/skilled in the application domain and the requirements engineering process. Other project members are provided appropriate orientation on the application domain and the requirements engineering process. 7. Tools and resources are available to maintain the requirements-related information and documents of past projects that may be re-used. Commandments for Requirements 1. Don’t ignore the other side (LISTEN) 2. Understand the domain (CONCEPTS) 3. Understand why they want it and how badly (UNDERSTAND THEM BETTER) 4. Don’t be in a hurry (IT IS THE MOST IMPORTANT STAGE) 5. Do not consider anything as obvious (DOCUMENT EVERYTHING) 6. Don’t sweep it under the carpet (DON’T IGNORE ISSUES) 7. Don’t say “Yes” when you may need to say “No” (PROCEDURES FOR CHANGE MANAGEMENT) 8. Work as a team 9. Don’t expect analysts to read your mind 10. Don’t just sign off without a rigorous SRS review Requirement metrics Requirements can be measured on various parameters. A few commonly used measures are: • Function point metrics, a calculation dependent on the characteristics of the requirements • Number of requirements can be used as a measure • Number of changes to requirements, measured during the iterations • Requirements size, which looks at the bigger picture of project size and complexity • Kinds of requirements are important to identify, because critical requirements are more difficult to implement.
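The requirements-to-test-case traceability matrix described in the previous section is easy to keep in machine-readable form, which makes both coverage gaps and change impact mechanical to compute. The identifiers below are illustrative:

```python
# Which test cases cover which requirement (cf. the traceability matrix).
coverage = {
    "R-01": {"TC01", "TC03", "TC49"},
    "R-02": {"TC02", "TC04"},
    "R-03": {"TC01", "TC02"},
    "R-04": set(),   # no covering test case yet: a traceability gap
}

# Requirements with no covering test case cannot be verified.
uncovered = sorted(rid for rid, tcs in coverage.items() if not tcs)

def impacted(test_case):
    """Requirements that must be re-verified if this test case changes."""
    return sorted(rid for rid, tcs in coverage.items() if test_case in tcs)
```

Here `uncovered` flags R-04 as unverifiable, and `impacted("TC01")` reports that a change to TC01 touches R-01 and R-03 — the same questions the matrix answers by inspection, answered automatically.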
Function point calculation

Value adjustment factors (Fi):
Factor                                     Value
Backup and recovery                          4
Data communications                          2
Distributed processing                       0
Performance critical                         4
Existing operating environment               3
On-line data entry                           4
Input transaction over multiple screens      5
ILFs updated online                          3
Information domain values complex            5
Internal processing complex                  5
Code designed for reuse                      4
Conversion/installation design               3
Multiple installations                       5
Application designed for change              5
Value adjustment factor = 0.65 + 0.01 × ΣFi = 0.65 + 0.01 × 52 = 1.17

Information domain value           Opt  Likely  Pess  Est. count  Weight  FP count
No. of external inputs              20    24     30       24         4       97
No. of external outputs             12    15     22       16         5       78
No. of external inquiries           16    22     28       22         4       88
No. of internal logical files        4     4      5        4        10       42
No. of external interface files      2     2      3        2         7       15
Count total                                                                 320

FP = count-total × [0.65 + 0.01 × ΣFi] = 320 × 1.17 ≈ 375
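The adjusted function-point computation above can be sketched in code. This is an illustrative sketch (the function and variable names are mine, not from the notes), using the counts, weights and adjustment factors of the worked example; note that 320 × 1.17 is 374.4, which the notes round to 375.

```python
# Sketch of the adjusted function-point (FP) calculation from the
# worked example above. Helper names are illustrative.

def expected_count(opt, likely, pess):
    # Three-point (beta) estimate used for each information-domain value.
    return (opt + 4 * likely + pess) / 6

rows = [
    # (opt, likely, pess, weight)   information-domain value
    (20, 24, 30, 4),    # external inputs
    (12, 15, 22, 5),    # external outputs
    (16, 22, 28, 4),    # external inquiries
    (4,  4,  5, 10),    # internal logical files
    (2,  2,  3, 7),     # external interface files
]

# Per-row FP counts are rounded, as in the table (97 + 78 + 88 + 42 + 15).
count_total = sum(round(expected_count(o, l, p) * w) for o, l, p, w in rows)

fi = [4, 2, 0, 4, 3, 4, 5, 3, 5, 5, 4, 3, 5, 5]   # the 14 Fi ratings (sum = 52)
vaf = 0.65 + 0.01 * sum(fi)                        # value adjustment factor

fp = count_total * vaf
print(count_total, round(vaf, 2), fp)              # 320 1.17 374.4 (~375)
```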
• 36. Software Engineering 2008
Software Design

Design modeling principles
1. Design should be traceable to the analysis model.
2. Always consider the architecture of the system to be built.
3. Design of data is as important as design of processing functions.
4. Interfaces (both internal and external) must be designed with care.
5. User interface design should be tuned to the needs of the end users.
6. Component-level design should be functionally independent.
7. Components should be loosely coupled to each other and to the external environment.
8. Design representations (models) should be easily understandable.
9. The design should be developed iteratively; with each iteration, the design should strive for greater simplicity.

Design process model: [figure showing the design process, from requirement specification through architectural design]

Architectural Design
It represents a critical link between the design and requirements engineering processes. It establishes a basic structural framework that identifies the major components of a system and the communications between these components.
1. Repository model: Large amounts of data are organized around a shared database or repository.
• 37. Software Engineering 2008
2. Call and return (client-server): The system is organized as a main program and services, and associated servers and clients that access and use the services.
3. The layered model: It organizes a system into layers, each of which provides a set of services.
4. Pipes and filters: The output of one component forms the input to the next.
5. Object-oriented architecture: Coordination is accomplished via message passing.

There are architectural models that are specific to a particular application domain. There are two types of domain-specific architectural models:
1. Generic models are abstractions from a number of real systems.
2. Reference models are the most abstract and describe a larger class of systems.

[figures: Repository model; Client-server architecture]

Documenting Architecture Design
A document describing the architecture should contain the following:
1. System and architecture context
2. Description of architecture views
   1. Element catalog describes the purpose of each element, its interfaces and their semantic information.
   2. Architecture rationale gives the reasons for selecting the different elements and composing them in the way it was done.
   3. Behavior description is added to aid understanding of the system execution.
• 38. Software Engineering 2008
   4. Other information includes a description of all those decisions that have not been taken during architecture creation.
3. Across-views documentation

Evaluating Architectures
The architectural tradeoff analysis method (ATAM) helps in identifying dependencies between competing properties and performs a tradeoff analysis. The basic ATAM has the following steps:
1. Collect scenarios
2. Collect requirements or constraints
3. Describe architectural views
4. Attribute-specific analysis
5. Identify sensitivities and tradeoffs: Sensitivity is the impact an element has on the attribute value, and tradeoff points are those elements that have sensitivity points for multiple attributes.

Design concepts

Abstraction
It is a tool that permits a designer to consider a component at an abstract level without worrying about the details of the component's implementation. It is an important concept for problem partitioning. During detailed design and implementation, it is essential to implement the modules so that the abstract specifications of each module are satisfied. There are two common abstraction mechanisms:
• Functional abstraction and
• Data abstraction

Modularity
A system is considered modular if it consists of discrete components, so that each component can be implemented separately and a change to one component has minimal impact on other components. Modularity helps in system debugging, system repair, and system building.

Cohesion
Cohesion of a module represents how tightly bound the internal elements of the module are to one another. There are several levels of cohesion:
1. Coincidental cohesion is the lowest and occurs when there is no meaningful relationship among the elements of a module.
2. Logical cohesion exists if there is some logical relationship between the elements of a module and the elements perform functions that fall in the same logical class.
3. Temporal cohesion is the same as logical cohesion except that the elements are also related in time and are executed together.
4. Procedural cohesion contains elements in the same procedural unit.
• 39. Software Engineering 2008
5. Communicational cohesion has elements related by reference to the same input or output data.
6. Sequential cohesion exists when the output of one element is the input to another.
7. Functional cohesion binds the elements performing a single function.

Coupling
Coupling between modules is the strength of interconnections between modules, or a measure of interdependence among modules. Coupling increases with the complexity and obscurity of the interface between modules. There are 8 types of coupling:
1. Content coupling occurs when one component modifies data that is internal to another component.
2. Common coupling occurs when a number of components all make use of a global variable.
3. Control coupling occurs when operation A() invokes operation B() and passes a control flag to B.
4. Stamp coupling occurs when class B is declared as a type for an argument of an operation of class A.
5. Data coupling occurs when operations pass long strings of data arguments.
6. Routine call coupling occurs when one operation invokes another.
7. Type use coupling occurs when component A imports or includes a package or the content of component B.
8. External coupling occurs when a component communicates or collaborates with infrastructure components.

Patterns are existing designs that are standardized and can be reused in the same form in other designs.

Software reuse
Almost all artefacts associated with software development, including the project plan and test plan, can be reused. Some prominent items that can be effectively reused are requirements specifications, designs, code, test cases, knowledge, etc. The software units that are reused may be of radically different sizes:
1. Application system reuse, i.e. the whole of an application system. It can be vertical reuse, i.e. reuse at all levels within a domain, or horizontal reuse, where a unit is used across domains.
2. Component reuse, from sub-systems down to single objects.
3. Generative reuse, where components are generated for a specific application domain.
4. Object and function reuse, e.g. library files.

Reuse can be black-box reuse, where the product is used as it is, or clear-box reuse, where the product is customized as per the user.

Benefits of software reuse
1. Increased dependability, as the reused software is tested and tried.
2. Reduced process risk, as the cost of a component is known, so there is less error in cost estimation.
3. Effective use of the specialists who develop these reusable components.
• 40. Software Engineering 2008
4. Standards compliance in reusable components.
5. Accelerated development through reduced production time.

Excessive reuse of components can cause design complexities, because of customisation or compromise in requirements.

Software reuse requires consideration of the following points:
• Component creation: It should be decided in advance what should be created.
• Component indexing and storing: Classification of reuse components is essential.
• Component search: The component matching the requirements has to be identified correctly.
• Component understanding: Proper understanding of the component is necessary before putting it to use.
• Component adaptation: Customization should be possible.
• Repository maintenance: New components added should be stored and should be traceable.

Increasing reusability
• Name generalization makes a component easier to identify across applications.
• Operation generalization increases applicability.
• Exception generalization involves providing in advance for all exceptions that might occur.

Component Based Software Engineering
[figure: CBSE process]
• 41. Software Engineering 2008
User Interface Design
To make visual interfaces more attractive, 14 guidelines are laid down. The important ones are:
1. Limit the number of colors employed and be conservative in how these are used.
2. Use color change to show a change in system status.
3. Use color coding to support the task users are trying to perform.
4. Use color coding in a thoughtful and consistent way.
5. Be careful about color pairings to avoid eye strain.

One should anticipate the background and experience of users when designing error messages. Error messages should be polite, concise, consistent and constructive. Factors to be considered while designing messages are context, experience, skill level, style and culture.

User Interface Design process
Sometimes the interface is prototyped separately, and at times it is generated iteratively. The three core activities are:
1. User analysis: the analysis of user activities.
2. User interface prototyping. There are three approaches for UI prototyping:
   1. The script-driven approach, where a script is associated with each element like a menu or button.
   2. Visual programming languages, which allow access to reusable objects to develop interfaces quickly.
   3. Internet-based prototyping using Java, which is fast to develop.
3. Interface evaluation: the process of assessing the usability of an interface and checking that it meets user requirements. Some techniques for UI evaluation are:
   1. Questionnaires to users for their views on the UI
   2. Observation of users at work
   3. Snapshots of typical system use
   4. Recording the most common errors and most used facilities
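The script-driven prototyping approach mentioned above can be made concrete with a toy sketch: each interface element (a button, a menu item) is bound to a small handler script. All element names and handlers here are invented for illustration; they are not from the notes.

```python
# Toy sketch of script-driven UI prototyping: each named interface
# element is associated with a handler "script" (illustrative only).

handlers = {}

def on(element):
    # Register a handler script for a named UI element.
    def register(fn):
        handlers[element] = fn
        return fn
    return register

@on("save_button")
def save_clicked():
    return "document saved"

@on("quit_menu_item")
def quit_selected():
    return "application closing"

def fire(element):
    # Simulate the user activating an element.
    return handlers[element]()

print(fire("save_button"))     # document saved
```

A real prototyping tool would attach such scripts to widgets in a GUI toolkit; the dictionary here only demonstrates the element-to-script association.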
• 42. Software Engineering 2008
User Interface Design issues
1. User interaction: Styles of interaction depend upon the type of application and user. A few styles are:
   1. Direct manipulation using pointing devices like the mouse
   2. Menu selection
   3. Form fill-in
   4. Command language
   5. Natural language
2. Information presentation: Information can be presented in the form of tables, pictures or graphs, but information should be kept separate from its presentation. Changing information can be shown graphically, and large amounts of information can be shown visually.

Mapping data flow into software architecture
Transform flow: Information enters the system along paths that transform external data into an internal form.
1. Review the fundamental system model
2. Review and refine data flow diagrams for the software
3. Determine whether the DFD has transform or transaction flow characteristics
4. Isolate the transform centre by specifying incoming and outgoing flow boundaries
5. Perform first-level factoring
6. Perform second-level factoring
7. Refine the first-iteration architecture using design heuristics for improved software quality
• 43. Software Engineering 2008
Transaction flow: Information flow is often characterized by a single data item, called a transaction, that triggers other data flow along one of many paths.
1. Review the fundamental system model
2. Review and refine data flow diagrams for the software
3. Determine whether the DFD has transform or transaction flow characteristics
4. Identify the transaction centre and the flow characteristics along each of the action paths
5. Map the DFD to a program structure amenable to transaction processing
6. Factor and refine the transaction structure and the structure of each action path
7. Refine the first-iteration architecture using design heuristics for improved software quality

Software Design Approaches
1. Function-oriented design
   1. The system is viewed as something that performs a set of functions. A function may consist of sub-functions, which may be split into more detailed sub-functions.
   2. The system state is centralized and shared among the different functions.
2. Object-oriented design
   1. The system is viewed as a collection of objects.
   2. The system state is decentralized among the objects, and each object manages its own state information.
   3. Objects have their own internal data which define their state. Similar objects constitute a class.
   4. Objects communicate by message passing.

Object-oriented vs function-oriented
Function-oriented:
• Basic abstractions are real-world functions, not real-world entities.
• State information is represented in a centralized shared memory.
• Functions are grouped together if they constitute a higher-level function; no class grouping is performed.
Object-oriented:
• Basic abstractions are real-world entities.
• State information is distributed among the objects of the system, and objects communicate with messages.
• Similar objects are grouped into classes.

Pure object-oriented development applies the object-oriented approach from requirements gathering through design and development.
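The contrast above between centralized and decentralized state can be shown with a toy counter implemented both ways. The example is mine, not from the notes.

```python
# Toy contrast between the two design approaches (illustrative only).

# Function-oriented: state is centralized and shared among functions.
system_state = {"count": 0}

def increment():
    system_state["count"] += 1

def read_count():
    return system_state["count"]

# Object-oriented: each object manages its own state and exposes
# operations (messages) on that state.
class Counter:
    def __init__(self):
        self._count = 0          # internal data defining the object's state

    def increment(self):
        self._count += 1

    def read(self):
        return self._count

increment()                      # mutates the shared state
c = Counter()
c.increment()
c.increment()                    # mutates only this object's state
print(read_count(), c.read())    # 1 2
```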
• 44. Software Engineering 2008
Object-oriented design concepts
1. Classes and objects are the basic building blocks of an OOD. Encapsulation is a property of objects by which an object encapsulates the data and information it contains and supports a well-defined abstraction.
2. Inheritance is a relation between classes that allows for the definition and implementation of one class based on the definition of existing classes. Inheritance can be strict, where all features of the superclass plus additional features exist in the subclass, or non-strict, where some features of the superclass do not exist in the subclass or are redefined. When a class inherits from more than one class, it is called multiple inheritance. Polymorphism deals with the ability of an object to be of different types.
3. Objects communicate by message passing, often implemented as method calls.
4. Abstraction is the elimination of the irrelevant and the amplification of the essential.
5. Coupling is of three types:
   1. Interaction coupling occurs due to methods of a class invoking methods of other classes. The worst form exists if methods directly access internal parts of other methods. Coupling is lower if methods of a class interact with methods of another class by directly manipulating instance variables of objects of that class, and least if methods communicate only through parameters.
   2. Component coupling refers to the interaction between two classes where one class has variables of the other class.
   3. Inheritance coupling is due to the inheritance relation between classes. The worst form is when a subclass modifies the signature of a method inherited from its superclass. Coupling is least when a subclass only adds instance variables and methods but does not modify any inherited ones.
6. Cohesion is of three types:
   1. Method cohesion focuses on why the different code elements of a method are together within the method.
   2. Class cohesion focuses on why different attributes and methods are together in a class.
   3. Inheritance cohesion focuses on why classes are together in a hierarchy. The two main reasons for inheritance are to model a generalization-specialization relationship and code reuse.
7. The open-closed principle states: software entities should be open for extension but closed for modification. The inheritance and polymorphism concepts allow extension of the behavior of existing classes without changing the original class.

Object-oriented analysis and design (OOAD)
The fundamental difference between object-oriented analysis (OOA) and object-oriented design (OOD) is that the former models the problem domain, leading to an understanding and specification of the problem, while the latter models the solution to the problem.

Detailed design
Design can be expressed in any of the following ways:
• 45. Software Engineering 2008
1. Process design language (PDL), e.g.:

   minmax(infile)
     ARRAY a
     DO UNTIL end of input
       READ an item into a
     ENDDO
     max, min := first item of a
     DO FOR each item in a
       IF max < item THEN set max to item
       IF min > item THEN set min to item
     ENDDO
   END

2. Logic/algorithmic design involves statement of the problem, development of a mathematical model, design of the algorithm, and verification of its correctness.
3. State modeling represents the logical states of an object, e.g. as a finite state model.

Software Design Strategies
• Top-down design
  Advantages:
  1. It has a strong focus on specific requirements, which helps make the design responsive to its requirements.
  Disadvantages:
  1. Component reuse is missed, as the system boundaries are specification-oriented.
  2. It misses the benefits of a well-structured, simple architecture.
• Bottom-up design
  Advantages:
  1. General solutions can be reused.
  2. It can hide low-level details of implementation.
  Disadvantages:
  1. It is not so closely related to the structure of the problem.
  2. It may not fit a given need.
  3. It is hard to construct and thus tends to be under-designed.
  4. It leads to a proliferation of potentially useful functions rather than the most appropriate ones.

Metrics for Software Design (detailed design)
1. Cyclomatic complexity for a graph G with n nodes, e edges and p connected components can be computed as:
   1. V(G) = e − n + 2p
   2. The number of regions of the flow graph
   3. The number of decisions + 1
2. Data binding captures the module-level concept of coupling. It is defined as a triplet (p, x, q) where p and q are modules and x is a variable within the static scope of both p and q. A used data binding exists when both p and q use the variable x for reference or assignment.
• 46. Software Engineering 2008
An actual data binding exists when module p assigns a value to x and q references x, so that information flow exists.
3. Cohesion metric: CM = Σ C(Ri), where C(Ri) = (|S| × dim(S)) / (|G| × dim(G)); |S| is the number of statements in the set S, |G| = (number of nodes − 1) for the reduced graph, and dim() is the dimension of a set of statements, i.e. the maximum number of linearly independent paths for it.

Metrics for software design (function-oriented)
1. Network metrics focus on the structure chart. Graph impurity increases with the number of interactions between modules: graph impurity = e − n + 1 (zero for a tree).
2. Stability metrics try to quantify the resistance of the design to the ripple effects caused by changes in modules. For a module x:
   Jx = {modules that invoke x}
   J'x = {modules invoked by x}
   Rxy = {parameters returned from x to y, y ∈ Jx}
   R'xy = {parameters passed from x to y, y ∈ J'x}
   GRx = {global data referenced in x}
   GDx = {global data defined in x}
   TPxy = total number of assumptions made by a module y about parameters in Rxy
   TP'xy = total number of assumptions made by a module y called by x about elements in R'xy
   TGx = total number of assumptions made by other modules about the elements in GDx
   Design logical ripple effect: DLREx = TGx + Σ TPxy + Σ TP'xy
   Design stability: DSx = 1 / (1 + DLREx)
   Program design stability: PDS = 1 / (1 + Σ DLREx)
3. Information flow metrics measure the amount of interaction with other modules:
   Complexity = fan-in × fan-out + inflow × outflow, where
   fan-in is the number of modules that call module x,
   fan-out is the number of modules called by module x,
   inflow is the amount of information sent to module x, and
   outflow is the amount of information sent out by module x.

Metrics for software design (object-oriented)
1. Weighted methods per class (WMC): WMC = Σ Ci, where Ci is the complexity of method Mi, and M1 … Mn are all the methods of the class. Complexity can be estimated by size, interface complexity or data flow complexity.
2. Depth of inheritance tree (DIT) of a class C is the length of the path from the root of the inheritance tree to the node representing C, i.e. the number of ancestors C has.
3. Number of children (NOC) is the number of immediate subclasses of C.
• 47. Software Engineering 2008
4. Coupling between classes (CBC) is the total number of other classes to which a class is coupled.
5. Response for a class (RFC) is the cardinality of the response set of the class. The response set of a class C is the set of all methods that can be invoked if a message is sent to an object of the class.
6. Lack of cohesion in methods (LCOM): LCOM = |P| − |Q| if |P| > |Q|, and 0 otherwise, where Q is the set of all cohesive pairs of methods and P is the set of non-cohesive pairs.

Design Verification methods
1. Design walkthrough is a manual method of verification, done in an informal meeting called by the designer or the leader of the designer's group. In a walkthrough the designer explains the logic step by step, and the members of the group ask questions, point out possible errors or seek clarification.
2. Critical design review ensures that the detailed design satisfies the specifications laid down during system design. The process is the same as inspection, with the aim of revealing design errors or undesirable properties. The use of checklists keeps the discussion focused on the search for errors. A sample checklist might include the following questions:
   1. Does each module in the system design exist in the detailed design?
   2. Are all assumptions explicitly stated?
   3. Are all exceptions handled?
   4. Is the design standardized?
   5. Is the module logic too complex?
   6. Are data structures properly created?
3. Consistency checkers are essentially compilers that take as input the design specification in a design language. A consistency checker can ensure that any module invoked or used by a given module actually exists in the design, and that the interface used by the caller is consistent with the interface definition of the called module.

The fourth generation technique (4GT) model encompasses a broad array of software tools that have one thing in common: each enables the software developer to specify some characteristic of the software at a high level. The tool then automatically generates source code based on the developer's specification. The 4GT paradigm for software engineering focuses on the ability to specify software to a machine at a level close to natural language, or in a notation that imparts significant function, but it tends to be used in a single, well-defined application domain. The 4GT approach also reuses certain elements, such as existing packages and databases, rather than reinventing them. Fourth generation techniques have already become an important part of software development; coupled with CASE tools, the 4GT paradigm may become a dominant approach to software development.

Fourth generation techniques:
1. Report generation
• 48. Software Engineering 2008
2. Database query languages
3. Data manipulation
4. Screen definition
5. Code generation
6. Connectivity and interfacing
7. Web designing tools
8. Graphics generation capability
9. CASE tools
10. Reuse of code

Software Implementation

Coding Process
The coding activity starts when some form of design has been done and the specifications of the modules to be developed are available. The programming team must have a designated leader, a well-defined organization structure and a thorough understanding of the duties and responsibilities of each team member. The implementation team must be provided with a well-defined set of software requirements, an architectural design specification, and a detailed design description. Also, each team member must understand the objectives of implementation. A famous experiment by Weinberg showed that if programmers are given a clear objective for the program, they usually satisfy it.

Incremental Coding Process
Code is written for implementing only part of the functionality of the module. This code is compiled and tested with some quick tests; when these pass, the developer proceeds to add further functionality. The advantage is easy identification of errors in the code, as small parts are tested at a time. Test scripts can be prepared and run each time new code is added, to ensure that the old functionality still works.

Other methods
• Test-driven development: The programmer first writes the test scripts and then writes the code to pass the tests. The whole process is done incrementally.
• Visual programming: Scripting languages such as Visual Basic support visual programming, where the prototype is developed by creating a user interface from standard items and associating components with these items. A large library of components exists to support this type of development. These may be tailored to suit the specific application requirements.
• Pair programming
• 49. Software Engineering 2008
More than one programmer implements the code, and thus algorithms are checked in a better way. In XP, programmers work in pairs, sitting together to develop code. This helps develop common ownership of code and spreads knowledge across the team. It serves as an informal review process, as each line of code is looked at by more than one person. It encourages refactoring, as the whole team can benefit from it. Measurements suggest that development productivity with pair programming is similar to that of two people working independently.

Refactoring
Refactoring is a technique to improve existing code and prevent the design decay that changes cause over time. It is a change made to the internal structure of software to make it easier to understand. Though refactoring is done on source code, its objective is to improve the design that the code implements. The main risk of refactoring is that existing code may break due to the changes. Two rules therefore apply:
1. Refactor in small steps.
2. Have test scripts available to test the existing functionality.
With refactoring, the code improves continuously; the design, rather than decaying, evolves and improves with time.

The need for refactoring arises when:
1. Duplicate code exists
2. Methods become long
3. Classes are large
4. Parameter lists are long
5. There are too many switch statements
6. There is too much communication between objects
7. One method calls another method which simply passes the call on (message chaining)

Common code refactorings
• Improving methods
  1. Extract functionally cohesive code from a large method.
  2. Add/remove parameters as per requirements.
• Improving classes
  1. Move methods to the classes where they are actually required.
  2. Move field.
  3. Extract classes from a large class.
  4. Replace data value with object if the data is frequently used in operations.
• Improving hierarchies
Replace conditional with polymorphism: At times, behaviour is selected by conditionals in the code; in an object-oriented design, these can be replaced by incorporating polymorphism.
2. Pull up field/method: If a field or method exists in most of the subclasses, it can be pulled up to the superclass.
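The "replace conditional with polymorphism" refactoring described above can be sketched as a before/after pair. The shapes example is a common illustration of this refactoring, not taken from the notes.

```python
# "Replace conditional with polymorphism", sketched on a shapes
# example (illustrative, not from the notes).
import math

# Before: behaviour is selected by a conditional on a type tag.
def area_before(kind, size):
    if kind == "square":
        return size * size
    elif kind == "circle":
        return math.pi * size * size

# After: each subclass supplies its own behaviour. The conditional is
# gone, and new shapes can be added without modifying existing code
# (the open-closed principle mentioned earlier).
class Shape:
    def area(self):
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

shapes = [Square(3), Circle(1)]
print([round(s.area(), 2) for s in shapes])   # [9, 3.14]
```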
• 50. Software Engineering 2008
Good programming practices
• Control constructs should have a single entry and a single exit
• Gotos should be minimized
• Information hiding should be practiced
• Nesting should not be deep
• Module size should not be large
• Do not leave catch blocks empty
• Give importance to exceptions
• Buffer overflows should be avoided
• Arithmetic exceptions should be provided for
• Memory leaks should be avoided

Verification Techniques
1. Code inspection: The inspection team includes the programmer, the designer and the tester. They are provided with the code to be reviewed and the design documents. In addition to defects, the inspection team also looks for quality issues like efficiency, compliance to coding standards, etc. Often the types of defects the inspection focuses on are included in the checklist. Inspection is done through code reading: it starts with reading the innermost structure and works outwards to the top.
2. Static analysis: Model checking (where a model is checked for desired properties), dynamic checking (where the input and output variables after execution of the program are compared to determine whether it executed correctly) and static analysis, where the program is methodically checked, generally with the aid of software tools. Two methods for finding errors are to look for unusual patterns and to look for errors directly.
3. Proving correctness, or the axiomatic approach:
   1. Basic assertion: P {S} Q
   2. Axiom of assignment: P[f/x] {x := f} P, where P[f/x] is P with every free occurrence of x replaced by f
   3. Rule of composition: if P {S1} Q and Q {S2} R, then P {S1; S2} R
   4. Rule of the alternative statement: if P∧B {S} Q and P∧¬B ⇒ Q, then P {if B then S} Q; if P∧B {S1} Q and P∧¬B {S2} Q, then P {if B then S1 else S2} Q
   5. Rules of consequence: if P {S} R and R ⇒ Q, then P {S} Q; if P ⇒ R and R {S} Q, then P {S} Q
   6. Rule of iteration: if P∧B {S} P, then P {while B do S} P∧¬B

Code metrics
1. Size measures: The non-comment, non-blank lines are counted. Halstead's measure or the COCOMO model can also be applied for size.
• 51. Software Engineering 2008
2. Complexity metrics
   1. Size is related to complexity: the bigger the program, the more complex it is to understand.
   2. Cyclomatic complexity.
   3. Halstead's measure of the ease of reading.
   4. Live variables: a variable is live from its first to its last reference within a module. The more live variables, the more complex the code. Another data-usage-oriented concept is the span: the number of statements between two successive uses of a variable. If a variable is referenced at n different places in a module, then for that variable there are (n − 1) spans.
3. Style metrics
   1. Module length
   2. Identifier length
   3. Comments
   4. Indentation
   5. Blank lines
   6. Line length
   7. Embedded spaces
   8. Constants definition
   9. Reserved words
   10. gotos
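The cyclomatic complexity listed among the complexity metrics above can be computed directly from a control-flow graph using McCabe's formula V(G) = e − n + 2p. The graph encoding and helper function below are my own illustrative sketch.

```python
# Computing McCabe's cyclomatic complexity V(G) = e - n + 2p from a
# control-flow graph given as an adjacency list (encoding is illustrative).

def cyclomatic_complexity(graph, components=1):
    n = len(graph)                                   # number of nodes
    e = sum(len(succs) for succs in graph.values())  # number of edges
    return e - n + 2 * components

# CFG of a small routine with one if-else and one while loop:
#   1 -> 2; 2 branches to 3 or 4; both rejoin at 5; 5 loops or exits to 6.
cfg = {
    1: [2],
    2: [3, 4],      # if-else: a decision node
    3: [5],
    4: [5],
    5: [5, 6],      # while loop: a decision node
    6: [],
}

print(cyclomatic_complexity(cfg))   # 3, i.e. number of decisions + 1
```

With two decision nodes, the result (3) agrees with the "decisions + 1" formulation given earlier.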
• 52. Software Engineering 2008
Software Testing
Testing is the activity where the errors remaining from all the previous phases must be detected; hence testing plays a critical role in ensuring quality. Verification and validation are of two types:
• Software inspections: concerned with the analysis of the static system representation to discover problems (static verification). They may be supplemented by tool-based document and code analysis.
• Software testing: concerned with exercising and observing product behaviour (dynamic verification). The system is executed with test data and its operational behaviour is observed.

[figure: static and dynamic testing]

Types of testing
• Defect testing
  - Tests designed to discover system defects.
  - A successful defect test is one which reveals the presence of defects in a system.
• Validation testing
  - Intended to show that the software meets its requirements.
  - A successful test is one that shows that a requirement has been properly implemented.

Statistical use testing is a part of validation testing which amounts to testing software the way users intend to use it.
• 53. Software Engineering 2008
Procedure:
• The specification for each increment is analyzed to define a set of stimuli (inputs or events) that cause the software to change its behavior.
• Test cases are generated for each set of stimuli according to the usage probability distribution.

Software testability is simply how easily a computer program can be tested. The following characteristics lead to testable software:
1. Operability: The better it works (high quality), the more efficiently it can be tested (fewer bugs).
2. Observability: What you see is what you get.
3. Decomposability: By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting.
4. Simplicity: The less there is to test, the more quickly we can test it.
5. Stability: The fewer the changes, the fewer the disruptions to testing.
6. Understandability: The more information we have, the smarter we will test.

Testing and debugging
• Defect testing and debugging are distinct processes.
• Verification and validation are concerned with establishing the existence of defects in a program.
• Debugging is concerned with locating and repairing these errors.
• Debugging involves formulating hypotheses about program behaviour and then testing these hypotheses to find the system error.

The debugging process
There are several methods of debugging errors:
1. Brute force: Memory dumps are taken, run-time traces are invoked and the program is loaded with output statements. All this information is searched to find a clue to the cause of an error.
2. Backtracking: The source is traced backwards until the site of the cause is found.
3. Cause elimination: Introduces the concept of binary partitioning. Data related to the error occurrence is organized to isolate potential causes.
  • 54. Software Engineering 2008 4. Program slicing is similar to backtracking. However, the search space is reduced by defining slices.
Debugging guidelines:
1. A thorough understanding of the program design helps in debugging.
2. Sometimes debugging might require a complete redesign of the system. In such cases merely correcting symptoms would not serve the purpose.
3. A correction can cause further errors. Thus regression testing is required.
Defect tracking helps in defect analysis and thus prevention. One of the methods of defect analysis is Pareto analysis. Pareto analysis is also known as the 80-20 rule, which states that 80% of the problems come from 20% of the root causes. The first step is to draw a Pareto chart from the defect data. The next step is to create a CE (cause-effect) diagram to determine the causes of the observed effects. The main steps for drawing such a diagram are as follows:
1. Clearly define the problem (i.e. the effect) that is to be studied
2. Draw an arrow from left to right with a box containing the effect drawn at the head. This is the backbone of the diagram.
3. Decide the major categories of causes. These could be the standard categories or some variation of them to suit the problem.
4. Write these major categories in boxes and connect them with diagonal arrows to the backbone.
5. Brainstorm for the sub-causes to these major causes by asking repeatedly, for each major cause, the question, “Why does this major cause produce the effect?”
6. Add the sub-causes to the diagram clustered around the bone of the major cause. Further sub-divide these causes, and stop only when no worthwhile answer to the question can be found. 54
  • 55. Software Engineering 2008 (Cause-effect (fishbone) diagram: major cause categories such as Process, Technology, Training and People attached to the backbone, with sub-causes such as unclear/incorrect standards and checklists, specifications not documented well, logic/UI/oversight defects, lack of technical skills, lack of training, and standards not followed.)
Lastly, develop and implement the solutions found for the problems analyzed.
Software inspections
• These involve people examining the source representation with the aim of discovering anomalies and defects.
• Inspections do not require execution of a system, so they may be used before implementation.
• They may be applied to any representation of the system (requirements, design, configuration data, test data, etc.).
• They have been shown to be an effective technique for discovering program errors.
Inspection success
• Many different defects may be discovered in a single inspection. In testing, one defect may mask another, so several executions are required.
• Reviewers reuse domain and programming knowledge, so they are likely to have seen the types of error that commonly arise.
The inspection process 55
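The Pareto (80-20) analysis described earlier can be sketched in a few lines. This is an illustrative sketch, not a prescribed tool: the defect categories, counts, and the `pareto_top_causes` helper are all invented for the example.

```python
from collections import Counter

def pareto_top_causes(defects, threshold=0.8):
    """Return the smallest set of causes accounting for ~`threshold`
    (e.g. 80%) of all logged defects, in descending frequency order."""
    counts = Counter(defects)
    total = sum(counts.values())
    top, covered = [], 0
    for cause, n in counts.most_common():
        top.append((cause, n))
        covered += n
        if covered / total >= threshold:
            break
    return top

# Invented defect log: each entry is the root cause of one logged defect
log = (["unclear spec"] * 40 + ["logic error"] * 25 +
       ["UI mistake"] * 20 + ["missing training"] * 10 + ["tool fault"] * 5)
print(pareto_top_causes(log))
# → [('unclear spec', 40), ('logic error', 25), ('UI mistake', 20)]
```

With these invented counts, three of the five causes already cover 85% of the defects, which is exactly the kind of concentration Pareto analysis looks for.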
  • 56. Software Engineering 2008 Inspection procedure • System overview presented to inspection team. • Code and associated documents are distributed to inspection team in advance. • Inspection takes place and discovered errors are noted. • Modifications are made to repair discovered errors. • Re-inspection may or may not be required. Inspection checklists • Checklist of common errors should be used to drive the inspection. • Error checklists are programming language dependent and reflect the characteristic errors that are likely to arise in the language. • In general, the 'weaker' the type checking, the larger the checklist. • Examples: Initialisation, Constant naming, loop termination, array bounds, etc. Comparing Inspections and testing • Inspections and testing are complementary and not opposing verification techniques. • Both should be used during the V & V process. • Inspections can check conformance with a specification but not conformance with the customer’s real requirements. • Inspections cannot check non-functional characteristics such as performance, usability, etc. What is automated static analysis? Static analysers are software tools for source text processing. They parse the program text and try to discover potentially erroneous conditions and bring these to the attention of the V & V team. They are very effective as an aid to inspections - they are a supplement to but not a replacement for inspections. Stages of static analysis • Control flow analysis. Checks for loops with multiple exit or entry points, finds unreachable code, etc. • Data use analysis. Detects uninitialized variables, variables written twice without an intervening assignment, variables which are declared but never used, etc. • Interface analysis. Checks the consistency of routine and procedure declarations and their use • Information flow analysis. Identifies the dependencies of output variables. 
Does not detect anomalies itself but highlights information for code inspection or review • Path analysis. Identifies paths through the program and sets out the statements executed in that path. Again, potentially useful in the review process • Both these stages generate vast amounts of information. They must be used with care. Use of static analysis 56
  • 57. Software Engineering 2008 They are particularly valuable when a language such as C is used which has weak typing and hence many errors are undetected by the compiler. They are less cost-effective for languages like Java that have strong type checking and can therefore detect many errors during compilation.
Verification and formal methods
Formal methods can be used when a mathematical specification of the system is produced. They are the ultimate static verification technique. They involve detailed mathematical analysis of the specification and may develop formal arguments that a program conforms to its mathematical specification.
Dynamic Testing
During dynamic testing, the software to be tested is executed with a set of test cases, and the behavior of the system for the test cases is evaluated to determine if the system is performing as expected. The behavior is evaluated using test oracles, which are considered to have the correct answer. These can be humans who decide the correct behavior of the system, or the correct behavior can be generated automatically from requirement specifications. Test cases are given to the test oracle (if it is a human) and to the program under test. The outputs of the two are compared to find whether the program behaved correctly for the test cases.
The V-model of development
Psychology of testing
The basic purpose of testing is to detect the errors that may be present in the program. Hence the intent should be to show that a program does not work, i.e. the intent of finding errors. That is why most organizations require a product to be tested by people not involved with developing the program before finally delivering it to the customer. Also, at times errors occur 57
  • 58. Software Engineering 2008 because the programmer did not understand the specification correctly. Thus testing by an external person helps in resolving such errors. This approach is suitable for the earlier stages, where the objective is to reveal errors. However, the last stages are meant for evaluating the product. Here test cases are selected to mimic user scenarios.
Objectives of testing
1. Software quality improvement. Quality means conformance to the specified software design requirements.
2. Verification and validation: To ensure conformity with requirements.
3. Software reliability estimation
Test design
Test design refers to understanding the sources of test cases, test coverage, how to develop and document test cases, and how to build and maintain test data. Two methods of test design are:
1. Black-box testing
2. White-box testing
Black box testing
In black box testing, the tester only knows the inputs that can be given to the system and what output the system should give. Also known as functional or behavioral testing; in black-box testing, the basis for deciding test cases is the requirements or specifications of the system or module. To decide the test cases we use the following methods:
1. Equivalence class partitioning: The input domain is partitioned into equivalence classes, and test cases are selected from each class so partitioned. An equivalence class is formed of the inputs for which the behavior is expected to be similar. Each group of inputs for which the behavior is expected to be different from others is considered a separate equivalence class. Equivalence classes are usually formed by considering each condition specified on an input as specifying a valid equivalence class and one or more invalid equivalence classes. Another approach is to consider any special case where behavior could be different. 2. 
Graph based testing methods, or cause-effect testing, is a technique that aids in selecting combinations of input conditions in a systematic way, such that the number of test cases does not become unmanageably large; without such systematic selection, n input conditions could require up to 2^n test cases. A cause is a distinct input condition, and an effect is a distinct output condition. The conditions are set such that they can be either true or false. A test case is generated for each combination of conditions which makes some effect true.
3. Boundary value analysis selects seven values for each boundary variable, the seven values being min-1, min, min+1, max-1, max, max+1 and a nominal value. Thus for n variables there would be 7n test cases.
4. Specialized testing, where special cases are provided for. It also includes testing of documentation, real-time systems, inter-task testing, etc.
Advantages
1. Tester and programmer are independent 58
  • 59. Software Engineering 2008 2. Testing from the user’s point of view
3. Test cases can be prepared right after specifications are decided.
Disadvantages
i. Duplication of test cases by tester and programmer
ii. Only sample testing can be done, as all cases cannot be built and tested.
White-box testing
White box testing, also known as structural testing, is concerned with the implementation of the requirements. It evaluates the programming structures and data structures used in the program. The criteria for deciding test cases for white-box testing are:
1. Control flow based criterion: The control flow graph of a program is considered and coverage of various aspects of the graph is specified. The coverage criteria are:
1. Statement coverage requires that the paths executed during testing include all the nodes in the graph.
2. Branch coverage requires that each edge in the control flow graph be traversed at least once.
3. Path coverage requires all possible paths in the control flow graph to be executed during testing.
2. Data flow based testing is conducted by creating a definition-use graph for the program. A variable occurs in one of three ways:
• def refers to the definition of the variable
• c-use represents computational use of the variable
• p-use represents predicate use, which is used for transfer of control
Based on these, various paths can be defined, such as:
• Global c-use
• Def-clear path
• dcu(x, i): the set of nodes such that each node contains a c-use of x
• dpu(x, i): the set of nodes such that each node contains a p-use of x
• all-p-uses: requires a def-clear path w.r.t. x from i to some node in dpu(x, i)
• all-c-uses: requires a def-clear path w.r.t. x from i to some node in dcu(x, i)
Advantages
• Reveals errors in hidden code
• Forces reasoning about the implementation
Disadvantages
• Expensive
• Omitted cases can leave code untested
Mutation testing
It is performed by creating many mutants of the code by making simple changes to the program. By testing, we then try to identify these mutants. 
If all the planted mutants are found (killed), then we can assume that the test cases are sufficient; otherwise more test cases are created until all the mutants are found. Steps for mutant testing are as follows: 59
  • 60. Software Engineering 2008 • Generate mutants for P. Suppose there are N mutants.
• By executing each mutant and P on each test case in T, find how many mutants can be distinguished by T. Let D be the number of mutants distinguished, or dead.
• For each mutant that cannot be distinguished by T (live), find how many are equivalent to P. Let E be the number of equivalent mutants.
• The mutation score is computed as D/(N-E)
• Add more test cases to T and continue testing until the mutation score is 1.
White-box and black-box testing are both important for complete testing of the system. Each has its own significance; their individual benefits and limitations make them suitable for different activities. Black-box testing is suitable for verifying the completeness of the system, while white-box testing helps enhance the existing structure of the program, enabling efficiency of the system.
Test case specification (columns): Requirement number | Condition to be tested | Test data and settings | Expected output
Error, fault and failure
• Error refers to the discrepancy between a computed, observed or measured value and the true, specified, or theoretically correct value.
• Fault is a condition that causes a system to fail in performing its required function. It is the basic reason for the software malfunction and is synonymous with the term bug.
• Failure is the inability of a system or component to perform a required function according to its specifications.
Presence of an error implies that a failure must have occurred, and the observance of a failure implies a fault must be present. However, the presence of a fault does not imply that a failure must occur. During the testing process, only failures are observed, by which the presence of faults is deduced. The actual faults are identified by separate activities, commonly referred to as “debugging”. 
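The mutation score defined in the steps above is simple arithmetic; a minimal sketch (the counts N, D and E are invented for illustration):

```python
def mutation_score(n_mutants, dead, equivalent):
    """Mutation score = D / (N - E): mutants killed by the test set T,
    divided by the non-equivalent mutants, as in the steps above."""
    return dead / (n_mutants - equivalent)

# e.g. N = 50 generated mutants, D = 36 killed by test set T,
# E = 5 mutants found to be equivalent to the original program P
score = mutation_score(50, 36, 5)
print(score)          # 0.8
print(score == 1.0)   # False, so T needs more test cases
```

A score below 1 signals exactly the situation described above: more test cases must be added to T until every non-equivalent mutant is killed.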
Defect logging and tracking Severity of defect can be categorized as follows: • Critical: Show stopper; affects a lot of users; can delay project • Major: Has a large impact but workaround exists 60
  • 61. Software Engineering 2008 • Minor: An isolated defect that manifests rarely and with little impact
• Cosmetic: Small mistakes that don’t impact the correct working
Defect analysis and prevention techniques
• Pareto analysis states that 80% of the problems come from 20% of the possible sources. Once these sources are identified, similar test cases can be prepared for the rest of the program.
• Causal analysis is performed using a cause-effect diagram. After the problem is stated, the major categories of causes are identified. Then further brainstorming helps identify sub-causes to these major causes.
• Develop and implement solutions that attack the root cause of the defects. Preventive solutions devised are implemented as project activities, assigned to team members. The effect of such solutions is monitored.
Types of testing
• Unit testing is performed at the time of development. Errors found at this level can be debugged easily. Each module is tested against the specifications produced during design for the modules. It is typically done by the programmer. Some examples of unit testing are:
• Black-box approach
– Field level check
– Field level validation
– User interface check
– Function level check
• White box approach
– Statement coverage
– Decision coverage
– Condition coverage
– Path coverage
Advantages of unit testing
– Can be applied directly to object code
– Performance profilers implement this measure
Disadvantages
– Not applicable to logical operators
– Insensitive to loop operators (number of iterations)
• Integration testing is performed to put together all the modules developed and unit tested. It is done in either a top-down or a bottom-up method. Integration testing gives a chance to determine the correctness of interfaces between modules. There are several techniques for the same. 61
  • 62. Software Engineering 2008 1. Top-down integration proceeds down the invocation hierarchy, adding one module at a time until an entire tree level is integrated, and thus it eliminates the need for drivers. Testing cannot start until all the modules are added, unless we create stubs, or dummy modules, to make the partial system executable.
2. The bottom-up strategy works similarly from the bottom and has no need of stubs. Here testing can begin early, but the big picture can be seen only at a later stage.
3. A sandwich strategy runs from top and bottom concurrently, meeting somewhere in the middle.
With integration testing we move gradually away from structural testing and towards functional testing, which treats a module as a black box.
• System testing: Once the whole system is tested and integrated, we conduct tests to verify whether the system, now as a whole, satisfies the system requirements. It is followed by performance testing, where the actual performance of the system is evaluated. System testing is concerned with compliance with the requirements regarding the system. Since software is one component of a larger computer-based system, other incorporated system components should also be tested. Two techniques for selecting cases are defined: these can be structural or functional. A few other considerations are as follows:
1. Testing the system’s capabilities is more important than testing its components; thus the focus shifts to catastrophic failures rather than minor irritants
2. Testing the usual is more important than testing the exotic; thus the focus is on actual use.
3. If we are testing after modification of an existing product, we should test old capabilities rather than new ones.
Some of the structural techniques used for system testing are:
1. Stress testing: It executes the system in a manner that demands resources in abnormal quantity, frequency or volume. It decides the upper limit parameters for the system 2. 
Recovery testing forces the software to fail in a variety of ways and verifies that the system recovers in a proper manner.
3. Compliance testing is conducted to ensure that the system conforms to the standards laid down by the quality management system.
4. Configuration testing is used to analyze system behaviour in the various hardware and software configurations specified in the requirements.
Functional techniques used for system testing are as follows:
1. Security testing verifies that the protection mechanisms built into a system will protect it from improper penetration. The role of the system designer is to make the cost of penetration more than the value of the information that would be obtained. It can also be a part of the structural techniques
2. Manual support/documentation testing: includes user documentation required by the clients and ensures correct support is available.
3. Control testing checks the control mechanisms for correctness
4. Parallel testing is conducted to ensure that all the versions have the required functionalities and correctly display the changes incorporated in each version
5. Volume testing ensures that data structures have been designed successfully for extraordinary situations 62
  • 63. Software Engineering 2008 • Acceptance testing is performed with realistic data of the client to demonstrate that the software is working satisfactorily. Testing here focuses on the external behavior of the system. There are two types of acceptance testing: • Alpha testing is performed at the developer’s site, but the user is present for testing • Beta testing is performed at the client site and the system is tested for acceptability Cleanroom software development • The name is derived from the 'Cleanroom' process in semiconductor fabrication. The philosophy is defect avoidance rather than defect removal. • This software development process is based on: – Incremental development; – Formal specification; – Static verification using correctness arguments; – Statistical testing to determine program reliability. Cleanroom process characteristics • Formal specification using a state transition model. • Incremental development where the customer prioritises increments. • Structured programming - limited control and abstraction constructs are used in the program. • Static verification using rigorous inspections. • Statistical testing of the system Formal specification and inspections The state based model is a system specification and the inspection process checks the program against this model. The programming approach is defined so that the correspondence between the model and the system is clear. Mathematical arguments (not proofs) are used to increase confidence in the inspection process. 63
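Statistical testing, listed above as a Cleanroom characteristic, selects test stimuli according to the operational usage profile so that frequently used operations are exercised most. A minimal sketch, where the operation names and probabilities are invented for the example:

```python
import random

# Invented operational profile: operation -> probability of occurrence in use
usage_profile = {"view_page": 0.60, "search": 0.25, "update": 0.10, "delete": 0.05}

def generate_statistical_tests(profile, n, seed=42):
    """Draw n test stimuli weighted by the usage probability distribution."""
    rng = random.Random(seed)          # seeded for reproducible test runs
    ops, weights = zip(*profile.items())
    return rng.choices(ops, weights=weights, k=n)

tests = generate_statistical_tests(usage_profile, 1000)
print(tests[:5])
print(tests.count("view_page") / len(tests))   # close to 0.60 by construction
```

Reliability figures measured over such a run reflect the reliability users will actually experience, which is what the certification team's reliability growth models need.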
  • 64. Software Engineering 2008 Cleanroom process teams
• Specification team. Responsible for developing and maintaining the system specification.
• Development team. Responsible for developing and verifying the software. The software is NOT executed or even compiled during this process.
• Certification team. Responsible for developing a set of statistical tests to exercise the software after development. Reliability growth models are used to determine when reliability is acceptable.
Certification
Certification requires the creation of three models:
1. Sampling model: Software testing executes m random test cases and is certified if no failures or a certified number of failures occur.
2. Component model: A system composed of n components is to be certified to determine the probability that component i will fail prior to completion.
3. Certification model: The overall reliability of the system is projected and certified.
Cleanroom process evaluation
• The results of using the Cleanroom process have been very impressive, with few discovered faults in delivered systems.
• Independent assessment shows that the process is no more expensive than other approaches.
• There were fewer errors than in a 'traditional' development process.
• However, the process is not widely used. It is not clear how this approach can be transferred to an environment with less skilled or less motivated software engineers. 64
  • 65. Software Engineering 2008 Halstead’s software science is based on the theory of software complexity. He proposed that a program may be considered as a collection of lexical tokens, each of which can be classified as an operator or an operand. The measurable and countable properties as proposed by Halstead are as follows:
n1 = number of unique or distinct operators appearing in that implementation
n2 = number of unique or distinct operands appearing in that implementation
N1 = total usage of all of the operators appearing in that implementation
N2 = total usage of all of the operands appearing in that implementation
The program vocabulary is given as n = n1 + n2
The length of the program is given as N = N1 + N2
The number of bits required to encode the program being measured is known as the Volume, given by V = N log2 n
The level of abstraction provided by the programming language is L = V*/V, where V* is the potential minimum volume
The potential volume denotes an algorithm’s shortest possible form: V* = (n1* + n2*) log2 (n1* + n2*)
The Effort (E) required to implement the program is given by E = V/L 65
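The counting rules above can be sketched for a tiny, hand-tokenized example. Classifying tokens into operators and operands is language-specific, so the classification here is done by hand for one assignment statement; taking n2 as an estimate of n2* for the potential volume is also an assumption of this sketch, not part of Halstead's definition.

```python
import math

def halstead(operators, operands):
    """Halstead measures from pre-classified token lists:
    vocabulary n, length N, volume V = N*log2(n), level L = V*/V,
    effort E = V/L."""
    n1, n2 = len(set(operators)), len(set(operands))
    N1, N2 = len(operators), len(operands)
    n, N = n1 + n2, N1 + N2
    V = N * math.log2(n)
    V_star = (2 + n2) * math.log2(2 + n2)   # sketch: n1* = 2, n2* taken as n2
    L = V_star / V
    return {"n": n, "N": N, "V": round(V, 2), "L": round(L, 2),
            "E": round(V / L, 2)}

# Hand-classified tokens of the statement:  a = b + c * b
print(halstead(operators=["=", "+", "*"], operands=["a", "b", "c", "b"]))
```

For this statement n1 = 3, n2 = 3, N1 = 3 and N2 = 4, giving n = 6, N = 7 and V = 7·log2(6) ≈ 18.09 bits.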
  • 66. Software Engineering 2008 Advantages of Halstead’s Metrics
1. Does not require in-depth analysis of program structure
2. Predicts the rate of errors
3. Predicts maintenance effort
4. Useful in scheduling and reporting of projects
5. Simple to calculate
6. Useful in maintaining the quality of programs
7. Is language independent
8. Is supported at large by industry
Disadvantages of Halstead’s Metrics
1. Depends on the completed code
2. Has little use as a predictive estimating model, since the code must already exist
Testing of web applications
Since web-based systems and applications reside on a network and interoperate with many different operating systems, browsers, hardware platforms, communications protocols, and “backroom” applications, the search for errors represents a significant challenge for web engineers. There are three main styles of testing.
Exploratory testing examines a system and looks for areas of user confusion, slow-down, or mistakes. Such testing is performed with no particular preconceived notions about where the problems lie or what form they may take. The deliverable for an exploratory test is a list of problem areas for further examination: "users were visibly confused when faced with page x; only half the users were able to complete task y; task z takes longer than it should." Exploratory testing can be used at any point in the development life cycle, but is most effective when implemented early and often.
Threshold testing measures the performance characteristics of a system against predetermined goals. This is a pass/fail effort: "with this system users were able to complete task x in y seconds, making an average of z mistakes. This does (does not) meet the release criteria." Threshold testing typically accompanies a beta release. 66
  • 67. Software Engineering 2008 Finally, comparison testing measures the usability characteristics of two approaches or designs to determine which better suits users' needs. This is usually done at the early prototyping stage.
(Figure: web application design elements, namely interface, aesthetic, content, navigation, architecture and component design, mapped to the corresponding testing activities: content testing, interface testing, navigation testing, component testing, configuration testing, security testing and performance testing.) 67
  • 68. Software Engineering 2008 Maintenance
Reasons for changes in software
• Customer-initiated changes due to changes in law, alterations to policy, changes to interfacing systems
• Developer-initiated changes due to component failure reports, detection of defects, changes to the system's environment
• Product support considerations: product on multiple platforms, multiple product versions, different customized product versions
Laws of program/software evolution
• Continuing change: A program undergoes continuing change or becomes progressively less useful
• Increasing complexity: As an evolving program is changed, its complexity, which reflects deteriorating structure, increases unless work is done to reduce it
• The fundamental law of program evolution: Program evolution is subject to a dynamic that makes the programming process self-regulating, with statistically determinable trends
• Conservation of organizational stability: The global activity rate in a project supporting an evolving program is statistically invariant
• Conservation of familiarity: The release content (changes, additions, deletions) of the successive releases of an evolving program is statistically invariant
Categories of Software Maintenance
Corrective maintenance: Aims to correct errors
Adaptive maintenance: Changes the system to react to changes in the environment
Preventive maintenance: Involves documenting, commenting, or even re-implementing some part of the software
Perfective maintenance: Improves the efficiency and/or effectiveness of the system, or improves maintainability
Development time vs. maintenance time = 20:80 68
  • 69. Software Engineering 2008 Problems during maintenance
• Often the program was written by another person
• Often the program is changed by a person who did not understand it properly
• Program listings are not structured
• High staff turnover
• Some problems become clear only when a system is in use
• Some systems are not designed for change
Potential solutions to maintenance problems
• Budget and effort reallocation: More time should be invested during development
• Complete replacement of the system: if the cost of system maintenance is high
• Enhancement of the existing system
Software maintenance cost factors
• Non-technical
• Application domain
• Staff stability
• Program lifetime
• External environment
• Hardware stability
• Technical
• Programming language
• Programming style
• Program validation and testing
• Documentation
Structured vs. unstructured maintenance
Structured:
• Based on component-level design
• Recent approach
• A common process framework is established
• Regression tests are conducted and the software is released again
• Software configuration management is implemented
Unstructured:
• Based on code (object) maintenance
• Earlier approach
• No such break-up is created
• Regression tests are impossible to conduct
• Software configuration management is not implemented
Automated Maintenance Tools
• Text editors
• File comparators
• Compilers and linkers
• Debugging tools
• Cross-reference generators 69
  • 70. Software Engineering 2008 • Static code analyzers
• Configuration management repositories
Estimation of maintenance cost
Various models exist that help management in estimating project costs. A few are as follows:
• COCOMO Model
Annual change traffic (ACT) = (KLOC added + KLOC deleted) / KLOC total
Maintenance cost = ACT × Development cost
• Belady and Lehman Model
M = P + K e^(c-d)
where
M: Total effort expended on maintenance
P: Productive effort that involves analysis, design, coding, testing and evaluation
K: An empirically determined constant
c: Complexity measure due to lack of good design
d: Degree to which the maintenance team is familiar with the software
Measuring Maintenance
• Environment-dependent measures which describe the degree and effectiveness of maintenance
• Mean time to repair
• Total change implementation time to total number of changes implemented
• Number of unresolved problems
• Time spent on unresolved problems
• Percentage of changes that introduce new faults
• Number of components modified to implement a change
• Internal attributes affecting maintainability
• Cyclomatic number
• Readability measure (fog index)
F = 0.4 × [(number of words / number of sentences) + percentage of words of 3 or more syllables]
Software Rejuvenation
It tries to increase the overall quality of an existing system by looking back at its work products to try to derive additional information or to reformat them in a more understandable way. There are several aspects of software rejuvenation to consider:
• Redocumentation
• Restructuring: code restructuring and data restructuring
• Reverse engineering is performed to understand internal data structures, database structures, user interfaces and processing.
• Reengineering 70
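The two maintenance-cost models above reduce to short formulas; a sketch with invented project figures (the KLOC counts, development cost, and the P, K, c, d values are all illustrative, not from the text):

```python
import math

def cocomo_maintenance_cost(kloc_added, kloc_deleted, kloc_total, dev_cost):
    """COCOMO: ACT = (KLOC added + KLOC deleted) / KLOC total;
    maintenance cost = ACT * development cost."""
    act = (kloc_added + kloc_deleted) / kloc_total
    return act * dev_cost

def belady_lehman_effort(P, K, c, d):
    """Belady-Lehman: M = P + K * e^(c - d)."""
    return P + K * math.exp(c - d)

# Invented figures: 5 KLOC added, 3 deleted, 100 KLOC system, 200,000 dev cost
print(cocomo_maintenance_cost(5, 3, 100, 200_000))      # 16000.0 (ACT = 0.08)
# Invented figures: P = 500 person-days, K = 0.3, c = 8, d = 6
print(round(belady_lehman_effort(500, 0.3, 8, 6), 1))   # 502.2
```

Note how the exponent (c - d) captures the model's intuition: high design complexity c with low team familiarity d makes the maintenance effort grow exponentially beyond the productive effort P.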
  • 71. Software Engineering 2008 Principles of re-engineering 1. We redefine software scope and goals 2. Redefine SRS by way of additions, deletions and extensions of functions 3. Redesigning the application design and architecture using new technology 4. Improving database design, code restructuring to make size smaller and efficient in operations 5. Rewriting the documentation to make it more user friendly Reverse Software Engineering is the process of deriving the requirements from the code. A lot of times during maintenance, the system is found to have no design or requirement document. In such circumstances, we start reading the code, and derive functions that the module performs. The flow of data in the module defines the interaction between sub-systems that helps us define the design of the system. Taking these two as the base, we can define the requirement specification for the system. This whole procedure, going in the reverse manner (as from a process model), defines the process of reverse engineering. 71
  • 72. Software Engineering 2008 Cost Estimation
Estimation of resources, cost and schedules for a software engineering effort requires experience, access to good historical information (metrics), and the courage to commit to quantitative predictions when qualitative information is all that exists. Estimation carries inherent risk, which leads to uncertainty.
Resources
Each resource is specified with four characteristics:
1. Description of the resource
2. Statement of availability
3. Time when the resource will be required
4. Duration of time that the resource will be applied
Human resources
The planner evaluates the software scope and selects the skills required to complete development. Organizational position and specialty are specified. The number of people required for a software project can be determined only after an estimate of development effort is made.
Environmental resources
These incorporate hardware and software. Hardware provides a platform that supports the tools (software) required to produce the work products that are an outcome of good software engineering practice.
Reusable software resources
Reusable software building blocks, called components, are divided into the following categories: off-the-shelf components (COTS), full-experience components, partial-experience components, and new components.
Commandments for estimating
1. Don’t estimate to match the budget
2. Don’t play “I’ll double because he’ll halve” games (stating twice the cost, because negotiations bring it to half): Focusing on the basis for estimation and sharing of data can make it difficult to arbitrarily cut down estimates.
3. Don’t show off by giving a smaller estimate.
4. Don’t play the numbers game
5. Don’t be over precise.
6. Get real. Don’t live in the past.
7. Always maintain past data.
8. Tell the truth (do not hide real time spent)
9. Don’t pressurize people into giving smaller estimates
10. Don’t be greedy. 
Software Project Estimation Options available for calculation are: • Delay estimation until late in the project 72
• 73. Software Engineering 2008
• Base estimation on similar projects
• Use relatively simple decomposition techniques to generate project cost and effort estimates
• Use one or more empirical models for software cost and effort estimation
Decomposition Techniques
The problem is broken down into smaller portions and estimation is based on these portions. First the size should be estimated.
Software Sizing
Software size can be measured in LOC or FP. Four approaches to the sizing problem have been identified by Putnam and Myers:
• Fuzzy logic sizing: the planner identifies the type of application, establishes its magnitude on a qualitative scale, and then refines the magnitude within the original range.
• Function point sizing
• Standard component sizing: the number of occurrences of each standard component is estimated, and historical project data is used to determine the delivered size per standard component.
• Change sizing: used when existing software has to be modified.
A three-point or expected value is computed as
S = (Sopt + 4Sm + Spess) / 6
where Sopt = optimistic size estimate, Sm = most likely size estimate, and Spess = pessimistic size estimate. These estimates are calculated using past data from similar projects.
Example of LOC-based estimation
Function | Estimated LOC
User interface and control facility | 2,300
Two-dimensional geometric analysis | 5,300
Three-dimensional geometric analysis | 6,800
Database management | 3,350
Computer graphics display facilities | 4,950
Peripheral control functions | 2,100
Design analysis modules | 8,400
Estimated lines of code | 33,200
Example of FP-based estimation
Factor | Value
Backup and recovery | 4
Data communications | 2
• 74. Software Engineering 2008
Distributed processing | 0
Performance critical | 4
Existing operating environment | 3
On-line data entry | 4
Input transaction over multiple screens | 5
ILFs updated online | 3
Information domain values complex | 5
Internal processing complex | 5
Code designed for reuse | 4
Conversion/installation design | 3
Multiple installations | 5
Application designed for change | 5
Value adjustment factor | 1.17
Information domain | Opt | Likely | Pess | Est. count | Weight | FP count
No. of external inputs | 20 | 24 | 30 | 24 | 4 | 97
No. of external outputs | 12 | 15 | 22 | 16 | 5 | 78
No. of external inquiries | 16 | 22 | 28 | 22 | 5 | 88
No. of internal logical files | 4 | 4 | 5 | 4 | 10 | 42
No. of external interface files | 2 | 2 | 3 | 2 | 7 | 15
Count total | | | | | | 320
FP = count total × [0.65 + 0.01 × ΣFi] = 320 × 1.17 ≈ 375
Process-based estimation computes cost and effort for each function and framework activity.
Estimation with use cases
LOCestimated = N × LOCavg + [(Sa/Sh − 1) + (Pa/Ph − 1)] × LOCadjust
where
N = actual number of use cases
LOCavg = historical average LOC per use case for a similar subsystem
LOCadjust = adjustment based on n percent of LOCavg, where n defines the difference between this project and average projects
Sa = actual scenarios per use case
Sh = average scenarios per use case for similar subsystems
• 75. Software Engineering 2008
Pa = actual pages per use case
Ph = average pages per use case for this type of subsystem
Empirical Estimation Models
A typical estimation model is derived using regression analysis on data collected from past software projects:
E = A + B × (ev)^C
where A, B and C are empirically derived constants, E is effort, and ev is the estimation variable (LOC or FP).
LOC-oriented estimation models include:
E = 5.2 × (KLOC)^0.91 (Walston-Felix model)
E = 5.5 + 0.73 × (KLOC)^1.16 (Bailey-Basili model)
and many more.
FP-oriented estimation models include:
E = −91.4 + 0.355 FP (Albrecht and Gaffney model)
E = −37 + 0.96 FP (Kemerer model)
and many more.
Techniques for effort estimation, summarized:
• Algorithmic cost modeling: historical information serves as the basis and cost is estimated from size.
• Expert judgment: one or more experts determine the estimates on the basis of experience.
• Estimation by analogy: an organization determines an estimate from past experience of similar projects.
• Pricing to win: the estimate is a figure that appears sufficiently low to win a contract.
• Top-down: an overall estimate is formulated and then broken down into component tasks; Effort = (system size) × (productivity rate).
• Bottom-up: component tasks are identified and sized, and the individual estimates are aggregated (WBS).
• Parkinson's Law: work expands to fill the available time and budget.
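The sizing and effort formulas above can be sketched in a few lines of code. This is a minimal illustration: the Walston-Felix and Bailey-Basili constants come from the models quoted in the text, and the input numbers are taken from the worked LOC/FP example (the external-inputs row and the FP count total of 320 with a value adjustment factor of 1.17).

```python
def expected_size(s_opt, s_likely, s_pess):
    """Three-point (beta) estimate: S = (Sopt + 4*Sm + Spess) / 6."""
    return (s_opt + 4 * s_likely + s_pess) / 6

def function_points(count_total, value_adjustment_factor):
    """FP = count_total * VAF, where VAF = 0.65 + 0.01 * sum(Fi)."""
    return count_total * value_adjustment_factor

def walston_felix(kloc):
    """Walston-Felix effort model: E = 5.2 * KLOC**0.91."""
    return 5.2 * kloc ** 0.91

def bailey_basili(kloc):
    """Bailey-Basili effort model: E = 5.5 + 0.73 * KLOC**1.16."""
    return 5.5 + 0.73 * kloc ** 1.16

# External inputs row of the worked example: opt=20, likely=24, pess=30.
print(round(expected_size(20, 24, 30)))   # ~24, matching the estimated count
# FP example: count total 320, value adjustment factor 1.17.
print(round(function_points(320, 1.17)))  # ~374, i.e. the FP ≈ 375 in the text
# Effort for the 33,200-LOC example (33.2 KLOC):
print(round(walston_felix(33.2), 1))
print(round(bailey_basili(33.2), 1))
```

Note that the two empirical models disagree noticeably, which is why the text recommends developing two or more estimates and reconciling them.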
• 76. Software Engineering 2008 The Management Spectrum
The Project Planning Process
• Establish project scope
• Determine feasibility
• Analyze risks
• Define required resources – determine available human resources, reusable software resources, and environmental resources
• Estimate cost and effort – decompose the problem, develop two or more estimates using size, function points, process tasks or use cases, and reconcile the estimates
• Develop a project schedule – establish a meaningful task set, define a task network, use scheduling tools to develop a timeline chart, and define a schedule tracking mechanism
Software Project Management
Effective software project management focuses on the four P's: people, product, process and project.
PEOPLE
The Software Engineering Institute has developed a People CMM to enhance the readiness of software organizations to undertake increasingly complex applications by helping to attract, grow, motivate, deploy and retain the talent needed to improve their software development capability. The key practice areas are: recruiting, selection, performance management, training, compensation, career development, organization and work design, and team/culture development.
PRODUCT
• 77. Software Engineering 2008
The first project management activity is to examine the product and the problem it is intended to solve. The following steps characterize the product:
1. Software scope encompasses context (where the software sits in the larger system), information objectives (customer-visible data objects), and function and performance (the functions the software performs).
2. Problem decomposition, also called partitioning or problem elaboration.
PROCESS
The project manager must decide which process model is appropriate for (i) the customers, (ii) the product characteristics, and (iii) the project environment. The job of the project manager is to estimate resource requirements for each matrix cell, start and end dates for the tasks associated with each cell, and work products to be produced as a consequence of each task. Once the process model is chosen, the process framework is adapted to it.
PROJECT
A common-sense approach to a project:
1. Start on the right foot
2. Maintain momentum
3. Track progress
4. Make smart decisions
5. Conduct a post-mortem analysis
Managing people and organizing teams: We first identify the people associated with the project, then ensure the task is completed. The first step in the management process is to identify the objective or the set of tasks to perform. Next we identify the jobs or organizational structure. Then the people required are matched to the jobs and motivated to ensure they do their jobs correctly.
Stakeholders are those who participate in the software process. They include:
1. Senior managers, who define business issues
2. Project managers, who must plan, motivate, organize and control
3. Practitioners, who deliver technical skills
4. Customers, who specify requirements for the software
5. End-users, who interact with the software once it is released
Selecting the right person for the job requires a correct recruitment process:
• Create a job specification
• Create a job holder profile
• Obtain applicants
• Examine CVs
• 78. Software Engineering 2008
• Interviews, etc.
• Other procedures
Motivation methods:
• The Taylorist model (piece-rates) says that the wages or benefits of employees should be tied to the employee's output.
• Maslow's hierarchy of needs identifies five basic needs which, starting from the highest priority (most basic), are: physiological (food and shelter), safety, love and belongingness, esteem, and self-realization. Once the highest-priority need is satisfied, the person is motivated only by the next level of need.
• Herzberg's two-factor theory (hygiene factors relate to dissatisfaction; motivators to satisfaction) says there are motivators and de-motivators in work situations and environments. The theory suggests that to improve job attitudes and productivity, administrators must recognize and attend to both sets of factors, and not assume that an increase in satisfaction automatically leads to a decrease in dissatisfaction.
• The expectancy theory of motivation (perceived value) emphasizes the need for organizations to relate rewards directly to performance, and to ensure that the rewards provided are those deserved and wanted by the recipients.
Methods of improving motivation:
• Job enlargement means increasing the scope of a job by extending the range of its duties and responsibilities.
• Job enrichment is an attempt to motivate employees by giving them the opportunity to use the full range of their abilities.
As such, job enrichment has been described as 'vertical loading' of a job, while job enlargement is 'horizontal loading'.
Stages of team development:
• Forming, where people from diverse backgrounds but similar work are put together to achieve organizational goals.
• Storming is the stage when problems start occurring on the team. Since job roles are not yet accurately defined and ways of working differ from person to person, conflicts arise.
• Norming is the phase when the tensions on the team start resolving due to mutual understanding and common goals and objectives.
• Performing is when the team starts delivering output in the form of results.
• Adjourning is when the task is completed and people with different backgrounds return to different projects.
Measuring group performance
• Additive (each one contributes)
• Compensatory (correcting others' errors)
• Disjunctive (only one correct solution)
• 79. Software Engineering 2008
• Conjunctive (waiting for the slowest performer)
There are a number of factors which influence group working:
1. Group composition: Getting the balance of all skills in the group is essential. All members should be directed towards the same goal. A mistake by one person at one stage can affect the whole project. Thus we require a leader who can direct each member and guide them, or interpret the goals for their work.
2. Group cohesiveness: When people work together they subordinate their personal interests to the organizational interest, but often their individual capabilities are not fully utilized. Cohesiveness promotes egoless programming.
3. Group communications are very important because they help in coordinating group activities. The correct method of communication should be defined and practiced.
4. Group organization should be correct so as to enable individual potential to get recognition, to promote better coordination, and to control activities.
Leadership powers
1. Position power: coercive power (ability to force), connection power, legitimate power (status/title), reward power
2. Personal power: expert power, information power, referent power (attractiveness)
Team structures
1. A closed paradigm (formal structure)
2. A random paradigm (egoless programming)
3. An open paradigm (midway between closed and random)
4. A synchronous paradigm (natural compartmentalization)
5. A matrix structure (line and staff)
Putnam Resource Allocation Model
Putnam's resource allocation model is based on the Rayleigh curve, which rises, reaches a peak, and then trails off exponentially as a function of time. The Norden/Rayleigh curve equation represents manpower, measured in persons per unit time, as a function of time.
m(t) = dy/dt = 2Kat e^(-at^2)
where dy/dt is the manpower utilization rate per unit time, "t" is elapsed time, "a" is a parameter that affects the shape of the curve, and "K" is the area under the curve in the interval [0, ∞].
y(t) = ∫(0 to t) 2Kaτ e^(-aτ^2) dτ = K[1 - e^(-at^2)]
• 80. Software Engineering 2008
y(0) = 0 and y(∞) = K, the total effort.
Setting d²y/dt² = 2Ka e^(-at^2) [1 - 2at^2] = 0 gives td^2 = 1/(2a), where td denotes the time at which the maximum effort rate occurs.
Replacing "a" with 1/(2td^2), the number of people involved in the project at time t becomes
m(t) = (K/td^2) t e^(-t^2/(2td^2))
At the peak, t = td, and m(t) can be stated as m0:
m0 = K/(td √e)
where K is the total project cost (or effort) in person-years, td is the delivery time in years, and m0 is the number of persons employed at the peak.
Average rate of team build-up = m0/td
Difficulty Metric
The slope of the manpower distribution has a useful property:
m'(t) = d²y/dt² = 2Ka e^(-at^2) (1 - 2at^2)
For t = 0, m'(0) = 2Ka = 2K/(2td^2) = K/td^2
This ratio is denoted by D and is measured in persons/year:
D = K/td^2
This relationship shows that a project is more difficult to develop when the manpower demand is high or when the time schedule is short; difficult projects have a steeper demand curve. Difficulty is also related to peak manning: since m0 = K/(td √e), multiplying both sides by √e/td gives m0 √e/td = K/td^2 = D.
Manpower build-up
Taking the derivatives of D with respect to td and K:
D'(td) = -2K/td^3 (persons/year^2)
D'(K) = 1/td^2 (years^-2)
If the project scale is increased, the development time also increases to such an extent that the quantity K/td^3 remains roughly constant. This quantity is denoted D0:
D0 = K/td^3 (persons/year^2)
D0 varies slightly from one organization to another, depending on the average skill of the analysts, developers and management involved.
Putnam also observed that productivity is proportional to difficulty:
P ∝ D^β
where average productivity P is the lines of code produced divided by the cumulative manpower used to produce the code.
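The Rayleigh staffing relations above can be sketched directly. This is a minimal illustration using made-up figures (K = 40 person-years of total effort, td = 2 years to delivery); only the formulas come from the text.

```python
import math

def staffing(t, K, td):
    """Rayleigh manpower curve: m(t) = (K/td**2) * t * exp(-t**2 / (2*td**2))."""
    return (K / td ** 2) * t * math.exp(-t ** 2 / (2 * td ** 2))

def peak_manning(K, td):
    """m0 = K / (td * sqrt(e)), the staff level reached at t = td."""
    return K / (td * math.sqrt(math.e))

def difficulty(K, td):
    """Difficulty metric D = K / td**2, in persons per year."""
    return K / td ** 2

# Illustrative project: 40 person-years of effort on a 2-year schedule.
K, td = 40.0, 2.0
print(round(peak_manning(K, td), 2))
print(round(staffing(td, K, td), 2))   # same value: the curve peaks at t = td
print(difficulty(K, td))               # 10.0 persons/year
```

Evaluating `staffing` at a few points shows the characteristic slow ramp-up, peak at td, and exponential tail-off that the model assumes.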
• 81. Software Engineering 2008
P = S/E
Putnam analyzed 50 army projects and determined that
P = φ D^(-2/3), or S = φ D^(-2/3) E
(We know that at t = td, E = y(td) = K(1 - e^(-td^2/(2td^2))) = K(1 - e^(-0.5)) ≈ 0.3935K.)
Substituting:
S = φ D^(-2/3) (0.3935K) = φ [K/td^2]^(-2/3) K (0.3935) = 0.3935 φ K^(1/3) td^(4/3)
Writing C = 0.3935 φ:
S = C K^(1/3) td^(4/3), or C = S K^(-1/3) td^(-4/3)
In software projects, time cannot be freely exchanged against cost; such a trade-off is limited by the nature of software development. The time scale should never be reduced to less than 75% of its initially calculated value. Since K = (1/td^4)(S/C)^3, the effort K varies inversely as the fourth power of the development time.
[Figure: Rayleigh curves for the project, showing the design, test and maintenance phases.]
The product curve is the sum of two curves, the development curve and the test/validation curve. Putnam's model works reasonably well on large projects but seriously overestimates effort on medium or small systems. The model places heavy emphasis on the size and development-schedule attributes while downplaying other attributes.
Overview of Project Planning
The major principle of project planning is to plan in outline first and then in more detail. Following is an outline of the main planning activities:
Step 0: Select project
Step 1: Identify project scope and objectives
  • 82. Software Engineering 2008 1. Identify objectives and measures of effectiveness in meeting them 2. Establish a project authority 3. Identify stakeholders 4. Modify objectives in the light of stakeholder analysis 5. Establish methods of communications with all parties Step 2: Identify project Infrastructure 1. Establish relationship between project and strategic planning 2. Identify installation standards and procedures 3. Identify project team organization Step 3: Analyze project characteristics 1. Distinguish the project as either objective or product driven 2. Analyze other project characteristics 3. Identify high-level project risks 4. Take into account user requirements concerning implementation 5. Select general life cycle approach 6. Review overall resource estimates Step 4: Identify project products and activities 1. Identify and describe project products (including quality criteria) 2. Document generic product flows 3. Recognize product instances 4. Produce ideal activity network 5. Modify ideal to take into account need for stages and checkpoints Step 5: Estimate effort for each activity 1. Carry out bottom-up estimates 2. Revise plan to create controllable activities Step 6: Identify activity risk 1. Identify and quantify activity based risks 2. Plan risk reduction and contingency measures where appropriate 3. Adjust plans and estimates to account for resource constraints Step 7: Allocate resources 1. Identify and allocate resources 2. Revise plans and estimates to account for resource constraints Step 8: Review/ publicize plan 1. Review quality aspect of project plan 2. Document plans and obtain agreement Step 9/10: Execute plan/ lower level of planning, which may require reiteration of the planning process at a lower level . Project Evaluation/ Feasibility Study It is the first step in project development. Project assessment can be of following types for each project: 1. 
Strategic assessment: ensures a well-defined program goal, a strategic plan and a well defined information systems strategy. 82
  • 83. Software Engineering 2008 2. Technical Assessment: evaluates the required functionality against the hardware and software available. 3. Cost-benefit analysis: is comparing the expected costs of development and operation of the system with the benefits of having it in place. Cost can be development costs, setup costs, Operational costs. Benefits can be Direct, assessable indirect or intangible benefits. 4. Cash flow forecasting: indicates when expenditure and income will take place. 5. Cost Benefit evaluation techniques: such as net profit, payback period, return on investment, Net present value, internal rate of return, help measure the cost against the profits. Only when benefits outweigh the cost, we consider the project. 6. Risk Evaluation: is important for the evaluation of the project. If there is high uncontrollable risk, then we might not consider the project. Project Scheduling and Monitoring Project Scheduling The project manager’s objective is to define all project tasks, build a network that depicts their interdependencies, identify the tasks that are critical within the network, and then track their progress to ensure that delay is recognized. To accomplish this, the manager must have a schedule that has been defined at a degree of resolution that allows progress to be monitored and the project to be controlled. Software project scheduling is an activity that distributes estimated effort across the planned duration by allocating the effort to specific software engineering tasks. Basic Principles 1. Compartmentalization: Project is compartmentalized into a number of manageable activities, actions and tasks. 2. Interdependency: of every compartmentalized activity is determined 3. Time allocation: Each task should be allocated some number of work units. It should have a start and completion date. 4. Effort validation: Ensure that no more than the allocated number of people have been scheduled at any given time. 5. 
Defined responsibilities: team members should be allocated to each task. 6. Defined outcomes: every task should have a defined outcome; work products are often combined into deliverables. 7. Defined milestones for tasks or groups of tasks.
Defining a task set for the software project
1. The task set will vary depending upon the project type and the degree of rigor with which the software team decides to work. Various project types are:
1. Concept development: to explore a new business concept
• 84. Software Engineering 2008
2. New application development on customer request
3. Application enhancement projects
4. Reengineering projects
Each project type uses a process model and has its own flow of activities.
2. Refinement of major tasks by decomposing them into sets of subtasks
3. Defining a task network (or activity network), a graphic representation of the task flow for a project
4. Scheduling: tools like PERT (Program Evaluation and Review Technique) and the Critical Path Method (CPM) are used for scheduling; timeline charts and project tables are prepared.
Tracking the schedule
• Conducting periodic project status meetings in which each team member reports progress and problems
• Evaluating the results of all reviews conducted throughout the software engineering process
• Determining whether formal project milestones have been accomplished
• Comparing the actual start date of each task to the planned date
• Meeting informally with practitioners to obtain subjective assessments of progress
• Using earned value analysis to assess progress quantitatively
Earned Value Analysis
The earned value system provides a common value scale for every task, regardless of the type of work being performed. Steps:
1. The budgeted cost of work scheduled (BCWS), the estimated work (in person-hours or person-days) for each task, is calculated.
2. Budget at completion (BAC) = total BCWS for all tasks.
3. The budgeted cost of work performed (BCWP) is calculated.
4. The following calculations are made:
1. Schedule performance index (SPI) = BCWP/BCWS
2. Schedule variance (SV) = BCWP − BCWS
3. Percentage scheduled for completion = BCWS/BAC
4. Percentage complete = BCWP/BAC
• 85. Software Engineering 2008
5. Cost performance index (CPI) = BCWP/ACWP
6. Cost variance (CV) = BCWP − ACWP
where ACWP is the actual cost of work performed.
Methods for assigning earned value are:
• The 0/100 technique, where a task is worth 0 at the start and 100% of its value on completion
• The 50/50 technique, where 50% of the value is credited at the start and 100% on completion
• The milestone technique, where value is credited on achievement of a milestone
Project Monitoring
Monitoring requires measurements to be made to assess the situation of a project. We must plan carefully what to measure, when to measure and how to measure; how the measurement data will be analyzed and reported must also be planned in advance. The basic purpose of measurement in a project is to effectively monitor and control the project. For monitoring a project, schedule, size, effort and defects are the basic measurements needed.
Project monitoring and tracking
The main goal of monitoring is to get visibility into the project, to ensure goals are met. There are different levels of monitoring:
1. Activity-level monitoring ensures that each activity in the detailed schedule has been done properly and on time. This monitoring is done daily in project team meetings or by the project manager checking the status of all tasks.
2. Status reports are often prepared weekly and contain a summary of the activities successfully completed.
3. Milestone analysis is done at each milestone; analysis of actual vs. estimated effort and schedule is often included.
Project Control
  • 86. Software Engineering 2008 86
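The earned value measures defined in the preceding slides can be computed mechanically once BCWS, BCWP, ACWP and BAC are known. The sketch below is a minimal illustration; the project figures are hypothetical.

```python
def earned_value_report(bcws, bcwp, acwp, bac):
    """Earned value measures as defined in the text (all costs in person-days)."""
    return {
        "SPI": bcwp / bcws,            # schedule performance index
        "SV":  bcwp - bcws,            # schedule variance
        "pct_scheduled": bcws / bac,   # percentage scheduled for completion
        "pct_complete":  bcwp / bac,   # percentage complete
        "CPI": bcwp / acwp,            # cost performance index
        "CV":  bcwp - acwp,            # cost variance
    }

# Hypothetical status: 200 person-days planned to date, 180 earned,
# 220 actually spent, against a 1000 person-day budget at completion.
r = earned_value_report(bcws=200, bcwp=180, acwp=220, bac=1000)
print(r["SPI"])   # 0.9  -> behind schedule (SPI < 1)
print(r["CPI"])   # ≈ 0.82 -> over budget (CPI < 1)
```

SPI and CPI below 1 signal schedule slippage and cost overrun respectively, which is what makes earned value useful as a quantitative progress check.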
• 87. Software Engineering 2008 Software Quality
Reliability of software is a probabilistic measure that assumes that the occurrence of failure is a random phenomenon. It is based on the time between occurrences of failures: the longer the software is available for error-free use, the more reliable it is. There are various metrics for reliability:
1. MTTF (mean time to failure), the mean operating time between two failures
2. MTTR (mean time to repair), the mean time taken to repair a failure
3. MTBF = MTTF + MTTR, the mean time between failures
4. ROCOF/POCOF, the rate of occurrence of failures; the higher this metric, the less reliable the system
5. Availability, the proportion of time for which the system is available for use, which is MTTF/(MTTF + MTTR)
Reliability prediction
• A reliability growth model is a mathematical model of how system reliability changes as the system is tested and faults are removed.
• It is used as a means of reliability prediction by extrapolating from current data
– This simplifies test planning and customer negotiations.
– You can predict when testing will be completed, and demonstrate to customers whether or not the required reliability growth will ever be achieved.
• Prediction depends on the use of statistical testing to measure the reliability of a system version.
[Figures: equal-step reliability growth; observed reliability growth.]
• The equal-step growth model is simple, but it does not normally reflect reality.
• Reliability does not necessarily increase with change, as a change can introduce new faults.
• The rate of reliability growth tends to slow down with time as frequently occurring faults are discovered and removed from the software.
  • 88. Software Engineering 2008 • A random-growth model where reliability changes fluctuate may be a more accurate reflection of real changes to reliability. 88
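The reliability metrics defined two slides back (MTTF, MTTR, MTBF, availability) can be computed from a failure log. The sketch below is a minimal illustration; the uptime and repair figures are hypothetical.

```python
def mttf(times_between_failures):
    """Mean time to failure: average operating time between failures (hours)."""
    return sum(times_between_failures) / len(times_between_failures)

def availability(mttf_h, mttr_h):
    """Availability = MTTF / (MTTF + MTTR); note MTBF = MTTF + MTTR."""
    return mttf_h / (mttf_h + mttr_h)

# Hypothetical log: hours of operation between successive failures,
# and hours spent repairing each failure.
uptimes = [120, 150, 90, 140]
repairs = [2, 4, 3, 3]

m_ttf = mttf(uptimes)                        # 125.0 hours
m_ttr = sum(repairs) / len(repairs)          # 3.0 hours
print(round(availability(m_ttf, m_ttr), 4))  # 0.9766
```

A reliability growth model would then track how these figures change from one tested version to the next, rather than treating them as a single snapshot.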
• 89. Software Engineering 2008
[Figures: random-step reliability growth; reliability prediction.]
Importance of Software Quality
• Increasing criticality of software
• Intangibility of software development
• Accumulation of errors during software development
Defining Software Quality
In practice, the quality of a system can be a vague, undefined attribute. We therefore need to define precisely what qualities we require of a system. For any software system, there should be three specifications:
• A functional specification describing what the system is to do
• A quality (or attribute) specification concerned with how well the functions are to operate
• A resource specification concerned with how much is to be spent on the system
Quality Factors
To identify specific product qualities appropriate to software, James A. McCall grouped software qualities into three sets of quality factors:
• Product operation quality factors
• Correctness: traceability, consistency, completeness
  • 90. Software Engineering 2008 • Reliability: Error tolerance, consistency, accuracy • Efficiency: Execution and storage efficiency • Integrity: Access control, access audit • Usability: Operability, training, communicativeness, Input / Output volume, Input / Output rate • Product revision quality factors • Maintainability: Consistency, simplicity, modularity, self-descriptiveness • Testability: Simplicity, modularity, instrumentation , self-descriptiveness • Flexibility: Modularity, generality, expandability , self-descriptiveness • Product transition quality factors • Portability: Modularity, self-descriptiveness, machine independence, software system independence • Reusability: Generality, modularity, software system independence, machine independence, self-descriptiveness • Interoperability: Modularity, communications commonality, data commonality ISO 9126 This standard was published to tackle the question of the definition of software quality. ISO 9126 identifies six software quality characteristics: • Functionality: suitability, accuracy, interoperability (interaction with other systems), compliance, security • Reliability: maturity (in terms of failure in system), fault tolerance, recoverability • Usability: understandability, learnability, operability • Efficiency: time behavior, resource behavior • Maintainability: analyzability, changeability, stability, testability • Portability: adaptability, installability, conformance, replaceability Dromey’s model The model suggests that the product quality is largely determined by the choice of components that comprise the product. Also the quality is determined by a number of factors. Apart from functionality, reliability, usability, efficiency, maintainability, portability, he also focuses on reusability and process maturity. 
Techniques to help enhance software quality
• Increasing visibility
• Procedural structure
• Checking intermediate stages through inspections
Process-based quality
• There is a straightforward link between process and product in manufactured goods.
• This is more complex for software because:
– The application of individual skills and experience is particularly important in software development;
• 91. Software Engineering 2008
– External factors, such as the novelty of an application or the need for an accelerated development schedule, may impair product quality.
• Care must be taken not to impose inappropriate process standards; these could reduce rather than improve product quality.
Evaluating processes
Process is related to improving product quality because it ensures the minimization of errors during the production process. One evaluation technique is the post-mortem analysis, which has five parts:
1. Design and promulgate a project survey to collect data without compromising confidentiality. It defines the scope of the analysis and collects information across the interests of project team members. Data should be tabulated.
2. Collect objective project information, such as resource costs, boundary conditions, schedule predictability, and fault counts. Information and measures related to cost, schedule and quality should be collected.
3. Conduct a debriefing meeting to collect information the survey missed. All participants in the development team are called, the entire development is discussed with respect to all events, and problems are identified. A convenor ensures the meeting is conducted properly.
4. Conduct a project history day with a subset of project participants, to review project events and data and to discover key insights. The members of this meeting are only those who can suggest improvements for the problems. All diagrams, charts and analysis reports are discussed here.
5. Publish the results, focusing on lessons learnt. The focus is on the three most important problems that kept the team from meeting its objectives. The purpose is to share the findings with everyone.
Practical process quality
• Define process standards, such as how reviews should be conducted, configuration management, etc.
• Monitor the development process to ensure that standards are being followed.
• Report on the process to project management and software procurer. • Don’t use inappropriate practices simply because standards have been established. Quality management activities • Quality assurance: Establish organisational procedures and standards for quality. • Quality planning: Select applicable procedures and standards for a particular project and modify these as required. • Quality control: Ensure that procedures and standards are followed by the software development team. • Quality management: should be separate from project management to ensure independence. Typical quality assurance activities include: 91
• 92. Software Engineering 2008
• Establishing a quality management system
• Defining processes
• Training personnel
• Establishing mechanisms for quality assurance
• Providing guidance to project managers
• Effecting continuous improvement
TQM is a management approach for an organization, centered on quality, based on the participation of all its members, and aiming at long-term success through customer satisfaction and benefits to all members of the organization and to society.
Quality Assurance Costs
Cost of quality can be divided into the following groups:
• Preventive cost
• Detection cost
• Internal failure cost
• External failure cost
The costs associated with quality occur at different phases during development. These are identified as:
• Start-up costs: survey, policy, standards, metrics identification, tools
• Project-related costs: quality plan, quality assurance support, supervision, quality control costs (e.g. reviews, testing)
• Continuing costs: management, staff training, standards maintenance, research, metrics maintenance
Quality assurance benefits can be grouped into two categories:
• Quantitative: reduced costs, greater efficiency, better performance, less unplanned work, fewer disputes
• Qualitative: improved visibility and predictability, reduced risk, problems show up earlier, better quality, improved customer confidence, portable and reusable products, better control over contracted products
QUALITY PLANS
As per IEEE, a quality plan might have entries for:
• Purpose: scope of plan;
• List of references to other documents;
• Management, including organization, tasks and responsibilities;
• Documentation to be produced;
• Standard practices and conventions;
• Reviews and audits;
• Testing and configuration management;
• Problem reporting and corrective action;
• Tools, techniques and methodologies;
• Code, media and supplier control;
• Records collection, maintenance and retention;
• Training;
• Risk management
QUALITY CONTROL
• This involves checking the software development process to ensure that procedures and standards are being followed.
• There are two approaches to quality control:
– Quality reviews;
– Automated software assessment and software measurement, i.e. testing.
Quality reviews
• A group of people carefully examine part or all of a software system and its associated documentation.
• Code, designs, specifications, test plans, standards, etc. can all be reviewed.
• Software or documents may be 'signed off' at a review, which signifies that progress to the next development stage has been approved by management.
Review functions
• Quality function - they are part of the general quality management process.
• Project management function - they provide information for project managers.
• Training and communication function - product knowledge is passed between development team members.
Review results
• Comments made during the review should be classified:
– No action. No change to the software or documentation is required;
– Refer for repair. Designer or programmer should correct an identified fault;
– Reconsider overall design. The problem identified in the review impacts other parts of the design. Some overall judgement must be made about the most cost-effective way of solving the problem;
• Requirements and specification errors may have to be referred to the client.
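The three-way classification of review comments lends itself to a simple data structure. The sketch below (all names hypothetical, not from any standard tool) represents the categories described above and routes each classified comment to its follow-up step:

```python
# Illustrative sketch (hypothetical names): the three review-comment
# classifications described above, plus a routing function that returns
# the follow-up step for each one.
from enum import Enum

class ReviewOutcome(Enum):
    NO_ACTION = "no action"
    REFER_FOR_REPAIR = "refer for repair"
    RECONSIDER_DESIGN = "reconsider overall design"

def route(comment: str, outcome: ReviewOutcome) -> str:
    """Return the follow-up step for a classified review comment."""
    if outcome is ReviewOutcome.NO_ACTION:
        return f"close without change: {comment}"
    if outcome is ReviewOutcome.REFER_FOR_REPAIR:
        return f"assign to designer/programmer: {comment}"
    # design-wide problems need a cost-effectiveness judgement first
    return f"escalate for overall design review: {comment}"

print(route("off-by-one in pagination", ReviewOutcome.REFER_FOR_REPAIR))
# assign to designer/programmer: off-by-one in pagination
```

In a real review tool the routing would also record the reviewer, date, and affected artifact, but the branching logic is the same.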
Quality standards
A quality management system (quality system) is the principal methodology used by organizations to ensure that the products they develop have the desired quality. It consists of the following:
• Managerial structure and individual responsibilities. The quality system is the responsibility of the organization as a whole and should have the support of top management.
• Quality system activities, which include:
  • Auditing of projects
  • Review of the quality system
  • Development of standards
  • Production of reports for top management
Quality assurance and standards
Standards are the key to effective quality management. They may be international, national, organizational or project standards. Product standards define characteristics that all components should exhibit, e.g. a common programming style. Process standards define how the software process should be enacted.
Importance of standards
• Encapsulation of best practice - avoids repetition of past mistakes.
• They are a framework for quality assurance processes - they involve checking compliance to standards.
• They provide continuity - new staff can understand the organisation by understanding the standards that are used.
ISO 9000
International standards such as the ISO 9000 series provide guidance on how to organize and maintain a quality system. They specify a set of guidelines for repeatable and high-quality product development. ISO 9001, together with the ISO 9000-3 guidelines that interpret it for software, applies to software organizations.
Benefits of ISO 9000 certification
• Confidence of customers
• Well-documented software
• Makes the process focused, efficient and cost-effective
• Weak points of the organization are identified and remedial action implemented
• Sets the basic framework for total quality management
How to get ISO 9000 certification
• Application
• Pre-assessment
• Document review and adequacy audit
• Compliance audit
• Registration
• Continued surveillance
Summary of ISO 9001 Requirements
• Management responsibility
• Quality system
• Contract reviews
• Design control
• Document control
• Purchasing
• Purchaser-supplied product
• Product identification
• Process control
• Inspection and testing
• Inspection, measuring and test equipment
• Inspection and test status
• Control of nonconforming product
• Corrective action
• Handling
• Quality records
• Quality audits
• Training
• Servicing
• Statistical techniques
================================================================
1. Contract Review: ISO 9001 requires that contracts be reviewed to determine whether the requirements are adequately defined, agree with the bid, and can be implemented.
2. Design Control: It requires that procedures to control and verify the design be established. This includes planning design activities, identifying inputs and outputs, verifying the design, and controlling design changes.
3. Document Control: The distribution and modification of documents must be controlled.
4. Purchasing: Purchased products should conform to their specified requirements. This includes assessment of potential sub-contractors and verification of purchased products.
5. Purchaser-Supplied Product: It requires that any purchaser-supplied material be verified and maintained.
6. Product Identification and Traceability: It requires that the product be identified and traceable during all stages of production, delivery, and installation.
7. Process Control: It requires that production processes be defined and planned. This includes carrying out production under controlled conditions, according to documented instructions. Special processes that cannot be fully verified are continuously monitored and controlled.
8. Inspection and Testing: It requires that incoming materials be inspected or certified before use and that in-process inspection and testing be performed. Final inspection and testing are performed prior to release of the finished product. Records of inspection and testing are kept.
9. Inspection, Measuring, and Test Equipment: It requires that equipment used to demonstrate conformance be controlled, calibrated, and maintained. When test hardware or software is used, it is checked before use and rechecked at prescribed intervals.
10. Inspection and Test Status: It requires that the status of inspections and tests be maintained for items as they progress through various processing steps.
11. Control of Nonconforming Product: It requires that nonconforming product be controlled to prevent inadvertent use or installation.
12. Corrective Action: It requires that the causes of nonconforming product be identified. Potential causes of nonconforming product are eliminated, and procedures are changed as a result of corrective action.
13. Handling, Storage, Packaging, and Delivery: It requires that procedures for handling, storage, packaging, and delivery be established and maintained.
14. Quality Records: It requires that quality records be collected, maintained, and dispositioned.
15. Internal Quality Audits: It requires that audits be planned and performed. The results of audits are communicated to management, and any deficiencies found are corrected.
16. Training: It requires that training needs be identified and that training be provided, since selected tasks may require qualified personnel. Records of training are maintained.
17. Servicing: It requires that servicing activities be performed as specified.
18. Statistical Techniques: It states that appropriate, adequate statistical techniques be identified and used to verify the acceptability of process capability and product characteristics.
Shortcomings of ISO 9000 certification
• It requires a software production process to be adhered to, but does not guarantee that the process is of high quality
• No international accreditation agency exists
• Organizations getting ISO 9000 certification often tend to downplay domain expertise
• ISO 9000 does not automatically lead to continuous process improvement
Capability maturity model
The CMM classifies software development organizations into the following five maturity levels.
1. Initial: characterized by ad hoc activities
2. Repeatable: Management practices like tracking costs and schedules are established
3. Defined: The processes for both management and development activities are defined and documented. Though the processes are defined, product qualities are not measured.
4. Managed: Product and process metrics are measured and used for evaluating project performance rather than improving the process
5. Optimizing: Continuous process improvement is achieved both by carefully analyzing the quantitative feedback from process measurements and by applying innovative ideas and technologies
Comparison between ISO 9000 and CMM
ISO 9000:
• Can be quoted in official documents for communication with external parties
• Applies to a range of organizations
• Aims at level 3 of the CMM
• States important standards which all organizations should maintain
• Product based
CMM:
• Is for internal purposes
• Applies specifically to the software industry
• Aims to achieve total quality management
• Provides a list of key process areas (KPAs) on which an organization at any maturity level needs to concentrate
• Process based
SPICE (Software Process Improvement and Capability determination) is the newer process assessment standard that defines desirable practices and processes. Each process is measured on five levels, and the average is taken as the project's level, much as in the CMM.
Process improvement involves three stages:
1. Process measurement: measurements are collected according to the goals of the company involved in process improvement. Process metrics related to the following can be collected:
   1. The time taken for a particular process to be completed
   2. The resources required for a particular process
   3. The number of occurrences of a particular event
   To decide what to measure we follow the GQM (Goal-Question-Metric) approach: first identify the goal, then ask the questions necessary to achieve that goal, and finally form metrics to answer those questions.
2. Process analysis identifies the bottlenecks.
3. Process change involves the following steps:
   1. Improvement identification
   2. Improvement prioritisation
   3. Process change introduction
   4. Process change training
   5. Change tuning
Six Sigma
Six sigma at many organizations simply means striving for near perfection.
To achieve six sigma, a process must not produce more than 3.4 defects per million opportunities. The fundamental objective of the six sigma methodology is the implementation of a measurement-based strategy that focuses on process improvement and variation reduction through the application of six sigma improvement projects.
What is Sigma?
Sigma Level | Defects Per Million Opportunities | Rate of Improvement
1σ | 690,000 | —
2σ | 308,000 | 2 times
3σ | 66,800 | 5 times
4σ | 6,210 | 11 times
5σ | 230 | 27 times
6σ | 3.4 | 68 times
It has three core steps:
• Define customer requirements, deliverables, and project goals via well-defined methods of customer communication.
• Measure the existing process and its output to determine current quality performance (collect defect metrics).
• Analyze defect metrics and determine the vital few causes.
It also suggests additional steps for improvement:
• Improve the process by eliminating the root causes of defects.
• Control the process to ensure that future work does not reintroduce the causes of defects.
Six Sigma Project Methodology
Applying Six Sigma to software development makes product development and other projects transparent to both management and customers. Transparency requires an important cultural change. As a result, once transparency is achieved, completing accurate project estimations while meeting both deadlines and customer requirements becomes a lot easier. No single software metric serves every purpose; the appropriate metrics depend on what kind of software we are developing or installing. Finding good metrics for software development or deployment is a major task in itself.
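As an illustration of the sigma table above, the short sketch below (hypothetical helper names, not part of any standard toolkit) computes defects per million opportunities (DPMO) and maps the result onto a sigma level:

```python
# Illustrative sketch: computing DPMO and looking up the corresponding
# sigma level from the table above. Function names are hypothetical.

SIGMA_TABLE = [  # (sigma level, maximum DPMO for that level)
    (6, 3.4),
    (5, 230),
    (4, 6_210),
    (3, 66_800),
    (2, 308_000),
    (1, 690_000),
]

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Best (highest) sigma band whose DPMO ceiling covers the observed value."""
    for level, ceiling in SIGMA_TABLE:
        if dpmo_value <= ceiling:
            return level
    return 0  # worse than 1 sigma

# e.g. 25 defects found across 1,000 modules with 50 defect opportunities each
rate = dpmo(25, 1000, 50)        # 500.0 DPMO
print(rate, sigma_level(rate))   # 500.0 4
```

Note that the denominator counts opportunities for a defect, not just units; the same defect count against fewer opportunities yields a worse sigma level.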
Software development must adapt to ever-changing human ideas, beliefs, fears, and the corresponding environment. We cannot restrict ourselves to software engineering; we need to focus on project management to improve both software development and deployment. And we need measurements in order to know that we implemented the targets we had planned for.
Principle 1: Measure customer-related metrics only
• Use combinatory metrics to cover all topics
Principle 2: Adjust to moving targets
• Your goals may need to change; accept change and manage it accordingly
Principle 3: Enforce measurement
• Do not enforce meeting targets
What Makes Six Sigma Different?
• Versatile
• Breakthrough improvements
• Financial results focus
• Process focus
• Structured & disciplined problem solving methodology using scientific tools and techniques
• Customer centered
• Involvement of leadership is mandatory
• Training is mandatory
• Action learning (25% classroom, 75% application)
• Creating a dedicated organization for problem solving (85/50 Rule)
Benefits of Six Sigma
• Generates sustained success
• Sets a performance goal for everyone
• Enhances value for customers
• Accelerates the rate of improvement
• Promotes learning across boundaries
• Executes strategic change
Evaluating Resources
PCMM (People Capability Maturity Model) helps to enhance the workforce by improving the knowledge and skills of employees. The working environment of the organization should also be positive, e.g. enough space to work in and a peaceful environment for work.
[Figure: the five P-CMM maturity levels and their key process areas —
Initial: ad hoc workforce practices.
Repeatable (instill basic discipline into workforce activities): staffing, communication, work environment, performance management, training, compensation.
Defined (identify primary competencies and align workforce activities with them): knowledge and skills analysis, workforce planning, competency development, career development, competency-based practices, participatory culture.
Managed (quantitatively manage organisational growth in workforce capabilities and establish competency-based teams): mentoring, team building, team-based practices, organisational competency management, organisational performance alignment.
Optimizing (continuously improve methods for developing personal and organisational competence): continuous workforce innovation, coaching, personal competency development.]
Evaluating the project
The project is evaluated on the amount of profit expected from the development. The Return on Investment (ROI) factor indicates how much profit is expected in comparison to the cost of the project. The return is adjusted for inflation to get the net present value of profits accruing in the future, giving a true picture of the benefits.
Software Audit
A software audit is a systematic and independent examination to determine the following:
• Are quality-related arrangements implemented as planned?
• Are results consistent with expectations?
• An adequacy audit determines the extent to which the documented QMS conforms to applicable standards
• A compliance audit is aimed at determining the extent to which the implemented system conforms to the documented system
• A surveillance audit ensures continued adherence to standards and continuous improvement
Types of Audit
• First party audits:
  • Performed in-house
  • QMS requirement
  • Ensures effective implementation
  • Evaluate any particular quality problem
• Second party audits
  • Performed by clients
  • Vendor selection
• Third party audits
  • Certification
  • Surveys
Documentation standards
• Particularly important - documents are the tangible manifestation of the software.
• Documentation process standards are concerned with how documents should be developed, validated and maintained.
• Document standards are concerned with document contents, structure, and appearance.
• Document interchange standards are concerned with the compatibility of electronic documents.
Document standards
• Document identification standards - how documents are uniquely identified.
• Document structure standards - standard structure for project documents.
• Document presentation standards - define fonts and styles, use of logos, etc.
• Document update standards - define how changes from previous versions are reflected in a document.
Document interchange standards
• Interchange standards allow electronic documents to be exchanged, mailed, etc.
• Documents are produced using different systems and on different computers. Even when standard tools are used, standards are needed to define conventions for their use, e.g. use of style sheets and macros.
• Need for archiving. The lifetime of word processing systems may be much less than the lifetime of the software being documented. An archiving standard may be defined to ensure that the document can be accessed in future.
Configuration management
The items that comprise all information produced as part of the software process are collectively called a software configuration.
Baseline
Once a document has been reviewed, changed and approved, it becomes a baseline for the project. Any further changes can be incorporated only by following the prescribed procedure. Baselines help to control change.
Software Configuration Item
An SCI is information that is created as part of the software engineering process (components, documents). SCIs are organized to form configuration objects that may be catalogued in the project database with a single name. A configuration object has a name and attributes, and is connected to other objects by relationships.
Software Configuration Management Repository (Features)
• Versioning: the repository should be able to save all versions.
• Dependency tracking and change management: relationships between configuration objects are stored properly.
• Requirements tracing: backward tracing of all requirements and forward tracing of deliverables.
• Configuration management: it keeps track of a series of configurations.
• Audit trails: establishes additional information about changes.
Software configuration management comprises the documented procedures for applying technical and administrative direction and surveillance to:
• Identification of objects in the configuration: Software configuration items should be named and organized. These are organized as objects (such as a data model or component); a collection of related objects is known as an aggregate object. Each object has a set of distinct features that uniquely identify it, and these can evolve. The objects are divided into three main categories: controlled, pre-controlled and uncontrolled. The configuration management plan includes controlled objects. Controllable (controlled + pre-controlled) objects include the SRS, design document, source code, test cases, etc.
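A configuration object with a name, attributes, and relationships can be sketched as a small data structure. The example below is purely illustrative (field names and the `status` values follow the three categories described above; everything else is hypothetical):

```python
# Hypothetical sketch of a configuration object: a name used to catalogue
# it, a set of attributes, relationships to other objects, and one of the
# three control categories described in the text.
from dataclasses import dataclass, field

@dataclass
class ConfigObject:
    name: str                                    # unique catalogue name
    attributes: dict = field(default_factory=dict)
    related: list = field(default_factory=list)  # linked ConfigObjects
    status: str = "uncontrolled"                 # controlled / pre-controlled / uncontrolled

    def relate(self, other):
        """Record a relationship between two configuration objects."""
        self.related.append(other)

srs = ConfigObject("SRS-v1", {"type": "document"}, status="controlled")
design = ConfigObject("DesignDoc-v1", {"type": "document"}, status="controlled")
design.relate(srs)   # the design document is derived from the SRS
print([o.name for o in design.related])   # ['SRS-v1']
```

An aggregate object would simply be a `ConfigObject` whose relationships point to its member items.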
• Version control: It has the following capabilities:
  • A project database that stores relevant configuration objects
  • A version management capability that stores all previous versions
  • A make facility that helps in building a specific version
  • Issue tracking for outstanding issues related to a version
• Change control (process): The configuration control system prevents unauthorized changes to any controlled object. An engineer needing to change a module first obtains a private copy of the module through a reserve operation.
  • A change request is submitted and evaluated on cost, functionality and impact.
  • A change report is submitted, which is reviewed by the change control authority (CCA).
  • An engineering change order (ECO) is issued for an approved change, highlighting constraints and criteria.
  • The file kept in the directory is updated by a version control system, which performs access control and synchronization control.
• Configuration audit: It ensures that the correct change procedures are followed and documented properly.
• Status reporting: It answers questions such as what happened, who did it, when did it happen, and what else will be affected.
Configuration management helps in ensuring that the project’s product is correct and complete.
Configuration Management Team
The team consists of the project manager, software developers and the configuration management manager. The goals of the configuration manager are to ensure that procedures and policies for creating, changing, and testing code are followed, and to make information about the project accessible.
Key functions of the configuration management activity are:
• Record all versions
• Retrieve the desired version on demand
• Control changes
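The version control and change control capabilities described above — a project database of versions, retrieval of any version on demand, and a reserve operation that locks a controlled object before changes — can be sketched minimally as follows (all class and method names are hypothetical):

```python
# Minimal, illustrative version-control sketch: stores every version,
# retrieves any version on demand, and uses a reserve operation so that
# only the engineer holding the private copy may commit a change.
class Repository:
    def __init__(self):
        self.versions = {}   # object name -> list of successive contents
        self.reserved = {}   # object name -> engineer holding the copy

    def commit(self, name, content, engineer=None):
        owner = self.reserved.get(name)
        if owner is not None and owner != engineer:
            raise PermissionError(f"{name} is reserved by {owner}")
        self.versions.setdefault(name, []).append(content)
        self.reserved.pop(name, None)   # release the reservation

    def retrieve(self, name, version=-1):
        """Retrieve any stored version on demand (default: latest)."""
        return self.versions[name][version]

    def reserve(self, name, engineer):
        """Obtain a private copy before changing a controlled object."""
        if name in self.reserved:
            raise PermissionError(f"{name} already reserved")
        self.reserved[name] = engineer
        return self.retrieve(name)

repo = Repository()
repo.commit("module.c", "v1 source")
draft = repo.reserve("module.c", "alice")            # reserve operation
repo.commit("module.c", draft + " + fix", engineer="alice")
print(repo.retrieve("module.c"))        # v1 source + fix
print(repo.retrieve("module.c", 0))     # v1 source
```

Real systems layer the change request / CCA review / ECO workflow on top of this reserve-and-commit core; the sketch shows only the synchronization-control part.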
Risk Management
The principal goal of an organization’s risk management process should be to protect the organization and its ability to perform its mission, not just its IT assets. Risk is the net negative impact of the exercise of a vulnerability, considering both the probability and the impact of occurrence. Risk management is the process of identifying risk, assessing risk, and taking steps to reduce risk to an acceptable level.
Categories of Risk
• Project risk: affects project schedules or resources
• Product risk: affects quality or performance
• Business risk: affects deployment of the product by the organization
Examples of Risk
1. Contract risk
  1. Unclear task description
  2. Unrealistic timeframes
  3. Misunderstood requirements
  4. Leaving a requirement unfulfilled can cause a lawsuit
2. User satisfaction risk
  1. A functional property is missing or faulty
  2. Inadequate usability or departure from user standards
  3. Confusing error messages and an inadequate online help function
3. Image risk
  1. An obsolete interface can have harmful effects on the customer’s and vendor’s image
  2. Outdated program architecture
  3. Failure to comply with implicit requirements
4. Complexity risk
  1. Complex calculations involve greater risk
  2. A multitude of rules can lead to a lack of clarity
  3. Parameterization can make things complex
5. Integration and configuration risk
  1. Interaction between multiple components can create interface problems
  2. The impact of disparate hardware resources and operating systems causes problems
6. Input/output risk
  1. Input data might be unreliable
  2. The input and output channels (printer, e-mail server, network connection) may be unavailable or unstable
7. Time and synchronization risk
  1. Some workflows may have to be completed within a period of time
  2. Different areas of the program may have to be activated separately
  3. Access to common data by multiple users must be secure and synchronized
8. Quantity risk
  1. Certain data can accumulate in large quantities
  2. System response time must remain acceptable with a large number of clients
9. Security risk
  1. Certain data might be accessed by unauthorized users
  2. Hostile attacks on the system must be anticipated
10. Project-specific risks are unique to each project
Integrating risk management in the SDLC
Phase 1—Initiation
• Phase characteristics: The need for an IT system is expressed, and the purpose and scope of the IT system is documented.
• Risk management support: Identified risks are used to support the development of the system requirements, including security requirements, and a security concept of operations (strategy).
Phase 2—Development or Acquisition
• Phase characteristics: The IT system is designed, purchased, programmed, developed, or otherwise constructed.
• Risk management support: The risks identified during this phase can be used to support the security analyses of the IT system, which may lead to architecture and design tradeoffs during system development.
Phase 3—Implementation
• Phase characteristics: The system security features should be configured, enabled, tested, and verified.
• Risk management support: The risk management process supports the assessment of the system implementation against its requirements and within its modelled operational environment. Decisions regarding identified risks must be made prior to system operation.
Phase 4—Operation or Maintenance
• Phase characteristics: The system performs its functions. Typically the system is modified on an ongoing basis through the addition of hardware and software and by changes to organizational processes, policies, and procedures.
• Risk management support: Risk management activities are performed for periodic system reauthorization (or reaccreditation), or whenever major changes are made to an IT system in its operational, production environment (e.g., new system interfaces).
Phase 5—Disposal
• Phase characteristics: This phase may involve the disposition of information, hardware, and software. Activities may include moving, archiving, discarding, or destroying information and sanitizing the hardware and software.
• Risk management support: Risk management activities are performed for system components that will be disposed of or replaced, to ensure that the hardware and software are properly disposed of, that residual data is appropriately handled, and that system migration is conducted in a secure and systematic manner.
Risk management
Risk management encompasses three processes:
1. Risk assessment,
2. Risk mitigation, and
3. Evaluation and assessment
The risk assessment methodology encompasses nine primary steps:
• Step 1: System Characterization
• Step 2: Threat Identification
• Step 3: Vulnerability Identification
• Step 4: Control Analysis
• Step 5: Likelihood Determination
• Step 6: Impact Analysis
• Step 7: Risk Determination
• Step 8: Control Recommendations
• Step 9: Results Documentation
Risk mitigation is a systematic methodology used by senior management to reduce mission risk. It can be achieved through any of the following:
• Risk Assumption. To accept the potential risk and continue operating the IT system, or to implement controls to lower the risk to an acceptable level
• Risk Avoidance. To avoid the risk by eliminating the risk cause and/or consequence (e.g., forgo certain functions of the system or shut down the system when risks are identified)
• Risk Limitation. To limit the risk by implementing controls that minimize the adverse impact of a threat’s exercising a vulnerability (e.g., use of supporting, preventive, and detective controls)
• Risk Planning. To manage risk by developing a risk mitigation plan that prioritizes, implements, and maintains controls
• Research and Acknowledgment. To lower the risk of loss by acknowledging the vulnerability or flaw and researching controls to correct the vulnerability
• Risk Transference. To transfer the risk by using other options to compensate for the loss, such as purchasing insurance
Risk evaluation and monitoring
  • 108. Software Engineering 2008 • Risk exposure=risk likelihood * risk impact • Number of risks • Risk reduction leverage= Risk exposure before reduction-Risk exposure after reduction Cost of reduction APPROACH FOR CONTROL IMPLEMENTATION • Step 1-Prioritize Actions • Step 2-Evaluate Recommended Control Options • Step 3-Conduct Cost-Benefit Analysis • Step 4-Select Control • Step 5-Assign Responsibility • Step 6-Develop a Safeguard Implementation Plan • Step 7-Implement Selected Control(s) Preventive Technical Controls • Authentication • Authorization • Access Control Enforcement • Nonrepudiation: Nonrepudiation spans both prevention and detection. • Protected Communications • Transaction Privacy Detection and Recovery Technical Controls • Audit • Intrusion Detection and Containment • Proof of Wholeness • Restore Secure State • Virus Detection and Eradication COST-BENEFIT ANALYSIS A cost-benefit analysis for proposed new controls or enhanced controls to reduce risk encompasses the following: • Determining the impact of implementing the new or enhanced controls • Determining the impact of not implementing the new or enhanced controls • Estimating the costs of the implementation. These may include, but are not limited to, the following: – Hardware and software purchases – Reduced operational effectiveness if system performance or functionality is reduced for increased security – Cost of implementing additional policies and procedures 108
  – Cost of hiring additional personnel to implement proposed policies, procedures, or services
  – Training costs
  – Maintenance costs
• Assessing the implementation costs and benefits against system and data criticality to determine the importance to the organization of implementing the new controls, given their costs and relative impact.
Computer Aided Software Engineering (CASE)
Software engineering nowadays often follows specific standardized methods, and there are many diagrams and documents involved, so computers can be used to deal with the higher-level aspects of software engineering. CASE is the use of computer-based support in the software development process: a CASE tool aids a software engineer in developing and maintaining software of better quality, more efficiently and in less time. Many CASE tools are available, assisting in phases such as specification, structured analysis, design, etc.
Goals of using CASE tools
• Supply basic functionality and do routine tasks automatically, e.g. support editing of code in the particular programming language, supply refactoring tools
• Enhance productivity, e.g. generate code pieces automatically
• Increase software quality
• Intuitive use
• Integration with other tools, e.g. the code editor works with the code repository
• Produce higher quality at lower cost
A CASE Environment
If the different CASE tools are not integrated, then the data generated by one tool has to be manually fed as input to the other tools, possibly involving format conversions. It is therefore better to work in an integrated environment for development. Examples of CASE tools include project management software, system design tools, code storage, compilers, translation tools, test software, code generation tools (Visual Studio .NET), code analysis (Borland Audits), data modelling tools (UML editors), refactoring tools for cleaning up code, bug trackers, and version control (CVS, etc.).
Benefits of CASE
1. Cost saving
2. Improvement in quality
3. Reduced software maintenance effort
4. Reduces drudgery in a software engineer’s work
5. Helps in handling large software
6. Improves style of working by applying a structured and orderly approach
7. Helps standardize notations and diagrams
8. Helps communication between development team members
9. Automatically checks the quality of the analysis and design models
10. Reduction of time and effort
11. Enhances reuse of models or model components
Disadvantages
1. Limitations in flexibility of documentation
2. May lead to restriction to the tool’s capabilities
3. Major danger: completeness and syntactic correctness do NOT mean compliance with requirements
4. Costs associated with the use of the tool: purchase + training
Levels of CASE tools
• Production-process support technology
  • Supports process activities such as specification, design, implementation and testing
• Process management technology
  • Supports process modeling and process management
• Meta-CASE technology
  • Used to create production-process and process-management support tools
Architecture of a CASE environment
[Figure not reproduced in this extract.]
Building blocks of CASE
[Figure not reproduced in this extract.]