Ch. 4: Process-Driven Improvement
Slide Notes
  • The SEI has suggested that there are five levels of process maturity, ranging from ad hoc (the least predictable and controllable) through repeatable, defined and managed to optimizing (the most predictable and controllable). These levels are packaged into the Capability Maturity Model (CMM).
  • Pfleeger, Software Engineering: Theory and Practice, 1998, Prentice-Hall. The SEI Capability Maturity Model is a top-down improvement model, which differs from the "bottom-up" model presented so far; we will return to this later. The two may be seen as complementary: the CMM at the lower levels of maturity, to implement the basic capabilities that all organizations should have, and then a bottom-up SPI initiative to keep improving, in a continual mode, at levels 4 and 5.

    The next level is repeatable, where basic project management processes track cost, schedule and functionality. There is some discipline among team members, so that successes on earlier projects can be repeated on similar, new ones. Here, the key process areas are primarily management activities that help to understand and control the actions and outcomes of the process.

    Improving the repeatable process leads to a defined process, where management and engineering activities are documented, standardized and integrated; the result is a standard process for everyone in the organization. Although some projects may differ from others, the standard process is tailored to these special needs, and the adaptation must be approved by management. At this level of maturity, the key process areas have an organizational focus.

    A managed process directs its efforts at product quality. By introducing detailed measures of process and product quality, the organization focuses on using quantitative information to make problems visible and to assess the effect of possible solutions. Thus, the key process areas address quantitative software management as well as software quality management.

    The most desirable level of capability maturity is optimizing, where quantitative feedback is incorporated into the process to produce continuous process improvement. In particular, new tools and techniques are tested and monitored to see how they affect the process and products. Key process areas include defect prevention, technology change management, and process change management.
  • Successful: those that led to sustained improvement actions. See J. Herbsleb and D. R. Goldenson, "A Systematic Survey of the Results of CMM-Based Software Process Improvement," in Proceedings of the 18th International Conference on Software Engineering, pages 323-330, 1996; and J. Herbsleb, A. Carleton, J. Rozum, J. Siegel, and D. Zubrow, "Benefits of CMM-Based Software Process Improvement: Initial Results," Technical Report CMU-SEI-94-TR-13, Software Engineering Institute, 1994.
  • Each key process area identifies a cluster of related activities that, when performed together, achieve a set of goals considered important for enhancing process capability. The key process areas are the requirements for achieving a maturity level. The Capability Maturity Model has another level of granularity not shown in the table: Each process area comprises a set of key practices whose presence indicates that the developer has implemented and institutionalized the process area. The key practices are supposed to provide evidence that the process area is effective, repeatable and long-lasting. An organization is said to satisfy a key process area only when the process area is both implemented and institutionalized. Implementation is determined by the answers to the activities performed questions; the other practices address institutionalization.
  • J. Herbsleb and D. R. Goldenson, "A Systematic Survey of the Results of CMM-Based Software Process Improvement," in Proceedings of the 18th International Conference on Software Engineering, pages 323-330, 1996. J. Herbsleb, A. Carleton, J. Rozum, J. Siegel, and D. Zubrow, "Benefits of CMM-Based Software Process Improvement: Initial Results," Technical Report CMU-SEI-94-TR-13, Software Engineering Institute, 1994.

    Not all of the post-assessment activity was equally effective. However, well over half of the respondents stated that their organizations had experienced at least moderate success in addressing the findings and recommendations raised by their assessments; almost one-third said there had been substantial or marked success throughout their organizations. Only 14 percent said they had had little if any appreciable success by the time of the survey.

    Contrary to some critics, there is very little evidence that the assessments and subsequent process improvement efforts had a negative effect. Very few respondents (4 percent) said that their assessments had been counter-productive. Over 80 percent said that software processes had not become more bureaucratic and that technical creativity had not been stifled. In fact, there is evidence that more mature organizations in the commercial and government sectors actually have fewer paperwork requirements than do less mature organizations.
  • Y axis: percentage of respondents answering "good" or "excellent". X axis: levels 1, 2, 3 of the CMM (not enough organizations at levels 4 and 5).

    Survey respondents from higher maturity organizations are substantially more likely than those from level 1 organizations to report that their organizations are characterized by high product quality and staff productivity. They claim better ability to meet schedule commitments and higher levels of staff morale and job satisfaction. In addition, they tend to report doing better with respect to customer satisfaction and success in meeting budget commitments. Respondents from higher maturity organizations are more likely than those who remain at the initial level to characterize performance in their organizations as "good" or "excellent" as opposed to just "fair" or "poor". Five of the six correlations are statistically significant (at the 0.05 level); the sixth, ability to meet budget commitments, approaches statistical significance.

    For example, 80 percent of those from level 3 organizations said that their ability to meet schedule is "good" or "excellent"; fewer than half as many (39 percent) at the initial level make a comparable claim. Notice also the pattern of responses about product quality. Not surprisingly, people from all maturity levels tend to report that their organizations do a reasonably good job of providing their customers with high quality products. Even so, almost one-fourth of the level 1 respondents admit that their products are of only "fair" or "poor" quality [1]. Yet all of the respondents whose organizations have achieved level 3 report having products of good or excellent quality. In fact (not shown in the figure), close to two-thirds at level 3 say that their products are of excellent quality; fewer than ten percent of the level 1 or level 2 respondents make a similar claim.

    Staff morale also appears to benefit quite substantially from higher process maturity. Sixty percent of those whose organizations are at the defined level report that their staff morale is good or excellent. Fewer than a quarter (23 percent) of those at the initial level report that morale is good (only one says it is excellent); another 23 percent (not shown in the figure) say that their morale level is poor.

    Not unlike the situation with respect to product quality, people from all maturity levels tend to report that their organizations have achieved reasonably high levels of customer satisfaction. However, there is an unexplained dip in reported customer satisfaction at the repeatable level. Although the overall pattern of responses is statistically significant, the difference between the initial and repeatable organizations is not.
  • At that level, you can only perform rough, project-level measurement of effort, size, and perhaps field fault reports. The initial level of process maturity is characterized by an ad hoc approach to the software development process: inputs to the process are ill-defined; while outputs are expected, the transition from inputs to outputs is undefined and uncontrolled. Similar projects may vary widely in their productivity and quality characteristics because of lack of adequate structure and control.
  • The second process level, called repeatable, identifies the inputs and outputs of the process, the constraints (such as budget and schedule), and the resources used to produce the final product. The process is repeatable in the same sense that a subroutine is repeatable: proper inputs produce proper outputs, but there is no visibility into how the outputs are produced. The requirements and the final product are in a predictable form. Planned effort and schedule data are available and meaningful: you can track planned effort, schedule, staffing distribution, and so on (meaningful because a planning procedure exists), as well as actual project data.
  • The defined level of maturity (level 3) differs from level 2 in that a defined process provides visibility into the “construct the system” box. At level 3, intermediate activities are defined, and their inputs and outputs are known and understood. This additional structure means that the input to and output from the intermediate activities can be examined, measured and assessed, since these intermediate products are well‑defined and visible. The figure shows a simple example of a defined process with three typical activities. However, different processes may be partitioned into more distinct functions or activities. Such measurements use the visibility in the process to provide more control over development: if requirements quality is unsatisfactory, additional work can be expended on the requirements before the design activity begins. This early correction of problems helps not only to control quality but also to improve productivity and reduce risk.
  • A managed process (level 4) adds management oversight to a defined process. By level 4, the feedback determines how resources are deployed; the basic activities themselves do not change. At this level, you can evaluate the effectiveness of process activities: how effective are reviews? configuration management? quality assurance? defect‑driven testing? You can use the collected measures to stabilize the process, so that productivity and quality will match expectations.
  • Right now we ask these kinds of questions with respect to JEWEL. Learn these questions and apply them to your next project (or ask them when you interview for a position).
  • Intra-project feedback: yes, through iteration. Inter-project feedback: no, only in the form of war stories.

Transcript

  • 1. Process Improvement Driven by Maturity Frameworks. Book: N. Fenton and S. Pfleeger, Software Metrics: A Rigorous and Practical Approach, PWS, 1996. Other source: B. Kitchenham, Measurement for Process Improvement, 1995 (out of print).
  • 2. Process Improvement Frameworks
    • Some development processes are more mature than others, as noted by the US Software Engineering Institute (SEI). While some organizations have clearly defined processes, others are more variable, changing significantly with the people who work on the projects.
    • We look at the most popular process assessment framework, the CMM. Because it is mandated by some corporations and governments, it is important to understand it and to be aware of measurement's role within it.
    • There are other, less popular frameworks, e.g., Bootstrap and SPICE (an ISO standard).
    • Such frameworks are complementary to measurement-driven improvement: They guide earlier improvement steps and provide a context for measurement.
  • 3. Capability Maturity Model
    • The Capability Maturity Model (CMM) was developed by the U.S. Software Engineering Institute (SEI) to assist the Department of Defense in assessing the quality of its contractors.
    • The CMM describes principles and practices that are assumed to lead to better software products, and the model organizes them into five levels, providing a path to more process visibility and control, and to the improved products that result.
    • The model is used in two ways: by potential customers, to identify the strengths and weaknesses of their suppliers, and by software developers themselves, to assess their capabilities and set a path toward improvement.
  • 4. Some Definitions
    • Software Process: a set of interrelated activities, methods, practices, and transformations that people use to develop and maintain software.
    • Software Process Capability: the range of expected results that can be achieved by following a software process.
    • Software Process Maturity: The extent to which a specific process is explicitly defined, managed, measured, controlled, and effective. It implies a potential growth in capability.
  • 5. Capability Maturity Model
    • Each of the five capability levels is associated with a set of key process areas on which an organization should focus as part of its improvement activities.
    • The first level of the maturity model, initial, describes a software development process that is ad hoc or even chaotic. Few processes are defined, and the success of development depends on individual efforts, not on team accomplishments.
  • 6. Capability Maturity Model (CMM). The five levels, with the focus that moves an organization up each step:
    • Level 1 (Initial) to Level 2 (Repeatable): project management, establishing process discipline
    • Level 2 (Repeatable) to Level 3 (Defined): engineering management, establishing process definition
    • Level 3 (Defined) to Level 4 (Managed): quantitative management, establishing process control
    • Level 4 (Managed) to Level 5 (Optimizing): change management, establishing continuous process improvement
  • 7. Maturity and Capability
    • There are a number of assumptions in the CMM
      • An organization’s software process maturity helps to predict a project’s ability to meet its objectives
      • As an organization moves up the maturity ladder, its effectiveness increases and its variability across projects decreases (more predictability)
    • Some published case studies indicate there are significant improvements in both quality and productivity
    • The return on investment for successful process improvement efforts is typically reported in the 4:1 to 8:1 range.
    • Reported increases in productivity range from 9% to 67%, and decreases in cycle time from 15% to 23%.
  • 8. CMM Key Process Areas
  • 9. Key Process Area Examples
    • Requirements Management: Establish a common understanding between the customer and the software project of the customer’s requirements that will be addressed by the software project. This agreement is the basis for planning and managing the software project.
    • Software Project Planning: Establish reasonable plans for performing the software engineering and for managing the software project. These plans are the necessary foundation for managing software projects.
    • Software Quality Assurance: Provide management with appropriate visibility into the process being used by the software project and into the product being built.
  • 10. Empirical Studies of the CMM
    • Is there any evidence that higher maturity leads to greater project and organizational effectiveness?
    • Herbsleb et al. tracked the experience of a large number of organizations: a survey of 138 people from 56 organizations in the US and Canada.
    • The CMM assessment took place approximately one to three years before the survey
    • The organizations varied in size and sector within the software industry, and in the success of their process improvement efforts.
    • People with different roles were surveyed so as not to introduce bias.
  • 11. Results. [Chart: reported degree of success in addressing assessment findings, on a scale from "little if any" through "limited", "moderate" and "substantial" to "marked throughout the organization".]
  • 12. [Chart: percentage of respondents answering "good" or "excellent", by CMM maturity level; see the survey notes above.]
  • 13. Maturity and Measurement
    • The Capability Maturity Model emphasizes quantitative control of the process, and the higher levels direct the organization to use measurement and quantitative analysis.
    • A measurement plan using the CMM framework can derive many of its goals directly from the key process areas and practices.
    • At each maturity level, measurement and visibility are closely related: a developer can measure only what is visible in the process, and measurement helps to enable and increase visibility.
    • Thus, a five-level maturity scale such as the one employed by the SEI is a convenient context for determining what to measure first and how to plan an increasingly comprehensive measurement program.
  • 14. Level 1: Ad Hoc
    • For this level of process maturity, it is difficult even to write down or depict the overall process; the process is so reactive and ill‑defined that visibility is nil and comprehensive measurement difficult.
    • You may have goals relating to improved quality and productivity, but you do not know what current levels of quality and productivity are.
    • For this level of process maturity, baseline measurements are needed to provide a starting point for measuring improvement as maturity increases.
    • If your organization is at level 1, you should concentrate on imposing more structure and control on the process, in part to enable more meaningful measurement.
  • 15. Level 2: Repeatable. We can associate measurements with each arrow in the process diagram. Thus, for a repeatable process, measures of the input might include the size and volatility of the requirements. The output may be measured in terms of system size (functional or physical), the resources as overall staff effort, and the constraints as cost and schedule in dollars and days, respectively.
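To make the level 2 measures above concrete, here is a minimal sketch in Python of a project record holding the input, output, resource and constraint measures named on this slide; the record type, field names and sample values are illustrative assumptions, not part of the CMM.

```python
# A sketch of the project-level measures available in a repeatable (level 2)
# process: requirements in, system out, effort, cost and schedule around them.
from dataclasses import dataclass

@dataclass
class RepeatableProjectRecord:
    project: str
    requirements_count: int    # input size: number of baselined requirements
    requirements_changed: int  # input volatility: requirements changed after baseline
    system_size_loc: int       # output size (physical, in lines of code)
    staff_effort_pm: float     # resources: total staff effort in person-months
    budget_usd: float          # constraint: cost in dollars
    schedule_days: int         # constraint: schedule in days

    @property
    def requirements_volatility(self) -> float:
        """Fraction of baselined requirements that later changed."""
        return self.requirements_changed / self.requirements_count

record = RepeatableProjectRecord("billing-v2", 240, 36, 48_000, 30.5, 1_200_000.0, 270)
print(f"requirements volatility: {record.requirements_volatility:.0%}")  # 15%
```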
  • 16. Level 3: Defined. Defects discovered in each type of product can be tracked, and you can compare the defect density of each product with planned or expected values. In particular, early product measures can be useful indicators of later product measures. For example, the quality of the requirements or design can be measured and used to predict the quality of the code.
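As a sketch of the level 3 comparison described above, the snippet below tracks defects per intermediate product and flags products whose defect density exceeds a planned value; the product names, sizes, units and thresholds are invented for illustration.

```python
# Defect density per intermediate product versus planned/expected values.
# Units differ per product: defects per requirement, per design page, per KLOC.
defects_found = {"requirements": 18, "design": 24, "code": 90}
product_size = {"requirements": 240, "design": 90, "code": 48.0}
expected_density = {"requirements": 0.05, "design": 0.30, "code": 2.0}

for product, found in defects_found.items():
    density = found / product_size[product]
    status = "OK" if density <= expected_density[product] else "INVESTIGATE"
    print(f"{product:12s} density={density:.2f} "
          f"expected<={expected_density[product]:.2f}  {status}")
```

In this made-up data the requirements density is the one out of line, which is exactly the early warning the slide describes: extra requirements work can be scheduled before design begins.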
  • 17. Level 4: Managed. Management oversight relies on a metrics database that can provide information about such characteristics as distribution of defects, productivity and effectiveness of tasks, allocation of resources, and the likelihood that planned and actual values will match.
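Here is a minimal sketch of the kind of metrics database described above, using SQLite; the schema, sample rows and the deviation query are assumptions for illustration, not a prescribed level 4 design.

```python
# A tiny metrics database: per-task planned vs. actual effort and defect counts,
# queried for how far actual values tend to drift from planned ones.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE task_metrics (
        project TEXT, task TEXT, phase TEXT,
        planned_effort_pm REAL, actual_effort_pm REAL,
        defects_found INTEGER
    )""")
con.executemany(
    "INSERT INTO task_metrics VALUES (?, ?, ?, ?, ?, ?)",
    [("billing-v2", "parser", "design", 2.0, 2.4, 5),
     ("billing-v2", "parser", "code",   3.0, 3.1, 14),
     ("billing-v2", "ui",     "code",   4.0, 6.5, 22)])

# Mean relative effort deviation per phase: a simple indicator of the
# likelihood that planned and actual values will match.
for phase, avg_dev in con.execute("""
        SELECT phase,
               AVG((actual_effort_pm - planned_effort_pm) / planned_effort_pm)
        FROM task_metrics GROUP BY phase"""):
    print(f"{phase}: mean effort deviation {avg_dev:+.0%}")
```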
  • 18. Level 5: Optimizing
    • Measures from activities are used to improve the process, possibly by removing and adding process activities and changing the process structure dynamically in response to measurement feedback.
    • Results from one or more ongoing or completed projects may also lead to a refined, different development process for future projects.
    • Measurements act as sensors and monitors, and the process is not only under your control but you can change it significantly in response to warning signs.
    [Figure: an optimizing process (level 5)]
  • 19. CMM and Measurement (Fenton and Pfleeger, 1996)
  • 20. What does Process Maturity Measure?
    • The real indicator of process maturity is the level of predictability of project performance (quality, cost, schedule).
      • Level 1: Random, unpredictable performance
      • Level 2: Repeatable performance from project to project
      • Level 3: Better performance on each successive project
      • Level 4: Project performance improves on each subsequent project, either
        • substantially (an order of magnitude) in one dimension of project performance, or
        • significantly in each dimension of project performance
      • Level 5: Substantial improvements across all dimensions of project performance.
  • 21. Determining the Maturity of a Project
    • Level 1 questions:
      • Has a process model been adopted for the project?
    • Level 2 questions:
      • Software size: How big is the system?
      • Personnel effort: How many person-months have been invested?
      • Technical expertise of the personnel:
        • What is their application domain experience?
        • What is their design experience?
        • Do they use tools?
        • Do they have experience with a design method?
      • What is the employee turnover?
  • 22. Maturity Level 3 Questions
    • What are the software development activities?
    • Requirements complexity:
      • How many requirements are in the requirements specification?
    • Design complexity:
      • Does the project use a software architecture? How many subsystems are defined? Are the subsystems tightly coupled?
    • Code complexity: How many classes are identified?
    • Test complexity:
      • How many unit tests and subsystem tests need to be done?
    • Documentation complexity: Number of documents & pages
    • Quality of testing:
      • Can defects be discovered during analysis, design, implementation? How is it determined that testing is complete?
      • What was the failure density? (Failures discovered per unit size; see the sketch after this list.)
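The failure density question above is simple arithmetic: failures discovered divided by system size. A one-line sketch, with an invented unit (KLOC) and invented numbers:

```python
# Failure density = failures discovered per unit size (KLOC chosen here;
# the unit and the numbers are illustrative assumptions).
failures_discovered = 42
system_size_kloc = 48.0
print(f"failure density: {failures_discovered / system_size_kloc:.2f} failures/KLOC")
```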
  • 23. Maturity Level 4 and 5 Questions
    • Level 4 questions:
      • Has intra-project feedback been used?
      • Is inter-project feedback used? Does the project have a post-mortem phase?
      • How much code has been reused?
      • Was the configuration management scheme followed?
      • Were defect identification metrics used?
      • Module completion rate: How many modules were completed on time?
      • How many iterations were done per activity?
    • Level 5 questions:
      • Did we use measures obtained during development to influence our design or implementation activities?
  • 24. SPICE – Software Process Improvement and Capability dEtermination
    • SPICE is intended to harmonize and extend the existing approaches.
    • It is similar to the CMM but is recommended for both process improvement and capability determination.
    • The framework is built on an assessment architecture that defines desirable practices and processes.
    • There are two different types of practices:
      • Base practices are essential activities of a specific process
      • Generic practices institutionalize or implement a process in a general way.
    • The figure below shows how the SPICE architecture ties the two together.
    • The CMM addresses organizations, but SPICE addresses processes.
  • 25. SPICE Architecture for process assessment
  • 26. SPICE architecture for process assessment (Rout 1995)
    • The left-hand side of the diagram represents functional practices involved in software development. This functional view considers five activities:
      • Customer-supplier: processes that affect the customer directly, support development and delivery of the products to the customer, and ensure correct operation and use.
      • Engineering: processes that specify, implement, or maintain the system and its documentation.
      • Project: processes that establish the project, coordinate or manage resources, or provide customer services.
      • Support: processes that enable or support performance of the other processes.
      • Organization: processes that establish business goals and develop assets to achieve those goals.
  • 27.
    • The right-hand side of the figure shows the management view: the generic practices, applicable to all processes, are arranged in six levels of capability:
      • 0. Not performed: failure to perform the process; no identifiable workproducts.
      • 1. Performed informally: not planned and tracked; performance depends on individual knowledge; identifiable workproducts exist.
      • 2. Planned and tracked: verified according to specified procedures; workproducts conform to specified standards and requirements.
      • 3. Well-defined: the process uses approved, tailored versions of standard, documented processes.
      • 4. Quantitatively controlled: detailed performance measures, prediction capability, objective management; workproducts evaluated quantitatively.
      • 5. Continuously improving: quantitative targets for effectiveness and efficiency based on business goals; quantitative feedback from defined processes, plus trying new ideas.
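As a toy illustration of how this six-level scale might be applied to a single process, the sketch below rates a process at the highest level whose generic practices, and those of all levels below it, are satisfied. The boolean practice flags are invented; a real SPICE assessment rates each generic practice on a much finer scale.

```python
# A toy capability determination: a process achieves a level only if it
# satisfies that level's generic practices and those of every level below.
LEVEL_NAMES = {
    0: "not performed", 1: "performed informally", 2: "planned and tracked",
    3: "well-defined", 4: "quantitatively controlled", 5: "continuously improving",
}

def capability_level(satisfied: dict[int, bool]) -> int:
    """Highest level L such that levels 1..L are all satisfied."""
    level = 0
    while satisfied.get(level + 1, False):
        level += 1
    return level

# Example: a process that is planned and tracked but not yet well-defined.
assessment = {1: True, 2: True, 3: False, 4: False, 5: False}
lvl = capability_level(assessment)
print(f"capability level {lvl}: {LEVEL_NAMES[lvl]}")  # capability level 2
```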
  • 28. Steps to Take in Using Metrics
    • Metrics are useful only when implemented in a careful sequence of process-related activities.
    • 1. Assess your current process maturity level
    • 2. Determine what metrics to collect
    • 3. Recommend metrics, tools and techniques
      • whenever possible implement automated support for metrics collection
    • 4. Estimate project cost and schedule and monitor actual cost and schedule during development
    • 5. Construct a project database:
      • Design, develop and populate a project database of metrics data.
      • Use this database for the analysis of past projects and for prediction of future projects.
    • 6. Evaluate cost and schedule estimates for accuracy after the project is complete (a sketch of such a check follows this list).
    • 7. Evaluate productivity and quality
      • Make an overall assessment of project productivity and product quality based on the metrics available.
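For step 6, one common way to evaluate estimates against actuals is the magnitude of relative error (MRE); using it here is our choice rather than something these slides mandate, and the project figures below are invented.

```python
# Magnitude of relative error: |actual - estimate| / actual, computed for the
# cost and schedule estimates of a finished project (made-up figures).
def relative_error(estimate: float, actual: float) -> float:
    return abs(actual - estimate) / actual

checks = [("cost ($K)", 950.0, 1180.0), ("schedule (days)", 240, 268)]
for name, estimated, actual in checks:
    print(f"{name}: estimated={estimated}, actual={actual}, "
          f"MRE={relative_error(estimated, actual):.0%}")
```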
  • 29. Pros and Cons of Process Maturity
    • Problems:
      • Need to watch a lot ("Big Brother", "Big Sister")
      • Overhead to capture, store and analyse the required information
    • Benefits:
      • Increased control of projects
      • Predictability of project cost and schedule
      • Objective evaluations of changes in techniques, tools and methodologies
      • Predictability of the effect of a change on project cost or schedule
  • 30. Summary I
    • It is important to note that process maturity does not involve a discrete set of five possible ratings. Instead, maturity represents relative locations on a continuum from 1 to 5.
    • An individual process is assessed or evaluated along many dimensions, and some parts of the process can be more mature or visible than others. For example, a repeatable process may not have well‑defined intermediate activities, but the design activity may indeed be clearly defined and managed.
  • 31. Summary II
    • The guidelines presented here are meant only to give a general depiction of typical processes. It is essential that you begin any measurement program by examining the process model and determining what is visible.
    • If one part of a process is more mature than the rest, measurement can enhance the visibility of that part and help to meet overall project goals, at the same time bringing the rest of the process up to a higher level of maturity. Thus, in a repeatable process with a well‑defined design activity, design quality metrics may be appropriate and desirable.