Giant Plc 2009
 

Successful presentation to the I.T. Director of Giant Plc. and to the co-team from Barclays Bank Plc.

  • Why apply Six Sigma to software development? Let’s look at software development. Progress has been made since the term “software crisis” was first coined in the 1960s to describe the poor quality of software development, but software projects are still notoriously late, over budget, and fail to satisfy their requirements. Consider the following statistics. Now, let’s look at Six Sigma.
  • However, software development is not a typical Six Sigma application. While software development is process oriented, inputs are often ill-defined, outputs are often difficult to fully evaluate (“you can’t use testing to verify the absence of errors”), and performance is highly influenced by human factors, leading to a high degree of natural variation.
  • “Assessment and Control of Software Risks”, Capers Jones 1994 (p.29)
  • So, in applying Six Sigma to software development, we’ll use both techniques. We’ll apply DFSS to the fuzzy front end to improve our ability to gather and interpret requirements, establish a product concept, develop product specifications, determine implementation requirements (i.e., CTPs, which for software could include things like the acquisition of new tools, new technology, new skill sets, etc.), and establish the project’s schedule and cost structure. We’ll use DMAIC to strengthen the actual software development process, which begins for our purposes with design. Our primary objectives will be to improve overall productivity, improve quality, reduce the number of introduced defects, improve defect containment, and reduce the number of defects delivered to the customer.
  • Let’s begin by looking at how we can use DFSS to improve the fuzzy front end.
  • For example, we may begin by looking at different geographic areas and the different types of customers in each of those areas (here a lead user is one who customizes the product themselves). In addition to geography, we could also divide customers based on size (small, medium and large) or type of application (real-time financial services, real-time simulation, etc.). The idea is to identify all of the slices in the pie chart that represents the product’s market, so that our view of requirements is complete. <Draw this on the board>
  • Kano analysis divides requirements into three basic categories: must-bes, satisfiers, and delighters. Illustrate the difference using the airline example of taking a shuttle flight between New York and Washington: getting there is a must-be; getting there in less time is a satisfier (the faster the better); being served excellent American Champagne in crystal glasses during the flight is a delighter. The relationship between the categories and customer satisfaction is shown in the next slide.
  • The information can be summarized in a matrix that shows the relationship between requirements and use cases, and the Kano classification and priority associated with each use case. Note: requirements and use cases should be numbered for tracking.
  • <Walk through the diagram>
  • In the matrix, we start by summarizing the voice of the customer by listing the requirements and use cases, and the customer’s view of each use case (Kano classification and priority). We then create two columns for each of our design options: one specifying the level of support for each use case, where 0 = no support, 1 = minimum support, 2 = average support, and 3 = strong support, based on the measures we determined earlier; the other specifying the level of effort required in LOC to support the specified level of support for each use case. Note: in defining the design options, you must be realistic. That is, the resulting options must all be viable. Therefore, you must consider the relationships between use cases as well as conflicts between use cases. For example, providing strong support for one use case may dictate the level of support provided for another use case. Or, in order to provide strong support for one use case, we may have to provide strong support for another, even though this would not be our choice.
  • For each option we can calculate a customer satisfaction score. This will be a function of the Kano classification, priority, and level of support for each use case. This is not yet a science and there are different ways to do this. The simplest is to ignore the Kano classification and calculate the customer satisfaction score by summing priority x level of support over all of the use cases. More complete approaches take the Kano classifications into account. For example: penalize the score for every must-be that is not supported; score must-bes as priority x level of support, with a maximum of priority x average support; score satisfiers as priority x level of support; and score delighters as priority² x level of support. The most important thing is to be consistent so that the comparison is valid, and to regularly tune your approach based on pre-release estimates of customer satisfaction and post-release measurements of customer satisfaction.
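    One way to make this concrete is a small scoring sketch in Python; the use cases come from the CTQ matrix, but the per-option support levels, the cap on must-be credit, and the priority-squared weighting for delighters are illustrative assumptions.

```python
# Sketch of a customer-satisfaction score for design options.
# Weighting: must-bes get no credit beyond "average" support,
# satisfiers are linear, delighters are weighted by priority squared.
AVERAGE_SUPPORT = 2  # 0 = none, 1 = minimum, 2 = average, 3 = strong

def use_case_score(kano, priority, support):
    """Score one use case for one design option."""
    if kano == "M":                       # must-be
        return priority * min(support, AVERAGE_SUPPORT)
    if kano == "S":                       # satisfier
        return priority * support
    if kano == "D":                       # delighter
        return (priority ** 2) * support
    raise ValueError(f"unknown Kano class: {kano}")

def satisfaction_score(use_cases, option):
    """Sum the use-case scores for one design option's support levels."""
    return sum(use_case_score(uc["kano"], uc["priority"], uc[option])
               for uc in use_cases)

# Use cases from the CTQ matrix; support levels per option are illustrative.
use_cases = [
    {"name": "Verifying data content integrity", "kano": "S", "priority": 4,   "base": 1, "full": 3},
    {"name": "Moving client-server data",        "kano": "M", "priority": 3,   "base": 1, "full": 3},
    {"name": "Optimizing data transfer",         "kano": "D", "priority": 3.5, "base": 1, "full": 3},
]

for option in ("base", "full"):
    print(f"{option}: customer satisfaction score = {satisfaction_score(use_cases, option):.2f}")
```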
  • For each option, we can calculate an effort score by summing the level of effort estimates for all of the use cases.
  • Finally, we can refine our options by performing a rough benefit/cost analysis. This can be done by determining the center of the grid (calculating the median customer satisfaction score and the median level of effort score for the set of options), plotting the options, and selecting the options with the best benefit/cost profile for further analysis. This will likely be options in Quadrant 2, or Quadrants 1 and 3; options in Quadrant 4 are not attractive.
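    The benefit/cost screen can be sketched the same way; the option scores below are illustrative (the effort scores come from the design-options matrix, the hybrids are hypothetical), and the quadrant numbering (2 = high satisfaction with low effort, 4 = the reverse) is an assumption consistent with the note above.

```python
# Sketch of the rough benefit/cost screening: split options into quadrants
# around the median customer-satisfaction and effort scores.
from statistics import median

options = {               # name: (customer satisfaction score, effort score)
    "base":     (19.25, 18500),
    "full":     (54.75, 27000),
    "hybrid-1": (30.00, 22000),   # hypothetical intermediate options
    "hybrid-2": (40.00, 21000),
}

sat_median    = median(s for s, _ in options.values())
effort_median = median(e for _, e in options.values())

def quadrant(sat, effort):
    """Quadrant 2 = high satisfaction / low effort; Quadrant 4 = the reverse."""
    if sat >= sat_median:
        return 2 if effort <= effort_median else 1
    return 3 if effort <= effort_median else 4

for name, (sat, effort) in options.items():
    print(f"{name:8s} satisfaction={sat:6.2f} effort={effort:6d} quadrant={quadrant(sat, effort)}")
```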
  • We use Putnam’s equation as an illustration of an estimating model because it is based on thousands of real software projects, well documented, and well explained in published materials. However, there is no shortage of estimating models, and another model may be better for your situation. The important point is to use a consistent, quantitative approach for evaluating your capability to develop and deliver products based on the different options, and to continuously improve your estimating model based on actual results. Note: B in the above equation is called a skill factor. It’s a function of project size and accounts for the additional effort required for integration. Note: Effort = B x (Size / (Productivity x Duration^(4/3)))^3, and Duration = (Size / (Productivity x (Effort/B)^(1/3)))^(3/4).
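    As a worked illustration of this estimating relationship, here is a small sketch assuming the form Size = Productivity x (Effort/B)^(1/3) x Duration^(4/3); the size, productivity parameter, and skill factor values below are assumptions, not figures from the deck.

```python
# Putnam-style estimates: invert Size = PP * (Effort/B)^(1/3) * Duration^(4/3)
# for effort and for duration. All inputs below are illustrative.

def effort_from_duration(size_sloc, pp, duration_years, b):
    """Effort (staff-years) implied by size, process productivity and duration."""
    return b * (size_sloc / (pp * duration_years ** (4 / 3))) ** 3

def duration_from_effort(size_sloc, pp, effort_staff_years, b):
    """Duration (years) implied by size, process productivity and effort."""
    return (size_sloc / (pp * (effort_staff_years / b) ** (1 / 3))) ** (3 / 4)

size = 120_000    # SLOC (assumed)
pp = 12_000       # process productivity parameter (assumed)
b = 0.39          # skill factor, a function of project size (assumed)

effort = effort_from_duration(size, pp, duration_years=2.0, b=b)
print(f"Effort   ~ {effort:.1f} staff-years")
print(f"Duration ~ {duration_from_effort(size, pp, effort, b):.2f} years")  # round-trips to 2.0
```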
  • Next, we calculate a manpower buildup index, which represents how quickly we staff projects. This is another historical parameter that is useful in characterizing the organization’s software development capability. The more schedule compression, the shorter the duration of projects, but with a disproportionate increase in staffing (cost) and risk.
  • At the end of the project in month 10, we will have discovered 80.6% of the total defects. We can use this information in two ways. First, we can use it together with historical data on the effort required to find and fix defects to sanity check our plan; for example, if it takes 23 hours to find and fix each defect, we can check whether the allocated effort is sufficient given the defect discovery profile. Second, since we will be delivering approximately 20% of the defects to customers, we might want to revise our plan to start testing earlier, or to institute other defect containment strategies (e.g., inspections) to reduce the anticipated number of defects at the start of testing. The goal is to ensure that our plans for staffing and testing are sufficient to deliver the required level of quality. Needless to say, if significant changes are made, earlier parts of our evaluation may have to be repeated.
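    The sanity check described here can be sketched with a Rayleigh discovery profile; the cumulative form used below, the total defect count, and the peak month are assumptions (chosen so that roughly 80% of defects are found by month 10), while the 23 hours per defect comes from the note above.

```python
# Sanity check: Rayleigh defect-discovery profile.
# Assumed cumulative form: fraction found by month t = 1 - exp(-t^2 / (2 * tp^2)),
# where tp is the month at which the discovery rate peaks.
import math

def cumulative_fraction(t, peak_month):
    return 1 - math.exp(-(t ** 2) / (2 * peak_month ** 2))

total_defects    = 1380   # projected total defects (assumed)
peak_month       = 5.5    # discovery rate peaks here (assumed)
project_end      = 10     # months
hours_per_defect = 23     # historical find-and-fix effort per defect

found = total_defects * cumulative_fraction(project_end, peak_month)
print(f"Found by month {project_end}: {found:.0f} "
      f"({cumulative_fraction(project_end, peak_month):.1%} of total)")
print(f"Find-and-fix effort needed: {found * hours_per_defect:.0f} hours")
print(f"Defects escaping to customers: {total_defects - found:.0f}")
```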
  • Here, we’re comparing one design option against two possible schedules, one more aggressive than the other. Business value = intrinsic value of the product. Feature value = added value based on customer satisfaction rating. Duration adjustment = estimated value of delivering early. Net value = (business value + feature value + duration adjustment) - (effort cost + defect repair cost).
  • This slide continues the comparison, this time between the moderate and very rapid schedules. Results show that very compressed schedules are not always advantageous because of the disproportionate increase in cost and effort.
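    The net-value arithmetic behind this comparison can be reproduced directly from the figures shown in the “Balancing VOC and VOB” slides later in the transcript; the values below are those figures, with the net value including the duration adjustment.

```python
# Reproducing the net-value comparison with the slide figures.
business_value = 20_000_000
feature_value  = 688_000

schedules = {
    # name: (duration adjustment, effort cost, defect repair cost)
    "MBI = 3 (Moderate)":   (1_400_000, 1_492_500, 453_900),
    "MBI = 5 (Very Rapid)": (1_400_000, 2_758_750, 663_600),
}

for name, (duration_adj, effort_cost, repair_cost) in schedules.items():
    net = (business_value + feature_value + duration_adj) - (effort_cost + repair_cost)
    print(f"{name:22s} net value = ${net:,}")
# Moderate -> $20,141,600 ; Very Rapid -> $18,665,650 (matching the slides)
```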
  • Finally, once our overall plan has been established, the next step is to transform it into a detailed project plan and perform simulation using schedule estimates (best, average and worst cases) for the critical path to determine the overall schedule risk.
  • Monte Carlo simulation of project schedule risk can be performed when a critical path plan has been developed (i.e., predecessors and successors identified for each task). For each task in the critical path the team is asked to provide ‘best case’, ‘expected’, and ‘worst case’ duration estimates (sometimes called “PERT” estimates). These are used as input to the simulation, typically run 1000 times, to get a probability distribution of expected completion times. Simulation results can be used to determine earliest and latest dates associated with a given probability (e.g., 95% as illustrated here), or to determine the probability associated with a particular date (e.g., ‘what is the probability the project will complete by 1/1/2003?’)
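    A minimal sketch of this kind of simulation, with a hypothetical four-task critical path and triangular distributions standing in for the best / expected / worst estimates:

```python
# Monte Carlo simulation of critical-path duration from PERT-style estimates.
import random

critical_path = [
    # (task, best, expected, worst) durations in working days (hypothetical)
    ("Detailed design", 15, 20, 30),
    ("Coding",          40, 55, 80),
    ("Integration",     10, 15, 25),
    ("System test",     20, 25, 40),
]

def simulate_once():
    # Triangular distribution: low = best, high = worst, mode = expected.
    return sum(random.triangular(best, worst, expected)
               for _, best, expected, worst in critical_path)

random.seed(1)
trials = sorted(simulate_once() for _ in range(1000))

print(f"Median completion:  {trials[500]:.0f} days")
print(f"95th percentile:    {trials[950]:.0f} days")

target = 130  # e.g., 'what is the probability the project completes within 130 days?'
prob = sum(t <= target for t in trials) / len(trials)
print(f"P(complete within {target} days) = {prob:.1%}")
```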
  • Now, let’s look at the application of Six Sigma’s DMAIC process to the software development process.
  • Before describing the actual process, let me mention a few prerequisites. First, the software development process must be well defined in order to apply DMAIC to achieve process improvements.
  • Perhaps the best way to illustrate how DMAIC can be applied to improve the software development process is to walk through an example. This slide shows the problem statement and goal statement developed during the Define phase.
  • The team decided to collect three types of metrics during the Measure phase to more fully characterize the problem: total problems fixed prior to release per project, total post-release problems per project, and types of post-release problems (overall and per project). These metrics were selected to provide a more complete picture of the company’s defect containment capability. This data will allow the team to determine overall defect containment and study defect containment as a function of project characteristics and as a function of error type.
  • They analyzed the collected data in several ways. First, they looked at the relationship between pre-release defects and project size and found a strong correlation.
  • Next, they looked at the relationship between escaped defects and pre-release defects and found that it was fairly linear. The first two results taken together suggest that there is no significant variation in defect containment effectiveness across projects.
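    Using the DCE definition from the slides (pre-release defects divided by pre-release plus escaped defects), the per-project check looks like this; the project counts are hypothetical.

```python
# Defect containment effectiveness per project and overall.
projects = [
    # (project, pre-release defects, escaped defects) - hypothetical counts
    ("A", 420, 38),
    ("B", 610, 55),
    ("C", 180, 17),
    ("D", 930, 86),
]

for name, pre, escaped in projects:
    print(f"Project {name}: DCE = {pre / (pre + escaped):.1%}")

total_pre     = sum(p for _, p, _ in projects)
total_escaped = sum(e for _, _, e in projects)
print(f"Overall DCE = {total_pre / (total_pre + total_escaped):.1%}")
```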
  • Next, the team created a histogram of escaped defects showing the distribution of the different types of problems. This showed that code-related problems are the most common. The team decided that the root causes of the increased maintenance effort were the increased size of recent projects, which is unlikely to change, and poor error containment for code-related problems.
  • The team decided that they could improve the situation by improving the effectiveness of code inspections. They decided that the effectiveness of code inspections (the number of identified defects) was a function of the size of the unit inspected, the preparation time, the inspection time, and the number of reviewers, and they decided to conduct a designed experiment to determine the optimal combination of these factors.
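    A two-level full-factorial layout for such an experiment can be sketched as follows; the factor levels are assumptions, and the response (defects found per inspection) would come from actually running the inspections.

```python
# Enumerate a 2^4 full-factorial design for the code-inspection experiment.
from itertools import product

factors = {
    "unit size (LOC)":           (150, 400),
    "preparation rate (LOC/hr)": (100, 200),
    "inspection rate (LOC/hr)":  (100, 200),
    "reviewers":                 (2, 4),
}

runs = list(product(*factors.values()))   # 16 runs, one per low/high combination
for i, levels in enumerate(runs, start=1):
    settings = ", ".join(f"{name}={level}" for name, level in zip(factors, levels))
    print(f"run {i:2d}: {settings}")
# Main effects and interactions are then estimated from the measured
# defects found in each run (e.g., via ANOVA).
```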
  • Once they determined the optimal combination, they conducted a pilot test using real projects to verify the results.
  • In order to ensure that the improvement will be maintained and managed, the team established a performance standard for code inspections based on defects/KLOC, and established a plan for monitoring the process and responding to situations where unacceptable performance is observed.
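    The monitoring step can be as simple as checking each inspection’s yield against the standard; the threshold and the inspection data below are hypothetical.

```python
# Compare inspection yield (defects/KLOC) against the performance standard.
STANDARD_DEFECTS_PER_KLOC = 8.0   # assumed minimum expected inspection yield

inspections = [
    # (module, defects found, size in KLOC) - hypothetical
    ("billing",   14, 1.2),
    ("reporting",  3, 0.9),
    ("gateway",   11, 1.0),
]

for module, defects, kloc in inspections:
    rate = defects / kloc
    status = "OK" if rate >= STANDARD_DEFECTS_PER_KLOC else "INVESTIGATE"
    print(f"{module:10s} {rate:5.1f} defects/KLOC  {status}")
```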

Giant Plc 2009 Presentation Transcript

  • Applying Six Sigma to Financial Software Development Ali Raza Khan Giant Plc. 1 New Oxford Street, London ♦ United Kingdom (+44) 18443247700 ♦ [email_address] © 2009 Giant Plc.
  • Motivation
    • Financial Software Development
    • Large Opportunity for Improvement
    • Approximately 25% of software projects are canceled
    • Average project exceeds
      • Costs by 90%
      • Schedule by 120%
    • Risk of project failure increases with size
    © 2009 Giant Plc.
    • Six Sigma
    • Well-defined improvement approach
    • Impressive track record of achievements
    • Adaptable
  • But Software Development is Not a Typical Application
    • Process Oriented, but
      • Inputs often ill-defined
      • Outputs often difficult to fully evaluate
      • Performance highly influenced by human factors (e.g., knowledge, skills, experience, etc.)
        • Significant natural variation
    © 2009 Giant Plc.
  • Key Factors in Software Project Failures (risk factor and % of “MIS” projects affected)
    • Requirements Failures: Creeping Requirements (80%)
    • Expectation Failures: Excessive Schedule Pressure (65%)
    • Execution Failures: Low Quality (60%), Cost Overruns (55%), Inadequate Configuration Management (50%)
  • Applying Six Sigma to Software Development © 2009 Giant Plc.
  • Fuzzy Front End Six Sigma DFSS © 2009 Giant Plc
  • Balance the VOC and the VOB Voice of the Customer Voice of Business
  • VOC – Voice of the Customer
    • Understand Internal and External Customers and Target Environment
    © 2009 Giant Plc.
  • Building a Customer Matrix © 2009 Giant Plc.
    • Segments: U.S., Europe, Asia
    • Types of Customers: Lead User, Demanding, Lost Lead, Had But Lost
  • VOC – Voice of the Customer
    • Understand Internal and External Customers and Target Environment
    • Identify, Characterize and Verify Critical to Quality (CTQ) Requirements
      • Interviews, focus groups, use cases, etc.
      • Preference surveys and Kano analysis
    © 2009 Giant Plc.
  • Kano Analysis
    • Dissatisfiers (or basic requirements)
      • “Must be” requirements
      • These features must be present to meet minimal expectations of customers
    • Satisfiers (or variable requirements)
      • The better or worse you perform on these requirements, the higher or lower will be your rating from customers
    • Delighters (or latent requirements)
      • These are features, factors, or capabilities that go beyond what customers expect, or that target needs customers can’t express themselves
    © 2009 Giant Plc.
  • The Kano Model (diagram): how the customer feels (Very Dissatisfied, Neutral, Delighted) vs. the level of functionality delivered for a particular requirement (Low to None, High), with Must-Be (“Satisfied, but I expect it this way” / “I can live with it”), Satisfier, and Delighter curves © 2009 Giant Plc
  • VOC Output: Prioritized CTQs © 2009 Giant Plc.
    • Provide real-time user access / Minimizing system response time: Kano M, Priority 5
    • Manage Network I/O / Optimizing data transfer: Kano D, Priority 3.5
    • Manage Network I/O / Moving client-server data: Kano M, Priority 3
    • Manage database interfaces / Verifying data content integrity: Kano S, Priority 4
  • VOC – Voice of the Customer
    • Understand Internal and External Customers and Target Environment
    • Identify, Characterize and Verify Critical to Quality (CTQ) Requirements
      • Interviews, focus groups, use cases, etc.
      • Preference surveys and Kano analysis
    • Establish measures for CTQ requirements
    © 2009 Giant Plc.
  • VOC Output: Fully Characterized CTQs © 2009 Giant Plc.
    • Manage Database Interfaces / Verifying data content integrity (Kano S, Priority 4): Minimum ≤ 1 record/1,000; Average ≤ 1 record/10,000; Strong ≤ 1 record/100,000
    • Manage Network I/O / Moving client-server data (Kano M, Priority 3): Minimum 100 records/min.; Average 500 records/min.; Strong 800 records/min.
    • Manage Network I/O / Optimizing data transfer (Kano D, Priority 3.5): Minimum hooks for user supplied compression; Average top 5 compression schemes supplied; Strong top 10+ compression schemes supplied and fully integrated
  • VOB - Voice of Business
    • Analyze Design Options
      • Estimate customer satisfaction
      • Level of effort
      • Capability to deliver
      • Balance VOC and VOB
    © 2009 Giant Plc.
  • Analyze Design Options (design options and level of effort) © 2009 Giant Plc.
    • Manage database interfaces / Verifying data content integrity (Kano S, Priority 4): Base support 1, Full support 3; Base effort 1000, Full effort 1500
    • Manage Network I/O / Moving client-server data (Kano M, Priority 3): Base support 1, Full support 3; Base effort 5500, Full effort 7500
    • Manage Network I/O / Optimizing data transfer (Kano D, Priority 3.5): Base support 1, Full support 3; Base effort 12000, Full effort 18000
    • The matrix totals a Customer Sat. Score and an Effort Score for each option
  • Analyze Design Options (same matrix): Customer Sat. Score = F(Kano, Priority, Feature Level) © 2009 Giant Plc.
  • Analyze Design Options (same matrix): Effort Score = ∑ Effort Estimates © 2009 Giant Plc.
  • Concept Selection © 2009 Giant Plc.
  • Computing Productivity
    • Historically, for each project we should know Size, Effort, and Duration
    • PP = Size (SLOC) / [ (Effort (StaffYears) / B)^(1/3) x Duration (years)^(4/3) ]
    © 2009 Giant Plc.
  • Schedule Compression
    • Manpower Buildup Index (MBI) relates to Effort (StaffYears) / Duration (years)^3
    • MBI 1 (Slow): equation output 7.3
    • MBI 2 (Mod. Slow): 14.7
    • MBI 3 (Moderate): 26.9
    • MBI 4 (Rapid): 55
    • MBI 5 (Very Rapid): 89
    © 2009 Giant Plc.
  • Rayleigh Curve © 2009 Giant Plc.
  • Balancing VOC and VOB © 2009 Giant Plc.
    • Business Value $20,000,000; Feature Value $688,000
    • Concept 1, MBI = 1 (Slow): Duration 15.2 months; Effort 77.3 staff months; Released Defects 14.1; Effort Cost $966,250; Defect Repair Cost $239,700; Duration Adjustment $0; Net Value $19,482,050
    • Concept 1, MBI = 3 (Moderate): Duration 13 months; Effort 119.4 staff months; Released Defects 267; Effort Cost $1,492,500; Defect Repair Cost $453,900; Duration Adjustment $1,400,000; Net Value $20,141,600
  • Balancing VOC and VOB © 2009 Giant Plc.
    • Business Value $20,000,000; Feature Value $688,000
    • Concept 1, MBI = 3 (Moderate): Duration 13 months; Effort 119.4 staff months; Released Defects 267; Effort Cost $1,492,500; Defect Repair Cost $453,900; Duration Adjustment $1,400,000; Net Value $20,141,600
    • Concept 1, MBI = 5 (Very Rapid): Duration 11.7 months; Effort 220.7 staff months; Released Defects 508; Effort Cost $2,758,750; Defect Repair Cost $663,600; Duration Adjustment $1,400,000; Net Value $18,665,650
  • VOB - Voice of Business
    • Analyze Design Options
      • Estimate customer satisfaction
      • Level of effort
      • Capability to deliver
      • Balance VOC and VOB
    • Select Concept and Approach
      • Flesh out concept
        • QFD
        • FMEA
      • Verify and refine approach
        • Defect analysis
        • Schedule simulation
    © 2009 Giant Plc.
  • Capability to Deliver on Time: Probabilistic Scheduling (frequency chart, 1,000 trials, 4 outliers; certainty is 94.90% from -Infinity to 289.17 days)
    • How much confidence should we have in the schedule?
    • … At a 95% confidence level
      • latest mid March, 2003 (+ 43 days)
      • earliest mid January, 2003 (- 15 days)
    Upper Spec Limit (USL) © 2009 Giant Plc.
  • Process Improvement Standard Six Sigma DMAIC Process © 2009 Giant Plc.
  • Application to Software X
    • Prerequisites:
      • Processes must be well defined
    © 2009 Giant Plc.
  • DMAIC Example
    • Problem Statement
      • Post release maintenance has increased by 30% since the end of last fiscal year and is now limiting new product development.
    • Goal Statement
      • Reduce post release maintenance by 40% by the end of Q4’2003.
    © 2009 Giant Plc.
  • Measure – Data Collection
    • Total Problems Fixed Prior to Release Per Project
      • Pre-Release Defects: defects found and fixed during development and testing
    • Total Post Release Problems Per Project
      • Released Defects: defects reported by customers
    • Types of Post Release Problems
      • All projects
      • Per project
    © 2009 Giant Plc
  • Analysis
    • Pre-Release Defects = f(Size)
    © 2009 Giant Plc.
  • Analysis
    • Escaped defects proportional to pre-release defects
      • No significant variation in Defect Containment Effectiveness
        • DCE = Pre-Release Defects/ (Pre-Release Defects + Escaped Defects)
    © 2009 Giant Plc.
  • Analysis
    • Most Escaped Defects are Code Related
    © 2009 Giant Plc.
  • Improve
    • Improve the Effectiveness of Code Inspections
      • Factors
        • Size of unit (LOC)
        • Preparation time (LOC/hour)
        • Inspection time (LOC/hour)
        • Number of reviewers
      • Measure
        • Number of identified defects
    © 2009 Giant Plc.
  • Improve
    • Improve the Effectiveness of Code Inspections
      • Conduct DOE
        • Determine most effective combination of factors
      • Verify DOE results
        • Pilot test using real project
    © 2009 Giant Plc.
  • Control
    • Establish Performance Standard for Code Inspections
      • Defects/KLOC
    • Monitor Performance
      • Take action when unacceptable performance observed
    © 2009 Giant Plc.
  • Questions © 2009 Giant Plc.