Software Project Management Meets Six Sigma

Part 1: Bottom-Up Project Duration and Variation Prediction

By David L. Hallowell

From the iSixSigma Six Sigma Discussion Forum: "Using DMAIC you can reduce the schedule and effort over-run, which is common for almost 90% of projects. You can also measure the variance and then analyse the causes of the over-run. The causes may be wrong estimation of the project, low productivity of the employees, the wrong skill set of employees, improper allocation of work, some technical problems, and the list goes on..."

A number of recent posts in the iSixSigma Software Forum have inquired about the application of Six Sigma methods to software project management. In particular, how might we look at software project duration as a key project-planning and execution output (a "Y" in common Six Sigma terms) to better understand the contributing factors (the "X's") that drive it? Learning to reduce project duration could be one goal ("reducing the mean"), but it could be more important to reduce the predicted-to-actual project duration gap ("reducing the variation"). That gap is a particularly familiar enemy in the software world, causing problems with project cost, disappointed customers, conflicts with management, development team morale, and more.

Industry data from sources like Capers Jones (Table 1) show this to be an important topic, indicating that what we might call "Expectations Failures" are high on the list of contributors to software project failure. Problems related to mismatched expectations between a development team, its management, and internal or external customers rank just under "Requirements Failures" (the proper subject for a number of other articles).

With the goal of reducing the gap between predicted and actual project completion times, we will begin by focusing on the process that delivers a project completion time estimate. Inaccuracies in that estimating process can, of course, contribute significantly to the completion time gap before work on the project has even started. Later, to find other "X's" that could strongly influence the size of the "Expectation Gap," we will look for factors in play during project execution: some under the control of the project leader and team, some controlled by management, and some "uncontrolled" variables in the project environment.

We will approach this broad topic in two parts. This part concentrates on "bottom-up" estimating methods, which establish a good foundation for the task-level view and the methods that use it to produce forecasts. In Part 2, we will explore "top-down" estimating methods that develop predictions based on properties of the work product, the project team, and the project environment. Throughout, we will use data from several working case studies, taken from real projects and representative of what you might expect to see in actual practice.
Project Duration Estimating Process (Bottom-Up)

For best practices in project planning and management, Six Sigma practitioners are wise to leverage the toolkit and experience base developed by the Project Management Institute and its Project Management Body of Knowledge (PMBoK). Leaving the fuller details to those sources, we will briefly outline what is involved in getting a meaningful estimate of project duration.

1. Developing the Work Breakdown Structure and Task-Level Estimates

The first step involves breaking the total project into a set of tasks, each small enough that one can answer these questions:

  • What is involved in getting started?
  • How will personnel and resources be allocated?
  • What, exactly, are the conditions to be met in order to be "done"?

While it may be tempting to fix a "point estimate" on the duration of any task, a windowed estimate (Best / Most Likely / Worst) is more realistic.

[Figure 1: Task Duration Estimate Window (# of days)]

2. Identifying Predecessor-Successor Relationships and the Critical Path

As tasks are evaluated for their dependencies on other tasks, they can be organized into a chain or network that illustrates one or more feasible plans for progressing through the whole set of tasks. Gantt charts and critical path diagrams illustrate these kinds of progressive task layouts. Figure 2 shows a simple case with seven tasks (A through F), each with a "Most Likely" duration shown in the task box.

[Figure 2: Critical Path with Parallel Tasks]

From the point of view of duration estimation, the question is, of course, "How long will it take to get everything done?" The "Critical Path," highlighted in Figure 2, is, in simple terms, the longest of all potential routes through the task network. Stated differently, the critical path is long enough to ensure that all other project tasks could be completed in parallel (assuming that required resources are available). For that reason, the critical path tasks represent a good sample for completion time estimation. Changes in the durations of critical path tasks are directly reflected in the overall project completion time, and the dynamics connected with the critical path should track well with the top-level dynamics of the overall project.
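To make the longest-path idea concrete, here is a minimal Python sketch. The task names, duration windows, and dependencies below are invented for illustration (they are not the article's 49-task case), and the path is scored by the "Most Likely" durations, mirroring the task-box values in Figure 2.

    # Hypothetical tasks, each with a (best, most_likely, worst) window in days.
    tasks = {
        "A": (10, 10, 15),
        "B": (5, 7, 10),
        "C": (8, 12, 20),
        "D": (3, 4, 6),
        "E": (6, 9, 14),
    }

    # Predecessor lists define the task network (a directed acyclic graph).
    predecessors = {
        "A": [],
        "B": ["A"],
        "C": ["A"],
        "D": ["B", "C"],
        "E": ["D"],
    }

    def critical_path(tasks, predecessors):
        """Return (length, path) of the longest route through the network,
        scored by each task's 'Most Likely' duration."""
        memo = {}

        def longest_to(task):
            if task in memo:
                return memo[task]
            duration = tasks[task][1]  # the Most Likely estimate
            best_len, best_path = duration, [task]
            for pred in predecessors[task]:
                pred_len, pred_path = longest_to(pred)
                if pred_len + duration > best_len:
                    best_len, best_path = pred_len + duration, pred_path + [task]
            memo[task] = (best_len, best_path)
            return memo[task]

        # End tasks are those no other task depends on.
        end_tasks = set(tasks) - {p for preds in predecessors.values() for p in preds}
        return max((longest_to(t) for t in end_tasks), key=lambda x: x[0])

    length, path = critical_path(tasks, predecessors)
    print(f"Critical path: {' -> '.join(path)} ({length} days by Most Likely estimates)")

With the sample data above, the sketch reports A -> C -> D -> E as the critical path at 35 days; in practice a scheduling tool produces the same information from the Gantt or PERT network.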
3. Forecasting Completion Time

With the critical path identifying the subset of tasks to consider, practitioners typically select one of three approaches to build the completion time forecast.

Simple Approach: Add up the "Most Likely" estimates for each task. While it may seem obvious that this approach fails to account for the Best and Worst estimates for each task, some software tools and some practitioners still use it. The "Most Likely" estimates for the 49 tasks in our working case total 268 days. We will contrast that result with the others as we work through them.

Rolling Up Task Expectation Values and Variances: This method accounts for the influence of all the values in each task estimate window while assessing the variation "stack-up" that accumulates over the entire set of tasks. The calculations at the task level are typically done as follows:

  Equation 1 (the "PERT equation," as it is often used with PERT charts):
  Expected Duration = (Best + 4 × Most Likely + Worst) / 6

  Equation 2:
  Variance ≈ ((Worst - Best) / 6)²

In Equation 1, the expectation weights the "Most Likely" estimate by a factor of 4 while factoring in the "Best" and "Worst" estimates at face value. This skews the expectation in a way that responds, to some degree, to the shape of the estimation window. The variance estimate (Equation 2) assumes that the range of the task estimate window represents approximately 99% of the cases, or +/- 3 standard deviations; one sixth of that span approximates the standard deviation, and squaring that value gives an estimated variance. Table 2 illustrates these computations for several of the 49 tasks in our working case.

[Table 2: Sample Task Expectation and Variance Estimates]

Summing the expected values for each task gives an expected overall project completion time; in this case, the sum for the 49 tasks on the critical path is 271 days. Because the variances for a series of independent events are additive, the overall completion time variance is simply the sum of the individual task variances (about 46 in this case). Figure 3 illustrates the completion time estimate as a distribution with a spread from about 254 days to about 290 days.

[Figure 3: Completion Time Estimate (days)]
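As an illustration of the rollup, the short Python sketch below applies Equations 1 and 2 to a few hypothetical task windows and sums the results. The three windows shown are examples only, not the article's Table 2 data, so the totals will not match the 268-day and 271-day figures from the working case.

    import math

    # Hypothetical (best, most_likely, worst) duration windows, in days.
    windows = [
        (10, 10, 15),
        (8, 12, 20),
        (5, 7, 10),
    ]

    def pert_expectation(best, likely, worst):
        # Equation 1: weight the Most Likely estimate by 4, the extremes by 1.
        return (best + 4 * likely + worst) / 6

    def pert_variance(best, likely, worst):
        # Equation 2: treat the window as roughly +/- 3 standard deviations.
        return ((worst - best) / 6) ** 2

    simple_total = sum(likely for _, likely, _ in windows)        # "add-em-up" method
    expected_total = sum(pert_expectation(*w) for w in windows)   # expectation rollup
    total_variance = sum(pert_variance(*w) for w in windows)      # variances add for independent tasks
    std_dev = math.sqrt(total_variance)

    print(f"Simple sum of Most Likely estimates: {simple_total:.1f} days")
    print(f"Expected completion time:            {expected_total:.1f} days")
    print(f"Standard deviation:                  {std_dev:.1f} days")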
Monte Carlo: Monte Carlo refers to an approach that uses the distribution of each input variable (in this case, each task-level estimate) as the basis for a simulation, drawing a series of random samples from those distributions to build a set of hypothetical result values. Without any other underlying math, the simple process of input-variable sampling produces a distribution of output values that can be used as an indicator of what one might expect to see.

Setting "Assumption" Variables: Tools like the commercial software package Crystal Ball from Decisioneering make it easy to set up and assess Monte Carlo simulations. A few examples from the case study will illustrate. First, the distribution of each input (called "Assumption") variable is defined, using a representative distribution and appropriate parameters. Figures 4 and 5 show how that setup is accomplished in Crystal Ball. For the type of variation we expect to see in these task estimate windows, the most typical distribution choices are Triangular and Beta; we will use Triangular as the simplest illustration.

[Figure 4: Distributions to Describe Input Variables]

Figure 5 shows the triangular distribution for the first task in our case: Best and Most Likely = 10 days, Worst = 15 days. (Note that these are defined as "Min," "Likeliest" and "Max" in Figure 5.) In a Monte Carlo setup, all other estimates are defined with the corresponding shape that their window values dictate.

[Figure 5: Triangular Distribution for One Task Estimate]

Setting Forecast Variables: Another step in the setup is to define one or more forecast variables, which describe the way the predefined input variables will generate the forecast. In this case, the forecast is simply the sum of the 49 triangular-distributed critical path task-level duration values.
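The same kind of simulation can be run without commercial software. The sketch below uses NumPy's triangular sampler in place of Crystal Ball's "Assumption" variables; the task windows are again illustrative stand-ins for the 49 critical-path estimates, not the article's data.

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # Hypothetical (best, most_likely, worst) duration windows in days.
    windows = [
        (10, 10, 15),
        (8, 12, 20),
        (5, 7, 10),
    ]

    n_trials = 10_000

    # For each trial, sample every task duration from its triangular distribution
    # and sum along the critical path to get a simulated completion time.
    samples = np.zeros(n_trials)
    for best, likely, worst in windows:
        samples += rng.triangular(left=best, mode=likely, right=worst, size=n_trials)

    print(f"Mean completion time: {samples.mean():.1f} days")
    print(f"Standard deviation:   {samples.std(ddof=1):.1f} days")
    print(f"5th-95th percentile:  {np.percentile(samples, 5):.1f} to "
          f"{np.percentile(samples, 95):.1f} days")

Increasing the trial count narrows the sampling noise in the reported statistics; ten thousand trials is usually ample for schedule forecasts of this size.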
Running the simulation generates a number of graphs and statistical reports. Figure 6 shows the basic frequency chart for the predicted duration.

[Figure 6: Monte Carlo Frequency Distribution Output]

Assessing the Probabilities Associated with Particular Completion Times: Both the Expectation and Variance and Monte Carlo methods allow us to calculate the likelihood that the project will be completed before or after some time point or range. With Crystal Ball, that is accomplished by moving sliders within the frequency distribution window, as illustrated in Figure 7.

[Figure 7: Assessing Completion Time Probabilities]

Compare the Monte Carlo appraisal with the Simple method with regard to the chances of meeting a 270-day deadline. The simple approach added the "Most Likely" estimates to yield a project completion forecast of 268 days, suggesting we could make the 270-day deadline. Monte Carlo delivers the more sobering message that our chances may be slim indeed: less than 30%. The Expectation and Variance method can be used to make similar predictions; with this method, an approximation to a Normal distribution is often assumed, allowing the use of Z tables to calculate probabilities.
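A hedged sketch of that probability check, under stated assumptions: the Monte Carlo fraction operates on an array of simulated completion times like the one produced in the previous sketch (here replaced by an illustrative stand-in), and the Normal calculation uses the 271-day mean and 6.8-day standard deviation quoted for the working case in place of a Z-table lookup.

    import math
    import numpy as np

    def p_within_deadline_mc(samples, deadline):
        """Fraction of simulated projects finishing on or before the deadline."""
        return float(np.mean(np.asarray(samples) <= deadline))

    def p_within_deadline_normal(mean, std_dev, deadline):
        """Normal-approximation probability, i.e. the classic Z-table calculation."""
        z = (deadline - mean) / std_dev
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard Normal CDF

    deadline = 270  # days

    # Illustrative stand-in for Monte Carlo output (mean ~275, sd ~8.9, as in the case study).
    samples = np.random.default_rng(seed=1).normal(loc=275.0, scale=8.9, size=10_000)

    print(f"Monte Carlo:       P(<= {deadline} days) = "
          f"{p_within_deadline_mc(samples, deadline):.0%}")
    print(f"Normal (271, 6.8): P(<= {deadline} days) = "
          f"{p_within_deadline_normal(271.0, 6.8, deadline):.0%}")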
Comparing the Three Bottom-Up Estimation Methods

The simple "add-em-up" method was the most optimistic (268 days) but had no frequency distribution connected with its forecast. The Expectation and Variance approach was more conservative (271 days) and supplied an estimate of variation (standard deviation = 6.8 days) as well as a target value for project completion. The Monte Carlo method was the most conservative, with a project duration estimate of about 275 days and an estimated standard deviation of about 8.9 days.

These differences are not surprising, since quite a number of tasks have skewed estimation windows; that is, the "Most Likely" estimate is the same as either the Best or the Worst. (In our study, more of the tasks leaned toward the "Worst" estimate.) The computation used to build the expectation for each task weights the extreme values only by a factor of 1 (Equation 1), limiting their influence on the outcome. During Monte Carlo simulation, enough events representing task times near the edges of those estimate windows showed up in the mix to pull the forecast distribution a little further in the "Worst" direction.

In practice, if most tasks are balanced (e.g., isosceles triangles), the Expectation and Variance and Monte Carlo approaches will tend to agree. As more task estimates become increasingly skewed (e.g., right triangles), the two methods will diverge; the Monte Carlo method pays more attention to the extremes. Bear this in mind as you explore the use of these tools. Of course, it is often possible, and advisable, to run two or more of these approaches, comparing results and interpreting the estimates on their merits for the specific case at hand. As we will show in Part 2 of this article, no single estimate is meaningful enough to trust in isolation. A group of estimates from different perspectives can, however, advise and inform us against the backdrop of all our other experience. Together they help illuminate blind spots and steer us clear of some of the common traps.

Looking Ahead to Part 2

We will explore top-down project estimation, looking at the way commercial or custom software estimating models work and considering how such models can be calibrated and tailored for specific development environments. This discussion will bring into play some of those "other X's" connected with the project team and the management environment as they may influence the size and nature of our decided enemy in this article set: the software project delivery "Expectations Gap."

Part 2: Top-Down Project Effort, Duration, and Defect Prediction

About the Author

David L. Hallowell is a Managing Partner of Six Sigma Advantage, Inc. and has more than 20 years of experience as an engineer, manager and Master Black Belt. As Digital's representative to Motorola's Six Sigma Research Institute, he worked on the original courseware for Black Belts and the application of Six Sigma to software. Mr. Hallowell has supported Six Sigma deployments and trained many Black Belts and trainers worldwide. With a special focus on Design for Six Sigma, he has led development teams to breakthrough improvements in the concept development and design of a number of commercial products.
Mr. Hallowell has patents and publications in the area of microelectronics packaging and high-speed interconnect. He has authored courses in Software DFSS, Design of Experiments, C++, and Computational Intelligence Tools. He is a co-founder of Six Sigma Advantage Inc., where he co-authored the Black Belt, Green Belt, and Foundation curricula. He can be reached via email at dhallowell@6siga.com.
Copyright © 2000-2008 iSixSigma LLC – All Rights Reserved.
Source: http://software.isixsigma.com/library/content/c030514a.asp
