
Defect Age

1. Defect Age: The time gap between the date a defect is reported and the date it is resolved. A small defect age indicates good communication between the development team and the testing team.

2. Defect Resolution Type:

3. Analysis of Deferred Bugs: Whether the reported bugs can be deferred or must be fixed. At the end of this review, the testing team can concentrate its final regression testing on the modules with higher bug density, or on all modules if time is available.

4. Defect Removal Efficiency: DRE = A / (A + B), where A = defects found by the tester during testing and B = defects found by the customer during maintenance.

Fields tracked for each defect: Project Name, Client Name, Version, Reported Date, Resolved Date, and the difference between the Reported and Resolved dates.

Graph: number of bugs vs. days (time detected).

Finding faults early

It is commonly believed that the earlier a defect is found, the cheaper it is to fix.[26] The following table shows the relative cost of fixing a defect depending on the stage at which it is found.[27] For example, if a problem in the requirements is found only post-release, it costs 10–100 times more to fix than if it had already been found by the requirements review.

                              Time Detected
Time Introduced   Requirements  Architecture  Construction  System Test  Post-Release
Requirements      1×            3×            5–10×         10×          10–100×
Architecture      -             1×            10×           15×          25–100×
Construction      -             -             1×            10×          10–25×
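To make the bookkeeping above concrete, here is a minimal Python sketch of defect age and DRE; the record fields mirror the ones listed above, and all names and dates are illustrative rather than taken from any particular tool.

```python
from datetime import date

def defect_age_days(reported: date, resolved: date) -> int:
    """Defect age: days between the reported and resolved dates."""
    return (resolved - reported).days

def defect_removal_efficiency(found_by_testers: int, found_by_customer: int) -> float:
    """DRE = A / (A + B): A = defects found in testing, B = defects found by the customer."""
    total = found_by_testers + found_by_customer
    return found_by_testers / total if total else 0.0

# Illustrative defect records (Project, Client, Version, Reported, Resolved).
defects = [
    {"project": "Billing", "client": "Acme", "version": "1.2",
     "reported": date(2024, 3, 1), "resolved": date(2024, 3, 5)},
    {"project": "Billing", "client": "Acme", "version": "1.2",
     "reported": date(2024, 3, 2), "resolved": date(2024, 3, 12)},
]

ages = [defect_age_days(d["reported"], d["resolved"]) for d in defects]
print("Defect ages (days):", ages)                      # [4, 10]
print("DRE:", defect_removal_efficiency(45, 5))         # 0.9
```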
4.4 Examples of Metrics Programs

4.4.1 Motorola

Motorola's software metrics program is well articulated by Daskalantonakis (1992). By following the Goal/Question/Metric paradigm of Basili and Weiss (1984), goals were identified, questions were formulated in quantifiable terms, and metrics were established. The goals and measurement areas identified by the Motorola Quality Policy for Software Development (QPSD) are listed in the following.

Goals
• Goal 1: Improve project planning.
• Goal 2: Increase defect containment.
• Goal 3: Increase software reliability.
• Goal 4: Decrease software defect density.
• Goal 5: Improve customer service.
• Goal 6: Reduce the cost of nonconformance.
• Goal 7: Increase software productivity.

Measurement Areas
• Delivered defects and delivered defects per size
• Total effectiveness throughout the process
• Adherence to schedule
• Accuracy of estimates
• Number of open customer problems
• Time that problems remain open
• Cost of nonconformance
• Software reliability

For each goal, the questions to be asked and the corresponding metrics were also formulated. In the following, we list the questions and metrics for each goal.

Goal 1: Improve Project Planning

Question 1.1: What was the accuracy of estimating the actual value of project schedule?
Metric 1.1: Schedule Estimation Accuracy (SEA)

Question 1.2: What was the accuracy of estimating the actual value of project effort?
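The excerpt names SEA and EEA but does not reproduce their formulas; the short sketch below assumes the straightforward actual-over-estimate ratio, so treat the function bodies as an assumption rather than Motorola's official definition.

```python
def schedule_estimation_accuracy(actual_duration: float, estimated_duration: float) -> float:
    """SEA, assumed here as actual project duration / estimated project duration."""
    return actual_duration / estimated_duration

def effort_estimation_accuracy(actual_effort: float, estimated_effort: float) -> float:
    """EEA, assumed here as actual project effort / estimated project effort."""
    return actual_effort / estimated_effort

# A value near 1.0 means the estimate was accurate; a value above 1.0 means an overrun.
print(schedule_estimation_accuracy(actual_duration=14, estimated_duration=12))  # ~1.17
print(effort_estimation_accuracy(actual_effort=950, estimated_effort=1000))     # 0.95
```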
Metric 1.2: Effort Estimation Accuracy (EEA)

Goal 2: Increase Defect Containment

Question 2.1: What is the currently known effectiveness of the defect detection process prior to release?
Metric 2.1: Total Defect Containment Effectiveness (TDCE)

Question 2.2: What is the currently known containment effectiveness of faults introduced during each constructive phase of software development for a particular software product?
Metric 2.2: Phase Containment Effectiveness for phase i (PCEi)

NOTE: From Daskalantonakis's definition of error and defect, it appears that Motorola's use of the two terms differs from what was discussed earlier in this chapter. To understand the preceding metric, consider Daskalantonakis's definitions:
• Error: A problem found during the review of the phase where it was introduced.
• Defect: A problem found later than the review of the phase where it was introduced.
• Fault: Both errors and defects are considered faults.

Goal 3: Increase Software Reliability

Question 3.1: What is the rate of software failures, and how does it change over time?
Metric 3.1: Failure Rate (FR)

Goal 4: Decrease Software Defect Density

Question 4.1: What is the normalized number of in-process faults, and how does it compare with the number of in-process defects?
Metric 4.1a: In-process Faults (IPF)
Metric 4.1b: In-process Defects (IPD)

Question 4.2: What is the currently known defect content of software delivered to customers, normalized by Assembly-equivalent size?
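Using the error/defect/fault terminology above, containment can be tallied directly from a fault log. The Python sketch below assumes the commonly cited forms of the formulas (TDCE as the share of faults found before release, PCEi as the share of phase-i faults found in phase i); the data layout is invented for illustration.

```python
# Each fault records the phase where it was introduced and the phase where it was found.
# Per the definitions above: an "error" is found in its own phase, a "defect" escapes it.
faults = [
    {"introduced": "design", "found": "design"},        # error
    {"introduced": "design", "found": "test"},          # defect
    {"introduced": "coding", "found": "coding"},        # error
    {"introduced": "coding", "found": "post-release"},  # defect
]

def tdce(faults) -> float:
    """Assumed TDCE: faults found before release / all known faults."""
    pre_release = sum(1 for f in faults if f["found"] != "post-release")
    return pre_release / len(faults) if faults else 0.0

def pce(faults, phase: str) -> float:
    """Assumed PCEi: errors / (errors + defects) among faults introduced in the phase."""
    introduced = [f for f in faults if f["introduced"] == phase]
    errors = sum(1 for f in introduced if f["found"] == phase)
    return errors / len(introduced) if introduced else 0.0

print(f"TDCE = {tdce(faults):.2f}")                  # 0.75
print(f"PCE(design) = {pce(faults, 'design'):.2f}")  # 0.50
```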
Metric 4.2a: Total Released Defects (TRD) total
Metric 4.2b: Total Released Defects (TRD) delta

Question 4.3: What is the currently known customer-found defect content of software delivered to customers, normalized by Assembly-equivalent source size?
Metric 4.3a: Customer-Found Defects (CFD) total
Metric 4.3b: Customer-Found Defects (CFD) delta

Goal 5: Improve Customer Service

Question 5.1: What is the number of new problems opened during the month?
Metric 5.1: New Open Problems (NOP)
NOP = Total new postrelease problems opened during the month

Question 5.2: What is the total number of open problems at the end of the month?
Metric 5.2: Total Open Problems (TOP)
TOP = Total postrelease problems that remain open at the end of the month

Question 5.3: What is the mean age of open problems at the end of the month?
Metric 5.3: Mean Age of Open Problems (AOP)
AOP = (Total time the postrelease problems remaining open at the end of the month have been open) / (Number of postrelease problems remaining open at the end of the month)

Question 5.4: What is the mean age of the problems that were closed during the month?
Metric 5.4: Mean Age of Closed Problems (ACP)
ACP = (Total time the postrelease problems closed within the month were open) / (Number of postrelease problems closed within the month)

Goal 6: Reduce the Cost of Nonconformance

Question 6.1: What was the cost to fix postrelease problems during the month?
Metric 6.1: Cost of Fixing Problems (CFP)
CFP = Dollar cost associated with fixing postrelease problems within the month

Goal 7: Increase Software Productivity

Question 7.1: What was the productivity of software development projects (based on source size)?
Metric 7.1a: Software Productivity total (SP total)
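The customer-service metrics reduce to simple tallies over a postrelease problem log. The sketch below applies the NOP, TOP, AOP, and ACP definitions as written above; the problem-record layout and dates are invented for illustration.

```python
from datetime import date

# Illustrative postrelease problem records; closed = None means still open.
problems = [
    {"opened": date(2024, 5, 2),  "closed": None},
    {"opened": date(2024, 5, 10), "closed": date(2024, 5, 25)},
    {"opened": date(2024, 4, 20), "closed": None},
]

month_start, month_end = date(2024, 5, 1), date(2024, 5, 31)

open_probs   = [p for p in problems if p["closed"] is None]
closed_probs = [p for p in problems if p["closed"] is not None]

nop = sum(1 for p in problems if p["opened"] >= month_start)   # New Open Problems
top = len(open_probs)                                           # Total Open Problems
# AOP: total time the still-open problems have been open / number still open
aop = sum((month_end - p["opened"]).days for p in open_probs) / top if top else 0.0
# ACP: total time the problems closed this month were open / number closed
acp = (sum((p["closed"] - p["opened"]).days for p in closed_probs) / len(closed_probs)
       if closed_probs else 0.0)

print(f"NOP = {nop}, TOP = {top}, AOP = {aop:.1f} days, ACP = {acp:.1f} days")
```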
Metric 7.1b: Software Productivity delta (SP delta)

From the preceding goals one can see that metrics 3.1, 4.2a, 4.2b, 4.3a, and 4.3b are metrics for end-product quality, metrics 5.1 through 5.4 are metrics for software maintenance, and metrics 2.1, 2.2, 4.1a, and 4.1b are in-process quality metrics. The others are for scheduling, estimation, and productivity.

In addition to the preceding metrics, which are defined by the Motorola Software Engineering Process Group (SEPG), Daskalantonakis describes in-process metrics that can be used for schedule, project, and quality control. Without getting into too many details, we list these additional in-process metrics in the following. [For details and other information about Motorola's software metrics program, see Daskalantonakis's original article (1992).] Items 1 through 4 are for project status/control, and items 5 through 7 are really in-process quality metrics that can provide information about the status of the project and lead to possible actions for further quality improvement.

1. Life-cycle phase and schedule tracking metric: Track schedule based on life-cycle phase and compare actual to plan.
2. Cost/earned value tracking metric: Track actual cumulative cost of the project versus budgeted cost, and actual cost of the project so far, with continuous updates throughout the project.
3. Requirements tracking metric: Track the number of requirements changes at the project level.
4. Design tracking metric: Track the number of requirements implemented in design versus the number of requirements written.
5. Fault-type tracking metric: Track causes of faults.
6. Remaining defect metric: Track faults per month for the project and use a Rayleigh curve to project the number of faults in the months ahead during development.
7. Review effectiveness metric: Track error density by stages of review and use control chart methods to flag exceptionally high or low data points.

Goal 1: Reduce Released Defects

This fundamental Six Sigma goal is one that all software work deals with in one way or another. Tracking defect containment requires, at a minimum, a test process and a defect tally system. When combined with post-release defect tallies, this basic data supports the primary defect containment metric: Total Containment Effectiveness (TCE). The computation is simple, where:
• Errors = Potential defects that are found during the phase that created them.
• Defects = Errors that escape to a subsequent development or delivery phase.

Obviously, a high TCE is best. While TCE data can be collected and the metric can be easily computed, many software organizations cannot document their performance on this basic metric. Benchmarking shows a range of TCE in the industry from about 65% to 98%, with many organizations somewhere in the 75%-85% range. In some software businesses where there are particularly strong penalties for escaped defects (high-end data storage systems, for example), the TCE can approach and reach 100%.
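The slide's TCE formula was an image and did not survive extraction. The sketch below assumes the usual containment ratio, faults found before release divided by all faults (pre- plus post-release); the function name and tallies are illustrative.

```python
def total_containment_effectiveness(found_pre_release: int, found_post_release: int) -> float:
    """Assumed TCE: share of all known faults that were caught before release, as a percentage."""
    total = found_pre_release + found_post_release
    return 100.0 * found_pre_release / total if total else 0.0

# Example tallies: 420 faults caught by reviews, inspections, and test; 60 reported after release.
tce = total_containment_effectiveness(found_pre_release=420, found_post_release=60)
print(f"TCE = {tce:.1f}%")  # 87.5%, consistent with the industry range noted above
```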
Figure 1: Total Containment Effectiveness for Software Projects of Different Size and "Schedule Compression" (from Patterns of Software System Failure and Success by Capers Jones, pp. 31-32)

If the same defects that are counted to compute TCE are classified and grouped according to type and, even better, with respect to where they were inserted into the process, some rudimentary defect causal analysis can be done, as the short sketch below illustrates.
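A minimal sketch of that grouping, assuming defect records that carry an "inserted" and a "found" phase, is shown here; the counts are invented and simply reproduce the kind of breakdown Figure 2 charts.

```python
from collections import Counter

# Illustrative fault records: phase where the defect was inserted and phase where it was found.
faults = [
    {"inserted": "coding", "found": "test"},
    {"inserted": "coding", "found": "test"},
    {"inserted": "design", "found": "test"},
    {"inserted": "requirements", "found": "test"},
    {"inserted": "coding", "found": "post-release"},
]

# Origins of defects found during test (the breakdown that Figure 2 charts).
origins = Counter(f["inserted"] for f in faults if f["found"] == "test")
total = sum(origins.values())
for phase, count in origins.most_common():
    print(f"{phase:12s} {count:3d}  ({100 * count / total:.0f}%)")
```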
Figure 2: Origins of Defects Found During Test

In Figure 2 we see that, for defects found in test, most were inserted during coding. Building a measurement system for classifying these defects by type would be the next step in uncovering the key defect root causes. That, of course, would help us to reduce or remove those root causes. In concert with root-cause (defect insertion) reduction, we may want to step up efforts to reduce the escape of defects from upstream phases to test. That highlights the next goal.

Goal 2: Find and Fix Defects Closer to Their Origin

This requires that, in concert with the creation of important work products (not just code, of course, but documents like requirements, specifications, and user documentation), we integrate activities that are effective in finding the errors that could become downstream defects. Formal work-product inspections, based on Fagan's seminal work at IBM [3], have been shown to be the most efficient and effective way to find potential defects and develop useful data about them. With inspections, and perhaps supplementary defect-finding activity, in place, we could track defect insertion rates at each key development or delivery stage. Figure 3 shows where defects attributed to design were found. Note that 35% were found "in phase" (during design), with lesser proportions found by test, during coding, and by the customer (post-release).
Figure 3: Pareto Chart for Design Defects

Data like this enables the computation of two related containment metrics: Phase Containment Effectiveness (PCE) and Defect Containment Effectiveness (DCE). PCE tracks the ability of each phase to find defects before they escape that phase. DCE tracks the ability of each phase to find defects passed to it by upstream phases. Each of these measures provides insight into the strengths and weaknesses of the error and defect detection processes in each phase. Tracking these numbers, together with defect type classifications, can identify patterns that shed light on the causes of the types of defects that are found. As a system like this matures, with data over time and across multiple projects, an understanding of the defect "time dynamics" can begin to come into view, setting the stage for the next goal. The use of a defect containment scorecard to track TCE, DCE, PCE, and defect repair costs is described in the related article "Is Software Inspection Value Added?"
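Given a matrix of defect counts by insertion phase and detection phase, PCE and DCE as described above fall out directly. The sketch below uses invented counts chosen so that PCE for design comes out at 35%, matching Figure 3; the phase list and helper names are assumptions, not part of the original article.

```python
PHASES = ["requirements", "design", "coding", "test", "post-release"]

# counts[(inserted, found)] = number of defects; the numbers are invented for illustration.
counts = {
    ("design", "design"): 35, ("design", "coding"): 15,
    ("design", "test"): 30,   ("design", "post-release"): 20,
    ("coding", "coding"): 40, ("coding", "test"): 50,
    ("coding", "post-release"): 10,
}

def pce(phase: str) -> float:
    """Phase Containment Effectiveness: in-phase finds / all defects inserted in the phase."""
    inserted = sum(n for (ins, _), n in counts.items() if ins == phase)
    caught = counts.get((phase, phase), 0)
    return caught / inserted if inserted else 0.0

def dce(phase: str) -> float:
    """Defect Containment Effectiveness: finds in this phase / defects escaping into it from upstream."""
    i = PHASES.index(phase)
    upstream = PHASES[:i]
    incoming = sum(n for (ins, fnd), n in counts.items()
                   if ins in upstream and PHASES.index(fnd) >= i)
    caught = sum(n for (ins, fnd), n in counts.items() if ins in upstream and fnd == phase)
    return caught / incoming if incoming else 0.0

print(f"PCE(design) = {pce('design'):.2f}")  # 35 / 100 = 0.35, as in Figure 3
print(f"DCE(test)   = {dce('test'):.2f}")    # (30 + 50) / (30 + 20 + 50 + 10) = 0.73
```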
Table 1: Software Organization Goals Versus Processes and Metrics

Goal 1: Reduce Released Defects
• Required processes: unit/integration/system test; operational definitions for classifying defects; (optional) define the "unit" for work products delivered
• Enabled processes: pre-release vs. post-release defect tallies; focused defect fix/removal work; a problem-solving knowledge base; basic causal analysis; yield assessments for work-product units delivered; yield predictions for work-product units planned
• Metrics: Total Containment Effectiveness (TCE); defect stratification by type; Poisson modeling for defect clustering; Total Defects per Unit (TDU); Rolled Throughput Yield (RTY)

Goal 2: Find and Fix Defects Closer to Their Origin
• Required processes: an upstream defect detection process (inspections); a defect sourcing process
• Enabled processes: defect insertion rates and defect find rates for phases or other segments of the work breakdown structure (WBS); gathering the data necessary for process monitoring and improvement; improved causal analysis
• Metrics: Phase Containment Effectiveness (PCE) and Defect Containment Effectiveness (DCE); Defects per Unit (DPU) for phases or other segments of the WBS, and their contributions to TDU

Goal 3: Predict Defect Find and Fix Rates During Development and After Release
• Required processes: Rayleigh or other time-dynamic modeling of defect insertion and repair; a defect estimating model calibrated to the history and current state of the process
• Enabled processes: given data on defects found during upstream development or WBS stages, predict defect find rates downstream; predict total defects for an upcoming project of specified size and complexity
• Metrics: best-fit Rayleigh model; predictive Rayleigh model

Goal 4: Compare Implementations Within the Company
• Required processes: the company's choice of appropriate normalizing factors (LOC, FP, etc.) to convert defect counts into meaningful defect densities
• Enabled processes: defect density comparisons across groups, sites, code bases, etc. within the company
• Metrics: defects per function point; defects per KLOC

Goal 5: Benchmark Implementations Across Companies
• Required processes: defined opportunities for counting defects in a way that is consistent within the company and any companies being benchmarked
• Enabled processes: defect density comparisons with other companies (and, if applicable, other industries)
• Metrics: DPMO; sigma level; Cpk; z-score
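The density and benchmarking metrics in the last two rows of Table 1 are simple normalizations. The sketch below shows defects per KLOC, defects per function point, and DPMO using the standard defects-per-million-opportunities definition; the counts and the choice of opportunities per unit are illustrative, and the sigma level/Cpk/z-score conversions are omitted.

```python
def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    """Defect density normalized by thousands of lines of code."""
    return defects / (lines_of_code / 1000)

def defects_per_function_point(defects: int, function_points: int) -> float:
    """Defect density normalized by function points."""
    return defects / function_points

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects Per Million Opportunities = defects / (units * opportunities per unit) * 1e6."""
    return defects / (units * opportunities_per_unit) * 1_000_000

print(defects_per_kloc(defects=120, lines_of_code=80_000))           # 1.5 defects/KLOC
print(defects_per_function_point(defects=120, function_points=800))  # 0.15 defects/FP
print(dpmo(defects=120, units=800, opportunities_per_unit=5))        # 30000.0 DPMO
```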
Looking Ahead

In part 2 we'll look at Goal 3: Predict Defect Find and Fix Rates During Development and After Release. Building on the foundation provided by containment measures, we will explore defect arrival rate prediction models and tests for their adequacy. In part 3 we'll cover Goals 4 and 5, related to defect density measures, including the elusive Defects per Million Opportunities (DPMO).

Read Six Sigma Software Metrics, Part 2 »

About the Author

David L. Hallowell is a Managing Partner of Six Sigma Advantage, Inc. and has 20+ years of experience as an engineer, manager, and Master Black Belt. As Digital's representative to Motorola's Six Sigma Research Institute, he worked on the original courseware for Black Belts and the application of Six Sigma to software. Mr. Hallowell has supported Six Sigma deployments and trained many Black Belts and trainers worldwide. With a special focus on Design for Six Sigma, he has led development teams to breakthrough improvements in the concept development and design of a number of commercial products. Mr. Hallowell has patents and publications in the area of microelectronics packaging and high-speed interconnect. He has authored courses in Software DFSS, Design of Experiments, C++, and Computational Intelligence Tools. He is a co-founder of Six Sigma Advantage Inc., where he co-authored the Black Belt, Green Belt, and Foundation curriculum. He can be reached via email at dhallowell@6siga.com.

Footnotes and References

[1] Yield = good units / units started. "First Time Yield" (FTY) computes this quantity before any rework.
[2] FP = Function Points, a measure of software work-product size based on functionality and complexity. While this system is independent of implementation language, there are conversion factors that map function point counts to lines of code in cases where code is the work product being sized. As an order-of-magnitude rule of thumb, 1 FP converts to about 100 lines of C code.
[3] M. E. Fagan, "Design and Code Inspections to Reduce Errors in Program Development," IBM Systems Journal, vol. 15, no. 3, 1976.
