Chapter 17: Metrics & Measurements

  1. Metrics & Measurements – Chapter 17
     No part of this presentation may be reproduced without prior written permission from the authors, copyright owner and publisher of the book.
  2. Agenda
     0.1 What are metrics?
     0.2 Why metrics?
     0.3 Steps for metrics
     0.4 Types of metrics
     0.5 Overview slide
     1. Project Metrics
     2. Progress Metrics
     3. Productivity Metrics
     4. Development Metrics
     5. Release Metrics
  3. What is Metrics?
     (Cartoon) "This is the period we noticed excellent profits in the organization …and… Boss, you were on vacation that period!"
  4. Terminology
     1. A set of data is called information, and a set of information combined to provide a perspective is called a metric.
     2. A metric is a quantitative measure that explains the degree to which an attribute of testing, product quality or process has performed.
     3. Effort is the actual time spent on a particular activity or phase. "Elapsed days" is the difference between the start of an activity and its completion.
     4. A measurement is a unit used by metrics (e.g. effort, elapsed days, number of defects, etc.). A metric typically uses one or more measurements.
  5. Why Metrics?
     1. How do you determine the quality and progress of testing?
     2. How much testing is completed?
     3. How much more time is needed for release?
     4. How much time is needed to fix defects?
     5. How many days are needed for release?
     6. How many defects will be reported by customers?
     7. Do you know how to prevent defects rather than finding and fixing them?
     Do you have answers?
  6. Why Metrics for QA?
     1. Testing is the penultimate phase of the product release cycle, so determining the quality and progress of testing is very important.
     2. How much testing is completed can be measured if you know how much total testing is needed.
     3. How much more time is needed for release, e.g. days needed to complete testing = total test cases yet to be executed / test case execution productivity.
     4. How much time is needed to fix defects. The defect trend gives a rough estimate of defects that will arrive in the future; metrics help predict the number of defects that can be found in future test cycles, e.g. total days needed for defect fixes = (outstanding defects yet to be fixed + defects that can be found in future test cycles) / defect fixing capability.
     5. Days needed for release = max(days needed for testing, days needed for defect fixes).
  7. Why Metrics for QA? (continued)
     Days needed for release = max(days needed for testing, days needed for defect fixes + days needed for regressing outstanding defect fixes)
     In summary:
     • When to make the release.
     • What to release – based on defect density across modules, their importance to customers and impact analysis of those defects, the scope of the product can be decided so that the product is released on time. Metrics help in making this decision.
     • Are we releasing the product with known quality? – Metrics are not only about meeting the date but also about knowing the quality of the product, so we can decide whether we are releasing with known quality and whether it will function predictably in the field.
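     The release-date estimate on slides 6 and 7 can be written down directly. Below is a minimal Python sketch of it; the function names and the sample figures are illustrative assumptions, not values from the book.

```python
# Minimal sketch of the release-date estimate from slides 6-7.
# Function names and sample numbers are illustrative, not from the book.

def days_needed_for_testing(tests_remaining: int, tests_executed_per_day: float) -> float:
    """Days to finish testing = test cases yet to be executed / execution productivity."""
    return tests_remaining / tests_executed_per_day

def days_needed_for_defect_fixes(outstanding: int, predicted_future: int,
                                 fixes_per_day: float) -> float:
    """Days to fix = (outstanding defects + defects predicted in future cycles) / fix capability."""
    return (outstanding + predicted_future) / fixes_per_day

def days_needed_for_release(test_days: float, fix_days: float, regression_days: float) -> float:
    """Release needs max(testing days, defect-fix days + regression days)."""
    return max(test_days, fix_days + regression_days)

if __name__ == "__main__":
    test_days = days_needed_for_testing(tests_remaining=400, tests_executed_per_day=40)
    fix_days = days_needed_for_defect_fixes(outstanding=60, predicted_future=30, fixes_per_day=10)
    print(days_needed_for_release(test_days, fix_days, regression_days=2))  # 11.0 days
```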
  8. Steps for Metrics
     Step 1: Identify which measurements are important.
     Step 2: Define the granularity of measurements; granularity depends on how far the data needs to be drilled down. Example:
       Tester: We found 100 more defects in this test pass compared to the previous one.
       Manager: What aspect of the product testing produced more defects?
       Tester: The functionality aspect produced 60 of those 100 defects.
       Manager: Good. Which components in the product produced more functional defects?
       Tester: The "Installation" component produced 40 of those 60.
       Manager: What particular feature produced that many defects?
       Tester: Data migration involving different schemas produced 35 of those 40 defects…
  9. Steps for Metrics (continued)
     Step 3: Decide on the periodicity of metrics.
     Step 4: Analyze the metrics and derive action items for both positives and improvement areas.
     Steps 5–n: Track the action items arising from the metrics.
  10. Types of Metrics
     Project metrics: the set of metrics that indicates how the project is planned and executed.
     Progress metrics: the set of metrics that indicates how the different activities of the project are progressing. The activities include both development and testing; since the focus of this training is testing, only the metrics applicable to testing are discussed.
     Productivity metrics: the set of metrics that takes into account the various productivity numbers that can be collected and used for planning and tracking testing activities.
  11. Overview
     Process Metrics · Product Metrics · Project Metrics
     Project Metrics: effort variance, schedule variance, effort distribution.
     Progress Metrics – testing defect metrics: defect find rate, defect fix rate, outstanding defects rate, priority outstanding rate, defects trend, defect classification trend, weighted defects trend, defect cause distribution.
     Progress Metrics – development defect metrics: component-wise defect distribution, defect density and defect removal rate, age analysis of outstanding defects, introduced and reopened defects rate.
     Productivity Metrics: defects per 100 hrs of testing, test cases executed per 100 hrs of testing, test cases developed per 100 hours, defects per 100 test cases, defects per 100 failed test cases, test phase effectiveness, closed defects distribution.
  12. Project Metrics – Effort Variance (Planned vs Actual)
     [Chart: "Phase-wise Effort Variation" – person-days per phase (Req, Design, Coding, Testing, Doc, Defect fixing), with three series: Baselined Estimate, Revised Estimate, Actual.]
     Knowledge check:
     • What are baselined and revised estimates?
     • How are person-days calculated?
     • What is the purpose of this metric?
     • What is the allowed variance %?
  13. Project Metrics – Schedule Variance (Planned vs Actual)
     [Chart: "Schedule Variance" – number of days for Baseline, Estimated and Actual/Remaining (data labels: 126, 136, 110 and 56 days).]
     Knowledge check:
     • What are baselined and revised estimates?
     • How is the number of days calculated?
     • What is the purpose of this metric?
     • What is the allowed variance % for both effort and schedule?
  14. Project Metrics – Effort & Schedule Variance
     Effort Variance              | Schedule Variance            | Probable causes / result
     Zero or acceptable variance  | Zero variance                | A well executed project
     Zero or acceptable variance  | Acceptable variance          | Needs slight improvement in effort / schedule estimation
     Unacceptable variance        | Zero or acceptable variance  | Under-estimation (people get burnt); needs further analysis
     Unacceptable variance        | Unacceptable variance        | Under-estimation of both effort and schedule
     Negative variance            | Zero or acceptable variance  | Over-estimation; effort and schedule estimation need improvement
     Negative variance            | Negative variance            | Over-estimation and over-schedule; both effort and schedule estimation need improvement
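     As a rough illustration of how the variance figures behind this table could be computed and bucketed, here is a small Python sketch. The variance formula (actual minus estimate, as a percentage of the estimate) and the ±5% / ±10% thresholds are assumptions for the example, not values from the slides.

```python
# Illustrative variance calculation; formula and thresholds are assumptions,
# not taken from the slides.

def variance_pct(actual: float, estimate: float) -> float:
    """Variance as a percentage of the estimate (positive = overrun)."""
    return (actual - estimate) / estimate * 100

def classify(var: float, acceptable: float = 5.0, unacceptable: float = 10.0) -> str:
    """Bucket a variance % the way the slide-14 table does."""
    if var < -acceptable:
        return "negative variance (over-estimation)"
    if abs(var) <= acceptable:
        return "zero or acceptable variance"
    if var <= unacceptable:
        return "acceptable variance"
    return "unacceptable variance (under-estimation)"

if __name__ == "__main__":
    effort = classify(variance_pct(actual=120, estimate=100))    # 20%  -> unacceptable
    schedule = classify(variance_pct(actual=102, estimate=100))  # 2%   -> zero or acceptable
    print(effort, "|", schedule)  # matches the "people get burnt" row of the table
```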
  15. Project Metrics – Effort Distribution
     [Pie chart: "Actual effort distribution" across Req, Design, Coding, Testing, Doc and bug fixing (slice values: 23%, 18%, 15%, 22%, 5% and 17%).]
     1. Mature organizations spend at least 10–15% of effort on requirements, 10–15% on design and 40–50% on testing (this data normally comes from time sheets).
     2. Adequate effort needs to be spent in each SDLC phase for a quality product release (both too much testing and too little testing are issues).
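     A small sketch of how such a phase-wise distribution could be derived from time-sheet data; the entries below are made-up hours, not figures from the chart.

```python
from collections import Counter

# Assumed time-sheet entries: (phase, hours). All figures are made up.
timesheet = [("Req", 92), ("Design", 72), ("Coding", 60),
             ("Testing", 88), ("Doc", 20), ("Defect fixing", 68)]

hours_by_phase = Counter()
for phase, hours in timesheet:
    hours_by_phase[phase] += hours

total = sum(hours_by_phase.values())
for phase, hours in hours_by_phase.items():
    print(f"{phase}: {hours / total:.0%}")  # phase-wise share of total effort
```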
  16. Progress Metrics – Testing Progress
     [Stacked chart: "Test cases executed" per week (weeks 1–8), split into Pass, Fail, Not Run and Blocked percentages.]
     • An increase in pass % indicates the quality of the product is improving.
     • A decrease in blocked % indicates tests can progress well.
     • A reduced fail % is a requirement for a release.
     • Not-run % should be zero for the release; the final week should have only pass and fail percentages.
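     A sketch of how the weekly split and the final-week rule from this slide could be checked; the status names follow the chart, the counts are illustrative.

```python
# Illustrative final-week status counts; the numbers are made up.
week8 = {"Pass": 180, "Fail": 15, "Blocked": 0, "Not Run": 0}

total = sum(week8.values())
shares = {status: count / total for status, count in week8.items()}
print({status: f"{share:.0%}" for status, share in shares.items()})

# Rule from this slide: no blocked or not-run test cases in the final week.
ready = week8["Blocked"] == 0 and week8["Not Run"] == 0
print("final-week criterion met:", ready)
```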
  17. Progress Metrics – Defect Find Rate
     Objective: the purpose of testing is to find defects early in the test cycle.
     [Chart: "Defect find rate" – number of defects vs time, showing a bell curve.]
     Knowledge check: why does the bell curve above appear in almost all releases if the objective is to find all defects early?
  18. Progress Metrics – Defect Fix Rate
     Objective: the purpose of development is to fix defects as soon as they are identified.
     [Chart: "Defect fix rate" overlaid on the defect find rate – number of defects vs time.]
     Knowledge check: why does the bell curve appear here too?
  19. Progress Metrics – Outstanding Defects
     Objective: a well-executed project keeps the number of outstanding defects very close to zero throughout the test cycle.
     [Chart: "Outstanding defects" overlaid on the defect find rate – number of defects vs time.]
     Knowledge check: why does the bell curve appear then?
  20. Progress Metrics – Priority Outstanding (P0, P1) Defects
     Objective: provide additional focus on the defects that matter to the release.
     [Chart: priority outstanding defects overlaid on the defect find rate – number of defects vs time.]
  21. Progress Metrics – Defect Trend
     Objective: the effectiveness of analysis increases when the perspectives of find rate, fix rate, outstanding and priority outstanding defects are combined.
     [Chart: "Defect Trend" – defects per week (weeks 1–19) for defect find rate, defect fix rate, outstanding defects and priority outstanding defects.]
     Knowledge check: what is your analysis of the trend above?
  22. Progress Metrics – Defect Distribution & Trend
     Objective: adding the defect-classification perspective to the chart helps in finding out how the defects are distributed.
     [Pie chart: "Defect distribution" – P0 11%, P1 18%, P2 18%, P3 35%, P4 18%. Stacked chart: "Defect classification trend" – defects per week (weeks 1–10) by priority P0–P4.]
     Knowledge check: analyze the week-5 and week-9 data.
  23. Progress Metrics – Weighted Defects
     Weighted defects = (P0 × 5) + (P1 × 4) + (P2 × 3) + (P3 × 2) + P4
     Both "large defects" and "a large number of small defects" affect the product release.
     [Chart: weighted defects per week (weeks 1–10).]
     Knowledge check: what do you understand from the week-9 data? Can we make a release at the end of week 10?
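     A minimal sketch of the weighted-defects formula above; the weekly counts are illustrative, not read from the chart.

```python
# Weighted defects = P0*5 + P1*4 + P2*3 + P3*2 + P4 (formula from this slide).
WEIGHTS = {"P0": 5, "P1": 4, "P2": 3, "P3": 2, "P4": 1}

def weighted_defects(counts: dict) -> int:
    """Collapse per-priority defect counts into a single weighted number."""
    return sum(WEIGHTS[priority] * counts.get(priority, 0) for priority in WEIGHTS)

# Illustrative week-9 snapshot: few large defects but many small ones.
week9 = {"P0": 2, "P1": 10, "P2": 30, "P3": 60, "P4": 40}
print(weighted_defects(week9))  # 2*5 + 10*4 + 30*3 + 60*2 + 40 = 300
```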
  24. Development Metrics – Defect Cause Distribution
     Knowing the causes of defects (why a defect happened) helps in finding more defects and in preventing such defects early.
     [Pie chart: defect cause distribution – Requirement 15%, Design 10%, Code 37%, Feature request 4%, Change request 20%, Third party 8%, Others 6%.]
     Knowledge check: how do you prevent defects? What is the difference between a change request (not a defect) and a feature request?
  25. Development Metrics – Module-wise Defects
     Knowing which components produce more defects helps in defect-fix planning and in deciding what to release.
     [Stacked chart: "Module-wise Defect Distribution" – defects per module (Install, Reports, Admin, Login, GUI, Client, Server, Database, Media, API) by priority P0–P4.]
     Knowledge check: how do you decide what to release?
  26. Development Metrics – Defects/KLOC & Defect Removal Rate
     Defects per KLOC = (total defects found in the product) / (total executable AMD lines of code in KLOC); note: AMD = added/modified/deleted.
     Defect removal % = (defects found through verification activities + defects found by the DEV team) / (defects found by test teams) × 100
     [Chart: "Defects/KLOC & Defect removal %" across releases 1–8.]
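     A minimal sketch of the two formulas above; the function names and sample numbers are assumptions for illustration.

```python
# Defects per KLOC and defect removal %, as defined on this slide.
# Sample numbers are illustrative.

def defects_per_kloc(total_defects: int, amd_kloc: float) -> float:
    """Total defects divided by added/modified/deleted executable KLOC."""
    return total_defects / amd_kloc

def defect_removal_pct(verification_defects: int, dev_defects: int, test_defects: int) -> float:
    """Defects found by reviews/verification and DEV, relative to those found by test teams."""
    return (verification_defects + dev_defects) / test_defects * 100

print(defects_per_kloc(total_defects=180, amd_kloc=30))            # 6.0 defects/KLOC
print(defect_removal_pct(verification_defects=60, dev_defects=30,
                         test_defects=150))                        # 60.0 %
```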
  27. Development Metrics – Age Analysis of Outstanding Defects
     The time needed to fix a defect may be proportional to its age.
     [Stacked chart: "Age analysis of outstanding defects" – cumulative age per week (weeks 1–10) by priority P0–P4.]
     Knowledge check: what do you observe from the week-4 data? What is happening from week 3 to week 7?
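     A small sketch of how a cumulative-age figure like the one plotted above could be derived, assuming each outstanding defect records its priority and the date it was opened; the data layout and the dates are assumptions.

```python
from collections import defaultdict
from datetime import date

# Assumed outstanding-defect records: (priority, date opened). All dates are made up.
outstanding = [("P1", date(2014, 4, 1)), ("P2", date(2014, 4, 10)),
               ("P2", date(2014, 4, 20)), ("P3", date(2014, 4, 25))]

def cumulative_age_by_priority(defects, as_of: date) -> dict:
    """Sum the age in days of every outstanding defect, grouped by priority."""
    ages = defaultdict(int)
    for priority, opened in defects:
        ages[priority] += (as_of - opened).days
    return dict(ages)

print(cumulative_age_by_priority(outstanding, as_of=date(2014, 5, 3)))
# {'P1': 32, 'P2': 36, 'P3': 8}
```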
  28. Development Metrics – Reopened and Introduced Defects
     Testing is not there to find the same defects again; release readiness should consider the quality of defect fixes.
     [Chart: "Introduced & Reopened Defects" – defects per week (weeks 1–10), split into reopened defects and introduced defects.]
     Knowledge check: what do you observe from the week-4 data? What is happening from week 3 to week 7?
  29. Productivity Metrics
     Time for a break! How to communicate productivity loss:
     A manager to his secretary: "I have plenty of work to do in the afternoon; how about you taking the afternoon off…"
     A manager to his subordinate: "We are not firing you! We are just fixing expense limits, and you have already exceeded yours!"
  30. Productivity Metrics – Defects per 100 Hours of Testing
     Defects per 100 hours of testing = (total defects found in the product for a period / total hours spent to find those defects) × 100
     Normalizing defects by the effort spent gives another perspective on release quality.
     [Stacked chart: "Defects per 100 hours of testing" per week (weeks 1–10), by severity: Cosmetic, Minor, Important, Critical, Extreme.]
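     A one-function sketch of the normalization above; the numbers are illustrative.

```python
def defects_per_100_hours(defects_found: int, hours_spent: float) -> float:
    """Defects found in a period, normalized to 100 hours of testing effort."""
    return defects_found / hours_spent * 100

# Illustrative week: 18 defects found in 240 hours of testing.
print(defects_per_100_hours(defects_found=18, hours_spent=240))  # 7.5
```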
  31. Productivity Metrics – Test Productivity
     [Chart: "Productivity Metrics" per week (weeks 1–10) – test cases executed per 100 hours, test cases developed per 100 hours, defects per 100 test cases, defects per 100 failed test cases.]
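     A sketch of the four ratios plotted above, under assumed weekly counts; the data structure and all figures are made up for illustration.

```python
# The four productivity ratios from this chart, with illustrative weekly inputs.
week = {"exec_hours": 200, "dev_hours": 80, "cases_executed": 300,
        "cases_developed": 120, "cases_failed": 24, "defects": 30}

ratios = {
    "test cases executed per 100 hours": week["cases_executed"] / week["exec_hours"] * 100,
    "test cases developed per 100 hours": week["cases_developed"] / week["dev_hours"] * 100,
    "defects per 100 test cases": week["defects"] / week["cases_executed"] * 100,
    "defects per 100 failed test cases": week["defects"] / week["cases_failed"] * 100,
}
for name, value in ratios.items():
    print(f"{name}: {value:.1f}")
```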
  32. Productivity Metrics – Test Phase Effectiveness
     [Pie chart: "Test phase effectiveness" – UT 39%, CT 32%, IT 17%, ST 12%.]
     Testing is the responsibility of everyone, and multiple teams do testing. Hence it is important to analyze which phase (not which team) found more defects.
     Knowledge check: what is the right ratio between unit, feature/component, integration and system test?
  33. Productivity Metrics – Closed Defect Distribution
     [Pie chart: "Closed defect distribution" – Fixed 28%, Will not fix 32%, Duplicate 19%, Not reproducible 11%, As per design 7%, Next release 1%, Others 8%.]
     • The proportion of "fixed" among closed defects is a good metric for both DEV and test teams.
     • Duplicates should be avoided (< 5%).
     • Not-reproducible defects may reappear; they need to be watched carefully.
     • Defects moving to the next release need to stay within a certain band (3–6%).
  34. Release Metrics (1 of 3)
     Name of the metric | Perspectives to be considered | Guidelines
     Test cases executed | Execution %, pass % | 1. 100% of test cases to be executed. 2. At least 98% of test cases should pass.
     Effort distribution | Adequate effort has been spent on all phases | 15–20% of effort spent on each of the requirements, design and testing phases.
     Defect find rate | Defect trend | 1. Defect arrival trend shows a bell curve. 2. Incoming defects are close to zero in the last week.
     Defect fix rate | Defect fix trend | Defect fixing trend matches the arrival trend.
     Outstanding defects trend | Outstanding defects | 1. Outstanding defects trend is downward. 2. Close to zero outstanding defects in the last few weeks prior to release.
     Priority outstanding defects trend | High-priority defects | Close to zero high-priority defects in the last few weeks prior to release.
     Weighted defects trend | High-priority defects as well as a high number of low-priority defects | 1. Weighted defects trend shows a bell curve. 2. Close to zero weighted defects in the last few weeks prior to release.
  35. Release Metrics (2 of 3)
     Name of the metric | Perspectives to be considered | Guidelines
     Defect density and defect removal rate | Defects/KLOC, defect removal % | 1. Defects/KLOC less than 7. 2. Defects/KLOC less than the last release. 3. Defect removal of 50% or more. 4. Defect removal % better than the last release.
     Age analysis of outstanding defects | Age of defects | Age of defects shows a downward trend.
     Introduced and reopened defects | Quality of defect fixes; same defects reappearing | 1. Combined number of outstanding and reopened defects shows a downward trend. 2. Introduced and reopened defects are less than 5% of the defect arrival rate.
     Defects per 100 hours of testing | Whether defect arrival is proportional to effort spent | 1. Defects per 100 hours of testing should be less than 5. 2. Defects per 100 hours of testing shows a downward trend.
     Test cases executed per 100 hours of testing | Whether improved product quality allows more test cases to be executed; whether test cases executed is proportional to effort spent | Test cases executed shows an upward trend.
  36. Release Metrics (3 of 3)
     Name of the metric | Perspectives to be considered | Guidelines
     Test phase effectiveness | Defects found in each test phase | 1. Very low percentage of defects found in the system and acceptance test phases (say less than 12%). 2. A distribution of defects with a reduction in defect % compared to the next test phase. 3. A distribution of UT = 50%, CT = 30%, IT = 13% and ST = 7% would be ideal.
     Closed defects distribution | Whether a good proportion of the defects found by testing are fixed | 1. At least 70% of closed defects are fixed. 2. Non-reproducible defects are less than 5%. 3. Defects moved to the next release are less than 10%.
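     To show how a few of the guidelines in the three release-metrics tables could be checked mechanically, here is a small Python sketch. The thresholds are the ones quoted in the tables; the snapshot structure, field names and sample values are assumptions for illustration.

```python
# Checks a handful of the release guidelines from slides 34-36.
# The snapshot structure and values are assumptions; thresholds come from the tables.
snapshot = {
    "execution_pct": 100, "pass_pct": 98.5,
    "defects_per_kloc": 6.2, "defect_removal_pct": 55,
    "defects_per_100_hours": 4.1, "fixed_pct_of_closed": 72,
}

checks = {
    "all test cases executed": snapshot["execution_pct"] == 100,
    "pass rate >= 98%": snapshot["pass_pct"] >= 98,
    "defects/KLOC < 7": snapshot["defects_per_kloc"] < 7,
    "defect removal >= 50%": snapshot["defect_removal_pct"] >= 50,
    "defects per 100 hrs of testing < 5": snapshot["defects_per_100_hours"] < 5,
    "at least 70% of closed defects fixed": snapshot["fixed_pct_of_closed"] >= 70,
}
for name, ok in checks.items():
    print(("PASS" if ok else "FAIL"), "-", name)
print("release criteria met:", all(checks.values()))
```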
  37. Agenda – Recap
     0.1 What are metrics?  0.2 Why metrics?  0.3 Steps for metrics  0.4 Types of metrics  0.5 Overview slide
     1. Project Metrics  2. Progress Metrics  3. Productivity Metrics  4. Development Metrics  5. Release Metrics
