
Defect Density Metric (SQA)


Published in: Education

  1. Defect density is the number of confirmed defects detected in a software component during a defined period of development or operation, divided by the size of that component.
  2. • It applies to manual testing.
     • Size can be measured in function points, SLOC, modules, subsystems, pages, or documents.
  3. Defect:
     • Confirmed and agreed upon (not just reported).
     • Dropped defects are not counted.
     Size:
     • Function points (FP)
     • Source lines of code (SLOC)
  4. • Many metrics have parameters such as "total number of defects", e.g. total number of requirements defects.
     • Clearly, we only ever know about the defects that are found, so we never know the "true" value of many of these metrics.
     • Further, as we find more defects, this number will increase. Hopefully, defect discovery is asymptotic over time: we find fewer defects as time goes on, especially after release.
     • So metrics that require "total defects" information will change over time, but hopefully converge eventually.
     • The later in the lifecycle we compute the metric, the more meaningful the results.
     • If and when we use these metrics, we must be aware of this effect and account for it.
  5. • Many defect metrics have "size" parameters.
     • The most common size metric is KLOC (thousands of lines of code). It depends heavily on language, coding style, and competence; code generators may produce large amounts of code and distort the measure; and it does not take the "complexity" of the application into account. It is, however, easy to compute automatically and "reliably" (though it can be manipulated).
     • An alternative size metric is function points (FPs): a partly subjective measure of delivered functionality. FPs directly measure the functionality of the application: number of inputs and outputs, files manipulated, interfaces provided, etc. They are more valid but less reliable, and take more effort to gather.
     • We use KLOC in our examples, but the metric works just as well with FPs.
  6. Suppose the bug counts per module are:
     • Module 1 = 20 bugs
     • Module 2 = 30 bugs
     • Module 3 = 50 bugs
     • Module 4 = 60 bugs
     and the total lines of code for each module are:
     • Module 1 = 1200 LOC
     • Module 2 = 3023 LOC
     • Module 3 = 5034 LOC
     • Module 4 = 6032 LOC
     Then we calculate defect density as:
     Total bugs = 20 + 30 + 50 + 60 = 160
     Size = 15289 LOC
     Defect density = 160 / 15289 = 0.01046 defects/LOC ≈ 10.465 defects/KLOC
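The arithmetic in the worked example above can be checked with a short script (module names and figures are taken directly from the slide):

```python
# Defect density for the worked example: (confirmed defects, LOC) per module.
modules = {
    "Module 1": (20, 1200),
    "Module 2": (30, 3023),
    "Module 3": (50, 5034),
    "Module 4": (60, 6032),
}

total_defects = sum(d for d, _ in modules.values())   # 160
total_loc = sum(loc for _, loc in modules.values())   # 15289

density_per_loc = total_defects / total_loc           # defects per line of code
density_per_kloc = density_per_loc * 1000             # defects per KLOC

print(f"Total defects:  {total_defects}")
print(f"Total size:     {total_loc} LOC")
print(f"Defect density: {density_per_kloc:.3f} defects/KLOC")
```

Running this prints a density of 10.465 defects/KLOC, matching the slide.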
  7. Problems with the metric:
     • Size estimation itself has problems.
     • The "total defects" problem (we never know the true total).
     • Defects may not equal reliability (the user-experience problem).
     • Statistical significance when applied to phases and components: the actual number of defects may be so small that random variation can mask significant variation.
  8. • Very useful as a measure of an organization's capability to produce defect-free outputs.
     • Can be compared with other organizations in the same domain.
     • Outlier information is useful for spotting problem projects and problem components.
     • Can be used in-process, if the comparison is with defect densities of other projects in the same phase. If much lower, it may indicate defects are not being found; if much higher, it may indicate poor quality of work.
     • (In other words, we need to go behind the numbers to find out what is really happening. Metrics can only provide triggers.)
  9. Uses:
     • Comparing the relative number of defects in various software components, so that high-risk components can be identified and resources focused on them.
     • Comparing software products, so that the quality of each product can be quantified and resources focused on those with low quality.
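One way to apply the component-comparison idea in practice is to compute each component's own density and rank the components by it. This is a sketch, reusing the per-module figures from the earlier example; a per-module density exposes risk that the aggregate figure hides:

```python
# Rank components by defect density to flag high-risk ones.
# Figures reuse the per-module example from the earlier slide.
modules = {
    "Module 1": (20, 1200),
    "Module 2": (30, 3023),
    "Module 3": (50, 5034),
    "Module 4": (60, 6032),
}

def density_per_kloc(defects: int, loc: int) -> float:
    """Defects per thousand lines of code."""
    return defects / loc * 1000

# Highest-density (highest-risk) components first.
ranked = sorted(
    ((name, density_per_kloc(d, loc)) for name, (d, loc) in modules.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, dens in ranked:
    print(f"{name}: {dens:.2f} defects/KLOC")
```

With these figures, Module 1 stands out at roughly 16.7 defects/KLOC while the other three cluster near 10, so it would be the first candidate for extra review and testing effort.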
  10. Thank You
