100 Defects Isn't Bad!

How do you properly compare the quality of two or more software deliverables without an effective normalizing metric? The answer is you can't. If project A has 100 defects and project B has 50 defects, do you automatically assume project B is the higher-quality deliverable? That may not be the case, but it is often the perception of the end user. An effective normalizing metric allows you to properly measure and compare the level of quality across software deliverables. It can also be used to manage end-user expectations regarding the quality of the software in relation to the functional value being delivered. Furthermore, the normalizing metric can be used to predict future quality outcomes and to establish service levels of quality performance. Learn how you can quickly and easily incorporate this all-important metric into your quality program.

    100 Defects Isn't Bad! Presentation Transcript

    • 100 Defects Isn't That Bad! David Herron, d.herron@davidconsultinggroup.com. QUEST 2013 North America. Measure. Optimize. Deliver. Phone +1.610.644.2856
    • Defining Software Quality: How do you define software quality in your organization?
    • Software Quality Defined: absence of defects; conformance to requirements; certification standards met; maintainable; scalable; reliable; usable; secure.
    • Tracking Software Quality: Mr. I.M.A. Pib is upset. He is the VP of the Store Systems Division. He just saw the first-quarter dashboard of results, and his #1 priority project, Store Inventory, has the greatest number of defects. Here is what was reported to Mr. Pib:
          Project          Delivery   Cost (000s)   Defects
          PO Special       On Time    $500          12
          Vendor Mods      Late       $760          18
          Pricing Adj.     Early      $80           5
          Store Inventory  On Time    $990          22
      You are the development manager. How might you respond to Mr. Pib? Do we have all the information we need to properly evaluate these outcomes?
    • Tracking Software Quality: Size (value) can serve as a normalizing metric, and a cost per unit of work (rate) can now be calculated. Defect density, calculated as defects/size, for Mr. I.M.A. Pib's project is in fact the lowest of all his projects (a short calculation sketch follows this slide):
          Project          Delivery   Cost (000s)   Defects   Size (Value)   Rate        Defect Density
          PO Special       On Time    $500          12        250            $2,000.00   0.048
          Vendor Mods      Late       $760          18        765            $993.46     0.024
          Pricing Adj.     Early      $80           5         100            $800.00     0.050
          Store Inventory  On Time    $990          22        1498           $660.88     0.015
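To make the slide's point concrete, here is a minimal Python sketch (not part of the original deck) that recomputes the rate and defect density columns from the raw cost, defect, and size figures above; the project records are taken from the slide, everything else is illustrative.

```python
# Sketch of the slide's normalization: cost per FP (rate) and defect density (defects/size).
projects = [
    # (name, cost_in_dollars, defects, size_in_function_points)
    ("PO Special",      500_000, 12,  250),
    ("Vendor Mods",     760_000, 18,  765),
    ("Pricing Adj.",     80_000,  5,  100),
    ("Store Inventory", 990_000, 22, 1498),
]

for name, cost, defects, size in projects:
    rate = cost / size          # cost per function point
    density = defects / size    # defects per function point
    print(f"{name:<16} rate=${rate:,.2f}/FP  density={density:.3f}")

# Store Inventory reports the most defects (22) but the lowest defect
# density (~0.015), so it is actually the highest-quality delivery here.
```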
    • Size Does Matter. Finding: nine out of ten projects that fail have not been properly sized. Consider: when you build a house, you specify all the functions and features you want; these are your requirements. The builder then generates an estimate based on the size (square footage) of your requirements. Size is the key to effectively managing software projects.
    • Characteristics of an Effective Sizing Metric: meaningful to both developer and business user; defined (industry recognized); consistent (methodology); easy to learn and apply; accurate, statistically based; available when needed (early).
    • Function Points: An Effective Sizing Metric. Function Point Analysis is a standardized method for measuring the functionality delivered to an end user. Benefits: a quantitative (objective) measure; industry data as a basis for comparison; expectations (perceived customer value) managed; software process improvement requirements satisfied.
    • The Function Point Methodology. The software deliverable is sized based upon the functionality delivered. Five key components are identified based on the logical user view: External Inputs, External Outputs, External Inquiries, Internal Logical Files, and External Interface Files. [Slide diagram: external inputs, outputs, and inquiries crossing the application boundary; internal logical files maintained within the application; an external interface file maintained by an external application.]
    • What Do We Count? Inputs: screens (adds, changes, deletes, queries) and input files and input transactions (batch interfaces). Internal logical files: tables, data files, control files. Outputs: output files and output transactions, plus other outputs such as reports, files, XML, views, fiche, tape, diskettes, letters, notices, and alarms. Control information. External tables and files referenced from other applications (not maintained by this application).
    • Component Complexity & Weights. Complexity calculations are a function of the number of data elements, the files referenced, and data complexity (a lookup sketch follows this slide).
          Complexity weights             Low     Avg      High     Total
          Internal Logical File (ILF)    __ x 7  __ x 10  __ x 15  ___
          External Interface File (EIF)  __ x 5  __ x 7   __ x 10  ___
          External Input (EI)            __ x 3  __ x 4   __ x 6   ___
          External Output (EO)           __ x 4  __ x 5   __ x 7   ___
          External Inquiry (EQ)          __ x 3  __ x 4   __ x 6   ___
          Total Unadjusted FPs                                     ___
      File complexity matrix (Data Element Types = number of unique data fields; rows = Record Element Types or File Types Referenced):
          RETs/FTRs   DETs 1-4   DETs 5-15   DETs 16+
          0-1         Low        Low         Average
          2           Low        Average     High
          3+          Average    High        High
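As an editorial illustration of the complexity matrix above, the following Python sketch looks up a file component's Low/Average/High rating from its record element types and data element types; the function name and interface are assumptions, not IFPUG tooling.

```python
# Minimal sketch of the file-complexity lookup shown above:
# rows = record element types (or file types referenced), columns = data element types.
def file_complexity(record_elements: int, data_elements: int) -> str:
    """Return Low / Average / High per the matrix on this slide."""
    col = 0 if data_elements <= 4 else 1 if data_elements <= 15 else 2
    row = 0 if record_elements <= 1 else 1 if record_elements == 2 else 2
    matrix = [
        ["Low",     "Low",     "Average"],  # 0-1 record element types
        ["Low",     "Average", "High"],     # 2
        ["Average", "High",    "High"],     # 3+
    ]
    return matrix[row][col]

print(file_complexity(record_elements=2, data_elements=20))  # -> High
```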
    • The Counting Process. The formal process: 1) identify components; 2) assess complexity; 3) apply weightings; 4) compute function points.
          Components    Low     Avg      High     Total
          Data Stores   __ x 7  __ x 10  __ x 15  ___
          Interfaces    __ x 5  __ x 7   __ x 10  ___
          Inputs        __ x 3  __ x 4   __ x 6   ___
          Outputs       __ x 4  __ x 5   __ x 7   ___
          Inquiries     __ x 3  __ x 4   __ x 6   ___
          Total Function Points                   ___
    • Identifying the Functionality. [Slide diagram: an accounts payable example. Inputs from the user: purchase order adds/changes, payments, invoices. Interface: the Purchase Order System. Data stores: purchase order info, payments, invoices. Inquiry: vendor payment status. Outputs: accounts payable, paid invoices.]
    • Sizing Example. The process: 1) identify components; 2) assess complexity; 3) apply weightings; 4) compute function points. For the accounts payable example, with all components rated average complexity (a worked calculation follows this slide):
          Components                     Low     Avg     High     Total
          Internal Logical File (ILF)    __ x 7  3 x 10  __ x 15  30
          External Interface File (EIF)  __ x 5  1 x 7   __ x 10  7
          External Input (EI)            __ x 3  3 x 4   __ x 6   12
          External Output (EO)           __ x 4  1 x 5   __ x 7   5
          External Inquiry (EQ)          __ x 3  1 x 4   __ x 6   4
          Function Point Size                                     58
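The 58 FP result can be reproduced with a few lines of Python; this sketch simply applies the average-complexity weights from the slide to the example's component counts.

```python
# Steps 3-4 of the counting process (apply weightings, compute function points)
# for the example above; counts and average-complexity weights come from the slide.
WEIGHTS_AVG = {"ILF": 10, "EIF": 7, "EI": 4, "EO": 5, "EQ": 4}
counts      = {"ILF": 3,  "EIF": 1, "EI": 3, "EO": 1, "EQ": 1}

total_fp = sum(counts[c] * WEIGHTS_AVG[c] for c in counts)
print(total_fp)  # 3*10 + 1*7 + 3*4 + 1*5 + 1*4 = 58 function points
```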
    • Function Point Quality Measures. Defect Density: measures the number of defects identified across one or more phases of the development project lifecycle and compares that value to the total size of the application, calculated as number of defects (by phase or in total) / total number of function points. Test Case Coverage: measures the number of test cases necessary to adequately support thorough testing of a development project, calculated as number of test cases / number of function points.
    • Function Point Quality Measures. Reliability: a measure of the number of failures an application experiences relative to its functional size, calculated as number of production failures / total application function points. Rate of Growth: growth of an application's functionality over a specified period of time, calculated as current number of function points / original number of function points. Stability: used to monitor how effectively an application or enhancement has met the expectations of the user, calculated as number of changes / number of application function points. (A sketch of these measures follows this slide.)
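The following sketch (not from the deck) expresses the five function point quality measures from the last two slides as small Python helpers; argument names are illustrative.

```python
def defect_density(defects: int, function_points: float) -> float:
    return defects / function_points                 # defects per FP

def test_case_coverage(test_cases: int, function_points: float) -> float:
    return test_cases / function_points              # test cases per FP

def reliability(production_failures: int, application_fp: float) -> float:
    return production_failures / application_fp      # failures per FP

def rate_of_growth(current_fp: float, original_fp: float) -> float:
    return current_fp / original_fp                  # > 1.0 means the application grew

def stability(changes: int, application_fp: float) -> float:
    return changes / application_fp                  # changes per FP

# Example: 22 defects against the 1,498 FP Store Inventory project
print(round(defect_density(22, 1498), 3))            # ~0.015
```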
    • Measures of Quality. Defect Removal Efficiency: used to evaluate the effectiveness of development quality activities. Defect Density: used to evaluate the overall quality of the developed software. Delivered Defects: used to evaluate the quality of the delivered software. Test Cases Passed First Time: used to determine the quality of software being tested. Inspection Rate by Document: used to determine if inspections positively impact quality. Volatility: used to monitor trends in the number of changes per month.
    • Non-FP Quality Measures. Defect Removal Efficiency tracks the number of defects removed by lifecycle phase (a calculation sketch follows this slide):
          Phase                 Reqs    Design   Code    Unit Test   Sys. Test   UAT     Prod   Total
          Insertion Rate        21      30       35      17          11          3       -      117
          Defects Found         5       16       27      31          24          12      2      117
          Removal Efficiency    4.3%    13.7%    23.1%   26.5%       20.5%       10.3%   1.7%
          Review Effectiveness (peer reviews: Reqs, Design, Code): 41.0%
          Test Effectiveness (Unit Test, Sys. Test, UAT): 57.3%
      Customer Satisfaction: gather information relating to delivery performance, communication, management, solutions, etc., along with the level of importance of each.
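Here is a short Python sketch of the defect removal efficiency arithmetic in the table above, reproducing the per-phase percentages and the review and test effectiveness figures; the phase names and counts come from the slide.

```python
# Defect removal efficiency: the share of all defects removed in each phase,
# plus overall review and test effectiveness.
found = {"Reqs": 5, "Design": 16, "Code": 27,
         "Unit Test": 31, "Sys. Test": 24, "UAT": 12, "Prod": 2}
total = sum(found.values())                      # 117

for phase, n in found.items():
    print(f"{phase:<10} {n / total:6.1%}")       # e.g. Code -> 23.1%

review_eff = sum(found[p] for p in ("Reqs", "Design", "Code")) / total
test_eff   = sum(found[p] for p in ("Unit Test", "Sys. Test", "UAT")) / total
print(f"Review effectiveness {review_eff:.1%}")  # ~41.0%
print(f"Test effectiveness   {test_eff:.1%}")    # ~57.3%
```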
    • A Measurement Baseline Model. Quantitative measures (what you are doing): size, effort, duration, cost, quality. Qualitative measures (how you are doing): process, methods, skills, tools, management. Together these yield measured performance and capability maturity: a baseline of performance and a standard of performance.
    • Baseline Results Example. Delivery rate (size vs. productivity in hrs/FP): small size projects are the norm; performance levels vary across all projects; the extent of variation is greater than desired; variation is potentially driven by mixing support and development tasks. Time to market (size vs. duration in months): duration on small projects reflects industry norms; a relatively high degree of consistency seen in the duration data suggests a basis for an estimation model; the size-to-duration relationship suggests that current methods are scalable. [Slide charts: scatter plots of project size against productivity and duration for projects A through L.]
    • Quantitative Performance Evaluation. Collect quantitative data: size, effort, duration, cost, quality. Quantitative assessment: perform functional sizing on all selected projects; collect data on project level of effort, cost, duration, and quality; calculate productivity rates for each project, including functional size delivered per staff month, cost per functional size, time to market, and defects delivered (a sketch of these rates follows this slide). Measured performance results, baseline productivity: average project size 133; average FP/SM 10.7; average time-to-market 6.9 months; average cost/FP $939; delivered defects/FP 0.0301.
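The per-project rates mentioned above can be illustrated with a small Python sketch; the input record below is hypothetical (the effort, cost, and defect figures are back-calculated so the outputs match the baseline averages on the slide) and is shown only to make the calculations explicit.

```python
# Illustrative per-project productivity rates; the record itself is not source data.
project = {"size_fp": 133, "effort_staff_months": 12.4,
           "cost": 124_900, "duration_months": 6.9, "delivered_defects": 4}

fp_per_staff_month = project["size_fp"] / project["effort_staff_months"]
cost_per_fp        = project["cost"] / project["size_fp"]
defects_per_fp     = project["delivered_defects"] / project["size_fp"]

print(f"{fp_per_staff_month:.1f} FP/SM, ${cost_per_fp:.0f}/FP, "
      f"{defects_per_fp:.4f} defects/FP")   # 10.7 FP/SM, $939/FP, 0.0301 defects/FP
```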
    • Qualitative Performance Evaluation. Collect qualitative data: process, methods, skills, tools, management. Qualitative assessment: conduct interviews with members of each project team; collect project profile information; develop performance profiles to display strengths and weaknesses among the selected projects. Capability profiles cover six categories:
          Management: team dynamics, morale, project tracking, product owner involvement, experience levels, leadership skills, business impact
          Definition: evolutionary requirements, process, design reuse, customer involvement, experience, automation
          Design: process, reviews, iteration planning, release planning, automation
          Build: code reviews, configuration management, code reuse, data administration, experienced staff, automation
          Test: formal testing methods, test plans, testing experience, effective test tools, customer involvement, certification
          Environment: new technology, automated process, adequate training, organizational dynamics
    • Modeled Improvements. Baseline productivity: average project size 133; average FP/SM 10.7; average time-to-market 6.9 months; average cost/FP $939; delivered defects/FP 0.0301. After modeled process improvements (peer reviews, requirements management, configuration management), projected productivity: average project size 133; average FP/SM 24.8; average time-to-market 3.5 months; average cost/FP $467; delivered defects/FP 0.0075. Performance improvements: productivity ~ +131%; time to market ~ -49%; defect ratio ~ -75%. [Slide tables: baseline and improved qualitative profile scores (management, definition, design, build, test, environment) for fifteen projects, including Accounts Payable, Priority One, HR Enhancements, Client Accounts, ABC Release, Screen Redesign, Customer Web, Whole Life, Regional East and West, Cashflow, Credit Automation, NISE, Help Desk Automation, and Formula One Upgrade.]
    • Overall Information Framework. [Slide graphic: an executive management enterprise dashboard feeding business decisions, with panels for milestone status (baseline, plan, actual, percent variance from charter and kickoff through deployment and close), project resource and effort status (cumulative planned vs. actual effort and earned-value performance), project defect status (total defects discovered vs. closed), and requirements growth and stability (added, changed, deleted, total requirements); underneath, a process management layer with a measurement baseline repository of project profile scores, historical measures, and project estimates for the end user.]
    • Dashboard Service Levels. Each measure is benchmarked against the current median, the industry median (primarily Level 3 organizations), and the goal by 2012 (a calculation sketch follows this slide).
          Estimating Accuracy - Effort: (actual labor hours - estimated) / estimated; positive values represent overruns, negative underruns. Example: (1000 - 500) / 500 = +100% overrun. Median +22%, industry 0%, 2012 goal 18%.
          Estimating Accuracy - Schedule: (actual calendar months - estimated) / estimated; positive values represent overruns, negative underruns. Example: (4 - 3) / 3 = +33% overrun. Median +21%, industry 0%, 2012 goal 18%.
          Productivity: function points / labor months; varies with project size. Example: 100 FPs / 4 staff months = 25. Median 17, industry 26, 2012 goal 20.
          Unit Cost: dollars / function points; dollars are estimated from labor hours at $110 per hour and 145 hours per staff month. Example: $200,000 / 100 = $2,000. Median $938, industry $613, 2012 goal $800.
          System Delivery Rate: function points / calendar months; the QSM value is a mean (median not available). Example: 100 FPs / 2 calendar months = 50. Median 32, industry 49, 2012 goal 40.
          Requirements Volatility: (added + changed + deleted) / total baselined requirements; for all but one project the data was not available, so the project manager gave an estimate. Example: 10 changed / 100 baselined = 10%. Median 20%, industry 10%, 2012 goal 15%.
          Client Satisfaction: ratings by project manager (5 = very satisfied, 1 = very unsatisfied); for all but three projects, ratings by clients were unavailable. Median 4, industry not available, 2012 goal 4.
          System Test Effectiveness: defects found in system test / total defects, where total defects = defects found in system test + defects found in production (first 30 days). Example: 40 / 50 = 80%. Median 83%, industry 90%, 2012 goal 90%.
          Delivered Defect Density: (defects found in production / function points) * 100, i.e. defects per 100 function points, where production = first 30 days. Example: (5 defects / 200 FPs) * 100 = 2.5. Median 2.3, industry 1.3, 2012 goal 1.8.
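For reference, the slide's worked examples can be reproduced with the following Python sketch; the function names are illustrative, not part of any dashboard tooling.

```python
def estimating_accuracy(actual: float, estimated: float) -> float:
    """Positive = overrun, negative = underrun."""
    return (actual - estimated) / estimated

def productivity(function_points: float, labor_months: float) -> float:
    return function_points / labor_months

def unit_cost(dollars: float, function_points: float) -> float:
    return dollars / function_points

def delivered_defect_density(production_defects: int, function_points: float) -> float:
    """Defects per 100 function points (production = first 30 days)."""
    return production_defects / function_points * 100

print(f"{estimating_accuracy(1000, 500):+.0%}")   # +100% effort overrun
print(productivity(100, 4))                       # 25 FP per staff month
print(unit_cost(200_000, 100))                    # $2,000 per FP
print(delivered_defect_density(5, 200))           # 2.5 defects per 100 FP
```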
    • Alternative Sizing Options.
          Sizing Technique     Standard   Lifecycle      Comparative Data
          Lines of Code        No         Build          No
          Modules/Components   No         Design         No
          Use Cases            No         Requirements   No
          Story Points         No         Requirements   No
          Function Points      Yes        Requirements   Yes
          COSMIC               Yes        Requirements   Partial
          NESMA                Yes        Requirements   Partial
          Mark II              Yes        Requirements   Limited
    • Alternative Sizing Options (continued). [Slide charts: sizing techniques plotted along two spectrums. Ease of use vs. power: hours/days, test cases, story points, and lines of code have fewer rules and are easier to learn; use case points, COSMIC and NESMA function points, IFPUG Function Points, and Mark II have more rules and are harder to learn but offer more power. Consistency/accuracy: accuracy increases in roughly the same order, from hours/days and story points (less accurate) through lines of code, use case points, and COSMIC/NESMA to IFPUG Function Points and Mark II (more accurate). Internal, organizational definitions sit at the ease-of-use end; defined, industry-specific definitions sit at the power end.]
    • Summary. Quality is defined as a measure of value for the customer. Size is a critical normalizing metric. FPA serves as an effective sizing method. Historical baseline data can provide potential predictive capabilities.
    • Contact Us. Email: d.herron@davidconsultinggroup.com. Phone: 1-610-644-2856. Web: http://www.davidconsultinggroup.com. Social: @DavidConsultGrp, /DavidConsultGrp, /company/David-Consulting-Group. Measure. Optimize. Deliver.