IT Quality Testing and the Defect Management Process

A guide to understanding defects and opportunities in IT Quality Assurance
  • Audience: Can you think of any additional goals for the DMP?
  • It should also be noted that there are some human factors/cultural issues involved with the defect discovery process.  When a defect is initially uncovered, it may be very unclear whether it is a defect, a change, user error, or a misunderstanding.  Developers may resist calling something a defect because that implies "bad work" and may not reflect well on the development team.  Users may resist calling something a "change" because that implies that the developers can charge them more money.  Some organizations have skirted this issue by initially labeling everything by a different name -- e.g.,  "incidents" or "issues."  From a defect management perspective, what they are called is not an important issue.  What is important is that the defect be quickly brought to the developers' attention and formally controlled.
  • To report on the status of individual defects. To provide tactical information and metrics to help project management make more informed decisions, e.g., redesign of error-prone modules, the need for more testing, etc. To provide strategic information and metrics to senior management: defect trends, problem systems, etc. To provide insight into areas where the process could be improved to either prevent defects or minimize their impact. To provide insight into the likelihood that target dates and cost estimates will be achieved.
  • Functional team – Ad hoc; Performance; SIT; UAT; Post go-live
  • Audience: What other risks would affect the DMP?
  • Human errors. Types of errors: omission, ignorance, commission, typography, knowledge, information, external.

IT Quality Testing and the Defect Management Process Presentation Transcript

  • 1. Defect Management Process
    Yolanda Williams | ybowdrywilliams@gmail.com | http://www.slideshare.net/ybowdrywilliams
    A practical guide for handling opportunities in IT Quality Assurance
  • 2. Agenda
    Today's session is divided into two main sections.
    What is a Defect Management Process?
    ◦ Knowledge sharing
    ◦ Discussion
    Defect analysis and recommendations
    ◦ Critical analysis
    ◦ Issues
    ◦ Change control coordination
    ◦ UAT and dry run defects
    ◦ Recommendations
    ◦ Discussion
  • 3. Process Improvement: What Are They Thinking?
    Developers: "While the defect itself may not be a big deal, the fact that there was a defect is a big deal."
    Testers: "If this defect could have gotten this far into the process before it was captured, what other defects may be present that have not been discovered?"
    Customers: "A defect is anything that causes customer dissatisfaction, whether or not it is in the requirements, design, code, or testing."
    Program leadership's advice: efforts should be made to analyze the process that originated the defect, to understand what caused it.
  • 4. Management Reporting: What Do Leaders Need?
    ◦ To make more informed decisions, e.g., redesign of error-prone modules, the need for more testing, etc.
    ◦ To provide strategic information and metrics to senior management: defect trends, problem systems, etc.
    ◦ To provide insight into areas where the process could be improved to either prevent defects or minimize their impact.
    ◦ To provide insight into the likelihood that target dates and cost estimates will be met or overrun.
  • 5. Goals of the DMP
    Agreement on the goals of the Defect Management Process. The foremost goals are to:
    ◦ Prevent defects
    ◦ Detect defects early
    ◦ Minimize their impact
    ◦ Decrease change requests
    The process should be risk driven, integrated with software development, and continuously improved. Most defects are caused by the process itself.
  • 6. Defect Management Process: Defect Prevention, Defect Discovery, Defect Resolution, Process Improvement
  • 7. Objectives
    To enable participants to understand and apply defect prevention concepts.
    Defect prevention objectives:
    ◦ Identify and analyze the causes of defects
    ◦ Reduce the number of defect categories
    ◦ Reduce the frequency of common defects
    ◦ Reduce the extent of defect escapes between test phases and releases
  • 8. Defining Defect Prevention
    Defect prevention is a process of improving quality and productivity by preventing the injection of defects into a software work product.
    Definition: "...an activity of continuous institutionalized learning during which common causes of errors in work products are systematically identified and process changes eliminating those causes are made." [Eickelmann]
  • 9. Defect Prevention Activities
    ◦ Establish the practice of root cause analysis within projects for analysis of identified defects
    ◦ Identify critical processes as part of root cause analysis
    ◦ Set goals for improving critical processes, with team-level accountability
    ◦ Reduce the most frequent types of defects, such as "not following coding guidelines" or ambiguity within requirements and specifications
    ◦ Analyze opportunities for improvement by conducting escape analysis
  • 10. Defect Prevention
    ◦ It is virtually impossible to eliminate defects altogether.
    ◦ Testers and developers should collaborate to detect defects quickly and minimize the risk.
    ◦ Assess the critical risks associated with the system and identify the defects.
    To minimize expected impact: identify critical risks, estimate the expected impact of each, and prioritize based on impact (see the sketch below).
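The prioritization flow on slide 10 (identify critical risks, estimate expected impact, prioritize based on impact) can be made concrete with a small calculation. Below is a minimal Python sketch; the risk names and the likelihood/impact figures are hypothetical illustrations, not data from the deck.

```python
# Minimal sketch: prioritize critical risks by expected impact.
# Risk names and figures are hypothetical, for illustration only.

risks = [
    # (risk, likelihood of occurring, estimated impact in rework hours)
    ("Interface change breaks a downstream system",      0.30, 400),
    ("Ambiguous requirement reaches SIT untested",       0.50, 120),
    ("Environment baseline drift blocks test execution", 0.20, 250),
]

def expected_impact(likelihood: float, impact_hours: float) -> float:
    """Expected impact = probability the risk occurs * cost if it does."""
    return likelihood * impact_hours

# Rank risks so prevention effort goes to the highest expected impact first.
ranked = sorted(risks, key=lambda r: expected_impact(r[1], r[2]), reverse=True)

for name, likelihood, impact in ranked:
    print(f"{expected_impact(likelihood, impact):6.1f} expected rework hours  {name}")
```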
  • 11. Deliverable Baselines
    When should baselining occur in the development timeline?
    ◦ When a predefined milestone is reached, the product is baselined.
    ◦ Further development work continues from one stage to another.
    ◦ A product should be considered baselined when developers pass it on for integration testing.
  • 12. Defect Discovery
    A defect is said to be discovered when it is brought to the attention of the developers and acknowledged (i.e., "Accepted") as valid. A defect discovery team should comprise respected and knowledgeable individuals and should be led by a facilitator.
    ◦ Find/discover defect: discover defects before they become major problems.
    ◦ Report defect: report defects to developers so that they can be resolved.
    ◦ Acknowledge/accept defect: the developers agree that the reported defect is valid.
  • 13. Defect Resolution Process
    A resolution process needs to be established for use in the event there is a dispute regarding a defect. For example, if the group uncovering the defect believes it is a defect but the developers do not, a quick resolution process must be in place. The two recommended processes are:
    ◦ Arbitration by the software owner: the customer using the software determines whether or not the problem is a defect.
    ◦ Arbitration by a software development manager: a senior manager of the software development department is selected to resolve the dispute.
    Resolution steps: prioritize risk, schedule the fix, fix the defect, report the resolution.
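To tie slides 12 and 13 together, here is a minimal Python sketch of the defect lifecycle they describe, modeled as a simple state machine. The state names and allowed transitions are assumptions drawn from the slides (find, report, accept or dispute, then prioritize, schedule, fix, and report the resolution); they do not describe any particular defect-tracking tool.

```python
# Minimal sketch of the defect lifecycle implied by slides 12-13.
# State names and transitions are assumptions, not a tool's workflow.
from enum import Enum, auto

class DefectState(Enum):
    DISCOVERED = auto()   # found by a tester or user
    REPORTED = auto()     # brought to the developers' attention
    DISPUTED = auto()     # sent to arbitration (software owner or dev manager)
    ACCEPTED = auto()     # developers acknowledge it as a valid defect
    SCHEDULED = auto()    # prioritized by risk and scheduled for a fix
    FIXED = auto()        # fix implemented
    RESOLVED = auto()     # resolution reported back to the finder

ALLOWED = {
    DefectState.DISCOVERED: {DefectState.REPORTED},
    DefectState.REPORTED:   {DefectState.ACCEPTED, DefectState.DISPUTED},
    DefectState.DISPUTED:   {DefectState.ACCEPTED, DefectState.RESOLVED},  # arbitration decides
    DefectState.ACCEPTED:   {DefectState.SCHEDULED},
    DefectState.SCHEDULED:  {DefectState.FIXED},
    DefectState.FIXED:      {DefectState.RESOLVED},
    DefectState.RESOLVED:   set(),
}

def advance(current: DefectState, target: DefectState) -> DefectState:
    """Move a defect to the next state, rejecting transitions the process does not allow."""
    if target not in ALLOWED[current]:
        raise ValueError(f"Cannot move from {current.name} to {target.name}")
    return target

# Example: a disputed defect that is accepted after arbitration, then fixed.
state = DefectState.DISCOVERED
for nxt in (DefectState.REPORTED, DefectState.DISPUTED, DefectState.ACCEPTED,
            DefectState.SCHEDULED, DefectState.FIXED, DefectState.RESOLVED):
    state = advance(state, nxt)
    print(state.name)
```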
  • 14. Process Improvement: General Feedback
    ◦ Senior management must understand, support, and be a part of the defect management program.
    ◦ The defect management process should be integrated into the overall software development process.
    ◦ To the extent practical, the process should be repeatable and automated.
    ◦ Specific development approaches (e.g., testing, inspections, reviews, exit criteria, etc.) should be chosen based on project objectives and the risks that must be addressed.
    ◦ Process improvement efforts should be driven by the measurement process: metrics, reporting, and decision making.
    ◦ The process does not require a significant investment, yet it will likely result in a significant payback.
  • 15. Implementing Process Improvements
    Continuous improvement through collaboration:
    ◦ Define critical metrics
    ◦ Identify critical risks
    ◦ Identify process improvements
    ◦ Develop the project management plan
    ◦ Pilot the "improved" defect management process
  • 16. Define Critical Metrics
    Critical metrics joint definition session:
    ◦ What can we do to ensure that more of these defects are found earlier in the cycle?
    ◦ Have the defects from these "last minute" discoveries been analyzed to determine why they were missed in our normal SIT test cycle?
    ◦ When these issues are found late in the cycle, what is done to ensure that we catch them next time?
    ◦ Much of this stems from the way our business team is doing their testing (ad hoc testing coming very late in the process).
    Correction of Errors (COE) analysis:
    ◦ The number of defects found in the final days of UAT, plus all of those found during the dry runs.
    ◦ Why were they not found earlier?
    ◦ Which phase of testing uncovers the majority of them?
  • 17. Calculated Metrics and Phases
    % Complete, % Defects Corrected, % Test Coverage, % Rework, % Test Cases Passed, % Test Effectiveness, % Test Cases Blocked, % Test Efficiency, 1st Run Fail Rate, Defect Discovery Rate, Overall Fail Rate (a calculation sketch for a few of these follows below).
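Several of these metrics are simple ratios over test-execution and defect counts. The slide does not give exact formulas, so the Python sketch below uses common interpretations of a few of them; treat the definitions (and the sample counts) as assumptions.

```python
# Sketch of a few of the listed metrics, using common (assumed) definitions.

def pct(numerator: float, denominator: float) -> float:
    """Percentage helper that avoids division by zero."""
    return 100.0 * numerator / denominator if denominator else 0.0

def test_metrics(planned, executed, passed, blocked, failed_first_run, failed_total):
    return {
        "% Complete":           pct(executed, planned),
        "% Test Cases Passed":  pct(passed, executed),
        "% Test Cases Blocked": pct(blocked, planned),
        "1st Run Fail Rate":    pct(failed_first_run, executed),
        "Overall Fail Rate":    pct(failed_total, executed),
    }

# Example with made-up counts.
for name, value in test_metrics(planned=1360, executed=1280, passed=1065,
                                blocked=25, failed_first_run=180,
                                failed_total=215).items():
    print(f"{name}: {value:.1f}%")
```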
  • 18. Identify Critical Risks
    Risks that negatively impact the critical metrics set. A few such risks may be:
    ◦ Stable, mature technology vs. new technology
    ◦ Technical platform
    ◦ Large integrated system vs. small stand-alone system
    ◦ Tight target dates
    ◦ Limited budget
    ◦ Bottlenecks within the development and testing cycles
    ◦ Resource constraints: an overview of all resources within development and testing, to reduce resource access request/on-boarding bottlenecks and resource competition between projects
  • 19. Identify Process Improvements: Escapes
    This is done by driving defect discovery as far back into the software development process as possible. Benefits of the escape analysis process:
    ◦ Improve product quality and customer satisfaction
    ◦ Lower the cost of resolving customer-found defects
    ◦ Reduce defect escapes between test phases and releases
    Identified process improvements include:
    ◦ Test scenario and test case reviews
    ◦ Exit criteria for developer testing
    ◦ QA acceptance of the development environment / non-functional requirements
    ◦ Technical requirements
    ◦ Environment baselines
    ◦ GUI checklists prior to test case execution
    ◦ No ad hoc testing: document all variants of testing and their ...
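Escape analysis can be quantified by counting, for each phase, how many of the defects attributable to that phase were only found later. The following Python sketch illustrates the idea; the phase list and defect records are hypothetical.

```python
# Minimal escape-analysis sketch: for each phase, what fraction of its defects
# "escaped" and were found in a later phase? Phase order and data are hypothetical.
from collections import defaultdict

PHASES = ["DIT", "FIT", "SIT", "UAT", "Prod"]  # earlier -> later

# Each record: (phase that should have caught the defect, phase where it was found)
defects = [
    ("DIT", "DIT"), ("DIT", "SIT"), ("DIT", "UAT"),
    ("SIT", "SIT"), ("SIT", "UAT"), ("SIT", "Prod"),
    ("UAT", "UAT"),
]

attributed = defaultdict(int)  # defects attributable to each phase
escaped = defaultdict(int)     # of those, how many were found in a later phase

for origin, found in defects:
    attributed[origin] += 1
    if PHASES.index(found) > PHASES.index(origin):
        escaped[origin] += 1

for phase in PHASES:
    if attributed[phase]:
        rate = 100.0 * escaped[phase] / attributed[phase]
        print(f"{phase}: {escaped[phase]}/{attributed[phase]} defects escaped ({rate:.0f}%)")
```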
  • 20. Types of Errors (Human Errors)
    Omission: knowledge, ignorance, informational, process-related.
    Translational: miscommunication, mismatch between solution and requirement, mismatch between requirement and test case.
    Design and coding: affect data integrity, alter correctly stored data, affect downstream system dependencies, affect testing outcomes.
    Testing: failure to notice a problem, misreading the screen, failure to execute a planned test, criteria disagreements, CRs and rejections.
  • 21. Lessons Learned: Feedback
    Testing:
    ◦ Integration testing should be scheduled so that the core modules are tested first.
    ◦ Rigorous unit testing should be done to avoid logical errors.
    ◦ Basic reviews and testing should be done at the developer level before handing the code over to the testers and reviewers.
    Communication:
    ◦ Client responsibilities should be clearly communicated at the beginning of the project.
    ◦ Design documents should contain all the necessary validations to avoid validation errors.
    ◦ QA should create an accessible communication model to increase transparency, collaboration, and trust.
    Process:
    ◦ A checklist should be used to avoid GUI errors.
    ◦ Test cases should be formed with test data.
    ◦ The development team should accept a defect while researching it so that it can move through the pipeline.
    ◦ HPQC should be used in all test phases (DIT, FIT, SIT, UAT, Prod) to maintain visibility of all program defects and for trend assessment.
  • 22. Sample Issue #1
    Corrections include:
    ◦ Increased downstream communication for Tier 1 software owners
    ◦ Create a GUI checklist to baseline what is acceptable viewing per page type
    ◦ Increase development visibility into high-impact change control at pre/post go-live checkpoints
    ◦ Include the Change Coordinator in install planning for installs that can increase risk to the production environment
    ◦ Escalation method: IT Change Coordinator for the release management team, to the Program Manager, RM Manager, and QA Management
    Concerns:
    ◦ Increased workload
    ◦ The multiple installs of the OM700 program that needed review and approvals
    ◦ Issues were found during a dry run last week
    ◦ Issues were found in a load test over the past weekend
    ◦ The change for next week was found last night
    ◦ Not enough planning and test cases performed on the screen that this program runs in
    ◦ Reactive response
    The change for Monday went to the CAB meeting to ensure proper testing and planning had been done. Why? To prevent impacting issues going forward, as occurred with the go-live production installs.
  • 23. Sample Defect Analysis #1
    Defects:
    ◦ 13 open defects
    ◦ 60% of defects with source code and design as the root cause
    ◦ 107 rejected defects for this triage
    ◦ 30% of all reported defects were rejected
    Opportunities:
    ◦ Implement a rejection resolution process to understand why so many defects are reported and rejected.
    ◦ Increase visibility into design traceability and unit testing/development integration testing for increased quality in upstream program processes.
    [Charts: Defect Cause / Defect Status]
  • 24. Status: IT Release Testing as of 21-Oct-2011
    [Chart: Defect Mean Time To Repair ("MTTR") by severity (Sev 1-4) for System Integration Test, weekly from 22-Aug-2011 to 07-Nov-2011; average MTTR 10.39 days]
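The MTTR figures on this and the later status slides are averages of repair time per severity. The Python sketch below shows one way such numbers can be derived from a defect log; the field layout and sample records are hypothetical.

```python
# Minimal MTTR sketch: mean time to repair per severity, in days.
# Field layout and sample records are hypothetical.
from collections import defaultdict
from datetime import date

# Each closed defect: (severity, date opened, date fixed)
closed_defects = [
    (1, date(2011, 9, 5),  date(2011, 9, 7)),
    (2, date(2011, 9, 1),  date(2011, 9, 14)),
    (2, date(2011, 9, 20), date(2011, 9, 29)),
    (3, date(2011, 8, 25), date(2011, 9, 12)),
]

repair_days = defaultdict(list)
for severity, opened, fixed in closed_defects:
    repair_days[severity].append((fixed - opened).days)

for severity in sorted(repair_days):
    days = repair_days[severity]
    print(f"Sev {severity} MTTR: {sum(days) / len(days):.2f} days")

all_days = [d for days in repair_days.values() for d in days]
print(f"Average MTTR: {sum(all_days) / len(all_days):.2f} days")
```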
  • 25. Status: IT Release Testing as of 21-Oct-2011
    [Chart: Test Case "Pass" Powercurve, 22-Aug-2011 to 07-Nov-2011; series: SIT Actual Pass, SIT Projection, SIT Trend, Regression Actual Passed, Regression Projection, Regression Trend]
    SIT: planned start 30-Aug-11, planned finish 21-Oct-11, actual start 30-Aug-11, trend finish 4-Nov-11
    Regression: planned start 24-Oct-11, planned finish 4-Nov-11, actual start 24-Oct-11
  • 26. Test Case Defect Density
    Total number of errors found in test scripts vs. test scripts developed and executed:
    Test case defect density = (defective test scripts / total test scripts) * 100
    Example: 1,360 test scripts developed, 1,280 executed, 1,065 passed, 215 failed.
    Test case defect density = (215 * 100) / 1,280 = 16.8%
    This 16.8% value can also be called the test case efficiency %, which depends on the total number of test cases that uncovered defects.
    A smaller test case density % and increased test case efficiency go hand in hand with high test case execution rates.
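The slide's worked example can be reproduced in a few lines of Python. Note that the example divides by the number of executed scripts (1,280), not the number developed.

```python
# Reproduce the slide's test case defect density example.
executed = 1280   # total test scripts executed
failed = 215      # defective (failed) test scripts

defect_density = failed * 100.0 / executed
print(f"Test case defect density: {defect_density:.1f}%")  # prints 16.8%
```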
  • 27. Status: IT Release Testing
    [Chart: Defect Density (defects per passed test case) for System Integration Test, weekly from 22-Aug-2011 to 07-Nov-2011; series: overall defect density, industry-average defect density, Sev 1-2 density, Sev 3-4 density; total density 68.60%]
  • 28. Status: IT Release Testing
    [Chart: Defect Discovery and Severity 1-2 Backlog for System Integration Test, weekly from 22-Aug-2011 to 07-Nov-2011; series: Defects (Sev 1-2), Defects (Sev 3-4), Defect Backlog (Sev 1-2)]
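The backlog series in this chart tracks how many discovered defects remain unresolved each week. Below is a minimal sketch of that derivation from weekly discovery and resolution counts; the counts are made up for illustration.

```python
# Minimal defect-backlog sketch: backlog = cumulative discovered - cumulative resolved.
# Weekly counts are hypothetical.
weeks      = ["22-Aug", "29-Aug", "05-Sep", "12-Sep", "19-Sep"]
discovered = [12, 18, 25, 20, 15]   # defects opened each week
resolved   = [5, 14, 22, 24, 18]    # defects closed each week

open_count = 0
for week, found, fixed in zip(weeks, discovered, resolved):
    open_count += found - fixed
    print(f"{week}: {open_count} open defects")
```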
  • 29. IT Testing Iteration 2: UAT and Dry Run Analysis
    Defects: [chart]
    Opportunities:
    ◦ Implement a rejection resolution process to understand why so many defects are reported and rejected.
    ◦ Increase visibility into design traceability and unit testing/development integration testing for increased quality in upstream program processes.
  • 30. Status: Sample Release Testing Pass Trend Analysis
    [Chart: Test Case "Pass" Powercurve, 15-Sep-2011 to 09-Feb-2012; series: SIT Actual Pass, SIT Projection, SIT Trend, Regression Actual Passed, Regression Projection, Regression Trend]
    SIT: planned start 19-Sep-11, planned finish 28-Nov-11, actual start 21-Sep-11, trend finish 30-Jan-12
    Regression: planned start 29-Nov-11, planned finish 5-Dec-11, actual start 29-Nov-11
  • 31. Status: Sample Mean Time To Repair Data
    [Chart: Defect Mean Time To Repair ("MTTR") by severity (Sev 1-4) for System Integration Test, weekly from 15-Sep-2011 to 09-Feb-2012; average MTTR 13.21 days]
  • 32. Status: Sample Defect Discovery and Resolution Backlog
    [Chart: Defect Discovery and Severity 1-2 Backlog for System Integration Test, weekly from 15-Sep-2011 to 09-Feb-2012; series: Defects (Sev 1-2), Defects (Sev 3-4), Defect Backlog (Sev 1-2)]
  • 33. Defect Prevention Support Criteria and Actions
    Criteria for implementing defect prevention efforts:
    ◦ Management approval and support: management must support the project team and the objectives of the defect management effort if it is to be successful.
    ◦ Receptivity of the project team: the project team should wholeheartedly embrace the program. Project management, in particular, must take ownership of the successful implementation of the program.
    ◦ Project duration: the project should not be so long that results will not be known for a long period of time. A project duration of three to six months is probably ideal.
    ◦ Project risk: the project should be sufficiently large and complex that the results will be taken seriously, but not so large and complex that success of the pilot project is unlikely.
    ◦ Early in the life cycle: the defect management program should be integrated into project plans, not retrofitted into a project whose approach is already well established.
    Defect prevention actions:
    ◦ Conduct training: the project team should be trained in defect management concepts if necessary, for example when on-boarding new team members.
    ◦ Incorporate defect management into the project plan: each pilot project should adapt the defect management approach to its particular circumstances. Emphasis should be placed on automating as much of the process as practical, and on measuring the process. Data collected from the pilot projects should form the baseline for the critical metrics set.
    ◦ Review results and refine the approach: progress in implementing the defect management program should be monitored and the approach refined as appropriate.
    ◦ Report results: periodic status reports covering pilot project results should be issued.
  • 34. Problem Closure and Congratulating the Team
    ◦ Acknowledge the significance and value of the problem solution
    ◦ Recognize the team's collective effort in solving the problem, as well as individual contributions
    ◦ Document what was learned in solving the problem: lessons learned not only about the problem that was solved, but also about the problem-solving process
    ◦ Consider investigating other potential causes as preventive actions
    ◦ Create case studies / problem-solving reports
  • 35. Thank You
    If you have any questions or concerns about this presentation, contact Yolanda Williams at ybowdrywilliams@gmail.com.