14 | June 2011
The Magazine for Professional Testers
Improving the Test Process
www.testingexperience.com | ISSN 1866-5705
Predicting Defects in System Testing Phase Using a Model: A Six Sigma Approach

by Muhammad Dhiauddin Mohamed Suffian

This article is a revised version of a paper presented at the 4th International Symposium on Information Technology (ITSIM '10).

Defect prediction is an important aspect of the software development life cycle. The rationale for knowing the predicted number of functional defects early in the life cycle, particularly at the start of the testing phase, is, apart from finding as many defects as possible during the testing phase, to determine when to stop testing and to ensure all defects have been found in testing before the software is delivered to the intended end user. It also ensures that wider test coverage is put in place to discover the predicted defects. This research is aimed at achieving zero known post-release defects of the software delivered to the end user. To achieve the target, the research effort focuses on establishing a test defect prediction model using the Design for Six Sigma methodology in a controlled environment, where all factors contributing to defects in the software are collected prior to commencing the testing phase. It identifies the requirements for the prediction model and how the model can benefit from them. It also outlines the possible predictors associated with defect discovery in the testing phase. The proposed test defect prediction model is demonstrated via multiple regression analysis incorporating testing metrics and development-related metrics as the predictors.

Why use defect prediction at the start of the testing phase?

1. Better resource planning for test execution across projects: As one test resource can work on more than one testing project, the prediction of defects in the testing phase allows the appropriate number of resources to be planned and assigned across multiple projects, and supports resource planning activities to ensure optimized resources and higher productivity.

2. Wider test coverage in discovering defects: To ensure wider and better test coverage, prediction of defects can improve the way defects are found, since there is now a target number of defects to be found, either by adding more types of testing or by adding more scenarios related to user experience, which results in better root cause analysis.

3. Improved test execution time to meet the project deadline: Putting defect prediction in place will reduce schedule slippage problems resulting from testing activities. With a predicted number of defects to be discovered, test execution can be planned accordingly to meet the project deadline.

4. The model also addresses the reliability of the software to be delivered: More defects found within the testing phase means more defects are prevented from escaping to the field. Defect prediction provides a direction on how many defects should be contained within the phase and, later, contributes to the zero known post-release defects of the software. In the long run, it portrays the stability of the whole team effort in producing software.

Research Objectives

The main question to address is: "How can the total number of defects to be found be predicted at the start of the system testing phase using a model?" This is followed by a list of sub-questions to support the main research question:

• What are the key contributors to the test defect prediction model?
• What are the factors that contribute to finding defects in the system testing phase?
• How can we measure the relationship between the defect factors and the total number of defects in the system testing phase?
• What type of data should be gathered and how can it be obtained?
• How can the prediction model help in improving the testing process and improve software quality?

From this list of questions, the research effort has been able to demonstrate the following key items:

• Establish a defect prediction model for the software testing phase
• Demonstrate the approach of building a defect prediction model using the Design for Six Sigma (DfSS) methodology
• Identify the significant factors that contribute to a reliable defect prediction model
• Determine the importance of a defect prediction model for improving the testing process
Building the Model

In Design for Six Sigma (DfSS), the research is organized according to the DMADV phases: Define (D), Measure (M), Analyze (A), Design (D) and Verify (V). DfSS seeks to sequence proper tools and techniques to design more value into new product development, while creating and using higher-quality data for key development decisions. DfSS is achieved when the products or services developed meet customer needs rather than competing alternatives. In general, the DfSS-DMADV phases are described as follows:

• Define – identify the project goals and customer (internal and external) requirements
• Measure – determine customer needs and specifications; benchmark competitors and industry
• Analyze – study and evaluate the process options to meet customer needs
• Design – detail the process to meet customer needs
• Verify – confirm and prove the design performance and ability to meet customer needs

A. Define Phase

The Define Phase primarily involves establishing the project definition and acknowledging the fundamental needs of the research. A typical software production process which applies the V-Model software development life cycle is used as the basis for this research. Throughout the process, the testing team is involved in all review sessions for each phase, starting from planning until the end of the system integration testing phase. The test lab is involved in reviewing the planning document, the requirement analysis document, the design document, the test planning document and the test cases (see Figure 1). The software production process is governed by project management, quality management, configuration and change management, integral and support activities, as well as process improvement initiatives, in compliance with CMMI®. From the figure, the area of study is the functional or system test phase, where defects are going to be predicted. Therefore, only potential factors/predictors in the phases prior to the system testing phase are considered and investigated.

Figure 1: Typical Software Production Process

Two schematic diagrams are produced: a high-level schematic diagram (Figure 2) and a detailed schematic diagram (Figure 3). The high-level schematic diagram deals with establishing the Big Y or business target, the little Ys, the vital Xs and the goal statement against the business scorecard. In this research, the Big Y is to produce software with zero known post-release defects, while the little Ys are the elements that contribute to achieving the Big Y: defect containment in the test phase, customer satisfaction, quality of the process imposed to produce the software, and project management. There are two aspects related to these little Ys: the potential number of defects before the test phase, which is the research interest, and the number of defects after completing the test phase.
Figure 2: Schematic Diagram. Big Y: zero known post-release defects. Little Ys: defect containment in the test phase, customer satisfaction, project management, quality of process. Vital Xs: potential number of defects before test, number of defects after test, level of satisfaction, people capability, process effectiveness, timeline allocation, resource allocation.

For the detailed schematic diagram (or detailed Y-X tree), the possible factors that contribute to test defect prediction are derived from the vital X of interest, the potential number of defects before test, and are summarized in a Y-to-X tree diagram as shown in Figure 3. The highlighted factors are selected for further analysis.

Figure 3: Detailed Y-X Tree Diagram. Candidate factors are grouped under software complexity (requirement, design and component pages; code size; programming language; application thread), historical defects (defect severity, defect type/category, total PRs (defects) raised), knowledge (developer knowledge, tester knowledge, project domain), test process (test case design coverage, targeted total test cases, automation validity, test case execution productivity, total effort in test phase, total effort in phases prior to system test), and errors/faults (requirement, design, CUT, integration, test plan and test case). The highlighted factors are those selected for further analysis.
B. Measure Phase

A Measurement System Analysis (MSA) is conducted to validate that the process of discovering defects is repeatable and reproducible, thus eliminating human error. Ten test cases with known results of PASS and FAIL are identified. Three test engineers are then selected to execute the test cases at random, and this is repeated three times for every engineer. The outcome is presented in Figure 4 below:

TC ID | TC Result | Tester 1 (runs 1-3) | Tester 2 (runs 1-3) | Tester 3 (runs 1-3)
TC1   | PASS | PASS PASS PASS | PASS PASS PASS | PASS PASS PASS
TC2   | FAIL | FAIL FAIL FAIL | FAIL FAIL FAIL | PASS PASS PASS
TC3   | FAIL | FAIL FAIL FAIL | FAIL FAIL FAIL | PASS PASS PASS
TC4   | PASS | FAIL FAIL FAIL | FAIL FAIL FAIL | PASS PASS PASS
TC5   | PASS | PASS PASS PASS | PASS PASS PASS | PASS PASS PASS
TC6   | PASS | PASS PASS PASS | PASS PASS PASS | PASS PASS PASS
TC7   | PASS | PASS PASS PASS | PASS PASS PASS | PASS PASS PASS
TC8   | FAIL | FAIL FAIL FAIL | FAIL FAIL FAIL | FAIL FAIL FAIL
TC9   | PASS | PASS PASS PASS | PASS PASS PASS | PASS PASS PASS
TC10  | PASS | PASS PASS PASS | PASS PASS PASS | PASS PASS PASS

Figure 4: Data for Analysis

The data is then used to run the MSA, and the result is compared against the Kappa value. Since the overall result of all appraisers against the standard is above 70% (the threshold required for the Kappa value), the MSA is considered a PASS, as presented in Figure 5 (a code sketch of this agreement check follows the Analyze Phase introduction below).

Figure 5: Overall Result of MSA (overall result is PASS since the Kappa value is greater than 0.7, i.e. 70%)

As the MSA is passed, the operational definition is prepared, consisting of the type of data to be gathered, the measurements to be used, responsibilities, and the mechanism for obtaining the data, so that the data can be used in the next DfSS phase.

C. Analyze Phase

The data gathered in the Measure Phase is used to perform a first round of regression analysis using the data shown in Figure 6, which was collected from thirteen projects. The predictors used include requirement error, design error, Code and Unit Test (CUT) error, Kilo Lines of Code (KLOC) size, targeted total test cases to be executed, test plan error, test case error, automation percentage, test effort in days, and test execution productivity per staff day. The regression is done against the functional defects as the target. MINITAB software is used to perform the regression analysis via manual stepwise regression, i.e. predictors are added and removed during the regression until a strong statistical result is obtained (a Python sketch of this elimination loop follows Figure 8 below). The output includes the P-value of each predictor against the defects, the R-squared (R-Sq) value and the adjusted R-squared (R-Sq (adj)) value; these figures demonstrate the goodness of the equation and how well it can be used for predicting the defects. The result of the regression analysis is presented in Figure 7.
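The article runs the MSA and its Kappa check in MINITAB. Purely as an illustration, the appraiser-versus-standard agreement behind Figure 5 can be approximated with Cohen's kappa; the minimal sketch below assumes scikit-learn is available, and the data layout and all names are ours, not the author's.

```python
# Sketch: appraiser-vs-standard agreement check from the Measure Phase.
# The article computes this in MINITAB; here Cohen's kappa is used as a
# stand-in, with kappa > 0.7 treated as a PASS, as in Figure 5.
from sklearn.metrics import cohen_kappa_score

# Figure 4, condensed: known result, then each tester's three runs
# (P = PASS, F = FAIL) for test cases TC1..TC10.
rows = [
    # std  tester1 tester2 tester3
    ("P", "PPP", "PPP", "PPP"),  # TC1
    ("F", "FFF", "FFF", "PPP"),  # TC2
    ("F", "FFF", "FFF", "PPP"),  # TC3
    ("P", "FFF", "FFF", "PPP"),  # TC4
    ("P", "PPP", "PPP", "PPP"),  # TC5
    ("P", "PPP", "PPP", "PPP"),  # TC6
    ("P", "PPP", "PPP", "PPP"),  # TC7
    ("F", "FFF", "FFF", "FFF"),  # TC8
    ("P", "PPP", "PPP", "PPP"),  # TC9
    ("P", "PPP", "PPP", "PPP"),  # TC10
]

# The standard verdict is repeated once per run (3 runs per tester).
standard = [std for std, *_ in rows for _ in range(3)]

for i in range(3):  # one kappa per appraiser, against the standard
    verdicts = [run for _, *testers in rows for run in testers[i]]
    kappa = cohen_kappa_score(standard, verdicts)
    print(f"Tester {i + 1} vs standard: kappa = {kappa:.2f}",
          "PASS" if kappa > 0.7 else "FAIL")
```

With the Figure 4 data, each tester's kappa against the standard comes out above 0.7, consistent with the overall PASS reported in Figure 5.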
Project   | Req. Error | Design Error | CUT Error | KLOC | Total Test Cases | Test Plan Error | Test Case Error | Automation % | Test Effort | Test Execution Productivity | Functional Defects
Project A | 5  | 22 | 12 | 28.8 | 224 | 0 | 34 | 0 | 6.38  | 45.8000 | 19
Project B | 0  | 0  | 1  | 6.8  | 17  | 0 | 6  | 0 | 9.36  | 17.0000 | 1
Project C | 9  | 10 | 14 | 5.4  | 24  | 4 | 6  | 0 | 29.16 | 5.8333  | 4
Project D | 7  | 12 | 2  | 1.1  | 25  | 4 | 9  | 0 | 13.17 | 7.0000  | 0
Project E | 11 | 29 | 3  | 1.2  | 28  | 4 | 12 | 0 | 14.26 | 3.4000  | 3
Project F | 0  | 2  | 7  | 6.8  | 66  | 1 | 7  | 0 | 32.64 | 31.0000 | 16
Project G | 3  | 25 | 11 | 4    | 149 | 5 | 0  | 0 | 7.15  | 74.5000 | 3
Project H | 4  | 9  | 2  | 0.2  | 24  | 4 | 0  | 0 | 18.78 | 7.6667  | 0
Project I | 7  | 0  | 1  | 1.8  | 16  | 1 | 3  | 0 | 9.29  | 2.6818  | 1
Project J | 1  | 7  | 2  | 2.1  | 20  | 1 | 4  | 0 | 6.73  | 1.9450  | 0
Project K | 17 | 0  | 3  | 1.4  | 13  | 1 | 4  | 0 | 8.44  | 6.5000  | 1
Project L | 3  | 0  | 0  | 1.3  | 20  | 1 | 7  | 0 | 14.18 | 9.7500  | 1
Project M | 2  | 3  | 16 | 2.5  | 7   | 1 | 6  | 0 | 8.44  | 1.7500  | 0

Figure 6: Data for Regression Analysis

Figure 7: Regression Analysis Result (MINITAB output: coefficients, P-values, R-Sq and R-Sq (adj) for the first-round model)

Project   | Req. Error | Design Error | CUT Error | KLOC | Req. Page | Design Page | Total Test Cases | Test Case Error | Total Effort | Test Design Effort | Functional Defects | All Defects
Project A | 5  | 22 | 12 | 28.8 | 81  | 121 | 224 | 34 | 16.79 | 15.20 | 19 | 19
Project B | 0  | 0  | 1  | 6.8  | 171 | 14  | 17  | 6  | 45.69 | 40.91 | 1  | 1
Project C | 9  | 10 | 14 | 5.4  | 23  | 42  | 24  | 6  | 13.44 | 13.44 | 4  | 4
Project D | 7  | 12 | 2  | 1.1  | 23  | 42  | 25  | 9  | 4.90  | 9.90  | 0  | 0
Project E | 11 | 29 | 3  | 1.2  | 23  | 54  | 28  | 12 | 4.72  | 4.59  | 3  | 3
Project F | 0  | 2  | 7  | 6.8  | 20  | 70  | 88  | 7  | 32.69 | 16.00 | 16 | 27
Project G | 3  | 25 | 11 | 4    | 38  | 131 | 149 | 0  | 64.00 | 53.30 | 3  | 3
Project H | 4  | 9  | 2  | 0.2  | 26  | 26  | 24  | 0  | 5.63  | 5.63  | 0  | 0
Project I | 17 | 0  | 3  | 1.4  | 15  | 28  | 13  | 4  | 9.13  | 7.88  | 1  | 1
Project J | 61 | 34 | 24 | 36   | 57  | 156 | 306 | 16 | 89.42 | 76.16 | 25 | 28
Project K | 32 | 16 | 19 | 12.3 | 162 | 384 | 142 | 0  | 7.00  | 7.00  | 12 | 12
Project L | 0  | 2  | 3  | 3.8  | 35  | 33  | 40  | 3  | 8.86  | 8.86  | 6  | 6
Project M | 15 | 18 | 10 | 26.1 | 88  | 211 | 151 | 22 | 30.99 | 28.61 | 39 | 57
Project N | 0  | 4  | 0  | 24.2 | 102 | 11  | 157 | 0  | 41.13 | 28.13 | 20 | 33

Figure 8: New Set of Data for Regression
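The manual stepwise regression described in the Analyze Phase is performed in MINITAB. As a rough Python equivalent (our own sketch, not the author's tooling), the same backward-elimination idea can be expressed with pandas and statsmodels; the column names in the usage comment are our assumption for data shaped like Figure 6.

```python
# Sketch of the manual stepwise (backward-elimination) regression the
# article performs in MINITAB, using statsmodels. Predictors whose
# P-value exceeds 0.05 are dropped one at a time, worst first.
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Refit OLS repeatedly, dropping the least significant predictor."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")   # ignore the intercept
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:             # all predictors significant
            return model, cols
        cols.remove(worst)                    # drop and refit
    raise ValueError("no significant predictors found")

# Example usage with a data frame shaped like Figure 6 (column names
# are ours, not MINITAB's):
# df = pd.read_csv("projects.csv")
# model, kept = backward_eliminate(
#     df[["req_error", "design_error", "cut_error", "kloc",
#         "total_test_cases", "test_plan_error", "test_case_error",
#         "automation_pct", "test_effort", "exec_productivity"]],
#     df["functional_defects"])
# print(model.summary())  # coefficients, P-values, R-Sq, R-Sq (adj)
```

Keeping the selection manual, as the article does, leaves a human in the loop so that a predictor is retained only when it is both statistically significant and logically related to defect discovery, which is the same filter applied in the Design Phase below.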
D. Design Phase

Based on the result, it has been demonstrated that for this round of regression the possible predictors are design error, targeted total test cases to be executed, test plan error, and test effort in days, for each of which the P-value is less than 0.05. As an overall model equation, this model shows the characteristics of a good model via high R-Sq and R-Sq (adj) percentages: 96.7% and 95.0% respectively. From the regression, this equation is selected for study in the next phase:

Defect = -3.04 + 0.220 Design Error + 0.0624 Targeted Total Test Cases - 2.30 Test Plan Error + 0.477 Test Effort

Further analysis is conducted during the Design Phase due to the need to generate a prediction model that is practical and makes sense from the viewpoint of the software practitioners using it, by relying on logical predictors, filtering the metrics to contain only valid data, and reducing the model to only those coefficients that have a logical correlation to defects. As a result, a new set of data is used to refine the model, as shown in Figure 8. Using the new set of data, the new regression results are presented in Figure 9.

Figure 9: New Regression Result (MINITAB output for the refined model)

From these latest results, it can be demonstrated that the significant predictors for predicting defects are requirement error, CUT error, KLOC, requirement page, design page, targeted total test cases to be executed, and test effort in days from the phases prior to system test. The P-value for each factor is less than 0.05, while the R-Sq and R-Sq (adj) values are 98.9% and 97.6% respectively, which results in a stronger prediction equation. The selected equation is:

Functional Defects (Y) = 4.00 - 0.204 Requirement Error - 0.631 CUT Error + 1.90 KLOC - 0.140 Requirement Page + 0.125 Design Page - 0.169 Total Test Cases + 0.221 Test Effort

E. Verify Phase

The model selected in the Design Phase is now verified against new projects that have yet to enter the system test phase. The actual defects found after system test has been completed are compared against the predicted defects to ensure that the actual defects fall within the 95% prediction interval (PI) of the model, as presented in the last column of Figure 10 (a sketch of evaluating the model and obtaining these intervals follows Figure 11).

No. | Predicted Functional Defects | Actual Functional Defects | 95% CI (min, max) | 95% PI (min, max)
1   | 182 | 187 | (155, 209) | (154, 209)
2   | 6   | 1   | (0, 2)     | (0, 14)
3   | 1   | 1   | (0, 3)     | (0, 6)

Figure 10: Verification Result for the Prediction Model

Based on the verification result, it is clearly shown that the model is fit for use and can be implemented to predict test defects for the software product. This is justified by the actual number of defects found, which falls within the 95% prediction interval. As the test defect prediction model equation is finalized, the next consideration is the control plan for its implementation in the process, i.e. what to do when the actual number of defects found is lower or higher than the prediction. Figure 11 summarizes the control plan:

Actual Defects < Predicted Defects:
• Perform thorough testing during ad-hoc tests
• Perform an additional test strategy that relates to the discovery of more functional defects
• Re-visit the model and the factors used to define the model

Actual Defects > Predicted Defects:
• Re-visit the errors captured during the requirement, design and CUT phases
• Re-visit the errors captured during test case design

Figure 11: Action Plan for Test Defect Prediction Model
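To make the finalized model concrete, the sketch below evaluates the published Design Phase equation for a new project and notes how 95% intervals like those in Figure 10 are obtained from a fitted statsmodels model. Only the coefficients come from the article; the function name, workflow and variable names are our illustration, not the author's code.

```python
# Sketch: using the finalized prediction model from the Design Phase and
# the 95% prediction-interval check from the Verify Phase. Coefficients
# are the published ones; everything else is illustrative.

def predicted_functional_defects(req_error, cut_error, kloc, req_page,
                                 design_page, total_test_cases,
                                 test_effort):
    """Evaluate the selected equation for Functional Defects (Y)."""
    return (4.00 - 0.204 * req_error - 0.631 * cut_error + 1.90 * kloc
            - 0.140 * req_page + 0.125 * design_page
            - 0.169 * total_test_cases + 0.221 * test_effort)

# A model refitted on the Figure 8 data (e.g. with the earlier stepwise
# sketch) also yields intervals like Figure 10: the CI of the mean and,
# more importantly, the observation-level 95% PI. Verification passes
# when the actual defect count falls inside [obs_ci_lower, obs_ci_upper].
#
#   pred = model.get_prediction(new_row)  # new_row: constant + the same
#                                         # predictor columns as the fit
#   frame = pred.summary_frame(alpha=0.05)
#   print(frame[["mean", "mean_ci_lower", "mean_ci_upper",
#                "obs_ci_lower", "obs_ci_upper"]])
```

As a sanity check, plugging Project A's Figure 8 metrics into the function (taking the total effort column, 16.79 days, as the test effort) returns roughly 19.8, close to the 19 functional defects actually recorded for that project.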
Conclusion

From the findings of this research, it is proven that Six Sigma provides a structured and systematic way of building a defect prediction model for the testing phase. The Design for Six Sigma (DfSS) methodology provides the means to clearly determine what needs to be achieved by the research, the issues to be addressed, the data to be collected, what needs to be measured, and how the model is generated, constructed and validated. Moreover, from the model equation it is possible to discover strong factors that contribute to the number of defects in the testing phase, while acknowledging the need to consider other important factors that have a significant impact on testing defects. This research has also demonstrated the importance of a defect prediction model in improving the testing process, since it contributes to zero known post-release defects of the software. Test defect prediction supports a better test strategy and wider test coverage, while at the same time ensuring the stability of the overall software development effort for releasing a software product.

> biography

Muhammad Dhiauddin Mohamed Suffian is pursuing his PhD (Computer Science) in Software Testing at Universiti Teknologi Malaysia and working as a lecturer at the leading open university in Malaysia. He was the Solution Test Manager at one of the public-listed IT companies in Malaysia, and prior to that he was Senior Engineer and Test Team Lead for the test department in one of the leading R&D agencies in Malaysia. He has almost 7 years of experience in the software/system development and software testing/quality assurance fields. With working experience in IT, automotive, banking and research & development companies, he obtained his technical and management skills from various project profiles. A graduate of the M.Sc. in Real-Time Software Engineering from the Centre for Advanced Software Engineering (CASE), Universiti Teknologi Malaysia, he holds various professional certifications, namely Certified Six Sigma Green Belt, Certified Tester Foundation Level (CTFL) and Certified Tester Advanced Level – Test Manager (CTAL-TM). He also has broad knowledge of CMMI, test processes and methodologies, as well as the Software Development Life Cycle (SDLC). He was involved in and has managed various testing strategies for different projects, including functional, performance, security, usability and compatibility testing, at both system test and system integration test level. His interests fall within the software engineering and software testing areas, particularly performance testing and test management.