EuroSTAR Software Testing Conference 2012 presentation on Leveraging Defect Taxonomies for Testing by Michael Felderer. See more at: http://conference.eurostarsoftwaretesting.com/past-presentations/
1. Leveraging Defect Taxonomies for Testing
Results from a Case Study
Michael Felderer, QE LaB Business Services
www.eurostarconferences.com
@esconfs
#esconfs
2. Motivation
In practice, defect taxonomies are only used for a posteriori
classification of failures, but they can also be applied to …
• control the design of test cases
• provide a precise statement about release quality
• allocate testing resources
Continuous integration of defect taxonomies into the test process
3. Defect Taxonomy Supported Testing in a Nutshell
Before DTST, the standard test process runs through:
• Test Planning and Control
• Test Analysis and Design
• Test Implementation and Execution
• Test Evaluation and Reporting
With DTST, the same four phases remain, now supported by the defect taxonomy.
4. Defect Taxonomy Supported Testing
Integration of defect taxonomies into an ISTQB-like standard test process
Features and test process extensions
• Prioritization of requirements, defect categories and failures
• Effective product-specific defect taxonomy
• Linkage of requirements to defect categories and failures
• Test design based on defect categories
• Assignment and analysis of failures after a test cycle
Decision support for application of defect taxonomy supported
testing (DTST)
5. Defect Taxonomy Supported Testing
(1) Test Planning
• Step 1: Analysis and Prioritization of Requirements
• Step 2: Creation of a Product-Specific Defect Taxonomy
(2) Test Analysis and Design
• Step 3: Linkage of Requirements and Defect Categories
• Step 4: Definition of a Test Strategy with Test Patterns
(3) Test Execution
(4) Test Evaluation and Reporting
• Step 5: Analysis and Categorization of Failures after a Test Cycle
6. Step 1: Analysis and Prioritization of Requirements
Prioritized requirements are assigned to use cases
defined by business processes and user interfaces
UC Identification:
REQ# | Description                   | Priority of REQ
39   | ANF_0053 Logging and Auditing | normal
40   | ANF_0057 LOGIN via OID        | high

UC Search Client:
REQ# | Description | Priority of REQ
…    | …           | …
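A minimal Python sketch of this step's data model; the class names (Requirement, UseCase) are illustrative assumptions, while the example rows come from the UC Identification table above:

    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        req_id: int
        description: str
        priority: str  # "high", "normal" or "low"

    @dataclass
    class UseCase:
        name: str
        requirements: list = field(default_factory=list)

    # Example data from the UC Identification table.
    uc_identification = UseCase("UC Identification", [
        Requirement(39, "ANF_0053 Logging and Auditing", "normal"),
        Requirement(40, "ANF_0057 LOGIN via OID", "high"),
    ])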
7. Step 2: Creation of a Product-Specific Defect Taxonomy
Top-level categories of Beizer are mapped to product-specific
defect categories, which are then further refined to concrete
low-level defect categories with an assigned identifier and severity.

Defect Category of Beizer | Product-Specific Category | DC | Description of DC | Severity
1xxx Requirements | Unsuitability of the system taking the organizational processes and procedures into account | R1 | Client not identified correctly | critical
11xx Requirements incorrect | (as above) | R2 | Goals and measures of case manager are not processed correctly | normal
16xx Requirement changes | (as above) | R3 | Update and termination of case incorrect | normal
12xx Logic; 13xx Completeness | Incorrect handling of the syntactic or semantic constraints of the GUI | R4 | GUI layout; syntactic specifications of input fields; error messages | major
4xxx Data | | D1 | Incorrect access / update of client information, states etc. | normal
42xx Data access and handling | | D2 | Erroneous save of critical data | critical
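The taxonomy can be represented as a simple lookup from a low-level identifier to its category data; a sketch with illustrative class and field names, populated with entries from the table above:

    from dataclasses import dataclass

    @dataclass
    class DefectCategory:
        dc_id: str        # low-level identifier, e.g. "R1"
        beizer: str       # mapped top-level Beizer category
        description: str
        severity: str     # "critical", "major", "normal", ...

    taxonomy = {
        "R1": DefectCategory("R1", "1xxx Requirements",
                             "Client not identified correctly", "critical"),
        "R3": DefectCategory("R3", "16xx Requirement changes",
                             "Update and termination of case incorrect", "normal"),
        "D2": DefectCategory("D2", "42xx Data access and handling",
                             "Erroneous save of critical data", "critical"),
    }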
8. Step 3: Linkage of Requirements and Defect Categories
Experience-based assignment of requirements to defect categories
Peer review of the assignment is important
Each requirement should be assigned to only one defect category
[Figure: the requirements tables from Step 1 shown next to the defect taxonomy from Step 2, with each requirement linked to a defect category.]
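A sketch of the linkage with a small validation helper; the concrete REQ-to-category mapping shown is purely illustrative, not the project's actual assignment:

    # REQ# -> defect category id (illustrative, not from the talk).
    # A dict enforces the rule that each requirement maps to one category.
    links = {39: "R4", 40: "R1"}

    def validate_links(links, req_ids, taxonomy_ids):
        """Report requirements without a category and unknown category ids."""
        missing = [r for r in req_ids if r not in links]
        unknown = sorted({dc for dc in links.values() if dc not in taxonomy_ids})
        return missing, unknown

    print(validate_links(links, [39, 40], {"R1", "R2", "R3", "R4", "D1", "D2"}))
    # -> ([], [])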
9. Step 4: Definition of a Test Strategy with Test Patterns
A test pattern consists of a test design technique with three
test strengths and has assigned defect categories.

Id | Test Design Technique | Defect Categories | Test Strength 1 (low) | Test Strength 2 (normal) | Test Strength 3 (high)
S: Sequence oriented
S1 | Use case-based testing; process cycle tests | R1, R2, R3, F1, F2, F3 | Main paths | Branch coverage | Loop coverage
S3 | State transition testing | I1, I2, F7, F8, F9 | State coverage | State transition coverage | Path coverage
D: Data oriented
D1 | CRUD (Create, Read, Update and Delete) | D1, D2 | Data cycle tests | Data cycle tests |
D3 | EP: Equivalence partitioning | F3, F5, F6 | EP valid | EP valid+invalid | EP valid+invalid
D4 | BVA: Boundary value analysis | F3, F5, F6 | BVA valid | BVA valid+invalid | BVA values at boundaries
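A sketch of a test pattern as a record, plus a lookup from a defect category to the patterns that target it; names are illustrative, rows come from the table above:

    from dataclasses import dataclass

    @dataclass
    class TestPattern:
        pattern_id: str
        technique: str
        defect_categories: tuple
        strengths: tuple  # coverage goals for test strengths 1 (low) to 3 (high)

    patterns = [
        TestPattern("S1", "Use case-based testing; process cycle tests",
                    ("R1", "R2", "R3", "F1", "F2", "F3"),
                    ("Main paths", "Branch coverage", "Loop coverage")),
        TestPattern("D3", "EP: Equivalence partitioning",
                    ("F3", "F5", "F6"),
                    ("EP valid", "EP valid+invalid", "EP valid+invalid")),
    ]

    def patterns_for(dc_id):
        # All test patterns whose defect categories include the given one.
        return [p.pattern_id for p in patterns if dc_id in p.defect_categories]

    print(patterns_for("F3"))  # -> ['S1', 'D3']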
10. Test Design and Execution

REQs:
REQ | Text     | PR
1   | Workflow | High
2   | Data     | Normal

[Figure: the defect taxonomy from Step 2 and the test pattern table from Step 4 are repeated here to illustrate the flow from requirements through the defect taxonomy into test design.]

Test Design: the test strength is derived from the priority of the requirement (PR) and the severity of the defect category or failure (SDC, SF), as sketched below:
PR     | SDC, SF                  | Test strength
high   | blocker, critical, major | 3
normal | blocker, critical, major | 3
normal | major, normal, minor     | 2
low    | minor, trivial           | 1

Test Execution:
ID | Description    | Result | Comments | Severity
1  | see test spec. | pass   | no       |
2  | see test spec. | pass   | no       |
3  | see test spec. | fail   | no       | critical
4  | see test spec. | pass   | no       |
5  | see test spec. | fail   | no       | minor
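The strength table reads as a small decision function; a sketch, with the overlap on "major" for normal-priority requirements resolved toward the higher strength as an assumption:

    def test_strength(priority, severity):
        """Pick test strength 1-3 from requirement priority (PR) and
        severity of the defect category or failure (SDC, SF)."""
        severe = severity in ("blocker", "critical", "major")
        if priority == "high" and severe:
            return 3
        if priority == "normal":
            # "major" appears in both normal rows of the table;
            # we assume the higher strength wins.
            return 3 if severe else 2
        if priority == "low" and severity in ("minor", "trivial"):
            return 1
        return 2  # assumption: default for combinations the table omits

    print(test_strength("high", "critical"))  # -> 3
    print(test_strength("normal", "minor"))   # -> 2
    print(test_strength("low", "trivial"))    # -> 1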
11. Step 5: Analysis and Categorization of Failures after a Test Cycle
Defects are exported from Bugzilla
The severity assigned by testers is compared to the severity
of the defect category and the priority of the requirement
Weights are adapted if needed
A precise statement about release quality becomes possible
• Additional information valuable for release planning
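A sketch of the severity comparison, assuming defects exported from Bugzilla as plain dicts; the field names are illustrative and do not follow Bugzilla's actual export schema:

    # Tester-assigned severity vs. taxonomy severity of the linked category.
    SEVERITY_RANK = {"trivial": 0, "minor": 1, "normal": 2,
                     "major": 3, "critical": 4, "blocker": 5}

    taxonomy_severity = {"R1": "critical", "D1": "normal"}  # from Step 2

    defects = [  # hypothetical export
        {"id": 101, "category": "R1", "tester_severity": "major"},
        {"id": 102, "category": "D1", "tester_severity": "normal"},
    ]

    for d in defects:
        expected = taxonomy_severity[d["category"]]
        if SEVERITY_RANK[d["tester_severity"]] != SEVERITY_RANK[expected]:
            print(f"Defect {d['id']}: tester severity {d['tester_severity']!r} "
                  f"differs from taxonomy severity {expected!r} -> adapt weights?")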
12. Case Study Overview
Public health insurance domain
Characteristics         | Project A                                    | Project B
Area                    | Application for case managers                | Administration of clients of the public health insurance institution
Staff                   | About 7                                      | Up to 10
Duration                | 9 months development, now under maintenance | 9 months development, now under maintenance
Number of iterations    | 4                                            | 3
Size (Reqs + Use Cases) | 41 + 14                                      | 28 + 20
Ratio of system testing | 27% of overall project effort                | 28% of overall project effort
Test Process            | DTST (based on ISTQB)                        | ISTQB
13. Case Study Results
Smaller number of test cases with higher effectiveness
More goal-oriented test cases
Metrics                      | Project A | Project B
Number of Requirements (NOR) | 41        | 28
Number of Use Cases (NUC)    | 14        | 20
SIZE (NOR + NUC)             | 55        | 48
Number of Tests (NOT)        | 148       | 170
Number of Failures (NOF)     | 169       | 114
NOT / SIZE                   | 2.69      | 3.54
NOF / NOT                    | 1.14      | 0.67
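Both ratios can be recomputed directly from the table; a small sketch using the slide's numbers:

    projects = {
        "A": {"NOR": 41, "NUC": 14, "NOT": 148, "NOF": 169},
        "B": {"NOR": 28, "NUC": 20, "NOT": 170, "NOF": 114},
    }

    for name, p in projects.items():
        size = p["NOR"] + p["NUC"]  # SIZE = NOR + NUC
        print(f"Project {name}: NOT/SIZE = {p['NOT'] / size:.2f}, "
              f"NOF/NOT = {p['NOF'] / p['NOT']:.2f}")
    # Project A: NOT/SIZE = 2.69, NOF/NOT = 1.14
    # Project B: NOT/SIZE = 3.54, NOF/NOT = 0.67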
14. Cost Comparison and Decision of Application
Estimation is structured according to the phases and steps of the
test process
Comparison of cost (measured in time) of DTST and ISTQB
Break-even is reached when the cost of DTST falls below the cost of ISTQB
Adaptation and simulation with parameters
• Analysis and prioritization of a requirement
• Linkage of a requirement to a defect category
• Design and implementation of a test case
• Execution time per test case
• Maintenance time per test case
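A sketch of such a simulation, using the per-phase costs from the next slide; the function is an illustrative reconstruction, not the authors' estimation tool:

    def cumulative_costs(planning, design, per_cycle, cycles):
        """Cumulative cost after test planning, test analysis and design,
        and each of `cycles` test cycles."""
        total = planning + design
        out = [total]
        for _ in range(cycles):
            total += per_cycle
            out.append(total)
        return out

    istqb = cumulative_costs(50.0, 200.0, 110.0, 8)
    dtst = cumulative_costs(100.0, 166.0, 92.5, 8)

    # Break-even: the first test cycle where DTST is cheaper than ISTQB.
    for cycle, (i, d) in enumerate(zip(istqb, dtst)):
        if d < i:
            print(f"Break-even after test cycle {cycle}")  # -> 1
            break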
15. Cost Comparison in Case Study

Costs per phase (in Ph):
Phase                                                    | ISTQB  | DTST   | Diff
Test Planning                                            | 50.00  | 100.00 | -50.00
Test Analysis and Design                                 | 200.00 | 166.00 | +34.00
Test Execution, Evaluation, Maintenance (per test cycle) | 110.00 | 92.50  | +17.50

Cumulative costs (in Ph):
Phase | ISTQB   | DTST    | Diff
TP    | 50.00   | 100.00  | -50.00
TAD   | 250.00  | 266.00  | -16.00
TC1   | 360.00  | 358.50  | +1.50
TC2   | 470.00  | 451.00  | +19.00
TC3   | 580.00  | 543.50  | +36.50
TC4   | 690.00  | 636.00  | +54.00
TC5   | 800.00  | 728.50  | +71.50
TC6   | 910.00  | 821.00  | +89.00
TC7   | 1020.00 | 913.50  | +106.50
TC8   | 1130.00 | 1006.00 | +124.00

DTST breaks even in the first test cycle (TC1); its advantage grows with every further cycle.
16. Cost Comparison Scenarios
[Two charts: cumulative cost (Ph) of DTST vs. ISTQB over the test phases TP, TAD and the test cycles TC1-TC8; left panel with an average test execution time of 0.5h, right panel with 0.1h.]
17. Summary
Integration of defect taxonomies in
standard ISTQB test process
Decision support for application
Benefits
• Goal-oriented test cases
• Smaller number of more effective test cases
• Support for a precise release
quality statement
• Clear cost estimation procedure
available
18. Questions or Suggestions?
Dr. Michael Felderer
Technikerstr. 21a
A-6020 Innsbruck
Austria
Tel. +43 680 1238038
michael.felderer@qe-lab.com
www.qe-lab.com
Dipl.-Ing. Armin Beer
Helenenstr. 114
A-2500 Baden
Austria
Tel. +43 676 65055670
info@arminbeer.at
www.arminbeer.at