A presentation on how to categorize test activities by defining the attributes that describe them. The categorization is then used to improve planning and to find redundancy and gaps.
2. Introduction
▪ Brian Marick first developed the agile testing matrix [1]
▪ Lisa Crispin then used this in her book “Agile Testing” [2]
▪ There have been many interesting developments of the model [3][4]
▪ The purpose of the agile testing matrix is to categorize test activities in four distinct quadrants to help plan the necessary testing [2]
▪ Categorizing test activities is all about granularity – sometimes it is enough to have 2 categories, sometimes you need 20
2
2013-12-13
PA1
Confidential
3. Introducing Test Activity Attributes
▪ To be able to categorize test activities, we need to know what distinguishes different test activities from each other
▪ We need to identify the different types of attributes that a test activity can have
▪ We also need to identify the different values that each attribute can take
▪ Once we have done this, we can create any categorization model we want, one that meets our specific granularity needs
4. Test Activity Attributes Overview
▪ Generated Value
▪ Stakeholder
▪ System Complexity
▪ Report Granularity
▪ Scope Flexibility
▪ Required Tool Support
▪ Executor
▪ Definition of Done
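As a rough sketch, these eight attributes could be captured as a simple data structure. All field names and example values below are my own illustrations, not taken from the presentation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestActivity:
    """One test activity, described by the eight attributes."""
    name: str
    generated_value: str        # e.g. "finding defects", "passing certifications"
    stakeholder: str            # e.g. "project leader", "developer"
    system_complexity: str      # e.g. "predictable unit", "unpredictable system"
    report_granularity: str     # e.g. "per-test detail", "general feedback"
    scope_flexibility: str      # e.g. "fixed", "semi-flexible", "free"
    required_tool_support: str  # e.g. "3GPP equipment", "none"
    executor: str               # e.g. "dedicated tester", "developer"
    definition_of_done: str     # e.g. "all tests executed", "timebox elapsed"

# Hypothetical example activity
unit_tests = TestActivity(
    name="unit tests",
    generated_value="supporting developers",
    stakeholder="developer",
    system_complexity="predictable unit",
    report_granularity="general feedback",
    scope_flexibility="free",
    required_tool_support="CI automation",
    executor="developer",
    definition_of_done="all tests executed",
)
```

Once every activity in the organization is described this way, the attribute values become comparable data rather than implicit assumptions.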
5. Generated Value
▪ What value does the test activity generate?
▪ Finding defects?
▪ Passing certifications and standards?
▪ Meeting customer requirements?
▪ Generating decision material and other information?
▪ Supporting developers in some other way?
▪ Providing start criteria for other test activities?
6. Stakeholder
▪ Who are the stakeholders of the test activity?
▪ The project leader?
▪ The developer?
▪ The system architect?
▪ The line manager?
▪ The test leader?
▪ Other testers?
7. System Complexity
▪ How predictable is the (sub-)system under test?
▪ A small unit is often more or less predictable if it is tested in a controlled environment
▪ A large system is often unpredictable, even if you have system requirements and the system is made up of many small predictable units
▪ Sub-systems can be more or less predictable
8. Report Granularity
▪ On what level is reporting necessary?
▪ Does every test have to be recorded in detail?
▪ What measurements do the stakeholders need?
▪ Is general quality feedback enough?
▪ What will the information in the report be used for?
9. Scope Flexibility
▪ What possibilities does the tester have to affect the scope?
▪ Is the scope completely fixed?
▪ Certification / Standard
▪ Customer requirements
▪ Is it semi-flexible?
▪ For example, priority 1 test cases may have to be executed, while the rest is risk-based
▪ Is it completely up to the tester?
▪ Can you run whichever test sessions you want, without any pre-set scope?
10. Required Tool Support
▪ Does the activity require certain tools?
▪ Bluetooth testing, power consumption tests, and 3GPP tests all require specific equipment
▪ Activities such as integration tests that run in a continuous integration system need to be automated
▪ User-focused tests are examples where no specific tools are usually needed
11. Executor
▪ Who executes the tests?
▪ Dedicated tester?
▪ Developer?
▪ Developer-in-Test?
▪ External User?
▪ Internal User?
▪ External test house?
12. Definition of Done
▪ When is the test activity over?
▪ When all tests are executed?
▪ When a time period has passed?
▪ When the tester says so?
▪ When the first defect is found?
▪ When the stakeholder says so?
13. Evaluating Attributes
▪ Once you have all activities mapped with attributes and values, you can start comparing and evaluating them
▪ This can reveal, for example, that two activities are very similar and perhaps redundant
▪ It can also show gaps: if many activities have similar attribute values, parts of the value spectrum are not covered
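A minimal sketch of such an evaluation, assuming activities are stored as attribute/value dictionaries. All activity names, attribute values, and the 0.9 similarity threshold are illustrative assumptions, not from the presentation:

```python
from itertools import combinations

# Hypothetical activities mapped to (a subset of) their attribute values
activities = {
    "unit tests": {"executor": "developer", "scope": "free",
                   "value": "supporting developers"},
    "dev smoke":  {"executor": "developer", "scope": "free",
                   "value": "supporting developers"},
    "cert suite": {"executor": "external test house", "scope": "fixed",
                   "value": "passing certifications"},
}

def similarity(a, b):
    """Fraction of shared attributes that have identical values."""
    keys = a.keys() & b.keys()
    return sum(a[k] == b[k] for k in keys) / len(keys)

# Possible redundancy: pairs that agree on (almost) every attribute
redundant = [(x, y) for x, y in combinations(activities, 2)
             if similarity(activities[x], activities[y]) >= 0.9]

# Possible gap: values in the known spectrum that no activity generates
value_spectrum = {"finding defects", "passing certifications",
                  "supporting developers", "meeting customer requirements"}
covered = {attrs["value"] for attrs in activities.values()}
gaps = value_spectrum - covered
```

Here "unit tests" and "dev smoke" would be flagged as potentially redundant, while "finding defects" and "meeting customer requirements" show up as uncovered parts of the value spectrum.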
14. How attributes affect test method
▪ The test activities themselves do not force a specific test method
▪ Scripted testing / Session-based testing / Ad-hoc testing
▪ Manual / Automated / Tool-supported
▪ But the attributes often give hints as to which method is more or less suitable for that activity
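Such hints could be encoded as simple rules. The rules below are my own illustrative assumptions about which attribute values might suggest which method, not guidance from the slides:

```python
def method_hints(attrs):
    """Map attribute values to hints about a suitable test method."""
    hints = []
    if attrs.get("scope_flexibility") == "fixed":
        hints.append("scripted testing")       # a fixed scope suits scripts
    elif attrs.get("scope_flexibility") == "free":
        hints.append("session-based testing")  # freedom suits exploration
    if attrs.get("required_tool_support") == "CI automation":
        hints.append("automated")              # CI execution implies automation
    return hints

# Example: a fixed-scope certification activity hints at scripted testing
hints = method_hints({"scope_flexibility": "fixed"})
```

The point is not that such rules are complete, only that the attribute values make the reasoning about methods explicit.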
15. Conclusion
▪ That eight test activity attributes are described here is entirely arbitrary – chosen only because I wanted to use "octagon" in the title. Which attributes are relevant is completely context dependent
▪ By having all relevant attributes mapped out, it becomes much easier to plan, and to find gaps and redundant activities
▪ How many attributes you choose to use depends on what granularity you need for your planning (and on whether you want a cool-sounding model name)