Risk-Based Testing - Designing & managing the test process (2002)
- 1. Risk-Based Testing
Paul Gerrard & Neil Thompson

Systeme Evolutif Limited
3rd Floor, 9 Cavendish Place
London W1M 9DL, UK
Tel: +44 (0)20 7636 6060
Fax: +44 (0)20 7636 6072
email: paulg@evolutif.co.uk
http://www.evolutif.co.uk/

Thompson information Systems Consulting Limited
23 Oast House Crescent
Farnham, Surrey GU9 0NP, UK
Tel: +44 (0)7000 NeilTh (634584)
Fax: +44 (0)7000 NeilTF (634583)
email: NeilT@TiSCL.com
http://www.TiSCL.com/
Version 1.0a ©2002 Systeme Evolutif Ltd and TiSCL Slide 1
- 2. Agenda
I Introduction
II Risk-Based Test Strategy
III Risk Based Test Planning and Organisation
IV Managing Test Execution and the End-Game
V Close, Q&A
Here's the commercial bit:
– Most of the material presented today is based on:
– Risk-Based E-Business Testing, Gerrard and Thompson,
Artech House, 2002
– Visit www.riskbasedtesting.com for samples.
- 3. Paul Gerrard
Systeme Evolutif are a software testing consultancy specialising in E-Business
testing, RAD, test process improvement and the selection and implementation
of CAST Tools. Evolutif are founder members of the DSDM (Dynamic Systems
Development Method) consortium, which was set up to develop a non-
proprietary Rapid Application Development method. DSDM has been taken up
across the industry by many forward-looking organisations.
Paul is the Technical Director and a principal consultant for Systeme Evolutif.
He has conducted consultancy and training assignments in all aspects of
Software Testing and Quality Assurance. Previously, he has worked as a
developer, designer, project manager and consultant for small and large
developments. Paul has engineering degrees from the Universities of Oxford
and London, is Co-Programme Chair for the BCS SIG in Software Testing, a
member of the BCS Software Component Test Standard Committee and
Former Chair of the IS Examination Board (ISEB) Certification Board for a
Tester Qualification whose aim is to establish a certification scheme for testing
professionals and training organisations. He is a regular speaker at seminars
and conferences in Europe and the US, and won the 'Best Presentation'
award at EuroSTAR '95.
- 4. Neil Thompson
Neil Thompson is the Director of Thompson information Systems Consulting
Ltd, a company he set up in 1998 as a vehicle for agile, impartial consultancy
and management to blue-chip clients, sometimes in association with other
consultancies such as Systeme Evolutif. Neil's background is in programming,
systems analysis and project management. He has worked for a computer
hardware manufacturer, two software houses, an international user
organisation and two global management consultancies, and currently feels
fulfilled as a specialist in software testing.
He is a Certified Management Consultant (UK), and a member of the British
Computer Society (BCS) and the IEEE. Neil is active in the BCS Specialist
Interest Groups in Software Testing and Configuration Management. He
spoke at the first and second EuroSTAR conferences in 1993-4, and again in
1999.
Neil studied Natural Sciences at Cambridge University (BA & MA degrees). He
holds the ISEB Foundation Certificate in software testing and will be taking the
Practitioner examination in December.
- 5. Part I
Introduction
- 6. Part 1 - Agenda
How Much Testing is enough?
When is the Product Good Enough?
Introduction to Risk
The V-Model and W-Model.
- 7. How Much Testing is Enough?
(can risks help?)
- 8. “I need six testers for eight
weeks…”
The project manager says
– "Four testers for six weeks and that's it!"
The testers say
– It'll take longer than six weeks
– It'll cost more than the budget allocated
– It's too big/complicated/risky for us to do properly
– "It's just not enough"
Was it ever possible to do “enough” testing in
these circumstances?
- 9. Testing is time delimited…always
No upper limit to the amount of testing
Even in the highest integrity environments, time and
cost limit what we can do
Testing is about doing the best job we can in the
time available
Testers should not get upset if their estimates are
cut down (or ignored)
The benefits of early release may be worth the risk
But who knows the status of the risks and benefits?
- 10. Did any tester ever get enough
time to test?
No, of course not
We need to separate the two responses:
– the knee-jerk reaction (a back-covering
exercise prior to the post-disaster “I told you
so!”)
– a rational evaluation of the risks of doing
'too little', and the response to them
Often, just like cub-scouts, testers “promise
to do their best” and don‟t make waves.
- 11. Risk-based test planning
If every test aims to address a risk, tests can
be prioritised by risk
It's always going to take too long so…
– Some tests are going to be dropped
– Some risks are going to be taken
Proposal:
– The tester is responsible for making the project
aware of the risks being taken
– Only if these risks are VISIBLE, will management
ever reconsider.
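As a sketch of this proposal, if each planned test addresses one risk scored 1-5 for probability and consequence, the tests can be ranked by exposure; the test names and scores below are invented for illustration, not from the original material.

```python
# Hypothetical sketch: each planned test addresses one risk, scored 1-5
# for probability and consequence. Test names and scores are invented.
tests = [
    {"name": "withdrawal limits", "probability": 4, "consequence": 5},
    {"name": "statement layout", "probability": 2, "consequence": 1},
    {"name": "PIN validation", "probability": 3, "consequence": 5},
]

for t in tests:
    t["exposure"] = t["probability"] * t["consequence"]

# Run the highest-exposure tests first; any test dropped from the end of
# this list is a risk knowingly taken, and should be made visible.
tests.sort(key=lambda t: t["exposure"], reverse=True)
print([t["name"] for t in tests])
```

Dropping tests then means cutting from the bottom of the ranked list, which makes the risks being taken explicit to management.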
- 12. So how much testing is enough?
Enough testing has been planned when the
stakeholders (user/customer, project
manager, support, developers) approve:
TESTS IN SCOPE
– They address risks of concern and/or give confidence
THE TESTS THAT ARE OUT OF SCOPE
– Risk is low OR these tests would not give confidence
The amount and rigour of testing is determined by
CONSENSUS.
- 13. When is the Product
“Good Enough”?
- 14. Compulsive behaviour
Consultants, 'gurus', academics preach
perfection through compulsive behaviour:
» product perfection through process maturity and
continuous process improvement
» all bugs are bad, all bugs could be found, so use more and
more rigorous/expensive techniques
» documentation is always worthwhile
you can't manage what you can't count
» etc. etc..
Process hypochondriacs can‟t resist.
- 15. “Good Enough”
James Bach* is the main advocate of the 'Good
Enough' view (also Yourdon and others)
A reaction to compulsive formalism
– if you aim at perfection, you'll never succeed
– your users/customers/businesses live in the
real world, why don't you?
– compromise is inevitable, don't kid yourself
– guilt and fear should not be part of the process.
* James Bach, “Good Enough Quality: Beyond the Buzzword”, Computer, August 1997
Web site: www.satisfice.com
- 16. “Good Enough” means:
1. X has sufficient benefits
2. X has no critical problems
3. Benefits of X sufficiently outweigh
problems
4. In the present situation, and all things
considered, improving X would cause
more harm than good.
All the above must apply
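The four conditions combine as a simple conjunction; here is a minimal sketch of that rule (the function and its argument names are illustrative, not from the original material).

```python
# Minimal sketch of the "Good Enough" test: the product qualifies only
# if all four conditions hold at once. Argument names are illustrative.
def good_enough(sufficient_benefits, no_critical_problems,
                benefits_outweigh_problems, improvement_would_harm):
    return all([sufficient_benefits, no_critical_problems,
                benefits_outweigh_problems, improvement_would_harm])

# Failing any single condition means the product is not yet good enough.
print(good_enough(True, True, True, False))  # False
```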
- 17. Contribution of testing to the
release decision
Have sufficient benefits been delivered?
– Tests must at least demonstrate that features
providing benefits are delivered completely
Are there any critical problems?
– Test records must show that any critical problems
have been corrected, re-tested, regression tested
Is our testing Good Enough?
– Have we provided sufficient evidence to be
confident in our assessment?
- 18. Who makes the release decision?
“Good enough” is in the eye of the stakeholder
Testers can only say:
– “at the current time, our tests demonstrate that:
» the following features/benefits have been delivered
» the following risks have been addressed
» these are the outstanding risks of release...”
Stakeholders and management, not the
tester, can then make the decision.
- 20. The definition of risk
Italian dictionary: Risicare, “to dare”
Simple generic definition:
–“The probability that undesirable events will occur”
In this tutorial, we will use this definition:
"A risk threatens one or more of a project's
cardinal objectives and has an uncertain
probability”.
- 21. Some general statements about
risk
Risks only exist where there is uncertainty
If the probability of a risk is zero or 100%, it is
not a risk
Unless there is the potential for loss, there is
no risk (“nothing ventured, nothing gained”)
There are risks associated with every project
Software development is inherently risky.
- 22. Cardinal Objectives
The fundamental objectives of the
system to be built
Benefits of undertaking the project
Payoff(s) that underpin and justify the
project
Software risks are those that threaten the
Cardinal Objectives of a project.
- 23. Three types of software risk
Project Risk
– variances in resource constraints, external
interfaces, supplier relationships, contract restrictions
– primarily a management responsibility
Process Risk
– planning and estimation, shortfalls in staffing, failure
to track progress, lack of quality assurance and
configuration management
– planning and the development process are the main issues here
Product Risk
– lack of requirements stability, complexity, design
quality, coding quality, non-functional issues, test specifications
– testers are mainly concerned with Product Risk
Requirements risks are the most significant
risks reported in risk assessments.
- 24. Quantitative or qualitative
Risks can be quantified:
– Probability = 50%
– Consequence or loss £100,000
– Nice to use numbers for calculations/comparisons
– But we deal in perceived risk, not absolute risk
Or qualified:
– Probability is high medium or low
– Consequence is critical, moderate, low
– More accessible, usable in discussions, risk
workshops.
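Both readings of risk can be sketched side by side, using the slide's example figures; the qualitative label-to-score mapping below is an assumption chosen for illustration, not part of the original material.

```python
# Both readings of exposure, using the slide's example figures. The
# qualitative label-to-score mapping is an assumption for illustration.
quantitative_exposure = 0.50 * 100_000  # 50% probability of a GBP 100,000 loss
print(quantitative_exposure)            # 50000.0

PROBABILITY = {"low": 1, "medium": 3, "high": 5}
CONSEQUENCE = {"low": 1, "moderate": 3, "critical": 5}

def qualitative_exposure(probability_label, consequence_label):
    # Comparable scores for risks discussed only as high/medium/low.
    return PROBABILITY[probability_label] * CONSEQUENCE[consequence_label]

print(qualitative_exposure("high", "moderate"))  # 15
```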
- 25. Uncertainty
Consequence
– Do you know the consequences of failure?
– Need user input to determine this
– Some costs could be calculated but others are
intangible (credibility, embarrassment etc.)
Probability
– The toughest one to call
– Crystal ball gazing
In conducting a risk assessment, make it clear
what level of uncertainty you are working with
Testing (and test results) can reduce uncertainty.
- 26. The V-Model and W-Model
- 27. V-Model
[Diagram: the V-Model pairs each baseline document with a test stage:
User Requirements - Acceptance Test
Functional Specification - System Test
Physical Design - Integration Test
Program Specification - Unit Test]
Is there ever a one-to-one relationship between baseline
documents and testing?
Where is the static testing (reviews, inspections, static
analysis etc.)?
- 28. W-Model
[Diagram: the W-Model sets a testing activity alongside each development and build activity:
Write Requirements - Test the Requirements; Install System - Acceptance Test
Specify System - Test the Specification; Build System - System Test
Design System - Test the Design; Build Software - Integration Test
Write Code - Unit Test]
- 29. W-Model and static testing
[Diagram: the W-Model annotated with static test techniques attached to the left-hand (development) leg, from Write Requirements down to Write Code: Requirements Animation, Early Test Case Preparation, Scenario Walkthroughs, Reviews, Inspections, Static Analysis and Inspection]
- 30. W-Model and dynamic testing
[Diagram: the W-Model annotated with dynamic test techniques attached to the right-hand (testing) leg, from Unit Test up to Acceptance Test: Path Testing, Equivalence Partitioning, Boundary Value Testing, Exploratory Testing, Usability Testing, Performance Testing, Security Testing, System Integration Testing and Business Integration Testing]
- 31. What do we mean by a (work)
product?
Project documents:
– schedule, quality plan, test strategy, standards
Deliverables:
– requirements, designs, specifications, user
documentation, procedures
– software: custom built or COTS components, sub-
systems, systems, interfaces
– infrastructure: hardware, O/S, network, DBMS
– transition plans, conversion software, training...
- 32. What do we mean by testing?
Testing is the process of evaluating the
deliverables of a software project
– detect faults so they can be removed
– demonstrate products meet their requirements
– gain confidence that products are ready for use
– measure and reduce risk
Testing includes:
– static tests: reviews, inspections etc.
– dynamic tests: unit, system, acceptance tests etc.
- 33. Part II
Risk Based Test Strategy
- 34. Part II - Agenda
Risk Management Process
The Role of Testing in Product Risk
Management
Identifying Benefits and Risks
From Risks to Test Objectives
Generic Test Objectives and System
Requirements
Test Objectives, Coverage and
Techniques
- 36. Process
Risk identification
– what are the risks to be addressed?
Risk analysis
– nature, probability, consequences, exposure
Risk response planning
– pre-emptive or reactive risk reduction measures
Risk resolution and monitoring
Stakeholders should be involved at all stages.
- 37. Assessing consequences (loss)
Severity | Description | Score
Critical | business objectives cannot be accomplished | 5
High | business objectives undermined | 4
Moderate | business objectives affected | 3
Low | slight effect on business | 2
Negligible | no noticeable effect | 1
- 38. Assessing probability (likelihood)
Probability | Description | Score
>80% | almost certainly, highly likely | 5
61-80% | probable, likely, we believe | 4
41-60% | better than even, 50/50 | 3
21-40% | we doubt, improbable, unlikely, probably not | 2
1-20% | chances are slight, highly unlikely | 1
- 39. Risk exposure
Risks with the highest exposure are
those of most concern
Worst case scenarios drive concerns
Risk EXPOSURE is calculated as the
product of the PROBABILITY and
CONSEQUENCE of the risk
A simple notation is L2
– where L2 = LIKELIHOOD x LOSS.
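A worked example of the L2 notation, with invented scores on the 1-5 scales defined on the two preceding slides:

```python
# Worked example of the L2 notation with invented scores on the 1-5
# scales defined on the two preceding slides.
likelihood = 4  # "probable, likely"
loss = 5        # "business objectives cannot be accomplished"
exposure = likelihood * loss
print(exposure)  # 20, out of a possible 25
```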
- 40. What do the numbers mean?
Sometimes you can use numeric assessments
– We may have experience that tells us
» Likelihood is high (it always seems to happen)
» Loss is £50,000 (that's what it cost us last time)
But often, we are guessing
– Use of categories helps us to compare risks
– Subjective perceptions (never the same)
– E.g. Developers may not agree with users on
probability!
Maybe you can only assign risk RAG numbers
– RED, AMBER, GREEN
The ability to compare is what is most important.
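One possible RAG banding of 1-25 exposure scores can be sketched as below; the thresholds are an assumption for illustration, not from the original material.

```python
# One possible RAG banding of 1-25 exposure scores; these thresholds
# are an assumption for illustration, not from the original material.
def rag(exposure):
    if exposure >= 15:
        return "RED"
    if exposure >= 6:
        return "AMBER"
    return "GREEN"

print([rag(20), rag(9), rag(2)])  # ['RED', 'AMBER', 'GREEN']
```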
- 41. The danger slope
[Diagram: the "danger slope" plots probability (Improbable, Unlikely, Very Likely, Highly Likely) against consequence (Negligible, Low, Moderate, High, Critical)]
- 42. Risk response planning
Do nothing!
Pre-emptive risk reduction measures
– information buying
– process model (this is where testing fits in)
– risk influencing
– contractual transfer
Reactive risk reduction measures
– contingency plans
– insurance
The risk that's left is the residual risk.
- 43. Role of Testing in
Product Risk Management
- 44. Faults, failure and risk
System failures are what we fear
The faults that cause failures are our
prey
Uncertainty is what makes us concerned:
– what type of faults are present in the
system?
– how many faults are in the system?
– did testing remove all the serious faults?
Testing helps us to address these
uncertainties.
- 45. Testing helps to reduce risk
If risk assessment steers test activity
– we design tests to detect faults
– we reduce the risks caused by faulty
products
Faults found early reduce rework, cost
and time lost in later stages
Faults found are corrected and re-tested
and so the quality of all products is
improved.
- 46. Testing can measure risk
Testing is a measurement activity
Tests that aim to find faults provide information
on the quality of the product
– which parts of the software are faulty
– which parts of the software are not faulty
Tests help us understand the risk of release
Understanding the risks helps us to make a
risk-based decision on release
After testing, our risk assessment can be
refined.
- 47. The test passes…
The risk could be unchanged because:
Risk probability higher because:
Risk probability lower because:
Risk consequence higher because:
Risk consequence lower because:
- 48. The test fails…
The risk could be unchanged because:
Risk probability higher because:
Risk probability lower because:
Risk consequence higher because:
Risk consequence lower because:
- 50. Business benefits (cardinal
objectives)
At the highest level, normally set out in a project
initiation document or business case etc.
A benefit is any 'good thing' required to be
achieved by a project
Normally expressed in financial/business terms
e.g.
Save money - one or a combination of
– cut staff, stock, work in progress, time to deliver…
Increase revenues - one or a combination of
– increase market share, launch new product, improve
existing product, increase margins, exploit a new
market…
- 51. Risk identification
Expert interviews
Independent consultant or domain expert
assessment
Past experience (lessons learned)
Checklists of common risks (risk
templates)
Risk workshops
Brainstorming.
- 52. Risk workshops
Brainstorming sessions can be very
productive
Make risks visible
Generates risk ideas
Generates ideas for resolution
Starts buy-in.
- 53. Exercise
Main features of an ATM:
– Validation of a customer's card and PIN
– Cash withdrawal
– On-line balance request
– Request a statement
…amongst others.
- 54. What kind of failures could occur
for each of the four requirements?
[Exercise table with columns: Failure | Bank POV | Customer POV]
- 55. Risks and viewpoints
Viewpoint has an influence over which
risks are deemed important
Developer/supplier viewpoint
– what stops us getting paid for the system?
Customer/service provider viewpoint
– what could lose us business, money?
Customer viewpoint
– what could lose ME money?
- 56. The tester's viewpoint
Typically, testers represent either suppliers or
their customers
Main stakeholders in the project:
– system supplier
– customer/buyer
– service provider and support
– end-users
Testers may work for one stakeholder, but
should consider concerns of all.
- 57. From Risks to Test Objectives
- 58. Why use risks to define test
objectives?
If we focus on risks, we know that bugs
relating to the selected mode of failure are
bound to be important.
If we focus on particular bug types, we will
probably be more effective at finding those
bugs
If testers provide evidence that certain failure
modes do not occur in a range of test
scenarios, we will become more confident that
the system will work in production.
- 59. Risks as failure modes or bug
types
Risks describe 'what we don't want to
happen'
Typical modes of failure:
– calculations don‟t work
– pages don‟t integrate
– performance is poor
– user experience is uncomfortable
Think of them as 'generic bug types'.
- 60. Defining a test objective from
risk
We 'turn around' the failure mode or risk
Risk:
– a BAD thing happens and that's a problem for us
Test objective:
– demonstrate using a test that the system works
without the BAD thing happening
The test:
– execute important user tasks and verify the BAD
things don't happen in a range of scenarios.
- 61. Risks and test objectives - examples
Risk: The web site fails to function correctly on the
user's client operating system and browser configuration.
Test objective: To demonstrate that the application functions
correctly on selected combinations of operating systems and
browser version combinations.
Risk: Bank statement details presented in the client browser
do not match records in the back-end legacy banking systems.
Test objective: To demonstrate that statement details presented
in the client browser reconcile with back-end legacy systems.
Risk: Vulnerabilities that hackers could exploit exist in the
web site networking infrastructure.
Test objective: To demonstrate through audit, scanning and
ethical hacking that there are no security vulnerabilities in
the web site networking infrastructure.
- 63. Risk-based test objectives are
usually not enough
Other test objectives relate to broader issues
– contractual obligations
– acceptability of a system to its users
– demonstrating that all or specified functional or non-
functional requirements are met
– non-negotiable test objectives might relate to
mandatory rules imposed by an industry regulatory
authority and so on
Generic test objectives complete the definition
of your test stages.
- 64. Generic test objectives
Test Objective | Typical Test Stage
Demonstrate component meets requirements | Component Testing
Demonstrate component is ready for reuse in larger sub-system | Component Testing
Demonstrate integrated components correctly assembled/combined and collaborate | Integration Testing
Demonstrate system meets functional requirements | Functional System Testing
Demonstrate system meets non-functional requirements | Non-Functional System Testing
Demonstrate system meets industry regulation requirements | System or Acceptance Testing
Demonstrate supplier meets contractual obligations | (Contract) Acceptance Testing
Validate system meets business or user requirements | (User) Acceptance Testing
Demonstrate system, processes and people meet business requirements | (User) Acceptance Testing
- 65. Tests as demonstrations
“Demonstrate” is most often used in test
objectives
Better than “Prove” which implies
mathematical certainty (which is impossible)
But is the word “demonstrate” too weak?
– it represents exactly what we will do
– we provide evidence for others to make a decision
– we can only run a tiny fraction of tests compared to
what is possible
– so we really are only doing a demonstration of a
small, sample number of tests.
- 66. But tests should aim to locate
faults, shouldn't they?
The tester's goal: to locate faults
We use boundary tests, extreme values, invalid
data, exceptional conditions etc. to expose faults:
– if we find faults these are fixed and re-tested
– we are left with tests that were designed to detect
faults, some did detect faults, but do so no longer
We are left with evidence that the feature works
correctly and our test objective is met
No conflict between:
– strategic risk-based test objectives and
– tactical goal of locating faults.
- 67. Testing and meeting requirements
Risk-based test objectives do not change
the methods of test design much
Functional requirements
– We use formal or informal test design
techniques as normal
Non-functional requirements
– Test objectives are often detailed enough to
derive specific tests.
- 69. Risks can be used to…
Determine the 'level' of testing to be performed
– Four risk levels typically used by most safety-related
standards
– Specific test case design techniques and test completion
criteria mandated for each level of risk
Determine the 'type' of testing to be performed
– E.g. performance, usability, security, backup/recovery
testing all address different types of risk.
High risk components might be subjected to:
– Code inspections, static analysis and formal component
testing to 100% modified decision coverage.
- 70. Example risk – test objective -
techniques
Risk: feature xxx fails to calculate an insurance
premium correctly
Potential test techniques:
– Review or inspect requirements for xxx
– Review or inspect specification of xxx
– Review or inspect design of xxx
– Review or inspect component(s)
– Code inspection, static analysis
– Defined tests at component, system or acceptance level
We choose the most effective, practical technique
that can be performed as early as possible.
- 71. Test objectives and coverage
Easy to derive a test objective from a risk
But it is less easy to derive an objective
coverage measure to plan for
"How much testing (how many tests) is
required to provide enough information to
stakeholders that a risk has been addressed?"
Well documented coverage measures for both
black box and white box test design
techniques but how can we use these?
- 72. Black box test coverage
Some techniques subsume others e.g.
– boundary value analysis (BVA) subsumes
equivalence partitioning (EP) and is
'stronger'
Other test techniques (e.g. decision
tables, state-transition testing, syntax
testing etc.) do not fit into such an
ordered hierarchy
These (and other) techniques have
specific areas of applicability.
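A small sketch of why BVA subsumes EP, for a hypothetical input field valid from 1 to 100; the field, its range and the chosen test values are invented for illustration.

```python
# Hypothetical field, valid from 1 to 100. EP picks one value from each
# equivalence partition; BVA instead probes around each boundary, and in
# doing so still hits every partition EP would, which is the sense in
# which BVA subsumes EP.
def partition(x):
    return "valid" if 1 <= x <= 100 else "invalid"

equivalence_partition_values = [-5, 50, 150]  # one per partition
boundary_values = [0, 1, 2, 99, 100, 101]     # around both boundaries

# Every partition reached by EP is also reached by BVA's probes.
assert {partition(x) for x in equivalence_partition_values} <= {
    partition(x) for x in boundary_values}
print(sorted({partition(x) for x in boundary_values}))  # ['invalid', 'valid']
```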
- 73. White box test techniques
Most are based on the path-testing model
More „inclusive‟ ordered hierarchy:
– statement testing (weakest)
– branch testing
– modified condition decision testing
– branch condition combination testing (strongest)
Although costs vary, there is little data
available that compares their cost-
effectiveness
British Standard BS 7925-2 available at
www.testingstandards.co.uk has all definitions.
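A minimal illustration of the weakest two levels of this hierarchy, using an invented function:

```python
# Invented function showing why branch testing is stronger than
# statement testing: the single test f(-1) executes every statement,
# but only a second test with the condition false (e.g. f(1)) covers
# the untaken branch of the "if".
def f(x):
    y = 0
    if x < 0:
        y = -x
    return y

print(f(-1), f(1))  # 1 0
```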
- 74. Selecting the 'right' test techniques
The tester should explain how these are
used, the potential depth and consequent cost
of using them to the stakeholders
Some test techniques are less formal, or not
yet mature enough to have defined coverage
levels
For example, coverage targets for
configuration testing and ethical hacking as
means of detecting security flaws are likely to
be subjective.
- 75. Selecting the right coverage
level
Stakeholders will PAY so much to address a risk
Testers need to reconcile the concerns of
stakeholders to the coverage measure to be used
– The tester may need to assess the quantity of testing
that could be achieved in the time available
– May also need to offer stakeholders degrees of
coverage and quote different costs of each
Tester needs to explain both quantitatively and
qualitatively techniques and coverage to be used
The stakeholders must then take a view on which
of these coverage targets will be adequate.
- 76. Part III
Risk Based Test Planning and
Organisation
- 77. Part III - Agenda
Designing the Test Process
Stages, Teams and Environments
Estimation
Planning and Scheduling
Test Specification and Traceability.
- 79. Master Test Planning process
Risk Identification (tester activity)
• Consult business, technical staff
• Prepare a draft register of risks
Risk Analysis (workshop)
• Discuss risks
• Assign probability and consequence scores
• Calculate exposure
Risk Response (tester activity)
• Formulate test objectives, select test techniques
• Document dependencies, requirements, costs, timescales for testing
• Assign Test Effectiveness score
• Nominate responsibilities
Test Scoping (review and decision)
• Agree scope of risks to be addressed by testing
• Agree responsibilities and budgets
Test Process Definition (tester activity)
• Draft the test process from the Test Process Worksheet
• Complete test stage definitions
- 80. Internet banking example
[Diagram: customer workstation (browser, printer) connects over HTTPS through a firewall to an HTTP dispatcher and web server(s); these call application server(s) via RMI/IIOP, which connect via MQSeries and RMI/IIOP to billing production servers and existing back-ends]
- 81. Example
A bank aims to offer an internet based service
to give its corporate customers
– balance and account information
– multi-account management and inter account
transfers
– payments
How many ways could this system fail?
Which should be considered for testing?
- 82. Failure mode and effects analysis
FMEA is a method for:
– assessing the risk of different failure modes
– prioritising courses of action (in our
case, testing)
We identify the various ways in which a
product can fail
For each failure mode we assign scores
Can use these profiles to decide what to do.
- 83. Profiling failure modes
What is the consequence of the failure?
What is the probability of the failure
occurring if no action is taken?
What is the probability of detecting this
with testing?
– How would you detect the problem that
causes the failure?
- 84. Failure mode profiling
Each mode is scored
– P: probability of occurrence (1 low - 5 high)
– C: consequence (1 low - 5 high)
– T: test effectiveness (1 low - 5 high)
P x C x T = Risk Number.
- 85. Failure mode risk numbers
If the range of scores is 1-5
Maximum risk number = 125
– implies it's a very serious risk which could be detected
with testing
Minimum risk number = 1
– implies it's a low risk, and one that would not be detected
with testing
Risk numbers indicate your testing priorities.
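The calculation can be sketched as follows; the failure modes and their scores are invented for illustration, not taken from the original worksheet.

```python
# Sketch of FMEA-style risk numbers with invented failure modes, each
# scored 1-5 for probability (P), consequence (C) and test
# effectiveness (T); the product ranges from 1 to 125.
failure_modes = {
    "browser incompatibility": (4, 3, 5),  # (P, C, T)
    "lost session context": (3, 4, 4),
    "cosmetic page error": (2, 1, 5),
}
risk_numbers = {name: p * c * t for name, (p, c, t) in failure_modes.items()}

# Highest risk numbers first: these are the testing priorities.
for name in sorted(risk_numbers, key=risk_numbers.get, reverse=True):
    print(name, risk_numbers[name])
```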
- 86. Test process worksheet
[Worksheet columns: Failure Mode or Objective | Probability | Consequence | Test Effectiveness | RISK Number | one column per test stage (Prototyping, Infrastructure Sub-System Tests, Application System Tests, Non-Functional Tests, User Acceptance, Customer Acceptance (BTS), Live Confidence, Live Trial, Operational); marks such as "SS" and "CC" in the original allocate each failure mode to test stages]
Client Platform
1 Which browsers, versions and O/S platforms will be supported (includes non-frames, non-graphic browsers etc.)?
2 New platforms: Web TV, mobile phones, Palm Pilots etc.
3 Connection through commercial services e.g. MSN, Compuserve, AOL
4 Browser HTML syntax checking
5 Browser compatibility HTML checking
6 Client configuration e.g. unusable, local character sets being rejected by database etc.
7 Client configuration: client turns off graphics, rejects cookies, cookies time out, client doesn't have required plug-ins etc.
8 Minimum supported client platform to be determined/validated
Component Functionality
9 Client component functionality
10 Client web-page object loading
11 Custom-built infrastructure component functionality
12 COTS component functionality
13 HTML page content checking - spelling, HTML validation
System/Application functionality
14 End-to-end system functionality
15 Loss of context/persistence between transactions
- 87. Using the worksheet - risks
Failure Mode or Objective column
– failures/risks
– requirements for demonstrations
– mandatory/regulatory/imposed requirements
Probability of the problem occurring
Consequence of failure
Test Effectiveness - if we test, how likely
would the problem be detected?
- 88. Using the worksheet - test stages
Proposed test stages and responsibilities
can take shape
A failure mode might be addressed by
one or more than one test stage
Let's fill in a few rows and calculate risk
numbers (19, 28, 38, 45).
- 89. Creating the worksheet
Create a template sheet with initial risks and
objectives based on experience/checklists
Cross-functional brainstorming
– stakeholders or technically qualified nominees
– might take all day, but worth completing in one
session to retain momentum
If you can't get a meeting, use the specs, then
get individuals to review.
- 90. Additional columns on the
worksheet
COST!
How can you prioritise the risks and proposed
test activities, without knowing the cost?
For each test activity, assign a cost estimate
identifying all assumptions and dependencies
When the risks and testing are prioritised and cost
is the limiting factor, you know:
– What testing can be afforded – IN SCOPE
– What testing cannot be afforded – OUT OF SCOPE.
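A minimal sketch of budget-capped scoping (the activity names, costs and the greedy highest-risk-first selection are all illustrative assumptions, not from the slides):

```python
# Hedged sketch: scoping test activities against a budget cap.
# Each activity: (name, risk number, cost in person-days) - all hypothetical.
activities = [
    ("Payments end-to-end tests", 100, 30),
    ("Browser compatibility checks", 60, 10),
    ("HTML spelling/validation", 15, 5),
    ("Performance test", 80, 40),
]

budget = 60
in_scope, out_of_scope, spent = [], [], 0

# Greedy selection: take the highest-risk activities that still fit the budget
for name, risk, cost in sorted(activities, key=lambda a: a[1], reverse=True):
    if spent + cost <= budget:
        in_scope.append(name)
        spent += cost
    else:
        out_of_scope.append(name)  # recorded as considered, deemed acceptable

print("IN SCOPE:", in_scope)
print("OUT OF SCOPE:", out_of_scope)
```

Keeping the out-of-scope list is the evidence that each de-scoped risk was considered.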
- 91. From worksheet to test process
Identify/analyse the risk
Select a test activity to address the risks
Collect test activities into stages and sequence
them
Define, for each stage:
– objectives, object under test
– entry, exit and acceptance criteria
– responsibility, deliverables
– environment, tools, techniques, methods.
- 92. Test stage - key attributes
Test Objectives • The objectives of this stage of testing, e.g. faults to
be found; risks to be avoided; demonstrations to be
performed.
Component(s) under Test • The architectural components, documents,
business processes to be subjected to the test.
Baseline • Document(s) defining the requirements to be met
for the components under test (to predict expected
results).
Responsibility • Groups responsible for e.g. preparing tests,
executing tests and performing analysis of test
results.
Environment • Environment in which the test(s) will be performed.
Entry Criteria • Criteria that must be met before test execution may
start.
Exit Criteria • Criteria to be met for the test stage to end.
Techniques/tools • Special techniques, methods to be adopted; test
harnesses, drivers or automated test tools to be
used.
Deliverables • Inventory of deliverables from the test stage.
- 93. Testing in the real world
Time and cost limit what can be done
Some risks may be deemed
acceptable, without testing
Some risks will be de-scoped to squeeze the
plan into the available timescales or budget
– mark de-scoped line items 'out of scope'
– if someone asks later what happened, you have
evidence that the risk was considered, but deemed
acceptable.
- 94. Stages, Teams and
Environments
- 95. So what is this real world,
for this project?
[Diagram: the Master Test Plan partitions the risks (failure modes) across stages (levels), each with a team and an environment: Unit Testing by developers, personally or by pair; Integration Testing by developers; System Testing by the system test team; Large-Scale Integration Testing by the LSI test team; and Acceptance Testing by the acceptance test team in a to-be-live environment (or a copy of live).]
- 96. Many focused stages, or
bigger broader stages?
It depends: may be convenient to...
– group test types together (each as early as
possible)
– group by responsibilities (teams / managers)
– group into available environments
Need to keep dependencies in right sequence
System development lifecycle has an
influence:
– agile methods suggest fewer, broader stages
– proprietary methods may have own stage names.
- 97. Integration, and
Large-Scale Integration
[Diagram: integration builds outward in layers - 1. Application (objects); 2. Web sub-system (web server); 3. Order-processing sub-system (database server); 4. Full e-business system. Large-scale integration then connects the system to the banking system (credit-card processor) and legacy system(s).]
- 98. Organising the teams
[Organisation chart: Programme Management oversees Requirements, Design & Development, Service Integration and Service Implementation, under Project (Release) Management. A Testing Strategy & Co-ordination function (involved in acceptance testing) spans Unit, Integration & System Testing; Large-Scale Integration Testing; and Acceptance Testing & Pilot.]
- 99. Specifying and procuring
the environments
[Timeline diagram: each release moves through environments over time. The development environment (location A) hosts unit & integration testing; the test environment (location A) hosts system testing; a to-be-live environment (location B) hosts LSI and acceptance testing, then pilot, before becoming the live environment; the disaster-recovery and test environment (location C) later hosts LSI and acceptance testing for subsequent releases. Live and test interfaces are connected and tested before each go-live.]
- 100. Where do the non-functional
tests fit?
Some need a stable functional base, e.g.
performance, security, but can do some early
work:
– configuration benchmarking, infrastructure &
component performance
– security inspections, e.g. source code, network
Usability testing can & should be done early.
- 101. System development life-cycles
Waterfall is still in use, but iterative is more
suited to risk-based testing:
– iterative lifecycles are themselves risk-reducers
– iterations give good opportunity to re-evaluate risks
– incremental is special case of iterative; if
functionality added to stable core, retest risk low
But: if design is iterated, retest risk high, need
much regression testing.
- 102. from, and completion of, test
stages
[Diagram: deliveries flow up the stages over time. Unit testing takes every unit as soon as unit-tested; Integration testing takes sub-systems or periodic builds, plus "fix" builds; System testing takes deliveries of major functional areas; then Large-Scale Integration, Acceptance and Pilot for Release 1. Each stage also performs retesting and regression testing of fixes, and Release 2 starts while Release 1 completes.]
- 104. Estimation in a vacuum
Need to estimate early
"It's impossible!"
– no requirements yet
– don't know the risks
– don't know how many faults there will be
– don't know how severe the faults will be
– don't know how long the developers will take to fix them
– etc., etc.
But you need some high level estimates to
include in the plan.
- 105. Problems with high-level estimates
If unsubstantiated by lower level detail
– management add a dose of optimism
– your estimates may be pruned
– contingency removed if it sticks out too much
Estimates not properly justified will be
challenged, or simply thrown out
Management target costs, timescales
– based on pride
– because of real or imagined pressure from above
– they lose much of their reputation if dates are not met
Project may not be approved if estimates are high.
- 106. Estimation formula A
Assumption: testing takes as long as development
1. FUNCTESTBUDGET = cost of all technical design
and documentation plus
development (excluding
any testing at all)
2. DEVTESTBUDGET = developers' estimate of
their unit and integration
testing
- 107. Estimation formula A (cont'd)
3. INDEPTESTBUDGET = FUNCTESTBUDGET -
DEVTESTBUDGET
Document an important assumption: that your team will have sign-
off authority on the developers' unit and especially integration test
plans. This may seem overbearing, but if testing is neglected at
any stage, the price will be paid!
4. SYSTESTBUDGET = 0.75xINDEPTESTBUDGET
5. USERTESTBUDGET = 0.25xINDEPTESTBUDGET
6. NFTESTBUDGET = 0.25xFUNCTESTBUDGET
if stringent performance test required
7. Else NFTESTBUDGET= 0.15xFUNCTESTBUDGET
if no or superficial performance test
required
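Formula A can be sketched directly as code (a hedged Python transcription of steps 3-7; the function name and the budget figures in the example call are illustrative):

```python
# Hedged sketch of estimation formula A from the slides.
# FUNCTESTBUDGET = cost of all technical design, documentation and
# development (excluding testing); DEVTESTBUDGET = developers' own estimate.
def formula_a(func_test_budget, dev_test_budget, stringent_perf=True):
    indep = func_test_budget - dev_test_budget              # 3. INDEPTESTBUDGET
    sys_test = 0.75 * indep                                 # 4. SYSTESTBUDGET
    user_test = 0.25 * indep                                # 5. USERTESTBUDGET
    # 6./7. NFTESTBUDGET: 25% if a stringent performance test is
    # required, otherwise 15%
    nf = (0.25 if stringent_perf else 0.15) * func_test_budget
    return {
        "INDEPTESTBUDGET": indep,
        "SYSTESTBUDGET": sys_test,
        "USERTESTBUDGET": user_test,
        "NFTESTBUDGET": nf,
    }

# Illustrative figures: 100k of design/development, 20k developer testing
print(formula_a(100_000, 20_000))
```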
- 108. Estimation formula A (cont'd)
Total test budget for staff resources is
therefore FUNCTESTBUDGET + 25% (or
whatever uplift you choose for non-functional
testing)
Add a figure for the cost of test tool licenses
that you believe you will incur
– Consult one or more vendors on what to budget
– If they exaggerate they risk losing the business!
– Tools can cost up to $100,000 or even $150,000
Then add a cost for test environment(s).
- 109. Estimation formula A (cont'd)
Ask technical architect to estimate the cost and
effort required to build three independent
functional test environments
– Component testing
– integration plus system testing (overlapped for most
of the functional testing period)
– production-scale performance test environment
available for two months
Add that to your total estimate.
- 110. Estimation formula B (1-2-3 rule)
This formula works best for SYSTEM testing
If you have detailed
requirements/designs, scan a few typical
pages of the requirement
– Estimate how many test conditions per page
– Multiply the average conditions per page by the
number of requirements pages
– C = total test conditions
Assume: tester can specify 50 conditions/day
Specification:
– Effort to specify SYSTEM test = NDAYS = C / 50
- 111. Estimation formula B (cont'd)
Preparation:
– Effort = 2 x NDAYS
if preparing test data and expected results is easy
– Effort = 3 x NDAYS
if preparing test data and expected results is harder
– Effort = 4 x NDAYS
if you need to prepare detailed scripts for audit purposes
Execution:
– Effort = NDAYS (if it goes well (does it ever?))
– Effort = 3 x NDAYS (if it goes badly (always?))
Regression test (execution):
– Effort = NDAYS
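Formula B can be sketched as code (the factors mirror the slides; the function name and the page counts in the example call are illustrative):

```python
# Hedged sketch of estimation formula B (the 1-2-3 rule) from the slides.
def formula_b(pages, conditions_per_page, conditions_per_day=50,
              prep_factor=2, exec_factor=3):
    c = pages * conditions_per_page        # C: total test conditions
    ndays = c / conditions_per_day         # tester specifies ~50 conditions/day
    return {
        "specify": ndays,                  # effort to specify the system test
        "prepare": prep_factor * ndays,    # 2 if easy, 3 if harder, 4 for audit scripts
        "execute": exec_factor * ndays,    # 1 if it goes well, 3 if badly
        "regression": ndays,               # regression test execution
    }

# Illustrative figures: 100 requirements pages, 25 conditions per page
est = formula_b(100, 25)
print(est, "total:", sum(est.values()))
```

Total effort lands between 6 x NDAYS (easy preparation, smooth execution) and 9 x NDAYS (harder preparation, bad execution), as the next slide states.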
- 112. Estimation formula B (cont'd)
Total Effort for SYSTEM TEST is between
– 6 NDAYS and 9 NDAYS
Allocate an additional budget of one-third of the
system test budget to user acceptance, i.e.
– between 2 x NDAYS and 3 x NDAYS
For non-functional testing and tools, use formula B,
the 1-2-3 rule:
– 1 day to specify tests (the test cases)
– 2 days to prepare tests
– 1-3 days to execute tests (3 if it goes badly)
– Easy to remember, but you may have different ratios.
- 113. Exercise – Estimation formula A
Total budget for analysis, design and
development = £100k
Developers say component/integration test will
cost £20k
Calculate, using formula A
– Budget for system and acceptance testing
– Non-functional testing
– Total cost of development and testing
Document your assumptions.
- 114. Exercise – Estimation formula A
(cont‟d)
1. FUNCTESTBUDGET =
2. DEVTESTBUDGET =
3. INDEPTESTBUDGET =
4. SYSTESTBUDGET =
5. USERTESTBUDGET =
6. NFTESTBUDGET =
7. Total cost of dev + test =
8. Assumptions?
- 115. Exercise – Estimation formula B
A 150 page specification has 100 pages of
requirements with an average of 25 conditions
per page
Calculate using formula B estimates for:
– Specification
– Preparation (assume it's easy to do)
– Execution (assume the worst)
– Execution of a complete regression test
– User acceptance test.
- 117. Master Test Planned:
now what's next?
Master Test Plan has estimated, at a
high level, resources & time cost
Next steps are:
– more detailed and confident test plans for
each stage
– bottom-up estimates to check against top-
down
– preparing to manage test specification &
execution to time, cost, quality...
- 118. Test Plans for
each testing stage
e.g. for System Testing: a matrix maps risks and requirements to test objectives, and those objectives to numbered tests (1, 2, 3, ...). [Matrix: risks F1 (125), F2 (10), F3 (60), F4 (32), U1 (12), U2 (25), U3 (30), S1 (100), S2 (40) each map to a matching test objective; requirements F1-F3 map to generic test objective G4, and requirements N1-N2 to generic test objective G5. Each test column is then assigned an importance: H, L, M, H, H, H, M, M, M, L, L, ...]
- 119. Plan to manage risk during
test execution
[Diagram: quality, time, cost and scope are the competing project variables, with risk linking them; one pair (scope and time) is shown as the best pair to fine-tune during test execution.]
- 120. Test Design:
target execution schedule
[Diagram: a target execution schedule across environments, teams and testers over test execution days 1-15+. One partition runs the functional tests (balance & transaction reporting; inter-account transfers; payments; direct debits; end-to-end customer scenarios) in numbered, sequenced slots, followed by retests and regression tests; a separate partition runs the disruptive non-functional tests. The schedule marks both an earliest and a comfortable completion date.]
- 121. Test Specification
and Risk Traceability
- 122. What test specification
documents after Design?
[Diagram: after the Master Test Plan come stage test plans and designs, then test cases, "scripts" (test procedure specifications) and test data - the data either converted from "live" or contrived - feeding execution & management via test procedures and a schedule.]
- 123. Context-driven approach affects
test documentation
[Diagram contrasting two schools. "Best practice": a requirement yields a specification and an expected outcome; the product's test outcome is observed and evaluated to a pass/fail. Context-driven: context, cognitive psychology and epistemology shape heuristics; the requirement and product are explored, abductive inference is applied to the test outcome, leading to bug advocacy and an impression of the product.]
- 124. Variables in test documentation
Format of the documents: automated
tool, spreadsheet, word processor, or
even hand-written cards!
Test procedures: detailed content of
tests if needed, and separate overall
process
Expected outcomes: explicit or implicit?
Test data: synthetic and lifelike.
- 125. How much test documentation
is required?
Stake out two extremes, eg:
– “best practice” documentation, reviewed
– spreadsheet overviews, no review
Identify the requirements each “extreme pre-
requisite” is really trying to address, eg:
– anyone can execute; efficiency is paramount
Distil common objective, question
assumptions, inject refinements; best fit might
be a mixture of documentation styles tailored
to your project
– Sources: Software documentation superstitions
(Daich); Goldratt's Theory of Constraints (Dettmer).
- 126. Traceability of risks
Many projects use risk analysis
already, but...
Maintaining traceability of risks is a
challenge
For risk-based reporting, need clerical help
or automation (more work needed on this)
Beware over-complex risk relationships
Tactical solution is usually spreadsheets.
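A spreadsheet-style tactical solution can be approximated in a few lines (the risk and test IDs are hypothetical):

```python
# Hedged sketch: spreadsheet-style traceability of risks to tests.
# Risk IDs, test IDs and pass results below are illustrative.
risk_to_tests = {
    "F1": ["T1", "T2"],
    "S1": ["T3"],
    "U2": ["T4", "T5"],
}
passed = {"T1", "T2", "T4"}  # tests that have passed so far

def risk_status(risk: str) -> str:
    # A risk is "closed" only when all of its tests have passed
    tests = risk_to_tests[risk]
    return "closed" if all(t in passed for t in tests) else "open"

report = {r: risk_status(r) for r in risk_to_tests}
print(report)  # the open risks are the residual risks of releasing today
```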
- 127. Part IV
Managing Test Execution
and the End-Game
- 128. Part IV - Agenda
Progress and Incident Management
Risk and Benefits-Based Test Reporting
Consensus and Risk-Based Acceptance.
- 130. Risk reduction components
Tests passed reduce perceived risk
Components of risk reduction are:
– progress through tests
– severities of incidents reported
– progress through fault-fixing and retesting
– first quantitative (have we done enough testing?)
– then qualitative (can we tolerate the remaining
specific risks?).
- 131. Incident classification and entry-
exit criteria
Classifying incidents:
– priority: how much testing it interrupts (“urgency”)
– severity: business impact if not fixed (“importance”)
– three levels of each may be enough (ten too many)
Entry and exit criteria:
– entry = preparations + adequate exit from previous
stage(s)
– exit: build, delivery or release can proceed to next stage.
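A sketch of three-level classification driving an exit criterion (the specific criterion shown is an illustrative assumption, not from the slides):

```python
# Hedged sketch: three-level priority/severity classification of incidents.
from enum import IntEnum

class Level(IntEnum):
    LOW = 1     # three levels of each may be enough (ten are too many)
    MEDIUM = 2
    HIGH = 3

# Illustrative open incidents: priority = how much testing it interrupts
# ("urgency"); severity = business impact if not fixed ("importance")
incidents = [
    {"id": 1, "priority": Level.HIGH, "severity": Level.MEDIUM},
    {"id": 2, "priority": Level.LOW,  "severity": Level.HIGH},
    {"id": 3, "priority": Level.LOW,  "severity": Level.LOW},
]

# Example exit criterion: no HIGH-severity incident still open
def exit_criterion_met(open_incidents) -> bool:
    return not any(i["severity"] == Level.HIGH for i in open_incidents)

print(exit_criterion_met(incidents))
```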
- 132. Progress through tests
We are interested in two main aspects:
– can we manage the test execution so it completes before
the target date?
– if not, can we do it for those tests of high (and medium?)
importance?
[Charts: a burn-up of target vs actual tests run and tests passed against the target date; and a stacked view of pass/fail counts split by importance (high/medium/low).]
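The underlying counts for this kind of progress reporting can be computed simply (the statuses and importances below are hypothetical):

```python
# Hedged sketch: tracking execution progress, overall and by importance.
# The test records below are illustrative assumptions.
tests = [
    {"importance": "H", "status": "pass"},
    {"importance": "H", "status": "fail"},
    {"importance": "M", "status": "pass"},
    {"importance": "L", "status": "not run"},
]

def progress(tests, importance=None):
    """Return (tests run, tests passed, total) - optionally for one importance."""
    pool = [t for t in tests if importance in (None, t["importance"])]
    run = [t for t in pool if t["status"] != "not run"]
    passed = [t for t in pool if t["status"] == "pass"]
    return len(run), len(passed), len(pool)

print("all tests:", progress(tests))
print("high importance only:", progress(tests, "H"))
```

If the overall plan cannot complete by the target date, the same function answers the fallback question: can we at least finish the high-importance tests?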
- 133. Progress through incident fixing
and retesting
Similarly, two main aspects:
– can we manage the workflow to get incidents fixed
and retested before target date?
– if not, can we do it for those of material impact?
[Charts: cumulative incidents raised vs resolved and closed against the target date; and outstanding incidents split into awaiting fix, deferred and closed, highlighting those of material impact.]
- 134. Quantitative and qualitative risk
reduction from tests and retests
[Diagram: progress and residual risk up the right side of the W-model. For System Testing, Large-Scale Integration Testing and Acceptance Testing, progress through tests (pass/fail by H/M/L importance) and progress through incident fixing & retesting (awaiting fix, resolved, deferred, closed) together show the remaining material impact.]
- 136. Risk-based reporting
[Diagram: risk-based reporting plots progress through the test plan from planned start to planned end; all risks are "open" at the start, and at any point (TODAY) the chart shows the residual risks of releasing.]
- 137. Benefits of risk-based test
reporting
Risk of release is known:
– On the day you start and throughout the test phase
– On the day before testing is squeezed
Progress through the test plan brings positive
results – risks are checked off, benefits available
Pressure: to eliminate risks and for testers to
provide evidence that risks are gone
We assume the system does not work until we
have evidence – “guilty until proven innocent”
Reporting is in the language that management
and stakeholders understand.
- 138. Benefit & objectives based test
reporting
[Diagram: each benefit depends on one or more test objectives; objectives trace to risks, each either open or closed. When all the risks blocking a benefit are closed, that benefit is available for release.]
- 139. Benefits of benefit-based test
reporting
Risk(s) that block every benefit are known:
– On the day you start and throughout the test phase
– Before testing is squeezed
Progress through the test plan brings positive
results – benefits are delivered
Pressure: to eliminate risks and for testers to
provide evidence that benefits are delivered
We assume that the system has no benefits to
deliver until we have evidence
Reporting is in the language that management
and stakeholders understand.
- 140. How good is our testing?
Our testing is good if it provides:
– Evidence of the benefits delivered
– Evidence of the CURRENT risk of release
– At an acceptable cost
– In an acceptable timeframe
Good testing is:
– Knowing the status of benefits with
confidence
– Knowing the risk of release with confidence.
- 142. Stakeholders and acceptance
hierarchy
Testers build consensus among the differing viewpoints
of stakeholders, but
senior management will typically accept the system.
[Diagram: an acceptance hierarchy - senior management at the top, with customer-facing hopes and internal-facing fears; project & quality management below; then User Acceptance and Operational Acceptance, joined by tester-facilitated consensus.]
- 143. Test Review Boards and
degrees of approval
To ascend the approval and acceptance
hierarchy, testers facilitate Testing Review Boards, e.g. at
Large-scale Integration, User Acceptance, Operational
Acceptance and Pilot start.
Decisions could be:
– unqualified
– qualified
– delay
- 144. Slippages and trade-offs: an
example
If Test Review Boards recommend
delay, management may demand a trade-off: "slip
in a little of that descoped functionality"
This adds benefits but also new risks.
[Diagram: scope against date - the original target date, a first slip, then the actual go-live.]
- 145. Tolerable risk-benefit balance:
another example
Even if we resist the temptation to trade off slippage
against scope, we may still need to renegotiate the
tolerable level of risk balanced against benefits.
[Diagram: net risk (risk minus benefits) falls over time; a "go for it" margin permits go-live at the actual date, after the original target date.]
- 147. Risk-based test approach:
planning
RBT approach helps stakeholders:
– They get more involved and buy in
– They have better visibility of the test process
RBT approach helps testers
– Clear guidance on the focus of testing
– Approval to test against risks in scope
– Approval to not test against risks out of scope
– Clearer test objectives upon which to design tests.
- 148. Risk-based test approach:
execution and reporting
RBT approach helps stakeholders:
– They have better visibility of the benefits
available and the risks that block benefits
RBT approach helps management:
– To see progress in terms of risks addressed
and benefits that are available for delivery
– To manage the risks that block acceptance
– To better make the release decision.
- 149. Risk-Based Testing
Close
Any Questions?
Document templates can be found at
www.riskbasedtesting.com