2. Introduction
• This is work in progress and is in a draft state.
• It however contains enough information to be initially helpful.
• The prime focus is to note items that need to be included within project plans, and to assist in estimating.
3. Scope
• This slide pack assumes the creation of an application that is then integrated onto a hardware platform.
• Testing of the hardware platform itself, and the integration testing for this, are not included at present within this draft pack.
4. Contents
• Overview
• Budgeting for Tooling
• Design and Build
• Application Security and DDA Testing
• Test Application and Integration Phase
• Working with Widgets and GUI Interfaces
• Estimating Manual Test Run Time
• Estimating GUI Automation Testing
• Methods for Managing, Prioritising Regression Testing
• Techniques for Deciding Sufficient Testing Done
• Estimating Testing for Protective Monitoring Testing
• Mobile Interface Testing
• Acceptance Testing (FAT, SAT, UAT, OAT)
5. Overview
This collection of notes was compiled to provide guidance for new Test Managers and Project Managers in the art of estimating test effort, with particular attention paid to the activities that are usually forgotten and which lead to stresses and pressures on a test team, and potentially risk to successful delivery.
It needs to be understood that while there is an attempt to bring a science to the process, there will also be special factors to take into account which will require additional effort.
The aim of these notes is to prevent underestimation and so reduce the risk of late delivery or poor test coverage due to a test team being under-resourced.
02/01/2013
6. Targeting within Test Strategy
• Within the test strategy, be clear as to what is being tested at each phase.
• Unit code tests verify the implementation of the understanding of the requirements at a unit code level. The logic is tested as understood and implemented. Error conditions and error handling should also be tested. The creation of stubs and drivers, plus the creation of test data, will be necessary. Typically white box test analysis is used, as per BS7925-2.
• At a functional level, the application integration is being tested. Again functionality is tested, but black box techniques are used, as per BS7925-2.
• End to end functional stories are tested. However, these scenarios need to be clearly documented with clear expectations.
• System level testing concerns the integration of the application with the main system, in addition to security and performance testing.
• Enterprise monitoring testing may be required.
• Performance monitoring testing may be required.
• PKI certificate testing may be required.
• Penetration testing is a security audit.
• User testing may include W3C compliance and testing against functional requirements.
• Site testing concerns itself with site-specific targeted tests.
• Operational testing is concerned with maintenance and support of the system.
7. Good Enough?
• In testing we need to make a call as to what is good enough. This will depend upon commercial and product risk.
• The level of testing required will be defined within the project Test Policy.
• For the purposes of this presentation, we look at a typical large project (£3 million) and point out key things that are often forgotten in estimates.
• While in some projects this will be overkill, the bigger danger is to assume overkill, miss key points and end up with a project overspent, behind schedule and even causing company failure.
8. Rewards and Approach
• A common problem for testers is under-resourcing. Project managers typically estimate testing as a fraction of development time. However, this is frequently incorrect.
• Other pressures arise from the delivery project manager getting the product out to meet a bonus payment based upon saving development costs. However, when the project is handed over to the support phase this can incur far greater costs and risk to the company's reputation. The key point is not to reward wrong behaviour.
• These slides focus on producing realistic time estimations with a responsible view to quality. If dealing with safety or highly sensitive commercial targets, these will reflect the minimum time scales. On the other hand, if one is producing a simple web project with no commercial impact if it fails, and no risk to commercial reputation or life, then far less testing will be required.
9. Is it not simple?
• One approach that is often used as a first finger in the air is to say that the Test Effort is between 50% and 75% of the development effort.
• This however does not always work because:
• Development may assume use of existing code that is integrated. Assuming existing code does not need integration testing can be dangerous (e.g. this is why the Ariane rocket, delivered to project schedule, exploded shortly after lift-off).
• Many test tasks are traditionally underestimated or forgotten, and the broad approach dates from when systems were less complex.
• Some systems may need more test effort than development effort. This may not become obvious until the team is under pressure and it is too late to bring additional staff onto the project.
• Test versions of tools (e.g. Visual Studio) are more expensive than the cut-down development versions.
Hence the answer is: No, it is not that simple, and here are some notes to help.
Side note: You can view the Ariane lift-off at http://www.youtube.com/watch?v=gp_D8r-2hwk. This is a good example of a project delivered to schedule, but not performing full test analysis.
11. Off the Shelf Test Tooling and Hardware
• Typical project tooling to be included in a bid:
• Test management tooling for each tester.
• Access to tooling to track requirements to tests to defects.
• If a .Net project:
• Testers will each require a Visual Studio Ultimate licence.
• Developers will want to ensure that they have access to FxCop and ReSharper within their development versions of Visual Studio.
• A large programme will want to consider using the HP Quality Center tool set.
• Small to medium sized projects will want to use cheaper tools such as Atlassian's.
• If running PKI certificate licensing tests, budget for additional licences that can be cancelled and revoked.
• Is video evidence required for testing – do we need a capture tool?
• Do we need emulators and real devices such as mobile phones for testing? There may also be security checks to make before testing can start.
• Load and performance test tooling. This may require additional licences for each platform, and a licence that allows for the appropriate level of stress testing will be needed.
• The load and performance test platform needs to be comparable and scalable. If resources are too low, then the results may not be scalable and it may be difficult to run tests with even a small number of users, not allowing a realistic trend to be identified with any certainty over margins of error. This in turn can cause considerable problems at SAT and OAT, and cause a project to be delivered late or with performance errors.
• Code coverage tools and other analysis tooling, such as Coverity, may be required for large programmes.
• Are checking tools required (e.g. for DDA / W3C, usability, etc.)?
12. Customised Test Tooling
• There may be the need to develop internal test tooling for a project. This can typically include:
• Methods for generating large amounts of data.
• Methods for comparing or validating large amounts of data.
• Test stubs and test drivers.
• Approaches for extracting data, including extracting data from test tooling.
• Budget for design, build, review, test and verification of the tooling, plus documentation, configuration control and support for the tooling.
13. Test Manager
• Budget for Test Manager:
• 10% of time per day per test team member. If more than 5 individuals, then consider breaking the team down to include team leads.
• 1 day per week dealing with other teams, project manager, development manager, security testing, system architect, etc.
• 0.5 day per week, rising to 1 day per week, in various meetings.
• 1 hour per day preparing short reports and extracting data – this task can be done by a new graduate or technician under Test Manager guidance.
• Reviewing development and other test documentation – 1 hour per document per version. Assume 2 versions.
• Reviewing requirements – 1 day per 100 requirements (assuming all are single logical statements).
• Dealing with customer and customer issues – 1 day per week.
• Reviewing test analysis – 1 day per 100 requirements.
• Reviewing test script coverage – 1 day per 100 test cases.
• Reviewing test coverage – 1 day per regression run.
• Dealing with hosting company – 1 day per week.
• Dealing with other stakeholders – 1 day per week during and leading up to integration; during SAT and OAT this will increase to 1 day per stakeholder. This may need to be covered by additional test support (Principal Consultant level) if there are many stakeholders.
• Writing documentation.
• Who is chairing the code reviews? If the Test Manager is responsible, then this needs to be budgeted: one hour per review, with one hour preparation and one hour for minutes and actions. Also budget in other review team members' effort at one hour per meeting, and potentially longer than one hour preparation time.
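The per-week figures above can be combined into a rough load check. The sketch below is illustrative only: the function name is invented, an 8-hour day and 5-day week are assumed, and the choice of which duties to sum (oversight, liaison, meetings, reporting, customer, hosting) is an editorial selection from the list above.

```python
def tm_weekly_days(team_size: int) -> float:
    """Rough weekly Test Manager load in days, from the budget figures above.

    Assumes a 5-day week and 8-hour days; stakeholder and review duties
    from the list above are not included here and add further load.
    """
    oversight = 0.10 * team_size * 5   # 10% of time per day per team member
    other_teams = 1.0                  # other teams, PM, dev manager, etc.
    meetings = 0.5                     # starting figure, rising to 1 day
    reporting = 5 / 8                  # 1 hour per day (can be delegated)
    customer = 1.0                     # customer and customer issues
    hosting = 1.0                      # dealing with hosting company
    return oversight + other_teams + meetings + reporting + customer + hosting
```

With 5 testers this already exceeds a 5-day week, which supports the slide's advice to add team leads above 5 individuals and to delegate reporting.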
14. Key Test Documents
• The following is just general guidance for writing (not reviewing); time can increase and is less likely to decrease (Principal / Senior level staff):
• Test Strategy – 2 weeks
• Functional Test Plan – 2 weeks
• Functional Test Specification (when required) – 4 weeks
• Non-Functional Test Plan (if required) – 2 weeks
• Security Test Plan – 2 weeks
• Load and Performance Test Plan – 2 weeks
• Integration Test Plan – 2 weeks
• Enterprise Monitoring (EM) Test Plan for SAT – 2 weeks
• Protective Monitoring (PM) Test Plan for SAT – 4 weeks
• Site / System Acceptance Plan (in addition to EM and PM) – 2 weeks
• Operational Acceptance Test Plan – 2 weeks
• Test harness definitions (includes stubs, drivers, etc.) – 2 weeks
• Documentation for other special test tooling – 2 weeks each
• Input to support testing-related work with training material – 2 weeks
• Covered separately (Technician / Recent Graduate):
• Test Analysis
• Test Cases
• Test Scripts (Manual and Automated)
• Test data created
Note: For large projects and programmes additional time may be required.
15. Design and Build Phase
• This part examines testing during the design and build phase.
16. Assumptions and Risks
• Before we even consider estimating, we need to consider the quality of the requirements. Defects leak into the system at the requirements level. Poor or poorly constructed requirements will mean considerable overhead on testing, which in turn means additional resources will be required to help administer the test team, and specifically to help in preparing reports and measuring test effectiveness.
• For poorly constructed requirements, an additional person at consultant level will be required for the duration of the project; OR
• The requirements need to be checked.
• If requirements are poorly constructed, then it may be necessary to break these down into sub-requirements and link to user stories, which will need matching to test cases, and the test cases matching to test scripts. This results in:
• The need for specialist requirements management tooling, and the need to link this with specialist commercial test tooling.
• The need for additional resources to develop, review and manage requirement improvements.
17. Requirements
• Tests are delivered against requirements. It is important to check that:
• There are no requirement gaps.
• There are no requirement conflicts.
• Derived engineering requirements are well understood and documented.
• There are no blatant errors in requirements.
• There is no lack of detail in terms of valid ranges and expectations of behaviour when error conditions arise.
• Details concerning security have been identified.
• Specifications have been checked as if subsets of requirements – do not assume that these will all be complete and correct.
• Where browsers are defined, are all tests to be repeated for each browser, or can we prioritise and distribute tests across different browsers? E.g. 100% of tests on Firefox, 90% on IE 6.0, 25% on others. This needs to be reflected in test run time effort calculations. Note that pop-up behaviour is often different across browsers.
• Where interfaces to other systems are present and the interface requirement is not proven, or is poorly (or unreliably) documented, additional resource will be required.
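The browser prioritisation example above has a direct effect on run-time estimates, since each extra browser multiplies executions. A minimal sketch (the function name and the example split are taken from the slide's figures; rounding behaviour is an assumption):

```python
def browser_run_count(n_tests: int, coverage: dict) -> dict:
    """Number of test executions per browser for a prioritised split.

    coverage maps browser name -> fraction of the pack to run on it.
    """
    return {browser: round(n_tests * frac) for browser, frac in coverage.items()}

# The slide's example split for a 100-test pack:
split = {"Firefox": 1.00, "IE 6.0": 0.90, "Other": 0.25}
runs = browser_run_count(100, split)
total = sum(runs.values())   # 215 executions, not 100 - budget accordingly
```

The point for estimation is that a "100 test" pack here actually costs 215 executions, so run-time effort must be scaled by the distribution, not the script count.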
18. Early intervention
Resource is required to check requirements for testing. This needs to be at Principal / Senior staff level.
As a general rule, 5 to 10 days' investment in requirement review by an appropriate test architect can typically save 2 man-months of effort later, if the advice is adopted.
19. Defect Cost Implications
[Chart: "Cost of Fault by Phase" – cost per fault (£K) rising from near £0 at the requirements phase to around £25K, across the project phases Requirements, Design, Coding, Functional Test, System Test and Field Use.]
• Defects slip in at the requirements phase and grow.
• The later the detection, the greater the cost to detect, fix and retest.
• It is not a choice of being able to afford early test intervention in checking requirements.
• It is a fact that early intervention:
• Saves money
• Prevents project overrun
• Reduces development and test effort
• Improves development delivery
Around 10% of defects are seeded in a project at the requirements phase. Late detection means longer time to delivery of the project and greater costs.
20. Test Management of Requirements
• It is not enough for requirements to have sufficient content to be testable. They also need to be manageable.
• Tests are mapped to individual requirements. Requirements need to be structured as individual single logical statements. Failure to do this will mean that many tests are required to sign off the many attributes of a requirement. As a project grows, it becomes necessary to cut tests down to a manageable set of regression tests based upon risk assessment. With multiple embedded requirements this creates difficulties and introduces the risk of a requirement attribute not being delivered or containing defects that are not tested, and can mean that critical defects go undetected.
• It is vital that requirements are structured as single logical statements, each with a separate reference number. Logical statements normally do not have the words OR / AND within the statement.
• If this structuring is not done, then additional effort will be required to maintain the test reporting tool. If the solution is to create user stories, then these will need to be managed and reviewed, and there may be issues in extracting information from tools such as Visual Studio.
21. Early Testing Effort
• Testing applies equally to coding as well as system testing.
• To cut defect leakage early on, it is vital that code is:
• Reviewed against best practice check lists.
• Checked early for security impacts.
• Checked early for performance issues.
• Checked with tools such as ReSharper and FxCop – this requires configuration and build control effort from the development team, with the necessary resources to run tests and analyse output early on. This will save time in system testing effort and help to speed development. To avoid false reporting, ensure the tools are configured correctly (allow 5 man-days for configuration and setup).
• Resource to ensure adequate static testing, to include:
• Review of code
• Running of static tooling
22. Review of Code Effort
• It is vital that code reviews are adequately resourced. Reviews need to be effective, and so the review rate needs to be considered.
• Too fast, and defects will leak through, increasing the overall project cost.
• Too long, and the review becomes ineffective as people become blinded by lines of code.
• Reviews need to be resourced, regular, guided and targeted.
• A review period of 1 to 2 hours maximum is most efficient; reviews longer than 2 hours need to be broken down into targeted, focused chunks.
• Or give individuals specific areas to review.
• Review rate may be around 1 KLOC/hour.
• Time also needs to be allocated for static review of documents and diagrams.
• Static test tools can help add confidence to a code review and (if set up and used correctly) will add value to a review, but should not be used to replace a code review.
23. Code review activities
• If reviewing code in a closed meeting, comments by one reviewer will typically inspire comments from another reviewer. If reviewing code using tools that support remote reviews, then the first reviewers will miss comments from other reviewers. Hence it is important for all parties to go back over comments once all comments are collected.
• Review tasks should be set for individuals. Typically these will be supported by project check lists and will include:
• Use of good coding practice;
• Code efficiency / performance;
• Code security;
• Consistency with requirements;
• Consistency with interfaces and other code modules. Any module being interfaced with should have an assigned individual representing that module to check compliance.
24. System Architecture
• This has impact for the testing of:
• Security
• Load and Performance
• Ensure that the security test team and performance tester have early input to the design. This review needs to be budgeted.
25. Application Security and DDA Testing
• This part details points that often need testing and can get missed out.
26. Security Scenarios
• While the system will be subjected to security testing, do not forget to test the application as soon as possible. This needs to be budgeted and resourced.
• Scenarios need to be put in place for:
• Ensuring that SQL injection cannot be used – one test per field.
• Ensuring that URL injection cannot be used on secure web pages.
• Checking timeout of logins.
• Checking success of logout, then trying the back button.
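The "one test per field" rule above scales linearly with the number of input fields, which matters for estimation. A minimal sketch of enumerating the scripted checks (the probe strings are common illustrative examples, not an exhaustive or project-specific payload list; field names are invented):

```python
# Illustrative (not exhaustive) probe strings for SQL injection checks.
SQLI_PROBES = [
    "' OR '1'='1",
    "'; DROP TABLE users; --",   # "users" is a placeholder table name
    "1 OR 1=1",
]

def injection_cases(fields):
    """Yield (field, probe) pairs: one scripted check per field per probe."""
    for field in fields:
        for probe in SQLI_PROBES:
            yield field, probe

# Two hypothetical form fields give 2 x 3 = 6 scripted checks to budget for.
cases = list(injection_cases(["username", "search"]))
```

A real assessment would use an established payload corpus and tooling; the point here is only that the test count, and hence the estimate, is fields x probes.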
27. Disability Discrimination Act (DDA) Testing
• While the World Wide Web Consortium (W3C) has tooling to check web sites, this may not be usable on sites prior to go-live. Consequently, if developing on an air-gapped system, testing for disability can be more involved.
• The level of DDA adherence will vary under contractual agreement. However, one should ensure that the following is tested as minimum good practice, and this will need resourcing and budget:
• Check that Blue / Green colour contrast is not present.
• Check that Red / Brown colour contrast is not present.
• Check that Green / Brown colour contrast is not present.
• Check that images and logos have alternate text for web pages.
• Check that a web page reader will actually read within a column before moving to the next column, and not just read in turn the top line of each column before moving to the next line of each column.
• Check that fonts in browsers can be resized, so a page does not restrict access for those with poor eyesight.
• Allow time for scripting and running these extra tests.
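Some of these checks can be partly automated even on an air-gapped system. A minimal sketch, using only the Python standard library, of the alternate-text check above (the class name is invented; a full audit needs much more than this):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags with a missing or empty alt attribute.

    A minimal offline check for the DDA point above; it does not cover
    decorative-image conventions or the other checks in the list.
    """
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<no src>"))

checker = AltTextChecker()
checker.feed('<img src="logo.png"><img src="ok.png" alt="Company logo">')
# checker.missing is ["logo.png"]: the first image has no alt text.
```

Running such a script over page sources before go-live catches the mechanical failures cheaply, leaving manual effort for the reading-order and resizing checks.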
28. Test Application and Integration Phase
• This part examines test related activity during the early testing of an application and during integration.
29. Test Analysis Effort
• Having had time to read the documentation and understand the design, test analysis will be required to identify test cases.
• There are a range of techniques, such as those detailed in BS7925-2, plus methods like the Classification Tree. The CT method comes with a tool, the Classification Tree Editor (CTE), which can help to group tests and cut test effort. In practice, for estimation, this will help to provide a margin of error to avoid underestimation of testing.
• This assumes, however, that the system under test is not safety critical.
• If it is safety critical, the free CTE tool used in a different mode will help to ensure that test cases are less likely to be missed.
• For large projects there is a commercial version of the CTE tool, which is worth consideration.
• The CTE tool also interfaces with the HP tool set.
• Allow at least 15 minutes per single logical requirement for the analysis phase of testing.
30. Manual Test Scripting Effort
• To create a test script from a test case, allow for each logical requirement:
• 10 minutes to write the test setup phase.
• 5 minutes per step, which will equate to values to be entered (taken from the test case).
• 10 minutes to write the end of the test, check the test's sanity and ensure the test is under configuration control.
• As a general rule, a manual test takes around 30 minutes to write per test case.
• NOTE: Test cases need to be reviewed.
• One way to check the sanity of a test is to have another tester run it the first time.
• HOWEVER, the test case set needs to be reviewed for test coverage and effectiveness, and this can take around 5 to 10 minutes per test.
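The scripting figures above can be sketched as a small calculator (the function name is invented; the figures are the slide's rules of thumb, and the 10-minute review is the upper bound of the 5 to 10 minute range):

```python
def script_writing_minutes(steps: int, review: bool = True) -> int:
    """Minutes to write one manual test script, per the rules of thumb above."""
    setup, per_step, teardown = 10, 5, 10
    minutes = setup + per_step * steps + teardown
    if review:
        minutes += 10   # upper bound of the 5-10 minute coverage review
    return minutes

# A 2-step script matches the slide's ~30-minute rule before review:
assert script_writing_minutes(2, review=False) == 30
```

Multiplying by the test case count (at least 11 per single logical requirement, per the estimation slide later) gives a first-cut scripting budget.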
31. Test Case and Script Traps
• There is a risk that test cases and scripts may miss key opportunities to test during intervening steps. So for each step assess what needs to be checked and referenced. Do not focus only on the final state.
• If using end to end scenarios for functional testing, then check that the requirements fully document the required actions. Failure to document the requirement flows fully can lead to inadequate testing.
• Check that the requirement authors are involved in reviewing test cases and scripts.
32. Estimation First Principles
• It is assumed that all requirements are single logical statements. If a statement refers to a standard or other sets of requirements, then the relevant requirements need to be identified as single statements.
• There are a range of test analysis techniques (e.g. Classification Tree and approaches in BS7925-2). For a simple approach one would consider the boundary analysis technique. This would run tests with values between boundaries A and B. The tests that one would use would therefore be:
• Far below boundary A (can include negative numbers)
• Just below boundary A
• On boundary A
• Just above boundary A
• Mid point between boundaries A and B (not always tested, but recommended)
• Just below boundary B
• On boundary B
• Just above boundary B
• Far above boundary B
• Special case of value 0
• Illegal value (e.g. alpha, special characters, etc., when expecting a numeric value)
• So for each single statement requirement there are a minimum of 11 tests. As a general rule this is a good starting point for estimating.
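The 11-case list above can be generated mechanically for a numeric range. A minimal sketch (the function name, the "far" distance of 10 steps, and the illegal-value string are assumptions for illustration):

```python
def boundary_values(a, b, delta=1):
    """The 11 boundary-analysis test inputs for a legal range [a, b].

    delta is the smallest meaningful increment (1 for integers).
    The special-case zero and an illegal value are appended explicitly.
    """
    numeric = [
        a - 10 * delta,   # far below boundary A (may be negative)
        a - delta,        # just below boundary A
        a,                # on boundary A
        a + delta,        # just above boundary A
        (a + b) / 2,      # mid point between A and B (recommended)
        b - delta,        # just below boundary B
        b,                # on boundary B
        b + delta,        # just above boundary B
        b + 10 * delta,   # far above boundary B
    ]
    return numeric + [0, "!@#abc"]   # special case zero, illegal value

cases = boundary_values(10, 20)
# len(cases) == 11, the slide's minimum per single logical requirement.
```

For estimation purposes only the count matters: tests per requirement = 11, which feeds the scripting and run-time formulas later in the pack.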
33. Pair-wise and Orthogonal Array
• Pair-wise testing relies upon 2-variable combinations creating defects that a single change would not produce. So assume 3 inputs (factors), each having a state of 1 or 2 (i.e. 2 levels). We would test 4 cases (runs):

         I/P 1   I/P 2   I/P 3
Case 1     1       1       1
Case 2     1       2       2
Case 3     2       2       1
Case 4     2       1       2

Hence, while thorough, this reduces the test cases from the 8 possible test cases. Orthogonal Arrays take the pair-wise analysis further and are out of the scope of this slide set.
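The claim behind the 4-run table above, that every pair of values for every pair of factors is exercised, can be checked mechanically. A minimal sketch (the function name is invented; the runs are the slide's table):

```python
from itertools import combinations, product

# The slide's 4-run pair-wise design for 3 two-level factors:
RUNS = [(1, 1, 1), (1, 2, 2), (2, 2, 1), (2, 1, 2)]

def covers_all_pairs(runs, levels=(1, 2), factors=3):
    """True if every value pair for every factor pair appears in some run."""
    for f1, f2 in combinations(range(factors), 2):
        needed = set(product(levels, levels))
        seen = {(run[f1], run[f2]) for run in runs}
        if not needed <= seen:
            return False
    return True

assert covers_all_pairs(RUNS)                       # 4 runs cover every pair
assert len(list(product((1, 2), repeat=3))) == 8    # vs 8 exhaustive runs
```

So the 4 runs cover all 2-way combinations that the 8 exhaustive runs would, which is the saving pair-wise testing offers; the gap grows quickly with more factors.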
34. End to End Tests
• End to End Tests are used to check an application and system from a full user perspective.
• The end to end business rules will be defined within the requirements. As a general rule, allow 30 minutes scripting per rule, which needs to include both positive successful end to end cases and cases where the process will lead to testing error handling. Both sets need to be identified in the count for estimation.
35. Working with Widgets and GUI Interfaces
• This part examines estimating test scripting effort for GUI interfaces, i.e. where requirements are structured in a User Experience Document.
36. Estimating scripting effort for a GUI interface
• As with normal requirements, a User Experience Document needs to be reviewed and the single logical features identified.
• Error conditions and legal ranges need to be identified.
• Business rules need to identify the end to end processes.
• As a rough estimate of the amount of scripting time:
• For each widget (GUI interface), the test scripting effort = number of widget features x 11 x 30 minutes, where 11 represents the standard minimum number of boundary tests required.
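The widget formula above is a straight multiplication; a minimal sketch (the function name is invented, the constants are the slide's):

```python
def gui_scripting_minutes(widget_features: int,
                          boundary_tests: int = 11,
                          minutes_per_script: int = 30) -> int:
    """Scripting effort for one widget: features x 11 tests x 30 minutes."""
    return widget_features * boundary_tests * minutes_per_script

# A widget with 4 single logical features:
effort = gui_scripting_minutes(4)   # 1320 minutes, i.e. 22 hours
```

Even a modest 4-feature widget yields 22 hours of scripting, which is why widget counts need to be taken from the User Experience Document rather than guessed.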
37. Estimating Manual Test Run Time
• This part examines estimating test run time for manual test scripts.
38. Estimating Manual Test Run Time For First Pass
• To estimate manual test script run time, for each run allow:
• 5 minutes to set up each test script.
• 3 minutes per step in the script (not including the first setup and final end steps).
• 3 minutes for the end step, BUT add time for defect handling; or count the last step as 5 minutes.
• One can expect that around 10% of scripts will flag a problem and so will need a defect report raised. So allow 15 minutes per defect x 10% of the scripts to be run.
• Any additional time to set up (or re-set) the test environment will need to be added.
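The first-pass run-time rules above combine as follows (a minimal sketch; the function name is invented, the 5-minute end step is the slide's "count last step as 5 minutes" option, and environment setup time is deliberately excluded):

```python
def manual_run_minutes(scripts: int, steps_per_script: int,
                       defect_rate: float = 0.10) -> float:
    """First-pass manual run time in minutes, from the rules of thumb above.

    Excludes environment setup/reset time, which must be added separately.
    """
    per_script = 5 + 3 * steps_per_script + 5   # setup + steps + end step
    defect_time = defect_rate * scripts * 15    # 15 min per defect report
    return scripts * per_script + defect_time

# 100 scripts of 6 intermediate steps each:
total = manual_run_minutes(100, 6)   # 2800 + 150 = 2950 minutes (~49 hours)
```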
39. Estimating Manual Regression Runs
• For each set of scripts run:
• 10% will need to be run again to verify defect fixes.
• Repeat runs will be required for regression runs. This will be either:
• All scripts – initially one would want to re-run at least 3 times.
• Run all scripts once; then, for a non-critical or low risk system, on each regression run where new functionality is being added, gradually reduce the module testing as one adds end to end tests and automated tests, reducing the manual module tests by 10% for each pass. The choice of reduction is based upon risk, and this is covered later.
• IF critical or high risk, then all module tests will be required to be run. However these can be either:
• Gradually automated as the code stabilises.
• Automated from the start; however this has a very high overhead on test effort, and minor changes in the code can mean a considerable need to re-write tests, depending upon the test framework in place. This needs to be resourced.
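The 10%-per-pass reduction for low-risk systems compounds across regression runs; a minimal sketch of the pack size per pass (the function name is invented, and rounding down to whole tests is an assumption):

```python
def regression_pack_sizes(initial: int, passes: int,
                          reduction: float = 0.10) -> list:
    """Manual module-test count per regression pass, cut 10% each pass.

    Applies only to the low-risk case above; critical or high-risk
    systems keep the full pack on every pass.
    """
    return [int(initial * (1 - reduction) ** i) for i in range(passes)]

# 1000 module tests over 5 passes:
sizes = regression_pack_sizes(1000, 5)   # [1000, 900, 810, 729, 656]
```

Note the reduction is not a licence to drop coverage: the cut tests are meant to be displaced by end to end and automated tests, and the selection should follow the risk prioritisation covered later.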
40. Estimating GUI Automation Testing
• This part looks at the approach for estimating GUI automation test effort.
41. GUI Test Automation
• Estimation of GUI automation effort will depend upon the choice of tool, the presence of an automated test framework and the stability of the code.
• If aiming to automate, then allow for:
• Familiarisation with the tooling.
• Setting up of an automated test framework – could be 2 weeks minimum for a developer.
• Scripting, running and proving the first tests will take longer; allow at least one week for the first tests.
• If using a tool like Selenium within a framework, then allow for scripting:
• 5 to 10 minutes for low complexity, based upon experience.
• For highly complex scripts, a single step can take 1 hour to write.
• Hence an estimation and banding of the risk and complexity of the test target needs to be done.
• Note: if using record and playback, scripting time is the same as a test run, but add 15 minutes for administration.
• NOTE: If code is unstable, then the overhead of managing and updating scripts can be high. It may be decided to target automation at regression end to end scripts for stable code.
42. High Use of Automation
• Automation for unit code tests has the advantage of being able to measure code coverage simply, and should be encouraged.
• Usually automation is used gradually to replace manual functional scripts for code that has stabilised and has a low risk of causing the need to re-write automated scripts.
• IF all functional scripts are to be automated early on, then there can be a high level of maintenance. In many instances, a manual test script that takes an hour to write may require a day to write and prove as an automated version (depending upon tool and framework).
• A manual test script may only take 5 minutes to change and may even be tolerant of change to code. However, an automated script may require complete re-writing. So the maintenance level of scripts needs considerable thought. However, there are ways around this.
43. Automation
• Ideally you need a low maintenance approach.
• Use, where possible, common scripts where the data and expected results can be pulled from a table.
• This means that only the data needs manipulating and updating, which in turn can reduce test maintenance effort.
• Always look for the smart approach to tooling and do not rely upon record and playback, as this can be expensive.
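The table-driven idea above can be sketched in a few lines: one generic script, with the data and expected results held in rows, so that maintenance is data editing rather than script re-writing. All the names here, including the stand-in `add` function for the system under test, are invented for illustration.

```python
def add(a, b):
    """Stand-in for the system under test (illustrative only)."""
    return a + b

TEST_TABLE = [
    # (input_a, input_b, expected) - only these rows change between releases
    (1, 1, 2),
    (-5, 5, 0),
    (0, 0, 0),
]

def run_table(func, table):
    """Run one common script over every data row; return the failures."""
    failures = []
    for a, b, expected in table:
        actual = func(a, b)
        if actual != expected:
            failures.append((a, b, expected, actual))
    return failures

# An empty failure list means every data row passed.
results = run_table(add, TEST_TABLE)
```

When the application changes, only the table rows need updating; the common script survives, which is the maintenance saving the slide describes.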
44. Methods for Managing, Prioritising Regression Testing
• There are a number of methods for prioritising regression tests to target risk. This section looks at these.
45. Managing Regression Pack
• A regression pack will grow as functionality is added.
• If manual scripts are being used for the core regression pack, then once the code becomes stable it will be possible to automate scripts gradually.
• Set priorities for automation based upon:
• Targeting scripts that are more successful at finding defects.
• Targeting scripts that test critical or more risk-related functionality.
• When running a manual regression pack within a time limited period, choose the tests as a subset of the full test pack, choosing a customised subset for each run. The choice will be based upon:
• High risk functionality.
• Areas of code that have been changed, or that have interfaces impacted by change.
• Areas of code that have an existing record of being susceptible to defects.
• For a final regression run, one will want to run a full set of tests.
46. End of Part 1 of 2
• See slide pack part 2 of 2.