Principles of Software Testing: Presentation Transcript
Principles of Software Testing Part I Satzinger, Jackson, and Burd
Testing is a process of identifying defects
Develop test cases and test data
A test case is a formal description of:
A starting state
One or more events to which the software must respond
The expected response or ending state
Test data is a set of starting states and events used to test a module, group of modules, or entire system
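The three parts of a test case map directly onto automated test code. A minimal sketch in Python; the `Account` class and its behavior are hypothetical stand-ins for a module under test:

```python
import unittest

class Account:
    """Hypothetical module under test."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawTestCase(unittest.TestCase):
    def test_withdraw_reduces_balance(self):
        account = Account(balance=100)         # starting state
        account.withdraw(30)                   # event the software must respond to
        self.assertEqual(account.balance, 70)  # expected ending state

# Run the suite programmatically (python -m unittest also works)
runner = unittest.TextTestRunner()
runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(WithdrawTestCase))
```

A set of such (starting state, event, expected result) triples is exactly what the slide calls test data.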
Testing discipline activities
Test types and detected defects
Unit testing: the process of testing individual methods, classes, or components before they are integrated with other software
Two methods for isolated testing of units:
Driver: simulates the behavior of a method that sends a message to the method being tested
Stub: simulates the behavior of a method that has not yet been written
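A small sketch of both ideas in Python; all the names (`calculate_shipping`, the rate lookup) are invented for illustration:

```python
def calculate_shipping(weight, rate_lookup):
    """Method under test: depends on a rate-lookup method not yet written."""
    return weight * rate_lookup(weight)

def stub_rate_lookup(weight):
    """Stub: stands in for the unwritten rate-lookup method,
    returning a canned answer."""
    return 2.5

def driver():
    """Driver: simulates the caller by sending test messages
    to the method under test and checking the responses."""
    assert calculate_shipping(10, stub_rate_lookup) == 25.0
    assert calculate_shipping(0, stub_rate_lookup) == 0.0
    print("unit tests passed")

driver()
```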
Integration testing: evaluates the behavior of a group of methods or classes
Identifies interface compatibility, unexpected parameter values or state interaction, and run-time exceptions
System testing: an integration test of the behavior of an entire system or independent subsystem
Build and smoke test
System test performed daily or several times a week
Usability testing: determines whether a method, class, subsystem, or system meets user requirements
Performance testing: determines whether a system or subsystem can meet time-based performance criteria
Response time specifies the desired or maximum allowable time limit for software responses to queries and updates
Throughput specifies the desired or minimum number of queries and transactions that must be processed per minute or hour
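Both criteria can be checked mechanically. A sketch in Python, where the handler and the 0.5-second limit are invented for illustration:

```python
import time

MAX_RESPONSE_SECONDS = 0.5   # hypothetical response-time requirement

def handle_query():
    """Stand-in for the query or update operation under test."""
    time.sleep(0.01)
    return "ok"

# Response time: each call must finish within the limit.
start = time.perf_counter()
handle_query()
elapsed = time.perf_counter() - start
assert elapsed <= MAX_RESPONSE_SECONDS, f"too slow: {elapsed:.3f}s"

# Throughput: count how many calls complete in one second.
deadline = time.perf_counter() + 1.0
completed = 0
while time.perf_counter() < deadline:
    handle_query()
    completed += 1
print(f"throughput: {completed} queries/second")
```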
User Acceptance Testing
Determines whether the system fulfills user requirements
Involves the end users
Acceptance testing is a very formal activity in most development projects
Who Tests Software?
Testing buddies can test other programmers’ code
Usability and acceptance testing
Volunteers are frequently used to test beta versions
Quality assurance personnel
All testing types except unit and acceptance
Develop test plans and identify needed changes
Part II Principles of Software Testing for Testers Module 0: About This Course
After completing this course, you will be a more knowledgeable software tester. You will be better able to:
Understand and describe the basic concepts of functional (black box) software testing.
Identify a number of test styles and techniques and assess their usefulness in your context.
Understand the basic application of techniques used to identify useful ideas for tests.
Help determine the mission and communicate the status of your testing with the rest of your project team.
Characterize a good bug report, peer-review the reports of your colleagues, and improve your own report writing.
Understand where key testing concepts apply within the context of the Rational Unified Process.
0 – About This Course
1 – Software Engineering Practices
2 – Core Concepts of Software Testing
3 – The RUP Testing Discipline
4 – Define Evaluation Mission
5 – Test and Evaluate
6 – Analyze Test Failure
7 – Achieve Acceptable Mission
8 – The RUP Workflow As Context
Principles of Software Testing for Testers Module 1: Software Engineering Practices (Some things testers should know about them)
Identify some common software development problems.
Identify six software engineering practices for addressing common software development problems.
Discuss how a software engineering process provides supporting context for software engineering practices.
Symptoms of Software Development Problems
User or business needs not met
Modules don’t integrate
Hard to maintain
Late discovery of flaws
Poor quality or poor user experience
Poor performance under load
No coordinated team effort
Trace Symptoms to Root Causes
Symptoms: needs not met; requirements churn; modules don’t fit; hard to maintain; late discovery; poor quality; poor performance; colliding developers; build-and-release problems
Root causes: incorrect requirements; ambiguous communications; brittle architectures; overwhelming complexity; undetected inconsistencies; insufficient testing; subjective assessment; waterfall development; uncontrolled change; insufficient automation
Software engineering practices that address them: Develop Iteratively; Manage Requirements; Use Component Architectures; Model Visually (UML); Continuously Verify Quality; Manage Change
Example: Continuously Verify Quality addresses the symptom of poor quality and the root causes of undetected inconsistencies, insufficient testing, and subjective assessment
Software Engineering Practices Reinforce Each Other
Developing iteratively reinforces the other practices:
Validates architectural decisions early on
Addresses complexity of design/implementation incrementally
Measures quality early and often
Evolves baselines incrementally
Ensures users are involved as requirements evolve
The practices: Develop Iteratively; Manage Requirements; Use Component Architectures; Model Visually (UML); Continuously Verify Quality; Manage Change
Principles of Software Testing for Testers Module 2: Core Concepts of Software Testing
Introduce foundation topics of functional testing
Provide stakeholder-centric visions of quality and defect
Explain test ideas
Introduce test matrices
Module 2 Content Outline
Defining functional testing
Definitions of quality
A pragmatic definition of defect
Dimensions of quality
Test idea catalogs
In this course, we adopt a common, broad current meaning for functional testing: it is interested in any externally visible or measurable attribute of the software other than performance.
In functional testing, we think of the program as a collection of functions
We test it in terms of its inputs and outputs.
How Some Experts Have Defined Quality
Fitness for use (Dr. Joseph M. Juran)
The totality of features and characteristics of a product that bear on its ability to satisfy a given need (American Society for Quality)
Conformance with requirements (Philip Crosby)
The total composite product and service characteristics of marketing, engineering, manufacturing and maintenance through which the product and service in use will meet expectations of the customer (Armand V. Feigenbaum)
Note absence of “conforms to specifications.”
Quality As Satisfiers and Dissatisfiers
Joseph Juran distinguishes between Customer Satisfiers and Dissatisfiers as key dimensions of quality:
Satisfiers: the right features
Dissatisfiers: hard to use; incompatible with the customer’s equipment
A Working Definition of Quality
Quality is value to some person.
---- Gerald M. Weinberg
Change Requests and Quality
A “defect” – in the eyes of a project stakeholder– can include anything about the program that causes the program to have lower value.
It’s appropriate to report any aspect of the software that, in your opinion (or in the opinion of a stakeholder whose interests you advocate) causes the program to have lower value.
Dimensions of Quality: FURPS
Reliability: e.g., test that the application behaves consistently and predictably
Performance: e.g., test online response under average and peak loading
Functionality: e.g., test the accurate workings of each usage scenario
Usability: e.g., test the application from the perspective of convenience to the end user
Supportability: e.g., test the ability to maintain and support the application under production use
A Broader Definition of Dimensions of Quality
Conformance to standards
Installability and uninstallability
Collectively, these are often called Qualities of Service, Nonfunctional Requirements, Attributes, or simply the -ilities
A test idea is a brief statement that identifies a test that might be useful.
A test idea differs from a test case, in that the test idea contains no specification of the test workings, only the essence of the idea behind the test.
Test ideas are generators for test cases: potential test cases are derived from a test ideas list.
A key question for the tester or test analyst is which ones are the ones worth trying.
Exercise 2.3: Brainstorm Test Ideas (1/2)
We’re about to brainstorm, so let’s review…
Ground Rules for Brainstorming
The goal is to get lots of ideas. You brainstorm together to discover categories of possible tests—good ideas that you can refine later.
There are more great ideas out there than you think.
Don’t criticize others’ contributions.
Jokes are OK, and are often valuable.
Work later, alone or in a much smaller group, to eliminate redundancy, cut bad ideas, and refine and optimize the specific tests.
Often, these meetings have a facilitator (who runs the meeting) and a recorder (who writes good stuff onto flipcharts). These two keep their opinions to themselves.
Exercise 2.3: Brainstorm Test Ideas (2/2)
A field can accept integer values between 20 and 50.
What tests should you try?
A Test Ideas List for Integer-Input Tests
Common answers to the exercise would include:
Test         Why it’s interesting           Expected result
20           Smallest valid value           Accepts it
19           Smallest - 1                   Reject, error msg
0            0 is always interesting        Reject, error msg
Blank        Empty field, what’s it do?     Reject? Ignore?
49           Valid value                    Accepts it
50           Largest valid value            Accepts it
51           Largest + 1                    Reject, error msg
-1           Negative number                Reject, error msg
4294967296   2^32, overflow integer?        Reject, error msg
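These test ideas mechanize nicely. A sketch in Python for the 20-to-50 field; the validator `accept` is hypothetical, standing in for the field under test:

```python
def accept(text):
    """Hypothetical field validator: accepts integers 20..50 inclusive."""
    try:
        value = int(text)
    except ValueError:
        return False
    return 20 <= value <= 50

# (input, should_accept) pairs drawn from the test-ideas list above
cases = [
    ("20", True),           # smallest valid value
    ("19", False),          # smallest - 1
    ("0", False),           # zero is always interesting
    ("", False),            # blank field
    ("49", True),           # ordinary valid value
    ("50", True),           # largest valid value
    ("51", False),          # largest + 1
    ("-1", False),          # negative number
    ("4294967296", False),  # 2^32: integer overflow?
]

for text, expected in cases:
    assert accept(text) is expected, f"failed on {text!r}"
print("all boundary cases passed")
```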
Discussion 2.4: Where Do Test Ideas Come From?
Where would you derive Test Ideas Lists?
Brainstorm sessions among colleagues
A Catalog of Test Ideas for Integer-Input tests
At LB of value
At UB of value
At LB of value - 1
At UB of value + 1
Outside of LB of value
Outside of UB of value
At LB number of digits or chars
At UB number of digits or chars
Empty field (clear the default value)
Outside of UB number of digits or chars
Wrong data type (e.g. decimal into integer)
Non-printing char (e.g., Ctrl+char)
DOS filename reserved chars (e.g., " * . :")
Upper ASCII (128-254)
Upper case chars
Lower case chars
Modifiers (e.g., Ctrl, Alt, Shift-Ctrl, etc.)
Function key (F2, F3, F4, etc.)
The Test-Ideas Catalog
A test-ideas catalog is a list of related test ideas that are usable under many circumstances.
For example, the test ideas for numeric input fields can be catalogued together and used for any numeric input field.
In many situations, these catalogs are sufficient test documentation. That is, an experienced tester can often proceed with testing directly from these without creating documented test cases.
Apply a Test Ideas Catalog Using a Test Matrix
Rows: test ideas from the catalog (Nothing, LB-1, Lower Bound, Upper Bound, UB+1, Zero, Spaces, Negative)
Columns: one per field name, so each idea can be checked off for each field
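A test matrix of this kind can be generated from a catalog and a field list. A sketch; the field names are hypothetical:

```python
# Rows: catalog of test ideas; columns: fields each idea applies to.
catalog = ["Nothing", "LB-1", "Lower Bound", "Upper Bound",
           "UB+1", "Zero", "Spaces", "Negative"]
fields = ["quantity", "discount", "zip_code"]   # hypothetical field names

# Build an empty matrix; testers mark cells as each test is run.
matrix = {idea: {field: " " for field in fields} for idea in catalog}
matrix["Lower Bound"]["quantity"] = "X"   # example: mark one test as done

# Print the matrix as a plain-text table.
print(f"{'Test idea':<12}" + "".join(f"{f:<10}" for f in fields))
for idea in catalog:
    row = "".join(f"{matrix[idea][f]:<10}" for f in fields)
    print(f"{idea:<12}{row}")
```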
Review: Core Concepts of Software Testing
What is Quality?
Who are the Stakeholders?
What is a Defect?
What are Dimensions of Quality?
What are Test Ideas?
Where are Test Ideas useful?
Give some examples of Test Ideas.
Explain how a catalog of Test Ideas could be applied to a Test Matrix.
Principles of Software Testing for Testers Module 4: Define Evaluation Mission
So? Purpose of Testing?
The typical testing group has two key priorities.
Find the bugs (preferably in priority order).
Assess the condition of the whole product (as a user will see it).
Sometimes, these conflict
The mission of assessment is the underlying reason for testing, from management’s viewpoint. But if you aren’t hammering hard on the program, you can miss key risks.
Missions of Test Groups Can Vary
Maximize bug count
Block premature product releases
Help managers make ship / no-ship decisions
Minimize technical support costs
Conform to regulations
Minimize safety-related lawsuit risk
Assess conformance to specification
Find safe scenarios for use of the product (find ways to get it to work, in spite of the bugs)
Verify correctness of the product
A Different Take on Mission: Public vs. Private Bugs
A programmer’s public bug rate includes all bugs left in the code at check-in.
A programmer’s private bug rate includes all the bugs that are produced, including the ones fixed before check-in.
Estimates of private bug rates have ranged from 15 to 150 bugs per 100 statements.
What does this tell us about our task?
Defining the Test Approach
The test approach (or “testing strategy”) specifies the techniques that will be used to accomplish the test mission.
The test approach also specifies how the techniques will be used.
A good test approach is:
Heuristics for Evaluating Testing Approach
James Bach collected a series of heuristics for evaluating your test approach. For example, he says:
Testing should be optimized to find important problems fast, rather than attempting to find all problems with equal urgency.
Please note that these are heuristics: they won’t always be the best choice for your context. But in different contexts, you’ll find different ones very useful.
What Test Documentation Should You Use?
Test planning standards and templates
Some benefits and costs of using IEEE-829 standard based templates
When are these appropriate?
Thinking about your requirements for test documentation
Questions to elicit information about test documentation requirements for your project
Write a Purpose Statement for Test Documentation
Try to describe your core documentation requirements in one sentence that doesn’t have more than three components.
The test documentation set will primarily support our efforts to find bugs in this version, to delegate work, and to track status.
The test documentation set will support ongoing product and test maintenance over at least 10 years, will provide training material for new group members, and will create archives suitable for regulatory or litigation use.
Review: Define Evaluation Mission
What is a Test Mission?
What is your Test Mission?
What makes a good Test Approach (Test Strategy)?
What is a Test Documentation Mission?
What is your Test Documentation Goal?
Principles of Software Testing for Testers Module 5: Test & Evaluate
Test and Evaluate – Part One: Test
In this module, we drill into Test and Evaluate
This addresses the “How?” question:
How will you test those things?
Test and Evaluate – Part One: Test
This module focuses on the activity Implement Test
Earlier, we covered Test-Idea Lists, which are input here
In the next module, we’ll cover Analyze Test Failures , the second half of Test and Evaluate
Review: Defining the Test Approach
In Module 4, we covered Test Approach
A good test approach is:
The techniques you apply should follow your test approach
Discussion Exercise 5.1: Test Techniques
There are as many as 200 published testing techniques. Many of the ideas are overlapping, but there are common themes.
Similar sounding terms often mean different things, e.g.:
User interface testing
What are the differences among these techniques?
Dimensions of Test Techniques
Think of the testing you do in terms of five dimensions:
Evaluation: how to tell whether the test passed or failed.
Test techniques often focus on one or two of these, leaving the rest to the skill and imagination of the tester.
Test Techniques—Dominant Test Approaches
Of the 200+ published Functional Testing techniques, there are ten basic themes.
They capture the techniques in actual practice.
In this course, we call them:
Stochastic or Random testing
“So Which Technique Is the Best?”
Comparing techniques A through H along the five dimensions (Testers, Coverage, Potential problems, Activities, Evaluation) shows that no technique scores well on all of them.
Each has strengths and weaknesses
Think in terms of complement
There is no “one true way”
Mixing techniques can improve coverage
Apply Techniques According to the LifeCycle
Test Approach changes over the project
Some techniques work well in early phases; others in later ones
Align the techniques to iteration objectives
From Inception through Elaboration and Construction to Transition:
From a limited set of focused tests to many varied tests
From a few components of software under test to a large system under test
From a simple test environment to a complex test environment
From a focus on architectural and requirement risks to a focus on deployment risks
Module 5 Agenda
Overview of the workflow: Test and Evaluate
Defining test techniques
Stochastic or Random testing
Using techniques together
At a Glance: Function Testing
Tag line: Black box unit testing
Objective: Test each function thoroughly, one at a time
Testers: Any
Coverage: Each function and user-visible variable
Potential problems: A function does not work in isolation
Activities: Whatever works
Evaluation: Whatever works
Harshness: Varies
SUT readiness: Any stage
Complexity: Simple
Strengths & Weaknesses: Function Testing
Spreadsheet, test each item in isolation.
Database, test each report in isolation
Thorough analysis of each item tested
Easy to do as each function is implemented
Misses exploration of the benefits offered by the program.
At a Glance: Equivalence Analysis (1/2)
Tag line: Partitioning, boundary analysis, domain testing
Objective: There are too many test cases to run; use a stratified sampling strategy to select a few test cases from a huge population
Testers: Any
Coverage: All data fields, and simple combinations of data fields. Data fields include input, output, and (to the extent they can be made visible to the tester) internal and configuration variables
Potential problems: Data, configuration, error handling
At a Glance: Equivalence Analysis (2/2)
Activities: Divide the set of possible values of a field into subsets, and pick values to represent each subset. Typical values will be at boundaries. More generally, the goal is to find a “best representative” for each subset and to run tests with these representatives. Advanced approach: combine tests of several “best representatives”; there are several approaches to choosing an optimal small set of combinations
Evaluation: Determined by the data
Harshness: Designed to discover harsh single-variable tests and harsh combinations of a few variables
SUT readiness: Any stage
Complexity: Simple
Strengths & Weaknesses: Equivalence Analysis
Equivalence analysis of a simple numeric field.
Printer compatibility testing (multidimensional variable, doesn’t map to a simple numeric field, but stratified sampling is essential)
Find highest probability errors with a relatively small set of tests.
Intuitively clear approach, generalizes well
Errors that are not at boundaries or in obvious special cases.
The actual sets of possible values are often unknowable.
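The central activity of equivalence analysis, dividing a field’s possible values into subsets and picking a best representative of each, can be sketched for the 20-to-50 integer field used earlier; the partition helper is hypothetical:

```python
def partition_numeric_field(lower, upper):
    """Partition the inputs to an integer field into equivalence
    classes, picking boundary values as best representatives."""
    return {
        "below range": [lower - 1],     # invalid class, near the boundary
        "in range":    [lower, upper],  # valid class, both boundaries
        "above range": [upper + 1],     # invalid class, near the boundary
    }

classes = partition_numeric_field(20, 50)
for name, representatives in classes.items():
    print(f"{name}: test with {representatives}")
```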
Optional Exercise 5.2: GUI Equivalence Analysis
Pick an app that you know and some dialogs
MS Word and its Print, Page setup, Font format dialogs
Select a dialog
Identify each field, and for each field
What is the type of the field (integer, real, string, ...)?
List the range of entries that are “valid” for the field
Partition the field and identify boundary conditions
List the entries that are almost too extreme and too extreme for the field
List a few test cases for the field and explain why the values you chose are the most powerful representatives of their sets (for showing a bug)
Identify any constraints imposed on this field by other fields
At a Glance: Specification-Based Testing
Tag line: Verify every claim
Objective: Check conformance with every statement in every spec, requirements document, etc.
Testers: Any
Coverage: Documented requirements, features, etc.
Potential problems: Mismatch of implementation to spec
Activities: Write and execute tests based on the specs; review and manage docs and traceability
Evaluation: Does behavior match the spec?
Harshness: Depends on the spec
SUT readiness: As soon as modules are available
Complexity: Depends on the spec
Strengths & Weaknesses: Spec-Based Testing
Traceability matrix, tracks test cases associated with each specification item.
User documentation testing
Critical defense against warranty claims, fraud charges, loss of credibility with customers.
Effective for managing scope / expectations of regulatory-driven testing
Reduces support costs / customer complaints by ensuring that no false or misleading representations are made to customers.
Any issues not in the specs or treated badly in the specs /documentation.
Traceability Tool for Specification-Based Testing
The Traceability Matrix: columns are specification statements (Stmt 1 through Stmt 5), rows are test cases (Test 1 through Test 6), and an X marks each statement that a given test exercises.
Optional Exercise 5.5: What “Specs” Can You Use?
Getting information in the absence of a spec
What substitutes are available?
The user manual – think of this as a commercial warranty for what your product does.
What other “specs” can you/should you be using to test?
Exercise 5.5—Specification-Based Testing
Here are some ideas for sources that you can consult when specifications are incomplete or incorrect.
Software change memos that come with new builds of the program
User manual draft (and previous version’s manual)
Published style guide and UI standards
Risk-Based Testing
Three key meanings:
Find errors (risk-based approach to the technical tasks of testing)
Manage the process of finding errors (risk-based test management)
Manage the testing project and the risk posed by (and to) testing in its relationship to the overall project (risk-based project management)
We’ll look primarily at risk-based testing (#1), proceeding later to risk-based test management.
The project management risks are very important, but out of scope for this class.
At a Glance: Risk-Based Testing
Tag line: Find big bugs first
Objective: Define, prioritize, and refine tests in terms of the relative risk of issues we could test for
Testers: Any
Coverage: By identified risk
Potential problems: Identifiable risks
Activities: Use qualities of service, risk heuristics, and bug patterns to identify risks
Evaluation: Varies
Harshness: Harsh
SUT readiness: Any stage
Complexity: Any
Strengths & Weaknesses: Risk-Based Testing
Optimal prioritization (if we get the risk list right)
High power tests
Risks not identified or that are surprisingly more likely.
Some “risk-driven” testers seem to operate subjectively.
How will I know what coverage I’ve reached?
Do I know that I haven’t missed something critical?
Workbook Page—Risks in Qualities of Service
Installability and uninstallability; conformance to standards; reliability; scalability; security; supportability; testability; usability
Each quality category is a risk category, as in: “the risk of unreliability.”
Workbook Page—Heuristics to Find Risks (1/2)
Risk Heuristics: Where to look for errors
New things : newer features may fail.
New technology: new concepts lead to new mistakes.
Learning Curve: mistakes due to ignorance.
Changed things : changes may break old code.
Late change: rushed decisions, rushed or demoralized staff lead to mistakes.
Rushed work: some tasks or projects are chronically underfunded and all aspects of work quality suffer.
Workbook Page—Heuristics to Find Risks (2/2)
Risk Heuristics: Where to look for errors
Complexity : complex code may be buggy.
Bugginess : features with many known bugs may also have many unknown bugs.
Dependencies : failures may trigger other failures.
Untestability : risk of slow, inefficient testing.
Little unit testing : programmers find and fix most of their own bugs. Shortcutting here is a risk.
Little system testing so far : untested software may fail.
Previous reliance on narrow testing strategies : (e.g. regression, function tests), can yield a backlog of errors surviving across versions.
Workbook Page—Bug Patterns As a Source of Risks
Testing Computer Software laid out a set of 480 common defects. To use these:
Find a defect in the list
Ask whether the software under test could have this defect
If it is theoretically possible that the program could have the defect, ask how you could find the bug if it was there.
Ask how plausible it is that this bug could be in the program and how serious the failure would be if it was there.
If appropriate, design a test or series of tests for bugs of this type.
Use the web: www.bugnet.com
Workbook Page—Risk-Based Test Management
Project risk management involves
Identification of the different risks to the project (issues that might cause the project to fail, fall behind schedule, cost too much, or dissatisfy customers or other stakeholders)
Analysis of the potential costs associated with each risk
Development of plans and actions to reduce the likelihood of the risk or the magnitude of the harm
Continuous assessment or monitoring of the risks (or the actions taken to manage them)
Useful material free at http://seir.sei.cmu.edu
http://www.coyotevalley.com (Brian Lawrence)
Good paper by Ståle Amland, Risk-Based Testing and Metrics, in the appendix.
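The risk-management steps above are often reduced to a likelihood-times-impact score used to rank where to test first. A sketch; the risks and ratings are invented for illustration:

```python
# Each risk rated 1 (low) to 5 (high) on likelihood and impact.
risks = [
    ("payment processing fails",     4, 5),
    ("search returns stale results", 3, 2),
    ("help page typo",               2, 1),
]

# Exposure = likelihood x impact; test the highest exposure first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"exposure {likelihood * impact:>2}: {name}")
```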
Optional Exercise 5.6: Risk-Based Testing
You are testing Amazon.com
(Or pick another familiar application)
What are the functional areas of the app?
Then evaluate risks:
What are some of the ways that each of these could fail?
How likely do you think they are to fail? Why?
How serious would each of the failure types be?
At a Glance: Stress Testing
Tag line: Overwhelm the product
Objective: Learn what failure at extremes tells about changes needed in the program’s handling of normal cases
Testers: Specialists
Coverage: Limited
Potential problems: Error-handling weaknesses
Activities: Specialized
Evaluation: Varies
Harshness: Extreme
SUT readiness: Late stage
Complexity: Varies
Strengths & Weaknesses: Stress Testing
Buffer overflow bugs
High volumes of data, device connections, long transaction chains
Low memory conditions, device failures, viruses, other crises
Expose weaknesses that will arise in the field.
Expose security risks.
Weaknesses that are not made more visible by stress.
At a Glance: Regression Testing
Tag line: Automated testing after changes
Objective: Detect unforeseen consequences of change
Testers: Varies
Coverage: Varies
Potential problems: Side effects of changes; unsuccessful bug fixes
Activities: Create automated test suites and run them against every (major) build
Evaluation: Varies
Harshness: Varies
SUT readiness: For unit, early; for GUI, late
Complexity: Varies
Strengths & Weaknesses—Regression Testing
Bug regression, old fix regression, general functional regression
Automated GUI regression test suites
Cheap to execute
“Immunization curve”
Anything not covered in the regression suite
Cost of maintaining the regression suite
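A regression suite in miniature: rerun the same automated checks against every build and flag any change in behavior. A sketch with a hypothetical function under test:

```python
def title_case(s):
    """Hypothetical function under regression test."""
    return s.title()

# Each entry pins behavior observed in an earlier, known-good build;
# a failure after a change signals an unforeseen side effect.
regression_suite = [
    ("hello world", "Hello World"),
    ("already Title", "Already Title"),
    ("", ""),
]

failures = [(inp, expected, title_case(inp))
            for inp, expected in regression_suite
            if title_case(inp) != expected]
print("failures:", failures)
```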
At a Glance: Exploratory Testing
Tag line: Simultaneous learning, planning, and testing
Objective: Simultaneously learn about the product and about the test strategies to reveal the product and its defects
Testers: Explorers
Coverage: Hard to assess
Potential problems: Everything unforeseen by planned testing techniques
Activities: Learn, plan, and test at the same time
Evaluation: Varies
Harshness: Varies
SUT readiness: Medium to late; use cases must work
Complexity: Varies
Strengths & Weaknesses: Exploratory Testing
Limited by each tester’s weaknesses (this can be mitigated with careful management)
This is skilled work; juniors aren’t very good at it.
At a Glance: User Testing
Tag line: Strive for realism; let’s try real humans (for a change)
Objective: Identify failures in the overall human/machine/software system
Testers: Users
Coverage: Very hard to measure
Potential problems: Items that will be missed by anyone other than an actual user
Activities: Directed by user
Evaluation: User’s assessment, with guidance
Harshness: Limited
SUT readiness: Late; has to be fully operable
Complexity: Varies
Strengths & Weaknesses—User Testing
In-house lab using a stratified sample of target market
Expose design issues
Find areas with high error rates
Can be monitored with flight recorders
Can use in-house tests to focus on controversial areas
Coverage not assured
Weak test cases
Beta test technical results are mixed
Must distinguish marketing betas from technical betas
At a Glance: Scenario Testing
Tag line: Instantiation of a use case; do something useful, interesting, and complex
Objective: Challenging cases to reflect real use
Testers: Any
Coverage: Whatever the stories touch
Potential problems: Complex interactions that happen in real use by experienced users
Activities: Interview stakeholders and write screenplays, then implement tests
Evaluation: Any
Harshness: Varies
SUT readiness: Late; requires stable, integrated functionality
Complexity: High
Strengths & Weaknesses: Scenario Testing
Use cases, or sequences involving combinations of use cases.
Appraise product against business rules, customer data, competitors’ output
Hans Buwalda’s “soap opera testing.”
Complex, realistic events. Can handle (help with) situations that are too complex to model.
Exposes failures that occur (develop) over time
Single function failures can make this test inefficient.
Must think carefully to achieve good coverage.
Workbook Page—Test Scenarios From Use Cases
Outline of a Use Case
Has one normal, basic flow (“Happy Path”)
Several alternative flows
Exceptional flows handling error situations
“Happy Path”
Workbook Page—Scenarios Without Use Cases
Sometimes, we develop test scenarios independently of use cases. The ideal scenario has four characteristics:
It is realistic (e.g. it comes from actual customer or competitor situations).
It is easy (and fast) to determine whether a test passed or failed.
The test is complex. That is, it uses several features and functions.
There is a stakeholder who has influence and will protest if the program doesn’t pass this scenario.
Workbook Page—Soap Operas
A Soap Opera is a scenario based on real-life client/customer experience.
Exaggerate every aspect of it. For example:
For each variable, substitute a more extreme value
If a scenario can include a repeating element, repeat it lots of times
Make the environment less hospitable to the case (increase or decrease memory, printer resolution, video resolution, etc.)
Create a real-life story that combines all of the elements into a test case narrative.
At a Glance: Stochastic or Random Testing (1/2)
Tag line: Monkey testing; high-volume testing with new cases all the time
Have the computer create, execute, and evaluate huge numbers of tests.
The individual tests are not all that powerful, nor all that compelling.
The power of the approach lies in the large number of tests.
These broaden the sample, and they may test the program over a long period of time, giving us insight into longer term issues.
At a Glance: Stochastic or Random Testing (2/2)
Testers: Machines
Coverage: Broad but shallow; problems with stateful apps
Potential problems: Crashes and exceptions
Activities: Focus on test generation
Evaluation: Generic, state-based
Harshness: Weak individual tests, but huge numbers of them
SUT readiness: Any
Complexity: Complex to generate, but individual tests are simple
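A minimal random (“monkey”) test generator: the computer creates large numbers of weak tests and watches only for crashes and unhandled exceptions. The parser under test is hypothetical; a real harness would feed the program’s actual inputs:

```python
import random
import string

def parse_quantity(text):
    """Hypothetical code under test: should never crash,
    whatever it is fed."""
    text = text.strip()
    if text.isdigit():
        return int(text)
    return None

random.seed(1)   # reproducible run
crashes = []
for _ in range(10_000):
    # Generate a random input; individual cases are weak, volume is the point.
    length = random.randint(0, 12)
    text = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_quantity(text)
    except Exception as exc:   # any unhandled exception is a find
        crashes.append((text, exc))

print(f"ran 10000 random cases, {len(crashes)} crashes")
```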
Combining Techniques (Revisited)
A test approach should be diversified
Applying opposite techniques can improve coverage
Often one technique can extend another
Comparing techniques A through H along the five dimensions (Testers, Coverage, Potential problems, Activities, Evaluation) shows where each leaves gaps that another can fill.
Applying Opposite Techniques to Boost Coverage
Contrast these two techniques:
Regression: old test cases and analyses leading to new test cases; archival test cases, preferably well documented, plus bug reports; reuse across multi-version products
Exploration: models or other analyses that yield new tests; scribbles and bug reports; find new bugs, scout new areas, risks, or ideas
Applying Complementary Techniques Together
Regression testing alone suffers fatigue
The bugs get fixed and new runs add little info
Symptom of weak coverage
Combine automation w/ suitable variance
E.g. Risk-based equivalence analysis
Coverage of the combination can beat the sum of the parts
Example: combining Equivalence, Risk-based, and Regression testing in one suite
How To Adopt New Techniques
Answer these questions:
What techniques do you use in your test approach now?
What is its greatest shortcoming?
What one technique could you add to make the greatest improvement, consistent with a good test approach?