TQ
PM Tutorial
4/30/13 1:00PM

How to Actually DO High-volume
Automated Testing
Presented by:
Cem Kaner and Carol Oliver
Florida Institute of Technology

Brought to you by:

340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Cem Kaner
Cem Kaner is a professor of software engineering at Florida Institute of Technology and director of FIT’s
Center for Software Testing Education & Research. Cem teaches and researches in software
engineering—software testing, software metrics, and computer law and ethics. In his career, he has
studied and worked in areas of psychology, law, programming, testing, technical writing, and sales,
applying what he learned to the central problem of software satisfaction. Cem has done substantial work on the
development of the law of software quality. He coauthored Testing Computer Software, Lessons Learned
in Software Testing, and Bad Software: What To Do When Software Fails.

Carol Oliver
Carol Oliver is a doctoral student in the Center and leader of the HiVAT project. She has more than a
decade of experience as a tester, most recently as Lead QA Analyst supporting infrastructure technology
services at Stanford University. For most of her career, Carol has been the lone tester supporting more
than a dozen database-driven middleware and web applications, all of which needed to cooperate with
each other and several different infrastructure services simultaneously. Carol is very interested in testing
techniques that help examine complex question spaces more effectively and in how these techniques can
be applied to maximize human attention to the holistic picture.
How to Actually DO High-Volume
Automated Testing
Cem Kaner
Carol Oliver
Mark Fioravanti
Florida Institute of Technology

1
Presentation Abstract
In high-volume automated testing (HiVAT), the test tool generates the
test, runs it, evaluates the results, and alerts a human to suspicious
results that need further investigation.
Simple HiVAT approaches use simple oracles—run the program until it
crashes or fails in some other extremely obvious way.
More powerful HiVAT approaches are more sensitive to more types of
errors. They are particularly useful for testing combinations of many
variables and for hunting hard-to-replicate bugs that involve timing or
corruption of memory or data.
Cem Kaner, Carol Oliver, and Mark Fioravanti present a new strategy
for teaching HiVAT testing. We’ve been creating open source examples
of the techniques applied to real (open source) applications. These
examples are written in Ruby, making the code readable and reusable
by snapping in code specific to your own application. Join Cem Kaner,
Carol Oliver, and Mark Fioravanti as they describe three HiVAT
techniques, their associated code, and how you can customize them.
2
Acknowledgments
• Our thanks to JetBrains (jetbrains.com), for supporting this
work through an Open Source Project License to their Ruby
language IDE, RubyMine.
• These notes are partially based on research supported by NSF
Grant CCLI-0717613 “Adaptation & Implementation of an
Activity-Based Online or Hybrid Course in Software Testing.”
Any opinions, findings and conclusions or recommendations
expressed in this material are those of the author(s) and do
not necessarily reflect the views of the National Science
Foundation.

3
What is automated testing?
• There are many testing activities, such as…
– Design a test, create the test data, run the test and fix it if
it’s broken, write the test script, run the script and debug
it, evaluate the test results, write the bug report, do
maintenance on the test and the script
• When we automate a test, we use software to do one or more
of the testing activities
– Common descriptions of test automation emphasize test
execution and first-level interpretation
– But that automates only part of the task
– Automation of any part is automation to some degree
• All software tests are to some degree automated and no
software tests are fully automated.
4
What is “high volume” automated testing?
• Imagine automating so many testing activities
– That the speed of a human is no longer a constraint on
how many tests you could run
– That the number of tests you run is determined more by
how many are worth running than by how much time they
take
• This is the underlying goal of HiVAT

5
Once upon an N+1th time…
• Imagine developing an N+1th version of a program
  – How could we test it?
• We could try several techniques
  1. Long sequence regression testing. Reuse regression tests from the previous
     version. For those tests that the program can pass individually, run them
     in long random sequences.
  2. Program equivalence testing.
     • Start with function equivalence testing. For each function that exists
       in the old version and the new one, generate random inputs, feed
       them to both versions, and compare results. After a sufficiently
       extensive series, conclude the functions are equivalent.
     • Combine tests of several functions that compare (old versus new)
       successfully individually.
  3. Large-scale system comparison by analyzing large sets of user data from
     the old version (satisfied users attest that they are correct). Check that
     the new version yields comparable results.
     • Before running the tests, run data qualification tests (test the
       plausibility of the inputs and user-supplied sample outputs). The
       goal is to avoid garbage-in-garbage-out troubleshooting; a sketch
       follows this slide.
6
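As a concrete illustration of the data qualification step in technique 3, here is a minimal Ruby sketch; the record fields and plausibility rules are hypothetical stand-ins for whatever your user-supplied data actually contains.

  # A minimal data-qualification pass. The field names and plausibility
  # rules here are hypothetical stand-ins for your own data.
  def qualified?(record)
    input, output = record[:input], record[:output]
    return false unless input.is_a?(Numeric) && output.is_a?(Numeric)
    return false unless input.between?(-1e6, 1e6)         # implausibly large input
    return false if output.respond_to?(:nan?) && output.nan?
    true
  end

  records = [{ input: 4.0, output: 2.0 }, { input: Float::NAN, output: 9.9 }]
  oracle_data, rejected = records.partition { |r| qualified?(r) }
  puts "qualified: #{oracle_data.size}, rejected for review: #{rejected.size}"

Only the qualified records become oracle data; the rejects go to a human for review instead of polluting the comparison.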
Working definitions of HiVAT
• A family of test techniques that use software to generate,
execute and interpret arbitrarily many tests.
• A family of test techniques that empower a tester to focus on
interesting results while using software to design, implement,
execute, and interpret a large suite of tests (thousands to
billions of tests).

7
Breaking out potential benefits of HiVAT
(different techniques → different benefits)
• Find bugs we don’t otherwise know how to find
  – Diagnostics-based
  – Long sequence regression
  – Load-enhanced functional testing
  – Input fuzzing
  – Hostile data stream testing
• Increase test coverage inexpensively
  – Fuzzing
  – Functional equivalence
  – High-volume combination testing
  – Inverse operations
  – State-model based testing
  – High-volume parametric variation
• Qualify large collections of input or output data
  – Constraint checks
  – Functional equivalence testing
8
Examples of the techniques
• Equivalence testing
– Function (e.g. MASPAR)
– Program
– Version-to-version with shared data
• Constraint checking
– Continuing the version-to-version comparison
• Long sequence regression testing (LSRT)
– Mentsville
• Fuzzing
– Dumb monkeys, stability and security tests
• Diagnostics
– Telenova PBX intermittent failures
9
Oracles Enable HiVAT
• Some specific to the testcase
– Reference Programs: Does the SUT behave like it?
– Regression Tests: Do the previously-expected results still happen?
• Some independent of the testcase
– Fuzzing: Run to crash, run to stack overflow
– OS/System Diagnostics
• Some inform on only a very narrow aspect of the testcase
– You’d never dream that passing the oracle check means passing the
test in all possible ways
– Only that passing the oracle check means the program does not fail in
this specific way
– E.g. Constraint Checks
• The more extensive your oracle, the more valuable your high-volume test
suite
– What potential errors have we covered?
– What potential errors have to be checked in some other way?

10
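For example, a constraint check is a deliberately narrow oracle. In this minimal Ruby sketch (the constraint itself is hypothetical), a pass means only that the program did not fail in this one specific way:

  # A minimal constraint-check oracle. The constraint is hypothetical:
  # for non-negative addends, a reported SUM must be >= every addend.
  def sum_constraint_holds?(addends, reported_sum)
    return true unless addends.all? { |a| a >= 0 }  # constraint applies only here
    addends.all? { |a| reported_sum >= a }
  end

  puts sum_constraint_holds?([1, 2, 3], 6)  # true  -- no failure detected
  puts sum_constraint_holds?([1, 2, 3], 2)  # false -- failed in this specific way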
Reference Oracles are Useful but Partial
Based on notes from Doug Hoffman
[Diagram: the System under test and the Reference function each take the same
intended inputs, program state, system state, configuration and system
resources, and input from cooperating processes, clients, or servers. Each
produces monitored outputs, resulting program state, resulting system state,
impacts on connected devices and resources, and output to cooperating
processes, clients, or servers—plus uninspected outputs. A reference oracle
can compare only what is monitored.]
11
Using Oracles
• If you can detect a failure, you can use that oracle in any test,
automated or not
• Many ways to combine oracles, as well as using them one at a
time
• Each has time costs and timing costs
• Heisenbug costs mean you might use just one or two oracles
at a time, rather than all the possible oracles you know, all at
once.

12
Practical Things

MAKING HIVAT WORK FOR YOU

13
Basic Ingredients
• A Large Problem Space
– The payoff should be worth the cost of setting up the computer to do
detailed testing for you
• A Data Generator
– Data generation often requires more human effort than we expect.
This constrains the breadth of testing we can do, because all human
effort takes time.
• A Test Runner
– We have to be able to read the data, run the test, and feed the test to
the oracle without human intervention. Probably we need a
sophisticated logger so we can trace what happened when the
program fails.
• An Oracle
– This might tell us definitively that the program passed or failed the
test or it might look more narrowly and say that the program didn’t fail
in this way. In either case, the strength of the test is driven by your
ability to tell whether it failed. The narrower the oracle, the weaker
the technique.
14
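To make the ingredients concrete, here is a minimal Ruby sketch of the basic loop; the generator, runner, and oracle are trivial stand-ins (our own, not from the talk) that you would replace with code specific to your application.

  # A minimal sketch wiring the four ingredients together. The generator,
  # runner, and oracle below are hypothetical stand-ins: snap in code
  # specific to your own application.
  generate_input = ->(rng) { rng.rand(-1_000.0..1_000.0) }   # Data Generator
  run_test       = ->(x)   { x.abs**0.5 }                    # Test Runner (calls the SUT)
  oracle         = ->(x, y) { (y * y - x.abs).abs < 1e-6 ? :pass : :suspicious }

  rng = Random.new(42)
  10_000.times do |i|                                        # volume is cheap
    x = generate_input.call(rng)
    y = run_test.call(x)
    v = oracle.call(x, y)
    puts "#{i}\t#{v}\t#{x}" unless v == :pass                # log only suspicious results
  end

The human's attention goes only to the logged suspicious results, not to the thousands of passing runs.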
Two Examples and a Future
• Function equivalence tests (testing against a reference
program)
• Long-sequence regression (repurposing already-generated
data with new oracles)
• A more flexible HiVAT architecture

15
Functional Equivalence
• Oracle: A Reference Program
• Method:
– The same testcases are sent to the SUT and to the Reference
Program
– Matched behaviors are considered a Pass
– Discrepant behaviors are considered a Fail
• Limits:
– The reference program might be wrong
– We can only test comparable functions
– Generating semantically meaningful inputs can be hard,
especially as we create nontrivial ones
• Benefits: For every comparable function or combination of
functions, you can establish that your SUT is at least as good as the
Reference Program
16
Functional Equivalence Examples
• Square Root calculated one way vs. Square Root calculated
another way (Hoffman)
• OpenOffice Calc vs. a reference implementation
– Comparing
• Individual functions
• Combinations of functions
– Reference
  • Currently: reference formulas programmed in Ruby
  • Coming (maybe by STAR): comparison to MS Excel

17
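A minimal Ruby sketch of the square-root example: the same random inputs go to two independent implementations, and any discrepancy beyond a tolerance is logged as suspicious. Both implementations here are stand-ins we chose for illustration.

  # Square root calculated one way vs. another way (after Hoffman).
  reference = ->(x) { Math.sqrt(x) }   # reference implementation
  sut       = ->(x) { x**0.5 }         # "system under test" stand-in

  rng = Random.new(1)
  failures = 0
  100_000.times do
    x = rng.rand(0.0..1.0e9)
    a, b = reference.call(x), sut.call(x)
    next if (a - b).abs <= 1e-9 * [a.abs, 1.0].max   # relative tolerance
    failures += 1
    warn "discrepancy: sqrt(#{x}) -> #{a} vs #{b}"
  end
  puts "#{failures} discrepancies in 100000 tests"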
Functional Equivalence Architecture

18
Functional Equivalence: Key Ideas
• You provide your Domain Knowledge
– How to invoke meaningful operations for your system
– Which data to use during the tests
– How to compare the resulting data states
• Test Runner sends the same test operations to both the
Reference Oracle and the System Under Test (SUT)
– Collects both answers
– Compares the answers
– Logs the result

19
Function Equivalence: Architecture
• Component independence allows reusability for other
applications and for later versions of this application.
– We can snap in a different reference function
– We can snap in new versions of the software under test
– The test generator has to know about the category of
application (this is a spreadsheet) but not which particular
application is under test.
– In the ideal architecture, the test runner passes test
specifications to the SUT and oracle without knowing
anything about those applications. To the maximum extent
possible, we want to be able to reuse the test runner code
across different applications.
– The log interpreter probably has to know a lot about what
makes a pass or a fail, which is probably domain-specific.
20
Functional Equivalence: Reference Code
• Work in Progress
– http://testingeducation.org/
– See HiVAT section of site

21
Long-Sequence Regression
• Oracles:
– OS/System diagnostics
– Checks built into each regression test
• Method:
– Reconfigure some known-passing regression tests to
ensure they can build upon one another (i.e. never reset
the system to a clean state)
– Run a subset of those regression tests randomly,
interspersed with diagnostics
– When a test fails on the N+1st time that we run it, after
passing N times during the run, analyze the diagnostics to
figure out what happened and when in the sequence
things first went wrong
22
Long-Sequence Regression
• Limits:
– Failures will be indirect pointers to problems
– You will have to analyze the logs of diagnostic results
– You may need to rerun the sequence with different
diagnostics
– Heisenbug problem limits the number of diagnostics that
you can run between tests
• Benefits:
– This approach is proven good for finding timing errors and
memory misuse

23
Long-Sequence Regression Examples
• Mentsville
• Application to OpenOffice

24
Long-Sequence Regression Architecture

25
Test Runner/Result Checker Operation
• Diagnostics
• A Regression Test
• Diagnostics
• A Regression Test
• Diagnostics
• A Regression Test
• Diagnostics
• ……..
• Final Diagnostics

26
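A minimal Ruby sketch of the alternation shown above; the regression tests and the diagnostic probe are hypothetical placeholders for code that would drive the SUT and collect memory, CPU, or thread data.

  # Long-sequence run: randomly chosen regression tests interspersed
  # with diagnostics. All tests and probes here are hypothetical stubs.
  regression_pool = {
    "open_doc"  => -> { true },   # each returns pass/fail; none resets system state
    "edit_cell" => -> { true },
    "recalc"    => -> { true }
  }
  diagnostics = -> { { rss_kb: 0, threads: 0, time: Time.now } }  # stand-in probe

  rng = Random.new(7)
  log = []
  log << [:diag, diagnostics.call]
  1_000.times do
    name   = regression_pool.keys.sample(random: rng)   # random interleaving
    passed = regression_pool[name].call
    log << [:test, name, passed]
    log << [:diag, diagnostics.call]
    abort "#{name} failed after #{log.size} log entries" unless passed
  end
  log << [:final_diag, diagnostics.call]

When a test fails on the N+1st run, the logged diagnostics bracket every earlier step, which is what makes the failure analyzable.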
Long-Sequence Regression: Key Ideas
• Pool of Long-Sequence-Compatible Regression Tests
– Each of these is Known to Pass, individually, on this build
– Each of these is Known to NOT reset system state
(therefore it contributes to simulating real use of the
system in the field)
– Each of these is Known to Pass when run 100 times in a
row, with just itself altering the system (if not, you’ve
isolated a bug)
• Pool of Diagnostics
– Each of these collects something interesting about the
system or program: memory state, CPU usage, thread
count, etc.
27
Long-Sequence Regression: Key Ideas
• Selection of Regression Tests
– Need to run a small enough set of regression tests that
they will repeat frequently during the elapsed time of the
Long Sequence
– Need to run a large enough set of regression tests that
there’s real opportunity for interactions between functions
of the system
• Selection of Diagnostics
– Need to run a fixed set of diagnostics per Long Sequence,
so that you collect data to compare Regression Test failures
against
– Need to run a small enough set of diagnostics that the act
of running the diagnostics doesn’t significantly change the
system state
28
Long-Sequence Regression: Reference Code
• Work in Progress (not currently available)
– http://testingeducation.org/
– See HiVAT section of site

29
Future Power

A FLEXIBLE ARCHITECTURE FOR
HIVAT
30
Maadi HiVAT Architecture
• Maadi is a generalized HiVAT architecture
– Open Source and written in Ruby
– Developed to support multiple HiVAT techniques
– Developed to support multiple applications and workflows
– Based on demonstrations from previous implementations
• Architecture is divided into Core and Custom components
– Custom components are where the specific
implementations reside
• Maadi, one of the earliest Egyptian mines
– Implemented in Ruby, used RubyMine IDE, etc.

31
Maadi HiVAT Architecture Overview

32
Maadi Core Components
• Application – Ruby Interface to SUT
• Collector – Logging Interface
• Monitor – Diagnostic Utility Interface
• Tasker – Resource Consumption Interface
• Organizer – Implements the Test Technique
• Expert – Domain Expert which builds “skeleton” Procedures
• Generator – Mediates interaction between Organizer and
Expert to assemble a series of Tests
• Scheduler – Collects and orders the completed Procedures
• Controller – Mediates interaction between Scheduler and the
Applications, Monitors, and Taskers to conduct tests
• Analyzer – Performs analysis of collected Results
• Manager – Overall Test Conductor
33
HiVAT Test Process
• Configure Environment
– CLI enables scripted runs via profiles *
• Prepare Components
– Validate component compatibility
– Generate Test Cases, Schedule Test Cases, Log Test Cases
– Log Configuration
• Execute Test Plan
– Record results
• Generate Test Report

* The entire process can be scripted via profiles, start to finish
34
Language Restrictions
• Applications, Organizer and Expert must all understand the
same “procedural” language in order to communicate the
selection, construction, and execution of a test case
– Using a SQL Domain Expert to generate Test Cases for
Microsoft Solitaire is not useful
– Controller will validate that all Applications and the
Organizer can accept the Domain Expert
• Applications and Experts must also understand a second
“result” language in order to capture the results

35
Test Plan Complexities
• Functional Equivalence exposed a challenge:
– How to flexibly specify test details that required random
choice on-the-fly
• Early Attempt: Try to Pre-Specify all the details you want to
have vary:
– Plan 1: log (RNG_1)
• CONSTRAINT RNG_1: Real, >0
– Plan 2: Sum (0...RNG_1) of CellValue (RNG_2)
• CONSTRAINT RNG_1: Integer, >0
• CONSTRAINT RNG_2: Real
– Plan 3: Sum (0...RNG_1) of CellValue (log (RNG_2))
• CONSTRAINT RNG_1: Integer, >0
• CONSTRAINT RNG_2: Real, >0 Delta= 0
36
Test Plan Complexities (2)
• This gets awkward quickly:
– Plan 4: (Sum (0...RNG_1) of CellValue (log(RNG_2)))
/RNG_1
• CONSTRAINT RNG_1: Integer, >0
• CONSTRAINT RNG_2: Real, >0
• Really want to just specify:
– The individual elements with their individual constraints
– The rules for combining the elements together
• Then dynamically assemble the pieces in sensible but
infinitely variable ways at test generation time
• New Approach: Mediate a Conversation with the Expert
during test generation time
37
Generator Mediation
• Generator is responsible for generating a test case
– Interaction between the Organizer and the Expert
produces a test procedure
– Generator supervises and terminates as necessary
• Test Case construction process overview:
1. Organizer selects a Test to build
2. Expert builds a skeleton, adds configurable parameters
3. Organizer populates parameters
4. Expert inspects parameter selection; the Expert may
a) add additional parameters (return to step 2)
b) flag the procedure as complete or failed

38
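A minimal Ruby sketch of this mediation loop. TinyExpert and TinyOrganizer are hypothetical stand-ins, not the real Maadi interfaces, and the Expert's "add parameters or flag complete" behavior is collapsed into a fixed staging table.

  Procedure = Struct.new(:id, :params, :status)

  class TinyExpert
    # Parameters the Expert reveals, one stage per exchange (hypothetical).
    STAGES = [%w[CREATE-TYPE], %w[DATABASE-NAME], %w[TABLE-NAME TABLE-COLUMNS]]

    def skeleton
      Procedure.new("CREATE-NEW-NEXT", {}, :open)
    end

    # Add the next stage's parameters, or flag the procedure complete.
    def inspect!(procedure)
      stage = STAGES.find { |names| names.any? { |n| !procedure.params.key?(n) } }
      stage ? stage.each { |n| procedure.params[n] ||= nil } : procedure.status = :complete
    end
  end

  class TinyOrganizer
    CHOICES = { "CREATE-TYPE" => "TABLE", "DATABASE-NAME" => "dbUniversity",
                "TABLE-NAME" => "tblStudents", "TABLE-COLUMNS" => 5 }

    # Fill every open parameter according to the Test Plan.
    def populate(procedure)
      procedure.params.each_key { |n| procedure.params[n] ||= CHOICES[n] }
    end
  end

  # The Generator supervises the exchange and terminates as necessary.
  def mediate(organizer, expert, max_exchanges: 10)
    procedure = expert.skeleton
    max_exchanges.times do
      expert.inspect!(procedure)        # Expert adds parameters or flags done
      break unless procedure.status == :open
      organizer.populate(procedure)     # Organizer fills the open parameters
    end
    procedure.status = :failed if procedure.status == :open
    procedure
  end

  p mediate(TinyOrganizer.new, TinyExpert.new)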
Generator Mediation (2)

39
Organizer: One Example Test Plan
• Organizer is responsible for constructing a set of test
procedures according to the Test Plan
– The Test Plan is really the resultant set of Test Procedures
– Organizers could be as simple as a Fuzzer
• Test Plan could be satisfied by randomly selecting a
value for any configurable parameter
– Organizers could be as complex as desired and could be as
specific as required
• Test Plan could only be satisfied by successfully
constructing a specific database

40
Organizer: One Example Test Plan (2)
• Example of a Test Plan
– CREATE a TABLE tblStudents in dbUniversity with 5
columns
• s_ID as a PRIMARY KEY, AUTO INCREMENT, INT
• s_FamilyName as VARCHAR(255), NOT NULL
• s_Program as VARCHAR(32), NOT NULL
• s_Major as INTEGER, FOREIGN KEY( tblMajors, m_ID)
• s_Enrolled as DATETIME
– CREATE a TABLE tblStudentClasses in dbUniversity with 3
columns
–…
• The following example illustrates the process of building a
procedure according to the Test Plan
41
Example Procedure Construction
• Participants
– Expert: Structured Query Language (SQL) Domain Expert
– Organizer: Construct University Database
• Beginning Test Selection
• Expert provides a list of possible tests
– the SQL Expert provides:
• CREATE, SELECT, INSERT, UPDATE, DELETE, DROP

• Organizer selects test, requests procedure
– the Organizer would choose:
• CREATE

• After the Test has been chosen, the Organizer can request
that the Expert provide a Test Procedure
42
Example Procedure Construction (2)
• Expert’s Initial response to choosing CREATE
id: CREATE-NEW-NEXT
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] =
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)

• Expert returns procedure with 1 parameter: [CREATE-TYPE]
– Organizer has 2 options; choose TABLE or DATABASE
• Each parameter has a constraint which the Organizer should
follow
– Many different types of constraints possible
– Expert may or may not enforce the constraint; if not
followed the procedure may be flagged as invalid
• Organizer returns procedure with option selected
43
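One way such parameter constraints could be represented and checked in Ruby. The constraint names follow the slides; the Ruby shapes are our own hypothetical choices, not Maadi's implementation.

  # Constraints as predicates the Organizer should satisfy; if a selection
  # violates one, the procedure can be flagged as invalid.
  PICK_LIST      = ->(opts)     { ->(v) { opts.include?(v) } }
  RANGED_INTEGER = ->(min, max) { ->(v) { v.is_a?(Integer) && v.between?(min, max) } }
  ALNUM_WORD     = ->(min, max) { ->(v) { v.is_a?(String) &&
                                          v =~ /\A[[:alnum:]]+\z/ &&
                                          v.length.between?(min, max) } }

  constraints = {
    "CREATE-TYPE"   => PICK_LIST.call(%w[TABLE DATABASE]),
    "TABLE-NAME"    => ALNUM_WORD.call(5, 25),
    "TABLE-COLUMNS" => RANGED_INTEGER.call(1, 25)
  }

  selection = { "CREATE-TYPE" => "TABLE", "TABLE-NAME" => "tblStudents",
                "TABLE-COLUMNS" => 5 }
  invalid = selection.reject { |param, value| constraints[param].call(value) }
  puts invalid.empty? ? "procedure ok" : "flag as invalid: #{invalid.keys}"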
Example Procedure Construction (3)
• Organizer’s Test Plan requires that the Procedure for a CREATE
TABLE be built
– Organizer selects TABLE for the [CREATE-TYPE] parameter
• Organizer returns the following procedure to the Expert
id: CREATE-NEW-NEXT
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)

• Use of Step and Parameters is part of the agreed-upon
Procedural language
– In the case of the SQL example, the parameters will be
substituted inline to construct the desired SQL command
44
Example Procedure Construction (4)
• Expert’s response to [CREATE-TYPE] selection
id: CREATE-TABLE-PICK-DATABASE
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)

• Expert returns a procedure with no new parameters
– Procedure ID has changed
• Expert utilizes Procedure for determining next actions
• Organizer has no selections to make, but the procedure
is not flagged as being complete; therefore it should
return the procedure to the Expert for the next update

45
Example Procedure Construction (5)
• Expert’s response to returned procedure
id: CREATE-TABLE-NEW
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
* parameter: [DATABASE-NAME] =
constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity)

• Expert returns procedure, which has 1 NEW parameter:
[DATABASE-NAME]
– [DATABASE-NAME] has a constraint in which the only two
valid values are either HiVAT or dbUniversity
– Procedure ID has changed (again)
• Expert has not modified or removed existing [CREATE-TYPE]
parameter
46
Example Procedure Construction (6)
• Organizer selects value according to Test Plan
– Organizer selects HiVAT for the [DATABASE-NAME] parameter
• Organizer returns the following procedure to the Expert
id: CREATE-TABLE-NEW
step: CREATE [TYPE]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
* parameter: [DATABASE-NAME] = dbUniversity
constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity)

• Organizer has only selected a parameter which was previously
empty
– If a previously selected parameter is changed, it may cause the
Expert to flag the Procedure as Failed (e.g. setting [CREATE-TYPE] to DATABASE)
47
Example Procedure Construction (7)
• Expert updates procedure and returns it to the Organizer
id: CREATE-TABLE-PARAMS
step: CREATE TABLE [TABLE-NAME]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
* parameter: [DATABASE-NAME] = dbUniversity
constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity)
* parameter: [TABLE-NAME] =
constraint: ALPHANUMERIC WORD (LENGTH MIN: 5, MAX: 25)
* parameter: [TABLE-COLUMNS] =
constraint: RANGED-INTEGER (MIN: 1, MAX: 25)

• The procedure has 2 NEW parameters
– [TABLE-NAME] has an ALPHANUMERIC constraint
– [TABLE-COLUMNS] has a RANGED INTEGER constraint
– Procedure ID and Step name have also changed
48
Example Procedure Construction (8)
• Organizer selects values according to Test Plan
– Organizer selects
• tblStudents for the [TABLE-NAME] parameter
• 5 for the [TABLE-COLUMNS] parameter
• Organizer returns the following procedure to the Expert
id: CREATE-TABLE-PARAMS
step: CREATE TABLE [TABLE-NAME]
* parameter: [CREATE-TYPE] = TABLE
constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE)
* parameter: [DATABASE-NAME] = dbUniversity
constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity)
* parameter: [TABLE-NAME] = tblStudents
constraint: ALPHANUMERIC WORD (LENGTH MIN: 5, MAX: 25)
* parameter: [TABLE-COLUMNS] = 5
constraint: RANGED-INTEGER (MIN: 1, MAX: 25)

49
Example Procedure Construction (9)
• Exchange between the Expert and Organizer continues until
– Expert flags the Procedure as Complete
– Expert flags the Procedure as Failed
– Generator determines that the maximum number of exchanges
has been reached *
• Organizer and Expert may not be able to agree on
an acceptable test
• All procedures (including Failed procedures) are returned to the
Controller
– Controller is configured to discard failed procedures
– If too many failed procedures are detected, the user is notified
and the test is terminated *

* User configurable values
50
Maadi: Key Ideas
• Exploit the commonalities across the HiVAT techniques
• Enable a conversation with the Knowledge Experts to
empower the tests to be more dynamic, yet still sensible

51
Maadi: Reference Code
• Work in Progress
– http://testingeducation.org/
– See HiVAT section of site

52
Is this the future of testing?
• Maybe, but within limits
• Expensive way to find simple bugs
– Significant troubleshooting costs: shoot a fly with an
elephant gun and discover you have to spend enormous
time wading through the debris to find the fly
• Ineffective way to hunt for design bugs
• Emphasizes the families of tests that we know how to code
rather than the families of bugs that we need to hunt

As with all techniques:
These are useful under some circumstances,
probably invaluable under some circumstances,
but ineffective for some bugs and inefficient for others.

53

FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
 
By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...GridMate - End to end testing is a critical piece to ensure quality and avoid...
GridMate - End to end testing is a critical piece to ensure quality and avoid...
 
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptx
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptxSecstrike : Reverse Engineering & Pwnable tools for CTF.pptx
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptx
 
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
GraphSummit Singapore | Enhancing Changi Airport Group's Passenger Experience...
 

How to Actually DO High-volume Automated Testing

  • 6. What is automated testing? • There are many testing activities, such as… – Design a test, create the test data, run the test and fix it if it’s broken, write the test script, run the script and debug it, evaluate the test results, write the bug report, do maintenance on the test and the script • When we automate a test, we use software to do one or more of the testing activities – Common descriptions of test automation emphasize test execution and first-level interpretation – But that automates only part of the task – Automation of any part is automation to some degree • All software tests are to some degree automated and no software tests are fully automated. 4
  • 7. What is “high volume” automated testing? • Imagine automating so many testing activities – That the speed of a human is no longer a constraint on how many tests you could run – That the number of tests you run is determined more by how many are worth running than by how much time they take • This is the underlying goal of HiVAT 5
  • 8. Once upon an N+1th time… • Imagine developing an N+1th version of a program – How could we test it? • We could try several techniques: 1. Long sequence regression testing. Reuse regression tests from the previous version. For those tests that the program can pass individually, run them in long random sequences. 2. Program equivalence testing. • Start with function equivalence testing. For each function that exists in both the old version and the new one, generate random inputs, feed them to both versions, and compare results. After a sufficiently extensive series, conclude the functions are equivalent. • Combine tests of several functions that compare (old versus new) successfully individually. 3. Large-scale system comparison by analyzing large sets of user data from the old version (a satisfied user attests that the outputs are correct). Check that the new version yields comparable results. • Before running the tests, run data qualification tests (test the plausibility of the inputs and user-supplied sample outputs). The goal is to avoid garbage-in, garbage-out troubleshooting. 6
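A minimal Ruby sketch of the function-equivalence idea follows. The two square-root implementations are hypothetical stand-ins for a version-N and a version-N+1 function, not code from any real product:

  # Compare an "old" and a "new" implementation of the same function
  # on many random inputs; log any discrepancy for a human to review.
  def old_sqrt(x)
    Math.sqrt(x)
  end

  def new_sqrt(x)
    x**0.5
  end

  TOLERANCE = 1.0e-12
  failures = []
  10_000.times do
    x = rand * 1_000_000
    old_result, new_result = old_sqrt(x), new_sqrt(x)
    failures << { input: x, old: old_result, new: new_result } if (old_result - new_result).abs > TOLERANCE
  end
  puts failures.empty? ? "functions agree on 10,000 random inputs" : "#{failures.size} discrepancies, e.g. #{failures.first}"

After a sufficiently long run with no discrepancies, the two functions are treated as equivalent for the sampled input domain.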
  • 9. Working definitions of HiVAT • A family of test techniques that use software to generate, execute and interpret arbitrarily many tests. • A family of test techniques that empower a tester to focus on interesting results while using software to design, implement, execute, and interpret a large suite of tests (thousands to billions of tests). 7
  • 10. Breaking out potential benefits of HiVAT (different techniques → different benefits) • Find bugs we don’t otherwise know how to find: diagnostics-based, long sequence regression, load-enhanced functional testing, input fuzzing, hostile data stream testing • Increase test coverage inexpensively: fuzzing, functional equivalence, high-volume combination testing, inverse operations, state-model based testing, high-volume parametric variation • Qualify large collections of input or output data: constraint checks, functional equivalence testing 8
  • 11. Examples of the techniques • Equivalence testing – Function (e.g. MASPAR) – Program – Version-to-version with shared data • Constraint checking – Continuing the version-to-version comparison • LSRT – Mentsville • Fuzzing – Dumb monkeys, stability and security tests • Diagnostics – Telenova PBX intermittent failures 9
  • 12. Oracles Enable HiVAT • Some specific to the testcase – Reference Programs: Does the SUT behave like it? – Regression Tests: Do the previously-expected results still happen? • Some independent of the testcase – Fuzzing: Run to crash, run to stack overflow – OS/System Diagnostics • Some inform on only a very narrow aspect of the testcase – You’d never dream that passing the oracle check means passing the test in all possible ways – Only that passing the oracle check means the program does not fail in this specific way – E.g. Constraint Checks • The more extensive your oracle, the more valuable your high-volume test suite – What potential errors have we covered? – What potential errors have to be checked in some other way? 10
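A constraint-check oracle can be as small as the sketch below; the record layout is invented for illustration. Note that it never confirms an output is correct, only that the output violates no known constraint:

  # A narrow constraint-check oracle: passing means only that the
  # program did not fail in these specific ways.
  def constraint_violations(record)
    violations = []
    violations << "id must be a positive integer" unless record[:id].is_a?(Integer) && record[:id] > 0
    violations << "name must not be blank" if record[:name].to_s.strip.empty?
    violations << "balance must be numeric" unless record[:balance].is_a?(Numeric)
    violations
  end

  record = { id: 42, name: "Ada", balance: 19.95 }   # imagine this came from the SUT
  problems = constraint_violations(record)
  puts problems.empty? ? "no constraint violated (not proof of correctness)" : problems.inspect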
  • 13. Reference Oracles are Useful but Partial (based on notes from Doug Hoffman). [Diagram: the system under test and a reference function each receive intended inputs, program state, system state, configuration and system resources, and input from cooperating processes, clients, or servers; each produces monitored outputs, resulting program state (and uninspected outputs), system state, impacts on connected devices / resources, and output to cooperating processes, clients, or servers. A reference oracle can compare only the monitored portions.]
  • 14. Using Oracles • If you can detect a failure, you can use that oracle in any test, automated or not • Many ways to combine oracles, as well as using them one at a time • Each has time costs and timing costs • Heisenbug costs mean you might use just one or two oracles at a time, rather than all the possible oracles you know, all at once. 12
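One way to act on that advice is to make the set of active oracles a per-run configuration, as in this hypothetical sketch (the three checks and the result layout are invented for illustration):

  # Run only a chosen subset of known oracles per test, trading
  # detection breadth against time and Heisenbug costs.
  ORACLES = {
    no_crash:  ->(result) { !result[:crashed] },
    in_range:  ->(result) { result[:value].between?(0, 100) },
    memory_ok: ->(result) { result[:memory_kb] < 500_000 }
  }

  def failed_checks(result, active)
    active.reject { |name| ORACLES[name].call(result) }   # names of failed checks
  end

  result = { crashed: false, value: 42, memory_kb: 120_000 }  # a hypothetical test result
  puts failed_checks(result, [:no_crash, :in_range]).inspect  # one or two oracles at a time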
  • 15. Practical Things MAKING HIVAT WORK FOR YOU 13
  • 16. Basic Ingredients • A Large Problem Space – The payoff should be worth the cost of setting up the computer to do detailed testing for you • A Data Generator – Data generation often requires more human effort than we expect. This constrains the breadth of testing we can do, because all human effort takes time. • A Test Runner – We have to be able to read the data, run the test, and feed the test to the oracle without human intervention. Probably we need a sophisticated logger so we can trace what happened when the program fails. • An Oracle – This might tell us definitively that the program passed or failed the test or it might look more narrowly and say that the program didn’t fail in this way. In either case, the strength of the test is driven by your ability to tell whether it failed. The narrower the oracle, the weaker the technique. 14
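Put together, the four ingredients form a loop like the following sketch. Everything here is a toy placeholder (the "test" just sums three numbers), meant only to show the shape; snap in code specific to your own application:

  generator = -> { Array.new(3) { rand(-1000..1000) } }              # data generator
  run_test  = ->(inputs) { inputs.sum }                              # stands in for driving the SUT
  oracle    = ->(inputs, output) { output == inputs.reduce(0, :+) }  # pass/fail check

  suspicious = []
  100_000.times do |i|                                               # volume is nearly free once wired up
    inputs = generator.call
    output = run_test.call(inputs)
    suspicious << { test: i, inputs: inputs, output: output } unless oracle.call(inputs, output)
  end
  puts "#{suspicious.size} suspicious results for a human to investigate"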
  • 17. Two Examples and a Future • Function equivalence tests (testing against a reference program) • Long-sequence regression (repurposing already-generated data with new oracles) • A more flexible HiVAT architecture 15
  • 18. Functional Equivalence • Oracle: A Reference Program • Method: – The same testcases are sent to the SUT and to the Reference Program – Matched behaviors are considered a Pass – Discrepant behaviors are considered a Fail • Limits: – The reference program might be wrong – We can only test comparable functions – Generating semantically meaningful inputs can be hard, especially as we create nontrivial ones • Benefits: For every comparable function or combination of functions, you can establish that your SUT is at least as good as the Reference Program 16
  • 19. Functional Equivalence Examples • Square Root calculated one way vs. Square Root calculated another way (Hoffman) • Open Office Calc vs. a reference implementation – Comparing • Individual functions • Combinations of functions – Reference • Currently: reference formulas programmed in Ruby • Coming (maybe by STAR): comparison to MS-Excel 17
  • 21. Functional Equivalence: Key Ideas • You provide your Domain Knowledge – How to invoke meaningful operations for your system – Which data to use during the tests – How to compare the resulting data states • Test Runner sends the same test operations to both the Reference Oracle and the System Under Test (SUT) – Collects both answers – Compares the answers – Logs the result 19
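A sketch of that key loop, with hypothetical adapter classes standing in for the Ruby reference formulas and for code that would drive the real SUT:

  # Same operation to Reference and SUT; collect, compare, log.
  class ReferenceAdapter
    def sum(values)
      values.reduce(0.0, :+)            # a reference formula written in Ruby
    end
  end

  class SutAdapter                      # imagine this forwarding to the spreadsheet under test
    def sum(values)
      values.inject(:+).to_f
    end
  end

  reference, sut, log = ReferenceAdapter.new, SutAdapter.new, []
  1_000.times do
    values = Array.new(rand(1..20)) { rand * 200 - 100 }
    expected = reference.sum(values)
    actual   = sut.sum(values)
    log << { op: :sum, expected: expected, actual: actual,
             verdict: (expected - actual).abs <= 1.0e-9 ? :pass : :fail }
  end
  puts "failures logged: #{log.count { |entry| entry[:verdict] == :fail }}"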
  • 22. Function Equivalence: Architecture • Component independence allows reusability for other applications and for later versions of this application. – We can snap in a different reference function – We can snap in new versions of the software under test – The test generator has to know about the category of application (this is a spreadsheet) but not which particular application is under test. – In the ideal architecture, the test runner passes test specifications to the SUT and oracle without knowing anything about those applications. To the maximum extent possible, we want to be able to reuse the test runner code across different applications. – The log interpreter probably has to know a lot about what makes a pass or a fail, which is probably domain-specific. 20
  • 23. Functional Equivalence: Reference Code • Work in Progress – http://testingeducation.org/ – See HiVAT section of site 21
  • 24. Long-Sequence Regression • Oracles: – OS/System diagnostics – Checks built into each regression test • Method: – Reconfigure some known-passing regression tests to ensure they can build upon one another (i.e. never reset the system to a clean state) – Run a subset of those regression tests randomly, interspersed with diagnostics – When a test fails on the N+1st time that we run it, after passing N times during the run, analyze the diagnostics to figure out what happened and when in the sequence things first went wrong 22
  • 25. Long-Sequence Regression • Limits: – Failures will be indirect pointers to problems – You will have to analyze the logs of diagnostic results – You may need to rerun the sequence with different diagnostics – Heisenbug problem limits the number of diagnostics that you can run between tests • Benefits: – This approach is proven good for finding timing errors and memory misuse 23
  • 26. Long-Sequence Regression Examples • Mentsville • Application to OpenOffice 24
  • 28. Test Runner/Result Checker Operation • Diagnostics • A Regression Test • Diagnostics • A Regression Test • Diagnostics • A Regression Test • Diagnostics • …… • Final Diagnostics 26
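In Ruby, that operation reduces to a loop like the sketch below. The regression tests and the diagnostic here are trivial placeholders; real ones would drive the SUT and query OS/system diagnostics:

  regression_tests = {
    open_close: -> { true },                   # each stands in for a known-passing,
    save_load:  -> { true },                   # state-preserving regression test
    copy_paste: -> { true }
  }
  diagnostics = -> { { time: Time.now.to_f, live_objects: GC.stat(:heap_live_slots) } }

  log = []
  1_000.times do
    log << { diagnostics: diagnostics.call }
    name, test = regression_tests.to_a.sample  # random interleaving from the pool
    log << { test: name, passed: test.call }
  end
  log << { final_diagnostics: diagnostics.call }
  late_failures = log.select { |entry| entry.key?(:test) && !entry[:passed] }
  puts "#{late_failures.size} late failures to correlate against the diagnostic log"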
  • 29. Long-Sequence Regression: Key Ideas • Pool of Long-Sequence-Compatible Regression Tests – Each of these is Known to Pass, individually, on this build – Each of these is Known to NOT reset system state (therefore it contributes to simulating real use of the system in the field) – Each of these is Known to Pass when run 100 times in a row, with just itself altering the system (if not, you’ve isolated a bug) • Pool of Diagnostics – Each of these collects something interesting about the system or program: memory state, CPU usage, thread count, etc. 27
  • 30. Long-Sequence Regression: Key Ideas • Selection of Regression Tests – Need to run a small enough set of regression tests that they will repeat frequently during the elapsed time of the Long Sequence – Need to run a large enough set of regression tests that there’s real opportunity for interactions between functions of the system • Selection of Diagnostics – Need to run a fixed set of diagnostics per Long Sequence, so that you collect data to compare Regression Test failures against – Need to run a small enough set of diagnostics that the act of running the diagnostics doesn’t significantly change the system state 28
  • 31. Long-Sequence Regression: Reference Code • Work in Progress (not currently available) – http://testingeducation.org/ – See HiVAT section of site 29
  • 32. Future Power A FLEXIBLE ARCHITECTURE FOR HIVAT 30
  • 33. Maadi HiVAT Architecture • Maadi is a generalized HiVAT architecture – Open Source and written in Ruby – Developed to support multiple HiVAT techniques – Developed to support multiple applications and workflows – Based on demonstrations from previous implementations • Architecture is divided into Core and Custom components – Custom components are where the specific implementations reside • Maadi, one of the earliest Egyptian mines – Implemented in Ruby, used RubyMine IDE, etc. 31
  • 35. Maadi Core Components • Application – Ruby Interface to SUT • Collector – Logging Interface • Monitor – Diagnostic Utility Interface • Tasker – Resource Consumption Interface • Organizer – Implements the Test Technique • Expert – Domain Expert which builds “skeleton” Procedures • Generator – Mediates interaction between Organizer and Expert to assemble a series of Tests • Scheduler – Collects and orders the completed Procedures • Controller – Mediates interaction between Scheduler and the Applications, Monitors, and Taskers to conduct tests • Analyzer – Performs analysis of collected Results • Manager – Overall Test Conductor 33
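To give a flavor of the component boundaries, here is a hypothetical sketch of two of the simpler roles. These are illustrative shapes only, not Maadi's actual interfaces (see the project code for those):

  require "json"

  class Collector                              # Logging Interface: append structured results
    def initialize(path)
      @file = File.open(path, "a")
    end

    def record(event)
      @file.puts(JSON.generate(event))
    end

    def close
      @file.close
    end
  end

  class Monitor                                # Diagnostic Utility Interface: cheap samples
    def sample
      { monotonic: Process.clock_gettime(Process::CLOCK_MONOTONIC),
        threads: Thread.list.size }
    end
  end

  collector = Collector.new("hivat_results.jsonl")
  collector.record(Monitor.new.sample)
  collector.close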
  • 36. HiVAT Test Process • Configure Environment – CLI enables scripted runs via profiles * • Prepare Components – Validate component compatibility – Generate Test Cases, Schedule Test Cases, Log Test Cases – Log Configuration • Execute Test Plan – Record results • Generate Test Report * Entire process can be scripted via profiles, start to finish 34
  • 37. Language Restrictions • Applications, Organizer and Expert must all understand the same “procedural” language in order to communicate the selection, construction, and execution of a test case – Using a SQL Domain Expert to generate Test Cases for Microsoft Solitaire is not useful – Controller will validate that all Applications and the Organizer can accept the Domain Expert • Applications and Experts must also understand a second “result” language in order to capture the results 35
  • 38. Test Plan Complexities • Functional Equivalence exposed a challenge: – How to flexibly specify test details that required random choice on-the-fly • Early Attempt: Try to pre-specify all the details you want to have vary: – Plan 1: log (RNG_1) • CONSTRAINT RNG_1: Real, >0 – Plan 2: Sum (0...RNG_1) of CellValue (RNG_2) • CONSTRAINT RNG_1: Integer, >0 • CONSTRAINT RNG_2: Real – Plan 3: Sum (0...RNG_1) of CellValue (log (RNG_2)) • CONSTRAINT RNG_1: Integer, >0 • CONSTRAINT RNG_2: Real, >0, Delta = 0 36
  • 39. Test Plan Complexities (2) • This gets awkward quickly: – Plan 4: (Sum (0...RNG_1) of CellValue (log(RNG_2))) /RNG_1 • CONSTRAINT RNG_1: Integer, >0 • CONSTRAINT RNG_2: Real, >0 • Really want to just specify: – The individual elements with their individual constraints – The rules for combining the elements together • Then dynamically assemble the pieces in sensible but infinitely variable ways at test generation time • New Approach: Mediate a Conversation with the Expert during test generation time 37
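The sketch below illustrates that new approach in miniature: each element carries its own constraint, and a tiny recursive assembler produces endlessly varied but sensible formulas at generation time. The element set and combining rule are invented for illustration:

  LEAVES = [
    { template: "log(%g)",  gen: -> { rand * 100 + 1.0e-6 } },  # CONSTRAINT: Real, > 0
    { template: "sqrt(%g)", gen: -> { rand * 1000 } }           # CONSTRAINT: Real, >= 0
  ]

  def random_expr(depth = 2)
    if depth > 0 && rand < 0.5                                  # combining rule: Sum of two parts
      format("sum(%s, %s)", random_expr(depth - 1), random_expr(depth - 1))
    else
      leaf = LEAVES.sample
      format(leaf[:template], leaf[:gen].call)
    end
  end

  5.times { puts random_expr }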
  • 40. Generator Mediation • Generator is responsible for generating a test case – Interaction between the Organizer and the Expert produces a test procedure – Generator supervises and terminates as necessary • Test Case construction process overview: 1. Organizer selects a Test to build 2. Expert builds a skeleton, adds configurable parameters 3. Organizer populates parameters 4. Expert inspects parameter selection, expert may a) add additional parameters (return to step 2) b) flag the procedure as complete or failed 38
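A runnable toy version of this four-step exchange follows. The data shapes and the CREATE example are simplified stand-ins for Maadi's actual procedural language:

  class Expert                                   # builds and inspects skeleton procedures
    def tests
      [:create]
    end

    def skeleton(_test)                          # step 2: skeleton with configurable parameters
      { id: "CREATE-NEW-NEXT", params: { "CREATE-TYPE" => nil }, status: :open }
    end

    def inspect!(procedure)                      # step 4: add parameters or flag the procedure
      if procedure[:params]["CREATE-TYPE"] == "TABLE" && !procedure[:params].key?("TABLE-NAME")
        procedure[:params]["TABLE-NAME"] = nil   # new parameter (return to step 2)
      elsif procedure[:params].values.none?(&:nil?)
        procedure[:status] = :complete
      end
    end
  end

  class Organizer                                # fills parameters according to its Test Plan
    def select_test(tests)
      tests.first                                # step 1
    end

    def populate(procedure)                      # step 3
      procedure[:params]["CREATE-TYPE"] ||= "TABLE"
      procedure[:params]["TABLE-NAME"] ||= "tblStudents" if procedure[:params].key?("TABLE-NAME")
    end
  end

  expert, organizer = Expert.new, Organizer.new
  procedure = expert.skeleton(organizer.select_test(expert.tests))
  10.times do                                    # the Generator caps the number of exchanges
    organizer.populate(procedure)
    expert.inspect!(procedure)
    break unless procedure[:status] == :open
  end
  puts procedure.inspect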
  • 42. Organizer: One Example Test Plan • Organizer is responsible for constructing a set of test procedures according to the Test Plan – The Test Plan is really the resultant set of Test Procedures – Organizers could be as simple as a Fuzzer • Test Plan could be satisfied by randomly selecting a value for any configurable parameter – Organizers could be as complex as desired and as specific as required • Test Plan could only be satisfied by successfully constructing a specific database 40
  • 43. Organizer: One Example Test Plan (2) • Example of a Test Plan – CREATE a TABLE tblStudents in dbUniversity with 5 columns • s_ID as a PRIMARY KEY, AUTO INCREMENT, INT • s_FamilyName as VARCHAR(255), NOT NULL • s_Program as VARCHAR(32), NOT NULL • s_Major as INTEGER, FOREIGN KEY (tblMajors, m_ID) • s_Enrolled as DATETIME – CREATE a TABLE tblStudentClasses in dbUniversity with 3 columns –… • The following example illustrates the process of building a procedure according to the Test Plan 41
  • 44. Example Procedure Construction • Participants – Expert: Structured Query Language (SQL) Domain Expert – Organizer: Construct University Database • Beginning Test Selection • Expert provides a list of possible tests – the SQL Expert provides: • CREATE, SELECT, INSERT, UPDATE, DELETE, DROP • Organizer selects test, requests procedure – the Organizer would choose: • CREATE • After the Test has been chosen, the Organizer can request that the Expert provide a Test Procedure 42
  • 45. Example Procedure Construction (2) • Expert’s Initial response to choosing CREATE id: CREATE-NEW-NEXT step: CREATE [TYPE] * parameter: [CREATE-TYPE] = constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE) • Expert returns procedure with 1 parameter: [CREATE-TYPE] – Organizer has 2 options; choose TABLE or DATABASE • Each parameter has a constraint which the Organizer should follow – Many different types of constraints possible – Expert may or may not enforce the constraint; if not followed the procedure may be flagged as invalid • Organizer returns procedure with option selected 43
  • 46. Example Procedure Construction (3) • Organizer’s Test Plan requires that the Procedure for a CREATE TABLE be built – Organizer selects TABLE for the [CREATE-TYPE] parameter • Organizer returns the following procedure to the Expert id: CREATE-NEW-NEXT step: CREATE [TYPE] * parameter: [CREATE-TYPE] = TABLE constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE) • Use of Step and Parameters is part of the agreed upon Procedural language – In the case of the SQL example, the parameters will be substituted inline to construct the desired SQL command 44
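For instance, once all parameters are filled in, inline substitution into the step can be as simple as this sketch (the bracketed template is a simplification of the real procedural language):

  params = {
    "CREATE-TYPE"   => "TABLE",
    "DATABASE-NAME" => "dbUniversity",
    "TABLE-NAME"    => "tblStudents"
  }

  step = "CREATE [CREATE-TYPE] [DATABASE-NAME].[TABLE-NAME]"
  sql  = params.reduce(step) { |text, (name, value)| text.gsub("[#{name}]", value.to_s) }
  puts sql   # => CREATE TABLE dbUniversity.tblStudents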
  • 47. Example Procedure Construction (4) • Expert’s response to [CREATE-TYPE] selection id: CREATE-TABLE-PICK-DATABASE step: CREATE [TYPE] * parameter: [CREATE-TYPE] = TABLE constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE) • Expert returns a procedure with no new parameters – Procedure ID has changed • Expert utilizes Procedure for determining next actions • Organizer has no selections to make, but the procedure is not flagged as complete; therefore, it should return the procedure to the Expert for the next update 45
  • 48. Example Procedure Construction (5) • Expert’s response to returned procedure id: CREATE-TABLE-NEW step: CREATE [TYPE] * parameter: [CREATE-TYPE] = TABLE constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE) * parameter: [DATABASE-NAME] = constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity) • Expert returns procedure, which has 1 NEW parameter: [DATABASE-NAME] – [DATABASE-NAME] has a constraint in which the only valid values are HiVAT and dbUniversity – Procedure ID has changed (again) • Expert has not modified or removed the existing [CREATE-TYPE] parameter 46
  • 49. Example Procedure Construction (6) • Organizer selects value according to Test Plan – Organizer selects dbUniversity for the [DATABASE-NAME] parameter • Organizer returns the following procedure to the Expert id: CREATE-TABLE-NEW step: CREATE [TYPE] * parameter: [CREATE-TYPE] = TABLE constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE) * parameter: [DATABASE-NAME] = dbUniversity constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity) • Organizer has only selected a parameter which was previously empty – If a previously selected parameter is changed, it may cause the Expert to flag the Procedure as Failed (e.g. setting [CREATE-TYPE] to DATABASE) 47
  • 50. Example Procedure Construction (7) • Expert updates procedure and returns it to the Organizer id: CREATE-TABLE-PARAMS step: CREATE TABLE [TABLE-NAME] * parameter: [CREATE-TYPE] = TABLE constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE) * parameter: [DATABASE-NAME] = dbUniversity constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity) * parameter: [TABLE-NAME] = constraint: ALPHANUMERIC WORD (LENGTH MIN: 5, MAX: 25) * parameter: [TABLE-COLUMNS] = constraint: RANGED-INTEGER (MIN: 1, MAX: 25) • The procedure has 2 NEW parameters – [TABLE-NAME] has an ALPHANUMERIC constraint – [TABLE-COLUMNS] has a RANGED INTEGER constraint – Procedure ID and Step name have also changed 48
  • 51. Example Procedure Construction (8) • Organizer selects values according to Test Plan – Organizer selects • tblStudents for the [TABLE-NAME] parameter • 5 for the [TABLE-COLUMNS] parameter • Organizer returns the following procedure to the Expert id: CREATE-TABLE-PARAMS step: CREATE TABLE [TABLE-NAME] * parameter: [CREATE-TYPE] = TABLE constraint: PICK-LIST (CHOOSE ONE: TABLE DATABASE) * parameter: [DATABASE-NAME] = dbUniversity constraint: PICK-LIST (CHOOSE ONE: HiVAT, dbUniversity) * parameter: [TABLE-NAME] = tblStudents constraint: ALPHANUMERIC WORD (LENGTH MIN: 5, MAX: 25) * parameter: [TABLE-COLUMNS] = 5 constraint: RANGED-INTEGER (MIN: 1, MAX: 25) 49
  • 52. Example Procedure Construction (9) • Exchange between the Expert and Organizer continues until – Expert flags the Procedure as Complete – Expert flags the Procedure as Failed – Generator determines that the maximum number of exchanges has occurred * • Organizer and Expert may not be able to agree on an acceptable test • All procedures (including Failed procedures) are returned to the Controller – Controller is configured to discard failed procedures – If too many failed procedures are detected, the user is notified and the test is terminated * (* User-configurable values) 50
  • 53. Maadi: Key Ideas • Exploit the commonalities across each HiVAT technique • Enable a conversation with the Knowledge Experts to empower the tests to be more dynamic, yet still sensible 51
  • 54. Maadi: Reference Code • Work in Progress – http://testingeducation.org/ – See HiVAT section of site 52
  • 55. Is this the future of testing? • Maybe, but within limits • Expensive way to find simple bugs – Significant troubleshooting costs: shoot a fly with an elephant gun and discover you have to spend enormous time wading through the debris to find the fly • Ineffective way to hunt for design bugs • Emphasizes the families of tests that we know how to code rather than the families of bugs that we need to hunt • As with all techniques: these are useful under some circumstances, probably invaluable under some circumstances, but ineffective for some bugs and inefficient for others 53