High Volume Test Automation in Practice
Andy Tinkham
Principal Lead Consultant, QAT
Magenic Technologies
Acknowledgements
» This presentation draws on the knowledge shared by the attendees of
WTST 12 in Melbourne, FL (Jan 25-27, 2013, hosted by the Harris Institute
for Assured Information at the Florida Institute of Technology and Kaner,
Fiedler & Associates, LLC)
» Cem Kaner, Catherine Karena, Michael Kelly, Rebecca Fiedler, Janaka Balasooriyi,
Thomas Bedran, Jared Demott, Keith Gallagher, Doug Hoffman, Dan Hoffman, Harry
Robinson, Rob Sabourin, Andy Tinkham, Thomas Vaniotis, Tao Xie, Casey Doran, Mark
Fiorvanti, Michal Frystacky, Scott Fuller, Nawwar Kabbani, Carol Oliver, Vadym
Tereschenko
» This material draws heavily on Cem Kaner’s blog posts on kaner.com and context-driven-testing.com, referenced at the end of this slide deck
About me
» 17 years in the testing industry
» Principal Lead Consultant at Magenic Technologies
» Doctoral student at Florida Tech
» Host free virtual office hours roughly weekly
http://ohours.org/andytinkham
» http://magenic.com/Blog.aspx
» http://testerthoughts.com
» http://twitter.com/andytinkham
What is High Volume Test Automation (HiVAT)?
“A family of test techniques that enable a tester to run & evaluate arbitrarily many computer-assisted tests”
-- WTST 12 working definition
Let’s break that down…
» Family of test techniques
  • Many ways to do HiVAT
  • Different ways for different goals
» Enable a tester
  • Not replacing a human
  • Augmenting a tester’s skill set
» Run & evaluate
  • Need executable tests
  • Need some sort of oracle
» Arbitrarily many
  • Easy to change the number of tests
  • Not a 1:1 matchup with manual tests
» Computer-assisted tests
  • Continuum of manual & automated
  • Different tests at different spots
Manual & automated tests
» Every test has manual elements
  • A human designs it
  • A human wrote the code
  • A human analyzes the results
» Every test has automated elements
  • Transforming the inputs to outputs is done by the computer
Every test falls somewhere on a continuum between the two extremes
HiVAT tests tend toward the automated side
» Human still designs overall tests (possibly very high-level)
» Computer may determine inputs, paths and expected results
» Computer evaluates individual results
» Human determines stopping criteria
» Number of tests
» Time
» First bug
» Human analyzes overall results
…but are different from “traditional” automation
» Include many iterations of execution
» May run for longer periods of time
» Sometimes involve more randomness
» Can be focused on looking for unknown risks rather than identified risks
Why do HiVAT?
» Find problems that occur in only a small subset of input values
» Find difficult-to-encounter bugs like race conditions or corrupted state
» Catch intermittent failures
» Leverage idle hardware
» Address risks and provide value in ways that traditional
automation & manual testing don’t normally do
How do we do HiVAT?
» Lots of ways!
» Kaner gives this classification scheme, which covers many techniques (including the ones we’re about to talk about):
  • Focus on inputs
  • Exploit an available oracle
  • Exploit existing tests or tools
Methods that focus on inputs
» Testers usually divide inputs into equivalence classes and pick high-value
representative values
» For reasonably-sized datasets, automation doesn’t need to do this!
» Run all (or at least many of) the values through the automation
» Alternatively, use random input generation to get a stream of input values to use for testing (see the sketch below)
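A minimal sketch of that input-stream idea in Python; the SUT driver, the oracle, and the value range are all hypothetical stand-ins, not anything from the talk:

```python
import random

def run_sut(value):
    """Stand-in: push one input through the system under test."""
    return value * 2  # hypothetical behavior

def looks_broken(result) -> bool:
    """A deliberately cheap oracle: only catches obvious failures."""
    return result is None

random.seed(42)  # fixed seed so any failure can be re-run
for i in range(1_000_000):  # "arbitrarily many" -- raise or lower freely
    value = random.randint(-2**31, 2**31 - 1)
    try:
        result = run_sut(value)
    except Exception as exc:
        print(f"Test {i}: input {value} raised {exc!r}")
        continue
    if looks_broken(result):
        print(f"Test {i}: input {value} gave suspect result {result!r}")
```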
Parametric Variation
» Replace small equivalence-class representative sets with much larger value sets
» Some input sets may allow running the total set of inputs
» Doug Hoffman’s MASPAR example
» Others may still require sampling
» Valid passwords example
» Sampling can be optimized if data is well understood
» Can generate random values
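As a sketch of both cases, with invented domains and a stubbed test driver: a 16-bit field is small enough to run the total set of inputs, while a password space has to be sampled:

```python
import random
import string

def run_test(value) -> None:
    """Stand-in: drive the system under test with one input value."""

# Small domain (hypothetical): run every value instead of a few
# equivalence-class representatives.
for port in range(0, 65536):
    run_test(port)

# Large domain: valid passwords can't all be run, so sample randomly.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(rng: random.Random, min_len: int = 8, max_len: int = 64) -> str:
    length = rng.randint(min_len, max_len)
    return "".join(rng.choice(ALPHABET) for _ in range(length))

rng = random.Random(7)
for _ in range(100_000):
    run_test(random_password(rng))
```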
High-Volume Combination Testing
» Testers often use combinatorial test techniques to get a workable set of
combinations to cover interactions
» These techniques leave combinations uncovered
» If we know which uncovered combinations are more important or risky, we
can add them to the test set
» What about when we don’t know which ones are of interest?
» HiVAT tests can run many more combinations through than are usually done
» Sampling can be same as Parametric Variation
» Retail POS system example
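A rough sketch of the idea, with an invented set of POS-style parameters; a real harness would drive actual transactions and check real oracles:

```python
import random

# Hypothetical parameters for a retail POS transaction.
PARAMETERS = {
    "payment":  ["cash", "credit", "debit", "gift_card"],
    "discount": ["none", "coupon", "loyalty", "employee"],
    "tax_zone": ["MN", "FL", "CA", "exempt"],
    "receipt":  ["print", "email", "none"],
}

def run_transaction(combo: dict) -> bool:
    """Stand-in: drive one transaction with this combination and
    return whether it passed its checks."""
    return True

rng = random.Random(2013)
for i in range(50_000):  # far beyond what a pairwise set would cover
    combo = {name: rng.choice(values) for name, values in PARAMETERS.items()}
    if not run_transaction(combo):
        print(f"Combination {i} failed: {combo}")
```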
Input Fuzzing/Hostile Data Stream Testing
» Given a known-good set of inputs
» Make changes to the input and run each changed value through the system
» Watch for buffer overruns, stack corruption, crashes, and other system-level problems
» Expression Blend example
» Alan Jorgensen’s Acrobat Reader work
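A minimal sketch of the mutation loop, assuming a hypothetical known-good input and a stubbed parser; real crash and hang detection usually means running the target in a separate, monitored process:

```python
import random

def mutate(data: bytes, rng: random.Random, flips: int = 4) -> bytes:
    """Flip a few random bytes of a known-good input."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def open_document(data: bytes) -> None:
    """Stand-in: feed the mutated input to the real parser or viewer."""

rng = random.Random(0)
good = b"<Canvas Width='100' Height='100'/>"  # hypothetical known-good input
for i in range(10_000):
    try:
        open_document(mutate(good, rng))
    except Exception as exc:
        print(f"Iteration {i}: parser raised {exc!r}")
```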
Automated Security Vulnerability Checking
» Scan an application for input fields
» For each input field, try a variety of common SQL Injection and Cross-Site
Scripting attacks to detect vulnerabilities
» Mark Fiorvanti’s WTST paper (see references)
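A skeletal version of such a scanner might look like this; the payload list is tiny and the field finder, submit function, and URL are hypothetical stand-ins for real scanning logic:

```python
# Tiny, hypothetical payload list; a real scanner needs many more payloads
# and smarter response analysis (error signatures, encoded reflections, ...).
PAYLOADS = [
    "' OR '1'='1",                     # classic SQL injection probe
    "'; DROP TABLE users; --",
    "<script>alert(1)</script>",       # reflected XSS probe
    '"><img src=x onerror=alert(1)>',
]

def find_input_fields(url: str) -> list[str]:
    """Stand-in: scan the page and return the input fields found."""
    return ["username", "search"]

def submit(url: str, field: str, payload: str) -> str:
    """Stand-in: submit the payload in that field, return the response body."""
    return ""

URL = "http://localhost/app"  # hypothetical application under test
for field in find_input_fields(URL):
    for payload in PAYLOADS:
        body = submit(URL, field, payload)
        if payload in body:  # naive check: payload reflected unescaped
            print(f"Possible vulnerability: field {field!r}, payload {payload!r}")
```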
One problem with input-focused tests
» We need an oracle!
» It can be hard to verify the correctness of the results without duplicating
the functionality we’re testing
» Input-focused tests may look for more obvious errors
» Crashes
» Memory problems
» Simple calculations
Methods that exploit oracles
» Sometimes we already have an oracle available
» If so, we can take advantage of it!
Functional Equivalence
» Run lots of inputs through the SUT and another system that does the
same thing, then compare outputs
» FIT Testing 2 exam example
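A sketch in the spirit of Hoffman’s MASPAR square-root work: here Python’s math.sqrt plays the reference system, and the SUT is a hypothetical stand-in:

```python
import math
import random

def sut_sqrt(x: float) -> float:
    """Stand-in for the implementation under test."""
    return x ** 0.5

rng = random.Random(1)
for i in range(1_000_000):
    x = rng.uniform(0.0, 1e12)
    expected = math.sqrt(x)  # the reference system acts as the oracle
    actual = sut_sqrt(x)
    if not math.isclose(actual, expected, rel_tol=1e-12):
        print(f"Mismatch at x={x}: SUT says {actual}, reference says {expected}")
```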
Constraint Checks
» Look for obviously bad data
» US ZIP codes that aren’t 5 or 9 digits long
» End dates that occur before start dates
» Pictures that don’t look right
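As a small sketch (the record and its field names are invented), constraint checks can be a few cheap assertions run over every output; they flag obviously bad data without proving correctness:

```python
import re
from datetime import date

def constraint_violations(record: dict) -> list[str]:
    """Cheap checks for obviously bad data; they can't prove correctness."""
    problems = []
    if not re.fullmatch(r"\d{5}(-?\d{4})?", record["zip"]):  # 5 or 9 digits
        problems.append(f"bad ZIP: {record['zip']!r}")
    if record["end"] < record["start"]:
        problems.append(f"end {record['end']} before start {record['start']}")
    return problems

# Hypothetical record as it might come back from the system under test.
rec = {"zip": "5540", "start": date(2013, 5, 1), "end": date(2013, 4, 1)}
for problem in constraint_violations(rec):
    print(problem)
```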
State-Model Walking
» 3 things required
» State model of the application
» A way to drive the application
» A way to determine what state we’re in
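Given those three things, the walk itself can be simple. A sketch with an invented three-state model and stubbed drive/observe hooks:

```python
import random

# Hypothetical state model: state -> {action: expected next state}.
MODEL = {
    "logged_out": {"login": "home"},
    "home":       {"open_doc": "editing", "logout": "logged_out"},
    "editing":    {"save": "home", "close": "home"},
}

def do_action(action: str) -> None:
    """Stand-in: drive the real application (clicks, API calls, ...)."""

def observe_state(expected: str) -> str:
    """Stand-in: ask the application what state it is actually in.
    This stub echoes the expectation so the sketch runs as-is."""
    return expected

rng = random.Random(99)
state = "logged_out"
for step in range(100_000):
    action, expected = rng.choice(list(MODEL[state].items()))
    do_action(action)
    actual = observe_state(expected)
    if actual != expected:
        print(f"Step {step}: {action!r} should reach {expected!r}, "
              f"but the app reports {actual!r}")
        break
    state = expected
```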
Methods that exploit existing tests or tools
» Existing artifacts can be used in high-volume testing
» Tests
» Load Generators
Long-Sequence Regression Testing
» Take a set of individually passing automated regression tests
» Run them together in long chains over extended periods of time
» Watch for failures
» Actions may leave corrupted state that only later appears
» Sequence of actions may be important
» Mentsville example
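A minimal sketch of the chaining loop, with placeholder test functions standing in for a real regression suite; logging the sequence matters because the order may be what triggers the failure:

```python
import random

def test_create_order() -> None: ...
def test_edit_profile() -> None: ...
def test_run_report() -> None: ...

# Individually passing regression tests, chained with no clean-up between
# them so state corrupted by one action can surface in a later one.
SUITE = [test_create_order, test_edit_profile, test_run_report]

rng = random.Random(12)
history = []
for iteration in range(10_000):
    test = rng.choice(SUITE)
    history.append(test.__name__)
    try:
        test()
    except Exception as exc:
        print(f"Iteration {iteration}: {test.__name__} failed with {exc!r}")
        print("Sequence tail:", " -> ".join(history[-20:]))  # for replay
        break
```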
High-Volume Protocol Testing
» Send a string of commands to a protocol handler
» Web service method calls
» API calls
» Protocols with defined order
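A sketch of one such run, with an invented command set and a stubbed send(); the fixed prefix and suffix respect a defined protocol order while the middle of the session is randomized:

```python
import random

def send(command: str) -> int:
    """Stand-in: send one command to the protocol handler (web service
    method, API call, ...) and return a status code."""
    return 200

rng = random.Random(5)
session = ["connect", "auth"]                       # honor the defined prefix
session += rng.choices(["query", "update"], k=500)  # long random middle
session.append("disconnect")

for i, command in enumerate(session):
    status = send(command)
    if status >= 500:
        print(f"Command {i} ({command!r}) returned server error {status}")
        break
```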
Load-enhanced Functional Testing
» Run your existing automated functional tests AND your automated load
generation at the same time
» Add in additional diagnostic monitoring if available
» Systems behave differently under load
» System resource problems may not be
visible when resources are plentiful
» Timing issues
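One minimal way to combine the two, sketched with stand-ins for the real load generator and functional suite; the only new idea is running them at the same time:

```python
import threading
import time

def generate_load(stop: threading.Event) -> None:
    """Stand-in for an existing load generator hammering the system."""
    while not stop.is_set():
        ...  # e.g., fire background requests at the system
        time.sleep(0.01)

def run_functional_suite() -> None:
    """Stand-in for the existing automated functional tests."""

stop = threading.Event()
load = threading.Thread(target=generate_load, args=(stop,), daemon=True)
load.start()               # put the system under load...
try:
    run_functional_suite() # ...then run the functional tests against it
finally:
    stop.set()
    load.join()
```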
Starting HiVAT in your organization
» Inventory what you already have
» Existing tests you can chain together (preferably without intervening clean-up code)
» Tools you can put to additional uses
» Oracles you can use
» Places where small samples have been chosen from a larger data set
» Hardware that is sometimes sitting idle
Starting HiVAT in your organization
» Match the items in your inventory up to techniques that can take advantage of them
» Think about what sorts of risks and problems a technique could reveal in your
application
» For each risk, do you have other tests that can be reasonably expected to cover
that issue?
» How much value is there in getting information about the risk?
» How much effort is required to get the information?
» What other tasks could you do in the same time?
» Is the value of the information ≥ the cost to implement + the value of the other
tasks?
Summary
» High volume automated testing is a family of test techniques focused on
running an arbitrary number of tests
» The number of tests is often defined by an amount of time or coverage of
a set of values rather than trying for a minimal set
» Some high-volume techniques focus on covering a set of inputs
» Some take advantage of an accessible oracle
» Some reuse existing artifacts in new ways
» Determining what makes sense for you is a matter of risk and value
References
» Cem Kaner’s High Volume Test Automation Overview
http://kaner.com/?p=278
» Cem’s WTST 12 write-up
http://context-driven-testing.com/?p=69
» WTST 12 home page (with links to papers and slides, including Mark Fiorvanti’s)
http://wtst.org
» Doug Hoffman’s MASPAR example
http://www.testingeducation.org/BBST/foundations/Hoffman_Exhaust_Options.pdf
» Alan Jorgensen’s “Testing With Hostile Data Streams” paper
https://www.cs.fit.edu/media/TechnicalReports/cs-2003-03.pdf
» Pat McGee & Cem Kaner’s Long-Sequence Regression Test (Mentsville) plan
http://www.kaner.com/pdfs/MentsvillePM-CK.pdf
Contact Information
Andy Tinkham
Magenic Technologies
andyt@magenic.com
http://magenic.com
http://ohours.org/andytinkham
http://testerthoughts.com
http://twitter.com/andytinkham
