Automated Acceptance Tests & Tool choice



Automated acceptance testing has many names: acceptance-test-driven development (ATDD), story-test-driven development (STDD), agile acceptance testing and, most recently, specification by example. At the heart of all these approaches is the production of business-facing tests: system tests that run end-to-end, pick up regression issues, and improve confidence that the code works as required.

In this talk, I will show how these approaches share a common three-tier layering strategy: acceptance criteria, a test implementation layer, and an application driver layer. This matters because applying the approach requires a tool choice, and each tool tends to have its own sweet spot (and blind spot) that is best understood through these layers.

I will first dive into sample code across a few tools (Cucumber, FitNesse, Concordion) to illustrate this layering, using an example that shows how to decouple the GUI from the tests (the window driver pattern).

Finally, I will look at some typical client scenarios to examine which tools might be best suited, because tool choice is not simply a question of host platform (.NET, Java, Ruby).



  1. AUTOMATED ACCEPTANCE TESTING: Understanding layering and its implications for tool choice
  2. Business-facing tests
     The acceptance test suite as a whole both verifies that the application delivers the business value expected by the customer and guards against regressions or defects that break pre-existing functions of the application. As such, the acceptance criteria must be executable specifications. They also pick up problems not found in unit or component tests. (see Farley & Humble, 2010)
  3. Layers in Acceptance Tests (Farley & Humble, 2010)
  4. Layer 1: Acceptance Criteria

     Feature: Potential customers need to be able to find company information, services and address details when searching on the internet

     Scenario:
       Given I am using google
       When I search for 'XXXXX'
       Then the top result should be 'YYYYYY'

     Scenario:
       Given I am using google
       When I search for 'QQQQQQQQ'
       Then the results include the text 'Y2Y2Y2Y2'
  5. Many toolsets, but there are types
     • Cucumber (Ruby, Java, .NET and many other ports): uses plain text to arrange tests
     • Concordion (Java, .NET): uses HTML to arrange tests
     • FitNesse (Java, .NET): uses a wiki to arrange tests
  6. Demo
     • Sample code for a number of tools, including Cucumber, Concordion, and FitNesse with FitLibraryWeb
  7. Layer 2: Test Implementation Layer
     • Ideally creatable, readable and reportable to the business
     • Connects the layers above and below in a maintainable way
     • Usually handles data setup and teardown, proxies and stub data
     • Written in a host language (Ruby, Java, C#), which need not be the same as the system under test
     • Often implemented as a domain-specific language (DSL)
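     The mechanism this layer uses in Cucumber can be sketched in plain Ruby. This is a hypothetical miniature, not Cucumber's actual implementation: regular expressions bind lines of the plain-text acceptance criteria (layer 1) to blocks of code. All names here are illustrative; real step definitions live in `step_definitions` files and use the cucumber gem.

```ruby
# Miniature sketch of a Cucumber-style test implementation layer:
# regular expressions map plain-text steps onto blocks of code.
# Illustrative only; real Cucumber provides Given/When/Then itself.

STEP_DEFINITIONS = []

def Given(pattern, &block)
  STEP_DEFINITIONS << [pattern, block]
end
alias :When :Given
alias :Then :Given

# The step definitions: glue between acceptance criteria and the app.
Given(/^I am using (\w+)$/) { |engine| @engine = engine }
When(/^I search for '(.+)'$/) { |term| @results = ["#{term} result"] }
Then(/^the top result should be '(.+)'$/) do |expected|
  raise "expected #{expected}, got #{@results.first}" unless @results.first == expected
end

# A tiny runner: find the matching pattern and call its block
# with the captured groups as arguments.
def run_step(text)
  pattern, block = STEP_DEFINITIONS.find { |p, _| p.match?(text) }
  raise "Undefined step: #{text}" unless pattern
  instance_exec(*pattern.match(text).captures, &block)
end

run_step("I am using google")
run_step("I search for 'acme'")
run_step("the top result should be 'acme result'")
puts "3 steps passed"
```

     The regexes are the whole trick: anyone who can write a simple regular expression can wire a new plain-text step to code, which is why this layer is usually maintained by developers or technical testers.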
  8. Learnings
     • Cucumber: step definitions are written by coders (devs or testers) with the help of simple regular expressions
     • Concordion: some tag code is written in HTML and must match that in this code layer
     • FitNesse: there is no test implementation layer when using FitLibraryWeb
  9. Also
     • Cucumber: ability/need to create application abstractions
     • Concordion: ability/need to create application abstractions
     • FitNesse: this approach is less domain- and more technology-focussed (although without FitLibraryWeb you can create application abstractions)
  10. Layer 3: Application Driver Layer
     • Works against our system under test (in this example, the browser against Google)
     • Abstractions work best as the tests grow, e.g. the Window Driver pattern
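     The Window Driver pattern can be sketched as below. Both classes are hypothetical: `FakeBrowser` stands in for a real driver such as Selenium WebDriver, and `SearchPage` is a page driver. The point is that only the driver knows about URLs and field names, so when the GUI changes, only the driver changes and the tests above it survive.

```ruby
# Sketch of the Window Driver pattern. Tests talk to a page driver
# (SearchPage), never to raw browser calls. FakeBrowser is a stand-in
# for a real browser driver such as Selenium WebDriver.

class FakeBrowser
  attr_reader :current_url

  def initialize
    @fields = {}
  end

  def goto(url)
    @current_url = url
  end

  def fill(name, value)
    @fields[name] = value
  end

  def submit
    ["#{@fields['q']} result"] # canned results, enough for the sketch
  end
end

# The page driver: the one place that knows the page's URL and
# field names. GUI changes are absorbed here, not in the tests.
class SearchPage
  SEARCH_FIELD = "q"

  def initialize(browser)
    @browser = browser
  end

  def open
    @browser.goto("https://www.google.com")
  end

  def search_for(term)
    @browser.fill(SEARCH_FIELD, term)
    @browser.submit
  end
end

page = SearchPage.new(FakeBrowser.new)
page.open
puts page.search_for("acme").first
```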
  11. 3rd-Party Adapters
     • Cucumber: there are loads of libraries to run against, particularly in Ruby
     • Concordion: Selenium WebDriver is effectively built in, but you start working harder for the other functionality found in FitLibraryWeb
     • FitNesse: FitLibraryWeb is designed to make it easy to run against browsers (headless and Selenium WebDriver), web services (SOAP, XML), PDFs, shell scripts and databases, and to create stub services and proxies
  12. Maintainability: FitLibraryWeb
     • An extremely well-written and understandable abstraction of a technical domain (web services, PDF, proxies, shell)
     • We've been using the mock web services functionality in our test implementation layer to provide stubbed data in Concordion
     • Demonstrates a DSL standardising tests (and abstracts away both the test implementation and application driver layers for reuse)
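     The idea of stubbing a web service from the test implementation layer can be sketched with nothing but the Ruby standard library. This is an illustrative miniature in the spirit of mock web services, not FitLibraryWeb's actual API: the suite starts a local HTTP stub that returns canned data, so acceptance tests never depend on the real backend being up. The endpoint path and XML payload are invented for the example.

```ruby
require "socket"
require "net/http"

# Minimal HTTP stub: serve canned data locally instead of calling a
# real backend. Illustrative sketch only; FitLibraryWeb's API differs.

CANNED = "<customer><id>42</id><name>Acme</name></customer>"

server = TCPServer.new("127.0.0.1", 0) # port 0 = pick any free port
port = server.addr[1]

stub = Thread.new do
  client = server.accept
  client.gets                                # request line, e.g. "GET /customers/42 HTTP/1.1"
  loop { break if client.gets.strip.empty? } # drain request headers
  client.write "HTTP/1.1 200 OK\r\n" \
               "Content-Type: application/xml\r\n" \
               "Content-Length: #{CANNED.bytesize}\r\n" \
               "Connection: close\r\n\r\n#{CANNED}"
  client.close
end

# The test implementation layer points the system under test (here,
# just Net::HTTP) at the stub instead of the real service.
body = Net::HTTP.get(URI("http://127.0.0.1:#{port}/customers/42"))
stub.join
puts body
```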
  13. Issues to balance
     Collaboration/Communication
     • Language used: technical vs natural
     • Role focus: developer → business
     • Facing: technology → business
     Technical
     • Data setup/teardown, test doubles
     • Test strategy: unit → integration → system
     • Test suite size and refactoring support
  14. Exercise
     In groups of 3-4, read the scenarios of companies. Think through the goals and which layers are most affected, and reason out your tool choice for how best to verify that their software works!
  15. Exercise: which tool and why?
     • You are in a company that is moving toward selling all your services via internet sales, including confirmation via email. Your back office systems are a mixture of Java and mainframe, but they are all mediated via SOAP web services. The developers say they are unit testing. In practice, most testing occurs after development, ensuring applications work in the highly socialised environment.
     • You are part of a team that supports a real-time system. The regression suite currently takes two weeks to run through. The team is frustrated and can see that the test suite is only growing in size and has a lot of duplication. Furthermore, it is understood by only a few, mostly because of its sheer size. They really want tests they can run at any time to look at the state of the system.
     • Your team develops a component library. It is used by other customers as part of their product suite. There have been complaints about its quirky usage.
     • You are working as part of a small, cross-functional team. Your business person (customer, product owner) is actively involved. There are complex rules, the subtleties of which often unfold through development and test.
  16. Further areas
     Test Automation Pyramid
     • Testing is layered. Acceptance testing using these tools is generally a subset of system testing. We haven't looked at how acceptance criteria are dealt with in BDD developer-style testing (e.g. JBehave, StoryQ, SpecFlow, RSpec, Spock).
     Agile Quadrants Model
     • Important for understanding the distinction between business-facing and technology-facing tests. Acceptance tests can be either, but try to avoid conflating the two. Your toolset choice should be aligned.
     Deployment Pipeline
     • Acceptance tests must be automated in continuous delivery. This will require work on build scripts and on data and configuration management.
     Specification by Example
     • Working as a team through examples is enabled by these sorts of toolsets.
  17. References
     • Fit for Developing Software: Framework for Integrated Tests, Mugridge & Cunningham
     • Specification by Example, Gojko Adzic
     • Chapter 8, "Automated Acceptance Testing", in Continuous Delivery, Farley & Humble
  18. Good luck! Todd Brackley. Twitter: toddbNZ. Code: