Context Driven Automation Gtac 2008

Slides from my presentation at the Google Test Automation Conference 2008

    Context Driven Automation GTAC 2008: Presentation Transcript

    • Context-Driven Test Automation: How to Build the System You Really Need. Pete Schneider, [email_address], F5 Networks
    • License
      • The content of these slides is made available under the Creative Commons Attribution-Share Alike 3.0 United States license.
    • Where this talk came from…
      • F5 Networks has lots of product teams who have their own automation tools.
      • We were taking inventory of the tools and trying to see where we could share and eliminate duplication.
      • We looked at 11 different test automation tools.
    • We saw 6 common tasks…
      • All the tools had ways of addressing common tasks:
        • Test distribution and run control
        • Test case set up
        • Test case execution
        • Test case evaluation
        • Test case tear down
        • Results reporting
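The six common tasks above can be sketched as a minimal harness skeleton. This is a hypothetical Python illustration, not any of F5's actual tools; all class and function names are invented:

```python
class TestCase:
    """Hypothetical test case exposing the four per-case phases."""
    def __init__(self, name, fn, expected):
        self.name, self.fn, self.expected = name, fn, expected

    def setup(self):
        pass  # acquire resources, configure the system under test

    def execute(self):
        return self.fn()  # test case execution

    def evaluate(self, outcome):
        return "PASS" if outcome == self.expected else "FAIL"

    def teardown(self):
        pass  # release resources, restore system state


def select_cases(cases):
    """Test distribution / run control (trivial here: run everything)."""
    return cases


def report(results):
    """Results reporting (just print; the real tools emailed results)."""
    for name, verdict in sorted(results.items()):
        print(f"{name}: {verdict}")


def run_suite(cases):
    """Run control wrapping set up, execution, evaluation, tear down."""
    results = {}
    for case in select_cases(cases):
        case.setup()
        try:
            outcome = case.execute()
            results[case.name] = case.evaluate(outcome)
        finally:
            case.teardown()  # tear down even if execution fails
    report(results)
    return results
```

Each of the eleven tools the talk surveys makes different choices about which of these phases the framework owns and which are left to the test writer.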
    • We discussed the tasks…
      • Test distribution and run-time control
        • Some tools had sophisticated controls, some were rudimentary.
        • Some were automatic, some required manual effort.
      • Set up / tear down
        • Some tools took care of much of the work in setting up and tearing down a test; others left it all to the test case.
      • Test execution and verification
        • Except for data-driven tests, all tools left this entirely to the test case.
        • We saw huge variations in complexity of verification.
      • Reporting
        • All tools sent results via email.
        • Some had web GUIs, some didn’t.
    • We argued…
      • Everyone agreed that the six tasks existed and were important.
      • We did not agree on the relative importance of each task.
      • We did not agree on what was needed to meet requirements for each task.
    • The light bulb came on…
      • We realized that we were approaching test automation from different directions, with different intentions.
      • In short, we had different contexts.
    • We looked at the tools again…
      • We tried to figure out how to group the tools
      • The context of the tool was the key
      • Who writes the tests?
      • Who looks at the results?
      • What decisions do the results influence?
    • We came up with 4 contexts in our setting…
      • Individual Developer: tests written by developers; results looked at by developers; decisions influenced: code check-in
      • Development Team: tests written by developers and/or testers; results looked at by testers, developers, PM’s; decisions influenced: branch merges, releases
      • Project: tests written by testers; results looked at by testers, PM’s; decisions influenced: project milestones, releases
      • Product Line: tests written by testers; results looked at by testers, PM’s, senior management; decisions influenced: updates and maintenance releases
    • Will the refactoring I just finished break anything?
    • Individual Developer context…
      • A common example is Unit Tests.
      • These tests need to be very quick, duration measured in seconds.
      • They test very small pieces of functionality—e.g. a single procedure in an API.
      • Writing them requires deep knowledge of product code.
      • They should be considered part of the product code deliverable, i.e. the code isn’t finished if there are no unit tests.
      • See xUnit Test Patterns, by Gerard Meszaros.
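A unit test in this context is small and fast, exercising a single procedure. A minimal sketch using Python's `unittest`; the function `parse_version` is invented purely for illustration:

```python
import unittest


def parse_version(s):
    """Hypothetical product function: '9.4.2' -> (9, 4, 2)."""
    return tuple(int(part) for part in s.split("."))


class TestParseVersion(unittest.TestCase):
    """Fast, narrowly scoped checks of one procedure: runs in milliseconds."""

    def test_three_part_version(self):
        self.assertEqual(parse_version("9.4.2"), (9, 4, 2))

    def test_rejects_non_numeric(self):
        with self.assertRaises(ValueError):
            parse_version("9.x")


if __name__ == "__main__":
    unittest.main(exit=False)
```

Tests like these run at every code check-in, answering the developer's question above ("will my refactoring break anything?") in seconds.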
    • If we merge our feature branch to main, will we break anything?
    • Development Team context…
      • These tests focus on a specific area of functionality or a subsystem of the product.
      • They still need to be fast, but speed is not as critical.
      • The tests may use an interface that is not directly available to product users.
      • Writing these kinds of tests requires significant expertise in the specific protocol/feature.
      • Once fully implemented, the tests can be migrated to project/product-line testing.
    • Are the builds becoming more stable or less stable?
    • Project context…
      • Focus on user functionality of the system
      • Speed is desirable, but not essential
      • Requires a more complex infrastructure
        • Hardware dependencies
        • Variations in expected results from release to release
        • Other external dependencies
      • Reporting is critical
      • Can be migrated to Product Line easily
    • Will this patch work for customers running Basic, Pro, and Premiere editions with Service Packs 1, 2 or 3?
    • Product Line context…
      • This automation is intended to run on releases that are out in the field.
      • The automation may take a very long time to run.
      • Goals:
        • Ensure that patches fix the problem they claim to fix.
        • Ensure that they don’t break something else.
      • Reliability is critical.
      • These tests are challenging to maintain.
      • Run-time control is a big deal.
    • Case Study: ITE …
      • Summary:
        • The ITE is STAF/STAX-based.
        • It was developed by testers for use by other testers, with developers as a secondary target.
      • ITE Design Criteria:
        • Allow hands-off execution of tests.
        • Allow the test harness to automatically determine which tests to run.
        • Reduce the set up/tear down burden on test writers.
    • ITE …
      • Distribution / Runtime control
        • Tests and framework are distributed as a Linux chroot that includes all dependencies.
        • Both the tests and the framework are stored in source control.
        • Tests are tagged with metadata used to control runs.
      • Test Setup
        • The ITE offers services to configure the DUT (device under test) and various test services.
      • Execution
        • This is largely left to the test writer. The ITE is beginning to support data-driven tests.
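The metadata-driven run control described above can be sketched as a simple tag filter. This is a hypothetical illustration; the tag field names and test names are invented, and the real ITE is STAF/STAX-based rather than a standalone Python script:

```python
# Hypothetical tagged test inventory (names and fields are invented).
TESTS = [
    {"name": "http_basic",    "tags": {"area": "http", "duration": "short"}},
    {"name": "ssl_renego",    "tags": {"area": "ssl",  "duration": "long"}},
    {"name": "http_pipeline", "tags": {"area": "http", "duration": "long"}},
]


def select(tests, **criteria):
    """Return the names of tests whose tags match every given criterion,
    letting the harness decide which tests to run without manual lists."""
    return [t["name"] for t in tests
            if all(t["tags"].get(k) == v for k, v in criteria.items())]
```

For example, `select(TESTS, area="http")` picks only the HTTP tests, which is the kind of hands-off selection the ITE design criteria call for.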
    • ITE …
      • Verification
        • This is largely left to the test writer.
        • The ITE performs “health checks” on the DUT.
      • Teardown
        • The ITE performs more extensive cleanup after “subjobs” complete.
      • Results Reporting
        • The ITE sends email after runs complete; it also stores results in a database.
        • Web pages are available for viewing results.
    • Case Study: xBVT …
      • Summary:
        • The xBVT is a Perl-based system.
        • It was developed by a developer for use by other developers, with testers as a secondary target.
      • xBVT Design Criteria:
        • Tests should be able to run inside or outside the tool.
        • Impose little/no overhead on test writers and runners.
    • xBVT …
      • Distribution / Runtime control
        • Tests and framework are stored in source control.
        • Tests are stored with the product code.
        • Runtime execution is determined by “test manifests.” Manifests can be nested to arbitrary depths.
      • Setup
        • The xBVT provides the test with login credentials and an IP address. The test is responsible for configuring the system.
      • Test execution
        • Execution is left to the test writer.
    • xBVT …
      • Results verification
        • Verification is left to the test writer.
      • Teardown
        • Teardown is left to the test writer. The expectation is that each test will clean up completely, leaving the system as it was prior to the test.
      • Reporting
        • A text file is generated containing pass/fail results for each test.
        • The text files are emailed out when the run completes. They are also stored on a web page.
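The arbitrarily nested "test manifests" mentioned above can be sketched as a recursive expansion. This is a hypothetical illustration in Python (xBVT itself is Perl), with invented manifest and test names; here a `@`-prefixed entry marks a reference to another manifest:

```python
# Hypothetical manifests: each entry is either a test name or a
# reference to another manifest (marked with a leading '@').
MANIFESTS = {
    "smoke":   ["boot_test", "login_test"],
    "network": ["dhcp_test", "dns_test"],
    "full":    ["@smoke", "@network", "upgrade_test"],
}


def expand(name, manifests):
    """Recursively flatten a manifest into an ordered list of test names."""
    tests = []
    for entry in manifests[name]:
        if entry.startswith("@"):
            tests.extend(expand(entry[1:], manifests))  # nested manifest
        else:
            tests.append(entry)
    return tests
```

Expanding `"full"` yields the smoke tests, then the network tests, then `upgrade_test`, so a single top-level manifest can drive an entire run while smaller manifests stay usable on their own.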
    • What I Learned …
      • If you have trouble agreeing, take a step back.
      • There are many different approaches that will work; the one that works best for you depends on your test writers, framework writers, and automation customers.
      • Rather than build “one framework to test them all”, consider building sharable components.
    • How you can use this …
      • Ask yourself:
        • Who is going to write and maintain the framework?
        • Who will build and maintain the tests?
        • How are the tests going to be used?
        • How long will the tests live?
    • In conclusion…
      • Define your context:
        • Who is going to write the tests?
        • Who is going to look at the results?
        • What decisions will the test results influence?
      • Determine how your automation will implement the 6 tasks:
        • Test distribution and run control
        • Test set up / tear down
        • Test execution / Results evaluation
        • Reporting
    • Acknowledgements …
      • Thanks to the members of F5’s cross-functional tools team
          • Brian Sullivan, Chris Rouillard, Ephraim Dan, Patrick Walters, Sebastian Kamyshenko, Bob Conard, Terry Swartz
      • Thanks to the members of F5’s automated test team
          • Henry Su, Rex Stith, Randy Holte, Richard Jones, James Saryerwinnie
      • Special Thanks to
          • John Hall, Brian DeGeeter, Ryan Allen, and Brian Branagan