Every Rose Has Its Thorn
Taming the automated tests beast

Wojciech Seliga
About me
• 29 years coding
• Agile Practices (inc. TDD) since 2003
• Certified ScrumMaster, Agile Coach,
  Trainer, Speaker

• 4+ years with Atlassian (JIRA
  Development Team Lead)

• Co-founder of Spartez
The Story
Codebase
Almost 10 years old
From 2 to about 40
    engineers
Obsessed with Quality
Obsessed with
Automated Tests
... From the Beginning
1.5M lines of code

Cheating
Mixture of technologies

Java, XStream, OSGi, REST, Jersey, Velocity, Pico, Active Objects,
OfBiz EntityEngine, Guava, LESS, OpenSocial, OS Workflow, Quartz,
Jackson, Lucene, jQuery, Spring, XML, Underscore, Soy, Seraph,
OAuth, Maven2, JSP, Webwork, JDBC, Javamail, Backbone.js, SpringDM
Lots of Dependencies

maven dependencies

mvn dependency:list -DincludeScope=compile -o | grep :jar | cut -c 11- | \
    sed s/:provided// | sed s/:compile// | sort -u | wc -l

554 maven dependencies

217 Atlassian maven dependencies

20 Atlassian-patched maven dependencies
65 modules in one IntelliJ project
13000 unit tests
Almost 1000
Selenium Tests
4000 Functional and
 Integration Tests
Atlassian JIRA
Our CI environment
Test frameworks
• JUnit 3 and 4
• JMock, Easymock, Mockito
• Powermock, Hamcrest
• QUnit, HtmlUnit
• JWebUnit, Selenium, WebDriver
• Custom runners, extensions, matchers
Bamboo Setup


• Dedicated server with 70+ remote
  agents (including Amazon Elastic)
• Build engineers
• Bamboo devs
For each main branch:

Run first

Run in parallel, in batches

There is much more
Type of Tests
• Unit
• Functional
• Selenium / WebDriver
• Integration
• Platform
• Performance
Platforms (run nightly or before release)

• Dimension - DB: MySQL, PostgreSQL, MS SQL, Oracle
• Dimension - OS: Linux, Windows
• Dimension - Java ver.: 1.5, 1.6, 1.7
• Dimension - CPU arch.: 32-bit, 64-bit
• Dimension - Deployment Mode: Standalone, Tomcat, Websphere, Weblogic
• Dimension - Browsers: IE 8+, FF, Chrome
Triggering Builds


• On Commit (hooks, polling)
• Dependent Builds
• Nightly Builds
• Manual Builds
But...
Slow unit tests
Very slow functional tests
Builds Wait in the Queue
Builds Often Fail
Too Often...
It takes time to fix it...
Sometimes very long
You commit at 3 PM


You get a “Unit Test Green” email at 4 PM


You happily go home


You get a flood of “Red Test X” emails at 4–9 PM


Your colleagues on the other side
of the globe... and you
“A slow CI loop and non-
deterministic tests are a
strong inhibitor of change
instead of a catalyst”


                    by W. Seliga
“We probably spend more
time dealing with the JIRA
  test codebase than the
  production codebase”
Striving for Coverage

[Chart: Test Coverage (0–100%) vs. Effort Invested]
Strange? Relationship

[Chart: Value (0–100%) vs. Investments in automated tests]
Outcomes

• Development slows down
• Devs are afraid of change
• Software difficult to release
• Significant amount of time spent on
  analysing test failures

• Morale goes down
Feedback

Speed     Quality
Feedback Loop Speed

• Tiniest change triggers test avalanche
• Lack of responsibility syndrome
• Devs do not run tests locally (speed)
• Before you get the results you are at
  home
Quality

• Non-deterministic tests (races,
  timeouts)
• Catching up with UI changes
• 1 red test hides new failures
• Ignoring always-red tests is
  dangerous ...
Broken window theory
Long time to fix
Decisions which do not
        scale
• All unit tests in one maven module
• All functional tests in one maven module
• All Selenium and web-driver tests in one
  module

• Every commit triggers rebuild and re-test of
  everything

• Monolithic test framework / utils
• Opaque fixtures
Strategies
Test Quality
Problem:
Catching up with UI
     changes

     Solution:
Page Objects Pattern
Page Objects Pattern
• Page Objects model UI elements (pages,
  components, dialogs, areas) your tests
  interact with

• Page Objects shield tests from changing
  internal structure of the page

• Page Objects generally do not make
  assertions

• Designed for chaining
Page Objects Example
public class AddUserPage extends AbstractJiraPage
{
    private static final String URI =
            "/secure/admin/user/AddUser!default.jspa";

    @ElementBy(name = "username")
    private PageElement username;

    @ElementBy(name = "password")
    private PageElement password;

    @ElementBy(name = "confirm")
    private PageElement passwordConfirmation;

    @ElementBy(name = "fullname")
    private PageElement fullName;

    @ElementBy(name = "email")
    private PageElement email;

    @ElementBy(name = "sendemail")
    private PageElement sendEmail;

    @ElementBy(id = "user-create-submit")
    private PageElement submit;

    @ElementBy(id = "user-create-cancel")
    private PageElement cancelButton;

    @Override
    public String getUrl()
    {
        return URI;
    }

    @Override
    public TimedCondition isAt()
    {
        return and(username.timed().isPresent(),
                password.timed().isPresent(), fullName.timed().isPresent());
    }

    public AddUserPage addUser(final String username, final String password,
            final String fullName, final String email, final boolean receiveEmail)
    {
        this.username.type(username);
        this.password.type(password);
        this.passwordConfirmation.type(password);
        this.fullName.type(fullName);
        this.email.type(email);
        if (receiveEmail) {
            this.sendEmail.select();
        }
        return this;
    }

    public ViewUserPage createUser()
    {
        return createUser(ViewUserPage.class);
    }

    public <T extends Page> T createUser(Class<T> nextPage, Object... args)
    {
        submit.click();
        return pageBinder.bind(nextPage, args);
    }

    // ...
}
Using Page Objects
@Test
public void testServerError()
{
    jira.gotoLoginPage().loginAsSysAdmin(AddUserPage.class)
            .addUser("username", "mypassword", "My Name",
                     "sample@email.com", false)
            .createUser();
  // assertions here
}
Using Page Objects
@Test
public void testImportSampleProject() {
    final PivotalImporterSetupPage setupPage = getSetupPage();
    Assert.assertEquals("1. Connect", setupPage.getActiveTabText());

    final PivotalProjectsMappingsPage projectMappingPage = setupPage.next();
    Assert.assertEquals("2. Project Mapping", setupPage.getActiveTabText());
    Assert.assertTrue("Expecting all project to be selected by default",
        projectMappingPage.areAllProjectsSelected());
    projectMappingPage.setImportAllProjects(false);
    projectMappingPage.setProjectImported(sampleProject, true);
    projectMappingPage.createProject(sampleProject, sampleProject, "SAMPLE");

    final ImporterFinishedPage importerLogsPage =
        projectMappingPage.beginImport().waitUntilFinished();
    Assert.assertTrue(importerLogsPage.isSuccess());
    Assert.assertEquals(0, importerLogsPage.getGlobalErrors().size());
    Assert.assertEquals("1", importerLogsPage.getProjectsImported());

    // more assertions here
}
More on Page Objects

• Design for reusability
• Design for sharing - libraries of Page
  Objects

• Good support by WebDriver/Selenium 2
• Atlassian Selenium 2.0
Problem:
Opaque Test Fixtures


     Solution:
 REST-based Set-up
REST-based Setup
@Before
public void setUpTest() {
    restore("some-big-xml-file-with-everything-needed-inside.xml");
}




                              VS
@Before
public void setUpTest() {
    restClient.restoreEmptyInstance();
    restClient.createProject(/* project params */);
    restClient.createUser(/* user params */);
    restClient.createUser(/* user params */);
    restClient.createSomethingElse(/* ... */);
}
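For illustration, a minimal sketch of what a REST-driven fixture helper behind such a restClient could look like. The class name, endpoint paths and JSON payloads below are assumptions made for this example, not the actual JIRA test framework API.

// Illustrative sketch only: a hypothetical fixture client that creates test data
// over HTTP instead of restoring a large XML dump. Endpoint paths and payloads
// are assumptions for the example, not the real JIRA test framework API.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TestFixtureRestClient
{
    private final String baseUrl; // e.g. "http://localhost:2990/jira"

    public TestFixtureRestClient(String baseUrl)
    {
        this.baseUrl = baseUrl;
    }

    public void createProject(String key, String name, String lead) throws Exception
    {
        // JSON assembled by hand to keep the sketch dependency-free
        post("/rest/api/2/project",
             "{\"key\":\"" + key + "\",\"name\":\"" + name + "\",\"lead\":\"" + lead + "\"}");
    }

    public void createUser(String username, String email) throws Exception
    {
        post("/rest/api/2/user",
             "{\"name\":\"" + username + "\",\"emailAddress\":\"" + email + "\"}");
    }

    private void post(String path, String json) throws Exception
    {
        HttpURLConnection connection =
                (HttpURLConnection) new URL(baseUrl + path).openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = connection.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        if (connection.getResponseCode() >= 400) {
            throw new IllegalStateException("Fixture setup failed: " + connection.getResponseCode());
        }
    }
}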
Problem:
 Flakey Tests

   Solution:
  Quarantine

Fix    Eradicate
Quarantine




• @Ignore
• @Category
• Quarantine on CI server
• Recover or Die
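As a sketch of the @Category route: quarantined tests can be tagged with a marker interface and collected into a CI-only suite. The FlakyTests marker and the class names below are hypothetical; only the JUnit 4 annotations are real.

// FlakyTests.java - hypothetical marker interface for quarantined tests
public interface FlakyTests {}

// TestIssueNavigatorReordering.java - a flaky test moved to quarantine by annotation
import org.junit.Test;
import org.junit.experimental.categories.Category;

@Category(FlakyTests.class)
public class TestIssueNavigatorReordering
{
    @Test
    public void reorderingSurvivesPageReload()
    {
        // ... the flaky scenario stays compiling and visible until it is fixed or eradicated
    }
}

// QuarantineSuite.java - a CI-only suite that runs just the quarantined category
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

@RunWith(Categories.class)
@IncludeCategory(FlakyTests.class)
@SuiteClasses({ TestIssueNavigatorReordering.class })
public class QuarantineSuite {}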
Problem:
 Fixing Flakey Tests

       Solution:
  Timed Conditions
Test-friendly Markup
Mock Unreliable Deps
Timed Conditions
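A minimal sketch of the timed-condition idea, using plain Selenium WebDriverWait as a generic stand-in (JIRA's own tests use the TimedCondition API from Atlassian's page objects, as in the AddUserPage.isAt() example above): poll with a bounded timeout instead of sleeping.

// Sketch: wait for a condition with a bounded timeout instead of Thread.sleep().
// Uses plain Selenium WebDriverWait purely to illustrate the timed-condition idea.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public final class Conditions
{
    private Conditions() {}

    public static void waitUntilVisible(WebDriver driver, By locator)
    {
        // Polls until the element is visible and fails fast with a TimeoutException
        // after 10 seconds, instead of hiding races behind arbitrary sleeps.
        new WebDriverWait(driver, 10)
                .until(ExpectedConditions.visibilityOfElementLocated(locator));
    }
}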
Test-friendly Markup
• Do not save on IDs
• Do not save on CSS classes
• XPath is fragile
• XPath is expensive
• XPath is not readable
• i18N
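To illustrate the markup point: the same submit button located three ways, from most fragile to most test-friendly. The locator values are invented for the example.

// Sketch: the same element located three ways. IDs and stable CSS classes survive
// markup changes far better than structural XPath.
import org.openqa.selenium.By;

public final class Locators
{
    private Locators() {}

    // Fragile, slow and unreadable: breaks the moment the layout or nesting changes.
    public static final By SUBMIT_BY_XPATH =
            By.xpath("//div[@class='aui-dialog']//table/tbody/tr[2]/td[3]/input");

    // Better: a dedicated CSS class put in the markup for tests (and styling).
    public static final By SUBMIT_BY_CSS = By.cssSelector(".user-create-form .submit-button");

    // Best: a stable, unique id added to the markup specifically so tests can find it.
    public static final By SUBMIT_BY_ID = By.id("user-create-submit");
}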
Mock Unreliable
 Dependencies
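A minimal Mockito sketch of mocking out an unreliable dependency; the MailService collaborator and NotificationSender class are invented for the example.

// Sketch: replace an unreliable external dependency (here a hypothetical MailService)
// with a Mockito mock so the test is deterministic and needs no real SMTP server.
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class NotificationSenderTest
{
    interface MailService // hypothetical collaborator, normally backed by Javamail
    {
        boolean send(String to, String subject, String body);
    }

    static class NotificationSender // hypothetical class under test
    {
        private final MailService mail;

        NotificationSender(MailService mail) { this.mail = mail; }

        boolean notifyAssignee(String assignee, String issueKey)
        {
            return mail.send(assignee, "Issue assigned", "You were assigned " + issueKey);
        }
    }

    @Test
    public void notifiesAssigneeWithoutTouchingRealMailServer()
    {
        MailService mail = mock(MailService.class);
        when(mail.send("bob", "Issue assigned", "You were assigned JRA-123")).thenReturn(true);

        new NotificationSender(mail).notifyAssignee("bob", "JRA-123");

        verify(mail).send("bob", "Issue assigned", "You were assigned JRA-123");
    }
}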
Speed

Aiming at a 10-second build
Splitting Codebase
 The easiest and most effective
         improvement
Splitting Codebase

• Tests closer to tested code
• Less to test
• Testing less frequently
• Increased team responsibility
• Restructuring CI hierarchy - more
  complicated picture
Less to Test

[Diagram: where each commit lands in the split codebase]

Most commits happen here
Speed vs Control
  Workspace Dilemma


• Incubation
• Maturity
• Custom workspaces
Test Execution Time
Execution Time:
     Test Level
Speed                     Confidence

           Unit Tests

         REST API Tests

    JWebUnit/HTMLUnit Tests

    Selenium/WebDriver Tests
Execution time - Cont.

• Batching
• Several tests per single set-up
  (violation of test isolation)

• REST-based assertions
• Remove / merge overlapping tests
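A hedged sketch of batching: several test methods share one expensive set-up via JUnit's @BeforeClass, deliberately trading strict isolation for speed. It reuses the hypothetical TestFixtureRestClient sketched earlier; all names are illustrative.

// Sketch: one expensive fixture shared across a whole test class instead of being
// rebuilt per test method. Class and fixture names are invented for illustration.
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class TestProjectAdministrationBatch
{
    private static TestFixtureRestClient restClient;

    @BeforeClass
    public static void setUpSharedFixture() throws Exception
    {
        // Done once for the whole class, not once per test method.
        restClient = new TestFixtureRestClient("http://localhost:2990/jira");
        restClient.createProject("BATCH", "Batch Project", "admin");
        restClient.createUser("batch-user", "batch@example.com");
    }

    @AfterClass
    public static void tearDownSharedFixture()
    {
        // Clean-up of the shared data would go here (omitted in this sketch).
    }

    @Test
    public void projectIsVisibleToAdmin() { /* assertions against the shared fixture */ }

    @Test
    public void userCanBrowseProject() { /* assertions against the shared fixture */ }

    @Test
    public void projectKeyIsUnique() { /* assertions against the shared fixture */ }
}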
Execution time - Cont.
• IDs over CSS/JQuery Selectors over XPath
• JUnit tests running in the container
• In-process testing
• In-memory DBs
• Mocking web servers
• Reducing framework initialization time
• Test Optimization (Clover)
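"Mocking web servers" can be as simple as an in-process stub; below is a sketch using the JDK's built-in com.sun.net.httpserver (one option among many; the endpoint and response are invented).

// Sketch: an in-process stub HTTP server that stands in for a slow or unreliable
// remote service during tests.
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

import com.sun.net.httpserver.HttpServer;

public final class StubRemoteService
{
    private final HttpServer server;

    private StubRemoteService(HttpServer server)
    {
        this.server = server;
    }

    public static StubRemoteService start() throws Exception
    {
        // Port 0 lets the OS pick a free port, so parallel builds do not clash.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/remote/api/status", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        return new StubRemoteService(server);
    }

    public String baseUrl()
    {
        return "http://localhost:" + server.getAddress().getPort();
    }

    public void stop()
    {
        server.stop(0);
    }
}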
Waiting time

• More build agents
• Shorter and smaller tests
• No sleep()
• Avoiding long fixture setup (hot
  container, fragmented setup)
Preparation Time

• SCM performance
• Container set-up
• Compilation time (GWT...)
• Maven...
• Artifacts passing
So how about the
     goals?
Is a 10-second build realistic?
Realistic Goals (for us*)

 Time      Type

 2 min     for unit tests for 95% of the commits
 5 min     for base smoke functional tests for 95% of the commits
 15 min    for ALL tests for 95% of the commits on selected platform
 30 min    for ALL tests for ALL commits

                     *My current personal dreams
Realistic Goals (for us*) p.2

 Metric    Type

 >98%      green unit tests
 >95%      green functional tests
 <20min    average time to fix a unit test
 <2h       average time to fix a functional test

                         *My current personal dreams
Our Possible Future
• Further splitting the code-base, incubation
  and maturity

• Finer-grained team responsibilities
• Merciless quarantine and purging of flakey
  tests

• More page objects, less old-school Selenium
• Refactoring/removal of slow tests
• More REST-driven test fixtures and assertions
Take-aways
Automated testing has
 cumulative benefits
...and cumulative cost
Splitting the codebase is a
 key aspect of a short
 test feedback loop
Test Code is Not Trash

Respect, Design, Maintain, Review, Refactor, Restructure, Share, Discuss, Prune
Optimum Balance

Isolation   Speed   Coverage   Level   Structure   Effort
Dangerous to tamper with

Quality / Determinism   Maintainability
There are no universal
rules, no silver bullets.

We are expected to find the
optimum balance for our
specific case.

The definition of “optimum”
constantly changes.
Otherwise
Did I mention that
Page Objects pattern?
Credits

• Photos:
  • http://www.flickr.com/photos/toofarnorth/ - Dragon

  • http://www.flickr.com/photos/striatic - Frustration
  • http://www.flickr.com/photos/leeadlaf/ - Broken window

  • http://www.flickr.com/photos/johnloo/ - Lightbulb

• Dariusz Kordoński - for Test
  Improvements Leadership in JIRA
Thank You

33rd degree
