
Expoqa17 - Cheesecake: The evolution of our automatic test suite


  1. CHEESECAKE The evolution of our automatic test suite
  2. BIO
  3. INTRODUCTION • We had to create an automated test suite for a web application • Evolution of our automated test suite
  4. OUR TOOLS • Open Source • Easy to learn • Continuous Integration • Document app behavior • Flexibility
  5. #1 WRITE SCENARIOS
     Initial State: We need to agree on the wording
     Solution 1
     Scenario: Log in with valid credentials
       Given I am on the login page www.myapp.com/login
       When I fill the user name with jo@mail.es
       And I fill the password with iLoveBdd
       And I click the login button
       Then I should be redirected to my dashboard
  6. #1 WRITE SCENARIOS
     Initial State: We need to agree on the wording
     Solution 2
     Scenario: Log in with valid credentials
       Given I am on www.myapp.com/login
       When I log in with the following data:
         | username   | password |
         | jo@mail.es | iLoveBdd |
       Then I should be redirected to my dashboard
  7. #1 WRITE SCENARIOS
     Initial State: We need to agree on the wording
     Solution 3
     Scenario: Log in with valid credentials
       Given I am on the login page
       When I log in with valid credentials
       Then I should be redirected to my dashboard
  8. #1 WRITE SCENARIOS
     Initial State: We need to agree on the wording
     Solution 3 (variation)
     Scenario: Log in with valid credentials
       Given I am on the login page
       When I log in with valid credentials [email:jo@mail.es]
       Then I should be redirected to my dashboard
  9. #2 BROWSER INITIALIZATION
     Initial State: Browser started for each scenario
     hooks.rb:
       Before do
         @browser = Watir::Browser.new :firefox
       end
       After do
         @browser.quit
       end
  10. #2 BROWSER INITIALIZATION Problem: Slow Solution: • Open/close the browser when starting/ending the suite • Clean cookies between scenarios
  11. #2 BROWSER INITIALIZATION
      hooks.rb:
        AfterConfiguration do
          $browser = Watir::Browser.new :firefox
        end
        Before do
          @browser = $browser
          @browser.cookies.clear
        end
        at_exit do
          $browser.quit
        end
  12. #3 THE NIGHTLY BUILD Initial State: Tests were executed on demand Problem: Slow feedback loop Solution: Nightly build + Rerun report
  13. #3 THE NIGHTLY BUILD
  14. #4 SPEEDING UP Initial State: Long test execution Problem: Slow feedback loop & flickering scenarios Solution: Speed up the automation
  15. #4 SPEEDING UP How? • Backend testing • Micro-services testing • Bypass login • Parallelization
  16. #4.1 BACKEND TESTING Initial State: We were only doing UI testing Problem: Slow feedback and flickering scenarios Solution: Backend testing • Full coverage in backend & re-implement some tests in frontend
  17. #4.1 BACKEND TESTING • Reliable • Fast – Quick feedback • Easy to maintain
  18. #4.2 COMPONENT TESTING State: We were testing a monolith Problem: Slow feedback Solution: Component testing • 1 service / project <> 1 testing suite
  19. #4.2 COMPONENT TESTING
  20. #4.3 BYPASS LOGIN Initial State: All the UI tests go through the login page Problem: Interacting with the UI is slow Solution: Bypass the login page
  21. #4.4 PARALLELIZATION Initial State: Tests executed sequentially Problem: Slow feedback loop Solution: Parallelization
  22. #4.4 PARALLELIZATION Difficulties: • Headless mode blocked other threads – Solution: Managing the threads
  23. #4.4 PARALLELIZATION Difficulties: • The initial state for the tests was not the expected one – Solution: Analyze each test and fix it
  24. #4.4 PARALLELIZATION Difficulties: • One report was generated per thread – Solution: Merge all the reports in one
  25. #5 WHAT’S NEXT? • Have code and tests under the same repository • Dockerize your solution • Make your tests create data on the fly • Execute tests in local environment • Execute tests on each PR
  26. #5 WHAT’S NEXT? • Create your own “qa” classes to identify web elements • Talk with developers – At which level to automate – How to make testing easier
  27. TAKE AWAYS • Do not be afraid of changes • Do not try to fix the big problem at once • Do not think small improvements do not count • Do not stop improving
  28. THANK YOU @bugbustersbcn #bugbustersbcn linkedin.com/in/gloriahornero linkedin.com/in/aidamanna

Editor's Notes

  • Hi and welcome to our talk
    We are going to speak about Cheesecake, our automated test suite
  • We are Aida and Gloria
    We come from Barcelona, Spain
    We are both QAs
    We are fans of Agile methodologies and of course we are super fans of Cucumber
    We use it in our daily work at Typeform
    For those who don’t know our company
    We have a web application tool that allows you to create human-friendly online forms.
  • When we joined our current company we faced the challenge of creating the automated test suite for our web application
    We started automating UI because the API was super messy and because this type of automation allows you to simulate real users
    So you can check that the business requirements are met
    One thing we found out was that there are not many resources available for beginners
    This is why we would like to share our experience, explain the story of Cheesecake, and introduce some of the challenges we faced and how we solved them.
  • The first thing we did was think about the requirements for choosing the automation tools,
    and we came up with the following list
    We wanted to use open source tools
    As they usually have active communities that provide constant support and improvements
    The tools should be easy to learn
    So that we can start working on tests earlier
    It was also very important to be able to connect to a Continuous Integration tool like Jenkins
    To make our test suite fully automatic and run tests in an unattended way
    Another requirement was to document app behavior
    The company we work in was already using a Project Management Tool to document requirements, but there was no single place where the expected behavior was documented. For this reason, it was important to use a tool that documented application behavior in a way that everyone could understand
    Last but not least, we needed the tool to be highly flexible
    Even though we started by automating the UI, we wanted the flexibility to automate at any level, or even compare screenshots

    With all these requirements in mind, and taking into account our personal experiences with several tools, we arrived at a decision:
    the tools that best fit our needs were Cucumber + Watir (wrapper around Selenium) and Ruby

    After choosing the tools we wanted to use we started automating tests
    As we started adding tests we realized that the feedback was slow
    So we made small changes in order to speed up the execution
    At the end we will see how we completely changed our testing strategy in order to really solve the problems we were having
  • Write scenarios in a proper way
    Cucumber allows you to write test scenarios that describe behavior using Gherkin syntax
    We were not sure about how to do it properly
    We wanted to do it right because we knew that the way we wrote them would become the basis for all our tests
    First solution:
    The first scenario we thought to automate was the login
    Problems:
    Long
    Noisy
    We are not focusing on the behavior
    We are explaining the steps you need to follow in order to perform a login
  • Second solution:
    Shorter
    We are focusing on the behavior
    Problems:
    We are not solving the problem of the data
    For us, unnecessary data is information that is not needed in order to describe the behavior of the scenario
    In this specific case, the user name and the password are not necessary to describe the behavior of the login

  • Third solution:
    This is the solution we decided to implement
    The scenario is short and focuses on the application behavior
    We are creating domain language


  • Variation on the third solution:
    This is the variation we sometimes use at Typeform
    Sometimes you need to pass data to your test scenario
    With this bracket notation we are telling the reader that the information is not needed in order to describe the behavior
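    A minimal sketch of a step definition supporting this notation (the regex, the login_as helper, and the default user are illustrative assumptions, not our actual code):

      When(/^I log in with valid credentials(?: \[email:(\S+)\])?$/) do |email|
        email ||= 'default.user@mail.es'  # default test user when no email is given
        login_as(email)                   # hypothetical helper that performs the login
      end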

  • INITIAL STATE:
    - When you are automating tests it is super important that:
    - Tests are independent
    - They start from a clean state
    As a first approach, we started by initializing the browser for each scenario
    in the Before hook we opened a new browser
    and then we closed it in the After hook.
    Each scenario starts from a clean state


  • PROBLEM:
    Initializing the browser for each scenario does not look like a problem when you have a few tests, but when you have a reasonable number of test cases, you realize that a long time is spent in constantly opening and closing the browser.
    Quick feedback is one of the most important characteristics of a good automated test suite, so we decided to change this approach.

  • SOLUTION:
    We will open the browser before any test is executed
    and close it when all tests are run.
    In between scenarios we will just clean the browser status

    This can be achieved in Cucumber + Ruby by
    Opening the browser in the env.rb file
    Closing it in the at_exit hook
    The only thing that is left after that is to clean the browser in between scenarios, which basically means cleaning cookies in the Before hook.

    This is just a small change but it sped up our suite a lot.
  • INITIAL STATE
    In Typeform QAs are in charge of deploying the app to production
    We wanted to do a deploy every morning
    So, to do that safely, we need to run all test cases,
    which took more or less an hour to complete.
    - If any test fails we need to review it and see whether there is a bug or the test is broken
    - If it is a bug, we wait for the fix and we run the tests again
    - If the test is broken we need to see what is going on:
    - If something has changed we just need to fix the test
    - But sometimes we also have flickering scenarios: scenarios that fail intermittently
    - In a large number of cases, when you rerun them they pass
    - So they were failing because of the conditions in which they were executed:
    - Because the network was slow (the page couldn’t load)
    - or because the server was busy and took too long to respond to the client (we were waiting for an event that didn’t happen)
    - Fixing those tests is not easy at all and takes time

    - The idea when you are reviewing failing tests is:
    - Identifying bugs in the code, to solve them quickly and make the deploy
    - And also identifying the flickering scenarios, so that you can solve them calmly after the deploy


    PROBLEM:
    - As you can see, this process is very slow,
    - and sometimes by the time we were done it was already too late to make the deploy
    - so we needed something to speed up this process.

    SOLUTION:
    In order to be able to deploy every day, we started doing nightly builds in our Continuous Integration tool and generating the rerun report
  • Execute all tests in the last commit
    Two reports: Normal + Re-run
    Automate re-running failed tests -> You can identify flickering scenarios
    Now we deploy every day


    SOLUTION:
    Every night, the nightly build runs the entire testing suite against the last commit.
    The nightly build will generate two reports:
    The normal report with the info about passed/failed tests
    The rerun report
    Not everybody knows about this report, but
    you can generate it by passing the rerun option on the Cucumber command line
    It generates a file listing all the failed scenarios so that you can rerun just those
    Then you can automate this process
    This was a really good idea, as when automating the UI you sometimes have flickering scenarios, i.e. false failures
    By running failed tests again you can identify them quickly
    And, of course, fix them
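    For reference, this is roughly how the rerun formatter is used on the cucumber-ruby command line (the file name rerun.txt is just a convention):

      # First pass: write the locations of all failed scenarios to rerun.txt
      cucumber --format rerun --out rerun.txt
      # Second pass: execute only the scenarios listed in rerun.txt
      cucumber @rerun.txt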

    This little improvement greatly sped up the deployment process and allowed us to deploy almost every day.

  • E2E tests are slow because interacting with the UI is a slow process
    A slow feedback loop is not good for an agile environment
    Among the characteristics of a good feedback loop, the most important is that it is quick
    No one wants to be waiting for hours for tests to be executed
    So we need to speed up the automation
  • In order to speed up the testing suite we took different actions
    Backend testing
    Micro-services testing
    Bypass login
    Parallelization
  • INITIAL STATE & PROBLEM
    - As you can see until now we were just doing UI testing which was causing problems like:
    Slow feedback
    Flickering scenarios, which are difficult to solve

    SOLUTION
    For those reasons we decided to change our testing strategy and
    focus on doing a lot of backend testing and very little UI testing
    By backend testing we mean simulating the calls that the browser would make and hitting the endpoints directly
    The main idea was implementing all tests in the backend and then re-implementing the most critical ones in the frontend
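    A minimal sketch of what such a backend step can look like, assuming a hypothetical POST /login endpoint (URL, payload, and status check are illustrative):

      require 'net/http'
      require 'json'
      require 'uri'

      When(/^I log in with valid credentials$/) do
        # Hit the login endpoint directly instead of driving the UI
        @response = Net::HTTP.post(
          URI('https://myapp.com/login'),
          { username: 'jo@mail.es', password: 'iLoveBdd' }.to_json,
          'Content-Type' => 'application/json'
        )
      end

      Then(/^I should be logged in$/) do
        raise "login failed: #{@response.code}" unless @response.code == '200'
      end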
  • - For us doing more backend testing was a big improvement
    We know it may not fit in all projects but in our case the advantages were:
    Reliable: No flickering scenarios because we are not interacting with the UI,
    And for the same reason the execution is also much faster
    Easy to maintain: In our case the backend changes less than the UI
  • INITIAL STATE & PROBLEM:
    After trying it for a while, we found this was not the best approach:
    Re-implementing scenarios is messy at a coding level
    We wanted to do as little UI testing as possible
    We wanted to be closer to development
    Moreover, we were testing a monolith (all functionality was under the same codebase)
    This was slow, as there were a lot of test cases
    The good thing was that at that moment we were moving to a micro-services architecture


    SOLUTION:
    We would test the micro-services in isolation
  • SOLUTION:
    Instead of having one big test suite that tested everything,
    we would have one testing suite for each service/project, to test that the service/project works well
    If possible this will be done through the backend
    For a frontend-only project, we will test it through the UI
    To keep the tests quick, the frontend should not be connected to the backend
    These tests will only cover the interaction with the UI
    To test the integration we will implement an integration testing suite of e2e tests that go through the UI
  • INITIAL STATE & PROBLEM:
    In our case all our E2E automated tests need to go through the login page
    Interacting with the UI is a slow process
    We already have a test that checks if the login works correctly

    SOLUTION:
    We can bypass the login page, in our case performing the login via backend
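    A sketch of what the bypass can look like, assuming a hypothetical login endpoint that returns a session cookie (endpoint, payload, and cookie name are illustrative):

      require 'net/http'
      require 'json'
      require 'uri'

      Given(/^I am logged in$/) do
        # Authenticate against the backend instead of filling in the login form
        response = Net::HTTP.post(
          URI('https://myapp.com/login'),
          { username: 'jo@mail.es', password: 'iLoveBdd' }.to_json,
          'Content-Type' => 'application/json'
        )
        session = response['Set-Cookie'][/session=([^;]+)/, 1]  # extract the session id
        @browser.goto 'https://myapp.com'         # a page must be loaded before setting cookies
        @browser.cookies.add('session', session)  # Watir cookie API
      end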


  • INITIAL STATE & PROBLEM:
    When you start with the automation it is normal that all the tests are executed sequentially
    The main problem is that this makes the execution of your test suite slow

    SOLUTION:
    A way to solve that is implementing parallelization
    In this way you can run different subsets of tests at the same time, saving time
    There are some libraries that make the implementation of the parallelization easy
    We implemented our parallelization solution using the Ruby gem parallel_tests
    This gem splits the feature files that are going to be executed between different threads
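    For reference, parallel_tests provides a parallel_cucumber executable; a typical invocation looks like this (4 processes is just an example):

      # Split the feature files across 4 parallel processes
      bundle exec parallel_cucumber features/ -n 4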
  • While implementing this solution we found some difficulties we needed to solve:
    Headless mode blocked other threads
    In headless mode, instead of opening a different instance of the browser, it opens a different tab
    Not all the threads take the same time to be executed
    As explained before, when the execution finishes we close the browser, so as soon as a thread finishes it closes the browser and thus all the tabs
    This ends up with the other processes losing the connection to the browser
    In order to avoid that we implemented a watcher that waits until all the processes are finished to close the browsers.
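    A minimal sketch of such a watcher, assuming lock files are used to track the running processes (the lock-file mechanism and paths are illustrative, not our exact implementation):

      # hooks.rb
      require 'fileutils'

      LOCK_DIR = 'tmp/parallel_locks'.freeze

      AfterConfiguration do
        FileUtils.mkdir_p(LOCK_DIR)
        FileUtils.touch("#{LOCK_DIR}/#{Process.pid}.lock")  # mark this process as running
        $browser = Watir::Browser.new :firefox
      end

      at_exit do
        FileUtils.rm_f("#{LOCK_DIR}/#{Process.pid}.lock")   # mark this process as finished
        # Only the last process to finish quits the shared browser
        $browser.quit if Dir["#{LOCK_DIR}/*.lock"].empty?
      end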


  • The initial state for the tests was not the expected one as we were not cleaning test data after each scenario
    We were using the same users for several tests
    As we were not executing the scenarios in a sequential way the same user was performing different actions that were modifying the initial state for the next test
    There was no general fix for it; we needed to analyze each case and find the proper solution
  • Other improvements to work better:
    Have code and tests under the same repository
    Dockerize your solution
    There are Selenium Docker images that deal with Selenium & browser version compatibility problems
    Make your tests create data on the fly
    E.g. by using the APIs (see the sketch after this list)
    Execute tests in local environment
    Execute tests on each PR
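    A minimal sketch of creating data on the fly through a hypothetical API (URL and payload are illustrative assumptions):

      require 'net/http'
      require 'json'
      require 'securerandom'
      require 'uri'

      Before do
        # Create a fresh user for each scenario so tests do not share state
        @user_email = "user-#{SecureRandom.hex(4)}@mail.es"
        Net::HTTP.post(
          URI('https://myapp.com/api/users'),
          { email: @user_email, password: 'iLoveBdd' }.to_json,
          'Content-Type' => 'application/json'
        )
      end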

  • Other improvements to work better:
    Create your own qa classes to identify web elements
    When you do UI testing you need to somehow identify the web element you want to interact with
    Sometimes developers change classes or ids, making tests fail because of that
    By creating your own classes and prefixing them with the “qa” word, developers know these are needed for testing
    and they will either not change them or update the tests so they do not fail (an example follows below)
    Above all, the most important thing is speaking about testing with developers. They can suggest a lot of shortcuts to make testing easier
    and prepare the code to be easily tested
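    An illustrative example of locating an element through a dedicated qa- class (the markup and class name are hypothetical):

      # Given markup like: <button class="btn btn-primary qa-login-button">Log in</button>
      @browser.button(class: 'qa-login-button').click  # Watir locator by class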
  • As a conclusion:
    You can see that we started with UI tests
    We had different problems, one of them the slow feedback
    We made different improvements to speed things up
    But in the end we realized the problem was doing that much UI testing
    So we decided to change our testing strategy

    The message we want to convey is that you will face different problems while developing your tests
    Do not try to fix all the problems at once
    Even if the change is small it will be better than before
    It is important to be constantly improving

  • We hope we were able to give some ideas