
PAC 2019 virtual Joerek Van Gaalen

Best practices of performance tests in CI


  1. Best Practices of performance tests in Continuous Integration, by Joerek van Gaalen
  2. Joerek van Gaalen • Performance specialist since 2005 • Independent performance specialist since 2018 • 100+ performance test projects
  3. Relevant experience
  4. YOLT • Money manager app • Aggregation platform for third parties • PSD2-ready • Millions of requests per day on open banking
  5. YOLT • Owner of the performance aspect • Agile way of working with a touch of DevOps • Microservices architecture (45 services, 7 teams) • CI/CD implemented with 1 to 4 releases per day • Setting up performance tests in Continuous Integration • Helping the teams with their performance challenges
  6. Goal of the talk • Improve grip on performance by automating tests • Share best practices and guidelines • Other activities you should do to improve grip on performance
  7. WHY?
  8. Why do we want it? • Acceptance tests come late in the process • The tests can run independently of the performance engineer • Direct feedback – fix or continue • Trending – the more, the better
  9. Problems & risks • It's NOT easy to do • Too many false positives or negatives • Scripts and thresholds need too much rework • People don't know how to interpret the results • People start to ignore the tests
  10. Best practices
  11. Acceptance test vs automated test – Traditional acceptance tests • Have the goal of proving the application meets the requirements • Realistic simulation of production • Different types of tests
  12. Acceptance test vs automated test – Automated tests • Should show differences with prior tests • Run automatically after builds, in release pipelines, or on a schedule • Usually a load test scenario • Don't necessarily have to be a realistic simulation (but that is preferred!)
  13. Approach – Start small • Start with a single script • Your most important and meaningful script • Later add new pages, transactions, variations and scripts • If possible, clone the default acceptance load test (a minimal script sketch follows after the slide list)
  14. Approach – Robustness
  15. Approach – Realism • Being realistic is ideal: the more realistic, the more meaningful your results are • Deviate from realism if necessary to improve robustness • A realistic load model is nice, but having sufficient measurements is better • Spend time on your test data and environment
  16. Approach – Other robustness factors • Self-healing test data • Avoid randomization of too many things: test data, opening random URLs, random iterations • Random think times are still good (see the robustness sketch after the slide list)
  17. The process – The release pipeline • Add load test(s) to the pipeline, after the functional tests • Block the release pipeline if the test fails (see the gate-script sketch after the slide list) • Distribute results to the people involved
  18. The process – Run scheduled tests too • Generally: the more, the better, but it should not frustrate the process • Also run tests on a schedule, because not all changes come through releases: infra changes, changes sneaking in outside the pipeline, network changes (routes change too)
  19. Visualise the data • Detailed results per test <link> • Graphical trend lines of your data <link> (see the trend-plot sketch after the slide list) • Tagging of releases or changes • Errors per transaction and per request • Make the data easily available to everyone
  20. Which scenario to automate? • A load test is usually the most suitable scenario • Duration is a trade-off between faster pipelines and more robust measurements • Duration depends on: the duration of the user flows, the deviation of response times and errors, and the number of measurements you need for a stable result • What about an automated stress test?
  21. Analysis – 1 • The end result should give PASS or FAIL • Set thresholds on average response times at baseline + 10-25% • Set thresholds on the maximum error percentage (0-3%) • Set thresholds on all requests and on individual requests & transactions (see the gate-script sketch after the slide list)
  22. Analysis – 2 • Manual analysis on failure, and sometimes on pass • Keep updating the thresholds
  23. Analysis – My experience • It's a learning journey • Stopped using percentiles – too volatile • Changed the load model to increase the number of measurements • Didn't set overly strict thresholds
  24. Synthetic monitoring • Monitor your production environment with your scripts • Monitor your acceptance environment too • Annotate releases
  25. Issues along my journey – 1 • Too many false negatives • Unstable environments and services • Keeping an overview of results and comparisons • I'm the single point of contact • Hard to test microservices in isolation
  26. Issues along my journey – 2 • It's hard to transfer true performance testing skills • Automated tests are a real benefit, but don't replace thorough acceptance tests
  27. Ultimate goal • Continuous load testing • Automatic recognition of anomalies – reducing false positives/negatives
  28. End thoughts • Automated tests will not replace traditional acceptance tests • Setting up automated tests can be a long journey
  29. Q&A • What about automated stress tests? • What about automated self-adjusting thresholds? • Do you find isolated tests useful, or too time-consuming with no benefit?
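
Slide 13 recommends starting with a single, meaningful script. A minimal sketch of what that could look like, assuming Locust as the load-test tool (the deck does not name one); the endpoint and host are hypothetical:

```python
# single_script.py - a deliberately small first automated load test (slide 13).
# Locust is an assumption; the endpoint below is hypothetical.
from locust import HttpUser, task, between

class ProductViewer(HttpUser):
    wait_time = between(1, 3)  # random think time between requests

    @task
    def view_product(self):
        # The single most important transaction; more pages and flows come later.
        self.client.get("/products/1", name="/products/[id]")
```

From the pipeline this can run headless, for example with `locust -f single_script.py --headless -u 10 -r 2 -t 10m --host https://acceptance.example.com --csv results`.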
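
For slide 16, a sketch of limiting randomization to think times while test data stays deterministic, again assuming Locust; the accounts.csv file and its account_id column are hypothetical:

```python
# robust_script.py - robustness over realism (slide 16).
import csv
from itertools import cycle

from locust import HttpUser, task, between

# Deterministic test data: read a fixed CSV once and walk through it in order,
# instead of picking random rows, random URLs or random iteration counts.
with open("accounts.csv", newline="") as f:
    ACCOUNTS = [row["account_id"] for row in csv.DictReader(f)]

class AccountUser(HttpUser):
    wait_time = between(2, 5)  # random think times are still fine (slide 16)

    def on_start(self):
        self.accounts = cycle(ACCOUNTS)  # each user cycles predictably through the data

    @task
    def view_account(self):
        # Group all account URLs under one transaction name for stable comparisons.
        self.client.get(f"/accounts/{next(self.accounts)}", name="/accounts/[id]")
```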
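
For slides 17 and 21, a sketch of a gate step that runs after the load test in the release pipeline, compares the run against a stored baseline, and exits non-zero so the CI server blocks the release. The file names, the JSON layout and the exact margins are assumptions; slide 21 only gives the ranges (baseline + 10-25%, error percentage 0-3%):

```python
#!/usr/bin/env python3
"""Hypothetical pipeline gate: PASS/FAIL based on thresholds (slides 17 and 21)."""
import json
import sys

RESPONSE_TIME_MARGIN = 0.15   # slide 21: allow baseline + 10-25%; 15% chosen here
MAX_ERROR_PERCENTAGE = 1.0    # slide 21: 0-3%; 1% chosen here


def check(current, baseline):
    """Return a list of threshold violations, per transaction."""
    failures = []
    for name, stats in current["transactions"].items():
        base = baseline["transactions"].get(name)
        if base is None:
            continue  # new transaction, no baseline yet: analyse manually
        limit = base["avg_ms"] * (1 + RESPONSE_TIME_MARGIN)
        if stats["avg_ms"] > limit:
            failures.append(f"{name}: avg {stats['avg_ms']:.0f} ms > limit {limit:.0f} ms")
        if stats["error_pct"] > MAX_ERROR_PERCENTAGE:
            failures.append(f"{name}: errors {stats['error_pct']:.1f}% > {MAX_ERROR_PERCENTAGE}%")
    return failures


if __name__ == "__main__":
    current = json.load(open("current_run.json"))    # produced by the load test
    baseline = json.load(open("baseline.json"))      # kept and updated over time
    failures = check(current, baseline)
    for line in failures:
        print("FAIL:", line)
    print("RESULT:", "FAIL" if failures else "PASS")
    sys.exit(1 if failures else 0)                   # non-zero exit blocks the release
```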
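
For slide 19, a sketch of a simple trend file and plot with releases tagged as vertical markers. The CSV layout and file names are assumptions; in practice a dashboard (Grafana or similar) does this job better, this only illustrates the idea:

```python
# trend_plot.py - append one row per run and plot a trend line (slide 19).
import csv
from datetime import datetime, timezone

import matplotlib.pyplot as plt

TREND_FILE = "trend.csv"  # columns: timestamp, transaction, avg_ms, error_pct, release


def append_run(transaction, avg_ms, error_pct, release=""):
    """Store one measurement per run so every run extends the trend line."""
    with open(TREND_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), transaction, avg_ms, error_pct, release]
        )


def plot_trend(transaction):
    """Plot average response time over runs, tagging releases (slide 19)."""
    with open(TREND_FILE, newline="") as f:
        rows = [r for r in csv.reader(f) if r[1] == transaction]
    xs = list(range(len(rows)))
    avgs = [float(r[2]) for r in rows]
    plt.plot(xs, avgs, marker="o")
    for i, r in enumerate(rows):
        if r[4]:  # a release tag was recorded for this run
            plt.axvline(i, linestyle="--", alpha=0.3)
            plt.text(i, max(avgs), r[4], rotation=90, va="top", fontsize=8)
    plt.xticks(xs, [r[0][:16] for r in rows], rotation=45, fontsize=7)
    plt.ylabel("avg response time (ms)")
    plt.title(f"Trend for {transaction}")
    plt.tight_layout()
    safe_name = transaction.strip("/").replace("/", "_")
    plt.savefig(f"trend_{safe_name}.png")
```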
