
Stating the obvious - 121 Test Automation Day, Dublin, 2018

121 Test Automation Day
Dublin, May 23rd 2018
https://1point21gws.com/testingsummit/dublin/

Stating the obvious: adding performance and scalability tests to a Continuous Integration pipeline

Performance and scalability are core quality attributes of any system, yet unit testing, integration testing, and UI testing all focus on functional requirements. Good performance means happy users and lower resource usage, which translates into lower running costs (power, cloud bills) and better customer retention.

In this session we will recall some basic concepts of performance testing and demonstrate some of the many tools available in the cloud.


  1. 1. Stating the Obvious Adding Performance and Scalability Tests to a Continuous Integration Pipeline 121 Test Automation Day Dublin, 23 May 2018 Giulio Vian DevOps Lead Glass Lewis & Co.
  2. 2. Agenda Introduction Why is it worth it? What are the basics (aka theory)? How to do it? Closing
  3. 3. This Session 1/200 level Bibliography at end SlideShare after the conference
  4. 4. Bio in pictures giulio.dev@casavian.eu @giulio_vian http://blog.casavian.eu/ https://tfsaggregator.github.io Hardware spec: 1KB RAM (upg. 16KB) 4KB ROM
  5. 5. I. Why is it worth it?
  6. 6. Performance is a Feature «there are two kinds of websites: the quick and the dead» https://blog.codinghorror.com/performance-is-a-feature/
  7. 7. Response time 0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result. 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data. 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect. Miller, R. B. (1968). Response time in man-computer conversational transactions. Proc. AFIPS Fall Joint Computer Conference Vol. 33, 267-277.
  8. 8. Hard facts Google: half a second of delay caused a 20% drop in traffic. Amazon.com: even very small delays (100 ms) would result in substantial and costly drops in revenue.
  9. 9. Lubricant for an engine Would you want your company's engine to seize up?
  10. 10. Always necessary? Source: B.Malakooti
  11. 11. There’s Nothing Like Production 2M users 40,000 RPS 2Gbps (© 2016 IMG Universe, LLC. All Rights Reserved)
  12. 12. Summary Performance is a feature Lots of evidence Size the effort Limit expectations Make your business case
  13. 13. II. What are the basics of performance testing? Also called load or stress testing
  14. 14. (Technical) Goals Benchmarking Force defects to emerge (code, configuration, architecture) Capacity planning Breaking point Sensitive data leakage
  15. 15. (Business) Goals Guarantee performance Forecast growth Budget expansion
  16. 16. Emerging Problems Connection pool exhaustion File locks Database locks (Thread) deadlocks Memory exhaustion Domino Effect Lack of scalability
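Most of these defects only surface under concurrency. As a minimal illustration (a sketch, not from the talk), here is a classic lock-ordering deadlock in Python: two threads acquire the same two locks in opposite order, which almost never collides with a single user but reliably hangs under load.

```python
import threading
import time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:          # acquires A first...
        time.sleep(0.1)   # widen the race window
        with lock_b:      # ...then blocks forever waiting for B
            pass

def worker_2():
    with lock_b:          # acquires B first...
        time.sleep(0.1)
        with lock_a:      # ...then blocks forever waiting for A
            pass

t1 = threading.Thread(target=worker_1, daemon=True)
t2 = threading.Thread(target=worker_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)
# With one user the timings rarely overlap and the bug stays hidden;
# a load test runs both paths concurrently and flushes it out.
print("deadlocked:", t1.is_alive() and t2.is_alive())
```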
  17. 17. Healthy behaviour (chart: latency, throughput, resource usage, and error/tooling-error rates plotted against relative load; required threshold of max N seconds, with markers at 60% and 100% load)
  18. 18. Monitor CPU RAM Disk I/O Network I/O Threads / processes (Photo: Donovan Govan)
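Commercial tools collect these counters automatically; when rolling your own, a side-band sampler is a few lines. A sketch using the third-party psutil package (an assumption: `pip install psutil`) that prints the counters listed above once per second while the test runs:

```python
import time
import psutil  # third-party package, assumed installed: pip install psutil

def sample(duration_s=10, interval_s=1):
    """Print basic host counters once per interval while a load test runs."""
    disk0, net0 = psutil.disk_io_counters(), psutil.net_io_counters()
    for _ in range(int(duration_s / interval_s)):
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        ram = psutil.virtual_memory().percent
        disk1, net1 = psutil.disk_io_counters(), psutil.net_io_counters()
        print(f"cpu={cpu:5.1f}%  ram={ram:5.1f}%  "
              f"disk_written={disk1.write_bytes - disk0.write_bytes}B  "
              f"net_sent={net1.bytes_sent - net0.bytes_sent}B  "
              f"processes={len(psutil.pids())}")
        disk0, net0 = disk1, net1

if __name__ == "__main__":
    sample()
```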
  19. 19. Summary Set your goals Define the paths Write the scripts Run and Monitor Analyze
  20. 20. III. How to do it
  21. 21. Tooling Scripts & Runners Load generators Monitoring
  22. 22. Tools: Scripts & Runners. Options: JMeter, Visual Studio (.webtest / .loadtest), Selenium, the xUnit family, Gatling, or custom. Traits: recording, scripting language, extensible, local runner, and more…
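Whatever the tool, a recorded script boils down to the same core step: issue a request, time it, assert on the result. A standard-library Python sketch of that core (the URL is a placeholder):

```python
import time
import urllib.request

def run_step(url, expected_status=200):
    """One scripted step: request, timing, assertion - the essence of a
    recorded .webtest, JMeter sampler, or Gatling exec."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
        elapsed_ms = (time.perf_counter() - start) * 1000
        assert resp.status == expected_status, f"unexpected status {resp.status}"
    return elapsed_ms, len(body)

if __name__ == "__main__":
    ms, size = run_step("https://example.com/")  # placeholder target
    print(f"{ms:.0f} ms, {size} bytes")
```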
  23. 23. Aside: data & scripts Logon Authentication tokens Session tokens Cookies Correlation tokens Data flow
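Correlation is the part that usually breaks naive record-and-replay: tokens captured at recording time expire, so the script must extract fresh values from each response and feed them into the next request. A sketch using the third-party requests package against a hypothetical login API (the endpoint names and JSON shape are made up):

```python
import requests  # third-party package, assumed installed: pip install requests

BASE = "https://app.example.com"  # hypothetical system under test

session = requests.Session()      # carries cookies across steps automatically

# Step 1: log on. Suppose the API returns a JSON body with an auth token.
resp = session.post(f"{BASE}/api/login",
                    json={"user": "perf01", "password": "secret"})
resp.raise_for_status()
token = resp.json()["token"]      # correlation: capture the fresh value

# Step 2: replay the next recorded request with the fresh token, not the
# one baked in at recording time; stale tokens are the classic replay bug.
resp = session.get(f"{BASE}/api/orders",
                   headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```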
  24. 24. Script example (VS)
  25. 25. Script example (VSTS)
  26. 26. Tools: Load generators. Options: VSTS Cloud-based load testing, CA BlazeMeter, SOASTA (Akamai) CloudTest, HPE LoadRunner, or custom. Traits: scalable, emulates clients (user agent), load progression, automatic data collection, pluggable runners, and more…
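Load progression is the trait that distinguishes a load generator from a plain script runner: virtual users are added in stages so you can watch where latency and error rates bend. A standard-library sketch of a stepped profile (target URL and stage sizes are placeholders):

```python
import concurrent.futures
import time
import urllib.request

URL = "https://example.com/"  # placeholder target
STAGES = [5, 10, 20]          # virtual users per stage
STAGE_SECONDS = 30

def virtual_user(deadline):
    """One virtual user: hammer the URL until the stage deadline."""
    ok = err = 0
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                resp.read()
            ok += 1
        except Exception:
            err += 1
    return ok, err

for users in STAGES:
    deadline = time.time() + STAGE_SECONDS
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        futures = [pool.submit(virtual_user, deadline) for _ in range(users)]
        totals = [f.result() for f in futures]
    ok = sum(t[0] for t in totals)
    err = sum(t[1] for t in totals)
    print(f"{users:3} users: {ok} ok, {err} errors, "
          f"{ok / STAGE_SECONDS:.1f} req/s")
```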
  27. 27. Load configuration (VS)
  28. 28. Load configuration (VSTS)
  29. 29. Integrate in your CI/CD pipelines
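The simplest integration shape is a pipeline step that runs the load tool headless and propagates its exit code, so a failed run fails the build. A sketch shelling out to JMeter's non-GUI mode (the file names are placeholders; the VSTS cloud-based load test task shown in the talk plays the same role):

```python
import subprocess
import sys

# Run JMeter headless: -n (non-GUI), -t (test plan), -l (results file).
# File names are placeholders for artifacts the pipeline checks out.
result = subprocess.run(
    ["jmeter", "-n", "-t", "loadtest.jmx", "-l", "results.jtl"]
)
sys.exit(result.returncode)  # a non-zero exit code fails the CI stage
```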
  30. 30. Tools: Monitoring Loading tool Basic Performance Counters* Application logs Cloud & More…
  31. 31. High-level architecture (diagram: load generator, system under test, monitoring, and the CI/CD pipeline)
  32. 32. Private target
  33. 33. Analyzing
  34. 34. Quality gates
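A quality gate reduced to code: parse the results, compute the metrics, and return a non-zero exit code when a threshold is breached so the pipeline stops. A sketch assuming a JMeter-style CSV results file with `elapsed` and `success` columns; the thresholds echo slide 7's one-second limit:

```python
import csv
import statistics
import sys

MAX_P95_MS = 1000    # keep the user's flow of thought (slide 7: one second)
MAX_ERROR_PCT = 1.0  # tolerated percentage of failed requests

def gate(jtl_path):
    """Return True if the run passes both latency and error thresholds."""
    elapsed, failures = [], 0
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            elapsed.append(int(row["elapsed"]))
            failures += row["success"] != "true"
    p95 = statistics.quantiles(elapsed, n=100)[94]  # 95th percentile
    error_pct = 100.0 * failures / len(elapsed)
    print(f"p95={p95:.0f} ms  errors={error_pct:.2f}%")
    return p95 <= MAX_P95_MS and error_pct <= MAX_ERROR_PCT

if __name__ == "__main__":
    sys.exit(0 if gate("results.jtl") else 1)  # non-zero stops the pipeline
```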
  35. 35. Hidden Gremlins Cloud infrastructure warm-up (e.g. ELB) Default configuration (e.g. nginx worker processes) Client resources
  36. 36. Summary Pick a tool family Simple CI/CD integration Beware of the unexpected Value of the tests
  37. 37. IV. Closing
  38. 38. How do you read this? (the same chart as slide 17: latency, throughput, and usage against relative load, with the max-N-seconds threshold and the 60% and 100% markers)
  39. 39. Costs Writing and maintaining scripts Wait Time Load-generating resources
  40. 40. Test Automation Pyramid (source: Mike Cohn © 2010; pyramid layers UI, Service, Unit, shown with load tests added)
  41. 41. (Photo: Elya) Resources Sisyphus Choosing to invest
  42. 42. Bibliography & References http://www.slideshare.net/giuliov/presentations https://github.com/giuliov/Stating-the-obvious/ https://docs.microsoft.com/en-us/vsts/build-release/tasks/test/cloud-based-load-test AWS Well-Architected Framework - Performance Efficiency Pillar https://www.amazon.com/dp/B01MSSLHBX Performance Testing Guidance for Web Applications https://msdn.microsoft.com/en-us/library/bb924375.aspx http://www.brendangregg.com/linuxperf.html
  43. 43. Bibliography (2) Writing High-Performance .NET Code — Ben Watson (Ben Watson) https://www.amazon.it/Writing-High-Performance-NET-Code-Watson/dp/0990583430/ Time Is Money: The Business Value of Web Performance — Tammy Everts (O'Reilly Media) https://www.amazon.com/Time-Money-Business-Value-Performance/dp/1491928743
  44. 44. Bibliography (3) Software Performance and Scalability: A Quantitative Approach — Henry H. Liu (Wiley) https://www.amazon.com/Software-Performance-Scalability-Quantitative-Approach/dp/0470462531 Continuous Delivery with Windows and .NET — Matthew Skelton and Chris O'Dell (O'Reilly) http://www.oreilly.com/webops-perf/free/continuous-delivery-with-windows-and-net.csp
  45. 45. Bibliography (4) Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation — J. Humble, D. Farley (Addison-Wesley) https://www.amazon.com/Continuous-Delivery/dp/0321601912/ The DevOps Handbook — G. Kim, P. Debois, J. Willis, J. Humble (IT Revolution Press) https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations/dp/1942788002/
  46. 46. Takeaways Know the theory Make your case Pick a tool Integrate in CI/CD Complexity around the corner
