
TestCon Vilnius 2017 - Stating the obvious


Session "Stating the obvious: adding performance and scalability tests to a Continuous Integration pipeline" at TestCon Vilnius 2017


  1. Stating the obvious: Adding performance/load/stress testing to your Continuous Integration pipeline (Giulio Vian, @giulio_vian)
  2. Agenda: Why obvious? CI/CD integration; Perf/load/stress testing basics; Closing
  3. This session: 200 level. Tight on time, so open questions are avoided.
  4. I. Why obvious?
  5. Performance is a Feature: «there are two kinds of websites: the quick and the dead»
  6. Response time: 0.1 second is about the limit for the user to feel that the system is reacting instantaneously; no special feedback is necessary except to display the result. 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay; no special feedback is needed for delays between 0.1 and 1.0 second, but the user loses the feeling of operating directly on the data. 10 seconds is about the limit for keeping the user's attention focused on the dialogue; for longer delays, users will want to perform other tasks while waiting, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect. (Miller, R. B. (1968). Response time in man-computer conversational transactions. Proc. AFIPS Fall Joint Computer Conference, Vol. 33, 267-277.)
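Miller's three limits form natural bands for triaging measured response times. A minimal sketch (the band labels are my own shorthand, not Miller's wording):

```python
def responsiveness_band(seconds):
    """Classify a response time against Miller's classic limits:
    0.1 s (feels instantaneous), 1 s (flow of thought kept),
    10 s (attention kept)."""
    if seconds <= 0.1:
        return "instantaneous"
    if seconds <= 1.0:
        return "flow kept, delay noticed"
    if seconds <= 10.0:
        return "attention kept, feedback needed"
    return "user switches task, progress indicator needed"
```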
  7. Hard facts: at Google, a half-second delay caused a 20% drop in traffic, and even very small delays (100 ms) would result in substantial and costly drops in revenue.
  8. Why lubricant in an engine? Do you want the motor to seize?
  9. Always necessary? Gartner Hype Cycle (source: Wikipedia)
  10. My perf test of the year: 2M users, 40,000 RPS, 2 Gbps. There's Nothing Like Production. (© 2016 IMG Universe, LLC. All Rights Reserved)
  11. II. CI/CD Integration
  12. Tooling: scripts & runners; load generators; monitoring
  13. Tools, scripts & runners: JMeter; Visual Studio (.webtest/.loadtest); Selenium; the xUnit family; Gatling; custom tools & more…
  14. Tools, load generation: VSTS Cloud-based Load Testing; CA BlazeMeter; SOASTA (Akamai) CloudTest; HPE LoadRunner; custom tools & more…
  15. Tools, monitoring: the loading tool itself; basic performance counters; application logs; cloud services & more…
  16. Analyzing
  17. High-level architecture: you, the load generator, the system under test (SUT), and monitoring, all driven by the CI/CD pipeline
  18. Web Load Testing
  19. Considerations for CI/CD: trunk or branch of the pipeline; resources; KPIs
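A common way to wire KPIs into the pipeline is a quality gate that fails the build when a run breaches its thresholds. A minimal sketch; the KPI names and limits below are illustrative assumptions, not values from the talk:

```python
# Hypothetical KPI thresholds for a CI/CD quality gate; the names and
# limits are illustrative, not prescribed by the talk.
THRESHOLDS = {"p95_latency_ms": 800, "error_rate": 0.01}

def gate(results):
    """Compare one test run's KPIs against the thresholds and return
    the sorted list of violated KPIs; an empty list means the gate passes.
    A missing KPI counts as a violation."""
    return sorted(k for k, limit in THRESHOLDS.items()
                  if results.get(k, float("inf")) > limit)
```

A pipeline step would call `gate(...)` on the load-test summary and exit non-zero when the returned list is non-empty, breaking the build.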
  20. Time factor: short-running vs. long-running tests; test coverage; frequency of execution; value from the test
  21. Hidden gremlins: cloud infrastructure warm-up (e.g. ELB); default configuration (e.g. nginx worker processes); client resources
  22. III. Perf/load/stress testing basics
  23. (Technical) goals: benchmarking; forcing defects to emerge (in code, configuration, architecture); capacity planning; finding the breaking point; security leakage
  24. Emerging problems: connection pool exhaustion; file locks; database locks; (thread) deadlocks; memory exhaustion; the domino effect; lack of scalability
  25. Indicators: latency; throughput; load; error rate; tool (client) errors
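These indicators all fall out of the raw per-request results a load run produces. A minimal sketch of computing them from a list of samples, using a simple nearest-rank percentile (an assumption of mine; real tools differ in percentile method):

```python
import math

def percentile(sorted_vals, p):
    """Nearest-rank percentile of a pre-sorted, non-empty list."""
    k = max(0, math.ceil(p / 100 * len(sorted_vals)) - 1)
    return sorted_vals[k]

def summarize(samples, duration_s):
    """samples: list of (latency_s, ok) tuples from one test run.
    Returns latency percentiles, throughput, and error rate."""
    lat = sorted(s[0] for s in samples)
    errors = sum(1 for _, ok in samples if not ok)
    return {
        "p50_s": percentile(lat, 50),
        "p95_s": percentile(lat, 95),
        "throughput_rps": len(samples) / duration_s,
        "error_rate": errors / len(samples),
    }
```

Tool (client) errors would be counted separately on the generator side, since they signal that the *test harness*, not the system under test, is the bottleneck.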
  26. Compound graph (chart overlaying latency, throughput, and usage against relative load, with error rates and tool errors added; a required threshold of max N seconds is marked, with the 60%-100% load range highlighted)
  27. How do you read this? (the same latency/throughput/usage chart, presented for the audience to interpret)
  28. Analyze: response curve; inflection point; bottlenecks
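The inflection point is where the response curve stops being roughly linear and latency starts climbing steeply with load. A crude numeric sketch of spotting it from (load, latency) points; the slope-ratio heuristic and its factor are my own illustrative choice:

```python
def inflection_load(curve, factor=3.0):
    """curve: list of (load, latency) points sorted by load.
    Return the first load level where the latency slope grows by more
    than `factor` relative to the previous segment -- a crude way to
    find the knee of the response curve -- or None if there is none."""
    slopes = []
    for (l0, t0), (l1, t1) in zip(curve, curve[1:]):
        slopes.append(((t1 - t0) / (l1 - l0), l1))
    for (s0, _), (s1, load) in zip(slopes, slopes[1:]):
        if s0 > 0 and s1 / s0 > factor:
            return load
    return None
```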
  29. Common scenarios: desktop/mobile/library client; server/web; database server (diagram of client, web server, and database server, with links labelled N1-N4 and A1-A3)
  30. What to monitor: CPU; RAM; disk I/O; network I/O; threads/processes
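On the generator side you can sample some of these from the test process itself with the standard library (the `resource` module is Unix-only; monitoring the SUT host would need an agent or performance counters, which this sketch does not cover):

```python
import os
import resource

def snapshot():
    """Sample a few of the resources listed above for this process
    (Unix-only stdlib; a real harness would also watch the SUT host)."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "cpu_user_s": ru.ru_utime,      # CPU time spent in user mode
        "cpu_sys_s": ru.ru_stime,       # CPU time spent in kernel mode
        "max_rss_kb": ru.ru_maxrss,     # peak resident memory (KiB on Linux)
        "load_avg_1m": os.getloadavg()[0],  # host 1-minute load average
    }
```

Taking a snapshot before and after each test phase and diffing the values gives a rough per-phase resource cost.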
  31. IV. Closing
  32. Resources: Sisyphus; choosing to invest (photo: Elya)
  33. Bibliography & references: based-load-test; AWS Well-Architected Framework, Performance Efficiency Pillar; Performance Testing Guidance for Web Applications
  34. Bibliography (2): Writing High-Performance .NET Code, Ben Watson (Ben Watson), Performance-NET-Code-Watson/dp/0990583430/; Time Is Money: The Business Value of Web Performance, Tammy Everts (O'Reilly Media), Business-Value-Performance/dp/1491928743
  35. Bibliography (3): Software Performance and Scalability: A Quantitative Approach, Henry H. Liu (Wiley), Performance-Scalability-Quantitative-Approach/dp/0470462531; Continuous Delivery with Windows and .NET, Matthew Skelton and Chris O'Dell (O'Reilly), perf/free/continuous-delivery-with-windows-and-net.csp
  36. Bibliography (4): Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, J. Humble, D. Farley (Addison-Wesley), Delivery/dp/0321601912/; The DevOps Handbook, G. Kim, P. Debois, J. Willis, J. Humble (IT Revolution Press), World-Class-Reliability-Organizations/dp/1942788002/
  37. About me: @giulio_vian
  38. Call to action (photo: Francesco Canu)
  39. End of transmission