Gatling - Bordeaux JUG


  1. Performance & Load Testing, Stéphane Landelle, CTO eBusiness Information, @slandelle
  2. Concepts
  3. Define performance? ● fast? ● robust? ● resource-effective? => Define your requirements!
  4. Web speed is a user experience ● < 0.5 s: loading looks instantaneous ● 0.5-2 s: not instantaneous, but acceptable ● > 3 s: users start leaving your site
  5. Put things into perspective. Product Owner requirement: "Every page should show up under 200 ms." WRONG! User expectations = per use case!
  6. Speed matters ● more traffic ● better conversion rate ● fewer angry users
  7. Some figures. Google: 10 => 30 results/page -> +500 ms -> -20% pages seen. Amazon: +100 ms -> -1% sales; estimated annual cost: $160M
  8. Load tests: why? Load tests = know your app & infrastructure. No load tests = potential problems in production = angry users
  9. Methodology. Clicking everywhere/Selenium = not a load test! ● You expect more than 1 user ● "Seems fast enough." => Not a metric!
  10. Methodology. Comes after proper local perf testing ● Client/network optimization: minification, JavaScript, sprites, caching... ○ Tools: Developer Tools, YSlow, Google PageSpeed Insights... ● Application debugging: 50 SQL queries/page = BUG! ○ Tools: VisualVM, YourKit, JProfiler, MAT...
  11. Methodology ● Analyze = data = monitoring ● Fix hotspots ● Iterate!
  12. Tooling ● Load injector: Gatling, JMeter, Locust.io... ● JVM monitoring: JMXtrans, Yammer Metrics... ● System monitoring: Nagios ● Network monitoring ● Database monitoring ● Dashboard: Graphite, Ganglia ● Webapp mock: H. Gomez's basic perf webapp
  13. Types of load tests ● Capacity test ● Stress test ● Endurance test
  14. Capacity Test. Goal: determine how much load your system can hold. How: repeat the same scenario over and over, but add virtual users every time until performance starts degrading.
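A capacity test like this can be sketched with a stepped injection profile. This is a hypothetical example in the modern Gatling DSL (newer than the 1.4 syntax covered in the talk); the base URL and the step sizes are placeholders, not figures from the deck:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

// Hypothetical capacity test: raise the arrival rate in steps and watch
// for the level at which response times start degrading.
class CapacitySimulation extends Simulation {

  val httpProtocol = http.baseUrl("http://localhost:8080") // placeholder target

  val scn = scenario("Browse")
    .exec(http("home").get("/").check(status.is(200)))

  setUp(
    scn.inject(
      incrementUsersPerSec(10)   // add 10 users/sec at each step
        .times(10)               // 10 steps in total
        .eachLevelLasting(1.minute)
        .startingFrom(10)
    )
  ).protocols(httpProtocol)
}
```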
  15. A response time under 1 s is expected -> user cap: 1,500 users
  16. Stress Test. Goal: study system behavior under heavy load, during AND after. How: find the max load your system can handle, then run the scenario with a heavier load.
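A stress profile can be sketched as a spike followed by a sustained load, so you can observe both the struggle and the recovery. Hedged sketch in the modern Gatling DSL; `scn` and `httpProtocol` are assumed defined as in a capacity simulation, and the figures loosely echo the 10k-then-1k shape of the next slide:

```scala
// Hypothetical stress test: a large spike, then a sustained lower load
// to check whether the system stabilizes afterwards.
setUp(
  scn.inject(
    atOnceUsers(10000),                          // heavy spike
    constantUsersPerSec(1000).during(5.minutes)  // sustained load afterwards
  )
).protocols(httpProtocol)
```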
  17. 10k users for 1 min, then 1k: webapp struggles, but stabilizes
  18. Endurance Test. Goal: validate system behavior after a long period of activity. How: run the scenario with a manageable load, but for a long period of time (several hours at least).
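An endurance profile is just a steady, manageable rate held for hours. Hedged sketch in the modern Gatling DSL, with placeholder figures and `scn`/`httpProtocol` assumed defined:

```scala
// Hypothetical endurance test: moderate constant load for several hours,
// long enough to surface memory leaks and resource exhaustion.
setUp(
  scn.inject(constantUsersPerSec(50).during(4.hours))
).protocols(httpProtocol)
```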
  19. Fast memory leak: runs fine for 2 minutes until the heap is full...
  20. Ramps. Start virtual users progressively, because that's what real users do! Ramps also help to warm up your system.
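A ramp is a one-liner in the injection profile. Hedged sketch in the modern Gatling DSL (placeholder figures, `scn` and `httpProtocol` assumed defined):

```scala
// Hypothetical ramp: start 1000 virtual users progressively over 5 minutes
// instead of all at once, which also warms up caches, pools and the JIT.
setUp(
  scn.inject(rampUsers(1000).during(5.minutes))
).protocols(httpProtocol)
```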
  21. Reports. The purpose of load injectors: stress your app and produce reports ● Meaningful reports help developers analyze stress test results and decide what to make of them ● Something shiny to give to your boss
  22. The good, the bad and the ugly metrics. Every metric can be useful, but some less than others... ● Response time min/max = best case/worst case ● Mean can be biased by extreme values. Response time is a physical phenomenon => statistically distributed
  23. Percentiles to the rescue. The nth percentile is the response time under which n% of users fall.
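Percentiles can also be turned into pass/fail criteria on the run itself. Hedged sketch using the assertions API of the modern Gatling DSL (this API postdates the 1.4 release covered in the talk; thresholds are placeholders):

```scala
// Hypothetical assertions: judge the run on a percentile rather than the
// mean, and fail when the target is missed.
setUp(scn.inject(rampUsers(500).during(2.minutes)))
  .protocols(httpProtocol)
  .assertions(
    global.responseTime.percentile3.lt(1000),  // 3rd configured percentile (95th by default) under 1 s
    global.successfulRequests.percent.gt(99)   // fewer than 1% failed requests
  )
```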
  24. Part of the development process. Like any other functional test, load testing should be: ● integrated early in the development cycle ● automated ● versioned
  25. Gatling
  26. Yet Another Stress Tool: JMeter, Grinder, Tsung, LoadUI, LoadRunner, NeoLoad…
  27. Issue #1: High performance (image: http://www.shopfbparts.com/catalog/nal-19201331_w.jpg)
  28. 1 user = 1 thread
  29. With 50 threads on a JVM
  30. With 2000 threads on a JVM
  31. Blocking I/O
  32. Threads? Waiting…
  33. … and sleeping
  34. Is that a real problem?
  35. Can you trust your results? JMeter 2.8 perf test, expecting 300 tr/sec
  36. Issue #2: Usability
  37. Graphical User Interface. "Listen, it's not that complicated..."
  38. Issue #3: Maintainability
  39. What was this change about?
  40. Gatling can change all that! (image: http://static.lexpress.fr/medias/15/mai-68_124.jpg, copyright AFP)
  41. Say hello to my little friend… Version 1.4.1, released January 2013
  42. Be asynchronous, embrace the actor model
  43. Use non-blocking I/O ● Async HTTP Client ● Netty
  44. Scenario = code (Scala) = DSL
  45. Easy
  46. Use the rich DSL… Checks: ● regex / css / xpath / jsonPath ● find / findAll / count ● is / in / not / whatever. Structures: ● doIf / repeat / during / asLongAs ● randomSwitch / roundRobinSwitch. Error handling: ● tryMax / exitBlockOnFail. Feeders: ● csv / tsv / jdbc
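Several of the DSL features listed above can be combined in one scenario. Hedged sketch in the modern Gatling DSL (the syntax has drifted since 1.4); the URLs, `users.csv` feeder file and its columns are hypothetical:

```scala
// Hypothetical scenario combining a feeder, a check, repeat and tryMax.
val users = csv("users.csv").circular  // assumed columns: username, password

val scn = scenario("Search")
  .feed(users)
  .exec(
    http("login")
      .post("/login")
      .formParam("user", "${username}")   // values injected from the feeder
      .formParam("pass", "${password}")
      .check(status.is(200))
  )
  .repeat(5) {
    tryMax(3) {  // retry a flaky request up to 3 times before failing
      exec(
        http("search")
          .get("/search?q=gatling")
          .check(regex("""<a href="(.*?)">""").find.saveAs("firstLink"))
      )
    }
  }
```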
  47. … or write your own Scala code…
  48. … or use the Recorder
  49. Integrations ● Maven plugin ● Maven archetype (run in IDE) ● Jenkins plugin ● Graphite live reporting
  50. Coming soon… ● WebSockets, JDBC… ● Clustering
  51. Demo
  52. Gatling at Ezakus
  53. Ezakus Architecture
  54. Metrics: 110M HTTP hits per day; peak: 3,000 req/s
  55. Gatling usage - Simulations
  56. Gatling usage - Example
  57. Gatling usage - Results
  58. Gatling usage - Results
  59. Conclusion
  60. Really efficient? JMeter perf test run with Gatling
  61. http://gatling-tool.org ● https://github.com/excilys/gatling ● @GatlingTool ● https://github.com/slandelle ● @slandelle
