20090127 Esmeloadtest Summary


Load test of ESME (based on the old Google Code version) using the Stax cloud



  1. ESME Loadtest Results
  2. Contents
     - Questions to be answered
     - Test content and scope
     - Findings
       - Summary
       - 1 Server vs. 3 Servers
       - REST vs. WebUI calls
     - Remarks
     - ESME Test Bed – for your own load testing
     - Next steps
  3. Questions to be answered
     - How does the performance of the ESME application (Web UI + REST-like API) react to a larger number of parallel requests?
     - What impact does a virtual clustered infrastructure have on the application?
  4. Test content and scope
     - REST-like API:
       - The test procedure was to log on, send a message, and log off from ESME. These activities use the token of a single user who has no followers.
       - At the start, 500 calls are initiated; after 2 minutes, 333 additional calls; after 4 minutes, 167 additional calls.
     - WebUI calls:
       - The test procedure was to log the time needed to load the public timeline.
       - At the start, 1000 calls are initiated; after 2 minutes, 666 additional calls; after 4 minutes, 333 additional calls.
     - The WebUI and the REST-like API of the Google Code version were each tested with three parallel threads against the ESME installation on Stax (based on Tomcat and MySQL).
     - Stax can dynamically set up clustered servers on Amazon EC2: the test was performed with one dedicated server and with three clustered, dedicated servers.
     - Test data are evaluated from the start until the 11th minute, as parallel processing had finished in all cases by then.
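The staged ramp above can be sketched in a few lines of Java. This is a minimal illustration, not the actual LoadTester: the endpoint paths in the commented-out worker are hypothetical placeholders, not the real ESME API.

```java
public class LoadRamp {
    // Minutes at which new waves of calls are initiated.
    static final int[] STAGE_MINUTES = {0, 2, 4};

    /** Cumulative number of calls initiated by a given minute. */
    static int callsInitiatedBy(int minute, int[] stageCalls) {
        int total = 0;
        for (int i = 0; i < STAGE_MINUTES.length; i++) {
            if (STAGE_MINUTES[i] <= minute) total += stageCalls[i];
        }
        return total;
    }

    /** One simulated REST client call: log on, send a message, log off.
        The paths below are hypothetical placeholders, not the real ESME API. */
    static long timeOneCall(String baseUrl, String token) {
        long start = System.currentTimeMillis();
        // httpPost(baseUrl + "/api/login?token=" + token);
        // httpPost(baseUrl + "/api/send_msg", "message=hello");
        // httpPost(baseUrl + "/api/logout");
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        int[] rest  = {500, 333, 167};   // REST-like API scenario
        int[] webUi = {1000, 666, 333};  // WebUI scenario
        System.out.println(callsInitiatedBy(5, rest));  // all 1000 REST calls started
        System.out.println(callsInitiatedBy(1, webUi)); // first wave only: 1000
    }
}
```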
  5. Test content and scope
     - The current version on Google Code was tested. It included:
       - A Tomcat-based setup,
       - The Scala library (2.7.2),
       - And a Lift library version before 0.10.
     - Further tests will be executed with the first parts of the version in the Apache SVN. This includes version 0.10 of Lift, which promises better web UI performance.
  6. Findings: Summary
     - WebUI and REST-like API performance remain at a consistently good level during the first two minutes.
     - On one server: after 2 minutes, WebUI performance decreases heavily, with no stable level visible. REST performance also decreases, but less heavily.
     - On three servers: WebUI and REST performance settle at a higher level, but stay there for the rest of the test.
     - Three servers give high confidence that performance will stay at the same level over a longer period.
     - Measures to ensure better WebUI performance have to be investigated.
  7. Findings: 1 Server vs. 3 Servers
     - Using three servers largely reduces the spread of results.
     - Three servers deliver a higher guarantee that calls will show the same performance level.
     The boxplot (1 server vs. 3 servers) shows the overall distribution of the measured values: 50% of all function calls fall within the drawn boxes. The circles mark individual extreme results.
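To make the boxplot reading concrete, the sketch below computes the box (quartiles) and the fence beyond which points are typically drawn as individual circles. The response times are invented illustration values, not measurements from this test, and the 1.5 × IQR fence is the common convention, not necessarily the one used for these graphics.

```java
import java.util.Arrays;

public class BoxplotStats {
    /** Linear-interpolation quantile of a sorted array (0 <= q <= 1). */
    static double quantile(double[] sorted, double q) {
        double pos = q * (sorted.length - 1);
        int lo = (int) Math.floor(pos), hi = (int) Math.ceil(pos);
        return sorted[lo] + (pos - lo) * (sorted[hi] - sorted[lo]);
    }

    public static void main(String[] args) {
        // Hypothetical response times in ms, one slow outlier included.
        double[] ms = {120, 130, 135, 150, 160, 180, 200, 240, 900};
        Arrays.sort(ms);
        double q1 = quantile(ms, 0.25), med = quantile(ms, 0.5), q3 = quantile(ms, 0.75);
        double iqr = q3 - q1;
        // Points beyond 1.5 * IQR above the box are drawn as outlier circles.
        double upperFence = q3 + 1.5 * iqr;
        System.out.println("Q1=" + q1 + " median=" + med + " Q3=" + q3);
        System.out.println("outliers above " + upperFence + " ms: "
            + Arrays.stream(ms).filter(v -> v > upperFence).count());
    }
}
```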
  8. Findings: REST-like API calls
     - The trend line on 1 server rises quite fast, whereas 3 servers show constant performance.
     - However, the number of high extreme values grows in both scenarios after 3-4 minutes.
  9. Findings: Web UI calls
     - The trend line on one server shows a sharp rise in execution time after two minutes (and no stable level is visible).
     - On three servers, after an increase from minute 2 to 4, performance remains at the same level.
  10. Remarks
     - This test was possible thanks to the innovation infrastructure provided by Stax. Deploying ESME to three servers in a cluster took just two mouse clicks, which is astonishing compared to earlier setups.
     - WebUI performance is linked to memory-handling issues in required Java libraries: when new versions are available, this test can be repeated.
     - The Jetty servlet container is said to perform better than Tomcat in the long run. This can be tested later.
     - To test the WebUI, a public timeline function will also be needed in the future.
  11. ESME Test Bed – for your own load testing
     - This load test aims to establish a standard test set that enables comparisons for ESME as an application (clustered vs. non-clustered, Tomcat vs. Jetty, MySQL vs. ...).
     - The LoadTester source code is available at Google Code: http://esmeproject.googlecode.com/files/ESMELoadTester.zip
     - For evaluating the reports, a script for the R Project will be made available: R ( www.r-project.org ) was used to generate the graphics for this report.
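As a rough idea of what such an evaluation script does, the sketch below aggregates the mean response time per minute from report lines. The `minute,elapsed_ms` CSV layout is an assumption for illustration, not the documented LoadTester output format.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class ReportEval {
    /** Mean response time per minute from "minute,elapsed_ms" lines.
        The line layout is an assumed example, not the real report format. */
    static Map<Integer, Double> meanPerMinute(List<String> lines) {
        return lines.stream()
            .map(l -> l.split(","))
            .collect(Collectors.groupingBy(
                p -> Integer.parseInt(p[0].trim()),
                TreeMap::new,
                Collectors.averagingDouble(p -> Double.parseDouble(p[1].trim()))));
    }

    public static void main(String[] args) {
        List<String> sample = List.of("0,120", "0,140", "2,300", "2,500");
        System.out.println(meanPerMinute(sample)); // {0=130.0, 2=400.0}
    }
}
```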
  12. Next steps
     - Tests to be executed:
       - Jetty as a servlet container
       - Replaced Scala libraries
     - Tests with more distinct users
     - Tests with users who are followed by other test users