A Soap Performance Comparison Of Different WSRF Implementations


    1. A SOAP Performance Comparison of Different WSRF Implementations
       Roland Kübert, Axel Tenschert, Hai-Lang Thai
       {kuebert, tenschert}@hlrs.de
       High Performance Computing Center Stuttgart (HLRS), University of Stuttgart
       SOAP Comparison to WSRF, 28.11.2009
    2. Introduction
       • SOAP is the protocol used most often in web services communications
       • WSRF uses SOAP as a communications protocol
       • Today: performance analyses of SOAP toolkits have been performed for various cases and toolkits
       • But: WSRF implementations have generally not been taken into account
    3. Introduction
       • The SOAP performance of three WSRF implementations is compared:
         – UNICORE 6 WSRFLite 1.8.6
         – Globus Toolkit 4 Java WS-Core 4.2.1
         – Apache Muse v2.2.0
       • Benchmark results can indicate which implementation is favorable if performance is a key requirement
    4. Related Work
       • Investigation of the applicability of SOAP in real-time trading systems [5]
       • Analysis of the feasibility of SOAP for scientific computing [1]
       • Test of specific SOAP toolkits (Axis 1, gSOAP, bSOAP and XSUL) in a generic SOAP benchmark suite [2]
       • Investigation of WSRF-specific operations for Globus Toolkit v3.9.2, but without deeper conclusions [7]
       • Analysis of the suitability of SOAP for wireless devices [4]
    5. Related Work
       • This work is based on the benchmark suite developed by Head et al. [2]
       • This suite was selected because it was developed with the aim of providing a standard benchmark suite for quantifying, comparing and contrasting the performance of SOAP implementations
       • It covers a wide range of use cases
    6. Methodology: Software
       • For each middleware, one service is developed
       • Each service exposes 3 different types of operations:
         – Echo: received values are sent back
         – Receive: the number of received values is sent back
         – Send: for a received number, that many values are sent back
       • All operations are implemented for primitive data types:
         – byte, double, int and string
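The three operation types can be sketched in plain Java. This is a hypothetical simplification for the int case only; the actual benchmark services are implemented separately per toolkit and exposed through each toolkit's WSRF stack, and the class and method names here are illustrative, not taken from the paper.

```java
// Hypothetical sketch of the three benchmark operation types (int case).
// In the real setup each toolkit wraps equivalent logic in a WSRF service.
public class BenchmarkOperations {

    // Echo: received values are sent back unchanged.
    public int[] echoInt(int[] values) {
        return values;
    }

    // Receive: only the number of received values is sent back.
    public int receiveInt(int[] values) {
        return values.length;
    }

    // Send: for a received number, an array of that size is sent back.
    public int[] sendInt(int count) {
        int[] result = new int[count];
        for (int i = 0; i < count; i++) {
            result[i] = i;  // payload content is arbitrary for the benchmark
        }
        return result;
    }
}
```

The split into echo/receive/send is what lets the later result slides attribute cost separately to serialization (send*), deserialization (receive*) and the full round trip (echo*).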
    7. Methodology: Software
       • Additionally, two complex data types are used:
         – MeshInterfaceObject: two integers that represent coordinates and a double that represents a field value at the given position (similar to data that is, for example, used in complex fluid dynamics simulations)
         – SimpleEvent: an object representing an event, composed of a sequence number (int), a time stamp (double) and a message (String)
       • The operation echoVoid (void input and output) is implemented to test the latency of the SOAP stack (without the processing overhead present in send/receive)
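The two complex types described above can be rendered as plain Java value classes. In the actual benchmark they are defined in each service's WSDL/XSD and (de)serialized by the toolkit; the field names below are assumptions matching the slide's description.

```java
// Hypothetical plain-Java rendering of the two complex benchmark types.
public class ComplexTypes {

    // Two coordinates plus a field value at that position, similar to data
    // used, for example, in complex fluid dynamics simulations.
    public static class MeshInterfaceObject {
        public int x;
        public int y;
        public double value;
    }

    // An event: sequence number, time stamp and message.
    public static class SimpleEvent {
        public int sequenceNumber;
        public double timestamp;
        public String message;
    }
}
```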
    8. Methodology: Hardware
       • Server: Dell Latitude D620 with Intel® Core 2 Duo™ CPU T7400 2.17 GHz and 2 GB of memory
       • Services benchmarked under:
         – Ubuntu Linux v9.04 (kernel 2.6.28-11-generic)
         – Windows Vista Enterprise 32-bit Service Pack 1
       • Client: Dell Optiplex 320 with Intel® Pentium® D CPU 3.00 GHz and 2 GB of memory
         – Windows Vista Enterprise 32-bit Service Pack 1
    9. Results: Latency
       • Performed by calling the void operation:
         – the operation needs no processing except that inherent in every SOAP message exchange
         – a good indicator of the overhead imposed by the different toolkits

         Toolkit    Windows   Linux
         GT4        6 ms      42 ms
         Muse       4 ms      6 ms
         WSRFLite   2 ms      3 ms

       • GT4 under Linux is clearly an outlier, but repeated measurements produced the same value; as of now, this drop in performance cannot be explained
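A latency sample of this kind boils down to timing one round trip of the void operation. A minimal sketch, assuming the toolkit-specific client stub is handed in as a `Runnable` (the stub itself is not shown and its API differs per toolkit):

```java
// Minimal sketch of taking one round-trip latency sample in milliseconds.
// The Runnable stands in for a toolkit-specific echoVoid() client call.
public class LatencyProbe {

    public static long measureMillis(Runnable call) {
        long start = System.nanoTime();  // monotonic clock, safe for intervals
        call.run();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

In practice one would repeat the call many times and discard warm-up iterations before averaging, since the first invocations include class loading and stub initialization.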
    10. Results: Latency
       • Ranking by measured imposed overhead:
         1. WSRFLite
         2. Muse
         3. GT4
       • General trend: all toolkits run faster under Windows
       • A recent test of different VMs on Ubuntu Linux and Windows Vista showed the opposite result [6]
       • The trend that performance is slower under Linux holds for the other tests as well
    11. Results: Serialization
       • Serialization performance was tested with the send* operations:
         – an integer specifying the size of the array to be created by the service is sent over the wire as the only input parameter
         – an array of the corresponding size is then returned
    12. Results: Serialization (chart)
    13. Results: Serialization (chart)
    14. Results: Serialization
       • Muse performs worst on both platforms
       • GT4 has the best results when dealing with complex objects
       • Otherwise, WSRFLite shows the best performance
       • Ranking:
         1. WSRFLite (except for complex objects)
         2. GT4
         3. Muse
    15. Results: Deserialization
       • Deserialization performance was tested with the receive* operations:
         – an array of objects was sent to the service
         – the size of the array was returned as an integer
    16. Results: Deserialization (chart)
    17. Results: Deserialization (chart)
    18. Results: Deserialization
       • Performance ranking under Windows:
         1. WSRFLite
         2. GT4
         3. Muse
       • Performance difference at receiveBase64:
         1. WSRFLite
         2. Muse and GT4
       • No differences under Linux
    19. Results: End-to-End
       • End-to-end performance was tested with the echo* operations:
         – each service returns the given input array
         – this incorporates both complex deserialization (when receiving) and serialization (when sending)
    20. Results: End-to-End (chart)
    21. Results: End-to-End (chart)
    22. Results: End-to-End
       • Ranking under Windows and Linux:
         1. WSRFLite
         2. GT4
         3. Muse
       • echoBase64 operation:
         – WSRFLite still performs much better
         – GT4 and Muse show nearly the same results
    23. Conclusions
       • Future work:
         – investigate the performance of the three toolkits when making use of the capabilities of WSRF and other implemented specifications
         – investigate methods such as creating and destroying resources, getting and setting properties, or sending and receiving notifications
    24. Thank You!
    25. References
       [1] K. Chiu, M. Govindaraju, and R. Bramley. Investigating the limits of SOAP performance for scientific computing. In HPDC '02: Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing, page 246, Washington, DC, USA, 2002. IEEE Computer Society.
       [2] M. R. Head, M. Govindaraju, A. Slominski, P. Liu, N. Abu-Ghazaleh, R. van Engelen, K. Chiu, and M. J. Lewis. A benchmark suite for SOAP-based communication in grid web services. In SC '05: Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, page 19, Washington, DC, USA, 2005. IEEE Computer Society.
       [3] F. Ilinca, J.-F. Hetu, M. Audet, and R. Bramley. Simulation of 3-d mold-filling and solidification processes on distributed memory parallel architectures.
    26. References
       [4] J. Kangasharju, S. Tarkoma, and K. Raatikainen. Comparing SOAP performance for various encodings, protocols, and connections. In Personal Wireless Communications, volume 2775 of Lecture Notes in Computer Science, pages 397–406. Springer-Verlag, 2003.
       [5] C. Kohlhoff and R. Steele. Evaluating SOAP for high performance business applications: Real-time trading systems, 2003.
       [6] M. Larabel. Java performance: Ubuntu Linux vs. Windows Vista. http://www.phoronix.com/scan.php?page=article&item=java_vm_performance&num=1.
       [7] M. Li, M. Qi, M. Rozati, and B. Yu. A WSRF based shopping cart system. In P. M. A. Sloot, A. G. Hoekstra, T. Priol, A. Reinefeld, and M. Bubak, editors, EGC, volume 3470 of Lecture Notes in Computer Science, pages 993–1001. Springer, 2005.