
Performance of Microservice frameworks on different JVMs


A lot is happening in the world of JVMs lately. Oracle changed its support policy roadmap for the Oracle JDK. GraalVM has been open sourced. AdoptOpenJDK provides binaries and is supported by (among others) Azul Systems, IBM and Microsoft. Large software vendors provide their own supported OpenJDK distributions, such as Amazon (Corretto), Red Hat and SAP. Next to OpenJDK there are also different JVM implementations, such as Eclipse OpenJ9, Azul Systems Zing and GraalVM (which allows creation of native images). Other variables include the version of the JDK used and whether you run the JDK directly on the OS or within a container. In addition, JVMs support different garbage collection algorithms, which influence application behavior. There are many options for running your Java application, and choosing the right ones matters! Performance is often an important factor to take into consideration when choosing your JVM. How do the different JVMs compare with respect to performance when running different microservice implementations? Does a specific framework perform best on a specific JVM implementation? I've performed extensive measurements of (among other things) start-up times, response times, CPU usage, memory usage and garbage collection behavior for these different JVMs with several frameworks, such as reactive Spring Boot, regular Spring Boot, MicroProfile, Quarkus, Vert.x and Akka. During this presentation I will describe the test setup used and show you some remarkable differences between the different JVM implementations and microservice frameworks. Differences between running a JAR and a native image are also shown, as well as the effects of running inside a container. This will help you choose the JVM with the right characteristics for your specific use case!



  1. 1. Performance of Microservice frameworks on different JVMs CJIB / Maarten Smeets
  2. 2. Performance of Microservice frameworks on different JVMs
  3. 3. Agenda 1. Introduction 2. Microservice frameworks 3. Test setup 4. Results 5. Recommendations
  4. 4. Who am I? • Software architect at AMIS / Conclusion • Several certifications: SOA, BPM, MCS, Java, SQL, PL/SQL, Mule, AWS, etc. • Enthusiastic blogger @MaartenSmeetsNL
  5. 5. Where do I work? What is the CJIB? • The Central Judicial Collection Agency, part of the Ministry of Justice and Security in the Netherlands • The CJIB is responsible for collecting a range of different fines, such as traffic fines and punitive orders • Works together with EU Member States when it comes to collecting fines • Plays a key enforcement role in decisions relating to criminal matters, such as court rulings and decisions made by one of the Public Prosecution Service’s public prosecutors • Located in Leeuwarden, Friesland
  6. 6. CJIB: key figures 2017 • Traffic fines: 9,223,477 • Court-imposed fines: 57,900 • Compensation orders: 13,563 • Settlement proposals: 4,575 • Principal custodial sentences: 13,485 • Administrative-law premiums: 2,081,270 • Public Prosecutor penalty orders (OM-afdoening): 284,642 • Coordination of community service orders: 36,630 • Incoming European fines: 1,038 • Outgoing European fines: 49,766 • Confiscation orders: 1,690 • Conditional release: 1,043 • Administrative fines: 40,608 • Supervision: 15,021 • Converted community service orders: 7,657 • Juvenile supervision: 5,258 • Cost-order fees (leges kostenveroordeling): 3,846 • Help with problematic debt situations
  7. 7. CJIB ICT: The CJIB and Java • 1,400 people; an ICT department of around 325 people, including 100 Java developers • 30 teams using Scrum and SAFe; tight integration between business and IT • Solid CI/CD pipelines and release train • Automated testing using Cucumber and Gherkin • Code quality checks using SonarQube • Bamboo, Puppet, Git, Maven, Vault • Running on Red Hat 7, OpenJDK 8, with Spring Boot microservices on Jetty • Innovation lab: blockchain, machine learning
  8. 8. Disclaimer • The performance tests mentioned in this presentation were conducted with the intention of obtaining information on what performance differences can be expected from running various frameworks in various JVMs using various garbage collection algorithms. A best effort has been made to conduct an unbiased test. Still, performance depends on many parameters such as hardware, specifics of the framework implementation, the usage of different back-ends, concurrency, versions of libraries, OS, virtualization and various other factors. I cannot provide any guarantees that the same values will be achieved with configurations other than those tested. The use of these results is at your own risk. I shall in no case accept liability for any loss resulting from the use of the results or decisions based on them.
  9. 9. Microservice frameworks
  10. 10. Microservice frameworks Spring Boot • Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run". • An opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.
  11. 11. Microservice frameworks Spring Boot Reactive / WebFlux • Spring WebFlux is fully non-blocking, supports Reactive Streams back pressure, and runs on such servers as Netty, Undertow, and Servlet 3.1+ containers.
  12. 12. Microservice frameworks Spring Fu • Spring Fu is an incubator for Kofu (Ko for Kotlin, fu for functional), which provides a Kotlin API to configure Spring Boot applications programmatically. • Spring Fu allows for native compilation on GraalVM, while ‘regular’ Spring does not (code sample: Application.kt)
  13. 13. Microservice frameworks Vert.x • Eclipse Vert.x is event driven and non-blocking. This means your app can handle a lot of concurrency using a small number of kernel threads. Vert.x lets your app scale with minimal hardware. • You can use Vert.x with multiple languages including Java, JavaScript, Groovy, Ruby, Ceylon, Scala and Kotlin. • Vert.x is flexible and unopinionated
  14. 14. Microservice frameworks Akka • Akka is a toolkit for building highly concurrent, distributed, and resilient message-driven applications for Java and Scala
  15. 15. Microservice frameworks Quarkus • A Kubernetes Native Java stack tailored for GraalVM & OpenJDK HotSpot crafted from the best of breed Java libraries and standards • Extensions configure, boot and integrate a framework or technology into your Quarkus application. They also provide the right information to GraalVM for your application to compile natively
  16. 16. Microservice frameworks MicroProfile • An open forum to optimize Enterprise Java for a microservices architecture by innovating across multiple implementations and collaborating on common areas of interest, with a goal of standardization. • Specifies the minimal set of Java Enterprise APIs required to build a microservice.
  17. 17. No framework • Minimal implementation using Java SE code only • No reflective code and no frameworks, which makes native compilation easy
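The "no framework" variant can be sketched with nothing but the JDK's built-in HTTP server: no dependencies and no reflection, which is what keeps native compilation straightforward. A minimal sketch — the class name, port and endpoint path are illustrative, not the ones from the actual test:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MinimalService {

    // Build the server; passing port 0 asks the OS for a free port.
    static HttpServer createServer(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/hello", exchange -> {
            // Plain GET handler returning a fixed body, comparable to the
            // "hello world" endpoints of the framework implementations.
            byte[] body = "Hello World!".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        return server;
    }

    public static void main(String[] args) throws IOException {
        createServer(8080).start();
    }
}
```

Because there is no classpath scanning and no dynamic proxying, a service like this starts in milliseconds on any of the tested JVMs.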
  18. 18. JVMs differ • Licensing / support • Memory usage • Garbage collection algorithms • Start up, class loading • Other features JVMs
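Several of these per-JVM differences can be observed from inside a running process via standard system properties and the `java.lang.management` API — a small sketch that prints the JVM's identity and current heap footprint (the output naturally differs per distribution):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JvmFacts {
    public static void main(String[] args) {
        // JVM identity: differs per distribution (HotSpot, OpenJ9, Zing, ...)
        System.out.println(System.getProperty("java.vm.name") + " / "
                + System.getProperty("java.vm.vendor") + " / "
                + System.getProperty("java.version"));

        // Current heap footprint: one of the variables compared in the tests.
        // Note: getMax() may return -1 when no maximum is defined.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.printf("heap used=%d KB, committed=%d KB%n",
                heap.getUsed() / 1024, heap.getCommitted() / 1024);
    }
}
```

Logging these values at the start of each test run makes it unambiguous which JVM and heap configuration produced which measurement.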
  19. 19. Test setup: my laptop • Hardware: Intel Core i7-8700 (hexa-core, 12 threads, max 4.6GHz), 32Gb DDR4L (2400MHz) • OS: Linux Mint 19.1 (Tessa), based on Ubuntu 18.04LTS (Bionic Beaver); Docker server 18.06.1-ce, client 18.09.2 • Framework versions used:
     Framework     Version        HTTP server
     Spring Boot   2.1.4          Tomcat
     Spring Fu     0.0.5          Reactor Netty
     WebFlux       2.1.4          Reactor Netty
     Akka          2.12 / 10.1.8  Akka HTTP
     Open Liberty                 Open Liberty
     Vert.x        3.7.0          Vert.x
     Quarkus       0.15.0         Netty
  20. 20. Test setup: what did I do?
     • Create minimal but comparable implementations for every framework (Java, Kotlin)
     • Create a script to loop over JVMs, microservice implementations and GC algorithms (Bash)
     • Create multithreaded load generators and compare results (Python, Node/JavaScript)
     • Containerize the implementations; makes testing JVMs and resource isolation easy (Docker)
     • Summarize results: determine average, standard deviation, Prometheus results (Bash: awk, sed, curl)
     • Run the setup under various conditions: mostly 15 minutes per framework per JVM per variable (weeks of data)
     • Visualize results (Python: pandas, numpy, pyplot)
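The actual load generators were written in Python and Node; as a sketch of the same idea in Java — fire a task many times across a thread pool and summarize the latencies as mean and standard deviation — with a sleeping task standing in for the real HTTP request:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoadSummary {

    /** Run the task 'requests' times on 'threads' threads and return
     *  {mean, standard deviation} of the observed latencies in ms. */
    static double[] measure(Runnable task, int threads, int requests)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Double>> futures = new ArrayList<>();
        for (int i = 0; i < requests; i++) {
            futures.add(pool.submit(() -> {
                long start = System.nanoTime();
                task.run();
                return (System.nanoTime() - start) / 1_000_000.0; // ms
            }));
        }
        double sum = 0, sumSq = 0;
        for (Future<Double> f : futures) {
            double ms = f.get();
            sum += ms;
            sumSq += ms * ms;
        }
        pool.shutdown();
        double mean = sum / requests;
        // Population variance; clamp against tiny negative rounding errors.
        double variance = Math.max(sumSq / requests - mean * mean, 0);
        return new double[]{mean, Math.sqrt(variance)};
    }

    public static void main(String[] args) throws Exception {
        // Illustrative workload: in the real setup this would be an HTTP GET
        // against the containerized microservice.
        double[] stats = measure(() -> {
            try {
                Thread.sleep(5);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, 4, 100);
        System.out.printf("mean=%.2fms stddev=%.2fms%n", stats[0], stats[1]);
    }
}
```

Reporting both the mean and the standard deviation (rather than the mean alone) is what makes runs across JVMs and GC algorithms comparable: two configurations with equal means can have very different jitter.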
  21. 21. Test setup (diagram): a Docker base image with the JVM plus the microservice framework fat JAR is built and run as a container; a Python load generator produces the load; a loop over JVMs and frameworks starts the processes, cleans up and summarizes the generated data into results.txt (measures per JVM per framework), after which the results are validated, described and visualized.
  22. 22. Resource isolation (diagram: user processes, load generators and the JVM process are pinned to separate CPU cores)
  23. 23. Running in a container • The kernel is shared among processes • The disk is shared with the host • The load generators and the JVM might compete for resources • Running inside a container has a performance cost
  24. 24. • Images available on Docker Hub – OpenJDK – AdoptOpenJDK – Oracle GraalVM – Amazon Corretto – Eclipse OpenJ9 – Azul Zulu • Images not available on Docker Hub (due to license restrictions) – Oracle JDK – Azul Zing Running in a container
  25. 25. Containers: results • Hosting on Docker is slower than hosting locally • Docker-to-Docker is not faster than local-to-Docker • Everything outside a container is fastest
  26. 26. Microservice frameworks: which framework gives the best response times? • Akka gives the worst performance; Vert.x the best • Reactive frameworks (Akka, Vert.x, WebFlux) do not outperform non-reactive frameworks (MicroProfile, Quarkus, Spring Boot, Spring Fu)
  27. 27. Java versions: what happens when migrating from Java 8 to Java 11? • Java 8 and 11 behave very similarly for every framework; JDK 11 is slightly slower than 8 • OpenJ9 benefits most from going to JDK 11, especially for Spring Boot and Akka (chart: response time [ms] per Java version)
  28. 28. JVMs: which JVM performs best for which framework? • OpenJDK and Oracle JDK perform similarly for every framework (no consistent winner) • OpenJ9 does worst for every framework, followed by Zing • For Vert.x the differences between JVMs are smallest; Zing (JDK 8) does best here • Substrate VM (native compilation) gives the worst performance for Quarkus
  29. 29. Application startup Spring Boot, 2Gb heap, default GC
  30. 30. Application startup: native compilation and startup (chart: startup time of Quarkus per JVM) • Native compilation (Substrate VM) greatly reduces start-up time • Of the JIT-compiling JVMs, Oracle JDK is fastest to start, followed closely by OpenJDK • Zing and OpenJ9 are relatively slow to start (Zing’s Compile Stashing and ReadyNow! features were not examined) • Quarkus starts approximately 10x faster than Spring Boot!
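The talk's start-up numbers were measured externally (time until the service answers its first request). An alternative, framework-neutral way to read a comparable figure from inside the process is the runtime MXBean's uptime, sampled at the moment the application reports readiness — a sketch:

```java
import java.lang.management.ManagementFactory;

public class StartupTimer {
    public static void main(String[] args) {
        // Milliseconds since the JVM started. Logging this right after the
        // framework reports "ready" gives a start-up figure that can be
        // compared across JVMs and frameworks.
        long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.println("JVM uptime at readiness: " + uptimeMs + " ms");
    }
}
```

Note that this excludes process-launch overhead before the JVM itself starts, so external timing remains the stricter measure.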
  31. 31. GC algorithms
     • OpenJ9 Balanced: divides memory into individually managed blocks; good for NUMA
     • OpenJ9 Metronome: garbage collection occurs in small interruptible steps
     • OpenJ9 OptAvgPause: uses concurrent mark and sweep phases; reduces pause times
     • OpenJ9 OptThruPut: optimizes for throughput but with long pause times (app freezes)
     • OpenJ9 Generational Concurrent policy: minimizes GC pause times without compromising throughput
     • Shenandoah GC (Java 12): a low-pause-time algorithm which does several tasks concurrently; no increased pause times with a larger heap
     • ZGC (Java 12): scalable (concurrent) low-latency garbage collector
     • Zing C4 GC (Continuously Concurrent Compacting Collector): pauseless garbage collection
     • OpenJDK G1GC (default Java 9+): compacts free memory space without lengthy pause times
     • OpenJDK Parallel (default Java 8): high-throughput GC which does not allow memory shrinking
     • OpenJDK ConcMarkSweep (EOL?): designed for lower latency / pause times than other parallel collectors
     • OpenJDK Serial GC: single-threaded GC which freezes the application during collection
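The algorithm is selected with a launch flag (for example `-XX:+UseSerialGC`, `-XX:+UseParallelGC` or `-XX:+UseG1GC` on OpenJDK, or `-Xgcpolicy:metronome` on OpenJ9). Which collector actually ended up active can be verified at runtime through the GC MXBeans — a sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcInfo {
    public static void main(String[] args) {
        // Lists the collector(s) the running JVM selected (the names differ
        // per JVM and algorithm, e.g. "G1 Young Generation" on OpenJDK 9+,
        // "scavenge"/"global" on OpenJ9), with counts and accumulated time.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: collections=%d, time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Recording this per test run guards against a silently ignored flag skewing a comparison — exactly the kind of surprise Zing produced by rounding the heap size up (see the 20Mb-heap results).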
  32. 32. GC algorithms: how do GC algorithms influence response times? (2Gb heap) • OpenJ9 did worst (with and without shared classes); Metronome does best for OpenJ9 • Every JVM (OpenJ9, Zing, OpenJDK) achieves similar performance for every available GC algorithm (at 2Gb heap!) • OpenJDK Serial GC did best (charts: OpenJDK, OpenJ9, Zing)
  33. 33. GC algorithms: how do GC algorithms influence response times? (20Mb heap) • GC algorithms influence the minimal amount of memory required to start the JVM; OpenJ9 Metronome GC ran out of memory • When memory is limited, do not use Shenandoah (30ms) or parallel GC • OpenJ9 does better than AdoptOpenJDK • Azul Zing cheated! ‘Warning: Maximum heap size rounded to 1024 MB’ — Zing supports heap sizes from 1 GB to 8 TB • OpenJ9 with the OptThruPut GC produces the best performance on limited memory
  34. 34. GC algorithms: Prometheus metrics
  35. 35. Prometheus metrics: when not to use them • Prometheus metrics are not suitable for comparing frameworks; every framework has a different implementation • Prometheus/Grafana are powerful for quickly looking at many metrics and measures over time (dashboards) • Prometheus and Grafana are less useful if you want a single measure per test
  36. 36. Good to know: some challenges • Scoping: you cannot test every situation for every framework • Testing: tests have long running times (15 minutes per framework or GC algorithm per JVM), and writing a performance-test script which produces reproducible results is hard • GraalVM: native image support contains breaking changes between 1.0 RC16 and 19.0.0; Spring Framework 5.3 will have OOTB support; GraalVM is only available for Java 8; Quarkus makes native images easier! • Open Liberty will run on GraalVM; earlier versions will not (reported the bug and it was quickly resolved)
  37. 37. Considerations: suggestions for further research • You should have used switch x or y with JVM z • You didn’t run statistic x or y to determine whether the differences are significant • You only ran every test for 15 minutes (a couple of million requests) • You’re only looking at minimal implementations; no one uses those • You’re running the load test and the JVM on the same machine • You have not looked at the differences in libraries in the container images • You should have included JVM x or microservice framework y • You have only tested on hardware x; I’m using hardware y • You have not compared different servlet engines with the same application
  38. 38. Recommendations • Low on memory? Consider OpenJ9 • Is performance important? Don’t run in a container! • JDK 11 has slower start-up times and slightly worse performance than JDK 8 • OpenJDK variants (GraalVM, Corretto, AdoptOpenJDK, Zulu) and Oracle JDK perform pretty similarly; when looking at performance, it does not matter much which one you pick • Native images (Quarkus, Substrate VM) have much faster startup times but worse response times; choose which one matters most to you
  39. 39. Choices the CJIB made • Framework: Spring Boot — quick for development and can run standalone • Jetty servlet engine — efficient in memory usage and performance • OpenJDK 8 on Red Hat 7 (currently not in a container) — the CJIB already has Red Hat licenses for support, Red Hat is steward for OpenJDK 8 and 11, and Spring Boot runs well on OpenJDK • Garbage collection: the default, as long as there are no performance issues
  40. 40. Want more answers? Suggestions • Sources: • Get help from the JVM suppliers! Eclipse OpenJ9, Oracle GraalVM and Azul Systems got in touch and provided valuable feedback • Do your own tests; your environment and application are unique, and results might differ
  41. 41. Questions? @MaartenSmeetsNL