The Next Frontier in Open Source Java Compilers: Just-In-Time Compilation as a Service
For Java developers, the Just-In-Time (JIT) compiler is key to improved performance. However, in a container world, the performance gains are often negated by CPU and memory constraints. To help solve this issue, the Eclipse OpenJ9 JVM provides JITServer technology, which separates the JIT compiler from the application.
JITServer allows users to employ much smaller containers, enabling a higher density of applications and resulting in cost savings for end users and/or cloud providers. Because the CPU and memory surges due to JIT compilation are eliminated, users have a much easier task of provisioning resources for their applications. Additional advantages include faster ramp-up time, better control over the resources devoted to compilation, increased reliability (JIT compiler bugs no longer crash the application), and amortization of compilation costs across many application instances.
We will dig into JITServer technology, showing the challenges of its implementation, detailing its strengths and weaknesses, and illustrating its performance characteristics. For the cloud audience, we will show how it can be deployed in containers, demonstrate its advantages compared to traditional JIT compilation, and offer practical recommendations about when to use this technology.
This is an updated version of my JITServer talk, presented at Open Source Summit North America in May 2023.
2. Agenda
• REASON: The JVM and JIT compiler – the good and the bad
• PROBLEM: Java on the cloud – a bad fit for microservices
• SOLUTION: JIT-as-a-Service to the rescue
4. Legacy Java Apps
• Java monolith on a dedicated server
• Plenty of CPU power and memory
• Never went down
• 6-month upgrade/refresh schedule
5. Moving to the Cloud
• Running in containers
• Managed by the cloud provider (and Kubernetes)
• Auto-scaling to meet demand
• Cloud-native app talking to microservices
6. Main Motivators
• Flexible and scalable
• Easier to roll out new releases more frequently
• Take advantage of the latest and greatest cloud technologies
• Less infrastructure to maintain and manage
• Saving money
7. Performance vs Cost
Variables: container size and number of container instances. Why is this so hard to get right?
[Diagram: performance-vs-cost quadrants – containers too small / not enough instances; containers too big or too many instances; containers too big / too many instances; containers right-sized / just enough instances]
9. JVM Interpreter
[Diagram: the JVM – class loader, bytecode verifier, interpreter, and JIT – consuming Java bytecode and running on the OS and hardware]
• Java programs are converted into bytecode by the javac compiler
• Machine-independent bytecodes are interpreted by the JVM at runtime
• This ensures portability of Java programs across different architectures
• But it affects performance, because interpretation is relatively slow
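As a concrete illustration of the pipeline described on this slide – a minimal sketch, assuming a trivial `Hello` class:

```bash
javac Hello.java   # compile source to machine-independent bytecode (Hello.class)
java Hello         # the JVM loads, verifies, and interprets the bytecode,
                   # JIT-compiling hot methods as they are discovered at runtime
```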
10. Just-in-Time Compiler
• Performance is helped by the JIT compiler, which transforms sequences of bytecodes into optimized machine code
• The unit of compilation is typically a method; to save overhead, only "hot" methods are compiled
• Compiled native machine code executes ~10x faster than bytecode-by-bytecode interpretation
• Generated code is saved in a "code cache" for future use, for the lifetime of the JVM
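To make the "hot method" idea concrete, here is a minimal sketch (a hypothetical class, not from the talk) whose inner method is invoked often enough for the JVM's profiling counters to queue it for JIT compilation; on OpenJ9, running it with a verbose JIT option (e.g. `java -Xjit:verbose HotLoop`) logs methods as they are compiled:

```java
// HotLoop.java – hypothetical example, not taken from the talk.
public class HotLoop {
    // Invoked thousands of times, so the JVM marks it as hot
    // and hands it to the JIT compiler.
    static long squareSum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        for (int iter = 0; iter < 10_000; iter++) {
            result += squareSum(100_000);  // hot path: interpreted first, then JIT-compiled
        }
        System.out.println(result);        // keeps the loop from being optimized away
    }
}
```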
11. Java Virtual Machine (JVM) – The Good
• Device independent: write once, run anywhere
• More than 25 years of improvements
• The JIT produces optimized machine code through the use of profilers
• Efficient garbage collection
• The longer it runs, the better it runs (the JVM collects more profile data, and the JIT compiles more methods)
12. Java Virtual Machine (JVM) – The Bad
• The initial execution run is interpreted, which is relatively slow
• "Hot spot" methods compiled by the JIT can create CPU and memory spikes
• CPU spikes cause lower QoS
• Memory spikes cause OOM issues, including crashes
• Slow start-up time
• Slow ramp-up time
13. Java Virtual Machine (JVM)
[Charts: Daytrader7 CPU consumption (% utilization over ~90 s), showing CPU spikes caused by JIT compilation; Daytrader7 memory footprint (resident set size in KB over ~90 s), showing footprint spikes caused by JIT compilation]
15. Container Size
Main issues:
• Need to over-provision to avoid OOM
• Very hard to do – JVMs have non-deterministic behavior
[Chart: Daytrader7 memory footprint (resident set size over time), showing footprint spikes caused by JIT compilation]
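A common partial mitigation is to cap the heap relative to the container limit – a hypothetical invocation, with the image name as a placeholder (the flag is supported by both HotSpot and OpenJ9). It helps, but it cannot fully solve the problem, because JIT compilation scratch memory lives outside the Java heap, so the footprint spikes above remain hard to budget for:

```bash
# Cap the container at 512 MB and let the JVM size its heap
# as a fraction of the detected container memory limit.
docker run -m 512m my-java-app \
  java -XX:MaxRAMPercentage=60 -jar app.jar
```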
19. JIT-as-a-Service
Decouple the JIT compiler from the JVM and let it run as an independent process:
• Offload JIT compilation to a remote process
• Treat JIT compilation as a cloud service, auto-managed by the orchestrator (e.g., the Kubernetes control plane)
• A mono-to-micro solution
• Local JIT still available
20. Eclipse OpenJ9 JITServer
• The JITServer feature is available in the Eclipse OpenJ9 JVM
• Known as the "Semeru Cloud Compiler" when used with IBM Semeru Runtimes
• OpenJ9 combines with OpenJDK to form a full JDK
• GitHub repo: https://github.com/eclipse-openj9/openj9
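In practice, enabling the feature takes two commands – a sketch assuming an OpenJ9-based JDK is on the PATH (the port shown is OpenJ9's documented default; the hostname is a placeholder):

```bash
# On the server machine/container: start the JIT compilation server
# using the jitserver launcher that ships with OpenJ9 JDK builds.
jitserver

# On each client JVM: delegate compilations to the remote server;
# the JVM falls back to its local JIT if the server is unreachable.
java -XX:+UseJITServer \
     -XX:JITServerAddress=jitserver.example.com \
     -XX:JITServerPort=38400 \
     -jar myapp.jar
```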
21. Overview of Eclipse OpenJ9
• Designed from the start to span all the operating systems needed by IBM products
• This JVM can go from small to large
• Can handle constrained environments or memory-rich ones
• Renowned for its small footprint and fast start-up and ramp-up times
• Used by the largest enterprises on the planet
23. IBM Semeru Runtimes
"The part of Java that's really in the clouds"
• IBM-built OpenJDK runtimes powered by the Eclipse OpenJ9 JVM
• No cost, stable, secure, high performance, cloud optimized, multi-platform, ready for development and production use
• Open Edition: open-source license (GPLv2+CE); available for Java 8, 11, 17, 18 (soon 19)
• Certified Edition: IBM license; Java SE TCK certified; available for Java 11 and 17
• https://ibm.biz/GetSemeru
24. JITServer advantages for JVM clients
• Provisioning: easier to size – only consider the needs of the application
• Performance: improved ramp-up time, because the JITServer supplies extra CPU power when the JVM needs it the most; reduced CPU consumption with the JITServer AOT cache
• Cost: reduced memory consumption means increased application density and reduced operational cost; efficient auto-scaling – only pay for what you need/use
• Resiliency: if the JITServer crashes, the JVM can continue to run and compile with its local JIT
26. JITServer value in Kubernetes
• https://blog.openj9.org/2021/10/20/save-money-with-jitserver-on-the-cloud-an-aws-experiment/
• Experimental test bed
• ROSA (RedHat OpenShift Service on AWS)
• Demonstrate that JITServer is not tied to IBM HW or SW
• OCP cluster: 3 master nodes, 2 infra nodes, 3 worker nodes
• Worker nodes have 8 vCPUs and 16 GB RAM (only ~12.3 GB available)
• Four different applications
• AcmeAir Microservices
• AcmeAir Monolithic
• Petclinic (Springboot framework)
• Quarkus
• Low amount of load to simulate conditions seen in practice
• OpenShift Scheduler to manage pod and node deployments/placement
27. JITServer improves container density and cost
[Diagram: pod placement chosen by the OpenShift scheduler. Default config (top): three worker nodes, totals 8250 MB, 8550 MB and 8600 MB. JITServer config (bottom): two worker nodes, totals 9250 MB and 9850 MB – 6.3 GB less overall, with one JITServer instance on each node. Each box is a container, drawn to scale; the number is its memory limit in MB.]
Legend:
AM: AcmeAir monolithic
A: Auth service
B: Booking service
C: Customer service
D: Database (mongo/postgres)
F: Flight service
J: JITServer
M: Main service
P: Petclinic
Q: Quarkus
29. Conclusions from high density experiments
• JITServer can improve container density and reduce operational costs of Java applications running in the cloud by 20-30%
• Steady-state throughput is the same despite using fewer nodes
30. Horizontal Pod Autoscaling in Kubernetes
• Better autoscaling behavior with JITServer due to faster ramp-up
• Less risk of the transient JIT compilation overhead tricking the HPA
Setup:
Single-node MicroK8s cluster (16 vCPUs, 16 GB RAM)
JVMs limited to 1 CPU, 500 MB
JITServer limited to 8 CPUs and has AOT cache enabled
Load applied with JMeter, 100 threads, 10 ms think-time, 60 s ramp-up time
Autoscaler: scales up when average CPU utilization exceeds 0.5P; up to 15 AcmeAir instances
[Graph: AcmeAir throughput (pages/sec) over time (sec) when using Kubernetes autoscaling – Baseline vs. JITServer+AOTcache]
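For reference, the autoscaler described above corresponds roughly to the following HorizontalPodAutoscaler manifest – a minimal sketch, assuming the application runs in a Deployment named acmeair (a hypothetical name):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: acmeair-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: acmeair            # hypothetical deployment name
  minReplicas: 1
  maxReplicas: 15            # up to 15 AcmeAir instances, as in the setup
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale up when average CPU exceeds 0.5P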
32. Improve ramp-up time with JITServer
• Experiment in docker containers
• Show that JITServer improves ramp-up
• Show that JITServer allows a lower memory limit for JVM containers
[Diagram: demo setup. Three OpenLiberty+AcmeAir containers run the application – one with a 1P/400 MB limit and one with a 1P/200 MB limit use only their local JIT, while a third with a 1P/200 MB limit uses the JITServer container (4P, 1 GB limit), which provides JIT compilation services. MongoDB provides data persistence. JMeter applies load to the AcmeAir instances; InfluxDB collects throughput data from JMeter and Grafana displays it. Prometheus scrapes JITServer metrics, also displayed in Grafana.]
35. JITServer – natural fit for the cloud
• JITServer performs better in constrained environments
• Smaller containers increase application density and thus reduce operational costs
• JITServer can be easily containerized and deployed to Kubernetes, OpenShift, etc., which makes it easier to run Java applications in densely packed cloud environments
• Use of server-side caching can lead to better cluster-wide CPU utilization
• Improved ramp-up time improves auto-scaling behavior
• JITServer can be scaled to match demand
37. JITServer usage basics
• One JDK, three different personas
• Normal JVM: $JAVA_HOME/bin/java MyApp
• JITServer: $JAVA_HOME/bin/jitserver
• Client JVM: $JAVA_HOME/bin/java -XX:+UseJITServer MyApp
• Optional further configuration through JVM command line options
• At the server:
-XX:JITServerPort=… default: 38400
• At the client:
-XX:JITServerAddress=… default: ‘localhost’
-XX:JITServerPort=… default: 38400
• Full list of options: https://www.eclipse.org/openj9/docs/jitserver/
• Note: Java version and OpenJ9 release at client and server must match
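Putting the pieces together, a minimal sketch of the two command lines – the host name jitserver.example.com is hypothetical, and both ports shown are just the defaults made explicit:
$JAVA_HOME/bin/jitserver -XX:JITServerPort=38400
$JAVA_HOME/bin/java -XX:+UseJITServer -XX:JITServerAddress=jitserver.example.com -XX:JITServerPort=38400 MyApp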
38. JITServer usage in Kubernetes
• Typically we create/configure
• JITServer deployment
• JITServer service (clients interact with service)
• Use
• Yaml files
• Helm charts: repo https://raw.githubusercontent.com/eclipse/openj9-utils/master/helm-chart/
• Certified OpenShift/K8s Operators from Open Liberty
• Tutorial: https://developer.ibm.com/tutorials/using-openj9-jitserver-in-kubernetes/
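As an illustration, a minimal JITServer Deployment plus Service might look like the sketch below. The image, labels and object names are assumptions rather than an official manifest (the Helm chart above produces something more complete):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jitserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jitserver
  template:
    metadata:
      labels:
        app: jitserver
    spec:
      containers:
      - name: jitserver
        image: icr.io/appcafe/ibm-semeru-runtimes:open-17-jre   # assumed OpenJ9-based image
        command: ["jitserver"]        # assumes the JDK bin directory is on PATH
        ports:
        - containerPort: 38400        # default JITServer port
---
apiVersion: v1
kind: Service
metadata:
  name: jitserver
spec:
  selector:
    app: jitserver
  ports:
  - port: 38400
    targetPort: 38400
Client JVMs in the cluster would then connect with -XX:+UseJITServer -XX:JITServerAddress=jitserver.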
39. JITServer encryption/authentication through TLS
• Needs additional JVM options
• Server: -XX:JITServerSSLKey=key.pem -XX:JITServerSSLCert=cert.pem
• Client: -XX:JITServerSSLRootCerts=cert.pem
• Certificates and keys can be provided using Kubernetes TLS Secrets
• Create TLS secret:
• kubectl create secret tls my-tls-secret --key <private-key-filename> --cert <certificate-filename>
• Use a volume to map “pem” files
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container-name
    image: my-image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-tls-secret
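A Kubernetes TLS secret stores its contents under the fixed keys tls.crt and tls.key, so with the mount above the server options would point at those files – a sketch, reusing the mount path from the pod spec:
-XX:JITServerSSLCert=/etc/secret-volume/tls.crt
-XX:JITServerSSLKey=/etc/secret-volume/tls.key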
40. Monitoring
• Support for custom metrics for Prometheus
• Metrics scraping: GET request to http://<jitserveraddress>:<port>/metrics
• Command line options:
-XX:+JITServerMetrics -XX:JITServerMetricsPort=<port>
• Metrics available
• jitserver_cpu_utilization
• jitserver_available_memory
• jitserver_connected_clients
• jitserver_active_threads
• Verbose logging
• Print client/server connections
-XX:+JITServerLogConnections
• Heart-beat: periodically print to verbose log some JITServer stats
• -Xjit:statisticsFrequency=<period-in-ms>
• Print detailed information about client/server behavior
-Xjit:verbose={JITServer},verbose={compilePerformance},vlog=…
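To wire this into Prometheus, a scrape job along these lines should work – a sketch in which the target host and port are assumptions (use whatever -XX:JITServerMetricsPort was set to):
scrape_configs:
- job_name: jitserver
  metrics_path: /metrics
  static_configs:
  - targets: ['jitserver:38500']   # hypothetical host:port of the metrics endpoint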
41. JITServer usage recommendations
When to use it:
• JVM needs to compile many methods in a relatively short time
• JVM is running in a CPU/memory constrained environment, which can worsen interference from the JIT compiler
• The network latency between JITServer and client VM is relatively low (<1 ms)
• To keep network latency low, use the “latency-performance” profile for tuned and configure your VM with SR-IOV
42. JITServer usage recommendations
Recommendations:
• 10-20 client JVMs connected to a single JITServer instance
• JITServer needs 1-2 GB of RAM
• Better performance if the compilation phases from different JVM clients do not overlap (stagger)
• Encryption adds to the communication overhead; avoid it if possible
• In K8s, use “sessionAffinity” to ensure a client always connects to the same server (see the sketch below)
• Enable the JITServer AOT cache: -XX:+JITServerUseAOTCache (client needs to have the shared class cache enabled)
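For the sessionAffinity point, the setting goes on the Kubernetes Service that fronts the JITServer pods – a sketch with hypothetical names:
apiVersion: v1
kind: Service
metadata:
  name: jitserver
spec:
  selector:
    app: jitserver
  sessionAffinity: ClientIP   # keeps each client JVM pinned to the same server pod
  ports:
  - port: 38400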
43. Final thoughts
• JIT provides advantage, but compilation adds overhead
• Disaggregate the JIT from the JVM → JIT compilation as a service
• Eclipse OpenJ9 JITServer (a.k.a. Semeru Cloud Compiler)
• Available now on Linux for Java 8, Java 11 and Java 17 (IBM Semeru Runtimes)
• Especially good for constrained environments (micro-containers)
• Kubernetes ready (Helm chart available, Prometheus integration)
• Can improve ramp-up, autoscaling and performance of short lived applications
• Can reduce peak memory footprint, increase app density and reduce costs
• Java solution to Java problem, with no compromise
44. Resources
• Blogs
• JITServer - Optimize your Java cloud-native applications
• Using OpenJ9 JITServer in Kubernetes
• Connect a Kubernetes Open Liberty app to OpenJ9 JITServer
• Exploring JITServer on the new Linux on IBM z16 Platform
• Save Money with JITServer on the Cloud – an AWS Experiment
• Introducing the Eclipse OpenJ9 JITServer Helm Chart
• A glimpse into performance of JITServer technology
• Free your JVM from the JIT with JITServer technology
• Documentation: https://www.eclipse.org/openj9/docs/jitserver/
I assume everyone here is a Java developer and knows what a JVM and a JIT are. As you know, the JVM, or Java Virtual Machine, executes your Java application,
And the JIT, or Just-in-time Compiler is invoked by the JVM during run time to compile the most frequently called, or HOT, methods.
With this in mind, today we will be talking about the concept of a JIT-as-a-Service, and why we need it.
We are going to break this talk down into 3 parts:
First we’ll discuss the problem we want to address: running Java on the cloud is not a good fit, specifically in a distributed and dynamic architecture, like microservices
Then we’ll talk about the reason for this, by taking a look at the JVM and the JIT compiler, which have a great history, but some side effects that can affect performance at start-up
Finally, we’ll discuss a way to get around these start-up issues by using JIT-as-a-Service
Let’s start with some background on running Java apps in cloud
For contrast, let’s start with how we all used to typically run our Java enterprise apps.
It was a monolith application running on a dedicated server, and to make sure we didn't have any performance issues, we loaded that server with plenty of CPUs and memory.
And, of course, that application ran great. It didn't matter if it took 10 minutes to start because it never went down.
Maybe once every 6 months it would be taken offline to launch a new version with some library upgrades, a couple new features, and some bug fixes.
Now let's fast forward to today where the trend is to deploy apps to the cloud.
That same monolith application is now composed of many small microservices that all talk to each other, all running in containers, and managed by some cloud provider. And, depending on demand, there may be multiple instances of each microservice.
And we do this for a couple reasons –
More agile and dynamic
Can implement new releases more easily and frequently
Positioned to take advantage of new cloud technologies
Less infrastructure we have to maintain and manage - going from a constrained to a utility use model
And of course, a major motivator is to save money
But how do we ensure that performance is still acceptable to our customers while still minimizing cost so that we actually do save money?
The main variables controlling cost and performance are how big our containers are, and how many instances of each are running.
Container size we can control, but scaling of instances is left to the cloud orchestrator to manage. But we can do a lot to ensure that scaling is efficient and effective. More on this later.
This graph shows the various ways we can get these variables wrong. Of course, if we under-provision our containers and not enough instances can run efficiently, we save money, but the performance is unacceptable.
On the opposite side, if we over-provision our containers and have too many instances running, we get great performance, but we’re wasting money.
Our goal is to get to the bottom-right quadrant – the sweet spot.
But this is extremely hard to accomplish. Getting this right is the new focus for Java vendors, all coming out with new technologies to address this problem.
Why is this so hard? To better understand, we need to go over some background on how Java applications execute.
For Java applications, it’s all about the JVM and the JIT, which are great and time-tested technologies, but they have some not so good side-effects, especially during start-up
One of the reasons for Java’s popularity is that its platform independent – you write once, and run anywhere. This portability is provided by the Java Virtual Machine, or JVM.
So how does it work?
First you compile your Java code into bytecode, then pass it to the JVM for execution.
At a very high level, the JVM loads and verifies the bytecode, then passes it to the interpreter to execute, one bytecode at a time.
And as everyone knows, interpreting code can be relatively slow.
******
One line of Java code can translate into multiple bytecodes, so it's more accurate to say one bytecode at a time. The Interpreter in the VM interprets the bytecode by executing a predetermined sequence of operations for each bytecode it encounters.
To help with this, the JVM has a Just-in-time or JIT compiler.
The JIT compiler converts Java bytecode into machine code, which is optimized.
The typical unit of compilation is a method. But, to save resources, it only compiles “hot” methods, which means methods that are repeatedly called.
Another benefit of the JIT is that the generated code is saved in a “code cache” and available for use for the lifetime of the JVM.
The JIT compiler converts Java bytecode into machine code, which executes around 10 times faster than the interpreter
*****
Typically, the unit of compilation is a method (trace compilation is also possible) and, to limit overhead, typically only methods that are deemed hot are compiled. Generated code is saved into the so-called "code cache" for future use during the JVM lifetime.
One reason Java really took off early on was because it was device independent - write once, run anywhere.
And it’s been around for a long time, constantly improving over time.
It uses a JIT to dynamically compile “hot” methods using Profilers to generate very optimized machine code – much more than you can get with using a static or Ahead-of-Time compiler.
It has great garbage collection.
And because it takes time for the JVM to profile and the JIT to compile, Java apps actually run better the longer they run.
But there are some trade-offs.
Before the JIT is invoked, the code is “interpreted”, which is relatively slow
And when the JIT is invoked, it can cause CPU and memory spikes.
CPU spikes at the very least can lower QoS, and memory spikes cause OOM issues, including crashes. One of the main reasons JVMs crash is due to OOM issues.
Both CPU and memory spikes slow down start-up time and ramp-up time.
Start-up time is the time it takes for the app to be ready to process its first request, and ramp-up time is the time it takes for the JIT to compile all of the hot methods and to be running fully optimized.
Here’s a graph of a typical Java app at start-up, and you can see the CPU spikes on the left, and memory spikes on the right.
A lot of the CPU spikes are caused by JIT compilations, and you can see the biggest spikes occur at the start when the JIT is the most active. These spikes can result in lower QoS, which means sluggish performance.
This is also true for memory spikes. Again, you can see the biggest spikes are related to JIT compiles during ramp-up time. Memory spikes are particularly bad because they can cause OOM issues, including crashing the JVM.
So now that we have some good background info, lets revisit our 2 variables for determining cost vs performance.
Remember, our goal is to find that sweet spot - where we have just the right amount of resources provisioned for our containers, and we have containers set up for efficient auto-scaling.
For container size, we now know why it’s hard to get the right size.
We need to over-provision in order to avoid any OOM issues. We need to handle the initial spikes, but those resources are wasted once the app reaches steady-state.
And the amount to over-provision is hard to determine because Java is non-deterministic, meaning we can run the same application twice and get different spike levels. You really need to run a series of different load tests to get this even close to right.
For auto-scaling container instances, we now know we have 2 main issues.
Slow start-up and ramp-up times make scaling ineffective – new instances take too long to start up, causing QoS issues. The alternative is to just start more instances than you think you need and effectively eliminate any auto-scaling.
Another problem is that CPU spikes due to JIT compiles can cause issues with auto-scalers. These spikes may be interpreted incorrectly by the auto-scaler as demand load and may result in unnecessary instance launches. One way to minimize this problem is to set your thresholds very high, but again, this makes your auto-scaler less effective.
The solution is pretty clear – we need to flatten out those CPU and memory spikes, and we need to improve start-up and ramp-up times.
Which leads us to JIT-as-a-Service.
The basic premise here is to decouple the JIT compiler from the JVM and let it run as an independent process.
Here we show a couple of JVMs on the left, and remote JIT services on the right.
The JVMs will no longer use their local JIT, and will offload their JIT compilations to the remote JIT services.
Here we show the remote JIT processes containerized and made available as a cloud service.
This gives us an added benefit – the service can be managed by orchestrators like Kubernetes, which can make sure it is always running and scaled properly to handle demand.
And this solution is just like any other monolith to micro-services conversion – in this case the JVM is the monolith that is turned into 2 loosely coupled micro-services – the JIT and the rest of the JVM.
Note that on the diagram the JVM’s local JIT is shown crossed out, but it can still be used if the remote JIT should become unavailable.
This service already exists, and it is called the JITServer and is a feature of the Eclipse OpenJ9 JVM – which is totally open source and free to download.
It also goes by the name “Semeru Cloud Compiler”, because it is mostly distributed with the IBM Semeru Runtimes (which we will talk about in a minute).
For distribution, OpenJ9 is combined with the OpenJDK binaries to form a full JDK.
As I mentioned, the Eclipse OpenJ9 JVM, and by extension JITServer technology, is completely open source – here is a link to its GitHub repo.
A little background on the OpenJ9 JVM –
It originally started life as the J9 JVM, which was developed by IBM over 20 years ago to run all of their Java based workloads on IBM hardware.
It was open sourced to the Eclipse Foundation around 5 years ago and re-branded as OpenJ9
It works with any Java workload, from micro-services to monoliths, and is specifically designed to work in constrained environments.
And its well-known for its small footprint, fast startup and ramp-up times.
Over time it has been used by many Fortune 500 companies to run their enterprise Java applications. So it has a long history of success.
Here we show how it compares to the popular HotSpot JVM. OpenJ9 is the green, and HotSpot is orange. And this comparison is independent of any JITServer advantages.
These graphs are based on startup and ramp-up times. Remember, the distinction is start-up time is initial application load time, while ramp-up time is the time it takes to be running at peak throughput with optimized compiled code.
Going left to right, you can see that start-up time can be 51% faster than HotSpot.
Next we see that OpenJ9 has a 50% smaller footprint after start-up, which means more resources for the application.
Next, we see faster ramp-up time. Notice how much longer it takes HotSpot to match the level of OpenJ9.
And finally, you see OpenJ9 still has a smaller footprint, even after fully ramping up.
All of these metrics are important when running in constrained environments.
Semeru Runtimes is IBM’s distribution of the OpenJDK binaries, and it is the only distribution that comes with the OpenJ9 JVM.
Similar to other vendor offerings and builds of OpenJDK, IBM Semeru Runtimes is built on the latest open source release of the OpenJDK class libraries. What separates Semeru Runtimes from the others is that it includes the highly rated Eclipse OpenJ9 JVM.
It comes in 2 flavors – an Open and Certified Edition. Both are free to download, the only difference is licensing and supported platforms.
If you are wondering where the name came from, the connection is that Mount Semeru is the tallest mountain on the island of Java.
Back to the JITServer - let’s take a look at the advantages, from the perspective of the JVM clients that will be utilizing the JITServer.
For provisioning:
Since there are no more JIT compilation spikes, sizing becomes much easier. There is no need to add in any “just-in-case” resources, and you can just focus on what the application needs.
As for performance:
It will be much more predictable – the JIT will no longer be stealing CPU cycles.
And because the JITServer can provide additional CPU cycles from the start, ramp-up times will be improved. And this is especially true for short-lived apps, since a majority of their life span is during ramp-up.
The JITServer also has its own AOT cache, which means that any new replicated instances can have access to already compiled methods.
As for cost:
Fewer resources are needed, and more efficient auto-scaling means paying only for what you need and use.
And finally for resiliency:
The JVM and the JITServer are separate processes, so the JVM can continue if the JITServer crashes. And the JVM still has use of its local JIT.
Let’s take a closer look at some test results which show both cost savings and improved performance.
Now let’s take a look at another test to see how JITServer can help with provisioning.
The experiment was conducted on a Red Hat OpenShift cluster on AWS. It has 3 worker nodes, with around 12 GB of RAM to play with.
We will be running 4 test applications – 2 versions of the AcmeAir application, one as a monolith, and one with micro-services.
And a Spring Boot application and a Quarkus application.
We will apply a real-world load to the applications to simulate activity, and we will let the OpenShift Scheduler determine how to deploy and replicate the applications.
***
Let’s look at a more complex example that demonstrates the value of JITServer in a Kubernetes setting.
These experiments were performed on RedHat OpenShift Service on AWS (for those of you that don’t know, OpenShift is a Kubernetes distribution from RedHat).
Our cluster has 3 worker nodes with 8 vCPUs and 16 GB of RAM, out of which only 12.3 GB are available (the rest is used by OS and OCP related applications).
As workload we have 4 different applications: AcmeAir Microservices and AcmeAir Monolithic based on OpenLiberty, Petclinic (which is based on the Springboot framework) and a Quarkus-based app.
We apply a low amount of load to better reflect conditions seen in practice (I have seen quite a few studies showing that the level of utilization is somewhere between 6 and 15%, while another study from Google gives more generous numbers, between 10 and 50%).
So these slides show how the OpenShift scheduler decided to place the various pods on the worker nodes. Note that each application has a different color, and each application is replicated multiple times.
The size of the shape indicates its relative container size. The number in the shape is the memory limit for that container.
As you can see in the top row, all 3 worker nodes are used, and the containers are all larger than in the bottom row – this is due to building in extra memory to avoid OOM issues and improve throughput.
The bottom row uses the JITServer, which results in only 2 worker nodes being used, despite the fact that the JITServer containers (shown in brown) are the largest containers in the node. The savings come from being able to scale down each of the application containers.
The end result is a 33% cost savings by using one less worker node.
***
This slide is an illustration of how OpenShift scheduler decided to place the various containers on nodes.
There are two different configurations: above we have the default configuration without JITServer which needs 3 worker nodes.
Below we have the JITServer configuration that only uses 2 worker nodes.
The colored boxes represent containers and the legend on the right will help you decipher which application each container is running.
The number in each box represents the memory limit for that container. These values were experimentally determined so that the application can run without an OOM or drastically reduced throughput.
The boxes were drawn to scale, meaning that a bigger box represents a proportionally larger amount of memory given to that container.
At a glance you can see that in the default config you cannot fit all those containers in just 2 nodes; you need 3 nodes.
In contrast, the JITServer config uses 6.3 GB less and we are able to fit all those containers in just 2 nodes. This happens even after we account for the presence of JITServer
(as you can see we have two JITServer instances, one on each node).
The takeaway is that JITServer technology allows you to increase application density in the cloud and therefore reduce cost.
Now let’s take a look at how each of the applications performed. The orange line represents the top row from the previous page, and the blue line represents the bottom row, which uses the JITServer.
There is one graph per application.
You can see that the performance is pretty even, despite the fact that the JITServer is working with fewer worker node CPUs. The small blue lags are likely caused by the noisy-neighbor effect due to all the apps being loaded up at the same time.
***
I have here 4 graphs, one for each application and the blue line shows how throughput with JITServer varies in time, while the orange line represents the throughput of the baseline.
As you can see, the steady state throughput for the 2 configurations is the same.
From the ramp-up point of view JITServer is doing quite well for Petclinic and Quarkus, while for AcmeAir mono and micro there is a minuscule ramp-up lag, which I would say is negligible.
On the graphs you can also notice some dips in throughput, more pronounced for AcmeAir monolithic.
This is due to interference between applications, or the so-called noisy neighbor effect.
Since in practice applications are not likely to be loaded at the exact same time, in these experiments we apply load to the 4 applications in a staggered fashion, 2 minutes apart, starting with AcmeAir microservices and continuing with AcmeAir monolithic, Petclinic and Quarkus.
Those throughput dips correspond to these 2 minute intervals when the next application starts to become exercised causing a flurry of JIT compilations to happen.
If you pay close attention, you'll observe that the Baseline configuration is affected too by the noisy neighbor effect, but to a lower extent because Baseline has 50% more CPUs at its disposal (3 nodes vs 2).
So, to summarize, the experiments demonstrate that JITServer can increase container density in the cloud without sacrificing throughput.
And this results in reducing operational cost of Java applications by 20 to 30%.
And just to note – ramp-up time can be slightly affected in high density scenarios due to limited computing resources, but how much depends on the level of load and the number of pods concurrently active.
****
In conclusion, the experiments conducted on Amazon AWS demonstrate that JITServer can increase container density in the cloud without sacrificing throughput and therefore reduces operational cost of Java applications by 20 to 30%.
Ramp-up can be slightly affected in high density scenarios due to limited computing resources, but the extent of this depends on the level of load and the number of pods concurrently active.
And for our last experiment, we wanted to see how JITServer affects autoscaling in Kubernetes.
You can see a description of the test bed on the right. The autoscaler instantiates a new pod when the average CPU utilization exceeds 50%.
The graph shows the throughput of the AcmeAir app while increasing amounts of load are applied.
The orange line is the baseline, and the blue line represents using the JITServer.
As you can see, the throughput curve continues to rise, and then it plateaus. The dips are associated with the launch of new pods, which burn a lot of CPU.
Comparing the two curves, JITServer gives you better behavior because it is able to warm up the newly spawned pods faster.
Also, without the JITServer there is the danger that the autoscaler will interpret the CPU used for compilation as load and be misled into launching even more pods. The JITServer makes this less likely.
And because the JITServer caches compiled methods, new instances of the same app will have these methods available to them at start-up.
***
HPA = Horizontal Pod Autoscaler
We performed some experiments to see how JITServer affects the autoscaling behavior in Kubernetes.
A description of the experimental test-bed can be found on the right.
Let’s focus on the graph on the left which shows the throughput of AcmeAir app while increasing load is applied to it.
HPA monitors the average CPU usage of the AcmeAir pods and if that exceeds our target of 0.5P, new pods are instantiated.
Overall the throughput curve goes up and at some point it plateaus.
Interestingly, the curve shows some transient dips in throughput and those correlate with HPA decisions to launch new pods.
The new pods will burn a lot of CPU but yield poor performance until they warm up. To maintain fairness the load balancer gives an equal number of requests to each pod, but because the new pods can only process a limited number of requests per second, the older pods are dragged down to that level of performance.
Comparing the two curves, JITServer gives you better behavior because it is able to warm up the newly spawned pods faster.
Moreover, without JITServer there is the danger that the autoscaler will interpret the CPU used for compilation as load and be misled into launching even more pods.
This can be avoided by making the autoscaler less sensitive to CPU spikes. However, not reacting fast enough to a legitimate increase in load is also bad.
Now let’s run through a demo.
Before we start, let’s go over what we will be setting up.
The purpose of the demo is to show that in a constrained container environment, JITServer improves ramp-up time, and allows JVM containers to run with less memory.
For this test we will be using the Java EE AcmeAir application running on the Open Liberty runtime. AcmeAir is an airline booking simulation app.
We will run 3 instances of it in containers. Two of the containers will not be using the JITServer, and they will have 200 and 400 MB of RAM. The third instance will have 200 MB and will use the JITServer. All instances are limited to 1 CPU in order to simulate a constrained environment.
AcmeAir instances are connected to a single MongoDB database which provides data persistence.
The JITServer container will be larger, with 4 CPUs and 1GB of memory.
We will use JMeter to simulate load to each of the application containers. Throughput values are forwarded to InfluxDB.
InfluxDB will gather the data from JMeter and display the results in Grafana as graphs.
Prometheus is used to scrape metrics from the JITServer, which will then be displayed in Grafana.
This is setup for the demo – this and the next few slides can be substituted by running the actual demo, or showing a copy of the demo located at https://ibm.ent.box.com/file/1014972472931
All of the JITServer features make it a great fit for the cloud.
It works better in constrained environments, which is obviously important for running in the cloud
Smaller containers mean allocating fewer resources, which means lower overall operational cost
Like we showed previously, having the JITServer run in a container as a cloud service makes it easy to deploy and manage with an orchestrator like Kubernetes or OpenShift. And with that, you get all the benefits they provide.
Using additional features like server-side caching can help overall CPU utilization on your cluster
Improved ramp-up times for short-lived applications can improve auto-scaling behavior
And finally, JITServer is just another container, so it can be scaled up or down to match demand
If running from the command line, the JITServer can be started from the OpenJ9 bin directory, by simply typing “jitserver”.
I would like to point out that the JITServer is just an OpenJ9 JVM, but running under a different persona.
To use it from your app, just add the -XX:+UseJITServer option when starting your Java applications
And there are a number of other options such as specifying address and port number if you need more than just the default values.
Here is a link to all of the options.
And note that the JITServer and its clients need to be on the same Java version and OpenJ9 release.
#####
Port can be changed if there is a conflict with another service using the same port number
For Kubernetes, you need to set up a JITServer deployment and service.
You do this with Yaml files, or Helm Charts, and an Operator is now available.
Here is a link to a tutorial that walks you through the steps.
- You can establish trust and encrypt the communication between client and server using TLS
- This can be done with the command line options shown in blue,
which specify the certificate file and private key to be used.
- The certificate and the private key files can be stored as Kubernetes TLS secrets and
mapped into the container using volumes.
- I am showing here an excerpt of a yaml file, though other ways are possible.
The JITServer can be queried for metrics. Here is a list, which is what we showed in the demo.
You can also specify logging options.
Here are some recommendations on when to use the JITServer.
One use case is when your JVM needs to compile many methods in a relatively short time
Or you are running in a constrained environment, where you can’t afford CPU spikes from compilations
And only use it if you have low network latency. Communication between the JVM and JITServer can create a lot of traffic. And you should use any latency-performance settings to tune your environment.
*****
SR-IOV = single root I/O Virtualization
Kubernetes uses "requests" and "limits" for resources like CPU and memory. "requests" is the amount of CPU/memory that a process is guaranteed to get. This value is used for pod scheduling. "limits" is the maximum amount of CPU/memory a pod is allowed to consume.
By setting a smaller CPU "request" value, Kubernetes has more flexibility in scheduling the JITServer. By setting a larger CPU "limit" value, you give JITServer the ability to use any unused CPU cycles
Typically the "request" value should be set to what the process consumes at steady-state (if you can define such a thing for JITServer)
-Xshareclasses on client – this turns on AOT
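To illustrate the request/limit advice above, a container resources stanza for the JITServer might look like this sketch (all values are illustrative, not tuned recommendations):
resources:
  requests:
    cpu: "1"        # roughly steady-state consumption, used for scheduling
    memory: 1Gi
  limits:
    cpu: "8"        # headroom for compilation spikes
    memory: 2Gi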
As for recommendations on how to use it:
The JITServer does require additional resources, so to get the most net benefit, you should try to have between 10 and 20 connected client JVMs
It needs at least 1-2 GB of RAM
If you are using it with Kubernetes, always set the CPU/memory limits (which are the max) much larger than the requests (which are the minimum). This can really help with handling CPU usage spikes
And as we saw with one of our experiments, it performs better if all of its clients are not started at the same time.
Don’t use encryption unless you really need it. It does add a lot of overhead
In Kubernetes, definitely use “sessionAffinity” to make sure JITServers and their clients stay connected.
And the last tip is using the AOT cache feature of OpenJ9. If both the client and JITServer have this enabled, AOT code can be cached on the server side and shared with all of the JVM instances.
So, what did we learn today?
JIT compilation adds overhead, and one solution is to disaggregate the JIT from the JVM and perform JIT compilations as a service.
We demonstrated the OpenJ9 implementation, which is the JITServer, also known as the Semeru Cloud Compiler.