Describes ongoing work at the Eclipse OMR and Eclipse OpenJ9 open source projects to develop Just In Time (JIT) compiler technology that can be deployed independently of a runtime (such as a JVM, in OpenJ9's case). The live presentation included two demos that do not both appear in the slides, but those demos are available in the open, so contact me if you want the details.
1. JIT as a Service
Compiling for Runtimes in the Cloud
Mark Stoodley
Eclipse OMR and Eclipse OpenJ9 project lead
2. Important Disclaimers
§ THE INFORMATION CONTAINED IN THIS PRESENTATION IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY.
§ WHILST EFFORTS WERE MADE TO VERIFY THE COMPLETENESS AND ACCURACY OF THE INFORMATION
CONTAINED IN THIS PRESENTATION, IT IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED.
§ ALL PERFORMANCE DATA INCLUDED IN THIS PRESENTATION HAVE BEEN GATHERED IN A CONTROLLED
ENVIRONMENT. YOUR OWN TEST RESULTS MAY VARY BASED ON HARDWARE, SOFTWARE OR
INFRASTRUCTURE DIFFERENCES.
§ ALL DATA INCLUDED IN THIS PRESENTATION ARE MEANT TO BE USED ONLY AS A GUIDE.
§ IN ADDITION, THE INFORMATION CONTAINED IN THIS PRESENTATION IS BASED ON IBM’S CURRENT
PRODUCT PLANS AND STRATEGY, WHICH ARE SUBJECT TO CHANGE BY IBM, WITHOUT NOTICE.
§ IBM AND ITS AFFILIATED COMPANIES SHALL NOT BE RESPONSIBLE FOR ANY DAMAGES ARISING OUT
OF THE USE OF, OR OTHERWISE RELATED TO, THIS PRESENTATION OR ANY OTHER DOCUMENTATION.
§ NOTHING CONTAINED IN THIS PRESENTATION IS INTENDED TO, OR SHALL HAVE THE EFFECT OF:
– CREATING ANY WARRANTY OR REPRESENTATION FROM IBM, ITS AFFILIATED COMPANIES OR ITS
OR THEIR SUPPLIERS AND/OR LICENSORS
3. Runtimes need to operate in small spaces
[Diagram 1: Virtualization — one machine with 8 cores and 8GB divided into eight VMs of 1 core, 1GB each]
[Diagram 2: Microservice Architecture — a monolithic application under high input load decomposed into services A through F, with possibly lower-load connections between them]
Many lighter-weight applications need to be allocated to many small VMs
4. Driving some interest in Ahead of Time (AOT) compilers, even for "dynamic" languages
5. AOT vs. JIT compilers for dynamic languages

                              AOT compiler                  JIT compiler
Runtime CPU cycles            none                          considerable early on
Runtime memory                none                          considerable early on
Ability to prove things       excellent                     more limited
Performance ramp-up           immediate                     takes time
Target environment            have to choose                CPU it's running on
Profile-based optimizations   can use earlier-run data,     collects and uses data
                              but awkward                   from current run
Speculative optimization      can't really afford           yes, aggressively
6. By itself, AOT compilation has some advantages but leaves significant performance on the table
7. What if…
• What if we could JIT compile out of process?
• On another machine, or even on a cluster of machines
• With independently deployed CPUs and memory
• Able to serve multiple applications and share code
• Optimizing with information collected from many runtime clients
9. 1. Move runtime costs to a remote service
…can afford a JIT even for very lightweight runtimes
2. Still running alongside applications
…so no loss in code performance
10. Yes! There will be latency!
But maybe not as bad as you think
11. The Basic JITaaS Architecture
• A client-server model
• Bidirectional communication
• Target method + metadata sent to server
• Compiled code + metadata returned to client
[Timeline diagram: compilation begins on the client, the server issues VM queries ×N back to the client while compiling, and compilation ends when the code is returned]
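The request/response exchange above can be sketched in C++. All names here (`CompileRequest`, `CompileResponse`, `compileOnServer`) are hypothetical stand-ins for illustration, not the actual OpenJ9 JITaaS wire format:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative message types only; the real protocol differs.
struct CompileRequest {
    std::string method;              // target method signature
    std::vector<uint8_t> bytecodes;  // method body shipped to the server
};

struct CompileResponse {
    std::vector<uint8_t> machineCode; // code to install in the client's code cache
    std::vector<uint8_t> metadata;    // relocation/GC metadata returned with it
};

// Stand-in for the server side: a real server runs the full JIT pipeline
// and issues VM queries back to the client while compiling.
CompileResponse compileOnServer(const CompileRequest &req) {
    CompileResponse resp;
    // Placeholder "compilation": real machine code would be generated here.
    resp.machineCode.assign(req.bytecodes.size() * 4, 0x90);
    resp.metadata = {0x01};
    return resp;
}
```

The key point of the diagram is that the channel is bidirectional: while the server compiles, it can ask the client questions it cannot answer remotely.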
12. Two ongoing projects
1. Eclipse OMR
• Record JitBuilder calls on client, replay on server
2. Eclipse OpenJ9
• Compile Java methods from bytecodes
13. JitBuilder as a Service
IBM ExtremeBlue student project over the summer:
1. Record JitBuilder API calls on the client side
2. Send that record to a JIT server
3. Check whether code has already been generated for the provided record
4. If not, replay the recorded calls into the JitBuilder implementation in the JIT server
5. Store the generated code using the record as the key
6. Send the code back to be installed in the client's code cache
7. The client can call the native code as a C function pointer
Created a client JIT for the ultra-simple (but Turing complete!) "BF" language
https://en.wikipedia.org/wiki/Brainf***
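Steps 3–5 above amount to a server-side cache keyed by the recorded call sequence. A minimal sketch, assuming the record is serialized to a string; the type names (`JitServerCache`, `Record`, `CompiledCode`) are illustrative, not the actual OMR API:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

using Record = std::string;                 // serialized JitBuilder call sequence
using CompiledCode = std::vector<uint8_t>;  // generated native code

class JitServerCache {
public:
    // Returns cached code, or replays the record and stores the result.
    const CompiledCode &getOrCompile(const Record &record) {
        auto it = cache_.find(record);
        if (it != cache_.end())
            return it->second;               // step 3: code already generated
        CompiledCode code = replay(record);  // step 4: replay into the JIT
        // step 5: store generated code using the record as the key
        return cache_.emplace(record, std::move(code)).first->second;
    }

private:
    CompiledCode replay(const Record &record) {
        // A real server would drive the JitBuilder implementation here;
        // this placeholder just copies the record bytes.
        return CompiledCode(record.begin(), record.end());
    }
    std::map<Record, CompiledCode> cache_;
};
```

Keying on the record means two clients that emit the same JitBuilder calls pay for compilation only once.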
19. Throughput Takeaways
• JIT as a service doesn’t have to sacrifice peak throughput
• Network latency (in aggregate) is tolerable
• Many parallel compilation requests overlap much of the cost
• Can cache many query results to avoid network round trips
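The last bullet (caching query results) can be sketched as a memo table sitting in front of the network. `QueryCache` and its hit/miss counters are hypothetical names for illustration:

```cpp
#include <map>
#include <string>

// Memoize VM query answers on the server so repeated questions about the
// same class or method skip the network round trip to the client.
class QueryCache {
public:
    // askClient stands in for a real network round trip.
    std::string lookup(const std::string &query,
                       std::string (*askClient)(const std::string &)) {
        auto it = cache_.find(query);
        if (it != cache_.end()) {
            ++hits_;                            // served locally, no network traffic
            return it->second;
        }
        ++misses_;
        std::string answer = askClient(query);  // one round trip, then cached
        cache_[query] = answer;
        return answer;
    }
    int hits() const { return hits_; }
    int misses() const { return misses_; }

private:
    std::map<std::string, std::string> cache_;
    int hits_ = 0, misses_ = 0;
};
```

This only works for answers that stay valid for the life of the compilation; anything the VM can change must be re-queried or invalidated.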
21. Memory Consumption Takeaways
• Dominant memory consumption is now the application
• Size your containers to your apps; forget about the JIT
• Workload balancers will no longer be "fooled" by spiky JIT activity
22. Demo 2: Java JIT as a Service in a Constrained Environment (256MB, ½ core)
24. Future Work
• Close remaining performance gap
• Share compilations between clients
• Optimizations enabled through connection to multiple clients
• Build different kinds of remote JIT clients, not just for languages
• Merge hardened prototypes to OMR/OpenJ9 master branches
Shameless self plug: Whittier room at 3:30 today by Mark Stoodley
https://2018.splashcon.org/event/splash-2018-splash-i-oh-the-compilers-you-will-build-
25. All open and on the way to being merged
• Eclipse OMR
• https://github.com/eclipse/omr/pull/3019 (JitBuilder record function)
• https://github.com/eclipse/omr/pull/3056 (JitBuilder replay function)
• https://eclipse-omr.slack.com
• Eclipse OpenJ9
• https://github.com/eclipse/openj9/tree/jitaas
• https://github.com/eclipse/openj9-omr/tree/jitaas
• https://openj9.slack.com