For quite a long time we were forced to choose between performance and simplicity: either use complicated but performant reactive code, or use a simple yet limited blocking approach. Thanks to Project Loom in the JDK, the paradigm can shift once more, even for applications that require high concurrency. I will introduce Helidon Níma, a new microservices framework built on top of a server designed for Loom, with fully synchronous routing that can block as needed yet still provides high performance under heavy concurrent load. I'll also talk about the challenges, benefits, and impact on application development in such an environment.
3. Helidon
● Java framework for developing cloud-native microservices
● Open source, licensed under Apache 2.0
○ https://github.com/helidon-io/helidon
● Small and fast
● Minimum number of third-party dependencies
● Supports GraalVM native-image
● https://helidon.io
7. Our Goal - Efficiency & Performance
● Faster request processing
● Use fewer CPU cycles
● Use less memory
● Minimize disk footprint
8. Problems
● One thread per request model is limited
● User threads are mapped 1:1 to kernel threads
● Kernel threads are scheduled by the OS
● Creating a kernel thread is an expensive operation
9. Solution 1 - Reactive Programming
● Asynchronous - we don’t wait for something to happen
● Callback functions - called when it happens
● Back pressure
○ Resistance or force opposing the desired flow: of fluid through pipes or, by analogy, of data through software.
● Reactive Streams Specification
○ https://www.reactive-streams.org/
○ Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure
○ Java has implemented it since version 9 via the Flow API
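As a minimal illustration (my sketch, not code from the talk) of the callback style and non-blocking back pressure that the JDK 9 Flow API standardizes: a subscriber requests items one at a time from a `SubmissionPublisher`, so the publisher never outruns the consumer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

    // Subscribes to a publisher, requests items one at a time
    // (back pressure), and returns everything that was received.
    static List<String> collect(String... items) throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        Flow.Subscriber<String> subscriber = new Flow.Subscriber<>() {
            private Flow.Subscription subscription;

            @Override
            public void onSubscribe(Flow.Subscription subscription) {
                this.subscription = subscription;
                subscription.request(1); // back pressure: ask for one item only
            }

            @Override
            public void onNext(String item) {
                received.add(item);
                subscription.request(1); // ready for the next item
            }

            @Override
            public void onError(Throwable throwable) {
                done.countDown();
            }

            @Override
            public void onComplete() {
                done.countDown();
            }
        };

        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            for (String item : items) {
                publisher.submit(item);
            }
        } // close() signals onComplete once all submitted items are delivered

        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect("a", "b")); // prints [a, b]
    }
}
```

Note that even this trivial pipeline needs three callbacks and a latch; the same logic as blocking code would be a plain loop.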
10. Helidon SE – Our Solution
● Reactive, Non-blocking
● Uses event loop (small number of threads to process all requests)
● MicroProfile Reactive Streams Operators
● MicroProfile Reactive Messaging
● Reactive WebServer
● Reactive WebClient
● Reactive DBClient
11. Problems of Reactive Programming
● Steep learning curve
● Callback hell
● Code is difficult to read
● Code is difficult to debug
● Code is difficult to test
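A hypothetical sketch of the readability problem: the same two-step flow written reactively and written as plain blocking code. All method names here (`fetchUser`, `fetchOrders`) are made up for illustration, not any real API.

```java
import java.util.concurrent.CompletableFuture;

public class CallbackHellDemo {

    // Stand-ins for asynchronous remote calls (illustrative only).
    static CompletableFuture<String> fetchUser() {
        return CompletableFuture.completedFuture("user-1");
    }

    static CompletableFuture<String> fetchOrders(String user) {
        return CompletableFuture.completedFuture(user + ":orders");
    }

    // Reactive style: the logic is scattered across nested callbacks,
    // and stack traces point into the framework rather than this code.
    static CompletableFuture<String> reactive() {
        return fetchUser()
                .thenCompose(user -> fetchOrders(user)
                        .thenApply(orders -> "report for " + orders));
    }

    // Blocking style: reads top to bottom, easy to step through in a
    // debugger and to test with plain assertions.
    static String blocking() throws Exception {
        String user = fetchUser().get();
        String orders = fetchOrders(user).get();
        return "report for " + orders;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(reactive().get()); // prints report for user-1:orders
        System.out.println(blocking());       // prints report for user-1:orders
    }
}
```

With only two steps the reactive version is still readable; with branching, error handling, and retries the nesting grows quickly, which is the "callback hell" the slide refers to.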
13. Solution 2 – Java Virtual Threads
● Project Loom (JEP 425)
● Available as preview in Java 19
● Threads can now be either Platform or Virtual
● Virtual threads are mapped M:N to platform threads
● Blocking operations on virtual threads do not block platform threads
● Virtual threads are cheap to create
● Virtual threads are cheap to block
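To see "cheap to create, cheap to block" in practice, a small sketch (mine, not from the talk): submit ten thousand tasks that each block briefly, on a virtual-thread-per-task executor. The same count of platform threads would exhaust OS resources on many systems. Virtual threads were a preview feature in JDK 19 (`--enable-preview`) and became final in JDK 21.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {

    // Runs the given number of blocking tasks, each on its own virtual
    // thread, and returns how many completed.
    static int runTasks(int tasks) {
        AtomicInteger counter = new AtomicInteger();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                exec.submit(() -> {
                    try {
                        // Blocking is cheap: only the virtual thread parks,
                        // the carrier (platform) thread is free to run others.
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    counter.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000)); // prints 10000
    }
}
```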
14. Helidon Níma – Our Solution
● The first microservices framework based on virtual threads
● Scalability of asynchronous programming models with the simplicity of synchronous code
● Built from the ground up in tight collaboration with the Java team
● Contains Níma web server plus additional goodies
● Will be released as part of Helidon 4.0 next year
● ALPHA2 has been released
● https://helidon.io/nima
15. Sample - Níma
// count(req), client and callRemote(client) are helpers defined elsewhere in the sample
private void parallel(ServerRequest req, ServerResponse res) throws Exception {
    try (var exec = Executors.newVirtualThreadPerTaskExecutor()) {
        int count = count(req);
        var futures = new ArrayList<Future<String>>();
        for (int i = 0; i < count; i++) {
            futures.add(exec.submit(() -> callRemote(client)));
        }
        var responses = new ArrayList<String>();
        for (var future : futures) {
            responses.add(future.get());
        }
        res.send("Combined results: " + responses);
    }
}
16. Helidon Níma thread model
● Socket listeners are platform threads (one per port)
● HTTP/1.1 uses two virtual threads per connection (can be configured to use just one)
● HTTP/2 uses two virtual threads per connection, and one virtual thread per stream
● All routes are executed within a virtual thread, and can block as needed
17. Performance Disclaimer
● Single Linux machine used for client and server (loopback)
○ Intel Core i9-9900K @ 3.6 GHz x 16
○ 32 GiB memory
○ Ubuntu Linux
● Different Java versions used (as Níma requires 19)
● Used code from TechEmpower benchmark (not the benchmark itself)
● Versions:
○ Netty 4.1.36 pure handler (no higher-level framework)
○ Helidon SE 2.5.1 reactive routing
○ Helidon Níma 4.0.0-ALPHA2 blocking routing
19. Helidon Níma Server (ALPHA2) – Features
● HTTP/1.1 with pipelining
○ Server and Client
● HTTP/2 prototype, either h2c or h2 (upgrade/ALPN)
○ Server, Client in progress
● gRPC prototype
○ Server, Client TBD
● WebSocket Server prototype
● Extensible to other TCP protocols
● Testing support
○ Unit and integration tests
20. Helidon Níma – Features
● Access Log
● CORS support
● Static Content
● Tracing – OpenTelemetry & OpenTracing
● Health checks
21. Helidon Flavors (next major release)
Helidon SE
● Reactive, non-blocking
● Functional style APIs
● Built on top of Netty
Helidon MP
● MicroProfile & Jakarta EE
● Java EE style APIs
● Annotations & Dependency Injection
Helidon Níma
● Blocking
● Based on virtual threads
● Easy to use, test and debug
24. Challenges (1/2)
● To re-use or not to re-use (byte buffers/byte arrays)
○ Due to the huge number of virtual threads, using a single component to cache buffers for re-use is not efficient. We achieved higher throughput by discarding buffers (letting the GC do its work) than by reusing them, with similar results for direct (native) and heap byte buffers
● Asynchronous writes
○ By default, we write to sockets asynchronously. On Linux, this provides higher performance (up to 3x) when HTTP/1.1 pipelining is used. Without pipelining there is no additional advantage, so asynchronous writes are configurable and can be disabled
25. Challenges (2/2)
● Blocking or non-blocking sockets/socket channels
○ After a lot of testing and validation with the Java team, we found that the best performance is achieved with blocking sockets – e.g., we use ServerSocket in blocking mode to listen for connections, with the "old school" approach of accepting a socket and starting a new virtual thread to process it
● Get used to blocking!
○ If you are used to writing reactive code, it can be difficult to switch to blocking. You should just block!
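The "old school" blocking approach described above can be sketched as follows. This is my illustration, not Níma source code: a platform thread blocks on `ServerSocket.accept()`, and each accepted connection is handled on its own virtual thread. A one-line echo protocol stands in for real request processing, and `Thread.ofVirtual()` requires JDK 19 with preview enabled (or JDK 21+).

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingAcceptSketch {

    // Starts a listener, connects once as a client, and returns the
    // echoed line to demonstrate a full round trip.
    static String roundTrip() throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            // Socket listener is a platform thread (one per port).
            new Thread(() -> serve(server)).start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 var out = new PrintWriter(client.getOutputStream(), true);
                 var in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
                out.println("hello");
                return in.readLine();
            }
        }
    }

    static void serve(ServerSocket server) {
        while (!server.isClosed()) {
            try {
                Socket socket = server.accept(); // blocking accept
                // One virtual thread per connection; its handler may block freely.
                Thread.ofVirtual().start(() -> handle(socket));
            } catch (IOException e) {
                return; // server socket closed, stop listening
            }
        }
    }

    static void handle(Socket socket) {
        try (socket;
             var in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             var out = new PrintWriter(socket.getOutputStream(), true)) {
            String line = in.readLine();
            out.println("echo:" + line); // blocking write, fine on a virtual thread
        } catch (IOException ignored) {
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // prints echo:hello
    }
}
```

There is no selector and no readiness callback anywhere: each connection's code simply blocks on reads and writes, and the virtual-thread scheduler keeps the platform threads busy.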