The lack of asynchronous relational database drivers in Java has long been a barrier to writing scalable, data-driven applications. R2DBC seeks to change this with a new API designed from the ground up for reactive programming against relational databases; its intent is to support reactive data access built on natively asynchronous, non-blocking SQL database drivers.
How does this change the game for data access in the cloud? Used in conjunction with RSocket and Proteus, it is now possible to write applications benefiting from reactive streaming end-to-end, from the browser all the way to the database. No more fiddling with paging APIs, polling for updates, or writing complex logic to merge data from multiple sources; reactive streams can handle all of this for you!
RSocket is an open-source, reactive networking protocol that is a collaborative development initiative of Netifi with Pivotal, Facebook, and others. Proteus is a freely available broker for RSocket that is designed to handle the challenges of communication between complex networks of services—both within the data center and over the internet—extending to mobile devices and browsers.
Attend this webinar to learn how to use Pivotal Cloud Foundry with R2DBC and Proteus to build reactive microservices that return large amounts of data in a streaming fashion over RSocket.
Speakers: Ryland Degnan (co-founder and CTO, Netifi) and Dan Baskette (Pivotal, host)
2. Reactive Programming
• “Reactive Streams is an initiative to provide a standard for asynchronous stream processing with non-blocking back pressure. This encompasses efforts aimed at runtime environments (JVM and JavaScript) as well as network protocols”
• “The main goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary—think passing elements on to another thread or thread-pool—while ensuring that the receiving side is not forced to buffer arbitrary amounts of data.”
• Everything is a stream!
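The demand-driven contract described above can be sketched with the JDK's built-in `java.util.concurrent.Flow` API (a minimal hypothetical example, not taken from the talk): the subscriber signals demand with `request(1)` for each element, so the publisher can never push more than the subscriber has asked for.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class ReactiveStreamsDemo {
    // A subscriber that pulls one element at a time: the publisher may not
    // deliver more than has been requested, so the subscriber is never flooded.
    static class OneAtATime implements Flow.Subscriber<Integer> {
        final List<Integer> received = new ArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);
        Flow.Subscription sub;

        public void onSubscribe(Flow.Subscription s) {
            sub = s;
            s.request(1);                 // signal demand for exactly one element
        }
        public void onNext(Integer item) {
            received.add(item);
            sub.request(1);               // pull the next element only when ready
        }
        public void onError(Throwable t) { done.countDown(); }
        public void onComplete()         { done.countDown(); }
    }

    public static List<Integer> run() {
        SubmissionPublisher<Integer> pub = new SubmissionPublisher<>();
        OneAtATime sub = new OneAtATime();
        pub.subscribe(sub);
        for (int i = 1; i <= 5; i++) pub.submit(i); // submit respects subscriber demand
        pub.close();
        try {
            sub.done.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return sub.received;
    }

    public static void main(String[] args) {
        System.out.println(run()); // [1, 2, 3, 4, 5]
    }
}
```

The same `Publisher`/`Subscriber`/`Subscription` trio (originally from the Reactive Streams specification) is what R2DBC, RSocket, and Reactor all build on.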
3. Why Non-Blocking?
• CPU consumption per request
• Event loop architecture reduces thread migrations under load, which lowers CPU cycle consumption per request
• Latency under load
• Tomcat has higher latencies under load due to its thread pool architecture, which involves thread pool locks (and lock contention) and thread migrations
• Incredibly important when building microservices
4. Thread Migrations
• Netty achieved a 46% higher request rate
• As load increases, Netty begins to experience lower thread migrations
• There is enough queued work for event loop threads to keep servicing requests without switching
5. Request Maximum Latency
• The degradation in maximum latency for Tomcat is much more severe
• Netty’s latency breakdown happens with much higher load
6. Roadblocks
• But there are still some barriers to using Reactive everywhere
• Data Access
• Reactive drivers exist for MongoDB, Apache Cassandra, and Redis
• No reactive relational database access
• Cross-process back pressure (networking)
8. R2DBC
• R2DBC engages relational databases with a reactive API, something not possible with the blocking nature of JDBC and JPA
• R2DBC is founded on Reactive Streams, providing an asynchronous, non-blocking API all the way to the database
• Current implementations include:
• PostgreSQL
• H2
• Microsoft SQL Server
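As a rough sketch of what querying through R2DBC can look like (illustrative only: it assumes the io.r2dbc.spi API plus Project Reactor on the classpath, and the connection URL, table, and column names are hypothetical):

```java
// Sketch only: requires io.r2dbc:r2dbc-spi, a driver such as r2dbc-postgresql,
// and io.projectreactor:reactor-core as dependencies.
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class R2dbcSketch {
    public static Flux<String> userNames() {
        // Hypothetical connection URL and schema
        ConnectionFactory factory =
            ConnectionFactories.get("r2dbc:postgresql://localhost:5432/mydb");
        return Mono.from(factory.create())
            .flatMapMany(conn ->
                Flux.from(conn.createStatement("SELECT name FROM users").execute())
                    .flatMap(result -> result.map((row, meta) ->
                        row.get("name", String.class))));
        // Connection cleanup (conn.close()) elided for brevity.
    }
}
```

Each row flows through the pipeline as it is produced by the driver, and downstream `request(n)` demand propagates back toward the database, so results are streamed rather than buffered in full.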
9. RSocket
• RSocket is a bi-directional, multiplexed, message-based, binary protocol based on Reactive Streams back pressure
• It provides out-of-the-box support for four interaction models commonly seen in cross-application communication
• Request-Response
• Fire-and-Forget
• Request-Stream
• Channel
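The four interaction models above can be sketched as plain Java signatures (a hypothetical rendering using the JDK's `Flow` and `CompletableFuture` types; the real io.rsocket API expresses them with Reactor's `Mono` and `Flux`):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;

public class RSocketModels {
    // The four interaction models as Java signatures (hypothetical sketch).
    interface InteractionModels {
        CompletableFuture<String> requestResponse(String request);       // 1 request -> 1 response
        void fireAndForget(String request);                              // 1 request -> no response
        Flow.Publisher<String> requestStream(String request);            // 1 request -> many responses
        Flow.Publisher<String> channel(Flow.Publisher<String> requests); // many <-> many
    }

    // Toy in-memory responder, for illustration only.
    static class Echo implements InteractionModels {
        public CompletableFuture<String> requestResponse(String req) {
            return CompletableFuture.completedFuture("echo:" + req);
        }
        public void fireAndForget(String req) {
            // deliver and forget: no completion signal returns to the caller
        }
        public Flow.Publisher<String> requestStream(String req) {
            // Emit a short finite stream on first demand (simplified: ignores
            // the actual demand count, which a spec-compliant source must honor).
            return sub -> sub.onSubscribe(new Flow.Subscription() {
                boolean done = false;
                public void request(long n) {
                    if (done) return;
                    done = true;
                    sub.onNext(req + "-1");
                    sub.onNext(req + "-2");
                    sub.onComplete();
                }
                public void cancel() { done = true; }
            });
        }
        public Flow.Publisher<String> channel(Flow.Publisher<String> requests) {
            return requests; // echo the inbound stream straight back
        }
    }

    public static void main(String[] args) {
        System.out.println(new Echo().requestResponse("hi").join()); // echo:hi
    }
}
```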
10. Message Driven Binary Protocol
• Requester-Responder interaction is broken down into frames that encapsulate messages
• The framing is binary (not human readable like JSON or XML)
• Massive efficiencies for machine-to-machine communication
• Downsides only manifest rarely and can be mitigated with tooling
• Payload Agnostic
• Protobuf, JSON, Custom Binary
11. Multiplexed
• Connections that are only used for a single request are massively inefficient (HTTP 1.0)
• Pipelining (ordering requests and responses sequentially) is a naive attempt at solving the issue, but results in head-of-line blocking (HTTP 1.1)
• Multiplexing solves the issue by annotating each message on the connection with a stream id that partitions the connection into multiple "logical streams"
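The stream-id idea can be shown with a toy demultiplexer (a hypothetical minimal framing; real RSocket frames also carry a frame type, flags, and metadata):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MultiplexDemo {
    // Minimal hypothetical frame: a payload tagged with its logical stream id.
    static class Frame {
        final int streamId;
        final String payload;
        Frame(int streamId, String payload) { this.streamId = streamId; this.payload = payload; }
    }

    // Demultiplex interleaved frames from one physical connection back into
    // ordered per-stream payload sequences, keyed by stream id.
    static Map<Integer, List<String>> demux(List<Frame> connection) {
        Map<Integer, List<String>> streams = new LinkedHashMap<>();
        for (Frame f : connection) {
            streams.computeIfAbsent(f.streamId, id -> new ArrayList<>()).add(f.payload);
        }
        return streams;
    }

    public static void main(String[] args) {
        // Two logical streams share one connection; their frames interleave
        // freely, so a slow stream cannot head-of-line block the other.
        List<Frame> wire = List.of(
            new Frame(1, "a1"), new Frame(2, "b1"),
            new Frame(1, "a2"), new Frame(2, "b2"));
        System.out.println(demux(wire)); // {1=[a1, a2], 2=[b1, b2]}
    }
}
```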
12. Bi-Directional
• Many protocols (notably not TCP) have a distinction between the client and server for the lifetime of a connection
• This division means that one side of the connection must initiate all requests, and the other side must initiate all responses
• Even more flexible protocols like HTTP/2 do not fully drop the distinction
• Servers cannot start an unrequested stream of data to the client
• Once a client initiates a connection to a server, both parties can be requestors or responders to a logical stream
• Transport Agnostic
• TCP, Websockets, HTTP/2, Aeron (UDP)
13. Reactive Streams Back Pressure
• Network protocols generally send a single request and receive an arbitrarily large response in return
• There is nothing to stop the responder (or even the requestor) from sending an arbitrarily large amount of data and overwhelming the receiver
• In cases where TCP back pressure throttles the responder, queues fill with large amounts of un-transferred data
• Reactive Streams (pull-push) back pressure ensures that data is only materialized and transferred when the receiver is ready to process it
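The "materialized only on demand" point can be demonstrated with a lazy `Flow.Publisher` over an unbounded sequence (a hypothetical sketch): elements are produced strictly in response to `request(n)`, so a receiver that wants three elements causes exactly three to exist.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicInteger;

public class PullPushDemo {
    // Counts how many elements the source actually materializes.
    static final AtomicInteger produced = new AtomicInteger();

    // A publisher over an unbounded sequence of naturals: elements are
    // materialized only in response to request(n), never eagerly.
    static Flow.Publisher<Integer> naturals() {
        return subscriber -> subscriber.onSubscribe(new Flow.Subscription() {
            int next = 0;
            boolean cancelled = false;
            public void request(long n) {
                for (long i = 0; i < n && !cancelled; i++) {
                    produced.incrementAndGet();
                    subscriber.onNext(next++);
                }
            }
            public void cancel() { cancelled = true; }
        });
    }

    public static int consumeThree() {
        produced.set(0);
        naturals().subscribe(new Flow.Subscriber<Integer>() {
            Flow.Subscription sub;
            int seen = 0;
            public void onSubscribe(Flow.Subscription s) { sub = s; s.request(3); }
            public void onNext(Integer item) {
                if (++seen == 3) sub.cancel(); // receiver is done; stop the source
            }
            public void onError(Throwable t) {}
            public void onComplete() {}
        });
        return produced.get();
    }

    public static void main(String[] args) {
        System.out.println(consumeThree()); // 3: only the requested elements were materialized
    }
}
```

With TCP back pressure alone, the responder keeps generating data until socket buffers fill; with pull-push demand, the unwanted data is never created in the first place.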