Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients. AND IT'S TRUE!
In this talk, given at JBCNConf 2015 in Barcelona, we will see how we have been using Netty at Trovit since 2013, what it brought us, and how it opened our minds. We will share tips that helped us learn more about Netty, some performance tricks, and everything that worked for us.
106. @Override
public void channelActive(ChannelHandlerContext ctx) {
    String date = DATE_TIME.print(new DateTime());
    ctx.writeAndFlush(ByteBufUtil.encodeString(
            ctx.alloc(), CharBuffer.wrap(date),
            CharsetUtil.US_ASCII));
}
Handler at work
111. @Override
public void channelActive(ChannelHandlerContext ctx) {
    String date = DATE_TIME.print(new DateTime());
    ctx.writeAndFlush(ByteBufUtil.encodeString(
            ctx.alloc(), CharBuffer.wrap(date),
            CharsetUtil.US_ASCII));
    // We need to encode the String
}
Handler at work
112. @Override
public void channelActive(ChannelHandlerContext ctx) {
    String date = DATE_TIME.print(new DateTime());
    ctx.writeAndFlush(ByteBufUtil.encodeString(
            ctx.alloc(), CharBuffer.wrap(date),
            CharsetUtil.US_ASCII));
    // We allocate some space
    // +Netty: Keeps internal pools
}
Handler at work
113. @Override
public void channelActive(ChannelHandlerContext ctx) {
    String date = DATE_TIME.print(new DateTime());
    ctx.writeAndFlush(ByteBufUtil.encodeString(
            ctx.alloc(), CharBuffer.wrap(date),
            CharsetUtil.US_ASCII));
    // Write the message
    // Request to actually flush the data
    // back to the Channel
}
Handler at work
116. Main class
public class DaytimeServer {
    void run() throws Exception {
        // fun stuff
    }

    public static void main(String[] args) throws Exception {
        DaytimeServer daytimeServer = new DaytimeServer();
        daytimeServer.run();
    }
}
127. run()
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
// Boss group accepts connections
// Worker group handles the work
128. ServerBootstrap
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.localAddress(8080)
.option(ChannelOption.SO_BACKLOG, 100)
129. ServerBootstrap
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.localAddress(8080)
.option(ChannelOption.SO_BACKLOG, 100)
// We assign both event loops
130. ServerBootstrap
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.localAddress(8080)
.option(ChannelOption.SO_BACKLOG, 100)
// We use a ServerSocketChannel
// to accept TCP/IP connections
// as the RFC says
131. ServerBootstrap
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.localAddress(8080)
.option(ChannelOption.SO_BACKLOG, 100)
// Simply bind the local address
132. ServerBootstrap
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.localAddress(8080)
.option(ChannelOption.SO_BACKLOG, 100)
// Set some Socket options... why not?
// Just remember: This is not handled
// by Netty or the JVM, it’s the OS
134. ChannelPipeline
b.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new LoggingHandler(LogLevel.INFO));
p.addLast(new SimpleDaytimeHandler());
}
});
135. ChannelPipeline
b.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new LoggingHandler(LogLevel.INFO));
p.addLast(new SimpleDaytimeHandler());
}
});
// ChannelPipeline to define your
// application workflow
136. ChannelPipeline
b.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new LoggingHandler(LogLevel.INFO));
p.addLast(new SimpleDaytimeHandler());
}
});
// Append our handlers
// ProTip: use LoggingHandler to
// understand Netty
137. ChannelPipeline
b.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new LoggingHandler(LogLevel.INFO));
p.addLast(new SimpleDaytimeHandler());
}
});
// Finally, we add our handler
138. RUN!
b.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch)
throws Exception {
ChannelPipeline p = ch.pipeline();
p.addLast(new LoggingHandler(LogLevel.INFO));
p.addLast(new SimpleDaytimeHandler());
}
});
ChannelFuture f = b.bind().sync();
f.channel().closeFuture().sync();
// It works!
I was looking for a fast webserver?!
more hype words please
I think I’m gonna pass, forget it never happened -- Almost happened to me 3 years ago
They’re all the same...
Low-level input and output (I/O) operations.
Just to name a few: reading data from a disk drive,
making a remote procedure call (RPC),
sending a file over a network, etc. In general terms, any communication between the CPU + memory and any other device is considered I/O.
The same goes for write() operations.
We can say that our program blocks while the communication is in progress.
This type of I/O is known as blocking I/O or synchronous I/O.
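To make "blocking" concrete, here is a minimal plain-Java sketch (my own illustration, not code from the talk): the client's read() call simply sits there until the peer decides to write, 200 ms later. All names are made up.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingReadDemo {
    // Returns roughly how many milliseconds the read() call spent blocked.
    static long blockedMillis() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            // The "peer" writes a single byte, but only after 200 ms.
            Thread peer = new Thread(() -> {
                try (Socket s = server.accept()) {
                    Thread.sleep(200);
                    s.getOutputStream().write('x');
                    s.getOutputStream().flush();
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            peer.start();
            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                long start = System.nanoTime();
                client.getInputStream().read(); // blocks until the peer writes
                long elapsed = (System.nanoTime() - start) / 1_000_000;
                peer.join();
                return elapsed;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("read() blocked for ~" + blockedMillis() + " ms");
    }
}
```

During those 200 ms, the calling thread does nothing useful at all.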
NS: What’s the problem?
The big problem with blocking I/O is that while you're waiting, you leave most of your system resources idle: your processor will mostly do nothing but wait for the I/O operations to finish.
Imagine that for every request you have to handle, you need to read something from the database.
Your code will block waiting for the database operations to finish every time.
In that period, you're dedicating memory and processing time to a thread that is only waiting.
Because of this, typical web servers spawn a new thread for every incoming request to handle more traffic.
But as you can see, this is not optimal.
Under stress situations, most of your threads will be consuming more memory and CPU waiting for other operations to finish.
We can do better...
Another approach is to issue an I/O call but not wait until it’s finished
As you can imagine, this kind of operation doesn't block your program.
In fact, the call returns immediately to the caller
NS: and...
you’ll be notified once the operation has finished
Yup, just like with Dependency Injection
Even so, keep in mind that if your tasks depend on a completed I/O operation, you still have to wait for it to finish.
But this time, you won’t be wasting resources just waiting, because other processing that doesn’t depend on I/O can be executed
With asynchronous I/O APIs and multithreading we can build more robust, scalable applications.
Async I/O is also called NIO; remember the first slides.
Most Operating Systems (OS) nowadays implement many asynchronous calls with different strategies.
For example, with Unix systems you have polling, signals or select loops, while Windows has support for callback functions.
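As a rough illustration of the select-loop strategy the JDK exposes, this sketch (illustrative, not from the talk) registers a server channel with a java.nio Selector and waits until a client connection makes it "acceptable":

```java
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorDemo {
    // Returns the number of ready channels after a client connects.
    static int readyAfterConnect() throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false); // required before registering
        server.register(selector, SelectionKey.OP_ACCEPT);

        // A client connects; the selector will report the server channel as ready.
        SocketChannel client = SocketChannel.open(
                new InetSocketAddress("localhost", server.socket().getLocalPort()));

        int ready = selector.select(1000); // waits until at least one channel is ready
        client.close();
        server.close();
        selector.close();
        return ready;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ready channels: " + readyAfterConnect());
    }
}
```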
Java and the JVM started to offer integration with package java.nio
NS: since...
1.4, back in 2002.
Java 1.7 introduced a new API for files and more NIO goodies, usually referred to as NIO.2.
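A small sketch of that NIO.2 style (names are my own, not from the talk): AsynchronousFileChannel.read() returns immediately, and a CompletionHandler callback runs once the data is actually available.

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class Nio2Demo {
    static String readAsync(Path path) throws Exception {
        AtomicReference<String> result = new AtomicReference<>();
        CountDownLatch done = new CountDownLatch(1);
        AsynchronousFileChannel ch =
                AsynchronousFileChannel.open(path, StandardOpenOption.READ);
        ByteBuffer buf = ByteBuffer.allocate(64);
        // read() returns immediately; the handler runs when the I/O completes.
        ch.read(buf, 0, null, new CompletionHandler<Integer, Void>() {
            public void completed(Integer bytes, Void att) {
                buf.flip();
                result.set(StandardCharsets.US_ASCII.decode(buf).toString());
                done.countDown();
            }
            public void failed(Throwable exc, Void att) { done.countDown(); }
        });
        done.await(); // only the demo waits; real code would keep working
        ch.close();
        return result.get();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("nio2", ".txt");
        Files.write(tmp, "hello".getBytes(StandardCharsets.US_ASCII));
        System.out.println(readAsync(tmp));
        Files.delete(tmp);
    }
}
```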
The drawback of this approach is that our software becomes more complex;
if you have ever tried to use the Selector API, you know this.
You freak dependency-haters
The main difference is that in Netty all API definitions are asynchronous in nature, no matter what.
NS: What does it mean?
It is important to be aware of this encapsulation.
Netty can be used as a general-purpose library, a replacement for Java NIO for network operations.
Just like you would use Guava or Apache Commons.
These days most projects tend to use HTTP for everything, from sending large files to building web services.
But HTTP is not always the answer to everything.
Just like email works over SMTP, or non-critical data can be sent via UDP.
For example, StatsD is a daemon that receives different types of metrics and statistics over UDP and periodically sends out aggregates.
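To show how simple a fire-and-forget UDP metric can be, here is an illustrative plain-Java sketch; the metric name page.views and the StatsD-style "name:value|c" line format are just examples, and the local receiver only stands in for the daemon.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpMetricDemo {
    // Sends a StatsD-style counter line and returns what the receiver saw.
    static String sendAndReceive(String metric) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0); // stand-in daemon
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = metric.getBytes(StandardCharsets.US_ASCII);
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] buf = new byte[512];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receiver.setSoTimeout(2000);
            receiver.receive(packet); // fire-and-forget on the wire; we read it here
            return new String(packet.getData(), 0, packet.getLength(),
                    StandardCharsets.US_ASCII);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendAndReceive("page.views:1|c"));
    }
}
```

No connection, no acknowledgement: exactly why UDP suits non-critical metrics.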
More often than you think you can find yourself trying to implement some protocol of your own.
Think about mobile messaging, real-time exchanges between ad-servers, or you wanting to invoke some remote method in another server, like an RPC
From my personal experience, I can confirm that this is true in every way.
It’s really easy to use Netty once you get used to the API.
Soon you will notice how rapidly you are developing your applications
If “implementing a protocol” sounds like too much for you, it’s not.
Don’t think of it like that.
You’ll be implementing exactly what you need for your program
NS: By the way...
By the way, most common protocols (HTTP, UDP, SSL…) are supported out-of-the-box, with many more coming.
3 → 4 → 4.1 → 5.0: even basic, day-to-day things you'll use change between versions.
This protocol defines that a daytime service simply sends the current date and time as a character string, regardless of the input.
One daytime service is defined as a connection-based application on TCP. A server listens for TCP connections on TCP port 13. Once a connection is established, the current date and time is sent out over the connection as an ASCII character string (and any data received is thrown away). The service closes the connection after sending the string.
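For contrast with the Netty version built in the following slides, a classic blocking implementation of that behavior could look like this sketch (illustrative names, ephemeral port instead of 13, one connection at a time):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.text.SimpleDateFormat;
import java.util.Date;

public class BlockingDaytimeServer {
    // Serves exactly one connection, RFC 867 style: send the date, then close.
    static void serveOnce(ServerSocket server) throws Exception {
        try (Socket conn = server.accept()) { // blocks until a client connects
            String now = new SimpleDateFormat("EEE MMM d HH:mm:ss yyyy").format(new Date());
            OutputStream out = conn.getOutputStream();
            out.write((now + "\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();
        } // closing the socket ends the session; any input is simply ignored
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread t = new Thread(() -> {
                try { serveOnce(server); } catch (Exception e) { throw new RuntimeException(e); }
            });
            t.start();
            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                InputStream in = client.getInputStream();
                byte[] buf = new byte[128];
                int n = in.read(buf);
                System.out.print(new String(buf, 0, n, StandardCharsets.US_ASCII));
            }
            t.join();
        }
    }
}
```

Note how both accept() and the implicit write are blocking calls; the Netty version expresses the same protocol without ever blocking a thread.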
Handlers in Netty are where you specify your business logic and write the actual application code you’ll need
A ChannelHandler will handle the operations for that Channel.
A Channel is, roughly, a connection.
Inbound handlers are responsible for handling incoming traffic and dispatching events to the next handler,
while outbound handlers do exactly the same in the other direction.
Before, in version 3, they were known as Upstream and Downstream (like Jenkins jobs).
But it doesn’t stop there
Move on. We have our class in place, time to do real work.
We need to hook up into some method to do the actual work.
But RFC says: “Once a connection is established the current date and time is sent out the connection. “
A handler has a natural lifecycle, much simplified in Netty 4.
ByteBuf is a container to hold bytes, with most common operations implemented in ByteBufUtil helper class.
You may know that Java offers its own java.nio.ByteBuffer class for the same purpose, but it has too many caveats to stick with it.
We will see much more about ByteBuf in chapter 4
Keep in mind that all of this is done for the sake of performance.
Netty keeps internal pools to reuse space and prevent excessive context switching, memory leaks, and other typical problems you would face writing a network application on your own.
NS: You know how to run this?
Forget about classloaders, logger problems…
If you want to call this a microservice, be my guest
I don’t know what to think after Fowler’s article about them.
Similar to JDK’s ExecutorService but with many power-ups.
An EventLoopGroup is a multithreaded event loop that will handle I/O operations
One EventLoop instance will handle I/O operations for a Channel.
To put it another way:
When we receive a connection, a Channel is registered for that connection, and it gets assigned to an EventLoop inside the EventLoopGroup.
Once assigned, that EventLoop is responsible for handling all I/O operations for that connection.
This way, with a few EventLoop instances, each always running on the same thread, we can handle many Channels.
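The idea of one thread servicing many Channels can be sketched with plain java.nio (a toy analogy for an event loop, nothing like Netty's actual implementation): one Selector, one thread, many connections, each echoed back and closed.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class MiniEventLoop {
    // One thread + one Selector servicing many channels: accept connections,
    // echo one message back, close. Stops after `connections` echoes.
    static void loop(ServerSocketChannel server, int connections) throws IOException {
        Selector selector = Selector.open();
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int served = 0;
        while (served < connections) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel ch = server.accept(); // one channel per connection
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(256);
                    if (ch.read(buf) > 0) {
                        buf.flip();
                        ch.write(buf); // echo it back
                    }
                    ch.close();
                    served++;
                }
            }
        }
        selector.close();
    }

    // Blocking client used only to exercise the loop.
    static String ask(int port, String msg) throws IOException {
        try (SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", port))) {
            ch.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.US_ASCII)));
            ByteBuffer buf = ByteBuffer.allocate(256);
            ch.read(buf);
            buf.flip();
            return StandardCharsets.US_ASCII.decode(buf).toString();
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        Thread t = new Thread(() -> {
            try { loop(server, 2); } catch (IOException e) { throw new RuntimeException(e); }
        });
        t.start();
        int port = server.socket().getLocalPort();
        System.out.println(ask(port, "one") + " " + ask(port, "two"));
        t.join();
        server.close();
    }
}
```

One thread handled both connections; a thread-per-connection server would have needed two.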
If you noticed, this is a big difference from most regular synchronous servers, where each connection is assigned to its own thread.
This is the secret behind Netty’s thread model, and what vert.x and others use.
Vert.x is actually using Netty, with Netty defaults… I believe magic happens there
It changed for the better since version 3, with a lot of lessons learned, like: context switching is hard.
This is version 4.0; with version 4.1 and even 5.0 you can customize it, but it still follows this pattern.
NS: Going back to the example
The “boss” group is the one that accepts incoming connections.
The “worker” group is the one that will handle all the work once a connection is accepted.
The “boss” group will register and pass the connection to the “worker” group.
For the Daytime protocol we need to accept incoming TCP/IP connections, so we need to use a ServerSocketChannel implementation.
In this case, we are using NioServerSocketChannel, which, like the NioEventLoopGroup before it, uses a NIO Selector to accept new connections.
This option will set the maximum queue length for incoming connections.
If a connection request arrives when the queue is full, the connection is refused.
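The same knob exists in plain Java, which is a handy way to see that the backlog is a request to the OS rather than something the library enforces (sketch, illustrative only):

```java
import java.net.ServerSocket;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        // The second argument is the requested accept-queue length; the OS may
        // round or cap it (e.g. via net.core.somaxconn on Linux), just as it
        // does for Netty's ChannelOption.SO_BACKLOG.
        try (ServerSocket server = new ServerSocket(0, 1)) {
            System.out.println("listening on port " + server.getLocalPort());
        }
    }
}
```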
What makes this option special is that this restriction is not handled by Netty. It’s not even under JVM’s control.
It’s a platform-dependent option that the underlying operating system will decide how to handle.
Netty uses a ChannelPipeline to define your application workflow. You may add one or more handlers to the ChannelPipeline
For every connection received, this workflow will be executed.
By invoking ChannelPipeline.addLast(), you append handlers at the end and define the order of execution.
There are more methods available, but it’s easier to keep it this way
We’re usin
NS: I want to use ChannelFuture to introduce for all the K.I.A that happened here.
Netty is HUGE, but when you get all of its concepts, it’s always the same
Tons of options; we’re constantly exploring them, trying to find what works best.
Here is where your devops team can help you understand them
“Understand your domain”, you actually need to understand what’s happening
There’s no need to live in the past. You still want all the DI goodies.
It’s this easy! Instead of using plain new.
We did some paranoid microbenchmarks, and every nanosecond is worth it.
We always need to respond in under 100 ms.
You have to measure everything. We use both StatsD and Caliper.
YourKit or any other profiler is your friend
It doesn’t make any sense!
Please share your configs
Even if you’re not using Netty directly
Snowball effect
You’re handling 500 connections per machine → 90 thousand
You won’t get an OOM error; everything just gets slower (and because you’re measuring in real time, you’ll see it).
You have to handle this, on your own
Something big is coming… and it’s not winter.
I’ve seen the TechEmpower benchmark results a couple of times… it happens that I’m a big fan of those results…
I was surprised about Vert.x being first, and even more surprised being better than Netty
NS: So, I had to do it...
Round 10, April 2015
Round 9, Round 8… one per year… pretty similar results…
Where’s vert.x?