The goal of HTTP/2 is to increase the perceived performance of the web browsing experience. This is achieved by multiplexing over TCP and Server Push among other techniques. What implications does this have for developers? How does Servlet 4.0 embrace HTTP/2 and what support is there in JDK 9? We will see, with code examples, what the future of developing with HTTP/2 might look like.
3. What’s on?
1 Why Do We Need HTTP/2
2 Work-Arounds to HTTP1.1
3 HTTP Sockets
4 Topline HTTP/2 Features
5 Servlet 4.0 Features
6 Server Support
7 What about SPDY?
8 HTTP/2 Performance
9 Tracking HTTP/2 Adoption
10 Summary and Q&A
4. ■ Increase perceived performance of web
■ HTTP protocol not suitable
■ Since 2011 average web page size
increased by over 300%
■ The problem with HTTP/1.1
Why Do We Need HTTP/2
The Goal of HTTP/2
Source: HTTPArchive.com
■ Browser requests resources in parallel
■ HTTP/1.0: one request per TCP connection
■ HTTP1.1 pipelining: multiple requests
■ Server responds in sequence
■ A delay causes head-of-line blocking
Why Do We Need HTTP/2
How does a browser load a webpage?
[Diagram: client loads index.html, style_1.css and logo.jpg from the server over one open/close connection, shown without pipelining and with pipelining]
6. Multiple connections
However, there are two issues:
1. TCP sockets expensive
2. Browser max connections
Work-Arounds to HTTP1.1
Solution to Head-Of-Line Blocking
[Diagram: three parallel open/close connections: connection 1 loads style_1.css, connection 2 loads javaScript_1.js, connection 3 loads image_1.png]
9. ■ Embed image in web page
■ Base64 encoded
■ Time spent decoding
■ Caching difficult
Work-Arounds to HTTP1.1
Inlined Assets
10. One image file consists of many smaller images
Image sprites from Amazon, Google and Facebook.
Work-Arounds to HTTP1.1
Image Sprite Sheet
11. Work-Arounds to HTTP1.1
Domain Sharding
[Diagram: a web page sharded across two domains, x.example.com (server 1) and y.example.com (server 2): one serves logo.jpg and icon.jpg, the other header.css and menu.css]
■ Not much specified
■ Sockets are throw-away resources
■ No maximum number of open sockets
HTTP Sockets
What HTTP1.1 Says About Sockets
13. ■ Much is specified
■ Scarce resources
■ Ideally only open one socket
HTTP Sockets
What HTTP/2 Says About Sockets
14. HTTP Sockets
All connections now operate as one connection
[Diagram: several client/server open-close connection pairs collapse into a single connection]
15. ■ HTTP/2 is comprised of two specifications:
■ Hypertext Transfer Protocol version 2 - RFC7540
■ HPACK - Header Compression for HTTP/2 - RFC7541
■ Binary Protocol Based on Frames
■ Features:
■ Request/Response Multiplexing
■ Binary Framing
■ Header Compression
■ Stream Prioritization
■ Server Push
■ Upgrade From HTTP1.1
Topline HTTP/2 Features
What’s new
■ Most important feature
■ Requests and responses are multiplexed
■ Fully bi-directional communication
■ Concepts:
■ Connection – A TCP socket
■ Stream – A channel of communication
■ Message – A request/response or control message
■ Frame – The smallest unit within a communication
■ Resolves head-of-line blocking
■ Communication broken down into frames
■ Frames facilitate interweaving the logical streams
Topline HTTP/2 Features
Request/Response Multiplexing
18. ■ Request/Response Multiplexing
■ Interweave the logical stream over a single TCP
Stream 3 sends its headers, then its body; the server responds with stream 2 before it
receives the complete stream 3
Topline HTTP/2 Features
Request/Response Multiplexing
[Diagram: browser and server exchange interleaved frames over one connection: STREAM 1 HEADERS, STREAM 3 DATA, STREAM 1 DATA, STREAM 2 HEADERS, STREAM 3 HEADERS, STREAM 2 DATA]
■ Decomposition of the frame
■ The frame has a header, and the header carries some information
■ Type fields can be:
■ HEADERS corresponds to the HTTP headers
■ DATA corresponds to the HTTP request body
■ PRIORITY specifies stream priority
■ PUSH_PROMISE notifies of server push intent
■ RST_STREAM notifies of an error; the client uses it to reject a push
■ SETTINGS, PING, GOAWAY, WINDOW_UPDATE, CONTINUATION
Topline HTTP/2 Features
Binary Framing
LENGTH (24) | TYPE (8) | FLAGS (8)
R (1) | STREAM IDENTIFIER (31)
FRAME PAYLOAD (0..n)
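The layout above can be decoded with a few bit operations. The sketch below only illustrates the 9-octet frame header from RFC 7540 §4.1; the Frame class and parseFrameHeader method are made-up names for this example, not part of any library.

```java
public class Frame {
    // Decode the 9-octet HTTP/2 frame header (RFC 7540 §4.1).
    // Returns {length, type, flags, streamId}.
    public static int[] parseFrameHeader(byte[] b) {
        int length = ((b[0] & 0xFF) << 16) | ((b[1] & 0xFF) << 8) | (b[2] & 0xFF); // LENGTH (24)
        int type   = b[3] & 0xFF;                                                  // TYPE (8)
        int flags  = b[4] & 0xFF;                                                  // FLAGS (8)
        int streamId = ((b[5] & 0x7F) << 24) | ((b[6] & 0xFF) << 16)               // R bit masked off
                     | ((b[7] & 0xFF) << 8) | (b[8] & 0xFF);                       // STREAM ID (31)
        return new int[] { length, type, flags, streamId };
    }
}
```

For example, the octets 00 00 05 01 04 00 00 00 03 decode to a HEADERS frame (type 1) of length 5 with the END_HEADERS flag (0x4) on stream 3.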
20. Mapping the HTTP Request to Frames
Topline HTTP/2 Features
Header Compression
HTTP request Header Frame
GET /index.html HTTP/1.1
Host: example.com
Accept: text/html
HEADERS
+ END_STREAM
- END_HEADERS
:method: GET
:scheme: http
:path: /index.html
:authority: example.com
accept: text/html
21. Mapping the HTTP Response to Frames
Topline HTTP/2 Features
Header Compression
HTTP response Header Frame
HTTP/1.1 200 OK
Content-Length: 25
Content-Type: text/html
May The Force Be With You
HEADERS
- END_STREAM
+ END_HEADERS
:status: 200
content-length: 25
content-type: text/html
Data Frame
DATA
+ END_STREAM
May The Force Be With You
■ Reduces header duplication
■ scheme, accept, user-agent
■ Server and client each maintain a table
of headers
■ Only the differences are sent; repeated
headers reference their table entry
Topline HTTP/2 Features
HPACK header compression
HTTP Request 1
:method GET
:scheme https
:host example.com
:path /index.html
:authority example.org
:accept text/html
user-agent Mozilla/5.0
HTTP Request 2
:method GET
:scheme https
:host example.com
:path /info.html
:authority example.org
:accept text/html
user-agent Mozilla/5.0
HEADERS frame (Stream 1)
:method GET
:scheme https
:host example.com
:path /index.html
:authority example.org
:accept text/html
user-agent Mozilla/5.0
HEADERS frame (Stream 3)
:path /info.html
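The table-based delta idea above can be sketched in a few lines of Java. This is only a toy model of the concept, not the real RFC 7541 HPACK encoding (which uses indexed static/dynamic tables and Huffman coding); the ToyHeaderTable class is invented for illustration.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of HPACK's core idea: both ends remember headers already sent,
// so a later request transmits only the headers whose values changed.
public class ToyHeaderTable {
    private final Map<String, String> table = new LinkedHashMap<>();

    // Return only the headers that differ from the remembered table,
    // updating the table as we go.
    public List<String> encode(Map<String, String> headers) {
        List<String> delta = new ArrayList<>();
        headers.forEach((name, value) -> {
            if (!value.equals(table.get(name))) {
                delta.add(name + ": " + value);
                table.put(name, value);
            }
        });
        return delta;
    }
}
```

Fed the two requests above, the first encode() would emit every header, while the second would emit only ":path: /info.html".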
23. ■ Attach priority information to streams
■ Priority can be entered in the header frame or the priority frame
■ Only a suggestion to the server
Topline HTTP/2 Features
Stream Prioritization
[Diagram: stream dependency trees: B, D and C depend on A with weights 2, 14 and 10; B and C depend on A with weights 4 and 8]
■ Eliminates the need for resource inlining
■ The server can proactively send resources to the client
■ The client can reject a PUSH_PROMISE by responding with RST_STREAM
Topline HTTP/2 Features
Server Push
■ How do we talk in HTTP/2?
■ Two ways to talk HTTP/2:
■ HTTP1.1 in clear text: send an Upgrade header to switch to protocol h2c
■ HTTPS: use ALPN (a TLS extension) and communication continues in h2
■ However, neither Firefox nor Chrome supports h2c
■ HTTPS all the way
Topline HTTP/2 Features
Upgrade Negotiation
■ Servlet API well positioned to enable HTTP/2 optimisation
■ Servlet 4.0 Appropriate Abstraction
■ Provide high level abstraction
■ Don’t want to program frames at the servlet layer
■ OUT: one request = one response
■ IN: one request = multiple responses
Servlet 4.0 Features
Appropriate Abstraction
27. Servlet 4.0 Features
Server Push
■ Most visible improvements in servlets
■ Improve perceivable performance
■ Best place to know what resources a request needs
■ logo image, stylesheet, menu javascript etc
■ Not a replacement for websockets
■ JSF and other frameworks will make good use of Server Push
■ Implemented as PushBuilder API
28. Servlet 4.0 Features
Typical Journey
1. Browser requests index.html
2. Server discovers need for css and js
3. Get PushBuilder from HTTP request
4. Set path to css and invoke push
5. Set path to js and invoke push
6. Server then responds with index.html
Note
1. The PushBuilder can be reused
2. index.html returned after pushed resources
RST_STREAM rejects cached resources
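The journey above can be sketched in code. To keep the sketch runnable without a servlet container, PushBuilder below is a minimal stand-in interface exposing only the two calls used; in a real Servlet 4.0 application the builder comes from javax.servlet.http (the final spec obtains it via HttpServletRequest.newPushBuilder(), which returns null when push is unavailable; early drafts, as in this talk, named it getPushBuilder()).

```java
import java.util.ArrayList;
import java.util.List;

public class PushDemo {
    // Stand-in for javax.servlet.http.PushBuilder: just the two calls used here.
    interface PushBuilder {
        PushBuilder path(String path); // must be set before each push()
        void push();                   // fire the push; the builder is reusable
    }

    // Records pushed paths, standing in for the container's push machinery.
    static final List<String> pushed = new ArrayList<>();

    static PushBuilder newPushBuilder() {
        return new PushBuilder() {
            private String path;
            @Override public PushBuilder path(String p) { this.path = p; return this; }
            @Override public void push() { pushed.add(path); }
        };
    }

    // The "typical journey" for a request to index.html:
    static void doGet() {
        PushBuilder pushBuilder = newPushBuilder();  // 3. get a PushBuilder
        pushBuilder.path("/style_1.css").push();     // 4. push the stylesheet
        pushBuilder.path("/javaScript_1.js").push(); // 5. reuse it for the script
        // 6. ...then write index.html to the response as usual
    }
}
```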
29. ■ javax.servlet.http.PushBuilder
■ To use server push, obtain a reference to a PushBuilder from an
HttpServletRequest, mutate the builder as desired, then call push()
Servlet 4.0 Features
PushBuilder
30. Servlet 4.0 Features
PushBuilder
■ javax.servlet.http.PushBuilder
■ constructed with request method set to GET
■ conditional, range, expectation, authorization and referrer headers
are removed
■ cookies are only added if the maxAge has not expired
■ referer header set to the request URL plus any query string present
■ only required setting is the URI path to the resource
■ must be set before every call to push()
33. Servlet 4.0 Features
JSF Use Case
■ Framework use case most important
■ Dependent on knowing the resources the client requires
■ Server side web frameworks best placed to take advantage of server push
34. Servlet 4.0 Features
Java 9 Support
■ JEP 110
■ Does not reinvent HttpClient
■ Supports HTTP1.1 and HTTP/2
■ Full server push support
■ Two modes: blocking and non-blocking
[Diagram: HttpClient.Builder creates an HttpClient; HttpRequest.Builder creates HttpRequests (GET, POST)]
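A sketch of what client code might look like. This uses the java.net.http API as finalized in Java 11; in JDK 9 the same client shipped as the incubating jdk.incubator.http module with a slightly different surface, so treat these package and method names as the Java 11 ones, not JDK 9's.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

public class Http2ClientDemo {
    // Build a client that prefers HTTP/2 and falls back to HTTP/1.1
    // if the server cannot negotiate h2.
    public static HttpClient buildClient() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = buildClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.org/"))
                .GET()
                .build();
        // Blocking send:
        //   HttpResponse<String> r = client.send(request, HttpResponse.BodyHandlers.ofString());
        // Non-blocking send:
        //   client.sendAsync(request, HttpResponse.BodyHandlers.ofString());
        // (sendAsync also has an overload taking an HttpResponse.PushPromiseHandler
        //  to receive server-pushed resources.)
    }
}
```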
37. Servlet 4.0 Features
Disable/Reject Server Push
■ Clients can explicitly disable server push by sending a SETTINGS frame
with SETTINGS_ENABLE_PUSH set to 0
■ Servlet containers must honor a client’s request to not receive a pushed
response
■ Browser already has the resource in its cache.
■ RST_STREAM rejects cached resources
38. Server Implementation
Servlet Support
■ GlassFish 5.0
■ Reference implementation
■ Payara 5.0
■ Recently created a branch for Java EE 8 development
■ Jetty Stable-9 (9.3.8.v20160314)
■ org.eclipse.jetty.servlets.PushCacheFilter/PushBuilder
■ WildFly 10 (Undertow)
■ Initial PushBuilder support implemented in Undertow master
■ Tomcat 9.0.0.M4 alpha
■ Initial support for Servlet 4.0's PushBuilder in the
javax.servlet.http package
■ Netty 4.1
■ HTTP/2 implementation takes full advantage of headline features
39. Server Implementation
Tomcat Configuration
■ TLS required
■ TLS virtual hosting and multiple certificates are supported: for a single
connector, each virtual host is able to support multiple certificates
40. What About SPDY?
Stepping Stone to HTTP/2
■ Primary focus reduce web page load time
■ Same as HTTP/2
■ Formed first draft of HTTP/2
■ Stepping stone to HTTP/2
■ Chrome will drop support in favour of HTTP/2
41. HTTP/2 Performance
Overriding Goal to Improve Performance
■ The goal of HTTP/2 is to improve performance
■ Cloudflare HTTP/2 demonstration tool
cloudflare.com/http2
■ Anthum's HTTP vs HTTPS
httpvshttps.com
42. Tracking HTTP/2 Adoption
How to Track Adoption
■ Servers advertise HTTP/2 support during the SSL handshake
■ Using shodan.io searches can be made using the ssl.alpn filter
43. Tracking HTTP/2 Adoption
Adoption Data
■ Analysis
■ Between December 2015 and April 2016 HTTP/2 adoption rose to 10%
■ However, growth comes from the incumbent protocol, SPDY, upgrading to HTTP/2
■ Combining both protocols, we see no meaningful change
45. What Servlet 4 and
HTTP/2 mean to you
Alex Theedom @alextheedom
Editor's Notes
Introduction: Senior Java Developer, Microservices, background in ATM, middleware, learning systems. Mentor Jersey Coders. Co-author professional Java EE design patterns.
Can contact me via email and twitter. I want to hear from you. Any questions or queries.
Here is how I am going to spend the next 45 mins or so.
I will establish the reasons why we need HTTP/2
Then I will look at some of the work-arounds that are used to overcome shortcomings in HTTP1.1
Then I will focus in on the contrasting ways that HTTP1 and 2 perceive the usage of sockets
Then I will look at the topline features of HTTP/2
And then how they manifest themselves in Servlet 4 with plenty of code examples
A quick look at the current state of server support for servlets
A mention of SPDY and what's happening there
And then a look at some tools that demonstrate how much more performant HTTP/2 really is
Then I will examine how HTTP/2 is being adopted by providers
Then I will finish with a round-up of how you can get involved and some Q&A
Goal is increased perceived performance
Download more resource -> 21 yr when HTTP standardised
Webpage was simple, image, < 1k
HTTP designed for this
-> 2016 webpage completely different
120 resources, required for rich experience
Protocol wasn’t designed for rich, is inefficient
Last 5 years > 300%
Demands on HTTP forced web devs into work-arounds
The goal of HTTP/2 is to increase the perceived performance of the web browsing experience.
Web pages download more resources now compared to 21 years ago when the HTTP protocol was standardised. A typical page back then was just a simple HTML page with a few images and less than 1K in size. The protocol was designed to serve this kind of page.
Fast forward to 2016 and the average web page looks completely different. It is still essentially HTML but now it is dependent on an average of 120 resources and these resources are images, javascript files and stylesheets. They are essential for today’s rich web experiences. We want this experience.
In fact over the last five years the average size of a top 1,000 web page has increased by over 300% to over 2MB.
The protocol wasn't designed to deliver this kind of web experience and is very inefficient at delivering the kind of content we are now accustomed to. Over the last 21 years, as the demands on the protocol increased, web developers looked for ways to overcome the protocol's limitations and have developed some very effective solutions. However good these solutions are, they are just work-arounds, hacks to compensate for the protocol's limitations. We will be looking at some of those work-arounds later on. If the protocol was better these work-arounds would not be necessary.
Lets start by looking at the problems with HTTP/1.1.
Browser requests page, finds needs more resources
Browser requests these resources one at a time.
HTTP1 allowed only one request per TCP connection
HTTP1.1 allowed multiple requests – pipelining
Now, request sent for resources then browser waits for response
Server response in order
The image returns quickly, but the css takes time; the browser must wait, and the other resources are not returned until the css arrives
Head of line blocking
The browser requests a webpage from the server, finds that it requires more resources in order to render the page properly.
It then starts to request these resources one at a time. So the browser will load stylesheet number one and wait for the server to respond with this resource, then the browser requests the logo image from the server and waits for the response.
In the beginning HTTP/1.0 allowed only one request to be made via a single TCP connection. In HTTP/1.1 this was addressed with pipelining and the browser was able to make multiple requests.
A request is sent for all the required resources: a request is sent for stylesheet 1, then stylesheet 2, then image 1 and then javascript 1, and the browser waits for the server to respond with the requested resources. The server will respond in the order in which the resources were requested. But what happens if the image returns quickly and the stylesheet takes time to return to the client? The browser must wait until the stylesheet file is returned; the other requested resources are not returned until the server responds with the stylesheet file. So we are blocked waiting for a resource.
This problem is called head-of-line blocking.
Instead of one connection, we have multiple connections
C1 we load stylesheet, c2 we request javascript and c3 we load the image
OK but
TCP connections are expensive
Max connections per browser
So instead of one connection, multiple connections are set up.
For example: on connection 1 the browsers loads the stylesheet, on connection 2 it loads the javaScript, on connection 3 it loads the image file and so on.
This an acceptable solution but this solution has two main issues.
TCP sockets are expensive to create, not an efficient use of TCP connections, and
for a given browser there is a maximum number of connections per host
Given these restrictions we looked for a different way to optimize page loading.
Create one large css/javascript
Will reduce number of HTTP requests
Create one large CSS file containing all the site's styles and one large JavaScript file with all the required dynamic logic; even the page itself could contain all its required CSS and JavaScript.
This will reduce the number of HTTP requests for a given web page to one.
Special case of file concatenation
Embedding stylesheets, javascript files in webpage
A page looks like this
When run through an inlining tool
Will look like this
Also reduces number of HTTP requests for a given page
Another way is to use asset inlining.
Inlining assets is a special case of file concatenation. It’s the practice of embedding CSS stylesheets and external JavaScript files directly into an HTML page. For example, if you have web page that looks like this:
You could run it through an inlining tool to get something like this.
You can see that the stylesheet is now embedded inline with the HTML code, as is the JavaScript. This can reduce the number of HTTP requests for a given web page to one.
We can embed images
Base64 encoded
Uploaded to server
Decoded on browser
Time spent decoding
Caching is a challenge
As we have just seen resources can be embedded directly inside the web page. We have seen that the CSS and JavaScript can be embedded into the HTML.
The way images are embedded is that the image is Base64 encoded and inserted into the web page; it is deployed to the webserver with the image encoding inline, and then when the web page is loaded by the browser it is extracted and decoded using Base64.
A lot of time is spent decoding and caching cannot be easily done.
One image file contains many images
Cut images from this larger image
Reduces the number of HTTP requests required
An image sprite is a single image file made up of all the sprites you use on the web page. Instead of 10 images we have just one. Each sprite is cut from the sprite sheet on the client side before use.
Image sprites in the wild from Amazon, Google and Facebook.
Overcome max connections we use domain sharding
Host stylesheet on one server and images on another
Not bound to one host
Explain image…
Opening 2 connection rather than 4
To overcome the browser's maximum number of connections to a host, we can host the stylesheets on one host and the JavaScript on another. This way we are not bound to one host.
Here we open a connection to server 1 and request the logo.jpg and icon.jpg files, and we open a connection to a different server and request the header.css and the menu.css. So we have opened two connections to two domains rather than four connections to one domain.
Seen as a throw away resource
Industry convention opens 5 or 8
Mosaic opened 1, IE open 4
What does HTTP1.1 say about sockets.
Sockets are seen by HTTP1.1 as a throw-away resource, as the specification does not say much about how they are to be used, nor is there any mention of how many sockets a browser should open to a given host.
It is just by industry consensus that they open 5 or 8 connections. Mosaic opened just one and then Internet Explorer opened 4.
Even though the specification says nothing about the number of open sockets, the browsers decided to restrict it to an arbitrary number.
In contrast, seen as a scarce resource
Says much
Ideally open one
In contrast, sockets are seen as a scarce resource and the specification says much about how they are to be used. Ideally a browser should open only one socket to the server and do all it needs over this one socket.
Everything that was done in N sockets in HTTP1 is now done over one socket connection in HTTP/2.
Lets now have a look at the topline features of HTTP/2 Specification.
HTTP/2 is comprised of two specifications:
Hypertext Transfer Protocol version 2 - RFC7540
HPACK - Header Compression for HTTP/2 - RFC7541
Request/Response Multiplexing – the most important change - Over a single TCP connection request and response is multiplexed with full bi-directional communication.
Binary Framing - The TCP connection is broken down into frames
Header Compression – remove duplication from headers
Stream Prioritization – give priority processing to streams
Server Push – anticipates required assets
Upgrade From HTTP1.1 – how to upgrade to HTTP/2
Over single connection, resonse/request is multiplexes and fully bidirectional
Connection – A single TCP socket
Stream – A channel within a connection
Message – A logical message, such as a request or a response and control message
Frame – The smallest unit of communication in HTTP/2. A request/response is broken down into smaller parts.
Where in HTTP1 you would have just one HTTP request, with HTTP/2, we break it down into smaller frames
this is how we resolve head of line blocking.
Over a single TCP connection request and response is multiplexed with full bi-directional communication.
Lets defined some terms used when talking about multiplexing:
Connection – A single TCP socket
Stream – A channel within a connection
Message – A logical message, such as a request or a response and control message
Frame – The smallest unit of communication in HTTP/2. A request/response is broken down into smaller parts.
For a given request you would have just one HTTP request now, with HTTP/2, we break it down into smaller frames and this is how we resolve head of line blocking.
Now that communication has been broken down into frames you can interweave the logical streams over a single TCP connection and the issue of head-of-line blocking is removed.
The TCP connection is broken down into frames.
For a given connection you have multiple streams, for each stream you can have multiple messages and for each message you have multiple frames.
You can see that there is a hierarchy.
Frame is the fundamental unit of communication
Interweave logical streams
the server does not have to wait for the completed communication before it does something else
So head of line blocking problem does not occur.
Once broken down into frames you can interweave the logical stream over a single TCP connection.
Stream 3 sends its headers first, then later it sends its data, which is its HTTP request body, and before the server receives the complete stream 3 it sends back a response to stream 2.
The server does not have to wait for the completed communication before it does something else, so the head-of-line blocking problem does not occur.
The frame has a header and the header consists of some information:
A frame consists of length of payload (24), type(8), some configuration flags(8), a reserved bit, Stream identifier(31) and Frame Payload (0 ...)
The stream identifier refers to the 1, 2, 3 in the previous diagram.
There are many different types of frames: Type fields can be
DATA corresponds to the HTTP request body. If you have a large body you may have multiple of them data 1, data2 etc
HEADER, corresponds to the HTTP headers
PRIORITY, refers to the stream priority
RST_STREAM, notifies that there is an error, and allows the client to reject a server push promise because it already has the resource
PUSH_PROMISE
Lets see how the HTTP request is mapped to frames
Left HTTP request, right it is mapped to header frame
END_STREAM if true (plus) last frame for this stream
END_HEADERS if false not last frame that contains header info
Map Method, scheme …
Lets look at response
On the left we have an HTTP request and on the right we have it mapped into a header frame.
In the header frame you have two configurations. The first is END_STREAM, which is set to true (the plus sign means true); this means that this is the last frame for this request. If you set END_HEADERS to true, then this frame is the last frame in the stream that contains header information.
Then we map the familiar header information from the HTTP 1.1 request.
The colon denotes that this is a pseudo-header and references its definition in the HTTP/2 specification.
Lets look at the response.
Left is HTTP response header, on right split into two frames: HEADER frame and DATA frame
END_STREAM is false, this is not the last frame
END_HEADERS is true, this is the last header frame
Header frames contain a lot of duplicate information. Can we optimise?
On the left is an HTTP1.1 header response. This splits into two frames: header frame and a data frame.
In the header frame the end_stream is minus because this is not the last frame and the end_header is true as this is the last frame with header information.
In the data frame the end_stream is marked plus as it is the last frame.
The header frame contains a lot of information that has been duplicated between requests. How can we optimize this?
Between requests, same info
Only different is path, so why send all other info
Use HPACK header compression
Instead send deltas
HPACK keeps table headers on client/server. Subsequent requests/responses references the header info
Between requests there is a lot of information that is the same. Between request 1 and 2 the only difference is the path. So why send the same information over and over again?
Instead of sending the data over and over again, we use HPACK header compression. HPACK keeps a table of the headers on the client and server; then, when the second and subsequent headers are sent across, it just references the header number in the header table.
Then the server/client knows which header you are actually using.
Attach priority information to streams, priority over resources
Entered in header/priority frame
Only advice to the server; free to ignore if it cannot respect the priority
HTTP/2 provides the ability to attach priority information to streams which gives priority to one resource over another.
The priority can be entered in the header frame or the priority frame.
We can also define a hierarchy between streams; however, it is only advice to the server, which is completely free to ignore the priority information. If it cannot respect the priority it can ignore it.
Stream A, b, d, c
C will take 3 times the resources as B
Eliminate the need for resource inlining.
Server proactively sends resources to client.
Prepopulates the browser's cache with resources so that when a resource is needed it is already available and does not need to be requested, saving time.
The way it works.
The client sends header frames to get index.html (at some point the server will respond with the file.)
If the browser is requesting that file it will also want the other resources, in this case a CSS file and an image.
The server can proactively send resources to the client even though the client did not ask for them.
server sends a push promise frame for the CSS and a push promise frame for the image.
Then the server sends the index file.
The client can ignore those resources: if they are in the cache, it declines the push promise.
Eliminate the need for resource inlining.
The server can proactively send resources to the client.
The server tries to prepopulate the browser's cache with resources so that when the resource is needed it is already available and does not need to be requested, saving time.
The way it works.
The client sends header frames to the server to get a file, in this case an index.html and at some point the server will respond with the file.
The server knows that if I am requesting that file I will also want to request the resources that the file requires to render that html file, in this case a CSS file and an image. The server can decide to proactively send those resources to the client even though the client has not yet asked for those resources.
The server knows that those resources will be requested. So the server sends a push promise frame for the CSS and a push promise frame for the image. Then the server sends the index file.
The client can ignore those resources. It knows that those resources are in the cache so it declines the push promise so it will not be sent to the client.
How to talk HTTP2
1. http in clear text, use upgrade mechanism, send upgrade header, promoted to h2c
2. https use ALPN, send extension during handshake, communication continues in h2
However Firefox or Chrome does not support h2c
So how do we talk in HTTP2.
Two ways.
If you are using HTTP in clear text you can use the upgrade mechanism in HTTP1.1: send an Upgrade header to promote to a protocol called h2c (the c means clear text), then the server will react and upgrade to h2c.
If you are using HTTPS you can use ALPN (application layer protocol negotiation), which is a TLS extension. You send an extension during the handshake to the server side, and the server figures out that it is h2 and the communication continues in h2.
However, neither Firefox nor Chrome supports h2c.
HTTP/2 over TLS and HTTP/2 over TCP have been defined as 2 different protocols, identified respectively by h2 and h2c.
Servlets well positioned to enable http/2 optimisations
And allow frameworks to leverage server push
Servlets are the right abstraction; you don’t want to program frames and streams
In a servlet you can do server push without doing anything low level
So a high level API is what we want
In http1 we had one request and one response
In HTTP/2 not true: one request and the server pushes many resources
One request and multiple responses.
Abstraction: Servlet API well positioned to enable HTTP/2 optimisation and to allow frameworks to leverage server push.
So how might servlets expose HTTP/2 features? Servlets are the right abstraction for the RFC. You don't want to have to program frames and streams, so a high-level API to hide the network layer would be nice. In the servlet layer you can do the server push without doing the low-level stuff.
One of the changes in the Servlet API is that in HTTP/1 we had one request and one response. In HTTP/2 this is no longer true. There can be one request, and the server may decide to push several resources and then finally respond with the originally requested page. You have one request and multiple responses at the same time, and this is a challenge for the Servlet API.
Server push is the most visible of the many improvements in HTTP/2 to appear in the servlet API. All of the new features in HTTP/2, including server push, are aimed at improving the perceived performance of the web browsing experience.
Server push improves perceived browser performance because servers are in a much better position than clients to know what additional assets (such as images, stylesheets and JavaScript files) a request might ask for next.
For example, it is possible for servers to know that whenever a browser requests an index.html page, it will then request logo image, a stylesheet and menu javascript, etc. Since servers know this, they can pre-emptively start sending these assets while processing the index.html.
Server push is not a replacement for WebSockets; it just allows you to populate the browser cache. It is expected that frameworks that build on servlets, like JSF, will use this, and we solve this problem using the PushBuilder API.
The browser requests the index page. The server will notice that it needs the style_1.css and the javaScript_1.js files, so we get a PushBuilder from the HTTP request and set the path to the style_1.css file and invoke push; then we set the path to the javaScript_1.js file and invoke push again.
Note in this case the css and javascript will return to the client first and then the index returns.
It is simple: you get the PushBuilder from the HTTP request, set the path to the resource and push.
There are two things to note in this sequence diagram,
the PushBuilder can be reused. In the example I use the PushBuilder to push two resources, the css file and the javaScript file.
The second thing is that the index.html is returned to the browser after the pushed resources.
The reason is that if the index returns before the pushed resources, the browser will analyse it and see that it needs the two resources. It will look in the cache and see that it does not have those resources, and it will request them. At this point the browser cache will not be prepopulated. So the pushed resources must be returned first, before the index is sent.
One of the frame types mentioned earlier was RST_STREAM; this is how the client can decline a push promise. So if the server pushes a resource and the browser already has it in the cache then, rather than let the server send the file, it will send an RST_STREAM frame saying that it already has the file, so don't send it.
To use server push, obtain a reference to a PushBuilder from an HttpServletRequest, mutate the builder as desired, then call push().
This builds a push request based on the HttpServletRequest from which this builder was obtained.
PushBuilder pushBuilder = request.getPushBuilder();
The push request is constructed with the request method set to GET. Conditional, range, expectation, authorization and referer headers are removed. Cookies are only added if their maxAge has not expired. The referer header will be set to the request URL plus any query string that was present. If either of the headers If-Modified-Since or If-None-Match was present, then isConditional() will return true.
The only required setting is the URI path to be used for the push request. This must be called before every call to push(). If the path includes a query string, the query string will be appended to the existing query string (if any) and no de-duplication will occur.
Paths beginning with '/' are treated as absolute paths. All other paths are treated as relative to the context path of the request used to create this builder instance. The path may include a query string.
pushBuilder.path("/images/logo.png");
The resource is pushed by calling the push() method on the PushBuilder instance.
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    request.getPushBuilder().path("/images/logo.png").push();
}
This code snippet pushes the logo.png image to the client that made this request.
A different way of solving this problem is to implement the server push in a filter.
Jetty has a PushCacheFilter in the org.eclipse.jetty.servlets package.
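To illustrate the filter approach, here is a hypothetical, much-simplified version of the idea behind Jetty's PushCacheFilter: learn which resources are requested immediately after a page (via the Referer header) and push them on the next request for that page. Class and field names here are invented for the sketch, and it assumes the draft getPushBuilder() API:

```java
import java.io.IOException;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.PushBuilder;

public class SimplePushFilter implements Filter {
    // Maps a page URL to the resources observed to follow it.
    private final Map<String, Set<String>> pushMap = new ConcurrentHashMap<>();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;

        // Record this resource as a child of the page that referred to it.
        String referer = request.getHeader("Referer");
        if (referer != null) {
            pushMap.computeIfAbsent(referer, k -> ConcurrentHashMap.newKeySet())
                   .add(request.getRequestURI());
        }

        // Push everything previously seen to follow this page.
        Set<String> children = pushMap.get(request.getRequestURL().toString());
        if (children != null) {
            PushBuilder pushBuilder = request.getPushBuilder();
            for (String child : children) {
                pushBuilder.path(child).push();
            }
        }
        chain.doFilter(req, res);
    }
}
```

Jetty's real filter is considerably more sophisticated (it handles cache validity, ETags and per-session state), but the learning-from-Referer mechanism is the same basic trick.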
One of the most important use cases for server push is the framework case. Server push is completely dependent on the server having prior knowledge of the resources that the client will ask for, before the client asks for them. Server-side web frameworks are in a very good position to take advantage of this.
So JSF is able to use server push very easily. Every time JSF is going to render a style sheet, for example, it calls the method encodeResourceURL; this is the entry point, and here we can initiate the server push.
This is how web frameworks like JSF will be able to leverage server push. This has not been implemented yet; we will see if this is how they decide to do it or if they come up with another method.
The development of the HTTP/2 client for Java 9 is taken care of in JEP 110.
HttpClient is not being reinvented; it builds on what currently exists. It will support both HTTP/1.1 and HTTP/2.
In non-blocking mode you use an ExecutorService and CompletableFutures.
.create() uses the builder under the covers for you.
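As a rough sketch of what this looks like in practice, here is the API as it eventually shipped in the JDK's java.net.http package (JEP 110 went through several drafts, so the method names in the talk may differ from what is shown here; example.com is a placeholder URL):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

// Build a client that prefers HTTP/2, falling back to HTTP/1.1
// if the server cannot negotiate h2 via ALPN.
HttpClient client = HttpClient.newBuilder()
        .version(HttpClient.Version.HTTP_2)
        .build();

// Requests are immutable objects created through a builder.
HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
        .GET()
        .build();

// Blocking send:
//   client.send(request, HttpResponse.BodyHandlers.ofString());
// Non-blocking send returns a CompletableFuture:
//   client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
//         .thenApply(HttpResponse::body)
//         .thenAccept(System.out::println);
```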
The client can explicitly disable server push by sending a SETTINGS_ENABLE_PUSH setting with a value of 0 (zero).
In addition to allowing clients to disable server push with the SETTINGS_ENABLE_PUSH setting, servlet containers must honor a client’s request to not receive a pushed response on a finer grained basis by heeding the CANCEL or REFUSED_STREAM code that references the pushed stream’s stream identifier. One common use of this interaction is when a browser already has the resource in its cache.
The reference implementation for Java EE 8 is the GlassFish project, version 5.0. However, there seems to have been little progress made compared to other server vendors.
The most interesting implementations are in Jetty, WildFly and Tomcat.
Jetty (Jetty 9.3.8.v20160314/stable-9)
Jetty is a web server and Servlet container with support for HTTP/2, WebSockets, OSGi, JMX, JNDI and JAAS. Jetty's stable release 9.3.x has had PushBuilder support for a few months, and it is being actively used by their HTTP/2 adopters.
They have implemented their own packaging of the Servlet 4.0 API and include a PushCacheFilter in the org.eclipse.jetty.servlets package. This is a more sophisticated implementation of the example we saw earlier.
WildFly 10 (Undertow)
There is initial PushBuilder support implemented in Undertow master, however it is not part of any release yet. It should be possible to use it in WildFly by simply replacing the existing Undertow and Servlet API jars.
Tomcat 9.0.0.M4 alpha
Currently supports Servlet 4.0's PushBuilder in the javax.servlet.http package.
Netty 4.1
It's worth mentioning that Netty 4.1, an asynchronous event-driven network application framework, has an HTTP/2 implementation that takes full advantage of the main headline features.
We can use ALPN, which is a TLS extension: during the handshake the client sends the list of protocols it supports as an extension, the server determines that the communication will be h2, and the connection continues using HTTP/2.
One of the changes in Tomcat 9 is that TLS virtual hosting and multiple certificates are supported for a single connector, with each virtual host able to support multiple certificates.
Open the conf/server.xml file and make the following configuration changes.
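A connector definition along these lines enables HTTP/2 over TLS in Tomcat 9; the port and the certificate file paths are placeholders you would replace with your own:

```xml
<Connector port="8443"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="150" SSLEnabled="true">
    <!-- Advertise h2 via ALPN and upgrade negotiated connections -->
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    <SSLHostConfig>
        <Certificate certificateKeyFile="conf/localhost-key.pem"
                     certificateFile="conf/localhost-cert.pem" />
    </SSLHostConfig>
</Connector>
```

The nested UpgradeProtocol element is what turns on HTTP/2; the SSLHostConfig/Certificate elements are the Tomcat 9 style of TLS configuration mentioned above, which allows per-virtual-host certificates.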
SPDY's primary focus is to reduce web page load time by reducing latency, essentially satisfying the same goal as HTTP/2, and it formed the first draft of HTTP/2. It's a bridge between HTTP/1.1 and HTTP/2 and will not be supported by Chrome going forward. Instead Chrome will favour HTTP/2 from early this year.
The goal of HTTP/2 is to improve performance. So lets see just how much faster it really is.
Cloudflare has a neat online tool (cloudflare.com/http2) that downloads 200 image slices over both HTTP/1.1 and HTTP/2. Over HTTP/1.1 the browser has to use many separate TCP connections to load the slices, which incurs a significant amount of overhead because only a small number of images are downloaded in parallel.
I ran this a few days ago and the demo showed that HTTP/2 was 4.0x faster than HTTP/1.1.
HTTP1.1 vs HTTP/2 HTTPS
Another tool that tests HTTP1.1 against HTTP/2 is Anthum's HTTP vs HTTPS (httpvshttps.com). Plaintext HTTP/1.1 is compared against encrypted HTTP/2 HTTPS on a non-caching, nginx server with a direct, non-proxied connection.
Servers can advertise that they support HTTP/2 during the TLS handshake. With modifications made to Shodan by John Matherly that track the negotiated HTTP versions, searches of the collected data can be made using the ssl.alpn filter.
Shodan is the world's first search engine for Internet-connected devices
If we analyse the two graphs (December full report, April full report) by looking at the percentage of all reported servers supporting each protocol type, we can see that the adoption of HTTP/2 has increased by 100%, to 10% of all surveyed servers.
However, further analysis shows that the growth has come from providers upgrading their incumbent version of HTTP/2 to the latest specification. It can be inferred that providers have upgraded from draft versions 14 and 17 and from HTTP/2 (cleartext). The cleartext version is not supported by Firefox or Chrome.
Looking deeper into the data by combining all HTTP 1.x versions into one group, all HTTP/2 versions into another and all SPDY versions into a third, we can see that there is no significant change in the protocols supported.
As expected, HTTP 1.x dominates the list of supported protocols, with SPDY in second place and HTTP/2 trailing last. There is still a lot of work to be done.