The document discusses SPDY and HTTP/2, which aim to improve upon HTTP/1.1 by allowing many requests to be sent concurrently over a single TCP connection through multiplexing and header compression. It notes that SPDY is supported by most major browsers, but not Internet Explorer, while HTTP/2 is not yet widely adopted. The document also describes how the TLS extensions NPN and ALPN negotiate the application-layer protocol, and how end-to-end encryption keeps intermediaries from inspecting or interfering with that transport.
2. HTTP/1.1
• One request per connection, sequentially
• First in, first out without cancellation
• Has to open multiple TCP connections
• TCP handshake takes round trips
• Increasing the TCP window takes round trips
• SSL handshake takes even more…
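Those round trips add up before a single byte of the response arrives. A back-of-the-envelope sketch in Python, assuming a 50 ms round-trip time (the RTT figure is illustrative, not from the talk):

```python
# Cost of a fresh HTTP/1.1-over-SSL connection, assuming a 50 ms
# round-trip time; the RTT figure is an assumption for illustration.
RTT = 50  # milliseconds

tcp_handshake = 1 * RTT   # SYN / SYN-ACK before any data moves
ssl_handshake = 2 * RTT   # classic full SSL/TLS handshake
first_request = 1 * RTT   # request out, first byte of response back

total = tcp_handshake + ssl_handshake + first_request
print(total)  # 200 ms before the first response -- and TCP slow start
              # has not even begun to widen the window yet
```

Opening six such connections per host, as browsers do, multiplies this setup cost.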
3. HTTP/1.1
• Human readable, hard to parse
• Keep-Alive dilemma
• waste resources keeping connections open, or pay to reopen them?
• Tricks: inlining, JS/CSS concatenation, sprites, domain sharding
4. SPDY
• Google “proprietary”, but widely supported
• Android 4.1+, iOS 8, Chrome, Firefox, Safari, but not IE
• nginx, Jetty, ATS…
• Only on top of SSL
• Used widely in production (Google, Twitter, Yahoo…)
5. HTTP/2
• SPDY being standardized by the IETF
• Not widely supported
• Chrome Canary, Firefox Nightly
• Jetty, twitter.com, google.com
• Specified both with and without SSL, but browsers will not support it in plain text (IE being the exception)
6. Negotiating transport
• Two TLS extensions to negotiate the application-layer protocol
• NPN, came with SPDY
• ALPN, came with HTTP/2
• You will have to support both
• Then: end-to-end encryption
• Intermediaries don't know and can't care about the transport
7. NPN
• C: ClientHello (NPN)
• S: ServerHello (NPN, list of protocols), Keys, Certificates
• C: Keys, Certificates
• Client picks its preferred protocol from the list provided by the server
8. ALPN
• C: ClientHello (ALPN, list of protocols)
• S: ServerHello (ALPN, selected protocol), Keys, Certificates
• C: Keys, Certificates
• Server controls which protocol will be used
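The difference between the two handshakes comes down to who chooses. A minimal sketch of the two selection rules in Python (real negotiation happens inside the TLS handshake, e.g. via `ssl.SSLContext.set_alpn_protocols`; the protocol strings here are illustrative):

```python
def npn_select(client_prefs, server_offers):
    """NPN: the *client* picks its preferred protocol from the
    server's advertised list."""
    for proto in client_prefs:
        if proto in server_offers:
            return proto
    # NPN even allows the client to pick something the server
    # did not advertise; fall back to its own first choice.
    return client_prefs[0]

def alpn_select(server_prefs, client_offers):
    """ALPN: the *server* picks from the client's advertised list;
    with no overlap, negotiation simply fails."""
    for proto in server_prefs:
        if proto in client_offers:
            return proto
    return None

print(npn_select(["spdy/3.1", "http/1.1"], ["http/1.1", "spdy/3.1"]))
# spdy/3.1 -- client preference wins
print(alpn_select(["h2", "http/1.1"], ["http/1.1", "h2"]))
# h2 -- server preference wins
```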
11. HTTP/2 and SPDY
• Preserve HTTP/1.1 paradigms
• Requests still use GET, POST and the other methods
• URLs stay the same, requesting a path
• All headers are still there, cookies, etc.
12. HTTP/2 and SPDY
• Easy-to-parse binary framing, length-prefixed…
• but not human readable (without tools)
• 100+ requests (streams) in one connection
• concurrently, with dynamic priority, “all mixed up”
• cancellation
• Cheaper requests
• header compression
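Header compression is what makes the extra requests cheap: consecutive requests repeat most of their headers. SPDY compressed header blocks with zlib over a stream shared by the whole connection (HTTP/2 later moved to HPACK); a rough Python sketch of why that repetition pays off (the header values are made up):

```python
import zlib

# Two nearly identical request header blocks, as a browser would send
# for consecutive resources on the same page (values are made up).
req1 = (b"GET /index.html HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"
        b"Cookie: session=abcdef0123456789\r\n\r\n")
req2 = req1.replace(b"/index.html", b"/style.css")

# One compressor per connection: the zlib window is shared across
# requests, so the second block compresses to mostly back-references.
comp = zlib.compressobj()
out1 = comp.compress(req1) + comp.flush(zlib.Z_SYNC_FLUSH)
out2 = comp.compress(req2) + comp.flush(zlib.Z_SYNC_FLUSH)

print(len(req2), len(out1), len(out2))  # out2 is far smaller than out1
```

SPDY additionally seeded the compressor with a preset dictionary of common header names, which helps even the first request on a connection.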
13. Multiplexing
• Stream consists of a series of frames
• Type, Stream ID, payload
• Order of frames in the TCP connection does not matter
• Order of frames in a stream matters
• Client can reset (cancel) any stream at any time
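The framing behind this can be sketched with HTTP/2's 9-byte frame header: a 24-bit payload length, 8-bit type, 8-bit flags, then one reserved bit and a 31-bit stream identifier. A minimal Python round trip (frame type and flag values follow the HTTP/2 draft; the payload is made up):

```python
import struct

def pack_frame(ftype, flags, stream_id, payload):
    """Build one HTTP/2 frame: 9-byte header followed by the payload."""
    header = len(payload).to_bytes(3, "big")        # 24-bit length
    header += struct.pack("!BBI", ftype, flags,
                          stream_id & 0x7FFFFFFF)   # type, flags, R+stream ID
    return header + payload

def parse_frame(data):
    """Split one frame back into (length, type, flags, stream_id, payload)."""
    length = int.from_bytes(data[:3], "big")
    ftype, flags, sid = struct.unpack("!BBI", data[3:9])
    return length, ftype, flags, sid & 0x7FFFFFFF, data[9:9 + length]

frame = pack_frame(0x0, 0x1, 3, b"hello")  # DATA frame, END_STREAM, stream 3
print(parse_frame(frame))                  # (5, 0, 1, 3, b'hello')
```

Because every frame carries its stream ID, the receiver can demultiplex frames from many streams however they arrive on the wire.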
14. Deployment
• Popular reverse proxies (nginx) support SPDY, easy to enable in config
• SPDY is transparent to your web app
• You want everything on one host, or a small number of hosts
• Reverse proxy can terminate SPDY, talk HTTP/1.1 to the application server
• Then measure real users, not the Bay Area
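Enabling SPDY on such a reverse proxy was typically a single keyword on the listen directive. A sketch for an nginx built with SPDY support (server name and certificate paths are placeholders):

```nginx
server {
    # 'spdy' on the listen line is all it takes on SPDY-era nginx builds
    listen 443 ssl spdy;
    server_name example.com;

    ssl_certificate     /path/to/cert.pem;   # placeholder paths
    ssl_certificate_key /path/to/key.pem;

    location / {
        # terminate SPDY here, talk plain HTTP/1.1 to the app server
        proxy_pass http://127.0.0.1:8080;
    }
}
```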
15. HTTP/2 risks
• One lost packet stalls the TCP stream
• meaning, all 100+ HTTP/2 streams inside it
• “head of line blocking”
• Intermediary throttles per TCP stream
• Can’t bypass that with only 1 TCP connection
• Next Idea: Userland “TCP” over UDP, QUIC
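The head-of-line problem can be sketched with a toy model: TCP delivers bytes strictly in order, so a frame only reaches the HTTP/2 layer once every earlier packet has arrived, whatever stream it belongs to (the packet and stream numbers below are made up):

```python
def deliverable(frames, lost_packet):
    """frames: (packet_no, stream_id) pairs in send order. TCP delivers
    in order, so nothing at or after the first missing packet gets
    through -- even frames belonging to completely unrelated streams."""
    out = []
    for packet_no, stream_id in frames:
        if packet_no >= lost_packet:
            break                      # head-of-line blocked here
        out.append((packet_no, stream_id))
    return out

# Five frames from four different streams; packet 3 is lost in transit.
frames = [(1, 1), (2, 3), (3, 5), (4, 1), (5, 7)]
print(deliverable(frames, lost_packet=3))  # [(1, 1), (2, 3)]
# Streams 1 and 7 stall too, although their packets 4 and 5 arrived.
```

QUIC's pitch is to do the retransmission per stream in userland over UDP, so a lost packet only stalls the streams whose data it carried.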