This document summarizes the evolution of HTTP through versions 0.9, 1.0, and 1.1, and the development of HTTP/2. It outlines limitations of HTTP/1.1, such as concurrent connection limits and head-of-line blocking. It then describes how Google developed SPDY to address these limitations and optimize HTTP, and how HTTP/2 was later standardized on the basis of SPDY, incorporating request multiplexing, header compression, server push, and stream prioritization to improve performance.
3. Limitations of HTTP/1.1
Simple and text-based
Concurrent Connection Limit
Head-of-line blocking
Unable to use TCP at its full capability
Latency
4. Limitations of HTTP/1.1
The HTTP/1.1 RFC (RFC 2616) states that a client should maintain no more
than 2 concurrent connections per server/proxy.
Maximum concurrent connections per host supported by browsers:
Chrome: 6
IE: 8
Firefox: 6
Opera: 6
Safari: 4
5. Optimization with HTTP/1.1
■ HTTP/1.1 defines pipelining, but most browsers do not implement it.
■ Pipelining -> Multiple requests can be sent concurrently, but the responses must still be returned in
the same order as the requests were made.
PIPELINING
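What pipelining looks like on the wire can be sketched as follows: several requests are written back-to-back on one connection without waiting for each response. The host name and paths below are illustrative placeholders, not from the slides.

```python
# Sketch: HTTP/1.1 pipelining on the wire. Two GET requests are written
# back-to-back on one connection without waiting for the first response;
# a pipelining-capable server must answer them in order.
# "example.com" and the paths are placeholder values.

def build_pipelined_requests(host, paths):
    """Concatenate several HTTP/1.1 GET requests into one byte payload."""
    out = b""
    for path in paths:
        out += (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"\r\n"
        ).encode("ascii")
    return out

payload = build_pipelined_requests("example.com", ["/style.css", "/app.js"])
# Both requests now sit in a single payload; the responses will come back
# strictly in request order, which is why head-of-line blocking remains.
```

This also illustrates why pipelining did not solve head-of-line blocking: one slow response still delays every response queued behind it.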
6. Optimization with HTTP/1.1
■ Use multiple sub-domains to get more connections
■ For example: sub-1.example.com, sub-2.example.com, sub-3.example.com, etc.
■ Complicates architecture
DOMAIN SHARDING
7. Optimization with HTTP/1.1
■ Combine resources into a single larger resource
■ For example, bundling of CSS and JavaScript files, or use of image sprites
■ Inlining: requests are reduced by using inline styles in the HTML instead of writing them in separate
files
CONCATENATION AND INLINING
8. More problems…
■ Complexity in Web design and maintenance increases.
■ Increased resource consumption.
■ Reduces cacheability of resources.
■ Duplicate resources
9. SPDY
■ Experimental protocol, developed at Google and announced in mid-2009, with the following goals:
- Target a 50% reduction in page load time (PLT)
- Avoid the need for any changes to content by website authors
- Minimize deployment complexity
- Avoid changes to network infrastructure
- Implement in partnership with the open-source community
10. SPDY
Make more efficient use of the underlying TCP connection by
introducing a new binary framing layer to enable request and
response multiplexing, prioritization, and header compression.
11. HTTP and SPDY evolution
HTTP/0.9 (1991) → HTTP/1.0 (1996) → HTTP/1.1 (1999) → SPDY development (2009) →
SPDY implemented / HTTP/2 first draft (2012) → HTTP/2 approved (2015) → end of SPDY (2016)
■ July 2012 – the group of developers at Google working on SPDY stated publicly that it was working toward standardization.
■ All major browsers implemented SPDY, which put it on the path to standardization.
■ February 2015: Google announced removal of support for SPDY.
■ February 2016: Google announced that Chrome would no longer support SPDY after May 15, 2016.
12. Beginning of HTTP/2.0
■ March 2012 : Call for Proposal for HTTP/2
■ November 2012: First Draft of HTTP/2 (Based on SPDY)
■ August 2014: HTTP/2 draft-17 and HPACK draft-12 are published
■ February 2015: IESG approves the HTTP/2 and HPACK drafts
■ May 2015: RFC 7540 (HTTP/2) and RFC 7541 (HPACK) are published
13. New in HTTP/2
■ Reduce latency by introducing Header Field Compression
■ Allow multiple concurrent exchanges on the same connection.
■ Server Push
14. Protocol Overview
Supports all core features of HTTP/1.1
■ HTTP/2 uses the same “http” and “https” URI schemes as HTTP/1.1
■ HTTP/2 uses same default port numbers as HTTP/1.1 (80 for http and 443 for
https)
■ HTTP Semantics, such as verbs, methods, and headers are unaffected.
15. Binary Framing Layer
In HTTP/1.1, a request is sent as plain text:

POST /upload HTTP/1.1
Host: www.example.com
Content-Type: application/json
Content-Length: 15

{"msg":"hello"}

In HTTP/2, the same message is encoded in binary: the request line and headers become a HEADERS frame, and the body becomes a DATA frame.
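Every HTTP/2 frame starts with the same 9-octet header defined in RFC 7540 section 4.1. A minimal sketch of packing and unpacking that header (the stream id and flag values below are illustrative):

```python
import struct

# Sketch of the HTTP/2 9-octet frame header (RFC 7540, section 4.1):
# 24-bit payload length, 8-bit type, 8-bit flags,
# 1 reserved bit + 31-bit stream identifier.

def pack_frame_header(length, frame_type, flags, stream_id):
    """Serialize a frame header; length is the payload size in octets."""
    return (
        struct.pack(">I", length)[1:]                 # keep low 3 bytes: 24-bit length
        + struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF)
    )

def unpack_frame_header(data):
    """Parse the 9 header octets back into their fields."""
    length = int.from_bytes(data[0:3], "big")
    frame_type, flags = data[3], data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return length, frame_type, flags, stream_id

# Example: a DATA frame (type 0x0) with the END_STREAM flag (0x1) on
# stream 1, carrying the 15-byte JSON body from the slide.
hdr = pack_frame_header(15, 0x0, 0x1, 1)
```

The fixed-size binary header is what lets either endpoint find frame boundaries without parsing text, unlike HTTP/1.1.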
16. Streams, messages and frames
• Stream: a bidirectional flow of bytes within an established connection, which may
carry one or more messages.
• Message: a complete sequence of frames that map to a logical request or response
message.
• Frame: the smallest unit of communication in HTTP/2, each containing a frame
header, which at a minimum identifies the stream to which the frame belongs.
18. Request and Response Multiplexing
HTTP/2 enables full multiplexing by allowing the client and server
to break down HTTP messages into independent frames, interleave them,
and then reassemble them on the other end.
https://developers.google.com/web/fundamentals/performance/http2/
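The break-down/interleave/reassemble cycle above can be sketched as a small simulation. The stream ids (1 and 3), bodies, and chunk size are illustrative; client-initiated HTTP/2 streams do use odd ids.

```python
# Sketch: request/response multiplexing. Frames from different streams
# share one connection in any order and are reassembled by stream id.

def split_into_frames(stream_id, body, chunk=4):
    """Break one message body into (stream_id, chunk) frames."""
    return [(stream_id, body[i:i + chunk]) for i in range(0, len(body), chunk)]

def reassemble(frames):
    """Group frames by stream id to rebuild each message."""
    messages = {}
    for stream_id, chunk in frames:
        messages[stream_id] = messages.get(stream_id, b"") + chunk
    return messages

a = split_into_frames(1, b"response-one")
b = split_into_frames(3, b"response-two")
# Interleave: frames from both streams alternate on the wire.
wire = [frame for pair in zip(a, b) for frame in pair]
messages = reassemble(wire)
```

Because each frame names its stream, neither message blocks the other at the HTTP layer; this is what removes HTTP/1.1's head-of-line blocking.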
19. Stream prioritization
• Each stream may be assigned an integer weight between 1 and 256.
• Each stream may be given an explicit dependency on another stream.
https://developers.google.com/web/fundamentals/performance/http2/
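The weight and dependency above travel in the 5-octet PRIORITY payload of RFC 7540 section 6.3; the weight is stored on the wire as one byte holding (weight − 1), which is how a single octet covers the 1–256 range. A minimal sketch (the parent stream id below is illustrative):

```python
# Sketch of the 5-octet PRIORITY payload (RFC 7540, section 6.3):
# 1 exclusive bit + 31-bit parent stream id, then weight stored as
# (weight - 1), so one wire octet (0-255) maps to weights 1-256.

def pack_priority(dep_stream_id, weight, exclusive=False):
    """Serialize a PRIORITY frame payload."""
    assert 1 <= weight <= 256
    first = dep_stream_id & 0x7FFFFFFF
    if exclusive:
        first |= 0x80000000            # set the exclusive-dependency bit
    return first.to_bytes(4, "big") + bytes([weight - 1])

# Example: depend exclusively on stream 3 with the maximum weight (256).
payload = pack_priority(dep_stream_id=3, weight=256, exclusive=True)
```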
20. One connection per origin
• No longer needs multiple connections per origin.
• A single connection is established between client
and server, and multiple streams are exchanged
over it.
• Reduces the memory and processing footprint
along the full connection path.
• Reduces Network latency.
21. Flow control
• A mechanism to prevent the sender from overwhelming the
receiver with data
• Each receiver may choose to set any window size that it
desires for each stream and the entire connection.
• Window size is defined in SETTINGS frame when connection
is established
• Default size is 65,535 bytes. Max is (2^31 -1) bytes
• The window is replenished by sending WINDOW_UPDATE frames.
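The accounting the bullets above describe can be sketched for a single stream: the window shrinks as DATA arrives and grows again when a WINDOW_UPDATE grants more credit. The class name and method names are illustrative, not from any HTTP/2 library.

```python
# Sketch: receiver-side flow-control accounting for one stream.

DEFAULT_WINDOW = 65_535     # initial window size from SETTINGS (RFC 7540)
MAX_WINDOW = 2**31 - 1      # maximum permitted window size

class StreamWindow:
    def __init__(self, size=DEFAULT_WINDOW):
        self.size = size

    def on_data(self, n):
        """Peer sent n octets of DATA; it must not exceed our window."""
        if n > self.size:
            raise ValueError("flow-control error: window exceeded")
        self.size -= n

    def window_update(self, increment):
        """We consumed data, so grant the peer more credit."""
        if self.size + increment > MAX_WINDOW:
            raise ValueError("flow-control error: window overflow")
        self.size += increment

w = StreamWindow()
w.on_data(16_384)           # a full default-size DATA payload arrives
w.window_update(16_384)     # application consumed it; restore the credit
```

The same bookkeeping is kept both per stream and for the connection as a whole, which is why a slow consumer can throttle one stream without stalling the others.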
22. Server Push
• In addition to the response to the original request, the server
can push additional resources to the client, without the client
having to request each one explicitly.
• Pushed resources can be:
Ø Cached by the client
Ø Reused across different pages
Ø Multiplexed alongside other resources
Ø Prioritized by the server
Ø Declined by the client
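The push flow can be sketched as a sequence of frame events: the server reserves an even-numbered stream with a PUSH_PROMISE before sending the pushed response, and the client may decline by resetting that stream. The frame names follow RFC 7540; the event tuples, function name, and the `/style.css` path are an illustrative simplification, not a real API.

```python
# Sketch: server push as an ordered list of frame events.
# Pushed streams use even ids reserved via PUSH_PROMISE; a client
# declines a push by sending RST_STREAM on the promised stream.

def serve(request_stream_id, push_declined=False):
    events = [
        # Promise /style.css on reserved stream 2 before any response data.
        ("PUSH_PROMISE", request_stream_id, {"promised_stream": 2, ":path": "/style.css"}),
        # Response to the client's original request.
        ("HEADERS", request_stream_id, {":status": "200"}),
    ]
    if push_declined:
        # Client refuses the push (e.g. the resource is already cached).
        events.append(("RST_STREAM", 2, {"error": "CANCEL"}))
    else:
        # Pushed response arrives on the promised stream.
        events.append(("HEADERS", 2, {":status": "200"}))
    return events

accepted = serve(1)
declined = serve(1, push_declined=True)
```

Sending the PUSH_PROMISE before the response data is what lets the client cancel a push it does not want before wasting bandwidth on it.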
24. Header Compression
• HPACK header compression reduces the size of HTTP/2 headers.
• Header values are compressed using Huffman encoding, resulting in an
average reduction of around 30%.
• Frequently used headers are stored in static and dynamic tables and can
be referenced by an index encoded as a variable-length integer, instead of
re-sending the whole header every time.
• Faster content delivery due to smaller headers.
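The variable-length integer mentioned above is HPACK's prefixed integer encoding from RFC 7541 section 5.1: values that fit in the prefix take no extra octets, and larger values continue in 7-bit groups. A minimal sketch of the encoder:

```python
# Sketch of HPACK's prefixed integer encoding (RFC 7541, section 5.1).
# A value smaller than the prefix limit fits in the prefix bits; larger
# values fill the prefix and continue in 7-bit continuation octets.

def encode_integer(value, prefix_bits):
    limit = (1 << prefix_bits) - 1       # e.g. 31 for a 5-bit prefix
    if value < limit:
        return bytes([value])
    out = [limit]                        # prefix saturated: continuation follows
    value -= limit
    while value >= 128:
        out.append((value % 128) + 128)  # set high bit: more octets follow
        value //= 128
    out.append(value)                    # final octet, high bit clear
    return bytes(out)

# RFC 7541's worked example: 1337 with a 5-bit prefix -> 31, 154, 10.
encoded = encode_integer(1337, 5)
# A fully indexed header such as ":method: GET" (static table index 2)
# compresses to a single octet: 0x80 | 2 = 0x82.
```

This is how a header that appeared in an earlier request can shrink to one or two octets on the wire instead of being retransmitted in full.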