Can't We Address Latency & Security Separately? No.
Can't We Address Latency & Security Separately?
● If eavesdropping in the cafe is still possible in 2020 with trivial tools, we have failed our users.
● The internet is weak already and getting worse.
● Firesheep-style tools make sniffing easy.
● Major content providers want privacy
  ○ Facebook: opt-in
  ○ Twitter: opt-in
  ○ Google wants it too.
  ○ SSL is just too slow right now...
Requirements
● Avoid requiring the website author to change content
  ○ Allow for incremental changes
  ○ Performing "better" with content changes is okay
  ○ Performing "worse" without content changes is unacceptable
● Always perform better than HTTP, never worse
● Drop-in replacement from the web app's point of view
  ○ Changing the web server/application server is inevitable and therefore acceptable
Incremental Improvements - Meh.
● Incremental improvements to HTTP don't "move the needle"
  ○ They're hard to figure out individually
  ○ Each only works for some people, with hacks
● The problem is the intermediaries (a.k.a. proxies)
  ○ Transparent proxies change the content.
  ○ Example: pipelining - where is it?
  ○ Example: stripped "Accept-Encoding" headers
    ■ we can't even improve "negotiated" compression!
● Intermediaries make the web a hostile place for protocol changes.
● Even our laws get in the way.
  ○ Spying on us is bad, but spying on our kids is OK!
Background: What is a Web Page?
● ~44 resources
● ~7 hosts
● ~320KB
● ~66% compressed (top sites are ~90% compressed)
● Except HTTPS, where < 50% is compressed.
Background: Poor Network Utilization
Web page evolution has led to poor network utilization.
Bandwidth is going up... RTT isn't going down.
Transport or Application Layer?
● Known transport problems
  ○ lots of small connections with no shared state
  ○ no data in the handshake
  ○ slow-start
  ○ delayed ACK
● Known application-layer problems
  ○ single-threaded connections
  ○ redundant headers
  ○ not enough compression
● Decision:
  ○ App-layer problems were too severe to ignore
  ○ Packet loss patterns were unknown
How to Deploy? Process of Elimination
● Avoid changing the lower-level transport
  ○ Much easier "burden of proof"
  ○ Works with existing infrastructure
● Available transports:
  ○ TCP and UDP are possible.
  ○ SCTP is not an option due to NAT.
● UDP
  ○ We'd have to re-engineer TCP features.
● That leaves us with TCP.
  ○ Only two ports available universally
    ■ Port 80?
    ■ Port 443?
Deployment: Port 80
● Should be our first choice.
● Except HTTP runs on port 80.
● Proxies and intermediaries make for a treacherous environment.
  ○ HTTP/1.1 (1999) - pipelining is still not deployed
  ○ Compression negotiation
● The Upgrade header requires a round trip.
● WebSockets data shows that HTTP over a non-standard port is tampered with less than port 80.
  ○ Success rate:
    ■ HTTP (port 80): 67%
    ■ HTTP (port 61985): 86%
    ■ HTTPS (port 443): 95%
Deployment: Port 443
● Port 443 runs SSL/TLS.
  ○ Adds server authentication & encryption
● The handshake is extensible:
  ○ Next-Protocol-Negotiation: www.ietf.org/id/draft-agl-tls-nextprotoneg-00.txt
● But isn't TLS too slow?
  ○ CPU issues are minor.
  ○ TLS does increase bandwidth use by 2-3%
    ■ Mitigated by SPDY's better compression.
  ○ Round-trip times are significant.
  ○ SSL requires 1 or 2 RTTs for the handshake.
    ■ Can we reduce that to 0 RTTs?
What is SPDY?
● Goal: reduce web page load time.
● Multiplexing
  ○ Get the data off the client
● Compression
  ○ HTTP headers are excessive
  ○ Uplink bandwidth is limited
● Prioritization
  ○ Today the browser holds back
  ○ Priorities enable multiplexing
● Encrypted & Authenticated
  ○ Eavesdropping at the cafe must be stopped
● Server Push
  ○ Websites do some of this today with data URLs
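The header-compression win above comes from sharing one compression context across every request on the connection (the real protocol uses zlib with a spec-defined preset dictionary; this sketch uses a plain shared zlib stream, and the header values are made up for illustration):

```python
import zlib

# Typical request headers, repeated nearly verbatim on every request
# over the same connection.
headers = (
    b"method: GET\r\n"
    b"url: https://example.com/style.css\r\n"
    b"user-agent: Mozilla/5.0 (X11; Linux x86_64) Chrome\r\n"
    b"accept-encoding: gzip, deflate\r\n"
    b"version: HTTP/1.1\r\n"
)

# One compressor per connection: later header blocks are encoded as
# back-references into earlier ones.
compressor = zlib.compressobj()
first = compressor.compress(headers) + compressor.flush(zlib.Z_SYNC_FLUSH)
second = compressor.compress(headers) + compressor.flush(zlib.Z_SYNC_FLUSH)

# The second (identical) block shrinks to a handful of bytes, which is
# exactly what a bandwidth-limited uplink needs.
print(len(headers), len(first), len(second))
```

This is why the gain shows up most on the uplink: request headers are highly repetitive, and a connection-scoped compression context turns each repeat into a few bytes.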
Request Path

SPDY (closer to actual)                    SPDY (human readable)

80 01 00 01                                SYN frame
00 00 00 6b                                length = 107
00 00 00 01                                stream_id = 1
00 00 00 00                                associated_stream_id = 0
40 00 00 04                                priority = 1, num_headers = 4
06 "method" 04 "post"                      method: POST
03 "url" 1a "http://blogger.com/my_blog"   url: http://blogger.com/my_blog
0a "user-agent" 13 "blahblahblah chrome"   user-agent: blahblahblah chrome
07 "version" 08 "http/1.1"                 version: HTTP/1.1

00 00 00 01                                DATA frame, stream_id = 1
01 00 00 1f                                flags = fin, length = 31
"here is my very short blog post"          here is my very short blog post
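The dump above can be produced mechanically. This is a sketch of the draft-era framing as shown on the slide, with field widths inferred from the hex dump (they may not match the final spec byte-for-byte), and the name/value block left uncompressed for readability:

```python
import struct

def syn_frame(stream_id, priority, headers):
    """Build a SYN-style control frame matching the slide's layout.

    Sketch only: 1-byte length prefixes for header strings (as in the
    dump: 06 "method", 04 "post", ...) and no header compression.
    """
    block = b""
    for name, value in headers:
        for s in (name, value):
            block += bytes([len(s)]) + s.encode()
    body = struct.pack("!II", stream_id, 0)          # stream id, associated stream id
    body += struct.pack("!I", (priority << 30) | len(headers))  # e.g. 40 00 00 04
    body += block
    head = struct.pack("!HH", 0x8001, 0x0001)        # control bit + version 1, type = SYN
    head += struct.pack("!I", len(body))             # flags byte (0) + 24-bit length
    return head + body

frame = syn_frame(1, 1, [("method", "post"),
                         ("url", "http://blogger.com/my_blog"),
                         ("user-agent", "blahblahblah chrome"),
                         ("version", "http/1.1")])
print(frame.hex(" "))  # begins 80 01 00 01, as in the dump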
Does it work? "Figures don't lie, but liars can figure"
Initial Results - Testing Methods
● 3 major variables on the network
  ○ packet loss
  ○ bandwidth
  ○ round-trip time
● Looking at any one configuration is anecdotal; we need to look at a spectrum of data.
● Our test content includes:
  ○ high-fidelity copies of the top 25 web pages
  ○ exact HTTP headers, compression, etc.
● Packet-loss simulators stink (random loss is not realistic).
  ○ We don't have a good model.
  ○ We don't have a good simulator.
Initial Results: Packet Loss
Real-world packet loss is ~1%.
SPDY is 41%-47% faster for packet loss between 1% and 2%.
Top 300 Content
Question: Do highly optimized websites get less benefit?
We randomly picked 300 sites from the Alexa Top 1000.
The overall page-load improvement averaged 40%.
TLS - Can We Tune It Now?
● TLS is an unoptimized frontier for browsers.
● Browser features:
  ○ Certificate caching
  ○ Validation caching
  ○ Parallel validation/connecting
  ○ OCSP stapling
    ■ multi-level OCSP stapling
  ○ False Start
  ○ Snap Start
● more, more, more
TLS - Snap Start
● 0-RTT SSL
● We built it.
  ○ We don't like it.
● Doesn't provide perfect forward secrecy.
● The changes are too complex when retrofitted atop existing SSL.
● http://tools.ietf.org/html/draft-agl-tls-snapstart-00
Deployment Status @ Google
● On by default since Chrome 6
  ○ Currently at 90%; the 10% holdback is for A/B testing.
● On for all Google SSL traffic
● HTTP->SPDY proxy used externally by some
● SPDY Proxy
● Android is just getting started: Honeycomb/Clank.
In other words, yes, you can really use it now. But SPDY is:
● experimental
● research
● not standardized (yet)
Poor Man's WebSocket Today
● docs.google.com has a "hanging GET" open for every doc
● How do you scale beyond 6 connections per domain?
  ○ docs[1-N].google.com
● But that gets expensive and is horribly inefficient.
● Switched to SPDY and much happier.
● Header compression mitigates some of the inefficiency of a hanging GET.
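The hanging-GET pattern above can be sketched end to end (a minimal self-contained demo with a toy local server; the 0.2s sleep stands in for waiting on a document update):

```python
import http.server
import threading
import time
import urllib.request

# "Hanging GET": the server holds each GET open until an update exists,
# so every open document pins one of the browser's ~6 connections per
# domain, and every update re-sends the full HTTP headers.
class HangingHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.2)                  # pretend we waited for an update
        body = b"doc changed"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):        # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), HangingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client blocks until the server finally answers, then immediately
# re-issues the GET to keep listening -- one connection per document.
url = "http://127.0.0.1:%d/poll" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    update = resp.read()
print(update)
server.shutdown()
```

With SPDY the same notifications ride as frames on one multiplexed connection, so the per-domain connection limit and the repeated headers both disappear.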
Single Connection Throttles
● The initial cwnd is still a problem.
  ○ Mitigated with kernel changes, but who can deploy those?
● The initial receive window is too low on Linux.
  ○ Fixed on trunk, but we'd need to update all client kernels!
● TCP intermediaries
  ○ We see bugs all the time where middleboxes muck with wscale and other options, causing slow single connections.
● TCP congestion control
  ○ Completely unfair backoff for single-connection protocols
  ○ Single connection: a single loss loses 1/2 of the bandwidth
  ○ HTTP (6 connections): a single loss loses 1/12 of the bandwidth
Next Steps
● Chrome changes
  ○ Added SPDY IP pooling
  ○ Need to persist dynamic cwnd data
  ○ Server Push
  ○ More measurements
● SSL changes
  ○ Scrapping Snap Start
  ○ Need a new protocol
● Transport changes
  ○ Need a protocol we can actually deploy.
  ○ TCP in the kernel isn't it.