SPDY @Zynga



In Jan 2012, Zynga was kind enough to invite me to speak at their SF office. These are the slides I presented; it's much the same SPDY content, although with a growing focus on mobile.

  • Erik - yes, your argument is the common argument against security/privacy, and it is flawed. The flaw is that although CNN's content is public, neither you, nor I, nor even CNN knows whether the *user* wants the content to be privately transmitted. Further, users are not sophisticated enough to 'opt in'. It is our responsibility, as computer professionals, to make it always secure and private so that there are no mistakes.

    For example, maybe you've just learned you contracted an embarrassing virus. You're browsing around Wikipedia, CNN, YouTube, and other sites, reading articles about this virus. It's all public content, so why secure it, right? We MUST secure it, because that is the only way to ensure the user's privacy.

    In other words, although you assert that you know when the user wants to maintain privacy, I argue that you don't and you never will.

    Further, we CAN solve the SSL latency problem. Why don't we fix that? We can reduce the number of round-trip-inducing handshakes SSL requires. We can keep transport-level (TCP) connections alive so they don't need to be re-established at the cost of extra round trips. But we need to build new protocols to do it.

    Net result: instead of throwing user privacy and security under the bus, let's fix the real problem: our protocols need even greater latency reductions than SPDY alone provides.
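The round-trip arithmetic behind this argument can be sketched as a toy model (the function name and the TLS 1.2-era handshake costs below are my illustrative assumptions, not anything from the slides):

```python
# Round trips before the first HTTP response byte arrives.
# Assumed costs: a new TCP connection adds 1 RTT; a full TLS
# handshake adds 2 more; a resumed TLS session adds only 1.
def rtts_to_first_byte(new_conn: bool, use_tls: bool, resumed: bool = False) -> int:
    rtts = 1  # the HTTP request/response exchange itself
    if new_conn:
        rtts += 1  # TCP SYN / SYN-ACK
        if use_tls:
            rtts += 1 if resumed else 2  # resumption halves the TLS cost
    return rtts

print(rtts_to_first_byte(True, True))        # cold HTTPS connection -> 4
print(rtts_to_first_byte(True, True, True))  # resumed TLS session   -> 3
print(rtts_to_first_byte(False, True))       # kept-alive connection -> 1
```

The point of the model: session resumption and long-lived connections remove handshake round trips, which is exactly where "SSL is slow" comes from.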
  • @mbelshe It is your last comment that I take exception to. I would like to see a world where web traffic is secure as well, but I don't think it should be all or nothing. There is little value in dynamically encrypting UGC video, jQuery libs, etc. There is plenty of content on the web (I would argue most) that makes no sense to encrypt. The CNN home page, for example: why would you encrypt that? It is simply wasted CPU cycles to encrypt it in real time. If it is a DRM issue, encrypt the content once and be done with it. Clearly there are many use cases that need to be secure, but let's not say they are the dominant ones; they are the exception rather than the rule. So I would not optimize the whole web for the secure use case. Sure, it needs to be there and be optimal, but not at the expense of the other 90% of the time when security is not needed.
    With respect to proxies, proxies installed in carrier networks (mobile being a prime use case for SPDY) will get upgraded to http/2.0 rather quickly I think. If http/2.0 really is faster on mobile, carriers would be at a competitive disadvantage if they did not upgrade their proxies.
    Proxies are seen as evil by many in the community, but they serve many valuable purposes in the network. They are used to speed up web access to unoptimized web sites. Edge caching is another good use case. If we go to an entirely encrypted world, all those use cases go out the window, which will slow things down even more.
    This is why I advocate for the smart use of secure content, rather than blanket use as a matter of policy. It simply does not make sense (to me). I know many others share my views, but they are not getting any airplay in the http/2.0 debate. It is being dominated by the 'big boys', whose interests are not necessarily aligned with those of the web as a whole.
    Again, I thank you for your thoughtful responses.
  • Regarding unsecured protocols: SPDY has much utility in the back office, where security requirements are very different and SSL may not be required.

    On the open web, you simply cannot deploy anything over port 80 that isn't HTTP. It's not that nobody has tried; it's that no technical solution that works has been found. History has shown that proxies are not removed from the network when they fail, because they can't be found: they're transparent, and it is easier for a user to switch to another browser than to fix the proxy.

    It's also true that web users expect the web to be secure just like they expect operating systems to be secure. It's up to us computer professionals to move to more secure protocols over time. I'd even go so far as to say it borders on criminal negligence if we don't.
  • @mbelshe I agree that the comparison is bogus. Many SPDY advocates make the blanket statement, though, which is why I bring it up. While TLS is not a requirement, there is not a lot of work going on to optimize SPDY for non-TLS connections. Google et al. mostly say, 'TLS is the only real way to get a new protocol through the proxies, so let's just do that.' A better approach (I believe) would be to optimize delivery for both modes. Proxies that don't support SPDY will quickly find themselves removed from the network :).
    Thanks for the response!
  • Thanks.

    I didn't mean to merely claim it was always faster; being faster was a requirement for us when designing a new protocol.

    Note, however, that SPDY doesn't require SSL. It's a misconception that it does; the protocol specification doesn't require it.

    If you compare unsecured HTTP against unsecured SPDY, SPDY most often wins. If you compare secured HTTPS against SPDY/SSL, SPDY also most often wins.

    Comparing an unsecured protocol against a secured one is apples and oranges. Just as it would be faster to get your bank statement if the bank didn't use SSL...


SPDY @Zynga

  1. Agenda
     ● Motivation & Background
     ● What is SPDY?
     ● Who's Using SPDY?
     ● SPDY and Mobile
     ● SPDY and REST APIs
     ● Standardization
  2. Motivation
     Latency & Security
  3. Background: What is a Web Page?
     ● ~86 resources
     ● ~13 hosts
     ● ~966KB
     ● ~66% compressed (top sites are ~90% compressed)
     ● Except HTTPS, where < 50% is compressed.
  4. Background: Poor Network Utilization
     Web page evolution has led to poor network utilization.
     Bandwidth is going up... RTT isn't going down.
  5. Background: Pesky Round Trips
  6. Browser Perf Problems
     ● Network
     ● Rendering / Layout
     ● JavaScript Execution
     ● Stylesheets
     ● Flash
     ● More Network Loads
  7. HTTP Connections
     Average: 29 connections per page.
     25th percentile = 10, 50th = 20, 75th = 39, 95th = 78
  8. Incremental Improvements - Meh.
     ● Incremental changes don't "move the needle"
       ○ They're hard to figure out individually
       ○ Each only works for some people, with hacks
     ● The problem is the intermediaries (a.k.a. proxies)
       ○ Transparent proxies change the content.
       ○ Example: pipelining - where is it?
       ○ Example: stripped "Accept-Encoding" headers
         ■ We can't even improve "negotiated" compression!
  9. SPDY Requirements
     ● Avoid requiring the website author to change content
       ○ Allow for incremental changes
       ○ Performing "better" with content changes is okay
       ○ Performing "worse" without content changes is unacceptable
     ● Always perform better than HTTP, never worse
     ● Drop-in replacement from the web app's point of view
       ○ Changing the web server/application server is inevitable and therefore acceptable
  10. What is SPDY?
     ● Multiplexing
       ○ Get the data off the client
     ● Compression
       ○ HTTP headers are excessive
       ○ Uplink bandwidth is limited
     ● Prioritization
       ○ Today the browser holds back
       ○ Priorities enable multiplexing
     ● Server Push
       ○ Websites do some of this today with data URLs
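The multiplexing idea on this slide can be sketched in a few lines. This is a toy model, not SPDY's real framing: the (stream ID, payload) frame layout and the round-robin scheduling are my illustrative assumptions.

```python
from collections import defaultdict

# Toy SPDY-style multiplexing: many logical streams share one
# connection by tagging every frame with its stream ID, so a slow
# response cannot block the others (unlike one-request-at-a-time HTTP).
def interleave(streams: dict[int, bytes], chunk: int = 4) -> list[tuple[int, bytes]]:
    frames, offsets = [], {sid: 0 for sid in streams}
    while any(offsets[sid] < len(data) for sid, data in streams.items()):
        for sid, data in streams.items():  # round-robin across streams
            off = offsets[sid]
            if off < len(data):
                frames.append((sid, data[off:off + chunk]))
                offsets[sid] = off + chunk
    return frames

def demultiplex(frames: list[tuple[int, bytes]]) -> dict[int, bytes]:
    out = defaultdict(bytes)
    for sid, payload in frames:
        out[sid] += payload  # reassemble each stream independently
    return dict(out)

streams = {1: b"GET /index.html", 3: b"GET /logo.png"}
frames = interleave(streams)
assert demultiplex(frames) == streams  # both streams arrive intact, interleaved
```

Prioritization then becomes a scheduling decision over which stream's frame to send next, rather than a property of separate connections.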
  11. Less is More - Conns, Bytes, Packets
  12. Deployment: Process of Elimination
     ● Avoid changing the lower-level transport
     ● Available transports: TCP or UDP.
       ○ Note: SCTP is not an option due to NAT.
     ● UDP
       ○ We'd have to re-engineer TCP features.
     ● That leaves us with TCP.
       ○ OK, so which port? 80 or 443?
  13. Deployment: Port 80
     ● HTTP runs on port 80.
     ● Proxies are a barrier to new protocols
       ○ HTTP/1.1 (1999) - pipelining still not deployed
       ○ Compression negotiation
     ● The Upgrade header requires a round trip
     ● WebSockets data shows that HTTP over a non-standard port is tampered with less than port 80.
       ○ Success rate:
         ■ HTTP (port 80): 67%
         ■ HTTP (port 61985): 86%
  14. Deployment: Port 443
     ● Port 443 runs SSL/TLS.
       ○ Adds server authentication & encryption
     ● The handshake is extensible:
       ○ Next-Protocol-Negotiation www.ietf.org/id/draft-agl-tls-nextprotoneg-00.txt
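The essence of Next-Protocol-Negotiation is its selection rule: the server advertises the protocols it speaks inside the TLS handshake, and the client chooses one (falling back to HTTP/1.1 if there is no overlap). A sketch of just that rule, not the TLS wire format; the helper name and fallback default are mine:

```python
# NPN-style selection: the server advertises, the CLIENT picks.
# (This is the reverse of the later ALPN extension, where the
# client advertises and the server picks.)
def select_protocol(server_advertised: list[str],
                    client_preferences: list[str],
                    fallback: str = "http/1.1") -> str:
    for proto in client_preferences:  # client preference order wins
        if proto in server_advertised:
            return proto
    return fallback  # no overlap: fall back to plain HTTP

# A SPDY-capable client against a SPDY-capable server picks SPDY;
# against a legacy server it degrades gracefully.
assert select_protocol(["spdy/2", "http/1.1"], ["spdy/2", "http/1.1"]) == "spdy/2"
assert select_protocol(["http/1.1"], ["spdy/2", "http/1.1"]) == "http/1.1"
```

Because the negotiation rides inside the TLS handshake on port 443, intermediaries never see it, which is why this path avoids the proxy-tampering problem from the previous slide.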
  15. Can We Address Latency & Security Separately?
     ● If eavesdropping in the cafe is still possible in 2015 with trivial tools, we have failed our users.
     ● The internet is weak already and getting worse.
       ○ A matter of life and death
     ● Firesheep-style tools make sniffing easy
     ● Major content providers want privacy
       ○ Facebook: opt-in
       ○ Twitter: opt-in (UPDATE: now on by default!)
       ○ GMail and G+ are already SSL-only.
       ○ SSL is just too slow right now...
  16. HTTPS vs SPDY (Google)
     Update, Jan 2012: Google has announced that SPDY (w/ SSL) is now faster than HTTP on Google properties.
  17. Who Uses SPDY?
     ● Websites
       ○ Google since 2010
       ○ Amazon Kindle Fire
     ● Browsers
       ○ Google Chrome since 2010
       ○ Firefox 11+
       ○ Chrome for Android
     ● Servers
       ○ Apache w/ mod-spdy
       ○ nginx has announced support is coming
       ○ Java/Ruby/Python/node.js/Erlang/Go & C implementations!
       ○ netty framework
     ● Mobile
       ○ iPhone client
  18. SPDY & Mobile
     ● New client-side problems
       ○ Battery-life constraints
       ○ Severely limited CPUs
     ● New network properties
       ○ Latency of 150-300ms per round trip
       ○ Bandwidth of 1-4Mbps
     ● New use cases
       ○ Mobile web browsers are 1st generation
         ■ So web browsing sucks
       ○ Everyone uses apps w/ REST APIs anyway
  19. SPDY and Battery Life
     ● Network activity is one of the biggest battery drains
     ● SPDY is lightweight:
       ○ Fewer connections
       ○ Fewer packets
       ○ Fewer sends
     ● But...
       ○ Mobile network activity can be sporadic
         ■ e.g. a ping every 60-300s
       ○ SSL connections are more expensive to establish
         ■ Anecdotal: the handshake appears CPU-intensive
       ○ Every SSL implementation is unoptimized.
     ● I hate to say it, but until we optimize SSL clients on mobile, SPDY may not be ready for mobile.
  20. SPDY and Mobile Networks
     ● Good news
       ○ Mobile networks are the SPDY sweet spot
         ■ High latency and high bandwidth
     ● Bad news
       ○ Operators time out NATs aggressively (~60s)
       ○ Traditional SSL is unoptimized
         ■ OCSP validation is particularly poor
     ● Mitigations
       ○ Trust your own certificate to bypass OCSP in apps
       ○ Don't trust pooled connections over 60s old
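The last mitigation, refusing pooled connections older than the carrier's NAT timeout, can be sketched as a small age-checked pool. The class name, the 60-second default, and the injectable clock are my illustrative choices:

```python
import time

# A connection pool that refuses to hand back connections older than
# the carrier NAT timeout (~60s), since the NAT mapping behind a
# mobile network may already have been dropped.
class AgeLimitedPool:
    def __init__(self, max_age: float = 60.0, clock=time.monotonic):
        self.max_age = max_age
        self.clock = clock          # injectable for testing
        self._pool = {}             # host -> (connection, created_at)

    def put(self, host, conn):
        self._pool[host] = (conn, self.clock())

    def get(self, host):
        entry = self._pool.pop(host, None)
        if entry is None:
            return None
        conn, created = entry
        if self.clock() - created > self.max_age:
            return None  # too old: assume the NAT dropped the mapping
        return conn

# Simulate time with a fake clock instead of sleeping:
now = [0.0]
pool = AgeLimitedPool(clock=lambda: now[0])
pool.put("api.example.com", "conn-1")
now[0] = 30.0
assert pool.get("api.example.com") == "conn-1"   # still fresh: reuse it
pool.put("api.example.com", "conn-2")
now[0] = 120.0
assert pool.get("api.example.com") is None       # past the NAT timeout: reconnect
```

Failing fast here is cheaper than reusing a connection whose NAT mapping is gone, which would only be discovered after a full retransmission timeout.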
  21. SPDY & REST APIs
     ● Apps use HTTP to transfer JSON or XML
     ● No need to load HTML/CSS/assets; they are installed up front
     ● REST APIs over HTTP need batching
       ○ due to HTTP connection/serialization limits
       ○ JSON is fundamentally not streamable
       ○ Batching loses cacheability
       ○ Batching sacrifices latency for throughput
  22. JSON Streamability
     ● REST APIs are messages; how do we best deliver a message over any network?
     ● The network will chunk data.
     ● Round trips happen *between* chunks. More chunks == more chances for delays and lost packets.
     ● Small JSON blobs are good.
     ● Large JSON blobs are bad.
     ● JSON is not parseable until all of it has been received!
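The "small blobs good, large blobs bad" point is why streaming APIs often frame many small JSON records rather than one big array: each record is usable as soon as its bytes arrive. A sketch using newline-delimited framing (the framing choice and function name are mine, not anything SPDY specifies):

```python
import json

# One big JSON array is unusable until its final byte arrives.
# Newline-delimited records can be parsed as each line lands,
# even when network chunk boundaries fall mid-record.
def parse_ndjson_chunks(chunks):
    buf = ""
    for chunk in chunks:
        buf += chunk
        while "\n" in buf:                 # a complete record is available
            line, buf = buf.split("\n", 1)
            if line.strip():
                yield json.loads(line)     # emit it without waiting for the rest

# Chunk boundaries deliberately split the second record in half:
chunks = ['{"id": 1}\n{"id"', ': 2}\n', '{"id": 3}\n']
records = list(parse_ndjson_chunks(chunks))
assert records == [{"id": 1}, {"id": 2}, {"id": 3}]
```

Each record is a small, independently parseable blob, so a delayed or lost packet stalls only the records behind it, not everything already delivered.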
  23. Standardization
     ● Test implementations are cropping up everywhere
     ● Badly in need of an interoperability test suite!
     ● Going to the IETF next month to talk about HTTP/2.0
       ○ SPDY will likely change, but hopefully it will be a part of it.
     ● In 2012, SPDY is available in over 50% of browsers
  24. Thank You!
     Good luck to Zynga!