Building high traffic HTTP front-ends. Theo Schlossnagle. Hall 1
Presentation Transcript

  • High-performance Robust HTTP Front-ends / tips, tricks, and expectations. Saturday, April 23, 2011
  • Who am I? @postwait on Twitter. Author of “Scalable Internet Architectures” (Pearson, ISBN 067232699X). Contributor to “Web Operations” (O’Reilly). Founder of OmniTI, Message Systems, Fontdeck, & Circonus. I like to tackle problems that are “always on” and “always growing.” I am an engineer, a practitioner of academic computing: IEEE member, Senior ACM member, and on the editorial board of ACM’s Queue magazine.
  • Agenda
      • Why only HTTP?
      • HTTP-like protocols
      • Performance
      • Availability
  • HTTP
      • Why only HTTP? It’s what we do.
      • User-based, immediate, short-lived transactions occupy my life.
      • So, not just HTTP:
          • HTTPS
          • SPDY (... we’ll get to this)
  • Performance
      • ATS (Apache Traffic Server)
          • supports SSL
          • battle-hardened codebase
          • very multi-core capable
      • Varnish
          • VCL adds unparalleled flexibility
          • no SSL!
      • nginx
          • I don’t see much of this out on the edge
  • Performance Expectations
      • From a single server, you should be able to:
          • support 500k concurrent users (this is only 40k sockets/core)
          • push in excess of 100k requests/second (this is only 9k requests/core·second)
          • push close to 10 gigabits (this is why 10G was invented)
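A quick back-of-the-envelope check of these targets. This is a sketch, not from the slides: the 12-core count (roughly a dual-socket Westmere box) and the ~12 KB average response size are illustrative assumptions.

```python
# Capacity arithmetic for a single front-end box.
# CORES = 12 is an assumption (e.g., dual-socket hex-core Westmere).
CORES = 12

concurrent_users = 500_000
requests_per_sec = 100_000

sockets_per_core = concurrent_users / CORES   # ~41.7k, i.e. "only 40k sockets/core"
requests_per_core = requests_per_sec / CORES  # ~8.3k, i.e. "only 9k requests/core*second"

# Bandwidth sanity check: 100k req/s at an assumed ~12 KB per response
# lands just under a 10 Gbps link, which is why 10G matters here.
avg_response_bytes = 12_000
gbps = requests_per_sec * avg_response_bytes * 8 / 1e9

print(f"{sockets_per_core:.0f} sockets/core, "
      f"{requests_per_core:.0f} req/core/s, "
      f"{gbps:.1f} Gbps")
```

The per-core numbers are modest on their own; the point of the slide is that nothing in the arithmetic requires special-purpose hardware.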
  • Performance Achievements
      • Good load balancers achieve this performance.
      • With dual-socket Westmere processors, we’re able to achieve in software on general-purpose hardware what was once only possible in hardware ASICs.
      • ATS and Varnish can do this today.
  • The Basic Rules: Content
      • You must serve content from cache.
      • Your cache should fit in memory.
      • If it does not, it should spill to SSD, not spinning media.
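As an illustration of these cache rules, Varnish’s storage backends map onto them directly. A minimal sketch (the listen address, VCL path, sizes, and the `/ssd` mount point are all hypothetical):

```shell
# Cache fits in memory: use a malloc store sized to stay resident in RAM.
varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,32G

# Cache exceeds memory: spill to a file store backed by SSD,
# never by spinning media (path and size are illustrative).
varnishd -a :80 -f /etc/varnish/default.vcl -s file,/ssd/varnish_cache.bin,200G
```

The design point is that the operator, not the cache, decides where the spill goes; putting the file store on rotating disks turns every cache miss-to-disk into a seek.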
  • The Basic Rules: CPU
      • You must cache SSL sessions.
          • SSL key negotiation is expensive.
          • SSL encryption is not.*
      • Common cases must not cause state on the firewall.
          • It’s hard enough to serve 150k requests/second.
          • You will spend too much time in kernel in iptables, ipf, or pf.
          • Allow port 80 and port 443.
          • Enable SYN flood prevention.
      * crypto obviously costs CPU; symmetric crypto is relatively cheap
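A sketch of the firewall side of these rules in iptables terms, assuming Linux: skip connection tracking for the common HTTP/HTTPS case so no per-flow state is created, and lean on SYN cookies for flood prevention. The exact mechanism varies by kernel version (newer iptables uses `CT --notrack` instead of the `NOTRACK` target), so treat this as illustrative.

```shell
# No conntrack state for web traffic: the common case stays stateless.
iptables -t raw -A PREROUTING -p tcp --dport 80  -j NOTRACK
iptables -t raw -A PREROUTING -p tcp --dport 443 -j NOTRACK

# Allow port 80 and port 443.
iptables -A INPUT -p tcp --dport 80  -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# SYN flood prevention via SYN cookies.
sysctl -w net.ipv4.tcp_syncookies=1
```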
  • The Basic Rules: Network
      • You must not run a stateful firewall in front:
          • too expensive
          • too little value
      • You must be directly behind capable router(s):
          • expect anywhere from 1MM to 20MM packets per second
          • we need to run BGP for availability
  • Availability
      • We learned in the performance section: one machine on a 10Gbps uplink performs well enough.
      • We need redundancy:
          • Linux-HA? VRRP/HSRP? CARP?
          • No...
  • Availability: Constraints
      • Client TCP sessions are relatively short-lived.
      • The web is a largely idempotent place.
      • Clients are capable of retrying on failure.
      • This means:
          • forget stateful failover;
          • focus on availability for new connections.
  • Availability: Setup
      • You are behind a capable router (it was a rule).
      • Use routing protocols (BGP) to maintain availability.
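A sketch of what this setup can look like with Quagga: each front-end announces the shared service prefix to the upstream router, which balances new connections across every box currently announcing it; when a box (or its bgpd) dies, the announcement is withdrawn and traffic shifts. All addresses and AS numbers below are made up.

```
! /etc/quagga/bgpd.conf (illustrative addresses and ASNs)
router bgp 64512
 bgp router-id 192.0.2.10
 ! Announce the service prefix this front-end is willing to serve.
 network 198.51.100.0/24
 ! Peer with the upstream router (private ASNs for illustration).
 neighbor 192.0.2.1 remote-as 64500
 neighbor 192.0.2.1 description upstream-router
```

This matches the constraints slide: nothing here fails over existing TCP sessions; it only keeps the prefix reachable for new connections.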
  • Working Stacks
      • Linux (OS/TCP stack)
      • Illumos (OS/TCP stack)
      • Varnish (HTTP)
      • ATS (HTTP/HTTPS)
      • Quagga (BGP)
  • Future!
      • This stuff is fast.
      • In the end, we’re not looking for faster servers; we’re looking for improved user experience.
      • Enter SPDY:
          • Google’s multi-channel HTTP super-protocol
          • allows multiplexing of concurrent HTTP(-like) request/response pairs on a single TCP session
          • defeats slow start
          • allows for content prioritization on the server
  • Future: my thoughts
      • SPDY is relatively simple to implement on the server.
      • SPDY is very, very hard to leverage on the server.
      • If ATS implemented SPDY in and out, and provided a robust configuration language to leverage it... the future would be today.
  • Thank you.
      • Thank you, Олег Бунин.
      • Thanks to the Varnish and ATS developers.
      • Спасибо (thank you).