
Scaling Early


by Mark Maunder

Published in: Economy & Finance, Technology


  1. Scaling an early stage startup by Mark Maunder
  2. Why do performance and scaling quickly matter? <ul><li>Slow performance could cost you 20% of your revenue, according to Google. </li></ul><ul><li>Any reduction in hosting costs goes directly to your bottom line as profit, or can accelerate growth. </li></ul><ul><li>In a viral business, slow performance can damage your viral growth. </li></ul>
  3. My first missteps <ul><li>Misconfiguration. Web server and DB configured to grab too much RAM. </li></ul><ul><li>As traffic builds, the server swaps and slows down drastically. </li></ul><ul><li>Easy to fix – just a quick config change on the web server and/or DB. </li></ul>
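As a rough sketch (the directive names are Apache prefork's; the values are illustrative, not the deck's actual settings), the fix is to cap how much memory the web server can ever claim so the box never swaps:

```apache
# httpd.conf (prefork): cap children so peak resident memory fits in RAM.
# At ~50 MB per mod_perl child on a 2 GB box, leave headroom for the DB.
MaxClients   20
ServerLimit  20
```

On the MySQL side the equivalent is shrinking `key_buffer_size` (or `innodb_buffer_pool_size`) in `my.cnf` so the two daemons' combined peak usage stays below physical RAM.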
  4. Traffic at this stage <ul><li>2 Widgets per second </li></ul><ul><li>10 HTTP requests per second. </li></ul><ul><li>1 Widget = 1 Pageview </li></ul><ul><li>We serve as many pages as our users do, combined. </li></ul>
  5. Keepalive – good for clients, bad for servers. <ul><li>As HTTP requests increased to 10 per second, I ran out of server threads to handle connections. </li></ul><ul><li>Keepalive was on and KeepAliveTimeout was set to 300 seconds. </li></ul><ul><li>Turned Keepalive off. </li></ul>
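In Apache terms (the directive names are Apache's; 300 is the timeout in seconds), the change is one line on the app server:

```apache
# httpd.conf on the app server: free each worker the moment the
# response is sent instead of holding it idle until the timeout.
KeepAlive Off

# Milder alternative if clients really benefit from connection reuse:
# KeepAlive On
# KeepAliveTimeout 2
```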
  6. Traffic at this stage <ul><li>4 Widgets per second </li></ul><ul><li>20 HTTP requests per second </li></ul>
  7. Cache as much DB data as possible <ul><li>I used Perl’s Cache::FileCache to cache either DB data or rendered HTML on disk. </li></ul><ul><li>memcached, developed for LiveJournal, caches across servers. </li></ul><ul><li>YMMV – how dynamic is your data? </li></ul>
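The pattern behind Cache::FileCache – check a per-key file on disk, fall back to the expensive work, write the result back – sketched in Python rather than the deck's Perl; `render_page`, the key name, and the TTL are made up for illustration:

```python
import os, time, tempfile, hashlib, pickle

CACHE_DIR = tempfile.mkdtemp()          # stand-in for Cache::FileCache's spool dir
TTL = 60                                # seconds a cached entry stays fresh

def _path(key):
    # one file per key, like Cache::FileCache's on-disk layout
    return os.path.join(CACHE_DIR, hashlib.md5(key.encode()).hexdigest())

def cached(key, compute):
    """Return the cached value for key, recomputing when missing or stale."""
    p = _path(key)
    try:
        if time.time() - os.path.getmtime(p) < TTL:
            with open(p, "rb") as f:
                return pickle.load(f)
    except OSError:
        pass                            # no cache file yet
    value = compute()                   # the expensive DB query / HTML render
    with open(p, "wb") as f:
        pickle.dump(value, f)
    return value

# hypothetical expensive renderer; `calls` just counts invocations
calls = []
def render_page():
    calls.append(1)
    return "<html>widget</html>"

html = cached("widget:42", render_page)
html2 = cached("widget:42", render_page)   # served from disk this time
```

memcached follows the same get-or-compute-and-set shape, but keeps the values in RAM on a pool of servers instead of local files.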
  8. MySQL not fast enough <ul><li>A high number of writes & deletes on a large single table caused severe slowness. </li></ul><ul><li>Writes blow away the query cache. </li></ul><ul><li>MySQL doesn’t handle a large number of small tables well (over 10,000). </li></ul><ul><li>MySQL is memory hungry if you want to cache large indexes. </li></ul><ul><li>I maxed out at about 200 concurrent read/write queries per second with over 1 million records (and that’s not large enough). </li></ul>
  9. Perl’s Tie::File to the early rescue <ul><li>Tie::File is a very simple flat-file API. </li></ul><ul><li>Lots of files/tables. </li></ul><ul><li>Faster – 500 to 1000 concurrent read/writes per second. </li></ul><ul><li>Prepending requires reading and rewriting the whole file. </li></ul>
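The flat-file model, sketched in Python rather than Perl's Tie::File (the file name and records are invented): one line per record, cheap appends, and the prepend caveat made visible:

```python
import os, tempfile

# One "table" per flat file, one record per line (the Tie::File model).
path = os.path.join(tempfile.mkdtemp(), "hits.tbl")

def append(line):
    with open(path, "a") as f:          # appends are cheap: seek to end, write
        f.write(line + "\n")

def prepend(line):
    # the slide's caveat: prepending means reading and rewriting the whole file
    old = []
    if os.path.exists(path):
        with open(path) as f:
            old = f.read().splitlines()
    with open(path, "w") as f:
        f.write("\n".join([line] + old) + "\n")

append("rec2")
append("rec3")
prepend("rec1")                         # rewrites everything above
with open(path) as f:
    rows = f.read().splitlines()
```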
  10. BerkeleyDB is very very fast! <ul><li>I’m also experimenting with BerkeleyDB for some small intensive tasks. </li></ul><ul><li>Data from Oracle, which owns BDB: just over 90,000 transactional writes per second. </li></ul><ul><li>Over 1 million non-transactional writes per second in memory. </li></ul><ul><li>Oracle’s machine: Linux on an AMD Athlon™ 64 processor 3200+ at 1GHz with 1GB of RAM and a 7200RPM drive with 8MB of cache. </li></ul>
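BerkeleyDB is an embedded key-value library, not a SQL server. Python's standard `dbm` module exposes the same shape (whether it is actually BDB-backed depends on the build), so the access pattern can be sketched as:

```python
import dbm, os, tempfile

# Embedded key-value store, BerkeleyDB-style: no server process, no SQL,
# just byte keys mapped to byte values in a local file.
path = os.path.join(tempfile.mkdtemp(), "counters")

with dbm.open(path, "c") as db:         # "c": create the file if missing
    db[b"widget:42:views"] = b"1"
    # read-modify-write of a counter (key name is illustrative)
    db[b"widget:42:views"] = str(int(db[b"widget:42:views"]) + 1).encode()

with dbm.open(path, "r") as db:
    views = int(db[b"widget:42:views"])
```

Because every operation is an in-process library call rather than a network round-trip to a database server, throughput numbers like the ones above are plausible for small records.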
  11. Traffic at this stage <ul><li>7 Widgets per second </li></ul><ul><li>35 HTTP requests per second </li></ul>
  12. Created a separate image and CSS server <ul><li>Enabled Keepalive on the image server to be nice to clients. </li></ul><ul><li>Static content requires very little memory per thread/process. </li></ul><ul><li>Kept Keepalive off on the app server to reduce memory. </li></ul><ul><li>Added benefit: higher browser concurrency with 2 hostnames. </li></ul>
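Sketched as two Apache virtual hosts (hostnames and paths are placeholders, not the deck's actual setup):

```apache
# Static host: workers here hold no Perl interpreter,
# so idle Keepalive connections are cheap.
<VirtualHost *:80>
    ServerName   img.example.com
    DocumentRoot /var/www/static
    KeepAlive    On
    KeepAliveTimeout 15
</VirtualHost>

# App host: every worker is a full mod_perl process, so no Keepalive.
<VirtualHost *:80>
    ServerName   app.example.com
    KeepAlive    Off
</VirtualHost>
```

The concurrency benefit comes from browsers limiting simultaneous connections per hostname; serving images from a second name doubles what they will open in parallel.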
  13. Now using home-grown fixed-length records <ul><li>A lot like ISAM or MyISAM. </li></ul><ul><li>Fixed-length records mean we seek directly to the data. No more file slurping. </li></ul><ul><li>Sequential records mean sequential reads, which are fast. </li></ul><ul><li>Still using file-level locking. </li></ul><ul><li>Benchmarked at 20,000+ concurrent reads/writes/deletes. </li></ul>
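A minimal fixed-length record file sketched in Python (the record format and values are invented): because every record is the same size, record i lives at byte offset i * SIZE, so each read or write is a single seek with no scanning:

```python
import os, struct, tempfile

# Each record: uint32 counter + 16-byte space-padded name = 20 bytes.
FMT  = "<I16s"
SIZE = struct.calcsize(FMT)

path = os.path.join(tempfile.mkdtemp(), "records.dat")

def write_rec(f, i, count, name):
    f.seek(i * SIZE)                    # direct seek: no slurping the file
    f.write(struct.pack(FMT, count, name.ljust(16).encode()))

def read_rec(f, i):
    f.seek(i * SIZE)
    count, raw = struct.unpack(FMT, f.read(SIZE))
    return count, raw.decode().rstrip()

with open(path, "w+b") as f:
    write_rec(f, 0, 7, "widget-a")
    write_rec(f, 2, 99, "widget-c")     # sparse write: slot 1 stays zeroed
    rec0 = read_rec(f, 0)
    rec2 = read_rec(f, 2)
```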
  14. Traffic at this stage <ul><li>12 Widgets per second </li></ul><ul><li>50 to 60 HTTP requests per second </li></ul><ul><li>Load average spiking to 12 or more about 3 times per day for an unknown reason. </li></ul>
  15. Blocking Content Thieves <ul><li>Content thieves were aggressively crawling our site on pages that are CPU intensive. </li></ul><ul><li>Robots.txt is irrelevant – they ignore it. </li></ul><ul><li>Reverse DNS lookup with ‘dig -x’ </li></ul><ul><li>Firewall the &^%$@’s with ‘iptables’ </li></ul>
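The two commands the slide names, with a placeholder address from the TEST-NET range:

```shell
# Reverse-lookup the aggressive IP: a real crawler resolves to something
# like *.googlebot.com; a scraper is usually a bare hosting-company name.
dig -x 203.0.113.50 +short

# Not a bot you care about? Drop all its traffic (run as root).
iptables -A INPUT -s 203.0.113.50 -j DROP
```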
  16. Moved to httpd.prefork <ul><li>httpd.worker consumes more memory than prefork because worker doesn’t share memory. </li></ul><ul><li>Tuning the number of Perl interpreters vs. the number of threads didn’t improve things. </li></ul><ul><li>Prefork with no keepalive on the app server uses less RAM and works well – for mod_perl. </li></ul>
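A prefork configuration along these lines (the directive names are Apache's; the values are illustrative, not the deck's):

```apache
# httpd.conf: prefork with mod_perl — children fork() from a parent that has
# already compiled the Perl code, so those pages stay shared copy-on-write.
<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients           20
    MaxRequestsPerChild 1000   # recycle children before they bloat
</IfModule>
```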
  17. The amazing Linux filesystem cache <ul><li>Linux uses spare memory to cache files on disk. </li></ul><ul><li>Lots of spare memory == much faster I/O. </li></ul><ul><li>Prefork freed lots of memory: 1.3 GB out of 2 GB is now used as cache. </li></ul><ul><li>I’ve noticed a roughly 20% performance increase since that memory became available as cache. </li></ul>
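You can watch this with `free`; the "cached" column is the page cache, where a figure like the 1.3 GB above would show up:

```shell
# -m: megabytes. "cached" is file data the kernel keeps in otherwise idle
# RAM; it is surrendered immediately when processes need the memory.
free -m
```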
  18. Tools <ul><li>httperf for benchmarking your server </li></ul><ul><li> for perf monitoring. </li></ul>
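A typical httperf run (the hostname and path are placeholders) fixes a request rate and reports the achieved reply rate and error count, which you compare before and after each tuning change:

```shell
# 1000 connections at 50 new connections per second against one URL.
httperf --server app.example.com --port 80 --uri /widget \
        --num-conns 1000 --rate 50
```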
  19. Summary <ul><li>Make content as static as possible. </li></ul><ul><li>Cache as much of your dynamic content as possible. </li></ul><ul><li>Separate serving app requests from serving static content. </li></ul><ul><li>Don’t underestimate the speed of lightweight file-access APIs. </li></ul><ul><li>Only serve real users and the search engines you care about. </li></ul>