Django at Scale
Django at Scale: Presentation Transcript

  • Django at Scale
    Brett Hoerner (@bretthoerner) http://bretthoerner.com
    A whirlwind of various tools and ideas, nothing too deep. I tried to pick things that are applicable/useful even for smaller sites.
  • Who?
    Django Weekly Review in November 2005. I took that job in Dallas. Django for 5+ years. Disqus for 2 years.
  • DISQUS
    A commenting system with an emphasis on connecting online communities. Almost a million 'forums' (sites), millions and millions of users and comments.
  • “The embed”
    You’ve probably seen it somewhere; if you haven’t seen it, you’ve probably loaded it. More customization than one might think at first glance, or than you might build for your own system.
  • How big?
    • 19 employees, 9 devs/ops
    • 25,000 requests/second peak
    • 500 million unique monthly visitors
    • 230 million requests to Python in one day
    Slightly dated traffic information, higher now. Except the 230MM number, which I just pulled from logs: it doesn’t include cached Varnish hits, media, etc. Growing rapidly; when I joined I thought it was “big”... hahaha.
  • Long Tail
    Today’s news is in the green, but the yellow is very long and represents all of the older posts people are hitting 24/7. Hard to cache everything. Hard to know where traffic will be. Hard to do maintenance since we’re part of other people’s sites.
  • Infrastructure
    • Apache • Nginx • mod_wsgi • Haproxy • PostgreSQL • Varnish • Memcached • RabbitMQ • Redis • Solr • ... and more
    A little over 100 total servers; not Google/FB scale, but big. We don’t need our own datacenter. Still one of the largest pure-Python apps, afaik. Not going deep on non-Python/app stuff; happy to elaborate now or later.
  • But first... a PSA
  • USE PUPPET OR CHEF
    No excuses if this isn’t a pet project. If you do anything else you’re reinventing wheels. It’s not that hard. Your code 6 months later may as well be someone else’s; the same holds true for sysadmin work. But... not really the subject of this talk.
  • Application Monitoring
    • Graphite
    • http://graphite.wikidot.com/
    You should already be using Nagios, Munin, etc. It’s Python! (and Django, I think). Push data in, click it to add to a graph, save the graph for later. Track errors, new rows, logins; it’s UDP so it’s safe to call a lot from inside your app. Stores rates and more... I think?
  • Using Graphite / statsd
    statsd.increment('api.3_0.endpoint_request.' + endpoint)
    That’s it. Periods are “namespaces”, created automatically. From the devs at Etsy; check out their blog.
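For a sense of why statsd calls are cheap enough to sprinkle everywhere, here is a minimal sketch of what an increment actually sends: one UDP datagram in statsd’s `name:value|type` wire format. The function names and the default host/port are illustrative assumptions; real statsd clients also support sampling and batching.

```python
import socket

def statsd_packet(metric, value=1, kind='c'):
    # statsd wire format: 'name:value|type' ('c' means counter)
    return ('%s:%d|%s' % (metric, value, kind)).encode()

def statsd_increment(metric, host='127.0.0.1', port=8125):
    # Fire-and-forget UDP: no connection, no blocking, no error if nothing
    # is listening, which is why it's safe to call a lot from hot paths.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(statsd_packet(metric), (host, port))
    finally:
        sock.close()

statsd_increment('api.3_0.endpoint_request.create_post')
```

The dotted metric name becomes the namespace hierarchy you click through in the Graphite UI.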
  • Error Logging
    • Exception emails suck
    • Want to ...
      • ... group by issue
      • ... store more than exceptions
      • ... mark things fixed
      • ... store more detailed output
      • ... tie the unique ID of a 500 to an exception
    We were regularly locked out of Gmail when we used exception emails.
  • Sentry dashboard
  • Sentry detail
  • Using Sentry
    import logging
    import sys

    from sentry.client.handlers import SentryHandler

    logger = logging.getLogger()
    logger.addHandler(SentryHandler())

    # usage
    logging.error('There was some crazy error', exc_info=sys.exc_info(), extra={
        # Optionally pass a request and we'll grab any information we can
        'request': request,
        # Otherwise you can pass additional arguments to specify request info
        'view': 'my.view.name',
        'url': request.build_absolute_url(),
        'data': {
            # You may specify any values here and Sentry will log and output them
            'username': request.user.username,
        }
    })

    Try generating and sending unique IDs; send them out with your 500 so you can search for them later (from user support requests, etc).
  • Background Tasks
    • Slow external APIs
    • Analytics and data processing
    • Denormalization
    • Sending email
    • Updating avatars
    • Running large imports/exports/deletes
    Everyone can use this; it helps with scale but is useful for even the smallest apps.
  • Celery + RabbitMQ
    • http://celeryproject.org/
    • Super simple wrapper over AMQP (and more)

    from celery.task import task

    @task
    def check_spam(post):
        if slow_api.check_spam(post):
            post.update(spam=True)

    # usage
    post = Post.objects.all()[0]
    check_spam.delay(post)

    We tried inventing our own queues and failed; don’t do it. Currently have over 40 queues. We have a Task subclass to help with testing (enable only the tasks you want to run). Also good for throttling.
  • Celery + Eventlet = <3
    • Especially for slow HTTP APIs
    • Run hundreds/thousands of requests simultaneously
    • Save yourself gigs of RAM, maybe a machine or two
    Can be a bit painful... shoving functionality into Python that nobody expected. We have hacks to use the Django ORM; ask if you need help. Beware: “threading” issues pop up with greenthreads, too.
  • Delayed Signals
    • Typical Django signals, sent to a queue

    # in models.py
    post_save.connect(delayed.post_save_sender, sender=Post, weak=False)

    # elsewhere
    def check_spam(sender, data, created, **kwargs):
        post = Post.objects.get(pk=data['id'])
        if slow_api.check_spam(post):
            post.update(spam=True)

    delayed.post_save_receivers['spam'].connect(check_spam, sender=Post)

    # usage
    post = Post.objects.create(message="v1agr4!")

    Not really for ‘scale’; more dev ease of use. We don’t serialize the object (hence the query). Not open sourced currently; easy to recreate. Questionable use... it’s pretty easy to just task.delay() inside a normal post_save handler.
  • Dynamic Settings
    • Change settings ...
      • ... without re-deploying
      • ... in realtime
      • ... as a non-developer
    Things that don’t deserve their own table. Hard to think of an example right now (but we built something more useful on top of this... you’ll see).
  • modeldict
    class Setting(models.Model):
        key = models.CharField(max_length=32)
        value = models.CharField(max_length=200)

    settings = ModelDict(Setting, key='key', value='value', instances=False)

    # access missing value
    settings['foo']
    >>> KeyError

    # set the value
    settings['foo'] = 'hello'

    # fetch the current value using either method
    Setting.objects.get(key='foo').value
    >>> 'hello'
    settings['foo']
    >>> 'hello'

    https://github.com/disqus/django-modeldict
    Backed by the DB. Cached, invalidated on change, fetched once per request.
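The core idea is small enough to sketch without Django: a dict-like wrapper over a key/value store, with a local cache that is invalidated on writes. This is a simplified illustration (a plain dict stands in for the Setting table, and the class name is made up); django-modeldict itself also caches in memcached and refreshes once per request.

```python
class SettingsDict(object):
    """Sketch of a ModelDict-style dynamic settings store."""

    def __init__(self, store):
        self._store = store  # stands in for the Setting database table
        self._cache = None   # stands in for the per-request/memcached copy

    def _load(self):
        # Fetch everything at most once until the next write invalidates it.
        if self._cache is None:
            self._cache = dict(self._store)
        return self._cache

    def __getitem__(self, key):
        return self._load()[key]  # missing keys raise KeyError, dict-style

    def __setitem__(self, key, value):
        self._store[key] = value  # write through to the "database"
        self._cache = None        # invalidate so readers see the change

store = {}
settings = SettingsDict(store)
settings['foo'] = 'hello'
print(settings['foo'])  # hello
```

Because reads come from the cached copy, non-developers can flip values in the backing table and every process picks them up on its next refresh, with no deploy.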
  • Feature Switches
    • Do more development in master
    • Dark-launch risky features
    • Release big changes slowly
    • Free and easy beta testing
    • Change all of this live without knowing how to code (and thus without needing to deploy)
    No DB magic; your stuff needs to be backwards compatible at the data layer.
  • Gargoyle
    • https://github.com/disqus/gargoyle
    Powered by modeldict. Everything remotely big goes under a switch. We have many; eventually we clean them up when the feature is stable.
  • Using Gargoyle
    from gargoyle import gargoyle

    def my_function(request):
        if gargoyle.is_active('my switch name', request):
            return foo
        else:
            return bar

    Also usable as a decorator; check out the docs. You can extend it for other models, like .is_active('foo', forum). Super handy, but still overhead to support both versions; not free.
  • Caching
    • Use pylibmc + libmemcached
    • Use consistent hashing behavior (ketama)
    • A few recommendations...
  • Caching problem in update_homepage?
    def homepage(request):
        page = cache.get("page:home")
        if not page:
            page = Page.objects.get(name='home')
            cache.set("page:home", page)
        return HttpResponse(page.body)

    def update_homepage(request):
        page = Page.objects.get(name='home')
        page.body = 'herp derp'
        page.save()
        cache.delete("page:home")
        return HttpResponse("yay")

    See any problems related to caching in “update_homepage”? If not, imagine the homepage is being hit 1000/sec. Still?
  • Set don’t delete
    • If possible, always set to prevent ...
      • ... races
      • ... stampedes
    Previous slide: Race: another request in a transaction stores the old copy when it gets a cache miss. Stampede: 900 users start a DB query to fill the empty cache. Setting > deleting fixes both of these. This happened to us a lot when we went from “pretty busy” to “constantly under high load”. It can still happen (more rarely) on small sites. Confuses users, gets you support tickets.
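The race and stampede above can be seen in a pure-Python sketch, with plain dicts standing in for the database and memcached (the function names here are illustrative, not from the talk). Delete-on-write opens a miss window; set-on-write closes it.

```python
db = {'home': 'old body'}
cache = {}

def read_homepage():
    # Readers check the cache first, then fall back to the "database".
    page = cache.get('page:home')
    if page is None:
        # On a busy site, 900 concurrent readers can land here at once
        # (stampede), and one inside an old transaction can re-cache
        # stale data (race).
        page = db['home']
        cache['page:home'] = page
    return page

def update_homepage_delete(body):
    db['home'] = body
    cache.pop('page:home', None)  # opens the miss window

def update_homepage_set(body):
    db['home'] = body
    cache['page:home'] = body     # no miss window: readers always hit

update_homepage_set('new body')
print(read_homepage())  # new body
```

With set-on-write, no reader ever observes an empty key after an update, so there is nothing for a stampede or a stale writer to fill.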
  • ‘Keep’ cache
    cache.get("moderators:cnn", keep=True)
    • Store in thread-local memory
    • Flush the dict after the request finishes
    Useful when something that hits cache may be called multiple times in different parts of the codebase. Yes, you can solve this in lots of other ways; I just feel like “keep” should be on by default. No released project; pretty easy to implement. Surprised I haven’t seen this elsewhere? Does anyone else do this?
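Since the talk says there is no released project for this, here is one way the `keep=True` idea could be implemented: a wrapper that stashes kept values in thread-local memory so repeated lookups within one request skip the cache server. The class and method names are assumptions; a dict stands in for the real memcached backend.

```python
import threading

_local = threading.local()

class KeepCache(object):
    """Sketch of a 'keep' cache wrapper over a real cache backend."""

    def __init__(self, backend):
        self.backend = backend  # the real cache, e.g. memcached

    def get(self, key, keep=False):
        kept = getattr(_local, 'kept', None)
        if kept is not None and key in kept:
            return kept[key]          # request-local hit: no network trip
        value = self.backend.get(key)
        if keep:
            if kept is None:
                kept = _local.kept = {}
            kept[key] = value         # remember for the rest of the request
        return value

    def clear_kept(self):
        # Call this from middleware when the request finishes, so kept
        # values never leak across requests.
        _local.kept = {}

backend = {'moderators:cnn': ['alice', 'bob']}
cache = KeepCache(backend)
print(cache.get('moderators:cnn', keep=True))
```

Subsequent `cache.get('moderators:cnn')` calls in the same thread return the kept copy even if the backend changes, until `clear_kept()` runs.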
  • Mint Cache
    • Stores (val, refresh_time, refreshed)
    • One (or a few) clients will refresh the cache, instead of a ton of them
    • django-newcache does this
    One guy gets an early miss, causing him to update the cache. The alternative is: the item falls out of cache, and a stampede of users all go to update it at once. Check out newcache for code.
  • Django Patches
    • https://github.com/disqus/django-patches
    • Too deep, boring, and use-case specific to go through here
    • Not comprehensive
    • All for 1.2; I have a (Disqus) branch where they’re ported to 1.3... can release if anyone cares
    Maybe worth glancing through. Just wanted to point this out. Some of these MAY be needed for edge cases inside our own open-sourced Django projects... we should really check. :)
  • DB or: The Bottleneck
    • You should use Postgres (ahem)
    • But none of this is specific to Postgres
    • Joins are great; don’t shard until you have to
    • Use an external connection pooler
    • Beware NoSQL promises, but embrace the shit out of it
    External connection poolers have other advantages, like sharing/re-using autocommit connections. Ad-hoc queries, relations, and joins help you build most features faster, period. Also, come to the Austin NoSQL meetup.
  • multidb
    • Very easy to use
    • Testing read-slave code can be weird; check out our patches or ask me later
    • Remember: as soon as you use a read slave you’ve entered the world of eventual consistency
    No general solution to the consistency problem; it’s app specific. Huge annoyance/issue for us. Beware, here there be dragons.
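For readers who haven't used Django's multi-db support: the "very easy to use" part is a router class implementing the hook methods Django's multi-db machinery calls. A minimal sketch, assuming replica aliases named `replica1`/`replica2` exist in your `DATABASES` setting:

```python
import random

class ReadReplicaRouter(object):
    """Sketch of a Django database router: writes go to the master,
    reads are spread across replicas. Once this is active, every read
    may see slightly stale data (eventual consistency)."""

    replicas = ['replica1', 'replica2']  # aliases assumed in DATABASES

    def db_for_read(self, model, **hints):
        # Picking a replica at random is the simplest load-spreading.
        return random.choice(self.replicas)

    def db_for_write(self, model, **hints):
        return 'default'  # the master

router = ReadReplicaRouter()
print(router.db_for_write(None))  # default
```

You'd enable it with `DATABASE_ROUTERS = ['myapp.routers.ReadReplicaRouter']`; the consistency caveat in the slide applies the moment any read lands on a replica that hasn't replayed your last write.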
  • Update don’t save
    • Just like “set don’t delete”
    • .save() flushes the entire row
    • Someone else only changes ColA, you only change ColB... if you .save(), you revert his change
    We send signals on update (lots of denormalization happens via signals); you may want to do this also. (In 1.3? A ticket? Dunno.)
  • Instance update
    # instead of
    Model.objects.filter(pk=instance.id).update(foo=1)

    # we can now do
    instance.update(foo=1)

    https://github.com/andymccurdy/django-tips-and-tricks/blob/master/model_update.py
    Prefer this to saving in nearly all cases.
  • ALTER hurts
    • Large tables under load are hard to ALTER
    • Especially annoying if you’re not adding anything complex
    • Most common case (for us): a new boolean
  • bitfield
    class Foo(models.Model):
        flags = BitField(flags=(
            'awesome_flag',
            'flaggy_foo',
            'baz_bar',
        ))

    # Add awesome_flag
    Foo.objects.filter(pk=o.pk).update(flags=F('flags') | Foo.flags.awesome_flag)

    # Find by awesome_flag
    Foo.objects.filter(flags=Foo.flags.awesome_flag)

    # Test awesome_flag
    if o.flags.awesome_flag:
        print "Happy times!"

    https://github.com/disqus/django-bitfield
    Uses a single BigInt field for 64 booleans. Put one on your model from the start and you probably won’t need to add booleans ever again.
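Underneath, this is ordinary bit twiddling on one integer column, which is why no ALTER is ever needed for a new flag. A plain-Python illustration of the idea (the flag names are just examples):

```python
# Each flag is one bit of a single 64-bit integer column.
AWESOME_FLAG = 1 << 0
FLAGGY_FOO   = 1 << 1
BAZ_BAR      = 1 << 2

flags = 0
flags |= AWESOME_FLAG                 # set: what update(flags=F('flags') | ...) does in SQL
print(bool(flags & AWESOME_FLAG))     # True  (test: what o.flags.awesome_flag checks)
print(bool(flags & BAZ_BAR))          # False
flags &= ~AWESOME_FLAG                # clear the flag again
print(bool(flags & AWESOME_FLAG))     # False
```

Adding "a new boolean" is then just assigning the next unused bit in application code; the column itself never changes.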
  • (Don’t default to) Transactions
    • Default to autocommit=True
    • Don’t use TransactionMiddleware unless you can prove that you need it
    • Scalability pits that are hard to dig out of
    The middleware was sexy as hell when I first saw it; now it’s a sworn mortal enemy. It hurts connection pooling, hurts the master DB, and most apps just don’t need it.
  • Django DB Utils
    • attach_foreignkey
    • queryset_to_dict
    • SkinnyQuerySet
    • RangeQuerySet
    https://github.com/disqus/django-db-utils
    See the GitHub page for explanations.
  • NoSQL
    • We use a lot of Redis
    • We’ve used and moved off of Mongo and Membase
    • I’m a Riak fanboy
    We mostly use Redis for denormalization, counters, and things that aren’t 100% critical and can be re-filled on data loss. It has helped a ton with write load on Postgres.
  • Nydus
    from nydus.db import create_cluster

    redis = create_cluster({
        'engine': 'nydus.db.backends.redis.Redis',
        'router': 'nydus.db.routers.redis.PartitionRouter',
        'hosts': {
            0: {'db': 0},
            1: {'db': 1},
            2: {'db': 2},
        }
    })

    res = redis.incr('foo')
    assert res == 1

    https://github.com/disqus/nydus
    It’s like django.db.connections for NoSQL. Notice that you never told the connection which Redis host to use; the router decided that for you based on the key. Doesn’t do magic like rebalancing if you add a node (don’t do that); just a cleaner API.
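The essence of the router is deterministic key-to-host mapping: hash the key, take it modulo the host count, and the same key always lands on the same Redis. This is a simplified sketch of the concept (the class and hostnames are made up; Nydus's actual PartitionRouter is more involved):

```python
import hashlib

class SimplePartitionRouter(object):
    """Illustrative key-partitioning router: same key, same host, always."""

    def __init__(self, hosts):
        self.hosts = hosts

    def get_host(self, key):
        # md5 gives a stable, evenly distributed hash of the key;
        # modulo picks one host from the fixed list.
        digest = hashlib.md5(key.encode('utf-8')).hexdigest()
        return self.hosts[int(digest, 16) % len(self.hosts)]

router = SimplePartitionRouter(['redis-0', 'redis-1', 'redis-2'])
print(router.get_host('foo'))
```

The modulo step is also why the slide warns against adding a node: changing the host count remaps nearly every key, which is what consistent hashing (ketama, mentioned earlier for memcached) is designed to avoid.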
  • Sharding
    • Django routers and some Postgres/Slony hackery make this pretty easy
    • Need a good key to shard on; very app specific
    • Lose full-table queries, aggregates, joins
    • If you actually need it, let’s talk
    Fun to talk about, but not general or applicable to 99%.
  • Various Tools
    • Mule https://github.com/disqus/mule
    • Chishop https://github.com/disqus/chishop
    • Jenkins http://jenkins-ci.org/
    • Fabric http://fabfile.org/
    • coverage.py http://nedbatchelder.com/code/coverage/
    • Vagrant http://vagrantup.com/
    Not to mention virtualenv, pip, pyflakes, git-hooks...
  • Get a job.
    • Want to live & work in San Francisco? http://disqus.com/jobs/