CloudFlare, Nginx, uWSGI, Celery
Our Django tricks
2015-02-09 København - Lorenzo Setale at Django Meetup #9
Lorenzo Setale
Proud Full Stack Developer, Startupper, 

Blogger and Italian since 1993
Working as CTO at MinbilDinbil.dk
http://who.is.lorenzo.setale.me/?
MinbilDinbil.dk allows every car owner
to earn some money by sharing their
car with the neighborhood.
P2P Car Sharing Service
Photo from Shutterstock
A lot of people involved = Complex project
‣ Render the HTML page
‣ Perform the API call

‣ Transform the request into a query
‣ Calculate distances between search location and the cars
‣ Get the JSON with the details for each car
‣ Return the JSON list of cars. 

‣ Render the API results using knockout.js and jQuery
Actions and external API calls behind a search request
Search flow
‣ Calculate details (verify dates, prices and discounts)

‣ Send SMS & Email notification
‣ Update the insurance
‣ Notify the support team about the booking
‣ Update tracking services ( Google Analytics / Mixpanel )
‣ Perform payment
Actions and external API calls when a user books a car
New Booking flow
Speed is crucial
Photo By PistolPeet ( https://www.flickr.com/photos/13174399@N00/46500553/ )
‣ Synchronous and Delayed tasks using Celery
‣ Smart load balancing with Nginx
‣ The brave spartan uWSGI configuration
‣ Cache implementations ( Server-side, CDN and Browser )
(even if there are other ways of doing it)
This is how we made everything faster
Synchronous vs Delayed
Celery is a task queue with focus on real-time processing.



It helps us delay any work that is not strictly required for the
success of the single API call/request.
A sweet guide to implement Celery with Django:
https://my.setale.me/271Y273i3N2b
Delayed = use a Celery Task
Hint: Define what should be delayed
# notifications/tasks.py
# django-celery>=3.0.23,<3.0.99
@celery.task(default_retry_delay=60, ignore_result=False)
def send_sms(recipient, text, user_sms, **kwargs):
    "This task will send an SMS message."
    try:
        send_sms_api(recipient, text, user_sms)
    except Exception as e:
        send_sms.retry(
            args=[recipient, text, user_sms],
            countdown=60, max_retries=5,
            exc=e, kwargs=kwargs,
        )
Retry in case of error.
Using the Celery task method .retry() we can run the task again in case of
failure. This way, even though the work is detached from the main thread,
we can be confident it will eventually be performed even if the first
attempt fails.
# Bookings slower (old)

class Booking(models.Model):
    def notify_booking_created(self):
        # Wait until the Clickatell API returns
        send_sms_api(recipient, text, user_sms)
Before & After
# Bookings faster (new: using Celery)
from .tasks import send_sms

class Booking(models.Model):
    def notify_booking_created(self):
        # Ask a worker to perform this for you.
        send_sms.delay(recipient, text, user_sms)
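The difference between the two versions is that .delay() returns immediately and hands the slow API call to a worker. To see the fire-and-forget idea in isolation, without a Celery broker, it can be sketched with the standard library; slow_sms_api below is a hypothetical stand-in for the real SMS call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a slow third-party API call.
def slow_sms_api(recipient, text):
    time.sleep(0.2)  # simulate network latency
    return "sent to %s" % recipient

pool = ThreadPoolExecutor(max_workers=2)

def notify_booking_created(recipient, text):
    # Returns immediately; the slow call runs in the background,
    # much like send_sms.delay(...) hands work to a Celery worker.
    return pool.submit(slow_sms_api, recipient, text)

start = time.time()
future = notify_booking_created("+4512345678", "Your booking is confirmed")
elapsed = time.time() - start  # well under the 0.2s API latency
result = future.result()       # block only if/when you need the outcome
```

Unlike Celery, a thread pool offers no retries or persistence; it only illustrates why the request thread stays fast.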
Smart Load Balancing
TL;DR - Define which machine should handle what
Smart load balancing with NGINX
(Diagram: the NGINX load balancer distributes requests across the uWSGI machines, while admin requests go to a dedicated machine)
The official guide to use uWSGI with Django and NGINX:
http://my.setale.me/0U0Z0J1r0w2w
# NGINX configuration file
upstream backend {
    server uwsgi.local:8080 max_fails=2 fail_timeout=15s;
    server uwsgi1.local:8080 max_fails=2 fail_timeout=15s;
}
server {
    listen 80;
    server_name minbildinbil.dk www.minbildinbil.dk;
    location / {
        proxy_pass http://backend;
    }
    location ~* "/admin/" {
        proxy_pass http://uwsgi.local:8080;
    }
}
NGINX load balancer
NGINX is a great reverse proxy server that can also work as a load
balancer. In some cases requests should be distributed across machines,
while others should go only to specific machines.
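Beyond the round-robin default shown in the configuration above, NGINX upstreams can weight servers and change the balancing strategy. A sketch, assuming a second, stronger machine (host names are illustrative):

```nginx
upstream backend {
    # Pick the server with the fewest active connections
    # instead of plain round-robin.
    least_conn;

    # A stronger machine takes a larger share of the traffic.
    server uwsgi.local:8080 weight=3 max_fails=2 fail_timeout=15s;
    server uwsgi1.local:8080 max_fails=2 fail_timeout=15s;

    # Only used when the machines above are unavailable.
    server backup.local:8080 backup;
}
```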
The “spartan” uWSGI
(that commits suicide to save the server)
uWSGI is a fast, self-healing application container server. It helps
spawn the processes that handle requests with your Django code.

The official guide to use uWSGI with Django and NGINX:
http://my.setale.me/0U0Z0J1r0w2w
TL;DR - uWSGI = "spawn" Django servers
Use uWSGI to deploy your project
{
    "uwsgi": {
        // #[…]
        "logformat": "[%(ltime)][%(vszM) MB] %(method) %(status) - %(uri)",
        "workers": 3,
        "max-worker-lifetime": 300,
        "max-requests": 100,
        "reload-on-as": 512,
        "reload-on-rss": 384
    }
}
uWSGI Configuration
The configuration above sets request, time and memory limits: when a
limit is reached, the process/worker is reloaded or restarted gracefully.
This way you avoid servers that are hungry for resources.
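The same limits can also be written in uWSGI's more common ini format; this is just a transcription of the JSON above, not taken from the talk:

```ini
[uwsgi]
logformat = [%(ltime)][%(vszM) MB] %(method) %(status) - %(uri)
workers = 3
# recycle a worker after 300 seconds or 100 handled requests
max-worker-lifetime = 300
max-requests = 100
# recycle a worker that exceeds 512MB of address space
# or 384MB of resident memory
reload-on-as = 512
reload-on-rss = 384
```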
Cache everywhere
Browser + CDN, NGINX
Set up caching based on custom rules and user authentication
using these HTTP headers:
‣ Cache-Control
‣ Vary
‣ Expires
A sweet guide to implement Cache-Control with Django:

https://my.setale.me/243p2U0I1P0F
Source Code:

https://github.com/koalalorenzo/django-smartcc
TL;DR - Define Cache-Control header in a smart way
Use django-smartcc middleware
# settings.py
MIDDLEWARE_CLASSES += [
    'smart_cache_control.middleware.SmartCacheControlMiddleware',
]
SCC_CUSTOM_URL_CACHE = (
    (r'/api/search$', 'public', 60*30),
)
SCC_MAX_AGE_PUBLIC = 60*60*24*7  # 7 days
SCC_MAX_AGE_PRIVATE = 0
Configure django-smartcc middleware
This Django middleware forces cache headers according to the rules we
define. In the code above, /api/search is always treated as public and
cached for at most 1800 seconds (30 minutes).
$ pip install -U django-smartcc
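Under the hood this kind of rule matching is plain regex work. A minimal stdlib sketch of what such a middleware computes per request; the function name is hypothetical, and the rule tuple mirrors SCC_CUSTOM_URL_CACHE above:

```python
import re

# Mirrors SCC_CUSTOM_URL_CACHE above: (pattern, visibility, max-age seconds).
CUSTOM_URL_CACHE = (
    (r'/api/search$', 'public', 60 * 30),
)
MAX_AGE_PRIVATE = 0  # mirrors SCC_MAX_AGE_PRIVATE

def cache_control_for(path):
    # The first matching custom rule wins ...
    for pattern, visibility, max_age in CUSTOM_URL_CACHE:
        if re.search(pattern, path):
            return "%s, max-age=%d" % (visibility, max_age)
    # ... otherwise fall back to the private default.
    return "private, max-age=%d" % MAX_AGE_PRIVATE

header = cache_control_for('/api/search')   # "public, max-age=1800"
default = cache_control_for('/dashboard/')  # "private, max-age=0"
```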
TL;DR - Like my mother, NGINX says things only once!
Cache using NGINX
(Diagram: requests pass through the NGINX cache before reaching uWSGI)
The official guide to use uWSGI with Django and NGINX:
http://my.setale.me/0U0Z0J1r0w2w
# Cache settings
proxy_cache_path /tmp/cache keys_zone=uwsgiCache:30m;
proxy_ignore_headers Cache-Control;
proxy_cache_key "$scheme://$host$request_uri$is_args$args";
add_header X-Cached $upstream_cache_status;

# Cache only the search and car API endpoints
location ~* "(/api/search|api/car)" {
    proxy_cache uwsgiCache;
    proxy_cache_min_uses 1;
    proxy_cache_valid 200 15m;
    proxy_pass http://backend;
}
NGINX cache rules
Best practice is to cache only specific requests (like truly static pages
or API calls), and only for a reasonable amount of time, so you do not
have to deal with cache invalidation.
Results
Requests saved by CloudFlare cache (without django-smartcc): 8.8% cached
Requests saved by CloudFlare cache (with django-smartcc): 28.45% cached
Pingdom RUM statistics
(Without Cache and Celery)
Pingdom RUM statistics
(With Cache and Celery)
45.4% Faster
9DJANGO
Do you need a car?
Coupon for a 10% discount!
Photo from Shutterstock
Tak! (Thanks!)
Photo from Shutterstock
Lorenzo Setale
email koalalorenzo@gmail.com
phone +45 30 14 45 29
twitter @koalalorenzo
website http://who.is.lorenzo.setale.me/?
CTO at MinbilDinbil.dk

MinbilDinbil Django Speed Tricks
