2013 - Dustin Whittle - Scaling PHP in the Real World
SCALING PHP IN THE REAL WORLD
PHP is used by the likes of Facebook, Yahoo!, Zynga, Tumblr, Etsy, and Wikipedia. How do the
largest internet companies scale PHP to meet their demand? Join this session and ﬁnd out how
to use the latest tools in PHP for developing high performance applications. We’ll take a look
at common techniques for scaling PHP applications and best practices for proﬁling and
optimizing performance. After this session, you’ll leave prepared to tackle your next
enterprise PHP project.
Why performance matters
The problems with PHP
Best practice designs
Doing work in the background with queues
Fronting with http caching and a reverse proxy cache
Distributed data caches with redis and memcached
Using the right tool for the job
Tools of the trade
Varnish/Squid and reverse proxy caches
Xdebug + Valgrind + WebGrind
Architecture not applications
What I have worked on
• Developer Evangelist @
• Consultant & Trainer @
• Developer Evangelist @
Did you know Zynga, Tumblr, Etsy, and Wikipedia were all built on PHP?
The majority of Americans are said to wait in line (in a real shop) for no longer than 15 minutes.
1 out of 4 customers will abandon a webpage that takes more than 4 seconds to load.
Microsoft found that Bing searches that were 2 seconds slower resulted in a 4.3% drop in revenue per user.
When Mozilla shaved 2.2 seconds off their landing page, Firefox downloads increased (60 million more downloads).
Making Obama’s website 60% faster increased donation conversions by 14%.
Amazon and Walmart increased revenue 1% for every 100ms of improvement.
Upgrade your PHP
One of the easiest ways to improve performance and stability is to upgrade your version of PHP. PHP 5.3.x was released in 2009. If you haven’t migrated to PHP 5.4, now is the time! Not only do you benefit from bug fixes and new features, but you will also see faster response times immediately. See PHP.net to get started.
Installing the latest PHP on Linux - http://php.net/manual/en/install.unix.debian.php
Installing the latest PHP on OSX - http://php.net/manual/en/install.macosx.php
Installing the latest PHP on Windows - http://php.net/manual/en/install.windows.php
Once you’ve ﬁnished upgrading PHP, be sure to disable any unused extensions in production
such as xdebug or xhprof.
Use an opcode cache
PHP is an interpreted language, which means that every time a PHP page is requested, the server will interpret the PHP file and compile it into something the machine can understand (opcode). Opcode caches preserve this generated code so that it only needs to be interpreted on the first request. If you aren’t using an opcode cache, you’re missing out on a very easy performance gain. Pick your flavor: APC, Zend Optimizer, XCache, or eAccelerator. I highly recommend APC, written by the creator of PHP, Rasmus Lerdorf.
Many developers writing object-oriented applications create one PHP source file per class definition. One of the biggest annoyances in writing PHP is having to write a long list of needed includes at the beginning of each script (one for each class). PHP re-evaluates these require/include expressions each time a file containing them is loaded into the runtime. Using an autoloader lets you remove all of your require/include statements and benefit from a performance improvement. You can even cache the class map of your autoloader in APC for a small additional gain.
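The autoloading approach can be sketched with `spl_autoload_register`. This is a minimal PSR-4 style sketch, assuming a hypothetical `App\` namespace rooted at a `src/` directory (both names are illustrative):

```php
<?php
// Pure mapping from a class name to a file path. Keeping this as a
// separate function makes it easy to cache the resulting
// class => path map (e.g. in APC) for a small extra win.
function classToPath($class, $prefix = 'App\\', $baseDir = 'src/') {
    if (strncmp($prefix, $class, strlen($prefix)) !== 0) {
        return null; // not in our namespace; let other autoloaders try
    }
    $relative = substr($class, strlen($prefix));
    return $baseDir . str_replace('\\', '/', $relative) . '.php';
}

// Register once; PHP invokes this callback the first time an
// undefined class is used, replacing per-file require lists.
spl_autoload_register(function ($class) {
    $file = classToPath($class);
    if ($file !== null && is_file($file)) {
        require $file;
    }
});
```

With this in place, `new App\Model\User()` would trigger a single `require` of `src/Model/User.php` on first use and nothing on subsequent uses.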
Caching the autoloader class map with APC
While HTTP is stateless, most real-life web applications require a way to manage user data. In PHP, application state is managed via sessions. The default configuration for PHP is to persist session data to disk. This is extremely slow and not scalable beyond a single server. A better solution is to store your session data in a database and front it with an LRU (Least Recently Used) cache such as Memcached or Redis. Better still, limit your session data size (4096 bytes) and store all session data in a signed or encrypted cookie.
The default in PHP is to persist sessions to disk.
It is better to store sessions in a database.
Even better is to store sessions in a database with a shared cache in front.
The best solution is to limit session size and store all data in a signed or encrypted cookie.
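The signed-cookie approach can be sketched with an HMAC. The helper names and secret handling below are hypothetical, and `hash_equals()` requires PHP 5.6+ (on older versions, substitute a constant-time comparison):

```php
<?php
// Sketch: keep session data small and carry it client-side in a
// signed cookie so no server-side session storage is needed.
function signSessionData(array $data, $secret) {
    $payload = base64_encode(json_encode($data));
    $sig = hash_hmac('sha256', $payload, $secret);
    return $payload . '.' . $sig;
}

function verifySessionData($cookie, $secret) {
    $parts = explode('.', $cookie, 2);
    if (count($parts) !== 2) {
        return null; // malformed cookie
    }
    list($payload, $sig) = $parts;
    $expected = hash_hmac('sha256', $payload, $secret);
    if (!hash_equals($expected, $sig)) {
        return null; // tampered, or signed with a different key
    }
    return json_decode(base64_decode($payload), true);
}
```

Signing prevents tampering but the payload is still readable by the client; if the session carries anything sensitive, encrypt it as well.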
Leverage an in-memory data cache
Applications usually require data. Data is usually structured and organized in a database.
Depending on the data set and how it is accessed it can be expensive to query. An easy
solution is to cache the result of the ﬁrst query in a data cache like Memcached or Redis. If
the data changes, you invalidate the cache and make another SQL query to get the updated
result set from the database.
I highly recommend the Doctrine ORM for PHP which has built-in caching support for
Memcached or Redis.
There are many use cases for a distributed data cache from caching web service responses
and app conﬁgurations to entire rendered pages.
• Any data that is expensive to generate/query and long-lived should be cached
• Web Service Responses
• HTTP Responses
• Database Result Sets
• Configuration Data
The Guzzle HTTP client has built-in support for caching web service responses. The Doctrine ORM for PHP has built-in caching support for Memcached and Redis.
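The pattern behind all of these use cases is cache-aside: check the cache, fall back to the expensive query, then populate the cache. A minimal sketch follows; in production `$cache` would be a `\Memcached` or `\Redis` client, but here a tiny array-backed stub (a hypothetical `ArrayCache` class) stands in so the example is self-contained:

```php
<?php
// Array-backed stand-in exposing the Memcached-style get/set/delete
// surface (get() returns false on a miss).
class ArrayCache {
    private $data = array();
    public function get($key) {
        return isset($this->data[$key]) ? $this->data[$key] : false;
    }
    public function set($key, $value, $ttl = 0) {
        $this->data[$key] = $value;
    }
    public function delete($key) {
        unset($this->data[$key]);
    }
}

// Cache-aside: a hit skips the database entirely; a miss runs the
// expensive loader once and stores the result for later requests.
function fetchWithCache($cache, $key, $loader, $ttl = 300) {
    $value = $cache->get($key);
    if ($value !== false) {
        return $value; // cache hit: no database round trip
    }
    $value = $loader();           // expensive SQL query / web service call
    $cache->set($key, $value, $ttl);
    return $value;
}
```

On writes, call `$cache->delete($key)` to invalidate, so the next read repopulates the cache with fresh data, which is the invalidation flow described above.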
Do blocking work in background tasks via queues
Oftentimes web applications have to run tasks that can take a while to complete. In most cases there is no good reason to force the end user to wait for the job to finish. The solution is to queue blocking work to run in background jobs. Background jobs are jobs that are executed outside the main flow of your program, and usually handled by a queue or messaging system. There are a lot of great solutions that can help with running background jobs. The benefits come in terms of both end-user experience and scaling, by writing and processing long-running jobs from a queue. I am a big fan of Resque for PHP, a simple toolkit for running tasks from queues. There are a variety of tools that provide queuing or messaging systems that work well with PHP:
• Any process that is slow and not important for the HTTP response should be queued
• Sending notifications + posting to social networks
• Analytics + instrumentation
• Updating profiles and discovering friends from social accounts
• Consuming web services like the Twitter Streaming API
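The split between enqueueing and performing work can be sketched as follows. With Resque, the web request would just call `Resque::enqueue('email', 'SendWelcomeEmail', $args)` and return immediately, with a separate worker process performing the job; here a hypothetical in-memory queue stands in so the flow is self-contained:

```php
<?php
// Minimal in-memory stand-in for a job queue, illustrating the
// producer/worker split (class and job names are hypothetical).
class JobQueue {
    private $jobs = array();

    // Called from the web request: cheap, returns immediately.
    public function enqueue($class, array $args) {
        $this->jobs[] = array('class' => $class, 'args' => $args);
    }

    // A real worker is a separate long-running process polling the
    // queue; here we drain it synchronously for illustration.
    public function work() {
        $done = array();
        while ($job = array_shift($this->jobs)) {
            $done[] = call_user_func(array($job['class'], 'perform'), $job['args']);
        }
        return $done;
    }
}

class SendWelcomeEmail {
    public static function perform(array $args) {
        // The slow part (SMTP, templating) runs here, off the request path.
        return 'sent to ' . $args['email'];
    }
}
```

The HTTP response only pays for the `enqueue()` call; everything slow happens later in the worker.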
HTTP caching is one of the most misunderstood technologies on the Internet. Go read the
HTTP caching speciﬁcation. Don’t worry, I’ll wait. Seriously, go do it! They solved all of these
caching design problems a few decades ago. It boils down to expiration or invalidation and
when used properly can save your app servers a lot of load. Please read the excellent HTTP
caching guide from Mark Nottingham. I highly recommend using Varnish as a reverse proxy
cache to alleviate load on your app servers.
There are different kinds of HTTP caches that are useful for different kinds of things. I want
to talk about gateway caches -- or, "reverse proxy caches" -- and consider their effects on
modern, dynamic web application design.
Draw an imaginary vertical line, situated between Alice and Cache, from the very top of the
diagram to the very bottom. That line is your public, internet facing interface. In other words,
everything from Cache back is "your site" as far as Alice is concerned.
Alice is actually Alice's web browser, or perhaps some other kind of HTTP user-agent. There's
also Bob and Carol. Gateway caches are primarily interesting when you consider their effects
across multiple clients.
Cache is an HTTP gateway cache, like Varnish, Squid in reverse proxy mode, Django's cache framework, or my personal favorite: rack-cache. In theory, this could also be a CDN.
And that brings us to Backend, a dynamic web application built with only the most modern and sophisticated web framework. Interpreted language, convenient routing, an ORM, slick template language, and various other crap -- all adding up to amazing developer productivity.
Most people understand the expiration model well enough. You specify how long a response should be considered "fresh" by including either or both of the Cache-Control: max-age=N or Expires headers. Caches that understand expiration will not make the same request until the cached version reaches its expiration time and becomes "stale".
A gateway cache dramatically increases the beneﬁts of providing expiration information in
dynamically generated responses. To illustrate, let's suppose Alice requests a welcome page:
Since the cache has no previous knowledge of the welcome page, it forwards the request to
the backend. The backend generates the response, including a Cache-Control header that
indicates the response should be considered fresh for ten minutes. The cache then shoots the
response back to Alice while storing a copy for itself.
Thirty seconds later, Bob comes along and requests the same welcome page:
The cache recognizes the request, pulls up the stored response, sees that it's still fresh, and
sends the cached response back to Bob, ignoring the backend entirely.
Note that we've experienced no signiﬁcant bandwidth savings here -- the entire response was
delivered to both Alice and Bob. We see savings in CPU usage, database round trips, and the
various other resources required to generate the response at the backend.
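On the PHP side, opting into the expiration model is just a matter of emitting the right header before the body. A minimal sketch (the helper name is hypothetical; building the value as a string keeps the logic testable):

```php
<?php
// Build a Cache-Control header value telling gateway caches how long
// the response stays fresh.
function cacheControlHeader($maxAge, $public = true) {
    $scope = $public ? 'public' : 'private';
    return 'Cache-Control: ' . $scope . ', max-age=' . $maxAge;
}

// In the welcome-page controller you would send it before the body:
//   header(cacheControlHeader(600)); // fresh for ten minutes
```

With `max-age=600`, Bob's request thirty seconds later never reaches the backend at all.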
Expiration is ideal when you can get away with it. Unfortunately, there are many situations
where it doesn't make sense, and this is especially true for heavily dynamic web apps where
changes in resource state can occur frequently and unpredictably. The validation model is
designed to support these cases.
Again, we'll suppose Alice makes the initial request for the welcome page:
The Last-Modified and ETag header values are called "cache validators" because they can be
used by the cache on subsequent requests to validate the freshness of the stored response
without requiring the backend to generate or transmit the response body. You don't need
both validators -- either one will do, though both have pros and cons, the details of which are
outside the scope of this document.
So Bob comes along at some point after Alice and requests the welcome page:
The cache sees that it has a copy of the welcome page but can't be sure of its freshness so it
needs to pass the request to the backend. But, before doing so, the cache adds the If-Modified-Since and If-None-Match headers to the request, setting them to the original
response's Last-Modified and ETag values, respectively. These headers make the request
conditional. Once the backend receives the request, it generates the current cache validators,
checks them against the values provided in the request, and immediately shoots back a 304
response without generating the response body. The cache, having validated the
freshness of its copy, is now free to respond to Bob.
This requires a round-trip with the backend, but if the backend generates cache validators up
front and in an efficient manner, it can avoid generating the response body. This can be
extremely signiﬁcant. A backend that takes advantage of validation need not generate the
same response twice.
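A backend that supports validation can be sketched like this. The ETag here is derived from a hypothetical id/version pair; the point is that computing the validator is cheap compared to rendering the body:

```php
<?php
// Cheap validator: an ETag derived from a row id and a version/updated
// timestamp, so no body rendering is needed to compute it.
function etagFor($id, $version) {
    return '"' . md5($id . ':' . $version) . '"';
}

// True when the client's If-None-Match matches the current ETag,
// i.e. a 304 Not Modified may be sent with no body.
function notModified($etag, $ifNoneMatch) {
    return $ifNoneMatch !== null && $ifNoneMatch === $etag;
}

// Controller flow (sketch):
//   $etag = etagFor($pageId, $updatedAt);
//   $inm  = isset($_SERVER['HTTP_IF_NONE_MATCH']) ? $_SERVER['HTTP_IF_NONE_MATCH'] : null;
//   if (notModified($etag, $inm)) {
//       header('HTTP/1.1 304 Not Modified'); // no body generated
//       exit;
//   }
//   header('ETag: ' . $etag);
//   echo renderPage($pageId);                // full render only when stale
```

The round trip still happens, but the expensive `renderPage()` call is skipped whenever the validators match.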
The expiration and validation models form the basic foundation of HTTP caching. A response
may include expiration information, validation information, both, or neither. So far we've seen
what each looks like independently. It's also worth looking at how things work when they're combined.
Suppose, again, that Alice makes the initial request:
The backend speciﬁes that the response should be considered fresh for sixty seconds and
also includes the Last-Modified cache validator.
Bob comes along thirty seconds later. Since the response is still fresh, validation is not
required; he's served directly from cache:
But then Carol makes the same request, thirty seconds after Bob:
The cache relies on expiration if at all possible before falling back on validation. Note also
that the 304 Not Modified response includes updated expiration information, so the cache
knows that it has another sixty seconds before it needs to perform another validation request.
• Nginx Proxy Cache
• Apache Proxy Cache
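As an alternative to Varnish, nginx can act as the gateway cache. A minimal sketch, assuming arbitrary paths, zone name, and an app server on port 8080:

```nginx
# Hypothetical minimal nginx reverse-proxy cache in front of PHP app servers.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=1g;

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;  # your PHP app server
        proxy_cache appcache;
        # nginx honors the backend's Cache-Control/Expires headers by
        # default; this is only a fallback for responses without them.
        proxy_cache_valid 200 10m;
    }
}
```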
The basic mechanisms shown here form the conceptual foundation of caching in HTTP -- not
to mention the Cache architectural constraint as deﬁned by REST. There's more to it, of
course: a cache's behavior can be further constrained with additional Cache-Control directives,
and the Vary header narrows a response's cache suitability based on headers of subsequent
requests. For a more thorough look at HTTP caching, I suggest Mark Nottingham's excellent
Caching Tutorial for Web Authors and Webmasters. Paul James's HTTP Caching is also quite good and a bit shorter. And, of course, the relevant sections of RFC 2616 are highly recommended.
Use Varnish as a reverse
proxy cache to alleviate
load on your app servers.
Deep diving into the speciﬁcs of optimizing each framework is outside of the scope of this
post, but these principles apply to every framework:
Stay up-to-date with the latest stable version of your favorite framework
Disable features you are not using (I18N, Security, etc)
Enable caching features for view and result set caching
• Stay up-to-date with the latest stable version of your favorite framework
• Disable features you are not using (I18N, Security, etc.)
• Always use a data cache like Memcached or Redis
• Enable caching features for view and database result set caching
• Always use an HTTP cache like Varnish
Taking a large problem and making it into smaller problems
In short, sharding means splitting the dataset, but a good sharding function will make sure that related information is located on the same server (strong locality), since that drastically simplifies things. A bad sharding function (splitting by type) would create very weak locality, which will drastically impact the system performance down the road.
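A sharding function with strong locality can be sketched by keying on the user, so all of a user's rows land on the same server. The shard list below is a hypothetical example, and modulo is the simplest possible scheme:

```php
<?php
// Route every key belonging to the same user to the same shard, so
// profile + friends + activity queries all hit a single server.
function shardFor($userId, array $shards) {
    return $shards[$userId % count($shards)];
}
```

Note that plain modulo re-maps most keys whenever the shard count changes; consistent hashing is the usual mitigation when shards are added or removed often.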
Service Oriented Architecture
Companies of great scale move away from PHP or create their own variant:
HipHop 1.0 - HPHPc - Transformed subset of PHP code to C++ for performance
HipHop 2.0 - HHVM - Virtual Machine, Runtime, and JIT for PHP
Learn how to profile code for performance
Xdebug is a PHP extension for powerful debugging. It supports stack and function traces,
proﬁling information and memory allocation and script execution analysis. It allows
developers to easily proﬁle PHP code.
WebGrind is an Xdebug profiling web frontend in PHP5. It implements a subset of the features of KCachegrind, installs in seconds, and works on all platforms. For quick-and-dirty optimizations it does the job.
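To feed WebGrind, the Xdebug profiler has to write cachegrind files. A sketch of the relevant php.ini settings (Xdebug 2.x directive names; the output directory is arbitrary):

```ini
; Enable the Xdebug profiler on demand only; never leave it on in production.
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1        ; profile only when XDEBUG_PROFILE is sent
xdebug.profiler_output_dir = /tmp/xdebug  ; cachegrind.out.* files land here
```

With the trigger enabled, requesting any page with `?XDEBUG_PROFILE=1` produces a cachegrind file you can open in WebGrind or KCachegrind.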
XHprof is a function-level hierarchical proﬁler for PHP with a reporting and UI layer. XHProf is
capable of reporting function-level inclusive and exclusive wall times, memory usage, CPU
times and number of calls for each function. Additionally, it supports the ability to compare
two runs (hierarchical DIFF reports) or aggregate results from multiple runs.
AppDynamics is application performance management software designed to help dev and ops
troubleshoot problems in complex production apps.
Don’t forget to optimize the client side
PHP application performance is only part of the battle
Now that you have optimized the server-side, you can spend time improving the client side!
In modern web applications most of the end-user experience time is spent waiting on the
client side to render. Google has dedicated many resources to helping developers improve
client side performance.
In modern web applications most of the latency comes from the client side.
Google PageSpeed Insights – PageSpeed Insights analyzes the content of a web page, then
generates suggestions to make that page faster. Reducing page load times can reduce bounce
rates and increase conversion rates.
Google ngx_pagespeed – ngx_pagespeed speeds up your site and reduces page load time. This open-source nginx server module automatically applies web performance best practices without requiring that you modify your existing content or workflow.
Google mod_pagespeed – mod_pagespeed speeds up your site and reduces page load time. This open-source Apache HTTP server module automatically applies web performance best practices without requiring that you modify your existing content or workflow.