Rated one of the top 10 sessions at Velocity 2010 by attendees.
By now, we’ve all internalized Steve Souders’ rules for optimizing web performance, but the question is: do you need to spend 6 months and raise an army of top developers to make your sites fast by default?
In this workshop, Joshua Bixby and Hooman Beheshti of Strangeloop subject an unsuspecting website – the Velocity home page – to real-time optimization, following Google and Yahoo’s rules for high-performance websites.
Over the course of the workshop, witness the entire optimization life cycle:
* Using various measurement tools to benchmark current performance, focusing on load time, start render time, and round trips (see the sketch after this list).
* Implementing A/B segmentation to measure key business metrics like conversion, bounce rate and page views/visit.
* Iterating through acceleration best practices.
* Analyzing results from different geographical locations using different browsers.
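The abstract doesn't name the measurement tools the presenters use, but as a rough illustration of the first bullet, here is a minimal TypeScript sketch that collects load time, an approximation of start render, and request count from a browser's Navigation Timing and Paint Timing APIs. These APIs post-date this 2010 session, and the function name `reportPageTimings` is made up for the example; treat it as a sketch of the kind of measurement described, not the workshop's actual tooling.

```typescript
// Illustrative sketch: report basic page-load metrics from the browser.
function reportPageTimings(): void {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return;

  const loadTimeMs = nav.loadEventEnd - nav.startTime; // full page load
  const ttfbMs = nav.responseStart - nav.startTime;    // time to first byte

  // "Start render" approximated by first paint / first contentful paint where supported.
  const paints = performance.getEntriesByType("paint");
  const firstPaint = paints.find(p => p.name === "first-contentful-paint") ?? paints[0];

  console.log({
    loadTimeMs,
    ttfbMs,
    startRenderMs: firstPaint ? firstPaint.startTime : undefined,
    requestCount: performance.getEntriesByType("resource").length + 1, // resources + the HTML itself
  });
}

window.addEventListener("load", () => {
  // Wait a tick so loadEventEnd is populated before reading it.
  setTimeout(reportPageTimings, 0);
});
```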
And guess what: they did. This is Zona's formula for patience, the basis for the "eight-second rule." Unfortunately, things like tenacity, importance, and natural patience aren't concrete enough for the no-nonsense folks who run web applications.
One example of this is performance experimentation that Google’s done. Google’s a perfect lab. Not only do they have a lot of traffic, they also have computing resources to do back-end analysis of large data sets. Plus, they’re not afraid of experimentation – in fact, they insist on it. So they tried different levels of performance and watched what happened to visitors.
The results, which they presented at Velocity in May, were fascinating: delay had a direct impact on the number of searches a user did each day, and to make matters worse, the numbers often didn't recover even after the delay was removed. A 0.7% drop may not sound significant, but for Google it represents a tremendous amount of revenue.
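Google hasn't published the plumbing behind these experiments, but the general shape is easy to sketch: assign each visitor to a bucket, add a fixed artificial delay for the treatment group, and compare behavior between buckets. A minimal Node/Express sketch under those assumptions follows; the cookie name, delay value, and route are all hypothetical, and Express itself is an assumed dependency, not something the experimenters are known to have used.

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();
const EXPERIMENT_DELAY_MS = 400; // hypothetical injected delay for the treatment group

// Assign each visitor to "control" or "delayed" once, via a cookie,
// then slow down responses only for the delayed bucket.
app.use((req: Request, res: Response, next: NextFunction) => {
  let bucket = req.headers.cookie?.match(/perf_bucket=(\w+)/)?.[1];
  if (!bucket) {
    bucket = Math.random() < 0.5 ? "control" : "delayed";
    res.setHeader("Set-Cookie", `perf_bucket=${bucket}; Path=/`);
  }
  if (bucket === "delayed") {
    setTimeout(next, EXPERIMENT_DELAY_MS);
  } else {
    next();
  }
});

app.get("/search", (_req, res) => {
  // In a real experiment you would also log the bucket alongside searches, clicks, etc.
  res.send("results");
});

app.listen(3000);
```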
Microsoft’s Bing site is a good lab, too. They looked at key metrics, or KPIs, of their search site.
They showed that as performance got worse, all the key metrics got worse, too: not just the number of searches, but also revenue (earned when someone clicks) and how often users refined their searches.
Shopzilla overhauled their entire site, dramatically reducing page load time, hardware requirements, and downtime.
They also saw a significant increase in revenue.
Website owners are sending out increasingly huge web pages through a pipeline whose capacity has not grown in the same proportion.

Web page objects, then and now:
* 1995: The average web page contained just 2.3 objects. That means just 2.3 calls to whatever data centers were serving the site.
* Today: The average web page contains a whopping 75.25 objects – everything from CSS to images to JavaScript. That means 75.25 server round trips are needed to pull all the page's resources to the user's browser. The result: pages that load slowly and inconsistently.

Page size, then and now:
* 1995: The average page size was a lean 14.1k.
* Today: The average page size is a bloated 498k.
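To make the round-trip arithmetic concrete, here is a back-of-the-envelope sketch. The round-trip time, parallel connection count, and bandwidth are assumptions chosen purely for illustration; only the object counts and page sizes come from the figures above.

```typescript
// Back-of-the-envelope estimate: how object count and page weight translate
// into load time, given an assumed RTT, bandwidth, and number of parallel
// connections. All inputs here are illustrative assumptions.
function estimateLoadTimeMs(
  objects: number,
  pageBytes: number,
  rttMs: number,
  parallelConnections: number,
  bandwidthKBps: number,
): number {
  // Each "wave" of parallel requests costs roughly one round trip.
  const requestWaves = Math.ceil(objects / parallelConnections);
  const latencyCostMs = requestWaves * rttMs;
  // Transfer time for the total page weight at the assumed bandwidth.
  const transferCostMs = (pageBytes / 1024 / bandwidthKBps) * 1000;
  return latencyCostMs + transferCostMs;
}

// 1995-style page vs. today's page, both with an assumed 100 ms RTT,
// 6 parallel connections, and 500 KB/s of bandwidth.
console.log(estimateLoadTimeMs(2.3, 14.1 * 1024, 100, 6, 500));  // ~130 ms
console.log(estimateLoadTimeMs(75.25, 498 * 1024, 100, 6, 500)); // ~2,300 ms
```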