A walk-through of several JavaScript loading techniques, with a characteristics table for each, and at the end a decision tree to help you decide which technique to use.
Also, Chrome's silly preload logic!
13. Requirements: 1 load JS in a non-blocking way; 2 scripts execute in order; 3 couple external JS with inline JS; 4 rendering starts soon and is progressive; 5 DOM ready fires asap
21. Chrome’s silly preload logic (CSPL) If there is a non-DEFER, non-ASYNC parser-inserted script in <head>, Chrome (15) only preloads other parser-inserted scripts from <body>, not images!
26. Why CSPL is a problem: other objects start downloading late, and it's against developer intent: bottom of BODY means "do it last, other stuff comes first".
27. Solutions for CSPL: move the script to the top of <body> (keeps blocking Start Render), move it to the bottom of <body>, inline the code, add the DEFER attribute, add the ASYNC attribute, or use Script Insertion.
45. Script loaders like LABjs can be your best friend. Try it! (Image: http://www.flickr.com/photos/valeriebb/290711738/)
46. The decision tree:
Execute before Start Render?
- Yes → <2k gzipped?
  - Yes → Inline, in <head>
  - No → Normal Script Src, top of <body>
- No → Couple with inline script?
  - Yes → Preserve exec order?
    - Yes → Using jQuery?
      - Yes → Combine jquery.js & jquery-dependent.js (other script loaders, like Yepnope, may do an equally good job)
      - No → LABjs
    - No → Dynamic insertion
  - No → Execute right before DCL?
    - Yes → DEFER
    - No → Dynamic insertion
Hi! Yesterday, Mathias Bynens outlined how you can improve the run-time performance of JavaScript, and David Mandelin of Mozilla gave insight into the JavaScript engines in browsers and how you can make your code run faster. Earlier today, Andreas, Tony and Chris talked about how browser vendors contribute to better JavaScript performance. Now it's time to talk about *loading* JavaScript.
While this talk is about loading JavaScript, in the end it is about money. In the past few years, numerous case studies have shown that better performance results in higher conversion rates, bigger shopping carts and more repeat visits. Joshua Bixby's presentation yesterday made that very clear. A faster site will generate more money, and fast-loading JavaScript will contribute to that. And you know what, there is another important effect. If *you* get it right and load the JavaScript in a high-performance way ...
*you* are awesome. And that is a good thing.
Let’s start by looking at the problem with JavaScript in the context of performance.
JavaScript blocks. If the developer has not taken performance into account, most likely you’ll see that the external JS files and inline scripts are like big cows sitting in the middle of the road, blocking good things from happening.
And the more of that blocking JS you have, the worse it gets.
The problem is two-fold. The first problem has to do with the browser being single threaded: it cannot do document parsing, rendering and JS execution at the same time. It's one or the other. Parsing and rendering usually don't take that long each time they happen, but JS execution often causes noticeable slowdowns. I regularly see a JS execution task take hundreds of milliseconds, and it's no exception that a site is constructed in such a way that the browser has to process a long chain of JS execution tasks before it can render again, which results in the UI being frozen for a long period of time. Not good.

The second problem is that scripts have a negative effect on parallel downloading. You probably know about IE8: it was one of the first browsers to download scripts in parallel, but it cannot download scripts and images in parallel if the script comes first and the images next. And just recently I became aware of behavior in Chrome that IMO is silly. It has to do with Chrome's preload scanner. I'll dive into this in a minute, right after we take a quick look at the blocking effect of JS.
This is an IE8, empty cache waterfall chart for the homepage of radio538.nl. Radio538 is a popular commercial radio station and the site was recently redesigned. Note: the x-axis is in seconds and some HTTP requests are left out of the chart.
- sso.js: 100 KB (no gzip!). It's jQuery with a few plugins.
- 538min_v9.js: 270 KB (no gzip!), minified, 1290 LoC, and at the bottom of the file: $(document).ready(function(){ /* lots of function calls here! */ });
CPU utilization is very high for 3 seconds. They have a CDN, but load innershiv.js from the personal site of the guy who created it. Shame on them! The total page weighs 13 MB, because of 2 uploaded images of 5 MB each ... ha!
On this page, the impact of JS on performance and user experience is exceptionally large, but from my experience I'd say that JS is one of, or *the*, biggest performance problem on a lot of websites. On most sites I see lots of JS files in the <head> section of the document, loaded in a blocking way, and inline scripts all over the place.
The HTTP Archive shows that the trend is for web pages to contain more and more JS: 13 JS files per page, totaling more than 150 KB over the wire.
Before I walk you through several JS loading techniques that can help you make your pages load faster, let’s look at what I believe are the requirements for loading JS, in a high performance and fully functional way. Techniques are just techniques and your requirements will guide you in choosing the right one.
This is pretty well known. Steve Souders wrote a chapter on this topic in his book EFWS in early 2009, about 2.5 years ago. And still, to this day, most scripts are loaded in a blocking way. Apple, TechCrunch ... big sites. Why is that? Developers don't know about the performance issues? They do know, but don't care? Care, but don't have the knowledge or skills to implement it? Can and want to, but have no time available? In my consultancy job I come across all flavours. Sometimes it really is a big hassle to rework the code base, and the CMS/backend can really make this a challenge. But it is absolutely key to get this job done. I believe blocking JS is the biggest performance problem on web pages today.

What's there to say? If script B is dependent on A, you don't want B to execute before A has finished. I'll show you something crazy later ...

Some of you may not have this requirement, but I guess most do. Who uses jQuery? Who uses the ready() method to execute code once the DOM is ready? Yeah, this seems to be a common practice and it makes a lot of sense, because you want the user to be able to use the functionality on the page asap, well, at least the important above-the-fold stuff. And for that ...

... it's explicitly on my requirements list. I'd almost say: forget about time to onload, DCL is what you should care about. The sooner it fires, the sooner you can safely execute the JS code that brings the page to life and enables the user to meaningfully interact with it. This is really important. Imagine you have a page with a big search box above the fold and it renders quickly, but some JS has to be executed for the user to be able to really use it. And some big JS file or slow, blocking third party JS file is blocking the UI thread, so your search-box initialization code, which is tied to domready, cannot execute. The user may use the search box and have a bad experience (e.g. type, hit enter, nothing happens).
If you hook important code execution into the dom ready event, you gotta make sure that event fires asap! Don't let people stare at a blank screen. People are impatient on the Web and want to get the job done quickly. Start Render should happen within 1 second at most, and rendering should continue progressively ... Ah, I see I actually left one out: cross-origin. Domain sharding is common practice, and needed if you are serving assets from a CDN ... So, basically ...
Risk of rendering problems and slowdowns
Happy visitors will return and buy again, and again, and again.
You want your JS files to be ‘good cows’ that don’t get in the way.
I’m going to walk you through 6 techniques, with the objective of giving you the insight to help make the right choice. For each technique, you’ll understand the blocking or non-blocking behavior, when script execution happens, how it impacts DCL, onload and more.
Works in all browsers since 1964. Files execute immediately after loading. To this day, this is the most used loading technique. It's probably still used so often because ... it's what developers have always used and it's a habit. But also because it has some good qualities: execution order is preserved if you load multiple JS files this way, and you can have inline JS execute in the order of the flow. No race conditions. No risk. "It's like the condom of JS loading." But the problems are many ...
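For reference, a minimal sketch of the Normal Script Src technique with coupled inline JS (file names and the init function are made up for illustration):

```html
<!-- Normal Script Src: each file blocks parsing and rendering
     until it has been downloaded and executed -->
<script src="jquery.js"></script>
<script src="plugins.js"></script>
<script>
  // Inline JS coupled with the external files: guaranteed to run
  // after jquery.js and plugins.js have executed, in flow order.
  $(document).ready(function () {
    initPage(); // hypothetical page-init function
  });
</script>
```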
Scripts loaded with the Normal Script Src technique:
- Block document parsing and rendering
- Block DCL
- Block onload
The first two are big problems. As I already mentioned, in some popular browsers these scripts also still have a blocking effect on parallel downloading. It's time to show CSPL!
In a nutshell: if there is a parser-inserted script in the HEAD that does not have the DEFER or ASYNC attribute, Chrome will preload other parser-inserted scripts in the BODY, but not images in the BODY! Let's take a closer look.
Simple test page. HTML5 doctype. In HEAD: 1 external stylesheet loaded with a LINK tag + 2 scripts loaded with normal, blocking script tags, with an artificial 2 second delay. In BODY: 4 images; below the images, 1 script loaded with a normal, blocking script tag, with an artificial 2 second delay + a third party script, also with a normal script tag. During further investigation and testing I found out:
- The preloaded scripts are not executed after preloading.
- The downloading of the images is initiated by the parser, per the order in the flow, and starts after the HEAD scripts finish downloading.
This behavior surprised me, because the Browserscope table tells me Chrome can download scripts and images in parallel.
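As I understand it, the test page looks roughly like this (file names and the delay query parameter are made up; the structure follows the description above):

```html
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="style.css">
  <!-- parser-inserted, no DEFER/ASYNC: this is what triggers
       Chrome's preload behavior described above -->
  <script src="head-script-1.js?delay=2"></script>
  <script src="head-script-2.js?delay=2"></script>
</head>
<body>
  <img src="1.jpg"><img src="2.jpg"><img src="3.jpg"><img src="4.jpg">
  <!-- Chrome preloads these scripts, but not the images above -->
  <script src="body-script.js?delay=2"></script>
  <script src="http://thirdparty.example.com/widget.js"></script>
</body>
</html>
```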
The test Browserscope does for "|| Script Image" has the script in the BODY, not the HEAD, so it doesn't catch this. I'd like to see this added to Browserscope, or even better: fixed in Chrome.
You could say preloading logic is good for loading JavaScript fast ;-) But you have to look at the bigger picture and not just at the JS files in the waterfall charts.
Steve wrote about this a few years ago and named it the Script DOM Element technique. Nowadays people call it dynamic insertion or script insertion. It’s great. Works in all browsers and it’s very simple. The external script file is appended to the DOM and starts loading. While it is loading, the browser can continue to do whatever it likes. This technique is great for loading stand-alone scripts that may execute at any time.
Script Insertion has several benefits over the Normal Script Src technique. 1) The script does not block parallel downloads. This is a clear win in any case. 2) The script does not block document parsing and page rendering. In most cases this is good, but it's also a reason you sometimes can't use it, for example if your JS code adds CSS classes to the HTML element that are needed before rendering starts. 3) It does not block DCL, and in IE9 onload is not even blocked. The only downside is that you can't preserve exec order for two dynamically loaded external JS files. Well, at least not if you want them to load in parallel; you can preserve exec order if you load them sequentially. Notice the last column is green? Yes, you can have some inline JS execute directly after the external file finishes loading and executing. Cool! Easy example: load the GetSatisfaction code that renders that floating feedback thingy on the side of the page.
That would look like this. And maybe your callback code must not execute before domready, but it's OK if it executes some time after DCL. You can easily do this with jQuery: inside the callback function, call the ready() method.
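A minimal sketch of Script Insertion with a coupled callback (file and function names are made up; note that old IE needs onreadystatechange instead of onload):

```html
<script>
  // Script Insertion: the file downloads without blocking parsing,
  // rendering or other downloads.
  var js = document.createElement('script');
  js.src = 'feedback-widget.js'; // hypothetical stand-alone script
  js.onload = function () {
    // Coupled "inline" JS: runs right after the external file
    // executes. If it must also wait for the DOM, wrap it in
    // jQuery's ready(), as described above.
    $(document).ready(function () {
      initFeedbackWidget(); // hypothetical init function
    });
  };
  document.getElementsByTagName('head')[0].appendChild(js);
</script>
```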
DEFER has been around for ages and works in all browsers. The key is that the browser will download the file asynchronously: parsing and rendering can continue, no blocking. Exec order is preserved between scripts that were loaded with DEFER. Cool. You can find information online about defer being applied to inline scripts, but this is not in the HTML5 spec and does not work in modern browsers, so don't go down that path.
A deferred file will execute right before DCL fires, which can be handy. But it also delays DCL in IE and Chrome, which can be a downside.
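The technique itself is just an attribute (file name made up for illustration):

```html
<!-- DEFER: downloads in parallel with parsing, does not block
     rendering, executes right before DCL fires; exec order is
     preserved between deferred scripts -->
<script defer src="combined.js"></script>
```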
Two scripts: the first one writes to innerHTML, and IE8 and IE9 will then pause executing that script, look for other deferred scripts and, if found, execute those first. WTF? This can easily lead to reference errors, e.g. with jQuery and a jQuery-dependent script. This is really, really ugly. Solution: don't use defer, or combine the jQuery files. Test it yourself in IE8 or IE9: http://test.getify.com/test-ie-script-defer/index-2.html
When to use DEFER? If using jQuery: jQuery-dependent files must be combined with the lib, and no coupling with inline JS, not even $(document).ready(function() {}). Or: a stand-alone script whose execution you want to delay until right before DCL (which can't be done easily with Dynamic Insertion).
If not combined, you will get a ReferenceError in IE and it all fails. BTW, you can't load the combined.js with DEFER and couple it with inline dom ready activation code ... so defer is probably not the best choice if you use jQuery and plugins a lot.
It's like DEFER, but with important differences: it does not block DCL, it executes as soon as it finishes loading, and exec order is not preserved. Why would you use this instead of Dynamic Insertion? I have no idea. With no support in IE8 and IE9, it's useless.
Using the async attribute on parser-inserted scripts is not widely supported, e.g. not in IE8 and IE9. Do what GA, Facebook and all the other third party code providers that understand performance do: use Dynamic Insertion instead, plus the async attribute for old Firefox and Opera, so those browsers don't preserve exec order and execute asap. http://www.nczonline.net/blog/2010/08/10/what-is-a-non-blocking-script/
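This is essentially the pattern Google Analytics uses: a script-inserted script with async set to true, so older Firefox and Opera versions execute it asap instead of preserving insertion order:

```html
<script>
  var ga = document.createElement('script');
  ga.src = 'http://www.google-analytics.com/ga.js';
  ga.async = true; // hint for old FF/Opera: execute asap, no ordering
  // Insert before the first script tag on the page, which is
  // guaranteed to exist (this very snippet is one).
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(ga, s);
</script>
```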
Only works for script-inserted scripts, not for parser-inserted scripts. Can be confusing! "Async loading with exec order preserved." Cool! Bad: you can't couple it with inline scripts. And it's not supported in IE9, so we can't rely on it for many years.
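A sketch of the async=false technique on script-inserted scripts (file names are made up; this only orders execution in browsers that support the feature):

```html
<script>
  // Script-inserted scripts download in parallel; setting
  // async = false requests execution in insertion order.
  var files = ['jquery.js', 'jquery-plugin.js', 'app.js'];
  for (var i = 0; i < files.length; i++) {
    var s = document.createElement('script');
    s.src = files[i];
    s.async = false; // preserve exec order (where supported)
    document.getElementsByTagName('head')[0].appendChild(s);
  }
</script>
```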
http://labjs.com/ LABjs is very small. You can even load it async: http://gist.github.com/603980
You see? It’s all good! Multiple JS files + preserve execution order + couple with inline JS + using jQuery? Try LABjs. I have used it often and it delivers for me. The creator, Kyle Simpson, is very smart and extremely responsive.
I have personal experience using LABjs on several sites and am very pleased with it. It's stable, works in all browsers and, to quote the creator of LABjs, Kyle Simpson: "using LABjs to load 3 JS files of equal file size in parallel is often faster than loading a single JS file that is the combined version of those 3". Something to remember.
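A minimal sketch of a LABjs chain (file and function names are made up): files load in parallel, wait() enforces execution order, and a wait() callback gives you the coupled-inline-JS behavior:

```html
<script src="LAB.min.js"></script>
<script>
  $LAB
    .script('jquery.js').wait()   // jquery.js executes before the next file
    .script('jquery-plugin.js')
    .wait(function () {
      // Runs after both files have executed; tie page init to domready.
      $(document).ready(function () {
        initPage(); // hypothetical init function
      });
    });
</script>
```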
On my new site CDN Planet I wanted to use Twitter Anywhere to generate a feedback box that visitors can use to easily tweet to us. The Twitter Anywhere developer documentation says: "As a best practice always place the anywhere.js file as close to the top of the page as possible." They give two arguments for this: one has to do with double counting in GA, and the other I didn't find convincing. Luckily for me, I know the manager of the platform team at Twitter, Akash. We exchanged emails and he provided me with the solution, in code, to load the JS file async and still have the Tweet Box render and work just fine. The code is too big to fit on this screen, even minified, so view the HTML source of www.cdnplanet.com to get it and beautify it with jsbeautifier.org. This story brings me to the closing of my talk, which is about people, not code.
I urge you to become active in the WPO community. Really, the best advice I can give you is: interact with other people interested and experienced in WPO. I learned a lot from reading books and blog posts, but by far the most from interacting with people like Steve Souders, Pat Meenan, Paul Irish, Stoyan, Nicholas Zakas and Kyle Simpson. I strongly recommend you build relationships with the heroes and members of the WPO community. Test the code they publish, tweak it, post your findings in the comments. Contribute and be active! It's great that you are here and I hope you continue to 'be there'.
Let me point out a few things that can get you on your way to becoming the new WPO King.
- The official Twitter hashtag is #webperf.
- There is a monthly chat on Twitter, on a Monday in the (late) afternoon or early evening for Europeans, centered around a specific topic like PageSpeed or metrics, with a new moderator every time. Really nice, and you can read all the past chat transcripts on the site.
- Perfplanet is a webperf blog aggregator built by Stoyan. It offers a single place online where you can view web performance related blog posts from dozens of people. Of course you can also subscribe to Perfplanet's RSS feed. And once a year, Stoyan gets 24 web performance 'sultans', as he likes to call them, to each write a nice blog post and publish it directly on Perfplanet: a new blog post every day from December 1 to December 24. High quality stuff. Don't miss out on that!
- In many cities in the US and a couple in Europe there are regular, face-to-face Webperf Meetups. There are meetups happening in Cologne, London, Vienna, and a recent addition is Israel. They are friendly, informal get-togethers ... someone presenting, beer, pizza, good conversations. And yes, shame on me: there is no Meetup in The Netherlands.