Splitting the Initial Payload
This post is based on a chapter from Even Faster Web Sites, the follow-up to High Performance Web Sites. Posts in this series include: chapters and contributing authors, Splitting the Initial Payload, Loading Scripts Without Blocking, Coupling Asynchronous Scripts, Positioning Inline Scripts, Sharding Dominant Domains, Flushing the Document Early, Using Iframes Sparingly, and Simplifying CSS Selectors.
The growing adoption of Ajax and DHTML means today’s web pages have more JavaScript than ever before. The average top ten U.S. web site[1] contains 252K of JavaScript. JavaScript slows pages down. While the browser is downloading or executing JavaScript, it won’t start any other downloads in parallel. Also, anything below an external script is not rendered until the script is completely downloaded and executed. Even when external scripts are cached, the execution time can still slow down the user experience and thwart progressive rendering.
Every line of JavaScript code matters to a fast-loading page. And yet, as shown in Table 1, the average top ten U.S. web site only executes 25% of its JavaScript functionality before the onload event.[2] Why track this relative to the onload event? Because any code that isn’t needed until after the onload event (for dropdown menus, Ajax requests, etc.) could be downloaded later, allowing the initial page to render more quickly. (A rough sketch of that idea follows Table 1.)
Table 1. Percentage of JavaScript functions executed before onload
Web Site | % of Functions Executed | JavaScript Size (uncompressed)
---|---|---
http://www.aol.com/ | 29% | 115K |
http://www.ebay.com/ | 44% | 183K |
http://www.facebook.com/ | 9% | 1088K |
http://www.google.com/search?q=flowers | 44% | 15K |
http://search.live.com/results.aspx?q=flowers | 25% | 17K |
http://www.msn.com/ | 31% | 131K |
http://www.myspace.com/ | 13% | 297K |
http://en.wikipedia.org/wiki/Flowers | 21% | 114K |
http://www.yahoo.com/ | 12% | 321K |
http://www.youtube.com/ | 16% | 240K |
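As a rough sketch of the idea described above (my illustration, not code from any of the sites in Table 1; the file name menu-extras.js is hypothetical), functionality that isn’t needed for the initial render can be downloaded with a dynamically created script element once the onload event fires:

function downloadLater(url) {
  // Create a script element and append it to the head, so the file is
  // fetched without blocking rendering of the rest of the page.
  var script = document.createElement("script");
  script.src = url;
  document.getElementsByTagName("head")[0].appendChild(script);
}

// After onload, fetch the JavaScript that only supports post-load
// functionality (dropdown menus, Ajax handlers, etc.).
window.onload = function() {
  downloadLater("menu-extras.js");
};

A real page would combine this with any existing onload handler rather than overwriting it.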
The task of finding where to split a large set of JavaScript code is not trivial. Doloto, a project from Microsoft Research, attempts to automate this work. Doloto is not publicly available, but the paper provides a good description of their system. (You can hear the creators talk about Doloto at the upcoming Velocity conference.) The approach taken by Doloto uses stub functions that download the additional JavaScript on demand. This means users might have to wait the first time they trigger an action that requires the additional functionality. Downloading the additional JavaScript immediately after the page has rendered might result in an even faster page.
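To make the stub-function approach concrete, here is a minimal hand-written sketch (not Doloto’s actual output; showMenu and menu-full.js are hypothetical names, and older-IE quirks and error handling are omitted). The stub downloads the full implementation on first use and re-invokes the call once the real function has replaced it:

// Stub shipped in the initial payload. menu-full.js is expected to
// reassign showMenu with the full implementation.
var showMenu = function() {
  var args = arguments;
  var script = document.createElement("script");
  script.src = "menu-full.js";
  script.onload = function() {
    // menu-full.js has redefined showMenu by now, so this calls the real one.
    showMenu.apply(null, args);
  };
  document.getElementsByTagName("head")[0].appendChild(script);
};

The first call still pays the download delay described in the preceding paragraph.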
Mike Griffiths | 15-May-08 at 5:46 am
The AspectJS library is designed to tackle this issue (among others), as it allows the tactical downloading of additional JavaScript on demand. Take a look at
http://www.dodeca.co.uk/aspectjs/
I have not yet tried it, but it looks promising.
James Burke | 15-May-08 at 9:08 am
The Dojo Toolkit also allows for loading code on demand via dojo.require():
function onSomeUserAction() {
  dojo.require("my.module");
  dojo.addOnLoad(function() {
    // Do something with my.module once it has loaded
  });
}
The process is not as automated as something like Doloto, but it is a proven technique used in sites today, like webmail.aol.com.
Alek Traunic | 15-May-08 at 9:18 am
As much as this horse has been flogged over the years, it just will not die:
http://use.perl.org/~schwern/journal/24112
http://tech.groups.yahoo.com/group/ydn-javascript/message/10686
http://groups.google.com/group/jquery-en/msg/0d6d2e44611025db
http://ajaxpatterns.org/On-Demand_Javascript
etc. etc….
Frank Thuerigen | 17-May-08 at 3:41 am
Also, there is a whole framework based on the idea of loading everything (HTML, CSS, and JS) on demand to display a one-page Ajax application:
http://www.two-birds.de
Nicholas C. Zakas | 18-May-08 at 4:36 pm
Having written complex pages, I can definitely relate to this. The issue is typically that pages are including libraries, sometimes large libraries, that can’t easily be separated out into smaller parts. Even libraries such as YUI can only be divided down so far. While it would be nice to load just what you need to execute initially, in reality, I’m not sure that’s actually feasible.
Nagaraj Hubli | 20-May-08 at 11:18 am
This is really helpful to me, and btw I work for the company that is #1 in the list above :)
Jan | 22-May-08 at 8:46 am
Hi Steve,
Sounds intriguing. Could you post how you calculated the percentages above? Specifically, did you calculate this by hand or do you have an automated tool?
This metric could be an interesting addition to YSlow.
Thanks
Michael Julson | 09-Jun-08 at 7:53 pm
Hi Steve,
Your book and continued work on web performance has been great and I really appreciate it.
I’m interested in profiling the top 100 or so ecommerce sites, and I wanted to find out if you had come up with a way of recording the data out of YSlow/Firebug for analysis. Is there a way to record it out to Excel or some other format for later review?
Second, in your Web 2.0 talk, you mentioned the issue with inline scripts after a CSS file. Are you planning on adding these items to YSlow? Or if you can’t work on YSlow any longer, are you planning on making something similar that would check for these rules?
Thanks again for the great work for the community.
Steve Souders | 09-Jun-08 at 9:27 pm
Great feedback from everyone. Thanks. Hi, Nicholas!
Lazy loading is available in many instantiations. This post is less about lazy loading, and more about the prevalence of sites downloading more JS than used initially and the need for tools to help identify the split. As Nicholas says, the code dependencies might be too complex to separate. But some libraries, like YUI, are already modular. Also, an approach like Doloto that substitutes stub functions avoids undefined symbol errors, and simply downloads the full implementation later.
This rule, and others, would be great additions to YSlow. I’d love to work with the team at Yahoo! to make that happen. As for a framework, something that used a headless Firefox with xvfb and jssh would probably be the answer. If I get to it first I’ll post about it and make it available. If someone else has experience and can help with this please contact me.
Hakon Damm | 23-Jun-08 at 8:57 pm
Why are we sending all JavaScript down in one go? · Cuzillion video and the High Performance Web Sites book.
Kirk Cerny | 27-Sep-09 at 2:27 pm
This seems to contradict the YSlow rule where you put all of your JavaScript into 1 file.
Steve Souders | 27-Sep-09 at 7:49 pm
@Kirk: In this case, the extra HTTP request happens after the page has loaded, so it’s not a conflict with Rule 1 from High Performance Web Sites.