ControlJS part 2: delayed execution
This is the second of three blog posts about ControlJS – a JavaScript module for making scripts load faster. The three blog posts describe how ControlJS is used for async loading, delayed execution, and overriding document.write.
The goal of ControlJS is to give developers more control over how JavaScript is loaded. The key insight is to recognize that “loading” has two phases: download (fetching the bytes) and execution (including parsing). In ControlJS part 1 I focused on the download phase using ControlJS to download scripts asynchronously. In this post I focus on the benefits ControlJS brings to the execution phase of script loading.
The issue with execution
The issue with the execution phase is that while the browser is parsing and executing JavaScript it can’t do much else. From a performance perspective this means the browser UI is unresponsive, page rendering stops, and the browser won’t start any new downloads.
The execution phase can take more time than you might think. I don’t have hard numbers on how much time is spent in this phase (although it wouldn’t be that hard to collect this data with dynaTrace Ajax Edition or Speed Tracer). If you do anything computationally intensive with a lot of DOM interaction the blocking effects can be noticeable. If you have a large amount of JavaScript just parsing the code can take a significant amount of time.
If all the JavaScript was used immediately to render the page we might have to throw up our hands and endure these delays, but it turns out that a lot of the JavaScript downloaded isn’t used right away. Across the Alexa US Top 10 only 29% of the downloaded JavaScript is called before the window load event. (I surveyed this using Page Speed’s “defer JavaScript” feature.) The other 71% of the code is parsed and executed even though it’s not used for rendering the initial page. For these pages an average of 229 kB of JavaScript is downloaded. That 229 kB is compressed – the uncompressed size is upwards of 500 kB. That’s a lot of JavaScript. Rather than parse and execute 71% of the JavaScript in the middle of page loading, it would be better to avoid that pain until after the page is done rendering. But how can a developer do that?
Loading feature code
To ease our discussion let’s call that 29% of code used to render the page immediate-code and we’ll call the other 71% feature-code. The feature-code is typically for DHTML and Ajax features such as drop down menus, popup dialogs, and friend requests where the code is only executed if the user exercises the feature. (That’s why it’s not showing up in the list of functions called before window onload.)
Let’s assume you’ve successfully split your JavaScript into immediate-code and feature-code. The immediate-code scripts are loaded as part of the initial page loading process using ControlJS’s async loading capabilities. The additional scripts that contain the feature-code could also be loaded this way during the initial page load process, but then the browser is going to lock up parsing and executing code that’s not immediately needed. So we don’t want to load the feature-code during the initial page loading process.
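(For reference, an immediate-code script loaded asynchronously with ControlJS uses the markup from part 1; the file name here is just a placeholder:)

<script type="text/cjs" data-cjssrc="immediate.js"></script>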
Another approach would be to load the feature-code after the initial page is fully rendered. The scripts could be loaded in the background as part of an onload handler. But even though the feature-code scripts are downloaded in the background, the browser still locks up when it parses and executes them. If the user tries to interact with the page the UI is unresponsive. This unnecessary delay is painful enough that the Gmail mobile team went out of their way to avoid it. To make matters worse, the user has no idea why this is happening since they didn’t do anything to cause the lock up. (We kicked it off “in the background”.)
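For illustration, that onload-handler approach looks roughly like this (a plain dynamic script insert, not ControlJS; the file name is hypothetical). The download happens in the background, but the browser still locks up to parse and execute the file once it arrives:

window.onload = function() {
    // fetch the feature-code after the page has rendered
    var script = document.createElement("script");
    script.src = "feature.js"; // hypothetical feature-code bundle
    // the UI still freezes while this file is parsed and executed
    document.getElementsByTagName("head")[0].appendChild(script);
};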
The solution is to parse and execute this feature-code when the user needs it. For example, when the user first clicks on the drop down menu is when we parse and execute menu.js. If the user never uses the drop down menu, then we avoid the cost of parsing and executing that code altogether. But when the user clicks on that menu we don’t want her to wait for the bytes to be downloaded – that would take too long especially on mobile. The best solution is to download the script in the background but delay execution until later when the code is actually needed.
Download now, Execute later
After that lengthy setup we’re ready to look at the delayed execution capabilities of ControlJS. I created an example that has a drop down menu containing the links to the examples.
In order to illustrate the pain from lengthy script execution I added a 2 second delay to the menu code (a while loop that doesn’t relinquish until 2 seconds have passed). Menu withOUT ControlJS is the baseline case. The page takes a long time to render because it’s blocked while the scripts are downloaded and also during the 2 seconds of script execution.
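The artificial delay is just a busy-wait, something along these lines (a sketch, not the actual code from the example):

// spin for 2 seconds to simulate expensive parse & execute work
var start = Number(new Date());
while (Number(new Date()) - start < 2000) {
    // do nothing - the browser UI is blocked the whole time
}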
On the other hand, Menu WITH ControlJS renders much more quickly. The scripts are downloaded asynchronously. Since these scripts are for an optional feature we want to avoid executing them until later. This is achieved by using the “data-cjsexec=false” attribute, in addition to the other ControlJS modifications to the SCRIPT’s TYPE and SRC attributes:
<script data-cjsexec=false type="text/cjs" data-cjssrc="jquery.min.js"></script>
<script data-cjsexec=false type="text/cjs" data-cjssrc="fg.menu.js"></script>
The “data-cjsexec=false” setting means that the scripts are downloaded and stored in the cache, but they’re not executed. The execution is triggered later if and when the user exercises the feature. In this case, clicking on the Examples button is the trigger:
examplesbtn.onclick = function() {
    CJS.execScript("jquery.min.js");
    CJS.execScript("fg.menu.js", createExamplesMenu);
};
CJS.execScript() creates a SCRIPT element with the specified SRC and inserts it into the DOM. The menu creation function, createExamplesMenu, is passed in as the onload callback function for the last script. The 2 second delay is therefore incurred the first time the user clicks on the menu, but it’s tied to the user’s action, there’s no delay to download the script, and the execution delay only occurs once.
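Based on that description, a simplified sketch of what CJS.execScript does looks like this (not the actual ControlJS source; bookkeeping and older-IE event handling omitted):

function execScript(src, onload) {
    // the file is already in the browser cache, so appending this element
    // triggers parse & execute without a network delay
    var script = document.createElement("script");
    script.src = src;
    if (onload) {
        script.onload = onload; // fires after the script has executed
    }
    document.getElementsByTagName("head")[0].appendChild(script);
}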
The ability to separate and control the download phase and the execution phase during script loading is a key differentiator of ControlJS. Many websites won’t need this option. But websites that have a lot of code that’s not used to render the initial page, such as Ajax web apps, will benefit from avoiding the pain of script parsing and execution until it’s absolutely necessary.
Ryan Witt | 16-Dec-10 at 12:08 am | Permalink |
The menu example loads only 200-300ms faster on my iPad (iOS 4.2.1) over wifi. Also, the WITH ControlJS version does not seem to be executing the drop down code and the menu does not appear. :(
jpvincent | 16-Dec-10 at 1:01 am | Permalink |
agreed with Ryan: the effect on the 2nd page is maybe worse from a user point of view. Users are used to waiting for a page to load (even if we know a lot of them leave), but a click on the interface should have an immediate response.
However, a library cannot do that; it’s up to the developer to immediately display something, like a layer where the content will go, and then execute the JS to obtain that content.
Aaron Peters | 16-Dec-10 at 1:26 am | Permalink |
hi Steve,
I really like the concept of the developer being able to delay script execution and do the execution when the user interacts with the page.
Your example page with the menu display being delayed for 2 secs is … not the best example. Yes, it proves your point that the JS exec doesn’t delay page rendering, but a drop down menu like that really needs to show within 100 ms max. The fake 2 sec delay is a bit high: exec time for jQuery + menu.js is not going to be 2 sec, not even 1 sec on IE7. So the 2 sec makes the test page a bit unrealistic.
As I said: love the concept and a lib that can enable developers to have this control, cross-browser, is great!
BTW: in Chrome8 on Win Vista, the menu does not appear at all on the Menu WITH ControlJS page …
Aaron Peters | 16-Dec-10 at 1:35 am | Permalink |
Some more feedback:
I tested the Menu WITH ControlJS page in IE7 and after page loads and I click on the menu, I get redirected to https://stevesouders.com/controljs/examples/.
After browser restart (cache not cleared) and clicking the menu again, I get a JS error: ‘CJS.aExecs.0.0’ is null or not an object.
Steve Souders | 16-Dec-10 at 2:13 am | Permalink |
@Ryan and @Aaron: I fixed a few bugs in control.js for Chrome, IE, and Safari. If you still experience the problem please file a bug with full details.
@jpvincent: I agree about the delay being jarring. A spinner would help. For me, I don’t want users to undergo the delays of executing script they might never use.
@Aaron: Stretch with me here on the example. It’s too much to come up with an example like Gmail or Facebook that has 500 kB of JS. You could add one to the project. ;-)
Temo | 16-Dec-10 at 2:38 am | Permalink |
I really love the idea behind this feature, especially if you think of the 20/80 rule. Why load 10 jQuery plugins if you only need 2 for your workflow?
But as jpvincent & Aaron said: this is a bad example. If a user clicks on a button or a link, something must happen, immediately!
So I would recommend a load indicator or another action to execute first, to tell the user “hey, I recognized your action and will perform it now, please stand by”. And after this, execScript.
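For example (a rough sketch, reusing the click handler from the post; the spinner element is made up):

examplesbtn.onclick = function() {
    // show feedback immediately so the click feels acknowledged
    document.getElementById("spinner").style.display = "block";
    CJS.execScript("jquery.min.js");
    CJS.execScript("fg.menu.js", function() {
        document.getElementById("spinner").style.display = "none";
        createExamplesMenu();
    });
};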
I think the best way to handle script loading is to make 3 categories.
1. Required:
Load scripts before window load event (like jQuery)
2. Main features:
Load after the initial page is fully rendered
(these scripts are used 80% of the time a page is viewed)
3. Other features:
Load with execScript
(these scripts are used 20% of the time a page is viewed)
gasper_k | 16-Dec-10 at 3:26 am | Permalink |
This concept could be pushed even further. If the code that handles the onclick event is small enough, it could be parsed on hover and then executed upon click without delay. Not sure how this would affect website performance, but when the user is hovering over a control, there is some time to spare. If the code gets parsed fast enough, there would be no UI lag.
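A rough sketch of that idea using the execScript call from the post (the button variable and guard are made up, and a fallback for a click that beats the hover is omitted):

var menuParsed = false;
menubtn.onmouseover = function() {
    if (!menuParsed) {
        menuParsed = true;
        // parse & execute while the user is still hovering
        CJS.execScript("jquery.min.js");
        CJS.execScript("fg.menu.js");
    }
};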
Also it seems that the Menu WITH ControlJS example doesn’t work in Chrome 8 and Firefox 3.6 on Kubuntu. Scripts get loaded, but no menu appears.
Aaron Barker | 16-Dec-10 at 6:26 am | Permalink |
So your suggestions here confuse me a bit as they seem to go directly against your #1 rule of Minimize HTTP Requests.
Say I am using jQuery UI. I am using the full jquery.ui.min.js to have one connection for the 20 or so smaller files that make up the suite. Are you suggesting that I should now call the individual files so that I only download what I need? So on page A I may need files 1,2,3,10,12,15 and on page B I need 1,2,3,7,13,18. Granted jQuery UI files don’t really hang things up till they are called, but hopefully you get the general concept.
A bundled featureCode.js would be different per page, so that means either creating a number of files that are unique per page (downloading some of the same stuff anew in each new bundle), or keeping the scripts separate (but specific to the page) and increasing the number of connections for the page.
Where is the happy medium?
Brad Bosley | 16-Dec-10 at 9:50 am | Permalink |
I like the idea of deferring until needed; however, in the case where you need the code to execute regardless, but AFTER the page is rendered, what is the best approach? I’ve attempted to move things to a window.onload but on IE8 it still seems to block the render (you see the blank white screen) until this JS finishes. Strangely, if you load the same page with an empty cache, the loading of the other components seems to get the page displayed before the onload triggers. Is there something I can do to force flush any rendering jobs before the onload fires?
Ionatan | 16-Dec-10 at 10:27 am | Permalink |
Same here, the menu with ControlJS doesn’t work. I’m on Linux and tried Chrome and Firefox…
@Aaron Barker: this library only gives you the tools for async loading and delayed execution, abstracting you from browser-specific bugs. Then comes the difficult, project-dependent part: figuring out the best approach for your project. It involves metrics and usage statistics to know whether you are better off with one large file or several smaller files.
@stevesouders: On the other hand, Aaron is right. Maybe we could figure out some way to tell CJS that inside one file there are many and they can be executed separately. Something like a comment. For example, I have one file that has all the jquery.ui files combined, called jquery.ui.all.js. I could load it with cjsexec=false, then inside that file separate each internal file with a special comment like // CJSFILE: jquery.ui.core.js
CONTENTS OF THE FILE
// CJSFILE: jquery.ui.tabs.js
CONTENTS OF THE FILE
Then I could do something like:
CJS.execScript("jquery.ui.core.js");
And only that part would get executed…
I don’t know if I’m explaining myself, and don’t know how much time would be spent in parsing the javascript code to split it by the separators.
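A rough sketch of how such a split-and-execute helper might work outside of ControlJS (the marker format and function name are hypothetical; the XHR should hit the cache since the bundle was already prefetched):

function execNamedChunk(bundleUrl, chunkName) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", bundleUrl, false);
    xhr.send(null);
    var chunks = xhr.responseText.split("// CJSFILE: ");
    for (var i = 0; i < chunks.length; i++) {
        if (chunks[i].indexOf(chunkName) === 0) {
            // execute only the requested piece, in global scope,
            // by injecting it as an inline script
            var s = document.createElement("script");
            s.text = chunks[i].substring(chunkName.length);
            document.getElementsByTagName("head")[0].appendChild(s);
        }
    }
}
// usage: execNamedChunk("jquery.ui.all.js", "jquery.ui.core.js");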
Steve Souders | 16-Dec-10 at 11:35 am | Permalink |
@Temo: I agree about the load indicator, but I think the developer should decide what to add – that shouldn’t be part of ControlJS.
@gasper_k: I might have fixed those bugs. Can you retry and submit a full bug report? Make sure to clear your cache.
@Aaron: I’m still sorting out the tradeoffs between all the different recommendations. Concatenating scripts was recommended in a book I wrote in 2006 – things have changed a lot since then. Given that most browsers don’t block on script loading, having multiple scripts is more acceptable. And if you’re using ControlJS or some other async loading technique, that’s even more true.
@Brad: By default ControlJS delays all execution until after window onload, so it takes care of it for you.
@Ionatan: Hmmm, okay – your comment is from 10:27am so my fixes must not have worked for those browsers. I’ll investigate. I’d only pursue the delimited combined script idea if we had evidence that combining scripts is better for today’s browsers. I’m not convinced it is.
Harry | 18-Dec-10 at 6:25 am | Permalink |
> I agree about the delay being jarring.
> A spinner would help. For me, I don’t
> want users to undergo the delays of
> executing script they might never use.
Thoughts:
From a UX point of view the baseline case is better and favored by the app users who really have to use it (it’s their daily tool to get the job done) — it’s better to wait longer once than to get constant small irritations of waiting here and there (even with a spinner or whatnot).
The ControlJS-enhanced version is better when the first load of the page is important (otherwise the app might never see this user again).
It would be useful to have something like execScriptWhileIdle() which could parse and execute certain scripts when the user happens to take her sip of coffee just after the download finishes, i.e. do the work behind the scenes when the moment’s right (the user is not interacting with the application).
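A rough sketch of what an execScriptWhileIdle() helper could look like, built on the execScript call from the post (the idle threshold and activity tracking are simplified, and it overwrites any existing handlers):

function execScriptWhileIdle(src, onload) {
    var lastActivity = Number(new Date());
    document.onmousemove = document.onkeydown = function() {
        lastActivity = Number(new Date());
    };
    (function poll() {
        // execute once the user has been idle for half a second
        if (Number(new Date()) - lastActivity > 500) {
            CJS.execScript(src, onload);
        } else {
            setTimeout(poll, 250);
        }
    })();
}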
Steve Souders | 18-Dec-10 at 10:11 am | Permalink |
@Harry: By default ControlJS will do what you want – it’ll parse & execute the scripts in the background as soon as the page is done loading.
In this example I actually had to go out of my way to delay the execution until later: I had to add “cjsexec=false” and then later when the menu is clicked explicitly call “execScript()”.
So the behavior you want is the default, but if a developer has the Gmail-like requirement that is supported, too.
Steve Souders | 18-Dec-10 at 10:12 am | Permalink |
I should also have mentioned: I made a few bug fixes the morning of the release that fixed all the reported issues. Thanks to all the people that sent me reports and to “serverherder” who submitted the key patch.
hugo | 04-Jan-11 at 1:39 am | Permalink |
What if the user disables the browser cache? In this case the user has to download the JS files twice.