Even Faster Web Sites

April 23, 2009 12:23 am | 12 Comments

This post introduces Even Faster Web Sites, the follow-up to High Performance Web Sites. Posts in this series include: chapters and contributing authors, Splitting the Initial Payload, Loading Scripts Without Blocking, Coupling Asynchronous Scripts, Positioning Inline Scripts, Sharding Dominant Domains, Flushing the Document Early, Using Iframes Sparingly, and Simplifying CSS Selectors.

Last April, I blogged about starting a follow-up to my first book, High Performance Web Sites. Last week I sent in the first round of final edits. Although there will likely be one or two more rounds of edits, they should be small. So, I’m feeling pretty much done. It’s a huge weight off my shoulders. I’ve been working on this book for more than a year. The performance best practices I present required more research than HPWS. I also expanded my testing from just IE and Firefox (as I did in HPWS) to IE, Firefox, Safari, Chrome, and Opera (including multiple versions of each).

The title of this new book is Even Faster Web Sites. It will be published in June, and is available for pre-order now on Amazon and O’Reilly. The cover of HPWS was a greyhound. EFWS’ cover is the Blackbuck Antelope – it can hit 50 mph, which puts it in the top five of land animals. (The fastest is the cheetah, but that one is taken by Programming the Perl DBI.)

The most exciting thing about EFWS is that it includes six chapters from contributing authors. This came about because I wanted to have best practices for JavaScript performance. I’m a pretty good JavaScript programmer, but not nearly as good as the JavaScript luminaries out there who are writing books and teaching workshops. I also wanted a chapter on image optimization, where Stoyan Stefanov and Nicole Sullivan are the experts. I reached out to folks in these and other areas to contribute performance best practices that they had accumulated. The resulting chapters are listed below. I’ve indicated the contributing authors where appropriate; otherwise, the chapter is written by me.

  1. Understanding Ajax Performance – Doug Crockford
  2. Creating Responsive Web Applications – Ben Galbraith and Dion Almaer
  3. Splitting the Initial Payload
  4. Loading Scripts Without Blocking
  5. Coupling Asynchronous Scripts
  6. Positioning Inline Scripts
  7. Writing Efficient JavaScript – Nicholas C. Zakas
  8. Scaling with Comet – Dylan Schiemann
  9. Going Beyond Gzipping – Tony Gentilcore
  10. Optimizing Images – Stoyan Stefanov and Nicole Sullivan
  11. Sharding Dominant Domains
  12. Flushing the Document Early
  13. Using Iframes Sparingly
  14. Simplifying CSS Selectors
  15. Performance Tools

Between now and when the book comes out, I’ll write a blog post about each of my chapters. I wrote the first of these, Splitting the Initial Payload, back in May. Now that I have more time on my hands, I’ll catch up and finish the rest.

If you’re just beginning the process of improving your web site’s performance, you should start with High Performance Web Sites. But as Web 2.0 gains wider adoption and the amount of content on web pages continues to grow, the best practices in Even Faster Web Sites are key to making today’s web sites fast(er).

don’t use @import

April 9, 2009 12:32 am | 90 Comments

In Chapter 5 of High Performance Web Sites, I briefly mention that @import has a negative impact on web page performance. I dug into this deeper for my talk at Web 2.0 Expo, creating several test pages and HTTP waterfall charts, all shown below. The bottom line is: use LINK instead of @import if you want stylesheets to download in parallel, resulting in a faster page.

LINK vs. @import

There are two ways to include a stylesheet in your web page. You can use the LINK tag:

<link rel='stylesheet' href='a.css'>

Or you can use the @import rule:

<style>
@import url('a.css');
</style>

I prefer using LINK for simplicity – with @import you have to remember to put it at the top of the style block or else it won’t work. It turns out that avoiding @import is better for performance, too.

@import @import

I’m going to walk through the different ways LINK and @import can be used. In these examples, there are two stylesheets: a.css and b.css. Each stylesheet is configured to take two seconds to download to make it easier to see the performance impact. The first example uses @import to pull in these two stylesheets. In this example, called @import @import, the HTML document contains the following style block:

<style>
@import url('a.css');
@import url('b.css');
</style>

If you always use @import in this way, there are no performance problems, although we’ll see below that it could result in JavaScript errors due to race conditions. The two stylesheets are downloaded in parallel, as shown in Figure 1. (The first tiny request is the HTML document.) The problems arise when @import is embedded in other stylesheets or is used in combination with LINK.

Figure 1. always using @import is okay

LINK @import

The LINK @import example uses LINK for a.css, and @import for b.css:

<link rel='stylesheet' type='text/css' href='a.css'>
<style>
@import url('b.css');
</style>

In IE (tested on 6, 7, and 8), this causes the stylesheets to be downloaded sequentially, as shown in Figure 2. Downloading resources in parallel is key to a faster page. As shown here, this behavior in IE causes the page to take a longer time to finish.

Figure 2. link mixed with @import breaks parallel downloads in IE

LINK with @import

In the LINK with @import example, a.css is inserted using LINK, and a.css has an @import rule to pull in b.css:

in the HTML document:
<link rel='stylesheet' type='text/css' href='a.css'>
in a.css:
@import url('b.css');

This pattern also prevents the stylesheets from loading in parallel, but this time it happens on all browsers. When we stop and think about it, we shouldn’t be too surprised. The browser has to download a.css and parse it. At that point, the browser sees the @import rule and starts to fetch b.css.

Figure 3. using @import from within a LINKed stylesheet breaks parallel downloads in all browsers

LINK blocks @import

A slight variation on the previous example with surprising results in IE: LINK is used for a.css and for a new stylesheet called proxy.css. proxy.css is configured to return immediately; it contains an @import rule for b.css.

in the HTML document:
<link rel='stylesheet' type='text/css' href='a.css'>
<link rel='stylesheet' type='text/css' href='proxy.css'>
in proxy.css:
@import url('b.css');

The results of this example in IE, LINK blocks @import, are shown in Figure 4. The first request is the HTML document. The second request is a.css (two seconds). The third (tiny) request is proxy.css. The fourth request is b.css (two seconds). Surprisingly, IE won’t start downloading b.css until a.css finishes. In all other browsers, this blocking issue doesn’t occur, resulting in a faster page as shown in Figure 5.

Figure 4. LINK blocks @import embedded in other stylesheets in IE

Figure 5. LINK doesn't block @import embedded stylesheets in browsers other than IE

many @imports

The many @imports example shows that using @import in IE causes resources to be downloaded in a different order than specified. This example has six stylesheets (each takes two seconds to download) followed by a script (a four second download).

<style>
@import url('a.css');
@import url('b.css');
@import url('c.css');
@import url('d.css');
@import url('e.css');
@import url('f.css');
</style>
<script src='one.js' type='text/javascript'></script>

Looking at Figure 6, the longest bar is the four second script. Even though it was listed last, it gets downloaded first in IE. If the script contains code that depends on the styles applied from the stylesheets (a la getElementsByClassName, etc.), then unexpected results may occur because the script is loaded before the stylesheets, despite the developer listing it last.
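
To make this concrete, here is a hypothetical snippet – not taken from the test pages – showing the kind of style-dependent code that can misbehave if the script runs before the @import stylesheets have been applied:

<style>
@import url('a.css');  /* suppose a.css defines .sidebar { width: 300px; } */
</style>
<div id='sidebar' class='sidebar'>sidebar content</div>
<script>
// hypothetical style-dependent code: if a.css hasn't been applied yet,
// offsetWidth won't be 300 and the layout calculation below is wrong
var sidebar = document.getElementById('sidebar');
var contentWidth = document.body.clientWidth - sidebar.offsetWidth;
</script>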

Figure 6. @import causes resources to be downloaded out-of-order in IE

LINK LINK

It’s simpler and safer to use LINK to pull in stylesheets:

<link rel='stylesheet' type='text/css' href='a.css'>
<link rel='stylesheet' type='text/css' href='b.css'>

Using LINK ensures that stylesheets will be downloaded in parallel across all browsers. The LINK LINK example demonstrates this, as shown in Figure 7. Using LINK also guarantees resources are downloaded in the order specified by the developer.

Figure 7. using link ensures parallel downloads across all browsers

These issues need to be addressed in IE. It’s especially bad that resources can end up getting downloaded in a different order. All browsers should implement a small lookahead when downloading stylesheets to extract any @import rules and start those downloads immediately. Until browsers make these changes, I recommend avoiding @import and instead using LINK for inserting stylesheets.

Update: April 10, 2009 1:07 PM

Based on questions from the comments, I added two more tests: LINK with @imports and Many LINKs. Each of these inserts four stylesheets into the HTML document. LINK with @imports uses LINK to load proxy.css; proxy.css then uses @import to load the four stylesheets. Many LINKs has four LINK tags in the HTML document to pull in the four stylesheets (my recommended approach). The HTTP waterfall charts are shown in Figure 8 and Figure 9.
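
Presumably the markup for the two new tests looks something like this (a sketch based on the description above; the file names are illustrative):

in the HTML document (LINK with @imports):
<link rel='stylesheet' type='text/css' href='proxy.css'>
in proxy.css:
@import url('a.css');
@import url('b.css');
@import url('c.css');
@import url('d.css');

in the HTML document (Many LINKs):
<link rel='stylesheet' type='text/css' href='a.css'>
<link rel='stylesheet' type='text/css' href='b.css'>
<link rel='stylesheet' type='text/css' href='c.css'>
<link rel='stylesheet' type='text/css' href='d.css'>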

Figure 8. LINK with @imports

Figure 9. Many LINKs

Looking at LINK with @imports, the first problem is that the four stylesheets don’t start downloading until after proxy.css returns. This happens in all browsers. On the other hand, Many LINKs starts downloading the stylesheets immediately.

The second problem is that IE changes the download order. I added a 10 second script (the really long bar) at the very bottom of the page. In all other browsers, the @import stylesheets (from proxy.css) get downloaded first, and the script is last, exactly the order specified. In IE, however, the script gets inserted before the @import stylesheets, as shown by LINK with @imports in Figure 8. This causes the stylesheets to take longer to download since the long script is using up one of only two connections available in IE 6&7. Since IE won’t render anything in the page until all stylesheets are downloaded, using @import in this way causes the page to be blank for 12 seconds. Using LINK instead of @import preserves the load order, as shown by Many LINKs in Figure 9. Thus, the page renders in 4 seconds.

The load times of these resources are exaggerated to make it easy to see what’s happening. But for people with slow connections, especially those in some of the world’s emerging markets, these response times may not be that far from reality. The takeaways are:

  • Using @import within a stylesheet adds one more roundtrip to the overall download time of the page.
  • Using @import in IE causes the download order to be altered. This may cause stylesheets to take longer to download, which hinders progressive rendering and makes the page feel slower.

SXSW slides

March 16, 2009 9:05 am | 7 Comments

I spoke at SXSW ’09 this past weekend. My session was called Even Faster Web Sites. This is also the title of my next book, so it’s my way of linking my talks with the book. But I realize now that some people might think all of my “Even Faster Web Sites” presentations cover the same material. They don’t! I try to bring out new material for every talk I give. As I finish chapters for the next book, I use that material in the next presentations I give. This talk incorporates five upcoming chapters:

  • Load scripts without blocking
  • Coupling asynchronous scripts
  • Don’t scatter inline scripts
  • Use iframes sparingly
  • Flush the document early

This is the first time I’ve covered all five of these best practices. My session was packed (they stopped letting people in) and it got the highest ratings for that time slot, so I think the material is useful. Check out the ppt slides or see them on Slideshare. (There are a lot of animations and hidden slides in this deck, which are only visible in the ppt version.)

My next talk is at Web 2.0 Expo on April 1 (no fooling) April 2 (turns out I was fooling), where I’ll present two new chapters about CSS selectors and worldwide issues with gzip. I hope to see you there.

Performance Impact of CSS Selectors

March 10, 2009 11:28 pm | 53 Comments

A few months back there were some posts about the performance impact of inefficient CSS selectors. I was intrigued – this is the kind of browser idiosyncratic behavior that I live for. On further investigation, I’m not so sure that it’s worth the time to make CSS selectors more efficient. I’ll go even farther and say I don’t think anyone would notice if we woke up tomorrow and every web page’s CSS selectors were magically optimized.

The first post that caught my eye was about CSS Qualified Selectors by Shaun Inman. This post wasn’t actually about CSS performance, but in one of the comments David Hyatt (architect for Safari and WebKit, also worked on Mozilla, Camino, and Firefox) dropped this bomb:

The sad truth about CSS3 selectors is that they really shouldn’t be used at all if you care about page performance. Decorating your markup with classes and ids and matching purely on those while avoiding all uses of sibling, descendant and child selectors will actually make a page perform significantly better in all browsers.

Wow. Let me say that again. Wow.

The next posts were amazing. It was a series on Testing CSS Performance from Jon Sykes in three parts: part 1, part 2, and part 3. It’s fun to see how his tests evolve, so part 3 is really the one to read. This had me convinced that optimizing CSS selectors was a key step to fast pages.

But there were two things about the tests that troubled me. First, the large number of DOM elements and rules worried me. The pages contain 60,000 DOM elements and 20,000 CSS rules. This is an order of magnitude more than most pages. Pages this large make browsers behave in unusual ways (we’ll get back to that later). The table below has some stats from the top ten U.S. web sites for comparison.

Web Site       # CSS Rules   # DOM Elements
AOL            2289          1628
eBay            305           588
Facebook       2882          1966
Google           92           552
Live Search     376           449
MSN            1038           886
MySpace         932           444
Wikipedia       795          1333
Yahoo!          800           564
YouTube         821           817
average        1033           923

The second thing that concerned me was how small the baseline test page was, compared to the more complex pages. The main question I want to answer is “do inefficient CSS selectors slow down pages?” All five test pages contain 20,000 anchor elements (nested inside P, DIV, DIV, DIV). What changes is their CSS: baseline (no CSS), tag selector (one rule for the A tag), 20,000 class selectors, 20,000 child selectors, and finally 20,000 descendant selectors. The last three pages top out at over 3 megabytes in size. But the baseline page and tag selector page, with little or no CSS, are only 1.8 megabytes. These pages answer the question “how much faster would my page be if I eliminated all CSS?” But not many of us are going to eliminate all CSS from our pages.
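
For reference, the markup and the three flavors of rules being compared look roughly like this (hypothetical examples I’ve sketched from the description above, not copied from the test pages): each anchor is nested inside P, DIV, DIV, DIV, and each test page applies one of the rule styles below.

<div><div><div><p><a class='a0001' href='#'>anchor</a></p></div></div></div>

/* class selector */
.a0001 { background-color: #def; }
/* child selector */
div > div > div > p > a.a0001 { background-color: #def; }
/* descendant selector */
div div div p a.a0001 { background-color: #def; }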

I revised the test as follows:

  • 2000 anchors and 2000 rules (instead of 20,000) – this actually results in ~6000 DOM elements because of all the nesting in P, DIV, DIV, DIV
  • the baseline page and tag selector page have 2000 rules just like all the other pages, but these are simple class rules that don’t match any classes in the page

I ran these tests on 12 browsers. Page render time was measured with a script block at the top and bottom of the page. (I loaded the page from local disk to avoid possible impact from chunked encoding.) The results are shown in the chart below. (I don’t show Opera 9.63 – it was way too slow – but you can download all the data as csv. You can also see the test pages.)
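
The measurement approach is simple enough to sketch (this is an illustration of the technique described above, not the literal test-page code):

<script>
var t_start = new Date().getTime();
</script>
<!-- the 2000 nested anchors and the style block with 2000 rules go here -->
<script>
var t_end = new Date().getTime();
// the difference approximates how long the browser spent parsing and rendering the page
document.title = 'render time: ' + (t_end - t_start) + ' ms';
</script>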

Performance varies across browsers; strangely, two new browsers, IE 8 and Firefox 3.1, are the slowest. But comparisons should not be made from one browser to another. Although all the tests for a given browser were conducted on a single PC, different browsers might have been tested on different PCs with different performance characteristics. The goal of this experiment is not to compare browser performance – it’s to see how browsers handle progressively more complex CSS selectors.

[Revision: On further inspection comparing Firefox 3.0 and 3.1, I discovered that the test PC I used for testing Firefox 3.1 and IE 8 was slower than the other test PCs used in this experiment. I subsequently re-ran those tests as well as Firefox 3.0 and IE 7 on PCs that were more consistent and updated the chart above. Even with this re-run, because of possible differences in test hardware, do not use this data to compare one browser to another.]

Not surprisingly, the more complex pages (child selectors and descendant selectors) usually perform the worst. The biggest surprise is how small the delta is from the baseline to the most complex, worst performing test page. The average slowdown across all browsers is 50 ms, and if we look at the big ones (IE 6&7, FF3), the average delta is just 20 ms. For 70% or more of today’s users, improving these CSS selectors would only make a 20 ms improvement.

Keep in mind – these test pages are close to worst case. The 2000 anchors wrapped in P, DIV, DIV, DIV result in 6000 DOM elements – that’s twice as big as the max in the top ten sites. And the complex pages have 2000 extremely inefficient rules – on a typical site, only around one third of the rules are complex child or descendant selectors. Facebook, for example, has the most rules of the top ten (2882), but only 750 of them are these extremely inefficient rules.

Why do the results from my tests suggest something different from what’s been said lately? One difference comes from looking at things at such a large scale. It’s okay to exaggerate test cases if the results are proportional to common use cases. But in this case, browsers behave differently when confronted with a 3 megabyte page with 60,000 elements and 20,000 rules. I especially noticed that my results were much different for IE 6&7. I wondered if there was a hockey stick in how IE handled CSS selectors. To investigate this I loaded the child selector and descendant selector pages with an increasing number of anchors and rules, from 1000 to 20,000. The results, shown in the chart below, reveal that IE hits a cliff around 18,000 rules. But when IE 6&7 work on a page that is closer to reality, as in my tests, they’re actually the fastest performers.

Based on these tests I have the following hypothesis: For most web sites, the possible performance gains from optimizing CSS selectors will be small, and are not worth the costs. There are some types of CSS rules and interactions with JavaScript that can make a page noticeably slower. This is where the focus should be. So I’m starting to collect real world examples of small CSS style-related issues (offsetWidth, :hover) that put the hurt on performance. If you have some, send them my way. I’m speaking at SXSW this weekend. If you’re there, and want to discuss CSS selectors, please find me. It’s important that we’re all focusing on the performance improvements that our users will really notice.
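
As a starting point, here is a hypothetical example of the offsetWidth flavor of problem (my own illustration, not one of the collected real world examples): reading a layout property like offsetWidth inside a loop that also writes styles forces the browser to reflow on every iteration.

<script>
var items = document.getElementsByTagName('li');
for (var i = 0; i < items.length; i++) {
  // each read of offsetWidth after a style write forces another reflow
  items[i].style.width = (items[i].parentNode.offsetWidth - 10) + 'px';
}
</script>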

O’Reilly Master Class

March 3, 2009 12:11 pm | 2 Comments

O’Reilly, my publisher, has launched a new initiative to bring a deeper level of information and engagement around their titles and technology focus areas. I’m excited to be a part of this by leading a one day workshop on Creating Higher Performance Web Sites. The workshop (or “Master Class” as they call it) is March 30, 9am-5pm, at the Mission Bay Conference Center in SF. The cost is $600 ($550 if you register before March 15).

This is going to be an engaging and fact-filled day for developers who care about web performance. I’m going to go over the best practices from my first book, High Performance Web Sites, which are also captured in YSlow. But I’m also going to touch on the chapters from my next book including loading scripts without blocking, flushing the document early, and using iframes sparingly. I’m just wrapping up these chapters now, so these are new insights “hot off the presses”. Since it’s a workshop, O’Reilly wants a fair amount of audience involvement, so I’m working up a few exercises to give attendees experience analyzing web pages to find performance bottlenecks.

There are also workshops from Doug Crockford (JavaScript: The Good Parts), Scott Berkun (Leading and Managing Breakthrough Projects), and Jonathan Zdziarski (iPhone Forensics: Recovering Evidence, Personal Data, and Corporate Assets). These workshops come right before Web 2.0 Expo in SF, so it’s a great doubleheader. I hope to see you there.

Fronteers 2009

February 10, 2009 10:19 pm | 3 Comments

I’m psyched to be speaking at Fronteers in November – and not just because it’s one of the best conference names ever. And not just because it’s in Amsterdam – although that is a huge plus. The main reason I’m psyched is that I missed last year’s conference and regretted it. The feedback I got was that the speakers were great and so were the attendees. PPK is active in his advocacy for frontend engineering, and (from what I heard) that was apparent in the level of knowledge and participation shown throughout the talks.

Last year’s speakers included Stuart Langridge, Christian Heilmann, and Pete LePage (check out the links to their talks on YDN). PPK has announced Nate Koechley and me as speakers for 2009, and some other web gurus I know have said they’re speaking there as well. It’s going to be another great set of speakers and sessions. I’m so glad that I’ll be there to experience it, and I hope you can make it, too.

John Resig: Drop-in JavaScript Performance

February 9, 2009 11:32 pm | 1 Comment

I wrote a post on the Google Code Blog about John Resig’s tech talk “Drop-in JavaScript Performance.” The video and slides are now available.

In this talk, John starts off highlighting why performance will improve in the next generation of browsers, thanks to advances in JavaScript engines and new features such as process-per-tab and parallel script loading. He digs deeper into JavaScript performance, touching on shaping, tracing, just-in-time compilation, and the various benchmarks (SunSpider, Dromaeo, and the V8 benchmark). John plugs my UA Profiler, with its tests for simultaneous connections, parallel script loading, and link prefetching. He wraps up with a collection of many other advanced features in the areas of communication, DOM, styling, data, and measurements.

User Agents in the morning

January 18, 2009 5:25 pm | 17 Comments

Every working day, a script runs at 7am that opens ~20 websites in my browser. I open them at 7am so that they’re ready for me when I sit down with my coffee. I’m the performance guy – I can’t stand waiting for a page to load. Among the sites that I read every day are blogs (Ajaxian, O’Reilly Radar, Google Reader for the rest), news sites (MarketWatch, CNET Tech News, InternetNews, TheStreet.com), and stuff for fun and life (Dilbert, Woot, The Big Picture, Netflix).

The last site is a page related to UA Profiler. It lists all the new user agents that have been tested in the last day. These are unique user agents – they’ve never been seen by UA Profiler before. When I first launched UA Profiler, there were about 50 each day. Now, it’s down to about 20 per day. But I’ve skipped over the main point.

Why do I review these new user agents every morning?

When I started UA Profiler, I assumed I would be able to find a library to accurately parse the HTTP User-Agent string into its components. I need this in order to categorize the test results. Was the test done with Safari or iPhone? Internet Explorer or Maxthon? NetNewsWire or OmniWeb? My search produced some candidates, but none of them had the level of accuracy I wanted; they were unable to properly classify edge case browsers, mobile devices, and new browsers (like Chrome and Android).
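
To illustrate why accuracy is hard (a hypothetical sketch, not UA Profiler’s actual code), a naive substring check misclassifies browsers whose User-Agent strings embed other browsers’ tokens – Chrome’s User-Agent contains “Safari”, for example:

<script>
// naive classification: because the tokens are checked in the wrong order,
// Chrome (whose User-Agent string contains "Safari") is reported as Safari
function classify(ua) {
  if (ua.indexOf('Safari') !== -1) return 'Safari';
  if (ua.indexOf('Chrome') !== -1) return 'Chrome';
  if (ua.indexOf('MSIE') !== -1) return 'Internet Explorer';
  return 'unknown';
}
</script>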

So, I rolled my own.

I find that it’s very accurate – more accurate than anything else I could find. Another good site out there is UserAgentString.com, but even they misclassify some well-known browsers such as iPhone, Shiretoko, and Lunascape. When I do my daily checks I find that I need to tweak my code about once every 200-400 new user agents. And I’ve written some good admin tools to do this check – it only takes 5 minutes to complete. And the code tweaks, when necessary, take less than 15 minutes.

It’s great that this helps UA Profiler, but I’d really like to share this with the web community. The first step was adding a new Parse User-Agent page to UA Profiler. You can paste any User-Agent string and see how my code classifies it. I also show the results from UserAgentString.com for comparison. The next steps, if there’s interest and I can find the time, would be to make this available as a web service and to make the code available, too. What do people think?

  • Do other people share this need for better User Agent parsing?
  • Do you know of something good that’s out there that I missed?
  • Do you see gaps or mistakes in UA Profiler’s parsing?

For now, I’ll keep classifying user agents as I finish the last drops of my (first) coffee in the morning.

CS193H video preview

January 6, 2009 10:17 am | 45 Comments

My class at Stanford, CS193H High Performance Web Sites, was videotaped. Stanford does this so that people enrolled through the Stanford Center for Professional Development, who work full time, can watch the class during off hours. SCPD also makes some of the class videos available to the public. I’m currently talking with SCPD about releasing my videos, but in the meantime they’ve released the video of my first class. This lecture covers the logistics of the class (syllabus, mailing list, etc.). I’ve released all the slides from the class. You can find links to the slides in the class schedule. Anyone going through the slides should watch this intro video to get a flavor for how the class was conducted.

If you would be interested in watching the videos from this class, please add a comment below. The more interest there is, the more likely SCPD will be to make the videos available.

Update: The videos are now available! Thanks for all the positive feedback. You can watch the first three lectures for free. Tuition for the entire set of 25 lectures is $600. The videos are offered as XCS193H on SCPD.
